One of the hottest use cases we see these days is an extended-capacity in-memory cluster, or in other words, fast data. This means we can have a cluster holding multiple terabytes (too big to store in memory alone) and still perform real-time (few-millisecond) calculations with data persistence. Some of the challenges we face here include:

  • Supporting a few terabytes in an in-memory cluster is very expensive (even with the cost savings of in-memory computing)
  • Performing real-time calculations in a few milliseconds on big data is a challenging task on its own
  • Keeping all the data persistent and highly available is not a given with in-memory storage

Many industries can leverage a solution that meets these requirements: finance, eCommerce, online gaming, and more. Each of them has flows that require real-time event processing over massive amounts of data. For example, one of our customers runs a large online gaming application that needs to calculate real-time promotions for players: thousands of promotions for millions of users at the same time. On top of that, aggregation calculations must be performed on these numbers and the results stored in the cluster as well. And if that isn't hard enough, the entire process has to be persistent. A purely in-memory cluster doesn't necessarily provide enough storage, and it is definitely not persistent. On the other hand, a NoSQL solution might not be fast enough, since it is mostly disk based.


Interested in fast event processing? You can try MemoryXtend here! 


XAP, an in-memory data grid platform, combined with optimized access to SSD, provides a solution to the above challenges. In simple terms, each cluster node is backed by SSD storage, which extends the node's data capacity. Additionally, the data is persisted on SSD (both for data recovery and for fast initial loading of the cluster). But how can this possibly still be fast?

The XAP in-memory cluster stores the indexes, as well as some of the objects, which allows the query engine to look up objects by their ids very quickly. Because metadata and indexes are held in memory, queries can be resolved there. And since the object payloads are stored on the SSD, in some use cases we even reduce GC time, which can always be painful.
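To make the idea concrete, here is a minimal sketch (in Python, and not the actual XAP API) of the tiering model described above: the index and metadata live in RAM, while serialized object payloads live in an SSD-backed store. The `TieredSpace` class and its field names are purely illustrative; a plain dict stands in for the SSD tier.

```python
import pickle

class TieredSpace:
    """Illustrative model: index in RAM, payloads on an SSD-backed tier."""

    def __init__(self):
        self.index = {}        # in-memory: indexed field value -> set of ids
        self.ssd_store = {}    # stand-in for the SSD payload store (id -> bytes)

    def write(self, obj_id, obj, indexed_value):
        # The serialized payload goes to the SSD tier...
        self.ssd_store[obj_id] = pickle.dumps(obj)
        # ...while only the small index entry is kept in memory.
        self.index.setdefault(indexed_value, set()).add(obj_id)

    def query(self, indexed_value):
        # The match is resolved entirely in RAM; only the matching
        # payloads are read back from the SSD tier.
        ids = self.index.get(indexed_value, set())
        return [pickle.loads(self.ssd_store[i]) for i in ids]

space = TieredSpace()
space.write(1, {"player": "alice", "score": 120}, indexed_value="gold")
space.write(2, {"player": "bob", "score": 80}, indexed_value="silver")
print(space.query("gold"))   # -> [{'player': 'alice', 'score': 120}]
```

The point of the design is visible in `query`: the expensive part of a search (matching) never touches the slow tier, and the SSD is only read for the handful of objects that actually match.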

Now, what is the difference between this and storing a NoSQL database on SSD? XAP MemoryXtend doesn't simply use SSD as a fast disk. It also uses SanDisk's proprietary ZetaScale software, an SSD API that exposes the drive as a key-value store, making even the SSD access extremely fast. The solution supports application data growth to tens of terabytes using in-memory data grid building blocks. XAP MemoryXtend dramatically improves application performance by eliminating expensive database reads, parallelizing storage access, and ultimately reducing the cost associated with using DRAM alone. While performance is impacted slightly by the SSD access, the indexes, the metadata, and some of the objects remain in memory, so searches are still very quick.
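The key-value access pattern can be sketched as follows. This is not ZetaScale itself; Python's standard `dbm` module stands in for an SSD-resident key-value store, and the `promo:42` key is a made-up example. The point is that a payload is fetched in a single keyed lookup rather than through filesystem paths and directory scans.

```python
import dbm
import os
import pickle
import tempfile

# Stand-in for an SSD-resident key-value store (dbm writes to disk).
path = os.path.join(tempfile.mkdtemp(), "payloads")

with dbm.open(path, "c") as kv:
    # put: key -> serialized payload
    kv[b"promo:42"] = pickle.dumps({"promo_id": 42, "bonus": 0.15})
    # get: one keyed read, no path resolution or directory scan
    promo = pickle.loads(kv[b"promo:42"])

print(promo["bonus"])   # -> 0.15
```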

To summarize, XAP MemoryXtend gives users the ability to extend their in-memory capacity, reduce costs (by replacing RAM with SSD), and achieve data persistence. We recently conducted a webcast with our partners at SanDisk in which we discussed this very issue: how do we achieve faster speeds with ever-increasing volumes of data? Below are some of the challenges we discussed and how GigaSpaces and SanDisk work together to provide a solution. If you're interested in a case study, our last blog post discusses how Wolters Kluwer implemented XAP MemoryXtend.

Making IMC More Cost Effective Using Solid State Drives…Combined Use Cases