Cache < Data Grid < Database

I would like to clarify definitions for the following technologies:

  • In-Memory Distributed Cache
  • In-Memory Data Grid
  • In-Memory Database

These three terms are, surprisingly, often used interchangeably, and yet technically and historically they represent very different products and serve different, sometimes very different, use cases.

It’s also important to note that there are no specifications or industry standards for what a cache, data grid, or database should be (unlike Java application servers and JEE, for example). There was, and still is, an attempt to standardize caching via JSR 107, but it has been years (almost a decade) in the making and is hopelessly outdated by now (I’m on the expert group).

Tricycle vs. Bike vs. Motorcycle

First of all, let me clarify that I am discussing caches, data grids and databases in the context of in-memory, distributed architectures. Traditional disk-based databases and non-distributed in-memory caches or databases are out of scope for this article.

[Figure: cache vs. data grid vs. database]

Chronologically, caches, data grids and databases were developed in that order: starting from simple caching, to more complex data grids, and finally to distributed in-memory databases. The first distributed caches appeared in the late 1990s, data grids emerged around 2002-2003, and in-memory databases have really come to the forefront in the last five years.

All of these technologies have enjoyed a significant boost in interest in the last couple of years, thanks to the explosive growth of in-memory computing in general, fueled by a 30% YoY price reduction for DRAM and cheaper flash storage.

Although I believe that distributed caching is rapidly going away, I still think it’s important to place it in its proper historical and technical context along with data grids and databases.

In-Memory Distributed Caching

The primary use case for caching is to keep frequently accessed data in process memory to avoid constantly fetching it from disk, which makes that data highly available (HA) to the application running in that process space (hence, “in-memory” caching).

Most caches were built as distributed in-memory key/value stores that supported a simple set of ‘put’ and ‘get’ operations and, optionally, some sort of read-through and write-through behavior for reading and writing values from and to an underlying disk-based storage such as an RDBMS. Depending on the product, additional features like ACID transactions, eviction policies, replication vs. partitioning, active backups, etc. became available as the products matured.
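To make these semantics concrete, below is a minimal, single-node sketch of a read-through/write-through cache facade. The CacheStore interface and every name in it are hypothetical, for illustration only; a real distributed cache adds networking, replication and eviction on top of this:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical SPI for the underlying disk-based storage (e.g. an RDBMS).
    interface CacheStore<K, V> {
        V load(K key);            // read-through: called on a cache miss
        void write(K key, V val); // write-through: called on every put
    }

    // Minimal single-node illustration of read/write-through semantics.
    class ReadWriteThroughCache<K, V> {
        private final Map<K, V> memory = new ConcurrentHashMap<>();
        private final CacheStore<K, V> store;

        ReadWriteThroughCache(CacheStore<K, V> store) { this.store = store; }

        public V get(K key) {
            // Serve from memory if present; otherwise read through to the store.
            return memory.computeIfAbsent(key, store::load);
        }

        public void put(K key, V val) {
            memory.put(key, val);  // update the in-memory copy
            store.write(key, val); // write through to the durable storage
        }
    }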

These fundamental data management capabilities of distributed caches formed the foundation for the technologies that were later built on top of them, such as In-Memory Data Grids.

In-Memory Data Grid

The feature that distinguished data grids from distributed caches was their ability to support the co-location of computations with data in a distributed context and, consequently, to move computation to data rather than data to computation. This capability was the key innovation that addressed the demands of rapidly growing data sets, which made moving data to the application layer increasingly impractical. Most data grids provided at least basic capabilities for moving computations to the data.
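To illustrate what moving computation to data looks like in practice, here is a hedged sketch of the kind of “affinity” routing API that most data grids expose in some form. The DataGrid interface and the affinityCall method are hypothetical, not any particular product’s API:

    import java.io.Serializable;
    import java.util.concurrent.Callable;

    // Hypothetical data grid facade: runs a job on whichever node owns the key.
    interface DataGrid {
        <R> R affinityCall(String cacheName, Object key, Callable<R> job);
    }

    class AccountBalanceCheck {
        // The job is shipped to the node that stores account "acme-42" and runs
        // against the local copy of the data; only the small result travels back.
        static double balance(DataGrid grid) {
            return grid.affinityCall("accounts", "acme-42",
                (Callable<Double> & Serializable) () -> {
                    // ... read the locally stored account object here ...
                    return 0.0d; // placeholder result
                });
        }
    }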

This new and very disruptive capability also marked the start of the evolution of in-memory computing away from simple caching toward a bona fide modern system of record. This evolution culminated in today’s In-Memory Databases.

In-Memory Database

The feature that distinguishes in-memory databases from data grids is the addition of distributed MPP processing based on standard SQL and/or MapReduce, which allows computation over data stored in memory across the cluster.

Just as data grids were developed in response to rapidly growing data sets and the necessity of moving computations to data, in-memory databases were developed in response to the growing complexity of data processing. It was no longer enough to have simple key-based access or RPC-type processing. Distributed SQL, complex indexing, and MapReduce-based processing across terabytes of in-memory data are necessary tools for today’s demanding data processing.
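For instance, a distributed SQL query over in-memory data can look just like ordinary JDBC. The driver URL below is a placeholder (every vendor defines its own driver and URL scheme); the point is that the query is executed in parallel across the cluster:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DistributedSqlExample {
        public static void main(String[] args) throws Exception {
            // Placeholder JDBC URL; vendor-specific in practice.
            try (Connection conn = DriverManager.getConnection("jdbc:imdb://cluster-host");
                 Statement stmt = conn.createStatement();
                 // The WHERE and GROUP BY clauses are evaluated in parallel on
                 // every node, each scanning only its in-memory partition.
                 ResultSet rs = stmt.executeQuery(
                     "SELECT customerId, SUM(amount) FROM trades " +
                     "WHERE ts > '2013-01-01' GROUP BY customerId")) {
                while (rs.next())
                    System.out.println(rs.getString(1) + " -> " + rs.getDouble(2));
            }
        }
    }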

Adding distributed SQL and/or MapReduce-type processing required a complete rethinking of data grids, as the focus shifted from pure data management to hybrid data and compute management.

Full Disclosure

GridGain develops an In-Memory Database product as part of its end-to-end in-memory computing suite.

GridGain In-Memory Database: Plain English Overview

I picked this chapter up from GridGain’s Document Center. I like it because it gives a simple, plain-English, high-level overview of our In-Memory Database: no coding, no diagrams, no deep dives. Just a quick and easy rundown of what’s there…

At a Glance

GridGain IMDB is a distributed, Java-based, object-based key-value datastore. Logically, it can be viewed as a collection of one or more caches (a.k.a. maps or dictionaries). Each cache is a distributed collection of key-value pairs. Both keys and values are represented as Java objects and can be of any user-defined type.

Every cache must be pre-configured individually and cannot be created on the fly (due to distributed consistency semantics). You’ll find that caches and cache projections are your main API entry points when working with GridGain IMDB in embedded mode.
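In embedded mode that typically looks something like the sketch below. These are illustrative interfaces only, not GridGain’s actual type or method names:

    import java.util.function.BiPredicate;

    // Illustrative embedded-mode entry points.
    interface Cache<K, V> {
        V get(K key);
        void put(K key, V val);
        // A projection is a filtered (and optionally re-typed) view of a cache.
        Cache<K, V> projection(BiPredicate<K, V> filter);
    }

    interface Database {
        <K, V> Cache<K, V> cache(String name); // looks up a pre-configured cache
    }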

Each cache has many configuration properties, the main one being its type. GridGain IMDB supports three cache types: local, replicated, and partitioned.

As the name implies, the local mode stores all data locally, without any distribution, providing lightweight transactional local storage. A replicated cache replicates (copies) data to all nodes in the cluster, providing the best high availability but reducing the overall in-memory capacity of the database since the data is copied everywhere. The partitioned mode is the most scalable one, as it partitions the data equally across all nodes in the cluster so that each node is responsible for only a small portion of the data.
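A sketch of how the three types might be expressed in configuration; the enum and property names below are illustrative, not GridGain’s exact configuration API:

    // Illustrative configuration sketch for the three cache types.
    enum CacheMode { LOCAL, REPLICATED, PARTITIONED }

    class CacheConfig {
        String name;
        CacheMode mode;
        int backups; // PARTITIONED only: redundant copies kept per entry

        static CacheConfig partitioned(String name, int backups) {
            CacheConfig cfg = new CacheConfig();
            cfg.name = name;
            cfg.mode = CacheMode.PARTITIONED; // data split evenly across nodes
            cfg.backups = backups;
            return cfg;
        }
    }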

The combination of these storage modes in a single database (as well as the many configuration and optimization properties available for each mode) makes GridGain IMDB a very convenient distributed datastore, as it doesn’t force you to use just one specific storage model.

GridGain IMDB stores data in a layered storage system that consists of four layers: JVM on-heap memory, JVM off-heap memory, local disk-based swap space, and an optional durable cache store. Each successive layer can store more data but entails progressively higher data-access latency. The developer has full control over the configuration of these layers.
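The layering can be summarized in a short sketch (names illustrative only):

    // The four storage layers, ordered from fastest/smallest to slowest/largest.
    enum StorageTier {
        ON_HEAP,  // JVM heap memory: fastest access, bounded by GC pressure
        OFF_HEAP, // JVM off-heap memory: larger, no GC overhead, slightly slower
        SWAP,     // local disk-based swap space: larger still, higher latency
        STORE     // optional durable cache store (e.g. an RDBMS): slowest, durable
    }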

Another interesting characteristic of GridGain IMDB is that it was developed first as a highly distributed system and only later became a full-fledged database. This reversed approach makes data and processing distribution a natural capability of the database.

GridGain IMDB is based on unique HyperClustering technology that enables it to scale to thousands of nodes in a single transactional topology (based on actual production customers).

GridGain IMDB’s clustering is based on a peer-to-peer topology, its transaction implementation on an advanced MVCC-based design, and its partitioning on an automatic multilayer consistent hashing implementation, free from sharding limitations and other crude data distribution approaches.
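To illustrate the consistent hashing idea in its basic, single-layer form (GridGain’s multilayer implementation is more sophisticated), a minimal ring can be sketched as follows:

    import java.util.SortedMap;
    import java.util.TreeMap;

    // Basic consistent hashing ring: a key maps to the first node at or after
    // its hash, so adding or removing a node remaps only a small share of keys
    // instead of reshuffling everything, as naive modulo-based sharding would.
    class ConsistentHashRing {
        private final SortedMap<Integer, String> ring = new TreeMap<>();

        void addNode(String nodeId, int virtualNodes) {
            // Virtual nodes smooth out the data distribution across the ring.
            for (int i = 0; i < virtualNodes; i++)
                ring.put((nodeId + "#" + i).hashCode(), nodeId);
        }

        String nodeFor(Object key) {
            SortedMap<Integer, String> tail = ring.tailMap(key.hashCode());
            return tail.isEmpty() ? ring.get(ring.firstKey())  // wrap around
                                  : tail.get(tail.firstKey());
        }
    }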

High Performance Computing (HPC) Integration

One of the most distinctive characteristics of GridGain IMDB is the full integration of in-memory HPC at the core of the database.

Many traditional RDBMSs and No/NewSQL databases address only data storage and rudimentary data processing. In this scenario the data is retrieved from the database and has to be moved to some other node for processing. Once the data is processed, it is usually discarded.

Such data movement between different layers, even minimal, is almost always at the core of the scalability and performance problems in highly distributed systems.

GridGain IMDB was designed from the ground up to minimize unnecessary data movement and instead move computations to the data whenever possible; hence the integration of HPC technology at the very core of the database. Computations are dramatically smaller than the data they operate on (often by a factor of 1,000), they don’t change as often as the data, they have a strong and easily defined affinity to the data they require, and they typically place only a negligible load on the network and JVMs.

Even more importantly, this approach allows for clean parallelization of processing over the data stored in the database, since a computing task can now be intelligently split into sub-tasks that are sent to remote nodes to work in parallel on their respective local data subsets with absolutely zero global resource contention.
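Here is a hedged sketch of that split-and-aggregate pattern, with a hypothetical PartitionedData interface standing in for the cluster:

    import java.util.List;
    import java.util.function.Function;

    // Hypothetical cluster view: runs a sub-task on each node against that
    // node's local data subset and returns the per-node results.
    interface PartitionedData<T> {
        <R> List<R> mapOnEachNode(Function<List<T>, R> subTask);
    }

    class ParallelSum {
        // Each node sums only its own in-memory partition (no cross-node
        // contention); only the small per-node partials are aggregated here.
        static double total(PartitionedData<Double> trades) {
            return trades.mapOnEachNode(
                    local -> local.stream().mapToDouble(Double::doubleValue).sum())
                .stream().mapToDouble(Double::doubleValue).sum();
        }
    }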

GridGain IMDB supports MapReduce, distributed SQL, MPP, MPI, RPC, File System, and Document API types of data processing and querying: the deepest and widest ecosystem of HPC processing paradigms provided by any database or HPC framework.

Accessing the Database

GridGain IMDB can be queried and programmed in many different ways. Externally, you can use the Java, C++, or C# drivers. GridGain IMDB also natively supports custom REST and Memcached protocols.

In embedded mode you can use distributed SQL and JDBC as well as Lucene, text, and full-scan queries. For complex data computations you can use in-memory MapReduce, MPP, RPC, and MPI-based processing. All programming techniques in embedded mode have deeply customizable APIs, including distributed extensions to SQL, Java- or Scala-based custom SQL functions, streaming MapReduce, distributed continuations, connected-tasks support, etc.
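As one example of the non-SQL query flavors, a full-scan query can be sketched as follows (hypothetical interface; the real APIs differ):

    import java.util.List;
    import java.util.Map;
    import java.util.function.BiPredicate;

    // Hypothetical full-scan query facade: the filter is shipped to every node
    // and evaluated against that node's local entries in parallel; only the
    // matching entries travel back over the network.
    interface ScanQueryable<K, V> {
        List<Map.Entry<K, V>> scan(BiPredicate<K, V> filter);
    }

    // Usage sketch: employees.scan((id, e) -> e.getSalary() > 100_000);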

GridGain IMDB also provides an in-memory file system (GGFS – GridGain File System) as well as full support for the MongoDB Document API protocol.

Embedded vs. External Access

Unlike many traditional, NewSQL, and NoSQL databases, GridGain IMDB is designed to be easily programmable in embedded mode.

The traditional (external) approach dictates that the database be deployed separately and that data processing applications access it through some networking protocol and client library (i.e., the driver). This implies significant driver overhead and data movement, which makes any HPC or real-time database processing impossible, as discussed above.

While supporting external access via its C++, .NET, and Java drivers, GridGain IMDB also natively supports an embedded mode in which data processing logic can be deployed directly into the database itself and accessed programmatically in the same process. In other words, GridGain IMDB lets you initiate a distributed data processing task right from the database process itself, removing the driver overhead and its significant API limitations, and enabling rich functionality and sub-millisecond responses for complex distributed data processing tasks.

Among its many benefits, this is becoming a critically important capability for the rapidly growing machine-to-machine and streaming use cases that have no built-in human-interaction delays and require minimal latencies and linear horizontal scalability.

Fault Tolerance and Durability

GridGain IMDB provides advanced capabilities when it comes to fault tolerance and durability.

Each cache can be configured with one or more active backups, which provides data redundancy when a node crashes as well as improved performance in read-mostly scenarios. On topology changes (a node leaving or joining), the comprehensive pre-loading subsystem makes sure that data is synchronously or asynchronously re-partitioned while maintaining the desired consistency and availability. Each cache can also be independently configured for transactional read-through and write-through to a durable storage such as an RDBMS, HDFS, or a file system, to make sure that data is backed up in a durable datastore if required.

In the case of network segmentation, a.k.a. the “split-brain” problem, GridGain IMDB provides a pluggable segmentation-resolution architecture in which dirty writes or reads are impossible regardless of how segmented your cluster gets.

For complex and mission-critical deployments GridGain IMDB provides data center replication. When it is turned on, GridGain IMDB automatically makes sure that each data center consistently backs up its data to the other data centers (there can be more than one). GridGain supports both active-active and active-passive replication modes.

Transactions

GridGain IMDB has full support for distributed transactions with all ACID properties, including Optimistic and Pessimistic concurrency modes and READ_COMMITTED, REPEATABLE_READ, and SERIALIZABLE isolation levels.
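In code, a distributed transaction follows the familiar begin/commit pattern. The sketch below uses an illustrative API whose enums mirror the concurrency and isolation levels listed above; it is not GridGain’s exact API:

    enum Concurrency { OPTIMISTIC, PESSIMISTIC }
    enum Isolation { READ_COMMITTED, REPEATABLE_READ, SERIALIZABLE }

    interface Tx extends AutoCloseable {
        void commit();
        @Override void close(); // illustrative: rolls back if not committed
    }

    // Hypothetical transactional cache facade.
    interface TxCache<K, V> {
        Tx txStart(Concurrency concurrency, Isolation isolation);
        V get(K key);
        void put(K key, V val);
    }

    class TransferExample {
        // Atomically moves funds between two accounts: both updates commit or
        // neither does, under pessimistic locking and repeatable-read isolation.
        static void transfer(TxCache<String, Double> acc, String from, String to, double amt) {
            try (Tx tx = acc.txStart(Concurrency.PESSIMISTIC, Isolation.REPEATABLE_READ)) {
                acc.put(from, acc.get(from) - amt);
                acc.put(to, acc.get(to) + amt);
                tx.commit();
            }
        }
    }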

For JEE environments, like application servers, GridGain IMDB provides automatic integration with JTA/XA. Essentially GridGain becomes an XA resource and will automatically check if there is an active JTA transaction present.

In addition to transactions, in which GridGain IMDB allows multiple data operations to be executed atomically, GridGain also supports single atomic CAS (compare-and-set) operations, such as put-if-absent, compare-and-set, and compare-and-remove.
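These have the same semantics as the atomic operations on java.util.concurrent.ConcurrentMap, only executed with cluster-wide atomicity. A single-node stand-in illustrates the calls:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class CasExample {
        public static void main(String[] args) {
            // Single-node stand-in: a distributed cache exposes the same calls
            // with cluster-wide atomicity guarantees.
            ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

            cache.putIfAbsent("cfg", "v1");                     // put-if-absent
            boolean swapped = cache.replace("cfg", "v1", "v2"); // compare-and-set
            boolean removed = cache.remove("cfg", "v2");        // compare-and-remove

            System.out.println(swapped + " " + removed);        // prints: true true
        }
    }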

For more information head over to GridGain’s In-Memory Database.