Debunking DRAM vs. Flash Controversy vis-a-vis In-Memory Processing

Wikibon produced an interesting paper (which looks to have been paid for by Aerospike, a NoSQL database vendor that recently emerged by resurrecting the failed CitrusLeaf and acqui-hiring AlchemyDB, and whose product, of course, was recommended in the end) that compares NoSQL databases that store data on flash-based SSDs with those that store data in DRAM.

There are a number of factual problems with that paper, and I want to point them out.

Note that Wikibon doesn’t mention GridGain in this study (we are not a NoSQL datastore per se, after all), so I don’t have a dog in this fight other than annoyance with biased and factually incorrect writing.

“Minimal” Performance Advantage of DRAM vs SSD

The paper starts with a simple statement: “The minimal performance disadvantage of flash, relative to main memory…”. Minimal? I’ve seen a number of studies where the performance difference between SSDs and DRAM ranges from 100 to 10,000 times. For example, this University of California, Berkeley study claims that SSDs bring almost no advantage to the Facebook Hadoop cluster and that DRAM pre-caching is the way forward.

Let me provide an even shorter explanation. Assuming we are dealing with Java: SSD devices are visible to a Java application as typical block devices, and therefore are accessed as such. That means a typical object read from such a device involves the same steps as reading that object from a file: the hardware I/O subsystem, the OS I/O subsystem, OS buffering, the Java I/O subsystem and its buffering, Java deserialization and the GC pressure it induces. And… if you read the same object from DRAM, it takes a few bytecode instructions – and that’s it.
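
To make the difference concrete, here is a minimal, purely illustrative Java sketch of the two access paths (the Person class, the file name and the map are made up for the example, not taken from any product): the block-device path goes through OS and Java I/O plus deserialization, while the DRAM path is a single map lookup.

import java.io.*;
import java.util.concurrent.ConcurrentHashMap;

public class DiskVsDramRead {
    // Hypothetical domain object, used only for this illustration.
    static class Person implements Serializable {
        String name = "John";
    }

    public static void main(String[] args) throws Exception {
        File file = new File("person.bin");

        // Write one object to disk first so the example is self-contained.
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(new Person());
        }

        // SSD/disk path: hardware and OS I/O, stream buffering,
        // deserialization and the garbage it creates.
        Person fromDisk;
        try (ObjectInputStream in = new ObjectInputStream(
                new BufferedInputStream(new FileInputStream(file)))) {
            fromDisk = (Person) in.readObject();
        }

        // DRAM path: effectively a few bytecode instructions - a map lookup.
        ConcurrentHashMap<Long, Person> cache = new ConcurrentHashMap<Long, Person>();
        cache.put(1L, fromDisk);
        Person fromMemory = cache.get(1L);

        System.out.println(fromDisk.name + " / " + fromMemory.name);
    }
}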

Native C/C++ apps (like MongoDB) can take a slightly quicker route with memory-mapped files (or various other IPC methods) – but the performance increase will not be significant, for the obvious reason that entire pages must be read or swapped versus the single-object access pattern in DRAM.

Yet another recent technical explanation of the disadvantages of SSD storage can be found here (talking about Oracle’s “in-memory” strategy).

MongoDB, Cassandra, CouchDB DRAM-based?

Amid all the confusion on this topic it’s no wonder the author got it wrong. Neither MongoDB, Cassandra, nor CouchDB is an in-memory system. They are disk-based systems with support for memory caching. There’s nothing wrong with that and nothing new – every database developed in the last 25 years naturally provides in-memory caching to augment its main disk storage.

The fundamental difference here is that in-memory data systems like GridGain, SAP HANA, GigaSpaces, GemFire, SQLFire, MemSQL, VoltDB, etc. use DRAM (memory) as the main storage medium and use disk for optional durability and overflow. This focus on RAM-based storage allows these systems to completely re-optimize all of the main algorithms they use.

For example, the ACID implementation in GridGain, which provides full-featured distributed ACID transactions, beats every (EC-based) NoSQL database out there in read and even write performance: there are no single-key limitations, no consistency trade-offs to make, no application-side MVCC, no user-based conflict resolution or other crutches – it just works the same way as it does in Oracle or DB2 – but faster.

2TB Cluster for $1.2M 🙂

If there was one piece in the original paper that was completely made up to fit a predefined narrative, it was the price comparison. If the author thinks that a 2TB RAM cluster costs $1.2M today – I have not one but two Golden Gate Bridges to sell just for him…

Let’s see. A typical Dell/HP/IBM/Cisco blade with 256GB of DRAM will cost below $20K if you just buy at list prices (Cisco seems to offer the best deals, starting at around $15K for 256GB blades). Eight such blades give you 2TB of DRAM for roughly $160K, which brings the total cost of a 2TB cluster well below $200K (with all network and power equipment included and 100s of TBs of disk storage).

Is this more expensive than an SSD-only cluster? Yes – by 2.5-3x. But you are getting a dramatic performance increase with the right software that more than justifies the price difference.

Conclusion

A 2-3x price difference is nonetheless important, and it gives our customers a very clear choice. If price is an issue and high performance is not, there are disk-based systems of wide variety. If high performance and sub-second responses on processing TBs of data are required, the hardware will be proportionally more expensive.

However, with 1GB of DRAM costing less than 10 USD and DRAM prices dropping 30% every 18 months, the era of disks (flash or spinning) is clearly coming to its logical end. That’s normal… it’s progress, and we all need to learn how to adapt.

Has anyone seen tape drives lately?

In-Memory Data Grids… Explained.

Many companies that would not have considered using in-memory technology in the past due to its cost are now changing their core systems’ architectures to accommodate it. They want to take advantage of the low-latency transaction processing that in-memory technology offers. With the price of 1GB of RAM at less than one dollar and RAM prices dropping 30% every 18 months, it has become economically affordable to load entire operational datasets into memory and achieve dramatic performance improvements.

Companies are using this, for example, to perform calculations or create live dashboards that give management immediate insight into crucial operational data from their systems. Currently, users often have to wait until the end of a reporting period for batch jobs to process the accumulated data and generate the desired reports.

Modern in-memory technology connects to existing data stores such as Hadoop or traditional data warehouses and makes this data available in RAM, where it can then be queried or used in processing tasks with unprecedented performance. The power of such real-time insight lets companies react far faster and more flexibly than current systems allow.

This paper is meant to help readers understand the key features of modern in-memory products and how they affect integration and performance. Two key components form the underlying basis for the core capabilities of in-memory technology: in-memory compute grids and in-memory data grids. This paper concentrates on in-memory data grids.

What is an In-Memory Data Grid?

The goal of an In-Memory Data Grid (IMDG) is to provide extremely low latency access to, and high availability of, application data by keeping it in memory and to do so in a highly parallelized way. By loading terabytes of data into memory, an IMDG is able to support most of the Big Data processing requirements. At a very high level an IMDG is a distributed key-value object store similar in its interface to a typical concurrent hash map. You store and retrieve objects using keys.

Unlike systems where keys and values are limited to byte arrays or strings, an IMDG can have any application domain object as either a value or a key. This provides tremendous flexibility: exactly the same object your business logic is using can be kept in the data grid – without the extra step of marshaling and de-marshaling. It also simplifies the use of the data grid because, in most cases, you can interface with the distributed data store as you would with a simple hash map.

Being able to work with domain objects directly is one of the main differences between IMDGs and In-Memory Databases (IMDB). With the latter, users still need to perform Object-To-Relational Mapping (ORM) which typically adds significant performance overhead and complexity. With in-memory data grids this is avoided.
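
As a rough sketch of what “interfacing with the distributed store like a simple hash map” means, the code below uses a plain ConcurrentMap as a stand-in for an IMDG cache: the same domain object the business logic uses is stored and retrieved directly, with no ORM step. The Person class and the key are made up for the example; a real IMDG exposes a very similar map-like interface but partitions the entries across the cluster.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class DomainObjectCacheExample {
    // Hypothetical domain object - stored exactly as the business logic
    // uses it, with no object-to-relational mapping.
    static class Person {
        final long id;
        final String name;
        Person(long id, String name) { this.id = id; this.name = name; }
    }

    public static void main(String[] args) {
        // Stand-in for an IMDG cache used here only to show the interface.
        ConcurrentMap<Long, Person> personCache = new ConcurrentHashMap<Long, Person>();

        Person p = new Person(1L, "Alice");
        personCache.put(p.id, p);          // store the domain object as-is
        Person same = personCache.get(1L); // retrieve it without unmarshaling

        System.out.println(same.name);
    }
}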

How Do In-Memory Data Grids Differ From Other Solutions?

An IMDG, in general, is significantly different from products such as NoSQL databases, IMDBs, or NewSQL databases. For example, here are just some of GridGain’s IMDG features that make it unique:

  • Distributed ACID transactions with in-memory optimized 2PC protocol
  • Data Partitioning across a cluster (including full replication)
  • Work with domain objects rather than with primitive types or “documents”
  • Tight integration with In-Memory Compute Grid (IMCG)
  • Zero Deployment for both IMCG and IMDG
  • Pluggable segmentation resolution (for the “split-brain” problem)
  • Pluggable expiration policies (including built-in LRU, LIRS, random and time-based)
  • Read-through and write-through with pluggable durable store
  • Synchronous and asynchronous operations throughout
  • Pluggable data overflow storage
  • Master/master data replication and invalidation in both synchronous and asynchronous modes
  • Write-behind cache store support
  • Automatic, manual and delayed pre-loading on topology changes
  • Support for fully active replicas (backups)
  • Support for structured and unstructured data
  • Pluggable indexing support

Essentially IMDGs in their purest form can be viewed as distributed hash maps with each key cached on a particular cluster node – the bigger the cluster, the more data you can cache. The trick to this architecture is to make sure that the processing occurs on those cluster nodes where the required data is cached. By doing this all cache operations become local and there is no, or minimal, data movement within the cluster. In fact, when using a well-designed IMDG there should be absolutely no data movement on stable topologies. The only time when some of the data is moved is when new nodes join or some existing nodes leave, hence causing some data repartitioning within the cluster.

The picture below shows a classic IMDG with a key set of {k1, k2, k3} where each key belongs to a different node. The external database component is optional. If present, then IMDGs will usually automatically read data from the database or write data to it (a.k.a. read-through and write-through logic):

Even though IMDGs usually share some common basic functionality, many features and implementation details differ between vendors. When evaluating an IMDG product, pay attention to eviction policies, (pre)loading techniques, concurrent repartitioning, and memory overhead, for example. Also pay attention to the ability to query data at runtime. Some IMDGs, such as GridGain, allow users to query in-memory data using standard SQL, including support for distributed joins, which is quite rare.

The typical use of IMDGs is to partition data across the cluster and then send collocated computations to the nodes where the data resides. Since computations are usually part of Compute Grids and have to be properly deployed, load-balanced, failed over, and scheduled, the integration between Compute Grids and IMDGs is very important for obtaining the best performance. When In-Memory Compute and Data Grids are optimized to work together and utilize the same APIs, developers can more quickly build and deploy systems that reliably deliver the highest performance.

Distributed ACID Transactions

One of the distinguishing characteristics of IMDGs is support for Distributed ACID Transactions. Generally, a two-phase-commit (2PC) protocol is used to ensure data consistency within a cluster. Different IMDGs will have different underlying locking mechanisms, but more advanced implementations provide concurrent locking (like MVCC – Multi-Version Concurrency Control), reduce network chattiness to a minimum, and specifically optimize their main algorithms for in-memory processing – guaranteeing transactional ACID consistency with very high performance.

Guaranteed data consistency is one of the main differences between IMDGs and NoSQL databases.

NoSQL databases are usually designed with an Eventual Consistency (EC) approach, where data is allowed to be inconsistent for a period of time as long as it eventually becomes consistent. Generally, writes on EC-based systems are fast, but reads are slower (at best, only as fast as the writes). The latest IMDGs with an *optimized* 2PC protocol should at least match, if not outperform, EC-based systems on writes, and be significantly faster on reads. It is interesting to note that the industry has come full circle, moving from a then-slow 2PC approach to the EC approach, and now from EC back to an optimized 2PC, which is often significantly faster.

Different products have optimized the 2PC protocol in different ways, but generally the purpose of all optimizations is to increase concurrency, minimize network overhead, and reduce the number of locks a transaction requires to complete. As an example, Google’s distributed global database, Spanner, is based on a transactional 2PC approach simply because 2PC provides a faster and more straightforward way to guarantee data consistency and high throughput. GridGain introduced its “HyperLocking” technology, which enables efficient single and group distributed locking and is at the core of its transactional performance.

Distributed data grid transactions in GridGain span data cached on local as well as remote nodes. While automatic enlisting into JEE/JTA transactions is supported, GridGain data grid also allows users to create more light-weight cache transactions which are often more convenient to use. GridGain cache transactions support all ACID properties that you would expect from any transaction, including support for Optimistic and Pessimistic concurrency levels and Read-Committed, Repeatable-Read, and Serializable isolation levels. If a persistent data store is configured, then the transactions will also automatically span the data store.
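
To give a feel for the programming model, here is a sketch of what a pessimistic, repeatable-read cache transaction might look like. The type and method names (GridCacheTx, txStart, commit, rollback), the hypothetical Account class, and the keys are approximations of a typical GridGain-style cache API, not verbatim signatures; treat this as pseudocode for the transactional flow rather than an API reference.

	// Start a cache transaction (pessimistic, repeatable read).
	GridCacheTx tx = cache.txStart(PESSIMISTIC, REPEATABLE_READ);
	
	try {
	    // Reads and writes below form one atomic, isolated unit of work,
	    // even when the keys are stored on different cluster nodes.
	    Account from = cache.get("acct-1");
	    Account to = cache.get("acct-2");
	
	    from.withdraw(100);
	    to.deposit(100);
	
	    cache.put("acct-1", from);
	    cache.put("acct-2", to);
	
	    // Commit runs the in-memory optimized two-phase commit.
	    tx.commit();
	}
	catch (Exception e) {
	    // Undo all changes if anything went wrong.
	    tx.rollback();
	    throw e;
	}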

Multiversion Concurrency Control (MVCC)

GridGain’s in-memory data grid concurrency is based on an advanced implementation of MVCC (Multi-Version Concurrency Control) – the same technology used by practically all database management systems. It provides practically lock-free concurrency management by maintaining multiple versions of data instead of using wide-scope locks. Thus, MVCC in GridGain provides the backbone for high performance and overall system throughput for systems under load.
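
MVCC itself is a generic technique, and a toy single-node illustration helps explain why it is “practically lock free”: writers install new versions instead of locking readers out, and each reader keeps seeing the snapshot it started with. This is only a minimal sketch of the general idea (single value, single writer assumed), not GridGain’s implementation.

import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

public class MvccSketch {
    // All committed versions of a single value, keyed by commit version.
    private final ConcurrentNavigableMap<Long, String> versions = new ConcurrentSkipListMap<Long, String>();
    private final AtomicLong committedVersion = new AtomicLong(0);

    MvccSketch(String initial) {
        versions.put(0L, initial);
    }

    // A reader takes its snapshot version once and keeps seeing it,
    // no matter how many writers commit afterwards - no locks needed.
    long beginRead() {
        return committedVersion.get();
    }

    String read(long snapshotVersion) {
        return versions.floorEntry(snapshotVersion).getValue();
    }

    // A writer installs a new version and only then makes it visible
    // (a single writer is assumed here for brevity).
    void write(String newValue) {
        long next = committedVersion.get() + 1;
        versions.put(next, newValue);
        committedVersion.set(next);
    }

    public static void main(String[] args) {
        MvccSketch cell = new MvccSketch("v0");
        long snapshot = cell.beginRead();
        cell.write("v1");
        System.out.println(cell.read(snapshot));         // still "v0"
        System.out.println(cell.read(cell.beginRead())); // "v1"
    }
}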

In-Memory SQL Queries

What use would there be in caching all the data in memory if you could not query it? The in-memory platform should offer a variety of ways to query its data, such as standard SQL-based queries or Lucene-based text queries.

The JDBC driver implementation lets you query distributed data from the GridGain cache using standard SQL queries and the standard JDBC API. It will automatically fetch only the fields you actually need from the objects stored in cache.

The GridGain SQL query type lets you perform distributed cache queries using standard SQL syntax. There are almost no restrictions on which SQL syntax can be used. All inner, outer, and full joins are supported, as well as a rich set of SQL grammar and functions. The ability to join different classes of objects stored in cache, or across different caches, makes GridGain queries a very powerful tool. All indices are usually kept in memory, resulting in very low query-execution latencies.

Text queries are available when you are working with unstructured text data. GridGain can index such data with the Lucene or H2Text engine to let you query large volumes of text efficiently.

If there is no need to return results to the caller, all query results can be visited directly on the remote nodes. In that case all the logic is performed directly on the remotely queried nodes without sending any queried data back to the caller. This way analytics can be run directly on structured or unstructured data at in-memory speed and low latencies. At the same time, GridGain provides applications and developers a familiar way to retrieve and analyze the data.

Here’s a quick example. Notice how the Java code looks 100% identical to code talking to a standard SQL database – yet you are working with an in-memory data platform:

	// Register JDBC driver.
	Class.forName("org.gridgain.jdbc.GridJdbcDriver");
	 
	// Open JDBC connection.
	Connection conn = DriverManager.getConnection(
	    "jdbc:gridgain://localhost/" + CACHE_NAME,
	    configuration()
	);
	 
	// Create prepared statement.
	PreparedStatement stmt = conn.prepareStatement(
	    "select name, age from Person where age >= ?"
	);
	 
	// Configure prepared statement.
	stmt.setInt(1, minAge);
	 
	// Get result set.
	ResultSet rs = stmt.executeQuery();
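
The result set is then consumed with the standard JDBC API, exactly as with any relational database. A minimal continuation of the fragment above:

	// Iterate over the distributed query results.
	while (rs.next()) {
	    String name = rs.getString("name");
	    int age = rs.getInt("age");
	    System.out.println(name + " is " + age);
	}
	
	// Release resources.
	rs.close();
	stmt.close();
	conn.close();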

BigMemory Support

Traditionally, the JVM has been very good at Garbage Collection (GC). However, when running with large amounts of memory available, GC pauses can get very long. This generally happens because the GC now has a lot more memory to manage and often cannot cope without stopping your application completely (the so-called stop-the-world pauses) to allow itself to catch up. In our internal tests with the heap size set to 60G or 90G, GC pauses were sometimes as long as 5 minutes. Traditionally this problem was solved by starting multiple JVMs on the same physical box, but that does not always work very well because some applications want to collocate large amounts of data in one JVM for faster processing.

To mitigate large GC pauses, GridGain supports BigMemory with data allocated off-heap instead of on-heap. Thus, the JVM GC does not know about it and does not slow down. You can start your Java application with a relatively small heap, e.g. below 512M, and then let GridGain utilize hundreds of gigabytes of memory as off-heap data cache. Whenever data is first accessed, it gets cached in the on-heap memory. Then, after a certain period of non-use, it gets placed into off-heap memory cache. If your off-heap memory gets full, the least used data can be optionally evicted to the disk overflow store, also called swap store.

One of the distinguishing characteristics of GridGain off-heap memory is that the on-heap memory footprint is constant and does not grow with the size of your data. Also, an off-heap cache entry has very little overhead, which means that you can fit more data in memory. Another interesting feature of GridGain is that both primary and secondary indices for SQL can optionally be kept in off-heap memory as well.
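
GridGain’s off-heap storage is its own implementation, but the underlying JVM mechanism can be illustrated with a small sketch: memory allocated outside the Java heap (here via a direct ByteBuffer) holds the data bytes, so the garbage collector never has to scan them, while only a tiny on-heap handle remains. This is a simplified model of the general idea, not GridGain’s actual code.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class OffHeapSketch {
    public static void main(String[] args) {
        byte[] serialized = "some cached entry".getBytes(StandardCharsets.UTF_8);

        // Allocate memory outside the Java heap: the GC does not manage
        // these bytes, only the small ByteBuffer handle on the heap.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(serialized.length);
        offHeap.put(serialized);

        // Reading the entry back copies it onto the heap on demand.
        offHeap.flip();
        byte[] copy = new byte[offHeap.remaining()];
        offHeap.get(copy);

        System.out.println(new String(copy, StandardCharsets.UTF_8));
    }
}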

Datacenter Replication

When working with multiple data centers it is important to make sure that if one data center goes down, another data center is fully capable of picking up its load and data. Data center replication is meant to solve exactly this problem. When data center replication is turned on, the GridGain data grid will automatically make sure that each data center is consistently backing up its data to other data centers (there can be one or more).

GridGain supports both active-active and active-passive modes for replication. In active-active mode both data centers are fully operational online and act as backup copies of each other. In active-passive mode, only one data center is active and the other serves only as a backup for the active data center.

Data center replication can be either transactional or eventually consistent. In transactional mode, a data grid transaction will be considered complete only when all the data has been replicated to the other data center. If the replication step fails, the whole transaction is rolled back on both data centers. In eventually consistent mode, a transaction will usually complete before the replication has finished. In this mode the data is usually buffered on one data center and then flushed to the other data center either when the buffer fills up or when a certain time period elapses. Eventually consistent mode is generally a lot faster, but it also introduces a lag between updates on one data center and the data being replicated to the other.

If one of the data centers goes offline, the other will immediately take over responsibility for it. Whenever the crashed data center comes back online, it will receive all the updates it missed from the other data center.

In-Memory Compute Grid Integration

Integration between the IMCG and IMDG is based on the idea of `affinity routing`. Affinity routing is one of the key concepts behind Compute and Data Grid technologies (whether they are in-memory or disk based). In general, affinity routing allows a job to be co-located with the data set this job needs to process.

The idea is pretty simple: if the job and the data are not co-located, the job will arrive on some remote node and will have to fetch the necessary data from yet another node where the data is stored. Once processed, this data will most likely have to be discarded (since it is already stored and backed up elsewhere). This process induces an expensive network trip plus all the associated marshaling and demarshaling. At scale, this behavior can bring almost any system to a halt.

Affinity co-location solves this problem by co-locating the job with its necessary data set. We say that there is an affinity between the processing (i.e. the job) and the data that this processing requires – and therefore we can route the job, based on this affinity, to a node where the data is stored, avoiding unnecessary network trips and extra marshaling and demarshaling.

GridGain provides advanced capabilities for affinity co-location: from a simple single-method call to sophisticated APIs supporting complex affinity keys and non-trivial topologies.
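
The sketch below models the routing decision in plain Java: a simplified affinity function maps a key to the node that owns it, and the job then runs against that node’s local data instead of pulling the data across the network. The Node class, the data, and the bare-modulo affinity function are purely illustrative; real grids use consistent hashing and pluggable affinity functions.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AffinityRoutingSketch {
    // Purely illustrative cluster node holding a local slice of the data.
    static class Node {
        final int id;
        final Map<String, String> localData = new HashMap<String, String>();
        Node(int id) { this.id = id; }
    }

    // Simplified affinity function: map a key to its owning node.
    // Real IMDGs use consistent hashing / pluggable affinity instead.
    static Node ownerOf(String key, List<Node> nodes) {
        return nodes.get(Math.abs(key.hashCode()) % nodes.size());
    }

    public static void main(String[] args) {
        List<Node> cluster = new ArrayList<Node>();
        for (int i = 0; i < 3; i++) cluster.add(new Node(i));

        // Store the value on the node that the affinity function picks.
        String key = "order-42";
        ownerOf(key, cluster).localData.put(key, "10 items");

        // Affinity routing: send the job to that same node, so the read
        // below is purely local - no network trip, no extra marshaling.
        Node target = ownerOf(key, cluster);
        System.out.println("Processing on node " + target.id + ": " + target.localData.get(key));
    }
}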

Summary

In-memory data grids are used throughout a wide spectrum of industries in applications as diverse as risk analytics, trading systems, bioinformatics, ecommerce, and online gaming. Essentially, every project that struggles with scalability and performance can benefit from in-memory processing and an in-memory data grid architecture. When you evaluate different products, make sure they have the advanced features outlined in this paper. This way you can find an optimal solution for your needs and ensure right at the outset that your solution will actually scale flawlessly in those critical moments when you need it to.

In-Memory Compute Grid… Explained

Dmitriy Setrakyan provided an excellent explanation for In-Memory Data Grid (IMDG) in his blog http://gridgain.blogspot.com/2012/11/in-memory-data-grids-explained.html.

I will try to provide a similar description for In-Memory Compute Grid (IMCG).

PDF version of this article is available.

IMCG – In-Memory Compute Grid

One of the main ideas Dmitriy put forward is the importance of integration between in-memory storage (IMDG) and in-memory processing (IMCG) for building truly scalable applications. Yet IMCGs and their implementations are seen less frequently than IMDGs, mainly due to the historical reasons described below.

Most vendors to this day concentrate first on storage technology (of the IMDG, NoSQL, or NewSQL variety). Once the storage product is built, adding any non-rudimentary IMCG capability on top of it becomes increasingly difficult, if not impossible (we'll see why below). IMCG capabilities are generally more fundamental to the overall product, and therefore have to be built first, or built together with the storage side so that they sit at its core.

It should be no surprise, by the way, that GridGain and Hadoop are still the only products on the market that successfully combine both storage and processing in one product (although very differently), while there are dozens of storage-only projects available (and probably hundreds if you count every NoSQL attempt on GitHub).

Core Concepts

The easiest way to understand IMCGs is through a comparison to IMDGs. While IMDGs focus on distributed in-memory storage and management of large data sets by partitioning this data across available computers in the grid, IMCGs concentrate on efficiently executing algorithms (i.e. the user's code or instructions) across the same set of computers on the same grid. And that's all there is to it: an IMDG is all about storing and managing data in-memory, and an IMCG is all about processing and computing across the same data.

When seen from this vantage point – it is pretty clear why tight integration between IMDG and IMCG is so important: they are practically two sides of the same coin – storage and processing, that both coalesce around your data.

Most of the functionality in any IMCG can be split into four individual groups:

  1. Distributed Deployment & Provisioning
  2. Distributed Resources Management
  3. Distributed Execution Models (a.k.a. IMCG Breadth)
  4. Distributed Execution Services (a.k.a. IMCG Depth)

1. Distributed Deployment & Provisioning

Historically deployment and provisioning of the user's code onto the grid for execution was one of the core reasons why grid computing in general was considered awkward and cumbersome at best, and downright unusable at worst. From the early products like Globus, Grid Engine, DataSynapse, Platform Computing, and such, to today's Hadoop and most of the NoSQL projects – deploying and re-deploying your changes is a manual step that involves rebuilding all of your libraries, copying them everywhere, and restarting your services. Some systems will do copying & restarting for you (Hadoop) and some will require you to do it manually via some UI-based crutch.

This problem is naturally exacerbated by the fact that IMCGs are a distributed technology to begin with and are routinely used on topologies consisting of dozens if not hundreds of computers. Stopping services, redeploying libraries, and restarting services across such topologies during development, CI testing, and staging becomes a major issue.

GridGain is the first IMCG to address this issue by providing "zero deployment" capabilities. With "zero deployment" all necessary JVM classes and resources are loaded on demand. Further, GridGain provides three different modes of peer-to-peer deployment supporting the most complex deployment environments like custom class loaders, WAR/EAR files, etc.

Zero deployment technology enables users to simply bring default GridGain nodes online with these nodes then immediately becoming part of the data and compute grid topology that can store any user objects or perform any user tasks without any need for explicit deployment of user’s classes or resources.

2. Distributed Resources Management

Resource management in distributed systems usually refers to the ability to manage physical devices such as computers, networks, and storage as well as software components like JVM, runtimes and OSes. Specifics of that obviously differ based on whether or not the IMCG is deployed on some kind of managed infrastructure like AWS, how it is DevOps managed, etc.

One of the most important resource management functions of any IMCG is automatic discovery and maintenance of a consistent topology (i.e. the set of compute nodes). Automatic discovery allows the user to add and remove compute nodes from the IMCG topology at runtime while maintaining zero downtime for the tasks running on the IMCG. Consistent topology ensures that any topology changes (nodes failing and leaving, or new nodes joining) are viewed by all compute nodes in the same order and consistently.

GridGain provides the most sophisticated discovery system of any IMCG. A pluggable and user-defined Discovery SPI is at the core of GridGain's ability to provide fully automatic and consistent discovery functionality for GridGain nodes. GridGain ships with several out-of-the-box implementations, including IP-multicast- and TCP/IP-based implementations with direct support for AWS S3 and Zookeeper.

3. Distributed Execution Models (a.k.a IMCG Breadth)

Support for different distributed execution models is what makes IMCG a compute framework. For clarity let's draw a clear distinction between an execution model (such as MapReduce) and the particular algorithms that can be implemented using this model (i.e. Distributed Search): there is a finite set of execution models but practically an infinite set of possible algorithms.

Generally, the goal of any IMCG (as well as of any compute framework in general) is to support as many different execution models as possible, providing the end-user with the widest set of options on how a particular algorithm can be implemented and ultimately executed in the distributed environment. That's why we often call it IMCG Breadth.

GridGain's IMCG, for example, provides direct support for the following execution models:

  • MapReduce Processing

    GridGain provides general distributed fork-join type processing optimized for in-memory execution. More specifically, MapReduce-type processing defines the method of splitting an original compute task into multiple sub-tasks, executing these sub-tasks in parallel on any managed infrastructure, and aggregating (a.k.a. reducing) the results back into one final result.

    GridGain's MapReduce is essentially a distributed computing paradigm that allows you to map a task into smaller jobs based on some key, execute these jobs on grid nodes, and reduce the multiple job results into one task result (a small Java sketch of this split/execute/reduce flow follows the list of execution models below). The difference between GridGain MapReduce and other MapReduce frameworks, like Hadoop, is that GridGain MapReduce is geared towards streaming low-latency in-memory processing.

    Where a Hadoop MapReduce task takes its input from disk, produces intermediate results on disk, and writes its output to disk, GridGain does everything Hadoop does but in memory – it takes input from memory via direct API calls, produces intermediate results in memory, and then creates the final result in memory as well. Full in-memory processing allows GridGain to deliver results in sub-second time where other MapReduce frameworks would take minutes.

  • Streaming Processing & CEP

    Streaming processing and the corresponding Complex Event Processing (CEP) is a type of processing where input data is not static but is constantly "streaming" into the system. Unlike other MapReduce frameworks, which spawn different external executable processes that work with data from disk files and produce output on disk files (even when working in streaming mode), GridGain Streaming MapReduce works on streaming data directly in memory.

    As the data comes into the system, the user can keep spawning MapReduce tasks and distributing them to any set of remote nodes, on which the data is processed in parallel and the result is returned to the caller. The main advantage is that all MapReduce tasks execute directly in memory and can take input and store results using GridGain in-memory caching, thus providing very low latencies.

  • MPP/RPC Processing

    GridGain also provides native support for classic MPP (massively parallel processing) and RPC (Remote Procedure Call) type of processing including direct remote closure execution, unicast/broadcast/reduce execution semantic, shared distribution sessions and many other features.

  • MPI-style Processing

    GridGain's high-performance distributed messaging provides MPI-style (i.e. message-passing based) processing capabilities. Built on proprietary asynchronous IO and the world's fastest marshaling algorithm, GridGain provides synchronous and asynchronous semantics, distributed events, and pub-sub messaging in a distributed environment.

  • AOP/OOP/FP/SQL Integrated Processing

    GridGain is the only platform that integrates compute grid capabilities into existing programming paradigms such as AOP, OOP, FP and SQL:

    • You can use AOP to annotate your Java or Scala code for automatic MapReduce or MPP execution on the grid.
    • You can use both OOP and pure FP APIs for MapReduce/MPP/RPC execution of your code.
    • GridGain allows executable closures to be injected into the SQL execution plan, letting you inject your own filters and local and remote reducers right into ANSI SQL.
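
As promised above, here is a small, self-contained Java sketch of the split/execute/reduce flow that in-memory MapReduce performs across grid nodes. A local thread pool stands in for the remote nodes, and the task, jobs, and reduction logic are made up for the example; the point is that input, intermediate results, and the final result all stay in memory.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InMemoryMapReduceSketch {
    public static void main(String[] args) throws Exception {
        // The thread pool stands in for remote grid nodes.
        ExecutorService nodes = Executors.newFixedThreadPool(4);

        String input = "count the total number of characters in these words";

        // Map: split the task into one job per word.
        List<Future<Integer>> jobs = new ArrayList<Future<Integer>>();
        for (final String word : input.split(" ")) {
            jobs.add(nodes.submit(new Callable<Integer>() {
                public Integer call() {
                    return word.length(); // the "job" executed on a node
                }
            }));
        }

        // Reduce: aggregate all in-memory intermediate results.
        int total = 0;
        for (Future<Integer> job : jobs) {
            total += job.get();
        }

        System.out.println("Total characters: " + total);
        nodes.shutdown();
    }
}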

4. Distributed Execution Services (a.k.a. IMCG Depth)

In many respects, distributed execution services are the "meat" around the proverbial execution models' "bones". Execution services refer to the many dozens of deep IMCG features that support various execution strategies and models, including services such as distributed failover, load balancing, and collision resolution – hence the moniker IMCG Depth.

Many such features are shared between different IMCGs and general compute frameworks – but some are unique to a particular product. Here is a short list of some of the key execution services provided by GridGain's IMCG:

  • Pluggable Failover

    Failover management and the resulting fault tolerance is a key property of any grid computing infrastructure. Based on its SPI-based architecture, GridGain provides fully pluggable failover logic with several popular implementations available out of the box. Unlike other grid computing frameworks, GridGain allows failing over the logic and not only the data.

    With a grid task being the atomic unit of execution on the grid, the fully customizable and pluggable failover logic enables the developer to choose a specific policy much the same way one would choose a concurrency policy in RDBMS transactions.

    Moreover, GridGain allows customization of the failover logic for all tasks, for a group of tasks, or even for every individual task. Using meta-programming techniques, the developer can even customize the failover logic for each task execution.

    This allows fine-tuning how a grid task reacts to failures, for example:
    – Fail the entire task immediately upon failure of any of its jobs (fail-fast approach)
    – Fail over a failed job to other nodes until the topology is exhausted (fail-slow approach)

  • Pluggable Topology Resolution

    GridGain provides the ability to either directly or automatically select a subset of grid nodes (i.e. a topology) on which MapReduce tasks will be executed. This gives the developer tremendous flexibility in deciding where a task will be executed. The decision can be based on any arbitrary user or system information: for example, the time of day or day of the week, the type of task, available resources on the grid, current or average statistics from a given node or aggregated from a subset of nodes, network latencies, predefined SLAs, etc.

  • Pluggable Resource Matching

    For cases when some grid nodes are more powerful or have more resources than others, you can run into scenarios where nodes are under-utilized or over-utilized. Both are equally bad for a grid – ideally all nodes in the grid should be equally utilized. GridGain provides several ways to achieve equal utilization across the grid including, for example:

    Weighted Load Balancing
    If you know in advance that some nodes are, say, 2 times more powerful than others, you can attach proportional weights to the nodes. For example, part of your grid nodes would get a weight of 1 and the other part would get a weight of 2. In this case job distribution will be proportional to node weights, and nodes with a heavier weight will proportionally get more jobs assigned to them than nodes with lower weights. So nodes with weight 2 will get twice as many jobs as nodes with weight 1.

    Adaptive Load Balancing
    For cases when nodes are not equal and you don’t know exactly how different they are, GridGain will automatically adapt to differences in load and processing power, sending more jobs to more powerful nodes and fewer jobs to weaker ones. GridGain achieves this by listening to various metrics on the nodes and constantly adapting its load balancing policy to the differences in load.

  • Pluggable Collision Resolution

    Collision resolution allows regulating how grid jobs get executed when they arrive on a destination node for execution. Its functionality is similar to task management via the customizable GCD (Grand Central Dispatch) on Mac OS X, as it allows the developer to provide custom job dispatching on a single node. In general, a grid node will have multiple jobs arriving for execution and potentially multiple jobs that are already executing or waiting to execute. There are multiple possible strategies for dealing with this situation: all jobs can proceed in parallel, jobs can be serialized so that only one job executes at any given point in time, only a certain number or certain types of grid jobs can proceed in parallel, etc.

  • Pluggable Early and Late Load Balancing

    GridGain provides both early and late load balancing for its Compute Grid, defined by the load balancing and collision resolution SPIs – effectively enabling full customization of the entire load balancing process. Early and late load balancing allow adapting grid task execution to the non-deterministic nature of execution on the grid.

    Early load balancing is supported via the mapping operation of the MapReduce process. The mapping – the process of assigning jobs to nodes in the resolved topology – happens right at the beginning of task execution and is therefore considered early load balancing.

    Once jobs are scheduled and have arrived on the remote node for execution, they get queued up on that node. How long a job stays in the queue and when it gets executed is controlled by the collision SPI – which effectively defines the late load balancing stage.

    One implementation of the load balancing orchestrations provided out-of-the-box is a job stealing algorithm. This detects imbalances at a late stage and sends jobs from busy nodes to the nodes that are considered free right before the actual execution.

    Grid and cloud environments are often heterogeneous and non-static; tasks can change their complexity profiles dynamically at runtime, and external resources can affect execution of a task at any point. All these factors underscore the need for proactive load balancing during the initial mapping operation as well as on destination nodes where jobs can be waiting in queues.

  • Distributed Task Session

    A distributed task session is created for every task execution and allows for sharing state between different jobs within the task. Jobs can add, get, and wait for various attributes to be set, which allows grid jobs and tasks to remain connected in order to synchronize their execution with each other and opens a solution to a whole new range of problems.

    Imagine for example that you need to compress a very large file (let’s say terabytes in size). To do that in a grid environment you would split such file into multiple sections and assign every section to a remote job for execution. Every job would have to scan its section to look for repetition patterns. Once this scan is done by all jobs in parallel, jobs would need to synchronize their results with their siblings so compression would happen consistently across the whole file. This can be achieved by setting repetition patterns discovered by every job into the session.

  • Redundant Mapping Support

    In some cases a guarantee of a timely, successful result is a lot more important than the cost of executing redundant jobs. In such cases GridGain allows you to spawn multiple copies of the same job within your MapReduce task to execute in parallel on remote nodes. Whenever the first job completes successfully, the other identical jobs are cancelled and ignored. Such an approach provides a much higher guarantee of successful, timely job completion at the expense of redundant executions. Use it whenever your grid is not overloaded and consuming CPU for redundancy is not costly.

  • Node Local Cache

    When working in a distributed environment often you need to have a consistent local state per grid node that is reused between various job executions. For example, what if multiple jobs require a database connection pool for their execution – how do they get this connection pool to be initialized once and then reused by all jobs running on the same grid node? Essentially you can think about it as a per-grid-node singleton service, but the idea is not limited to services only, it can be just a regular Java bean that holds some state to be shared by all jobs running on the same grid node.

  • Cron-based Scheduling

    In addition to running direct MapReduce tasks on the whole grid or any user-defined portion of the grid (virtual subgrid), you can schedule your tasks to run repetitively as often as you need. GridGain supports Cron-based scheduling syntax for the tasks, so you can schedule your tasks to run using the familiar standard Cron syntax that we are all used to.

  • Partial Asynchronous Reduction

    Sometimes when executing MapReduce tasks you don’t need to wait for all the remote jobs to complete in order for your task to complete. A good example would be a simple search. Let’s assume, for example, that you are searching for some pattern from data cached in GridGain data grid on many remote nodes. Once the first job returns with found pattern you don’t need to wait for other jobs to complete as you already found what you were looking for. For cases like this GridGain allows you to reduce (i.e. complete) your task before all the results from remote jobs are received – hence the name “partial asynchronous reduction”. The remaining jobs belonging to your task will be cancelled across the grid in this case.

  • Pluggable Task Checkpoints

    Checkpointing a job provides the ability to periodically save its state. This becomes especially useful in combination with fail-over functionality. Imagine a job that takes 5 minutes to execute, but after the 4th minute the node on which it was running crashes. The job will be failed over to another node, but it would usually have to be restarted from scratch and would take another 5 minutes. However, if the job was checkpointed every minute, then the most work that could be lost is the last minute of execution, and upon failover the job would restart from the last saved checkpoint. GridGain allows you to easily checkpoint jobs to better control the overall execution time of your jobs and tasks.

  • Distributed Continuations

    Continuations are useful for cases when jobs need to be suspended and their resources need to be released. For example, if you spawn a new task from within a job, it would be wrong to wait for that task completion synchronously because the job thread will remain occupied while waiting, and therefore your grid may run out of threads. The proper approach is to suspend the job so it can be continued later, for example, whenever the newly spawned task completes.

    This is where GridGain continuations become really helpful. GridGain allows users to suspend and restart their jobs at any point. So in our example, where a remote job needs to spawn another task and wait for the result, our job would spawn the task execution and then suspend itself. Then, whenever the new task completes, our job would wake up and resume its execution. Such an approach allows for easy task nesting and recursive task execution. It also allows you to have many more cross-dependent jobs and tasks in the system than there are available threads.

  • Integration with IMDG

    Integration with IMDG based on affinity routing is one of the key concepts behind Compute and Data Grid technologies (whether they are in-memory or disk based). In general, affinity routing allows to co-locate a job and the data set this job needs to process.

    The idea is pretty simple: if jobs and data are not co-located, then jobs will arrive on some remote node and will have to fetch the necessary data from yet another node where the data is stored. Once processed this data most likely will have to be discarded (since it’s already stored and backed up elsewhere). This process induces an expensive network trip plus all associated marshaling and demarshalling. At scale – this behavior can bring almost any system to a halt.

    Affinity co-location solves this problem by co-locating the job with its necessary data set. We say that there is an affinity between processing (i.e. the job) and the data that this processing requires – and therefore we can route the job based on this affinity to a node where data is stored to avoid unnecessary network trips and extra marshaling and demarshaling. GridGain provides advanced capabilities for affinity co-location: from a simple single-method call to sophisticated APIs supporting complex affinity keys and non-trivial topologies.

Example

The following example demonstrates a typical stateless computation task – calculating Pi on the grid (written in Scala, but it can be done just as easily in Java, Groovy, or Clojure). This example shows how tremendously simple the implementation can be with GridGain – literally just a dozen lines of code.

Note that this is the full source code – copy'n'paste it, compile it, and run it. Note also that it works on one node – and just as well on a thousand nodes in the grid or cloud with no code changes – just linearly faster. What is even more interesting is that this application automatically includes all of these execution services:

  • Auto topology discovery
  • Auto load balancing
  • Distributed failover
  • Collision resolution
  • Zero code deployment & provisioning
  • Pluggable marshaling & communication

Scala code:

import org.gridgain.scalar._
import scalar._
import scala.math._

object ScalarPiCalculationExample {
    private val N = 10000

    def main(args: Array[String]) {
        scalar {
            println("Pi estimate: " +
                grid$.spreadReduce(for (i <- 0 until grid$.size()) yield () => calcPi(i * N))(_.sum))
        }
    }

    def calcPi(start: Int): Double =
        // Nilakantha algorithm.
        ((max(start, 1) until (start + N)) map 
            (i => 4.0 * (2 * (i % 2) - 1) / (2 * i) / (2 * i + 1) / (2 * i + 2)))
            .sum + (if (start == 0) 3 else 0)
}

GridGain 4.3.1 Released!

The GridGain 4.3.1 service release includes several important bug fixes and a host of new optimizations. It is 100% backward compatible and is a highly recommended update for anyone running production systems on the 4.x code line.

Details

Date: November 10th, 2012
Version: 4.3.1e
Build: 10112012

New Features and Enhancements

  • Added remove operation to data loader
  • Significantly improved performance of partition to node mapping
  • Added GridSerializationBenchmark for comparing performance of Java, Kryo, and GridGain serialization
  • Added property-based configuration to remote clients
  • Optimized concurrency for asynchronous methods in C++ client
  • Removed support for Groovy++ DSL Grover

Core Bug Fixes

  • Unmarshalling of SimpleDateFormat fails with NPE
  • Possible NPE in Indexing Manager when using distributed data structures
  • Swap partition iterator skips entries if off-heap iterator is empty
  • `GridDataLoader` does not allow to cache primitive arrays
  • Excessive memory consumption in indexing SPI
  • Add check on startup that GridOptimizedMarshaller is supported by running JDK version
  • If ordered message is timed out, other messages for the same topic may not be processed
  • ScalarPiCalculationExample does not provide correct estimate for PI

Client Connectivity Bug Fixes

  • Client router with explicit default configuration leads to NPE.
  • Repair REST client support to make session token and client ID optional
  • Ping does not work properly in C++ client

Visor Management Bug Fixes

  • Clear and Compact operations in Visor do not account for node selection
  • Move Visor management tasks into a separate thread pool
  • Preload dialog in Visor does not show correct number of keys
  • GC dialog in Visor waits indefinitely for dead nodes
  • Increase tooltip dismiss time in Visor
  • Visor log search does not show nodes table correctly on Windows

Download It Now!