Four Myths of In-Memory Computing

As with any fast-growing technology, In-Memory Computing has attracted a lot of interest and writing in the last couple of years. It’s bound to happen that some of the information gets stale pretty quickly – while some of it was simply not very accurate to begin with. And thus myths are starting to grow and take hold.

I want to talk about some of the misconceptions that we hear almost daily here at GridGain and provide the necessary clarification (at least from our point of view). Being one of the oldest companies working in the in-memory computing space – for the last 7 years – we’ve heard and seen all of it by now, and earned a certain amount of perspective on what in-memory computing is and, most importantly, what it isn’t.

In-Memory Computing

Let’s start at… the beginning. What is in-memory computing? Kirill Sheynkman from RTP Ventures gave the following crisp definition which I like very much:

“In-Memory Computing is based on a memory-first principle utilizing high-performance, integrated, distributed main memory systems to compute and transact on large-scale data sets in real-time – orders of magnitude faster than traditional disk-based systems.”

The most important part of this definition is “memory-first principle”. Let me explain…

Memory-First Principle

Memory-first principle (or architecture) refers to a fundamental set of algorithmic optimizations one can take advantage of when data is stored mainly in Random Access Memory (RAM) rather than on block-level devices like HDDs or SSDs.

RAM has dramatically different characteristics than block-level devices including disks, SSDs and Flash-on-PCI-E arrays. Not only is RAM ~1,000x faster as a physical medium, it completely eliminates the traditional overhead of block-level devices including marshaling, paging, buffering, memory-mapping, possible networking, OS I/O, and the I/O controller.

Let’s look at an example: say you need to read a single record in your program.

In an in-memory context your code will be compiled to interact with the memory controller and read the record directly from local RAM in the exact format you need (i.e. your object representation in a particular programming language) – in most cases that amounts to simple pointer arithmetic. If you use proper vectorized execution techniques, you’ll often read it from the L2 cache of your CPUs. All in all, we are talking about nanoseconds, and this performance is guaranteed in all cases.

If you read the same record from a block-level device, you are in for a very different ride… Your code will have to deal with OS I/O, buffered reads, the I/O controller, the seek time of the device, and de-marshaling the byte stream you get back into the object representation you actually need. In the worst-case scenario, we’re talking a dozen milliseconds. Note that SSDs and Flash-on-PCI-E only improve the portion of the overhead related to the seek time of the device (and only marginally).
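
To make the difference concrete, here is a minimal Java sketch of the two read paths. It is illustrative only: the timings are orders of magnitude, not a rigorous benchmark, and it assumes a pre-existing `records.dat` file of fixed-width 64-byte records.

```java
import java.io.RandomAccessFile;
import java.util.HashMap;
import java.util.Map;

public class ReadPathDemo {
    public static void main(String[] args) throws Exception {
        // In-memory path: a direct lookup in local RAM -- effectively
        // pointer arithmetic, typically tens to hundreds of nanoseconds.
        Map<Long, String> inMemory = new HashMap<>();
        inMemory.put(42L, "record-42");

        long t0 = System.nanoTime();
        String fromRam = inMemory.get(42L);
        long ramNanos = System.nanoTime() - t0;

        // Block-device path: OS I/O, buffering, seek, then de-marshaling
        // bytes back into an object -- microseconds to milliseconds.
        try (RandomAccessFile file = new RandomAccessFile("records.dat", "r")) {
            long t1 = System.nanoTime();
            file.seek(42L * 64);                      // seek to the record's offset
            byte[] buf = new byte[64];
            file.readFully(buf);                      // OS + controller + device latency
            String fromDisk = new String(buf).trim(); // de-marshal the byte stream
            long diskNanos = System.nanoTime() - t1;

            System.out.printf("RAM: %d ns, disk: %d ns (%s / %s)%n",
                ramNanos, diskNanos, fromRam, fromDisk);
        }
    }
}
```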

Taking advantage of these differences, and optimizing your software accordingly, is what the memory-first principle is all about.


Now, let’s get to the myths.

Myth #1: It’s Too Expensive

This is one of the most enduring myths of in-memory computing. Today – it’s simply not true. Five or ten years ago, however, it was indeed true. Look at the historical chart of USD/MB storage pricing to see why:
[Chart: historical USD/MB storage prices]

The interesting trend is that the price of RAM drops roughly 30% every 12 months and is solidly on the same trajectory as the price of HDDs, which for all practical purposes is already almost zero (enterprises today care more about heat, energy and space than the raw price of the device).

The price of a 1TB RAM cluster today is anywhere between $20K and $40K – and that includes all the CPUs, over a petabyte of disk-based storage, networking, etc. Cisco UCS, for example, offers very competitive white-label blades in the $30K range for a 1TB RAM setup: http://buildprice.cisco.com/catalog/ucs/blade-server. Smart shoppers on eBay can easily beat even the $20K price barrier (as we did at GridGain for our own recent testing/CI cluster).

A few years from now the same 1TB RAM cluster setup will be available for $10K-15K – which makes it all but a commodity at that level.

And don’t forget about Memory Channel Storage (MCS), which aims to revolutionize storage by providing flash in a DIMM form factor – I blogged about it a few weeks ago.

Myth #2: It’s Not Durable

This myth is based on a deep-rooted misunderstanding about in-memory computing. Blame us as well as other in-memory computing vendors, as we’ve evidently done a pretty poor job explaining this subject.

The fact of the matter is that almost all in-memory computing middleware (apart from the very simplistic) offers one or more strategies for in-memory backups, durable storage backups, disk-based swap space overflow, etc.

More sophisticated vendors provide a comprehensive tiered storage approach where users can decide what portion of the overall data set is stored in RAM, local disk swap space or RDBMS/HDFS – where each tier can store progressively more data but with progressively longer latencies.
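
As a minimal sketch of this tiered idea (the class and tier wiring below are hypothetical illustrations, not GridGain’s actual API): a lookup falls through progressively slower tiers and promotes hot data back to RAM.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative tiered store: each tier can hold more data but with
 * progressively longer latencies. Hypothetical interfaces only.
 */
public class TieredStore<K, V> {
    private final Map<K, V> ram = new ConcurrentHashMap<>(); // tier 1: RAM
    private final Map<K, V> swap;                            // tier 2: local disk swap
    private final Map<K, V> backing;                         // tier 3: RDBMS/HDFS

    public TieredStore(Map<K, V> swap, Map<K, V> backing) {
        this.swap = swap;
        this.backing = backing;
    }

    public Optional<V> get(K key) {
        V v = ram.get(key);                  // nanoseconds
        if (v == null) v = swap.get(key);    // milliseconds
        if (v == null) v = backing.get(key); // tens of milliseconds or more
        if (v != null) ram.put(key, v);      // promote hot data back to RAM
        return Optional.ofNullable(v);
    }
}
```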

Yet another source of confusion is the difference between operational datasets and historical datasets. In-memory computing is not aimed at replacing enterprise data warehouses (EDW), backup or offline storage services – like Hadoop, for example. In-memory computing is aimed at improving operational datasets that require mixed OLTP and OLAP processing and in most cases are less than 10TB in size. In other words, in-memory computing doesn’t suffer from an all-or-nothing syndrome and never requires you to keep all data in memory.

If you consider the totality of the data stored by any one enterprise, disk still has a clear place as the medium for offline, backup and traditional EDW use cases – and thus the durability is there where it always has been.

Myth #3: Flash Is Fast Enough

The variations of this myth include the following:

  • Our business doesn’t need this super-fast processing (likely shortsighted)
  • We can mount RAM disk and effectively get in-memory processing (wrong)
  • We can replace HDDs with SSDs to get the performance (depends)

Mounting a RAM disk is a very poor way of utilizing memory from every technical angle (see above).

As far as SSDs go – for some use cases the marginal performance gain you can extract from flash storage over spinning disk could be enough. In fact, if you are absolutely certain that this marginal improvement is all you will ever need for a particular application, flash storage is the best bet today.

However, for a rapidly growing number of use cases, speed matters. And it matters more, and to more businesses, every day. In-memory computing is not about a marginal 2-3x improvement – it is about 10-100x improvements, enabling new businesses and services that simply weren’t feasible before.

There’s one story I’ve been telling for quite some time now, and it’s a very telling example of how in-memory computing relates to speed…

Around 6 years ago GridGain had a financial customer who had a small application (~1,500 LOC in Java) that took 30 seconds to prepare a chart and a table with some historical statistical results for a given basket of stocks (all stored in an Oracle RDBMS). They wanted to put it online on their website. Naturally, users wouldn’t wait for half a minute after pressing the button – so the task was to bring it down to around 5-6 seconds. Now, how do you make something 5 times faster?

We initially looked at every possible angle: faster disks (even SSDs, which were very expensive then), RAID systems, faster CPUs, rewriting everything in C/C++, running on a different OS, Oracle RAC – or any combination thereof. But nothing would make the application run 5x faster – not even close… Only when we brought the dataset into memory and parallelized the processing over 5 machines using in-memory MapReduce were we able to get results in less than 4 seconds!
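
The essence of that fix can be sketched in a few lines of plain Java. This is not the customer’s code or GridGain’s API – just an illustration, using JDK parallel streams on one JVM as a stand-in for fanning the aggregation out across in-memory nodes:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class BasketStats {
    record Tick(String symbol, double price) {}

    // Once the historical ticks live in memory, the aggregation can be
    // parallelized across cores (or, in GridGain's case, across nodes).
    static Map<String, Double> averagePrices(List<Tick> ticks) {
        return ticks.parallelStream()
            .collect(Collectors.groupingBy(
                Tick::symbol,
                Collectors.averagingDouble(Tick::price)));
    }
}
```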

The moral of the story is that you don’t need a NASA-sized problem to benefit from in-memory computing. In fact, every day thousands of businesses are solving performance problems that look trivial at first but in the end can only be solved with in-memory computing speed.

Speed also matters in the raw sense. Look at this diagram from Stanford on the relative performance of disk, flash and RAM:
[Figure: relative performance of disk, flash and RAM (Stanford)]

As DRAM closes its pricing gap with flash, this dramatic difference in raw performance will become more and more pronounced and tangible for businesses of all sizes.

Myth #4: It’s About In-Memory Databases

This is one of those misconceptions that you hear mostly from analysts. Most analysts look at SAP HANA, Oracle Exalytics or something like QlikView – and they conclude that this is all in-memory computing is about, i.e. databases or in-memory caching for faster analytics.

There’s a logic behind it, of course, but I think this is a rather shortsighted view.

First of all, in-memory computing is not a product – it is a technology. The technology is used to build products. In fact, nobody sells just “in-memory computing” but rather products that are built with in-memory computing.

I also think that in-memory databases are an important use case… for today. They solve a specific problem that everyone readily understands, i.e. a faster system of record. It’s the low-hanging fruit of in-memory computing, and it helps popularize in-memory computing.

I do, however, think that the long term growth for in-memory computing will come from streaming use cases. Let me explain.

Stream processing is typically characterized by the massive rate at which events come into a system. A number of potential customers we’ve talked to indicated that they need to process a sustained stream of up to 100,000 events per second without a single event loss. For a typical 30-second sliding processing window we are dealing with 3,000,000 events, shifting by 100,000 every second, which have to be individually indexed, continuously processed in real-time and eventually stored.
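
For scale, here is a minimal sketch of such a window sized for exactly those numbers (illustrative only – a real system would also index and evict by timestamp): 100,000 events/sec × 30 sec = 3,000,000 live events, held comfortably in RAM as a ring buffer.

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Minimal sliding-window buffer sized for the numbers above:
 * 100,000 events/sec x 30 sec = 3,000,000 slots. Illustrative only.
 */
public class SlidingWindow {
    private static final int CAPACITY = 100_000 * 30;  // 3,000,000 events
    private final long[] events = new long[CAPACITY];  // event payloads/ids
    private final AtomicLong writeIndex = new AtomicLong();

    // Each new event overwrites the slot that fell out of the 30s window,
    // so memory use stays constant while the window slides.
    public void append(long event) {
        int slot = (int) (writeIndex.getAndIncrement() % CAPACITY);
        events[slot] = event;
    }
}
```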

This downpour will choke any disk I/O (spinning or flash). The only feasible way to sustain this load and the corresponding business processing is to use in-memory computing technology. There’s simply no other storage technology today that supports that level of requirements.

So we strongly believe that in-memory computing will reign supreme in streaming processing.

GridGain In-Memory Database: Plain English Overview

I picked this chapter up from GridGain’s Document Center. I like it because it gives a simple, plain-English, high-level overview of our In-Memory Database: no coding, no diagrams, no deep dives. Just a quick and easy rundown of what’s there…

At a Glance

GridGain IMDB is a distributed, Java-based, object-based key-value datastore. Logically it can be viewed as a collection of one or more caches (a.k.a. maps or dictionaries). Each cache is a distributed collection of key-value pairs. Both keys and values are represented as Java objects and can be of any user-defined type.
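
In other words, the programming model looks roughly like the sketch below. The interface and record types are hypothetical stand-ins for illustration, not GridGain’s actual API:

```java
// Hypothetical API shape for the model described above -- a distributed
// collection of typed key-value pairs. Not GridGain's actual interface.
interface Cache<K, V> {
    V get(K key);
    void put(K key, V value);
    boolean remove(K key);
}

// Keys and values are plain user-defined Java objects:
record TradeKey(String symbol, long sequence) {}
record Trade(String symbol, double price, int quantity) {}
```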

Every cache must be pre-configured individually and cannot be created on the fly (due to distributed consistency semantics). You’ll find that cache and cache projections will be your main API entry points while working with GridGain IMDB in embedded mode.

Each cache has many configuration properties with the main one being its type. GridGain IMDB supports three cache types: local, replicated and partitioned.

As the name implies, the local mode stores all data locally without any distribution, providing a lightweight transactional local storage. A replicated cache replicates (copies) data to all nodes in the cluster, giving the best availability but reducing the overall in-memory capacity of the database since data is copied everywhere. Partitioned mode is the most scalable mode, as it equally partitions data across all nodes in the cluster so that each node is only responsible for a small portion of the data.

The combination of these storage modes in a single database (as well as the many specific configuration and optimization properties available for each mode) makes GridGain IMDB a very convenient distributed datastore, as it doesn’t force you to use just one specific storage model.
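
The capacity trade-off between the modes reduces to simple arithmetic. A back-of-the-envelope sketch, assuming a hypothetical cluster of 10 nodes with 100GB of RAM each and one backup copy in partitioned mode:

```java
public class CapacityMath {
    public static void main(String[] args) {
        int nodes = 10;        // cluster size (assumption)
        int gbPerNode = 100;   // RAM per node (assumption)
        int backups = 1;       // backup copies in partitioned mode

        // Replicated: every node holds the full data set, so the usable
        // capacity is capped by a single node's RAM.
        int replicatedGb = gbPerNode;                          // 100 GB
        // Partitioned: data is split across nodes, divided by copies held.
        int partitionedGb = nodes * gbPerNode / (1 + backups); // 500 GB

        System.out.printf("replicated: %d GB, partitioned: %d GB%n",
            replicatedGb, partitionedGb);
    }
}
```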

GridGain IMDB stores data in a layered storage system that consists of 4 layers: JVM on-heap memory, JVM off-heap memory, local disk-based swap space, and an optional durable cache store. Each layer can store more data but entails progressively higher latencies for data access. The developer has full control over the configuration of these layers.

Another interesting characteristic of GridGain IMDB is that it was developed first as a highly distributed system, and only later did it become a full-fledged database. This reversed approach makes data and processing distribution a natural capability of the database.

GridGain IMDB is based on unique HyperClustering technology that enables GridGain IMDB to scale to thousands of nodes in a single transactional topology (based on actual production customers).

GridGain IMDB clustering is based on a peer-to-peer topology, its transaction implementation is based on an advanced MVCC-based design, and its partitioning is based on an automatic multilayer consistent hashing implementation – free from sharding limitations and other crude data distribution approaches.
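
For readers unfamiliar with the idea, here is a minimal consistent-hashing sketch in Java. It is a teaching toy – GridGain’s production implementation is multilayer and far richer – but it shows why adding or removing a node only remaps a small fraction of keys:

```java
import java.util.SortedMap;
import java.util.TreeMap;

/**
 * Minimal consistent-hashing sketch: node tokens are placed on a hash
 * ring and a key is owned by the first token clockwise from its hash.
 */
public class HashRing {
    private final SortedMap<Integer, String> ring = new TreeMap<>();

    // Virtual nodes smooth out the distribution across physical nodes.
    public void addNode(String nodeId, int virtualNodes) {
        for (int i = 0; i < virtualNodes; i++)
            ring.put((nodeId + "#" + i).hashCode(), nodeId);
    }

    public String nodeFor(Object key) {
        SortedMap<Integer, String> tail = ring.tailMap(key.hashCode());
        // Wrap around to the start of the ring if we fell off the end.
        Integer token = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(token);
    }
}
```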

High Performance Computing (HPC) Integration

One of the most unique characteristics of GridGain IMDB is the full integration of In-Memory HPC at the core of the database.

Many traditional RDBMS and No/NewSQL databases only address data storage and rudimentary data processing. In this scenario the data is retrieved from the database and has to be moved to some other processing node. Once data is processed, it is usually discarded.

Such data movement between different layers, even minimal, is almost always at the core of the scalability and performance problems in highly distributed systems.

GridGain IMDB was designed from the ground up to minimize unnecessary data movement and instead move computations to the data whenever possible – hence the integration of HPC technology at the very core of the database. Computations are dramatically smaller in size than data – often by a factor of 1,000x – they don’t change as often as the data, they have strong and easily defined affinity to the data they require, and they typically put only negligible load on the network and JVMs.

What is even more important, this approach allows for clean parallelization of processing over the data stored in the database, since a computing task can now be intelligently split into sub-tasks that are sent to remote nodes to work in parallel on their respective local data subsets with absolutely zero global resource contention.
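
A minimal illustration of that split, with threads standing in for nodes (plain JDK code, not GridGain’s API): the tiny closure is shipped to each partition, runs against local data only, and only the small partial results travel back for the reduce step.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ComputeToData {
    // Each "node" owns a local partition of the data; we ship the (tiny)
    // computation to every partition instead of moving the data.
    static long countMatching(List<List<String>> partitions, String needle)
            throws Exception {
        ExecutorService nodes = Executors.newFixedThreadPool(partitions.size());
        try {
            List<Future<Long>> subTasks = partitions.stream()
                .map(local -> nodes.submit(() ->                  // map phase
                    local.stream().filter(s -> s.contains(needle)).count()))
                .toList();
            long total = 0;
            for (Future<Long> f : subTasks) total += f.get();     // reduce phase
            return total;
        } finally {
            nodes.shutdown();
        }
    }
}
```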

GridGain IMDB supports MapReduce, distributed SQL, MPP, MPI, RPC, File System, and Document API types of data processing and querying – the deepest and widest ecosystem of HPC processing paradigms provided by any database or HPC framework.

Accessing Database

GridGain IMDB can be queried and programmed in many different ways. In an external context you can use the Java, C++, or C# drivers. GridGain IMDB also natively supports custom REST and Memcached protocols.

In embedded mode you can use distributed SQL and JDBC as well as Lucene, Text and full-scan queries. For complex data computations you can use in-memory MapReduce, MPP, RPC and MPI-based processing. All programming techniques in embedded mode have deeply customizable APIs including distributed extensions to SQL, Java or Scala-based custom SQL functions, streaming MapReduce, distributed continuations, connected tasks support, etc.

GridGain IMDB also provides in-memory file system (GGFS – GridGain File System) as well as full support for MongoDB Document API protocol.

Embedded vs. External Access

Unlike many traditional, NewSQL and NoSQL databases GridGain IMDB is designed to be easily programmable in embedded mode.

The traditional (external) approach dictates that the database be deployed separately, with data processing applications accessing it through some networking protocol and client library (i.e. the driver). This implies significant driver overhead and data movement that make any HPC or real-time database processing impossible, as we discussed above.

While supporting external access as well via its C++, .NET, and Java drivers, GridGain IMDB also natively supports an embedded mode where data processing logic can be deployed directly into the database itself and therefore accessed programmatically in the same process. In other words, GridGain IMDB allows you to initiate a distributed data processing task right from the database process itself, removing any driver overhead and its significant API limitations – enabling rich functionality and sub-millisecond response for complex distributed data processing tasks.

Among its many benefits, this is becoming a critically important capability for rapidly growing machine-to-machine and streaming use cases that have no human interaction delays built in and require minimal latencies and linear horizontal scalability.

Fault Tolerance and Durability

GridGain IMDB provides advanced capabilities when it comes to fault tolerance and durability.

Each cache can be configured with one or more active backups, which provide data redundancy when a node crashes as well as improved performance in read-mostly scenarios. On topology changes (a node leaves or joins) the comprehensive pre-loading subsystem will make sure that data is synchronously or asynchronously re-partitioned while maintaining the desired consistency and availability. Each cache can be independently configured for transactional read-through and write-through to durable storage such as an RDBMS, HDFS, or a file system to make sure that data is backed up in a durable datastore, if required.
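
The read-through/write-through contract boils down to an adapter like the hypothetical interface below (illustrative shape only, not GridGain’s actual interface):

```java
/**
 * Hypothetical read-through/write-through adapter illustrating the
 * contract described above. On a cache miss the value is loaded from the
 * durable store; every transactional write is propagated to it as well.
 */
interface DurableStore<K, V> {
    V load(K key);            // read-through on cache miss
    void write(K key, V val); // write-through inside the transaction
    void delete(K key);       // removal propagated to durable storage
}
```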

In case of network segmentation, a.k.a. “split-brain” problem, GridGain IMDB provides pluggable segmentation resolution architecture where dirty writes or reads are impossible regardless of how segmented your cluster gets.

For complex and mission critical deployments GridGain IMDB provides data center replication. When data center replication is turned on, GridGain IMDB will automatically make sure that each data center is consistently backing up its data to other data centers (there can be more than one). GridGain supports both active-active and active-passive modes for replication.

Transactions

GridGain IMDB has full support for distributed transactions with all ACID properties, including support for Optimistic and Pessimistic concurrency modes and the READ_COMMITTED, REPEATABLE_READ, and SERIALIZABLE isolation levels.

For JEE environments, like application servers, GridGain IMDB provides automatic integration with JTA/XA. Essentially GridGain becomes an XA resource and will automatically check if there is an active JTA transaction present.

In addition to transactions, where GridGain IMDB allows multiple data operations to be executed atomically, GridGain also supports single atomic CAS (compare-and-set) operations, such as put-if-absent, compare-and-set, and compare-and-remove.
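
These operations have the same semantics as the JDK’s own `ConcurrentMap` atomic methods, which makes them easy to demonstrate. The snippet below uses a plain `ConcurrentHashMap` purely as an analogy for the cache API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CasDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> cache = new ConcurrentHashMap<>();

        // put-if-absent: succeeds only when no value is present.
        cache.putIfAbsent("account", 100);

        // compare-and-set: replace 100 with 150 only if 100 is still there.
        boolean updated = cache.replace("account", 100, 150);

        // compare-and-remove: remove only if the current value matches.
        boolean removed = cache.remove("account", 150);

        System.out.println(updated + " " + removed); // true true
    }
}
```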

For more information head over to GridGain’s In-Memory Database.

Why MCS Means Rapid In-Memory Computing Adoption

What does the relatively new acronym MCS have to do with the accelerated adoption of in-memory computing? I’d say everything.

MCS stands for Memory Channel Storage and it essentially allows you to put NAND flash storage into a DIMM form factor and enable it to interface with a CPU via a standard memory controller. Put another way, MCS provides a drop-in replacement for DDR3 RDIMMs with 10x the memory capacity and a 10x reduction in price.

Historically, one of the major inhibitors behind in-memory computing adoption was the high cost of DRAM relative to disks and flash storage. While advantages such as 100x performance, lower power consumption and higher reliability were clearly known for years, the price delta was and is still relatively high:

Storage                        ~Performance      ~Price
1TB MCS                        20-200x           TBD (~$5,000)
1TB DDR3 RDIMM (32 DIMMs)      1,000-10,000x     $20,000
1TB PCI-E                      10-100x           $5,000
1TB SSD                        10-100x           $1,000
1TB HDD                        1x                $100

While spinning HDDs are essentially cost-free for enterprise consumption, and flash storage is enjoying mass adoption, DRAM storage still lags behind simply due to higher cost.

MCS-based storage is about to change this once and for all, as it aims to bring the price of DIMM-based flash storage to the same level as today’s SSD and PCI-E flash storage.

MCS vs. PCI-E Flash

If prices are relatively similar between MCS and PCI-E storage, what makes MCS so much more important? The answer is direct memory access vs. a block-based device.

All PCI-E flash storage today (FusionIO, Violin, basic SSDs, etc.) is recognized by the OS as block devices, i.e. essentially fast hard drives. Applications access these devices via the typical file interface, involving all the usual marshaling, buffering, OS context switching, networking and I/O overhead.

MCS provides an option to view its flash storage simply as main system memory, eliminating all the OS/IO/network overhead, while working directly via a highly optimized memory controller – the same controller that handles the massive CPU-DDR3 data exchange – enabling software like GridGain’s to access the flash storage as normal memory. This is a game changer and potentially the final frontier in storage placement technology. In fact, you can’t place application data any closer to the CPU than main memory, and that is precisely what MCS enables us to do at terabyte and petabyte scale.

Moreover, MCS provides direct improvements over PCI-E storage. Diablo Technologies, the pioneer behind MCS technology, claims that MCS is more performant (lower latencies and higher bandwidth) than typical PCI-E and SATA SSDs while providing the ever-elusive constant latency that is unachievable with standard PCI-E or SSD technologies.

Plug-n-Play

Another important characteristic of MCS storage is the plug-n-play fashion in which it can be used – no custom hardware, no custom software required. Imagine, for example, an array of 100 micro-servers (ARM-based servers in a micro form factor), each with 256GB of MCS-based system memory, drawing less than 10 watts of power, costing less than $1,000 each.

You now have a cluster with 25TB in-memory storage, 200 cores of processing power, running standard Linux, drawing around 1000 watts for about the same cost as a fully loaded Tesla Model S. Put GridGain’s In-Memory Computing Stack on it and you have an eco-friendly, cost effective, powerful real-time big data cluster ready for any task.

Welcome to the future.

Columnar vs. Key-Value Storage Models

What are the performance differences between in-memory columnar databases like SAP HANA and GridGain’s In-Memory Database (IMDB), which uses distributed key-value storage? This question comes up regularly in conversations with our customers, and the answer is not very obvious.

Storage Models

First off, let’s clearly state that we are talking about the storage model only and its implications on performance for various use cases. It’s important to note that:

  • The storage model doesn’t dictate or preclude particular transactionality or consistency guarantees; there are columnar databases that support ACID (HANA) and those that don’t (HBase); there are distributed key-value databases that support ACID (GridGain) and those that don’t (for example, Riak and memcached).
  • The storage model doesn’t dictate a specific query language; using the examples above – GridGain and HANA support SQL; HBase, for example, doesn’t.

Unlike transactionality and query language, however, performance considerations are not that straightforward.

Note also: SAP HANA has a pluggable storage model and an experimental row-based storage implementation. We’ll concentrate on columnar storage, which apparently accounts for all HANA usage at this point.

HANA’s Columnar Storage Model

Let’s recall what the columnar storage model entails in general, and note its HANA specifics.

Some of its stand-out characteristics include:

  • Data in the columnar model is kept in columns (vs. rows, as in row-based storage models).
  • Since data in a single column is almost always homogeneous it’s frequently compressed for storage (especially in in-memory systems like HANA).
  • Aggregate functions (i.e. column functions) are very fast on columnar data model since the entire column can be fetched very quickly and effectively indexed.
  • Inserts, updates and row functions, however, are significantly slower than their row-based counterparts as a trade-off of the columnar approach (inserting a row leads to multiple column inserts) – see the sketch at the end of this section. Because of this characteristic, columnar databases are typically used in R/OLAP scenarios (where data doesn’t change) and very rarely in OLTP use cases (where data changes frequently).
  • Since columnar storage is fairly compact it doesn’t generally require distribution (i.e. data partitioning) to store large datasets – the entire database can often be logically stored in memory of a single server. HANA, however, provides comprehensive support for data partitioning.

It is important to emphasize that the columnar storage model is ideally suited for very compact memory utilization, for two main reasons:

  • The columnar model is a natural fit for compression, which often provides a dramatic reduction in memory consumption.
  • Since column-based functions are very fast, there is no need for materialized views of aggregated values – the necessary values are simply computed on the fly; this too leads to a significantly reduced memory footprint.
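
To make the aggregate-vs-insert trade-off concrete, here is a toy Java illustration of a columnar layout (real columnar stores add compression, dictionaries and vectorized execution on top of this):

```java
import java.util.Arrays;

public class LayoutDemo {
    // Columnar layout: one array per column -- a full-column scan is a
    // tight, cache-friendly loop, which is why aggregates fly.
    static String[] symbols = {"AAA", "BBB", "CCC"};
    static double[] prices = {10.5, 11.0, 9.8};
    static int[] quantities = {100, 250, 75};

    static double sumPrices() {
        double sum = 0;
        for (double p : prices) sum += p; // sequential scan of one column
        return sum;
    }

    // Inserting one logical row touches every column array (and, in a real
    // columnar store, may force de/re-compression) -- the OLTP trade-off.
    static void insertRow(String symbol, double price, int qty) {
        symbols = append(symbols, symbol);    // column 1
        prices = append(prices, price);       // column 2
        quantities = append(quantities, qty); // column 3
    }

    static String[] append(String[] a, String v) {
        String[] b = Arrays.copyOf(a, a.length + 1);
        b[a.length] = v;
        return b;
    }
    static double[] append(double[] a, double v) {
        double[] b = Arrays.copyOf(a, a.length + 1);
        b[a.length] = v;
        return b;
    }
    static int[] append(int[] a, int v) {
        int[] b = Arrays.copyOf(a, a.length + 1);
        b[a.length] = v;
        return b;
    }
}
```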

GridGain’s IMDB Key-Value Storage Model

Key-value (KV) storage model is less defined than its columnar counterpart and usually involves a fair amount of vendor specifics.

Historically, there are two schools of KV storage models:

  • Traditional (examples include Riak, memcached, Redis). The common characteristic of these systems is a raw, language independent storage format for the keys and values.
  • Data Grid (examples include GridGain IMDB, GigaSpaces, Coherence). The common trait of these systems is the reliance on JVM as underlying runtime platform, and treating keys and values as user-defined JVM objects.

GridGain’s IMDB belongs to the Data Grid branch of KV storage models. Some of its key characteristics are:

  • Data is stored in a set of distributed maps (a.k.a. dictionaries or caches); to a first approximation you can think of a value as a row in a row-based model, and a key as that row’s primary key. Following this analogy, a single KV map can be approximated as a row-based table with an automatic primary key index.
  • Keys and values are represented as user-defined JVM objects and therefore no automatic compression can be performed.
  • Data distribution is designed in from the ground up. Data is partitioned across the cluster, mitigating, in part, the lack of compression. Unlike HANA, data partitioning is mandatory.
  • MapReduce is the main API for data processing (SQL is supported as well).
  • Strong affinity and co-location semantics provided by default.
  • No bias towards aggregate or row-based processing performance and therefore no bias towards either OLAP or OLTP applicability.

Performance Considerations

It is somewhat expected that for heavy transactional processing GridGain will provide overall better performance in most cases:

  • Columnar model is rather inefficient in updating or inserting values in multiple columns.
  • Transactional locking is also less efficient in columnar model.
  • Required de-compression and re-compression further degrades performance.
  • KV storage model, on the other hand, provides an ideal model for individual updates as individual objects can be accessed, locked and updated very effectively.
  • Lack of compression in GridGain IMDB makes updates go even faster than in columnar model with compression.

As an example, GridGain just won a public tender for one of the biggest financial institutions in the world, achieving 1 billion transactional updates per second on 10 commodity blades costing less than $25K altogether. That transactional performance and the associated TCO are clearly not territory any columnar database can approach.

For OLAP workloads the picture is less obvious. HANA is heavily biased towards OLAP processing, while GridGain IMDB is neutral towards it. Both GridGain IMDB and SAP HANA provide comprehensive data partitioning capabilities and allow for processing parallelization – the MPP traits necessary for scale-out OLAP processing. I believe the actual difference observed by customers will be driven primarily by three factors rooted deep in the differences between the columnar and KV implementations in the respective products:

  • Optimizations around data affinity and co-location.
  • Optimizations around the distribution overhead.
  • Optimizations around indexing of partitioned data.

Unfortunately, there’s no way to provide any generalized guidance on the performance difference here… We always recommend trying both in your particular scenario, paying attention to the specific configuration and tuning around the three points mentioned above – and seeing what results you get. It does take time and resources – but you may be surprised by your findings!

Re-Imagining Ultimate Performance

It’s been somewhat quiet here and on GridGain side for a few months – and we’ve had a good reason for it.

We’ve just announced the closing of a $10M Series B investment, bringing an awesome new investor with it. In the last 6 months not only have we closed a new round of investment, we’ve rebuilt and tripled our sales and business development team, retooled our marketing, released new products, and have 3 other products in the development pipeline to be announced this year. We’ve been busy…

But I think the most important thing we’ve accomplished so far is the crystallization and validation of our vision and strategy around end-to-end stack for In-Memory Computing.

In-Memory Computing

Kirill Sheynkman, one of our board members and investor, probably put it the best: “In-Memory Computing is characterized by using high-performance, integrated, distributed memory systems to manage and transact on large-scale data sets in real-time, orders of magnitude faster than possible with traditional disk-based technologies”.

And yet In-Memory Computing is not a feature or just a product – it is a new way to compute and store data, the type of revolution we haven’t witnessed since the early seventies, when IBM released the “winchester” disk, the IBM 3340, and the era of HDDs officially began. Today we are in the same kind of transitional period, moving away from HDDs/SSDs and other block devices to a new era of DRAM-based storage, and it is creating a tidal wave of innovation in software.

Just as the development of cheap HDDs pushed the database industry forward in the early seventies, when SQL was born, today the relentless data growth, coupled with real-time requirements for data processing, necessitates the move to in-memory processing, massive parallelization and unstructured data.

Unlike many companies around us, here at GridGain we strongly believe that In-Memory Computing is a paradigm shift. It’s not just a single product, enhancement or feature add-on – it’s a new way to think about how we deal with exponentially growing data sets and the different types of payloads this data explosion brings in.

Here at GridGain we want to lead this revolution and we have the vision and technology to do just that…

End-to-End In-Memory Computing Stack

Most of today’s business applications dealing with large data sets (outside of legacy batch processing) are built around three fundamental types of payloads:

  • Database or system of records,
  • High performance, parallelized computations, and
  • Real time, high frequency streaming & CEP data processing.

These three types of payloads (or combinations of them) are at the core of practically every big data end-user system built today. Providing in-memory products directly addressing these three types of payloads is what makes GridGain an end-to-end In-Memory Computing stack:

[Figure: GridGain’s end-to-end in-memory computing ecosystem]

What’s also important is that everything GridGain has was built in-house – without exception. We didn’t acquire some fledgling startup, get “merged” into something else, or acqui-hire some dying open source project to quickly fill a gap in the product line – every product we have is built by the same team, from the same mold, and came about in the course of the natural evolution of our product line over the last 7 years.

That’s why you have absolutely zero learning curve when moving from product to product. Our customers often note just how cohesive and unified our products “feel” to them: familiar APIs, principles and concepts, same configuration, same management, same installation, same documentation… and the same engineers helping them with support.

Platforms don’t get built by haphazardly stitching together random pieces of software – they grow organically over time by dedicated teams.

Integrated Products

A few years ago we noticed a class of customers that would have loved to get the benefits of in-memory computing but just didn’t have the appetite for the development, and simply shied away from using any in-memory computing products altogether.

Instead of losing these customers (like everyone else), we decided to pick some of the most frequent use cases we come across and provide highly integrated, plug-n-play products for them, a.k.a. accelerators, so that they can enjoy the benefits of in-memory computing without any need for development or any changes to their systems whatsoever.

That’s how In-Memory Hadoop® and In-Memory MongoDB® Accelerators came about. And that’s how cloud storage accelerators are coming about in a few months.

A unique characteristic of GridGain’s integrated products is the “no assembly required” nature in which they integrate. They deliver all the scalability and performance advantages of GridGain’s In-Memory Computing stack with zero code changes and minimal configuration changes to the host products.

Management & Monitoring

No end-to-end stack can truly be called that without a single, unified management and monitoring system. GridGain provides the #1 devops support among in-memory computing products with its Visor Administration Console. GridGain’s Visor is a GUI-based and CLI-based system that provides deep runtime management, monitoring, and operational support for running any GridGain product in a production context.

[Screenshot: GridGain Visor dashboard]

Time Is Now

Einstein got it right when he said imagination is more important than knowledge. At GridGain, we’ve re-imagined ultimate performance as In-Memory Computing so that you can re-imagine your company for today’s increasingly competitive business environment.

GridGain understands that In-Memory Computing is more than the latest tech trend. It’s the next major shift for an increasingly hyper business world in which organizations face problems that traditional technology can’t even fathom, much less solve. In-Memory Computing is a step all organizations must take to remain competitive, and we’re ready to take that step with you.

You’ll never need to analyze less data. The speed of business will never be slower. Your business challenges will never be simpler. Now is the time for In-Memory Computing – only GridGain gives you a complete solution without any compromises.