Potential of In-Memory NoSQL Databases



What is NoSQL?
A NoSQL (often interpreted as "Not Only SQL") database provides a mechanism for storing and retrieving data that is modeled by means other than the tabular relations used in relational databases. Motivations for this approach include simplicity of design, horizontal scaling, and finer control over availability.
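The non-tabular modeling described above can be illustrated with a toy sketch (the user record, field names, and tables are all invented for the example): a document store keeps the nested structure as-is, while a relational model must normalize it into flat rows and reassemble it with a join.

```python
# A hypothetical user record in a document-oriented (NoSQL) model:
# the nested structure is stored as-is, with no fixed tabular schema.
user_document = {
    "id": 42,
    "name": "Alice",
    "emails": ["alice@example.com", "a.smith@example.com"],
    "address": {"city": "Pune", "zip": "411001"},
}

# The same data in a relational (tabular) model is normalized into
# flat rows across two tables, linked by a foreign key (user id).
users_table = [(42, "Alice", "Pune", "411001")]
emails_table = [(42, "alice@example.com"), (42, "a.smith@example.com")]

# Reassembling the document requires a join over the two tables.
def load_user(user_id):
    uid, name, city, zip_code = next(r for r in users_table if r[0] == user_id)
    return {
        "id": uid,
        "name": name,
        "emails": [e for (u, e) in emails_table if u == user_id],
        "address": {"city": city, "zip": zip_code},
    }

assert load_user(42) == user_document
```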

What is an In-Memory Database?
An in-memory database (IMDB; also main-memory database system, MMDB, or memory-resident database) is a database management system that primarily relies on main memory for data storage, in contrast to systems that employ a disk storage mechanism. Main-memory databases are faster than disk-optimized databases because the internal optimization algorithms are simpler and execute fewer CPU instructions, and accessing data in memory eliminates seek time when querying, which provides faster and more predictable performance than disk.
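The main-memory storage model can be sketched with a toy key-value store (a deliberately minimal illustration; real in-memory databases add persistence logs, replication, indexing, and concurrency control on top of this idea):

```python
# A toy in-memory key-value store: all data lives in a Python dict,
# i.e., entirely in main memory, so a read is a hash lookup with no
# disk seek anywhere on the query path.
class InMemoryKV:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

store = InMemoryKV()
store.put("user:1", {"name": "Alice"})
print(store.get("user:1"))  # {'name': 'Alice'}
```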

In-Memory Features – Confusion and Big Data:
•    In dynamically scalable partitioned storage systems – whether a NoSQL database, a file system, or an in-memory data grid – changes in the cluster (adding or removing a node) can trigger large data movements across the network to re-balance the cluster.
•    It is important to note that there is a new crop of traditional databases with serious in-memory "options", including MS SQL Server 2014, Oracle's Exalytics and Exadata, and IBM DB2 with BLU. The line between these and the new pure in-memory databases is blurry, and for simplicity I'll continue to call them all In-Memory Databases.
•    It is also important to nail down what we mean by "In-Memory". Surprisingly, there is a lot of confusion here, as some vendors refer to SSDs, Flash-on-PCI, Memory Channel Storage, and, of course, DRAM as "In-Memory".
•    In reality, most vendors support a tiered storage model in which some portion of the data is stored in DRAM (the fastest storage, but with limited capacity) and the rest overflows to a variety of flash or disk devices (slower, but with more capacity) – so it is rarely a DRAM-only or flash-only product. However, most products in both categories are biased toward either mostly-DRAM or mostly-flash/disk storage in their architecture.
•    The bottom line is that products vary greatly in what they mean by "In-Memory", but in the end they all have a significant in-memory component.
•    Most In-Memory Databases are your father's RDBMS that stores data "in memory" instead of on disk – that's practically all there is to it. They provide good SQL support with only a modest list of unsupported SQL features, ship with ODBC/JDBC drivers, and can often be used in place of an existing RDBMS without significant changes.
•    It's one of the dirty secrets of In-Memory Databases: one of their most useful features, SQL joins, is also their Achilles' heel when it comes to scalability. This is the fundamental reason why most existing SQL databases (disk- or memory-based) are built on a vertically scalable SMP (symmetric multiprocessing) architecture, unlike In-Memory Data Grids, which use the much more horizontally scalable MPP (massively parallel processing) approach.
•    In-Memory Databases present almost the mirror-opposite picture: they often require replacing your existing database (unless you use one of the in-memory "options" to temporarily boost your database's performance), but they demand significantly fewer changes to the application itself, since it will continue to rely on SQL (albeit a modified dialect of it).
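The join-scalability point above can be illustrated with a toy sketch (the tables, keys, and placement function are all invented for the example): when two tables are hash-partitioned on different keys, a join forces rows to be shuffled between nodes, and that network cost is what limits horizontal scaling.

```python
NODES = 3

def node_of(key):
    # toy deterministic placement: map a key to one of the nodes
    return sum(key.encode()) % NODES

# (order_id, customer_id) rows are partitioned by order_id;
# (customer_id, name) rows are partitioned by customer_id.
orders = [("o1", "c1"), ("o2", "c2"), ("o3", "c1")]
customers = [("c1", "Alice"), ("c2", "Bob")]

# To join on customer_id, every order row whose partition does not
# match its customer's partition must cross the network ("shuffle").
moved = [o for o in orders if node_of(o[0]) != node_of(o[1])]
print(f"{len(moved)} of {len(orders)} order rows must be shuffled")
```

At realistic scale, this shuffle grows with the data volume, which is why a single-box SMP database can execute joins locally while a horizontally partitioned system pays a network cost for them.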
You will want to use an In-Memory Database if the following applies to you:
•    You can replace or upgrade your existing disk-based RDBMS
•    You cannot make changes to your applications
•    You care about speed, but don’t care as much about scalability
In other words – you boost your application’s speed by replacing or upgrading RDBMS without significantly touching the application itself.
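The rebalancing cost mentioned in the first bullet above can be demonstrated with a toy sketch, assuming naive modulo key placement (an invented scheme for illustration; real systems use smarter placement, such as consistent hashing, precisely to limit this movement):

```python
import zlib

def node_for(key, num_nodes):
    # deterministic hash (crc32) so the result is reproducible
    return zlib.crc32(key.encode()) % num_nodes

# With naive modulo placement, adding one node to a 3-node cluster
# changes the home node of most keys, i.e., most of the data set
# must move across the network to re-balance the cluster.
keys = [f"user:{i}" for i in range(10_000)]
moved = sum(1 for k in keys if node_for(k, 3) != node_for(k, 4))
print(f"{moved / len(keys):.0%} of keys must move when growing 3 -> 4 nodes")
```

With a uniform hash, roughly three quarters of the keys relocate in this 3-to-4-node case, which is exactly the kind of "big data move" the bullet warns about.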

Why In-Memory NoSQL?
Application developers have been frustrated with the impedance mismatch between the relational data structures and the in-memory data structures of the application. Using NoSQL databases allows developers to develop without having to convert in-memory structures to relational structures.
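A minimal sketch of that impedance mismatch (the shopping-cart structure and table layout are invented for the example): a document store can persist the application's in-memory object essentially as-is, while a relational store requires decomposing it into flat rows first.

```python
import json

# The application's in-memory structure:
cart = {
    "user": "alice",
    "items": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}],
}

# Document store: persist the structure as-is, no translation layer.
document = json.dumps(cart)

# Relational store: the same object must first be decomposed into
# flat rows for a carts table and a cart_items table -- the
# object-relational "impedance mismatch" the text describes.
cart_row = (cart["user"],)
item_rows = [(cart["user"], it["sku"], it["qty"]) for it in cart["items"]]

assert json.loads(document) == cart
```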

Case Study:
DataStax Brings In-Memory To NoSQL
Web and mobile applications are getting bigger and people are as impatient as ever. These are two factors hastening the use of in-memory technology, and DataStax is the latest database management system (DBMS) vendor to add in-memory processing capabilities.
DataStax Enterprise is a highly scalable DBMS based on open source Apache Cassandra. Its strengths are flexible NoSQL data modeling, multi-data-center support, and linear scalability on clustered commodity hardware. Customers like eBay, Netflix, and others typically run globally distributed deployments at massive scale.
Use cases for the new feature include scenarios in which semi-static data experiences frequent overwrites. Examples include sites or apps with top-10 or top-20 lists that are constantly updated, online games with active leaderboards, online gambling sites, and online shopping sites with active "like," "want," and "own" listings.
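A leaderboard of the kind described above can be sketched as a purely in-memory structure (a toy example; a production deployment would use the database's own data model rather than application code like this):

```python
import heapq

# Semi-static data with frequent overwrites: each player's score is
# rewritten constantly, and the "top N" view is read from memory.
scores = {}  # player -> best score, held entirely in memory

def record_score(player, score):
    # frequent overwrite path: keep only the player's best score
    scores[player] = max(score, scores.get(player, 0))

def top_n(n):
    # recompute the leaderboard view from the in-memory map
    return heapq.nlargest(n, scores.items(), key=lambda kv: kv[1])

record_score("alice", 120)
record_score("bob", 95)
record_score("alice", 140)   # overwrite alice's earlier score
print(top_n(2))  # [('alice', 140), ('bob', 95)]
```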
DataStax is following in familiar footsteps, as lots of DBMS vendors are adding in-memory features. Microsoft, for example, has extensively previewed an In-Memory OLTP option (formerly project Hekaton) that will be included in soon-to-be-launched Microsoft SQL Server 2014. And Oracle has announced that it, too, will add an in-memory option for its flagship 12c database. General release of that option isn’t expected until early next year.
The NoSQL realm already has in-memory DBMS options such as Aerospike, which is heavily used in online advertising. But Shumacher said DataStax tends to show up in much higher-scale deployments than Aerospike.

In-memory DBMS vendors MemSQL and VoltDB are taking the trend in the other direction, recently adding flash- and disk-based storage options to products that previously did all their processing entirely in memory. The goal here is to add capacity for historical data for long-term analysis. As in the DataStax case, the idea is to cover a broader range of needs with one product.