Release of Apache Ignite 2.4 - Distributed Database and Caching Platform

In March 2018, four months after the previous version, Apache Ignite 2.4 was released. The release is notable for a number of innovations: Java 9 support, multiple SQL optimizations and improvements, support for a neural network platform, a new approach to building the topology when working with disk, and much more.
Apache Ignite Database and Caching Platform is a platform for distributed data storage (optimized for active use of RAM) and for distributed computing in near real time.
Ignite is used where very large data streams must be processed very quickly, beyond what centralized systems can handle.
Examples of use: a fast distributed cache; a layer that aggregates data from disparate services (for example, for a Customer 360 view); the main horizontally scalable storage (NoSQL or SQL) for operational data; a platform for computing; and so on.
Next, consider the main innovations of Ignite 2.4.
Baseline Topology

Previously, a cluster with native persistence had to be activated manually after every full restart, and nodes joining or leaving the cluster could trigger unnecessary rebalancing of data already stored on disk. Baseline Topology solves these problems by fixing the set of nodes that hold disk data, and it governs cluster activation, behavior on topology changes, and rebalancing.
Baseline Topology is such an important change in Ignite that in the near future we will publish a separate article dedicated to this function.
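Until that article appears, here is a minimal sketch of how the baseline is established through the public API. It assumes a running cluster with native persistence enabled, so it is illustrative rather than a standalone runnable program; the configuration values are assumptions, not taken from the release notes.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class BaselineSketch {
    public static void main(String[] args) {
        // Enable native persistence so that Baseline Topology applies.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            // With persistence, the cluster starts inactive; the first
            // activation records the current node set as the baseline.
            ignite.cluster().active(true);

            // Later, after nodes have been added or removed on purpose,
            // the baseline can be reset to the current topology version.
            ignite.cluster().setBaselineTopology(ignite.cluster().topologyVersion());
        }
    }
}
```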

Thin clients

Now thin clients can be built on top of Ignite's own binary protocol.
Previously, the clients for .NET and C++ started a full-fledged JVM with Ignite in order to communicate with the cluster. This gave easy and cheap access to the extensive functionality of the platform, but the clients turned out to be heavyweight.
The new thin clients are standalone and do not need a JVM. This significantly reduces resource consumption and improves performance, and it is now much easier and cheaper for the community to build new clients for a variety of languages, for example, Python.
Version 2.4 ships with a thin client for .NET:
var cfg = new IgniteClientConfiguration
{
    Host = "127.0.0.1"
};

using (IIgniteClient igniteClient = Ignition.StartClient(cfg))
{
    // The cache name is illustrative.
    ICacheClient<int, Organization> cache = igniteClient.GetCache<int, Organization>("orgCache");

    Organization org = new Organization(
        "GridGain", // The organization name is illustrative.
        new Address("St. Petersburg, Marata St., 69-71, Building B", 191119),
        new Email("rusales@gridgain.com"),
        OrganizationType.Private,
        DateTime.Now);

    // Put the entry into the cache.
    cache.Put(1, org);

    // Get the entry back in deserialized form.
    Organization orgFromCache = cache.Get(1);
}


Optimization of data loading

Apache Ignite 2.4 adds tools for optimizing initial loading and bulk loading of data.
Now you can temporarily disable the WAL (Write-Ahead Log) for individual tables at runtime. This lets you load data with minimal disk I/O impact, which has a positive effect on throughput.
When the WAL is re-enabled, a checkpoint of the current data in RAM is immediately made to disk to ensure data integrity.
You can disable the WAL by means of SQL (the table name is illustrative):

-- Disable the WAL for a table (and the underlying cache).
ALTER TABLE my_table NOLOGGING;

-- Re-enable it, likewise per individual table and cache.
ALTER TABLE my_table LOGGING;

or through the API:
ignite.cluster().isWalEnabled(cacheName); // Check whether the WAL is enabled.
ignite.cluster().enableWal(cacheName);    // Enable the WAL.
ignite.cluster().disableWal(cacheName);   // Disable the WAL.


Java 9

Ignite 2.4 adds Java 9 support to the existing Java 8 support.

Extended .NET support

A question I often heard was: "When will Ignite for .NET start supporting .NET Core?". I am glad to report that starting from version 2.4, Ignite.NET gains .NET Core support. Moreover, Mono is supported as well.
Thanks to this, you can build cross-platform applications on .NET, extending the scope of Ignite to the worlds of Linux and Mac.
In a separate article, we'll talk in more detail about innovations related to .NET - thin client and support for .NET Core and Mono.

Numerous optimizations and improvements to SQL

In Ignite 2.4, many changes were made to speed up SQL, including multi-threaded index creation, optimized object deserialization and primary-key lookups, SQL batching support on the cluster side, and much more.
On the DDL side, you can now set DEFAULT values for columns in tables created through CREATE TABLE, specify settings for inlining values into index trees, and perform DROP COLUMN.
Example of creating an index with new attributes:
-- INLINE_SIZE is the maximum size in bytes for inlining into index trees;
-- PARALLEL is the number of indexing threads.
CREATE INDEX fast_city_idx ON sales (country, city) INLINE_SIZE 60 PARALLEL 8;
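The DEFAULT and DROP COLUMN capabilities mentioned above can be sketched as follows (table and column names are illustrative):

```sql
-- Set a DEFAULT value for a column at table creation time.
CREATE TABLE city (id LONG PRIMARY KEY, name VARCHAR, population INT DEFAULT 0);

-- Rows inserted without the column receive the default value.
INSERT INTO city (id, name) VALUES (1, 'St. Petersburg');

-- Drop a column that is no longer needed.
ALTER TABLE city DROP COLUMN population;
```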


Neural networks and other machine learning improvements

Version 2.4 introduces neural networks on Apache Ignite.
Their key advantage is the high performance of model training and execution. Thanks to distributed training of neural networks and the colocation of the computation with the data on cluster nodes, there is no need for ETL and long transfers of data to external systems that clog the network.
// Prepare the test data.
int samplesCnt = 100000;

// The test data is the function sin^2 on the interval [0; pi/2].
IgniteSupplier<Double> pointsGen = () -> (Math.random() + 1) / 2 * (Math.PI / 2);
IgniteFunction<Double, Double> f = x -> Math.sin(x) * Math.sin(x);

// ... fill a cache with labeled points (x, sin^2(x)), each stored as
//   new LabeledVector<>(new DenseLocalOnHeapVector(new double[] {x}),
//                       new DenseLocalOnHeapVector(new double[] {y}))
// (the cache-creation and fill code was garbled in the source).

// Initialize the trainer.
MLPGroupUpdateTrainer<RPropParameterUpdate> trainer = MLPGroupUpdateTrainer.getDefault(ignite)
    .withSyncPeriod(3)
    .withTolerance(0.001) // The tolerance value is illustrative; it was garbled in the source.
    .withMaxGlobalSteps(100)
    .withUpdateStrategy(UpdateStrategies.RProp());

// Create the input for the trainer: 1 input neuron, a hidden layer, 1 output neuron.
MLPArchitecture conf = new MLPArchitecture(1)
    .withAddedLayer(10, true, Activators.SIGMOID)
    .withAddedLayer(1, true, Activators.SIGMOID);

MLPGroupUpdateTrainerCacheInput trainerInput = new MLPGroupUpdateTrainerCacheInput(conf,
    new RandomInitializer(new Random()), 6, cache, 1000); // The networks count (6) is illustrative.

// Train and verify the results.
MultilayerPerceptron mlp = trainer.train(trainerInput);

int testCnt = 1000;
Matrix test = new DenseLocalOnHeapMatrix(1, testCnt);
for (int i = 0; i < testCnt; i++)
    test.setColumn(i, new double[] {pointsGen.get()});

Vector predicted = mlp.apply(test).getRow(0);
Vector actual = test.copy().map(f).getRow(0);

// Show the predicted and actual values.
Tracer.showAscii(predicted);
Tracer.showAscii(actual);

System.out.println("MSE: " + (predicted.minus(actual).kNorm(2) / predicted.size()));


In addition to these changes, the release also includes:
- initial support for Spark DataFrames;
- optimized memory consumption when working with the disk;
- multiple disk storage optimizations (for example, when working with the WAL);
- new monitoring metrics in JMX (for example, the long-awaited cache memory metrics and extended topology information are now available for monitoring);
- RPM packages with Ignite (repository).