IGNITE-13884 Merged docs into 2.9.1 from 2.9 branch with updates (#8598)

* IGNITE-7595: new Ignite docs (returning the original changes after fixing licensing issues)

(cherry picked from commit 073488ac97517bbaad9f6b94b781fc404646f191)

* IGNITE-13574: add license headers for some imported files of the Ignite docs (#8361)

* Added a proper license header to some files used by the docs.

* Enabled the defaultLicenseMatcher for the license checker.

(cherry picked from commit d928fb8576b22dffbfce90a5541e67dc6cbfe410)

* ignite docs: updated a couple of contribution instructions

(cherry picked from commit 9e8da702068b1232789f8f9f93680f2c6d69ed16)

* IGNITE-13527: replace some references to the readme.io docs with references to the new pages. The job will be finished as part of IGNITE-13586

(cherry picked from commit 7399ae64972cc097c48769cb5e2d9622ce7f7234)

* ignite docs: fixed broken links to the SQLLine page

(cherry picked from commit faf4f467e964d478b3d99b94d43d32430a7e88f0)

* IGNITE-13615 Update .NET thin client feature set documentation

* IGNITE-13652 Wrong GitHub link for Apache Ignite With Spring Data/Example (#8420)

* ignite docs: updated the TcpDiscovery.soLinger documentation

* IGNITE-13663 : Document the effect of several node addresses on failure detection v2. (#8424)

* ignite docs: set the latest spring-data artifact id after receiving user feedback

* IGNITE-12951 Update documents for migrated extensions - Fixes #8488.

Signed-off-by: samaitra <saikat.maitra@gmail.com>
(cherry picked from commit 15a5da500c08948ee081533af97a9f1c2c8330f8)

* ignite docs: fixing a broken documentation link

* ignite docs: updated the index page with quick links to the APIs and examples

* ignite docs: fixed broken links and updated the C++ API header

* ignite docs: fixed case of GitHub

* IGNITE-13876 Updated documentation for 2.9.1 release (#8592)

(cherry picked from commit e74cf6ba8711338ed48dd01d1efe12505977f63f)

Co-authored-by: Denis Magda <dmagda@gridgain.com>
Co-authored-by: Pavel Tupitsyn <ptupitsyn@apache.org>
Co-authored-by: Denis Garus <garus.d.g@gmail.com>
Co-authored-by: Vladsz83 <vladsz83@gmail.com>
Co-authored-by: samaitra <saikat.maitra@gmail.com>
Co-authored-by: Nikita Safonov <73828260+nikita-tech-writer@users.noreply.github.com>
Co-authored-by: ymolochkov <ynmolochkov@sberbank.ru>
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index a22b7c641..5347636 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -36,7 +36,7 @@
 
 ## Contributing Documentation
 Documentation can be contributed to
- - End-User documentation https://apacheignite.readme.io/ . Use Suggest Edits. See also [How To Document](https://cwiki.apache.org/confluence/display/IGNITE/How+to+Document).
+ - End-User documentation https://ignite.apache.org/docs/latest/ . Use Suggest Edits. See also [How To Document](https://cwiki.apache.org/confluence/display/IGNITE/How+to+Document).
  - Developer documentation, design documents, IEPs [Apache Wiki](https://cwiki.apache.org/confluence/display/IGNITE). Ask at [Dev List](https://lists.apache.org/list.html?dev@ignite.apache.org) to be added as editor.
  - Markdown files, visible at GitHub, e.g. README.md; drawings explaining Apache Ignite & product internals.
  - Javadocs for packages (package-info.java), classes, methods, etc.
diff --git a/README.txt b/README.txt
index 4d02f4c..c7a2cdf 100644
--- a/README.txt
+++ b/README.txt
@@ -18,13 +18,7 @@
 
 For information on how to get started with Apache Ignite please visit:
 
-    http://apacheignite.readme.io/docs/getting-started
-
-
-You can find Apache Ignite documentation here:
-
-    http://apacheignite.readme.io/docs
-
+    https://ignite.apache.org/docs/latest/
 
 Crypto Notice
 =============
@@ -49,12 +43,12 @@
 The following provides more details on the included cryptographic software:
 
 * JDK SSL/TLS libraries used to enable secured connectivity between cluster
-nodes (https://apacheignite.readme.io/docs/ssltls).
+nodes (https://ignite.apache.org/docs/latest/security/ssl-tls).
 Oracle/OpenJDK (https://www.oracle.com/technetwork/java/javase/downloads/index.html)
 
 * JDK Java Cryptography Extensions built-in encryption from the Java libraries is used
 for Transparent Data Encryption of data on disk
-(https://apacheignite.readme.io/docs/transparent-data-encryption)
+(https://ignite.apache.org/docs/latest/security/tde)
 and for AWS S3 Client Side Encryption.
 (https://java.sun.com/javase/technologies/security/)
 
@@ -74,4 +68,4 @@
 * Apache Ignite.NET uses .NET Framework crypto APIs from standard class library
 for all security and cryptographic related code.
  .NET Classic, Windows-only (https://dotnet.microsoft.com/download)
- .NET Core  (https://dotnetfoundation.org/projects)
\ No newline at end of file
+ .NET Core  (https://dotnetfoundation.org/projects)
diff --git a/config/visor-cmd/node_startup_by_ssh.sample.ini b/config/visor-cmd/node_startup_by_ssh.sample.ini
index f1d8e01..649e0c7 100644
--- a/config/visor-cmd/node_startup_by_ssh.sample.ini
+++ b/config/visor-cmd/node_startup_by_ssh.sample.ini
@@ -15,7 +15,7 @@
 
 # ==================================================================
 # This is a sample file for Visor CMD to use with "start" command.
-# More info: https://apacheignite-tools.readme.io/docs/start-command
+# More info: https://ignite.apache.org/docs/latest/tools/visor-cmd
 # ==================================================================
 
 # Section with settings for host1:
diff --git a/docs/.gitignore b/docs/.gitignore
new file mode 100644
index 0000000..a01b89a
--- /dev/null
+++ b/docs/.gitignore
@@ -0,0 +1,5 @@
+.jekyll-cache/
+_site/
+Gemfile.lock
+.jekyll-metadata
+
diff --git a/docs/Gemfile b/docs/Gemfile
new file mode 100644
index 0000000..f471d02
--- /dev/null
+++ b/docs/Gemfile
@@ -0,0 +1,14 @@
+source "https://rubygems.org"
+
+# git_source(:github) {|repo_name| "https://github.com/#{repo_name}" }
+
+gem 'asciidoctor'
+gem 'jekyll', group: :jekyll_plugins
+gem 'wdm', '~> 0.1.1' if Gem.win_platform?
+group :jekyll_plugins do
+  gem 'jekyll-asciidoc'
+end
+#gem 'pygments.rb', '~> 1.2.1'
+gem 'thread_safe', '~> 0.3.6'
+gem 'slim', '~> 4.0.1'
+gem 'tilt', '~> 2.0.9'
diff --git a/docs/README.adoc b/docs/README.adoc
new file mode 100644
index 0000000..856b993
--- /dev/null
+++ b/docs/README.adoc
@@ -0,0 +1,212 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Apache Ignite Documentation
+:toc:
+:toc-title:
+
+== Overview
+The Apache Ignite documentation is maintained in the repository with the code base, in the "/docs" subdirectory. The directory contains the source files, HTML templates, and CSS styles.
+
+
+The Apache Ignite documentation is written in link:https://asciidoctor.org/docs/what-is-asciidoc/[asciidoc].
+The Asciidoc files are compiled into HTML pages and published to https://ignite.apache.org/docs.
+
+
+.Content of the “docs” directory
+[cols="1,4",opts="stretch"]
+|===
+| pass:[_]docs  | The directory with .adoc files and code-snippets.
+| pass:[_]config.yml | Jekyll configuration file.
+|===
+
+
+== Building the Docs Locally
+
+To build the docs locally, you can install `jekyll` and other dependencies on your machine, or you can use the Jekyll Docker image.
+
+=== Install Jekyll and Asciidoctor
+
+. Install Jekyll by following these instructions: https://jekyllrb.com/docs/installation/[window=_blank]
+. In the “/docs” directory, run the following command:
++
+[source, shell]
+----
+$ bundle
+----
++
+This should install all dependencies, including `asciidoctor`.
+. Start Jekyll:
++
+[source, shell]
+----
+$ bundle exec jekyll s
+----
+The command compiles the Asciidoc files into HTML pages and starts a local web server.
+
+Open `http://localhost:4000/docs` in your browser.
+
+=== Run with Docker
+
+The following command starts Jekyll in a container and downloads all dependencies. Run the command in the “/docs” directory.
+
+[source, shell]
+----
+$ docker run -v "$PWD:/srv/jekyll" -p 4000:4000 jekyll/jekyll:latest jekyll s
+----
+
+Open `http://localhost:4000/docs` in your browser.
+
+== How to Contribute
+
+If you want to contribute to the documentation, add or modify the relevant page in the `docs/_docs` directory.
+This directory contains all the .adoc files (which are rendered into HTML pages and published on the website).
+
+Because we use asciidoc for documentation, consider the following points:
+
+* Get familiar with the asciidoc format: https://asciidoctor.org/docs/user-manual/. You don’t have to read the entire manual. Search through it when you want to learn how to create a numbered list, or insert an image, or use italics.
+* Please read the link:https://asciidoctor.org/docs/asciidoc-recommended-practices[AsciiDoc Recommended Practices] and try to adhere to them when editing the .adoc source files.
+
+
+The following sections explain specific asciidoc syntax that we use.
+
+=== Table of Contents
+
+The table of content is defined in the `_data/toc.yaml` file.
+If you want to add a new page, make sure to update the TOC.
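+
+For example, a new page could be registered with an entry like the following sketch (the title and url below are hypothetical placeholders):
+
+[source, yaml]
+----
+- title: My New Feature
+  url: my-new-feature
+----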
+
+=== Changing a URL of an existing page
+
+If you rename an already published page or change the page's path in the `/_data/toc.yaml` file,
+you must configure a proper redirect from the old to the new URL in the following files of the Ignite website:
+https://github.com/apache/ignite-website/blob/master/.htaccess
+
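+A redirect entry in that file typically looks like the following sketch (the paths are hypothetical; check the existing entries in `.htaccess` for the exact style used there):
+
+[source, text]
+----
+Redirect 301 /docs/latest/old-page /docs/latest/new-page
+----
+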
+Reach out to documentation maintainers if you need any help with this.
+
+=== Links to other sections in the docs
+All .adoc files are located in the "docs/_docs" directory.
+Any link to the files within the directory must be relative to that directory.
+Remove the file extension (.adoc).
+
+For example:
+[source, adoc]
+----
+link:persistence/native-persistence[Native Persistence]
+----
+
+This is a link to the Native Persistence page.
+
+=== Links to external resources
+
+When referencing an external resource, make the link to open in a new window by adding the `window=_blank` attribute:
+
+[source, adoc]
+----
+link:https://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html#SunJSSE_Protocols[Supported protocols,window=_blank]
+----
+
+
+=== Tabs
+
+We use custom syntax to insert tabs. Tabs are used to provide code samples for different programming languages.
+
+Tabs are defined by the `tabs` block:
+```
+[tabs]
+--
+individual tabs are defined here
+--
+```
+
+Each tab is defined by the `tab` directive:
+
+```
+tab:tab_name[]
+```
+
+where `tab_name` is the title of the tab.
+
+The content of the tab is everything between the tab title and the next tab (or the end of the block). For example:
+
+```asciidoc
+[tabs]
+--
+tab:XML[]
+
+The content of the XML tab goes here
+
+tab:Java[]
+
+The content of the Java tab is here
+
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+
+--
+```
+
+=== Callouts
+
+Use the syntax below if you need to bring the reader's attention to some details:
+
+[NOTE]
+====
+[discrete]
+=== Callout Title
+Callout Text
+====
+
+Change the callout type to `CAUTION` if you want to put out a warning:
+
+[CAUTION]
+====
+[discrete]
+=== Callout Title
+Callout Text
+====
+
+=== Code Snippets
+
+Code snippets must be taken from a compilable source code file (e.g. java, cs, js, etc.).
+We use the `include` feature of asciidoc.
+Source code files are located in the `docs/_docs/code-snippets/{language}` folders.
+
+
+To add a code snippet to a page, follow these steps:
+
+* Create a file in the code snippets directory, e.g. _docs/code-snippets/java/org/apache/ignite/snippets/JavaThinClient.java
+
+* Enclose the piece of code you want to include within named tags (see https://asciidoctor.org/docs/user-manual/#by-tagged-regions). Give the tag a self-evident name.
+For example:
++
+```
+[source, java]
+----
+// tag::clientConnection[]
+ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
+try (IgniteClient client = Ignition.startClient(cfg)) {
+    ClientCache<Integer, String> cache = client.cache("myCache");
+    // get data from the cache
+}
+// end::clientConnection[]
+----
+```
+
+* Include the tag in the adoc file:
++
+[source, adoc,subs="macros"]
+----
+\include::{javaCodeDir}/JavaThinClient.java[tag=clientConnection,indent=0]
+----
diff --git a/docs/_config.yml b/docs/_config.yml
new file mode 100644
index 0000000..0562d1a
--- /dev/null
+++ b/docs/_config.yml
@@ -0,0 +1,46 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+exclude: [guidelines.md,  "Gemfile", "Gemfile.lock", README.adoc, "_docs/code-snippets", "_docs/includes", '*.sh']
+attrs: &asciidoc_attributes
+  version: 2.9.1
+  base_url: /docs
+  stylesdir: /docs/assets/css
+  imagesdir: /docs
+  source-highlighter: rouge
+  table-stripes: even
+  javadoc_base_url: https://ignite.apache.org/releases/{version}/javadoc
+  javaCodeDir: code-snippets/java/src/main/java/org/apache/ignite/snippets
+  csharpCodeDir: code-snippets/dotnet
+  githubUrl: https://github.com/apache/ignite/tree/master
+  docSourceUrl: https://github.com/apache/ignite/tree/IGNITE-7595/docs
+collections:
+  docs:
+    permalink: /docs/:path:output_ext
+    output: true
+defaults:
+  -
+    scope:
+      path: ''
+    values:
+      layout: 'doc'
+  -
+    scope:
+      path: '_docs'
+    values:
+      toc: ignite 
+asciidoctor:
+  base_dir: _docs/ 
+  attributes: *asciidoc_attributes
+   
diff --git a/docs/_data/toc.yaml b/docs/_data/toc.yaml
new file mode 100644
index 0000000..750c1d5
--- /dev/null
+++ b/docs/_data/toc.yaml
@@ -0,0 +1,559 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+- title: Documentation Overview
+  url: index
+- title: Quick Start Guides
+  items: 
+    - title: Java
+      url: quick-start/java
+    - title: .NET/C#
+      url: quick-start/dotnet
+    - title: C++
+      url: quick-start/cpp
+    - title: Python
+      url: quick-start/python
+    - title: Node.js
+      url: quick-start/nodejs
+    - title: SQL
+      url: quick-start/sql
+    - title: PHP
+      url: quick-start/php
+    - title: REST API
+      url: quick-start/restapi
+- title: Installation
+  url: installation
+  items:
+  - title: Installing Using ZIP Archive
+    url: installation/installing-using-zip
+  - title: Installing Using Docker
+    url: installation/installing-using-docker
+  - title: Installing DEB or RPM package
+    url: installation/deb-rpm
+  - title: Kubernetes
+    items: 
+      - title: Amazon EKS 
+        url: installation/kubernetes/amazon-eks-deployment
+      - title: Azure Kubernetes Service 
+        url: installation/kubernetes/azure-deployment
+      - title: Google Kubernetes Engine
+        url: installation/kubernetes/gke-deployment
+  - title: VMware
+    url: installation/vmware-installation
+- title: Setting Up
+  items:
+    - title: Understanding Configuration
+      url: understanding-configuration
+    - title: Setting Up
+      url: setup
+    - title: Configuring Logging
+      url: logging
+    - title: Resources Injection
+      url: resources-injection
+- title: Starting and Stopping Nodes
+  url: starting-nodes
+- title: Clustering
+  items:
+    - title: Overview
+      url: clustering/clustering
+    - title: TCP/IP Discovery
+      url: clustering/tcp-ip-discovery
+    - title: ZooKeeper Discovery
+      url: clustering/zookeeper-discovery
+    - title: Discovery in the Cloud
+      url: clustering/discovery-in-the-cloud
+    - title: Network Configuration
+      url: clustering/network-configuration
+    - title: Connecting Client Nodes 
+      url: clustering/connect-client-nodes
+    - title: Baseline Topology
+      url: clustering/baseline-topology
+    - title: Running Client Nodes Behind NAT
+      url: clustering/running-client-nodes-behind-nat
+- title: Thin Clients
+  items:
+    - title: Thin Clients Overview
+      url: thin-clients/getting-started-with-thin-clients
+    - title: Java Thin Client
+      url: thin-clients/java-thin-client
+    - title: .NET Thin Client
+      url: thin-clients/dotnet-thin-client
+    - title: C++ Thin Client
+      url: thin-clients/cpp-thin-client
+    - title: Python Thin Client
+      url: thin-clients/python-thin-client
+    - title: PHP Thin Client
+      url: thin-clients/php-thin-client
+    - title: Node.js Thin Client
+      url: thin-clients/nodejs-thin-client
+    - title: Binary Client Protocol
+      items:
+        - title: Binary Client Protocol
+          url: binary-client-protocol/binary-client-protocol
+        - title: Data Format
+          url: binary-client-protocol/data-format
+        - title: Key-Value Queries
+          url: binary-client-protocol/key-value-queries
+        - title: SQL and Scan Queries
+          url: binary-client-protocol/sql-and-scan-queries
+        - title: Binary Types Metadata
+          url: binary-client-protocol/binary-type-metadata
+        - title: Cache Configuration
+          url: binary-client-protocol/cache-configuration
+- title: Data Modeling
+  items: 
+    - title: Introduction
+      url: data-modeling/data-modeling
+    - title: Data Partitioning
+      url: data-modeling/data-partitioning
+    - title: Affinity Colocation 
+      url: data-modeling/affinity-collocation
+    - title: Binary Marshaller
+      url: data-modeling/binary-marshaller
+- title: Configuring Memory 
+  items:
+    - title: Memory Architecture
+      url: memory-architecture
+    - title: Configuring Data Regions
+      url: memory-configuration/data-regions
+    - title: Eviction Policies
+      url: memory-configuration/eviction-policies        
+- title: Configuring Persistence
+  items:
+    - title: Ignite Persistence
+      url: persistence/native-persistence
+    - title: External Storage
+      url: persistence/external-storage
+    - title: Swapping
+      url: persistence/swap
+    - title: Implementing Custom Cache Store
+      url: persistence/custom-cache-store
+    - title: Cluster Snapshots
+      url: persistence/snapshots
+    - title: Disk Compression
+      url: persistence/disk-compression
+    - title: Tuning Persistence
+      url: persistence/persistence-tuning
+- title: Configuring Caches
+  items:
+    - title: Cache Configuration 
+      url: configuring-caches/configuration-overview 
+    - title: Configuring Partition Backups
+      url: configuring-caches/configuring-backups
+    - title: Partition Loss Policy
+      url: configuring-caches/partition-loss-policy
+    - title: Atomicity Modes
+      url: configuring-caches/atomicity-modes
+    - title: Expiry Policy
+      url: configuring-caches/expiry-policies
+    - title: On-Heap Caching
+      url: configuring-caches/on-heap-caching
+    - title: Cache Groups 
+      url: configuring-caches/cache-groups
+    - title: Near Caches
+      url: configuring-caches/near-cache
+- title: Data Rebalancing
+  url: data-rebalancing 
+- title: Data Streaming
+  url: data-streaming
+- title: Using Key-Value API
+  items:
+    - title: Basic Cache Operations 
+      url: key-value-api/basic-cache-operations
+    - title: Working with Binary Objects
+      url: key-value-api/binary-objects
+    - title: Using Scan Queries
+      url: key-value-api/using-scan-queries
+    - title: Read Repair
+      url: read-repair
+- title: Performing Transactions
+  url: key-value-api/transactions
+- title: Working with SQL
+  items:
+    - title: Introduction
+      url: SQL/sql-introduction
+    - title: Understanding Schemas
+      url: SQL/schemas
+    - title: Defining Indexes
+      url: SQL/indexes
+    - title: Using SQL API
+      url: SQL/sql-api
+    - title: Distributed Joins
+      url: SQL/distributed-joins
+    - title: SQL Transactions
+      url: SQL/sql-transactions
+    - title: Custom SQL Functions
+      url: SQL/custom-sql-func
+    - title: JDBC Driver
+      url: SQL/JDBC/jdbc-driver
+    - title: JDBC Client Driver
+      url: SQL/JDBC/jdbc-client-driver
+    - title: ODBC Driver
+      items:
+        - title: ODBC Driver
+          url: SQL/ODBC/odbc-driver
+        - title: Connection String and DSN
+          url:  /SQL/ODBC/connection-string-dsn
+        - title: Querying and Modifying Data
+          url: SQL/ODBC/querying-modifying-data
+        - title: Specification
+          url: SQL/ODBC/specification
+        - title: Data Types
+          url: SQL/ODBC/data-types
+        - title: Error Codes
+          url: SQL/ODBC/error-codes
+    - title: Multiversion Concurrency Control
+      url: transactions/mvcc
+- title: SQL Reference
+  url: sql-reference/sql-reference-overview
+  items:
+    - title: SQL Conformance
+      url: sql-reference/sql-conformance
+    - title: Data Definition Language (DDL)
+      url: sql-reference/ddl
+    - title: Data Manipulation Language (DML)
+      url: sql-reference/dml
+    - title: Transactions
+      url: sql-reference/transactions
+    - title: Operational Commands
+      url: sql-reference/operational-commands
+    - title: Aggregate functions
+      url: sql-reference/aggregate-functions
+    - title: Numeric Functions
+      url: sql-reference/numeric-functions
+    - title: String Functions
+      url: sql-reference/string-functions
+    - title: Date and Time Functions
+      url: sql-reference/date-time-functions
+    - title: System Functions
+      url: sql-reference/system-functions
+    - title: Data Types
+      url: sql-reference/data-types
+- title: Distributed Computing
+  items:
+    - title: Distributed Computing API
+      url: distributed-computing/distributed-computing
+    - title: Cluster Groups
+      url: distributed-computing/cluster-groups
+    - title: Executor Service
+      url: distributed-computing/executor-service
+    - title: MapReduce API
+      url: distributed-computing/map-reduce
+    - title: Load Balancing
+      url: distributed-computing/load-balancing
+    - title: Fault Tolerance
+      url: distributed-computing/fault-tolerance
+    - title: Job Scheduling
+      url: distributed-computing/job-scheduling
+    - title: Colocating Computations with Data
+      url: distributed-computing/collocated-computations
+- title: Code Deployment
+  items:
+    - title: Deploying User Code
+      url: code-deployment/deploying-user-code
+    - title: Peer Class Loading
+      url: code-deployment/peer-class-loading
+- title: Machine Learning
+  items:
+    - title: Machine Learning
+      url: machine-learning/machine-learning
+    - title: Partition Based Dataset
+      url: machine-learning/partition-based-dataset
+    - title: Updating Trained Models
+      url: machine-learning/updating-trained-models
+    - title: Binary Classification
+      items:
+        - title: Introduction
+          url: machine-learning/binary-classification/introduction
+        - title: Linear SVM (Support Vector Machine)
+          url: machine-learning/binary-classification/linear-svm
+        - title: Decision Trees
+          url: machine-learning/binary-classification/decision-trees
+        - title: Multilayer Perceptron
+          url: machine-learning/binary-classification/multilayer-perceptron
+        - title: Logistic Regression
+          url: machine-learning/binary-classification/logistic-regression
+        - title: k-NN Classification
+          url: machine-learning/binary-classification/knn-classification
+        - title: ANN (Approximate Nearest Neighbor)
+          url: machine-learning/binary-classification/ann
+        - title: Naive Bayes
+          url: machine-learning/binary-classification/naive-bayes
+    - title: Regression
+      items:
+        - title: Introduction
+          url: machine-learning/regression/introduction
+        - title: Linear Regression
+          url: machine-learning/regression/linear-regression
+        - title: Decision Trees Regression
+          url: machine-learning/regression/decision-trees-regression
+        - title: k-NN Regression
+          url: machine-learning/regression/knn-regression
+    - title: Clustering
+      items:
+        - title: Introduction
+          url: machine-learning/clustering/introduction
+        - title: K-Means Clustering
+          url: machine-learning/clustering/k-means-clustering
+        - title: Gaussian mixture (GMM)
+          url: machine-learning/clustering/gaussian-mixture
+    - title: Preprocessing
+      url: machine-learning/preprocessing
+    - title: Model Selection
+      items:
+        - title: Introduction
+          url: machine-learning/model-selection/introduction
+        - title: Evaluator
+          url: machine-learning/model-selection/evaluator
+        - title: Split the dataset on test and train datasets
+          url: machine-learning/model-selection/split-the-dataset-on-test-and-train-datasets
+        - title: Hyper-parameter tuning
+          url: machine-learning/model-selection/hyper-parameter-tuning
+        - title: Pipeline API
+          url: machine-learning/model-selection/pipeline-api
+    - title: Multiclass Classification
+      url: machine-learning/multiclass-classification
+    - title: Ensemble Methods
+      items:
+        - title: Introduction
+          url: machine-learning/ensemble-methods/introduction
+        - title: Stacking
+          url: machine-learning/ensemble-methods/stacking
+        - title: Bagging
+          url: machine-learning/ensemble-methods/baggin
+        - title: Random Forest
+          url: machine-learning/ensemble-methods/random-forest
+        - title: Gradient Boosting
+          url: machine-learning/ensemble-methods/gradient-boosting
+    - title: Recommendation Systems
+      url: machine-learning/recommendation-systems
+    - title: Importing Model
+      items:
+        - title: Introduction
+          url: machine-learning/importing-model/introduction
+        - title: Import Model from XGBoost
+          url: machine-learning/importing-model/model-import-from-gxboost
+        - title: Import Model from Apache Spark
+          url: machine-learning/importing-model/model-import-from-apache-spark
+- title: Using Continuous Queries
+  url: key-value-api/continuous-queries
+- title: Using Ignite Services
+  url: services/services
+- title: Using Ignite Messaging
+  url: messaging
+- title: Distributed Data Structures
+  items:
+    - title: Queue and Set
+      url: data-structures/queue-and-set
+    - title: Atomic Types 
+      url: data-structures/atomic-types
+    - title: CountDownLatch 
+      url: data-structures/countdownlatch
+    - title: Atomic Sequence 
+      url: data-structures/atomic-sequence
+    - title:  Semaphore 
+      url: data-structures/semaphore
+    - title: ID Generator
+      url: data-structures/id-generator
+- title: Distributed Locks
+  url: distributed-locks
+- title: REST API
+  url: restapi
+- title: .NET Specific
+  items:
+    - title: Configuration Options
+      url: net-specific/net-configuration-options
+    - title: Deployment Options
+      url: net-specific/net-deployment-options
+    - title: Standalone Nodes
+      url: net-specific/net-standalone-nodes
+    - title: Logging
+      url: net-specific/net-logging
+    - title: LINQ
+      url: net-specific/net-linq
+    - title: Java Services Execution
+      url: net-specific/net-java-services-execution
+    - title: .NET Platform Cache
+      url: net-specific/net-platform-cache
+    - title: Plugins
+      url: net-specific/net-plugins
+    - title: Serialization
+      url: net-specific/net-serialization
+    - title: Cross-Platform Support
+      url: net-specific/net-cross-platform-support
+    - title: Platform Interoperability
+      url: net-specific/net-platform-interoperability
+    - title: Remote Assembly Loading
+      url: net-specific/net-remote-assembly-loading
+    - title: Troubleshooting
+      url: net-specific/net-troubleshooting
+    - title: Integrations
+      items:
+        - title: ASP.NET Output Caching
+          url: net-specific/asp-net-output-caching
+        - title: ASP.NET Session State Caching
+          url: net-specific/asp-net-session-state-caching
+        - title: Entity Framework 2nd Level Cache
+          url: net-specific/net-entity-framework-cache
+- title: C++ Specific
+  items:
+    - title: Serialization
+      url: cpp-specific/cpp-serialization
+    - title: Platform Interoperability
+      url: cpp-specific/cpp-platform-interoperability
+    - title: Objects Lifetime
+      url: cpp-specific/cpp-objects-lifetime
+- title: Monitoring
+  items:
+    - title: Introduction
+      url: monitoring-metrics/intro
+    - title: Cluster ID and Tag
+      url: monitoring-metrics/cluster-id
+    - title: Cluster States
+      url: monitoring-metrics/cluster-states
+    - title: Metrics
+      items: 
+        - title: Configuring Metrics
+          url: monitoring-metrics/configuring-metrics
+        - title: JMX Metrics
+          url: monitoring-metrics/metrics
+    - title: New Metrics System 
+      items:
+        - title: Introduction 
+          url: monitoring-metrics/new-metrics-system
+        - title: Metrics
+          url: monitoring-metrics/new-metrics
+    - title: System Views
+      url: monitoring-metrics/system-views
+    - title: Tracing
+      url: monitoring-metrics/tracing
+- title: Working with Events
+  items:
+    - title: Enabling and Listening to Events
+      url: events/listening-to-events
+    - title: Events
+      url: events/events
+- title: Tools
+  items:
+    - title: Control Script
+      url: tools/control-script
+    - title: Visor CMD
+      url: tools/visor-cmd
+    - title: GridGain Control Center
+      url: tools/gg-control-center
+    - title: SQLLine
+      url: tools/sqlline
+    - title: Tableau
+      url: tools/tableau
+    - title: Informatica
+      url: tools/informatica
+    - title: Pentaho
+      url: tools/pentaho
+- title: Security
+  url: security
+  items: 
+    - title: Authentication
+      url: security/authentication
+    - title: SSL/TLS 
+      url: security/ssl-tls
+    - title: Transparent Data Encryption
+      items:
+        - title: Introduction
+          url: security/tde
+        - title: Master key rotation
+          url: security/master-key-rotation
+    - title: Sandbox
+      url: security/sandbox
+- title: Extensions and Integrations
+  items:
+    - title: Spring
+      items:
+        - title: Spring Boot
+          url: extensions-and-integrations/spring/spring-boot
+        - title: Spring Data
+          url: extensions-and-integrations/spring/spring-data
+        - title: Spring Caching
+          url: extensions-and-integrations/spring/spring-caching
+    - title: Ignite for Spark
+      items:
+        - title: Overview
+          url: extensions-and-integrations/ignite-for-spark/overview
+        - title: IgniteContext and IgniteRDD
+          url:  extensions-and-integrations/ignite-for-spark/ignitecontext-and-rdd
+        - title: Ignite DataFrame
+          url: extensions-and-integrations/ignite-for-spark/ignite-dataframe
+        - title: Installation
+          url: extensions-and-integrations/ignite-for-spark/installation
+        - title: Test Ignite with Spark-shell
+          url: extensions-and-integrations/ignite-for-spark/spark-shell
+        - title: Troubleshooting
+          url: extensions-and-integrations/ignite-for-spark/troubleshooting
+    - title: Hibernate L2 Cache
+      url: extensions-and-integrations/hibernate-l2-cache
+    - title: MyBatis L2 Cache
+      url: extensions-and-integrations/mybatis-l2-cache
+    - title: Streaming
+      items:
+        - title: Kafka Streamer
+          url: extensions-and-integrations/streaming/kafka-streamer
+        - title: Camel Streamer
+          url: extensions-and-integrations/streaming/camel-streamer
+        - title: Flink Streamer
+          url: extensions-and-integrations/streaming/flink-streamer
+        - title: Flume Sink
+          url: extensions-and-integrations/streaming/flume-sink
+        - title: JMS Streamer
+          url: extensions-and-integrations/streaming/jms-streamer
+        - title: MQTT Streamer
+          url: extensions-and-integrations/streaming/mqtt-streamer
+        - title: RocketMQ Streamer
+          url: extensions-and-integrations/streaming/rocketmq-streamer
+        - title: Storm Streamer
+          url: extensions-and-integrations/streaming/storm-streamer
+        - title: ZeroMQ Streamer
+          url: extensions-and-integrations/streaming/zeromq-streamer
+        - title: Twitter Streamer
+          url: extensions-and-integrations/streaming/twitter-streamer
+    - title: Cassandra Integration
+      items:
+        - title: Overview
+          url: extensions-and-integrations/cassandra/overview
+        - title: Configuration
+          url: extensions-and-integrations/cassandra/configuration
+        - title: Usage Examples
+          url: extensions-and-integrations/cassandra/usage-examples
+        - title: DDL Generator
+          url: extensions-and-integrations/cassandra/ddl-generator
+    - title: PHP PDO
+      url: extensions-and-integrations/php-pdo
+- title: Plugins
+  url: plugins
+- title: Performance and Troubleshooting
+  items:
+    - title: General Performance Tips
+      url: /perf-and-troubleshooting/general-perf-tips
+    - title: Memory and JVM Tuning
+      url: /perf-and-troubleshooting/memory-tuning
+    - title: Persistence Tuning
+      url: /perf-and-troubleshooting/persistence-tuning
+    - title: SQL Tuning
+      url: /perf-and-troubleshooting/sql-tuning
+    - title: Thread Pools Tuning
+      url: /perf-and-troubleshooting/thread-pools-tuning
+    - title: Troubleshooting and Debugging
+      url: /perf-and-troubleshooting/troubleshooting
+    - title: Handling Exceptions
+      url: /perf-and-troubleshooting/handling-exceptions
+    - title: Benchmarking With Yardstick
+      url: /perf-and-troubleshooting/yardstick-benchmarking
diff --git a/docs/_docs/SQL/JDBC/error-codes.adoc b/docs/_docs/SQL/JDBC/error-codes.adoc
new file mode 100644
index 0000000..f2e1a33
--- /dev/null
+++ b/docs/_docs/SQL/JDBC/error-codes.adoc
@@ -0,0 +1,81 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Error Codes
+
+Ignite JDBC drivers pass error codes in the `java.sql.SQLException` class, used to facilitate exception handling on the application side. To get an error code, use the `java.sql.SQLException.getSQLState()` method. It returns a string containing the ANSI SQLSTATE error code:
+
+[source,java]
+----
+include::{javaCodeDir}/JDBCThinDriver.java[tags=error-codes, indent=0]
+----
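+
+The included snippet is not shown in this diff; conceptually, such handling boils down to the following sketch (the connection URL and query are placeholders):
+
+[source,java]
+----
+try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
+     Statement stmt = conn.createStatement()) {
+    stmt.execute("SELECT * FROM mytable");
+}
+catch (SQLException e) {
+    // Returns the ANSI SQLSTATE code, e.g. "42000" for a query parsing exception.
+    System.out.println("SQLSTATE: " + e.getSQLState());
+}
+----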
+
+
+The table below lists all the link:https://en.wikipedia.org/wiki/SQLSTATE[ANSI SQLSTATE] error codes currently supported by Ignite. Note that the list may be extended in the future.
+
+[width="100%",cols="20%,80%"]
+|=======================================================================
+|Code |Description
+
+|0700B|Conversion failure (for example, a string expression cannot be parsed as a number or a date).
+
+|0700E|Invalid transaction isolation level.
+
+|08001|The driver failed to open a connection to the cluster.
+
+|08003|The connection is in the closed state; this happened unexpectedly.
+
+|08004|The connection was rejected by the cluster.
+
+|08006|I/O error during communication.
+
+|22004|Null value not allowed.
+
+|22023|Unsupported parameter type.
+
+|23000|Data integrity constraint violation.
+
+|24000|Invalid result set state.
+
+|0A000|Requested operation is not supported.
+
+|40001|Concurrent update conflict. See link:transactions/mvcc#concurrent-updates[Concurrent Updates].
+
+|42000|Query parsing exception.
+
+|50000|Ignite internal error.
+The code is not defined by ANSI and refers to an Ignite specific error. Refer to the `java.sql.SQLException` error message for more information.
+|=======================================================================
diff --git a/docs/_docs/SQL/JDBC/jdbc-client-driver.adoc b/docs/_docs/SQL/JDBC/jdbc-client-driver.adoc
new file mode 100644
index 0000000..ee2ffeb
--- /dev/null
+++ b/docs/_docs/SQL/JDBC/jdbc-client-driver.adoc
@@ -0,0 +1,297 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= JDBC Client Driver
+:javaFile: {javaCodeDir}/JDBCClientDriver.java
+
+JDBC Client Driver interacts with the cluster by means of a client node.
+
+== JDBC Client Driver
+
+The JDBC Client Driver connects to the cluster by using a client node connection. You must provide a complete Spring XML configuration as part of the JDBC connection string, and copy all the JAR files mentioned below to the classpath of your application or SQL tool:
+
+- All the JARs under `{IGNITE_HOME}\libs` directory.
+- All the JARs under `{IGNITE_HOME}\ignite-indexing` and `{IGNITE_HOME}\ignite-spring` directories.
+
+The driver itself is more robust, and might not support the latest SQL features of Ignite. However, because it uses the client node connection underneath, it can execute and distribute queries, and aggregate their results directly from the application side.
+
+The JDBC connection URL has the following pattern:
+
+[source,shell]
+----
+jdbc:ignite:cfg://[<params>@]<config_url>
+----
+
+Where:
+
+- `<config_url>` is required and must represent a valid URL that points to the configuration file for the client node. This node will be started within the Ignite JDBC Client Driver when it (the JDBC driver) tries to establish a connection with the cluster.
+- `<params>` is optional and has the following format:
+
+[source,text]
+----
+param1=value1:param2=value2:...:paramN=valueN
+----
+
+
+The name of the driver's class is `org.apache.ignite.IgniteJdbcDriver`. For example, here's how to open a JDBC connection to the Ignite cluster:
+
+[source,java]
+----
+include::{javaFile}[tags=register, indent=0]
+----
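+
+The included snippet is not shown in this diff; in essence, the connection is opened like this (the configuration path and cache name are placeholders):
+
+[source,java]
+----
+// Register the JDBC Client Driver.
+Class.forName("org.apache.ignite.IgniteJdbcDriver");
+
+// Open a connection, passing the cache name as a parameter and pointing
+// the driver to the Spring XML configuration of the client node.
+Connection conn = DriverManager.getConnection(
+    "jdbc:ignite:cfg://cache=myCache@file:///etc/config/ignite-jdbc.xml");
+----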
+
+[NOTE]
+====
+[discrete]
+=== Securing Connection
+
+For information on how to secure the JDBC client driver connection, you can refer to the link:security/ssl-tls[Security documentation].
+====
+
+=== Supported Parameters
+
+[width="100%",cols="20%,60%,20%"]
+|=======================================================================
+|Parameter |Description |Default Value
+
+|`cache`
+
+|Cache name. If it is not defined, then the default cache will be used. Note that the cache name is case sensitive.
+| None.
+
+|`nodeId`
+
+|ID of node where query will be executed. Useful for querying through local caches.
+| None.
+
+|`local`
+
+|Query will be executed only on a local node. Use this parameter with the `nodeId` parameter to limit the data set to the specified node.
+
+|`false`
+
+|`collocated`
+
+|Flag that is used for optimization purposes. Whenever Ignite executes a distributed query, it sends sub-queries to individual cluster members. If you know in advance that the elements of your query selection are colocated together on the same node, Ignite can make significant performance and network optimizations.
+
+|`false`
+
+|`distributedJoins`
+
+|Allows use of distributed joins for non-colocated data.
+
+|`false`
+
+|`streaming`
+
+|Turns on bulk data load mode via INSERT statements for this connection. Refer to the <<Streaming Mode>> section for more details.
+
+|`false`
+
+|`streamingAllowOverwrite`
+
+|Tells Ignite to overwrite values for existing keys on duplication instead of skipping them. Refer to the <<Streaming Mode>> section for more details.
+
+|`false`
+
+|`streamingFlushFrequency`
+
+|Timeout, in milliseconds, that data streamer should use to flush data. By default, the data is flushed on connection close. Refer to the <<Streaming Mode>> section for more details.
+
+|`0`
+
+|`streamingPerNodeBufferSize`
+
+|Data streamer's per node buffer size. Refer to the <<Streaming Mode>> section for more details.
+
+|`1024`
+
+|`streamingPerNodeParallelOperations`
+
+|Data streamer's per node parallel operations number. Refer to the <<Streaming Mode>> section for more details.
+
+|`16`
+
+|`transactionsAllowed`
+
+|Presently, ACID transactions are supported, but only at the key-value API level. At the SQL level, Ignite supports atomic, but not transactional, consistency.
+
+This means that the JDBC driver might throw a `Transactions are not supported` exception if you try to use this functionality.
+
+However, in cases when you need transactional syntax to work (even without transactional semantics), e.g. some BI tools might force the transactional behavior, set this parameter to `true` to prevent exceptions from being thrown.
+
+|`false`
+
+|`multipleStatementsAllowed`
+
+|JDBC driver will be able to process multiple SQL statements at a time, returning multiple `ResultSet` objects. If the parameter is disabled, the query with multiple statements fails.
+
+|`false`
+
+|`lazy`
+
+|Lazy query execution.
+
+By default, Ignite attempts to fetch the whole query result set to memory and send it to the client. For small and medium result sets, this provides optimal performance and minimizes the duration of internal database locks, thus increasing concurrency.
+
+However, if the result set is too big to fit in the available memory, it can lead to excessive GC pauses and even `OutOfMemoryError` errors. Use this flag to tell Ignite to fetch the result set lazily, thus minimizing memory consumption at the cost of a moderate performance hit.
+
+|`false`
+
+|`skipReducerOnUpdate`
+
+|Enables server side update feature.
+
+When Ignite executes a DML operation, it first fetches all of the affected intermediate rows to the query initiator (also known as the reducer) for analysis, and then prepares batches of updated values to be sent to remote nodes.
+
+This approach might impact performance and saturate the network if a DML operation has to move many entries over it.
+
+Use this flag as a hint for Ignite to perform all intermediate rows analysis and updates "in-place" on the corresponding remote data nodes.
+
+Defaults to `false`, meaning that intermediate results will be fetched to the query initiator first.
+|`false`
+
+
+|=======================================================================
+
+[NOTE]
+====
+[discrete]
+=== Cross-Cache Queries
+
+The cache to which the driver is connected is treated as the default schema. To query across multiple caches, you can use Cross-Cache queries.
+====
+
+=== Streaming Mode
+
+It's feasible to add data into a cluster in streaming mode (bulk mode) using the JDBC driver. In this mode, the driver instantiates `IgniteDataStreamer` internally and feeds data to it. To activate this mode, add the `streaming` parameter set to `true` to a JDBC connection string:
+
+[source,java]
+----
+// Register JDBC driver.
+Class.forName("org.apache.ignite.IgniteJdbcDriver");
+
+// Opening connection in the streaming mode.
+Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://streaming=true@file:///etc/config/ignite-jdbc.xml");
+----
+
+Presently, streaming mode is supported only for INSERT operations. This is useful in cases when you want to achieve fast data preloading into a cache. The JDBC driver defines multiple connection parameters that affect the behavior of the streaming mode. These parameters are listed in the parameters table above.
+
+[WARNING]
+====
+[discrete]
+=== Cache Name
+
+Make sure you specify a target cache for streaming as an argument to the `cache=` parameter in the JDBC connection string. If a cache is not specified or does not match the table used in streaming DML statements, updates will be ignored.
+====
+
+The parameters cover almost all of the settings of a general `IgniteDataStreamer` and allow you to tune the streamer according to your needs. Please refer to the link:data-streaming[Data Streaming] section for more information on how to configure the streamer.
+
+[NOTE]
+====
+[discrete]
+=== Time Based Flushing
+
+By default, the data is flushed when either a connection is closed or `streamingPerNodeBufferSize` is met. If you need to flush the data more frequently, adjust the `streamingFlushFrequency` parameter.
+====
+
+[source,java]
+----
+include::{javaFile}[tags=time-based-flushing, indent=0]
+----
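+
+The included snippet is not shown in this diff; a rough sketch of time-based flushing looks like this (the configuration path, cache name, and table are placeholders):
+
+[source,java]
+----
+// Open a streaming connection that flushes buffered data every second.
+Connection conn = DriverManager.getConnection(
+    "jdbc:ignite:cfg://streaming=true:streamingFlushFrequency=1000:cache=myCache@file:///etc/config/ignite-jdbc.xml");
+
+PreparedStatement stmt = conn.prepareStatement("INSERT INTO Person(_key, name) VALUES(?, ?)");
+
+for (int i = 1; i <= 100_000; i++) {
+    stmt.setLong(1, i);
+    stmt.setString(2, "Name " + i);
+    stmt.executeUpdate();
+}
+
+// Close the connection to flush any remaining buffered data.
+conn.close();
+----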
+
+== Example
+
+To start processing the data located in the cluster, you need to create a JDBC `Connection` object using one of the methods below:
+
+[source,java]
+----
+// Register JDBC driver.
+Class.forName("org.apache.ignite.IgniteJdbcDriver");
+
+// Open JDBC connection (cache name is not specified, which means that we use default cache).
+Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://file:///etc/config/ignite-jdbc.xml");
+----
+
+Right after that you can execute your SQL `SELECT` queries:
+
+[source,java]
+----
+// Query names of all people.
+ResultSet rs = conn.createStatement().executeQuery("select name from Person");
+
+while (rs.next()) {
+    String name = rs.getString(1);
+}
+
+----
+
+[source,java]
+----
+// Query people with specific age using prepared statement.
+PreparedStatement stmt = conn.prepareStatement("select name, age from Person where age = ?");
+
+stmt.setInt(1, 30);
+
+ResultSet rs = stmt.executeQuery();
+
+while (rs.next()) {
+    String name = rs.getString("name");
+    int age = rs.getInt("age");
+}
+----
+
+You can use DML statements to modify the data.
+
+=== INSERT
+[source,java]
+----
+// Insert a Person with a Long key.
+PreparedStatement stmt = conn.prepareStatement("INSERT INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");
+
+stmt.setInt(1, 1);
+stmt.setString(2, "John Smith");
+stmt.setInt(3, 25);
+
+stmt.execute();
+----
+
+=== MERGE
+[source,java]
+----
+// Merge a Person with a Long key.
+PreparedStatement stmt = conn.prepareStatement("MERGE INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");
+
+stmt.setInt(1, 1);
+stmt.setString(2, "John Smith");
+stmt.setInt(3, 25);
+
+stmt.executeUpdate();
+----
+
+=== UPDATE
+
+[source,java]
+----
+// Update a Person.
+conn.createStatement().
+  executeUpdate("UPDATE Person SET age = age + 1 WHERE age = 25");
+----
+
+=== DELETE
+
+[source,java]
+----
+conn.createStatement().execute("DELETE FROM Person WHERE age = 25");
+----
diff --git a/docs/_docs/SQL/JDBC/jdbc-driver.adoc b/docs/_docs/SQL/JDBC/jdbc-driver.adoc
new file mode 100644
index 0000000..09438c1
--- /dev/null
+++ b/docs/_docs/SQL/JDBC/jdbc-driver.adoc
@@ -0,0 +1,649 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= JDBC Driver
+:javaFile: {javaCodeDir}/JDBCThinDriver.java
+
+Ignite is shipped with JDBC drivers that allow processing of distributed data using standard SQL statements like `SELECT`, `INSERT`, `UPDATE` or `DELETE` directly from the JDBC side.
+
+Presently, there are two drivers supported by Ignite: the lightweight and easy to use JDBC Thin Driver described in this document and link:SQL/JDBC/jdbc-client-driver[JDBC Client Driver] that interacts with the cluster by means of a client node.
+
+== JDBC Thin Driver
+
+The JDBC Thin driver is a default, lightweight driver provided by Ignite. To start using the driver, just add `ignite-core-{version}.jar` to your application's classpath.
+
+The driver connects to one of the cluster nodes and forwards all the queries to it for final execution. The node handles the query distribution and result aggregation, and then sends the result back to the client application.
+
+The JDBC connection string may be formatted with one of two patterns: `URL query` or `semicolon`:
+
+
+
+.Connection String Syntax
+[source,text]
+----
+// URL query pattern
+jdbc:ignite:thin://<hostAndPortRange0>[,<hostAndPortRange1>]...[,<hostAndPortRangeN>][/schema][?<params>]
+
+hostAndPortRange := host[:port_from[..port_to]]
+
+params := param1=value1[&param2=value2]...[&paramN=valueN]
+
+// Semicolon pattern
+jdbc:ignite:thin://<hostAndPortRange0>[,<hostAndPortRange1>]...[,<hostAndPortRangeN>][;schema=<schema_name>][;param1=value1]...[;paramN=valueN]
+----
+
+
+- `host` is required and defines the host of the cluster node to connect to.
+- `port_from` is the beginning of the port range to use to open the connection. 10800 is used by default if this parameter is omitted.
+- `port_to` is optional. It is set to the `port_from` value by default if this parameter is omitted.
+- `schema` is the schema name to access. PUBLIC is used by default. This name should correspond to the SQL ANSI-99 standard. Non-quoted identifiers are not case sensitive. Quoted identifiers are case sensitive. When the semicolon format is used, the schema may be defined as a parameter named `schema`.
+- `<params>` are optional.
+
+The name of the driver's class is `org.apache.ignite.IgniteJdbcThinDriver`. For instance, this is how you can open a JDBC connection to the cluster node listening on IP address 192.168.0.50:
+
+[source,java]
+----
+include::{javaFile}[tags=get-connection, indent=0]
+----
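+
+The included snippet is not shown in this diff; in essence, it boils down to the following sketch:
+
+[source,java]
+----
+// Register the JDBC Thin Driver (optional on modern JDKs that discover JDBC drivers automatically).
+Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
+
+// Connect to the node listening on 192.168.0.50; the default port 10800 is used.
+Connection conn = DriverManager.getConnection("jdbc:ignite:thin://192.168.0.50");
+----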
+
+
+[NOTE]
+====
+[discrete]
+=== Put the JDBC URL in quotes when connecting from bash
+
+Make sure to put the connection URL in double quotes (" ") when connecting from a bash environment, for example: `"jdbc:ignite:thin://[address]:[port];user=[username];password=[password]"`
+====
+
+=== Parameters
+The following table lists all the parameters that are supported by the JDBC connection string:
+
+[width="100%",cols="30%,40%,30%"]
+|=======================================================================
+|Parameter |Description |Default Value
+
+|`user`
+|Username for the SQL Connection. This parameter is required if authentication is enabled on the server.
+See the link:security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE user] documentation for more details.
+|ignite
+
+|`password`
+|Password for SQL Connection. Required if authentication is enabled on the server.
+See the link:security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE user] documentation for more details.
+|`ignite`
+
+|`distributedJoins`
+|Whether to execute distributed joins in link:SQL/distributed-joins#non-colocated-joins[non-colocated mode].
+|false
+
+|`enforceJoinOrder`
+
+|Whether to enforce join order of tables in the query. If set to `true`, the query optimizer does not reorder tables in the join.
+
+|`false`
+
+|`collocated`
+
+| Set this parameter to `true` if your SQL statement includes a GROUP BY clause that groups the results by either primary
+  or affinity key. Whenever Ignite executes a distributed query, it sends sub-queries to individual cluster members. If
+  you know in advance that the elements of your query selection are colocated together on the same node and you group by
+  a primary or affinity key, then Ignite makes significant performance and network optimizations by grouping data locally
+   on each node participating in the query.
+|`false`
+
+|`replicatedOnly`
+
+|Whether the query contains only replicated tables. This is a hint for potentially more effective execution.
+
+|`false`
+
+|`autoCloseServerCursor`
+|Whether to close server-side cursors automatically when the last piece of a result set is retrieved. When this property is enabled, calling `ResultSet.close()` does not require a network call, which could improve performance. However, if the server-side cursor is already closed, you may get an exception when trying to call `ResultSet.getMetadata()`. This is why it defaults to `false`.
+|`false`
+
+| `partitionAwareness`
+| Enables xref:partition-awareness[] mode. In this mode, the driver tries to determine the nodes where the data that is being queried is located and send the query to these nodes.
+| `false`
+
+|`partitionAwarenessSQLCacheSize` [[partitionAwarenessSQLCacheSize]]
+| The number of distinct SQL queries that the driver keeps locally for optimization. When a query is executed for the first time, the driver receives the partition distribution for the table that is being queried and saves it locally for future use. When you query this table next time, the driver uses the partition distribution to determine where the data being queried is located and sends the query to the right nodes. This local storage of SQL queries is invalidated when the cluster topology changes. The optimal value for this parameter should equal the number of distinct SQL queries you are going to perform.
+| 1000
+
+|`partitionAwarenessPartitionDistributionsCacheSize` [[partitionAwarenessPartitionDistributionsCacheSize]]
+| The number of distinct objects that represent partition distribution that the driver keeps locally for optimization. See the description of the previous parameter for details. This local storage of partition distribution objects is invalidated when the cluster topology changes. The optimal value for this parameter should equal the number of distinct tables (link:configuring-caches/cache-groups[cache groups]) you are going to use in your queries.
+| 1000
+
+|`socketSendBuffer`
+|Socket send buffer size. When set to 0, the OS default is used.
+|0
+
+|`socketReceiveBuffer`
+|Socket receive buffer size. When set to 0, the OS default is used.
+|0
+
+|`tcpNoDelay`
+| Whether to use the `TCP_NODELAY` option.
+|`true`
+
+|`lazy`
+|Lazy query execution.
+By default, Ignite attempts to load the whole query result set into memory and then send it to the client. For small and medium result sets, this provides optimal performance and minimizes the duration of internal database locks, thus increasing concurrency.
+However, if the result set is too big to fit in the available memory, it can lead to excessive GC pauses and even `OutOfMemoryError` errors. Use this flag to tell Ignite to fetch the result set lazily, thus minimizing memory consumption at the cost of a moderate performance hit.
+|`false`
+
+|`skipReducerOnUpdate`
+|Enables server-side updates.
+When Ignite executes a DML operation, it fetches all the affected intermediate rows and sends them to the query initiator (also known as the reducer) for analysis. Then it prepares batches of updated values to be sent to remote nodes.
+This approach might impact performance, and it can saturate the network if a DML operation has to move many entries over it.
+Use this flag to tell Ignite to perform all intermediate row analysis and updates "in-place" on the corresponding remote data nodes.
+Defaults to `false`, meaning that the intermediate results are fetched to the query initiator first.
+|`false`
+
+
+|=======================================================================
+
+For the list of security parameters, refer to the <<Using SSL>> section.
+
+=== Connection String Examples
+
+- `jdbc:ignite:thin://myHost` - connect to myHost on port 10800 with all defaults.
+- `jdbc:ignite:thin://myHost:11900` - connect to myHost on custom port 11900 with all defaults.
+- `jdbc:ignite:thin://myHost:11900;user=ignite;password=ignite` - connect to myHost on custom port 11900 with user credentials for authentication.
+- `jdbc:ignite:thin://myHost:11900;distributedJoins=true;autoCloseServerCursor=true` - connect to myHost on custom port 11900 with distributed joins and the autoCloseServerCursor optimization enabled.
+- `jdbc:ignite:thin://myHost:11900/myschema;` - connect to myHost on custom port 11900 and access the MYSCHEMA schema.
+- `jdbc:ignite:thin://myHost:11900/"MySchema";lazy=false` - connect to myHost on custom port 11900 with disabled lazy query execution and access to MySchema (schema name is case sensitive).
+
+=== Multiple Endpoints
+
+You can enable automatic failover for broken connections by setting multiple connection endpoints in the connection string.
+The JDBC Driver randomly picks an address from the list to connect to. If the connection fails, the driver selects another address from the list until the connection is restored.
+The driver stops reconnecting and throws an exception if all the endpoints are unreachable.
+
+The example below shows how to pass three addresses via the connection string:
+
+[source,java]
+----
+include::{javaFile}[tags=multiple-endpoints, indent=0]
+----
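+
+For illustration, a minimal sketch of such a connection string is shown below (the addresses are hypothetical):
+
+[source, java]
+----
+// Provide several endpoints; if one node is unreachable, the driver tries the others.
+Connection conn = DriverManager.getConnection(
+    "jdbc:ignite:thin://192.168.0.50:10800,192.168.0.51:10800,192.168.0.52:10800");
+----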
+
+
+=== Partition Awareness [[partition-awareness]]
+
+[WARNING]
+====
+[discrete]
+Partition awareness is an experimental feature whose API or design architecture might be changed
+before a GA version is released.
+====
+
+Partition awareness is a feature that makes the JDBC driver "aware" of the partition distribution in the cluster.
+It allows the driver to pick the nodes that own the data that is being queried and send the query directly to those nodes
+(if the addresses of the nodes are provided in the driver's configuration). Partition awareness can increase the average
+performance of queries that use the affinity key.
+
+Without partition awareness, the JDBC driver connects to a single node, and all queries are executed through that node.
+If the data is hosted on a different node, the query has to be rerouted within the cluster, which adds an additional network hop.
+Partition awareness eliminates that hop by sending the query to the right node.
+
+To make use of the partition awareness feature, provide the addresses of all the server nodes in the connection properties.
+The driver will route requests to the nodes that store the data requested by the query.
+
+[WARNING]
+====
+[discrete]
+Note that currently you must provide the addresses of all server nodes in the connection properties because the driver does not load them automatically after a connection is opened.
+It also means that if a new server node joins the cluster, you are advised to reconnect the driver and add the node's address to the connection properties.
+Otherwise, the driver will not be able to send direct requests to this node.
+====
+
+To enable partition awareness, add the `partitionAwareness=true` parameter to the connection string and provide the
+endpoints of multiple server nodes:
+
+[source, java]
+----
+include::{javaFile}[tags=partition-awareness, indent=0]
+----
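+
+A minimal sketch of such a connection string follows (the addresses are hypothetical):
+
+[source, java]
+----
+// Provide the addresses of all server nodes and enable partition awareness.
+Connection conn = DriverManager.getConnection(
+    "jdbc:ignite:thin://192.168.0.50:10800,192.168.0.51:10800,192.168.0.52:10800;partitionAwareness=true");
+----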
+
+NOTE: Partition Awareness can be used only with the default affinity function.
+
+Also see the description of the two related parameters: xref:partitionAwarenessSQLCacheSize[partitionAwarenessSQLCacheSize] and xref:partitionAwarenessPartitionDistributionsCacheSize[partitionAwarenessPartitionDistributionsCacheSize].
+
+
+=== Cluster Configuration
+
+In order to accept and process requests from the JDBC Thin Driver, a cluster node binds to a local network interface on port 10800 and listens for incoming requests.
+
+Use an instance of `ClientConnectorConfiguration` to change the connection parameters:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+  <property name="clientConnectorConfiguration">
+    <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration" />
+  </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration()
+    .setClientConnectorConfiguration(new ClientConnectorConfiguration());
+----
+
+tab:C#/.NET[]
+tab:C++[]
+--
+
+The following parameters are supported:
+
+[width="100%",cols="30%,55%,15%"]
+|=======================================================================
+|Parameter |Description |Default Value
+
+|`host`
+
+|Host name or IP address to bind to. When set to `null`, the node binds to `localhost`.
+
+|`null`
+
+|`port`
+
+|TCP port to bind to. If the specified port is already in use, Ignite tries to find another available port using the `portRange` property.
+
+|`10800`
+
+|`portRange`
+
+| Defines the number of ports to try to bind to. For example, if the port is set to `10800` and `portRange` is `100`, the server tries to bind consecutively to each port in the `[10800, 10900]` range until it finds a free port.
+
+|`100`
+
+|`maxOpenCursorsPerConnection`
+
+|Maximum number of cursors that can be opened simultaneously for a single connection.
+
+|`128`
+
+|`threadPoolSize`
+
+|Number of request-handling threads in the thread pool.
+
+|`MAX(8, CPU cores)`
+
+|`socketSendBufferSize`
+
+|Size of the TCP socket send buffer. When set to 0, the system default value is used.
+
+|`0`
+
+|`socketReceiveBufferSize`
+
+|Size of the TCP socket receive buffer. When set to 0, the system default value is used.
+
+|`0`
+
+|`tcpNoDelay`
+
+|Whether to use `TCP_NODELAY` option.
+
+|`true`
+
+|`idleTimeout`
+
+|Idle timeout for client connections.
+Clients are disconnected automatically from the server after remaining idle for the configured timeout.
+When this parameter is set to zero or a negative value, the idle timeout is disabled.
+
+|`0`
+
+|`isJdbcEnabled`
+
+|Whether access through JDBC is enabled.
+
+|`true`
+
+|`isThinClientEnabled`
+
+|Whether access through thin client is enabled.
+
+|`true`
+
+
+|`sslEnabled`
+
+|Whether SSL is enabled. If enabled, only SSL client connections are allowed; a node cannot accept both SSL and plain client connections at the same time. This option can differ between nodes in the cluster.
+
+|`false`
+
+|`useIgniteSslContextFactory`
+
+|Whether to use SSL context factory from the node's configuration (see `IgniteConfiguration.sslContextFactory`).
+
+|`true`
+
+|`sslClientAuth`
+
+|Whether client authentication is required.
+
+|`false`
+
+|`sslContextFactory`
+
+|The name of a class that implements `Factory<SSLContext>` to provide node-side SSL. See link:security/ssl-tls[SSL/TLS] for more information.
+
+|`null`
+|=======================================================================
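+
+For example, a minimal sketch of customizing some of these parameters (the values are arbitrary):
+
+[source, java]
+----
+ClientConnectorConfiguration clientConnectorCfg = new ClientConnectorConfiguration();
+
+// Bind to a custom port and allow more simultaneously open cursors.
+clientConnectorCfg.setPort(12345);
+clientConnectorCfg.setPortRange(2);
+clientConnectorCfg.setMaxOpenCursorsPerConnection(512);
+
+IgniteConfiguration cfg = new IgniteConfiguration()
+    .setClientConnectorConfiguration(clientConnectorCfg);
+----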
+
+[WARNING]
+====
+[discrete]
+=== JDBC Thin Driver is not thread safe
+
+The JDBC objects `Connection`, `Statement`, and `ResultSet` are not thread safe.
+Do not use statements and result sets from a single JDBC connection in multiple threads.
+
+The JDBC Thin Driver guards against concurrent access. If it is detected, an exception
+(`SQLException`) is produced with the following message:
+
+....
+"Concurrent access to JDBC connection is not allowed
+[ownThread=<guard_owner_thread_name>, curThread=<current_thread_name>]",
+SQLSTATE="08006"
+....
+====
+
+
+=== Using SSL
+
+You can configure the JDBC Thin Driver to use SSL to secure communication with the cluster.
+SSL must be configured both on the cluster side and in the JDBC Driver.
+Refer to the link:security/ssl-tls#ssl-for-clients[SSL for Thin Clients and JDBC/ODBC] section for information about cluster configuration.
+
+To enable SSL in the JDBC Driver, pass the `sslMode=require` parameter in the connection string and provide the key store and trust store parameters:
+
+[source, java]
+----
+include::{javaFile}[tags=ssl,indent=0]
+----
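+
+For illustration, an SSL-enabled connection string might look as follows (the paths and passwords are hypothetical; the parameters are described in the table below):
+
+[source, java]
+----
+// Enable SSL and point the driver to the key store and trust store.
+Connection conn = DriverManager.getConnection(
+    "jdbc:ignite:thin://192.168.0.50:10800;sslMode=require"
+    + ";sslClientCertificateKeyStoreUrl=/path/to/client.jks"
+    + ";sslClientCertificateKeyStorePassword=123456"
+    + ";sslTrustCertificateKeyStoreUrl=/path/to/trust.jks"
+    + ";sslTrustCertificateKeyStorePassword=123456");
+----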
+
+The following table lists all parameters that affect SSL/TLS connection:
+
+[width="100%",cols="30%,40%,30%"]
+|====
+|Parameter |Description |Default Value
+|`sslMode`
+a|Enables SSL connection. Available modes:
+
+* `require`: SSL protocol is enabled on the client. Only SSL connection is available.
+* `disable`: SSL protocol is disabled on the client. Only plain connection is supported.
+
+|`disable`
+
+|`sslProtocol`
+|Protocol name for secure transport. Protocol implementations supplied by JSSE: `SSLv3 (SSL)`, `TLSv1 (TLS)`, `TLSv1.1`, `TLSv1.2`
+|`TLS`
+
+|`sslKeyAlgorithm`
+
+|The key manager algorithm used to create a key manager. Note that in most cases the default value is sufficient.
+Algorithm implementations supplied by JSSE: `PKIX (X509 or SunPKIX)`, `SunX509`.
+
+| `None`
+
+|`sslClientCertificateKeyStoreUrl`
+
+|URL of the client key store file.
+This is a mandatory parameter since SSL context cannot be initialized without a key manager.
+If `sslMode` is `require` and the key store URL isn't specified in the Ignite properties, the value of the JSSE property `javax.net.ssl.keyStore` is used.
+
+|The value of the
+`javax.net.ssl.keyStore`
+system property.
+
+|`sslClientCertificateKeyStorePassword`
+
+|Client key store password.
+
+If `sslMode` is `require` and the key store password isn't specified in the Ignite properties, the JSSE property `javax.net.ssl.keyStorePassword` is used.
+
+|The value of the `javax.net.ssl.keyStorePassword` system property.
+
+|`sslClientCertificateKeyStoreType`
+
+|Client key store type used in context initialization.
+
+If `sslMode` is `require` and the key store type isn't specified in the Ignite properties, the JSSE property `javax.net.ssl.keyStoreType` is used.
+
+|The value of the
+`javax.net.ssl.keyStoreType`
+system property.
+If the system property is not defined, the default value is `JKS`.
+
+|`sslTrustCertificateKeyStoreUrl`
+
+|URL of the trust store file. This is an optional parameter; however, one of these properties must be set: `sslTrustCertificateKeyStoreUrl` or `sslTrustAll`.
+
+If `sslMode` is `require` and the trust store URL isn't specified in the Ignite properties, the JSSE property `javax.net.ssl.trustStore` is used.
+
+|The value of the
+`javax.net.ssl.trustStore` system property.
+
+|`sslTrustCertificateKeyStorePassword`
+
+|Trust store password.
+
+If `sslMode` is `require` and the trust store password isn't specified in the Ignite properties, the JSSE property `javax.net.ssl.trustStorePassword` is used.
+
+|The value of the
+`javax.net.ssl.trustStorePassword` system property.
+
+|`sslTrustCertificateKeyStoreType`
+
+|Trust store type.
+
+If `sslMode` is `require` and the trust store type isn't specified in the Ignite properties, the JSSE property `javax.net.ssl.trustStoreType` is used.
+
+|The value of the
+`javax.net.ssl.trustStoreType`
+system property. If the system property is not defined, the default value is `JKS`.
+
+|`sslTrustAll`
+
+a|Disables server certificate validation. Set to `true` to trust any server certificate (revoked, expired, or self-signed SSL certificates).
+
+CAUTION: Do not enable this option in production on a network you do not entirely trust, especially one that uses the public internet.
+
+|`false`
+
+|`sslFactory`
+
+|Class name of the custom implementation of the
+`Factory<SSLSocketFactory>`.
+
+If `sslMode` is `require` and a factory is specified, the custom factory is used instead of the JSSE socket factory. In this case, other SSL properties are ignored.
+
+|`null`
+|====
+
+
+//See the `ssl*` parameters of the JDBC driver, and `ssl*` parameters and `useIgniteSslContextFactory` of the `ClientConnectorConfiguration` for more detailed information.
+
+The default implementation is based on JSSE, and works through two Java keystore files:
+
+- `sslClientCertificateKeyStoreUrl` - the client certificate keystore holds the keys and certificate for the client.
+- `sslTrustCertificateKeyStoreUrl` - the trusted certificate keystore contains the certificate information to validate the server's certificate.
+
+The trust store is an optional parameter; however, one of the following parameters must be configured: `sslTrustCertificateKeyStoreUrl` or `sslTrustAll`.
+
+[WARNING]
+====
+[discrete]
+=== Using the "sslTrustAll" option
+
+Do not enable this option in production on a network you do not entirely trust, especially anything using the public internet.
+====
+
+If you want to use your own implementation or method to configure the `SSLSocketFactory`, you can use the JDBC Driver's `sslFactory` parameter. It is a string that must contain the name of a class that implements the `Factory<SSLSocketFactory>` interface. The class must be available to the JDBC Driver's class loader.
+
+== Ignite DataSource
+
+A DataSource object is a deployed object that can be located by logical name via the JNDI naming service. The JDBC Driver's `org.apache.ignite.IgniteJdbcThinDataSource` implements the JDBC `DataSource` interface, allowing you to use a DataSource instead of a connection URL.
+
+In addition to generic DataSource properties, `IgniteJdbcThinDataSource` supports all the Ignite-specific properties that can be passed into a JDBC connection string. For instance, the `distributedJoins` property can be (re)set via the `IgniteJdbcThinDataSource#setDistributedJoins()` method.
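+
+A minimal usage sketch is shown below (the address is hypothetical, and the `setUrl` setter is assumed to accept a standard connection URL):
+
+[source, java]
+----
+IgniteJdbcThinDataSource ds = new IgniteJdbcThinDataSource();
+ds.setUrl("jdbc:ignite:thin://192.168.0.50:10800");
+
+// Ignite-specific properties can be set directly on the DataSource.
+ds.setDistributedJoins(true);
+
+try (Connection conn = ds.getConnection()) {
+    // Work with the connection as usual.
+}
+----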
+
+Refer to the link:{javadoc_base_url}/org/apache/ignite/IgniteJdbcThinDataSource.html[JavaDocs] for more details.
+
+== Examples
+
+To start processing the data located in the cluster, you need to create a JDBC Connection object via one of the methods below:
+
+[source, java]
+----
+// Open the JDBC connection via DriverManager.
+Connection conn = DriverManager.getConnection("jdbc:ignite:thin://192.168.0.50");
+----
+
+or
+
+[source,java]
+----
+include::{javaFile}[tags=connection-from-data-source,indent=0]
+----
+
+Then you can execute SQL SELECT queries as follows:
+
+[source,java]
+----
+include::{javaFile}[tags=select,indent=0]
+----
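+
+For illustration, a query against the `Person` table used elsewhere on this page might look like this (the `id` column is an assumption):
+
+[source, java]
+----
+try (Statement stmt = conn.createStatement();
+     ResultSet rs = stmt.executeQuery("SELECT id, age FROM Person WHERE age > 25")) {
+    while (rs.next())
+        System.out.println(rs.getLong(1) + ": " + rs.getInt(2));
+}
+----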
+
+You can also modify the data via DML statements.
+
+=== INSERT
+
+[source,java]
+----
+include::{javaFile}[tags=insert,indent=0]
+----
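+
+For illustration, a minimal parameterized insert might look like this (the column names are assumptions):
+
+[source, java]
+----
+try (PreparedStatement ps = conn.prepareStatement(
+    "INSERT INTO Person (id, name, age) VALUES (?, ?, ?)")) {
+    ps.setLong(1, 1L);
+    ps.setString(2, "John Doe");
+    ps.setInt(3, 25);
+    ps.executeUpdate();
+}
+----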
+
+
+=== MERGE
+
+
+[source,java]
+----
+include::{javaFile}[tags=merge,indent=0]
+----
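+
+For illustration, a minimal sketch (the column names are assumptions; `MERGE` inserts the row or overwrites an existing row with the same key):
+
+[source, java]
+----
+conn.createStatement().executeUpdate(
+    "MERGE INTO Person (id, name, age) VALUES (2, 'Jane Roe', 30)");
+----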
+
+
+=== UPDATE
+
+
+[source,java]
+----
+// Update a Person.
+conn.createStatement().
+  executeUpdate("UPDATE Person SET age = age + 1 WHERE age = 25");
+----
+
+
+=== DELETE
+
+
+[source,java]
+----
+conn.createStatement().execute("DELETE FROM Person WHERE age = 25");
+----
+
+
+== Streaming
+
+The JDBC driver allows streaming data in bulk using the `SET` command. See the `SET` command link:sql-reference/operational-commands#set-streaming[documentation] for more information.
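+
+For illustration, a minimal bulk-loading sketch (the table and column names are assumptions):
+
+[source, java]
+----
+try (Statement stmt = conn.createStatement()) {
+    // Switch the connection into streaming mode.
+    stmt.execute("SET STREAMING ON");
+
+    // Load data in bulk; inserts are batched and streamed to the cluster.
+    for (int i = 0; i < 100_000; i++)
+        stmt.executeUpdate("INSERT INTO Person (id, age) VALUES (" + i + ", 25)");
+
+    // Flush the remaining data and return to regular mode.
+    stmt.execute("SET STREAMING OFF");
+}
+----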
+
+== Error Codes
+
+The JDBC driver passes error codes in the `java.sql.SQLException` class to facilitate exception handling on the application side. To get an error code, use the `java.sql.SQLException.getSQLState()` method. It returns a string containing the ANSI SQLSTATE error code:
+
+
+[source,java]
+----
+include::{javaFile}[tags=handle-exception,indent=0]
+----
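+
+For illustration, a minimal sketch of SQLSTATE-based handling (the handled code is taken from the table below):
+
+[source, java]
+----
+try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://192.168.0.50")) {
+    // Work with the connection.
+}
+catch (SQLException e) {
+    if ("08001".equals(e.getSQLState()))
+        System.err.println("Failed to open a connection to the cluster: " + e.getMessage());
+    else
+        throw e;
+}
+----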
+
+
+
+The table below lists all the link:https://en.wikipedia.org/wiki/SQLSTATE[ANSI SQLSTATE] error codes currently supported by Ignite. Note that the list may be extended in the future.
+
+[width="100%",cols="20%,80%"]
+|=======================================================================
+|Code |Description
+
+|0700B|Conversion failure (for example, a string expression cannot be parsed as a number or a date).
+
+|0700E|Invalid transaction isolation level.
+
+|08001|The driver failed to open a connection to the cluster.
+
+|08003|The connection is unexpectedly in the closed state.
+
+|08004|The connection was rejected by the cluster.
+
+|08006|I/O error during communication.
+
+|22004|Null value not allowed.
+
+|22023|Unsupported parameter type.
+
+|23000|Data integrity constraint violation.
+
+|24000|Invalid result set state.
+
+|0A000|Requested operation is not supported.
+
+|40001|Concurrent update conflict. See link:transactions/mvcc#concurrent-updates[Concurrent Updates].
+
+|42000|Query parsing exception.
+
+|50000| Internal error.
+The code is not defined by ANSI and refers to an Ignite-specific error. Refer to the `java.sql.SQLException` error message for more information.
+|=======================================================================
+
diff --git a/docs/_docs/SQL/ODBC/connection-string-dsn.adoc b/docs/_docs/SQL/ODBC/connection-string-dsn.adoc
new file mode 100644
index 0000000..6c5e1c4
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/connection-string-dsn.adoc
@@ -0,0 +1,255 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Connection String and DSN
+
+== Connection String Format
+
+The ODBC Driver supports standard connection string format. Here is the formal syntax:
+
+[source,text]
+----
+connection-string ::= empty-string[;] | attribute[;] | attribute; connection-string
+empty-string ::=
+attribute ::= attribute-keyword=attribute-value | DRIVER=[{]attribute-value[}]
+attribute-keyword ::= identifier
+attribute-value ::= character-string
+----
+
+
+In simple terms, an ODBC connection URL is a string with parameters of your choice separated by semicolons.
+
+== Supported Arguments
+
+The ODBC driver supports and uses several connection string/DSN arguments. All parameter names are case-insensitive: `ADDRESS`, `Address`, and `address` are all valid parameter names and refer to the same parameter. If an argument is not specified, the default value is used. The exception to this rule is the `ADDRESS` attribute: if it is not specified, the `SERVER` and `PORT` attributes are used instead.
+
+[width="100%",cols="20%,60%,20%"]
+|=======================================================================
+|Attribute keyword |Description |Default Value
+
+|`ADDRESS`
+|Address of the remote node to connect to. The format is: `<host>[:<port>]`. For example: `localhost`, `example.com:12345`, `127.0.0.1`, `192.168.3.80:5893`.
+If this attribute is specified, then `SERVER` and `PORT` arguments are ignored.
+|None.
+
+|`SERVER`
+|Address of the node to connect to.
+This argument value is ignored if ADDRESS argument is specified.
+|None.
+
+|`PORT`
+|Port on which `OdbcProcessor` of the node is listening.
+This argument value is ignored if `ADDRESS` argument is specified.
+|`10800`
+
+|`USER`
+|Username for SQL Connection. This parameter is required if authentication is enabled on the server.
+See the link:security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE USER] documentation for more details on how to enable authentication and create users, respectively.
+|Empty string
+
+|`PASSWORD`
+|Password for SQL Connection. This parameter is required if authentication is enabled on the server.
+See the link:security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE USER] documentation for more details on how to enable authentication and create users, respectively.
+|Empty string
+
+|`SCHEMA`
+|Schema name.
+|`PUBLIC`
+
+|`DSN`
+|DSN name to connect to.
+| None.
+
+|`PAGE_SIZE`
+|Number of rows returned in response to a fetching request to the data source. The default value should be fine in most cases. Setting a low value can result in slow data fetching, while setting a high value can result in additional memory usage by the driver and an additional delay when the next page is retrieved.
+|`1024`
+
+|`DISTRIBUTED_JOINS`
+|Enables the link:SQL/distributed-joins#non-colocated-joins[non-colocated distributed joins] feature for all queries that are executed over the ODBC connection.
+|`false`
+
+|`ENFORCE_JOIN_ORDER`
+|Enforces a join order of tables in SQL queries. If set to `true`, the query optimizer does not reorder tables in the join.
+|`false`
+
+|`PROTOCOL_VERSION`
+|The ODBC protocol version to use. Currently, the following versions are available: `2.1.0`, `2.1.5`, `2.3.0`, `2.3.2`, `2.5.0`. You can use earlier versions of the protocol for backward compatibility.
+|`2.3.0`
+
+|`REPLICATED_ONLY`
+|Set this property to `true` if the query is to be executed only over fully replicated tables. This is a hint that can enable execution optimizations.
+|`false`
+
+|`COLLOCATED`
+| Set this parameter to `true` if your SQL statement includes a GROUP BY clause that groups the results by either primary
+or affinity key. When Ignite executes a distributed query, it sends sub-queries to individual cluster members. If
+you know in advance that the elements of your query selection are colocated together on the same node and you group by
+a primary or affinity key, then Ignite makes significant performance and network optimizations by grouping data locally
+on each node participating in the query.
+|`false`
+
+|`LAZY`
+|Lazy query execution.
+By default, Ignite attempts to fetch the whole query result set into memory and send it to the client. For small and medium result sets, this provides optimal performance and minimizes the duration of internal database locks, thus increasing concurrency.
+However, if the result set is too big to fit in the available memory, it can lead to excessive GC pauses and even `OutOfMemoryError` errors. Use this flag to tell Ignite to fetch the result set lazily, thus minimizing memory consumption at the cost of a moderate performance hit.
+|`false`
+
+|`SKIP_REDUCER_ON_UPDATE`
+|Enables the server-side update feature.
+When Ignite executes a DML operation, it first fetches all the affected intermediate rows to the query initiator (also known as the reducer) for analysis, and only then prepares batches of updated values to be sent to remote nodes.
+This approach might affect performance, and it can saturate the network if a DML operation has to move many entries over it.
+Use this flag to tell Ignite to perform all intermediate row analysis and updates "in-place" on the corresponding remote data nodes.
+Defaults to `false`, meaning that intermediate results are fetched to the query initiator first.
+|`false`
+
+|`SSL_MODE`
+|Determines whether the SSL connection should be negotiated with the server. Use `require` or `disable` mode as needed.
+| None.
+
+|`SSL_KEY_FILE`
+|Specifies the name of the file containing the SSL server private key.
+| None.
+
+|`SSL_CERT_FILE`
+|Specifies the name of the file containing the SSL server certificate.
+| None.
+
+|`SSL_CA_FILE`
+|Specifies the name of the file containing the SSL server certificate authority (CA).
+| None.
+|=======================================================================
+
+== Connection String Samples
+You can find samples of the connection string below. These strings can be used with `SQLDriverConnect` ODBC call to establish connection with a node.
+
+
+[tabs]
+--
+tab:Authentication[]
+[source,text]
+----
+DRIVER={Apache Ignite};
+ADDRESS=localhost:10800;
+SCHEMA=somecachename;
+USER=yourusername;
+PASSWORD=yourpassword;
+SSL_MODE=[require|disable];
+SSL_KEY_FILE=<path_to_private_key>;
+SSL_CERT_FILE=<path_to_client_certificate>;
+SSL_CA_FILE=<path_to_trusted_certificates>
+----
+
+tab:Specific Cache[]
+[source,text]
+----
+DRIVER={Apache Ignite};ADDRESS=localhost:10800;CACHE=yourCacheName
+----
+
+tab:Default cache[]
+[source,text]
+----
+DRIVER={Apache Ignite};ADDRESS=localhost:10800
+----
+
+tab:DSN[]
+[source,text]
+----
+DSN=MyIgniteDSN
+----
+
+tab:Custom page size[]
+[source,text]
+----
+DRIVER={Apache Ignite};ADDRESS=example.com:12901;CACHE=MyCache;PAGE_SIZE=4096
+----
+--
+
+
+
+== Configuring DSN
+The same arguments apply if you prefer to use link:https://en.wikipedia.org/wiki/Data_source_name[DSN] (Data Source Name) for connection purposes.
+
+To configure a DSN on Windows, use the ODBC Data Source Administrator system tool (`odbcad32.exe`). On 64-bit Windows, the 64-bit version of the tool is located in `%windir%\System32` and the 32-bit version in `%windir%\SysWOW64`; use the version that matches the driver you installed.
+
+When installing the driver, _if you use the pre-built msi file_, make sure you've installed the Microsoft Visual C++ 2010 Redistributable Package (https://www.microsoft.com/en-ie/download/details.aspx?id=5555[32-bit/x86] or https://www.microsoft.com/en-us/download/details.aspx?id=14632[64-bit/x64]).
+
+Launch this tool via `Control Panel->Administrative Tools->Data Sources (ODBC)`. Once the ODBC Data Source Administrator is launched, select `Add...->Apache Ignite` and configure your DSN.
+
+
+image::images/odbc_dsn_configuration.png[Configuring DSN]
+
+
+To do the same on Linux, you have to locate the `odbc.ini` file. The file location varies among Linux distributions and depends on the specific Driver Manager used by the distribution. As an example, if you are using unixODBC, you can run the following command, which prints system-wide ODBC-related details:
+
+
+[source,text]
+----
+odbcinst -j
+----
+
+
+Use the `SYSTEM DATA SOURCES` and `USER DATA SOURCES` properties to locate the `odbc.ini` file.
+
+Once you locate the `odbc.ini` file, open it with the editor of your choice and add the DSN section to it, as shown below:
+
+[source,text]
+----
+[DSN Name]
+description=<Insert your description here>
+driver=Apache Ignite
+<Other arguments here...>
+----
+
+
diff --git a/docs/_docs/SQL/ODBC/data-types.adoc b/docs/_docs/SQL/ODBC/data-types.adoc
new file mode 100644
index 0000000..ab2d8e1
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/data-types.adoc
@@ -0,0 +1,38 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Data Types
+
+The following SQL data types, listed in the link:https://docs.microsoft.com/en-us/sql/odbc/reference/appendixes/sql-data-types[ODBC specification], are supported:
+
+- `SQL_CHAR`
+- `SQL_VARCHAR`
+- `SQL_LONGVARCHAR`
+- `SQL_SMALLINT`
+- `SQL_INTEGER`
+- `SQL_FLOAT`
+- `SQL_DOUBLE`
+- `SQL_BIT`
+- `SQL_TINYINT`
+- `SQL_BIGINT`
+- `SQL_BINARY`
+- `SQL_VARBINARY`
+- `SQL_LONGVARBINARY`
+- `SQL_GUID`
+- `SQL_DECIMAL`
+- `SQL_TYPE_DATE`
+- `SQL_TYPE_TIMESTAMP`
+- `SQL_TYPE_TIME`
diff --git a/docs/_docs/SQL/ODBC/error-codes.adoc b/docs/_docs/SQL/ODBC/error-codes.adoc
new file mode 100644
index 0000000..a1d29ce
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/error-codes.adoc
@@ -0,0 +1,155 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Error Codes
+
+To get an error code, use the `SQLGetDiagRec()` function. It returns a string containing the ANSI SQLSTATE error code. For example:
+
+[source,c++]
+----
+SQLHENV env;
+SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
+
+SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, reinterpret_cast<void*>(SQL_OV_ODBC3), 0);
+
+SQLHDBC dbc;
+SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
+
+SQLCHAR connectStr[] = "DRIVER={Apache Ignite};SERVER=localhost;PORT=10800;SCHEMA=Person;";
+SQLDriverConnect(dbc, NULL, connectStr, SQL_NTS, 0, 0, 0, SQL_DRIVER_COMPLETE);
+
+SQLHSTMT stmt;
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query[] = "SELECT firstName, lastName, resume, salary FROM Person";
+SQLRETURN ret = SQLExecDirect(stmt, query, SQL_NTS);
+
+if (ret != SQL_SUCCESS)
+{
+	SQLCHAR sqlstate[7] = "";
+	SQLINTEGER nativeCode;
+
+	SQLCHAR message[1024];
+	SQLSMALLINT reallen = 0;
+
+	int i = 1;
+	ret = SQLGetDiagRec(SQL_HANDLE_STMT, stmt, i, sqlstate,
+                      &nativeCode, message, sizeof(message), &reallen);
+
+	while (ret != SQL_NO_DATA)
+	{
+		std::cout << sqlstate << ": " << message;
+
+		++i;
+		ret = SQLGetDiagRec(SQL_HANDLE_STMT, stmt, i, sqlstate,
+                        &nativeCode, message, sizeof(message), &reallen);
+	}
+}
+----
+
+The table below lists all the error codes currently supported by Ignite. This list may be extended in the future.
+
+[width="100%",cols="20%,80%"]
+|=======================================================================
+|Code |Description
+
+|01S00
+|Invalid connection string attribute.
+
+|01S02
+|The driver did not support the specified value and substituted a similar value.
+
+|08001
+|The driver failed to open a connection to the cluster.
+
+|08002
+|The connection is already established.
+
+|08003
+|The connection is in the closed state. Happened unexpectedly.
+
+|08004
+|The connection is rejected by the cluster.
+
+|08S01
+|Connection failure.
+
+|22026
+|String length mismatch in data-at-execution dialog.
+
+|23000
+|Integrity constraint violation (e.g. duplicate key, null key and so on).
+
+|24000
+|Invalid cursor state.
+
+|42000
+|Syntax error in request.
+
+|42S01
+|Table already exists.
+
+|42S02
+|Table not found.
+
+|42S11
+|Index already exists.
+
+|42S12
+|Index not found.
+
+|42S21
+|Column already exists.
+
+|42S22
+|Column not found.
+
+|HY000
+|General error. See error message for details.
+
+|HY001
+|Memory allocation error.
+
+|HY003
+|Invalid application buffer type.
+
+|HY004
+|Invalid SQL data type.
+
+|HY009
+|Invalid use of null-pointer.
+
+|HY010
+|Function call sequence error.
+
+|HY090
+|Invalid string or buffer length (e.g. negative or zero length).
+
+|HY092
+|Option type out of range.
+
+|HY097
+|Column type out of range.
+
+|HY105
+|Invalid parameter type.
+
+|HY106
+|Fetch type out of range.
+
+|HYC00
+|Feature is not implemented.
+
+|IM001
+|Function is not supported.
+|=======================================================================
diff --git a/docs/_docs/SQL/ODBC/odbc-driver.adoc b/docs/_docs/SQL/ODBC/odbc-driver.adoc
new file mode 100644
index 0000000..9f4e9b8
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/odbc-driver.adoc
@@ -0,0 +1,343 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= ODBC Driver
+
+== Overview
+Ignite includes an ODBC driver that allows you both to select and to modify data stored in a distributed cache using standard SQL queries and native ODBC API.
+
+For detailed information on ODBC please refer to link:https://msdn.microsoft.com/en-us/library/ms714177.aspx[ODBC Programmer's Reference].
+
+The ODBC driver implements version 3.0 of the ODBC API.
+
+== Cluster Configuration
+
+The ODBC driver is treated as a dynamic library on Windows and a shared object on Linux. An application does not load it directly. Instead, it uses the Driver Manager API that loads and unloads ODBC drivers whenever required.
+
+Internally, the ODBC driver uses TCP to connect to a cluster. The cluster-side connection parameters can be configured via the `IgniteConfiguration.clientConnectorConfiguration` property.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+    <property name="clientConnectorConfiguration">
+        <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration"/>
+    </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+ClientConnectorConfiguration clientConnectorCfg = new ClientConnectorConfiguration();
+cfg.setClientConnectorConfiguration(clientConnectorCfg);
+
+----
+--
+
+Client connector configuration supports the following properties:
+
+[width="100%",cols="20%,60%,20%"]
+|=======================================================================
+|Parameter |Description |Default Value
+
+|`host`
+|Host name or IP address to bind to. When set to `null`, the node binds to `localhost`.
+|`null`
+
+|`port`
+|TCP port to bind to. If the specified port is already in use, Ignite tries to find another available port using the `portRange` property.
+|`10800`
+
+|`portRange`
+|Defines the number of ports to try to bind to. For example, if the port is set to `10800` and `portRange` is `100`, the server sequentially tries to bind to each port in the `[10800, 10900]` range until it finds a free port.
+|`100`
+
+|`maxOpenCursorsPerConnection`
+|Maximum number of cursors that can be opened simultaneously for a single connection.
+|`128`
+
+|`threadPoolSize`
+|Number of request-handling threads in the thread pool.
+|`MAX(8, CPU cores)`
+
+|`socketSendBufferSize`
+|Size of the TCP socket send buffer. When set to 0, the system default value is used.
+|`0`
+
+|`socketReceiveBufferSize`
+|Size of the TCP socket receive buffer. When set to 0, the system default value is used.
+|`0`
+
+|`tcpNoDelay`
+|Whether to use the `TCP_NODELAY` option.
+|`true`
+
+|`idleTimeout`
+|Idle timeout for client connections.
+Clients are disconnected automatically from the server after being idle for the configured timeout.
+When this parameter is set to zero or a negative value, the idle timeout is disabled.
+|`0`
+
+|`isOdbcEnabled`
+|Whether access through ODBC is enabled.
+|`true`
+
+|`isThinClientEnabled`
+|Whether access through thin client is enabled.
+|`true`
+|=======================================================================
+
+
+You can change these parameters as shown in the example below:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/odbc.xml[tags=ignite-config;!discovery,indent=0]
+----
+
+tab:Java[]
+[source,java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+...
+ClientConnectorConfiguration clientConnectorCfg = new ClientConnectorConfiguration();
+
+clientConnectorCfg.setHost("127.0.0.1");
+clientConnectorCfg.setPort(12345);
+clientConnectorCfg.setPortRange(2);
+clientConnectorCfg.setMaxOpenCursorsPerConnection(512);
+clientConnectorCfg.setSocketSendBufferSize(65536);
+clientConnectorCfg.setSocketReceiveBufferSize(131072);
+clientConnectorCfg.setThreadPoolSize(4);
+
+cfg.setClientConnectorConfiguration(clientConnectorCfg);
+...
+----
+--
+
+A connection established from the ODBC driver side to the cluster via `ClientListenerProcessor` is also configurable. Find more details on how to alter connection settings from the driver side link:SQL/ODBC/connection-string-dsn[here].
+
+== Thread-Safety
+
+The current implementation of the Ignite ODBC driver provides thread safety only at the connection level. This means that you should not access the same connection from multiple threads without additional synchronization, though you can create separate connections for every thread and use them simultaneously.
+
+== Prerequisites
+
+Apache Ignite ODBC Driver was officially tested on:
+
+[cols="1,3a"]
+|===
+|OS
+|- Windows (XP and up, both 32-bit and 64-bit versions)
+- Windows Server (2008 and up, both 32-bit and 64-bit versions)
+- Ubuntu (18.04 64-bit)
+
+|C++ compiler
+
+|MS Visual C++ (10.0 and up), g++ (4.4.0 and up)
+
+|Visual Studio
+
+|2010 and above
+|===
+
+== Building ODBC Driver
+
+Ignite is shipped with pre-built installers for both the 32-bit and 64-bit versions of the driver for Windows. So, if you just want to install the ODBC driver on Windows, go straight to the <<Installing ODBC Driver>> section for installation instructions.
+
+If you use Linux, you need to build the ODBC driver before you can install it. So, if you are using Linux, or if you want to build the driver yourself for Windows, keep reading.
+
+The ODBC driver source code is shipped as part of the Ignite package and must be built before use.
+
+Since the ODBC Driver is written in {cpp}, it is shipped as part of Ignite {cpp} and depends on some of the {cpp} libraries. More specifically, it depends on the `utils` and `binary` Ignite libraries. This means that you will need to build them prior to building the ODBC driver itself.
+
+We assume here that you are using the binary Ignite release. If you are using the source release, instead of `%IGNITE_HOME%\platforms\cpp` path you should use `%IGNITE_HOME%\modules\platforms\cpp` throughout.
+
+=== Building on Windows
+
+You will need MS Visual Studio 2010 or later to build the ODBC driver on Windows. Once you have it, open the Ignite solution `%IGNITE_HOME%\platforms\cpp\project\vs\ignite.sln` (or `ignite_86.sln` if you are running a 32-bit platform), right-click the odbc project in the "Solution Explorer", and choose "Build". Visual Studio will automatically detect and build all the necessary dependencies.
+
+The path to the .sln file may vary depending on whether you're building from source files or binaries. If you don't see your .sln file in `%IGNITE_HOME%\platforms\cpp\project\vs\`, try looking in `%IGNITE_HOME%\modules\platforms\cpp\project\vs\`.
+
+NOTE: If you are using VS 2015 or later (MSVC 14.0 or later), you need to add `legacy_stdio_definitions.lib` as an additional library to the odbc project's linker settings in order to build the project. To add this library to the linker input in the IDE, open the context menu for the project node, choose `Properties`, then in the `Project Properties` dialog box choose `Linker`, and edit the `Linker Input` to add `legacy_stdio_definitions.lib` to the semicolon-separated list.
+
+Once the build process is complete, you can find `ignite.odbc.dll` in `%IGNITE_HOME%\platforms\cpp\project\vs\x64\Release` for the 64-bit version and in `%IGNITE_HOME%\platforms\cpp\project\vs\Win32\Release` for the 32-bit version.
+
+NOTE: Be sure to use the corresponding driver (32-bit or 64-bit) for your system.
+
+=== Building installers on Windows
+
+Once you have built the driver binaries, you may want to build installers for easier installation. Ignite uses the link:http://wixtoolset.org[WiX Toolset] to generate ODBC installers, so to build them you need to download and install WiX. Make sure you have added the `bin` directory of the WiX Toolset to your PATH variable.
+
+Once everything is ready, open a terminal and navigate to the directory `%IGNITE_HOME%\platforms\cpp\odbc\install`. Execute the following commands one by one to build installers:
+
+
+[tabs]
+--
+tab:64-bit driver[]
+[source,shell]
+----
+candle.exe ignite-odbc-amd64.wxs
+light.exe -ext WixUIExtension ignite-odbc-amd64.wixobj
+----
+
+tab:32-bit driver[]
+[source,shell]
+----
+candle.exe ignite-odbc-x86.wxs
+light.exe -ext WixUIExtension ignite-odbc-x86.wixobj
+----
+--
+
+As a result, `ignite-odbc-amd64.msi` and `ignite-odbc-x86.msi` files should appear in the directory. You can use them to install your freshly built drivers.
+
+=== Building on Linux
+
+On a Linux-based operating system, you will need to install an ODBC Driver Manager of your choice to be able to build and use the Ignite ODBC Driver. The ODBC Driver has been tested with link:http://www.unixodbc.org[UnixODBC].
+
+==== Prerequisites
+include::includes/cpp-linux-build-prerequisites.adoc[]
+
+NOTE: The JDK is used only during the build process and not by the ODBC driver itself.
+
+==== Building ODBC driver
+- Create a build directory for cmake. We'll refer to it as `${CPP_BUILD_DIR}`
+- (Optional) Choose installation directory prefix (by default `/usr/local`). We'll refer to it as `${CPP_INSTALL_DIR}`
+- Build and install the driver by executing the following commands:
+
+[tabs]
+--
+tab:Ubuntu[]
+[source,bash,subs="attributes,specialchars"]
+----
+cd ${CPP_BUILD_DIR}
+cmake -DCMAKE_BUILD_TYPE=Release -DWITH_ODBC=ON ${IGNITE_HOME}/platforms/cpp -DCMAKE_INSTALL_PREFIX=${CPP_INSTALL_DIR}
+make
+sudo make install
+----
+
+tab:CentOS/RHEL[]
+[source,shell,subs="attributes,specialchars"]
+----
+cd ${CPP_BUILD_DIR}
+cmake3 -DCMAKE_BUILD_TYPE=Release -DWITH_ODBC=ON  ${IGNITE_HOME}/platforms/cpp -DCMAKE_INSTALL_PREFIX=${CPP_INSTALL_DIR}
+make 
+sudo make install
+----
+
+--
+
+After the build process is over, you can find out where your ODBC driver has been placed by running the following command:
+
+[source,shell]
+----
+whereis libignite-odbc
+----
+
+The path should look something like: `/usr/local/lib/libignite-odbc.so`
+
+== Installing ODBC Driver
+
+In order to use the ODBC driver, you need to register it in your system so that your ODBC Driver Manager can locate it.
+
+=== Installing on Windows
+
+For 32-bit Windows, use the 32-bit version of the driver. For 64-bit Windows, you can use either the 64-bit or the 32-bit driver. You may want to install both drivers on 64-bit Windows to be able to use the driver from both 32-bit and 64-bit applications.
+
+==== Installing using installers
+
+NOTE: Microsoft Visual C++ 2010 Redistributable Package for 32-bit or 64-bit should be installed first.
+
+This is the easiest way and should be used by default. Just launch the installer for the version of the driver that you need and follow the instructions:
+
+- 32-bit installer: `%IGNITE_HOME%\platforms\cpp\bin\odbc\ignite-odbc-x86.msi`
+- 64-bit installer: `%IGNITE_HOME%\platforms\cpp\bin\odbc\ignite-odbc-amd64.msi`
+
+==== Installing manually
+
+To install the ODBC driver on Windows manually, first choose a directory on your
+file system where the driver (or drivers) will be located. Once you have
+chosen the location, put the driver there and ensure that all of its
+dependencies can be resolved, i.e., they can be found either in the `%PATH%` or
+in the same directory where the driver DLL resides.
+
+After that, use one of the install scripts from the
+`%IGNITE_HOME%/platforms/cpp/odbc/install` directory. Note that you may need OS administrator privileges to execute these scripts.
+
+[tabs]
+--
+tab:x86[]
+[source,shell]
+----
+install_x86 <absolute_path_to_32_bit_driver>
+----
+
+tab:AMD64[]
+[source,shell]
+----
+install_amd64 <absolute_path_to_64_bit_driver> [<absolute_path_to_32_bit_driver>]
+----
+
+--
+
+
+=== Installing on Linux
+
+To build and install the ODBC driver on Linux, you first need to install
+an ODBC Driver Manager. The ODBC driver has been tested with link:http://www.unixodbc.org[UnixODBC].
+
+Once you have built the driver and run the `make install` command, the ODBC driver, i.e. `libignite-odbc.so`, is placed in the `/usr/local/lib` folder. To install it in your Driver Manager and be able to use it, perform the following steps:
+
+- Ensure that the linker can locate all dependencies of the ODBC driver. You can check this by using the `ldd` command. Assuming the ODBC driver is located under `/usr/local/lib`:
++
+`ldd /usr/local/lib/libignite-odbc.so`
++
+If there are unresolved links to other libraries, you may want to add the directories containing these libraries to `LD_LIBRARY_PATH`.
+
+- Edit the `${IGNITE_HOME}/platforms/cpp/odbc/install/ignite-odbc-install.ini` file and ensure that the `Driver` parameter of the `Apache Ignite` section points to the location of `libignite-odbc.so`.
+
+- To install the ODBC driver, use the following command:
+
+[source,shell]
+----
+odbcinst -i -d -f ${IGNITE_HOME}/platforms/cpp/odbc/install/ignite-odbc-install.ini
+----
+You may need root privileges to execute this command.
+
+Now the Apache Ignite ODBC driver is installed and ready for use. You can connect to it and use it just like any other ODBC driver.
+
+
diff --git a/docs/_docs/SQL/ODBC/querying-modifying-data.adoc b/docs/_docs/SQL/ODBC/querying-modifying-data.adoc
new file mode 100644
index 0000000..bfe7834
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/querying-modifying-data.adoc
@@ -0,0 +1,491 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Querying and Modifying Data
+
+== Overview
+This page elaborates on how to connect to a cluster and execute a variety of SQL queries using the ODBC driver.
+
+
+At the implementation layer, the ODBC driver uses SQL Fields queries to retrieve data from the cluster.
+This means that from the ODBC side you can access only those fields that are link:SQL/sql-api#configuring-queryable-fields[defined in the cluster configuration].
+
+Moreover, the ODBC driver supports DML (Data Manipulation Language), which means that you can modify your data using an ODBC connection.
+
+NOTE: Refer to the link:{githubUrl}/modules/platforms/cpp/examples/odbc-example[ODBC example] that incorporates complete logic and exemplary queries described below.
+
+== Configuring the Cluster
+As the first step, you need to set up a configuration that will be used by the cluster nodes.
+The configuration should also include cache configurations with properly defined `QueryEntities` properties.
+`QueryEntities` are essential when your application (or the ODBC driver in our scenario) is going to query and modify the data using SQL statements.
+Alternatively, you can create tables using DDL.
+
+[tabs]
+--
+tab:DDL[]
+[source,cpp]
+----
+SQLHENV env;
+
+// Allocate an environment handle
+SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
+
+// Use ODBC ver 3
+SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, reinterpret_cast<void*>(SQL_OV_ODBC3), 0);
+
+SQLHDBC dbc;
+
+// Allocate a connection handle
+SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
+
+// Prepare the connection string
+SQLCHAR connectStr[] = "DSN=My Ignite DSN";
+
+// Connecting to the Cluster.
+SQLDriverConnect(dbc, NULL, connectStr, SQL_NTS, NULL, 0, NULL, SQL_DRIVER_COMPLETE);
+
+SQLHSTMT stmt;
+
+// Allocate a statement handle
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query1[] = "CREATE TABLE Person ( "
+    "id LONG PRIMARY KEY, "
+    "firstName VARCHAR, "
+    "lastName VARCHAR, "
+    "salary FLOAT) "
+    "WITH \"template=partitioned\"";
+
+SQLExecDirect(stmt, query1, SQL_NTS);
+
+SQLCHAR query2[] = "CREATE TABLE Organization ( "
+    "id LONG PRIMARY KEY, "
+    "name VARCHAR) "
+    "WITH \"template=partitioned\"";
+
+SQLExecDirect(stmt, query2, SQL_NTS);
+
+SQLCHAR query3[] = "CREATE INDEX idx_organization_name ON Organization (name)";
+
+SQLExecDirect(stmt, query3, SQL_NTS);
+----
+
+tab:Spring XML[]
+[source,xml]
+----
+include::code-snippets/xml/odbc-cache-config.xml[tags=ignite-config;!discovery, indent=0]
+----
+--
+
+As you can see, we defined two caches that will contain the data of `Person` and `Organization` types.
+For both types, we listed specific fields and indexes that will be read or updated using SQL.
+
+
+== Connecting to the Cluster
+
+After the cluster is configured and started, we can connect to it from the ODBC driver side. To do this, you need to prepare a valid connection string and pass it as a parameter to the ODBC driver at the connection time. Refer to the link:SQL/ODBC/connection-string-dsn[Connection String] page for more details.
+
+Alternatively, you can also use a link:SQL/ODBC/connection-string-dsn#configuring-dsn[pre-configured DSN] for connection purposes as shown in the example below.
+
+
+[source,c++]
+----
+SQLHENV env;
+
+// Allocate an environment handle
+SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
+
+// Use ODBC ver 3
+SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, reinterpret_cast<void*>(SQL_OV_ODBC3), 0);
+
+SQLHDBC dbc;
+
+// Allocate a connection handle
+SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
+
+// Prepare the connection string
+SQLCHAR connectStr[] = "DSN=My Ignite DSN";
+
+// Connecting to Ignite Cluster.
+SQLRETURN ret = SQLDriverConnect(dbc, NULL, connectStr, SQL_NTS, NULL, 0, NULL, SQL_DRIVER_COMPLETE);
+
+if (!SQL_SUCCEEDED(ret))
+{
+  SQLCHAR sqlstate[7] = { 0 };
+  SQLINTEGER nativeCode;
+
+  SQLCHAR errMsg[BUFFER_SIZE] = { 0 };
+  SQLSMALLINT errMsgLen = static_cast<SQLSMALLINT>(sizeof(errMsg));
+
+  SQLGetDiagRec(SQL_HANDLE_DBC, dbc, 1, sqlstate, &nativeCode, errMsg, errMsgLen, &errMsgLen);
+
+  std::cerr << "Failed to connect to Ignite: "
+            << reinterpret_cast<char*>(sqlstate) << ": "
+            << reinterpret_cast<char*>(errMsg) << ", "
+            << "Native error code: " << nativeCode
+            << std::endl;
+
+  // Releasing allocated handles.
+  SQLFreeHandle(SQL_HANDLE_DBC, dbc);
+  SQLFreeHandle(SQL_HANDLE_ENV, env);
+
+  return;
+}
+----
+
+
+== Querying Data
+
+After everything is up and running, we're ready to execute SQL `SELECT` queries using the ODBC API.
+
+[source,c++]
+----
+SQLHSTMT stmt;
+
+// Allocate a statement handle
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query[] = "SELECT firstName, lastName, salary, Organization.name FROM Person "
+  "INNER JOIN \"Organization\".Organization ON Person.orgId = Organization.id";
+SQLSMALLINT queryLen = static_cast<SQLSMALLINT>(sizeof(query));
+
+SQLRETURN ret = SQLExecDirect(stmt, query, queryLen);
+
+if (!SQL_SUCCEEDED(ret))
+{
+  SQLCHAR sqlstate[7] = { 0 };
+  SQLINTEGER nativeCode;
+
+  SQLCHAR errMsg[BUFFER_SIZE] = { 0 };
+  SQLSMALLINT errMsgLen = static_cast<SQLSMALLINT>(sizeof(errMsg));
+
+  SQLGetDiagRec(SQL_HANDLE_DBC, dbc, 1, sqlstate, &nativeCode, errMsg, errMsgLen, &errMsgLen);
+
+  std::cerr << "Failed to perfrom SQL query: "
+            << reinterpret_cast<char*>(sqlstate) << ": "
+            << reinterpret_cast<char*>(errMsg) << ", "
+            << "Native error code: " << nativeCode
+            << std::endl;
+}
+else
+{
+  // Printing the result set.
+  struct OdbcStringBuffer
+  {
+    SQLCHAR buffer[BUFFER_SIZE];
+    SQLLEN resLen;
+  };
+
+  // Getting a number of columns in the result set.
+  SQLSMALLINT columnsCnt = 0;
+  SQLNumResultCols(stmt, &columnsCnt);
+
+  // Allocating buffers for columns.
+  std::vector<OdbcStringBuffer> columns(columnsCnt);
+
+  // Binding columns. For simplicity, we are going to use only
+  // string buffers here.
+  for (SQLSMALLINT i = 0; i < columnsCnt; ++i)
+    SQLBindCol(stmt, i + 1, SQL_C_CHAR, columns[i].buffer, BUFFER_SIZE, &columns[i].resLen);
+
+  // Fetching and printing data in a loop.
+  ret = SQLFetch(stmt);
+  while (SQL_SUCCEEDED(ret))
+  {
+    for (size_t i = 0; i < columns.size(); ++i)
+      std::cout << std::setw(16) << std::left << columns[i].buffer << " ";
+
+    std::cout << std::endl;
+
+    ret = SQLFetch(stmt);
+  }
+}
+
+// Releasing statement handle.
+SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+----
+
+
+[NOTE]
+====
+[discrete]
+=== Columns binding
+
+In the example above, we bind all columns to `SQL_C_CHAR` buffers. This means that all values are converted to strings upon fetching. This is done for the sake of simplicity. Value conversion upon fetching can be pretty slow, so your default decision should be to fetch the value in the same format in which it is stored.
+====
+
+== Inserting Data
+
+To insert new data into the cluster, SQL `INSERT` statements can be used from the ODBC side.
+
+
+[source,c++]
+----
+SQLHSTMT stmt;
+
+// Allocate a statement handle
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query[] =
+	"INSERT INTO Person (id, orgId, firstName, salary) "
+	"VALUES (?, ?, ?, ?)";
+
+SQLPrepare(stmt, query, static_cast<SQLSMALLINT>(sizeof(query)));
+
+// Binding columns.
+int64_t key = 0;
+int64_t orgId = 0;
+char name[1024] = { 0 };
+SQLLEN nameLen = SQL_NTS;
+double salary = 0.0;
+
+SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT, 0, 0, &key, 0, 0);
+SQLBindParameter(stmt, 2, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT, 0, 0, &orgId, 0, 0);
+SQLBindParameter(stmt, 3, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,	sizeof(name), sizeof(name), name, 0, &nameLen);
+SQLBindParameter(stmt, 4, SQL_PARAM_INPUT, SQL_C_DOUBLE, SQL_DOUBLE, 0, 0, &salary, 0, 0);
+
+// Filling cache.
+key = 1;
+orgId = 1;
+strncpy(name, "John", sizeof(name));
+salary = 2200.0;
+
+SQLExecute(stmt);
+SQLMoreResults(stmt);
+
+++key;
+orgId = 1;
+strncpy(name, "Jane", sizeof(name));
+salary = 1300.0;
+
+SQLExecute(stmt);
+SQLMoreResults(stmt);
+
+++key;
+orgId = 2;
+strncpy(name, "Richard", sizeof(name));
+salary = 900.0;
+
+SQLExecute(stmt);
+SQLMoreResults(stmt);
+
+++key;
+orgId = 2;
+strncpy(name, "Mary", sizeof(name));
+salary = 2400.0;
+
+SQLExecute(stmt);
+
+// Releasing statement handle.
+SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+----
+
+
+Next, we are going to insert additional organizations without using prepared statements.
+
+
+[source,c++]
+----
+SQLHSTMT stmt;
+
+// Allocate a statement handle
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query1[] = "INSERT INTO \"Organization\".Organization (id, name) VALUES (1L, 'Some company')";
+
+SQLExecDirect(stmt, query1, static_cast<SQLSMALLINT>(sizeof(query1)));
+
+SQLFreeStmt(stmt, SQL_CLOSE);
+
+SQLCHAR query2[] = "INSERT INTO \"Organization\".Organization (id, name) VALUES (2L, 'Some other company')";
+
+SQLExecDirect(stmt, query2, static_cast<SQLSMALLINT>(sizeof(query2)));
+
+// Releasing statement handle.
+SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+----
+
+
+[WARNING]
+====
+[discrete]
+=== Error Checking
+
+For simplicity, the example code above does not check for an error return code. You will want to do error checking in production.
+====
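+
+For reference, a small helper along the following lines can wrap each ODBC call. The helper name is illustrative and not part of the driver API:
+
+[source,c++]
+----
+// Prints the first diagnostic record for a failed ODBC call.
+void CheckError(SQLRETURN ret, SQLSMALLINT handleType, SQLHANDLE handle)
+{
+  if (SQL_SUCCEEDED(ret))
+    return;
+
+  SQLCHAR sqlstate[7] = { 0 };
+  SQLCHAR message[1024] = { 0 };
+  SQLINTEGER nativeCode = 0;
+  SQLSMALLINT messageLen = 0;
+
+  SQLGetDiagRec(handleType, handle, 1, sqlstate, &nativeCode,
+      message, static_cast<SQLSMALLINT>(sizeof(message)), &messageLen);
+
+  std::cerr << reinterpret_cast<char*>(sqlstate) << ": "
+            << reinterpret_cast<char*>(message) << std::endl;
+}
+----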
+
+== Updating Data
+
+Let's now update the salary for some of the persons stored in the cluster using an SQL `UPDATE` statement.
+
+
+[source,c++]
+----
+void AdjustSalary(SQLHDBC dbc, int64_t key, double salary)
+{
+  SQLHSTMT stmt;
+
+  // Allocate a statement handle
+  SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+  SQLCHAR query[] = "UPDATE Person SET salary=? WHERE id=?";
+
+  SQLBindParameter(stmt, 1, SQL_PARAM_INPUT,
+      SQL_C_DOUBLE, SQL_DOUBLE, 0, 0, &salary, 0, 0);
+
+  SQLBindParameter(stmt, 2, SQL_PARAM_INPUT, SQL_C_SBIGINT,
+      SQL_BIGINT, 0, 0, &key, 0, 0);
+
+  SQLExecDirect(stmt, query, static_cast<SQLSMALLINT>(sizeof(query)));
+
+  // Releasing statement handle.
+  SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+}
+
+...
+AdjustSalary(dbc, 3, 1200.0);
+AdjustSalary(dbc, 1, 2500.0);
+----
+
+== Deleting Data
+
+Finally, let's remove a few records with the help of the SQL `DELETE` statement.
+
+[source,c++]
+----
+void DeletePerson(SQLHDBC dbc, int64_t key)
+{
+  SQLHSTMT stmt;
+
+  // Allocate a statement handle
+  SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+  SQLCHAR query[] = "DELETE FROM Person WHERE id=?";
+
+  SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SBIGINT, SQL_BIGINT,
+      0, 0, &key, 0, 0);
+
+  SQLExecDirect(stmt, query, static_cast<SQLSMALLINT>(sizeof(query)));
+
+  // Releasing statement handle.
+  SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+}
+
+...
+DeletePerson(dbc, 1);
+DeletePerson(dbc, 4);
+----
+
+== Batching With Arrays of Parameters
+
+The ODBC driver supports batching with link:https://docs.microsoft.com/en-us/sql/odbc/reference/develop-app/using-arrays-of-parameters[arrays of parameters] for DML statements.
+
+Let's insert the same records as in the example above, but this time with a single `SQLExecute` call:
+
+[source,c++]
+----
+SQLHSTMT stmt;
+
+// Allocating a statement handle.
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query[] =
+	"INSERT INTO Person (id, orgId, firstName, salary) "
+	"VALUES (?, ?, ?, ?)";
+
+SQLPrepare(stmt, query, static_cast<SQLSMALLINT>(sizeof(query)));
+
+// Binding columns.
+int64_t key[4] = {0};
+int64_t orgId[4] = {0};
+char name[1024 * 4] = {0};
+SQLLEN nameLen[4] = {0};
+double salary[4] = {0};
+
+SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SBIGINT, SQL_BIGINT, 0, 0, key, 0, 0);
+SQLBindParameter(stmt, 2, SQL_PARAM_INPUT, SQL_C_SBIGINT, SQL_BIGINT, 0, 0, orgId, 0, 0);
+SQLBindParameter(stmt, 3, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR, 1024, 0, name, 1024, nameLen);
+SQLBindParameter(stmt, 4, SQL_PARAM_INPUT, SQL_C_DOUBLE, SQL_DOUBLE, 0, 0, salary, 0, 0);
+
+// Filling cache.
+key[0] = 1;
+orgId[0] = 1;
+strncpy(name, "John", 1023);
+salary[0] = 2200.0;
+nameLen[0] = SQL_NTS;
+
+key[1] = 2;
+orgId[1] = 1;
+strncpy(name + 1024, "Jane", 1023);
+salary[1] = 1300.0;
+nameLen[1] = SQL_NTS;
+
+key[2] = 3;
+orgId[2] = 2;
+strncpy(name + 1024 * 2, "Richard", 1023);
+salary[2] = 900.0;
+nameLen[2] = SQL_NTS;
+
+key[3] = 4;
+orgId[3] = 2;
+strncpy(name + 1024 * 3, "Mary", 1023);
+salary[3] = 2400.0;
+nameLen[3] = SQL_NTS;
+
+// Asking the driver to store the total number of processed argument sets
+// in the following variable.
+SQLULEN setsProcessed = 0;
+SQLSetStmtAttr(stmt, SQL_ATTR_PARAMS_PROCESSED_PTR, &setsProcessed, SQL_IS_POINTER);
+
+// Setting the size of the arguments array. This is 4 in our case.
+SQLSetStmtAttr(stmt, SQL_ATTR_PARAMSET_SIZE, reinterpret_cast<SQLPOINTER>(4), 0);
+
+// Executing the statement.
+SQLExecute(stmt);
+
+// Releasing the statement handle.
+SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+----
+
+NOTE: This type of batching is currently supported for `INSERT`, `UPDATE`, `DELETE`, and `MERGE` statements, and does not work for `SELECT` statements. The data-at-execution capability is not supported with arrays-of-parameters batching either.
+
+== Streaming
+
+The ODBC driver allows streaming data in bulk using the `SET` command. See the `SET` link:sql-reference/operational-commands#set-streaming[command documentation] for more information.
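+
+For example, a bulk-load session is typically framed by switching streaming on, running the inserts, and switching it off again so that the buffered data is flushed:
+
+[source,sql]
+----
+SET STREAMING ON;
+
+-- A large series of INSERT statements executed through the driver...
+
+SET STREAMING OFF;
+----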
+
+NOTE: In streaming mode, the array of parameters and data-at-execution parameters are not supported.
+
diff --git a/docs/_docs/SQL/ODBC/specification.adoc b/docs/_docs/SQL/ODBC/specification.adoc
new file mode 100644
index 0000000..68e671b
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/specification.adoc
@@ -0,0 +1,1090 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Specification
+
+== Overview
+
+ODBC defines several interface conformance levels. In this section you can find which features are supported by the Apache Ignite ODBC driver.
+
+== Core Interface Conformance
+
+[width="100%",cols="60%,10%,30%"]
+|=======================================================================
+|Feature |Supported|Comments
+
+|Allocate and free all types of handles, by calling `SQLAllocHandle` and `SQLFreeHandle`.
+|YES
+|
+
+|Use all forms of the `SQLFreeStmt` function.
+|YES
+|
+
+|Bind result set columns, by calling `SQLBindCol`.
+|YES
+|
+
+|Handle dynamic parameters, including arrays of parameters, in the input direction only, by calling `SQLBindParameter` and `SQLNumParams`.
+|YES
+|
+
+|Specify a bind offset.
+|YES
+|
+
+|Use the data-at-execution dialog, involving calls to `SQLParamData` and `SQLPutData`
+|YES
+|
+
+|Manage cursors and cursor names, by calling `SQLCloseCursor`, `SQLGetCursorName`, and `SQLSetCursorName`.
+|PARTIALLY
+|`SQLCloseCursor` is implemented. Named cursors are not supported by Ignite SQL.
+
+|Gain access to the description (metadata) of result sets, by calling `SQLColAttribute`, `SQLDescribeCol`, `SQLNumResultCols`, and `SQLRowCount`.
+|YES
+|
+
+|Query the data dictionary, by calling the catalog functions `SQLColumns`, `SQLGetTypeInfo`, `SQLStatistics`, and `SQLTables`.
+|PARTIALLY
+|`SQLStatistics` is not supported.
+
+|Manage data sources and connections, by calling `SQLConnect`, `SQLDataSources`, `SQLDisconnect`, and `SQLDriverConnect`. Obtain information on drivers, no matter which ODBC level they support, by calling `SQLDrivers`.
+|YES
+|
+
+|Prepare and execute SQL statements, by calling `SQLExecDirect`, `SQLExecute`, and `SQLPrepare`.
+|YES
+|
+
+|Fetch one row of a result set or multiple rows, in the forward direction only, by calling `SQLFetch` or by calling `SQLFetchScroll` with the `FetchOrientation` argument set to `SQL_FETCH_NEXT`
+|YES
+|
+
+|Obtain an unbound column in parts, by calling `SQLGetData`.
+|YES
+|
+
+|Obtain current values of all attributes, by calling `SQLGetConnectAttr`, `SQLGetEnvAttr`, and `SQLGetStmtAttr`, and set all attributes to their default values and set certain attributes to non-default values by calling `SQLSetConnectAttr`, `SQLSetEnvAttr`, and `SQLSetStmtAttr`.
+|PARTIALLY
+|Not all attributes are currently supported. See the tables below for details.
+
+|Manipulate certain fields of descriptors, by calling `SQLCopyDesc`, `SQLGetDescField`, `SQLGetDescRec`, `SQLSetDescField`, and `SQLSetDescRec`.
+|NO
+|
+
+|Obtain diagnostic information, by calling `SQLGetDiagField` and `SQLGetDiagRec`.
+|YES
+|
+
+|Detect driver capabilities, by calling `SQLGetFunctions` and `SQLGetInfo`. Also, detect the result of any text substitutions made to an SQL statement before it is sent to the data source, by calling `SQLNativeSql`.
+|YES
+|
+
+|Use the syntax of `SQLEndTran` to commit a transaction. A Core-level driver need not support true transactions; therefore, the application cannot specify `SQL_ROLLBACK` nor `SQL_AUTOCOMMIT_OFF` for the `SQL_ATTR_AUTOCOMMIT` connection attribute.
+|YES
+|
+
+|Call `SQLCancel` to cancel the data-at-execution dialog and, in multi-thread environments, to cancel an ODBC function executing in another thread. Core-level interface conformance does not mandate support for asynchronous execution of functions, nor the use of `SQLCancel` to cancel an ODBC function executing asynchronously. Neither the platform nor the ODBC driver need be multi-thread for the driver to conduct independent activities at the same time. However, in multi-thread environments, the ODBC driver must be thread-safe. Serialization of requests from the application is a conformant way to implement this specification, even though it might create serious performance problems.
+|NO
+|The current implementation does not support asynchronous execution. Cancellation of the data-at-execution dialog is not supported either.
+
+|Obtain the `SQL_BEST_ROWID` row-identifying column of tables, by calling `SQLSpecialColumns`.
+|PARTIALLY
+|The current implementation always returns an empty row set.
+
+|=======================================================================
+
+
+== Level 1 Interface Conformance
+[width="100%",cols="60%,10%,30%"]
+|=======================================================================
+|Feature |Supported|Comments
+
+|Specify the schema of database tables and views (using two-part naming).
+|YES
+|
+
+|Invoke true asynchronous execution of ODBC functions, where applicable ODBC functions are all synchronous or all asynchronous on a given connection.
+|NO
+|
+
+|Use scrollable cursors, and thereby achieve access to a result set in methods other than forward-only, by calling `SQLFetchScroll` with the `FetchOrientation` argument other than `SQL_FETCH_NEXT`.
+|NO
+|
+
+|Obtain primary keys of tables, by calling `SQLPrimaryKeys`.
+|PARTIALLY
+|Currently returns an empty result set.
+
+|Use stored procedures, through the ODBC escape sequence for procedure calls, and query the data dictionary regarding stored procedures, by calling `SQLProcedureColumns` and `SQLProcedures`.
+|NO
+|
+
+|Connect to a data source by interactively browsing the available servers, by calling `SQLBrowseConnect`.
+|NO
+|
+
+|Use ODBC functions instead of SQL statements to perform certain database operations: `SQLSetPos` with `SQL_POSITION` and `SQL_REFRESH`.
+|NO
+|
+
+|Gain access to the contents of multiple result sets generated by batches and stored procedures, by calling `SQLMoreResults`.
+|YES
+|
+
+|Delimit transactions spanning several ODBC functions, with true atomicity and the ability to specify `SQL_ROLLBACK` in `SQLEndTran`.
+|NO
+|Ignite SQL does not support transactions.
+|=======================================================================
+
+== Level 2 Interface Conformance
+[width="100%",cols="60%,10%,30%"]
+|=======================================================================
+|Feature|Supported|Comments
+
+|Use three-part names of database tables and views.
+|NO
+|Ignite SQL does not support catalogs.
+
+|Describe dynamic parameters, by calling `SQLDescribeParam`.
+|YES
+|
+
+|Use not only input parameters but also output and input/output parameters, and result values of stored procedures.
+|NO
+|Ignite SQL does not support output parameters.
+
+|Use bookmarks, including retrieving bookmarks, by calling `SQLDescribeCol` and `SQLColAttribute` on column number 0; fetching based on a bookmark, by calling `SQLFetchScroll` with the `FetchOrientation` argument set to `SQL_FETCH_BOOKMARK`; and update, delete, and fetch by bookmark operations, by calling `SQLBulkOperations` with the Operation argument set to `SQL_UPDATE_BY_BOOKMARK`, `SQL_DELETE_BY_BOOKMARK`, or `SQL_FETCH_BY_BOOKMARK`.
+|NO
+|Ignite SQL does not support bookmarks.
+
+|Retrieve advanced information about the data dictionary, by calling `SQLColumnPrivileges`, `SQLForeignKeys`, and `SQLTablePrivileges`.
+|PARTIALLY
+|`SQLForeignKeys` is implemented, but returns an empty result set.
+
+|Use ODBC functions instead of SQL statements to perform additional database operations, by calling `SQLBulkOperations` with `SQL_ADD`, or `SQLSetPos` with `SQL_DELETE` or `SQL_UPDATE`.
+|NO
+|
+
+|Enable asynchronous execution of ODBC functions for specified individual statements.
+|NO
+|
+
+|Obtain the `SQL_ROWVER` row-identifying column of tables, by calling `SQLSpecialColumns`.
+|PARTIALLY
+|The current implementation returns an empty row set.
+
+|Set the `SQL_ATTR_CONCURRENCY` statement attribute to at least one value other than `SQL_CONCUR_READ_ONLY`.
+|NO
+|
+
+|The ability to time out login request and SQL queries (`SQL_ATTR_LOGIN_TIMEOUT` and `SQL_ATTR_QUERY_TIMEOUT`).
+|PARTIALLY
+|`SQL_ATTR_QUERY_TIMEOUT` is supported.
+`SQL_ATTR_LOGIN_TIMEOUT` is not implemented yet.
+
+|The ability to change the default isolation level; the ability to execute transactions with the "serializable" level of isolation.
+|NO
+|Ignite does not support SQL transactions.
+|=======================================================================
+
+== Function Support
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Function|Supported|Conformance level
+
+|`SQLAllocHandle`
+|YES
+|Core
+
+|`SQLBindCol`
+|YES
+|Core
+
+|`SQLBindParameter`
+|YES
+|Core
+
+|`SQLBrowseConnect`
+|NO
+|Level 1
+
+|`SQLBulkOperations`
+|NO
+|Level 1
+
+|`SQLCancel`
+|NO
+|Core
+
+|`SQLCloseCursor`
+|YES
+|Core
+
+|`SQLColAttribute`
+|YES
+|Core
+
+|`SQLColumnPrivileges`
+|NO
+|Level 2
+
+|`SQLColumns`
+|YES
+|Core
+
+|`SQLConnect`
+|YES
+|Core
+
+|`SQLCopyDesc`
+|NO
+|Core
+
+|`SQLDataSources`
+|N/A
+|Core
+
+|`SQLDescribeCol`
+|YES
+|Core
+
+|`SQLDescribeParam`
+|YES
+|Level 2
+
+|`SQLDisconnect`
+|YES
+|Core
+
+|`SQLDriverConnect`
+|YES
+|Core
+
+|`SQLDrivers`
+|N/A
+|Core
+
+|`SQLEndTran`
+|PARTIALLY
+|Core
+
+|`SQLExecDirect`
+|YES
+|Core
+
+|`SQLExecute`
+|YES
+|Core
+
+|`SQLFetch`
+|YES
+|Core
+
+|`SQLFetchScroll`
+|YES
+|Core
+
+|`SQLForeignKeys`
+|PARTIALLY
+|Level 2
+
+|`SQLFreeHandle`
+|YES
+|Core
+
+|`SQLFreeStmt`
+|YES
+|Core
+
+|`SQLGetConnectAttr`
+|PARTIALLY
+|Core
+
+|`SQLGetCursorName`
+|NO
+|Core
+
+|`SQLGetData`
+|YES
+|Core
+
+|`SQLGetDescField`
+|NO
+|Core
+
+|`SQLGetDescRec`
+|NO
+|Core
+
+|`SQLGetDiagField`
+|YES
+|Core
+
+|`SQLGetDiagRec`
+|YES
+|Core
+
+|`SQLGetEnvAttr`
+|PARTIALLY
+|Core
+
+|`SQLGetFunctions`
+|NO
+|Core
+
+|`SQLGetInfo`
+|YES
+|Core
+
+|`SQLGetStmtAttr`
+|PARTIALLY
+|Core
+
+|`SQLGetTypeInfo`
+|YES
+|Core
+
+|`SQLMoreResults`
+|YES
+|Level 1
+
+|`SQLNativeSql`
+|YES
+|Core
+
+|`SQLNumParams`
+|YES
+|Core
+
+|`SQLNumResultCols`
+|YES
+|Core
+
+|`SQLParamData`
+|YES
+|Core
+
+|`SQLPrepare`
+|YES
+|Core
+
+|`SQLPrimaryKeys`
+|PARTIALLY
+|Level 1
+
+|`SQLProcedureColumns`
+|NO
+|Level 1
+
+|`SQLProcedures`
+|NO
+|Level 1
+
+|`SQLPutData`
+|YES
+|Core
+
+|`SQLRowCount`
+|YES
+|Core
+
+|`SQLSetConnectAttr`
+|PARTIALLY
+|Core
+
+|`SQLSetCursorName`
+|NO
+|Core
+
+|`SQLSetDescField`
+|NO
+|Core
+
+|`SQLSetDescRec`
+|NO
+|Core
+
+|`SQLSetEnvAttr`
+|PARTIALLY
+|Core
+
+|`SQLSetPos`
+|NO
+|Level 1
+
+|`SQLSetStmtAttr`
+|PARTIALLY
+|Core
+
+|`SQLSpecialColumns`
+|PARTIALLY
+|Core
+
+|`SQLStatistics`
+|NO
+|Core
+
+|`SQLTablePrivileges`
+|NO
+|Level 2
+
+|`SQLTables`
+|YES
+|Core
+|=======================================================================
+
+== Environment Attribute Conformance
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Feature|Supported|Conformance Level
+
+|`SQL_ATTR_CONNECTION_POOLING`
+|NO
+|Optional
+
+|`SQL_ATTR_CP_MATCH`
+|NO
+|Optional
+
+|`SQL_ATTR_ODBC_VER`
+|YES
+|Core
+
+|`SQL_ATTR_OUTPUT_NTS`
+|YES
+|Optional
+|=======================================================================
+
+== Connection Attribute Conformance
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Feature|Supported|Conformance Level
+
+|`SQL_ATTR_ACCESS_MODE`
+|NO
+|Core
+
+|`SQL_ATTR_ASYNC_ENABLE`
+|NO
+|Level 1 / Level 2
+
+|`SQL_ATTR_AUTO_IPD`
+|NO
+|Level 2
+
+|`SQL_ATTR_AUTOCOMMIT`
+|NO
+|Level 1
+
+|`SQL_ATTR_CONNECTION_DEAD`
+|YES
+|Level 1
+
+|`SQL_ATTR_CONNECTION_TIMEOUT`
+|YES
+|Level 2
+
+|`SQL_ATTR_CURRENT_CATALOG`
+|NO
+|Level 2
+
+|`SQL_ATTR_LOGIN_TIMEOUT`
+|NO
+|Level 2
+
+|`SQL_ATTR_ODBC_CURSORS`
+|NO
+|Core
+
+|`SQL_ATTR_PACKET_SIZE`
+|NO
+|Level 2
+
+|`SQL_ATTR_QUIET_MODE`
+|NO
+|Core
+
+|`SQL_ATTR_TRACE`
+|NO
+|Core
+
+|`SQL_ATTR_TRACEFILE`
+|NO
+|Core
+
+|`SQL_ATTR_TRANSLATE_LIB`
+|NO
+|Core
+
+|`SQL_ATTR_TRANSLATE_OPTION`
+|NO
+|Core
+
+|`SQL_ATTR_TXN_ISOLATION`
+|NO
+|Level 1 / Level 2
+|=======================================================================
+
+== Statement Attribute Conformance
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Feature|Supported|Conformance Level
+
+|`SQL_ATTR_APP_PARAM_DESC`
+|PARTIALLY
+|Core
+
+|`SQL_ATTR_APP_ROW_DESC`
+|PARTIALLY
+|Core
+
+|`SQL_ATTR_ASYNC_ENABLE`
+|NO
+|Level 1 / Level 2
+
+|`SQL_ATTR_CONCURRENCY`
+|NO
+|Level 1 / Level 2
+
+|`SQL_ATTR_CURSOR_SCROLLABLE`
+|NO
+|Level 1
+
+|`SQL_ATTR_CURSOR_SENSITIVITY`
+|NO
+|Level 2
+
+|`SQL_ATTR_CURSOR_TYPE`
+|NO
+|Level 1 / Level 2
+
+|`SQL_ATTR_ENABLE_AUTO_IPD`
+|NO
+|Level 2
+
+|`SQL_ATTR_FETCH_BOOKMARK_PTR`
+|NO
+|Level 2
+
+|`SQL_ATTR_IMP_PARAM_DESC`
+|PARTIALLY
+|Core
+
+|`SQL_ATTR_IMP_ROW_DESC`
+|PARTIALLY
+|Core
+
+|`SQL_ATTR_KEYSET_SIZE`
+|NO
+|Level 2
+
+|`SQL_ATTR_MAX_LENGTH`
+|NO
+|Level 1
+
+|`SQL_ATTR_MAX_ROWS`
+|NO
+|Level 1
+
+|`SQL_ATTR_METADATA_ID`
+|NO
+|Core
+
+|`SQL_ATTR_NOSCAN`
+|NO
+|Core
+
+|`SQL_ATTR_PARAM_BIND_OFFSET_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_PARAM_BIND_TYPE`
+|NO
+|Core
+
+|`SQL_ATTR_PARAM_OPERATION_PTR`
+|NO
+|Core
+
+|`SQL_ATTR_PARAM_STATUS_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_PARAMS_PROCESSED_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_PARAMSET_SIZE`
+|YES
+|Core
+
+|`SQL_ATTR_QUERY_TIMEOUT`
+|YES
+|Level 2
+
+|`SQL_ATTR_RETRIEVE_DATA`
+|NO
+|Level 1
+
+|`SQL_ATTR_ROW_ARRAY_SIZE`
+|YES
+|Core
+
+|`SQL_ATTR_ROW_BIND_OFFSET_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_ROW_BIND_TYPE`
+|YES
+|Core
+
+|`SQL_ATTR_ROW_NUMBER`
+|NO
+|Level 1
+
+|`SQL_ATTR_ROW_OPERATION_PTR`
+|NO
+|Level 1
+
+|`SQL_ATTR_ROW_STATUS_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_ROWS_FETCHED_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_SIMULATE_CURSOR`
+|NO
+|Level 2
+
+|`SQL_ATTR_USE_BOOKMARKS`
+|NO
+|Level 2
+|=======================================================================
+
+== Descriptor Header Fields Conformance
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Feature|Supported|Conformance Level
+
+|`SQL_DESC_ALLOC_TYPE`
+|NO
+|Core
+
+|`SQL_DESC_ARRAY_SIZE`
+|NO
+|Core
+
+|`SQL_DESC_ARRAY_STATUS_PTR`
+|NO
+|Core / Level 1
+
+|`SQL_DESC_BIND_OFFSET_PTR`
+|NO
+|Core
+
+|`SQL_DESC_BIND_TYPE`
+|NO
+|Core
+
+|`SQL_DESC_COUNT`
+|NO
+|Core
+
+|`SQL_DESC_ROWS_PROCESSED_PTR`
+|NO
+|Core
+|=======================================================================
+
+== Descriptor Record Fields Conformance
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Feature|Supported|Conformance Level
+
+|`SQL_DESC_AUTO_UNIQUE_VALUE`
+|NO
+|Level 2
+
+|`SQL_DESC_BASE_COLUMN_NAME`
+|NO
+|Core
+
+|`SQL_DESC_BASE_TABLE_NAME`
+|NO
+|Level 1
+
+|`SQL_DESC_CASE_SENSITIVE`
+|NO
+|Core
+
+|`SQL_DESC_CATALOG_NAME`
+|NO
+|Level 2
+
+|`SQL_DESC_CONCISE_TYPE`
+|NO
+|Core
+
+|`SQL_DESC_DATA_PTR`
+|NO
+|Core
+
+|`SQL_DESC_DATETIME_INTERVAL_CODE`
+|NO
+|Core
+
+|`SQL_DESC_DATETIME_INTERVAL_PRECISION`
+|NO
+|Core
+
+|`SQL_DESC_DISPLAY_SIZE`
+|NO
+|Core
+
+|`SQL_DESC_FIXED_PREC_SCALE`
+|NO
+|Core
+
+|`SQL_DESC_INDICATOR_PTR`
+|NO
+|Core
+
+|`SQL_DESC_LABEL`
+|NO
+|Level 2
+
+|`SQL_DESC_LENGTH`
+|NO
+|Core
+
+|`SQL_DESC_LITERAL_PREFIX`
+|NO
+|Core
+
+|`SQL_DESC_LITERAL_SUFFIX`
+|NO
+|Core
+
+|`SQL_DESC_LOCAL_TYPE_NAME`
+|NO
+|Core
+
+|`SQL_DESC_NAME`
+|NO
+|Core
+
+|`SQL_DESC_NULLABLE`
+|NO
+|Core
+
+|`SQL_DESC_OCTET_LENGTH`
+|NO
+|Core
+
+|`SQL_DESC_OCTET_LENGTH_PTR`
+|NO
+|Core
+
+|`SQL_DESC_PARAMETER_TYPE`
+|NO
+|Core / Level 2
+
+|`SQL_DESC_PRECISION`
+|NO
+|Core
+
+|`SQL_DESC_ROWVER`
+|NO
+|Level 1
+
+|`SQL_DESC_SCALE`
+|NO
+|Core
+
+|`SQL_DESC_SCHEMA_NAME`
+|NO
+|Level 1
+
+|`SQL_DESC_SEARCHABLE`
+|NO
+|Core
+
+|`SQL_DESC_TABLE_NAME`
+|NO
+|Level 1
+
+|`SQL_DESC_TYPE`
+|NO
+|Core
+
+|`SQL_DESC_TYPE_NAME`
+|NO
+|Core
+
+|`SQL_DESC_UNNAMED`
+|NO
+|Core
+
+|`SQL_DESC_UNSIGNED`
+|NO
+|Core
+
+|`SQL_DESC_UPDATABLE`
+|NO
+|Core
+
+|=======================================================================
+
+== SQL Data Types
+
+The following SQL data types listed in the link:https://docs.microsoft.com/en-us/sql/odbc/reference/appendixes/sql-data-types[specification] are supported:
+
+[width="100%",cols="80%,20%"]
+|=======================================================================
+|Data Type |Supported
+
+|`SQL_CHAR`
+|YES
+
+|`SQL_VARCHAR`
+|YES
+
+|`SQL_LONGVARCHAR`
+|YES
+
+|`SQL_WCHAR`
+|NO
+
+|`SQL_WVARCHAR`
+|NO
+
+|`SQL_WLONGVARCHAR`
+|NO
+
+|`SQL_DECIMAL`
+|YES
+
+|`SQL_NUMERIC`
+|NO
+
+|`SQL_SMALLINT`
+|YES
+
+|`SQL_INTEGER`
+|YES
+
+|`SQL_REAL`
+|NO
+
+|`SQL_FLOAT`
+|YES
+
+|`SQL_DOUBLE`
+|YES
+
+|`SQL_BIT`
+|YES
+
+|`SQL_TINYINT`
+|YES
+
+|`SQL_BIGINT`
+|YES
+
+|`SQL_BINARY`
+|YES
+
+|`SQL_VARBINARY`
+|YES
+
+|`SQL_LONGVARBINARY`
+|YES
+
+|`SQL_TYPE_DATE`
+|YES
+
+|`SQL_TYPE_TIME`
+|YES
+
+|`SQL_TYPE_TIMESTAMP`
+|YES
+
+|`SQL_TYPE_UTCDATETIME`
+|NO
+
+|`SQL_TYPE_UTCTIME`
+|NO
+
+|`SQL_INTERVAL_MONTH`
+|NO
+
+|`SQL_INTERVAL_YEAR`
+|NO
+
+|`SQL_INTERVAL_YEAR_TO_MONTH`
+|NO
+
+|`SQL_INTERVAL_DAY`
+|NO
+
+|`SQL_INTERVAL_HOUR`
+|NO
+
+|`SQL_INTERVAL_MINUTE`
+|NO
+
+|`SQL_INTERVAL_SECOND`
+|NO
+
+|`SQL_INTERVAL_DAY_TO_HOUR`
+|NO
+
+|`SQL_INTERVAL_DAY_TO_MINUTE`
+|NO
+
+|`SQL_INTERVAL_DAY_TO_SECOND`
+|NO
+
+|`SQL_INTERVAL_HOUR_TO_MINUTE`
+|NO
+
+|`SQL_INTERVAL_HOUR_TO_SECOND`
+|NO
+
+|`SQL_INTERVAL_MINUTE_TO_SECOND`
+|NO
+
+|`SQL_GUID`
+|YES
+|=======================================================================
+
+
+== C Data Types
+
+The following C data types listed in the link:https://docs.microsoft.com/en-us/sql/odbc/reference/appendixes/c-data-types[specification] are supported:
+
+[width="100%",cols="80%,20%"]
+|=======================================================================
+|Data Type |Supported
+
+|`SQL_C_CHAR`
+|YES
+
+|`SQL_C_WCHAR`
+|YES
+
+|`SQL_C_SHORT`
+|YES
+
+|`SQL_C_SSHORT`
+|YES
+
+|`SQL_C_USHORT`
+|YES
+
+|`SQL_C_LONG`
+|YES
+
+|`SQL_C_SLONG`
+|YES
+
+|`SQL_C_ULONG`
+|YES
+
+|`SQL_C_FLOAT`
+|YES
+
+|`SQL_C_DOUBLE`
+|YES
+
+|`SQL_C_BIT`
+|YES
+
+|`SQL_C_TINYINT`
+|YES
+
+|`SQL_C_STINYINT`
+|YES
+
+|`SQL_C_UTINYINT`
+|YES
+
+|`SQL_C_BIGINT`
+|YES
+
+|`SQL_C_SBIGINT`
+|YES
+
+|`SQL_C_UBIGINT`
+|YES
+
+|`SQL_C_BINARY`
+|YES
+
+|`SQL_C_BOOKMARK`
+|NO
+
+|`SQL_C_VARBOOKMARK`
+|NO
+
+|`SQL_C_INTERVAL`* (all interval types)
+|NO
+
+|`SQL_C_TYPE_DATE`
+|YES
+
+|`SQL_C_TYPE_TIME`
+|YES
+
+|`SQL_C_TYPE_TIMESTAMP`
+|YES
+
+|`SQL_C_NUMERIC`
+|YES
+
+|`SQL_C_GUID`
+|YES
+|=======================================================================
+
diff --git a/docs/_docs/SQL/custom-sql-func.adoc b/docs/_docs/SQL/custom-sql-func.adoc
new file mode 100644
index 0000000..c531fc6
--- /dev/null
+++ b/docs/_docs/SQL/custom-sql-func.adoc
@@ -0,0 +1,49 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Custom SQL Functions
+
+:javaFile: {javaCodeDir}/SqlAPI.java
+
+The SQL engine allows you to extend the set of SQL functions defined by the ANSI-99 specification by adding custom SQL functions written in Java.
+
+A custom SQL function is just a public static method marked by the `@QuerySqlFunction` annotation.
+
+////
+TODO looks like it's unsupported in C#
+////
+
+
+[source,java]
+----
+include::{javaFile}[tags=sql-function-example, indent=0]
+----
+
+
+The class that owns the custom SQL function has to be registered in the `CacheConfiguration`.
+To do that, use the `setSqlFunctionClasses(...)` method.
+
+[source,java]
+----
+include::{javaFile}[tags=sql-function-config, indent=0]
+----
+
+Once you have deployed a cache with the above configuration, you can call the custom function from within SQL queries:
+
+[source,java]
+----
+include::{javaFile}[tags=sql-function-query, indent=0]
+----
+
+NOTE: Classes registered with `CacheConfiguration.setSqlFunctionClasses(...)` must be added to the classpath of all the nodes where the defined custom functions might be executed. Otherwise, you will get a `ClassNotFoundException` error when trying to execute the custom function.
diff --git a/docs/_docs/SQL/distributed-joins.adoc b/docs/_docs/SQL/distributed-joins.adoc
new file mode 100644
index 0000000..5394c3a
--- /dev/null
+++ b/docs/_docs/SQL/distributed-joins.adoc
@@ -0,0 +1,110 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Distributed Joins
+
+A distributed join is a SQL statement with a join clause that combines two or more partitioned tables.
+If the tables are joined on the partitioning column (affinity key), the join is called a _colocated join_. Otherwise, it is called a _non-colocated join_.
+
+Colocated joins are more efficient because they can be effectively distributed between the cluster nodes.
+
+By default, Ignite treats each join query as if it is a colocated join and executes it accordingly (see the corresponding section below).
+
+WARNING: If your query is non-colocated, you have to enable the non-colocated mode of query execution by setting `SqlFieldsQuery.setDistributedJoins(true)`; otherwise, the results of the query execution may be incorrect.
+
+[CAUTION]
+====
+If you often join tables, we recommend that you partition your tables on the same column (on which you join the tables).
+
+Non-colocated joins should be reserved for cases when it's impossible to use colocated joins.
+====
+
+== Colocated Joins
+
+The following image illustrates the procedure of executing a colocated join. A colocated join (`Q`) is sent to all the nodes that store the data matching the query condition. Then the query is executed over the local data set on each node (`E(Q)`). The results (`R`) are aggregated on the node that initiated the query (the client node).
+
+image::images/collocated_joins.png[]
+
+
+== Non-colocated Joins
+
+If you execute a query in a non-colocated mode, the SQL engine executes the query locally on all the nodes that store the data matching the query condition. But because the data is not colocated, each node requests the missing data (that is not present locally) from other nodes by sending either broadcast or unicast requests. This process is depicted in the image below.
+
+image::images/non_collocated_joins.png[]
+
+If the join is done on the primary or affinity key, the nodes send unicast requests because in this case the nodes know the location of the missing data. Otherwise, nodes send broadcast requests. For performance reasons, both broadcast and unicast requests are aggregated into batches.
+
+Enable the non-colocated mode of query execution by setting a JDBC/ODBC parameter or, if you use SQL API, by calling `SqlFieldsQuery.setDistributedJoins(true)`.
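+
+For example, a minimal sketch via the SQL API (the table and column names here are illustrative):
+
+[source,java]
+----
+// Enable the non-colocated mode for this particular join query.
+SqlFieldsQuery qry = new SqlFieldsQuery(
+    "SELECT p.name, o.name FROM Person p JOIN Organization o ON p.orgId = o.id");
+
+qry.setDistributedJoins(true);
+
+cache.query(qry).getAll();
+----
+
+With the JDBC thin driver, the same mode can be enabled through the `distributedJoins` connection parameter, e.g. `jdbc:ignite:thin://127.0.0.1?distributedJoins=true`.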
+
+WARNING: If you use a non-colocated join on a column from a link:data-modeling/data-partitioning#replicated[replicated table], the column must have an index.
+Otherwise, you will get an exception.
+
+
+
+== Hash Joins
+
+//tag::hash-join[]
+To boost performance of join queries, Ignite supports the https://en.wikipedia.org/wiki/Hash_join[hash join
+algorithm].
+Hash joins can be more efficient than nested loop joins for many scenarios, except when the probe side of the join is very small.
+However, hash joins can only be used with equi-joins, i.e., joins that use an equality comparison in the join predicate.
+
+//end::hash-join[]
+
+To enforce the use of hash joins:
+
+. Use the `enforceJoinOrder` option:
++
+[tabs]
+--
+tab:Java API[]
+[source,java]
+----
+include::{javaCodeDir}/SqlAPI.java[tags=enforceJoinOrder,indent=0]
+----
+
+tab:JDBC[]
+[source,java]
+----
+Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
+
+// Open the JDBC connection.
+Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1?enforceJoinOrder=true");
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/SqlJoinOrder.cs[tag=sqlJoinOrder,indent=0]
+----
+
+tab:C++[]
+[source,c++]
+----
+include::code-snippets/cpp/src/sql_join_order.cpp[tag=sql-join-order,indent=0]
+----
+--
+
+. Specify `USE INDEX(HASH_JOIN_IDX)` on the table for which you want to create the hash-join index:
++
+--
+
+[source, sql]
+----
+SELECT * FROM TABLE_A, TABLE_B USE INDEX(HASH_JOIN_IDX) WHERE TABLE_A.column1 = TABLE_B.column2
+----
+--
+
diff --git a/docs/_docs/SQL/indexes.adoc b/docs/_docs/SQL/indexes.adoc
new file mode 100644
index 0000000..4f6a36f
--- /dev/null
+++ b/docs/_docs/SQL/indexes.adoc
@@ -0,0 +1,357 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Defining Indexes
+
+:javaFile: {javaCodeDir}/Indexes.java
+:csharpFile: {csharpCodeDir}/DefiningIndexes.cs
+
+In addition to common DDL commands, such as CREATE/DROP INDEX, developers can use Ignite's link:SQL/sql-api[SQL APIs] to define indexes.
+
+[NOTE]
+====
+Indexing capabilities are provided by the 'ignite-indexing' module. If you start Ignite from Java code, link:setup#enabling-modules[add this module to your classpath].
+====
+
+Ignite automatically creates indexes for each primary key and affinity key field.
+When you define an index on a field in the value object, Ignite creates a composite index consisting of the indexed field and the cache's primary key.
+In SQL terms, it means that the index will be composed of two columns: the column you want to index and the primary key column.
+
+== Creating Indexes With SQL
+
+Refer to the link:sql-reference/ddl#create-index[CREATE INDEX] section.
+
+== Configuring Indexes Using Annotations
+
+Indexes, as well as queryable fields, can be configured from code via the `@QuerySqlField` annotation. In the example below, the Ignite SQL engine will create indexes for the `id` and `salary` fields.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=configuring-with-annotation,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{csharpFile}[tag=idxAnnotationCfg,indent=0]
+----
+tab:C++[unsupported]
+--
+
+The type name is used as the table name in SQL queries. In this case, our table name will be `Person` (schema name usage and definition is explained in the link:SQL/schemas[Schemas] section).
+
+Both `id` and `salary` are indexed fields. `id` will be sorted in ascending order (default) and `salary` in descending order.
+
+If you do not want to index a field but still need to use it in SQL queries, annotate the field with `@QuerySqlField` without the `index = true` parameter.
+Such a field is called a _queryable field_.
+In the example above, `name` is defined as a link:SQL/sql-api#configuring-queryable-fields[queryable field].
+
+The `age` field is neither queryable nor indexed, and thus it will not be accessible from SQL queries.
+
+When you define the indexed fields, you need to <<Registering Indexed Types,register indexed types>>.
+
+////
+Now you can execute the SQL query as follows:
+
+[source,java]
+----
+SqlFieldsQuery qry = new SqlFieldsQuery("SELECT id, name FROM Person" +
+		"WHERE id > 1500 LIMIT 10");
+----
+////
+
+
+[NOTE]
+====
+[discrete]
+=== Updating Indexes and Queryable Fields at Runtime
+
+Use the link:sql-reference/ddl#create-index[CREATE/DROP INDEX] commands if you need to manage indexes or make an object's new fields visible to the SQL engine at runtime.
+====
+
+=== Indexing Nested Objects
+Fields of nested objects can also be indexed and queried using annotations. For example, consider a `Person` object that has an `Address` object as a field:
+
+[source,java]
+----
+public class Person {
+    /** Indexed field. Will be visible for SQL engine. */
+    @QuerySqlField(index = true)
+    private long id;
+
+    /** Queryable field. Will be visible for SQL engine. */
+    @QuerySqlField
+    private String name;
+
+    /** Will NOT be visible for SQL engine. */
+    private int age;
+
+    /** Indexed field. Will be visible for SQL engine. */
+    @QuerySqlField(index = true)
+    private Address address;
+}
+----
+
+Where the structure of the `Address` class might look like:
+
+[source,java]
+----
+public class Address {
+    /** Indexed field. Will be visible for SQL engine. */
+    @QuerySqlField (index = true)
+    private String street;
+
+    /** Indexed field. Will be visible for SQL engine. */
+    @QuerySqlField(index = true)
+    private int zip;
+}
+----
+
+In the above example, the `@QuerySqlField(index = true)` annotation is specified on all the fields of the `Address` class, as well as the `Address` object in the `Person` class.
+
+This makes it possible to execute SQL queries like the following:
+
+[source,java]
+----
+QueryCursor<List<?>> cursor = personCache.query(new SqlFieldsQuery( "select * from Person where street = 'street1'"));
+----
+
+Note that you do not need to specify `address.street` in the WHERE clause of the SQL query. This is because the fields of the `Address` class are flattened within the `Person` table, which allows us to access the `Address` fields in queries directly.
+
+WARNING: If you create indexes for nested objects, you won't be able to run UPDATE or INSERT statements on the table.
+
+=== Registering Indexed Types
+After indexed and queryable fields are defined, they have to be registered in the SQL engine along with the object types they belong to.
+
+To specify which types should be indexed, pass the corresponding key-value pairs in the `CacheConfiguration.setIndexedTypes()` method as shown in the example below.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=register-indexed-types,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{csharpFile}[tag=register-indexed-types,indent=0]
+----
+tab:C++[unsupported]
+--
+
+This method accepts only pairs of types: one for the key class and another for the value class. Primitives are passed as boxed types.
+
+[NOTE]
+====
+[discrete]
+=== Predefined Fields
+In addition to all the fields marked with a `@QuerySqlField` annotation, each table will have two special predefined fields: `pass:[_]key` and `pass:[_]val`, which represent links to whole key and value objects. This is useful, for instance, when one of them is of a primitive type and you want to filter by its value. To do this, run a query like: `SELECT * FROM Person WHERE pass:[_]key = 100`.
+====
+
+NOTE: Since Ignite supports link:key-value-api/binary-objects[Binary Objects], there is no need to add classes of indexed types to the classpath of cluster nodes. The SQL query engine can detect values of indexed and queryable fields, avoiding object deserialization.
+
+=== Group Indexes
+
+To set up a multi-field index that can accelerate queries with complex conditions, you can use a `@QuerySqlField.Group` annotation. You can add multiple `@QuerySqlField.Group` annotations in `orderedGroups` if you want a field to be a part of more than one group.
+
+For instance, in the `Person` class below, the field `age` belongs to an indexed group named `age_salary_idx` with a group order of "0" and descending sort order. In the same group, the field `salary` has a group order of "3" and ascending sort order. Furthermore, `salary` itself is a single-column index (the `index = true` parameter is specified in addition to the `orderedGroups` declaration). The group `order` does not have to be a particular number; it is only needed to sort fields within a particular group.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/Indexes_groups.java[tag=group-indexes,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DefiningIndexes.cs[tag=groupIdx,indent=0]
+----
+tab:C++[unsupported]
+--
+
+NOTE: Annotating a field with `@QuerySqlField.Group` outside of `@QuerySqlField(orderedGroups={...})` will have no effect.
+
+== Configuring Indexes Using Query Entities
+
+Indexes and queryable fields can also be configured via the `org.apache.ignite.cache.QueryEntity` class which is convenient for Spring XML based configuration.
+
+All concepts that are discussed as part of the annotation based configuration above are also valid for the `QueryEntity` based approach. Furthermore, the types whose fields are configured with the `@QuerySqlField` annotation and are registered with the `CacheConfiguration.setIndexedTypes()` method are internally converted into query entities.
+
+The example below shows how to define a single field index, group indexes, and queryable fields.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/query-entities.xml[tags=ignite-config,indent=0]
+----
+
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tag=index-using-queryentity,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DefiningIndexes.cs[tag=queryEntity,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+A short name of the `valueType` is used as a table name in SQL queries. In this case, our table name will be `Person` (schema name usage and definition is explained on the link:SQL/schemas[Schemas] page).
+
+Once the `QueryEntity` is defined, you can execute the SQL query as follows:
+
+[source,java]
+----
+include::{javaFile}[tag=query,indent=0]
+----
+
+[NOTE]
+====
+[discrete]
+=== Updating Indexes and Queryable Fields at Runtime
+
+Use the link:sql-reference/ddl#create-index[CREATE/DROP INDEX] command if you need to manage indexes or make new fields of the object visible to the SQL engine at runtime.
+====
+
+== Configuring Index Inline Size
+
+Proper index inline size can help speed up queries on indexed fields.
+//For primitive types and BinaryObjects, Ignite uses a predefined inline index size
+Refer to the dedicated section in the link:SQL/sql-tuning#increasing-index-inline-size[SQL Tuning guide] for the information on how to choose a proper inline size.
+
+In most cases, you will only need to set the inline size for indexes on variable-length fields, such as strings or arrays.
+The default value is 10.
+
+You can change the default value by setting either
+
+* inline size for each index individually, or
+* `CacheConfiguration.sqlIndexMaxInlineSize` property for all indexes within a given cache, or
+* `IGNITE_MAX_INDEX_PAYLOAD_SIZE` system property for all indexes in the cluster
+
+The settings are applied in the order listed above.
+
+//Ignite automatically creates indexes on the primary key and on the affinity key.
+//The inline size for these indexes can be configured via the `CacheConfiguration.sqlIndexMaxInlineSize` property.
+
+You can also configure inline size for each index individually, which overrides the default value.
+To set the index inline size for a user-defined index, use one of the following methods. In all cases, the value is set in bytes.
+
+* When using annotations:
++
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=annotation-with-inline-size,indent=0]
+----
+tab:C#/.NET[]
+[source,java]
+----
+include::{csharpFile}[tag=annotation-with-inline-size,indent=0]
+----
+tab:C++[unsupported]
+--
+
+* When using `QueryEntity`:
++
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=query-entity-with-inline-size,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{csharpFile}[tag=query-entity-with-inline-size,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+* If you create indexes using the `CREATE INDEX` command, you can use the `INLINE_SIZE` option to set the inline size. See examples in the link:sql-reference/ddl[corresponding section].
++
+[source, sql]
+----
+create index country_idx on Person (country) INLINE_SIZE 13;
+----
+
+
+== Custom Keys
+If you use only predefined SQL data types for primary keys, then you do not need any additional manipulation of the SQL schema configuration. Those data types are defined by the `GridQueryProcessor.SQL_TYPES` constant, as listed below.
+
+Predefined SQL data types include:
+
+- all the primitives and their wrappers except `char` and `Character`
+- `String`
+- `BigDecimal`
+- `byte[]`
+- `java.util.Date`, `java.sql.Date`, `java.sql.Timestamp`
+- `java.util.UUID`
+
+However, once you decide to introduce a custom complex key and refer to its fields from DML statements, you need to:
+
+- Define those fields in the `QueryEntity` the same way as you set fields for the value object.
+- Use the new configuration parameter `QueryEntity.setKeyFields(..)` to distinguish key fields from value fields.
+
+The example below shows how to do this.
+
+[tabs]
+--
+
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/custom-keys.xml[tags=ignite-config;!discovery, indent=0]
+
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=custom-key,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{csharpFile}[tag=custom-key,indent=0]
+----
+tab:C++[unsupported]
+--
+
+
+[NOTE]
+====
+[discrete]
+=== Automatic Hash Code Calculation and Equals Implementation
+
+If a custom key can be serialized into a binary form, Ignite calculates its hash code and implements the `equals()` method automatically.
+
+However, if the key's type is `Externalizable` and cannot be serialized into the binary form, you are required to implement the `hashCode` and `equals` methods manually. See the link:key-value-api/binary-objects[Binary Objects] page for more details.
+====
+
+
diff --git a/docs/_docs/SQL/schemas.adoc b/docs/_docs/SQL/schemas.adoc
new file mode 100644
index 0000000..613fc46
--- /dev/null
+++ b/docs/_docs/SQL/schemas.adoc
@@ -0,0 +1,94 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Understanding Schemas
+
+== Overview
+
+Ignite has a number of default schemas and supports creating custom schemas.
+
+There are two schemas that are available by default:
+
+- The SYS schema, which contains a number of system views with information about cluster nodes. You can't create tables in this schema. Refer to the link:monitoring-metrics/system-views[System Views] page for further information.
+- The <<PUBLIC Schema,PUBLIC schema>>, which is used by default whenever a schema is not specified.
+
+Custom schemas are created in the following cases:
+
+- You can specify custom schemas in the cluster configuration. See <<Custom Schemas>>.
+- Ignite creates a schema for each cache created via one of the programming interfaces or XML configuration. See <<Cache and Schema Names>>.
+
+
+== PUBLIC Schema
+
+The PUBLIC schema is used by default whenever a schema is required and is not specified. For example, when you connect to the cluster via JDBC without setting the schema explicitly, you will connect to the PUBLIC schema.
+
+
+== Custom Schemas
+Custom schemas can be set via the `sqlSchemas` property of `IgniteConfiguration`. You can specify a list of schemas in the configuration before starting your cluster and then create objects in these schemas at runtime.
+
+Below is a configuration example with two custom schemas.
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/schemas.xml[tags=ignite-config;!discovery, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/Schemas.java[tags=custom-schemas, indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UnderstandingSchemas.cs[tag=schemas,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+To connect to a specific schema via, for example, a JDBC driver, provide the schema name in the connection string:
+
+[source,text]
+----
+jdbc:ignite:thin://127.0.0.1/MY_SCHEMA
+----
+
+== Cache and Schema Names
+When you create a cache with link:SQL/sql-api#configuring-queryable-fields[queryable fields], you can manipulate the cached data using the link:SQL/sql-api[SQL API]. In SQL terms, each such cache corresponds to a separate schema whose name equals the name of the cache.
+
+Similarly, when you create a table via a DDL statement, you can access it as a key-value cache via Ignite's supported programming interfaces. The name of the corresponding cache can be specified by providing the `CACHE_NAME` parameter in the `WITH` part of the `CREATE TABLE` statement.
+
+[source,sql]
+----
+CREATE TABLE City (
+  ID INT(11),
+  Name CHAR(35),
+  CountryCode CHAR(3),
+  District CHAR(20),
+  Population INT(11),
+  PRIMARY KEY (ID, CountryCode)
+) WITH "backups=1, CACHE_NAME=City";
+----
+
+See the link:sql-reference/ddl#create-table[CREATE TABLE] page for more details.
+
+If you do not use this parameter, the cache name is defined in the following format (in capital letters):
+
+....
+SQL_<SCHEMA_NAME>_<TABLE_NAME>
+....
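+
+For example, a table `City` created in the `PUBLIC` schema without the `CACHE_NAME` parameter is backed by a cache named `SQL_PUBLIC_CITY`.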
diff --git a/docs/_docs/SQL/sql-api.adoc b/docs/_docs/SQL/sql-api.adoc
new file mode 100644
index 0000000..c372c5a
--- /dev/null
+++ b/docs/_docs/SQL/sql-api.adoc
@@ -0,0 +1,352 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= SQL API
+:javaSourceFile: {javaCodeDir}/SqlAPI.java
+
+In addition to using the JDBC driver, Java developers can use Ignite's SQL APIs to query and modify data stored in Ignite.
+
+The `SqlFieldsQuery` class is used to execute SQL statements and navigate through the results. `SqlFieldsQuery` is executed through the `IgniteCache.query(SqlFieldsQuery)` method, which returns a query cursor.
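+
+A minimal sketch of this flow is shown below. The cache and table names are illustrative, `ignite` is an already started `Ignite` instance, and the `Person` type is assumed to be configured as queryable (see the next section):
+
+[source,java]
+----
+IgniteCache<Long, Person> cache = ignite.cache("Person");
+
+SqlFieldsQuery qry = new SqlFieldsQuery("SELECT name FROM Person WHERE salary > ?").setArgs(1000);
+
+// The cursor is AutoCloseable; each row is a list of column values.
+try (QueryCursor<List<?>> cursor = cache.query(qry)) {
+    for (List<?> row : cursor)
+        System.out.println(row.get(0));
+}
+----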
+
+== Configuring Queryable Fields
+
+If you want to query a cache using SQL statements, you need to define which fields of the value objects are queryable. Queryable fields are the fields of your data model that the SQL engine can "see" and query.
+
+NOTE: If you create tables using JDBC or SQL tools, you do not need to define queryable fields.
+
+[NOTE]
+====
+Indexing capabilities are provided by the 'ignite-indexing' module. If you start Ignite from Java code, link:setup#enabling-modules[add this module to the classpath of your application].
+====
+
+In Java, queryable fields can be configured in two ways:
+
+* using annotations
+* by defining query entities
+
+
+=== @QuerySqlField Annotation
+
+To make specific fields queryable, annotate the fields in the value class definition with the `@QuerySqlField` annotation and call `CacheConfiguration.setIndexedTypes(...)`.
+////
+TODO : CacheConfiguration.setIndexedTypes is presented only in java, C# got different API, rewrite sentence above
+////
+
+
+[tabs]
+--
+tab:Java[]
+
+[source,java]
+----
+include::{javaCodeDir}/QueryEntitiesExampleWithAnnotation.java[tags=query-entity-annotation, indent=0]
+----
+
+Make sure to call `CacheConfiguration.setIndexedTypes(...)` to let the SQL engine know about the annotated fields.
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=sqlQueryFields,indent=0]
+----
+tab:C++[unsupported]
+--
+
+=== Query Entities
+
+You can define queryable fields using the `QueryEntity` class. Query entities can be configured via XML configuration.
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/query-entities.xml[tags=ignite-config,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/QueryEntityExample.java[tags=query-entity,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=queryEntities,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Querying
+
+To execute a select query on a cache, create a `SqlFieldsQuery` object, pass the query string to the constructor, and run `cache.query(...)`.
+Note that in the following example, the Person cache must be configured to be <<Configuring Queryable Fields,visible to the SQL engine>>.
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=simple-query,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=querying,indent=0]
+----
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/sql.cpp[tag=sql-fields-query,indent=0]
+----
+--
+
+`SqlFieldsQuery` returns a cursor that iterates through the results that match the SQL query.
+
+=== Local Execution
+
+To force local execution of a query, use `SqlFieldsQuery.setLocal(true)`. In this case, the query is executed against the data stored on the node where the query is run. It means that the results of the query are almost always incomplete. Use the local mode only if you are confident you understand this limitation.
+
+=== Subqueries in WHERE Clause
+
+`SELECT` queries used in `INSERT` and `MERGE` statements as well as `SELECT` queries generated by `UPDATE` and `DELETE` operations are distributed and executed in either link:SQL/distributed-joins[colocated or non-colocated distributed modes].
+
+However, if there is a subquery that is executed as part of a `WHERE` clause, then it can be executed in the colocated mode only.
+
+For instance, let's consider the following query:
+
+[source,sql]
+----
+DELETE FROM Person WHERE id IN
+    (SELECT personId FROM Salary s WHERE s.amount > 2000);
+----
+The SQL engine generates the `SELECT` query in order to get a list of entries to be deleted. The query is distributed and executed across the cluster and looks like the one below:
+[source,sql]
+----
+SELECT _key, _val FROM Person WHERE id IN
+    (SELECT personId FROM Salary s WHERE s.amount > 2000);
+----
+However, the subquery from the `IN` clause (`SELECT personId FROM Salary ...`) is not distributed further and is executed over the local data set available on the node.
+
+== Inserting, Updating, Deleting, and Merging
+
+With `SqlFieldsQuery` you can execute the other DML commands to modify the data:
+
+
+[tabs]
+--
+tab:INSERT[]
+[source,java]
+----
+include::{javaSourceFile}[tag=insert,indent=0]
+----
+
+tab:UPDATE[]
+[source,java]
+----
+include::{javaSourceFile}[tag=update,indent=0]
+----
+
+tab:DELETE[]
+[source,java]
+----
+include::{javaSourceFile}[tag=delete,indent=0]
+----
+
+tab:MERGE[]
+[source,java]
+----
+include::{javaSourceFile}[tag=merge,indent=0]
+----
+--
+
+When using `SqlFieldsQuery` to execute DDL statements, you must call `getAll()` on the cursor returned from the `query(...)` method.
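+
+For example (a sketch; the index name is illustrative):
+
+[source,java]
+----
+// getAll() drains the cursor, which guarantees the statement has completed.
+cache.query(new SqlFieldsQuery("CREATE INDEX person_name_idx ON Person (name)")).getAll();
+----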
+
+== Specifying the Schema
+
+By default, any SELECT statement executed via `SqlFieldsQuery` is resolved against the PUBLIC schema. However, if the table you want to query is in a different schema, you can specify the schema by calling `SqlFieldsQuery.setSchema(...)`. In this case, the statement is executed in the given schema.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=set-schema,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=schema,indent=0]
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/sql.cpp[tag=sql-fields-query-scheme,indent=0]
+----
+--
+
+Alternatively, you can define the schema in the statement:
+
+[source,java]
+----
+SqlFieldsQuery sql = new SqlFieldsQuery("select name from Person.City");
+----
+
+== Creating Tables
+
+You can pass any supported DDL statement to `SqlFieldsQuery` and execute it on a cache as shown below.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=create-table,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=creatingTables,indent=0]
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/sql_create.cpp[tag=sql-create,indent=0]
+----
+--
+
+
+In terms of SQL schema, the following tables are created as a result of executing the code:
+
+* Table "Person" in the "Person" schema (if it hasn't been created before).
+* Table "City" in the "Person" schema.
+
+To query the "City" table, use statements like `select * from Person.City` or `new SqlFieldsQuery("select * from City").setSchema("PERSON")` (note the uppercase).
+
+
+////////////////////////////////////////////////////////////////////////////////
+== Joining Tables
+
+
+== Cross-Table Queries
+
+
+`SqlQuery.setSchema("PUBLIC")`
+
+++++
+<code-tabs>
+<code-tab data-tab="Java">
+++++
+[source,java]
+----
+IgniteCache cache = ignite.getOrCreateCache(
+    new CacheConfiguration<>()
+        .setName("Person")
+        .setIndexedTypes(Long.class, Person.class));
+
+// Creating City table.
+cache.query(new SqlFieldsQuery("CREATE TABLE City " +
+    "(id int primary key, name varchar, region varchar)").setSchema("PUBLIC")).getAll();
+
+// Creating Organization table.
+cache.query(new SqlFieldsQuery("CREATE TABLE Organization " +
+    "(id int primary key, name varchar, cityName varchar)").setSchema("PUBLIC")).getAll();
+
+// Joining data between City, Organization and Person tables. The latter
+// was created with either annotations or QueryEntity approach.
+SqlFieldsQuery qry = new SqlFieldsQuery("SELECT o.name from Organization o " +
+    "inner join \"Person\".Person p on o.id = p.orgId " +
+    "inner join City c on c.name = o.cityName " +
+    "where p.age > 25 and c.region <> 'Texas'");
+
+// Set the query's default schema to PUBLIC.
+// Table names from the query without the schema set will be
+// resolved against PUBLIC schema.
+// Person table belongs to "Person" schema (person cache) and this is why
+// that schema name is set explicitly.
+qry.setSchema("PUBLIC");
+
+// Executing the query.
+cache.query(qry).getAll();
+----
+++++
+</code-tab>
+<code-tab data-tab="C#/.NET">
+++++
+[source,csharp]
+----
+
+----
+++++
+</code-tab>
+<code-tab data-tab="C++">
+++++
+[source,cpp]
+----
+TODO
+----
+++++
+</code-tab>
+</code-tabs>
+++++
+
+
+////////////////////////////////////////////////////////////////////////////////
+
+== Cancelling Queries
+There are two ways to cancel long-running queries.
+
+The first approach is to prevent runaway queries by setting a query execution timeout.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=set-timeout,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=qryTimeout,indent=0]
+----
+tab:C++[unsupported]
+--
+
+The second approach is to halt the query by using `QueryCursor.close()`.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=cancel-by-closing,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=cursorDispose,indent=0]
+----
+tab:C++[unsupported]
+--
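+
+Because `QueryCursor` implements `AutoCloseable`, a common pattern is to run the query in a try-with-resources block so that the cursor is closed, and a still-running query is halted, automatically. A minimal sketch (the cache instance is illustrative):
+
+[source,java]
+----
+SqlFieldsQuery qry = new SqlFieldsQuery("SELECT * FROM Person");
+
+try (QueryCursor<List<?>> cursor = cache.query(qry)) {
+    for (List<?> row : cursor)
+        System.out.println(row);
+} // The cursor is closed here.
+----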
+
+== Example
+
+The Apache Ignite distribution package includes a ready-to-run `SqlDmlExample` as a part of its link:{githubUrl}/examples/src/main/java/org/apache/ignite/examples/sql/SqlDmlExample.java[source code]. This example demonstrates the usage of all the above-mentioned DML operations.
diff --git a/docs/_docs/SQL/sql-introduction.adoc b/docs/_docs/SQL/sql-introduction.adoc
new file mode 100644
index 0000000..bfe6d11
--- /dev/null
+++ b/docs/_docs/SQL/sql-introduction.adoc
@@ -0,0 +1,53 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Working with SQL
+
+Ignite comes with an ANSI-99 compliant, horizontally scalable, and fault-tolerant distributed SQL database. The distribution is provided either by partitioning the data across cluster nodes or by full replication, depending on the use case.
+
+As a SQL database, Ignite supports all DML commands, including `SELECT`, `UPDATE`, `INSERT`, and `DELETE` queries, and also implements a subset of DDL commands relevant for distributed systems.
+
+You can interact with Ignite as you would with any other SQL-enabled storage by connecting with link:SQL/JDBC/jdbc-driver/[JDBC] or link:SQL/ODBC/odbc-driver[ODBC] drivers from both external tools and applications. Java, .NET, and C++ developers can leverage native link:SQL/sql-api[SQL APIs].
+
+Internally, SQL tables have the same data structure as link:data-modeling/data-modeling#key-value-cache-vs-sql-table[key-value caches]. This means that you can change the partition distribution of your data and leverage link:data-modeling/affinity-collocation[affinity colocation techniques] for better performance.
+
+Ignite's SQL engine uses H2 Database to parse and optimize queries and generate execution plans.
+
+== Distributed Queries
+
+Queries against link:data-modeling/data-partitioning#partitioned[partitioned] tables are executed in a distributed manner:
+
+- The query is parsed and split into multiple “map” queries and a single “reduce” query.
+- All the map queries are executed on all the nodes where required data resides.
+- All the nodes provide their local result sets to the query initiator, which, in turn, merges them into the final result.
+
+You can force a query to be processed locally, i.e. on the subset of data that is stored on the node where the query is executed.
+
+== Local Queries
+
+If a query is executed over a link:data-modeling/data-partitioning#replicated[replicated] table, it will be run against the local data.
+
+Queries over partitioned tables are executed in a distributed manner.
+However, you can force local execution of a query over a partitioned table.
+See link:SQL/sql-api#local-execution[Local Execution] for details.
+
+
+////
+== Known Limitations
+TODO
+
+https://apacheignite-sql.readme.io/docs/how-ignite-sql-works#section-known-limitations
+
+https://issues.apache.org/jira/browse/IGNITE-7822 - describe this if not fixed
+////
diff --git a/docs/_docs/SQL/sql-transactions.adoc b/docs/_docs/SQL/sql-transactions.adoc
new file mode 100644
index 0000000..6824746
--- /dev/null
+++ b/docs/_docs/SQL/sql-transactions.adoc
@@ -0,0 +1,87 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= SQL Transactions
+:javaSourceFile: {javaCodeDir}/SqlTransactions.java
+
+IMPORTANT: Support for SQL transactions is currently in the beta stage. For production use, consider key-value transactions.
+
+== Overview
+SQL Transactions are supported for caches that use the `TRANSACTIONAL_SNAPSHOT` atomicity mode. The `TRANSACTIONAL_SNAPSHOT` mode is the implementation of multiversion concurrency control (MVCC) for Ignite caches. For more information about MVCC and current limitations, visit the link:transactions/mvcc[Multiversion Concurrency Control] page.
+
+See the link:sql-reference/transactions[Transactions] page for the transaction syntax supported by Ignite.
+
+== Enabling MVCC
+To enable MVCC for a cache, use the `TRANSACTIONAL_SNAPSHOT` atomicity mode in the cache configuration. If you create a table with the `CREATE TABLE` command, specify the atomicity mode as a parameter in the `WITH` part of the command:
+
+[tabs]
+--
+tab:SQL[]
+[source,sql]
+----
+CREATE TABLE Person WITH "ATOMICITY=TRANSACTIONAL_SNAPSHOT"
+----
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+    <property name="cacheConfiguration">
+        <bean class="org.apache.ignite.configuration.CacheConfiguration">
+
+            <property name="name" value="myCache"/>
+
+            <property name="atomicityMode" value="TRANSACTIONAL_SNAPSHOT"/>
+
+        </bean>
+    </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=enable,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/SqlTransactions.cs[tag=mvcc,indent=0]
+----
+tab:C++[unsupported]
+--
+
+
+
+== Limitations
+
+=== Cross-Cache Transactions
+
+The `TRANSACTIONAL_SNAPSHOT` mode is enabled per cache and does not permit caches with different atomicity modes within one transaction. Thus, if you want to cover multiple tables in one SQL transaction, all tables must be created with the `TRANSACTIONAL_SNAPSHOT` mode.
+
+=== Nested Transactions
+
+Ignite supports three modes of handling nested SQL transactions that can be enabled via a JDBC/ODBC connection parameter.
+
+[source,sql]
+----
+jdbc:ignite:thin://127.0.0.1/?nestedTransactionsMode=COMMIT
+----
+
+
+When a nested transaction occurs within another transaction, the system behavior depends on the `nestedTransactionsMode` parameter:
+
+- `ERROR` — When the nested transaction is encountered, an error is thrown and the enclosing transaction is rolled back. This is the default behavior.
+- `COMMIT` — The enclosing transaction is committed; the nested transaction starts and is committed when its COMMIT statement is encountered. The rest of the statements in the enclosing transaction are executed as implicit transactions.
+- `IGNORE` — DO NOT USE THIS MODE. The beginning of the nested transaction is ignored, statements within the nested transaction will be executed as part of the enclosing transaction, and all changes will be committed with the commit of the nested transaction. The subsequent statements of the enclosing transaction will be executed as implicit transactions.
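+
+As a sketch, the mode can be set when opening a thin JDBC connection (the host is a placeholder):
+
+[source,java]
+----
+// The nestedTransactionsMode parameter is passed in the connection URL.
+Connection conn = DriverManager.getConnection(
+    "jdbc:ignite:thin://127.0.0.1/?nestedTransactionsMode=COMMIT");
+----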
diff --git a/docs/_docs/SQL/sql-tuning.adoc b/docs/_docs/SQL/sql-tuning.adoc
new file mode 100644
index 0000000..35872e8
--- /dev/null
+++ b/docs/_docs/SQL/sql-tuning.adoc
@@ -0,0 +1,471 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= SQL Performance Tuning
+
+This article outlines basic and advanced optimization techniques for Ignite SQL queries. Some of the sections are also useful for debugging and troubleshooting.
+
+
+== Using the EXPLAIN Statement
+
+Ignite supports the `EXPLAIN` statement, which can be used to read the execution plan of a query.
+Use this command to analyze your queries for possible optimizations.
+Note that the plan contains multiple rows: the last row contains the query for the reducing side (usually your application), while the other rows are the map queries executed on server nodes.
+Read the link:SQL/sql-introduction#distributed-queries[Distributed Queries] section to learn how queries are executed in Ignite.
+
+[source,sql]
+----
+EXPLAIN SELECT name FROM Person WHERE age = 26;
+----
+
+The execution plan is generated by H2 as described link:http://www.h2database.com/html/performance.html#explain_plan[here, window=_blank].
+
+== OR Operator and Selectivity
+
+//*TODO*: is this still valid?
+
+If a query contains an `OR` operator, then indexes may not be used as expected depending on the complexity of the query.
+For example, for the query `select name from Person where gender='M' and (age = 20 or age = 30)`, an index on the `gender` field will be used instead of an index on the `age` field, although the latter is a more selective index.
+As a workaround for this issue, you can rewrite the query with `UNION ALL` (notice that `UNION` without `ALL` will return `DISTINCT` rows, which will change the query semantics and will further penalize your query performance):
+
+[source,sql]
+----
+SELECT name FROM Person WHERE gender='M' and age = 20
+UNION ALL
+SELECT name FROM Person WHERE gender='M' and age = 30
+----
+
+== Avoid Having Too Many Columns
+
+Avoid having too many columns in the result set of a `SELECT` query. Due to limitations of the H2 query parser, queries with 100+ columns may perform worse than expected.
+
+== Lazy Loading
+
+By default, Ignite attempts to load the whole result set to memory and send it back to the query initiator (which is usually your application).
+This approach provides optimal performance for queries with small or medium result sets.
+However, if the result set is too big to fit in the available memory, it can lead to prolonged GC pauses and even `OutOfMemoryError` exceptions.
+
+To minimize memory consumption, at the cost of a moderate performance hit, you can load and process the result sets lazily by passing the `lazy` parameter to the JDBC and ODBC connection strings or by using a similar method available in the Java, .NET, and C++ APIs:
+
+[tabs]
+--
+
+tab:Java[]
+[source,java]
+----
+SqlFieldsQuery query = new SqlFieldsQuery("SELECT * FROM Person WHERE id > 10");
+
+// Result set will be loaded lazily.
+query.setLazy(true);
+----
+tab:JDBC[]
+[source,sql]
+----
+jdbc:ignite:thin://192.168.0.15?lazy=true
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+var query = new SqlFieldsQuery("SELECT * FROM Person WHERE id > 10")
+{
+    // Result set will be loaded lazily.
+    Lazy = true
+};
+----
+tab:C++[]
+--
+
+////
+*TODO* Add tabs for ODBC and other programming languages - C# and C++
+////
+
+== Querying Colocated Data
+
+When Ignite executes a distributed query, it sends sub-queries to individual cluster nodes to fetch the data and groups the results on the reducer node (usually your application).
+If you know in advance that the data you are querying is link:data-modeling/affinity-collocation[colocated] by the `GROUP BY` condition, you can use `SqlFieldsQuery.collocated = true` to tell the SQL engine to do the grouping on the remote nodes.
+This will reduce network traffic between the nodes and query execution time.
+When this flag is set to `true`, the query is executed on individual nodes first and the results are sent to the reducer node for final calculation.
+
+Consider the following example, in which we assume that the data is colocated by `department_id` (in other words, the `department_id` field is configured as the affinity key).
+
+[source,sql]
+----
+SELECT SUM(salary) FROM Employee GROUP BY department_id
+----
+
+Because of the nature of the SUM operation, Ignite sums up the salaries for the elements stored on individual nodes and then sends these sums to the reducer node, where the final result is calculated.
+This operation is already distributed, and enabling the `collocated` flag only slightly improves performance.
+
+Let's take a slightly different example:
+
+[source,sql]
+----
+SELECT AVG(salary) FROM Employee GROUP BY department_id
+----
+
+In this example, Ignite has to fetch all (`salary`, `department_id`) pairs to the reducer node and calculate the results there.
+However, if employees are colocated by the `department_id` field, i.e. employee data for the same department is stored on the same node, setting `SqlFieldsQuery.collocated = true` reduces query execution time because Ignite calculates the averages for each department on the individual nodes and sends the results to the reducer node for final calculation.
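+
+A minimal sketch of enabling the flag (the query matches the example above):
+
+[source,java]
+----
+SqlFieldsQuery qry = new SqlFieldsQuery(
+    "SELECT AVG(salary) FROM Employee GROUP BY department_id");
+
+// The data is colocated by department_id, so the grouping
+// can be done on the remote nodes.
+qry.setCollocated(true);
+----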
+
+
+== Enforcing Join Order
+
+When the enforce join order flag is set, the query optimizer does not reorder tables in joins.
+In other words, the order in which joins are applied during query execution is the same as specified in the query.
+Without this flag, the query optimizer can reorder joins to improve performance.
+However, sometimes it might make an incorrect decision.
+This flag lets you control and explicitly specify the order of joins instead of relying on the optimizer.
+
+Consider the following example:
+
+[source, sql]
+----
+SELECT * FROM Person p
+JOIN Company c ON p.company = c.name where p.name = 'John Doe'
+AND p.age > 20
+AND p.id > 5000
+AND p.id < 100000
+AND c.name NOT LIKE 'O%';
+----
+
+This query contains a join between two tables: `Person` and `Company`.
+To get the best performance, we should understand which join will return the smallest result set.
+The table with the smaller result set size should be given first in the join pair.
+To get the size of each result set, let's test each part.
+
+.Q1:
+[source, sql]
+----
+SELECT count(*)
+FROM Person p
+where
+p.name = 'John Doe'
+AND p.age > 20
+AND p.id > 5000
+AND p.id < 100000;
+----
+
+.Q2:
+[source, sql]
+----
+SELECT count(*)
+FROM Company c
+where
+c.name NOT LIKE 'O%';
+----
+
+After running Q1 and Q2, we can get two different outcomes:
+
+Case 1:
+[cols="1,1",opts="stretch,autowidth",stripes=none]
+|===
+|Q1 | 30000
+|Q2 |100000
+|===
+
+Q2 returns more entries than Q1.
+In this case, we don't need to modify the original query, because the smaller subset is already on the left side of the join.
+
+Case 2:
+[cols="1,1",opts="stretch,autowidth",stripes=none]
+|===
+|Q1 | 50000
+|Q2 |10000
+|===
+
+Q1 returns more entries than Q2. So we need to change the initial query as follows:
+
+[source, sql]
+----
+SELECT *
+FROM Company c
+JOIN Person p
+ON p.company = c.name
+where
+p.name = 'John Doe'
+AND p.age > 20
+AND p.id > 5000
+AND p.id < 100000
+AND c.name NOT LIKE 'O%';
+----
+
+The enforce join order hint can be specified in one of the following ways:
+
+* link:SQL/JDBC/jdbc-driver#parameters[JDBC driver connection parameter]
+* link:SQL/ODBC/connection-string-dsn#supported-arguments[ODBC driver connection attribute]
+* If you use link:SQL/sql-api[SqlFieldsQuery] to execute SQL queries, you can set the enforce join order hint by calling the `SqlFieldsQuery.setEnforceJoinOrder(true)` method, as shown in the sketch below.
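+
+A minimal sketch for the API-based approach:
+
+[source,java]
+----
+SqlFieldsQuery qry = new SqlFieldsQuery(
+    "SELECT * FROM Company c JOIN Person p ON p.company = c.name");
+
+// Prevent the optimizer from reordering the joins.
+qry.setEnforceJoinOrder(true);
+----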
+
+
+== Increasing Index Inline Size
+
+Every entry in the index has a constant size, which is calculated during index creation. This size is called the _index inline size_.
+Ideally, this size should be enough to store the full indexed entry in serialized form.
+When values are not fully included in the index, Ignite may need to perform additional data page reads during index lookup, which can impair performance if persistence is enabled.
+
+
+Here is how values are stored in the index:
+
+// the source code block below uses css-styles from the pygments library. If you change the highlighting library, you should change the syles as well.
+[source,java,subs="quotes"]
+----
+[tok-kt]#int#
+0     1       5
+| tag | value |
+[tok-k]#Total: 5 bytes#
+
+[tok-kt]#long#
+0     1       9
+| tag | value |
+[tok-k]#Total: 9 bytes#
+
+[tok-kt]#String#
+0     1      3             N
+| tag | size | UTF-8 value |
+[tok-k]#Total: 3 + string length#
+
+[tok-kt]#POJO (BinaryObject)#
+0     1         5
+| tag | BO hash |
+[tok-k]#Total: 5#
+----
+
+For primitive data types (bool, byte, short, int, etc.), Ignite automatically calculates the index inline size so that the values are included in full.
+For example, for `int` fields, the inline size is 5 (1 byte for the tag and 4 bytes for the value itself). For `long` fields, the inline size is 9 (1 byte for the tag + 8 bytes for the value).
+
+For binary objects, the index includes the hash of each object, which is enough to avoid collisions. The inline size is 5.
+
+For variable-length data, indexes include only the first several bytes of the value.
+Therefore, when indexing fields with variable-length data, we recommend that you estimate the length of your field values and set the inline size to a value that covers most (about 95%) or all values.
+For example, if you have a `String` field with 95% of the values containing 10 characters or fewer, you can set the inline size for the index on that field to 13.
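+
+For example, assuming the 10-character estimate above, the inline size can be specified directly in the `CREATE INDEX` statement (the table and index names are illustrative):
+
+[source,java]
+----
+// 13 bytes = 3 bytes of overhead + ~10 characters of the string value.
+cache.query(new SqlFieldsQuery(
+    "CREATE INDEX person_name_idx ON Person (name) INLINE_SIZE 13")).getAll();
+----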
+
+
+The inline sizes explained above apply to single field indexes.
+However, when you define an index on a field in the value object or on a non-primary key column, Ignite creates a _composite index_ by appending the primary key to the indexed value.
+Therefore, when calculating the inline size for composite indexes, add the inline size of the primary key to the inline size of the indexed field.
+
+
+Below is an example of index inline size calculation for a cache where both key and value are complex objects.
+
+[source, java]
+----
+public class Key {
+    @QuerySqlField
+    private long id;
+
+    @QuerySqlField
+    @AffinityKeyMapped
+    private long affinityKey;
+}
+
+public class Value {
+    @QuerySqlField(index = true)
+    private long longField;
+
+    @QuerySqlField(index = true)
+    private int intField;
+
+    @QuerySqlField(index = true)
+    private String stringField; // we suppose that 95% of the values are 10 symbols
+}
+----
+
+The following table summarizes the inline index sizes for the indexes defined in the example above.
+
+[cols="1,1,1,2",opts="stretch,header"]
+|===
+|Index | Kind | Recommended Inline Size | Comment
+
+| (_key)
+|Primary key index
+| 5
+|Inlined hash of a binary object (5)
+
+|(affinityKey, _key)
+|Affinity key index
+|14
+|Inlined long (9) + binary object's hash (5)
+
+|(longField, _key)
+|Secondary index
+|14
+|Inlined long (9) + binary object's hash (5)
+
+|(intField, _key)
+|Secondary index
+|10
+|Inlined int (5) + binary object's hash (5)
+
+|(stringField, _key)
+|Secondary index
+|18
+|Inlined string (13) + binary object's hash (5) (assuming that the string is {tilde}10 symbols)
+
+|===
+//_
+
+//The inline size for the first two indexes is set via `CacheConfiguration.sqlIndexMaxInlineSize = 29` (because a single property is responsible for two indexes, we set it to the largest value).
+//The inline size for the rest of the indexes is set when you define a corresponding index.
+Note that you will only have to set the inline size for the index on `stringField`. For other indexes, Ignite calculates the inline size automatically.
+
+Refer to the link:SQL/indexes#configuring-index-inline-size[Configuring Index Inline Size] section for the information on how to change the inline size.
+
+You can check the inline size of an existing index in the link:monitoring-metrics/system-views#indexes[INDEXES] system view.
+
+[WARNING]
+====
+Note that since Ignite encodes strings to `UTF-8`, some characters use more than 1 byte.
+====
+
+== Query Parallelism
+
+By default, a SQL query is executed in a single thread on each participating node. This approach is optimal for queries that return small result sets and use index lookups. For example:
+
+[source,sql]
+----
+SELECT * FROM Person p WHERE p.id = ?;
+----
+
+Certain queries might benefit from being executed in multiple threads.
+This relates to queries with table scans and aggregations, which is often the case for HTAP and OLAP workloads.
+For example:
+
+[source,sql]
+----
+SELECT SUM(salary) FROM Person;
+----
+
+The number of threads created on a single node for query execution is configured per cache and by default equals 1.
+You can change the value by setting the `CacheConfiguration.queryParallelism` parameter.
+If you create SQL tables using the `CREATE TABLE` command, you can use a link:configuring-caches/configuration-overview#cache-templates[cache template] to set this parameter.
+
+If a query contains `JOINs`, then all the participating caches must have the same degree of parallelism.
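+
+A minimal configuration sketch (the cache name is illustrative):
+
+[source,java]
+----
+CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("Person");
+
+// Execute SQL queries against this cache in 4 threads on each node.
+cfg.setQueryParallelism(4);
+----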
+
+== Index Hints
+
+Index hints are useful in scenarios when you know that one index is more suitable for certain queries than another.
+You can use them to instruct the query optimizer to choose a more efficient execution plan.
+To do this, use the `USE INDEX(indexA,...,indexN)` clause, as shown in the following example.
+
+
+[source,sql]
+----
+SELECT * FROM Person USE INDEX(index_age)
+WHERE salary > 150000 AND age < 35;
+----
+
+
+== Partition Pruning
+
+Partition pruning is a technique that optimizes queries that use affinity keys in the `WHERE` condition.
+When executing such a query, Ignite scans only those partitions where the requested data is stored.
+This reduces query time because the query is sent only to the nodes that store the requested partitions.
+
+In the following example, the employee objects are colocated by the `id` field (if an affinity key is not set
+explicitly, the primary key is used as the affinity key):
+
+
+[source,sql]
+----
+CREATE TABLE employee (id BIGINT PRIMARY KEY, department_id INT, name VARCHAR)
+
+/* This query is sent to the node where the requested key is stored */
+SELECT * FROM employee WHERE id=10;
+
+/* This query is sent to all nodes */
+SELECT * FROM employee WHERE department_id=10;
+----
+
+In the next example, the affinity key is set explicitly and, therefore, will be used to colocate data and direct
+queries to the nodes that keep primary copies of the data:
+
+
+[source,sql]
+----
+CREATE TABLE employee (id BIGINT PRIMARY KEY, department_id INT, name VARCHAR) WITH "AFFINITY_KEY=department_id"
+
+/* This query is sent to all nodes */
+SELECT * FROM employee WHERE id=10;
+
+/* This query is sent to the node where the requested key is stored */
+SELECT * FROM employee WHERE department_id=10;
+----
+
+
+[NOTE]
+====
+Refer to the link:data-modeling/affinity-collocation[affinity colocation] page for more details
+on how data is colocated and how colocation helps boost performance in distributed storage like Ignite.
+====
+
+== Skip Reducer on Update
+
+When Ignite executes a DML operation, it first fetches all the affected intermediate rows for analysis to the reducer node (usually your application), and only then prepares batches of updated values that will be sent to remote nodes.
+
+This approach might affect performance and saturate the network if a DML operation has to move many entries.
+
+Use the `skipReducerOnUpdate` flag as a hint for the SQL engine to do all intermediate row analysis and updates “in-place” on the server nodes. The hint is supported for JDBC and ODBC connections.
+
+
+[tabs]
+--
+tab:JDBC Connection String[]
+[source,text]
+----
+// JDBC connection string
+jdbc:ignite:thin://192.168.0.15/?skipReducerOnUpdate=true
+----
+--
+
+== SQL On-heap Row Cache
+
+Ignite stores data and indexes in its own memory space outside of Java heap. This means that with every data
+access, a part of the data will be copied from the off-heap space to Java heap, potentially deserialized, and kept in
+the heap as long as your application or server node references it.
+
+The SQL on-heap row cache is intended to store hot rows (key-value objects) in Java heap, minimizing resources
+spent for data copying and deserialization. Each cached row refers to an entry in the off-heap region and can be
+invalidated when one of the following happens:
+
+* The master entry stored in the off-heap region is updated or removed.
+* The data page that stores the master entry is evicted from RAM.
+
+The on-heap row cache can be enabled for a specific cache/table (if you use `CREATE TABLE` to create SQL tables and caches, then the parameter can be passed via a link:configuring-caches/configuration-overview#cache-templates[cache template]):
+
+
+[source,xml]
+----
+include::code-snippets/xml/sql-on-heap-cache.xml[tags=ignite-config;!discovery,indent=0]
+----
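+
+If you configure caches programmatically, a Java sketch of the same setting (the cache name is illustrative):
+
+[source,java]
+----
+CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("Person");
+
+// Keep hot SQL rows cached on the Java heap.
+cfg.setSqlOnheapCacheEnabled(true);
+----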
+
+////
+*TODO* Add tabs for ODBC/JDBC and other programming languages - Java C# and C++
+////
+
+If the row cache is enabled, you might be able to trade RAM for performance. You might get up to a 2x performance increase for some SQL queries and use cases by allocating more RAM for row caching purposes.
+
+[WARNING]
+====
+[discrete]
+=== SQL On-Heap Row Cache Size
+
+Presently, the cache is unlimited and can occupy as much RAM as allocated to your memory data regions. Make sure to:
+
+* Set the JVM max heap size equal to the total size of all the data regions that store caches for which this on-heap row cache is enabled.
+
+* link:perf-troubleshooting-guide/memory-tuning#java-heap-and-gc-tuning[Tune] JVM garbage collection accordingly.
+====
+
+== Using TIMESTAMP instead of DATE
+
+//TODO: is this still valid?
+Use the `TIMESTAMP` type instead of `DATE` whenever possible. Presently, the `DATE` type is serialized/deserialized very inefficiently, resulting in performance degradation.
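+
+For example, when creating tables, prefer `TIMESTAMP` for date-time columns (the table is illustrative):
+
+[source,java]
+----
+cache.query(new SqlFieldsQuery(
+    "CREATE TABLE Visit (id LONG PRIMARY KEY, visited TIMESTAMP)")).getAll();
+----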
diff --git a/docs/_docs/binary-client-protocol/binary-client-protocol.adoc b/docs/_docs/binary-client-protocol/binary-client-protocol.adoc
new file mode 100644
index 0000000..9caf373
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/binary-client-protocol.adoc
@@ -0,0 +1,286 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Binary Client Protocol
+
+== Overview
+
+The Ignite binary client protocol enables user applications to communicate with an existing Ignite cluster without starting a full-fledged Ignite node. An application can connect to the cluster through a raw TCP socket. Once the connection is established, the application can communicate with the Ignite cluster and perform cache operations using the message format described below.
+
+To communicate with the Ignite cluster, a client must follow the data format and communication rules explained below.
+
+== Data Format
+
+=== Byte Ordering
+
+The Ignite binary client protocol uses little-endian byte ordering.
+
+=== Data Objects
+
+User data, such as cache keys and values, are represented in the Ignite link:key-value-api/binary-objects[Binary Object] format. A data object can be a standard (predefined) type or a complex object. For the complete list of data types supported, see the link:binary-client-protocol/data-format[Data Format] section.
+
+== Message Format
+
+All messages, including requests, responses, and the handshake, start with an `int` message length (excluding these first 4 bytes), followed by the payload (message body).
+
+=== Handshake
+
+The binary client protocol requires a connection handshake to ensure that client and server versions are compatible. The following tables show the structure of the handshake request and response messages. Refer to the <<Example>> section to see how to send and receive a handshake request and response.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|   Description
+|int| Length of handshake payload
+|byte|    Handshake code, always 1.
+|short|   Version major.
+|short|   Version minor.
+|short|   Version patch.
+|byte|    Client code, always 2.
+|String|  Username
+|String|  Password
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+| Response Type (success) |   Description
+|int| Success message length, 1.
+|byte|    Success flag, 1.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type (failure)  |  Description
+|int| Error message length.
+|byte|    Success flag, 0.
+|short|   Server version major.
+|short|   Server version minor.
+|short|   Server version patch.
+|String|  Error message.
+|===
+
+
+=== Standard Message Header
+
+Client operation messages are composed of a header and operation-specific data. Each operation has its own <<Client Operations,data request and response format>>, with a common header.
+
+The following tables and examples show the request and response structure of a client operation message header:
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |   Description
+|int| Length of payload.
+|short|   Operation code
+|long|    Request id, generated by client and returned as-is in response
+|===
+
+
+.Request header
+[source, java]
+----
+private static void writeRequestHeader(int reqLength, short opCode, long reqId, DataOutputStream out) throws IOException {
+  // Message length
+  writeIntLittleEndian(10 + reqLength, out);
+
+  // Op code
+  writeShortLittleEndian(opCode, out);
+
+  // Request id
+  writeLongLittleEndian(reqId, out);
+}
+----
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type | Description
+|int| Length of response message.
+|long|    Request id (see above)
+|int| Status code (0 for success, otherwise error code)
+|String|  Error message (present only when status is not 0)
+|===
+
+
+
+.Response header
+[source, java]
+----
+private static void readResponseHeader(DataInputStream in) throws IOException {
+  // Response length
+  final int len = readIntLittleEndian(in);
+
+  // Request id
+  long resReqId = readLongLittleEndian(in);
+
+  // Success code
+  int statusCode = readIntLittleEndian(in);
+}
+----
+
+
+== Connectivity
+
+=== TCP Socket
+
+Client applications should connect to server nodes with a TCP socket. By default, the connector is enabled on port 10800. You can configure the port number and other server-side connection parameters in the `clientConnectorConfiguration` property of `IgniteConfiguration` of your cluster, as shown below:
+
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+    <!-- Thin client connection configuration. -->
+    <property name="clientConnectorConfiguration">
+        <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
+            <property name="host" value="127.0.0.1"/>
+            <property name="port" value="10900"/>
+            <property name="portRange" value="30"/>
+        </bean>
+    </property>
+
+    <!-- Other Ignite Configurations. -->
+
+</bean>
+
+----
+
+
+tab:Java[]
+
+[source, java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+ClientConnectorConfiguration ccfg = new ClientConnectorConfiguration();
+ccfg.setHost("127.0.0.1");
+ccfg.setPort(10900);
+ccfg.setPortRange(30);
+
+// Set client connection configuration in IgniteConfiguration
+cfg.setClientConnectorConfiguration(ccfg);
+
+// Start Ignite node
+Ignition.start(cfg);
+----
+
+--
+
+=== Connection Handshake
+
+Besides the socket connection, the thin client protocol requires a connection handshake to ensure that client and server versions are compatible. Note that the handshake must be the first message sent after the connection is established.
+
+For the handshake message request and response structure, see the <<Handshake>> section above.
+
+
+=== Example
+
+
+.Socket and Handshake Connection
+[source, java]
+----
+Socket socket = new Socket();
+socket.connect(new InetSocketAddress("127.0.0.1", 10800));
+
+String username = "yourUsername";
+
+String password = "yourPassword";
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Message length
+writeIntLittleEndian(18 + username.length() + password.length(), out);
+
+// Handshake operation
+writeByteLittleEndian(1, out);
+
+// Protocol version 1.0.0
+writeShortLittleEndian(1, out);
+writeShortLittleEndian(1, out);
+writeShortLittleEndian(0, out);
+
+// Client code: thin client
+writeByteLittleEndian(2, out);
+
+// username
+writeString(username, out);
+
+// password
+writeString(password, out);
+
+// send request
+out.flush();
+
+// Receive handshake response
+DataInputStream in = new DataInputStream(socket.getInputStream());
+int length = readIntLittleEndian(in);
+int successFlag = readByteLittleEndian(in);
+
+// Since the Ignite binary protocol uses little-endian byte order,
+// while Java's DataInputStream and DataOutputStream are big-endian,
+// we implement little-endian helper methods for the writes and reads.
+
+// Write int in little-endian byte order
+private static void writeIntLittleEndian(int v, DataOutputStream out) throws IOException {
+  out.write((v >>> 0) & 0xFF);
+  out.write((v >>> 8) & 0xFF);
+  out.write((v >>> 16) & 0xFF);
+  out.write((v >>> 24) & 0xFF);
+}
+
+// Write short in little-endian byte order
+private static final void writeShortLittleEndian(int v, DataOutputStream out) throws IOException {
+  out.write((v >>> 0) & 0xFF);
+  out.write((v >>> 8) & 0xFF);
+}
+
+// Write byte in little-endian byte order
+private static void writeByteLittleEndian(int v, DataOutputStream out) throws IOException {
+  out.writeByte(v);
+}
+
+// Read int in little-endian byte order
+private static int readIntLittleEndian(DataInputStream in) throws IOException {
+  int ch1 = in.read();
+  int ch2 = in.read();
+  int ch3 = in.read();
+  int ch4 = in.read();
+  if ((ch1 | ch2 | ch3 | ch4) < 0)
+    throw new EOFException();
+  return ((ch4 << 24) + (ch3 << 16) + (ch2 << 8) + (ch1 << 0));
+}
+
+
+// Read byte in little-endian byte order
+private static byte readByteLittleEndian(DataInputStream in) throws IOException {
+  return in.readByte();
+}
+
+// Other write and read methods
+
+----
+
+
+== Client Operations
+
+Upon successful handshake, a client can start performing various cache operations:
+
+* link:binary-client-protocol/key-value-queries[Key-Value Queries]
+* link:binary-client-protocol/sql-and-scan-queries[SQL and Scan Queries]
+* link:binary-client-protocol/binary-type-metadata[Binary-Type Operations]
+* link:binary-client-protocol/cache-configuration[Cache Configuration Operations]
diff --git a/docs/_docs/binary-client-protocol/binary-type-metadata.adoc b/docs/_docs/binary-client-protocol/binary-type-metadata.adoc
new file mode 100644
index 0000000..320a83c
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/binary-type-metadata.adoc
@@ -0,0 +1,421 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Binary Type Metadata
+
+== Operation Codes
+
+Upon a successful handshake with an Ignite server node, a client can start performing binary-type related operations by sending a request (see the request/response structure below) with a specific operation code:
+
+
+
+[cols="2,1",opts="header"]
+|===
+|Operation  | OP_CODE
+|OP_GET_BINARY_TYPE_NAME| 3000
+|OP_REGISTER_BINARY_TYPE_NAME|    3001
+|OP_GET_BINARY_TYPE | 3002
+|OP_PUT_BINARY_TYPE|  3003
+|OP_RESOURCE_CLOSE|   0
+|===
+
+
+Note that the above-mentioned op_codes are part of the request header, as explained link:binary-client-protocol/binary-client-protocol#standard-message-header[here].
+
+[NOTE]
+====
+[discrete]
+=== Custom Methods Used in the Sample Code Snippets
+
+Some of the code snippets below use `readDataObject(...)` introduced in link:binary-client-protocol/binary-client-protocol#data-objects[this section] and little-endian versions of methods for reading and writing multiple-byte values that are covered in link:binary-client-protocol/binary-client-protocol#data-objects[this example].
+====
+
+
+== OP_GET_BINARY_TYPE_NAME
+
+Gets the platform-specific full binary type name by id. For example, .NET and Java can map to the same type `Foo`, but the classes will be `Apache.Ignite.Foo` in .NET and `org.apache.ignite.Foo` in Java.
+
+Names are registered with OP_REGISTER_BINARY_TYPE_NAME.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type   | Description
+|Header |  Request header.
+|byte |    Platform id:
+JAVA = 0
+DOTNET = 1
+|int| Type id; Java-style hash code of the type name.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type  |Description
+|Header |  Response header.
+|String |  Binary type name.
+|===
+
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String type = "ignite.myexamples.model.Person";
+int typeLen = type.getBytes("UTF-8").length;
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5, OP_GET_BINARY_TYPE_NAME, 1, out);
+
+// Platform id
+writeByteLittleEndian(0, out);
+
+// Type id
+writeIntLittleEndian(type.hashCode(), out);
+----
+
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting String
+int typeCode = readByteLittleEndian(in); // type code
+int strLen = readIntLittleEndian(in); // length
+
+byte[] buf = new byte[strLen];
+
+readFully(in, buf, 0, strLen);
+
+String s = new String(buf);
+
+System.out.println(s);
+----
+
+
+--
+
+== OP_GET_BINARY_TYPE
+
+Gets the binary type information by id.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type   | Description
+|Header |  Request header.
+|int | Type id; Java-style hash code of the type name.
+|===
+
+
+
+[cols="1,2",opts="header"]
+|===
+| Response Type | Description
+|Header|  Response header.
+|bool|    False: binary type does not exist, response end.
+True: binary type exists, response as follows.
+|int| Type id; Java-style hash code of the type name.
+|String|  Type name.
+|String|  Affinity key field name.
+|int| BinaryField count.
+|BinaryField * count| Structure of BinaryField:
+
+`String`  Field name
+
+`int` Type id; Java-style hash code of the type name.
+
+`int` Field id; Java-style hash code of the field name.
+
+|bool|    Is Enum or not.
+
+If set to true, then the following 2 fields are present. Otherwise, they are omitted.
+|int| _Present only if the 'is enum' field is 'true'_.
+
+Enum field count.
+|String + int|    _Present only if the 'is enum' field is 'true'_.
+
+Enum values. An enum value is a pair of a literal value (String) and numerical value (int).
+
+Repeat for as many times as the Enum field count that is obtained in the previous parameter.
+
+|int| Schema count.
+|BinarySchema|    Structure of BinarySchema:
+
+`int` Unique schema id.
+
+`int` Number of fields in the schema.
+
+`int` Field Id; Java-style hash code of the field name. Repeat for as many times as the total number of fields in the schema.
+
+Repeat for as many times as the BinarySchema count that is obtained in the previous parameter.
+|===
+
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String type = "ignite.myexamples.model.Person";
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(4, OP_GET_BINARY_TYPE, 1, out);
+
+// Type id
+writeIntLittleEndian(type.hashCode(), out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+
+boolean typeExist = readBooleanLittleEndian(in);
+
+int typeId = readIntLittleEndian(in);
+
+String typeName = readString(in);
+
+String affinityFieldName = readString(in);
+
+int fieldCount = readIntLittleEndian(in);
+
+for (int i = 0; i < fieldCount; i++)
+    readBinaryTypeField(in);
+
+boolean isEnum = readBooleanLittleEndian(in);
+
+int schemaCount = readIntLittleEndian(in);
+
+// Read binary schemas
+for (int i = 0; i < schemaCount; i++) {
+  int schemaId = readIntLittleEndian(in); // Schema Id
+
+  int schemaFieldCount = readIntLittleEndian(in); // field count
+
+  for (int j = 0; j < schemaFieldCount; j++) {
+    System.out.println(readIntLittleEndian(in)); // field id
+  }
+}
+
+private static void readBinaryTypeField (DataInputStream in) throws IOException{
+  String fieldName = readString(in);
+  int fieldTypeId = readIntLittleEndian(in);
+  int fieldId = readIntLittleEndian(in);
+  System.out.println(fieldName);
+}
+----
+--
+
+
+== OP_REGISTER_BINARY_TYPE_NAME
+
+Registers the platform-specific full binary type name by id. For example, .NET and Java can map to the same type `Foo`, but the classes will be `Apache.Ignite.Foo` in .NET and `org.apache.ignite.Foo` in Java.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type  | Description
+|Header |  Request header.
+|byte|    Platform id:
+JAVA = 0
+DOTNET = 1
+|int| Type id; Java-style hash code of the type name.
+|String|  Type name.
+|===
+
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type  |Description
+|Header | Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String type = "ignite.myexamples.model.Person";
+int typeLen = type.getBytes("UTF-8").length;
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(10 + typeLen, OP_REGISTER_BINARY_TYPE_NAME, 1, out);
+
+//Platform id
+writeByteLittleEndian(0, out);
+
+//Type id
+writeIntLittleEndian(type.hashCode(), out);
+
+// Type name
+writeString(type, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+----
+
+--
+
+== OP_PUT_BINARY_TYPE
+
+Registers binary type information in the cluster.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |  Description
+|Header|  Request header.
+|int| Type id; Java-style hash code of the type name.
+|String|  Type name.
+|String|  Affinity key field name.
+|int| BinaryField count.
+|BinaryField| Structure of BinaryField:
+
+`String`  Field name
+
+`int` Type id; Java-style hash code of the type name.
+
+`int` Field id; Java-style hash code of the field name.
+
+Repeat for as many times as the BinaryField count that is passed in the previous parameter.
+|bool|    Is Enum or not.
+
+If set to true, then you have to pass the following 2 parameters. Otherwise, skip them.
+|int| Pass only if 'is enum' parameter is 'true'.
+
+Enum field count.
+|String + int|    Pass only if 'is enum' parameter is 'true'.
+
+Enum values. An enum value is a pair of a literal value (String) and numerical value (int).
+
+Repeat for as many times as the Enum field count that is passed in the previous parameter.
+|int| BinarySchema count.
+|BinarySchema|    Structure of BinarySchema:
+
+`int` Unique schema id.
+
+`int` Number of fields in the schema.
+
+`int` Field id; Java-style hash code of the field name. Repeat for as many times as the total number of fields in the schema.
+
+Repeat for as many times as the BinarySchema count that is passed in the previous parameter.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+| Response Type | Description
+|Header |  Response header.
+|===
+
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String type = "ignite.myexamples.model.Person";
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(120, OP_PUT_BINARY_TYPE, 1, out);
+
+// Type id
+writeIntLittleEndian(type.hashCode(), out);
+
+// Type name
+writeString(type, out);
+
+// Affinity key field name
+writeByteLittleEndian(101, out);
+
+// Field count
+writeIntLittleEndian(3, out);
+
+// Field 1
+String field1 = "id";
+writeBinaryTypeField(field1, "long", out);
+
+// Field 2
+String field2 = "name";
+writeBinaryTypeField(field2, "String", out);
+
+// Field 3
+String field3 = "salary";
+writeBinaryTypeField(field3, "int", out);
+
+// isEnum
+out.writeBoolean(false);
+
+// Schema count
+writeIntLittleEndian(1, out);
+
+// Schema
+writeIntLittleEndian(657, out);  // Schema id; can be any custom value
+writeIntLittleEndian(3, out);  // field count
+writeIntLittleEndian(field1.hashCode(), out);
+writeIntLittleEndian(field2.hashCode(), out);
+writeIntLittleEndian(field3.hashCode(), out);
+
+private static void writeBinaryTypeField (String field, String fieldType, DataOutputStream out) throws IOException{
+  writeString(field, out);
+  writeIntLittleEndian(fieldType.hashCode(), out);
+  writeIntLittleEndian(field.hashCode(), out);
+}
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+----
+
+--
+
diff --git a/docs/_docs/binary-client-protocol/cache-configuration.adoc b/docs/_docs/binary-client-protocol/cache-configuration.adoc
new file mode 100644
index 0000000..9c2a9b1
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/cache-configuration.adoc
@@ -0,0 +1,714 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Cache Configuration
+
+== Operation Codes
+
+Upon a successful handshake with an Ignite server node, a client can start performing various cache configuration operations by sending a request (see the request/response structure below) with a specific operation code:
+
+
+[cols="2,1",opts="header"]
+|===
+| Operation | OP_CODE
+|OP_CACHE_GET_NAMES|  1050
+|OP_CACHE_CREATE_WITH_NAME|   1051
+|OP_CACHE_GET_OR_CREATE_WITH_NAME|    1052
+|OP_CACHE_CREATE_WITH_CONFIGURATION|  1053
+|OP_CACHE_GET_OR_CREATE_WITH_CONFIGURATION|   1054
+|OP_CACHE_GET_CONFIGURATION|  1055
+|OP_CACHE_DESTROY|    1056
+|OP_QUERY_SCAN|   2000
+|OP_QUERY_SCAN_CURSOR_GET_PAGE|   2001
+|OP_QUERY_SQL|    2002
+|OP_QUERY_SQL_CURSOR_GET_PAGE|    2003
+|OP_QUERY_SQL_FIELDS| 2004
+|OP_QUERY_SQL_FIELDS_CURSOR_GET_PAGE| 2005
+|OP_BINARY_TYPE_NAME_GET| 3000
+|OP_BINARY_TYPE_NAME_PUT| 3001
+|OP_BINARY_TYPE_GET|  3002
+|OP_BINARY_TYPE_PUT|  3003
+|===
+
+Note that the above-mentioned op_codes are part of the request header, as explained link:binary-client-protocol/binary-client-protocol#standard-message-header[here].
+
+[NOTE]
+====
+[discrete]
+=== Custom Methods Used in the Sample Code Snippets
+
+Some of the code snippets below use `readDataObject(...)` introduced in link:binary-client-protocol/binary-client-protocol#data-objects[this section] and little-endian versions of methods for reading and writing multiple-byte values that are covered in link:binary-client-protocol/binary-client-protocol#data-objects[this example].
+====
+
+
+== OP_CACHE_CREATE_WITH_NAME
+
+Creates a cache with a given name. A cache template can be applied if there is an '{asterisk}' in the cache name. Throws an exception if a cache with the specified name already exists.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type  |  Description
+|Header|  Request header.
+|String|  Cache name.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String cacheName = "myNewCache";
+
+int nameLength = cacheName.getBytes("UTF-8").length;
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5 + nameLength, OP_CACHE_CREATE_WITH_NAME, 1, out);
+
+// Cache name
+writeString(cacheName, out);
+
+// Send request
+out.flush();
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+
+----
+--
+
+
+
+== OP_CACHE_GET_OR_CREATE_WITH_NAME
+
+Creates a cache with a given name. A cache template can be applied if there is an '{asterisk}' in the cache name. Does nothing if a cache with that name already exists.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|String|  Cache name.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String cacheName = "myNewCache";
+
+int nameLength = cacheName.getBytes("UTF-8").length;
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5 + nameLength, OP_CACHE_GET_OR_CREATE_WITH_NAME, 1, out);
+
+// Cache name
+writeString(cacheName, out);
+
+// Send request
+out.flush();
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_GET_NAMES
+
+Gets existing cache names.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|int| Cache count.
+|String|  Cache name.
+
+Repeat for as many times as the cache count that is obtained in the previous parameter.
+|===
+
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5, OP_CACHE_GET_NAMES, 1, out);
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+
+// Cache count
+int cacheCount = readIntLittleEndian(in);
+
+// Cache names
+for (int i = 0; i < cacheCount; i++) {
+  int type = readByteLittleEndian(in); // type code
+
+  int strLen = readIntLittleEndian(in); // length
+
+  byte[] buf = new byte[strLen];
+
+  readFully(in, buf, 0, strLen);
+
+  String s = new String(buf); // cache name
+
+  System.out.println(s);
+}
+
+----
+--
+
+
+== OP_CACHE_GET_CONFIGURATION
+
+Gets configuration for the given cache.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Flag.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|int| Length of the configuration in bytes (all the configuration parameters).
+|CacheConfiguration|  Structure of Cache configuration (See below).
+|===
+
+
+.Cache Configuration
+[cols="1,2",opts="header"]
+|===
+|Type |    Description
+|int| Number of backups.
+|int| CacheMode:
+
+LOCAL = 0
+
+REPLICATED = 1
+
+PARTITIONED = 2
+
+|bool|    CopyOnRead
+|String|  DataRegionName
+|bool|    EagerTTL
+|bool|    StatisticsEnabled
+|String|  GroupName
+|bool|    Invalidate
+|long|    DefaultLockTimeout (milliseconds)
+|int| MaxQueryIterators
+|String|  Name
+|bool|    IsOnheapCacheEnabled
+|int| PartitionLossPolicy:
+
+READ_ONLY_SAFE = 0
+
+READ_ONLY_ALL = 1
+
+READ_WRITE_SAFE = 2
+
+READ_WRITE_ALL = 3
+
+IGNORE = 4
+
+|int| QueryDetailMetricsSize
+|int| QueryParallelism
+|bool|    ReadFromBackup
+|int| RebalanceBatchSize
+|long|    RebalanceBatchesPrefetchCount
+|long|    RebalanceDelay (milliseconds)
+|int| RebalanceMode:
+
+SYNC = 0
+
+ASYNC = 1
+
+NONE = 2
+
+|int| RebalanceOrder
+|long|    RebalanceThrottle (milliseconds)
+|long|    RebalanceTimeout (milliseconds)
+|bool|    SqlEscapeAll
+|int| SqlIndexInlineMaxSize
+|String|  SqlSchema
+|int| WriteSynchronizationMode:
+
+FULL_SYNC = 0
+
+FULL_ASYNC = 1
+
+PRIMARY_SYNC = 2
+
+|int| CacheKeyConfiguration count.
+|CacheKeyConfiguration|   Structure of CacheKeyConfiguration:
+
+`String` Type name
+
+`String` Affinity key field name
+
+Repeat for as many times as the CacheKeyConfiguration count that is obtained in the previous parameter.
+|int| QueryEntity count.
+|QueryEntity * count| Structure of QueryEntity (see below).
+|===
+
+
+QueryEntity
+
+[cols="1,2",opts="header"]
+|===
+|Type |    Description
+|String|  Key type name.
+|String|  Value type name.
+|String|  Table name.
+|String|  Key field name.
+|String|  Value field name.
+|int| QueryField count
+|QueryField * count|  Structure of QueryField:
+
+`String` Name
+
+`String` Type name
+
+`bool` Is key field
+
+`bool` Is notNull constraint field
+
+Repeat for as many times as the QueryField count that is obtained in the previous parameter.
+|int| Alias count
+|(String + String) * count|   Field name aliases.
+|int| QueryIndex count
+|QueryIndex * count | Structure of QueryIndex:
+
+`String`  Index name
+
+`byte`    Index type:
+
+SORTED = 0
+
+FULLTEXT = 1
+
+GEOSPATIAL = 2
+
+`int` Inline size
+
+`int` Field count
+
+`(string + bool) * count`  Fields (name + IsDescending)
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String cacheName = "myCache";
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5, OP_CACHE_GET_CONFIGURATION, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+
+// Config length
+int configLen = readIntLittleEndian(in);
+
+// CacheAtomicityMode
+int cacheAtomicityMode = readIntLittleEndian(in);
+
+// Backups
+int backups = readIntLittleEndian(in);
+
+// CacheMode
+int cacheMode = readIntLittleEndian(in);
+
+// CopyOnRead
+boolean copyOnRead = readBooleanLittleEndian(in);
+
+// Other configurations
+
+----
+--
+
+
+== OP_CACHE_CREATE_WITH_CONFIGURATION
+
+Creates cache with provided configuration. An exception is thrown if the name is already in use.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|int| Length of the configuration in bytes (all the used configuration parameters).
+|short|   Number of configuration parameters.
+|short + property type |   Configuration Property data.
+
+Repeat for as many times as the number of configuration parameters.
+|===
+
+
+Any number of configuration parameters can be provided. Note that `Name` is required.
+
+Cache configuration data is specified in key-value form, where the key is the `short` property ID and the value is property-specific data. The table below describes all available parameters.
+
+
+[cols="1,1,3",opts="header"]
+|===
+|Property Code |   Property Type|   Description
+|2|   int| CacheAtomicityMode:
+
+TRANSACTIONAL = 0,
+
+ATOMIC = 1
+|3|   int| Backups
+|1|   int| CacheMode:
+LOCAL = 0, REPLICATED = 1, PARTITIONED = 2
+|5|   boolean| CopyOnRead
+|100| String|  DataRegionName
+|405| boolean| EagerTtl
+|406| boolean| StatisticsEnabled
+|400| String|  GroupName
+|402| long|    DefaultLockTimeout (milliseconds)
+|403| int| MaxConcurrentAsyncOperations
+|206| int| MaxQueryIterators
+|0|   String|  Name
+|101| bool|    IsOnheapCacheEnabled
+|404| int| PartitionLossPolicy:
+
+READ_ONLY_SAFE = 0,
+
+ READ_ONLY_ALL = 1,
+
+ READ_WRITE_SAFE = 2,
+
+ READ_WRITE_ALL = 3,
+
+ IGNORE = 4
+|202| int| QueryDetailMetricsSize
+|201| int| QueryParallelism
+|6|   bool|    ReadFromBackup
+|303| int| RebalanceBatchSize
+|304| long|    RebalanceBatchesPrefetchCount
+|301| long|    RebalanceDelay (milliseconds)
+|300| int| RebalanceMode: SYNC = 0, ASYNC = 1, NONE = 2
+|305| int| RebalanceOrder
+|306| long|    RebalanceThrottle (milliseconds)
+|302| long|    RebalanceTimeout (milliseconds)
+|205| bool|    SqlEscapeAll
+|204| int| SqlIndexInlineMaxSize
+|203| String|  SqlSchema
+|4|   int| WriteSynchronizationMode:
+
+FULL_SYNC = 0,
+
+ FULL_ASYNC = 1,
+
+PRIMARY_SYNC = 2
+|401| int + CacheKeyConfiguration * count| CacheKeyConfiguration count + CacheKeyConfiguration
+
+Structure of CacheKeyConfiguration:
+
+`String` Type name
+
+`String` Affinity key field name
+|200 | int + QueryEntity * count |  QueryEntity count + QueryEntity
+
+Structure of QueryEntity: (see below)
+|===
+
+
+
+QueryEntity
+
+[cols="1,2",opts="header"]
+|===
+|Type |    Description
+|String|  Key type name.
+|String|  Value type name.
+|String|  Table name.
+|String|  Key field name.
+|String|  Value field name.
+|int| QueryField count
+|QueryField|  Structure of QueryField:
+
+`String` Name
+
+`String` Type name
+
+`bool` Is key field
+
+`bool` Is notNull constraint field
+
+Repeat for as many times as the QueryField count.
+|int| Alias count
+|String + String| Field name alias.
+
+Repeat for as many times as the alias count.
+|int| QueryIndex count
+|QueryIndex|  Structure of QueryIndex:
+
+`String`  Index name
+
+`byte`    Index type:
+
+SORTED = 0
+
+FULLTEXT = 1
+
+GEOSPATIAL = 2
+
+`int` Inline size
+
+`int` Field count
+
+`string + bool` Fields (name + IsDescending)
+
+Repeat for as many times as the field count that is passed in the previous parameter.
+
+Repeat for as many times as the QueryIndex count.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(30, OP_CACHE_CREATE_WITH_CONFIGURATION, 1, out);
+
+// Config length in bytes
+writeIntLittleEndian(16, out);
+
+// Number of properties
+writeShortLittleEndian(2, out);
+
+// Backups opcode
+writeShortLittleEndian(3, out);
+// Backups: 2
+writeIntLittleEndian(2, out);
+
+// Name opcode
+writeShortLittleEndian(0, out);
+// Name
+writeString("myNewCache", out);
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_GET_OR_CREATE_WITH_CONFIGURATION
+
+Creates cache with provided configuration. Does nothing if the name is already in use.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|CacheConfiguration|  Cache configuration (see format above).
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+writeRequestHeader(30, OP_CACHE_GET_OR_CREATE_WITH_CONFIGURATION, 1, out);
+
+// Config length in bytes
+writeIntLittleEndian(16, out);
+
+// Number of properties
+writeShortLittleEndian(2, out);
+
+// Backups opcode
+writeShortLittleEndian(3, out);
+
+// Backups: 2
+writeIntLittleEndian(2, out);
+
+// Name opcode
+writeShortLittleEndian(0, out);
+
+// Name
+writeString("myNewCache", out);
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_DESTROY
+
+Destroys the cache with a given name.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String cacheName = "myCache";
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(4, OP_CACHE_DESTROY, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Send request
+out.flush();
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+----
+--
+
diff --git a/docs/_docs/binary-client-protocol/data-format.adoc b/docs/_docs/binary-client-protocol/data-format.adoc
new file mode 100644
index 0000000..b56b8c0
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/data-format.adoc
@@ -0,0 +1,1072 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Data Format
+
+Standard data types are represented as a combination of type code and value.
+
+:table_opts: cols="1,1,4",opts="header"
+
+[{table_opts}]
+|===
+|Field |  Size in bytes |  Description
+|`type_code` |  1 |   Signed one-byte integer code that indicates the type of the value.
+|`value` |  Variable|    Value itself. Its format and size depend on the type_code.
+|===
+
+
+Below you can find a description of the supported types and their format.
+
+
+== Primitives
+
+Primitives are the very basic types, such as numbers.
+
+
+=== Byte
+
+Type code: 1;
+
+Single byte value.
+
+Structure:
+
+[{table_opts}]
+|===
+| Field  | Size in bytes  | Description
+|`value` | 1  | Single byte value.
+
+|===
+
+=== Short
+
+Type code: 2;
+
+2-bytes long signed integer number. Little-endian.
+
+Structure:
+
+
+[{table_opts}]
+|===
+| Field |   Size in bytes | Description
+| `Value`  |  2|   The value.
+|===
+
+
+=== Int
+
+Type code: 3;
+
+4-bytes long signed integer number. Little-endian.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|`value`|   4|   The value.
+|===
+
+=== Long
+
+Type code: 4;
+
+8-bytes long signed integer number. Little-endian.
+
+Structure:
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes |  Description
+|`value` |   8  | The value.
+|===
+
+
+=== Float
+
+Type code: 5;
+
+4-byte long IEEE 754 floating-point number. Little-endian.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+| value|   4|   The value.
+|===
+
+=== Double
+Type code: 6;
+
+8-byte long IEEE 754 floating-point number. Little-endian.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|value  | 8|   The value.
+
+|===
+
+=== Char
+Type code: 7;
+
+Single UTF-16 code unit. Little-endian.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|value |   2 |   The UTF-16 code unit in little-endian.
+|===
+
+
+=== Bool
+
+Type code: 8;
+
+Boolean value. Zero for false and non-zero for true.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes |   Description
+
+|value |  1 |  The value. Zero for false and non-zero for true.
+
+|===
+
+=== NULL
+
+Type code: 101;
+
+This is not exactly a type. It's just a null value, which can be assigned to an object of any type.
+Has no payload, only consists of the type code.
+
+== Standard objects
+
+=== String
+
+Type code: 9;
+
+String in UTF-8 encoding. Should always be a valid UTF-8 string.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes |   Description
+|length|  4|   Signed integer number in little-endian. Length of the string in UTF-8 code units, i.e. in bytes.
+| data |    length |  String data in UTF-8 encoding. Without BOM.
+
+|===
+
+=== UUID (Guid)
+
+
+Type code: 10;
+
+A universally unique identifier (UUID) is a 128-bit number used to identify information in computer systems.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|most_significant_bits|   8|   64-bit number in little endian, representing 64 most significant bits of UUID.
+|least_significant_bits|  8|   64-bit number in little endian, representing 64 least significant bits of UUID.
+
+|===
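+
+As an illustration, here is a minimal sketch of writing a `java.util.UUID` in this format. It assumes a `writeLongLittleEndian` helper analogous to the `writeIntLittleEndian` method used in the examples throughout this page.
+
+[source, java]
+----
+private static void writeUuid(UUID uuid, DataOutputStream out) throws IOException {
+  writeByteLittleEndian(10, out); // UUID type code
+
+  // Both halves are written as 8-byte little-endian values
+  writeLongLittleEndian(uuid.getMostSignificantBits(), out);
+  writeLongLittleEndian(uuid.getLeastSignificantBits(), out);
+}
+----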
+
+=== Timestamp
+
+Type code: 33;
+
+More precise than the Date data type. In addition to the milliseconds since epoch, it contains a nanosecond fraction of the last millisecond, whose value is in the range from 0 to 999999. This means the full timestamp in nanoseconds can be obtained with the following expression: `msecs_since_epoch \* 1000000 + msec_fraction_in_nsecs`.
+
+NOTE: The nanoseconds time stamp evaluation expression is provided for clarification purposes only. One should not use the expression in production code, as in some languages the expression may result in integer number overflow.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes  | Description
+|`msecs_since_epoch`|   8|   Signed integer number in little-endian. Number of milliseconds elapsed since 00:00:00 1 Jan 1970 UTC. This format is widely known as Unix or POSIX time.
+|`msec_fraction_in_nsecs`|  4|   Signed integer number in little-endian. Nanosecond fraction of a millisecond.
+
+|===
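+
+For illustration, the sketch below writes a `java.time.Instant` in this format; `writeLongLittleEndian` is an assumed helper analogous to the other little-endian writers on this page.
+
+[source, java]
+----
+private static void writeTimestamp(Instant ts, DataOutputStream out) throws IOException {
+  writeByteLittleEndian(33, out); // Timestamp type code
+
+  writeLongLittleEndian(ts.toEpochMilli(), out); // msecs_since_epoch
+
+  // Nanoseconds within the current millisecond: 0..999999
+  writeIntLittleEndian(ts.getNano() % 1_000_000, out);
+}
+----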
+
+=== Date
+
+Type code: 11;
+
+Date, represented as a number of milliseconds elapsed since 00:00:00 1 Jan 1970 UTC. This format is widely known as Unix or POSIX time.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|`msecs_since_epoch`|   8|   The value. Signed integer number in little-endian.
+|===
+
+=== Time
+
+Type code: 36;
+
+Time, represented as a number of milliseconds elapsed since midnight, i.e. 00:00:00 UTC.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|value|   8|   Signed integer number in little-endian. Number of milliseconds elapsed since 00:00:00 UTC.
+
+|===
+
+=== Decimal
+
+Type code: 30;
+
+Numeric value of any desired precision and scale.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+|scale|   4|   Signed integer number in little-endian. Effectively, the power of ten by which the unscaled value is divided. For example, 42 with scale 3 is 0.042, 42 with scale -3 is 42000, and 42 with scale 1 is 4.2.
+|length|  4|   Signed integer number in little-endian. Length of the number in bytes.
+|data|    length|  First bit is the flag of negativity. If it's set to 1, then value is negative. Other bits form signed integer number of variable length in big-endian format.
+
+|===
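+
+A minimal sketch of writing a `java.math.BigDecimal` in this format could look as follows. Note that `BigInteger.toByteArray()` on a non-negative value always keeps the top bit of the first byte clear, so that bit is free to carry the negativity flag.
+
+[source, java]
+----
+private static void writeDecimal(BigDecimal val, DataOutputStream out) throws IOException {
+  writeByteLittleEndian(30, out); // Decimal type code
+
+  writeIntLittleEndian(val.scale(), out); // scale
+
+  // Big-endian magnitude of the unscaled value
+  byte[] data = val.unscaledValue().abs().toByteArray();
+
+  if (val.signum() < 0)
+    data[0] |= 0x80; // set the negativity flag in the first bit
+
+  writeIntLittleEndian(data.length, out); // length
+
+  out.write(data); // data
+}
+----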
+
+=== Enum
+
+Type code: 28;
+
+Value of an enumerable type. Such types define only a finite number of named values.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|type_id| 4|   Signed integer number in little-endian. See <<Type ID>> for details.
+|ordinal| 4|   Signed integer number stored in little-endian. Enumeration value ordinal, i.e. its position in the enum declaration, where the initial constant is assigned an ordinal of zero.
+
+|===
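+
+For example, a Java enum constant could be written with a sketch like this, assuming `typeId` was computed from the enum type name as described in <<Type ID>>:
+
+[source, java]
+----
+private static void writeEnum(int typeId, Enum<?> val, DataOutputStream out) throws IOException {
+  writeByteLittleEndian(28, out); // Enum type code
+
+  writeIntLittleEndian(typeId, out); // type_id of the enum type
+
+  writeIntLittleEndian(val.ordinal(), out); // ordinal
+}
+----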
+
+== Arrays of primitives
+
+Arrays of this kind contain only the payloads of values as elements. They all have a similar format, described in the table below. Note that such an array contains only payloads, not type codes.
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|`length`|  4|   Signed integer number. Number of elements in the array.
+|`element_0_payload`|   Depends on the type.|    Payload of the value 0.
+|`element_1_payload`|   Depends on the type.|    Payload of the value 1.
+|... |... |...
+|`element_N_payload`|   Depends on the type. |   Payload of the value N.
+
+|===
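+
+For instance, an int array (type code 14, described below) could be written with the following sketch:
+
+[source, java]
+----
+private static void writeIntArray(int[] arr, DataOutputStream out) throws IOException {
+  writeByteLittleEndian(14, out); // int array type code
+
+  writeIntLittleEndian(arr.length, out); // length
+
+  for (int v : arr)
+    writeIntLittleEndian(v, out); // element payloads only, no type codes
+}
+----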
+
+=== Byte array
+
+Type code: 12;
+
+Array of bytes. May be either a piece of raw data, or array of small signed integer numbers.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    length|  Elements sequence. Every element is a payload of type "byte".
+
+|===
+
+=== Short array
+
+Type code: 13;
+
+Array of short signed integer numbers.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    `length * 2`|  Elements sequence. Every element is a payload of type "short".
+
+|===
+
+=== Int array
+
+Type code: 14;
+
+Array of signed integer numbers.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    `length * 4`|  Elements sequence. Every element is a payload of type "int".
+
+|===
+
+=== Long array
+
+Type code: 15;
+
+Array of long signed integer numbers.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    `length * 8`|  Elements sequence. Every element is a payload of type "long".
+
+|===
+
+=== Float array
+
+Type code: 16;
+
+Array of floating point numbers.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    `length * 4` | Elements sequence. Every element is a payload of type "float".
+
+|===
+
+=== Double array
+
+Type code: 17;
+
+Array of floating point numbers with double precision.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes |  Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    `length * 8`|  Elements sequence. Every element is a payload of type "double".
+
+|===
+
+=== Char array
+
+Type code: 18;
+
+Array of UTF-16 code units. Unlike a string, this type does not necessarily contain valid UTF-16 text.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+|length | 4|   Signed integer number. Number of elements in the array.
+|elements|    length * 2|  Elements sequence. Every element is a payload of type "char".
+
+|===
+
+=== Bool array
+
+Type code: 19;
+
+Array of boolean values.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes |  Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    length|  Elements sequence. Every element is a payload of type "bool".
+
+|===
+
+== Arrays of standard objects
+
+Arrays of this kind contain full values as elements, meaning each element contains a type code as well as a payload. This format allows elements of such collections to be NULL values, which is why they are called "objects". They all have a similar format, described in the table below.
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|`length` | 4|   Signed integer number.  Number of elements in the array.
+|`element_0_full_value`|    Depends on value type.|  Full value of the element 0. Consists of type code and payload. Can also be NULL.
+|`element_1_full_value`|    Depends on value type.|  Full value of the element 1 or NULL.
+|... |...| ...
+|`element_N_full_value`|    Depends on value type.|  Full value of the element N or NULL.
+
+|===
+
+=== String array
+
+Type code: 20;
+
+Array of UTF-8 string values.
+
+Structure:
+
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Depends on every string length. Every element size is either `5 + value_length` for string, or 1 for `NULL`.|  Elements sequence. Every element is a full value of type "string", including type code, or `NULL`.
+
+|===
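+
+A sketch of writing such an array, reusing the `writeString` method shown in the serialization examples at the end of this page, could look like this:
+
+[source, java]
+----
+private static void writeStringArray(String[] arr, DataOutputStream out) throws IOException {
+  writeByteLittleEndian(20, out); // String array type code
+
+  writeIntLittleEndian(arr.length, out); // length
+
+  for (String s : arr) {
+    if (s == null)
+      writeByteLittleEndian(101, out); // NULL type code instead of a full value
+    else
+      writeString(s, out); // full value: type code 9 + length + data
+  }
+}
+----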
+
+=== UUID (Guid) array
+
+Type code: 21;
+
+Array of UUIDs (Guids).
+
+Structure:
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Every element size is either 17 for UUID, or 1 for NULL.|  Elements sequence. Every element is a full value of type "UUID", including type code, or NULL.
+
+|===
+
+=== Timestamp array
+
+Type code: 34;
+
+Array of timestamp values.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes |  Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Every element size is either 13 for Timestamp, or 1 for NULL.| Elements sequence. Every element is a full value of type "timestamp", including type code, or NULL.
+
+|===
+
+=== Date array
+
+Type code: 22;
+
+Array of dates.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Every element size is either 9 for Date, or 1 for NULL.|   Elements sequence. Every element is a full value of type "date", including type code, or NULL.
+
+|===
+
+=== Time array
+
+Type code: 37;
+
+Array of time values.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements   | Variable. Every element size is either 9 for Time, or 1 for NULL.|   Elements sequence. Every element is a full value of type "time", including type code, or NULL.
+
+|===
+
+=== Decimal array
+
+Type code: 31;
+
+Array of decimal values.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Every element size is either `9 + value_length` for Decimal, or 1 for NULL.| Elements sequence. Every element is a full value of type "decimal", including type code, or NULL.
+
+|===
+
+== Object collections
+
+=== Object array
+
+Type code: 23;
+
+Array of objects of any type: standard objects, complex objects of various types, NULL values, and any combination of them. This also means that collections may contain other collections.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|type_id |4|   Type identifier of the contained objects. For example, in Java this type is used to deserialize to a Type[], so all values in the array should have Type as a parent. The parent type of any object type (e.g. java.lang.Object in Java) has Type ID -1. See <<Type ID>> for details.
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Depends on sizes of the objects.|  Elements sequence. Every element is a full value of any type or NULL.
+
+|===
+
+=== Collection
+
+Type code: 24;
+
+General collection type. Just like an object array, it contains objects, but unlike an array, it carries a hint for deserialization to a platform-specific collection of a certain type, not just an array. The following collection types exist:
+
+
+*  `USER_SET` = -1. A general set type that cannot be mapped to a more specific set type; still, it is known to be a set. It makes sense to deserialize such a collection to the basic and most widely used set-like type on your platform, e.g. a hash set.
+*    `USER_COL` = 0. A general collection type that cannot be mapped to a more specific collection type. It makes sense to deserialize such a collection to the basic and most widely used collection type on your platform, e.g. a resizable array.
+*    `ARR_LIST` = 1. A resizable array type.
+*    `LINKED_LIST` = 2. A linked list type.
+*    `HASH_SET` = 3. A basic hash set type.
+*    `LINKED_HASH_SET` = 4. A hash set type that maintains element order.
+*    `SINGLETON_LIST` = 5. A collection that contains only a single element but behaves as a collection. Can be used by platforms for optimization purposes. If not applicable, any collection type can be used.
+
+[NOTE]
+====
+The collection type byte is used as a hint to deserialize the collection to the most suitable platform type. For example, in Java HASH_SET is deserialized to java.util.HashSet, while LINKED_HASH_SET is deserialized to java.util.LinkedHashSet. It is recommended that a thin client implementation try to use the most suitable collection type on serialization and deserialization. Still, it is only a hint, which the user can ignore if it is not relevant or not applicable for the platform.
+====
+
+Structure:
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the collection.
+|type|    1|   Type of the collection. See description for details.
+|elements|    Variable. Depends on sizes of the objects.|  Elements sequence. Every element is a full value of any type or NULL.
+
+|===
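+
+As an illustration, the following sketch writes a `List<Integer>` as a collection data object with the ARR_LIST hint; every element is a full int value (type code plus payload):
+
+[source, java]
+----
+private static void writeIntList(List<Integer> list, DataOutputStream out) throws IOException {
+  writeByteLittleEndian(24, out); // Collection type code
+
+  writeIntLittleEndian(list.size(), out); // length
+
+  writeByteLittleEndian(1, out); // collection type hint: ARR_LIST
+
+  for (int v : list) {
+    writeByteLittleEndian(3, out); // full value: int type code...
+    writeIntLittleEndian(v, out);  // ...followed by its payload
+  }
+}
+----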
+
+=== Map
+
+Type code: 25;
+
+Map-like collection type. Contains pairs of key and value objects. Both keys and values can be objects of various types: standard objects as well as complex objects, and any combination of them. The type carries a hint for deserialization to a map of a certain type. The following map types exist:
+
+*   `HASH_MAP` = 1. This is a basic hash map.
+*   `LINKED_HASH_MAP` = 2. This is a hash map, which maintains element order.
+
+[NOTE]
+====
+The map type byte is used as a hint to deserialize the collection to the most suitable platform type. It is recommended that a thin client implementation try to use the most suitable map type on serialization and deserialization. Still, it is only a hint, which the user can ignore if it is not relevant or not applicable for the platform.
+====
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the collection.
+|type|    1|   Type of the collection. See description for details.
+|elements|    Variable. Depends on sizes of the objects.|  Elements sequence. Elements here are keys and values, followed one by one in pairs. Every element is a full value of any type or NULL.
+
+|===
+
+=== Enum array
+
+Type code: 29;
+
+Array of enumerable type values. Each element can be either an enumerable value or null, so an element occupies either 9 bytes or 1 byte.
+
+Structure:
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|type_id| 4|   Type identifier of the contained objects. For example, in Java this type is used to deserialize to an EnumType[], so all values in the array should have EnumType, the parent type of any enumerable object type, as a parent. See <<Type ID>> for details.
+|length|  4|   Signed integer number. Number of elements in the collection.
+|elements|    Variable. Depends on sizes of the objects. | Elements sequence. Every element is a full value of enum type or NULL.
+
+|===
+
+== Complex object
+
+Type code: 103;
+
+A complex object consists of a 24-byte header, a set of fields (data objects), and a schema (field IDs and positions). Depending on the operation and your data model, a data object can be of a primitive type or a complex type (a set of fields).
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Optionality
+|`version`| 1|   Mandatory
+|`flags`|   2|   Mandatory
+|`type_id`| 4|   Mandatory
+|`hash_code`|   4|   Mandatory
+|`length`|  4|   Mandatory
+|`schema_id`|   4|   Mandatory
+|`object_fields`|   Variable length.|    Optional
+|`schema`|  Variable length.|    Optional
+|`raw_data_offset`| 4|   Optional
+
+|===
+
+
+== Version
+
+This field indicates the complex object layout version. It is needed for backward compatibility. Clients should check this field and report an error to the user if the object layout version is unknown to them, to prevent data corruption and unpredictable deserialization results.
+
+== Flags
+
+This field is a 16-bit little-endian bitmask. It contains object flags, which indicate how the object instance should be handled by a reader. The following flags are defined:
+
+*    `USER_TYPE = 0x0001` - Indicates that the type is a user type. Should always be set for any client type. Can be ignored on deserialization.
+*    `HAS_SCHEMA = 0x0002` - Indicates that the object layout contains a schema in the footer. See <<Schema>> for details.
+*    `HAS_RAW_DATA = 0x0004` - Indicates that the object has raw data. See <<Raw data offset>> for details.
+*    `OFFSET_ONE_BYTE = 0x0008` - Indicates that the schema field offset is one byte long. See <<Schema>> for details.
+*    `OFFSET_TWO_BYTES = 0x0010` - Indicates that the schema field offset is two bytes long. See <<Schema>> for details.
+*    `COMPACT_FOOTER = 0x0020` - Indicates that the footer does not contain field IDs, only offsets. See <<Schema>> for details.
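+
+As a sketch, a reader could decode the flags it needs like this (the constants are inlined from the list above; `in` is positioned right after the version field):
+
+[source, java]
+----
+short flags = readShortLittleEndian(in);
+
+boolean hasSchema = (flags & 0x0002) != 0;     // HAS_SCHEMA
+boolean hasRawData = (flags & 0x0004) != 0;    // HAS_RAW_DATA
+boolean compactFooter = (flags & 0x0020) != 0; // COMPACT_FOOTER
+
+// Width of the field offsets in the schema footer: 1, 2 or 4 bytes
+int offsetSize = (flags & 0x0008) != 0 ? 1 : (flags & 0x0010) != 0 ? 2 : 4;
+----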
+
+== Type ID
+
+This field contains a unique type identifier. It is 4 bytes long and stored in little-endian. By default, the Type ID is obtained as a Java-style hash code of the type name. The Type ID evaluation algorithm must be the same across all platforms in the cluster so that all platforms can operate with objects of this type. The default Type ID calculation algorithm, which is recommended for use by all thin clients, can be found below.
+
+[tabs]
+--
+
+tab:Java[]
+[source, java]
+----
+static int hashCode(String str) {
+  int len = str.length();
+
+  int h = 0;
+
+  for (int i = 0; i < len; i++) {
+    int c = str.charAt(i);
+
+    c = Character.toLowerCase(c);
+
+    h = 31 * h + c;
+  }
+
+  return h;
+}
+----
+
+tab:C[]
+
+[source, c]
+----
+int32_t HashCode(const char* val, size_t size)
+{
+  if (!val || size == 0)
+    return 0;
+
+  int32_t hash = 0;
+
+  for (size_t i = 0; i < size; ++i)
+  {
+    char c = val[i];
+
+    if ('A' <= c && c <= 'Z')
+      c |= 0x20;
+
+    hash = 31 * hash + c;
+  }
+
+  return hash;
+}
+----
+
+--
+
+
+
+
+
+== Hash code
+
+Hash code of the value. It is stored as a 4-byte little-endian value and calculated as a Java-style hash of the contents without the header. It is used by the Ignite engine for comparisons, for example, to compare keys. The hash calculation algorithm can be found below.
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+static int dataHashCode(byte[] data) {
+  int len = data.length;
+
+  int h = 0;
+
+  for (int i = 0; i < len; i++)
+    h = 31 * h + data[i];
+
+  return h;
+}
+----
+tab:C[]
+
+[source, c]
+----
+int32_t GetDataHashCode(const void* data, size_t size)
+{
+  if (!data)
+    return 0;
+
+  int32_t hash = 1;
+  const int8_t* bytes = static_cast<const int8_t*>(data);
+
+  for (size_t i = 0; i < size; ++i)
+    hash = 31 * hash + bytes[i];
+
+  return hash;
+}
+----
+
+--
+
+
+
+
+== Length
+
+This field contains the full length of the object, including the header. It is stored as a 4-byte little-endian integer number. Using this field, you can easily skip the whole object by simply increasing the current data stream position by the value of this field.
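+
+As a sketch, assuming the 24-byte header (which includes this length field) has just been consumed, as in the `readDataObject` example at the end of this page:
+
+[source, java]
+----
+// 'len' holds the value of the length field; the 24-byte header is already
+// read, so the rest of the object occupies (len - 24) bytes and can be skipped
+in.skipBytes(len - 24);
+----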
+
+== Schema ID
+
+Object schema identifier. It is stored as a 4-byte little-endian value and calculated as a hash of all object field IDs. It is used for complex object size optimization: Ignite uses the schema ID to avoid writing the whole schema to the end of every complex object value. Instead, it stores all schemas in the binary metadata store and only writes field offsets to the object. This optimization significantly reduces the size of complex objects that contain a lot of short fields (such as ints).
+
+If the schema is missing (e.g. the whole object is written in raw mode, or the object has no fields at all), the schema ID field is 0.
+
+See <<Schema>> for details on schema structure.
+
+[NOTE]
+====
+Schema ID cannot be determined from the Type ID, as objects of the same type (and thus having the same Type ID) can have multiple schemas, i.e. field sequences.
+====
+
+Schema ID calculation algorithm can be found below:
+
+[tabs]
+--
+
+tab:Java[]
+
+[source, java]
+----
+/** FNV1 hash offset basis. */
+private static final int FNV1_OFFSET_BASIS = 0x811C9DC5;
+
+/** FNV1 hash prime. */
+private static final int FNV1_PRIME = 0x01000193;
+
+static int calculateSchemaId(int fieldIds[])
+{
+  if (fieldIds == null || fieldIds.length == 0)
+    return 0;
+
+  int len = fieldIds.length;
+
+  int schemaId = FNV1_OFFSET_BASIS;
+
+  for (int i = 0; i < len; ++i)
+  {
+    int fieldId = fieldIds[i];
+
+    schemaId = schemaId ^ (fieldId & 0xFF);
+    schemaId = schemaId * FNV1_PRIME;
+    schemaId = schemaId ^ ((fieldId >> 8) & 0xFF);
+    schemaId = schemaId * FNV1_PRIME;
+    schemaId = schemaId ^ ((fieldId >> 16) & 0xFF);
+    schemaId = schemaId * FNV1_PRIME;
+    schemaId = schemaId ^ ((fieldId >> 24) & 0xFF);
+    schemaId = schemaId * FNV1_PRIME;
+  }
+
+  return schemaId;
+}
+----
+
+
+tab:C[]
+
+[source, c]
+----
+/** FNV1 hash offset basis. */
+enum { FNV1_OFFSET_BASIS = 0x811C9DC5 };
+
+/** FNV1 hash prime. */
+enum { FNV1_PRIME = 0x01000193 };
+
+int32_t CalculateSchemaId(const int32_t* fieldIds, size_t num)
+{
+  if (!fieldIds || num == 0)
+    return 0;
+
+  int32_t schemaId = FNV1_OFFSET_BASIS;
+
+  for (size_t i = 0; i < num; ++i)
+  {
+    int32_t fieldId = fieldIds[i];
+
+    schemaId ^= fieldId & 0xFF;
+    schemaId *= FNV1_PRIME;
+    schemaId ^= (fieldId >> 8) & 0xFF;
+    schemaId *= FNV1_PRIME;
+    schemaId ^= (fieldId >> 16) & 0xFF;
+    schemaId *= FNV1_PRIME;
+    schemaId ^= (fieldId >> 24) & 0xFF;
+    schemaId *= FNV1_PRIME;
+  }
+
+  return schemaId;
+}
+----
+
+
+--
+
+
+
+== Object Fields
+
+Object fields. Every field is a binary object and can be of either a complex or a standard type. Note that a complex object that has no fields at all is a valid object and may be encountered. Every field may or may not have a name. For named fields, an offset is written in the object schema, by which they can be located in the object without deserializing the whole object. Fields without a name are always stored after the named fields and are written in so-called "raw mode".
+
+Thus, fields written in raw mode can only be accessed by a sequential read in the same order as they were written, while named fields can be read in random order.
+
+== Schema
+
+Object schema. Any complex object may or may not have a schema, so this field is optional. The schema is not present if the object has no named fields, which includes the case when the object has no fields at all. Check the HAS_SCHEMA object flag to determine whether the object has a schema.
+
+The main purpose of a schema is to allow fast lookup of object fields. To this end, the schema contains a sequence of offsets of object fields in the object payload. The field offsets themselves can be of different sizes. The offset size is determined on write by the maximum offset value: if it is in the range of [24..255] bytes, a 1-byte offset is used; if it is in the range of [256..65535] bytes, a 2-byte offset is used; in all other cases, 4-byte offsets are used. To determine the offset size on read, clients should check the `OFFSET_ONE_BYTE` and `OFFSET_TWO_BYTES` flags: if the `OFFSET_ONE_BYTE` flag is set, offsets are 1 byte long; otherwise, if the `OFFSET_TWO_BYTES` flag is set, offsets are 2 bytes long; otherwise, offsets are 4 bytes long.
+
+There are two formats of schema supported:
+
+* Full schema approach - simpler to implement but uses more resources.
+*  Compact footer approach - harder to implement, but provides better performance and reduces memory consumption; thus it is recommended for new clients to implement this approach.
+
+You can find more details on both formats below.
+
+Note that the flag COMPACT_FOOTER should be checked by clients to determine which approach is used in every specific object.
+
+=== Full schema approach
+
+When this approach is used, the COMPACT_FOOTER flag is not set and the whole object schema is written to the footer of the object. In this case, only the complex object itself is needed for deserialization: the schema_id field is ignored and no additional data is required. The structure of the schema field of the complex object in this case is as follows:
+
+[cols="1,1,2",opts="header"]
+|===
+|Field |  Size in bytes |  Description
+|`field_id_0`|  4|   ID of the field with the index 0. 4-byte long hash stored in little-endian. The field ID is calculated from the field name in the same way as the <<Type ID>>.
+|`field_offset_0`|  Variable, depending on the size of the object: 1, 2 or 4. |  Unsigned integer number stored in little-endian. Offset of the field in the object, starting from the very first byte of the full object value (i.e. the type_code position).
+|`field_id_1`|  4|   4-byte long hash stored in little-endian. ID of the field with the index 1.
+|`field_offset_1` | Variable, depending on the size of the object: 1, 2 or 4.|   Unsigned integer number stored in little-endian. Offset of the field in object.
+|...| ...| ...
+|`field_id_N`|  4|   4-byte long hash stored in little-endian. ID of the field with the index N.
+|`field_offset_N`|  Variable, depending on the size of the object: 1, 2 or 4. |   Unsigned integer number stored in little-endian. Offset of the field in object.
+
+|===
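+
+A sketch of reading such a footer could look as follows. Here `footerSize` (the schema section size in bytes) and `offsetSize` are assumed to have been derived from the object length, the schema offset, and the flags, as described above.
+
+[source, java]
+----
+int entrySize = 4 + offsetSize; // field_id plus field_offset
+int fieldCount = footerSize / entrySize;
+
+int[] fieldIds = new int[fieldCount];
+int[] fieldOffsets = new int[fieldCount];
+
+for (int i = 0; i < fieldCount; i++) {
+  fieldIds[i] = readIntLittleEndian(in); // field_id_i
+
+  // field_offset_i; its width depends on the OFFSET_* flags
+  fieldOffsets[i] = offsetSize == 1 ? in.readUnsignedByte()
+    : offsetSize == 2 ? (readShortLittleEndian(in) & 0xFFFF)
+    : readIntLittleEndian(in);
+}
+----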
+
+=== Compact footer approach
+
+In this approach, the COMPACT_FOOTER flag is set and only the field offset sequence is written to the object footer. In this case, the client uses the schema_id field to look up the object's schema in a previously stored meta store to find out the field order and associate each field with its offset.
+
+If this approach is used, the client needs to keep schemas in a special meta store and send/retrieve them to/from Ignite servers. See link:check[Binary Types] for details.
+
+The structure of the schema in this case can be found below:
+
+[cols="1,1,2",opts="header"]
+|===
+|Field |  Size in bytes |  Description
+|`field_offset_0` | Variable, depending on the size of the object: 1, 2 or 4. |  Unsigned integer number stored in little-endian. Offset of the field 0 in the object, starting from the very first byte of the full object value (i.e. type_code position).
+|`field_offset_1`|  Variable, depending on the size of the object: 1, 2 or 4. |  Unsigned integer number stored in little-endian. Offset of the field 1 in the object.
+|...| ...| ...
+|`field_offset_N`|  Variable, depending on the size of the object: 1, 2 or 4.  | Unsigned integer number stored in little-endian. Offset of the field N in the object.
+
+|===
+
+== Raw data offset
+
+Optional field. Only present in the object if there are fields that have been written in raw mode. In this case, the HAS_RAW_DATA flag is set, and the raw data offset field is present, stored as a 4-byte little-endian value that points to the offset of the raw data within the complex object, counted from the very first byte of the header (i.e. this field is always greater than the header length).
+
+This field is used to position the stream so the user can start reading in raw mode.
+
+== Special types
+
+=== Wrapped Data
+
+Type code: 27;
+
+One or more binary objects can be wrapped in an array. This allows reading, storing, passing and writing objects efficiently without understanding their contents, by performing a simple byte copy.
+All cache operations return complex objects inside a wrapper (but not primitives).
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size |    Description
+|length|  4|   Signed integer number stored in little-endian. Size of the wrapped data in bytes.
+|payload| length|  Payload.
+|offset|  4|   Signed integer number stored in little-endian. Offset of the object within an array. Array can contain an object graph, this offset points to the root object.
+
+|===
+
+=== Binary enum
+
+Type code: 38
+
+Wrapped enumerable type. This type can be returned by the engine in place of the ordinary enum type. Enums should be written in this form when the Binary API is used.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |  Size  |  Description
+|type_id| 4|   Signed integer number in little-endian. See <<Type ID>> for details.
+|ordinal| 4|   Signed integer number stored in little-endian. Enumeration value ordinal, i.e. its position in the enum declaration, where the initial constant is assigned an ordinal of zero.
+
+|===
+
+== Serialization and Deserialization examples
+
+=== Reading objects
+
+A code template below shows how to read data of various types from an input byte stream:
+
+
+[source, java]
+----
+private static Object readDataObject(DataInputStream in) throws IOException {
+  byte code = in.readByte();
+
+  switch (code) {
+    case 1:
+      return in.readByte();
+    case 2:
+      return readShortLittleEndian(in);
+    case 3:
+      return readIntLittleEndian(in);
+    case 4:
+      return readLongLittleEndian(in);
+    case 27: {
+      int len = readIntLittleEndian(in);
+      // Assume 0 offset for simplicity
+      Object res = readDataObject(in);
+      int offset = readIntLittleEndian(in);
+      return res;
+    }
+    case 103:
+      byte ver = in.readByte();
+      assert ver == 1; // version
+      short flags = readShortLittleEndian(in);
+      int typeId = readIntLittleEndian(in);
+      int hash = readIntLittleEndian(in);
+      int len = readIntLittleEndian(in);
+      int schemaId = readIntLittleEndian(in);
+      int schemaOffset = readIntLittleEndian(in);
+      byte[] data = new byte[len - 24];
+      in.readFully(data);
+      return "Binary Object: " + typeId;
+    default:
+      throw new Error("Unsupported type: " + code);
+  }
+}
+----
+
+=== Int
+
+The following code snippet shows how to write and read a data object of type int, using a socket based output/input stream.
+
+
+[source, java]
+----
+// Write int data object
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+int val = 11;
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(val, out);
+
+// Read int data object
+DataInputStream in = new DataInputStream(socket.getInputStream());
+int typeCode = readByteLittleEndian(in);
+int readVal = readIntLittleEndian(in);
+----
+
+Refer to the link:example[example section] for the implementation of the `write...()` and `read...()` methods shown above.
+
+As another example, for String type, the structure would be:
+
+
+
+[cols="1,2",opts="header"]
+|===
+|Type |    Description
+| byte |    String type code, 9.
+|int | String length in UTF-8 bytes.
+|bytes |   Actual string.
+|===
+
+=== String
+
+The code snippet below shows how to write and read a String value following this format:
+
+
+[source, java]
+----
+private static void writeString (String str, DataOutputStream out) throws IOException {
+  writeByteLittleEndian(9, out); // type code for String
+
+  int strLen = str.getBytes("UTF-8").length; // length of the string
+  writeIntLittleEndian(strLen, out);
+
+  out.writeBytes(str);
+}
+
+private static String readString(DataInputStream in) throws IOException {
+  int type = readByteLittleEndian(in); // type code
+
+  int strLen = readIntLittleEndian(in); // length of the string
+
+  byte[] buf = new byte[strLen];
+
+  readFully(in, buf, 0, strLen);
+
+  return new String(buf);
+}
+----
+
+
+
+
+
diff --git a/docs/_docs/binary-client-protocol/key-value-queries.adoc b/docs/_docs/binary-client-protocol/key-value-queries.adoc
new file mode 100644
index 0000000..1acabc5
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/key-value-queries.adoc
@@ -0,0 +1,1416 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Key-Value Queries
+
+This page describes the key-value operations that you can perform with a cache. The key-value operations are equivalent to Ignite's native cache operations. Each operation has a link:binary-client-protocol/binary-client-protocol#standard-message-header[header] and operation-specific data.
+
+Refer to the Data Format page for a list of available data types and data format specification.
+
+== Operation Codes
+
+Upon successful handshake with an Ignite server node, a client can start performing various key-value operations by sending a request (see request/response structure below) with a specific operation code:
+
+
+[cols="2,1",opts="header"]
+|===
+
+
+|Operation|   OP_CODE
+|OP_CACHE_GET|    1000
+|OP_CACHE_PUT|    1001
+|OP_CACHE_PUT_IF_ABSENT|  1002
+|OP_CACHE_GET_ALL|    1003
+|OP_CACHE_PUT_ALL|    1004
+|OP_CACHE_GET_AND_PUT|    1005
+|OP_CACHE_GET_AND_REPLACE|    1006
+|OP_CACHE_GET_AND_REMOVE| 1007
+|OP_CACHE_GET_AND_PUT_IF_ABSENT|  1008
+|OP_CACHE_REPLACE|    1009
+|OP_CACHE_REPLACE_IF_EQUALS|  1010
+|OP_CACHE_CONTAINS_KEY|   1011
+|OP_CACHE_CONTAINS_KEYS|  1012
+|OP_CACHE_CLEAR|  1013
+|OP_CACHE_CLEAR_KEY|  1014
+|OP_CACHE_CLEAR_KEYS| 1015
+|OP_CACHE_REMOVE_KEY| 1016
+|OP_CACHE_REMOVE_IF_EQUALS|   1017
+|OP_CACHE_REMOVE_KEYS|    1018
+|OP_CACHE_REMOVE_ALL| 1019
+|OP_CACHE_GET_SIZE|   1020
+
+|===
+
+
+Note that the above-mentioned op_codes are part of the request header, as explained link:binary-client-protocol/binary-client-protocol#standard-message-header[here].
+
+[NOTE]
+====
+[discrete]
+=== Custom Methods Used in the Sample Code Snippets
+
+Some of the code snippets below use `readDataObject(...)` introduced in link:binary-client-protocol/binary-client-protocol#data-objects[this section] and little-endian versions of methods for reading and writing multiple-byte values that are covered in link:binary-client-protocol/binary-client-protocol#data-objects[this example].
+====
+
+== OP_CACHE_GET
+
+Retrieves a value from a cache by key. If the cache does not contain the key, null is returned.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key of the cache entry to be returned.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|Data Object| The value that corresponds to the given key. null if the cache does not contain the key.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+
+// Request header
+writeRequestHeader(10, OP_CACHE_GET, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting cache value (Data Object)
+int resTypeCode = readByteLittleEndian(in);
+int value = readIntLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_GET_ALL
+
+Retrieves multiple key-value pairs from a cache.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|int| Key count.
+|Data Object| Key for the cache entry.
+
+Repeat for as many times as the key count that is passed in the previous parameter.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type  | Description
+|Header|  Response header.
+|int| Result count.
+|Key Data Object + Value Data Object| Resulting key-value pairs. Keys that are not present in the cache are not included.
+
+Repeat for as many times as the result count that is obtained in the previous parameter.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(19, OP_CACHE_GET_ALL, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Key count
+writeIntLittleEndian(2, out);
+
+// Data object 1
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key1, out);   // Cache key
+
+// Data object 2
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key2, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Result count
+int resCount = readIntLittleEndian(in);
+
+for (int i = 0; i < resCount; i++) {
+  // Resulting data object
+  int resKeyTypeCode = readByteLittleEndian(in); // Integer type code
+  int resKey = readIntLittleEndian(in); // Cache key
+
+  // Resulting data object
+  int resValTypeCode = readByteLittleEndian(in); // Integer type code
+  int resValue = readIntLittleEndian(in); // Cache value
+}
+
+----
+--
+
+
+== OP_CACHE_PUT
+
+Puts a value with a given key to a cache (overwriting existing value if any).
+
+[cols="1,2",opts="header"]
+|===
+|Request Type  |  Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| Key for the cache entry.
+|Data Object| Value for the key.
+
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response Header
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_PUT, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_PUT_ALL
+
+Puts multiple key-value pairs to a cache (overwriting existing associations if any).
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|int| Key-value pair count
+|Key Data Object + Value Data Object| Key-value pairs.
+
+Repeat for as many times as the key-value pair count that is passed in the previous parameter.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(29, OP_CACHE_PUT_ALL, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Entry Count
+writeIntLittleEndian(2, out);
+
+// Cache key data object 1
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key1, out);   // Cache key
+
+// Cache value data object 1
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value1, out);   // Cache value
+
+// Cache key data object 2
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key2, out);   // Cache key
+
+// Cache value data object 2
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value2, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+----
+--
+
+
+== OP_CACHE_CONTAINS_KEY
+
+Returns a value indicating whether the given key is present in the cache.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| Key for the cache entry.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header | Response header.
+|bool  |  True when key is present, false otherwise.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(10, OP_CACHE_CONTAINS_KEY, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Result
+boolean res = readBooleanLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_CONTAINS_KEYS
+
+Returns a value indicating whether all given keys are present in the cache.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|int| Key count.
+|Data Object |Key for the cache entry.
+
+Repeat for as many times as the key count that is passed in the previous parameter.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|bool|    True when all keys are present, false otherwise.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(19, OP_CACHE_CONTAINS_KEYS, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+//Count
+writeIntLittleEndian(2, out);
+
+// Cache key data object 1
+int key1 = 11;
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key1, out);   // Cache key
+
+// Cache key data object 2
+int key2 = 22;
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key2, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting boolean value
+boolean res = readBooleanLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_GET_AND_PUT
+
+Puts a key and an associated value into a cache and returns the previous value for that key. If the cache does not contain the key, a new entry is created and null is returned.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key to be updated.
+|Data Object| The new value for the specified key.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |  Description
+|Header|  Response header.
+|Data Object| The existing value associated with the specified key, or null.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_GET_AND_PUT, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting cache value (Data Object)
+int resTypeCode = readByteLittleEndian(in);
+int value = readIntLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_GET_AND_REPLACE
+
+
+Replaces the value associated with the given key in the specified cache and returns the previous value. If the cache does not contain the key, the operation returns null without changing the cache.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type  |  Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key whose value is to be replaced.
+|Data Object| The new value to be associated with the specified key.
+
+|===
+
+[cols="1,2",opts="header"]
+|===
+| Response Type |  Description
+|Header|  Response header.
+|Data Object| The previous value associated with the given key, or null if the key does not exist.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_GET_AND_REPLACE, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting cache value (Data Object)
+int resTypeCode = readByteLittleEndian(in);
+int value = readIntLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_GET_AND_REMOVE
+
+Removes a specific entry from a cache and returns the entry's value. If the key does not exist, null is returned.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key to be removed.
+
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type  | Description
+|Header|  Response header.
+|Data Object| The existing value associated with the specified key or null, if the key does not exist.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(10, OP_CACHE_GET_AND_REMOVE, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting cache value (Data Object)
+int resTypeCode = readByteLittleEndian(in);
+int value = readIntLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_PUT_IF_ABSENT
+
+Puts an entry to a cache if that entry does not exist.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key of the entry to be added.
+|Data Object| The value of the key to be added.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|bool|    true if the new entry is created, false if the entry already exists.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_PUT_IF_ABSENT, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache Value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting boolean value
+boolean res = readBooleanLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_GET_AND_PUT_IF_ABSENT
+
+Puts an entry to a cache if it does not exist; otherwise, returns the existing value.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key of the entry to be added.
+|Data Object| The value of the entry to be added.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|Data Object| The existing value associated with the given key, or null if the cache did not contain the entry (in which case a new entry is created).
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_GET_AND_PUT_IF_ABSENT, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting cache value (Data Object)
+int resTypeCode = readByteLittleEndian(in);
+int value = readIntLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_REPLACE
+
+Puts a value with a given key to cache only if the key already exists.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| Key for the cache entry.
+|Data Object| Value for the key.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|bool|    Value indicating whether replace happened.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_REPLACE, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+boolean res = readBooleanLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_REPLACE_IF_EQUALS
+
+Puts a value with a given key to cache only if the key already exists and the current value equals the provided value.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| Key for the cache entry.
+|Data Object| Value to be compared with the existing value in the cache for the given key.
+|Data Object| New value for the key.
+|===
+
+[cols="1,2",opts="header"]
+|===
+| Response Type |   Description
+|Header|  Response header.
+|bool|    Value indicating whether replace happened.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(20, OP_CACHE_REPLACE_IF_EQUALS, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value to compare
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(newValue, out);   // New cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+boolean res = readBooleanLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_CLEAR
+
+Clears the cache without notifying listeners or cache writers. See the javadoc for the corresponding cache method.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type  | Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5, OP_CACHE_CLEAR, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_CLEAR_KEY
+
+Clears the cache key without notifying listeners or cache writers. See the javadoc for the corresponding cache method.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| Key for the cache entry.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(10, OP_CACHE_CLEAR_KEY, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_CLEAR_KEYS
+
+Clears the cache keys without notifying listeners or cache writers. See the javadoc for the corresponding cache method.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|int| Key count.
+|Data Object * count| Keys
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(19, OP_CACHE_CLEAR_KEYS, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// key count
+writeIntLittleEndian(2, out);
+
+// Cache key data object 1
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key1, out);   // Cache key
+
+// Cache key data object 2
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key2, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_REMOVE_KEY
+
+Removes an entry with a given key, notifying listeners and cache writers. See the javadoc for the corresponding cache method.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| Key for the cache entry.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|bool|    Value indicating whether remove happened.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(10, OP_CACHE_REMOVE_KEY, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key1, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting boolean value
+boolean res = readBooleanLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_REMOVE_IF_EQUALS
+
+Removes an entry with a given key if the specified value is equal to the current value, notifying listeners and cache writers.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type  |  Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key of the entry to be removed.
+|Data Object| The value to be compared with the current value.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|bool|    Value indicating whether remove happened.
+|===
+
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_REMOVE_IF_EQUALS, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting boolean value
+boolean res = readBooleanLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_GET_SIZE
+
+Gets the number of entries in a cache. This method is equivalent to `IgniteCache.size(CachePeekMode... peekModes)`.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|int| The number of peek modes you are going to request. When set to 0, CachePeekMode.ALL is used. When set to a positive value, you need to specify in the following fields the type of entries that should be counted: all, backup, primary, or near cache entries.
+|byte|    Indicates which type of entries should be counted: 0 = all, 1 = near cache entries, 2 = primary entries, 3 = backup entries.
+
+This field must be provided as many times as specified in the previous field.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|long|    Cache size.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(10, OP_CACHE_GET_SIZE, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Peek mode count: one peek mode follows
+writeIntLittleEndian(1, out);
+
+// Peek mode: 0 = all entries
+writeByteLittleEndian(0, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Number of entries in cache
+long cacheSize = readLongLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_REMOVE_KEYS
+
+Removes entries with given keys, notifying listeners and cache writers. See the javadoc for the corresponding cache method.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type  |  Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|int| Number of keys to remove.
+|Data Object| The key to be removed. If the cache does not contain the key, it is ignored.
+
+Repeat for as many times as the number of keys passed in the previous parameter.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(19, OP_CACHE_REMOVE_KEYS, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// key count
+writeIntLittleEndian(2, out);
+
+// Cache key data object 1
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key1, out);   // Cache key
+
+// Cache key data object 2
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key2, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_REMOVE_ALL
+
+Removes all entries from cache, notifying listeners and cache writers. See the javadoc for the corresponding cache method.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5, OP_CACHE_REMOVE_ALL, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response length
+final int len = readIntLittleEndian(in);
+
+// Request id
+long resReqId = readLongLittleEndian(in);
+
+// Status code (0 means success)
+int statusCode = readIntLittleEndian(in);
+
+----
+--
+
diff --git a/docs/_docs/binary-client-protocol/sql-and-scan-queries.adoc b/docs/_docs/binary-client-protocol/sql-and-scan-queries.adoc
new file mode 100644
index 0000000..168b5aa
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/sql-and-scan-queries.adoc
@@ -0,0 +1,634 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= SQL and Scan Queries
+
+== Operation codes
+
+Upon a successful handshake with an Ignite server node, a client can start performing various SQL and scan queries by sending a request (see request/response structure below) with a specific operation code:
+
+
+[cols="2,1",opts="header"]
+|===
+|Operation |   OP_CODE
+|OP_QUERY_SQL|    2002
+|OP_QUERY_SQL_CURSOR_GET_PAGE|    2003
+|OP_QUERY_SQL_FIELDS| 2004
+|OP_QUERY_SQL_FIELDS_CURSOR_GET_PAGE| 2005
+|OP_QUERY_SCAN|   2000
+|OP_QUERY_SCAN_CURSOR_GET_PAGE|   2001
+|OP_RESOURCE_CLOSE|   0
+|===
+
+
+Note that the above-mentioned op_codes are part of the request header, as explained link:binary-client-protocol/binary-client-protocol#standard-message-header[here].
+
+[NOTE]
+====
+[discrete]
+=== Custom Methods Used in Sample Code Snippets
+
+Some of the code snippets below use `readDataObject(...)` introduced in link:binary-client-protocol/binary-client-protocol#data-objects[this section] and little-endian versions of methods for reading and writing multiple-byte values that are covered in link:binary-client-protocol/binary-client-protocol#data-objects[this example].
+====
+
+
+== OP_QUERY_SQL
+
+Executes an SQL query over data stored in the cluster. The query returns the whole record (key and value).
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|String|  Name of a type or SQL table.
+|String|  SQL query string.
+|int| Query argument count.
+|Data Object| Query argument.
+
+Repeat for as many times as the query argument count that is passed in the previous parameter.
+|bool|    Distributed joins.
+|bool|    Local query.
+|bool|    Replicated only - whether the query contains only replicated tables.
+|int| Cursor page size.
+|long|    Timeout (milliseconds).
+
+Timeout value should be non-negative. Zero value disables timeout.
+|===
+
+
+Response includes the first page of the result.
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|long|    Cursor id. Can be closed with OP_RESOURCE_CLOSE.
+|int| Row count for the first page.
+|Key Data Object + Value Data Object| Records in the form of key-value pairs.
+
+Repeat for as many times as the row count obtained in the previous parameter.
+|bool|    Indicates whether more results are available to be fetched with OP_QUERY_SQL_CURSOR_GET_PAGE.
+When false, the query cursor is closed automatically.
+|===
+
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String entityName = "Person";
+int entityNameLength = getStrLen(entityName); // UTF-8 bytes
+
+String sql = "Select * from Person";
+int sqlLength = getStrLen(sql);
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(34 + entityNameLength + sqlLength, OP_QUERY_SQL, 1, out);
+
+// Cache id
+String queryCacheName = "personCache";
+writeIntLittleEndian(queryCacheName.hashCode(), out);
+
+// Flag = none
+writeByteLittleEndian(0, out);
+
+// Query Entity
+writeString(entityName, out);
+
+// SQL query
+writeString(sql, out);
+
+// Argument count
+writeIntLittleEndian(0, out);
+
+// Joins
+out.writeBoolean(false);
+
+// Local query
+out.writeBoolean(false);
+
+// Replicated
+out.writeBoolean(false);
+
+// cursor page size
+writeIntLittleEndian(1, out);
+
+// Timeout
+writeLongLittleEndian(5000, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+long cursorId = readLongLittleEndian(in);
+
+int rowCount = readIntLittleEndian(in);
+
+// Read entries (as user objects)
+for (int i = 0; i < rowCount; i++) {
+  Object key = readDataObject(in);
+  Object val = readDataObject(in);
+
+  System.out.println("CacheEntry: " + key + ", " + val);
+}
+
+boolean moreResults = readBooleanLittleEndian(in);
+
+----
+
+--
+
+
+
+== OP_QUERY_SQL_CURSOR_GET_PAGE
+
+Retrieves the next SQL query cursor page by cursor id from OP_QUERY_SQL.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|long|    Cursor id.
+|===
+
+
+Response format looks as follows:
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|long|    Cursor id.
+|int| Row count.
+|Key Data Object + Value Data Object| Records in the form of key-value pairs.
+
+Repeat for as many times as the row count obtained in the previous parameter.
+|bool|    Indicates whether more results are available to be fetched with OP_QUERY_SQL_CURSOR_GET_PAGE.
+When false, the query cursor is closed automatically.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(8, OP_QUERY_SQL_CURSOR_GET_PAGE, 1, out);
+
+// Cursor Id (received from Sql query operation)
+writeLongLittleEndian(cursorId, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+int rowCount = readIntLittleEndian(in);
+
+// Read entries (as user objects)
+for (int i = 0; i < rowCount; i++){
+  Object key = readDataObject(in);
+  Object val = readDataObject(in);
+
+  System.out.println("CacheEntry: " + key + ", " + val);
+}
+
+boolean moreResults = readBooleanLittleEndian(in);
+
+----
+
+--
+
+
+== OP_QUERY_SQL_FIELDS
+
+Performs SQL fields query.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|String|  Schema for the query; can be null, in which case the default PUBLIC schema is used.
+|int| Query cursor page size.
+|int| Max rows.
+|String|  SQL
+|int| Argument count.
+|Data Object| Query argument.
+
+Repeat for as many times as the query argument count that is passed in the previous parameter.
+
+|byte|    Statement type.
+
+ANY = 0
+
+SELECT = 1
+
+UPDATE = 2
+
+|bool|    Distributed joins.
+|bool|    Local query.
+|bool|    Replicated only - whether the query contains only replicated tables.
+|bool|    Enforce join order.
+|bool|    Collocated - whether your data is co-located.
+|bool|    Lazy query execution.
+|long|    Timeout (milliseconds).
+|bool|    Include field names.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|long|    Cursor id. Can be closed with OP_RESOURCE_CLOSE.
+|int| Field (column) count.
+|String (optional)|   Needed only when IncludeFieldNames is true in the request.
+
+Column name.
+
+Repeat for as many times as the field count that is retrieved in the previous parameter.
+
+|int| First page row count.
+|Data Object| Column (field) value. Repeat for as many times as the field count.
+
+Repeat for as many times as the row count that is retrieved in the previous parameter.
+|bool|    Indicates whether more results are available to be retrieved with OP_QUERY_SQL_FIELDS_CURSOR_GET_PAGE.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String sql = "Select id, salary from Person";
+int sqlLength = sql.getBytes("UTF-8").length;
+
+String sqlSchema = "PUBLIC";
+int sqlSchemaLength = sqlSchema.getBytes("UTF-8").length;
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(43 + sqlLength + sqlSchemaLength, OP_QUERY_SQL_FIELDS, 1, out);
+
+// Cache id
+String queryCacheName = "personCache";
+int cacheId = queryCacheName.hashCode();
+writeIntLittleEndian(cacheId, out);
+
+// Flag = none
+writeByteLittleEndian(0, out);
+
+// Schema
+writeByteLittleEndian(9, out);
+writeIntLittleEndian(sqlSchemaLength, out);
+out.writeBytes(sqlSchema); //sqlSchemaLength
+
+// cursor page size
+writeIntLittleEndian(2, out);
+
+// Max Rows
+writeIntLittleEndian(5, out);
+
+// SQL query
+writeByteLittleEndian(9, out);
+writeIntLittleEndian(sqlLength, out);
+out.writeBytes(sql);//sqlLength
+
+// Argument count
+writeIntLittleEndian(0, out);
+
+// Statement type
+writeByteLittleEndian(1, out);
+
+// Joins
+out.writeBoolean(false);
+
+// Local query
+out.writeBoolean(false);
+
+// Replicated
+out.writeBoolean(false);
+
+// Enforce join order
+out.writeBoolean(false);
+
+// collocated
+out.writeBoolean(false);
+
+// Lazy
+out.writeBoolean(false);
+
+// Timeout
+writeLongLittleEndian(5000, out);
+
+// Include field names
+out.writeBoolean(false);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+long cursorId = readLongLittleEndian(in);
+
+int colCount = readIntLittleEndian(in);
+
+int rowCount = readIntLittleEndian(in);
+
+// Read entries
+for (int i = 0; i < rowCount; i++) {
+  long id = (long) readDataObject(in);
+  int salary = (int) readDataObject(in);
+
+  System.out.println("Person id: " + id + "; Person Salary: " + salary);
+}
+
+boolean moreResults = readBooleanLittleEndian(in);
+
+----
+
+--
+
+
+== OP_QUERY_SQL_FIELDS_CURSOR_GET_PAGE
+
+Retrieves the next query result page by cursor id from OP_QUERY_SQL_FIELDS.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request header.
+|long|    Cursor id received from OP_QUERY_SQL_FIELDS.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|int| Row count.
+|Data Object| Column (field) value. Repeat for as many times as the field count.
+
+Repeat for as many times as the row count that is retrieved in the previous parameter.
+|bool|    Indicates whether more results are available to be retrieved with OP_QUERY_SQL_FIELDS_CURSOR_GET_PAGE
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(8, OP_QUERY_SQL_FIELDS_CURSOR_GET_PAGE, 1, out);
+
+// Cursor id (received from OP_QUERY_SQL_FIELDS)
+writeLongLittleEndian(cursorId, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+int rowCount = readIntLittleEndian(in);
+
+// Read entries (as user objects)
+for (int i = 0; i < rowCount; i++){
+   // read data objects * column count.
+}
+
+boolean moreResults = readBooleanLittleEndian(in);
+
+----
+
+--
+
+
+== OP_QUERY_SCAN
+
+Performs scan query.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Flag. Pass 0 for default, or 1 to keep the value in binary form.
+|Data Object| Filter object. Can be null if you are not going to filter data on the cluster. The filter class has to be added to the classpath of the server nodes.
+|byte|    Filter platform:
+
+JAVA = 1
+
+DOTNET = 2
+
+CPP = 3
+
+Pass this parameter only if filter object is not null.
+|int| Cursor page size.
+|int| Number of partitions to query (negative to query entire cache).
+|bool|    Local flag - whether this query should be executed on local node only.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|long|    Cursor id.
+|int| Row count.
+|Key Data Object + Value Data Object| Records in the form of key-value pairs.
+
+Repeat for as many times as the row count obtained in the previous parameter.
+|bool|    Indicates whether more results are available to be fetched with OP_QUERY_SCAN_CURSOR_GET_PAGE.
+When false, the query cursor is closed automatically.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_QUERY_SCAN, 1, out);
+
+// Cache id
+String queryCacheName = "personCache";
+writeIntLittleEndian(queryCacheName.hashCode(), out);
+
+// flags
+writeByteLittleEndian(0, out);
+
+// Filter object: type code 101 = null (no filter)
+writeByteLittleEndian(101, out);
+
+// Cursor page size
+writeIntLittleEndian(1, out);
+
+// Partition to query
+writeIntLittleEndian(-1, out);
+
+// local flag
+out.writeBoolean(false);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+//Response header
+readResponseHeader(in);
+
+// Cursor id
+long cursorId = readLongLittleEndian(in);
+
+int rowCount = readIntLittleEndian(in);
+
+// Read entries (as user objects)
+for (int i = 0; i < rowCount; i++) {
+  Object key = readDataObject(in);
+  Object val = readDataObject(in);
+
+  System.out.println("CacheEntry: " + key + ", " + val);
+}
+
+boolean moreResults = readBooleanLittleEndian(in);
+
+----
+
+--
+
+
+== OP_QUERY_SCAN_CURSOR_GET_PAGE
+
+
+Fetches the next SQL query cursor page by cursor id that is obtained from OP_QUERY_SCAN.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|long|    Cursor id.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|long|    Cursor id.
+|int| Row count.
+|Key Data Object + Value Data Object | Records in the form of key-value pairs.
+
+Repeat for as many times as the row count obtained in the previous parameter.
+|bool|    Indicates whether more results are available to be fetched with OP_QUERY_SCAN_CURSOR_GET_PAGE.
+When false, the query cursor is closed automatically.
+|===
+
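+The operation follows the same request/response pattern as the other cursor operations. Below is a minimal sketch, following the conventions of the previous snippets (`writeRequestHeader`, `readResponseHeader`, and the little-endian helpers) and assuming the cursor id was obtained from a previous OP_QUERY_SCAN call:
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header; the payload is the 8-byte cursor id
+writeRequestHeader(8, OP_QUERY_SCAN_CURSOR_GET_PAGE, 1, out);
+
+// Cursor id (received from OP_QUERY_SCAN)
+writeLongLittleEndian(cursorId, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Cursor id
+long cursorId = readLongLittleEndian(in);
+
+int rowCount = readIntLittleEndian(in);
+
+// Read entries (as user objects)
+for (int i = 0; i < rowCount; i++) {
+  Object key = readDataObject(in);
+  Object val = readDataObject(in);
+
+  System.out.println("CacheEntry: " + key + ", " + val);
+}
+
+boolean moreResults = readBooleanLittleEndian(in);
+----
+--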
+
+== OP_RESOURCE_CLOSE
+
+Closes a resource, such as a query cursor.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|long|    Resource id.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(8, OP_RESOURCE_CLOSE, 1, out);
+
+// Resource id
+long cursorId = 1;
+writeLongLittleEndian(cursorId, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+
+--
+
diff --git a/docs/_docs/clustering/baseline-topology.adoc b/docs/_docs/clustering/baseline-topology.adoc
new file mode 100644
index 0000000..4245dc7
--- /dev/null
+++ b/docs/_docs/clustering/baseline-topology.adoc
@@ -0,0 +1,159 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Baseline Topology
+
+:javaFile: {javaCodeDir}/ClusterAPI.java
+:csharpFile: {csharpCodeDir}/BaselineTopology.cs
+
+The _baseline topology_ is a set of nodes meant to hold data.
+The concept of baseline topology was introduced to give you the ability to control when you want to
+link:data-modeling/data-partitioning#rebalancing[rebalance the data in the cluster]. For example, if
+you have a cluster of 3 nodes where the data is distributed between the nodes, and you add 2 more nodes, the rebalancing
+process re-distributes the data between all 5 nodes. The rebalancing process happens when the
+baseline topology changes, which can either happen automatically or be triggered manually.
+
+The baseline topology only includes server nodes; client nodes are never included because they do not store data.
+
+The purpose of the baseline topology is to:
+
+* Avoid unnecessary data transfer when a server node leaves the cluster for a short period of time, for example, due to
+occasional network failures or scheduled server maintenance.
+* Give you the ability to control when you want to rebalance the data.
+
+Baseline topology changes automatically when <<Baseline Topology Autoadjustment>> is enabled. This is the default
+behavior for pure in-memory clusters. For persistent clusters, the baseline topology autoadjustment feature must be enabled
+manually. By default, it is disabled and you have to change the baseline topology manually. You can change the baseline
+topology using the link:control-script#activation-deactivation-and-topology-management[control script].
+
+[CAUTION]
+====
+Any attempt to create a cache while the baseline topology is being changed results in an exception.
+For more details, see link:key-value-api/basic-cache-operations#creating-caches-dynamically[Creating Caches Dynamically].
+====
+
+== Baseline Topology in Pure In-Memory Clusters
+In pure in-memory clusters, the default behavior is to adjust the baseline topology to the set of all server nodes
+automatically when you add or remove server nodes from the cluster. The data is rebalanced automatically, too.
+You can disable the baseline autoadjustment feature and manage baseline topology manually.
+
+NOTE: In previous releases, baseline topology was relevant only to clusters with persistence.
+However, since version 2.8.0, it applies to in-memory clusters as well.
+If you have a pure in-memory cluster, the transition should be transparent for you because, by default, the baseline topology changes automatically when a server node leaves or joins the cluster.
+
+== Baseline Topology in Persistent Clusters
+
+If your cluster has at least one data region in which persistence is enabled, the cluster is inactive when you start it for the first time.
+In the inactive state, all operations are prohibited.
+The cluster must be activated before you can create caches and upload data.
+Cluster activation sets the current set of server nodes as the baseline topology.
+When you restart the cluster, it is activated automatically as soon as all nodes that are registered in the baseline topology join in.
+However, if some nodes do not join after a restart, you must activate the cluster manually.
+
+You can activate the cluster using one of the following tools:
+
+* link:control-script#activating-cluster[Control script]
+* link:restapi#change-cluster-state[REST API command]
+* Programmatically:
++
+[tabs]
+--
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=activate,indent=0]
+----
+
+tab:C#/.NET[]
+[source, csharp]
+----
+include::{csharpFile}[tags=activate,indent=0]
+----
+tab:C++[]
+--
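+
+For reference, here is a minimal inline Java sketch of programmatic activation (a sketch assuming the `ClusterState` API introduced in Ignite 2.9):
+
+[source, java]
+----
+Ignite ignite = Ignition.start();
+
+// Activate the cluster. On first activation, the current set of
+// server nodes becomes the baseline topology.
+ignite.cluster().state(ClusterState.ACTIVE);
+----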
+
+== Baseline Topology Autoadjustment
+
+Instead of changing the baseline topology manually, you can let the cluster do it automatically. This feature is called
+Baseline Topology Autoadjustment. When it is enabled, the cluster monitors the state of its server nodes and sets the
+baseline on the current topology automatically when the cluster topology is stable for a configurable period of time.
+
+Here is what happens when the set of nodes in the cluster changes:
+
+* The cluster waits for a configurable amount of time (5 min by default).
+* If there are no other topology changes during this period, Ignite sets the baseline topology to the current set of nodes.
+* If the set of nodes changes during this period, the timeout is updated.
+
+Each change in the set of nodes resets the timeout for autoadjustment.
+When the timeout expires and the current set of nodes is different from the baseline topology (for example, new nodes
+are present or some old nodes left), Ignite changes the baseline topology to the current set of nodes.
+This also triggers data rebalancing.
+
+The autoadjustment timeout allows you to avoid data rebalancing when a node disconnects for a short period due to a
+temporary network problem or when you want to quickly restart the node.
+You can set the timeout to a higher value if you expect temporary changes in the set of nodes and don't want to change
+the baseline topology.
+
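+For example, a minimal Java sketch (assuming the standard `IgniteCluster` autoadjustment API; the 30-second value is illustrative) that widens the autoadjustment window:
+
+[source, java]
+----
+Ignite ignite = Ignition.start();
+
+// Enable autoadjustment and require 30 seconds of stable topology
+// before the baseline is reset to the current set of nodes.
+ignite.cluster().baselineAutoAdjustEnabled(true);
+ignite.cluster().baselineAutoAdjustTimeout(30_000);
+----
+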
+Baseline topology is autoadjusted only if the cluster is in the active state.
+
+To enable automatic baseline adjustment, you can use the
+link:control-script#enabling-baseline-topology-autoadjustment[control script] or the
+programmatic API methods shown below:
+
+[tabs]
+--
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=enable-autoadjustment,indent=0]
+----
+
+tab:C#/.NET[]
+[source, csharp]
+----
+include::{csharpFile}[tags=enable-autoadjustment,indent=0]
+----
+tab:C++[]
+--
+
+
+To disable automatic baseline adjustment, use the same method with `false` passed in:
+
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=disable-autoadjustment,indent=0]
+----
+
+tab:C#/.NET[]
+[source, csharp]
+----
+include::{csharpFile}[tags=disable-autoadjustment,indent=0]
+----
+tab:C++[]
+--
+
+
+== Monitoring Baseline Topology
+
+You can use the following tools to monitor and/or manage the baseline topology:
+
+* link:control-script[Control Script]
+* link:monitoring-metrics/metrics#monitoring-topology[JMX Beans]
+
diff --git a/docs/_docs/clustering/clustering.adoc b/docs/_docs/clustering/clustering.adoc
new file mode 100644
index 0000000..8496a3c
--- /dev/null
+++ b/docs/_docs/clustering/clustering.adoc
@@ -0,0 +1,51 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Clustering
+
+== Overview
+
+In this chapter, we discuss different ways nodes can discover each other to form a cluster.
+
+On start-up, a node is assigned one of two roles: _server node_ or _client node_.
+Server nodes are the workhorses of the cluster; they cache data, execute compute tasks, etc.
+Client nodes join the topology as regular nodes but they do not store data. Client nodes are used to stream data into the cluster and execute user queries.
+
+To form a cluster, each node must be able to connect to all other nodes. To ensure that, a proper <<Discovery Mechanisms,discovery mechanism>> must be configured.
+
+
+NOTE: In addition to client nodes, you can use Thin Clients to define and manipulate data in the cluster.
+Learn more about the thin clients in the link:thin-clients/getting-started-with-thin-clients[Thin Clients] section.
+
+
+image::images/ignite_clustering.png[Ignite Cluster]
+
+
+
+== Discovery Mechanisms
+
+Nodes can automatically discover each other and form a cluster.
+This allows you to scale out when needed without having to restart the whole cluster.
+Developers can also leverage Ignite's hybrid cloud support, which allows establishing connections between private and public clouds such as Amazon Web Services, giving them the best of both worlds.
+
+Ignite provides two implementations of the discovery mechanism intended for different usage scenarios:
+
+* link:clustering/tcp-ip-discovery[TCP/IP Discovery] is designed and optimized for 100s of nodes.
+* link:clustering/zookeeper-discovery[ZooKeeper Discovery] allows scaling Ignite clusters to 100s and 1000s of nodes while preserving linear scalability and performance.
+
+
+
+
+
+
diff --git a/docs/_docs/clustering/connect-client-nodes.adoc b/docs/_docs/clustering/connect-client-nodes.adoc
new file mode 100644
index 0000000..7373ed7
--- /dev/null
+++ b/docs/_docs/clustering/connect-client-nodes.adoc
@@ -0,0 +1,106 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Connecting Client Nodes
+:javaFile: {javaCodeDir}/ClientNodes.java
+
+
+== Reconnecting a Client Node
+
+A client node can get disconnected from the cluster in several cases:
+
+* The client node cannot re-establish the connection with the server node due to network issues.
+* Connection with the server node was broken for some time; the client node is able to re-establish the connection with the cluster, but the server already dropped the client node since the server did not receive client heartbeats.
+* Slow clients can be kicked out by the cluster.
+
+
+When a client determines that it is disconnected from the cluster, it assigns a new node ID to itself and tries to reconnect to the cluster.
+Note that this has a side effect: the ID property of the local `ClusterNode` changes in the case of a client reconnection.
+This means that any application logic that relied on the ID may be affected.
+
+You can disable client reconnection in the node configuration:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::code-snippets/xml/client-node.xml[tags=ignite-config, indent=0]
+----
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=disable-reconnection, indent=0]
+----
+tab:C#/.NET[]
+tab:C++[unsupported]
+--
+
+
+While a client is in a disconnected state and an attempt to reconnect is in progress, the Ignite API throws an `IgniteClientDisconnectedException`.
+The exception contains a `future` that represents a re-connection operation.
+You can use the `future` to wait until the operation is complete.
+//This future can also be obtained using the `IgniteCluster.clientReconnectFuture()` method.
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=reconnect, indent=0]
+----
+tab:C#/.NET[]
+tab:C++[]
+--
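+
+For instance, here is a minimal sketch of this pattern (the cache, key, and value are illustrative); note that `IgniteCache` operations wrap the exception in a `javax.cache.CacheException`:
+
+[source, java]
+----
+try {
+    cache.put(1, "value");
+} catch (CacheException e) {
+    if (e.getCause() instanceof IgniteClientDisconnectedException) {
+        IgniteClientDisconnectedException cause =
+                (IgniteClientDisconnectedException) e.getCause();
+
+        // Wait until the client reconnects, then retry the operation.
+        cause.reconnectFuture().get();
+
+        cache.put(1, "value");
+    }
+}
+----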
+
+//When the client node reconnects to the cluster,
+//This future can also be obtained using the `IgniteCluster.clientReconnectFuture()` method.
+
+
+== Client Disconnected/Reconnected Events
+
+There are two discovery events that are triggered on the client node when it is disconnected from or reconnected to the cluster:
+
+* `EVT_CLIENT_NODE_DISCONNECTED`
+* `EVT_CLIENT_NODE_RECONNECTED`
+
+You can listen to these events and execute custom actions in response.
+Refer to the link:events/listening-to-events[Listening to events] section for a code example.
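+
+For orientation, a minimal local-listener sketch (assuming an `Ignite` instance named `ignite` and that these event types are enabled via `IgniteConfiguration.setIncludeEventTypes`):
+
+[source, java]
+----
+ignite.events().localListen(evt -> {
+    System.out.println("Client connectivity changed: " + evt.name());
+
+    return true; // Return true to keep the listener registered.
+}, EventType.EVT_CLIENT_NODE_DISCONNECTED, EventType.EVT_CLIENT_NODE_RECONNECTED);
+----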
+
+== Managing Slow Client Nodes
+
+In many deployments, client nodes are launched on slower machines with lower network throughput.
+In these scenarios, it is possible that the servers will generate load (such as continuous query notifications) that the clients cannot handle.
+This can result in a growing queue of outbound messages on the servers, which may eventually cause either an out-of-memory situation on the server or block the whole cluster.
+
+To handle these situations, you can configure the maximum number of outgoing messages for client nodes.
+If the size of the outbound queue exceeds this value, the client node is disconnected from the cluster.
+
+The examples below show how to configure a slow client queue limit.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::code-snippets/xml/client-node.xml[tags=!*;ignite-config;slow-client, indent=0]
+----
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=slow-clients, indent=0]
+----
+tab:C#/.NET[]
+tab:C++[unsupported]
+--
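+
+For reference, a minimal inline Java sketch of the same setting (the queue limit value is illustrative):
+
+[source, java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
+
+// Disconnect a client node whose outbound message queue
+// grows beyond 1000 messages.
+commSpi.setSlowClientQueueLimit(1000);
+
+cfg.setCommunicationSpi(commSpi);
+
+Ignite ignite = Ignition.start(cfg);
+----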
diff --git a/docs/_docs/clustering/discovery-in-the-cloud.adoc b/docs/_docs/clustering/discovery-in-the-cloud.adoc
new file mode 100644
index 0000000..6372015
--- /dev/null
+++ b/docs/_docs/clustering/discovery-in-the-cloud.adoc
@@ -0,0 +1,270 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Discovery in the Cloud
+
+:javaFile: {javaCodeDir}/DiscoveryInTheCloud.java
+
+Node discovery on a cloud platform is usually more challenging
+because most virtual environments are subject to the following
+limitations:
+
+* Multicast is disabled;
+* TCP addresses change every time a new image is started.
+
+Although you can use TCP-based discovery in the absence of multicast,
+you still have to deal with constantly changing IP addresses, which
+makes configurations based on static IPs virtually unusable in such
+environments.
+
+To mitigate this problem, Ignite supports a number of IP finders designed to work in the cloud:
+
+* Apache jclouds IP Finder
+* Amazon S3 IP Finder
+* Amazon ELB IP Finder
+* Google Cloud Storage IP Finder
+
+
+TIP: Cloud-based IP Finders allow you to create your configuration once and reuse it for all instances.
+
+== Apache jclouds IP Finder
+
+Ignite supports automatic node discovery on cloud platforms through the Apache jclouds multi-cloud toolkit via `TcpDiscoveryCloudIpFinder`.
+For information about Apache jclouds please refer to https://jclouds.apache.org[jclouds.apache.org].
+
+The IP finder forms node addresses by getting the private and public IP addresses of all virtual machines running on the cloud and adding a port number to them.
+The port is the one that is set with either `TcpDiscoverySpi.setLocalPort(int)` or `TcpDiscoverySpi.DFLT_PORT`.
+This way all the nodes can try to connect to any formed IP address and initiate automatic grid node discovery.
+
+Refer to https://jclouds.apache.org/reference/providers/#compute[Apache jclouds providers section] to get the list of supported cloud platforms.
+
+CAUTION: All virtual machines must start Ignite instances on the same port, otherwise they will not be able to discover each other using this IP finder.
+
+Here is an example of how to configure Apache jclouds based IP finder:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+  <property name="discoverySpi">
+    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+      <property name="ipFinder">
+        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.cloud.TcpDiscoveryCloudIpFinder">
+            <!-- Configuration for Google Compute Engine. -->
+            <property name="provider" value="google-compute-engine"/>
+            <property name="identity" value="YOUR_SERVICE_ACCOUNT_EMAIL"/>
+            <property name="credentialPath" value="PATH_YOUR_PEM_FILE"/>
+            <property name="zones">
+            <list>
+                <value>us-central1-a</value>
+                <value>asia-east1-a</value>
+            </list>
+            </property>
+        </bean>
+      </property>
+    </bean>
+  </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=jclouds,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+
+== Amazon S3 IP Finder
+
+Amazon S3-based discovery allows Ignite nodes to register their IP addresses on start-up in an Amazon S3 store.
+This way other nodes can try to connect to any of the IP addresses stored in S3 and initiate automatic node discovery.
+To use S3 based automatic node discovery, you need to configure the `TcpDiscoveryS3IpFinder` type of `ipFinder`.
+
+CAUTION: You must link:setup#enabling-modules[enable the 'ignite-aws' module].
+
+Here is an example of how to configure Amazon S3 based IP finder:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+  <property name="discoverySpi">
+    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+      <property name="ipFinder">
+        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
+          <property name="awsCredentials" ref="aws.creds"/>
+          <property name="bucketName" value="YOUR_BUCKET_NAME"/>
+        </bean>
+      </property>
+    </bean>
+  </property>
+</bean>
+
+<!-- AWS credentials. Provide your access key ID and secret access key. -->
+<bean id="aws.creds" class="com.amazonaws.auth.BasicAWSCredentials">
+  <constructor-arg value="YOUR_ACCESS_KEY_ID" />
+  <constructor-arg value="YOUR_SECRET_ACCESS_KEY" />
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=aws1,indent=0]
+----
+
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+You can also use an *Instance Profile*-based AWS credentials provider:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+  <property name="discoverySpi">
+    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+      <property name="ipFinder">
+        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
+          <property name="awsCredentialsProvider" ref="aws.creds"/>
+          <property name="bucketName" value="YOUR_BUCKET_NAME"/>
+        </bean>
+      </property>
+    </bean>
+  </property>
+</bean>
+
+<!-- Instance Profile based credentials -->
+<bean id="aws.creds" class="com.amazonaws.auth.InstanceProfileCredentialsProvider">
+  <constructor-arg value="false" />
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=aws2,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+
+== Amazon ELB Based Discovery
+
+AWS ELB-based IP finder does not require nodes to register their IP
+addresses. The IP finder automatically fetches addresses of all the
+nodes connected under an ELB and uses them to connect to the cluster. To
+use ELB based automatic node discovery, you need to configure the
+`TcpDiscoveryElbIpFinder` type of `ipFinder`.
+
+Here is an example of how to configure Amazon ELB based IP finder:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+  <property name="discoverySpi">
+    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+      <property name="ipFinder">
+        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.elb.TcpDiscoveryElbIpFinder">
+          <property name="credentialsProvider">
+              <bean class="com.amazonaws.auth.AWSStaticCredentialsProvider">
+                  <constructor-arg ref="aws.creds"/>
+              </bean>
+          </property>
+          <property name="region" value="YOUR_ELB_REGION_NAME"/>
+          <property name="loadBalancerName" value="YOUR_AWS_ELB_NAME"/>
+        </bean>
+      </property>
+    </bean>
+  </property>
+</bean>
+
+<!-- AWS credentials. Provide your access key ID and secret access key. -->
+<bean id="aws.creds" class="com.amazonaws.auth.BasicAWSCredentials">
+  <constructor-arg value="YOUR_ACCESS_KEY_ID" />
+  <constructor-arg value="YOUR_SECRET_ACCESS_KEY" />
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=awsElb,indent=0]
+----
+
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+
+== Google Compute Discovery
+
+Ignite supports automatic node discovery that utilizes a Google Cloud Storage bucket.
+This mechanism is implemented in `TcpDiscoveryGoogleStorageIpFinder`.
+On start-up, each node registers its IP address in the storage and discovers other nodes by reading the storage.
+
+IMPORTANT: To use `TcpDiscoveryGoogleStorageIpFinder`, enable the `ignite-gce` link:setup#enabling-modules[module] in your application.
+
+Here is an example of how to configure a Google Cloud Storage based IP finder:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+  <property name="discoverySpi">
+    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+      <property name="ipFinder">
+        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.gce.TcpDiscoveryGoogleStorageIpFinder">
+          <property name="projectName" ref="YOUR_GOOGLE_PLATFORM_PROJECT_NAME"/>
+          <property name="bucketName" value="YOUR_BUCKET_NAME"/>
+          <property name="serviceAccountId" value="YOUR_SERVICE_ACCOUNT_ID"/>
+          <property name="serviceAccountP12FilePath" value="PATH_TO_YOUR_PKCS12_KEY"/>
+        </bean>
+      </property>
+    </bean>
+  </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=google,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
diff --git a/docs/_docs/clustering/network-configuration.adoc b/docs/_docs/clustering/network-configuration.adoc
new file mode 100644
index 0000000..8d47b60
--- /dev/null
+++ b/docs/_docs/clustering/network-configuration.adoc
@@ -0,0 +1,198 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Network Configuration
+:javaFile: {javaCodeDir}/NetworkConfiguration.java
+:xmlFile: code-snippets/xml/network-configuration.xml
+
+== IPv4 vs IPv6
+
+Ignite supports both IPv4 and IPv6, but a mixed IPv4/IPv6 stack can sometimes cause nodes to become detached from the cluster. Unless you require IPv6, a possible solution is to restrict Ignite to IPv4 by setting the `-Djava.net.preferIPv4Stack=true` JVM parameter.
+
+
+== Discovery
+This section describes the network parameters of the default discovery mechanism, which uses the TCP/IP protocol to exchange discovery messages and is implemented in the `TcpDiscoverySpi` class.
+
+You can change the properties of the discovery mechanism as follows:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;discovery, indent=0]
+----
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=discovery, indent=0]
+
+----
+
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+
+--
+
+The following table describes some of the most important properties of `TcpDiscoverySpi`.
+You can find the complete list of properties in the javadoc:org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi[] javadoc.
+
+[CAUTION]
+====
+You should initialize the `IgniteConfiguration.localHost` or `TcpDiscoverySpi.localAddress` parameter with the network
+interface that will be used for inter-node communication. By default, a node binds to and listens on all available IP
+addresses of the environment it runs on. This can prolong the detection of node failures if some of the node's addresses
+are not reachable from other cluster nodes.
+====
+
+[cols="1,2,1",opts="header"]
+|===
+|Property | Description| Default Value
+| `localAddress`| Local host IP address used for discovery. If set, overrides the `IgniteConfiguration.localHost` setting. | By default, a node binds to all available network addresses. If there is a non-loopback address available, then `java.net.InetAddress.getLocalHost()` is used.
+| `localPort`  | The port that the node binds to. If set to a non-default value, other cluster nodes must know this port to be able to discover the node. | `47500`
+| `localPortRange`| If the `localPort` is busy, the node attempts to bind to the next port (incremented by 1) and continues this process until it finds a free port. The `localPortRange` property defines the number of ports the node will try (starting from `localPort`).
+   | `100`
+| `soLinger`| Specifies a linger-on-close timeout for the TCP sockets used by the Discovery SPI. See the Java `Socket.setSoLinger` API
+for details on how to adjust this setting. In Ignite, the timeout defaults to a non-negative value to prevent
+link:https://bugs.openjdk.java.net/browse/JDK-8219658[potential deadlocks with SSL connections, window=_blank] but,
+as a side effect, this can prolong the detection of cluster node failures. Alternatively, update your JRE to a version
+where the SSL issue is fixed and adjust this setting accordingly. | `0`
+| `reconnectCount` | The number of times the node tries to (re)establish connection to another node. |`10`
+| `networkTimeout` |  The maximum network timeout in milliseconds for network operations. |`5000`
+| `socketTimeout` |  The socket operations timeout. This timeout is used to limit connection time and write-to-socket time. |`5000`
+| `ackTimeout`| The acknowledgement timeout for discovery messages.
+If an acknowledgement is not received within this timeout, the discovery SPI tries to resend the message.  |  `5000`
+| `joinTimeout` |  The join timeout defines how much time the node waits to join a cluster. If a non-shared IP finder is used and the node fails to connect to any address from the IP finder, the node keeps trying to join within this timeout. If all addresses are unresponsive, an exception is thrown and the node terminates.
+`0` means waiting indefinitely.  | `0`
+| `statisticsPrintFrequency` | Defines how often the node prints discovery statistics to the log.
+`0` indicates no printing. If the value is greater than 0, and quiet mode is disabled, statistics are printed at the INFO level once every period. | `0`
+
+|===
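+
+As a minimal sketch, several of these properties can also be set programmatically; the port and timeout values below are illustrative placeholders:
+
+[source, java]
+----
+TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
+
+// Bind discovery to a specific port and limit the number of ports to try.
+discoverySpi.setLocalPort(47500);
+discoverySpi.setLocalPortRange(100);
+
+// Give up joining the cluster after 10 seconds.
+discoverySpi.setJoinTimeout(10_000);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setDiscoverySpi(discoverySpi);
+----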
+
+
+
+== Communication
+
+After the nodes discover each other and the cluster is formed, the nodes exchange messages via the communication SPI.
+The messages represent distributed cluster operations, such as task execution, data modification operations, queries, etc.
+The default implementation of the communication SPI uses the TCP/IP protocol to exchange messages (`TcpCommunicationSpi`).
+This section describes the properties of `TcpCommunicationSpi`.
+
+Each node opens a local communication port and address to which other nodes connect and send messages.
+At startup, the node tries to bind to the specified communication port (default is 47100).
+If the port is already used, the node increments the port number until it finds a free port.
+The number of attempts is defined by the `localPortRange` property (defaults to 100).
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;communication-spi, indent=0]
+----
+
+tab:Java[]
+[source, java]
+----
+include::{javaCodeDir}/ClusteringOverview.java[tag=commSpi,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ClusteringOverview.cs[tag=CommunicationSPI,indent=0]
+----
+tab:C++[unsupported]
+--
+
+Below is a list of some important properties of `TcpCommunicationSpi`.
+You can find the list of all properties in the javadoc:org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi[] javadoc.
+
+[cols="1,2,1",opts="header"]
+|===
+|Property | Description| Default Value
+| `localAddress` | The local address for the communication SPI to bind to. If set, overrides the `IgniteConfiguration.localHost` setting. |
+
+| `localPort` | The local port that the node uses for communication.  | `47100`
+
+| `localPortRange` | The number of ports the node tries to bind to sequentially until it finds a free one. |  `100`
+
+|`tcpNoDelay` | Sets the value for the `TCP_NODELAY` socket option. Each socket accepted or created will use the provided value.
+
+The option should be set to `true` (default) to reduce request/response time during communication over TCP. In most cases we do not recommend changing this option.| `true`
+
+|`idleConnectionTimeout` | The maximum idle connection timeout (in milliseconds) after which the connection is closed. |  `600000`
+
+|`usePairedConnections` | Whether a dual socket connection between the nodes should be enforced. If set to `true`, two separate connections will be established between the communicating nodes: one for outgoing messages, and one for incoming messages. When set to `false`, a single TCP connection will be used for both directions.
+This flag is useful on some operating systems when messages take too long to be delivered.   | `false`
+
+| `directBuffer` | A boolean flag that indicates whether to allocate NIO direct buffer instead of NIO heap allocation buffer. Although direct buffers perform better, in some cases (especially on Windows) they may cause JVM crashes. If that happens in your environment, set this property to `false`.   | `true`
+
+|`directSendBuffer` | Whether to use NIO direct buffer instead of NIO heap allocation buffer when sending messages.   | `false`
+
+|`socketReceiveBuffer`| Receive buffer size for sockets created or accepted by the communication SPI. If set to `0`, the operating system's default value is used. | `0`
+
+|`socketSendBuffer` | Send buffer size for sockets created or accepted by the communication SPI. If set to `0`, the operating system's default value is used. | `0`
+
+|===
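+
+As a minimal sketch, some of these properties can be set programmatically as follows (the values are illustrative placeholders):
+
+[source, java]
+----
+TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
+
+// Use a custom communication port.
+commSpi.setLocalPort(47100);
+
+// Close connections that stay idle for more than 10 minutes.
+commSpi.setIdleConnectionTimeout(600_000);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setCommunicationSpi(commSpi);
+----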
+
+
+== Connection Timeouts
+
+////
+//Connection timeout is a period of time a cluster node waits before a connection to another node is considered "failed".
+
+Every node in a cluster is connected to every other node.
+When node A sends a message to node B, and node B does not reply in `failureDetectionTimeout` (in milliseconds), then node B will be removed from the cluster.
+////
+
+There are several properties that define connection timeouts:
+
+[cols="",opts="header"]
+|===
+|Property | Description | Default Value
+| `IgniteConfiguration.failureDetectionTimeout` | A timeout for basic network operations for server nodes. | `10000`
+
+| `IgniteConfiguration.clientFailureDetectionTimeout` | A timeout for basic network operations for client nodes.  | `30000`
+
+|===
+
+//CAUTION: The timeout automatically controls configuration parameters of `TcpDiscoverySpi`, such as socket timeout, message acknowledgment timeout and others. If any of these parameters is set explicitly, then the failure timeout setting will be ignored.
+
+:ths: &#8239;
+
+You can set the failure detection timeout in the node configuration as shown in the example below.
+//The default value is 10{ths}000 ms for server nodes and 30{ths}000 ms for client nodes.
+The default values allow the discovery SPI to work reliably in most on-premise and containerized deployments.
+However, in stable low-latency networks, you can set the parameter to {tilde}200 milliseconds in order to detect and react to failures more quickly.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/network-configuration.xml[tags=!*;ignite-config;failure-detection-timeout, indent=0]
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=failure-detection-timeout, indent=0]
+----
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+
+--
diff --git a/docs/_docs/clustering/running-client-nodes-behind-nat.adoc b/docs/_docs/clustering/running-client-nodes-behind-nat.adoc
new file mode 100644
index 0000000..f60285a
--- /dev/null
+++ b/docs/_docs/clustering/running-client-nodes-behind-nat.adoc
@@ -0,0 +1,47 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Running Client Nodes Behind NAT
+
+If your client nodes are deployed behind a NAT, the server nodes won't be able to establish a connection with the clients because of the limitations of the communication protocol.
+This includes deployments where client nodes run in virtual environments (like Kubernetes) and the server nodes are deployed elsewhere.
+
+For cases like this, you need to enable a special mode of communication:
+
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+include::code-snippets/xml/client-behind-nat.xml[tags=ignite-config;!discovery,indent=0]
+----
+tab:Java[]
+[source, java]
+----
+include::{javaCodeDir}/Discovery.java[tags=client-behind-nat,indent=0]
+----
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+--
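+
+For illustration, here is a minimal sketch of a client-side configuration, assuming the `TcpCommunicationSpi.forceClientToServerConnections` property mentioned in the limitations below:
+
+[source, java]
+----
+// Client node: force connections to be opened from the client to the servers.
+TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
+commSpi.setForceClientToServerConnections(true);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setClientMode(true);
+cfg.setCommunicationSpi(commSpi);
+----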
+
+== Limitations
+
+* This mode cannot be used when `TcpCommunicationSpi.usePairedConnections = true` on both server and client nodes.
+
+* Peer class loading for link:key-value-api/continuous-queries[continuous queries (transformers and filters)] does not work when a continuous query is started from a client node with `forceClientToServerConnections = true`.
+You need to add the corresponding classes to the classpath of every server node.
+
+* This property can only be used on client nodes. This limitation will be addressed in future releases.
diff --git a/docs/_docs/clustering/tcp-ip-discovery.adoc b/docs/_docs/clustering/tcp-ip-discovery.adoc
new file mode 100644
index 0000000..44fdd53
--- /dev/null
+++ b/docs/_docs/clustering/tcp-ip-discovery.adoc
@@ -0,0 +1,426 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= TCP/IP Discovery
+
+:javaFile: {javaCodeDir}/TcpIpDiscovery.java
+
+In an Ignite cluster, nodes can discover each other by using `DiscoverySpi`.
+Ignite provides `TcpDiscoverySpi` as the default implementation of `DiscoverySpi`, which uses TCP/IP for node discovery.
+The discovery SPI can be configured for multicast-based and static IP-based node discovery.
+
+== Multicast IP Finder
+
+`TcpDiscoveryMulticastIpFinder` uses Multicast to discover other nodes
+and is the default IP finder. Here is an example of how to configure
+this finder via a Spring XML file or programmatically:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/discovery-multicast.xml[tags=ignite-config, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=multicast,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ClusteringTcpIpDiscovery.cs[tag=multicast,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Static IP Finder
+
+Static IP Finder, implemented in `TcpDiscoveryVmIpFinder`, allows you to specify a set of IP addresses and ports that will be checked for node discovery.
+
+You must provide at least one IP address of a remote node, but it is usually
+advisable to provide 2 or 3 addresses of nodes that you plan to start in the future. Once a
+connection to any of the provided IP addresses is established, Ignite automatically discovers all other nodes.
+
+[TIP]
+====
+Instead of specifying addresses in the configuration, you can specify them in
+the `IGNITE_TCP_DISCOVERY_ADDRESSES` environment variable or in the system property
+with the same name. Addresses should be comma separated and may optionally contain
+a port range.
+====
+
+[TIP]
+====
+By default, the `TcpDiscoveryVmIpFinder` is used in the 'non-shared' mode.
+If you plan to start a server node, then in this mode the list of IP addresses should contain the address of the local node as well. In this case, the node will not wait until other nodes join the cluster; instead, it will become the first cluster node and start to operate normally.
+====
+
+You can configure the static IP finder via XML configuration or programmatically:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/discovery-static.xml[tags=ignite-config, indent=0]
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=static,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ClusteringTcpIpDiscovery.cs[tag=static,indent=0]
+----
+
+tab:Shell[]
+[source,shell]
+----
+# The configuration should use TcpDiscoveryVmIpFinder without addresses specified:
+
+IGNITE_TCP_DISCOVERY_ADDRESSES=1.2.3.4,1.2.3.5:47500..47509 bin/ignite.sh -v config/default-config.xml
+----
+--
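+
+For reference, a minimal programmatic sketch of the static IP finder might look like this (the addresses are placeholders):
+
+[source, java]
+----
+TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
+
+// Addresses may optionally include a port or a port range.
+ipFinder.setAddresses(Arrays.asList("1.2.3.4", "1.2.3.5:47500..47509"));
+
+TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
+discoverySpi.setIpFinder(ipFinder);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setDiscoverySpi(discoverySpi);
+----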
+
+[WARNING]
+====
+[discrete]
+Provide multiple node addresses only if you are sure they are reachable. Unreachable addresses increase the
+time it takes for nodes to join the cluster. Suppose you set five IP addresses, and nobody listens for incoming
+connections on two of them. If Ignite starts connecting to the cluster via those two unreachable addresses,
+the node's startup time increases.
+====
+
+
+== Multicast and Static IP Finder
+
+You can use multicast-based and static IP-based discovery together. In
+this case, in addition to any addresses received via multicast,
+`TcpDiscoveryMulticastIpFinder` can also work with a pre-configured list
+of static IP addresses, just like the static IP finder described
+above. Here is an example of how to configure the Multicast IP finder with
+static IP addresses:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/discovery-static-and-multicast.xml[tags=ignite-config, indent=0]
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=multicastAndStatic,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ClusteringTcpIpDiscovery.cs[tag=multicastAndStatic,indent=0]
+----
+
+tab:C++[unsupported]
+
+--
+
+
+== Isolated Clusters on Same Set of Machines
+
+Ignite allows you to start two isolated clusters on the same set of
+machines. This can be done if nodes from different clusters use non-intersecting local port ranges for `TcpDiscoverySpi` and `TcpCommunicationSpi`.
+
+Let’s say you need to start two isolated clusters on a single machine
+for testing purposes. For the nodes from the first cluster, you
+should use the following `TcpDiscoverySpi` and `TcpCommunicationSpi`
+configurations:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+    <!--
+    Explicitly configure TCP discovery SPI to provide list of
+    initial nodes from the first cluster.
+    -->
+    <property name="discoverySpi">
+        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+            <!-- Initial local port to listen to. -->
+            <property name="localPort" value="48500"/>
+
+            <!-- Changing local port range. This is an optional action. -->
+            <property name="localPortRange" value="20"/>
+
+            <!-- Setting up IP finder for this cluster -->
+            <property name="ipFinder">
+                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                    <property name="addresses">
+                        <list>
+                            <!--
+                            Addresses and port range of nodes from
+                            the first cluster.
+                            127.0.0.1 can be replaced with actual IP addresses
+                            or host names. Port range is optional.
+                            -->
+                            <value>127.0.0.1:48500..48520</value>
+                        </list>
+                    </property>
+                </bean>
+            </property>
+        </bean>
+    </property>
+
+    <!--
+    Explicitly configure TCP communication SPI changing local
+    port number for the nodes from the first cluster.
+    -->
+    <property name="communicationSpi">
+        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
+            <property name="localPort" value="48100"/>
+        </bean>
+    </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=isolated1,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ClusteringTcpIpDiscovery.cs[tag=isolated1,indent=0]
+----
+
+tab:C++[unsupported]
+
+--
+
+
+For the nodes from the second cluster, the configuration might look like
+this:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+    <!--
+    Explicitly configure TCP discovery SPI to provide list of initial
+    nodes from the second cluster.
+    -->
+    <property name="discoverySpi">
+        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+            <!-- Initial local port to listen to. -->
+            <property name="localPort" value="49500"/>
+
+            <!-- Changing local port range. This is an optional action. -->
+            <property name="localPortRange" value="20"/>
+
+            <!-- Setting up IP finder for this cluster -->
+            <property name="ipFinder">
+                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                    <property name="addresses">
+                        <list>
+                            <!--
+                            Addresses and port range of the nodes from the second cluster.
+                            127.0.0.1 can be replaced with actual IP addresses or host names. Port range is optional.
+                            -->
+                            <value>127.0.0.1:49500..49520</value>
+                        </list>
+                    </property>
+                </bean>
+            </property>
+        </bean>
+    </property>
+
+    <!--
+    Explicitly configure TCP communication SPI changing local port number
+    for the nodes from the second cluster.
+    -->
+    <property name="communicationSpi">
+        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
+            <property name="localPort" value="49100"/>
+        </bean>
+    </property>
+</bean>
+
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=isolated2,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ClusteringTcpIpDiscovery.cs[tag=isolated2,indent=0]
+----
+
+tab:C++[unsupported]
+
+--
+
+As you can see from the configurations, the difference between them is minor: only the port numbers for the SPIs and the IP finder vary.
+
+[TIP]
+====
+If you want the nodes from different clusters to be able to look for
+each other using the multicast protocol, replace
+`TcpDiscoveryVmIpFinder` with `TcpDiscoveryMulticastIpFinder` and set
+unique `TcpDiscoveryMulticastIpFinder.multicastGroups` in each
+configuration above.
+====
+
+[CAUTION]
+====
+[discrete]
+=== Persistence Files Location
+
+If the isolated clusters use Native Persistence, then every
+cluster has to store its persistence files under different paths in the
+file system. Refer to the link:persistence/native-persistence[Native Persistence documentation] to learn how you can change persistence related directories.
+====
+
+
+== JDBC-Based IP Finder
+NOTE: Not supported in .NET/C#/{cpp}.
+
+You can use a database as a common shared storage of initial IP addresses. With this IP finder, nodes write their IP addresses to the database on startup. This is done via `TcpDiscoveryJdbcIpFinder`.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+  <property name="discoverySpi">
+    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+      <property name="ipFinder">
+        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.jdbc.TcpDiscoveryJdbcIpFinder">
+          <property name="dataSource" ref="ds"/>
+        </bean>
+      </property>
+    </bean>
+  </property>
+</bean>
+
+<!-- Configured data source instance. -->
+<bean id="ds" class="some.Datasource">
+
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=jdbc,indent=0]
+----
+
+tab:C#/.NET[unsupported]
+
+tab:C++[unsupported]
+
+--
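+
+As a sketch, the programmatic equivalent might look like this, assuming `dataSource` is an already configured `javax.sql.DataSource` for your database:
+
+[source, java]
+----
+// 'dataSource' is assumed to be a configured javax.sql.DataSource instance.
+TcpDiscoveryJdbcIpFinder ipFinder = new TcpDiscoveryJdbcIpFinder();
+ipFinder.setDataSource(dataSource);
+
+TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
+discoverySpi.setIpFinder(ipFinder);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setDiscoverySpi(discoverySpi);
+----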
+
+
+== Shared File System IP Finder
+
+NOTE: Not supported in .NET/C#/{cpp}.
+
+A shared file system can be used as storage for the nodes' IP addresses. The nodes write their IP addresses to the file system on startup. This behavior is supported by `TcpDiscoverySharedFsIpFinder`.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+    <property name="discoverySpi">
+        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+            <property name="ipFinder">
+                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.sharedfs.TcpDiscoverySharedFsIpFinder">
+                  <property name="path" value="/var/ignite/addresses"/>
+                </bean>
+            </property>
+        </bean>
+    </property>
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=sharedFS,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+== ZooKeeper IP Finder
+
+NOTE: Not supported in .NET/C#.
+
+To set up the ZooKeeper IP finder, use `TcpDiscoveryZookeeperIpFinder` (note that the `ignite-zookeeper` module has to be enabled).
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+    <property name="discoverySpi">
+        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+            <property name="ipFinder">
+                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.zk.TcpDiscoveryZookeeperIpFinder">
+                    <property name="zkConnectionString" value="127.0.0.1:2181"/>
+                </bean>
+            </property>
+        </bean>
+    </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=zk,indent=0]
+----
+
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+
+--
+
+
+
+
diff --git a/docs/_docs/clustering/zookeeper-discovery.adoc b/docs/_docs/clustering/zookeeper-discovery.adoc
new file mode 100644
index 0000000..3a0ddd9
--- /dev/null
+++ b/docs/_docs/clustering/zookeeper-discovery.adoc
@@ -0,0 +1,193 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= ZooKeeper Discovery
+
+Ignite's default TCP/IP Discovery organizes cluster nodes into a ring topology, which has both advantages and
+disadvantages. For instance, on topologies with hundreds of cluster
+nodes, it can take many seconds for a system message to traverse
+all the nodes. As a result, the basic processing of events, such as
+new nodes joining or failed nodes being detected, can take a while,
+affecting the overall cluster responsiveness and performance.
+
+ZooKeeper Discovery is designed for massive deployments that
+need to preserve ease of scalability and linear performance.
+However, using both Ignite and ZooKeeper requires configuring and managing two
+distributed systems, which can be challenging.
+Therefore, we recommend that you use ZooKeeper Discovery only if you plan to scale to hundreds or thousands of nodes.
+Otherwise, it is best to use link:clustering/tcp-ip-discovery[TCP/IP Discovery].
+
+ZooKeeper Discovery uses ZooKeeper as a single point of synchronization
+and to organize the cluster into a star-shaped topology where a
+ZooKeeper cluster sits in the center and the Ignite nodes exchange
+discovery events through it.
+
+image::images/zookeeper.png[Zookeeper]
+
+It is worth mentioning that ZooKeeper Discovery is an alternative implementation of the Discovery SPI and doesn’t affect the Communication SPI.
+Once the nodes discover each other via ZooKeeper Discovery, they use Communication SPI for peer-to-peer communication.
+////////////////////////////////////////////////////////////////////////////////
+TODO: explain what it means
+////////////////////////////////////////////////////////////////////////////////
+
+== Configuration
+
+To enable ZooKeeper Discovery, you need to configure `ZookeeperDiscoverySpi` in a way similar to this:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+  <property name="discoverySpi">
+    <bean class="org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi">
+      <property name="zkConnectionString" value="127.0.0.1:34076,127.0.0.1:43310,127.0.0.1:36745"/>
+      <property name="sessionTimeout" value="30000"/>
+      <property name="zkRootPath" value="/apacheIgnite"/>
+      <property name="joinTimeout" value="10000"/>
+    </bean>
+  </property>
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/ZookeeperDiscovery.java[tag=cfg,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+The following parameters are required (other parameters are optional); see the sketch after this list:
+
+* `zkConnectionString` - keeps the list of addresses of the ZooKeeper
+servers.
+* `sessionTimeout` - specifies the time after which an Ignite node is considered disconnected if it doesn’t react to events exchanged via the Discovery SPI.
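+
+A minimal programmatic sketch of the two required parameters (the address and timeout values are placeholders):
+
+[source, java]
+----
+ZookeeperDiscoverySpi zkDiscoverySpi = new ZookeeperDiscoverySpi();
+
+// Required: the list of ZooKeeper server addresses.
+zkDiscoverySpi.setZkConnectionString("127.0.0.1:2181");
+
+// Required: how long an unresponsive node is still considered connected.
+zkDiscoverySpi.setSessionTimeout(30_000);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setDiscoverySpi(zkDiscoverySpi);
+----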
+
+== Failures and Split Brain Handling
+
+In case of network partitioning, some of the nodes cannot communicate with each other because they are located in separate network segments, which may lead to failure to process user requests or to inconsistent data modification.
+
+ZooKeeper Discovery handles network partitioning (also known as split-brain)
+and communication failures between individual nodes in the following
+way:
+
+[CAUTION]
+====
+It is assumed that the ZooKeeper cluster is always visible to all the
+nodes in the cluster. In fact, if a node disconnects from ZooKeeper, it
+shuts down, and other nodes treat it as failed or disconnected.
+====
+
+Whenever a node discovers that it cannot connect to some of the other
+nodes in the cluster, it initiates a communication failure resolution
+process by publishing special requests to the ZooKeeper cluster. When
+the process is started, all nodes try to connect to each other and send
+the results of the connection attempts to the node that coordinates the
+process (_the coordinator node_). Based on this information, the
+coordinator node creates a connectivity graph that represents the
+network situation in the cluster. Further actions depend on the type of
+network segmentation. The following sections discuss possible scenarios.
+
+=== Cluster is split into several disjoint components
+
+If the cluster is split into several independent components, each
+component (being a cluster) may think of itself as a master cluster and
+continue to process user requests, resulting in data inconsistency. To
+avoid this, only the component with the largest number of nodes is kept
+alive, and the nodes from the other components are brought down.
+
+image::images/network_segmentation.png[Network Segmentation]
+
+The image above shows a case where the cluster network is split into 2 segments.
+The nodes from the smaller cluster (right-hand segment) are terminated.
+
+image::images/segmentation_resolved.png[Segmentation Resolved]
+
+If several components are equally large, the one that has the largest
+number of clients is kept alive, and the others are shut down.
+
+=== Several links between nodes are missing
+
+In this scenario, some nodes are not completely disconnected from the
+cluster but cannot exchange data with some of the other nodes and,
+therefore, cannot remain part of the cluster. In
+the image below, one node cannot connect to two other nodes.
+
+image::images/split_brain.png[Split-brain]
+
+In this case, the task is to find the largest component in which every
+node can connect to every other node. In the general case, this is a
+computationally hard problem that cannot be solved in an acceptable amount of time, so the
+coordinator node uses a heuristic algorithm to find the best approximate
+solution. The nodes that are left out of the solution are shut down.
+
+image::images/split_brain_resolved.png[Split-brain Resolved]
+
+=== ZooKeeper cluster segmentation
+
+In large-scale deployments where the ZooKeeper cluster can span multiple data centers and geographically diverse locations, it can split into multiple segments due to network segmentation.
+If this occurs, ZooKeeper checks whether there is a segment that contains more than half of all ZooKeeper nodes (ZooKeeper requires this many nodes to continue operating). If such a segment is found, it takes over managing the Ignite cluster, while the other segments are shut down.
+If there is no such segment, ZooKeeper shuts down all of its nodes.
+
+In case of ZooKeeper cluster segmentation, the Ignite cluster may or may not be split.
+In any case, when the ZooKeeper nodes are shut down, the corresponding Ignite nodes try to connect to available ZooKeeper nodes and shut down if unable to do so.
+
+The following image is an example of network segmentation that splits both the Ignite cluster and ZooKeeper cluster into two segments.
+This may happen if your clusters are deployed in two data centers.
+In this case, the ZooKeeper node located in Data Center B shuts itself down.
+The Ignite nodes located in Data Center B are not able to connect to the remaining ZooKeeper nodes and shut themselves down as well.
+
+image::images/zookeeper_split.png[Zookeeper Split]
+
+== Custom Discovery Events
+
+Changing the ring-shaped topology to a star-shaped one affects the way
+custom discovery events are handled by the Discovery SPI component. Because
+the ring topology is linear, each discovery message is
+processed by the nodes sequentially.
+
+With ZooKeeper Discovery, the coordinator sends discovery messages to
+all nodes simultaneously, so the messages are processed in
+parallel. As a result, ZooKeeper Discovery prohibits the modification of custom discovery events; for instance, nodes are not allowed to add any payload to discovery messages.
+
+== Ignite and ZooKeeper Configuration Considerations
+
+When using ZooKeeper Discovery, you need to make sure that the configuration parameters of the ZooKeeper cluster and Ignite cluster match each other.
+
+Consider a sample ZooKeeper configuration, as follows:
+
+[source,shell]
+----
+# The number of milliseconds of each tick
+tickTime=2000
+
+# The number of ticks that can pass between sending a request and getting an acknowledgement
+syncLimit=5
+----
+
+Configured this way, a ZooKeeper server detects its own segmentation from the rest of the ZooKeeper cluster only after `tickTime * syncLimit` elapses (2000 ms * 5 = 10 seconds in this example).
+Until this event is detected at the ZooKeeper level, all Ignite nodes connected to the segmented ZooKeeper server do not try to reconnect to the other ZooKeeper servers.
+
+On the other hand, there is a `sessionTimeout` parameter on the Ignite
+side that defines how soon ZooKeeper closes an Ignite node’s session if
+the node gets disconnected from the ZooKeeper cluster.
+If `sessionTimeout` is smaller than `tickTime * syncLimit`, then the
+Ignite node is notified by the segmented ZooKeeper server too
+late: its session expires before it tries to reconnect to other ZooKeeper servers.
+
+To avoid this situation, `sessionTimeout` should be greater than `tickTime * syncLimit`.
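+
+For example, with the sample ZooKeeper settings above, `tickTime * syncLimit` is 2000 ms * 5 = 10 seconds, so the Ignite-side session timeout could be set to a larger value such as 30 seconds. A sketch (the connection string is a placeholder):
+
+[source, java]
+----
+ZookeeperDiscoverySpi zkDiscoverySpi = new ZookeeperDiscoverySpi();
+zkDiscoverySpi.setZkConnectionString("127.0.0.1:2181");
+
+// Must exceed tickTime * syncLimit (10,000 ms in this example).
+zkDiscoverySpi.setSessionTimeout(30_000);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setDiscoverySpi(zkDiscoverySpi);
+----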
diff --git a/docs/_docs/code-deployment/deploying-user-code.adoc b/docs/_docs/code-deployment/deploying-user-code.adoc
new file mode 100644
index 0000000..3916278
--- /dev/null
+++ b/docs/_docs/code-deployment/deploying-user-code.adoc
@@ -0,0 +1,96 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Deploying User Code
+:javaFile: {javaCodeDir}/UserCodeDeployment.java
+
+In addition to link:code-deployment/peer-class-loading[peer class loading], you can deploy user code by configuring `UriDeploymentSpi`. With this approach, you specify the location of your libraries in the node configuration.
+Ignite scans the location periodically and redeploys the classes if they change.
+The location may be a file system directory or an HTTP(S) location.
+When Ignite detects that the libraries have been removed from the location, the classes are undeployed from the cluster.
+
+You can specify multiple locations (of different types) by providing both directory paths and HTTP(S) URLs.
+
+//TODO NOTE: peer class loading vs. URL deployment
+
+
+== Deploying from a Local Directory
+
+To deploy libraries from a file system directory, add the directory path to the list of URIs in the `UriDeploymentSpi` configuration.
+The directory must exist on the nodes where it is specified and contain jar files with the classes you want to deploy.
+Note that the path must be specified using the "file://" scheme.
+You can specify different directories on different nodes.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::code-snippets/xml/deployment.xml[tags=!*;ignite-config;from-local-dir, indent=0]
+----
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=from-local-dir, indent=0]
+----
+tab:C#/.NET[]
+
+tab:C++[]
+--
+
+You can pass the following parameter in the URL:
+
+[cols="1,2,1",opts="header"]
+|===
+|Parameter | Description | Default Value
+| `freq` |  Scanning frequency in milliseconds. | `5000`
+|===
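+
+For illustration, a minimal programmatic sketch of a directory-based configuration might look like this; the directory path is a hypothetical placeholder, and the scanning frequency can be tuned via the `freq` parameter described above:
+
+[source, java]
+----
+UriDeploymentSpi deploymentSpi = new UriDeploymentSpi();
+
+// Scan a local directory for jar files; the path is a placeholder.
+deploymentSpi.setUriList(Arrays.asList("file:///opt/ignite/user-libs"));
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setDeploymentSpi(deploymentSpi);
+----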
+
+
+== Deploying from a URL
+
+To deploy libraries from an HTTP(S) location, add the URL to the list of URIs in the `UriDeploymentSpi` configuration.
+
+Ignite parses the HTML file to find the HREF attributes of all `<a>` tags on the page.
+The references must point to the jar files you want to deploy.
+//It's important that only HTTP scanner uses the URLConnection.getLastModified() method to check if there were any changes since last iteration for each GAR-file before redeploying.
+
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+include::code-snippets/xml/deployment.xml[tags=!*;ignite-config;from-url, indent=0]
+----
+
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=from-url, indent=0]
+----
+
+tab:C#/.NET[]
+tab:C++[]
+--
+
+You can pass the following parameter in the URL:
+
+[cols="1,2,1",opts="header"]
+|===
+|Parameter | Description | Default Value
+| `freq` |  Scanning frequency in milliseconds. | `300000`
+|===
+
diff --git a/docs/_docs/code-deployment/peer-class-loading.adoc b/docs/_docs/code-deployment/peer-class-loading.adoc
new file mode 100644
index 0000000..0dd7d18
--- /dev/null
+++ b/docs/_docs/code-deployment/peer-class-loading.adoc
@@ -0,0 +1,166 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Peer Class Loading
+
+== Overview
+
+Peer class loading refers to loading classes from a local node where they are defined to remote nodes.
+With peer class loading enabled, you don't have to manually deploy your Java code on each node in the cluster and re-deploy it each time it changes.
+Ignite automatically loads the classes from the node where they are defined to the nodes where they are used.
+
+[CAUTION]
+====
+[discrete]
+=== Automatic Assemblies Loading in .NET
+If you develop C# and .NET applications, then refer to the link:net-specific/net-remote-assembly-loading[Remote Assembly Loading]
+page for details on how to set up and use the peer-class-loading feature with that type of applications.
+====
+
+For example, when link:key-value-api/using-scan-queries[querying data] with a custom transformer, you only need to define your tasks on the client node that initiates the computation, and Ignite loads the classes to the server nodes.
+
+When enabled, peer class loading is used to deploy the following classes:
+
+* Tasks and jobs submitted via the link:distributed-computing/distributed-computing[compute interface].
+* Transformers and filters used with link:key-value-api/using-scan-queries[scan queries] and link:key-value-api/continuous-queries[continuous queries].
+* Stream transformers, receivers and visitors used with link:data-streaming#data-streamers[data streamers].
+* link:distributed-computing/collocated-computations#entry-processor[Entry processors].
+
+When defining the classes listed above, we recommend creating each class as either a top-level class or a static nested class, not as a lambda or anonymous inner class. Non-static inner classes are serialized together with their enclosing class, so if some fields of the enclosing class cannot be serialized, you will get serialization exceptions.
+
+[IMPORTANT]
+====
+The peer class loading functionality does not deploy the key and object classes of the entries stored in caches.
+====
+
+[WARNING]
+====
+The peer class loading functionality allows any client to deploy custom code to the cluster. If you want to use it in production environments, make sure only authorized clients have access to the cluster.
+====
+
+
+This is what happens when a class is required on remote nodes:
+
+* Ignite checks whether the class is available in the local classpath (i.e., whether it was loaded during system initialization). If it is, it is used; no class loading from a peer node takes place in this case.
+* If the class is not available locally, a request for the class definition is sent to the originating node. The originating node sends the class's byte-code, and the class is loaded on the worker node. This happens once per class: once the class definition is loaded on a node, it does not have to be loaded again.
+
+[NOTE]
+====
+[discrete]
+=== Deploying 3rd Party Libraries
+When utilizing peer class loading, you should be aware of which libraries get loaded from peer nodes and which libraries are already available locally in the class path.
+We suggest including all 3rd party libraries in the class path of every node.
+You can achieve this by copying your JAR files into the `{IGNITE_HOME}/libs` folder.
+This way you do not transfer megabytes of 3rd party classes to remote nodes every time you change a line of code.
+====
+
+
+== Enabling Peer Class Loading
+
+Here is how you can configure peer class loading:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/peer-class-loading.xml[tags=ignite-config;!discovery, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/PeerClassLoading.java[tags=configure, indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/PeerClassLoading.cs[tag=enable,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+
+The following table describes parameters related to peer class loading.
+
+[cols="30%,60%,10%",opts="header,width=100%"]
+|===
+|Parameter| Description | Default value
+
+|`peerClassLoadingEnabled`| Enables/disables peer class loading. | `false`
+|`deploymentMode` | The peer class loading mode. | `SHARED`
+
+| `peerClassLoadingExecutorService` | Configures a thread pool to be used for peer class loading. If not configured, a default pool is used.  | `null`
+| `peerClassLoadingExecutorServiceShutdown` |Peer class loading executor service shutdown flag. If the flag is set to `true`, the peer class loading thread pool is forcibly shut down when the node stops. | `true`
+|`peerClassLoadingLocalClassPathExclude` |List of packages in the system class path that should be P2P loaded even if they exist locally. | `null`
+
+|`peerClassLoadingMissedResourcesCacheSize`| Size of missed resources cache. Set to 0 to avoid caching of missing resources. | 100
+
+|===
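+
+As a minimal sketch, these parameters can be set programmatically as follows:
+
+[source, java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+// Enable peer class loading; SHARED is the default deployment mode.
+cfg.setPeerClassLoadingEnabled(true);
+cfg.setDeploymentMode(DeploymentMode.SHARED);
+
+// Disable caching of missed resources.
+cfg.setPeerClassLoadingMissedResourcesCacheSize(0);
+----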
+
+
+
+== Peer Class Loading Modes
+
+=== PRIVATE and ISOLATED
+Classes deployed within the same class loader on the master node share that class loader remotely on worker nodes.
+However, tasks deployed from different master nodes do not share the same class loader on worker nodes.
+This is useful in development environments where different developers may be working on different versions of the same classes.
+There is no difference between the `PRIVATE` and `ISOLATED` deployment modes since the `@UserResource` annotation has been removed.
+Both constants were kept for backward-compatibility reasons, and one of them is likely to be removed in a future major release.
+
+In this mode, classes get un-deployed when the master node leaves the cluster.
+
+=== SHARED
+
+This is the default deployment mode.
+In this mode, classes from different master nodes with the same user version share the same class loader on worker nodes.
+Classes are un-deployed when all master nodes leave the cluster or the user version changes.
+This mode allows classes coming from different master nodes to share the same instances of user resources on remote nodes (see below).
+This mode is especially useful in production because, in comparison to `ISOLATED` mode, which has the scope of a single class loader on a single master node, `SHARED` mode broadens the deployment scope to all master nodes.
+
+In this mode, classes get un-deployed when all the master nodes leave the cluster.
+
+=== CONTINUOUS
+In `CONTINUOUS` mode, the classes do not get un-deployed when master nodes leave the cluster.
+Un-deployment happens only when the class user version changes.
+The advantage of this approach is that it allows tasks coming from different master nodes to share the same instances of user resources on worker nodes.
+This allows the tasks executing on worker nodes to reuse, for example, the same instances of connection pools or caches.
+When using this mode, you can start up multiple stand-alone worker nodes, define user resources on the master nodes, and have them initialized once on worker nodes regardless of which master node they came from.
+In comparison to the `ISOLATED` deployment mode, which has the scope of a single class loader on a single master node, `CONTINUOUS` mode broadens the deployment scope to all master nodes, which is especially useful in production.
+
+In this mode, classes do not get un-deployed even if all the master nodes leave the cluster.
+
+== Un-Deployment and User Versions
+
+The classes deployed with peer class loading have their own lifecycle. On certain events (when the master node leaves or the user version changes, depending on deployment mode), the class information is un-deployed from the cluster: the class definition is erased from all nodes and the user resources linked with that class definition are also optionally erased (again, depending on deployment mode).
+
+User version comes into play whenever you want to redeploy classes deployed in `SHARED` or `CONTINUOUS` modes.
+By default, Ignite automatically detects if the class loader has changed or a node has been restarted.
+However, if you would like to change and redeploy the code on a subset of nodes, or, in the case of `CONTINUOUS` mode, kill every live deployment, you should change the user version.
+The user version is specified in the `META-INF/ignite.xml` file on your class path as follows:
+
+[source, xml]
+-------------------------------------------------------------------------------
+<!-- User version. -->
+<bean id="userVersion" class="java.lang.String">
+    <constructor-arg value="0"/>
+</bean>
+-------------------------------------------------------------------------------
+
+By default, all Ignite startup scripts (ignite.sh or ignite.bat) pick up the user version from the `IGNITE_HOME/config/userversion` folder.
+Usually, you just need to update the user version under that folder.
+However, in the case of GAR or JAR deployment, remember to provide the `META-INF/ignite.xml` file with the desired user version in it.
diff --git a/docs/_docs/code-snippets/cpp/src/affinity_run.cpp b/docs/_docs/code-snippets/cpp/src/affinity_run.cpp
new file mode 100644
index 0000000..94ee1ee
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/affinity_run.cpp
@@ -0,0 +1,148 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+#include <sstream>
+
+#include <ignite/ignition.h>
+#include <ignite/compute/compute.h>
+
+using namespace ignite;
+using namespace cache;
+
+//tag::affinity-run[]
+/*
+ * Function class.
+ */
+struct FuncAffinityRun : compute::ComputeFunc<void>
+{
+    /*
+    * Default constructor.
+    */
+    FuncAffinityRun()
+    {
+        // No-op.
+    }
+
+    /*
+    * Parameterized constructor.
+    */
+    FuncAffinityRun(std::string cacheName, int32_t key) :
+        cacheName(cacheName), key(key)
+    {
+        // No-op.
+    }
+
+    /**
+     * Callback.
+     */
+    virtual void Call()
+    {
+        Ignite& node = GetIgnite();
+
+        Cache<int32_t, std::string> cache = node.GetCache<int32_t, std::string>(cacheName.c_str());
+
+        // Peek is a local memory lookup.
+        std::cout << "Co-located [key= " << key << ", value= " << cache.LocalPeek(key, CachePeekMode::ALL) << "]" << std::endl;
+    }
+
+    std::string cacheName;
+    int32_t key;
+};
+
+/**
+ * Binary type structure. Defines a set of functions required for type to be serialized and deserialized.
+ */
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType<FuncAffinityRun>
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("FuncAffinityRun");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "FuncAffinityRun";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static int32_t GetHashCode(const FuncAffinityRun& obj)
+            {
+                return 0;
+            }
+
+            static bool IsNull(const FuncAffinityRun& obj)
+            {
+                return false;
+            }
+
+            static void GetNull(FuncAffinityRun& dst)
+            {
+                dst = FuncAffinityRun();
+            }
+
+            static void Write(BinaryWriter& writer, const FuncAffinityRun& obj)
+            {
+                writer.WriteString("cacheName", obj.cacheName);
+                writer.WriteInt32("key", obj.key);
+            }
+
+            static void Read(BinaryReader& reader, FuncAffinityRun& dst)
+            {
+                dst.cacheName = reader.ReadString("cacheName");
+                dst.key = reader.ReadInt32("key");
+            }
+        };
+    }
+}
+
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Get cache instance.
+    Cache<int32_t, std::string> cache = ignite.GetOrCreateCache<int32_t, std::string>("myCache");
+
+    // Get binding instance.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering our class as a compute function.
+    binding.RegisterComputeFunc<FuncAffinityRun>();
+
+    // Get compute instance.
+    compute::Compute compute = ignite.GetCompute();
+
+    int key = 1;
+
+    // This closure will execute on the remote node where
+    // data for the given 'key' is located.
+    compute.AffinityRun(cache.GetName(), key, FuncAffinityRun(cache.GetName(), key));
+}
+//end::affinity-run[]
diff --git a/docs/_docs/code-snippets/cpp/src/cache_asynchronous_execution.cpp b/docs/_docs/code-snippets/cpp/src/cache_asynchronous_execution.cpp
new file mode 100644
index 0000000..85c87a0
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/cache_asynchronous_execution.cpp
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+#include <sstream>
+
+#include <ignite/ignition.h>
+#include <ignite/compute/compute.h>
+
+using namespace ignite;
+using namespace cache;
+
+//tag::cache-asynchronous-execution[]
+/*
+ * Function class.
+ */
+class HelloWorld : public compute::ComputeFunc<void>
+{
+    friend struct ignite::binary::BinaryType<HelloWorld>;
+public:
+    /*
+     * Default constructor.
+     */
+    HelloWorld()
+    {
+        // No-op.
+    }
+
+    /**
+     * Callback.
+     */
+    virtual void Call()
+    {
+        std::cout << "Job Result: Hello World" << std::endl;
+    }
+
+};
+
+/**
+ * Binary type structure. Defines a set of functions required for type to be serialized and deserialized.
+ */
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType<HelloWorld>
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("HelloWorld");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "HelloWorld";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static int32_t GetHashCode(const HelloWorld& obj)
+            {
+                return 0;
+            }
+
+            static bool IsNull(const HelloWorld& obj)
+            {
+                return false;
+            }
+
+            static void GetNull(HelloWorld& dst)
+            {
+                dst = HelloWorld();
+            }
+
+            static void Write(BinaryWriter& writer, const HelloWorld& obj)
+            {
+                // No-op.
+            }
+
+            static void Read(BinaryReader& reader, HelloWorld& dst)
+            {
+                // No-op.
+            }
+        };
+    }
+}
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Get binding instance.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering our class as a compute function.
+    binding.RegisterComputeFunc<HelloWorld>();
+
+    // Get compute instance.
+    compute::Compute compute = ignite.GetCompute();
+
+    // Declaring function instance.
+    HelloWorld helloWorld;
+
+    // Making asynchronous call. RunAsync returns a future that completes when the job finishes.
+    Future<void> fut = compute.RunAsync(helloWorld);
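+
+    // Wait for the remote job to complete before the node shuts down.
+    fut.Wait();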
+}
+//end::cache-asynchronous-execution[]
diff --git a/docs/_docs/code-snippets/cpp/src/cache_atomic_operations.cpp b/docs/_docs/code-snippets/cpp/src/cache_atomic_operations.cpp
new file mode 100644
index 0000000..a505ac2
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/cache_atomic_operations.cpp
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+#include <string>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+
+int main()
+{
+    //tag::cache-atomic-operations[]
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    Cache<std::string, int32_t> cache = ignite.GetOrCreateCache<std::string, int32_t>("myNewCache");
+
+    // Put-if-absent which returns previous value.
+    int32_t oldVal = cache.GetAndPutIfAbsent("Hello", 11);
+
+    // Put-if-absent which returns boolean success flag.
+    bool success = cache.PutIfAbsent("World", 22);
+
+    // Replace-if-exists operation (opposite of getAndPutIfAbsent), returns previous value.
+    oldVal = cache.GetAndReplace("Hello", 11);
+
+    // Replace-if-exists operation (opposite of putIfAbsent), returns boolean success flag.
+    success = cache.Replace("World", 22);
+
+    // Replace-if-matches operation.
+    success = cache.Replace("World", 2, 22);
+
+    // Remove-if-matches operation.
+    success = cache.Remove("Hello", 1);
+    //end::cache-atomic-operations[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/cache_creating_dynamically.cpp b/docs/_docs/code-snippets/cpp/src/cache_creating_dynamically.cpp
new file mode 100644
index 0000000..3fb11a9
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/cache_creating_dynamically.cpp
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+#include <string>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+
+int main()
+{
+    //tag::cache-creating-dynamically[]
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Create a cache with the given name, if it does not exist.
+    Cache<int32_t, std::string> cache = ignite.GetOrCreateCache<int32_t, std::string>("myNewCache");
+    //end::cache-creating-dynamically[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/cache_get_put.cpp b/docs/_docs/code-snippets/cpp/src/cache_get_put.cpp
new file mode 100644
index 0000000..a2d7291
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/cache_get_put.cpp
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+#include <string>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+
+/** Cache name. */
+const char* CACHE_NAME = "cacheName";
+
+int main()
+{
+    //tag::cache-get-put[]
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    try
+    {
+        Ignite ignite = Ignition::Start(cfg);
+
+        Cache<int32_t, std::string> cache = ignite.GetOrCreateCache<int32_t, std::string>(CACHE_NAME);
+
+        // Store keys in the cache (the values will end up on different cache nodes).
+        for (int32_t i = 0; i < 10; i++)
+        {
+            cache.Put(i, std::to_string(i));
+        }
+
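+        // Read the values back; each Get may fetch the entry from a remote node.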
+        for (int i = 0; i < 10; i++)
+        {
+            std::cout << "Got [key=" << i << ", val=" + cache.Get(i) << "]" << std::endl;
+        }
+    }
+    catch (IgniteError& err)
+    {
+        std::cout << "An error occurred: " << err.GetText() << std::endl;
+        return err.GetCode();
+    }
+    //end::cache-get-put[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/cache_getting_instance.cpp b/docs/_docs/code-snippets/cpp/src/cache_getting_instance.cpp
new file mode 100644
index 0000000..c2d0665
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/cache_getting_instance.cpp
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+#include <string>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+
+int main()
+{
+    //tag::cache-getting-instance[]
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Obtain instance of cache named "myCache".
+    // Note that different caches may have different generics.
+    Cache<int32_t, std::string> cache = ignite.GetCache<int32_t, std::string>("myCache");
+    //end::cache-getting-instance[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/city.h b/docs/_docs/code-snippets/cpp/src/city.h
new file mode 100644
index 0000000..8e25ee6a
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/city.h
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+namespace ignite
+{
+    struct City
+    {
+        City() : population(0)
+        {
+            // No-op.
+        }
+
+        City(const int32_t population) :
+            population(population)
+        {
+            // No-op.
+        }
+
+        std::string ToString() const
+        {
+            std::ostringstream oss;
+            oss << "City [population=" << population << ']';
+            return oss.str();
+        }
+
+        int32_t population;
+    };
+}
+
+namespace ignite
+{
+    namespace binary
+    {
+        IGNITE_BINARY_TYPE_START(ignite::City)
+
+            typedef ignite::City City;
+
+            IGNITE_BINARY_GET_TYPE_ID_AS_HASH(City)
+            IGNITE_BINARY_GET_TYPE_NAME_AS_IS(City)
+            IGNITE_BINARY_GET_FIELD_ID_AS_HASH
+            IGNITE_BINARY_IS_NULL_FALSE(City)
+            IGNITE_BINARY_GET_NULL_DEFAULT_CTOR(City)
+
+            static void Write(BinaryWriter& writer, const ignite::City& obj)
+            {
+                writer.WriteInt32("population", obj.population);
+            }
+
+            static void Read(BinaryReader& reader, ignite::City& dst)
+            {
+                dst.population = reader.ReadInt32("population");
+            }
+
+        IGNITE_BINARY_TYPE_END
+    }
+}
diff --git a/docs/_docs/code-snippets/cpp/src/city_key.h b/docs/_docs/code-snippets/cpp/src/city_key.h
new file mode 100644
index 0000000..b673601
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/city_key.h
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+namespace ignite
+{
+    struct CityKey
+    {
+        CityKey() : id(0)
+        {
+            // No-op.
+        }
+
+        CityKey(int32_t id, const std::string& name) :
+            id(id),
+            name(name)
+        {
+            // No-op.
+        }
+
+        std::string ToString() const
+        {
+            std::ostringstream oss;
+
+            oss << "CityKey [id=" << id
+                << ", name=" << name << ']';
+
+            return oss.str();
+        }
+
+        int32_t id;
+        std::string name;
+    };
+}
+
+namespace ignite
+{
+    namespace binary
+    {
+        IGNITE_BINARY_TYPE_START(ignite::CityKey)
+
+            typedef ignite::CityKey CityKey;
+
+            IGNITE_BINARY_GET_TYPE_ID_AS_HASH(CityKey)
+            IGNITE_BINARY_GET_TYPE_NAME_AS_IS(CityKey)
+            IGNITE_BINARY_GET_FIELD_ID_AS_HASH
+            IGNITE_BINARY_IS_NULL_FALSE(CityKey)
+            IGNITE_BINARY_GET_NULL_DEFAULT_CTOR(CityKey)
+
+            static void Write(BinaryWriter& writer, const ignite::CityKey& obj)
+            {
+                // 'id' is an int32_t field, so it is written with WriteInt32 to match Read below.
+                writer.WriteInt32("id", obj.id);
+                writer.WriteString("name", obj.name);
+            }
+
+            static void Read(BinaryReader& reader, ignite::CityKey& dst)
+            {
+                dst.id = reader.ReadInt32("id");
+                dst.name = reader.ReadString("name");
+            }
+
+        IGNITE_BINARY_TYPE_END
+    }
+}
diff --git a/docs/_docs/code-snippets/cpp/src/compute_acessing_data.cpp b/docs/_docs/code-snippets/cpp/src/compute_acessing_data.cpp
new file mode 100644
index 0000000..a6da98d
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/compute_acessing_data.cpp
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+#include <sstream>
+
+#include <ignite/ignition.h>
+#include <ignite/compute/compute.h>
+#include "person.h"
+
+using namespace ignite;
+using namespace cache;
+
+//tag::compute-acessing-data[]
+/*
+ * Function class.
+ */
+class GetValue : public compute::ComputeFunc<void>
+{
+    friend struct ignite::binary::BinaryType<GetValue>;
+public:
+    /*
+     * Default constructor.
+     */
+    GetValue()
+    {
+        // No-op.
+    }
+
+    /**
+     * Callback.
+     */
+    virtual void Call()
+    {
+        Ignite& node = GetIgnite();
+
+        // Get the data you need.
+        Cache<int64_t, Person> cache = node.GetCache<int64_t, Person>("person");
+
+        // Do whatever you need to do with the data.
+        Person person = cache.Get(1);
+    }
+};
+//end::compute-acessing-data[]
+
+/**
+ * Binary type structure. Defines a set of functions required for type to be serialized and deserialized.
+ */
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType<GetValue>
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("GetValue");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "GetValue";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static int32_t GetHashCode(const GetValue& obj)
+            {
+                return 0;
+            }
+
+            static bool IsNull(const GetValue& obj)
+            {
+                return false;
+            }
+
+            static void GetNull(GetValue& dst)
+            {
+                dst = GetValue();
+            }
+
+            static void Write(BinaryWriter& writer, const GetValue& obj)
+            {
+                // No-op.
+            }
+
+            static void Read(BinaryReader& reader, GetValue& dst)
+            {
+                // No-op.
+            }
+        };
+    }
+}
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
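+    // Create the cache and preload the entry that the compute job will read.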
+    Cache<int64_t, Person> cache = ignite.GetOrCreateCache<int64_t, Person>("person");
+    cache.Put(1, Person(1, "first", "last", "resume", 100.00));
+
+    // Get binding instance.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering our class as a compute function.
+    binding.RegisterComputeFunc<GetValue>();
+
+    // Get compute instance.
+    compute::Compute compute = ignite.GetCompute();
+
+    // Run compute task.
+    compute.Run(GetValue());
+}
diff --git a/docs/_docs/code-snippets/cpp/src/compute_broadcast.cpp b/docs/_docs/code-snippets/cpp/src/compute_broadcast.cpp
new file mode 100644
index 0000000..136c00d
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/compute_broadcast.cpp
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+#include <sstream>
+
+#include <ignite/ignition.h>
+#include <ignite/compute/compute.h>
+
+using namespace ignite;
+
+//tag::compute-broadcast[]
+/*
+ * Function class.
+ */
+class Hello : public compute::ComputeFunc<void>
+{
+    friend struct ignite::binary::BinaryType<Hello>;
+public:
+    /*
+     * Default constructor.
+     */
+    Hello()
+    {
+        // No-op.
+    }
+
+    /**
+     * Callback.
+     */
+    virtual void Call()
+    {
+        std::cout << "Hello" << std::endl;
+    }
+
+};
+
+/**
+ * Binary type structure. Defines a set of functions required for type to be serialized and deserialized.
+ */
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType<Hello>
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("Hello");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "Hello";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static int32_t GetHashCode(const Hello& obj)
+            {
+                return 0;
+            }
+
+            static bool IsNull(const Hello& obj)
+            {
+                return false;
+            }
+
+            static void GetNull(Hello& dst)
+            {
+                dst = Hello();
+            }
+
+            static void Write(BinaryWriter& writer, const Hello& obj)
+            {
+                // No-op.
+            }
+
+            static void Read(BinaryReader& reader, Hello& dst)
+            {
+                // No-op.
+            }
+        };
+    }
+}
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Get binding instance.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering our class as a compute function.
+    binding.RegisterComputeFunc<Hello>();
+
+    // Get compute instance.
+    compute::Compute compute = ignite.GetCompute();
+
+    // Declaring function instance.
+    Hello hello;
+
+    // Print out hello message on nodes in the cluster group.
+    compute.Broadcast(hello);
+}
+//end::compute-broadcast[]
diff --git a/docs/_docs/code-snippets/cpp/src/compute_call.cpp b/docs/_docs/code-snippets/cpp/src/compute_call.cpp
new file mode 100644
index 0000000..bf1d8d5
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/compute_call.cpp
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+#include <sstream>
+#include <string>
+#include <vector>
+#include <iterator>
+
+#include <ignite/ignition.h>
+#include <ignite/compute/compute.h>
+
+using namespace ignite;
+
+//tag::compute-call[]
+/*
+ * Function class.
+ */
+class CountLength : public compute::ComputeFunc<int32_t>
+{
+    friend struct ignite::binary::BinaryType<CountLength>;
+public:
+    /*
+     * Default constructor.
+     */
+    CountLength()
+    {
+        // No-op.
+    }
+
+    /*
+     * Constructor.
+     *
+     * @param text Text.
+     */
+    CountLength(const std::string& word) :
+        word(word)
+    {
+        // No-op.
+    }
+
+    /**
+     * Callback.
+     * Counts number of characters in provided word.
+     *
+     * @return Word's length.
+     */
+    virtual int32_t Call()
+    {
+        return word.length();
+    }
+
+    /** Word to print. */
+    std::string word;
+
+};
+
+/**
+ * Binary type structure. Defines a set of functions required for type to be serialized and deserialized.
+ */
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType<CountLength>
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("CountLength");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "CountLength";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static int32_t GetHashCode(const CountLength& obj)
+            {
+                return 0;
+            }
+
+            static bool IsNull(const CountLength& obj)
+            {
+                return false;
+            }
+
+            static void GetNull(CountLength& dst)
+            {
+                dst = CountLength("");
+            }
+
+            static void Write(BinaryWriter& writer, const CountLength& obj)
+            {
+                writer.RawWriter().WriteString(obj.word);
+            }
+
+            static void Read(BinaryReader& reader, CountLength& dst)
+            {
+                dst.word = reader.RawReader().ReadString();
+            }
+        };
+    }
+}
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Get binding instance.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering our class as a compute function.
+    binding.RegisterComputeFunc<CountLength>();
+
+    // Get compute instance.
+    compute::Compute compute = ignite.GetCompute();
+
+    std::istringstream iss("How many characters");
+    std::vector<std::string> words((std::istream_iterator<std::string>(iss)),
+        std::istream_iterator<std::string>());
+
+    int32_t total = 0;
+
+    // Iterate through all words in the sentence, create and call jobs.
+    for (std::string word : words)
+    {
+        // Add word length received from cluster node.
+        total += compute.Call<int32_t>(CountLength(word));
+    }
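+
+    // Print the accumulated length, as in the asynchronous example.
+    std::cout << "Total number of characters: " << total << std::endl;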
+}
+//end::compute-call[]
diff --git a/docs/_docs/code-snippets/cpp/src/compute_call_async.cpp b/docs/_docs/code-snippets/cpp/src/compute_call_async.cpp
new file mode 100644
index 0000000..bd72bdf
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/compute_call_async.cpp
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+#include <sstream>
+#include <string>
+#include <vector>
+#include <iterator>
+
+#include <ignite/ignition.h>
+#include <ignite/compute/compute.h>
+
+using namespace ignite;
+
+//tag::compute-call-async[]
+/*
+ * Function class.
+ */
+class CountLength : public compute::ComputeFunc<int32_t>
+{
+    friend struct ignite::binary::BinaryType<CountLength>;
+public:
+    /*
+     * Default constructor.
+     */
+    CountLength()
+    {
+        // No-op.
+    }
+
+    /*
+     * Constructor.
+     *
+     * @param text Text.
+     */
+    CountLength(const std::string& word) :
+        word(word)
+    {
+        // No-op.
+    }
+
+    /**
+     * Callback.
+     * Counts number of characters in provided word.
+     *
+     * @return Word's length.
+     */
+    virtual int32_t Call()
+    {
+        return word.length();
+    }
+
+    /** Word to print. */
+    std::string word;
+
+};
+
+/**
+ * Binary type structure. Defines a set of functions required for type to be serialized and deserialized.
+ */
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType<CountLength>
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("CountLength");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "CountLength";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static int32_t GetHashCode(const CountLength& obj)
+            {
+                return 0;
+            }
+
+            static bool IsNull(const CountLength& obj)
+            {
+                return false;
+            }
+
+            static void GetNull(CountLength& dst)
+            {
+                dst = CountLength("");
+            }
+
+            static void Write(BinaryWriter& writer, const CountLength& obj)
+            {
+                writer.RawWriter().WriteString(obj.word);
+            }
+
+            static void Read(BinaryReader& reader, CountLength& dst)
+            {
+                dst.word = reader.RawReader().ReadString();
+            }
+        };
+    }
+}
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Get binding instance.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering our class as a compute function.
+    binding.RegisterComputeFunc<CountLength>();
+
+    // Get compute instance.
+    compute::Compute asyncCompute = ignite.GetCompute();
+
+    std::istringstream iss("Count characters using callable");
+    std::vector<std::string> words((std::istream_iterator<std::string>(iss)),
+        std::istream_iterator<std::string>());
+
+    std::vector<Future<int32_t>> futures;
+
+    // Iterate through all words in the sentence, create and call jobs.
+    for (std::string word : words)
+    {
+        // Counting number of characters remotely.
+        futures.push_back(asyncCompute.CallAsync<int32_t>(CountLength(word)));
+    }
+
+    int32_t total = 0;
+
+    // Counting total number of characters.
+    for (Future<int32_t> future : futures)
+    {
+        // Waiting for results.
+        future.Wait();
+
+        total += future.GetValue();
+    }
+
+    // Printing result.
+    std::cout << "Total number of characters: " << total << std::endl;
+}
+//end::compute-call-async[]
diff --git a/docs/_docs/code-snippets/cpp/src/compute_get.cpp b/docs/_docs/code-snippets/cpp/src/compute_get.cpp
new file mode 100644
index 0000000..4071d04
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/compute_get.cpp
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+const char* CONFIG_DEFAULT = "/path/to/configuration.xml";
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = CONFIG_DEFAULT;
+
+    //tag::compute-get[]
+    Ignite ignite = Ignition::Start(cfg);
+
+    compute::Compute compute = ignite.GetCompute();
+    //end::compute-get[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/compute_run.cpp b/docs/_docs/code-snippets/cpp/src/compute_run.cpp
new file mode 100644
index 0000000..40896a1
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/compute_run.cpp
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+#include <sstream>
+#include <string>
+#include <vector>
+#include <iterator>
+
+#include <ignite/ignition.h>
+#include <ignite/compute/compute.h>
+
+using namespace ignite;
+
+//tag::compute-run[]
+/*
+ * Function class.
+ */
+class PrintWord : public compute::ComputeFunc<void>
+{
+    friend struct ignite::binary::BinaryType<PrintWord>;
+public:
+    /*
+     * Default constructor.
+     */
+    PrintWord()
+    {
+        // No-op.
+    }
+
+    /*
+     * Constructor.
+     *
+     * @param text Text.
+     */
+    PrintWord(const std::string& word) :
+        word(word)
+    {
+        // No-op.
+    }
+
+    /**
+     * Callback.
+     */
+    virtual void Call()
+    {
+        std::cout << word << std::endl;
+    }
+
+    /** Word to print. */
+    std::string word;
+
+};
+
+/**
+ * Binary type structure. Defines a set of functions required for type to be serialized and deserialized.
+ */
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType<PrintWord>
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("PrintWord");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "PrintWord";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static int32_t GetHashCode(const PrintWord& obj)
+            {
+                return 0;
+            }
+
+            static bool IsNull(const PrintWord& obj)
+            {
+                return false;
+            }
+
+            static void GetNull(PrintWord& dst)
+            {
+                dst = PrintWord("");
+            }
+
+            static void Write(BinaryWriter& writer, const PrintWord& obj)
+            {
+                writer.RawWriter().WriteString(obj.word);
+            }
+
+            static void Read(BinaryReader& reader, PrintWord& dst)
+            {
+                dst.word = reader.RawReader().ReadString();
+            }
+        };
+    }
+}
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Get binding instance.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering our class as a compute function.
+    binding.RegisterComputeFunc<PrintWord>();
+
+    // Get compute instance.
+    compute::Compute compute = ignite.GetCompute();
+
+    std::istringstream iss("Print words on different cluster nodes");
+    std::vector<std::string> words((std::istream_iterator<std::string>(iss)),
+        std::istream_iterator<std::string>());
+
+    // Iterate through all words and print
+    // each word on a different cluster node.
+    for (std::string word : words)
+    {
+        // Run compute task.
+        compute.Run(PrintWord(word));
+    }
+}
+//end::compute-run[]
diff --git a/docs/_docs/code-snippets/cpp/src/concurrent_updates.cpp b/docs/_docs/code-snippets/cpp/src/concurrent_updates.cpp
new file mode 100644
index 0000000..85d715a
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/concurrent_updates.cpp
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace transactions;
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignition::Start(cfg);
+
+    Ignite ignite = Ignition::Get();
+
+    Cache<std::int32_t, std::string> cache = ignite.GetOrCreateCache<std::int32_t, std::string>("myCache");
+
+    //tag::concurrent-updates[]
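+    // Try the update up to 5 times: if the transaction was marked "rollback only"
+    // by a concurrent update, it can simply be retried.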
+    for (int i = 1; i <= 5; i++)
+    {
+        Transaction tx = ignite.GetTransactions().TxStart();
+        std::cout << "attempt #" << i << ", value: " << cache.Get(1) << std::endl;
+        try {
+            cache.Put(1, "new value");
+            tx.Commit();
+            std::cout << "attempt #" << i << " succeeded" << std::endl;
+            break;
+        }
+        catch (IgniteError& e)
+        {
+            if (!tx.IsRollbackOnly())
+            {
+                // Transaction was not marked as "rollback only",
+                // so it's not a concurrent update issue.
+                // Process the exception here.
+                break;
+            }
+        }
+    }
+    //end::concurrent-updates[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/continuous_query.cpp b/docs/_docs/code-snippets/cpp/src/continuous_query.cpp
new file mode 100644
index 0000000..0d2cfb3
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/continuous_query.cpp
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+/**
+ * Listener class.
+ */
+template<typename K, typename V>
+class Listener : public event::CacheEntryEventListener<K, V>
+{
+public:
+    /**
+     * Default constructor.
+     */
+    Listener()
+    {
+        // No-op.
+    }
+
+    /**
+     * Event callback.
+     *
+     * @param evts Events.
+     * @param num Events number.
+     */
+    virtual void OnEvent(const CacheEntryEvent<K, V>* evts, uint32_t num)
+    {
+        for (uint32_t i = 0; i < num; ++i)
+        {
+            std::cout << "Queried entry [key=" << (evts[i].HasValue() ? evts[i].GetKey() : K())
+                << ", val=" << (evts[i].HasValue() ? evts[i].GetValue() : V()) << ']'
+                << std::endl;
+        }
+    }
+};
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    //tag::continuous-query[]
+    Cache<int32_t, std::string> cache = ignite.GetOrCreateCache<int32_t, std::string>("myCache");
+
+    // Custom listener
+    Listener<int32_t, std::string> listener;
+
+    // Declaring continuous query.
+    continuous::ContinuousQuery<int32_t, std::string> query(MakeReference(listener));
+
+    // Declaring optional initial query
+    ScanQuery initialQuery = ScanQuery();
+
+    continuous::ContinuousQueryHandle<int32_t, std::string> handle = cache.QueryContinuous(query, initialQuery);
+
+    // Iterating over existing data stored in the cache.
+    QueryCursor<int32_t, std::string> cursor = handle.GetInitialQueryCursor();
+
+    while (cursor.HasNext())
+    {
+        std::cout << cursor.GetNext().GetKey() << std::endl;
+    }
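+
+    // The continuous query remains active until 'handle' goes out of scope.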
+    //end::continuous-query[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/continuous_query_filter.cpp b/docs/_docs/code-snippets/cpp/src/continuous_query_filter.cpp
new file mode 100644
index 0000000..2663f7e
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/continuous_query_filter.cpp
@@ -0,0 +1,167 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+
+#include <ignite/ignition.h>
+#include <ignite/cache/query/continuous/continuous_query.h>
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+/**
+ * Listener class.
+ */
+template<typename K, typename V>
+class Listener : public event::CacheEntryEventListener<K, V>
+{
+public:
+    /**
+     * Default constructor.
+     */
+    Listener()
+    {
+        // No-op.
+    }
+
+    /**
+     * Event callback.
+     *
+     * @param evts Events.
+     * @param num Events number.
+     */
+    virtual void OnEvent(const CacheEntryEvent<K, V>* evts, uint32_t num)
+    {
+        for (uint32_t i = 0; i < num; ++i)
+        {
+            std::cout << "Queried entry [key=" << (evts[i].HasValue() ? evts[i].GetKey() : K())
+                << ", val=" << (evts[i].HasValue() ? evts[i].GetValue() : V()) << ']'
+                << std::endl;
+        }
+    }
+};
+
+//tag::continuous-query-filter[]
+template<typename K, typename V>
+struct RemoteFilter : event::CacheEntryEventFilter<K, V>
+{
+    /**
+     * Default constructor.
+     */
+    RemoteFilter()
+    {
+        // No-op.
+    }
+
+    /**
+     * Destructor.
+     */
+    virtual ~RemoteFilter()
+    {
+        // No-op.
+    }
+
+    /**
+     * Event callback.
+     *
+     * @param event Event.
+     * @return True if the event passes filter.
+     */
+    virtual bool Process(const CacheEntryEvent<K, V>& event)
+    {
+        std::cout << "The value for key " << event.GetKey() <<
+            " was updated from " << event.GetOldValue() << " to " << event.GetValue() << std::endl;
+        return true;
+    }
+};
+
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType< RemoteFilter<int32_t, std::string> >
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("RemoteFilter<int32_t,std::string>");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "RemoteFilter<int32_t,std::string>";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static bool IsNull(const RemoteFilter<int32_t, std::string>&)
+            {
+                return false;
+            }
+
+            static void GetNull(RemoteFilter<int32_t, std::string>& dst)
+            {
+                dst = RemoteFilter<int32_t, std::string>();
+            }
+
+            static void Write(BinaryWriter& writer, const RemoteFilter<int32_t, std::string>& obj)
+            {
+                // No-op.
+            }
+
+            static void Read(BinaryReader& reader, RemoteFilter<int32_t, std::string>& dst)
+            {
+                // No-op.
+            }
+        };
+    }
+}
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    // Start a node.
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Get binding.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering remote filter.
+    binding.RegisterCacheEntryEventFilter<RemoteFilter<int32_t, std::string>>();
+
+    // Get cache instance.
+    Cache<int32_t, std::string> cache = ignite.GetOrCreateCache<int32_t, std::string>("myCache");
+
+    // Declaring custom listener.
+    Listener<int32_t, std::string> listener;
+
+    // Declaring filter.
+    RemoteFilter<int32_t, std::string> filter;
+
+    // Declaring continuous query.
+    continuous::ContinuousQuery<int32_t, std::string> qry(MakeReference(listener), MakeReference(filter));
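+
+    // Start the continuous query; the filter is deployed to the remote nodes.
+    continuous::ContinuousQueryHandle<int32_t, std::string> handle = cache.QueryContinuous(qry);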
+}
+//end::continuous-query-filter[]
diff --git a/docs/_docs/code-snippets/cpp/src/continuous_query_listener.cpp b/docs/_docs/code-snippets/cpp/src/continuous_query_listener.cpp
new file mode 100644
index 0000000..947b01e
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/continuous_query_listener.cpp
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+//tag::continuous-query-listener[]
+/**
+ * Listener class.
+ */
+template<typename K, typename V>
+class Listener : public event::CacheEntryEventListener<K, V>
+{
+public:
+    /**
+     * Default constructor.
+     */
+    Listener()
+    {
+        // No-op.
+    }
+
+    /**
+     * Event callback.
+     *
+     * @param evts Events.
+     * @param num Events number.
+     */
+    virtual void OnEvent(const CacheEntryEvent<K, V>* evts, uint32_t num)
+    {
+        for (uint32_t i = 0; i < num; ++i)
+        {
+            std::cout << "Queried entry [key=" << (evts[i].HasValue() ? evts[i].GetKey() : K())
+                << ", val=" << (evts[i].HasValue() ? evts[i].GetValue() : V()) << ']'
+                << std::endl;
+        }
+    }
+};
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    Cache<int32_t, std::string> cache = ignite.GetOrCreateCache<int32_t, std::string>("myCache");
+
+    // Declaring custom listener.
+    Listener<int32_t, std::string> listener;
+
+    // Declaring continuous query.
+    continuous::ContinuousQuery<int32_t, std::string> query(MakeReference(listener));
+
+    continuous::ContinuousQueryHandle<int32_t, std::string> handle = cache.QueryContinuous(query);
+}
+//end::continuous-query-listener[]
diff --git a/docs/_docs/code-snippets/cpp/src/country.h b/docs/_docs/code-snippets/cpp/src/country.h
new file mode 100644
index 0000000..487c24f
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/country.h
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+namespace ignite
+{
+    struct Country
+    {
+        Country() : population(0)
+        {
+            // No-op.
+        }
+
+        Country(const int32_t population, const std::string& name) :
+            population(population),
+            name(name)
+        {
+            // No-op.
+        }
+
+        std::string ToString() const
+        {
+            std::ostringstream oss;
+            oss << "Country [population=" << population
+                << ", name=" << name << ']';
+            return oss.str();
+        }
+
+        int32_t population;
+        std::string name;
+    };
+}
+
+namespace ignite
+{
+    namespace binary
+    {
+        IGNITE_BINARY_TYPE_START(ignite::Country)
+
+            typedef ignite::Country Country;
+
+            IGNITE_BINARY_GET_TYPE_ID_AS_HASH(Country)
+            IGNITE_BINARY_GET_TYPE_NAME_AS_IS(Country)
+            IGNITE_BINARY_GET_FIELD_ID_AS_HASH
+            IGNITE_BINARY_IS_NULL_FALSE(Country)
+            IGNITE_BINARY_GET_NULL_DEFAULT_CTOR(Country)
+
+            static void Write(BinaryWriter& writer, const ignite::Country& obj)
+            {
+                writer.WriteInt32("population", obj.population);
+                writer.WriteString("name", obj.name);
+            }
+
+            static void Read(BinaryReader& reader, ignite::Country& dst)
+            {
+                dst.population = reader.ReadInt32("population");
+                dst.name = reader.ReadString("name");
+            }
+
+        IGNITE_BINARY_TYPE_END
+    }
+}
diff --git a/docs/_docs/code-snippets/cpp/src/invoke.cpp b/docs/_docs/code-snippets/cpp/src/invoke.cpp
new file mode 100644
index 0000000..1d2895b
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/invoke.cpp
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+#include <sstream>
+
+#include <ignite/ignition.h>
+#include <ignite/compute/compute.h>
+#include "ignite/cache/cache_entry_processor.h"
+
+using namespace ignite;
+using namespace cache;
+
+//tag::invoke[]
+/**
+ * Processor for invoke method.
+ */
+class IncrementProcessor : public cache::CacheEntryProcessor<std::string, int32_t, int32_t, int32_t>
+{
+public:
+    /**
+     * Constructor.
+     */
+    IncrementProcessor()
+    {
+        // No-op.
+    }
+
+    /**
+     * Copy constructor.
+     *
+     * @param other Other instance.
+     */
+    IncrementProcessor(const IncrementProcessor& other)
+    {
+        // No-op.
+    }
+
+    /**
+     * Assignment operator.
+     *
+     * @param other Other instance.
+     * @return This instance.
+     */
+    IncrementProcessor& operator=(const IncrementProcessor& other)
+    {
+        return *this;
+    }
+
+    /**
+     * Call instance.
+     */
+    virtual int32_t Process(MutableCacheEntry<std::string, int32_t>& entry, const int32_t& arg)
+    {
+        // Increment the value for a specific key by 1.
+        // The operation will be performed on the node where the key is stored.
+        // Note that if the cache does not contain an entry for the given key, it will
+        // be created.
+        if (!entry.IsExists())
+            entry.SetValue(1);
+        else
+            entry.SetValue(entry.GetValue() + 1);
+
+        return entry.GetValue();
+    }
+};
+
+/**
+ * Binary type structure. Defines a set of functions required for type to be serialized and deserialized.
+ */
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType<IncrementProcessor>
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("IncrementProcessor");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "IncrementProcessor";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static int32_t GetHashCode(const IncrementProcessor& obj)
+            {
+                return 0;
+            }
+
+            static bool IsNull(const IncrementProcessor& obj)
+            {
+                return false;
+            }
+
+            static void GetNull(IncrementProcessor& dst)
+            {
+                dst = IncrementProcessor();
+            }
+
+            static void Write(BinaryWriter& writer, const IncrementProcessor& obj)
+            {
+                // No-op.
+            }
+
+            static void Read(BinaryReader& reader, IncrementProcessor& dst)
+            {
+                // No-op.
+            }
+        };
+    }
+}
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "platforms/cpp/examples/put-get-example/config/example-cache.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Get cache instance.
+    Cache<std::string, int32_t> cache = ignite.GetOrCreateCache<std::string, int32_t>("myCache");
+
+    // Get binding instance.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering our class as a cache entry processor.
+    binding.RegisterCacheEntryProcessor<IncrementProcessor>();
+
+    std::string key("mykey");
+    IncrementProcessor inc;
+
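+    // Invoke the processor on the node where 'key' is stored.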
+    cache.Invoke<int32_t>(key, inc, 0);
+}
+//end::invoke[]
diff --git a/docs/_docs/code-snippets/cpp/src/key_value_execute_sql.cpp b/docs/_docs/code-snippets/cpp/src/key_value_execute_sql.cpp
new file mode 100644
index 0000000..8903173
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/key_value_execute_sql.cpp
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+#include "country.h";
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+const char* CITY_CACHE_NAME = "City";
+const char* COUNTRY_CACHE_NAME = "Country";
+const char* COUNTRY_LANGUAGE_CACHE_NAME = "CountryLanguage";
+
+int main()
+{
+    //tag::key-value-execute-sql[]
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "config/sql.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    Cache<int64_t, std::string> cityCache = ignite.GetOrCreateCache<int64_t, std::string>(CITY_CACHE_NAME);
+    Cache<int64_t, Country> countryCache = ignite.GetOrCreateCache<int64_t, Country>(COUNTRY_CACHE_NAME);
+    Cache<int64_t, std::string> languageCache = ignite.GetOrCreateCache<int64_t, std::string>(COUNTRY_LANGUAGE_CACHE_NAME);
+
+    // SQL fields queries can only select fields that are listed in the "QueryEntity" part of the cache configuration.
+    SqlFieldsQuery query = SqlFieldsQuery("SELECT name, population FROM country ORDER BY population DESC LIMIT 10");
+
+    QueryFieldsCursor cursor = countryCache.Query(query);
+    while (cursor.HasNext())
+    {
+        QueryFieldsRow row = cursor.GetNext();
+        std::string name = row.GetNext<std::string>();
+        // Read population as int64_t (assumes the query entity declares it as a numeric field).
+        int64_t population = row.GetNext<int64_t>();
+        std::cout << "    >>> " << population << " people live in " << name << std::endl;
+    }
+    //end::key-value-execute-sql[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/key_value_object_key.cpp b/docs/_docs/code-snippets/cpp/src/key_value_object_key.cpp
new file mode 100644
index 0000000..d1d0338
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/key_value_object_key.cpp
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+#include "city.h"
+#include "city_key.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+int main()
+{
+    //tag::key-value-object-key[]
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    Cache<CityKey, City> cityCache = ignite.GetOrCreateCache<CityKey, City>("City");
+
+    CityKey key = CityKey(5, "NLD");
+
+    // Construct the city value; City is assumed default-constructible with a
+    // public population field, as used below.
+    City amsterdam;
+    amsterdam.population = 100000;
+
+    cityCache.Put(key, amsterdam);
+
+    // Get the city by ID and country code.
+    City city = cityCache.Get(key);
+
+    std::cout << ">> Updating Amsterdam record:" << std::endl;
+    city.population = city.population - 10000;
+
+    cityCache.Put(key, city);
+
+    std::cout << cityCache.Get(key).ToString() << std::endl;
+    //end::key-value-object-key[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/person.h b/docs/_docs/code-snippets/cpp/src/person.h
new file mode 100644
index 0000000..492f5c5
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/person.h
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "ignite/binary/binary.h"
+
+namespace ignite
+{
+    struct Person
+    {
+        Person() : orgId(0), salary(0.0)
+        {
+            // No-op.
+        }
+
+        Person(int64_t orgId, const std::string& firstName,
+            const std::string& lastName, const std::string& resume, double salary) :
+            orgId(orgId),
+            firstName(firstName),
+            lastName(lastName),
+            resume(resume),
+            salary(salary)
+        {
+            // No-op.
+        }
+
+        std::string ToString() const
+        {
+            std::ostringstream oss;
+
+            oss << "Person [orgId=" << orgId
+                << ", lastName=" << lastName
+                << ", firstName=" << firstName
+                << ", salary=" << salary
+                << ", resume=" << resume << ']';
+
+            return oss.str();
+        }
+
+        int64_t orgId;
+        std::string firstName;
+        std::string lastName;
+        std::string resume;
+        double salary;
+    };
+}
+
+namespace ignite
+{
+    namespace binary
+    {
+        IGNITE_BINARY_TYPE_START(ignite::Person)
+
+            typedef ignite::Person Person;
+
+            IGNITE_BINARY_GET_TYPE_ID_AS_HASH(Person)
+            IGNITE_BINARY_GET_TYPE_NAME_AS_IS(Person)
+            IGNITE_BINARY_GET_FIELD_ID_AS_HASH
+            IGNITE_BINARY_IS_NULL_FALSE(Person)
+            IGNITE_BINARY_GET_NULL_DEFAULT_CTOR(Person)
+
+            static void Write(BinaryWriter& writer, const ignite::Person& obj)
+            {
+                writer.WriteInt64("orgId", obj.orgId);
+                writer.WriteString("firstName", obj.firstName);
+                writer.WriteString("lastName", obj.lastName);
+                writer.WriteString("resume", obj.resume);
+                writer.WriteDouble("salary", obj.salary);
+            }
+
+            static void Read(BinaryReader& reader, ignite::Person& dst)
+            {
+                dst.orgId = reader.ReadInt64("orgId");
+                dst.firstName = reader.ReadString("firstName");
+                dst.lastName = reader.ReadString("lastName");
+                dst.resume = reader.ReadString("resume");
+                dst.salary = reader.ReadDouble("salary");
+            }
+
+        IGNITE_BINARY_TYPE_END
+    }
+}
diff --git a/docs/_docs/code-snippets/cpp/src/scan_query.cpp b/docs/_docs/code-snippets/cpp/src/scan_query.cpp
new file mode 100644
index 0000000..56b35c3
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/scan_query.cpp
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+#include "person.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    //tag::query-cursor[]
+    Cache<int64_t, Person> cache = ignite.GetOrCreateCache<int64_t, ignite::Person>("personCache");
+
+    QueryCursor<int64_t, Person> cursor = cache.Query(ScanQuery());
+    //end::query-cursor[]
+
+    // Iterate over results.
+    while (cursor.HasNext())
+    {
+        std::cout << cursor.GetNext().GetKey() << std::endl;
+    }
+
+    //tag::set-local[]
+    ScanQuery sq;
+    sq.SetLocal(true);
+
+    QueryCursor<int64_t, Person> localCursor = cache.Query(sq);
+    //end::set-local[]
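+
+    // Iterate over the local results (mirrors the loop above).
+    while (localCursor.HasNext())
+    {
+        std::cout << localCursor.GetNext().GetKey() << std::endl;
+    }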
+
+}
diff --git a/docs/_docs/code-snippets/cpp/src/setting_work_directory.cpp b/docs/_docs/code-snippets/cpp/src/setting_work_directory.cpp
new file mode 100644
index 0000000..50ea929
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/setting_work_directory.cpp
@@ -0,0 +1,32 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+
+int main()
+{
+    //tag::setting-work-directory[]
+    IgniteConfiguration cfg;
+
+    cfg.igniteHome = "/path/to/work/directory";
+    //end::setting-work-directory[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/sql.cpp b/docs/_docs/code-snippets/cpp/src/sql.cpp
new file mode 100644
index 0000000..fb80f01
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/sql.cpp
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+#include "person.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "config/sql.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    //tag::sql-fields-query[]
+    Cache<int64_t, Person> cache = ignite.GetOrCreateCache<int64_t, Person>("Person");
+
+    // SQL fields queries can only select fields that are listed in the "QueryEntity" part of the cache configuration.
+    QueryFieldsCursor cursor = cache.Query(SqlFieldsQuery("select concat(firstName, ' ', lastName) from Person"));
+
+    // Iterate over the result set.
+    while (cursor.HasNext())
+    {
+        std::cout << "personName=" << cursor.GetNext().GetNext<std::string>() << std::endl;
+    }
+    //end::sql-fields-query[]
+
+    //tag::sql-fields-query-scheme[]
+    // SQL fields queries can only select fields that are listed in the "QueryEntity" part of the cache configuration.
+    SqlFieldsQuery sql = SqlFieldsQuery("select name from City");
+    sql.SetSchema("PERSON");
+    //end::sql-fields-query-scheme[]
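+
+    // Execute the schema-qualified query (illustrative; reuses the cache handle above).
+    QueryFieldsCursor cityCursor = cache.Query(sql);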
+
+    //tag::sql-fields-query-scheme-inline[]
+    // SQL fields queries can only select fields that are listed in the "QueryEntity" part of the cache configuration.
+    sql = SqlFieldsQuery("select name from Person.City");
+    //end::sql-fields-query-scheme-inline[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/sql_create.cpp b/docs/_docs/code-snippets/cpp/src/sql_create.cpp
new file mode 100644
index 0000000..ceae081
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/sql_create.cpp
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+#include "person.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    //tag::sql-create[]
+    Cache<int64_t, Person> cache = ignite.GetOrCreateCache<int64_t, Person>("Person");
+
+    // Creating City table.
+    cache.Query(SqlFieldsQuery("CREATE TABLE City (id int primary key, name varchar, region varchar)"));
+    //end::sql-create[]
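+
+    // Populate the new table (a minimal follow-up sketch with illustrative values).
+    cache.Query(SqlFieldsQuery("INSERT INTO City (id, name, region) VALUES (1, 'Forest Hill', 'MD')"));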
+}
diff --git a/docs/_docs/code-snippets/cpp/src/sql_join_order.cpp b/docs/_docs/code-snippets/cpp/src/sql_join_order.cpp
new file mode 100644
index 0000000..3a1dc94
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/sql_join_order.cpp
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+#include "person.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+int main()
+{
+	//tag::sql-join-order[]
+	SqlFieldsQuery query = SqlFieldsQuery("SELECT * FROM TABLE_A, TABLE_B USE INDEX(HASH_JOIN_IDX) WHERE TABLE_A.column1 = TABLE_B.column2");
+	query.SetEnforceJoinOrder(true);
+	//end::sql-join-order[]
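+
+	// The query object is only configured here; to run it, submit it
+	// through a cache instance, e.g. cache.Query(query).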
+}
diff --git a/docs/_docs/code-snippets/cpp/src/start_stop_nodes.cpp b/docs/_docs/code-snippets/cpp/src/start_stop_nodes.cpp
new file mode 100644
index 0000000..b68c35a
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/start_stop_nodes.cpp
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+
+int main()
+{
+    //tag::start-all-nodes[]
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+    //end::start-all-nodes[]
+
+    //tag::activate-cluster[]
+    ignite.SetActive(true);
+    //end::activate-cluster[]
+
+    //tag::deactivate-cluster[]
+    ignite.SetActive(false);
+    //end::deactivate-cluster[]
+
+    //tag::stop-node[]
+    Ignition::Stop(ignite.GetName(), false);
+    //end::stop-node[]
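+
+    // Alternatively, Ignition::StopAll(false) stops every node started in this process.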
+}
diff --git a/docs/_docs/code-snippets/cpp/src/thin_authentication.cpp b/docs/_docs/code-snippets/cpp/src/thin_authentication.cpp
new file mode 100644
index 0000000..34b36dd
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/thin_authentication.cpp
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::thin-authentication[]
+#include <ignite/thin/ignite_client.h>
+#include <ignite/thin/ignite_client_configuration.h>
+
+using namespace ignite::thin;
+
+void TestClientWithAuth()
+{
+    IgniteClientConfiguration cfg;
+    cfg.SetEndPoints("127.0.0.1:10800");
+
+    // Use your own credentials here.
+    cfg.SetUser("ignite");
+    cfg.SetPassword("ignite");
+
+    IgniteClient client = IgniteClient::Start(cfg);
+
+    cache::CacheClient<int32_t, std::string> cacheClient =
+        client.GetOrCreateCache<int32_t, std::string>("TestCache");
+
+    cacheClient.Put(42, "Hello Ignite Thin Client with auth!");
+}
+//end::thin-authentication[]
+
+int main()
+{
+    TestClientWithAuth();
+}
diff --git a/docs/_docs/code-snippets/cpp/src/thin_client_cache.cpp b/docs/_docs/code-snippets/cpp/src/thin_client_cache.cpp
new file mode 100644
index 0000000..d8fe477
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/thin_client_cache.cpp
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <map>
+#include <string>
+
+#include <ignite/thin/ignite_client.h>
+#include <ignite/thin/ignite_client_configuration.h>
+
+using namespace ignite::thin;
+
+int main()
+{
+    IgniteClientConfiguration cfg;
+    cfg.SetEndPoints("127.0.0.1:10800");
+
+    IgniteClient client = IgniteClient::Start(cfg);
+
+    //tag::thin-getting-cache-instance[]
+    cache::CacheClient<int32_t, std::string> cache =
+        client.GetOrCreateCache<int32_t, std::string>("TestCache");
+    //end::thin-getting-cache-instance[]
+
+    //tag::basic-cache-operations[]
+    std::map<int32_t, std::string> vals;
+    for (int32_t i = 1; i < 100; i++)
+    {
+        // Store the string form of the key; assigning an int to std::string would not compile.
+        vals[i] = std::to_string(i);
+    }
+
+    cache.PutAll(vals);
+    cache.Replace(1, "2");
+    cache.Put(101, "101");
+    cache.RemoveAll();
+    //end::basic-cache-operations[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/thin_client_ssl.cpp b/docs/_docs/code-snippets/cpp/src/thin_client_ssl.cpp
new file mode 100644
index 0000000..b11dcfd
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/thin_client_ssl.cpp
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <ignite/thin/ignite_client.h>
+#include <ignite/thin/ignite_client_configuration.h>
+
+using namespace ignite::thin;
+
+int main()
+{
+    //tag::thin-client-ssl[]
+    IgniteClientConfiguration cfg;
+
+    // Sets SSL mode.
+    cfg.SetSslMode(SslMode::Type::REQUIRE);
+
+    // Sets file path to SSL certificate authority to authenticate server certificate during connection establishment.
+    cfg.SetSslCaFile("path/to/SSL/certificate/authority");
+
+    // Sets file path to SSL certificate to use during connection establishment.
+    cfg.SetSslCertFile("path/to/SSL/certificate");
+
+    // Sets file path to SSL private key to use during connection establishment.
+    cfg.SetSslKeyFile("path/to/SSL/private/key");
+    //end::thin-client-ssl[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/thin_creating_client_instance.cpp b/docs/_docs/code-snippets/cpp/src/thin_creating_client_instance.cpp
new file mode 100644
index 0000000..a2c4230
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/thin_creating_client_instance.cpp
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::thin-creating-client-instance[]
+#include <ignite/thin/ignite_client.h>
+#include <ignite/thin/ignite_client_configuration.h>
+
+using namespace ignite::thin;
+
+void TestClient()
+{
+    IgniteClientConfiguration cfg;
+
+    // Endpoints list format is "<host>[port[..range]][,...]"
+    cfg.SetEndPoints("127.0.0.1:11110,example.com:1234..1240");
+
+    IgniteClient client = IgniteClient::Start(cfg);
+
+    cache::CacheClient<int32_t, std::string> cacheClient =
+        client.GetOrCreateCache<int32_t, std::string>("TestCache");
+
+    cacheClient.Put(42, "Hello Ignite Thin Client!");
+}
+//end::thin-creating-client-instance[]
+
+int main()
+{
+    TestClient();
+}
diff --git a/docs/_docs/code-snippets/cpp/src/thin_partition_awareness.cpp b/docs/_docs/code-snippets/cpp/src/thin_partition_awareness.cpp
new file mode 100644
index 0000000..c965d9b
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/thin_partition_awareness.cpp
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::thin-partition-awareness[]
+#include <ignite/thin/ignite_client.h>
+#include <ignite/thin/ignite_client_configuration.h>
+
+using namespace ignite::thin;
+
+void TestClientPartitionAwareness()
+{
+    IgniteClientConfiguration cfg;
+    cfg.SetEndPoints("127.0.0.1:10800,217.29.2.1:10800,200.10.33.1:10800");
+    cfg.SetPartitionAwareness(true);
+
+    IgniteClient client = IgniteClient::Start(cfg);
+
+    cache::CacheClient<int32_t, std::string> cacheClient =
+        client.GetOrCreateCache<int32_t, std::string>("TestCache");
+
+    cacheClient.Put(42, "Hello Ignite Partition Awareness!");
+
+    cacheClient.RefreshAffinityMapping();
+
+    // Getting a value
+    std::string val = cacheClient.Get(42);
+}
+//end::thin-partition-awareness[]
+
+int main()
+{
+    TestClientPartitionAwareness();
+}
diff --git a/docs/_docs/code-snippets/cpp/src/transactions.cpp b/docs/_docs/code-snippets/cpp/src/transactions.cpp
new file mode 100644
index 0000000..6e79ee6
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/transactions.cpp
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace transactions;
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignition::Start(cfg);
+
+    Ignite ignite = Ignition::Get();
+
+    Cache<std::string, int32_t> cache = ignite.GetOrCreateCache<std::string, int32_t>("myCache");
+
+    //tag::transactions-execution[]
+    Transactions transactions = ignite.GetTransactions();
+
+    Transaction tx = transactions.TxStart();
+    int hello = cache.Get("Hello");
+
+    if (hello == 1)
+        cache.Put("Hello", 11);
+
+    cache.Put("World", 22);
+
+    tx.Commit();
+    //end::transactions-execution[]
+
+    //tag::transactions-optimistic[]
+    // Re-try the transaction a limited number of times.
+    int const retryCount = 10;
+    int retries = 0;
+    
+    // Start a transaction in the optimistic mode with the serializable isolation level.
+    while (retries < retryCount)
+    {
+        retries++;
+    
+        try
+        {
+            Transaction tx = ignite.GetTransactions().TxStart(
+                    TransactionConcurrency::OPTIMISTIC, TransactionIsolation::SERIALIZABLE);
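+
+            // Perform cache reads/writes here. With SERIALIZABLE isolation,
+            // Commit() throws IgniteError if a conflicting update is
+            // detected, which triggers a retry below.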
+
+            // Commit the transaction.
+            tx.Commit();
+
+            // The transaction succeeded; leave the retry loop.
+            break;
+        }
+        catch (const IgniteError& e)
+        {
+            // Transaction has failed. Retry.
+        }
+    }
+    //end::transactions-optimistic[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/transactions_pessimistic.cpp b/docs/_docs/code-snippets/cpp/src/transactions_pessimistic.cpp
new file mode 100644
index 0000000..ea28876
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/transactions_pessimistic.cpp
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace transactions;
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+    
+    Cache<int32_t, int32_t> cache = ignite.GetOrCreateCache<int32_t, int32_t>("myCache");
+
+    //tag::transactions-pessimistic[]
+    try {
+        Transaction tx = ignite.GetTransactions().TxStart(
+            TransactionConcurrency::PESSIMISTIC, TransactionIsolation::READ_COMMITTED, 300, 0);
+        cache.Put(1, 1);
+    
+        cache.Put(2, 1);
+    
+        tx.Commit();
+    }
+    catch (IgniteError& err)
+    {
+        std::cout << "An error occurred: " << err.GetText() << std::endl;
+        std::cin.get();
+        return err.GetCode();
+    }
+    //end::transactions-pessimistic[]
+}
diff --git a/docs/_docs/code-snippets/dotnet/AffinityCollocation.cs b/docs/_docs/code-snippets/dotnet/AffinityCollocation.cs
new file mode 100644
index 0000000..433e113
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/AffinityCollocation.cs
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using System;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cache;
+using Apache.Ignite.Core.Cache.Affinity;
+using Apache.Ignite.Core.Cache.Configuration;
+
+namespace dotnet_helloworld
+{
+    // tag::affinityCollocation[]
+    class Person
+    {
+        public int Id { get; set; }
+        public string Name { get; set; }
+        public int CityId { get; set; }
+        public string CompanyId { get; set; }
+    }
+
+    class PersonKey
+    {
+        public int Id { get; set; }
+
+        [AffinityKeyMapped] public string CompanyId { get; set; }
+    }
+
+    class Company
+    {
+        public string Name { get; set; }
+    }
+
+    class AffinityCollocation
+    {
+        public static void Example()
+        {
+            var personCfg = new CacheConfiguration
+            {
+                Name = "persons",
+                Backups = 1,
+                CacheMode = CacheMode.Partitioned
+            };
+
+            var companyCfg = new CacheConfiguration
+            {
+                Name = "companies",
+                Backups = 1,
+                CacheMode = CacheMode.Partitioned
+            };
+
+            using (var ignite = Ignition.Start())
+            {
+                var personCache = ignite.GetOrCreateCache<PersonKey, Person>(personCfg);
+                var companyCache = ignite.GetOrCreateCache<string, Company>(companyCfg);
+
+                var person = new Person {Name = "Vasya"};
+
+                var company = new Company {Name = "Company1"};
+
+                personCache.Put(new PersonKey {Id = 1, CompanyId = "company1_key"}, person);
+                companyCache.Put("company1_key", company);
+            }
+        }
+    }
+    // end::affinityCollocation[]
+
+    static class CacheKeyConfigurationExamples
+    {
+        public static void ConfigureAffinityKeyWithCacheKeyConfiguration()
+        {
+            // tag::config-with-key-configuration[]
+            var personCfg = new CacheConfiguration("persons")
+            {
+                KeyConfiguration = new[]
+                {
+                    new CacheKeyConfiguration
+                    {
+                        // The affinity key field is declared on the cache key type, PersonKey.
+                        TypeName = nameof(PersonKey),
+                        AffinityKeyFieldName = nameof(PersonKey.CompanyId)
+                    } 
+                }
+            };
+
+            var companyCfg = new CacheConfiguration("companies");
+
+            IIgnite ignite = Ignition.Start();
+
+            ICache<PersonKey, Person> personCache = ignite.GetOrCreateCache<PersonKey, Person>(personCfg);
+            ICache<string, Company> companyCache = ignite.GetOrCreateCache<string, Company>(companyCfg);
+
+            var companyId = "company_1";
+            Company c1 = new Company {Name = "My company"};
+            Person p1 = new Person {Id = 1, Name = "John", CompanyId = companyId};
+
+            // Both the p1 and c1 objects will be cached on the same node
+            personCache.Put(new PersonKey {Id = 1, CompanyId = companyId}, p1);
+            companyCache.Put(companyId, c1);
+
+            // Get the person object
+            p1 = personCache.Get(new PersonKey {Id = 1, CompanyId = companyId});
+            // end::config-with-key-configuration[]
+        }
+
+        public static void AffinityKeyClass()
+        {
+            // tag::affinity-key-class[]
+            var personCfg = new CacheConfiguration("persons");
+            var companyCfg = new CacheConfiguration("companies");
+
+            IIgnite ignite = Ignition.Start();
+
+            ICache<AffinityKey, Person> personCache = ignite.GetOrCreateCache<AffinityKey, Person>(personCfg);
+            ICache<string, Company> companyCache = ignite.GetOrCreateCache<string, Company>(companyCfg);
+
+            var companyId = "company_1";
+            Company c1 = new Company {Name = "My company"};
+            Person p1 = new Person {Id = 1, Name = "John", CompanyId = companyId};
+
+            // Both the p1 and c1 objects will be cached on the same node
+            personCache.Put(new AffinityKey(1, companyId), p1);
+            companyCache.Put(companyId, c1);
+
+            // Get the person object
+            p1 = personCache.Get(new AffinityKey(1, companyId));
+            // end::affinity-key-class[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/BaselineTopology.cs b/docs/_docs/code-snippets/dotnet/BaselineTopology.cs
new file mode 100644
index 0000000..65710ea
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/BaselineTopology.cs
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using System;
+using Apache.Ignite.Core;
+
+namespace dotnet_helloworld
+{
+    public static class BaselineTopology
+    {
+        public static void Activate()
+        {
+            // tag::activate[]
+            IIgnite ignite = Ignition.Start();
+            ignite.GetCluster().SetActive(true);
+            // end::activate[]
+        }
+
+        public static void EnableAutoAdjust()
+        {
+            // tag::enable-autoadjustment[]
+            IIgnite ignite = Ignition.Start();
+            ignite.GetCluster().SetBaselineAutoAdjustEnabledFlag(true);
+            ignite.GetCluster().SetBaselineAutoAdjustTimeout(30000);
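+            // With these settings, the baseline is recalculated automatically
+            // 30 seconds after a topology change.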
+            // end::enable-autoadjustment[]
+        }
+
+        public static void DisableAutoAdjust()
+        {
+            IIgnite ignite = Ignition.Start();
+            // tag::disable-autoadjustment[]
+            ignite.GetCluster().SetBaselineAutoAdjustEnabledFlag(false);
+            // end::disable-autoadjustment[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/BasicCacheOperations.cs b/docs/_docs/code-snippets/dotnet/BasicCacheOperations.cs
new file mode 100644
index 0000000..11980c4
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/BasicCacheOperations.cs
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using System;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Compute;
+
+namespace dotnet_helloworld
+{
+    public class BasicCacheOperations
+    {
+        public static void AtomicOperations()
+        {
+            // tag::atomicOperations1[]
+            using (var ignite = Ignition.Start("examples/config/example-cache.xml"))
+            {
+                var cache = ignite.GetCache<int, string>("cache_name");
+
+                for (var i = 0; i < 10; i++)
+                {
+                    cache.Put(i, i.ToString());
+                }
+
+                for (var i = 0; i < 10; i++)
+                {
+                    Console.Write("Got [key=" + i + ", val=" + cache.Get(i) + ']');
+                }
+            }
+            // end::atomicOperations1[]
+
+            // tag::atomicOperations2[]
+            using (var ignite = Ignition.Start("examples/config/example-cache.xml"))
+            {
+                var cache = ignite.GetCache<string, int>("cache_name");
+
+                // Put-if-absent which returns previous value.
+                var oldVal = cache.GetAndPutIfAbsent("Hello", 11);
+
+                // Put-if-absent which returns boolean success flag.
+                var success = cache.PutIfAbsent("World", 22);
+
+                // Replace-if-exists operation (opposite of getAndPutIfAbsent), returns previous value.
+                oldVal = cache.GetAndReplace("Hello", 11);
+
+                // Replace-if-exists operation (opposite of putIfAbsent), returns boolean success flag.
+                success = cache.Replace("World", 22);
+
+                // Replace-if-matches operation.
+                success = cache.Replace("World", 2, 22);
+
+                // Remove-if-matches operation.
+                success = cache.Remove("Hello", 1);
+            }
+            // end::atomicOperations2[]
+        }
+
+        // tag::asyncExec[]
+        class HelloworldFunc : IComputeFunc<string>
+        {
+            public string Invoke()
+            {
+                return "Hello World";
+            }
+        }
+        
+        public static void AsynchronousExecution()
+        {
+            var ignite = Ignition.Start();
+            var compute = ignite.GetCompute();
+            
+            // Execute a closure asynchronously
+            var fut = compute.CallAsync(new HelloworldFunc());
+            
+            // Listen for completion and print out the result
+            fut.ContinueWith(Console.Write);
+        }
+        // end::asyncExec[]
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/ClusterGroups.cs b/docs/_docs/code-snippets/dotnet/ClusterGroups.cs
new file mode 100644
index 0000000..8948b7d
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/ClusterGroups.cs
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using System;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Compute;
+using Apache.Ignite.Core.Discovery.Tcp;
+using Apache.Ignite.Core.Discovery.Tcp.Static;
+
+namespace dotnet_helloworld
+{
+    public class ClusterGroups
+    {
+        // tag::broadcastAction[]
+        class PrintNodeIdAction : IComputeAction
+        {
+            public void Invoke()
+            {
+                Console.WriteLine("Hello node: " +
+                                  Ignition.GetIgnite().GetCluster().GetLocalNode().Id);
+            }
+        }
+
+        public static void RemotesBroadcastDemo()
+        {
+            var ignite = Ignition.Start();
+
+            var cluster = ignite.GetCluster();
+
+            // Get compute instance which will only execute
+            // over remote nodes, i.e. all the nodes except for this one.
+            var compute = cluster.ForRemotes().GetCompute();
+
+            // Broadcast to all remote nodes and print the ID of the node
+            // on which this closure is executing.
+            compute.Broadcast(new PrintNodeIdAction());
+        }
+        // end::broadcastAction[]
+
+        public static void ClusterGroupsDemo()
+        {
+            var ignite = Ignition.Start(
+                new IgniteConfiguration
+                {
+                    DiscoverySpi = new TcpDiscoverySpi
+                    {
+                        LocalPort = 48500,
+                        LocalPortRange = 20,
+                        IpFinder = new TcpDiscoveryStaticIpFinder
+                        {
+                            Endpoints = new[]
+                            {
+                                "127.0.0.1:48500..48520"
+                            }
+                        }
+                    }
+                }
+            );
+    
+            // tag::clusterGroups[]
+            var cluster = ignite.GetCluster();
+            
+            // All nodes on which cache with name "myCache" is deployed,
+            // either in client or server mode.
+            var cacheGroup = cluster.ForCacheNodes("myCache");
+
+            // All data nodes responsible for caching data for "myCache".
+            var dataGroup = cluster.ForDataNodes("myCache");
+
+            // All client nodes that access "myCache".
+            var clientGroup = cluster.ForClientNodes("myCache");
+            // end::clusterGroups[]
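+
+            // A cluster group can scope compute execution, e.g. broadcast
+            // only to the data nodes of "myCache" (illustrative).
+            dataGroup.GetCompute().Broadcast(new PrintNodeIdAction());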
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/ClusteringOverview.cs b/docs/_docs/code-snippets/dotnet/ClusteringOverview.cs
new file mode 100644
index 0000000..2b6ea30
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/ClusteringOverview.cs
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Communication.Tcp;
+
+namespace dotnet_helloworld
+{
+    class ClusteringOverview
+    {
+        static void Foo()
+        {
+            // tag::ClientsAndServers[]
+            Ignition.ClientMode = true;
+            Ignition.Start();
+            // end::ClientsAndServers[]
+
+            // tag::CommunicationSPI[]
+            var cfg = new IgniteConfiguration
+            {
+                CommunicationSpi = new TcpCommunicationSpi
+                {
+                    LocalPort = 1234
+                }
+            };
+            Ignition.Start(cfg);
+            // end::CommunicationSPI[]
+        }
+
+        static void ClientCfg()
+        {
+            // tag::ClientCfg[]
+            var cfg = new IgniteConfiguration
+            {
+                // Enable client mode.
+                ClientMode = true
+            };
+            
+            // Start Ignite in client mode.
+            Ignition.Start(cfg);
+            // end::ClientCfg[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/ClusteringTcpIpDiscovery.cs b/docs/_docs/code-snippets/dotnet/ClusteringTcpIpDiscovery.cs
new file mode 100644
index 0000000..4e83a23
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/ClusteringTcpIpDiscovery.cs
@@ -0,0 +1,132 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Communication.Tcp;
+using Apache.Ignite.Core.Discovery.Tcp;
+using Apache.Ignite.Core.Discovery.Tcp.Multicast;
+using Apache.Ignite.Core.Discovery.Tcp.Static;
+
+namespace dotnet_helloworld
+{
+    class ClusteringTcpIpDiscovery
+    {
+        public static void MulticastIpFinderDemo()
+        {
+            //tag::multicast[]
+            var cfg = new IgniteConfiguration
+            {
+                DiscoverySpi = new TcpDiscoverySpi
+                {
+                    IpFinder = new TcpDiscoveryMulticastIpFinder
+                    {
+                        MulticastGroup = "228.10.10.157"
+                    }
+                }
+            };
+            Ignition.Start(cfg);
+            //end::multicast[]
+        }
+
+        public static void StaticIpFinderDemo()
+        {
+            //tag::static[]
+            var cfg = new IgniteConfiguration
+            {
+                DiscoverySpi = new TcpDiscoverySpi
+                {
+                    IpFinder = new TcpDiscoveryStaticIpFinder
+                    {
+                        Endpoints = new[] {"1.2.3.4", "1.2.3.5:47500..47509" }
+                    }
+                }
+            };
+            //end::static[]
+            Ignition.Start(cfg);
+        }
+
+        public static void MulticastAndStaticDemo()
+        {
+            //tag::multicastAndStatic[]
+            var cfg = new IgniteConfiguration
+            {
+                DiscoverySpi = new TcpDiscoverySpi
+                {
+                    IpFinder = new TcpDiscoveryMulticastIpFinder
+                    {
+                        MulticastGroup = "228.10.10.157",
+                        Endpoints = new[] {"1.2.3.4", "1.2.3.5:47500..47509" }
+                    }
+                }
+            };
+            Ignition.Start(cfg);
+            //end::multicastAndStatic[]
+        }
+        
+        public static void IsolatedClustersDemo()
+        {
+            // tag::isolated1[]
+            var firstCfg = new IgniteConfiguration
+            {
+                IgniteInstanceName = "first",
+                DiscoverySpi = new TcpDiscoverySpi
+                {
+                    LocalPort = 48500,
+                    LocalPortRange = 20,
+                    IpFinder = new TcpDiscoveryStaticIpFinder
+                    {
+                        Endpoints = new[]
+                        {
+                            "127.0.0.1:48500..48520"
+                        }
+                    }
+                },
+                CommunicationSpi = new TcpCommunicationSpi
+                {
+                    LocalPort = 48100
+                }
+            };
+            Ignition.Start(firstCfg);
+            // end::isolated1[]
+
+            // tag::isolated2[]
+            var secondCfg = new IgniteConfiguration
+            {
+                IgniteInstanceName = "second",
+                DiscoverySpi = new TcpDiscoverySpi
+                {
+                    LocalPort = 49500,
+                    LocalPortRange = 20,
+                    IpFinder = new TcpDiscoveryStaticIpFinder
+                    {
+                        Endpoints = new[]
+                        {
+                            "127.0.0.1:49500..49520"
+                        }
+                    }
+                },
+                CommunicationSpi = new TcpCommunicationSpi
+                {
+                    LocalPort = 49100
+                }
+            };
+            Ignition.Start(secondCfg);
+            // end::isolated2[]
+            
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/CollocationgComputationsWithData.cs b/docs/_docs/code-snippets/dotnet/CollocationgComputationsWithData.cs
new file mode 100644
index 0000000..379493e
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/CollocationgComputationsWithData.cs
@@ -0,0 +1,161 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Binary;
+using Apache.Ignite.Core.Cache;
+using Apache.Ignite.Core.Cache.Query;
+using Apache.Ignite.Core.Compute;
+using Apache.Ignite.Core.Discovery.Tcp;
+using Apache.Ignite.Core.Discovery.Tcp.Static;
+using Apache.Ignite.Core.Resource;
+
+namespace dotnet_helloworld
+{
+    public class CollocationgComputationsWithData
+    {
+        // tag::affinityRun[]
+
+        class MyComputeAction : IComputeAction
+        {
+            [InstanceResource] private readonly IIgnite _ignite;
+
+            public int Key { get; set; }
+
+            public void Invoke()
+            {
+                var cache = _ignite.GetCache<int, string>("myCache");
+                // Peek is a local memory lookup
+                Console.WriteLine("Co-located [key= " + Key + ", value= " + cache.LocalPeek(Key) + ']');
+            }
+        }
+
+        public static void AffinityRunDemo()
+        {
+            var cfg = new IgniteConfiguration();
+            // end::affinityRun[]
+            var discoverySpi = new TcpDiscoverySpi
+            {
+                LocalPort = 48500,
+                LocalPortRange = 20,
+                IpFinder = new TcpDiscoveryStaticIpFinder
+                {
+                    Endpoints = new[]
+                    {
+                        "127.0.0.1:48500..48520"
+                    }
+                }
+            };
+            cfg.DiscoverySpi = discoverySpi;
+            // tag::affinityRun[]
+            var ignite = Ignition.Start(cfg);
+
+            var cache = ignite.GetOrCreateCache<int, string>("myCache");
+            cache.Put(0, "foo");
+            cache.Put(1, "bar");
+            cache.Put(2, "baz");
+            var keyCnt = 3;
+            
+            var compute = ignite.GetCompute();
+
+            for (var key = 0; key < keyCnt; key++)
+            {
+                // This closure will execute on the remote node where
+                // data for the given 'key' is located.
+                compute.AffinityRun("myCache", key, new MyComputeAction {Key = key});
+            }
+        }
+        // end::affinityRun[]
+        
+        // tag::calculate-average[]
+        // this task sums up the values of the salary field for the given set of keys
+        // TODO: APIs are not released yet
+        /*
+        private class SumTask : IComputeFunc<decimal>
+        {
+            private readonly ICollection<long> _keys;
+            
+            [InstanceResource] private IIgnite _ignite;
+
+            public SumTask(ICollection<long> keys)
+            {
+                _keys = keys;
+            }
+
+            public decimal Invoke()
+            {
+                ICache<long, IBinaryObject> cache = _ignite.GetCache<long, object>("person")
+                    .WithKeepBinary<long, IBinaryObject>();
+
+                return _keys.Sum(k => cache[k].GetField<decimal>("salary"));
+            }
+        }
+
+        public static void CalculateAverage(IIgnite ignite, ICollection<long> keys)
+        {
+            // get the affinity function configured for the cache
+            const string cacheName = "person";
+            ICacheAffinity affinity = ignite.GetAffinity(cacheName);
+
+            IEnumerable<IGrouping<int, long>> keysByPartition = keys.GroupBy(affinity.GetPartition);
+
+            decimal total = 0;
+
+            ICompute compute = ignite.GetCompute();
+
+            foreach (IGrouping<int, long> grouping in keysByPartition)
+            {
+                int partition = grouping.Key;
+                long[] partitionKeys = grouping.ToArray();
+                decimal sum = compute.AffinityCall(cacheName, partition, new SumTask(partitionKeys));
+                total += sum;
+            }
+
+            Console.WriteLine("the average salary is " + total / keys.Count);
+        }*/
+
+        // end::calculate-average[]
+
+
+        private class Person
+        {
+            public string Name { get; set; }
+            public decimal Salary { get; set; }
+        }
+        
+        class SumFunc : IComputeFunc<decimal>
+        {
+            public int PartId { get; set; }
+            
+            [InstanceResource] private readonly IIgnite _ignite;
+            
+            public decimal Invoke()
+            {
+                //use binary objects to avoid deserialization
+                var cache = _ignite.GetCache<long, Person>("person").WithKeepBinary<long, IBinaryObject>();
+
+                using (var cursor = cache.Query(new ScanQuery<long, IBinaryObject>{Partition = PartId, Local = true}))
+                {
+                    return cursor.Sum(entry => entry.Value.GetField<decimal>("salary"));
+                }
+            }
+        }
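+
+        // Not part of the original snippet: a minimal driver sketch for the
+        // SumFunc above, invoking it once per partition using only released
+        // APIs (the AffinityCall overload that takes a partition id is still
+        // marked as unreleased in the commented-out block above). The cache
+        // name "person" and the keyCount parameter are assumptions.
+        public static void CalculateAverageWithScanQueries(IIgnite ignite, long keyCount)
+        {
+            var affinity = ignite.GetAffinity("person");
+            decimal total = 0;
+
+            for (var part = 0; part < affinity.Partitions; part++)
+            {
+                // Route the function to the node that currently owns the partition.
+                // Unlike AffinityCall, this does not pin the partition, so a
+                // concurrent rebalance could move it mid-computation.
+                var node = affinity.MapPartitionToNode(part);
+
+                total += ignite.GetCluster().ForNodes(node).GetCompute()
+                    .Call(new SumFunc {PartId = part});
+            }
+
+            Console.WriteLine("the average salary is " + total / keyCount);
+        }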
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/ConfiguringMetrics.cs b/docs/_docs/code-snippets/dotnet/ConfiguringMetrics.cs
new file mode 100644
index 0000000..bf883a5
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/ConfiguringMetrics.cs
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cache.Configuration;
+using Apache.Ignite.Core.Configuration;
+
+namespace dotnet_helloworld
+{
+    public class ConfiguringMetrics
+    {
+        public static void EnablingCacheMetrics()
+        {
+            // tag::cache-metrics[]
+            var cfg = new IgniteConfiguration
+            {
+                CacheConfiguration = new[]
+                {
+                    new CacheConfiguration("my-cache")
+                    {
+                        EnableStatistics = true
+                    }
+                }
+            };
+
+            var ignite = Ignition.Start(cfg);
+            // end::cache-metrics[]
+        }
+        
+        public static void EnablingDataRegionMetrics()
+        {
+            // tag::data-region-metrics[]
+            var cfg = new IgniteConfiguration
+            {
+                DataStorageConfiguration = new DataStorageConfiguration
+                {
+                    DefaultDataRegionConfiguration = new DataRegionConfiguration
+                    {
+                        Name = DataStorageConfiguration.DefaultDataRegionName,
+                        MetricsEnabled = true
+                    },
+                    DataRegionConfigurations = new[]
+                    {
+                        new DataRegionConfiguration
+                        {
+                            Name = "myDataRegion",
+                            MetricsEnabled = true
+                        }
+                    }
+                }
+            };
+            
+            var ignite = Ignition.Start(cfg);
+            // end::data-region-metrics[]
+        }
+        
+        public static void EnablingPersistenceMetrics()
+        {
+            // tag::data-storage-metrics[]
+            var cfg = new IgniteConfiguration
+            {
+                DataStorageConfiguration = new DataStorageConfiguration
+                {
+                    MetricsEnabled = true
+                }
+            };
+            
+            var ignite = Ignition.Start(cfg);
+            // end::data-storage-metrics[]
+        }
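+
+        // Not part of the original snippet: a hedged sketch of reading the
+        // metrics enabled above through the thick API. The cache name
+        // "my-cache" matches the configuration in EnablingCacheMetrics;
+        // Console is fully qualified because this file does not import System.
+        public static void ReadingMetrics(IIgnite ignite)
+        {
+            // Cache metrics (requires EnableStatistics = true).
+            var cacheMetrics = ignite.GetCache<int, string>("my-cache").GetMetrics();
+            System.Console.WriteLine("Cache gets: " + cacheMetrics.CacheGets);
+
+            // Data region metrics (requires MetricsEnabled = true on the region).
+            foreach (var region in ignite.GetDataRegionMetrics())
+                System.Console.WriteLine(region.Name + ": " + region.TotalAllocatedPages + " pages");
+        }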
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/ContiniuosQueries.cs b/docs/_docs/code-snippets/dotnet/ContiniuosQueries.cs
new file mode 100644
index 0000000..aaac9a3
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/ContiniuosQueries.cs
@@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using System;
+using System.Collections.Generic;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cache.Event;
+using Apache.Ignite.Core.Cache.Query.Continuous;
+using Apache.Ignite.Core.Discovery.Tcp;
+using Apache.Ignite.Core.Discovery.Tcp.Static;
+
+namespace dotnet_helloworld
+{
+    public class ContinuousQueries
+    {
+        // tag::localListener[]
+        // tag::remoteFilter[]
+        class LocalListener : ICacheEntryEventListener<int, string>
+        {
+            public void OnEvent(IEnumerable<ICacheEntryEvent<int, string>> evts)
+            {
+                foreach (var cacheEntryEvent in evts)
+                {
+                    //react to update events here
+                }
+            }
+        }
+        // end::localListener[]
+        class RemoteFilter : ICacheEntryEventFilter<int, string>
+        {
+            public bool Evaluate(ICacheEntryEvent<int, string> e)
+            {
+                if (e.Key == 1)
+                {
+                    return false;
+                }
+                Console.WriteLine("the value for key {0} was updated from {1} to {2}", e.Key, e.OldValue, e.Value);
+                return true;
+            }
+        }
+        // end::remoteFilter[]
+        
+        // tag::localListener[]
+        public static void ContinuousQueryListenerDemo()
+        {
+            var ignite = Ignition.Start(new IgniteConfiguration
+            {
+                DiscoverySpi = new TcpDiscoverySpi
+                {
+                    LocalPort = 48500,
+                    LocalPortRange = 20,
+                    IpFinder = new TcpDiscoveryStaticIpFinder
+                    {
+                        Endpoints = new[]
+                        {
+                            "127.0.0.1:48500..48520"
+                        }
+                    }
+                }
+            });
+            var cache = ignite.GetOrCreateCache<int, string>("myCache");
+
+            var query = new ContinuousQuery<int, string>(new LocalListener());
+
+            // Dispose the handle to stop receiving notifications.
+            using (var handle = cache.QueryContinuous(query))
+            {
+                // These updates are delivered to the local listener.
+                cache.Put(1, "1");
+                cache.Put(2, "2");
+            }
+        }
+        // end::localListener[]
+        
+        // tag::remoteFilter[]
+        public static void ContinuousQueryFilterDemo()
+        {
+            var ignite = Ignition.Start(new IgniteConfiguration
+            {
+                DiscoverySpi = new TcpDiscoverySpi
+                {
+                    LocalPort = 48500,
+                    LocalPortRange = 20,
+                    IpFinder = new TcpDiscoveryStaticIpFinder
+                    {
+                        Endpoints = new[]
+                        {
+                            "127.0.0.1:48500..48520"
+                        }
+                    }
+                }
+            });
+            var cache = ignite.GetOrCreateCache<int, string>("myCache");
+
+            var query = new ContinuousQuery<int, string>(new LocalListener(), new RemoteFilter());
+
+            // Dispose the handle to stop receiving notifications.
+            using (var handle = cache.QueryContinuous(query))
+            {
+                // Put(1, ...) is filtered out by RemoteFilter; Put(2, ...) is delivered.
+                cache.Put(1, "1");
+                cache.Put(2, "2");
+            }
+        }
+        // end::remoteFilter[]
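+
+        // Not part of the original snippet: a hedged sketch of pairing the
+        // continuous query with an initial query, so that pre-existing entries
+        // arrive through a cursor before update notifications begin. Assumes
+        // the QueryContinuous(query, initialQuery) overload of the thick API;
+        // ScanQuery is fully qualified because this file does not import
+        // Apache.Ignite.Core.Cache.Query.
+        public static void ContinuousQueryInitialQueryDemo(IIgnite ignite)
+        {
+            var cache = ignite.GetOrCreateCache<int, string>("myCache");
+            cache.Put(1, "existing");
+
+            var query = new ContinuousQuery<int, string>(new LocalListener());
+            var initialQuery = new Apache.Ignite.Core.Cache.Query.ScanQuery<int, string>();
+
+            using (var handle = cache.QueryContinuous(query, initialQuery))
+            {
+                // Entries that were already in the cache when the query started.
+                foreach (var entry in handle.GetInitialQueryCursor())
+                    Console.WriteLine(entry.Key + " -> " + entry.Value);
+
+                // This update is delivered to LocalListener.
+                cache.Put(2, "added later");
+            }
+        }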
+        
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/DataModellingConfiguringCaches.cs b/docs/_docs/code-snippets/dotnet/DataModellingConfiguringCaches.cs
new file mode 100644
index 0000000..6063e2e
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/DataModellingConfiguringCaches.cs
@@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cache.Configuration;
+
+namespace dotnet_helloworld
+{
+    class DataModellingConfiguringCaches
+    {
+        public static void ConfigurationExample()
+        {
+            // tag::cfg[]
+            var cfg = new IgniteConfiguration
+            {
+                CacheConfiguration = new[]
+                {
+                    new CacheConfiguration
+                    {
+                        Name = "myCache",
+                        CacheMode = CacheMode.Partitioned,
+                        Backups = 2,
+                        RebalanceMode = CacheRebalanceMode.Sync,
+                        WriteSynchronizationMode = CacheWriteSynchronizationMode.FullSync,
+                        PartitionLossPolicy = PartitionLossPolicy.ReadOnlySafe
+                    }
+                }
+            };
+            Ignition.Start(cfg);
+            // end::cfg[]
+        }
+
+        public static void Backups()
+        {
+            // tag::backups[]
+            var cfg = new IgniteConfiguration
+            {
+                CacheConfiguration = new[]
+                {
+                    new CacheConfiguration
+                    {
+                        Name = "myCache",
+                        CacheMode = CacheMode.Partitioned,
+                        Backups = 1
+                    }
+                }
+            };
+            Ignition.Start(cfg);
+            // end::backups[]
+        }
+        
+        public static void AsyncBackups()
+        {
+            // tag::synchronization-mode[]
+            var cfg = new IgniteConfiguration
+            {
+                CacheConfiguration = new[]
+                {
+                    new CacheConfiguration
+                    {
+                        Name = "myCache",
+                        WriteSynchronizationMode = CacheWriteSynchronizationMode.FullSync,
+                        Backups = 1
+                    }
+                }
+            };
+            Ignition.Start(cfg);
+            // end::synchronization-mode[]
+        }
+
+
+        public static void CacheTemplates() 
+        {
+            // tag::template[]
+            var ignite = Ignition.Start();
+
+            var cfg = new CacheConfiguration
+            {
+                Name = "myCacheTemplate*",
+                CacheMode = CacheMode.Partitioned,
+                Backups = 2
+            };
+
+            ignite.AddCacheConfiguration(cfg);
+            // end::template[]
+        }
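+
+        // Not part of the original snippet: a hedged sketch of using the
+        // template registered above. A cache whose name matches the
+        // "myCacheTemplate*" pattern is expected to be created with the
+        // template's settings (Partitioned mode, 2 backups).
+        public static void CreateCacheFromTemplate(IIgnite ignite)
+        {
+            var cache = ignite.GetOrCreateCache<int, string>("myCacheTemplate1");
+            cache.Put(1, "configured from the template");
+        }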
+
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/DataModellingDataPartitioning.cs b/docs/_docs/code-snippets/dotnet/DataModellingDataPartitioning.cs
new file mode 100644
index 0000000..a0dc065
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/DataModellingDataPartitioning.cs
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cache.Configuration;
+
+namespace dotnet_helloworld
+{
+    class DataModellingDataPartitioning
+    {
+
+        public static void Foo()
+        {
+            // tag::partitioning[]
+            var cfg = new IgniteConfiguration
+            {
+                CacheConfiguration = new[]
+                {
+                    new CacheConfiguration
+                    {
+                        Name = "Person",
+                        Backups = 1,
+                        GroupName = "group1"
+                    },
+                    new CacheConfiguration
+                    {
+                        Name = "Organization",
+                        Backups = 1,
+                        GroupName = "group1"
+                    }
+                }
+            };
+            Ignition.Start(cfg);
+            // end::partitioning[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/DataRebalancing.cs b/docs/_docs/code-snippets/dotnet/DataRebalancing.cs
new file mode 100644
index 0000000..9fcb2e8
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/DataRebalancing.cs
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using System;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cache.Configuration;
+
+namespace dotnet_helloworld
+{
+    class DataRebalancing
+    {
+        public static void RebalanceMode()
+        {
+            // tag::RebalanceMode[]
+            IgniteConfiguration cfg = new IgniteConfiguration
+            {
+                CacheConfiguration = new[]
+                {
+                    new CacheConfiguration
+                    {
+                        Name = "mycache",
+                        RebalanceMode = CacheRebalanceMode.Sync
+                    }
+                }
+            };
+
+            // Start a node.
+            var ignite = Ignition.Start(cfg);
+            // end::RebalanceMode[]
+        }
+
+        public static void RebalanceThrottle()
+        {
+            // tag::RebalanceThrottle[]
+            IgniteConfiguration cfg = new IgniteConfiguration
+            {
+                CacheConfiguration = new[]
+                {
+                    new CacheConfiguration
+                    {
+                        Name = "mycache",
+                        RebalanceBatchSize = 2 * 1024 * 1024,
+                        RebalanceThrottle = TimeSpan.FromMilliseconds(100)
+                    }
+                }
+            };
+
+            // Start a node.
+            var ignite = Ignition.Start(cfg);
+            // end::RebalanceThrottle[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/DataStreaming.cs b/docs/_docs/code-snippets/dotnet/DataStreaming.cs
new file mode 100644
index 0000000..1093b30
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/DataStreaming.cs
@@ -0,0 +1,224 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cache;
+using Apache.Ignite.Core.Cache.Configuration;
+using Apache.Ignite.Core.Datastream;
+using Apache.Ignite.Core.Discovery.Tcp;
+using Apache.Ignite.Core.Discovery.Tcp.Static;
+
+namespace dotnet_helloworld
+{
+    public class DataStreaming
+    {
+
+        public static void DataStreamerExample()
+        {
+            using (var ignite = Ignition.Start())
+            {
+                ignite.GetOrCreateCache<int, string>("myCache");
+                //tag::dataStreamer1[]
+                using (var stmr = ignite.GetDataStreamer<int, string>("myCache"))
+                {
+                    for (var i = 0; i < 1000; i++)
+                        stmr.AddData(i, i.ToString());
+                    //end::dataStreamer1[]
+                    //tag::dataStreamer2[]
+                    stmr.AllowOverwrite = true;
+                    //end::dataStreamer2[]
+                    //tag::dataStreamer1[]
+                }
+
+                //end::dataStreamer1[]
+            }
+        }        
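+
+        // Not part of the original snippet: a hedged sketch of controlling
+        // when buffered entries are sent to the cluster. AutoFlushFrequency,
+        // Flush() and the flush-on-Dispose behavior are assumed to work as in
+        // the thick .NET streamer API.
+        public static void FlushingDemo(IIgnite ignite)
+        {
+            ignite.GetOrCreateCache<int, string>("myCache");
+
+            using (var stmr = ignite.GetDataStreamer<int, string>("myCache"))
+            {
+                // Send buffered entries automatically every second.
+                stmr.AutoFlushFrequency = TimeSpan.FromSeconds(1);
+
+                stmr.AddData(1, "one");
+
+                // Force the buffered entries out immediately.
+                stmr.Flush();
+            } // Dispose() flushes any remaining data and closes the streamer.
+        }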
+        // tag::streamReceiver[]
+        private class MyStreamReceiver : IStreamReceiver<int, string>
+        {
+            public void Receive(ICache<int, string> cache, ICollection<ICacheEntry<int, string>> entries)
+            {
+                foreach (var entry in entries)
+                {
+                    // do something with the entry
+
+                    cache.Put(entry.Key, entry.Value);
+                }
+            }
+        }
+
+        public static void StreamReceiverDemo()
+        {
+            var ignite = Ignition.Start();
+
+            // The target cache must exist before a streamer can be obtained for it.
+            ignite.GetOrCreateCache<int, string>("myCache");
+
+            using (var stmr = ignite.GetDataStreamer<int, string>("myCache"))
+            {
+                stmr.AllowOverwrite = true;
+                stmr.Receiver = new MyStreamReceiver();
+            }
+        }
+        // end::streamReceiver[]
+
+        // tag::streamTransformer[]
+        class MyEntryProcessor : ICacheEntryProcessor<string, long, object, object>
+        {
+            public object Process(IMutableCacheEntry<string, long> e, object arg)
+            {
+                //get current count
+                var val = e.Value;
+                
+                //increment count by 1
+                e.Value = val == 0 ? 1L : val + 1;
+
+                return null;
+            }
+        }
+
+        public static void StreamTransformerDemo()
+        {
+            var ignite = Ignition.Start(new IgniteConfiguration
+            {
+                DiscoverySpi = new TcpDiscoverySpi
+                {
+                    LocalPort = 48500,
+                    LocalPortRange = 20,
+                    IpFinder = new TcpDiscoveryStaticIpFinder
+                    {
+                        Endpoints = new[]
+                        {
+                            "127.0.0.1:48500..48520"
+                        }
+                    }
+                }
+            });
+            var cfg = new CacheConfiguration("wordCountCache");
+            var stmCache = ignite.GetOrCreateCache<string, long>(cfg);
+
+            using (var stmr = ignite.GetDataStreamer<string, long>(stmCache.Name))
+            {
+                //Allow data updates
+                stmr.AllowOverwrite = true;
+                
+                //Configure data transformation to count instances of the same word
+                stmr.Receiver = new StreamTransformer<string, long, object, object>(new MyEntryProcessor());
+                
+                //stream words into the streamer cache
+                foreach (var word in GetWords())
+                {
+                    stmr.AddData(word, 1L);
+                }
+            }
+            
+            Console.WriteLine(stmCache.Get("a"));
+            Console.WriteLine(stmCache.Get("b"));
+        }
+
+        static IEnumerable<string> GetWords()
+        {
+            //populate words list somehow
+            return Enumerable.Repeat("a", 3).Concat(Enumerable.Repeat("b", 2));
+        }
+        // end::streamTransformer[]
+
+
+        // tag::streamVisitor[]
+        class Instrument
+        {
+            public readonly string Symbol;
+            public double Latest { get; set; }
+            public double High { get; set; }
+            public double Low { get; set; }
+
+            public Instrument(string symbol)
+            {
+                this.Symbol = symbol;
+            }
+        }
+
+        private static IEnumerable<KeyValuePair<string, double>> GetMarketData()
+        {
+            // Populate market data somehow. A Dictionary cannot hold several
+            // ticks for the same symbol (later indexer entries overwrite the
+            // earlier ones), so a list of pairs is used instead.
+            return new List<KeyValuePair<string, double>>
+            {
+                new KeyValuePair<string, double>("foo", 1.0),
+                new KeyValuePair<string, double>("foo", 2.0),
+                new KeyValuePair<string, double>("foo", 3.0)
+            };
+        }
+
+        public static void StreamVisitorDemo()
+        {
+            var ignite = Ignition.Start(new IgniteConfiguration
+            {
+                DiscoverySpi = new TcpDiscoverySpi
+                {
+                    LocalPort = 48500,
+                    LocalPortRange = 20,
+                    IpFinder = new TcpDiscoveryStaticIpFinder
+                    {
+                        Endpoints = new[]
+                        {
+                            "127.0.0.1:48500..48520"
+                        }
+                    }
+                }
+            });
+
+            var mrktDataCfg = new CacheConfiguration("marketData");
+            var instCfg = new CacheConfiguration("instruments");
+
+            // Cache for market data ticks streamed into the system
+            var mrktData = ignite.GetOrCreateCache<string, double>(mrktDataCfg);
+            // Cache for financial instruments
+            var instCache = ignite.GetOrCreateCache<string, Instrument>(instCfg);
+
+            using (var mktStmr = ignite.GetDataStreamer<string, double>("marketData"))
+            {
+                // Note that we do not populate 'marketData' cache (it remains empty).
+                // Instead we update the 'instruments' cache based on the latest market price.
+                mktStmr.Receiver = new StreamVisitor<string, double>((cache, e) =>
+                {
+                    var symbol = e.Key;
+                    var tick = e.Value;
+
+                    Instrument inst = instCache.Get(symbol);
+
+                    if (inst == null)
+                    {
+                        inst = new Instrument(symbol);
+                    }
+
+                    // Update instrument price based on the latest market tick.
+                    inst.High = Math.Max(inst.High, tick);
+                    inst.Low = Math.Min(inst.Low, tick);
+                    inst.Latest = tick;
+
+                    // Save the updated instrument; without this Put the
+                    // 'instruments' cache would remain empty.
+                    instCache.Put(symbol, inst);
+                });
+                var marketData = GetMarketData();
+                foreach (var tick in marketData)
+                {
+                    mktStmr.AddData(tick);
+                }
+                mktStmr.Flush();
+                Console.Write(instCache.Get("foo"));
+            }
+        }
+
+        // end::streamVisitor[]
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/DefiningIndexes.cs b/docs/_docs/code-snippets/dotnet/DefiningIndexes.cs
new file mode 100644
index 0000000..fbc2c40
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/DefiningIndexes.cs
@@ -0,0 +1,187 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cache.Configuration;
+
+namespace dotnet_helloworld
+{
+    // TODO: discuss "Indexing Nested Objects".
+    public class DefiningIndexes
+    {
+        // tag::idxAnnotationCfg[]
+        class Person
+        {
+            // Indexed field. Will be visible to the SQL engine.
+            [QuerySqlField(IsIndexed = true)] public long Id;
+
+            // Queryable field. Will be visible to the SQL engine.
+            [QuerySqlField] public string Name;
+
+            // Will NOT be visible to the SQL engine.
+            public int Age;
+
+            // Indexed field sorted in descending order.
+            // Will be visible to the SQL engine.
+            [QuerySqlField(IsIndexed = true, IsDescending = true)]
+            public float Salary;
+        }
+        // end::idxAnnotationCfg[]
+
+        // TODO: indexing nested objects will be deprecated; discuss with Artem.
+
+        public static void RegisteringIndexedTypes()
+        {
+            // tag::register-indexed-types[]
+            var ccfg = new CacheConfiguration
+            {
+                QueryEntities = new[]
+                {
+                    new QueryEntity(typeof(long), typeof(Person))
+                }
+            };
+            // end::register-indexed-types[]
+        }
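+
+        // Not part of the original snippet: a hedged sketch showing that the
+        // annotated Person fields become queryable once the indexed types are
+        // registered. SqlFieldsQuery is fully qualified because this file does
+        // not import Apache.Ignite.Core.Cache.Query; the cache name "persons"
+        // is an assumption.
+        public static void QueryingIndexedTypes()
+        {
+            var ignite = Ignition.Start();
+
+            var cache = ignite.GetOrCreateCache<long, Person>(new CacheConfiguration(
+                "persons", new QueryEntity(typeof(long), typeof(Person))));
+
+            cache.Put(1, new Person {Id = 1, Name = "John Doe", Age = 30, Salary = 1000f});
+
+            var query = new Apache.Ignite.Core.Cache.Query.SqlFieldsQuery(
+                "select Name from Person where Salary > ?", 500f);
+
+            foreach (var row in cache.Query(query))
+                System.Console.WriteLine(row[0]);
+        }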
+
+        public class GroupIndexes
+        {
+            // TODO: functionality is limited compared to the Java client; discuss.
+            // tag::groupIdx[]
+            class Person
+            {
+                [QuerySqlField(IndexGroups = new[] {"age_salary_idx"})]
+                public int Age;
+
+                [QuerySqlField(IsIndexed = true, IndexGroups = new[] {"age_salary_idx"})]
+                public double Salary;
+            }
+            // end::groupIdx[]
+        }
+
+        public static void QueryEntityDemo()
+        {
+            // tag::queryEntity[]
+            var cacheCfg = new CacheConfiguration
+            {
+                Name = "myCache",
+                QueryEntities = new[]
+                {
+                    new QueryEntity
+                    {
+                        KeyType = typeof(long),
+                        KeyFieldName = "id",
+                        ValueType = typeof(dotnet_helloworld.Person),
+                        Fields = new[]
+                        {
+                            new QueryField
+                            {
+                                Name = "id",
+                                FieldType = typeof(long)
+                            },
+                            new QueryField
+                            {
+                                Name = "name",
+                                FieldType = typeof(string)
+                            },
+                            new QueryField
+                            {
+                                Name = "salary",
+                                FieldType = typeof(long)
+                            },
+                        },
+                        Indexes = new[]
+                        {
+                            new QueryIndex("name"),
+                            new QueryIndex(false, QueryIndexType.Sorted, new[] {"id", "salary"})
+                        }
+                    }
+                }
+            };
+            Ignition.Start(new IgniteConfiguration
+            {
+                CacheConfiguration = new[] {cacheCfg}
+            });
+            // end::queryEntity[]
+        }
+
+        private static void QueryEntityInlineSize()
+        {
+            // tag::query-entity-with-inline-size[]
+            var qe = new QueryEntity
+            {
+                Indexes = new[]
+                {
+                    new QueryIndex
+                    {
+                        InlineSize = 13
+                    }
+                }
+            };
+            // end::query-entity-with-inline-size[]
+        }
+
+        private static void QueryEntityKeyFields()
+        {
+            // tag::custom-key[]
+            var ccfg = new CacheConfiguration
+            {
+                Name = "personCache",
+                QueryEntities = new[]
+                {
+                    new QueryEntity
+                    {
+                        KeyTypeName = "CustomKey",
+                        ValueTypeName = "Person",
+                        Fields = new[]
+                        {
+                            new QueryField
+                            {
+                                Name = "intKeyField",
+                                FieldType = typeof(int),
+                                IsKeyField = true
+                            },
+                            new QueryField
+                            {
+                                Name = "strKeyField",
+                                FieldType = typeof(string),
+                                IsKeyField = true
+                            },
+                            new QueryField
+                            {
+                                Name = "firstName",
+                                FieldType = typeof(string)
+                            },
+                            new QueryField
+                            {
+                                Name = "lastName",
+                                FieldType = typeof(string)
+                            }
+                        }
+                    },
+                }
+            };
+            // end::custom-key[]
+        }
+
+        private class InlineSize
+        {
+            // tag::annotation-with-inline-size[]
+            [QuerySqlField(IsIndexed = true, IndexInlineSize = 13)]
+            public string Country { get; set; }
+            // end::annotation-with-inline-size[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/DistributedComputingApi.cs b/docs/_docs/code-snippets/dotnet/DistributedComputingApi.cs
new file mode 100644
index 0000000..29209d5
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/DistributedComputingApi.cs
@@ -0,0 +1,284 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using System;
+using System.Linq;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Compute;
+using Apache.Ignite.Core.Discovery.Tcp;
+using Apache.Ignite.Core.Discovery.Tcp.Static;
+using Apache.Ignite.Core.Resource;
+
+namespace dotnet_helloworld
+{
+    public class DistributedComputingApi
+    {
+        public static void ForRemotesDemo()
+        {
+            // tag::forRemotes[]
+            var ignite = Ignition.Start();
+            var compute = ignite.GetCluster().ForRemotes().GetCompute();
+            // end::forRemotes[]
+        }
+
+        public static void GetCompute()
+        {
+            // tag::gettingCompute[]
+            var ignite = Ignition.Start();
+            var compute = ignite.GetCompute();
+            // end::gettingCompute[]
+        }
+
+        // tag::computeAction[]
+        class PrintWordAction : IComputeAction
+        {
+            public void Invoke()
+            {
+                foreach (var s in "Print words on different cluster nodes".Split(" "))
+                {
+                    Console.WriteLine(s);
+                }
+            }
+        }
+
+        public static void ComputeRunDemo()
+        {
+            var ignite = Ignition.Start(
+                new IgniteConfiguration
+                {
+                    DiscoverySpi = new TcpDiscoverySpi
+                    {
+                        LocalPort = 48500,
+                        LocalPortRange = 20,
+                        IpFinder = new TcpDiscoveryStaticIpFinder
+                        {
+                            Endpoints = new[]
+                            {
+                                "127.0.0.1:48500..48520"
+                            }
+                        }
+                    }
+                }
+            );
+            ignite.GetCompute().Run(new PrintWordAction());
+        }
+        // end::computeAction[]
+
+        // tag::computeFunc[]
+        // tag::async[]
+        class CharCounter : IComputeFunc<int>
+        {
+            private readonly string arg;
+
+            public CharCounter(string arg)
+            {
+                this.arg = arg;
+            }
+
+            public int Invoke()
+            {
+                return arg.Length;
+            }
+        }
+        // end::async[]
+
+        public static void ComputeFuncDemo()
+        {
+            var ignite = Ignition.Start(
+                new IgniteConfiguration
+                {
+                    DiscoverySpi = new TcpDiscoverySpi
+                    {
+                        LocalPort = 48500,
+                        LocalPortRange = 20,
+                        IpFinder = new TcpDiscoveryStaticIpFinder
+                        {
+                            Endpoints = new[]
+                            {
+                                "127.0.0.1:48500..48520"
+                            }
+                        }
+                    }
+                }
+            );
+
+            // Iterate through all words in the sentence and create callable jobs.
+            var calls = "How many characters".Split(' ').Select(s => new CharCounter(s)).ToList();
+
+            // Execute the collection of calls on the cluster.
+            var res = ignite.GetCompute().Call(calls);
+
+            // Add all the word lengths received from cluster nodes.
+            var total = res.Sum();
+        }
+        // end::computeFunc[]
+
+
+        // tag::computeFuncApply[]
+        class CharCountingFunc : IComputeFunc<string, int>
+        {
+            public int Invoke(string arg)
+            {
+                return arg.Length;
+            }
+        }
+
+        public static void Foo()
+        {
+            var ignite = Ignition.Start(
+                new IgniteConfiguration
+                {
+                    DiscoverySpi = new TcpDiscoverySpi
+                    {
+                        LocalPort = 48500,
+                        LocalPortRange = 20,
+                        IpFinder = new TcpDiscoveryStaticIpFinder
+                        {
+                            Endpoints = new[]
+                            {
+                                "127.0.0.1:48500..48520"
+                            }
+                        }
+                    }
+                }
+            );
+
+            var res = ignite.GetCompute().Apply(new CharCountingFunc(), "How many characters".Split());
+
+            int total = res.Sum();
+        }
+        // end::computeFuncApply[]
+
+        // tag::broadcast[]
+        class PrintNodeIdAction : IComputeAction
+        {
+            public void Invoke()
+            {
+                Console.WriteLine("Hello node: " +
+                                  Ignition.GetIgnite().GetCluster().GetLocalNode().Id);
+            }
+        }
+
+        public static void BroadcastDemo()
+        {
+            var ignite = Ignition.Start(
+                new IgniteConfiguration
+                {
+                    DiscoverySpi = new TcpDiscoverySpi
+                    {
+                        LocalPort = 48500,
+                        LocalPortRange = 20,
+                        IpFinder = new TcpDiscoveryStaticIpFinder
+                        {
+                            Endpoints = new[]
+                            {
+                                "127.0.0.1:48500..48520"
+                            }
+                        }
+                    }
+                }
+            );
+
+            // Limit broadcast to remote nodes only.
+            var compute = ignite.GetCluster().ForRemotes().GetCompute();
+            // Print out hello message on remote nodes in the cluster group.
+            compute.Broadcast(new PrintNodeIdAction());
+        }
+        // end::broadcast[]
+
+        // tag::async[]
+        public static void AsyncDemo()
+        {
+            var ignite = Ignition.Start(
+                new IgniteConfiguration
+                {
+                    DiscoverySpi = new TcpDiscoverySpi
+                    {
+                        LocalPort = 48500,
+                        LocalPortRange = 20,
+                        IpFinder = new TcpDiscoveryStaticIpFinder
+                        {
+                            Endpoints = new[]
+                            {
+                                "127.0.0.1:48500..48520"
+                            }
+                        }
+                    }
+                }
+            );
+
+            var calls = "Count character using async compute"
+                .Split(" ").Select(s => new CharCounter(s)).ToList();
+
+            var future = ignite.GetCompute().CallAsync(calls);
+
+            future.ContinueWith(fut =>
+            {
+                var total = fut.Result.Sum();
+                Console.WriteLine("Total number of characters: " + total);
+            });
+        }
+        // end::async[]
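+
+        // Not part of the original snippet: a hedged sketch of the same
+        // computation with async/await instead of ContinueWith. CallAsync
+        // returns a Task, so it composes with the standard C# async machinery;
+        // Task is fully qualified because this file does not import
+        // System.Threading.Tasks.
+        public static async System.Threading.Tasks.Task AsyncAwaitDemo(IIgnite ignite)
+        {
+            var calls = "Count characters using async compute"
+                .Split(' ').Select(s => new CharCounter(s)).ToList();
+
+            var results = await ignite.GetCompute().CallAsync(calls);
+
+            Console.WriteLine("Total number of characters: " + results.Sum());
+        }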
+
+        // tag::instanceResource[]
+        class FuncWithDataAccess : IComputeFunc<int>
+        {
+            [InstanceResource] private IIgnite _ignite;
+
+            public int Invoke()
+            {
+                var cache = _ignite.GetCache<int, string>("someCache");
+
+                // get the data you need
+                string cached = cache.Get(1);
+                
+                // do with data what you need to do, for example:
+                Console.WriteLine(cached);
+
+                return 1;
+            }
+        }
+        // end::instanceResource[]
+
+        public static void InstanceResourceDemo()
+        {
+            var ignite = Ignition.Start(
+                new IgniteConfiguration
+                {
+                    DiscoverySpi = new TcpDiscoverySpi
+                    {
+                        LocalPort = 48500,
+                        LocalPortRange = 20,
+                        IpFinder = new TcpDiscoveryStaticIpFinder
+                        {
+                            Endpoints = new[]
+                            {
+                                "127.0.0.1:48500..48520"
+                            }
+                        }
+                    }
+                }
+            );
+
+            var cache = ignite.GetOrCreateCache<int, string>("someCache");
+            cache.Put(1, "foo");
+            ignite.GetCompute().Call(new FuncWithDataAccess());
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/EvictionPolicies.cs b/docs/_docs/code-snippets/dotnet/EvictionPolicies.cs
new file mode 100644
index 0000000..6357ef5
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/EvictionPolicies.cs
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cache.Configuration;
+using Apache.Ignite.Core.Cache.Eviction;
+using Apache.Ignite.Core.Configuration;
+using DataPageEvictionMode = Apache.Ignite.Core.Configuration.DataPageEvictionMode;
+
+namespace dotnet_helloworld
+{
+    class EvictionPolicies
+    {
+        public static void RandomLRU()
+        {
+            // tag::randomLRU[]
+            var cfg = new IgniteConfiguration
+            {
+                DataStorageConfiguration = new DataStorageConfiguration
+                {
+                    DataRegionConfigurations = new[]
+                    {
+                        new DataRegionConfiguration
+                        {
+                            Name = "20GB_Region",
+                            InitialSize = 500L * 1024 * 1024,
+                            MaxSize = 20L * 1024 * 1024 * 1024,
+                            PageEvictionMode = DataPageEvictionMode.RandomLru
+                        }
+                    }
+                }
+            };
+            // end::randomLRU[]
+        }
+
+        public static void Random2LRU()
+        {
+            // tag::random2LRU[]
+            var cfg = new IgniteConfiguration
+            {
+                DataStorageConfiguration = new DataStorageConfiguration
+                {
+                    DataRegionConfigurations = new[]
+                    {
+                        new DataRegionConfiguration
+                        {
+                            Name = "20GB_Region",
+                            InitialSize = 500L * 1024 * 1024,
+                            MaxSize = 20L * 1024 * 1024 * 1024,
+                            PageEvictionMode = DataPageEvictionMode.Random2Lru
+                        }
+                    }
+                }
+            };
+            // end::random2LRU[]
+        }
+
+        public static void LRU()
+        {
+            // tag::LRU[]
+            var cfg = new IgniteConfiguration
+            {
+                CacheConfiguration = new[]
+                {
+                    new CacheConfiguration
+                    {
+                        Name = "cacheName",
+                        OnheapCacheEnabled = true,
+                        EvictionPolicy = new LruEvictionPolicy
+                        {
+                            MaxSize = 100000
+                        }
+                    }
+                }
+            };
+            // end::LRU[]
+        }
+
+        public static void FIFO()
+        {
+            // tag::FIFO[]
+            var cfg = new IgniteConfiguration
+            {
+                CacheConfiguration = new[]
+                {
+                    new CacheConfiguration
+                    {
+                        Name = "cacheName",
+                        OnheapCacheEnabled = true,
+                        EvictionPolicy = new FifoEvictionPolicy
+                        {
+                            MaxSize = 100000
+                        }
+                    }
+                }
+            };
+            // end::FIFO[]
+        }
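+
+        // Not part of the original snippet: a hedged sketch of batch eviction,
+        // a knob both LRU and FIFO policies expose. With BatchSize set, eviction
+        // is expected to start once the on-heap size exceeds MaxSize + BatchSize
+        // and to evict BatchSize entries at once (batch mode applies only while
+        // MaxMemorySize is unset).
+        public static void FifoWithBatchEviction()
+        {
+            var cfg = new CacheConfiguration
+            {
+                Name = "cacheName",
+                OnheapCacheEnabled = true,
+                EvictionPolicy = new FifoEvictionPolicy
+                {
+                    MaxSize = 100000,
+                    BatchSize = 100
+                }
+            };
+        }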
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/ExpiryPolicies.cs b/docs/_docs/code-snippets/dotnet/ExpiryPolicies.cs
new file mode 100644
index 0000000..fcb027c
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/ExpiryPolicies.cs
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using System;
+using Apache.Ignite.Core.Cache.Configuration;
+using Apache.Ignite.Core.Cache.Expiry;
+using Apache.Ignite.Core.Common;
+
+namespace dotnet_helloworld
+{
+    class ExpiryPolicies
+    {
+
+        // tag::cfg[]
+        class ExpiryPolicyFactoryImpl : IFactory<IExpiryPolicy>
+        {
+            public IExpiryPolicy CreateInstance()
+            {
+                return new ExpiryPolicy(TimeSpan.FromMilliseconds(100), TimeSpan.FromMilliseconds(100),
+                    TimeSpan.FromMilliseconds(100));
+            }
+        }
+
+        public static void Example()
+        {
+            var cfg = new CacheConfiguration
+            {
+                Name = "cache_name",
+                ExpiryPolicyFactory = new ExpiryPolicyFactoryImpl()
+            };
+            // end::cfg[]
+        }
+
+        public static void EagerTtl()
+        {
+            // tag::eagerTTL[]
+            var cfg = new CacheConfiguration
+            {
+                Name = "cache_name",
+                EagerTtl = true
+            };
+            // end::eagerTTL[]
+        }
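+
+        // Not part of the original snippet: a hedged sketch of applying an
+        // expiry policy to individual operations rather than the whole cache.
+        // WithExpiryPolicy returns a facade over the same cache; ICache is
+        // fully qualified because this file does not import
+        // Apache.Ignite.Core.Cache.
+        public static void PerOperationExpiry(Apache.Ignite.Core.Cache.ICache<int, string> cache)
+        {
+            var cacheWithExpiry = cache.WithExpiryPolicy(
+                new ExpiryPolicy(TimeSpan.FromMinutes(5), null, null));
+
+            // This entry expires 5 minutes after creation; others are unaffected.
+            cacheWithExpiry.Put(1, "short-lived");
+        }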
+
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/IgniteLifecycle.cs b/docs/_docs/code-snippets/dotnet/IgniteLifecycle.cs
new file mode 100644
index 0000000..fe849e8
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/IgniteLifecycle.cs
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using Apache.Ignite.Core;
+
+namespace dotnet_helloworld
+{
+    public static class IgniteLifecycle
+    {
+        public static void Start()
+        {
+            // tag::start[]
+            var cfg = new IgniteConfiguration();
+            IIgnite ignite = Ignition.Start(cfg);
+            // end::start[]
+        }
+
+        public static void StartClient()
+        {
+            // tag::start-client[]
+            var cfg = new IgniteConfiguration
+            {
+                ClientMode = true
+            };
+            IIgnite ignite = Ignition.Start(cfg);
+            // end::start-client[]
+        }
+
+        public static void StartDispose()
+        {
+            // tag::disposable[]
+            var cfg = new IgniteConfiguration();
+            using (IIgnite ignite = Ignition.Start(cfg))
+            {
+                // Use the node here; it is stopped when the using block exits.
+            }
+            // end::disposable[]
+        }
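+
+        // Not part of the original snippet: a hedged sketch of stopping nodes
+        // explicitly instead of relying on Dispose. The cancel flag controls
+        // whether in-progress compute jobs are cancelled or allowed to finish;
+        // the instance name "myNode" is an assumption.
+        public static void StartStop()
+        {
+            var cfg = new IgniteConfiguration {IgniteInstanceName = "myNode"};
+            IIgnite ignite = Ignition.Start(cfg);
+
+            // Stop the named node; 'false' lets running jobs complete.
+            Ignition.Stop(ignite.Name, false);
+
+            // Or stop every node started in this process.
+            Ignition.StopAll(true);
+        }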
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/MapReduceApi.cs b/docs/_docs/code-snippets/dotnet/MapReduceApi.cs
new file mode 100644
index 0000000..01fdd41
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/MapReduceApi.cs
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cluster;
+using Apache.Ignite.Core.Compute;
+using Apache.Ignite.Core.Discovery.Tcp;
+using Apache.Ignite.Core.Discovery.Tcp.Static;
+
+namespace dotnet_helloworld
+{
+    public class MapReduceApi
+    {
+        // tag::mapReduceComputeTask[]
+        // tag::computeTaskExample[]
+        class CharCountComputeJob : IComputeJob<int>
+        {
+            private readonly string _arg;
+
+            public CharCountComputeJob(string arg)
+            {
+                Console.WriteLine(">>> Printing '" + arg + "' from compute job.");
+                this._arg = arg;
+            }
+
+            public int Execute()
+            {
+                return _arg.Length;
+            }
+
+            public void Cancel()
+            {
+                throw new System.NotImplementedException();
+            }
+        }
+        
+        // end::computeTaskExample[]
+
+        class CharCountTask : IComputeTask<string, int, int>
+        {
+            public IDictionary<IComputeJob<int>, IClusterNode> Map(IList<IClusterNode> subgrid, string arg)
+            {
+                var map = new Dictionary<IComputeJob<int>, IClusterNode>();
+                using (var enumerator = subgrid.GetEnumerator())
+                {
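+                    // Distribute jobs round-robin: one job per word, wrapping
+                    // back to the first node when the node list is exhausted.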
+                    foreach (var s in arg.Split(' '))
+                    {
+                        if (!enumerator.MoveNext())
+                        {
+                            enumerator.Reset();
+                            enumerator.MoveNext();
+                        }
+
+                        map.Add(new CharCountComputeJob(s), enumerator.Current);
+                    }
+                }
+
+                return map;
+            }
+
+            public ComputeJobResultPolicy OnResult(IComputeJobResult<int> res, IList<IComputeJobResult<int>> rcvd)
+            {
+                // If there is no exception, wait for all job results.
+                return res.Exception != null ? ComputeJobResultPolicy.Failover : ComputeJobResultPolicy.Wait;
+            }
+
+            public int Reduce(IList<IComputeJobResult<int>> results)
+            {
+                return results.Select(res => res.Data).Sum();
+            }
+        }
+
+        public static void MapReduceComputeJobDemo()
+        {
+            var ignite = Ignition.Start(new IgniteConfiguration
+            {
+                DiscoverySpi = new TcpDiscoverySpi
+                {
+                    LocalPort = 48500,
+                    LocalPortRange = 20,
+                    IpFinder = new TcpDiscoveryStaticIpFinder
+                    {
+                        Endpoints = new[]
+                        {
+                            "127.0.0.1:48500..48520"
+                        }
+                    }
+                }
+            });
+
+            var compute = ignite.GetCompute();
+
+            var res = compute.Execute(new CharCountTask(), "Hello Grid Please Count Chars In These Words");
+
+            Console.WriteLine("res=" + res);
+        }
+        // end::mapReduceComputeTask[]
+
+        // tag::computeTaskExample[]
+        public class ComputeTaskExample
+        {
+            private class CharacterCountTask : ComputeTaskSplitAdapter<string, int, int>
+            {
+                public override int Reduce(IList<IComputeJobResult<int>> results)
+                {
+                    return results.Select(res => res.Data).Sum();
+                }
+
+                protected override ICollection<IComputeJob<int>> Split(int gridSize, string arg)
+                {
+                    return arg.Split(' ')
+                        .Select(word => new CharCountComputeJob(word))
+                        .Cast<IComputeJob<int>>()
+                        .ToList();
+                }
+            }
+
+            public static void RunComputeTaskExample()
+            {
+                var ignite = Ignition.Start(new IgniteConfiguration
+                {
+                    DiscoverySpi = new TcpDiscoverySpi
+                    {
+                        LocalPort = 48500,
+                        LocalPortRange = 20,
+                        IpFinder = new TcpDiscoveryStaticIpFinder
+                        {
+                            Endpoints = new[]
+                            {
+                                "127.0.0.1:48500..48520"
+                            }
+                        }
+                    }
+                });
+
+                var cnt = ignite.GetCompute().Execute(new CharacterCountTask(), "Hello Grid Enabled World!");
+                Console.WriteLine(">>> Total number of characters in the phrase is '" + cnt + "'.");
+            }
+        }
+        // end::computeTaskExample[]
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/MemoryArchitecture.cs b/docs/_docs/code-snippets/dotnet/MemoryArchitecture.cs
new file mode 100644
index 0000000..4cf1bb2
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/MemoryArchitecture.cs
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Configuration;
+
+namespace dotnet_helloworld
+{
+    class MemoryArchitecture
+    {
+        public static void MemoryConfiguration()
+        {
+            // tag::mem[]
+            var cfg = new IgniteConfiguration
+            {
+                DataStorageConfiguration = new DataStorageConfiguration
+                {
+                    DefaultDataRegionConfiguration = new DataRegionConfiguration
+                    {
+                        Name = "Default_Region",
+                        InitialSize = 100 * 1024 * 1024
+                    },
+                    DataRegionConfigurations = new[]
+                    {
+                        new DataRegionConfiguration
+                        {
+                            Name = "40MB_Region_Eviction",
+                            InitialSize = 20 * 1024 * 1024,
+                            MaxSize = 40 * 1024 * 1024,
+                            PageEvictionMode = DataPageEvictionMode.Random2Lru
+                        },
+                        new DataRegionConfiguration
+                        {
+                            Name = "30MB_Region_Swapping",
+                            InitialSize = 15 * 1024 * 1024,
+                            MaxSize = 30 * 1024 * 1024,
+                            SwapPath = "/path/to/swap/file"
+                        }
+                    }
+                }
+            };
+            Ignition.Start(cfg);
+            // end::mem[]
+        }
+
+        public static void DefaultDataRegion()
+        {
+            // tag::DefaultDataReqion[]
+            var cfg = new IgniteConfiguration
+            {
+                DataStorageConfiguration = new DataStorageConfiguration
+                {
+                    DefaultDataRegionConfiguration = new DataRegionConfiguration
+                    {
+                        Name = "Default_Region",
+                        InitialSize = 100 * 1024 * 1024
+                    }
+                }
+            };
+
+            // Start the node.
+            var ignite = Ignition.Start(cfg);
+            // end::DefaultDataReqion[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/NearCaches.cs b/docs/_docs/code-snippets/dotnet/NearCaches.cs
new file mode 100644
index 0000000..e297f6e
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/NearCaches.cs
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cache.Configuration;
+using Apache.Ignite.Core.Cache.Eviction;
+using Apache.Ignite.Core.Discovery.Tcp;
+using Apache.Ignite.Core.Discovery.Tcp.Static;
+
+namespace dotnet_helloworld
+{
+    public class NearCaches
+    {
+        public static void ConfiguringNearCache()
+        {
+            var ignite = Ignition.Start(new IgniteConfiguration
+            {
+                DiscoverySpi = new TcpDiscoverySpi
+                {
+                    LocalPort = 48500,
+                    LocalPortRange = 20,
+                    IpFinder = new TcpDiscoveryStaticIpFinder
+                    {
+                        Endpoints = new[]
+                        {
+                            "127.0.0.1:48500..48520"
+                        }
+                    }
+                }
+            });
+
+            //tag::nearCacheConf[]
+            var cacheCfg = new CacheConfiguration
+            {
+                Name = "myCache",
+                NearConfiguration = new NearCacheConfiguration
+                {
+                    EvictionPolicy = new LruEvictionPolicy
+                    {
+                        MaxSize = 100_000
+                    }
+                }
+            };
+
+            var cache = ignite.GetOrCreateCache<int, int>(cacheCfg);
+            //end::nearCacheConf[]
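+
+            // Subsequent cache.Get() calls on this node can then be served from
+            // the near cache whenever the entry's primary copy is on another node.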
+        }
+
+        public static void NearCacheOnClientNodeDemo()
+        {
+            //tag::nearCacheClientNode[]
+            var ignite = Ignition.Start(new IgniteConfiguration
+            {
+                DiscoverySpi = new TcpDiscoverySpi
+                {
+                    LocalPort = 48500,
+                    LocalPortRange = 20,
+                    IpFinder = new TcpDiscoveryStaticIpFinder
+                    {
+                        Endpoints = new[]
+                        {
+                            "127.0.0.1:48500..48520"
+                        }
+                    }
+                },
+                CacheConfiguration = new[]
+                {
+                    new CacheConfiguration {Name = "myCache"}
+                }
+            });
+            var client = Ignition.Start(new IgniteConfiguration
+            {
+                IgniteInstanceName = "clientNode",
+                ClientMode = true,
+                DiscoverySpi = new TcpDiscoverySpi
+                {
+                    LocalPort = 48500,
+                    LocalPortRange = 20,
+                    IpFinder = new TcpDiscoveryStaticIpFinder
+                    {
+                        Endpoints = new[]
+                        {
+                            "127.0.0.1:48500..48520"
+                        }
+                    }
+                }
+            });
+            // Create a near-cache configuration.
+            var nearCfg = new NearCacheConfiguration
+            {
+                // Use the LRU eviction policy to automatically evict entries
+                // from the near cache whenever it holds 100,000 entries.
+                EvictionPolicy = new LruEvictionPolicy
+                {
+                    MaxSize = 100_000
+                }
+            };
+
+            // Get the cache named "myCache" and create a near cache for it.
+            var cache = client.GetOrCreateNearCache<int, string>("myCache", nearCfg);
+            //end::nearCacheClientNode[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/OnHeapCaching.cs b/docs/_docs/code-snippets/dotnet/OnHeapCaching.cs
new file mode 100644
index 0000000..5e076a0
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/OnHeapCaching.cs
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using Apache.Ignite.Core.Cache.Configuration;
+
+namespace dotnet_helloworld
+{
+    class OnHeapCaching
+    {
+        public static void Run()
+        {
+            // tag::onheap[]
+            var cfg = new CacheConfiguration
+            {
+                Name = "myCache",
+                OnheapCacheEnabled = true
+            };
+            // end::onheap[]
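+
+            // A minimal usage sketch (assumption: an IIgnite node started
+            // elsewhere in the application is available as 'ignite'). On-heap
+            // caching keeps copies of cache entries in the Java heap of the
+            // node in addition to the off-heap page memory:
+            //
+            //     var cache = ignite.CreateCache<int, string>(cfg);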
+        }
+
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/PeerClassLoading.cs b/docs/_docs/code-snippets/dotnet/PeerClassLoading.cs
new file mode 100644
index 0000000..0ea5506
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/PeerClassLoading.cs
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Deployment;
+using Apache.Ignite.Core.Discovery.Tcp;
+using Apache.Ignite.Core.Discovery.Tcp.Static;
+
+namespace dotnet_helloworld
+{
+    public class PeerClassLoading
+    {
+        public static void PeerClassLoadingEnabled()
+        {
+            //tag::enable[]
+            var cfg = new IgniteConfiguration
+            {
+                PeerAssemblyLoadingMode = PeerAssemblyLoadingMode.CurrentAppDomain
+            };
+            //end::enable[]
+            var discoverySpi = new TcpDiscoverySpi
+            {
+                LocalPort = 48500,
+                LocalPortRange = 20,
+                IpFinder = new TcpDiscoveryStaticIpFinder
+                {
+                    Endpoints = new[]
+                    {
+                        "127.0.0.1:48500..48520"
+                    }
+                }
+            };
+            cfg.DiscoverySpi = discoverySpi;
+            //tag::enable[]
+            var ignite = Ignition.Start(cfg);
+            //end::enable[]
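+
+            // With CurrentAppDomain mode, assemblies received from peer nodes
+            // are loaded into the current application domain, so compute jobs
+            // defined in this process can run cluster-wide without manual
+            // deployment of the binaries.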
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/PerformingTransactions.cs b/docs/_docs/code-snippets/dotnet/PerformingTransactions.cs
new file mode 100644
index 0000000..3284905
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/PerformingTransactions.cs
@@ -0,0 +1,152 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using System;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cache.Configuration;
+using Apache.Ignite.Core.Discovery.Tcp;
+using Apache.Ignite.Core.Discovery.Tcp.Static;
+using Apache.Ignite.Core.Transactions;
+
+namespace dotnet_helloworld
+{
+    public class PerformingTransactions
+    {
+        public static void TransactionExecutionDemo()
+        {
+            // tag::executingTransactions[]
+            // tag::optimisticTx[]
+            // tag::deadlock[]
+            var cfg = new IgniteConfiguration
+            {
+                DiscoverySpi = new TcpDiscoverySpi
+                {
+                    LocalPort = 48500,
+                    LocalPortRange = 20,
+                    IpFinder = new TcpDiscoveryStaticIpFinder
+                    {
+                        Endpoints = new[]
+                        {
+                            "127.0.0.1:48500..48520"
+                        }
+                    }
+                },
+                CacheConfiguration = new[]
+                {
+                    new CacheConfiguration
+                    {
+                        Name = "cacheName",
+                        AtomicityMode = CacheAtomicityMode.Transactional
+                    }
+                },
+                TransactionConfiguration = new TransactionConfiguration
+                {
+                    DefaultTimeoutOnPartitionMapExchange = TimeSpan.FromSeconds(20)
+                }
+            };
+
+            var ignite = Ignition.Start(cfg);
+            // end::optimisticTx[]
+            // end::deadlock[]
+            var cache = ignite.GetCache<string, int>("cacheName");
+            cache.Put("Hello", 1);
+            var transactions = ignite.GetTransactions();
+
+            using (var tx = transactions.TxStart())
+            {
+                int hello = cache.Get("Hello");
+
+                if (hello == 1)
+                {
+                    cache.Put("Hello", 11);
+                }
+
+                cache.Put("World", 22);
+
+                tx.Commit();
+            }
+            // end::executingTransactions[]
+
+            // tag::optimisticTx[]
+            // Re-try the transaction a limited number of times
+            var retryCount = 10;
+            var retries = 0;
+
+            // Start a transaction in the optimistic mode with the serializable isolation level
+            while (retries < retryCount)
+            {
+                retries++;
+                try
+                {
+                    using (var tx = ignite.GetTransactions().TxStart(TransactionConcurrency.Optimistic,
+                        TransactionIsolation.Serializable))
+                    {
+                        // modify cache entries as part of this transaction.
+
+                        // commit the transaction
+                        tx.Commit();
+
+                        // the transaction succeeded. Leave the while loop.
+                        break;
+                    }
+                }
+                catch (TransactionOptimisticException)
+                {
+                    // Transaction has failed. Retry.
+                }
+
+            }
+            // end::optimisticTx[]
+
+            // tag::deadlock[]
+            var intCache = ignite.GetOrCreateCache<int, int>("intCache");
+            try
+            {
+                using (var tx = ignite.GetTransactions().TxStart(TransactionConcurrency.Pessimistic,
+                    TransactionIsolation.ReadCommitted, TimeSpan.FromMilliseconds(300), 0))
+                {
+                    intCache.Put(1, 1);
+                    intCache.Put(2, 1);
+                    tx.Commit();
+                }
+            }
+            catch (TransactionTimeoutException e)
+            {
+                Console.WriteLine(e.Message);
+            }
+            catch (TransactionDeadlockException e)
+            {
+                Console.WriteLine(e.Message);
+            }
+
+            // end::deadlock[]
+        }
+
+        public static void TxTimeoutOnPme()
+        {
+            // tag::pmeTimeout[]
+            var cfg = new IgniteConfiguration
+            {
+                TransactionConfiguration = new TransactionConfiguration
+                {
+                    DefaultTimeoutOnPartitionMapExchange = TimeSpan.FromSeconds(20)
+                }
+            };
+            Ignition.Start(cfg);
+            // end::pmeTimeout[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/PersistenceIgnitePersistence.cs b/docs/_docs/code-snippets/dotnet/PersistenceIgnitePersistence.cs
new file mode 100644
index 0000000..c34bd29
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/PersistenceIgnitePersistence.cs
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Configuration;
+
+namespace dotnet_helloworld
+{
+    class PersistenceIgnitePersistence
+    {
+        public static void DisablingWal()
+        {
+            // tag::disableWal[]
+            var cacheName = "myCache";
+            var ignite = Ignition.Start();
+            ignite.GetCluster().DisableWal(cacheName);
+
+            // Load data here.
+
+            ignite.GetCluster().EnableWal(cacheName);
+            // end::disableWal[]
+        }
+
+        public static void Configuration()
+        {
+            // tag::cfg[]
+            var cfg = new IgniteConfiguration
+            {
+                DataStorageConfiguration = new DataStorageConfiguration
+                {
+                    // tag::storage-path[]
+                    StoragePath = "/ssd/storage",
+
+                    // end::storage-path[]
+                    DefaultDataRegionConfiguration = new DataRegionConfiguration
+                    {
+                        Name = "Default_Region",
+                        PersistenceEnabled = true
+                    }
+                }
+            };
+
+            Ignition.Start(cfg);
+            // end::cfg[]
+        }
+
+        public static void Swapping()
+        {
+            // tag::cfg-swap[]
+            var cfg = new IgniteConfiguration
+            {
+                DataStorageConfiguration = new DataStorageConfiguration
+                {
+                    DataRegionConfigurations = new[]
+                    {
+                        new DataRegionConfiguration
+                        {
+                            Name = "5GB_Region",
+                            InitialSize = 100L * 1024 * 1024,
+                            MaxSize = 5L * 1024 * 1024 * 1024,
+                            SwapPath = "/path/to/some/directory"
+                        }
+                    }
+                }
+            };
+            // end::cfg-swap[]
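+
+            // Sketch: start a node with the swap-enabled region (replace the
+            // placeholder swap path above with a real directory first).
+            Ignition.Start(cfg);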
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/PersistenceTuning.cs b/docs/_docs/code-snippets/dotnet/PersistenceTuning.cs
new file mode 100644
index 0000000..4684923
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/PersistenceTuning.cs
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Configuration;
+
+namespace dotnet_helloworld
+{
+    public class PersistenceTuning
+    {
+        public static void AdjustingPageSize()
+        {
+            // tag::page-size[]
+            var cfg = new IgniteConfiguration
+            {
+                DataStorageConfiguration = new DataStorageConfiguration
+                {
+                    // Changing the page size to 4 KB.
+                    PageSize = 4096
+                }
+            };
+            // end::page-size[]
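+
+            // Sketch: start the node with the custom page size. The page size
+            // cannot be changed for an existing persistent store, so it must
+            // be set before any data is persisted.
+            Ignition.Start(cfg);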
+        }
+
+        public static void KeepWalsSeparately()
+        {
+            // tag::separate-wal[]
+            var cfg = new IgniteConfiguration
+            {
+                DataStorageConfiguration = new DataStorageConfiguration
+                {
+                    // Sets the path to the root directory where data and indexes are persisted.
+                    // It's assumed the directory is on a separate SSD.
+                    StoragePath = "/ssd/storage",
+
+                    // Sets the path to the directory where the WAL is stored.
+                    // It's assumed the directory is on a separate HDD.
+                    WalPath = "/wal",
+
+                    // Sets the path to the directory where the WAL archive is stored.
+                    // The directory is on the same HDD as the WAL.
+                    WalArchivePath = "/wal/archive"
+                }
+            };
+            // end::separate-wal[]
+        }
+
+        public static void Throttling()
+        {
+            // tag::throttling[]
+            var cfg = new IgniteConfiguration
+            {
+                DataStorageConfiguration = new DataStorageConfiguration
+                {
+                    WriteThrottlingEnabled = true
+                }
+            };
+            // end::throttling[]
+        }
+
+        public static void CheckpointBufferSize()
+        {
+            // tag::checkpointing-buffer-size[]
+            var cfg = new IgniteConfiguration
+            {
+                DataStorageConfiguration = new DataStorageConfiguration
+                {
+                    WriteThrottlingEnabled = true,
+                    DefaultDataRegionConfiguration = new DataRegionConfiguration
+                    {
+                        Name = DataStorageConfiguration.DefaultDataRegionName,
+                        PersistenceEnabled = true,
+                        
+                        // Increasing the buffer size to 1 GB.
+                        CheckpointPageBufferSize = 1024L * 1024 * 1024
+                    }
+                }
+            };
+            // end::checkpointing-buffer-size[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/PlatformCache.cs b/docs/_docs/code-snippets/dotnet/PlatformCache.cs
new file mode 100644
index 0000000..dc74473
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/PlatformCache.cs
@@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using System;
+using System.Collections.Generic;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Binary;
+using Apache.Ignite.Core.Cache;
+using Apache.Ignite.Core.Cache.Configuration;
+using Apache.Ignite.Core.Cache.Eviction;
+
+namespace dotnet_helloworld
+{
+    public class PlatformCache
+    {
+        public static void ConfigurePlatformCacheOnServer()
+        {
+            var ignite = Ignition.Start();
+            
+            //tag::platformCacheConf[]
+            var cacheCfg = new CacheConfiguration("my-cache")
+            {
+                PlatformCacheConfiguration = new PlatformCacheConfiguration()
+            };
+
+            var cache = ignite.CreateCache<int, string>(cacheCfg);
+            //end::platformCacheConf[]
+        }
+        
+        public static void ConfigurePlatformCacheOnClient()
+        {
+            var ignite = Ignition.Start();
+            
+            //tag::platformCacheConfClient[]
+            var nearCacheCfg = new NearCacheConfiguration
+            {
+                // Keep up to 1000 most recently used entries in Near and Platform caches.
+                EvictionPolicy = new LruEvictionPolicy
+                {
+                    MaxSize = 1000
+                }
+            };
+            
+            var cache = ignite.CreateNearCache<int, string>("my-cache",
+                nearCacheCfg,
+                new PlatformCacheConfiguration());
+            //end::platformCacheConfClient[]
+        }
+
+        public static void AccessPlatformCache()
+        {
+            var ignite = Ignition.Start();
+
+            //tag::platformCacheAccess[]
+            var cache = ignite.GetCache<int, string>("my-cache");
+            
+            // Get value from platform cache.
+            bool hasKey = cache.TryLocalPeek(1, out var val, CachePeekMode.Platform);
+            
+            // Get platform cache size (current number of entries on local node).
+            int size = cache.GetLocalSize(CachePeekMode.Platform);
+            
+            // Get all values from platform cache.
+            IEnumerable<ICacheEntry<int, string>> entries = cache.GetLocalEntries(CachePeekMode.Platform);
+            
+            //end::platformCacheAccess[]
+        }
+
+        public static void AdvancedConfigBinaryMode()
+        {
+            var ignite = Ignition.Start();
+            
+            //tag::advancedConfigBinaryMode[]
+            var cacheCfg = new CacheConfiguration("people")
+            {
+                PlatformCacheConfiguration = new PlatformCacheConfiguration
+                {
+                    KeepBinary = true
+                }
+            };
+
+            var cache = ignite.CreateCache<int, Person>(cacheCfg)
+                .WithKeepBinary<int, IBinaryObject>();
+
+            IBinaryObject binaryPerson = cache.Get(1);
+            //end::advancedConfigBinaryMode[]
+        }
+        
+        public static void AdvancedConfigKeyValTypes()
+        {
+            var ignite = Ignition.Start();
+            
+            //tag::advancedConfigKeyValTypes[]
+            var cacheCfg = new CacheConfiguration("people")
+            {
+                PlatformCacheConfiguration = new PlatformCacheConfiguration
+                {
+                    KeyTypeName = typeof(long).FullName,
+                    ValueTypeName = typeof(Guid).FullName
+                }
+            };
+
+            var cache = ignite.CreateCache<long, Guid>(cacheCfg);
+            //end::advancedConfigKeyValTypes[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/SqlJoinOrder.cs b/docs/_docs/code-snippets/dotnet/SqlJoinOrder.cs
new file mode 100644
index 0000000..f890199
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/SqlJoinOrder.cs
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using Apache.Ignite.Core.Cache.Query;
+
+namespace dotnet_helloworld
+{
+    using System;
+    using System.Collections.Generic;
+    using Apache.Ignite.Core;
+
+    public static class SqlJoinOrder
+    {
+        public static void EnforceJoinOrder()
+        {
+            // tag::sqlJoinOrder[]
+            var query = new SqlFieldsQuery("SELECT * FROM TABLE_A, TABLE_B USE INDEX(HASH_JOIN_IDX) WHERE TABLE_A.column1 = TABLE_B.column2")
+            {
+                EnforceJoinOrder = true
+            };
+            // end::sqlJoinOrder[]
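+
+            // Hypothetical usage (assumes a started node with a cache named
+            // "myCache" that holds the referenced tables):
+            //
+            //     var cursor = ignite.GetCache<int, object>("myCache").Query(query);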
+        }
+    }
+
+}
diff --git a/docs/_docs/code-snippets/dotnet/SqlTransactions.cs b/docs/_docs/code-snippets/dotnet/SqlTransactions.cs
new file mode 100644
index 0000000..917e02d
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/SqlTransactions.cs
@@ -0,0 +1,102 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using System;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cache;
+using Apache.Ignite.Core.Cache.Configuration;
+using Apache.Ignite.Core.Discovery.Tcp;
+using Apache.Ignite.Core.Discovery.Tcp.Static;
+
+namespace dotnet_helloworld
+{
+    public class SqlTransactions
+    {
+        public static void EnablingMvcc()
+        {
+            var ignite = Ignition.Start(
+                new IgniteConfiguration
+                {
+                    DiscoverySpi = new TcpDiscoverySpi
+                    {
+                        LocalPort = 48500,
+                        LocalPortRange = 20,
+                        IpFinder = new TcpDiscoveryStaticIpFinder
+                        {
+                            Endpoints = new[]
+                            {
+                                "127.0.0.1:48500..48520"
+                            }
+                        }
+                    }
+                });
+
+            // tag::mvcc[]
+            var cacheCfg = new CacheConfiguration
+            {
+                Name = "myCache",
+                AtomicityMode = CacheAtomicityMode.TransactionalSnapshot
+            };
+            // end::mvcc[]
+            ignite.CreateCache<long, long>(cacheCfg);
+            Console.Write(typeof(Person));
+        }
+
+        public static void ConcurrentUpdates()
+        {
+            var cfg = new IgniteConfiguration
+            {
+                CacheConfiguration = new[]
+                {
+                    new CacheConfiguration
+                    {
+                        Name = "mvccCache",
+                        AtomicityMode = CacheAtomicityMode.TransactionalSnapshot
+                    }, 
+                }
+            };
+            var ignite = Ignition.Start(cfg);
+            var cache = ignite.GetCache<int, string>("mvccCache");
+
+            // tag::mvccConcurrentUpdates[]
+            for (var i = 1; i <= 5; i++)
+            {
+                using (var tx = ignite.GetTransactions().TxStart())
+                {
+                    Console.WriteLine($"attempt #{i}, value: {cache.Get(1)}");
+                    try
+                    {
+                        cache.Put(1, "new value");
+                        tx.Commit();
+                        Console.WriteLine($"attempt #{i} succeeded");
+                        break;
+                    }
+                    catch (CacheException)
+                    {
+                        if (!tx.IsRollbackOnly)
+                        {
+                            // Transaction was not marked as "rollback only",
+                            // so it's not a concurrent update issue.
+                            // Process the exception here.
+                            break;
+                        }
+                    }
+                }
+            }
+            // end::mvccConcurrentUpdates[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/ThinClient.cs b/docs/_docs/code-snippets/dotnet/ThinClient.cs
new file mode 100644
index 0000000..629326b
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/ThinClient.cs
@@ -0,0 +1,351 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using System;
+using System.Linq;
+using System.Threading;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Binary;
+using Apache.Ignite.Core.Cache;
+using Apache.Ignite.Core.Cache.Configuration;
+using Apache.Ignite.Core.Cache.Query;
+using Apache.Ignite.Core.Client;
+using Apache.Ignite.Core.Client.Cache;
+using Apache.Ignite.Core.Client.Compute;
+using Apache.Ignite.Core.Configuration;
+using Apache.Ignite.Core.Log;
+
+namespace dotnet_helloworld
+{
+    public class ThinClient
+    {
+        public static void ThinClientConnecting()
+        {
+            //tag::connecting[]
+            var cfg = new IgniteClientConfiguration
+            {
+                Endpoints = new[] {"127.0.0.1:10800"}
+            };
+
+            using (var client = Ignition.StartClient(cfg))
+            {
+                var cache = client.GetOrCreateCache<int, string>("cache");
+                cache.Put(1, "Hello, World!");
+            }
+
+            //end::connecting[]
+        }
+
+        public static void ThinClientCacheOperations()
+        {
+            var cfg = new IgniteClientConfiguration
+            {
+                Endpoints = new[] {"127.0.0.1:10800"}
+            };
+            using (var client = Ignition.StartClient(cfg))
+            {
+                //tag::createCache[]
+                var cacheCfg = new CacheClientConfiguration
+                {
+                    Name = "References",
+                    CacheMode = CacheMode.Replicated,
+                    WriteSynchronizationMode = CacheWriteSynchronizationMode.FullSync
+                };
+                var cache = client.GetOrCreateCache<int, string>(cacheCfg);
+                //end::createCache[]
+
+                //tag::basicOperations[]
+                var data = Enumerable.Range(1, 100).ToDictionary(e => e, e => e.ToString());
+
+                cache.PutAll(data);
+
+                var replace = cache.Replace(1, "2", "3");
+                Console.WriteLine(replace); //false
+
+                var value = cache.Get(1);
+                Console.WriteLine(value); //1
+
+                replace = cache.Replace(1, "1", "3");
+                Console.WriteLine(replace); //true
+
+                value = cache.Get(1);
+                Console.WriteLine(value); //3
+
+                cache.Put(101, "101");
+
+                cache.RemoveAll(data.Keys);
+                var sizeIsOne = cache.GetSize() == 1;
+                Console.WriteLine(sizeIsOne); //true
+
+                value = cache.Get(101);
+                Console.WriteLine(value); //101
+
+                cache.RemoveAll();
+                var sizeIsZero = cache.GetSize() == 0;
+                Console.WriteLine(sizeIsZero); //true
+                //end::basicOperations[]
+            }
+        }
+
+        //tag::scanQry[]
+        class NameFilter : ICacheEntryFilter<int, Person>
+        {
+            public bool Invoke(ICacheEntry<int, Person> entry)
+            {
+                return entry.Value.Name.Contains("Smith");
+            }
+        }
+        //end::scanQry[]
+
+        public static void ScanQueryFilterDemo()
+        {
+            using (var ignite = Ignition.Start())
+            {
+                var cfg = new IgniteClientConfiguration
+                {
+                    Endpoints = new[] {"127.0.0.1:10800"}
+                };
+                using (var client = Ignition.StartClient(cfg))
+                {
+                    //tag::scanQry2[]
+                    var cache = client.GetOrCreateCache<int, Person>("personCache");
+
+                    cache.Put(1, new Person {Name = "John Smith"});
+                    cache.Put(2, new Person {Name = "John Johnson"});
+
+                    using (var cursor = cache.Query(new ScanQuery<int, Person>(new NameFilter())))
+                    {
+                        foreach (var entry in cursor)
+                        {
+                            Console.WriteLine("Key = " + entry.Key + ", Name = " + entry.Value.Name);
+                        }
+                    }
+
+                    //end::scanQry2[]
+
+                    //tag::handleNodeFailure[]
+                    var scanQry = new ScanQuery<int, Person>(new NameFilter());
+                    using (var cur = cache.Query(scanQry))
+                    {
+                        var res = cur.GetAll().ToDictionary(entry => entry.Key, entry => entry.Value);
+                    }
+
+                    //end::handleNodeFailure[]
+                }
+            }
+        }
+
+        public static void WorkingWithBinaryObjects()
+        {
+            var cfg = new IgniteClientConfiguration
+            {
+                Endpoints = new[] {"127.0.0.1:10800"}
+            };
+            using (var client = Ignition.StartClient(cfg))
+            {
+                //tag::binaryObj[]
+                var binary = client.GetBinary();
+
+                var val = binary.GetBuilder("Person")
+                    .SetField("id", 1)
+                    .SetField("name", "Joe")
+                    .Build();
+
+                var cache = client.GetOrCreateCache<int, object>("persons").WithKeepBinary<int, IBinaryObject>();
+
+                cache.Put(1, val);
+
+                var value = cache.Get(1);
+                //end::binaryObj[]
+            }
+        }
+
+        public static void ExecutingSql()
+        {
+            using (var ignite = Ignition.Start())
+            {
+                var cfg = new IgniteClientConfiguration
+                {
+                    Endpoints = new[] {"127.0.0.1:10800"}
+                };
+                using (var client = Ignition.StartClient(cfg))
+                {
+                    //tag::executingSql[]
+                    var cache = client.GetOrCreateCache<int, Person>("Person");
+                    cache.Query(new SqlFieldsQuery(
+                            $"CREATE TABLE IF NOT EXISTS Person (id INT PRIMARY KEY, name VARCHAR) WITH \"VALUE_TYPE={typeof(Person)}\"")
+                        {Schema = "PUBLIC"}).GetAll();
+
+                    var key = 1;
+                    var val = new Person {Id = key, Name = "Person 1"};
+
+                    cache.Query(
+                        new SqlFieldsQuery("INSERT INTO Person(id, name) VALUES(?, ?)")
+                        {
+                            Arguments = new object[] {val.Id, val.Name},
+                            Schema = "PUBLIC"
+                        }
+                    ).GetAll();
+
+                    var cursor = cache.Query(
+                        new SqlFieldsQuery("SELECT name FROM Person WHERE id = ?")
+                        {
+                            Arguments = new object[] {key},
+                            Schema = "PUBLIC"
+                        }
+                    );
+
+                    var results = cursor.GetAll();
+
+                    var first = results.FirstOrDefault();
+                    if (first != null)
+                    {
+                        Console.WriteLine("name = " + first[0]);
+                    }
+
+                    //end::executingSql[]
+                }
+            }
+        }
+
+        public static void EnablingSsl()
+        {
+            //tag::ssl[]
+            var cfg = new IgniteClientConfiguration
+            {
+                Endpoints = new[] {"127.0.0.1:10800"},
+                SslStreamFactory = new SslStreamFactory
+                {
+                    CertificatePath = ".../certs/client.pfx",
+                    CertificatePassword = "password",
+                }
+            };
+            using (var client = Ignition.StartClient(cfg))
+            {
+                //...
+            }
+
+            //end::ssl[]
+        }
+
+        public static void Authentication()
+        {
+            //tag::auth[]
+            var cfg = new IgniteClientConfiguration
+            {
+                Endpoints = new[] {"127.0.0.1:10800"},
+                UserName = "ignite",
+                Password = "ignite"
+            };
+            using (var client = Ignition.StartClient(cfg))
+            {
+                //...
+            }
+
+            //end::auth[]
+        }
+
+        public static void ClusterConfig()
+        {
+            //tag::clusterConfiguration[]
+            var cfg = new IgniteConfiguration
+            {
+                ClientConnectorConfiguration = new ClientConnectorConfiguration
+                {
+                    // Set a port range from 10000 to 10005
+                    Port = 10000,
+                    PortRange = 5
+                }
+            };
+
+            var ignite = Ignition.Start(cfg);
+            //end::clusterConfiguration[]
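+
+            // A thin client can then reach the node on any port from the range
+            // (a minimal sketch; assumes the node above is reachable locally):
+            var client = Ignition.StartClient(new IgniteClientConfiguration
+            {
+                Endpoints = new[] {"127.0.0.1:10000..10005"}
+            });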
+        }
+
+        public static void Discovery()
+        {
+            //tag::discovery[]
+            var cfg = new IgniteClientConfiguration
+            {
+                Endpoints = new[] {"127.0.0.1:10800"},
+                EnablePartitionAwareness = true,
+
+                // Enable trace logging to observe discovery process.
+                Logger = new ConsoleLogger { MinLevel = LogLevel.Trace }
+            };
+
+            var client = Ignition.StartClient(cfg);
+
+            // Perform any operation and sleep to let the client discover
+            // server nodes asynchronously.
+            client.GetCacheNames();
+            Thread.Sleep(1000);
+
+            foreach (IClientConnection connection in client.GetConnections())
+            {
+                Console.WriteLine(connection.RemoteEndPoint);
+            }
+            //end::discovery[]
+        }
+
+        public static void ClientCluster()
+        {
+            var cfg = new IgniteClientConfiguration();
+            //tag::client-cluster[]
+            IIgniteClient client = Ignition.StartClient(cfg);
+            IClientCluster cluster = client.GetCluster();
+            cluster.SetActive(true);
+            cluster.EnableWal("my-cache");
+            //end::client-cluster[]
+        }
+
+        public static void ClientClusterGroups()
+        {
+            var cfg = new IgniteClientConfiguration();
+            //tag::client-cluster-groups[]
+            IIgniteClient client = Ignition.StartClient(cfg);
+            IClientClusterGroup serversInDc1 = client.GetCluster().ForServers().ForAttribute("dc", "dc1");
+            foreach (IClientClusterNode node in serversInDc1.GetNodes())
+                Console.WriteLine($"Node ID: {node.Id}");
+            //end::client-cluster-groups[]
+        }
+
+        public static void Compute()
+        {
+            //tag::client-compute-setup[]
+            var igniteCfg = new IgniteConfiguration
+            {
+                ClientConnectorConfiguration = new ClientConnectorConfiguration
+                {
+                    ThinClientConfiguration = new ThinClientConfiguration
+                    {
+                        MaxActiveComputeTasksPerConnection = 10
+                    }
+                }
+            };
+
+            IIgnite ignite = Ignition.Start(igniteCfg);
+            //end::client-compute-setup[]
+
+            var cfg = new IgniteClientConfiguration();
+            //tag::client-compute-task[]
+            IIgniteClient client = Ignition.StartClient(cfg);
+            IComputeClient compute = client.GetCompute();
+            int result = compute.ExecuteJavaTask<int>("org.foo.bar.AddOneTask", 1);
+            //end::client-compute-task[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/UnderstandingConfiguration.cs b/docs/_docs/code-snippets/dotnet/UnderstandingConfiguration.cs
new file mode 100644
index 0000000..c789ac5
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/UnderstandingConfiguration.cs
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cache.Configuration;
+
+namespace dotnet_helloworld
+{
+    class UnderstandingConfiguration
+    {
+        static void Foo()
+        {
+            // tag::UnderstandingConfigurationProgrammatic[]
+            var igniteCfg = new IgniteConfiguration
+            {
+                WorkDirectory = "/path/to/work/directory",
+                CacheConfiguration = new[]
+                {
+                    new CacheConfiguration
+                    {
+                        Name = "myCache",
+                        CacheMode = CacheMode.Partitioned
+                    }
+                }
+            };
+            // end::UnderstandingConfigurationProgrammatic[]
+
+            // tag::SettingWorkDir[]
+            var cfg = new IgniteConfiguration
+            {
+                WorkDirectory = "/path/to/work/directory"
+            };
+            // end::SettingWorkDir[]
+
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/UnderstandingSchemas.cs b/docs/_docs/code-snippets/dotnet/UnderstandingSchemas.cs
new file mode 100644
index 0000000..e02070e
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/UnderstandingSchemas.cs
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using Apache.Ignite.Core;
+
+namespace dotnet_helloworld
+{
+    public class UnderstandingSchemas
+    {
+        public static void MultipleSchemas()
+        {
+            // tag::schemas[]
+            var cfg = new IgniteConfiguration
+            {
+                SqlSchemas = new[]
+                {
+                    "MY_SCHEMA",
+                    "MY_SECOND_SCHEMA"
+                }
+            };
+            // end::schemas[]
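+
+            // Hypothetical follow-up (assumes a started node and a cache named
+            // 'cache'): queries can then target either schema explicitly, e.g.
+            //
+            //     cache.Query(new SqlFieldsQuery("SELECT * FROM City") { Schema = "MY_SCHEMA" });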
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/UsingScanQueries.cs b/docs/_docs/code-snippets/dotnet/UsingScanQueries.cs
new file mode 100644
index 0000000..06f5fab
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/UsingScanQueries.cs
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using System;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cache;
+using Apache.Ignite.Core.Cache.Query;
+
+namespace dotnet_helloworld_queries
+{
+
+    class Person
+    {
+        public string Name { get; set; }
+        public int Salary { get; set; }
+    }
+
+    public class UsingScanQueries
+    {
+        public static void ExecutingScanQueries()
+        {
+            var ignite = Ignition.Start();
+            var cache = ignite.GetOrCreateCache<int, Person>("person_cache");
+            // tag::scanQry1[]
+            var cursor = cache.Query(new ScanQuery<int, Person>());
+            // end::scanQry1[]
+        }
+
+        // tag::scanQry2[]
+        class SalaryFilter : ICacheEntryFilter<int, Person>
+        {
+            public bool Invoke(ICacheEntry<int, Person> entry)
+            {
+                return entry.Value.Salary > 1000;
+            }
+        }
+
+        public static void ScanQueryFilterDemo()
+        {
+            var ignite = Ignition.Start();
+            var cache = ignite.GetOrCreateCache<int, Person>("person_cache");
+
+            cache.Put(1, new Person {Name = "person1", Salary = 1001});
+            cache.Put(2, new Person {Name = "person2", Salary = 999});
+
+            using (var cursor = cache.Query(new ScanQuery<int, Person>(new SalaryFilter())))
+            {
+                foreach (var entry in cursor)
+                {
+                    Console.WriteLine("Key = " + entry.Key + ", Value = " + entry.Value);
+                }
+            }
+        }
+
+        // end::scanQry2[]
+
+        public static void LocalScanQuery()
+        {
+            var ignite = Ignition.Start();
+            var cache = ignite.GetOrCreateCache<int, Person>("person_cache");
+
+            // tag::scanQryLocal[]
+            var query = new ScanQuery<int, Person> {Local = true};
+            var cursor = cache.Query(query);
+            // end::scanQryLocal[]
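+
+            // A local scan query returns only the data held by this node, so
+            // the result set differs from node to node in the same cluster.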
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/UsingSqlApi.cs b/docs/_docs/code-snippets/dotnet/UsingSqlApi.cs
new file mode 100644
index 0000000..2e35f15
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/UsingSqlApi.cs
@@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using System;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cache.Configuration;
+using Apache.Ignite.Core.Cache.Query;
+using Apache.Ignite.Core.Discovery.Tcp;
+using Apache.Ignite.Core.Discovery.Tcp.Static;
+
+namespace dotnet_helloworld
+{
+    public class UsingSqlApi
+    {
+        // tag::sqlQueryFields[]
+        class Person
+        {
+            // Indexed field. Will be visible to the SQL engine.
+            [QuerySqlField(IsIndexed = true)] public long Id;
+
+            // Queryable field. Will be visible to the SQL engine.
+            [QuerySqlField] public string Name;
+
+            // Will NOT be visible to the SQL engine.
+            public int Age;
+
+            // Indexed field sorted in descending order.
+            // Will be visible to the SQL engine.
+            [QuerySqlField(IsIndexed = true, IsDescending = true)]
+            public float Salary;
+        }
+
+        public static void SqlQueryFieldDemo()
+        {
+            var cacheCfg = new CacheConfiguration
+            {
+                Name = "cacheName",
+                QueryEntities = new[]
+                {
+                    new QueryEntity(typeof(int), typeof(Person))
+                }
+            };
+
+            var ignite = Ignition.Start();
+            var cache = ignite.CreateCache<int, Person>(cacheCfg);
+        }
+        // end::sqlQueryFields[]
+
+        public class Inner
+        {
+            // tag::queryEntities[]
+            private class Person
+            {
+                public long Id;
+
+                public string Name;
+
+                public int Age;
+
+                public float Salary;
+            }
+
+            public static void QueryEntitiesDemo()
+            {
+                var personCacheCfg = new CacheConfiguration
+                {
+                    Name = "Person",
+                    QueryEntities = new[]
+                    {
+                        new QueryEntity
+                        {
+                            KeyType = typeof(long),
+                            ValueType = typeof(Person),
+                            Fields = new[]
+                            {
+                                new QueryField("Id", typeof(long)),
+                                new QueryField("Name", typeof(string)),
+                                new QueryField("Age", typeof(int)),
+                                new QueryField("Salary", typeof(float))
+                            },
+                            Indexes = new[]
+                            {
+                                new QueryIndex("Id"),
+                                new QueryIndex(true, "Salary"),
+                            }
+                        }
+                    }
+                };
+                var ignite = Ignition.Start();
+                var personCache = ignite.CreateCache<int, Person>(personCacheCfg);
+            }
+            // end::queryEntities[]
+
+            public static void QueryingDemo()
+            {
+                var ignite = Ignition.Start();
+                // tag::querying[]
+                var cache = ignite.GetCache<long, Person>("Person");
+
+                var sql = new SqlFieldsQuery("select concat(FirstName, ' ', LastName) from Person");
+
+                using (var cursor = cache.Query(sql))
+                {
+                    foreach (var row in cursor)
+                    {
+                        Console.WriteLine("personName=" + row[0]);
+                    }
+                }
+
+                // end::querying[]
+
+                // tag::schema[]
+                var sqlFieldsQuery = new SqlFieldsQuery("select name from City") {Schema = "PERSON"};
+                // end::schema[]
+            }
+
+            public static void CreateTableDdlDemo()
+            {
+                var ignite = Ignition.Start(new IgniteConfiguration
+                {
+                    DiscoverySpi = new TcpDiscoverySpi
+                    {
+                        LocalPort = 48500,
+                        LocalPortRange = 20,
+                        IpFinder = new TcpDiscoveryStaticIpFinder
+                        {
+                            Endpoints = new[]
+                            {
+                                "127.0.0.1:48500..48520"
+                            }
+                        }
+                    }
+                });
+
+                // tag::creatingTables[]
+                var cache = ignite.GetOrCreateCache<long, Person>(
+                    new CacheConfiguration
+                    {
+                        Name = "Person"
+                    }
+                );
+
+                // Create the City table.
+                cache.Query(new SqlFieldsQuery("CREATE TABLE City (id int primary key, name varchar, region varchar)"));
+                // end::creatingTables[]
+
+                var qry = new SqlFieldsQuery("select name from City") {Schema = "PERSON"};
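+                // GetAll() eagerly reads the entire result set into a list.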
+                cache.Query(qry).GetAll();
+            }
+
+            public static void CancellingQueries()
+            {
+                var ignite = Ignition.Start(
+                    new IgniteConfiguration
+                    {
+                        DiscoverySpi = new TcpDiscoverySpi
+                        {
+                            LocalPort = 48500,
+                            LocalPortRange = 20,
+                            IpFinder = new TcpDiscoveryStaticIpFinder
+                            {
+                                Endpoints = new[]
+                                {
+                                    "127.0.0.1:48500..48520"
+                                }
+                            }
+                        },
+                        CacheConfiguration = new[]
+                        {
+                            new CacheConfiguration
+                            {
+                                Name = "personCache",
+                                QueryEntities = new[] {new QueryEntity(typeof(long), typeof(Person)),}
+                            },
+                        }
+                    }
+                );
+                var cache = ignite.GetOrCreateCache<long, Person>("personCache");
+                // tag::qryTimeout[]
+                var query = new SqlFieldsQuery("select * from Person") {Timeout = TimeSpan.FromSeconds(10)};
+                // end::qryTimeout[]
+
+                // tag::cursorDispose[]
+                var qry = new SqlFieldsQuery("select * from Person");
+                var cursor = cache.Query(qry);
+
+                // Execute the query.
+
+                // Halt the query, which might still be in progress.
+                cursor.Dispose();
+                // end::cursorDispose[]
+            }
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/WorkingWithBinaryObjects.cs b/docs/_docs/code-snippets/dotnet/WorkingWithBinaryObjects.cs
new file mode 100644
index 0000000..8a244b9
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/WorkingWithBinaryObjects.cs
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Binary;
+using Apache.Ignite.Core.Cache;
+using Apache.Ignite.Core.Discovery.Tcp;
+using Apache.Ignite.Core.Discovery.Tcp.Static;
+
+namespace dotnet_helloworld
+{
+    public class WorkingWithBinaryObjects
+    {
+        class Book
+        {
+            public string Title { get; set; }
+        }
+
+        class MyEntryProcessor : ICacheEntryProcessor<int, IBinaryObject, object, object>
+        {
+            public object Process(IMutableCacheEntry<int, IBinaryObject> entry, object arg)
+            {
+                // Create a builder from the old value
+                var bldr = entry.Value.ToBuilder();
+
+                // Update the Title field in the builder
+                bldr.SetField("Title", "Ignite");
+
+                // Set new value to the entry
+                entry.Value = bldr.Build();
+
+                return null;
+            }
+        }
+
+        public static void EntryProcessorForBinaryObjectDemo()
+        {
+            var ignite = Ignition.Start(new IgniteConfiguration
+            {
+                DiscoverySpi = new TcpDiscoverySpi
+                {
+                    LocalPort = 48500,
+                    LocalPortRange = 20,
+                    IpFinder = new TcpDiscoveryStaticIpFinder
+                    {
+                        Endpoints = new[]
+                        {
+                            "127.0.0.1:48500..48520"
+                        }
+                    }
+                }
+            });
+
+
+            var cache = ignite.CreateCache<int, object>("cacheName");
+            var key = 101;
+            cache.Put(key, new Book {Title = "book_name"});
+
+            cache
+                .WithKeepBinary<int, IBinaryObject>()
+                .Invoke(key, new MyEntryProcessor(), null);
+            // Not supported yet: https://issues.apache.org/jira/browse/IGNITE-3825
+        }
+        // tag::entryProcessor[]
+        // Not supported in C# for now
+        // end::entryProcessor[]
+
+        public class ExampleGlobalNameMapper : IBinaryNameMapper
+        {
+            public string GetTypeName(string name)
+            {
+                throw new System.NotImplementedException();
+            }
+
+            public string GetFieldName(string name)
+            {
+                throw new System.NotImplementedException();
+            }
+        }
+
+        public class ExampleGlobalIdMapper : IBinaryIdMapper
+        {
+            public int GetTypeId(string typeName)
+            {
+                throw new System.NotImplementedException();
+            }
+
+            public int GetFieldId(int typeId, string fieldName)
+            {
+                throw new System.NotImplementedException();
+            }
+        }
+
+        public class ExampleSerializer : IBinarySerializer
+        {
+            public void WriteBinary(object obj, IBinaryWriter writer)
+            {
+                throw new System.NotImplementedException();
+            }
+
+            public void ReadBinary(object obj, IBinaryReader reader)
+            {
+                throw new System.NotImplementedException();
+            }
+        }
+
+        public static void ConfiguringBinaryObjects()
+        {
+            // tag::binaryCfg[]
+            var cfg = new IgniteConfiguration
+            {
+                BinaryConfiguration = new BinaryConfiguration
+                {
+                    NameMapper = new ExampleGlobalNameMapper(),
+                    IdMapper = new ExampleGlobalIdMapper(),
+                    TypeConfigurations = new[]
+                    {
+                        new BinaryTypeConfiguration
+                        {
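+                            // The trailing wildcard applies this serializer to every type in the package.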
+                            TypeName = "org.apache.ignite.examples.*",
+                            Serializer = new ExampleSerializer()
+                        }
+                    }
+                }
+            };
+            // end::binaryCfg[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/WorkingWithEvents.cs b/docs/_docs/code-snippets/dotnet/WorkingWithEvents.cs
new file mode 100644
index 0000000..7b9509e
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/WorkingWithEvents.cs
@@ -0,0 +1,183 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using System;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Discovery.Tcp;
+using Apache.Ignite.Core.Discovery.Tcp.Static;
+using Apache.Ignite.Core.Events;
+
+namespace dotnet_helloworld
+{
+    public class WorkingWithEvents
+    {
+        public static void EnablingEvents()
+        {
+            //tag::enablingEvents[]
+            var cfg = new IgniteConfiguration
+            {
+                IncludedEventTypes = new[]
+                {
+                    EventType.CacheObjectPut,
+                    EventType.CacheObjectRead,
+                    EventType.CacheObjectRemoved,
+                    EventType.NodeJoined,
+                    EventType.NodeLeft
+                }
+            };
+            // end::enablingEvents[]
+            var discoverySpi = new TcpDiscoverySpi
+            {
+                LocalPort = 48500,
+                LocalPortRange = 20,
+                IpFinder = new TcpDiscoveryStaticIpFinder
+                {
+                    Endpoints = new[]
+                    {
+                        "127.0.0.1:48500..48520"
+                    }
+                }
+            };
+            cfg.DiscoverySpi = discoverySpi;
+            // tag::enablingEvents[]
+            var ignite = Ignition.Start(cfg);
+            // end::enablingEvents[]
+        }
+
+        public static void GettingEventsInterface1()
+        {
+            //tag::gettingEventsInterface1[]
+            var ignite = Ignition.GetIgnite();
+            var events = ignite.GetEvents();
+            //end::gettingEventsInterface1[]
+        }
+
+        public static void GettingEventsInterface2()
+        {
+            //tag::gettingEventsInterface2[]
+            var ignite = Ignition.GetIgnite();
+            var events = ignite.GetCluster().ForCacheNodes("person").GetEvents();
+            //end::gettingEventsInterface2[]
+        }
+
+        //tag::localListen[]
+        class LocalListener : IEventListener<CacheEvent>
+        {
+            public bool Invoke(CacheEvent evt)
+            {
+                Console.WriteLine("Received event [evt=" + evt.Name + ", key=" + evt.Key + ", oldVal=" + evt.OldValue
+                                  + ", newVal=" + evt.NewValue + ']');
+                return true;
+            }
+        }
+
+        public static void LocalListenDemo()
+        {
+            var cfg = new IgniteConfiguration
+            {
+                IncludedEventTypes = new[]
+                {
+                    EventType.CacheObjectPut,
+                    EventType.CacheObjectRead,
+                    EventType.CacheObjectRemoved,
+                }
+            };
+            //end::localListen[]
+            var discoverySpi = new TcpDiscoverySpi
+            {
+                LocalPort = 48500,
+                LocalPortRange = 20,
+                IpFinder = new TcpDiscoveryStaticIpFinder
+                {
+                    Endpoints = new[]
+                    {
+                        "127.0.0.1:48500..48520"
+                    }
+                }
+            };
+            cfg.DiscoverySpi = discoverySpi;
+            // tag::localListen[]
+            var ignite = Ignition.Start(cfg);
+            var events = ignite.GetEvents();
+            events.LocalListen(new LocalListener(), EventType.CacheObjectPut, EventType.CacheObjectRead,
+                EventType.CacheObjectRemoved);
+
+            var cache = ignite.GetOrCreateCache<int, int>("myCache");
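+
+            // Each Put below raises a CacheObjectPut event that the local listener receives.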
+            cache.Put(1, 1);
+            cache.Put(2, 2);
+        }
+        //end::localListen[]
+
+
+        //tag::queryRemote[]
+        class EventFilter : IEventFilter<CacheEvent>
+        {
+            public bool Invoke(CacheEvent evt)
+            {
+                return true;
+            }
+        }
+        // ....
+
+
+        //end::queryRemote[]
+
+
+        public static void StoringEventsDemo()
+        {
+            //tag::storingEvents[]
+            var cfg = new IgniteConfiguration
+            {
+                EventStorageSpi = new MemoryEventStorageSpi()
+                {
+                    ExpirationTimeout = TimeSpan.FromMilliseconds(600000)
+                },
+                IncludedEventTypes = new[]
+                {
+                    EventType.CacheObjectPut,
+                    EventType.CacheObjectRead,
+                    EventType.CacheObjectRemoved,
+                }
+            };
+            //end::storingEvents[]
+            var discoverySpi = new TcpDiscoverySpi
+            {
+                LocalPort = 48500,
+                LocalPortRange = 20,
+                IpFinder = new TcpDiscoveryStaticIpFinder
+                {
+                    Endpoints = new[]
+                    {
+                        "127.0.0.1:48500..48520"
+                    }
+                }
+            };
+            cfg.DiscoverySpi = discoverySpi;
+            //tag::storingEvents[]
+            var ignite = Ignition.Start(cfg);
+            //end::storingEvents[]
+            //tag::queryLocal[]
+            //tag::queryRemote[]
+            var events = ignite.GetEvents();
+            //end::queryRemote[]
+            var cacheEvents = events.LocalQuery(EventType.CacheObjectPut);
+            //end::queryLocal[]
+            //tag::queryRemote[]
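+            // RemoteQuery runs the filter on the nodes where events are stored and returns the matching events.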
+            var storedEvents = events.RemoteQuery(new EventFilter(), null, EventType.CacheObjectPut);
+            //end::queryRemote[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/dotnet.csproj b/docs/_docs/code-snippets/dotnet/dotnet.csproj
new file mode 100644
index 0000000..14f8fa3
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/dotnet.csproj
@@ -0,0 +1,11 @@
+<Project Sdk="Microsoft.NET.Sdk">
+
+  <PropertyGroup>
+    <TargetFramework>netcoreapp2.0</TargetFramework>
+  </PropertyGroup>
+
+  <ItemGroup>
+    <PackageReference Include="Apache.Ignite" Version="2.9.0-alpha20201001" />
+  </ItemGroup>
+
+</Project>
diff --git a/docs/_docs/code-snippets/java/pom.xml b/docs/_docs/code-snippets/java/pom.xml
new file mode 100644
index 0000000..de5623d
--- /dev/null
+++ b/docs/_docs/code-snippets/java/pom.xml
@@ -0,0 +1,146 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+	<modelVersion>4.0.0</modelVersion>
+	<groupId>org.apache.ignite</groupId>
+	<artifactId>code-snippets</artifactId>
+	<version>1.0.0-SNAPSHOT</version>
+	<properties>
+		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+		<ignite.version>2.9.0-SNAPSHOT</ignite.version>
+	</properties>
+	<dependencies>
+		<dependency>
+			<groupId>org.apache.ignite</groupId>
+			<artifactId>ignite-core</artifactId>
+			<version>${ignite.version}</version>
+		</dependency>
+		<dependency>
+			<groupId>org.apache.ignite</groupId>
+			<artifactId>ignite-log4j2</artifactId>
+			<version>${ignite.version}</version>
+		</dependency>
+		<dependency>
+			<groupId>org.apache.ignite</groupId>
+			<artifactId>ignite-log4j</artifactId>
+			<version>${ignite.version}</version>
+		</dependency>
+		<dependency>
+			<groupId>org.apache.ignite</groupId>
+			<artifactId>ignite-jcl</artifactId>
+			<version>${ignite.version}</version>
+		</dependency>
+		<dependency>
+			<groupId>org.apache.ignite</groupId>
+			<artifactId>ignite-slf4j</artifactId>
+			<version>${ignite.version}</version>
+		</dependency>
+		<dependency>
+			<groupId>org.apache.ignite</groupId>
+			<artifactId>ignite-indexing</artifactId>
+			<version>${ignite.version}</version>
+		</dependency>
+		<dependency>
+			<groupId>org.apache.ignite</groupId>
+			<artifactId>ignite-spring</artifactId>
+			<version>${ignite.version}</version>
+		</dependency>
+		<dependency>
+			<groupId>org.apache.ignite</groupId>
+			<artifactId>ignite-urideploy</artifactId>
+			<version>${ignite.version}</version>
+		</dependency>
+		<dependency>
+			<groupId>org.apache.ignite</groupId>
+			<artifactId>ignite-zookeeper</artifactId>
+			<version>${ignite.version}</version>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.ignite</groupId>
+			<artifactId>ignite-opencensus</artifactId>
+			<version>${ignite.version}</version>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.ignite</groupId>
+			<artifactId>ignite-cloud</artifactId>
+			<version>${ignite.version}</version>
+		</dependency>
+		<dependency>
+			<groupId>org.apache.ignite</groupId>
+			<artifactId>ignite-aws</artifactId>
+			<version>${ignite.version}</version>
+		</dependency>
+		<dependency>
+			<groupId>org.apache.ignite</groupId>
+			<artifactId>ignite-compress</artifactId>
+			<version>${ignite.version}</version>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.ignite</groupId>
+			<artifactId>ignite-gce</artifactId>
+			<version>${ignite.version}</version>
+		</dependency>
+		<dependency>
+			<groupId>mysql</groupId>
+			<artifactId>mysql-connector-java</artifactId>
+			<version>8.0.13</version>
+		</dependency>
+		<dependency>
+			<groupId>org.junit.jupiter</groupId>
+			<artifactId>junit-jupiter-api</artifactId>
+			<version>5.6.2</version>
+		</dependency>
+		<dependency>
+			<groupId>org.junit.jupiter</groupId>
+			<artifactId>junit-jupiter-engine</artifactId>
+			<version>5.6.2</version>
+		</dependency>
+	</dependencies>
+	<build>
+		<testSourceDirectory>src/main/java</testSourceDirectory>
+		<plugins>
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-compiler-plugin</artifactId>
+				<version>3.7.0</version>
+				<configuration>
+					<source>1.8</source>
+					<target>1.8</target>
+				</configuration>
+			</plugin>
+			<plugin>
+				<artifactId>maven-surefire-plugin</artifactId>
+				<version>2.22.2</version>
+				<configuration>
+					<includes>
+						<include>**/*.java</include>
+					</includes>
+				</configuration>
+			</plugin>
+			<plugin>
+				<artifactId>maven-failsafe-plugin</artifactId>
+				<version>2.22.2</version>
+			</plugin>
+		</plugins>
+	</build>
+</project>
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/AffinityCollocationExample.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/AffinityCollocationExample.java
new file mode 100644
index 0000000..d9bc6a9
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/AffinityCollocationExample.java
@@ -0,0 +1,150 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.CacheKeyConfiguration;
+import org.apache.ignite.cache.CacheMode;
+import org.apache.ignite.cache.affinity.AffinityKey;
+import org.apache.ignite.cache.affinity.AffinityKeyMapped;
+import org.apache.ignite.configuration.CacheConfiguration;
+
+//tag::collocation[]
+public class AffinityCollocationExample {
+
+    static class Person {
+        private int id;
+        private String companyId;
+        private String name;
+
+        public Person(int id, String companyId, String name) {
+            this.id = id;
+            this.companyId = companyId;
+            this.name = name;
+        }
+
+        public int getId() {
+            return id;
+        }
+    }
+
+    static class PersonKey {
+        private int id;
+
+        @AffinityKeyMapped
+        private String companyId;
+
+        public PersonKey(int id, String companyId) {
+            this.id = id;
+            this.companyId = companyId;
+        }
+    }
+
+    static class Company {
+        private String id;
+        private String name;
+
+        public Company(String id, String name) {
+            this.id = id;
+            this.name = name;
+        }
+
+        public String getId() {
+            return id;
+        }
+    }
+
+    public void configureAffinityKeyWithAnnotation() {
+        CacheConfiguration<PersonKey, Person> personCfg = new CacheConfiguration<PersonKey, Person>("persons");
+        personCfg.setBackups(1);
+
+        CacheConfiguration<String, Company> companyCfg = new CacheConfiguration<>("companies");
+        companyCfg.setBackups(1);
+
+        try (Ignite ignite = Ignition.start()) {
+            IgniteCache<PersonKey, Person> personCache = ignite.getOrCreateCache(personCfg);
+            IgniteCache<String, Company> companyCache = ignite.getOrCreateCache(companyCfg);
+
+            Company c1 = new Company("company1", "My company");
+            Person p1 = new Person(1, c1.getId(), "John");
+
+            // Both the p1 and c1 objects will be cached on the same node
+            personCache.put(new PersonKey(p1.getId(), c1.getId()), p1);
+            companyCache.put("company1", c1);
+
+            // Get the person object
+            p1 = personCache.get(new PersonKey(1, "company1"));
+        }
+    }
+
+    // tag::affinity-key-class[]
+    public void configureAffinityKeyWithAffinityKeyClass() {
+        CacheConfiguration<AffinityKey<Integer>, Person> personCfg = new CacheConfiguration<AffinityKey<Integer>, Person>(
+                "persons");
+        personCfg.setBackups(1);
+
+        CacheConfiguration<String, Company> companyCfg = new CacheConfiguration<String, Company>("companies");
+        companyCfg.setBackups(1);
+
+        Ignite ignite = Ignition.start();
+
+        IgniteCache<AffinityKey<Integer>, Person> personCache = ignite.getOrCreateCache(personCfg);
+        IgniteCache<String, Company> companyCache = ignite.getOrCreateCache(companyCfg);
+
+        Company c1 = new Company("company1", "My company");
+        Person p1 = new Person(1, c1.getId(), "John");
+
+        // Both the p1 and c1 objects will be cached on the same node
+        personCache.put(new AffinityKey<Integer>(p1.getId(), c1.getId()), p1);
+        companyCache.put(c1.getId(), c1);
+
+        // Get the person object
+        p1 = personCache.get(new AffinityKey<>(1, "company1"));
+    }
+
+    // end::affinity-key-class[]
+    // tag::config-with-key-configuration[]
+    public void configureAffinityKeyWithCacheKeyConfiguration() {
+        CacheConfiguration<PersonKey, Person> personCfg = new CacheConfiguration<PersonKey, Person>("persons");
+        personCfg.setBackups(1);
+
+        // Configure the affinity key
+        personCfg.setKeyConfiguration(new CacheKeyConfiguration("PersonKey", "companyId"));
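+        // Keys of the PersonKey type will now be mapped to partitions by the companyId field.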
+
+        CacheConfiguration<String, Company> companyCfg = new CacheConfiguration<String, Company>("companies");
+        companyCfg.setBackups(1);
+
+        Ignite ignite = Ignition.start();
+
+        IgniteCache<PersonKey, Person> personCache = ignite.getOrCreateCache(personCfg);
+        IgniteCache<String, Company> companyCache = ignite.getOrCreateCache(companyCfg);
+
+        Company c1 = new Company("company1", "My company");
+        Person p1 = new Person(1, c1.getId(), "John");
+
+        // Both the p1 and c1 objects will be cached on the same node
+        personCache.put(new PersonKey(1, c1.getId()), p1);
+        companyCache.put(c1.getId(), c1);
+
+        // Get the person object
+        p1 = personCache.get(new PersonKey(1, "company1"));
+    }
+    // end::config-with-key-configuration[]
+}
+//end::collocation[]
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/BackupFilter.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/BackupFilter.java
new file mode 100644
index 0000000..ee5b607
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/BackupFilter.java
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter;
+import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.junit.jupiter.api.Test;
+
+public class BackupFilter {
+
+	@Test
+	void backupFilter() {
+		
+		//tag::backup-filter[]
+		CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<Integer, String>("myCache");
+
+		cacheCfg.setBackups(1);
+		RendezvousAffinityFunction af = new RendezvousAffinityFunction();
+		af.setAffinityBackupFilter(new ClusterNodeAttributeAffinityBackupFilter("AVAILABILITY_ZONE"));
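+
+		// Backup copies will be placed only on nodes whose AVAILABILITY_ZONE attribute
+		// value differs from that of the primary node.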
+
+		cacheCfg.setAffinity(af);
+		//end::backup-filter[]
+	}
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/BasicCacheOperations.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/BasicCacheOperations.java
new file mode 100644
index 0000000..aeaf37b
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/BasicCacheOperations.java
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.IgniteCompute;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.CacheAtomicityMode;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.lang.IgniteFuture;
+import org.junit.jupiter.api.Test;
+
+public class BasicCacheOperations {
+
+    @Test
+    void getCacheInstanceExample() {
+        Ignition.start();
+        Ignition.ignite().createCache("myCache");
+        // tag::getCache[]
+        Ignite ignite = Ignition.ignite();
+
+        // Obtain an instance of the cache named "myCache".
+        // Note that different caches may have different generics.
+        IgniteCache<Integer, String> cache = ignite.cache("myCache");
+        // end::getCache[]
+        Ignition.ignite().close();
+    }
+
+    @Test
+    void createCacheExample() {
+        Ignition.start();
+        // tag::createCache[]
+        Ignite ignite = Ignition.ignite();
+
+        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>();
+
+        cfg.setName("myNewCache");
+        cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
+
+        // Create a cache with the given name if it does not exist.
+        IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
+        // end::createCache[]
+        Ignition.ignite().close();
+    }
+
+    @Test
+    void destroyCacheExample() {
+        Ignition.start();
+        Ignition.ignite().createCache("myCache");
+        // tag::destroyCache[]
+        Ignite ignite = Ignition.ignite();
+
+        IgniteCache<Long, String> cache = ignite.cache("myCache");
+
+        cache.destroy();
+        // end::destroyCache[]
+        Ignition.ignite().close();
+    }
+
+    @Test
+    void atomicOperationsExample() {
+        try (Ignite ignite = Ignition.start()) {
+            ignite.createCache("myCache");
+            // tag::atomic1[]
+            IgniteCache<Integer, String> cache = ignite.cache("myCache");
+
+            // Store keys in the cache (the values will end up on different cache nodes).
+            for (int i = 0; i < 10; i++)
+                cache.put(i, Integer.toString(i));
+
+            for (int i = 0; i < 10; i++)
+                System.out.println("Got [key=" + i + ", val=" + cache.get(i) + ']');
+            // end::atomic1[]
+            // tag::atomic2[]
+            // Put-if-absent which returns previous value.
+            String oldVal = cache.getAndPutIfAbsent(11, "Hello");
+
+            // Put-if-absent which returns boolean success flag.
+            boolean success = cache.putIfAbsent(22, "World");
+
+            // Replace-if-exists operation (opposite of getAndPutIfAbsent), returns previous
+            // value.
+            oldVal = cache.getAndReplace(11, "New value");
+
+            // Replace-if-exists operation (opposite of putIfAbsent), returns boolean
+            // success flag.
+            success = cache.replace(22, "Other new value");
+
+            // Replace-if-matches operation.
+            success = cache.replace(22, "Other new value", "Yet-another-new-value");
+
+            // Remove-if-matches operation.
+            success = cache.remove(11, "Hello");
+            // end::atomic2[]
+        }
+    }
+
+    @Test
+    void asyncExecutionExample() {
+        try (Ignite ignite = Ignition.start()) {
+            // tag::async[]
+            IgniteCompute compute = ignite.compute();
+
+            // Execute a closure asynchronously.
+            IgniteFuture<String> fut = compute.callAsync(() -> "Hello World");
+
+            // Listen for completion and print out the result.
+            fut.listen(f -> System.out.println("Job result: " + f.get()));
+            // end::async[]
+        }
+    }
+
+    void readRepair() {
+
+        try (Ignite ignite = Ignition.start()) {
+            //tag::read-repair[]
+            IgniteCache<Object, Object> cache = ignite.cache("my_cache").withReadRepair();
+
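+            // This get() checks the value across the primary and backup copies and repairs any inconsistency it finds.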
+            Object value = cache.get(10);
+            //end::read-repair[]
+        }
+
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/CacheJdbcPersonStore.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/CacheJdbcPersonStore.java
new file mode 100644
index 0000000..5ee781c
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/CacheJdbcPersonStore.java
@@ -0,0 +1,121 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+
+import javax.cache.Cache.Entry;
+import javax.cache.integration.CacheLoaderException;
+import javax.cache.integration.CacheWriterException;
+
+import org.apache.ignite.cache.store.CacheStoreAdapter;
+import org.apache.ignite.lang.IgniteBiInClosure;
+
+//tag::class[]
+public class CacheJdbcPersonStore extends CacheStoreAdapter<Long, Person> {
+    // This method is called whenever the "get(...)" methods are called on IgniteCache.
+    @Override
+    public Person load(Long key) {
+        try (Connection conn = connection()) {
+            try (PreparedStatement st = conn.prepareStatement("select * from PERSON where id=?")) {
+                st.setLong(1, key);
+
+                ResultSet rs = st.executeQuery();
+
+                return rs.next() ? new Person(rs.getInt(1), rs.getString(2)) : null;
+            }
+        } catch (SQLException e) {
+            throw new CacheLoaderException("Failed to load: " + key, e);
+        }
+    }
+
+    @Override
+    public void write(Entry<? extends Long, ? extends Person> entry) throws CacheWriterException {
+        try (Connection conn = connection()) {
+            // The syntax of the MERGE statement is database-specific and should be adapted to your database.
+            // If your database does not support MERGE, use separate UPDATE and INSERT statements instead.
+            try (PreparedStatement st = conn.prepareStatement("merge into PERSON (id, name) key (id) VALUES (?, ?)")) {
+                Person val = entry.getValue();
+
+                st.setLong(1, entry.getKey());
+                st.setString(2, val.getName());
+
+                st.executeUpdate();
+            }
+        } catch (SQLException e) {
+            throw new CacheWriterException("Failed to write entry (" + entry + ")", e);
+        }
+    }
+
+    // This method is called whenever the "remove(...)" method are called on IgniteCache.
+    @Override
+    public void delete(Object key) {
+        try (Connection conn = connection()) {
+            try (PreparedStatement st = conn.prepareStatement("delete from PERSON where id=?")) {
+                st.setLong(1, (Long) key);
+
+                st.executeUpdate();
+            }
+        } catch (SQLException e) {
+            throw new CacheWriterException("Failed to delete: " + key, e);
+        }
+    }
+
+    // This method is called whenever the "loadCache()" and "localLoadCache()"
+    // methods are called on IgniteCache. It is used for bulk-loading the cache.
+    // If you don't need to bulk-load the cache, skip this method.
+    @Override
+    public void loadCache(IgniteBiInClosure<Long, Person> clo, Object... args) {
+        if (args == null || args.length == 0 || args[0] == null)
+            throw new CacheLoaderException("Expected entry count parameter is not provided.");
+
+        final int entryCnt = (Integer) args[0];
+
+        try (Connection conn = connection()) {
+            try (PreparedStatement st = conn.prepareStatement("select * from PERSON")) {
+                try (ResultSet rs = st.executeQuery()) {
+                    int cnt = 0;
+
+                    while (cnt < entryCnt && rs.next()) {
+                        Person person = new Person(rs.getInt(1), rs.getString(2));
+                        clo.apply(person.getId(), person);
+                        cnt++;
+                    }
+                }
+            }
+        } catch (SQLException e) {
+            throw new CacheLoaderException("Failed to load values from cache store.", e);
+        }
+    }
+
+    // Open JDBC connection.
+    private Connection connection() throws SQLException {
+        // Open connection to your RDBMS systems (Oracle, MySQL, Postgres, DB2, Microsoft SQL, etc.)
+        Connection conn = DriverManager.getConnection("jdbc:mysql://[host]:[port]/[database]", "YOUR_USER_NAME", "YOUR_PASSWORD");
+
+        conn.setAutoCommit(true);
+
+        return conn;
+    }
+}
+
+//end::class[]
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ClientNodes.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ClientNodes.java
new file mode 100644
index 0000000..68a2505
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ClientNodes.java
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import javax.cache.CacheException;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.IgniteClientDisconnectedException;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
+import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
+import org.junit.jupiter.api.Test;
+
+public class ClientNodes {
+
+    @Test
+    void disableReconnection() {
+
+        //tag::disable-reconnection[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
+        discoverySpi.setClientReconnectDisabled(true);
+
+        cfg.setDiscoverySpi(discoverySpi);
+        //end::disable-reconnection[]
+
+        try (Ignite ignite = Ignition.start(cfg)) {
+
+        }
+    }
+
+    void slowClient() {
+        //tag::slow-clients[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        cfg.setClientMode(true);
+
+        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
+        commSpi.setSlowClientQueueLimit(1000);
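+
+        // A client whose outbound message queue exceeds this limit is treated as slow and is disconnected.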
+
+        cfg.setCommunicationSpi(commSpi);
+        //end::slow-clients[]
+    }
+
+    void reconnect() {
+        Ignite ignite = Ignition.start();
+
+        //tag::reconnect[]
+
+        IgniteCache cache = ignite.getOrCreateCache(new CacheConfiguration<>("myCache"));
+
+        try {
+            cache.put(1, "value");
+        } catch (CacheException e) {
+            if (e.getCause() instanceof IgniteClientDisconnectedException) {
+                IgniteClientDisconnectedException cause = (IgniteClientDisconnectedException) e.getCause();
+
+                cause.reconnectFuture().get(); // Wait until the client is reconnected.
+                // proceed with the operation
+            }
+        }
+        //end::reconnect[]
+
+        ignite.close();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ClusterAPI.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ClusterAPI.java
new file mode 100644
index 0000000..0054478
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ClusterAPI.java
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCheckedException;
+import org.apache.ignite.IgniteCluster;
+import org.apache.ignite.IgniteCompute;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cluster.ClusterGroup;
+import org.apache.ignite.cluster.ClusterState;
+import org.junit.jupiter.api.Test;
+
+public class ClusterAPI {
+
+    @Test
+    void activate() {
+        //tag::activate[]
+        Ignite ignite = Ignition.start();
+
+        ignite.cluster().state(ClusterState.ACTIVE);
+        //end::activate[]
+        ignite.close();
+    }
+
+    @Test
+    void changeClusterState() {
+        //tag::change-state[]
+        Ignite ignite = Ignition.start();
+
+        ignite.cluster().state(ClusterState.ACTIVE_READ_ONLY);
+        //end::change-state[]
+        ignite.close();
+    }
+
+    void changeClusterTag() throws IgniteCheckedException {
+        //tag::cluster-tag[]
+        Ignite ignite = Ignition.start();
+
+        // get the cluster id
+        java.util.UUID clusterId = ignite.cluster().id();
+
+        // change the cluster tag
+        ignite.cluster().tag("new_tag");
+
+        //end::cluster-tag[]
+        ignite.close();
+    }
+
+    @Test
+    void enableAutoadjustment() {
+        //tag::enable-autoadjustment[]
+
+        Ignite ignite = Ignition.start();
+
+        ignite.cluster().baselineAutoAdjustEnabled(true);
+
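+        // Recalculate the baseline topology 30 seconds after the most recent server topology change.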
+        ignite.cluster().baselineAutoAdjustTimeout(30000);
+
+        //end::enable-autoadjustment[]
+
+        //tag::disable-autoadjustment[]
+        ignite.cluster().baselineAutoAdjustEnabled(false);
+        //end::disable-autoadjustment[]
+        ignite.close();
+    }
+
+    @Test
+    void remoteNodes() {
+        // tag::remote-nodes[]
+        Ignite ignite = Ignition.ignite();
+
+        IgniteCluster cluster = ignite.cluster();
+
+        // Get compute instance which will only execute
+        // over remote nodes, i.e. all the nodes except for this one.
+        IgniteCompute compute = ignite.compute(cluster.forRemotes());
+
+        // Broadcast to all remote nodes and print the ID of the node
+        // on which this closure is executing.
+        compute.broadcast(
+                () -> System.out.println("Hello Node: " + ignite.cluster().localNode().id()));
+        // end::remote-nodes[]
+    }
+
+    @Test
+    void example(Ignite ignite) {
+        // tag::group-examples[]
+        IgniteCluster cluster = ignite.cluster();
+
+        // All nodes on which the cache with name "myCache" is deployed,
+        // either in client or server mode.
+        ClusterGroup cacheGroup = cluster.forCacheNodes("myCache");
+
+        // All data nodes responsible for caching data for "myCache".
+        ClusterGroup dataGroup = cluster.forDataNodes("myCache");
+
+        // All client nodes that can access "myCache".
+        ClusterGroup clientGroup = cluster.forClientNodes("myCache");
+
+        // end::group-examples[]
+    }
+
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ClusteringOverview.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ClusteringOverview.java
new file mode 100644
index 0000000..be16350
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ClusteringOverview.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
+import org.junit.jupiter.api.Test;
+
+public class ClusteringOverview {
+
+    @Test
+    void clientModeCfg() {
+        try (Ignite serverNode = Ignition
+                .start(new IgniteConfiguration().setIgniteInstanceName("server-node"))) {
+            // tag::clientCfg[]
+            IgniteConfiguration cfg = new IgniteConfiguration();
+
+            // Enable client mode.
+            cfg.setClientMode(true);
+
+            // Start the node in client mode.
+            Ignite ignite = Ignition.start(cfg);
+            // end::clientCfg[]
+
+            ignite.close();
+        }
+    }
+
+    void setClientModeEnabledByIgnition() {
+
+        Ignite serverNode = Ignition
+                .start(new IgniteConfiguration().setIgniteInstanceName("server-node"));
+        // tag::clientModeIgnition[]
+        Ignition.setClientMode(true);
+
+        // Start the node in client mode.
+        Ignite ignite = Ignition.start();
+        // end::clientModeIgnition[]
+
+        ignite.close();
+        serverNode.close();
+    }
+
+    @Test
+    void communicationSpiDemo() {
+
+        Ignite serverNode = Ignition
+                .start(new IgniteConfiguration().setIgniteInstanceName("server-node"));
+        // tag::commSpi[]
+        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
+
+        // Set the local port.
+        commSpi.setLocalPort(4321);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        cfg.setCommunicationSpi(commSpi);
+
+        // Start the node.
+        Ignite ignite = Ignition.start(cfg);
+        // end::commSpi[]
+        ignite.close();
+        serverNode.close();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/CollocatedComputations.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/CollocatedComputations.java
new file mode 100644
index 0000000..c657e02
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/CollocatedComputations.java
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.math.BigDecimal;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import javax.cache.Cache;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.IgniteCompute;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.binary.BinaryObject;
+import org.apache.ignite.cache.CachePeekMode;
+import org.apache.ignite.cache.affinity.Affinity;
+import org.apache.ignite.cache.query.QueryCursor;
+import org.apache.ignite.cache.query.ScanQuery;
+import org.apache.ignite.lang.IgniteCallable;
+import org.apache.ignite.resources.IgniteInstanceResource;
+
+public class CollocatedComputations {
+
+    public static void main(String[] args) {
+        Ignite ignite = Ignition.start();
+        HashSet<Long> keys = new HashSet<>();
+        keys.add(1L);
+        keys.add(2L);
+        keys.add(3L);
+        keys.add(4L);
+
+        calculateAverage(ignite, keys);
+        
+    }
+
+    void collocatingByKey(Ignite ignite) {
+        // tag::collocating-by-key[]
+        IgniteCache<Integer, String> cache = ignite.cache("myCache");
+
+        IgniteCompute compute = ignite.compute();
+
+        int key = 1;
+
+        // This closure will execute on the remote node where
+        // data for the given 'key' is located.
+        compute.affinityRun("myCache", key, () -> {
+            // Peek is a local memory lookup.
+            System.out.println("Co-located [key= " + key + ", value= " + cache.localPeek(key) + ']');
+        });
+
+        // end::collocating-by-key[]
+    }
+
+    // tag::calculate-average[]
+    // this task sums up the values of the salary field for the given set of keys
+    private static class SumTask implements IgniteCallable<BigDecimal> {
+        private Set<Long> keys;
+
+        public SumTask(Set<Long> keys) {
+            this.keys = keys;
+        }
+
+        @IgniteInstanceResource
+        private Ignite ignite;
+
+        @Override
+        public BigDecimal call() throws Exception {
+
+            IgniteCache<Long, BinaryObject> cache = ignite.cache("person").withKeepBinary();
+
+            BigDecimal sum = new BigDecimal(0);
+
+            for (long k : keys) {
+                BinaryObject person = cache.localPeek(k, CachePeekMode.PRIMARY);
+                if (person != null)
+                    sum = sum.add(new BigDecimal((float) person.field("salary")));
+            }
+
+            return sum;
+        }
+    }
+
+    public static void calculateAverage(Ignite ignite, Set<Long> keys) {
+
+        // get the affinity function configured for the cache
+        Affinity<Long> affinityFunc = ignite.affinity("person");
+
+        // this map stores collections of keys for each partition
+        HashMap<Integer, Set<Long>> partMap = new HashMap<>();
+        keys.forEach(k -> {
+            int partId = affinityFunc.partition(k);
+
+            Set<Long> keysByPartition = partMap.computeIfAbsent(partId, key -> new HashSet<Long>());
+            keysByPartition.add(k);
+        });
+
+        BigDecimal total = new BigDecimal(0);
+
+        IgniteCompute compute = ignite.compute();
+
+        List<String> caches = Arrays.asList("person");
+
+        // iterate over all partitions
+        for (Map.Entry<Integer, Set<Long>> pair : partMap.entrySet()) {
+            // send a task that gets specific keys for the partition
+            BigDecimal sum = compute.affinityCall(caches, pair.getKey().intValue(), new SumTask(pair.getValue()));
+            total = total.add(sum);
+        }
+
+        System.out.println("the average salary is " + total.floatValue() / keys.size());
+    }
+
+    // end::calculate-average[]
+
+    // tag::sum-by-partition[]
+    // this task sums up the value of the 'salary' field for all objects stored in
+    // the given partition
+    public static class SumByPartitionTask implements IgniteCallable<BigDecimal> {
+        private int partId;
+
+        public SumByPartitionTask(int partId) {
+            this.partId = partId;
+        }
+
+        @IgniteInstanceResource
+        private Ignite ignite;
+
+        @Override
+        public BigDecimal call() throws Exception {
+            // use binary objects to avoid deserialization
+            IgniteCache<Long, BinaryObject> cache = ignite.cache("person").withKeepBinary();
+
+            BigDecimal total = new BigDecimal(0);
+            try (QueryCursor<Cache.Entry<Long, BinaryObject>> cursor = cache
+                    .query(new ScanQuery<Long, BinaryObject>(partId).setLocal(true))) {
+                for (Cache.Entry<Long, BinaryObject> entry : cursor) {
+                    total = total.add(new BigDecimal((float) entry.getValue().field("salary")));
+                }
+            }
+
+            return total;
+        }
+    }
+
+    // end::sum-by-partition[]
+
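+    // A minimal usage sketch (not part of the tagged snippets above): assuming
+    // the 'person' cache exists, send a SumByPartitionTask to every partition
+    // and aggregate the results. The method name is illustrative.
+    public static BigDecimal sumAllPartitions(Ignite ignite) {
+        Affinity<Long> affinity = ignite.affinity("person");
+
+        BigDecimal total = new BigDecimal(0);
+
+        // Iterate over all partitions of the 'person' cache.
+        for (int partId = 0; partId < affinity.partitions(); partId++)
+            total = total.add(ignite.compute().affinityCall(Arrays.asList("person"), partId,
+                    new SumByPartitionTask(partId)));
+
+        return total;
+    }
+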
+    public static void entryProcessor(Ignite ignite) {
+        // tag::entry-processor[]
+        IgniteCache<String, Integer> cache = ignite.cache("mycache");
+
+        // Increment the value for a specific key by 1.
+        // The operation will be performed on the node where the key is stored.
+        // Note that if the cache does not contain an entry for the given key, it will
+        // be created.
+        cache.invoke("mykey", (entry, args) -> {
+            Integer val = entry.getValue();
+
+            entry.setValue(val == null ? 1 : val + 1);
+
+            return null;
+        });
+
+        // end::entry-processor[]
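+
+        // Illustrative check (outside the tagged snippet): read back the updated value.
+        Integer updated = cache.get("mykey");
+        System.out.println("Updated value: " + updated);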
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ComputeTaskExample.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ComputeTaskExample.java
new file mode 100644
index 0000000..a6f2da9
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ComputeTaskExample.java
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCompute;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.compute.ComputeJob;
+import org.apache.ignite.compute.ComputeJobAdapter;
+import org.apache.ignite.compute.ComputeJobResult;
+import org.apache.ignite.compute.ComputeTaskSplitAdapter;
+
+//tag::compute-task-example[]
+public class ComputeTaskExample {
+    public static class CharacterCountTask extends ComputeTaskSplitAdapter<String, Integer> {
+        // 1. Splits the received string into words
+        // 2. Creates a child job for each word
+        // 3. Sends the jobs to other nodes for processing.
+        @Override
+        public List<ComputeJob> split(int gridSize, String arg) {
+            String[] words = arg.split(" ");
+
+            List<ComputeJob> jobs = new ArrayList<>(words.length);
+
+            for (final String word : words) {
+                jobs.add(new ComputeJobAdapter() {
+                    @Override
+                    public Object execute() {
+                        System.out.println(">>> Printing '" + word + "' from compute job.");
+
+                        // Return the number of letters in the word.
+                        return word.length();
+                    }
+                });
+            }
+
+            return jobs;
+        }
+
+        @Override
+        public Integer reduce(List<ComputeJobResult> results) {
+            int sum = 0;
+
+            for (ComputeJobResult res : results)
+                sum += res.<Integer>getData();
+
+            return sum;
+        }
+    }
+
+    public static void main(String[] args) {
+
+        Ignite ignite = Ignition.start();
+
+        IgniteCompute compute = ignite.compute();
+
+        // Execute the task on the cluster and wait for its completion.
+        int cnt = compute.execute(CharacterCountTask.class, "Hello Grid Enabled World!");
+
+        System.out.println(">>> Total number of characters in the phrase is '" + cnt + "'.");
+    }
+}
+
+//end::compute-task-example[]
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ConfiguringCaches.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ConfiguringCaches.java
new file mode 100644
index 0000000..670b6e5
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ConfiguringCaches.java
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.CacheMode;
+import org.apache.ignite.cache.CacheRebalanceMode;
+import org.apache.ignite.cache.CacheWriteSynchronizationMode;
+import org.apache.ignite.cache.PartitionLossPolicy;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+
+public class ConfiguringCaches {
+
+    public static void main(String[] args) {
+        configurationExample();
+        cacheTemplateExample();
+    }
+
+    public static void configurationExample() {
+        // tag::cfg[]
+        CacheConfiguration cacheCfg = new CacheConfiguration("myCache");
+
+        cacheCfg.setCacheMode(CacheMode.PARTITIONED);
+        cacheCfg.setBackups(2);
+        cacheCfg.setRebalanceMode(CacheRebalanceMode.SYNC);
+        cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
+        cacheCfg.setPartitionLossPolicy(PartitionLossPolicy.READ_ONLY_SAFE);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        cfg.setCacheConfiguration(cacheCfg);
+
+        // Start a node.
+        Ignition.start(cfg);
+        // end::cfg[]
+        Ignition.ignite().close();
+    }
+
+    public static void cacheTemplateExample() {
+        // tag::template[]
+        IgniteConfiguration igniteCfg = new IgniteConfiguration();
+
+        try (Ignite ignite = Ignition.start(igniteCfg)) {
+            CacheConfiguration cacheCfg = new CacheConfiguration("myCacheTemplate");
+
+            cacheCfg.setBackups(2);
+            cacheCfg.setCacheMode(CacheMode.PARTITIONED);
+
+            // Register the cache template.
+            ignite.addCacheConfiguration(cacheCfg);
+        }
+        // end::template[]
+    }
+
+    static void backupsSync() {
+        // tag::synchronization-mode[]
+        CacheConfiguration cacheCfg = new CacheConfiguration();
+
+        cacheCfg.setName("cacheName");
+        cacheCfg.setBackups(1);
+        cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        cfg.setCacheConfiguration(cacheCfg);
+
+        // Start the node.
+        Ignition.start(cfg);
+        // end::synchronization-mode[]
+    }
+
+    static void configuringBackups() {
+        // tag::backups[]
+        CacheConfiguration cacheCfg = new CacheConfiguration();
+
+        cacheCfg.setName("cacheName");
+        cacheCfg.setCacheMode(CacheMode.PARTITIONED);
+        cacheCfg.setBackups(1);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        cfg.setCacheConfiguration(cacheCfg);
+
+        // Start the node.
+        Ignite ignite = Ignition.start(cfg);
+
+        // end::backups[]
+    }
+
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ConfiguringMetrics.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ConfiguringMetrics.java
new file mode 100644
index 0000000..e406afc
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ConfiguringMetrics.java
@@ -0,0 +1,169 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.DataRegionConfiguration;
+import org.apache.ignite.configuration.DataStorageConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.metric.jmx.JmxMetricExporterSpi;
+import org.apache.ignite.spi.metric.log.LogExporterSpi;
+import org.apache.ignite.spi.metric.sql.SqlViewMetricExporterSpi;
+import org.junit.jupiter.api.Test;
+
+public class ConfiguringMetrics {
+
+    @Test
+    void cacheMetrics() {
+        // tag::cache-metrics[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        CacheConfiguration cacheCfg = new CacheConfiguration("test-cache");
+
+        // Enable statistics for the cache.
+        cacheCfg.setStatisticsEnabled(true);
+
+        cfg.setCacheConfiguration(cacheCfg);
+
+        // Start the node.
+        Ignite ignite = Ignition.start(cfg);
+        // end::cache-metrics[]
+
+        ignite.close();
+    }
+
+    @Test
+    void dataStorageMetrics() {
+
+        // tag::data-storage-metrics[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
+        storageCfg.setMetricsEnabled(true);
+
+        // Apply the new configuration.
+        cfg.setDataStorageConfiguration(storageCfg);
+
+        Ignite ignite = Ignition.start(cfg);
+        // end::data-storage-metrics[]
+        ignite.close();
+    }
+
+    @Test
+    void dataRegionMetrics() {
+
+        // tag::data-region-metrics[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
+
+        DataRegionConfiguration defaultRegion = new DataRegionConfiguration();
+        defaultRegion.setMetricsEnabled(true);
+
+        storageCfg.setDefaultDataRegionConfiguration(defaultRegion);
+
+        // Create a new data region.
+        DataRegionConfiguration regionCfg = new DataRegionConfiguration();
+
+        // Region name.
+        regionCfg.setName("myDataRegion");
+
+        // Enable metrics for this region.
+        regionCfg.setMetricsEnabled(true);
+
+        // Set the data region configuration.
+        storageCfg.setDataRegionConfigurations(regionCfg);
+
+        // Other properties
+
+        // Apply the new configuration.
+        cfg.setDataStorageConfiguration(storageCfg);
+
+        Ignite ignite = Ignition.start(cfg);
+        // end::data-region-metrics[]
+        ignite.close();
+    }
+
+    @Test
+    void newMetrics() {
+
+        //tag::new-metric-framework[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        cfg.setMetricExporterSpi(new JmxMetricExporterSpi(), new SqlViewMetricExporterSpi());
+
+        Ignite ignite = Ignition.start(cfg);
+        //end::new-metric-framework[]
+
+        ignite.close();
+    }
+    
+    @Test
+    void sqlExporter() {
+
+        //tag::sql-exporter[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        SqlViewMetricExporterSpi sqlExporter = new SqlViewMetricExporterSpi();
+
+        //export cache metrics only
+        sqlExporter.setExportFilter(mreg -> mreg.name().startsWith("cache."));
+
+        cfg.setMetricExporterSpi(sqlExporter);
+        //end::sql-exporter[]
+
+        Ignition.start(cfg).close();
+    }
+
+    @Test
+    void jmxExporter() {
+
+        //tag::metrics-filter[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        JmxMetricExporterSpi jmxExporter = new JmxMetricExporterSpi();
+
+        //export cache metrics only
+        jmxExporter.setExportFilter(mreg -> mreg.name().startsWith("cache."));
+
+        cfg.setMetricExporterSpi(jmxExporter);
+        //end::metrics-filter[]
+
+        Ignition.start(cfg).close();
+    }
+
+    @Test
+    void logExporter() {
+
+        //tag::log-exporter[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        LogExporterSpi logExporter = new LogExporterSpi();
+        logExporter.setPeriod(600_000);
+
+        //export cache metrics only
+        logExporter.setExportFilter(mreg -> mreg.name().startsWith("cache."));
+
+        cfg.setMetricExporterSpi(logExporter);
+
+        Ignite ignite = Ignition.start(cfg);
+        //end::log-exporter[]
+        ignite.close();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/CustomThreadPool.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/CustomThreadPool.java
new file mode 100644
index 0000000..7a55aa4
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/CustomThreadPool.java
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.ExecutorConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.lang.IgniteRunnable;
+import org.apache.ignite.resources.IgniteInstanceResource;
+
+public class CustomThreadPool {
+
+    void customPool() {
+
+        // tag::pool-config[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        cfg.setExecutorConfiguration(new ExecutorConfiguration("myPool").setSize(16));
+        // end::pool-config[]
+
+        Ignite ignite = Ignition.start(cfg);
+
+        ignite.compute().run(new OuterRunnable());
+
+    }
+
+    // tag::inner-runnable[]
+    public class InnerRunnable implements IgniteRunnable {
+        @Override
+        public void run() {
+            System.out.println("Hello from inner runnable!");
+        }
+    }
+    // end::inner-runnable[]
+
+    // tag::outer-runnable[]
+    public class OuterRunnable implements IgniteRunnable {
+        @IgniteInstanceResource
+        private Ignite ignite;
+
+        @Override
+        public void run() {
+            // Synchronously execute InnerRunnable in a custom executor.
+            ignite.compute().withExecutor("myPool").run(new InnerRunnable());
+            System.out.println("outer runnable is executed");
+        }
+    }
+    // end::outer-runnable[]
+
+    public static void main(String[] args) {
+        CustomThreadPool ctp = new CustomThreadPool();
+        ctp.customPool();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DataPartitioning.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DataPartitioning.java
new file mode 100644
index 0000000..1dbf6b8
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DataPartitioning.java
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.PartitionLossPolicy;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.junit.jupiter.api.Test;
+
+public class DataPartitioning {
+
+    @Test
+    void configurationExample() {
+        // tag::cfg[]
+        // Defining cluster configuration.
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        
+        // Defining Person cache configuration.
+        CacheConfiguration<Integer, Person> personCfg = new CacheConfiguration<Integer, Person>("Person");
+
+        personCfg.setBackups(1);
+
+        // Group the cache belongs to.
+        personCfg.setGroupName("group1");
+
+        // Defining Organization cache configuration.
+        CacheConfiguration orgCfg = new CacheConfiguration("Organization");
+
+        orgCfg.setBackups(1);
+
+        // Group the cache belongs to.
+        orgCfg.setGroupName("group1");
+
+        cfg.setCacheConfiguration(personCfg, orgCfg);
+
+        // Starting the node.
+        Ignition.start(cfg);
+        // end::cfg[]
+        Ignition.ignite().close();
+    }
+
+    @Test
+    void partitionLossPolicy() {
+        //tag::partition-loss-policy[]
+
+        CacheConfiguration<Integer, Person> personCfg = new CacheConfiguration<Integer, Person>("Person");
+        
+        personCfg.setPartitionLossPolicy(PartitionLossPolicy.IGNORE);
+
+        //end::partition-loss-policy[]
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DataRegionConfigurationExample.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DataRegionConfigurationExample.java
new file mode 100644
index 0000000..138beb2
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DataRegionConfigurationExample.java
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.DataPageEvictionMode;
+import org.apache.ignite.configuration.DataRegionConfiguration;
+import org.apache.ignite.configuration.DataStorageConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+
+public class DataRegionConfigurationExample {
+
+    public static void main(String[] args) {
+
+        //tag::ignite-config[]
+        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
+        //tag::default[]
+
+        DataRegionConfiguration defaultRegion = new DataRegionConfiguration();
+        defaultRegion.setName("Default_Region");
+        defaultRegion.setInitialSize(100 * 1024 * 1024);
+
+        storageCfg.setDefaultDataRegionConfiguration(defaultRegion);
+        //end::default[]
+        //tag::data-regions[]
+        // 40MB memory region with eviction enabled.
+        DataRegionConfiguration regionWithEviction = new DataRegionConfiguration();
+        regionWithEviction.setName("40MB_Region_Eviction");
+        regionWithEviction.setInitialSize(20 * 1024 * 1024);
+        regionWithEviction.setMaxSize(40 * 1024 * 1024);
+        regionWithEviction.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);
+
+        storageCfg.setDataRegionConfigurations(regionWithEviction);
+        //end::data-regions[]
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        cfg.setDataStorageConfiguration(storageCfg);
+        //tag::caches[]
+
+        CacheConfiguration cache1 = new CacheConfiguration("SampleCache");
+        //this cache will be hosted in the "40MB_Region_Eviction" data region
+        cache1.setDataRegionName("40MB_Region_Eviction");
+
+        cfg.setCacheConfiguration(cache1);
+        //end::caches[]
+
+        // Start the node.
+        Ignite ignite = Ignition.start(cfg);
+        //end::ignite-config[]
+
+        ignite.close();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DataStreaming.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DataStreaming.java
new file mode 100644
index 0000000..64f1e50
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DataStreaming.java
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.IgniteDataStreamer;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.stream.StreamReceiver;
+import org.apache.ignite.stream.StreamTransformer;
+import org.apache.ignite.stream.StreamVisitor;
+import org.junit.jupiter.api.Test;
+
+public class DataStreaming {
+
+    @Test
+    void dataStreamerExample() {
+        try (Ignite ignite = Ignition.start()) {
+            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
+            //tag::dataStreamer1[]
+            // Get the data streamer reference and stream data.
+            try (IgniteDataStreamer<Integer, String> stmr = ignite.dataStreamer("myCache")) {
+                // Stream entries.
+                for (int i = 0; i < 100000; i++)
+                    stmr.addData(i, Integer.toString(i));
+                //end::dataStreamer1[]
+                //tag::dataStreamer2[]
+                stmr.allowOverwrite(true);
+                //end::dataStreamer2[]
+                //tag::dataStreamer1[]
+            }
+            System.out.println("dataStreamerExample output:" + cache.get(99999));
+            //end::dataStreamer1[]
+        }
+    }
+
+    @Test
+    void streamTransformerExample() {
+        try (Ignite ignite = Ignition.start()) {
+            //tag::streamTransformer[]
+            String[] text = { "hello", "world", "hello", "Ignite" };
+            CacheConfiguration<String, Long> cfg = new CacheConfiguration<>("wordCountCache");
+
+            IgniteCache<String, Long> stmCache = ignite.getOrCreateCache(cfg);
+
+            try (IgniteDataStreamer<String, Long> stmr = ignite.dataStreamer(stmCache.getName())) {
+                // Allow data updates.
+                stmr.allowOverwrite(true);
+
+                // Configure data transformation to count instances of the same word.
+                stmr.receiver(StreamTransformer.from((e, arg) -> {
+                    // Get current count.
+                    Long val = e.getValue();
+
+                    // Increment count by 1.
+                    e.setValue(val == null ? 1L : val + 1);
+
+                    return null;
+                }));
+
+                // Stream words into the streamer cache.
+                for (String word : text)
+                    stmr.addData(word, 1L);
+
+            }
+            //end::streamTransformer[]
+            System.out.println("StreamTransformer example output:" + stmCache.get("hello"));
+        }
+    }
+
+    @Test
+    void streamReceiverExample() {
+        try (Ignite ignite = Ignition.start()) {
+            ignite.getOrCreateCache("myCache");
+            //tag::streamReceiver[]
+            try (IgniteDataStreamer<Integer, String> stmr = ignite.dataStreamer("myCache")) {
+
+                stmr.allowOverwrite(true);
+
+                stmr.receiver((StreamReceiver<Integer, String>) (cache, entries) -> entries.forEach(entry -> {
+
+                    // do something with the entry
+
+                    cache.put(entry.getKey(), entry.getValue());
+                }));
+            }
+            //end::streamReceiver[]
+        }
+    }
+
+    @Test
+    void poolSize() {
+        //tag::pool-size[] 
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        cfg.setDataStreamerThreadPoolSize(10);
+
+        Ignite ignite = Ignition.start(cfg);
+        //end::pool-size[] 
+        ignite.close();
+    }
+
+    // tag::stream-visitor[]
+    static class Instrument {
+        final String symbol;
+        Double latest;
+        Double high;
+        Double low;
+
+        public Instrument(String symbol) {
+            this.symbol = symbol;
+        }
+
+    }
+
+    static Map<String, Double> getMarketData() {
+        //populate market data somehow
+        return new HashMap<>();
+    }
+
+    @Test
+    void streamVisitorExample() {
+        try (Ignite ignite = Ignition.start()) {
+            CacheConfiguration<String, Double> mrktDataCfg = new CacheConfiguration<>("marketData");
+            CacheConfiguration<String, Instrument> instCfg = new CacheConfiguration<>("instruments");
+
+            // Cache for market data ticks streamed into the system.
+            IgniteCache<String, Double> mrktData = ignite.getOrCreateCache(mrktDataCfg);
+
+            // Cache for financial instruments.
+            IgniteCache<String, Instrument> instCache = ignite.getOrCreateCache(instCfg);
+
+            try (IgniteDataStreamer<String, Double> mktStmr = ignite.dataStreamer("marketData")) {
+                // Note that we do not populate the 'marketData' cache (it remains empty).
+                // Instead we update the 'instruments' cache based on the latest market price.
+                mktStmr.receiver(StreamVisitor.from((cache, e) -> {
+                    String symbol = e.getKey();
+                    Double tick = e.getValue();
+
+                    Instrument inst = instCache.get(symbol);
+
+                    if (inst == null)
+                        inst = new Instrument(symbol);
+
+                    // Update the instrument price based on the latest market tick,
+                    // guarding against the null fields of a newly created instrument.
+                    inst.high = inst.high == null ? tick : Math.max(inst.high, tick);
+                    inst.low = inst.low == null ? tick : Math.min(inst.low, tick);
+                    inst.latest = tick;
+
+                    // Update the instrument cache.
+                    instCache.put(symbol, inst);
+                }));
+
+                // Stream market data into the cluster.
+                Map<String, Double> marketData = getMarketData();
+                for (Map.Entry<String, Double> tick : marketData.entrySet())
+                    mktStmr.addData(tick);
+            }
+        }
+    }
+    // end::stream-visitor[]
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DataStructures.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DataStructures.java
new file mode 100644
index 0000000..3d86580
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DataStructures.java
@@ -0,0 +1,222 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteAtomicLong;
+import org.apache.ignite.IgniteAtomicReference;
+import org.apache.ignite.IgniteAtomicSequence;
+import org.apache.ignite.IgniteCountDownLatch;
+import org.apache.ignite.IgniteQueue;
+import org.apache.ignite.IgniteSemaphore;
+import org.apache.ignite.IgniteSet;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.CollectionConfiguration;
+import org.junit.jupiter.api.Test;
+
+public class DataStructures {
+
+    @Test
+    void queue() {
+        //tag::queue[]
+        Ignite ignite = Ignition.start();
+
+        IgniteQueue<String> queue = ignite.queue("queueName", // Queue name.
+                0, // Queue capacity. 0 for an unbounded queue.
+                new CollectionConfiguration() // Collection configuration.
+        );
+
+        //end::queue[]
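+        // Illustrative usage (outside the tagged snippet): IgniteQueue
+        // implements java.util.concurrent.BlockingQueue, so standard queue
+        // operations work cluster-wide.
+        queue.add("firstItem");
+        String item = queue.poll();
+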
+        ignite.close();
+    }
+
+    @Test
+    void set() {
+
+        //tag::set[]
+        Ignite ignite = Ignition.start();
+
+        IgniteSet<String> set = ignite.set("setName", // Set name.
+                new CollectionConfiguration() // Collection configuration.
+        );
+        //end::set[]
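+        // Illustrative usage (outside the tagged snippet): the set behaves
+        // like a java.util.Set distributed across the cluster.
+        set.add("keyA");
+        boolean contains = set.contains("keyA");
+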
+        ignite.close();
+    }
+
+    void colocatedQueue() {
+        //tag::colocated-queue[]
+        Ignite ignite = Ignition.start();
+
+        CollectionConfiguration colCfg = new CollectionConfiguration();
+
+        colCfg.setCollocated(true);
+
+        // Create a colocated queue.
+        IgniteQueue<String> queue = ignite.queue("queueName", 0, colCfg);
+        //end::colocated-queue[]
+        ignite.close();
+    }
+
+    @Test
+    void colocatedSet() {
+        //tag::colocated-set[]
+        Ignite ignite = Ignition.start();
+
+        CollectionConfiguration colCfg = new CollectionConfiguration();
+
+        colCfg.setCollocated(true);
+
+        // Create a colocated set.
+        IgniteSet<String> set = ignite.set("setName", colCfg);
+        //end::colocated-set[] 
+        ignite.close();
+    }
+
+    @Test
+    void atomicLong() {
+        //tag::atomic-long[]
+
+        Ignite ignite = Ignition.start();
+
+        IgniteAtomicLong atomicLong = ignite.atomicLong("atomicName", // Atomic long name.
+                0, // Initial value.
+                true // Create if it does not exist.
+        );
+
+        // Increment atomic long on local node
+        System.out.println("Incremented value: " + atomicLong.incrementAndGet());
+        //end::atomic-long[]
+        ignite.close();
+    }
+
+    @Test
+    void atomicReference() {
+        //tag::atomic-reference[]
+        Ignite ignite = Ignition.start();
+
+        // Create an AtomicReference
+        IgniteAtomicReference<String> ref = ignite.atomicReference("refName", // Reference name.
+                "someVal", // Initial value for atomic reference.
+                true // Create if it does not exist.
+        );
+
+        // Compare and update the value
+        ref.compareAndSet("WRONG EXPECTED VALUE", "someNewVal"); // Won't change.
+        //end::atomic-reference[] 
+        ignite.close();
+    }
+
+    @Test
+    void countDownLatch() {
+        //tag::count-down-latch[]
+        Ignite ignite = Ignition.start();
+
+        IgniteCountDownLatch latch = ignite.countDownLatch("latchName", // Latch name.
+                10, // Initial count.
+                false, // Auto remove, when counter has reached zero.
+                true // Create if it does not exist.
+        );
+        //end::count-down-latch[]
+        ignite.close();
+    }
+
+    @Test
+    void syncOnLatch() {
+        //tag::sync-on-latch[]
+
+        Ignite ignite = Ignition.start();
+
+        final IgniteCountDownLatch latch = ignite.countDownLatch("latchName", 10, false, true);
+
+        // Execute jobs.
+        for (int i = 0; i < 10; i++)
+            // Execute a job on some remote cluster node.
+            ignite.compute().run(() -> {
+                int newCnt = latch.countDown();
+
+                System.out.println("Counted down: newCnt=" + newCnt);
+            });
+
+        // Wait for all jobs to complete.
+        latch.await();
+        //end::sync-on-latch[]
+        ignite.close();
+    }
+
+    @Test
+    void atomicSequence() {
+        //tag::atomic-sequence[]
+        Ignite ignite = Ignition.start();
+
+        //create an atomic sequence
+        IgniteAtomicSequence seq = ignite.atomicSequence("seqName", // Sequence name.
+                0, // Initial value for sequence.
+                true // Create if it does not exist.
+        );
+
+        // Increment the atomic sequence.
+        for (int i = 0; i < 20; i++) {
+            long currentValue = seq.get();
+            long newValue = seq.incrementAndGet();
+        }
+        //end::atomic-sequence[]
+        ignite.close();
+    }
+
+    @Test
+    void semaphore() {
+        //tag::semaphore[]
+        Ignite ignite = Ignition.start();
+
+        IgniteSemaphore semaphore = ignite.semaphore("semName", // Distributed semaphore name.
+                20, // Number of permits.
+                true, // Release acquired permits if the node that owned them leaves the topology.
+                true // Create if it doesn't exist.
+        );
+        //end::semaphore[] 
+        ignite.close();
+    }
+
+    void useSemaphore() {
+        //tag::use-semaphore[]
+
+        Ignite ignite = Ignition.start();
+
+        IgniteSemaphore semaphore = ignite.semaphore("semName", // Distributed semaphore name.
+                20, // Number of permits.
+                true, // Release acquired permits if the node that owned them leaves the topology.
+                true // Create if it doesn't exist.
+        );
+
+        // Acquires a permit, blocking until it's available.
+        semaphore.acquire();
+
+        try {
+            // Semaphore permit is acquired. Execute a distributed task.
+            ignite.compute().run(() -> {
+                System.out.println("Executed on:" + ignite.cluster().localNode().id());
+
+                // Additional logic.
+            });
+        } finally {
+            // Releases a permit, returning it to the semaphore.
+            semaphore.release();
+        }
+
+        //end::use-semaphore[]
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Discovery.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Discovery.java
new file mode 100644
index 0000000..4a216bf
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Discovery.java
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
+import org.junit.jupiter.api.Test;
+
+public class Discovery {
+
+    @Test
+    void clientsBehindNat() {
+
+        //tag::client-behind-nat[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        
+        cfg.setClientMode(true);
+
+        cfg.setCommunicationSpi(new TcpCommunicationSpi().setForceClientToServerConnections(true));
+
+        //end::client-behind-nat[]
+        try(Ignite ignite = Ignition.start(cfg)) {
+            
+        } 
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DiscoveryInTheCloud.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DiscoveryInTheCloud.java
new file mode 100644
index 0000000..576b36d
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DiscoveryInTheCloud.java
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import com.amazonaws.auth.AWSCredentialsProvider;
+import com.amazonaws.auth.AWSStaticCredentialsProvider;
+import com.amazonaws.auth.BasicAWSCredentials;
+import com.amazonaws.auth.InstanceProfileCredentialsProvider;
+import java.util.Arrays;
+import java.util.Collections;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
+import org.apache.ignite.spi.discovery.tcp.ipfinder.cloud.TcpDiscoveryCloudIpFinder;
+import org.apache.ignite.spi.discovery.tcp.ipfinder.elb.TcpDiscoveryElbIpFinder;
+import org.apache.ignite.spi.discovery.tcp.ipfinder.gce.TcpDiscoveryGoogleStorageIpFinder;
+import org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder;
+
+public class DiscoveryInTheCloud {
+
+    public static void apacheJcloudsExample() {
+        //tag::jclouds[]
+        TcpDiscoverySpi spi = new TcpDiscoverySpi();
+
+        TcpDiscoveryCloudIpFinder ipFinder = new TcpDiscoveryCloudIpFinder();
+
+        // Configuration for AWS EC2.
+        ipFinder.setProvider("aws-ec2");
+        ipFinder.setIdentity("yourAccountId");
+        ipFinder.setCredential("yourAccountKey");
+        ipFinder.setRegions(Collections.singletonList("us-east-1"));
+        ipFinder.setZones(Arrays.asList("us-east-1b", "us-east-1e"));
+
+        spi.setIpFinder(ipFinder);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Override default discovery SPI.
+        cfg.setDiscoverySpi(spi);
+
+        // Start a node.
+        Ignition.start(cfg);
+        //end::jclouds[]
+    }
+
+    public static void awsExample1() {
+        //tag::aws1[]
+        TcpDiscoverySpi spi = new TcpDiscoverySpi();
+
+        BasicAWSCredentials creds = new BasicAWSCredentials("yourAccessKey", "yourSecretKey");
+
+        TcpDiscoveryS3IpFinder ipFinder = new TcpDiscoveryS3IpFinder();
+        ipFinder.setAwsCredentials(creds);
+        ipFinder.setBucketName("yourBucketName");
+
+        spi.setIpFinder(ipFinder);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Override default discovery SPI.
+        cfg.setDiscoverySpi(spi);
+
+        // Start a node.
+        Ignition.start(cfg);
+        //end::aws1[]
+    }
+
+    public static void awsExample2() {
+        //tag::aws2[]
+        TcpDiscoverySpi spi = new TcpDiscoverySpi();
+
+        AWSCredentialsProvider instanceProfileCreds = new InstanceProfileCredentialsProvider(false);
+
+        TcpDiscoveryS3IpFinder ipFinder = new TcpDiscoveryS3IpFinder();
+        ipFinder.setAwsCredentialsProvider(instanceProfileCreds);
+        ipFinder.setBucketName("yourBucketName");
+
+        spi.setIpFinder(ipFinder);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Override default discovery SPI.
+        cfg.setDiscoverySpi(spi);
+
+        // Start a node.
+        Ignition.start(cfg);
+        //end::aws2[]
+    }
+
+    public static void awsElbExample() {
+        //tag::awsElb[]
+        TcpDiscoverySpi spi = new TcpDiscoverySpi();
+
+        BasicAWSCredentials creds = new BasicAWSCredentials("yourAccessKey", "yourSecretKey");
+
+        TcpDiscoveryElbIpFinder ipFinder = new TcpDiscoveryElbIpFinder();
+        ipFinder.setRegion("yourElbRegion");
+        ipFinder.setLoadBalancerName("yourLoadBalancerName");
+        ipFinder.setCredentialsProvider(new AWSStaticCredentialsProvider(creds));
+
+        spi.setIpFinder(ipFinder);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Override default discovery SPI.
+        cfg.setDiscoverySpi(spi);
+
+        // Start the node.
+        Ignition.start(cfg);
+        //end::awsElb[]
+    }
+
+    public static void googleCloudStorageExample() {
+        //tag::google[]
+        TcpDiscoverySpi spi = new TcpDiscoverySpi();
+
+        TcpDiscoveryGoogleStorageIpFinder ipFinder = new TcpDiscoveryGoogleStorageIpFinder();
+
+        ipFinder.setServiceAccountId("yourServiceAccountId");
+        ipFinder.setServiceAccountP12FilePath("pathToYourP12Key");
+        ipFinder.setProjectName("yourGoogleCloudPlatformProjectName");
+
+        // Bucket name must be unique across the whole Google Cloud Platform.
+        ipFinder.setBucketName("your_bucket_name");
+
+        spi.setIpFinder(ipFinder);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Override default discovery SPI.
+        cfg.setDiscoverySpi(spi);
+
+        // Start the node.
+        Ignition.start(cfg);
+        //end::google[]
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DiskCompression.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DiskCompression.java
new file mode 100644
index 0000000..de31a3e
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DiskCompression.java
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.DataRegionConfiguration;
+import org.apache.ignite.configuration.DataStorageConfiguration;
+import org.apache.ignite.configuration.DiskPageCompression;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.junit.jupiter.api.Test;
+
+public class DiskCompression {
+
+    @Test
+    void configuration() {
+        //tag::configuration[]
+        DataStorageConfiguration dsCfg = new DataStorageConfiguration();
+
+        // Set the page size to 2 times the disk page size.
+        dsCfg.setPageSize(4096 * 2);
+
+        //enable persistence for the default data region
+        dsCfg.setDefaultDataRegionConfiguration(new DataRegionConfiguration().setPersistenceEnabled(true));
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        cfg.setDataStorageConfiguration(dsCfg);
+
+        CacheConfiguration cacheCfg = new CacheConfiguration("myCache");
+        //enable disk page compression for this cache
+        cacheCfg.setDiskPageCompression(DiskPageCompression.LZ4);
+        //optionally set the compression level
+        cacheCfg.setDiskPageCompressionLevel(10);
+
+        cfg.setCacheConfiguration(cacheCfg);
+
+        Ignite ignite = Ignition.start(cfg);
+        //end::configuration[]
+        
+        ignite.close();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DistributedComputing.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DistributedComputing.java
new file mode 100644
index 0000000..9fe272d
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/DistributedComputing.java
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.IgniteCluster;
+import org.apache.ignite.IgniteCompute;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.lang.IgniteCallable;
+import org.apache.ignite.lang.IgniteFuture;
+import org.apache.ignite.resources.IgniteInstanceResource;
+
+public class DistributedComputing {
+
+    void getCompute() {
+        // tag::get-compute[]
+        Ignite ignite = Ignition.start();
+
+        IgniteCompute compute = ignite.compute();
+        // end::get-compute[]
+    }
+
+    void getComputeForNodes() {
+
+        // tag::get-compute-for-nodes[]
+        Ignite ignite = Ignition.start();
+
+        IgniteCompute compute = ignite.compute(ignite.cluster().forRemotes());
+
+        // end::get-compute-for-nodes[]
+    }
+
+    void taskTimeout() {
+        Ignite ignite = Ignition.start();
+
+        // tag::timeout[]
+        IgniteCompute compute = ignite.compute();
+
+        compute.withTimeout(300_000).run(() -> {
+            // your computation
+            // ...
+        });
+        // end::timeout[]
+    }
+
+    void executeRunnable(Ignite ignite) {
+        // tag::execute-runnable[]
+        IgniteCompute compute = ignite.compute();
+
+        // Iterate through all words and print
+        // each word on a different cluster node.
+        for (String word : "Print words on different cluster nodes".split(" ")) {
+            compute.run(() -> System.out.println(word));
+        }
+        // end::execute-runnable[]
+    }
+
+    void executeCallable(Ignite ignite) {
+        // tag::execute-callable[]
+        Collection<IgniteCallable<Integer>> calls = new ArrayList<>();
+
+        // Iterate through all words in the sentence and create callable jobs.
+        for (String word : "How many characters".split(" "))
+            calls.add(word::length);
+
+        // Execute the collection of callables on the cluster.
+        Collection<Integer> res = ignite.compute().call(calls);
+
+        // Add all the word lengths received from cluster nodes.
+        int total = res.stream().mapToInt(Integer::intValue).sum();
+        // end::execute-callable[]
+    }
+
+    void executeIgniteClosure(Ignite ignite) {
+        // tag::execute-closure[]
+        IgniteCompute compute = ignite.compute();
+
+        // Execute closure on all cluster nodes.
+        Collection<Integer> res = compute.apply(String::length, Arrays.asList("How many characters".split(" ")));
+
+        // Add all the word lengths received from cluster nodes.
+        int total = res.stream().mapToInt(Integer::intValue).sum();
+        // end::execute-closure[]
+    }
+
+    void broadcast(Ignite ignite) {
+
+        // tag::broadcast[]
+        // Limit broadcast to remote nodes only.
+        IgniteCompute compute = ignite.compute(ignite.cluster().forRemotes());
+
+        // Print out hello message on remote nodes in the cluster group.
+        compute.broadcast(() -> System.out.println("Hello Node: " + ignite.cluster().localNode().id()));
+        // end::broadcast[]
+    }
+
+    void async(Ignite ignite) {
+
+        // tag::async[]
+
+        IgniteCompute compute = ignite.compute();
+
+        Collection<IgniteCallable<Integer>> calls = new ArrayList<>();
+
+        // Iterate through all words in the sentence and create callable jobs.
+        for (String word : "Count characters using a callable".split(" "))
+            calls.add(word::length);
+
+        IgniteFuture<Collection<Integer>> future = compute.callAsync(calls);
+
+        future.listen(fut -> {
+            // Total number of characters.
+            int total = fut.get().stream().mapToInt(Integer::intValue).sum();
+
+            System.out.println("Total number of characters: " + total);
+        });
+
+        // end::async[]
+    }
+
+    void shareState(Ignite ignite) {
+        // tag::get-map[]
+        IgniteCluster cluster = ignite.cluster();
+
+        ConcurrentMap<String, Integer> nodeLocalMap = cluster.nodeLocalMap();
+        // end::get-map[]
+
+        // tag::job-counter[]
+        IgniteCallable<Long> job = new IgniteCallable<Long>() {
+            @IgniteInstanceResource
+            private Ignite ignite;
+
+            @Override
+            public Long call() {
+                // Get a reference to node local.
+                ConcurrentMap<String, AtomicLong> nodeLocalMap = ignite.cluster().nodeLocalMap();
+
+                AtomicLong cntr = nodeLocalMap.get("counter");
+
+                if (cntr == null) {
+                    AtomicLong old = nodeLocalMap.putIfAbsent("counter", cntr = new AtomicLong());
+
+                    if (old != null)
+                        cntr = old;
+                }
+
+                return cntr.incrementAndGet();
+            }
+        };
+
+        // end::job-counter[]
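+
+        // Illustrative call (outside the tagged snippet): run the counter job
+        // on some cluster node; each node keeps its own node-local count.
+        Long cnt = ignite.compute().call(job);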
+    }
+
+    // tag::access-data[]
+    public class MyCallableTask implements IgniteCallable<Integer> {
+
+        @IgniteInstanceResource
+        private Ignite ignite;
+
+        @Override
+        public Integer call() throws Exception {
+
+            IgniteCache<Long, Person> cache = ignite.cache("person");
+
+            // Get the data you need
+            Person person = cache.get(1L);
+
+            // do with the data what you need to do
+
+            return 1;
+        }
+    }
+
+    // end::access-data[]
+
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Events.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Events.java
new file mode 100644
index 0000000..6d6f5af
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Events.java
@@ -0,0 +1,188 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.Collection;
+import java.util.UUID;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.IgniteEvents;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.events.CacheEvent;
+import org.apache.ignite.events.EventType;
+import org.apache.ignite.events.JobEvent;
+import org.apache.ignite.lang.IgniteBiPredicate;
+import org.apache.ignite.lang.IgnitePredicate;
+import org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi;
+
+public class Events {
+
+    public static void main(String[] args) {
+
+    }
+
+    void enablingEvents() {
+        // tag::enabling-events[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Enable cache events.
+        cfg.setIncludeEventTypes(EventType.EVT_CACHE_OBJECT_PUT, EventType.EVT_CACHE_OBJECT_READ,
+                EventType.EVT_CACHE_OBJECT_REMOVED, EventType.EVT_NODE_JOINED, EventType.EVT_NODE_LEFT);
+
+        // Start the node.
+        Ignite ignite = Ignition.start(cfg);
+        // end::enabling-events[]
+
+        // tag::get-events[]
+        IgniteEvents events = ignite.events();
+        // end::get-events[]
+    }
+
+    void getEventsForNodes() {
+        // tag::get-events-for-cache[]
+        Ignite ignite = Ignition.ignite();
+
+        IgniteEvents events = ignite.events(ignite.cluster().forCacheNodes("person"));
+        // end::get-events-for-cache[]
+    }
+
+    void getNodeFromEvent(Ignite ignite) {
+        // tag::get-node[]
+        IgniteEvents events = ignite.events();
+
+        UUID uuid = events.remoteListen(new IgniteBiPredicate<UUID, JobEvent>() {
+            @Override
+            public boolean apply(UUID uuid, JobEvent e) {
+
+                System.out.println("nodeID = " + e.node().id() + ", addresses=" + e.node().addresses());
+
+                return true; //continue listening
+            }
+        }, null, EventType.EVT_JOB_FINISHED);
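+
+        // Keep the returned operation ID to unsubscribe later:
+        // events.stopRemoteListen(uuid);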
+
+        // end::get-node[]
+    }
+
+    void localEvents(Ignite ignite) {
+        // tag::local[]
+        IgniteEvents events = ignite.events();
+
+        // Local listener that listens to local events.
+        IgnitePredicate<CacheEvent> localListener = evt -> {
+            System.out.println("Received event [evt=" + evt.name() + ", key=" + evt.key() + ", oldVal=" + evt.oldValue()
+                    + ", newVal=" + evt.newValue());
+
+            return true; // Continue listening.
+        };
+
+        // Subscribe to the cache events that are triggered on the local node.
+        events.localListen(localListener, EventType.EVT_CACHE_OBJECT_PUT, EventType.EVT_CACHE_OBJECT_READ,
+                EventType.EVT_CACHE_OBJECT_REMOVED);
+        // end::local[]
+    }
+
+    void remoteEvents(Ignite ignite) {
+        // tag::remote[]
+        IgniteEvents events = ignite.events();
+
+        IgnitePredicate<CacheEvent> filter = evt -> {
+            System.out.println("remote event: " + evt.name());
+            return true;
+        };
+
+        // Subscribe to the cache events on all nodes where the cache is hosted.
+        UUID uuid = events.remoteListen(new IgniteBiPredicate<UUID, CacheEvent>() {
+
+            @Override
+            public boolean apply(UUID uuid, CacheEvent e) {
+
+                // process the event
+
+                return true; //continue listening
+            }
+        }, filter, EventType.EVT_CACHE_OBJECT_PUT);
+        // end::remote[]
+    }
+
+    void batching() {
+        // tag::batching[]
+        Ignite ignite = Ignition.ignite();
+
+        // Get an instance of the cache.
+        final IgniteCache<Integer, String> cache = ignite.cache("cacheName");
+
+        // Sample remote filter which only accepts events for the keys
+        // that are greater than or equal to 10.
+        IgnitePredicate<CacheEvent> rmtLsnr = new IgnitePredicate<CacheEvent>() {
+            @Override
+            public boolean apply(CacheEvent evt) {
+                System.out.println("Cache event: " + evt);
+
+                int key = evt.key();
+
+                return key >= 10;
+            }
+        };
+
+        // Subscribe to the cache events that are triggered on all nodes
+        // that host the cache.
+        // Send notifications in batches of 10.
+        ignite.events(ignite.cluster().forCacheNodes("cacheName")).remoteListen(10 /* batch size */,
+                0 /* time interval */, false, null, rmtLsnr, EventType.EVTS_CACHE);
+
+        // Generate cache events.
+        for (int i = 0; i < 20; i++)
+            cache.put(i, Integer.toString(i));
+
+        // end::batching[]
+    }
+
+    void storeEvents() {
+        // tag::event-storage[]
+        MemoryEventStorageSpi eventStorageSpi = new MemoryEventStorageSpi();
+        eventStorageSpi.setExpireAgeMs(600000);
+
+        IgniteConfiguration igniteCfg = new IgniteConfiguration();
+        igniteCfg.setEventStorageSpi(eventStorageSpi);
+
+        Ignite ignite = Ignition.start(igniteCfg);
+        // end::event-storage[]
+
+        IgniteEvents events = ignite.events();
+
+        // tag::query-local-events[]
+        Collection<CacheEvent> cacheEvents = events.localQuery(e -> {
+            // process the event
+            return true;
+        }, EventType.EVT_CACHE_OBJECT_PUT);
+
+        // end::query-local-events[]
+
+        // tag::query-remote-events[]
+        Collection<CacheEvent> storedEvents = events.remoteQuery(e -> {
+            // process the event
+            return true;
+        }, 0, EventType.EVT_CACHE_OBJECT_PUT);
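+
+        // The second argument is a timeout in milliseconds; 0 means
+        // wait for the results from remote nodes indefinitely.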
+
+        // end::query-remote-events[]
+
+        ignite.close();
+    }
+
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/EvictionPolicies.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/EvictionPolicies.java
new file mode 100644
index 0000000..4002fa4
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/EvictionPolicies.java
@@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy;
+import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
+import org.apache.ignite.cache.eviction.sorted.SortedEvictionPolicy;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.DataPageEvictionMode;
+import org.apache.ignite.configuration.DataRegionConfiguration;
+import org.apache.ignite.configuration.DataStorageConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+
+public class EvictionPolicies {
+
+    public static void runAll() {
+        randomLRU();
+        random2LRU();
+        LRU();
+        FIFO();
+        sorted();
+    }
+
+    public static void randomLRU() {
+        //tag::randomLRU[]
+        // Node configuration.
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Memory configuration.
+        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
+
+        // Creating a new data region.
+        DataRegionConfiguration regionCfg = new DataRegionConfiguration();
+
+        // Region name.
+        regionCfg.setName("20GB_Region");
+
+        // 500 MB initial size (RAM).
+        regionCfg.setInitialSize(500L * 1024 * 1024);
+
+        // 20 GB max size (RAM).
+        regionCfg.setMaxSize(20L * 1024 * 1024 * 1024);
+
+        // Enabling RANDOM_LRU eviction for this region.
+        regionCfg.setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU);
+
+        // Setting the data region configuration.
+        storageCfg.setDataRegionConfigurations(regionCfg);
+
+        // Applying the new configuration.
+        cfg.setDataStorageConfiguration(storageCfg);
+        //end::randomLRU[]
+
+        try (Ignite ignite = Ignition.start(cfg)) {
+            // No-op: start and stop the node to verify the configuration.
+        }
+    }
+
+    public static void random2LRU() {
+        //tag::random2LRU[]
+        // Ignite configuration.
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Memory configuration.
+        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
+
+        // Creating a new data region.
+        DataRegionConfiguration regionCfg = new DataRegionConfiguration();
+
+        // Region name.
+        regionCfg.setName("20GB_Region");
+
+        // 500 MB initial size (RAM).
+        regionCfg.setInitialSize(500L * 1024 * 1024);
+
+        // 20 GB max size (RAM).
+        regionCfg.setMaxSize(20L * 1024 * 1024 * 1024);
+
+        // Enabling RANDOM_2_LRU eviction for this region.
+        regionCfg.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);
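+        // RANDOM_2_LRU tracks the two most recent accesses of each page and
+        // evicts based on the older one, which prevents pages touched only
+        // once (e.g., by a full scan) from displacing frequently used data.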
+
+        // Setting the data region configuration.
+        storageCfg.setDataRegionConfigurations(regionCfg);
+
+        // Applying the new configuration.
+        cfg.setDataStorageConfiguration(storageCfg);
+        //end::random2LRU[]
+
+    }
+
+    public static void LRU() {
+        //tag::LRU[]
+        CacheConfiguration cacheCfg = new CacheConfiguration();
+
+        cacheCfg.setName("cacheName");
+
+        // Enabling on-heap caching for this distributed cache.
+        cacheCfg.setOnheapCacheEnabled(true);
+
+        // Set the maximum cache size to 1 million (default is 100,000).
+        cacheCfg.setEvictionPolicyFactory(() -> new LruEvictionPolicy(1000000));
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        cfg.setCacheConfiguration(cacheCfg);
+        //end::LRU[]
+
+    }
+
+    public static void FIFO() {
+        //tag::FIFO[]
+        CacheConfiguration cacheCfg = new CacheConfiguration();
+
+        cacheCfg.setName("cacheName");
+
+        // Enabling on-heap caching for this distributed cache.
+        cacheCfg.setOnheapCacheEnabled(true);
+
+        // Set the maximum cache size to 1 million (default is 100,000).
+        cacheCfg.setEvictionPolicyFactory(() -> new FifoEvictionPolicy(1000000));
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        cfg.setCacheConfiguration(cacheCfg);
+        //end::FIFO[]
+
+    }
+
+    public static void sorted() {
+        //tag::sorted[]
+        CacheConfiguration cacheCfg = new CacheConfiguration();
+
+        cacheCfg.setName("cacheName");
+
+        // Enabling on-heap caching for this distributed cache.
+        cacheCfg.setOnheapCacheEnabled(true);
+
+        // Set the maximum cache size to 1 million (default is 100,000).
+        cacheCfg.setEvictionPolicyFactory(() -> new SortedEvictionPolicy(1000000));
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        cfg.setCacheConfiguration(cacheCfg);
+        //end::sorted[]
+
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ExpiryPolicies.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ExpiryPolicies.java
new file mode 100644
index 0000000..0ad07e9
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ExpiryPolicies.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.concurrent.TimeUnit;
+import javax.cache.expiry.CreatedExpiryPolicy;
+import javax.cache.expiry.Duration;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.junit.jupiter.api.Test;
+
+public class ExpiryPolicies {
+
+    @Test
+    void expiryPoliciesExample() {
+        //tag::cfg[]
+        //tag::eagerTtl[]
+        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<Integer, String>();
+        cfg.setName("myCache");
+        //end::cfg[]
+
+        cfg.setEagerTtl(true);
+        //end::eagerTtl[]
+        //tag::cfg[]
+        cfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.FIVE_MINUTES));
+        //end::cfg[]
+
+    }
+
+    @Test
+    void expiryPolicyForIndividualEntry() {
+
+        IgniteConfiguration igniteCfg = new IgniteConfiguration();
+        try (Ignite ignite = Ignition.start(igniteCfg)) {
+
+            //tag::expiry2[]
+
+            CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<Integer, String>("myCache");
+
+            ignite.createCache(cacheCfg);
+
+            IgniteCache cache = ignite.cache("myCache")
+                    .withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.MINUTES, 5)));
+
+            // If the cache does not contain key 1, the new entry expires 5 minutes after creation.
+            cache.put(1, "first value");
+
+            //end::expiry2[]
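+
+            // withExpiryPolicy(...) returns a decorated cache instance; only
+            // operations performed through it use the 5-minute policy.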
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ExternalStorage.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ExternalStorage.java
new file mode 100644
index 0000000..f66f037
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ExternalStorage.java
@@ -0,0 +1,169 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import com.mysql.cj.jdbc.MysqlDataSource;
+import java.io.Serializable;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedHashMap;
+import java.util.Set;
+import javax.cache.configuration.Factory;
+import javax.sql.DataSource;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.CacheAtomicityMode;
+import org.apache.ignite.cache.CacheMode;
+import org.apache.ignite.cache.QueryEntity;
+import org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStoreFactory;
+import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
+import org.apache.ignite.cache.store.jdbc.JdbcType;
+import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
+import org.apache.ignite.cache.store.jdbc.dialect.MySQLDialect;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+
+public class ExternalStorage {
+
+    public static void cacheJdbcPojoStoreExample() {
+        //tag::pojo[]
+        IgniteConfiguration igniteCfg = new IgniteConfiguration();
+
+        CacheConfiguration<Integer, Person> personCacheCfg = new CacheConfiguration<>();
+
+        personCacheCfg.setName("PersonCache");
+        personCacheCfg.setCacheMode(CacheMode.PARTITIONED);
+        personCacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
+
+        personCacheCfg.setReadThrough(true);
+        personCacheCfg.setWriteThrough(true);
+
+        CacheJdbcPojoStoreFactory<Integer, Person> factory = new CacheJdbcPojoStoreFactory<>();
+        factory.setDialect(new MySQLDialect());
+        factory.setDataSourceFactory((Factory<DataSource>)() -> {
+            MysqlDataSource mysqlDataSrc = new MysqlDataSource();
+            mysqlDataSrc.setURL("jdbc:mysql://[host]:[port]/[database]");
+            mysqlDataSrc.setUser("YOUR_USER_NAME");
+            mysqlDataSrc.setPassword("YOUR_PASSWORD");
+            return mysqlDataSrc;
+        });
+
+        JdbcType personType = new JdbcType();
+        personType.setCacheName("PersonCache");
+        personType.setKeyType(Integer.class);
+        personType.setValueType(Person.class);
+        // Specify the schema if applicable
+        // personType.setDatabaseSchema("MY_DB_SCHEMA");
+        personType.setDatabaseTable("PERSON");
+
+        personType.setKeyFields(new JdbcTypeField(java.sql.Types.INTEGER, "id", Integer.class, "id"));
+
+        // Note: setValueFields replaces previously set fields, so pass all value fields in one call.
+        personType.setValueFields(new JdbcTypeField(java.sql.Types.INTEGER, "id", Integer.class, "id"),
+                new JdbcTypeField(java.sql.Types.VARCHAR, "name", String.class, "name"));
+
+        factory.setTypes(personType);
+
+        personCacheCfg.setCacheStoreFactory(factory);
+
+        QueryEntity qryEntity = new QueryEntity();
+
+        qryEntity.setKeyType(Integer.class.getName());
+        qryEntity.setValueType(Person.class.getName());
+        qryEntity.setKeyFieldName("id");
+
+        Set<String> keyFields = new HashSet<>();
+        keyFields.add("id");
+        qryEntity.setKeyFields(keyFields);
+
+        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
+        fields.put("id", "java.lang.Integer");
+        fields.put("name", "java.lang.String");
+
+        qryEntity.setFields(fields);
+
+        personCacheCfg.setQueryEntities(Collections.singletonList(qryEntity));
+
+        igniteCfg.setCacheConfiguration(personCacheCfg);
+        //end::pojo[]
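+
+        // With read/write-through enabled above, gets and puts on "PersonCache"
+        // delegate cache misses and updates to the PERSON table.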
+    }
+
+    //tag::person[]
+    class Person implements Serializable {
+        private static final long serialVersionUID = 0L;
+
+        private int id;
+
+        private String name;
+
+        public Person() {
+        }
+
+        public String getName() {
+            return name;
+        }
+
+        public void setName(String name) {
+            this.name = name;
+        }
+
+        public int getId() {
+            return id;
+        }
+
+        public void setId(int id) {
+            this.id = id;
+        }
+    }
+    //end::person[]
+
+    public static void cacheJdbcBlobStoreExample() {
+        //tag::blob1[]
+        IgniteConfiguration igniteCfg = new IgniteConfiguration();
+
+        CacheConfiguration<Integer, Person> personCacheCfg = new CacheConfiguration<>();
+        personCacheCfg.setName("PersonCache");
+
+        CacheJdbcBlobStoreFactory<Integer, Person> cacheStoreFactory = new CacheJdbcBlobStoreFactory<>();
+
+        cacheStoreFactory.setUser("USER_NAME");
+
+        MysqlDataSource mysqlDataSrc = new MysqlDataSource();
+        mysqlDataSrc.setURL("jdbc:mysql://[host]:[port]/[database]");
+        mysqlDataSrc.setUser("USER_NAME");
+        mysqlDataSrc.setPassword("PASSWORD");
+
+        cacheStoreFactory.setDataSource(mysqlDataSrc);
+
+        personCacheCfg.setCacheStoreFactory(cacheStoreFactory);
+
+        personCacheCfg.setWriteThrough(true);
+        personCacheCfg.setReadThrough(true);
+
+        igniteCfg.setCacheConfiguration(personCacheCfg);
+        //end::blob1[]
+
+        Ignite ignite = Ignition.start(igniteCfg);
+
+        //tag::blob2[]
+        // Load data from person table into PersonCache.
+        IgniteCache<Integer, Person> personCache = ignite.cache("PersonCache");
+
+        personCache.loadCache(null);
+        //end::blob2[]
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/FailureHandler.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/FailureHandler.java
new file mode 100644
index 0000000..e997d30
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/FailureHandler.java
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.Collections;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.failure.StopNodeFailureHandler;
+
+public class FailureHandler {
+
+    void configure() {
+        // tag::configure-handler[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        cfg.setFailureHandler(new StopNodeFailureHandler());
+        Ignite ignite = Ignition.start(cfg);
+        // end::configure-handler[]
+        ignite.close();
+    }
+
+    void failureTypes() {
+        // tag::failure-types[]
+        StopNodeFailureHandler failureHandler = new StopNodeFailureHandler();
+        failureHandler.setIgnoredFailureTypes(Collections.emptySet());
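+
+        // An empty set means no failure types are ignored, so the handler also
+        // reacts to types ignored by default, such as SYSTEM_WORKER_BLOCKED.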
+
+        IgniteConfiguration cfg = new IgniteConfiguration().setFailureHandler(failureHandler);
+
+        Ignite ignite = Ignition.start(cfg);
+        // end::failure-types[]
+
+        ignite.close();
+    }
+
+    public static void main(String[] args) {
+        FailureHandler fh = new FailureHandler();
+        fh.configure();
+        fh.failureTypes();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/FaultTolerance.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/FaultTolerance.java
new file mode 100644
index 0000000..9708c8e
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/FaultTolerance.java
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.failover.always.AlwaysFailoverSpi;
+import org.apache.ignite.spi.failover.never.NeverFailoverSpi;
+
+public class FaultTolerance {
+    void always() {
+        // tag::always[]
+        AlwaysFailoverSpi failSpi = new AlwaysFailoverSpi();
+
+        // Override maximum failover attempts.
+        failSpi.setMaximumFailoverAttempts(5);
+
+        // Override the default failover SPI.
+        IgniteConfiguration cfg = new IgniteConfiguration().setFailoverSpi(failSpi);
+
+        // Start a node.
+        Ignite ignite = Ignition.start(cfg);
+        // end::always[]
+
+        ignite.close();
+    }
+
+    void never() {
+        // tag::never[]
+        NeverFailoverSpi failSpi = new NeverFailoverSpi();
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Override the default failover SPI.
+        cfg.setFailoverSpi(failSpi);
+
+        // Start a node.
+        Ignite ignite = Ignition.start(cfg);
+        // end::never[]
+
+        ignite.close();
+    }
+
+    public static void main(String[] args) {
+        FaultTolerance ft = new FaultTolerance();
+
+        ft.always();
+        ft.never();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/IgniteExecutorService.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/IgniteExecutorService.java
new file mode 100644
index 0000000..0e032c0
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/IgniteExecutorService.java
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.concurrent.ExecutorService;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.cluster.ClusterGroup;
+import org.apache.ignite.lang.IgniteRunnable;
+
+public class IgniteExecutorService {
+
+    void test(Ignite ignite) {
+
+        // tag::execute[]
+        // Get cluster-enabled executor service.
+        ExecutorService exec = ignite.executorService();
+
+        // Iterate through all words in the sentence and create jobs.
+        for (final String word : "Print words using runnable".split(" ")) {
+            // Execute runnable on some node.
+            exec.submit(new IgniteRunnable() {
+                @Override
+                public void run() {
+                    System.out.println(">>> Printing '" + word + "' on this node from grid job.");
+                }
+            });
+        }
+        // end::execute[]
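+
+        // The returned service follows the standard java.util.concurrent
+        // ExecutorService contract; submitted jobs are load-balanced across
+        // the cluster nodes.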
+    }
+
+    void clusterGroup(Ignite ignite) {
+        // tag::cluster-group[]
+        // A group for nodes where the attribute 'worker' is defined.
+        ClusterGroup workerGrp = ignite.cluster().forAttribute("ROLE", "worker");
+
+        // Get an executor service for the cluster group.
+        ExecutorService exec = ignite.executorService(workerGrp);
+        // end::cluster-group[]
+
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/IgniteLifecycle.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/IgniteLifecycle.java
new file mode 100644
index 0000000..9293194
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/IgniteLifecycle.java
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.junit.jupiter.api.Test;
+
+public class IgniteLifecycle {
+
+    @Test
+    void startNode() {
+        //tag::start[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        Ignite ignite = Ignition.start(cfg);
+        //end::start[]
+        ignite.close();
+    }
+
+    @Test
+    void startAndClose() {
+
+        //tag::autoclose[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        try (Ignite ignite = Ignition.start(cfg)) {
+            //
+        }
+
+        //end::autoclose[]
+    }
+
+    void startClientNode() {
+        //tag::client-node[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Enable client mode.
+        cfg.setClientMode(true);
+
+        // Start the node in client mode.
+        Ignite ignite = Ignition.start(cfg);
+        //end::client-node[]
+
+        ignite.close();
+    }
+
+    @Test
+    void lifecycleEvents() {
+        //tag::lifecycle[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Specify a lifecycle bean in the node configuration.
+        cfg.setLifecycleBeans(new MyLifecycleBean());
+
+        // Start the node.
+        Ignite ignite = Ignition.start(cfg);
+        //end::lifecycle[]
+
+        ignite.close();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/IgnitePersistence.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/IgnitePersistence.java
new file mode 100644
index 0000000..f31344e
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/IgnitePersistence.java
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cluster.ClusterState;
+import org.apache.ignite.configuration.DataStorageConfiguration;
+import org.apache.ignite.configuration.DiskPageCompression;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.junit.jupiter.api.Test;
+
+public class IgnitePersistence {
+
+    @Test
+    void disablingWal() {
+
+        //tag::wal[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
+        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
+
+        cfg.setDataStorageConfiguration(storageCfg);
+
+        Ignite ignite = Ignition.start(cfg);
+
+        ignite.cluster().state(ClusterState.ACTIVE);
+
+        String cacheName = "myCache";
+
+        ignite.getOrCreateCache(cacheName);
+
+        ignite.cluster().disableWal(cacheName);
+
+        //load data
+        ignite.cluster().enableWal(cacheName);
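+
+        // Note: if a node fails while WAL is disabled, its persisted data for
+        // the cache may be lost, so disable WAL only for bulk loads that can
+        // be safely restarted.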
+
+        //end::wal[]
+        ignite.close();
+    }
+
+    @Test
+    void changeWalSegmentSize() {
+        // tag::segment-size[] 
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
+        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
+
+        storageCfg.setWalSegmentSize(128 * 1024 * 1024);
+
+        cfg.setDataStorageConfiguration(storageCfg);
+
+        Ignite ignite = Ignition.start(cfg);
+        // end::segment-size[]
+
+        ignite.close();
+    }
+
+    @Test
+    void cfgExample() {
+        //tag::cfg[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        //data storage configuration
+        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
+
+        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
+
+        //tag::storage-path[]
+        storageCfg.setStoragePath("/opt/storage");
+        //end::storage-path[]
+
+        cfg.setDataStorageConfiguration(storageCfg);
+
+        Ignite ignite = Ignition.start(cfg);
+        //end::cfg[]
+        ignite.close();
+    }
+
+    @Test
+    void walRecordsCompression() {
+        //tag::wal-records-compression[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        DataStorageConfiguration dsCfg = new DataStorageConfiguration();
+        dsCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
+
+        //WAL page compression parameters
+        dsCfg.setWalPageCompression(DiskPageCompression.LZ4);
+        dsCfg.setWalPageCompressionLevel(8);
+
+        cfg.setDataStorageConfiguration(dsCfg);
+        Ignite ignite = Ignition.start(cfg);
+        //end::wal-records-compression[]
+
+        ignite.close();
+    }
+
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Indexes.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Indexes.java
new file mode 100644
index 0000000..e4bdb04
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Indexes.java
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.io.Serializable;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedHashMap;
+import java.util.Set;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.QueryEntity;
+import org.apache.ignite.cache.QueryIndex;
+import org.apache.ignite.cache.QueryIndexType;
+import org.apache.ignite.cache.query.SqlFieldsQuery;
+import org.apache.ignite.cache.query.annotations.QuerySqlField;
+import org.apache.ignite.configuration.CacheConfiguration;
+
+public class Indexes {
+
+    // tag::configuring-with-annotation[]
+    public class Person implements Serializable {
+        /** Indexed field. Will be visible to the SQL engine. */
+        @QuerySqlField(index = true)
+        private long id;
+
+        /** Queryable field. Will be visible to the SQL engine. */
+        @QuerySqlField
+        private String name;
+
+        /** Will NOT be visible to the SQL engine. */
+        private int age;
+
+        /**
+         * Indexed field sorted in descending order. Will be visible to the SQL engine.
+         */
+        @QuerySqlField(index = true, descending = true)
+        private float salary;
+    }
+    // end::configuring-with-annotation[]
+
+    public class Person2 implements Serializable {
+        // tag::annotation-with-inline-size[]
+        @QuerySqlField(index = true, inlineSize = 13)
+        private String country;
+        // end::annotation-with-inline-size[]
+    }
+
+    void register() {
+        // tag::register-indexed-types[]
+        // Preparing configuration.
+        CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>();
+
+        // Registering indexed type.
+        ccfg.setIndexedTypes(Long.class, Person.class);
+        // end::register-indexed-types[]
+    }
+
+    void executeQuery() {
+        // tag::query[]
+        SqlFieldsQuery qry = new SqlFieldsQuery("SELECT id, name FROM Person" + "WHERE id > 1500 LIMIT 10");
+        // end::query[]
+    }
+
+    void withQueryEntities() {
+        // tag::index-using-queryentity[]
+        CacheConfiguration<Long, Person> cache = new CacheConfiguration<Long, Person>("myCache");
+
+        QueryEntity queryEntity = new QueryEntity();
+
+        queryEntity.setKeyFieldName("id").setKeyType(Long.class.getName()).setValueType(Person.class.getName());
+
+        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
+        fields.put("id", "java.lang.Long");
+        fields.put("name", "java.lang.String");
+        fields.put("salary", "java.lang.Long");
+
+        queryEntity.setFields(fields);
+
+        queryEntity.setIndexes(Arrays.asList(new QueryIndex("name"),
+                new QueryIndex(Arrays.asList("id", "salary"), QueryIndexType.SORTED)));
+
+        cache.setQueryEntities(Arrays.asList(queryEntity));
+
+        // end::index-using-queryentity[]
+    }
+
+    void inline() {
+
+        QueryEntity queryEntity = new QueryEntity();
+        // tag::query-entity-with-inline-size[]
+        QueryIndex idx = new QueryIndex("country");
+        idx.setInlineSize(13);
+        queryEntity.setIndexes(Arrays.asList(idx));
+        // end::query-entity-with-inline-size[]
+    }
+
+    void customKeys() {
+        Ignite ignite = Ignition.start();
+        // tag::custom-key[]
+        // Preparing cache configuration.
+        CacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<Long, Person>("personCache");
+
+        // Creating the query entity.
+        QueryEntity entity = new QueryEntity("CustomKey", "Person");
+
+        // Listing all the queryable fields.
+        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
+
+        fields.put("intKeyField", Integer.class.getName());
+        fields.put("strKeyField", String.class.getName());
+
+        fields.put("firstName", String.class.getName());
+        fields.put("lastName", String.class.getName());
+
+        entity.setFields(fields);
+
+        // Listing a subset of the fields that belong to the key.
+        Set<String> keyFlds = new HashSet<>();
+
+        keyFlds.add("intKeyField");
+        keyFlds.add("strKeyField");
+
+        entity.setKeyFields(keyFlds);
+
+        // End of new settings, nothing else here is DML related
+
+        entity.setIndexes(Collections.<QueryIndex>emptyList());
+
+        cacheCfg.setQueryEntities(Collections.singletonList(entity));
+
+        ignite.createCache(cacheCfg);
+
+        // end::custom-key[]
+
+    }
+
+    public static void main(String[] args) {
+        Indexes ind = new Indexes();
+
+        ind.withQueryEntities();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Indexes_groups.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Indexes_groups.java
new file mode 100644
index 0000000..80ebcc3
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Indexes_groups.java
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.io.Serializable;
+
+import org.apache.ignite.cache.query.annotations.QuerySqlField;
+
+public class Indexes_groups {
+
+    //tag::group-indexes[]
+    public class Person implements Serializable {
+        /** Indexed in a group index with "salary". */
+        @QuerySqlField(orderedGroups = { @QuerySqlField.Group(name = "age_salary_idx", order = 0, descending = true) })
+        private int age;
+
+        /** Indexed separately and in a group index with "age". */
+        @QuerySqlField(index = true, orderedGroups = { @QuerySqlField.Group(name = "age_salary_idx", order = 3) })
+        private double salary;
+    }
+    //end::group-indexes[]
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/JDBCClientDriver.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/JDBCClientDriver.java
new file mode 100644
index 0000000..765ab1b
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/JDBCClientDriver.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+
+import org.junit.jupiter.api.Test;
+
+public class JDBCClientDriver {
+
+    void registerDriver() throws ClassNotFoundException, SQLException {
+        //tag::register[]
+        // Registering the JDBC driver.
+        Class.forName("org.apache.ignite.IgniteJdbcDriver");
+
+        // Opening JDBC connection (cache name is not specified, which means that we use default cache).
+        Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://config/ignite-jdbc.xml");
+        
+        //end::register[]
+        conn.close();
+    }
+
+    void streaming() throws ClassNotFoundException, SQLException {
+        // Register JDBC driver.
+        Class.forName("org.apache.ignite.IgniteJdbcDriver");
+
+        // Opening connection in the streaming mode.
+        Connection conn = DriverManager
+                .getConnection("jdbc:ignite:cfg://cache=myCache:streaming=true@file://config/ignite-jdbc.xml");
+
+        conn.close();
+    }
+    
+    void timeBasedFlushing() throws ClassNotFoundException, SQLException {
+        //tag::time-based-flushing[]
+        // Register the JDBC driver.
+        Class.forName("org.apache.ignite.IgniteJdbcDriver");
+
+        // Open a connection with streaming mode and time-based flushing enabled.
+        Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://streaming=true:streamingFlushFrequency=1000@file:///etc/config/ignite-jdbc.xml");
+
+        PreparedStatement stmt = conn.prepareStatement(
+          "INSERT INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");
+
+        // Adding the data.
+        for (int i = 1; i < 100000; i++) {
+              // Inserting a Person object with a Long key.
+              stmt.setInt(1, i);
+              stmt.setString(2, "John Smith");
+              stmt.setInt(3, 25);
+
+              stmt.execute();
+        }
+
+        conn.close();
+
+        // Beyond this point, all data is guaranteed to be flushed into the cache.
+
+        //end::time-based-flushing[]
+    }
+    
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/JDBCThinDriver.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/JDBCThinDriver.java
new file mode 100644
index 0000000..6ab942b
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/JDBCThinDriver.java
@@ -0,0 +1,237 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+
+import org.apache.ignite.IgniteJdbcThinDataSource;
+
+public class JDBCThinDriver {
+
+    Connection getConnection() throws ClassNotFoundException, SQLException {
+
+        // tag::get-connection[]
+        // Register JDBC driver.
+        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
+
+        // Open the JDBC connection.
+        Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
+
+        // end::get-connection[]
+        return conn;
+    }
+
+    void multipleEndpoints() throws ClassNotFoundException, SQLException {
+        // tag::multiple-endpoints[]
+
+        // Register JDBC Driver.
+        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
+
+        // Open the JDBC connection passing several connection endpoints.
+        Connection conn = DriverManager
+                .getConnection("jdbc:ignite:thin://192.168.0.50:101,192.188.5.40:101,192.168.10.230:101");
+        // end::multiple-endpoints[]
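+
+        // The driver connects to one of the listed endpoints and fails over to
+        // the remaining ones if the current connection is lost.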
+
+    }
+
+    public Connection connectionFromDatasource() throws SQLException {
+        // tag::connection-from-data-source[]
+        // Or open connection via DataSource.
+        IgniteJdbcThinDataSource ids = new IgniteJdbcThinDataSource();
+        ids.setUrl("jdbc:ignite:thin://127.0.0.1");
+        ids.setDistributedJoins(true);
+
+        Connection conn = ids.getConnection();
+        // end::connection-from-data-source[]
+
+        return conn;
+    }
+
+    void select() throws ClassNotFoundException, SQLException {
+
+        Connection conn = getConnection();
+
+        // tag::select[]
+        // Query people with specific age using prepared statement.
+        PreparedStatement stmt = conn.prepareStatement("select name, age from Person where age = ?");
+
+        stmt.setInt(1, 30);
+
+        ResultSet rs = stmt.executeQuery();
+
+        while (rs.next()) {
+            String name = rs.getString("name");
+            int age = rs.getInt("age");
+            // ...
+        }
+        // end::select[]
+        conn.close();
+    }
+
+    void insert() throws ClassNotFoundException, SQLException {
+
+        Connection conn = getConnection();
+        // tag::insert[]
+        // Insert a Person with a Long key.
+        PreparedStatement stmt = conn
+                .prepareStatement("INSERT INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");
+
+        stmt.setInt(1, 1);
+        stmt.setString(2, "John Smith");
+        stmt.setInt(3, 25);
+
+        stmt.execute();
+        // end::insert[]
+        conn.close();
+    }
+
+    void merge() throws ClassNotFoundException, SQLException {
+
+        Connection conn = getConnection();
+        // tag::merge[]
+        // Merge a Person with a Long key.
+        PreparedStatement stmt = conn
+                .prepareStatement("MERGE INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");
+
+        stmt.setInt(1, 1);
+        stmt.setString(2, "John Smith");
+        stmt.setInt(3, 25);
+
+        stmt.executeUpdate();
+        // end::merge[]
+        conn.close();
+    }
+
+    void partitionAwareness() throws ClassNotFoundException, SQLException {
+
+        // tag::partition-awareness[]
+        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
+
+        Connection conn = DriverManager
+                .getConnection("jdbc:ignite:thin://192.168.0.50,192.188.5.40,192.168.10.230?partitionAwareness=true");
+        // end::partition-awareness[]
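+
+        // With partition awareness enabled, the driver opens connections to all
+        // listed nodes and routes each query to the node that owns the data.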
+
+        conn.close();
+    }
+
+    void handleException() throws ClassNotFoundException, SQLException {
+
+        Connection conn = getConnection();
+        // tag::handle-exception[]
+        PreparedStatement ps;
+
+        try {
+            ps = conn.prepareStatement("INSERT INTO Person(id, name, age) values (1, 'John', 'unparseableString')");
+        } catch (SQLException e) {
+            switch (e.getSQLState()) {
+            case "0700B":
+                System.out.println("Conversion failure");
+                break;
+
+            case "42000":
+                System.out.println("Parsing error");
+                break;
+
+            default:
+                System.out.println("Unprocessed error: " + e.getSQLState());
+                break;
+            }
+        }
+        // end::handle-exception[]
+    }
+
+    void ssl() throws ClassNotFoundException, SQLException {
+
+        //tag::ssl[]
+        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
+
+        String keyStore = "keystore/node.jks";
+        String keyStorePassword = "123456";
+
+        String trustStore = "keystore/trust.jks";
+        String trustStorePassword = "123456";
+
+        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1?sslMode=require"
+                + "&sslClientCertificateKeyStoreUrl=" + keyStore + "&sslClientCertificateKeyStorePassword="
+                + keyStorePassword + "&sslTrustCertificateKeyStoreUrl=" + trustStore
+                + "&sslTrustCertificateKeyStorePassword=" + trustStorePassword)) {
+
+            ResultSet rs = conn.createStatement().executeQuery("select 10");
+            rs.next();
+            System.out.println(rs.getInt(1));
+        } catch (Exception e) {
+            e.printStackTrace();
+        }
+
+        //end::ssl[]
+
+    }
+
+    void errorCodes() throws ClassNotFoundException, SQLException {
+        //tag::error-codes[]
+        // Register JDBC driver.
+        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
+
+        // Open JDBC connection.
+        Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
+
+        PreparedStatement ps;
+
+        try {
+            ps = conn.prepareStatement("INSERT INTO Person(id, name, age) values (1," + "'John', 'unparseableString')");
+        } catch (SQLException e) {
+            switch (e.getSQLState()) {
+            case "0700B":
+                System.out.println("Conversion failure");
+                break;
+
+            case "42000":
+                System.out.println("Parsing error");
+                break;
+
+            default:
+                System.out.println("Unprocessed error: " + e.getSQLState());
+                break;
+            }
+        }
+
+        //end::error-codes[]
+    }
+
+    public static void main(String[] args) throws Exception {
+        try {
+            JDBCThinDriver j = new JDBCThinDriver();
+
+            j.ssl();
+        } catch (Exception e) {
+            e.printStackTrace();
+            System.exit(1);
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/JavaThinClient.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/JavaThinClient.java
new file mode 100644
index 0000000..5c3a855
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/JavaThinClient.java
@@ -0,0 +1,427 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+import javax.cache.Cache;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteBinary;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.binary.BinaryObject;
+import org.apache.ignite.cache.CacheAtomicityMode;
+import org.apache.ignite.cache.CacheMode;
+import org.apache.ignite.cache.CacheWriteSynchronizationMode;
+import org.apache.ignite.cache.query.FieldsQueryCursor;
+import org.apache.ignite.cache.query.Query;
+import org.apache.ignite.cache.query.QueryCursor;
+import org.apache.ignite.cache.query.ScanQuery;
+import org.apache.ignite.cache.query.SqlFieldsQuery;
+import org.apache.ignite.client.ClientAuthenticationException;
+import org.apache.ignite.client.ClientCache;
+import org.apache.ignite.client.ClientCacheConfiguration;
+import org.apache.ignite.client.ClientCluster;
+import org.apache.ignite.client.ClientClusterGroup;
+import org.apache.ignite.client.ClientConnectionException;
+import org.apache.ignite.client.ClientException;
+import org.apache.ignite.client.ClientTransaction;
+import org.apache.ignite.client.ClientTransactions;
+import org.apache.ignite.client.IgniteClient;
+import org.apache.ignite.client.SslMode;
+import org.apache.ignite.client.SslProtocol;
+import org.apache.ignite.cluster.ClusterState;
+import org.apache.ignite.configuration.ClientConfiguration;
+import org.apache.ignite.configuration.ClientConnectorConfiguration;
+import org.apache.ignite.configuration.ClientTransactionConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.configuration.ThinClientConfiguration;
+import org.apache.ignite.ssl.SslContextFactory;
+import org.apache.ignite.transactions.TransactionConcurrency;
+import org.apache.ignite.transactions.TransactionIsolation;
+import org.junit.jupiter.api.Test;
+
+public class JavaThinClient {
+
+    public static void main(String[] args) throws Exception {
+        JavaThinClient test = new JavaThinClient();
+
+        ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
+        try (IgniteClient client = Ignition.startClient(cfg)) {
+            test.scanQuery(client);
+        }
+    }
+
+    @Test
+    void clusterConnection() {
+        // tag::clusterConfiguration[]
+        ClientConnectorConfiguration clientConnectorCfg = new ClientConnectorConfiguration();
+        // Set a port range from 10000 to 10005
+        clientConnectorCfg.setPort(10000);
+        clientConnectorCfg.setPortRange(5);
+
+        IgniteConfiguration cfg = new IgniteConfiguration().setClientConnectorConfiguration(clientConnectorCfg);
+
+        // Start a node
+        Ignite ignite = Ignition.start(cfg);
+        // end::clusterConfiguration[]
+        
+        ignite.close();
+    }
+
+    void tx(IgniteClient client) {
+        //tag::tx[]
+        ClientCache<Integer, String> cache = client.cache("my_transactional_cache");
+
+        ClientTransactions tx = client.transactions();
+
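+        // Start a transaction with the default concurrency and isolation level.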
+        try (ClientTransaction t = tx.txStart()) {
+            cache.put(1, "new value");
+
+            t.commit();
+        }
+        //end::tx[]
+    }
+
+    @Test
+    void transactionConfiguration() {
+
+        // tag::transaction-config[]
+        ClientConfiguration cfg = new ClientConfiguration();
+        cfg.setAddresses("localhost:10800");
+
+        cfg.setTransactionConfiguration(new ClientTransactionConfiguration().setDefaultTxTimeout(10000)
+                .setDefaultTxConcurrency(TransactionConcurrency.OPTIMISTIC)
+                .setDefaultTxIsolation(TransactionIsolation.REPEATABLE_READ));
+
+        IgniteClient client = Ignition.startClient(cfg);
+
+        // end::transaction-config[]
+
+        ClientCache<Integer, String> cache = client.createCache(
+                new ClientCacheConfiguration().setName("test").setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));
+
+        // tag::tx-custom-properties[]
+        ClientTransactions tx = client.transactions();
+        try (ClientTransaction t = tx.txStart(TransactionConcurrency.OPTIMISTIC,
+                TransactionIsolation.REPEATABLE_READ)) {
+            cache.put(1, "new value");
+            t.commit();
+        }
+        // end::tx-custom-properties[]
+
+    }
+
+    void connection() throws Exception {
+
+        // tag::clientConnection[]
+        ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
+        try (IgniteClient client = Ignition.startClient(cfg)) {
+            ClientCache<Integer, String> cache = client.cache("myCache");
+            // Get data from the cache
+        }
+        // end::clientConnection[]
+    }
+
+    void connectionToMultipleNodes() throws Exception {
+        // tag::connect-to-many-nodes[]
+        try (IgniteClient client = Ignition.startClient(new ClientConfiguration().setAddresses("node1_address:10800",
+                "node2_address:10800", "node3_address:10800"))) {
+        } catch (ClientConnectionException ex) {
+            // All the servers are unavailable
+        }
+        // end::connect-to-many-nodes[]
+    }
+
+    ClientCache<Integer, String> createCache(IgniteClient client) {
+        // tag::getOrCreateCache[]
+        ClientCacheConfiguration cacheCfg = new ClientCacheConfiguration().setName("References")
+                .setCacheMode(CacheMode.REPLICATED)
+                .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
+
+        ClientCache<Integer, String> cache = client.getOrCreateCache(cacheCfg);
+        // end::getOrCreateCache[]
+        return cache;
+    }
+
+    void keyValueOperations(ClientCache<Integer, String> cache) {
+        // tag::key-value-operations[]
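+        // Populate the cache with 100 entries: 1 -> "1", 2 -> "2", and so on.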
+        Map<Integer, String> data = IntStream.rangeClosed(1, 100).boxed()
+                .collect(Collectors.toMap(i -> i, Object::toString));
+
+        cache.putAll(data);
+
+        assert !cache.replace(1, "2", "3");
+        assert "1".equals(cache.get(1));
+        assert cache.replace(1, "1", "3");
+        assert "3".equals(cache.get(1));
+
+        cache.put(101, "101");
+
+        cache.removeAll(data.keySet());
+        assert cache.size() == 1;
+        assert "101".equals(cache.get(101));
+
+        cache.removeAll();
+        assert 0 == cache.size();
+        // end::key-value-operations[]
+        System.out.println("done");
+    }
+
+    void scanQuery(IgniteClient client) {
+
+        // tag::scan-query[]
+        ClientCache<Integer, Person> personCache = client.getOrCreateCache("personCache");
+
+        Query<Cache.Entry<Integer, Person>> qry = new ScanQuery<Integer, Person>(
+                (i, p) -> p.getName().contains("Smith"));
+
+        try (QueryCursor<Cache.Entry<Integer, Person>> cur = personCache.query(qry)) {
+            for (Cache.Entry<Integer, Person> entry : cur) {
+                // Process the entry ...
+            }
+        }
+        // end::scan-query[]
+    }
+
+    void binary(IgniteClient client) {
+        // tag::binary-example[]
+        IgniteBinary binary = client.binary();
+
+        BinaryObject val = binary.builder("Person").setField("id", 1, int.class).setField("name", "Joe", String.class)
+                .build();
+
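+        // Access the cache in binary mode, so that values are not deserialized on reads.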
+        ClientCache<Integer, BinaryObject> cache = client.cache("persons").withKeepBinary();
+
+        cache.put(1, val);
+
+        BinaryObject value = cache.get(1);
+        // end::binary-example[]
+    }
+
+    void sql(IgniteClient client) {
+
+        // tag::sql[]
+        client.query(new SqlFieldsQuery(String.format(
+                "CREATE TABLE IF NOT EXISTS Person (id INT PRIMARY KEY, name VARCHAR) WITH \"VALUE_TYPE=%s\"",
+                Person.class.getName())).setSchema("PUBLIC")).getAll();
+
+        int key = 1;
+        Person val = new Person(key, "Person 1");
+
+        client.query(new SqlFieldsQuery("INSERT INTO Person(id, name) VALUES(?, ?)").setArgs(val.getId(), val.getName())
+                .setSchema("PUBLIC")).getAll();
+
+        FieldsQueryCursor<List<?>> cursor = client
+                .query(new SqlFieldsQuery("SELECT name from Person WHERE id=?").setArgs(key).setSchema("PUBLIC"));
+
+        // Get the results; the getAll() method closes the cursor, so you do not
+        // have to call cursor.close().
+        List<List<?>> results = cursor.getAll();
+
+        results.stream().findFirst().ifPresent(columns -> {
+            System.out.println("name = " + columns.get(0));
+        });
+        // end::sql[]
+    }
+
+    public static final String KEYSTORE = "keystore/client.jks";
+    public static final String TRUSTSTORE = "keystore/trust.jks";
+
+    @Test
+    void configureSSL() throws Exception {
+
+        // tag::ssl-configuration[]
+        ClientConfiguration clientCfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
+
+        clientCfg.setSslMode(SslMode.REQUIRED).setSslClientCertificateKeyStorePath(KEYSTORE)
+                .setSslClientCertificateKeyStoreType("JKS").setSslClientCertificateKeyStorePassword("123456")
+                .setSslTrustCertificateKeyStorePath(TRUSTSTORE).setSslTrustCertificateKeyStorePassword("123456")
+                .setSslTrustCertificateKeyStoreType("JKS").setSslKeyAlgorithm("SunX509").setSslTrustAll(false)
+                .setSslProtocol(SslProtocol.TLS);
+
+        try (IgniteClient client = Ignition.startClient(clientCfg)) {
+            // ...
+        }
+        // end::ssl-configuration[]
+    }
+
+    void configureSslInCluster() {
+        // tag::cluster-ssl-configuration[]
+        IgniteConfiguration igniteCfg = new IgniteConfiguration();
+
+        ClientConnectorConfiguration clientCfg = new ClientConnectorConfiguration();
+        clientCfg.setSslEnabled(true);
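+        // Do not reuse the node's global SSL context; a dedicated one is configured below.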
+        clientCfg.setUseIgniteSslContextFactory(false);
+        SslContextFactory sslContextFactory = new SslContextFactory();
+        sslContextFactory.setKeyStoreFilePath("/path/to/server.jks");
+        sslContextFactory.setKeyStorePassword("123456".toCharArray());
+
+        sslContextFactory.setTrustStoreFilePath("/path/to/trust.jks");
+        sslContextFactory.setTrustStorePassword("123456".toCharArray());
+
+        clientCfg.setSslContextFactory(sslContextFactory);
+
+        igniteCfg.setClientConnectorConfiguration(clientCfg);
+
+        // end::cluster-ssl-configuration[]
+    }
+
+    void clusterUseGlobalSslContext() {
+        // tag::use-global-ssl[]
+        ClientConnectorConfiguration clientConnectionCfg = new ClientConnectorConfiguration();
+        clientConnectionCfg.setSslEnabled(true);
+        // end::use-global-ssl[]
+    }
+
+    void clientAuthentication() throws Exception {
+        // tag::client-authentication[]
+        ClientConfiguration clientCfg = new ClientConfiguration().setAddresses("127.0.0.1:10800").setUserName("joe")
+                .setUserPassword("passw0rd!");
+
+        try (IgniteClient client = Ignition.startClient(clientCfg)) {
+            // ...
+        } catch (ClientAuthenticationException e) {
+            // Handle authentication failure
+        }
+        // end::client-authentication[]
+    }
+
+    void resultsToMap(ClientCache<Integer, Person> cache) {
+        // tag::results-to-map[]
+        Query<Cache.Entry<Integer, Person>> qry = new ScanQuery<Integer, Person>(
+                (i, p) -> p.getName().contains("Smith"));
+
+        try (QueryCursor<Cache.Entry<Integer, Person>> cur = cache.query(qry)) {
+            // Collecting the results into a map removes the duplicates
+            Map<Integer, Person> res = cur.getAll().stream()
+                    .collect(Collectors.toMap(Cache.Entry::getKey, Cache.Entry::getValue));
+        }
+        // end::results-to-map[]
+    }
+
+
+    void viewSystemView() {
+        //tag::system-views[]
+        ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
+
+        try (IgniteClient igniteClient = Ignition.startClient(cfg)) {
+
+            // Get the ID of the first node.
+            UUID nodeId = (UUID) igniteClient.query(new SqlFieldsQuery("SELECT * from NODES").setSchema("IGNITE"))
+                    .getAll().iterator().next().get(0);
+
+            double cpuLoad = (Double) igniteClient
+                    .query(new SqlFieldsQuery("select CUR_CPU_LOAD * 100 from NODE_METRICS where NODE_ID = ? ")
+                            .setSchema("IGNITE").setArgs(nodeId.toString()))
+                    .getAll().iterator().next().get(0);
+
+            System.out.println("node's cpu load = " + cpuLoad);
+
+        } catch (ClientException e) {
+            System.err.println(e.getMessage());
+        } catch (Exception e) {
+            System.err.format("Unexpected failure: %s\n", e);
+        }
+
+        //end::system-views[]
+    }
+
+    void partitionAwareness() throws Exception {
+        //tag::partition-awareness[]
+        ClientConfiguration cfg = new ClientConfiguration()
+                .setAddresses("node1_address:10800", "node2_address:10800", "node3_address:10800")
+                .setPartitionAwarenessEnabled(true);
+
+        try (IgniteClient client = Ignition.startClient(cfg)) {
+            ClientCache<Integer, String> cache = client.cache("myCache");
+            // Put, get or remove data from the cache...
+        } catch (ClientException e) {
+            System.err.println(e.getMessage());
+        }
+        //end::partition-awareness[]
+    }
+
+
+    @Test
+    void clientCluster() throws Exception {
+        ClientConfiguration clientCfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
+        //tag::client-cluster[]
+        try (IgniteClient client = Ignition.startClient(clientCfg)) {
+            ClientCluster clientCluster = client.cluster();
+            clientCluster.state(ClusterState.ACTIVE);
+        }
+        //end::client-cluster[]
+    }
+
+    void clientClusterGroups() throws Exception {
+        ClientConfiguration clientCfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
+        //tag::client-cluster-groups[]
+        try (IgniteClient client = Ignition.startClient(clientCfg)) {
+            ClientClusterGroup serversInDc1 = client.cluster().forServers().forAttribute("dc", "dc1");
+            serversInDc1.nodes().forEach(n -> System.out.println("Node ID: " + n.id()));
+        }
+        //end::client-cluster-groups[]
+    }
+
+    void clientCompute() throws Exception {
+        //tag::client-compute-setup[]
+        ThinClientConfiguration thinClientCfg = new ThinClientConfiguration()
+                .setMaxActiveComputeTasksPerConnection(100);
+
+        ClientConnectorConfiguration clientConnectorCfg = new ClientConnectorConfiguration()
+                .setThinClientConfiguration(thinClientCfg);
+
+        IgniteConfiguration igniteCfg = new IgniteConfiguration()
+                .setClientConnectorConfiguration(clientConnectorCfg);
+
+        Ignite ignite = Ignition.start(igniteCfg);
+        //end::client-compute-setup[]
+
+        ClientConfiguration clientCfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
+        //tag::client-compute-task[]
+        try (IgniteClient client = Ignition.startClient(clientCfg)) {
+            // Suppose that the MyTask class is already deployed in the cluster
+            client.compute().execute(
+                MyTask.class.getName(), "argument");
+        }
+        //end::client-compute-task[]
+    }
+
+    void clientServices() throws Exception {
+        ClientConfiguration clientCfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
+        //tag::client-services[]
+        try (IgniteClient client = Ignition.startClient(clientCfg)) {
+            // Executing the service named MyService
+            // that is already deployed in the cluster.
+            client.services().serviceProxy(
+                "MyService", MyService.class).myServiceMethod();
+        }
+        //end::client-services[]
+    }
+
+    private static class MyTask {
+    }
+
+    private interface MyService {
+        void myServiceMethod();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/JobScheduling.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/JobScheduling.java
new file mode 100644
index 0000000..b0c6e16
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/JobScheduling.java
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteException;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.compute.ComputeJob;
+import org.apache.ignite.compute.ComputeJobAdapter;
+import org.apache.ignite.compute.ComputeJobResult;
+import org.apache.ignite.compute.ComputeTaskSession;
+import org.apache.ignite.compute.ComputeTaskSplitAdapter;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.resources.TaskSessionResource;
+import org.apache.ignite.spi.collision.fifoqueue.FifoQueueCollisionSpi;
+import org.apache.ignite.spi.collision.priorityqueue.PriorityQueueCollisionSpi;
+
+public class JobScheduling {
+
+    void fifo() {
+        // tag::fifo[]
+        FifoQueueCollisionSpi colSpi = new FifoQueueCollisionSpi();
+
+        // Execute jobs sequentially, one at a time,
+        // by setting parallel job number to 1.
+        colSpi.setParallelJobsNumber(1);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Override default collision SPI.
+        cfg.setCollisionSpi(colSpi);
+
+        // Start a node.
+        Ignite ignite = Ignition.start(cfg);
+
+        // end::fifo[]
+        ignite.close();
+    }
+
+    void priority() {
+        // tag::priority[]
+        PriorityQueueCollisionSpi colSpi = new PriorityQueueCollisionSpi();
+
+        // Change the parallel job number if needed.
+        // Default is number of cores times 2.
+        colSpi.setParallelJobsNumber(5);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Override default collision SPI.
+        cfg.setCollisionSpi(colSpi);
+
+        // Start a node.
+        Ignite ignite = Ignition.start(cfg);
+
+        // end::priority[]
+        ignite.close();
+    }
+    
+    //tag::task-priority[]
+    public class MyUrgentTask extends ComputeTaskSplitAdapter<Object, Object> {
+        // Auto-injected task session.
+        @TaskSessionResource
+        private ComputeTaskSession taskSes = null;
+
+        @Override
+        protected Collection<ComputeJob> split(int gridSize, Object arg) {
+            // Set high task priority.
+            taskSes.setAttribute("grid.task.priority", 10);
+
+            List<ComputeJob> jobs = new ArrayList<>(gridSize);
+
+            for (int i = 1; i <= gridSize; i++) {
+                jobs.add(new ComputeJobAdapter() {
+
+                    @Override
+                    public Object execute() throws IgniteException {
+
+                        //your implementation goes here
+
+                        return null;
+                    }
+                });
+            }
+
+            // These jobs will be executed with higher priority.
+            return jobs;
+        }
+
+        @Override
+        public Object reduce(List<ComputeJobResult> results) throws IgniteException {
+            return null;
+        }
+    }
+
+    //end::task-priority[]
+
+    public static void main(String[] args) {
+        JobScheduling js = new JobScheduling();
+        js.fifo();
+        js.priority();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/LoadBalancing.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/LoadBalancing.java
new file mode 100644
index 0000000..ac14a5e
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/LoadBalancing.java
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.Collections;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.events.EventType;
+import org.apache.ignite.spi.collision.jobstealing.JobStealingCollisionSpi;
+import org.apache.ignite.spi.failover.jobstealing.JobStealingFailoverSpi;
+import org.apache.ignite.spi.loadbalancing.roundrobin.RoundRobinLoadBalancingSpi;
+import org.apache.ignite.spi.loadbalancing.weightedrandom.WeightedRandomLoadBalancingSpi;
+
+public class LoadBalancing {
+
+    void roundRobin() {
+        // tag::load-balancing[]
+        RoundRobinLoadBalancingSpi spi = new RoundRobinLoadBalancingSpi();
+        spi.setPerTask(true);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        // these events are required for the per-task mode
+        cfg.setIncludeEventTypes(EventType.EVT_TASK_FINISHED, EventType.EVT_TASK_FAILED, EventType.EVT_JOB_MAPPED);
+
+        // Override default load balancing SPI.
+        cfg.setLoadBalancingSpi(spi);
+
+        // Start a node.
+        Ignite ignite = Ignition.start(cfg);
+        // end::load-balancing[]
+
+        ignite.close();
+    }
+
+    void weighted() {
+
+        // tag::weighted[]
+        WeightedRandomLoadBalancingSpi spi = new WeightedRandomLoadBalancingSpi();
+
+        // Configure SPI to use the weighted random load balancing algorithm.
+        spi.setUseWeights(true);
+
+        // Set weight for the local node.
+        spi.setNodeWeight(10);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Override default load balancing SPI.
+        cfg.setLoadBalancingSpi(spi);
+
+        // Start a node.
+        Ignite ignite = Ignition.start(cfg);
+        // end::weighted[]
+
+        ignite.close();
+    }
+
+    void jobStealing() {
+        //tag::job-stealing[]
+        JobStealingCollisionSpi spi = new JobStealingCollisionSpi();
+
+        // Configure number of waiting jobs
+        // in the queue for job stealing.
+        spi.setWaitJobsThreshold(10);
+
+        // Configure message expire time (in milliseconds).
+        spi.setMessageExpireTime(1000);
+
+        // Configure stealing attempts number.
+        spi.setMaximumStealingAttempts(10);
+
+        // Configure number of active jobs that are allowed to execute
+        // in parallel. This number should usually be equal to the number
+        // of threads in the pool (default is 100).
+        spi.setActiveJobsThreshold(50);
+
+        // Enable stealing.
+        spi.setStealingEnabled(true);
+
+        // Set stealing attribute to steal from/to nodes that have it.
+        spi.setStealingAttributes(Collections.singletonMap("node.segment", "foobar"));
+
+        // Enable `JobStealingFailoverSpi`
+        JobStealingFailoverSpi failoverSpi = new JobStealingFailoverSpi();
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Override default Collision SPI.
+        cfg.setCollisionSpi(spi);
+
+        cfg.setFailoverSpi(failoverSpi);
+        //end::job-stealing[]
+        Ignition.start(cfg).close();
+    }
+
+    public static void main(String[] args) {
+        LoadBalancing lb = new LoadBalancing();
+
+        lb.roundRobin();
+        lb.weighted();
+    }
+
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Logging.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Logging.java
new file mode 100644
index 0000000..e96998d
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Logging.java
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCheckedException;
+import org.apache.ignite.IgniteLogger;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.logger.jcl.JclLogger;
+import org.apache.ignite.logger.log4j.Log4JLogger;
+import org.apache.ignite.logger.log4j2.Log4J2Logger;
+import org.apache.ignite.logger.slf4j.Slf4jLogger;
+
+public class Logging {
+
+    void log4j() throws IgniteCheckedException {
+
+        // tag::log4j[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        IgniteLogger log = new Log4JLogger("log4j-config.xml");
+
+        cfg.setGridLogger(log);
+
+        // Start a node.
+        try (Ignite ignite = Ignition.start(cfg)) {
+            ignite.log().info("Info Message Logged!");
+        }
+        // end::log4j[]
+    }
+
+    void log4j2() throws IgniteCheckedException {
+        // tag::log4j2[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        IgniteLogger log = new Log4J2Logger("log4j2-config.xml");
+
+        cfg.setGridLogger(log);
+
+        // Start a node.
+        try (Ignite ignite = Ignition.start(cfg)) {
+            ignite.log().info("Info Message Logged!");
+        }
+        // end::log4j2[]
+    }
+
+    void jcl() {
+        //tag::jcl[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        cfg.setGridLogger(new JclLogger());
+
+        // Start a node.
+        try (Ignite ignite = Ignition.start(cfg)) {
+            ignite.log().info("Info Message Logged!");
+        }
+        //end::jcl[]
+    }
+
+    void slf4j() {
+        //tag::slf4j[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        cfg.setGridLogger(new Slf4jLogger());
+
+        // Start a node.
+        try (Ignite ignite = Ignition.start(cfg)) {
+            ignite.log().info("Info Message Logged!");
+        }
+        //end::slf4j[]
+    }
+
+    public static void main(String[] args) throws IgniteCheckedException {
+        Logging logging = new Logging();
+
+        logging.jcl();
+        logging.slf4j();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/MapReduce.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/MapReduce.java
new file mode 100644
index 0000000..a22f3b0
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/MapReduce.java
@@ -0,0 +1,170 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.LinkedList;
+import java.util.List;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCompute;
+import org.apache.ignite.IgniteException;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.compute.ComputeJob;
+import org.apache.ignite.compute.ComputeJobAdapter;
+import org.apache.ignite.compute.ComputeJobContext;
+import org.apache.ignite.compute.ComputeJobResult;
+import org.apache.ignite.compute.ComputeJobResultPolicy;
+import org.apache.ignite.compute.ComputeJobSibling;
+import org.apache.ignite.compute.ComputeTask;
+import org.apache.ignite.compute.ComputeTaskSession;
+import org.apache.ignite.compute.ComputeTaskSessionFullSupport;
+import org.apache.ignite.compute.ComputeTaskSplitAdapter;
+import org.apache.ignite.resources.JobContextResource;
+import org.apache.ignite.resources.TaskSessionResource;
+
+public class MapReduce {
+
+    private static class CharacterCountTask extends ComputeTaskSplitAdapter<String, Integer> {
+        // 1. Splits the received string into words
+        // 2. Creates a child job for each word
+        // 3. Sends the jobs to other nodes for processing.
+        @Override
+        public List<ComputeJob> split(int gridSize, String arg) {
+            String[] words = arg.split(" ");
+
+            List<ComputeJob> jobs = new ArrayList<>(words.length);
+
+            for (final String word : words) {
+                jobs.add(new ComputeJobAdapter() {
+                    @Override
+                    public Object execute() {
+                        System.out.println(">>> This compute job calculates the length of the word '" + word + "'.");
+
+                        // Return the number of letters in the word.
+                        return word.length();
+                    }
+                });
+            }
+
+            return jobs;
+        }
+
+        @Override
+        public Integer reduce(List<ComputeJobResult> results) {
+            int sum = 0;
+
+            for (ComputeJobResult res : results)
+                sum += res.<Integer>getData();
+
+            return sum;
+        }
+    }
+
+    void executeComputeTask() {
+
+        // tag::execute-compute-task[]
+        Ignite ignite = Ignition.start();
+
+        IgniteCompute compute = ignite.compute();
+
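+        // Split the task into jobs, distribute them across the cluster,
+        // and reduce the individual results into the total count.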
+        int count = compute.execute(new CharacterCountTask(), "Hello Grid Enabled World!");
+
+        System.out.println("Total character count: " + count);
+        // end::execute-compute-task[]
+
+        ignite.close();
+    }
+
+    // tag::session[]
+    @ComputeTaskSessionFullSupport
+    private static class TaskSessionAttributesTask extends ComputeTaskSplitAdapter<Object, Object> {
+
+        @Override
+        protected Collection<? extends ComputeJob> split(int gridSize, Object arg) {
+            Collection<ComputeJob> jobs = new LinkedList<>();
+
+            // Generate jobs by number of nodes in the grid.
+            for (int i = 0; i < gridSize; i++) {
+                jobs.add(new ComputeJobAdapter(arg) {
+                    // Auto-injected task session.
+                    @TaskSessionResource
+                    private ComputeTaskSession ses;
+
+                    // Auto-injected job context.
+                    @JobContextResource
+                    private ComputeJobContext jobCtx;
+
+                    @Override
+                    public Object execute() {
+                        // Perform STEP1.
+                        // ...
+
+                        // Tell other jobs that STEP1 is complete.
+                        ses.setAttribute(jobCtx.getJobId(), "STEP1");
+
+                        // Wait for other jobs to complete STEP1.
+                        for (ComputeJobSibling sibling : ses.getJobSiblings())
+                            try {
+                                ses.waitForAttribute(sibling.getJobId(), "STEP1", 0);
+                            } catch (InterruptedException e) {
+                                e.printStackTrace();
+                            }
+
+                        // Move on to STEP2.
+                        // ...
+
+                        // tag::exclude[]
+                        /*
+                        // end::exclude[]
+                        return ... 
+
+                        // tag::exclude[]
+                        */
+                        return new Object();
+                        // end::exclude[]
+                    }
+                });
+            }
+            return jobs;
+        }
+
+        @Override
+        public Object reduce(List<ComputeJobResult> results) {
+            // No-op.
+            return null;
+        }
+        
+        //tag::exclude[]
+        //tag::failover[]
+        @Override
+        public ComputeJobResultPolicy result(ComputeJobResult res, List<ComputeJobResult> rcvd) {
+            IgniteException err = res.getException();
+
+            if (err != null)
+                return ComputeJobResultPolicy.FAILOVER;
+
+            // If there is no exception, wait for all job results.
+            return ComputeJobResultPolicy.WAIT;
+        }
+        //end::failover[]
+        //end::exclude[]
+    }
+
+    // end::session[]
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/MyLifecycleBean.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/MyLifecycleBean.java
new file mode 100644
index 0000000..79a35a9
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/MyLifecycleBean.java
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.lifecycle.LifecycleBean;
+import org.apache.ignite.lifecycle.LifecycleEventType;
+import org.apache.ignite.resources.IgniteInstanceResource;
+
+//tag::bean[]
+public class MyLifecycleBean implements LifecycleBean {
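+    // The Ignite instance is automatically injected into the bean.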
+    @IgniteInstanceResource
+    public Ignite ignite;
+
+    @Override
+    public void onLifecycleEvent(LifecycleEventType evt) {
+        if (evt == LifecycleEventType.AFTER_NODE_START) {
+
+            System.out.format("After the node (consistentId = %s) starts.\n",
+                    ignite.cluster().localNode().consistentId());
+
+        }
+    }
+}
+
+//end::bean[]
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/MyNodeFilter.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/MyNodeFilter.java
new file mode 100644
index 0000000..20346c0
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/MyNodeFilter.java
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.Collection;
+
+import org.apache.ignite.cluster.ClusterNode;
+import org.apache.ignite.lang.IgnitePredicate;
+
+//tag::node-filter-example[]
+public class MyNodeFilter implements IgnitePredicate<ClusterNode> {
+
+    // fill the collection with consistent IDs of the nodes you want to exclude
+    private Collection<String> nodesToExclude;
+
+    public MyNodeFilter(Collection<String> nodesToExclude) {
+        this.nodesToExclude = nodesToExclude;
+    }
+
+    @Override
+    public boolean apply(ClusterNode node) {
+        return nodesToExclude == null || !nodesToExclude.contains(node.consistentId());
+    }
+}
+
+//end::node-filter-example[]
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/NearCache.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/NearCache.java
new file mode 100644
index 0000000..37d26f6
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/NearCache.java
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.NearCacheConfiguration;
+
+public class NearCache {
+
+    public static void main(String[] args) {
+        Ignite ignite = Ignition.start();
+
+        // tag::nearCacheConfiguration[]
+        // Create a near-cache configuration for "myCache".
+        NearCacheConfiguration<Integer, Integer> nearCfg = new NearCacheConfiguration<>();
+
+        // Use LRU eviction policy to automatically evict entries
+        // from near-cache whenever it reaches 100_000 entries
+        nearCfg.setNearEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100_000));
+
+        CacheConfiguration<Integer, Integer> cacheCfg = new CacheConfiguration<Integer, Integer>("myCache");
+
+        cacheCfg.setNearConfiguration(nearCfg);
+
+        // Create a distributed cache on server nodes 
+        IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(cacheCfg);
+        // end::nearCacheConfiguration[]
+
+        ignite.close();
+    }
+
+    public void createDynamically() {
+
+        Ignition.setClientMode(true);
+
+        Ignite ignite = Ignition.start();
+
+        // tag::createNearCacheDynamically[]
+        // Create a near-cache configuration
+        NearCacheConfiguration<Integer, String> nearCfg = new NearCacheConfiguration<>();
+
+        // Use LRU eviction policy to automatically evict entries
+        // from near-cache, whenever it reaches 100_000 in size.
+        nearCfg.setNearEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100_000));
+
+        // get the cache named "myCache" and create a near cache for it
+        IgniteCache<Integer, String> cache = ignite.getOrCreateNearCache("myCache", nearCfg);
+
+        String value = cache.get(1);
+        // end::createNearCacheDynamically[]
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/NetworkConfiguration.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/NetworkConfiguration.java
new file mode 100644
index 0000000..3c5a7c5
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/NetworkConfiguration.java
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
+import org.junit.jupiter.api.Test;
+
+public class NetworkConfiguration {
+
+    @Test
+    void discoveryConfigExample() {
+        //tag::discovery[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
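+        // Override the default discovery port (47500).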
+        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi().setLocalPort(8300);
+
+        cfg.setDiscoverySpi(discoverySpi);
+        Ignite ignite = Ignition.start(cfg);
+        //end::discovery[]
+        ignite.close();
+    }
+
+    @Test
+    void failureDetectionTimeout() {
+        //tag::failure-detection-timeout[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
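+        // Timeout for detecting failures of server nodes.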
+        cfg.setFailureDetectionTimeout(5_000);
+
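+        // A separate (typically longer) timeout for client nodes.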
+        cfg.setClientFailureDetectionTimeout(10_000);
+        //end::failure-detection-timeout[]
+        Ignition.start(cfg).close();
+    }
+
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/NodeFilter.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/NodeFilter.java
new file mode 100644
index 0000000..c512534
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/NodeFilter.java
@@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.util.AttributeNodeFilter;
+import org.junit.jupiter.api.Test;
+
+public class NodeFilter {
+
+    @Test
+    void setNodeFilter() {
+
+        //tag::cache-node-filter[]
+        Ignite ignite = Ignition.start();
+
+        CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");
+
+        Collection<String> consistentIdSet = new HashSet<>();
+        consistentIdSet.add("consistentId1");
+
+        // The cache will not be hosted on the specified nodes.
+        cacheCfg.setNodeFilter(new MyNodeFilter(consistentIdSet));
+
+        ignite.createCache(cacheCfg);
+        //end::cache-node-filter[]
+
+        ignite.close();
+    }
+
+    @Test
+    void attributeNodeFilter() {
+        //tag::add-attribute[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        Map<String, Object> attributes = new HashMap<>();
+        attributes.put("host_myCache", true);
+        cfg.setUserAttributes(attributes);
+
+        Ignite ignite = Ignition.start(cfg);
+
+        //end::add-attribute[]
+
+        //tag::attribute-node-filter[]
+        CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");
+
+        // The attribute value must match the one set in the node configuration above.
+        cacheCfg.setNodeFilter(new AttributeNodeFilter("host_myCache", true));
+        //end::attribute-node-filter[]
+
+        ignite.createCache(cacheCfg);
+
+        ignite.close();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ODBC.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ODBC.java
new file mode 100644
index 0000000..0d7d60f
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ODBC.java
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.configuration.ClientConnectorConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+
+public class ODBC {
+
+    void enableODBC() {
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        ClientConnectorConfiguration clientConnectorCfg = new ClientConnectorConfiguration();
+
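+        // Bind the client connector to a specific host and port range.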
+        clientConnectorCfg.setHost("127.0.0.1");
+        clientConnectorCfg.setPort(12345);
+        clientConnectorCfg.setPortRange(2);
+        clientConnectorCfg.setMaxOpenCursorsPerConnection(512);
+        clientConnectorCfg.setSocketSendBufferSize(65536);
+        clientConnectorCfg.setSocketReceiveBufferSize(131072);
+        clientConnectorCfg.setThreadPoolSize(4);
+
+        cfg.setClientConnectorConfiguration(clientConnectorCfg);
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/OnHeapCaching.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/OnHeapCaching.java
new file mode 100644
index 0000000..1d43216
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/OnHeapCaching.java
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.configuration.CacheConfiguration;
+
+public class OnHeapCaching {
+
+    public static void onHeapCacheExample() {
+        //tag::onHeap[]
+        CacheConfiguration cfg = new CacheConfiguration();
+        cfg.setName("myCache");
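+        // Keep an on-heap copy of entries in addition to the off-heap page memory.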
+        cfg.setOnheapCacheEnabled(true);
+        //end::onHeap[]
+
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/PartitionLossPolicyExample.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/PartitionLossPolicyExample.java
new file mode 100644
index 0000000..7300426
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/PartitionLossPolicyExample.java
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.Arrays;
+import java.util.Collection;
+
+import javax.cache.CacheException;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.PartitionLossPolicy;
+import org.apache.ignite.cluster.ClusterNode;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.events.CacheRebalancingEvent;
+import org.apache.ignite.events.Event;
+import org.apache.ignite.events.EventType;
+import org.apache.ignite.internal.processors.cache.CacheInvalidStateException;
+import org.apache.ignite.lang.IgnitePredicate;
+import org.junit.jupiter.api.Test;
+
+public class PartitionLossPolicyExample {
+    @Test
+    void configure() {
+
+        //tag::cfg[]
+        CacheConfiguration cacheCfg = new CacheConfiguration("myCache");
+
+        cacheCfg.setPartitionLossPolicy(PartitionLossPolicy.READ_ONLY_SAFE);
+
+        //end::cfg[]
+    }
+
+    @Test
+    void events() {
+        //tag::events[]
+        Ignite ignite = Ignition.start();
+
+        IgnitePredicate<Event> locLsnr = evt -> {
+            CacheRebalancingEvent cacheEvt = (CacheRebalancingEvent) evt;
+
+            int lostPart = cacheEvt.partition();
+
+            ClusterNode node = cacheEvt.discoveryNode();
+
+            System.out.println(lostPart);
+
+            return true; // Continue listening.
+        };
+
+        ignite.events().localListen(locLsnr, EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST);
+
+        //end::events[]
+
+        ignite.close();
+
+    }
+
+    @Test
+    void reset() {
+        Ignite ignite = Ignition.start();
+        //tag::reset[]
+        ignite.resetLostPartitions(Arrays.asList("myCache"));
+        //end::reset[]
+        ignite.close();
+    }
+
+    void getLostPartitions(Ignite ignite) {
+        //tag::lost-partitions[]
+        IgniteCache<Integer, String> cache = ignite.cache("myCache");
+
+        Collection<Integer> lostPartitions = cache.lostPartitions();
+        System.out.println("Lost partitions: " + lostPartitions);
+
+        //end::lost-partitions[]
+    }
+
+    @Test
+    void exception() {
+
+        try (Ignite ignite = Ignition.start()) {
+            ignite.getOrCreateCache("myCache");
+
+            //tag::exception[]
+            IgniteCache<Integer, Integer> cache = ignite.cache("myCache");
+
+            try {
+                Integer value = cache.get(3);
+                System.out.println(value);
+            } catch (CacheException e) {
+                if (e.getCause() instanceof CacheInvalidStateException) {
+                    System.out.println(e.getCause().getMessage());
+                } else {
+                    e.printStackTrace();
+                }
+            }
+            //end::exception[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/PeerClassLoading.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/PeerClassLoading.java
new file mode 100644
index 0000000..46c4d89
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/PeerClassLoading.java
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.DeploymentMode;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.junit.jupiter.api.Test;
+
+public class PeerClassLoading {
+
+    @Test
+    void configure() {
+
+        //tag::configure[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        cfg.setPeerClassLoadingEnabled(true);
+        cfg.setDeploymentMode(DeploymentMode.CONTINUOUS);
+
+        // Start the node.
+        Ignite ignite = Ignition.start(cfg);
+        //end::configure[]
+
+        ignite.close();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/PerformingTransactions.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/PerformingTransactions.java
new file mode 100644
index 0000000..ce38d28
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/PerformingTransactions.java
@@ -0,0 +1,178 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import javax.cache.CacheException;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.IgniteTransactions;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.CacheAtomicityMode;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.configuration.TransactionConfiguration;
+import org.apache.ignite.transactions.Transaction;
+import org.apache.ignite.transactions.TransactionConcurrency;
+import org.apache.ignite.transactions.TransactionDeadlockException;
+import org.apache.ignite.transactions.TransactionIsolation;
+import org.apache.ignite.transactions.TransactionOptimisticException;
+import org.apache.ignite.transactions.TransactionTimeoutException;
+
+public class PerformingTransactions {
+
+    public static void main(String[] args) {
+        deadlockDetectionExample();
+    }
+
+    public static void runAll() {
+        enablingTransactions();
+        executingTransactionsExample();
+        optimisticTransactionExample();
+        deadlockDetectionExample();
+
+    }
+
+    public static void enablingTransactions() {
+        // tag::enabling[]
+        CacheConfiguration cacheCfg = new CacheConfiguration();
+
+        cacheCfg.setName("cacheName");
+
+        cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        cfg.setCacheConfiguration(cacheCfg);
+
+        // Optional transaction configuration. Configure TM lookup here.
+        TransactionConfiguration txCfg = new TransactionConfiguration();
+
+        cfg.setTransactionConfiguration(txCfg);
+
+        // Start a node
+        Ignition.start(cfg);
+        // end::enabling[]
+        Ignition.ignite().close();
+    }
+
+    public static void executingTransactionsExample() {
+        try (Ignite i = Ignition.start()) {
+            CacheConfiguration<String, Integer> cfg = new CacheConfiguration<>();
+            cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
+            cfg.setName("myCache");
+            IgniteCache<String, Integer> cache = i.getOrCreateCache("myCache");
+            cache.put("Hello", 1);
+            // tag::executing[]
+            Ignite ignite = Ignition.ignite();
+
+            IgniteTransactions transactions = ignite.transactions();
+
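+            // txStart() with no arguments uses the defaults from TransactionConfiguration:
+            // PESSIMISTIC concurrency and REPEATABLE_READ isolation.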
+            try (Transaction tx = transactions.txStart()) {
+                Integer hello = cache.get("Hello");
+
+                if (hello == 1)
+                    cache.put("Hello", 11);
+
+                cache.put("World", 22);
+
+                tx.commit();
+            }
+            // end::executing[]
+            System.out.println(cache.get("Hello"));
+            System.out.println(cache.get("World"));
+        }
+    }
+
+    public static void optimisticTransactionExample() {
+        try (Ignite ignite = Ignition.start()) {
+            // tag::optimistic[]
+            CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>();
+            cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
+            cfg.setName("myCache");
+            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
+
+            // Retry the transaction a limited number of times.
+            int retryCount = 10;
+            int retries = 0;
+
+            // Start a transaction in the optimistic mode with the serializable isolation
+            // level.
+            while (retries < retryCount) {
+                retries++;
+                try (Transaction tx = ignite.transactions().txStart(TransactionConcurrency.OPTIMISTIC,
+                        TransactionIsolation.SERIALIZABLE)) {
+                    // modify cache entries as part of this transaction.
+                    cache.put(1, "foo");
+                    cache.put(2, "bar");
+                    // commit the transaction
+                    tx.commit();
+
+                    // the transaction succeeded. Leave the while loop.
+                    break;
+                } catch (TransactionOptimisticException e) {
+                    // Transaction has failed. Retry.
+                }
+            }
+            // end::optimistic[]
+            System.out.println(cache.get(1));
+        }
+    }
+
+    void timeout() {
+        // tag::timeout[]
+        // Create a configuration
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Create a Transaction configuration
+        TransactionConfiguration txCfg = new TransactionConfiguration();
+
+        // Set the timeout to 20 seconds
+        txCfg.setTxTimeoutOnPartitionMapExchange(20000);
+
+        cfg.setTransactionConfiguration(txCfg);
+
+        // Start the node
+        Ignition.start(cfg);
+        // end::timeout[]
+    }
+
+    public static void deadlockDetectionExample() {
+        try (Ignite ignite = Ignition.start()) {
+
+            // tag::deadlock[]
+            CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>();
+            cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
+            cfg.setName("myCache");
+            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
+
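+            // txStart(concurrency, isolation, timeout, txSize): the 300 ms timeout
+            // arms deadlock detection; txSize = 0 means the number of entries is unknown.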
+            try (Transaction tx = ignite.transactions().txStart(TransactionConcurrency.PESSIMISTIC,
+                    TransactionIsolation.READ_COMMITTED, 300, 0)) {
+                cache.put(1, "1");
+                cache.put(2, "1");
+
+                tx.commit();
+            } catch (CacheException e) {
+                if (e.getCause() instanceof TransactionTimeoutException
+                        && e.getCause().getCause() instanceof TransactionDeadlockException)
+
+                    System.out.println(e.getCause().getCause().getMessage());
+            }
+            // end::deadlock[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/PersistenceTuning.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/PersistenceTuning.java
new file mode 100644
index 0000000..4371a95
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/PersistenceTuning.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.DataStorageConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+
+public class PersistenceTuning {
+
+    void pageSize() {
+
+        // tag::page-size[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Durable memory configuration.
+        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
+
+        // Changing the page size to 8 KB.
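+        // Note that the page size cannot be changed after data has been persisted
+        // and should match the page size of the underlying storage device.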
+        storageCfg.setPageSize(8192);
+
+        cfg.setDataStorageConfiguration(storageCfg);
+        // end::page-size[]
+    }
+
+    void separateWal() {
+        // tag::separate-wal[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Configuring Native Persistence.
+        DataStorageConfiguration storeCfg = new DataStorageConfiguration();
+
+        // Sets a path to the root directory where data and indexes are to be persisted.
+        // It's assumed the directory is on a separated SSD.
+        storeCfg.setStoragePath("/ssd/storage");
+
+        // Sets a path to the directory where WAL is stored.
+        // It's assumed the directory is on a separated HDD.
+        storeCfg.setWalPath("/wal");
+
+        // Sets a path to the directory where WAL archive is stored.
+        // The directory is on the same HDD as the WAL.
+        storeCfg.setWalArchivePath("/wal/archive");
+
+        cfg.setDataStorageConfiguration(storeCfg);
+
+        // Starting the node.
+        Ignite ignite = Ignition.start(cfg);
+
+        // end::separate-wal[]
+
+        ignite.close();
+    }
+
+    void writesThrottling() {
+        // tag::throttling[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Configuring Native Persistence.
+        DataStorageConfiguration storeCfg = new DataStorageConfiguration();
+
+        // Enabling the writes throttling.
+        storeCfg.setWriteThrottlingEnabled(true);
+
+        cfg.setDataStorageConfiguration(storeCfg);
+        // Starting the node.
+        Ignite ignite = Ignition.start(cfg);
+        // end::throttling[]
+
+        ignite.close();
+    }
+
+    void checkpointingBufferSize() {
+        // tag::checkpointing-buffer-size[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Configuring Native Persistence.
+        DataStorageConfiguration storeCfg = new DataStorageConfiguration();
+
+        // Enabling the writes throttling.
+        storeCfg.setWriteThrottlingEnabled(true);
+
+        // Increasing the buffer size to 1 GB.
+        storeCfg.getDefaultDataRegionConfiguration().setCheckpointPageBufferSize(1024L * 1024 * 1024);
+
+        cfg.setDataStorageConfiguration(storeCfg);
+
+        // Starting the node.
+        Ignite ignite = Ignition.start(cfg);
+        // end::checkpointing-buffer-size[]
+        ignite.close();
+    }
+
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Person.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Person.java
new file mode 100644
index 0000000..b15f2a8
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Person.java
@@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+public class Person {
+    private long id;
+
+    private String name;
+
+    private int age;
+
+    private float salary;
+
+    private int orgId;
+
+    public Person(long id, String name) {
+        this.id = id;
+        this.name = name;
+    }
+
+    public long getId() {
+        return id;
+    }
+
+    public void setId(long id) {
+        this.id = id;
+    }
+
+    public String getName() {
+        return name;
+    }
+
+    public void setName(String name) {
+        this.name = name;
+    }
+
+    public int getAge() {
+        return age;
+    }
+
+    public void setAge(int age) {
+        this.age = age;
+    }
+
+    public float getSalary() {
+        return salary;
+    }
+
+    public void setSalary(float salary) {
+        this.salary = salary;
+    }
+
+    public int getOrgId() {
+        return orgId;
+    }
+
+    public void setOrgId(int orgId) {
+        this.orgId = orgId;
+    }
+
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/QueryEntitiesExampleWithAnnotation.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/QueryEntitiesExampleWithAnnotation.java
new file mode 100644
index 0000000..9e9ea33
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/QueryEntitiesExampleWithAnnotation.java
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.io.Serializable;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.query.annotations.QuerySqlField;
+import org.apache.ignite.configuration.CacheConfiguration;
+
+public class QueryEntitiesExampleWithAnnotation {
+    // tag::query-entity-annotation[]
+    class Person implements Serializable {
+        /** Indexed field. Will be visible to the SQL engine. */
+        @QuerySqlField(index = true)
+        private long id;
+
+        /** Queryable field. Will be visible to the SQL engine. */
+        @QuerySqlField
+        private String name;
+
+        /** Will NOT be visible to the SQL engine. */
+        private int age;
+
+        /**
+         * Indexed field sorted in descending order. Will be visible to the SQL engine.
+         */
+        @QuerySqlField(index = true, descending = true)
+        private float salary;
+    }
+
+    public static void main(String[] args) {
+        Ignite ignite = Ignition.start();
+        CacheConfiguration<Long, Person> personCacheCfg = new CacheConfiguration<Long, Person>();
+        personCacheCfg.setName("Person");
+
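+        // Register the key/value pair; the SQL engine scans the Person class for
+        // @QuerySqlField annotations to build the table columns and indexes.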
+        personCacheCfg.setIndexedTypes(Long.class, Person.class);
+        IgniteCache<Long, Person> cache = ignite.createCache(personCacheCfg);
+    }
+
+    // end::query-entity-annotation[]
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/QueryEntityExample.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/QueryEntityExample.java
new file mode 100644
index 0000000..7826710
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/QueryEntityExample.java
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.io.Serializable;
+import java.util.Arrays;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.QueryEntity;
+import org.apache.ignite.cache.QueryIndex;
+import org.apache.ignite.configuration.CacheConfiguration;
+
+public class QueryEntityExample {
+    // tag::query-entity[]
+    class Person implements Serializable {
+        private long id;
+
+        private String name;
+
+        private int age;
+
+        private float salary;
+    }
+
+    public static void main(String[] args) {
+        Ignite ignite = Ignition.start();
+        CacheConfiguration<Long, Person> personCacheCfg = new CacheConfiguration<Long, Person>();
+        personCacheCfg.setName("Person");
+
+        QueryEntity queryEntity = new QueryEntity(Long.class, Person.class)
+                .addQueryField("id", Long.class.getName(), null)
+                .addQueryField("age", Integer.class.getName(), null)
+                .addQueryField("salary", Float.class.getName(), null)
+                .addQueryField("name", String.class.getName(), null);
+
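+        // An ascending index on 'id' and a descending index on 'salary'
+        // (the boolean argument of QueryIndex is the sort order: false = descending).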
+        queryEntity.setIndexes(Arrays.asList(new QueryIndex("id"), new QueryIndex("salary", false)));
+
+        personCacheCfg.setQueryEntities(Arrays.asList(queryEntity));
+
+        IgniteCache<Long, Person> cache = ignite.createCache(personCacheCfg);
+    }
+    // end::query-entity[]
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/RESTConfiguration.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/RESTConfiguration.java
new file mode 100644
index 0000000..303bb70
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/RESTConfiguration.java
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.configuration.ConnectorConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+
+public class RESTConfiguration {
+
+    void config() {
+        //tag::http-configuration[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        cfg.setConnectorConfiguration(new ConnectorConfiguration().setJettyPath("jetty.xml"));
+        //end::http-configuration[]
+    }
+
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/RebalancingConfiguration.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/RebalancingConfiguration.java
new file mode 100644
index 0000000..9dc61e9
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/RebalancingConfiguration.java
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.CacheRebalanceMode;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+
+public class RebalancingConfiguration {
+
+    public static void main(String[] args) {
+        RebalancingConfiguration rc = new RebalancingConfiguration();
+
+        rc.configure();
+    }
+
+    void configure() {
+        //tag::ignite-config[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        //tag::pool-size[]
+
+        cfg.setRebalanceThreadPoolSize(4);
+        //end::pool-size[]
+
+        CacheConfiguration cacheCfg = new CacheConfiguration("mycache");
+        //tag::mode[]
+
+        cacheCfg.setRebalanceMode(CacheRebalanceMode.SYNC);
+
+        //end::mode[]
+        //tag::throttling[]
+
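+        // Limit the rebalancing network pressure: 2 MB batches with a 100 ms pause between them.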
+        cfg.setRebalanceBatchSize(2 * 1024 * 1024);
+        cfg.setRebalanceThrottle(100);
+
+        //end::throttling[]
+        cfg.setCacheConfiguration(cacheCfg);
+
+        // Start a node.
+        Ignite ignite = Ignition.start(cfg);
+        //end::ignite-config[]
+
+        ignite.close();
+    }
+
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Schemas.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Schemas.java
new file mode 100644
index 0000000..03e1525
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Schemas.java
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.configuration.SqlConfiguration;
+import org.junit.jupiter.api.Test;
+
+public class Schemas {
+
+    @Test
+    void config() {
+        //tag::custom-schemas[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        SqlConfiguration sqlCfg = new SqlConfiguration();
+
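+        // These schemas are created on startup in addition to the default PUBLIC schema.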
+        sqlCfg.setSqlSchemas("MY_SCHEMA", "MY_SECOND_SCHEMA");
+
+        cfg.setSqlConfiguration(sqlCfg);
+        //end::custom-schemas[]
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Security.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Security.java
new file mode 100644
index 0000000..e987d6b
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Security.java
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.DataStorageConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.ssl.SslContextFactory;
+import org.junit.jupiter.api.Test;
+
+public class Security {
+
+    @Test
+    void ssl() {
+        // tag::ssl-context-factory[]
+        IgniteConfiguration igniteCfg = new IgniteConfiguration();
+
+        SslContextFactory factory = new SslContextFactory();
+
+        factory.setKeyStoreFilePath("keystore/node.jks");
+        factory.setKeyStorePassword("123456".toCharArray());
+        factory.setTrustStoreFilePath("keystore/trust.jks");
+        factory.setTrustStorePassword("123456".toCharArray());
+        factory.setProtocol("TLSv1.3");
+
+        igniteCfg.setSslContextFactory(factory);
+        // end::ssl-context-factory[]
+
+        Ignition.start(igniteCfg).close();
+    }
+
+    @Test
+    void disableCertificateValidation() {
+        // tag::disable-validation[]
+        IgniteConfiguration igniteCfg = new IgniteConfiguration();
+
+        SslContextFactory factory = new SslContextFactory();
+
+        factory.setKeyStoreFilePath("keystore/node.jks");
+        factory.setKeyStorePassword("123456".toCharArray());
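+
+        // Disables certificate validation entirely: use for testing only.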
+        factory.setTrustManagers(SslContextFactory.getDisabledTrustManager());
+
+        igniteCfg.setSslContextFactory(factory);
+        // end::disable-validation[]
+
+        Ignition.start(igniteCfg).close();
+    }
+
+    @Test
+    void igniteAuthentication() {
+
+        // tag::ignite-authentication[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Ignite persistence configuration.
+        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
+
+        // Enabling the persistence.
+        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
+
+        // Applying settings.
+        cfg.setDataStorageConfiguration(storageCfg);
+
+        // Enable authentication
+        cfg.setAuthenticationEnabled(true);
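+
+        // On the first cluster start, a default superuser account is created
+        // automatically (login "ignite", password "ignite").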
+
+        Ignite ignite = Ignition.start(cfg);
+        // end::ignite-authentication[]
+
+        ignite.close();
+    }
+
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Snapshots.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Snapshots.java
new file mode 100644
index 0000000..36c352d
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Snapshots.java
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.io.File;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.IgniteCheckedException;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.internal.util.typedef.internal.U;
+
+public class Snapshots {
+
+    void configuration() throws IgniteCheckedException {
+        //tag::config[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        File exSnpDir = U.resolveWorkDirectory(U.defaultWorkDirectory(), "ex_snapshots", true);
+
+        cfg.setSnapshotPath(exSnpDir.getAbsolutePath());
+        //end::config[]
+
+        Ignite ignite = Ignition.start(cfg);
+
+        //tag::create[]
+        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<Long, String>("snapshot-cache");
+
+        try (IgniteCache<Long, String> cache = ignite.getOrCreateCache(ccfg)) {
+            cache.put(1, "Maxim");
+
+            // Start the snapshot operation; get() blocks until it completes cluster-wide.
+            ignite.snapshot().createSnapshot("snapshot_02092020").get();
+        }
+        finally {
+            ignite.destroyCache(ccfg.getName());
+        }
+        //end::create[]
+
+        ignite.close();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/SqlAPI.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/SqlAPI.java
new file mode 100644
index 0000000..a6dab34
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/SqlAPI.java
@@ -0,0 +1,195 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.io.Serializable;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.cache.query.QueryCursor;
+import org.apache.ignite.cache.query.SqlFieldsQuery;
+import org.apache.ignite.cache.query.annotations.QuerySqlField;
+import org.apache.ignite.cache.query.annotations.QuerySqlFunction;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.junit.jupiter.api.Test;
+
+public class SqlAPI {
+
+    class Person implements Serializable {
+        /** Indexed field. Will be visible to the SQL engine. */
+        @QuerySqlField(index = true)
+        private long id;
+
+        /** Queryable field. Will be visible to the SQL engine. */
+        @QuerySqlField
+        private String name;
+
+        /** Will NOT be visible to the SQL engine. */
+        private int age;
+
+        /**
+         * Indexed field sorted in descending order. Will be visible to the SQL engine.
+         */
+        @QuerySqlField(index = true, descending = true)
+        private float salary;
+    }
+
+    void cancellingByTimeout() {
+        // tag::set-timeout[]
+        SqlFieldsQuery query = new SqlFieldsQuery("SELECT * from Person");
+
+        // Set the query execution timeout to 10 seconds.
+        query.setTimeout(10_000, TimeUnit.MILLISECONDS);
+
+        // end::set-timeout[]
+    }
+
+    void cancellingByCallingClose(IgniteCache<Long, Person> cache) {
+        // tag::cancel-by-closing[]
+        SqlFieldsQuery query = new SqlFieldsQuery("SELECT * FROM Person");
+
+        // Executing the query
+        QueryCursor<List<?>> cursor = cache.query(query);
+
+        // Halting the query, which might still be in progress.
+        cursor.close();
+
+        // end::cancel-by-closing[]
+    }
+
+    void enforceJoinOrder() {
+
+        // tag::enforceJoinOrder[]
+        SqlFieldsQuery query = new SqlFieldsQuery(
+                "SELECT * FROM TABLE_A, TABLE_B USE INDEX(HASH_JOIN_IDX)"
+                        + " WHERE TABLE_A.column1 = TABLE_B.column2").setEnforceJoinOrder(true);
+        // end::enforceJoinOrder[]
+    }
+
+    void simpleQuery(Ignite ignite) {
+        // tag::simple-query[]
+        IgniteCache<Long, Person> cache = ignite.cache("Person");
+
+        SqlFieldsQuery sql = new SqlFieldsQuery(
+                "select concat(firstName, ' ', lastName) from Person");
+
+        // Iterate over the result set.
+        try (QueryCursor<List<?>> cursor = cache.query(sql)) {
+            for (List<?> row : cursor)
+                System.out.println("personName=" + row.get(0));
+        }
+        // end::simple-query[]
+    }
+
+    void insert(Ignite ignite) {
+        // tag::insert[]
+        IgniteCache<Long, Person> cache = ignite.cache("personCache");
+
+        cache.query(
+                new SqlFieldsQuery("INSERT INTO Person(id, firstName, lastName) VALUES(?, ?, ?)")
+                        .setArgs(1L, "John", "Smith"))
+                .getAll();
+
+        // end::insert[]
+
+    }
+
+    void update(Ignite ignite) {
+        // tag::update[]
+        IgniteCache<Long, Person> cache = ignite.cache("personCache");
+
+        cache.query(new SqlFieldsQuery("UPDATE Person set lastName = ? " + "WHERE id >= ?")
+                .setArgs("Jones", 2L)).getAll();
+        // end::update[]
+    }
+
+    void delete(Ignite ignite) {
+        // tag::delete[]
+        IgniteCache<Long, Person> cache = ignite.cache("personCache");
+
+        cache.query(new SqlFieldsQuery("DELETE FROM Person " + "WHERE id >= ?").setArgs(2L))
+                .getAll();
+
+        // end::delete[]
+    }
+
+    void merge(Ignite ignite) {
+        // tag::merge[]
+        IgniteCache<Long, Person> cache = ignite.cache("personCache");
+
+        cache.query(new SqlFieldsQuery("MERGE INTO Person(id, firstName, lastName)"
+                + " values (1, 'John', 'Smith'), (5, 'Mary', 'Jones')")).getAll();
+        // end::merge[]
+    }
+
+    void setSchema() {
+        // tag::set-schema[]
+        SqlFieldsQuery sql = new SqlFieldsQuery("select name from City").setSchema("PERSON");
+        // end::set-schema[]
+    }
+
+    void createTable(Ignite ignite) {
+        // tag::create-table[]
+        IgniteCache<Long, Person> cache = ignite
+                .getOrCreateCache(new CacheConfiguration<Long, Person>().setName("Person"));
+
+        // Creating City table.
+        cache.query(new SqlFieldsQuery(
+                "CREATE TABLE City (id int primary key, name varchar, region varchar)")).getAll();
+        // end::create-table[]
+    }
+
+    // tag::sql-function-example[]
+    static class SqlFunctions {
+        @QuerySqlFunction
+        public static int sqr(int x) {
+            return x * x;
+        }
+    }
+
+    // end::sql-function-example[]
+
+    IgniteCache setSqlFunction(Ignite ignite) {
+
+        // tag::sql-function-config[]
+        // Preparing a cache configuration.
+        CacheConfiguration cfg = new CacheConfiguration("myCache");
+
+        // Registering the class that contains custom SQL functions.
+        cfg.setSqlFunctionClasses(SqlFunctions.class);
+
+        IgniteCache cache = ignite.createCache(cfg);
+        // end::sql-function-config[]
+
+        return cache;
+    }
+
+    void call(IgniteCache cache) {
+
+        // tag::sql-function-query[]
+        // Preparing the query that uses the custom defined 'sqr' function.
+        SqlFieldsQuery query = new SqlFieldsQuery("SELECT name FROM myCache WHERE sqr(size) > 100");
+
+        // Executing the query.
+        cache.query(query).getAll();
+
+        // end::sql-function-query[]
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/SqlTransactions.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/SqlTransactions.java
new file mode 100644
index 0000000..0bd3895
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/SqlTransactions.java
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.cache.CacheAtomicityMode;
+import org.apache.ignite.configuration.CacheConfiguration;
+
+public class SqlTransactions {
+
+    void enableMVCC() {
+        //tag::enable[]
+        CacheConfiguration cacheCfg = new CacheConfiguration<>();
+        cacheCfg.setName("myCache");
+
+        cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT);
+
+        //end::enable[]
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Swap.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Swap.java
new file mode 100644
index 0000000..98a9c32
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Swap.java
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.configuration.DataRegionConfiguration;
+import org.apache.ignite.configuration.DataStorageConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+
+public class Swap {
+
+    public void configureSwap() {
+        //tag::swap[]
+        // Node configuration.
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Durable Memory configuration.
+        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
+
+        // Creating a new data region.
+        DataRegionConfiguration regionCfg = new DataRegionConfiguration();
+
+        // Region name.
+        regionCfg.setName("500MB_Region");
+
+        // Setting initial RAM size.
+        regionCfg.setInitialSize(100L * 1024 * 1024);
+
+        // Setting region max size equal to the physical RAM size (5 GB).
+        regionCfg.setMaxSize(5L * 1024 * 1024 * 1024);
+
+        // Enable swap space.
+        regionCfg.setSwapPath("/path/to/some/directory");
+
+        // Setting the data region configuration.
+        storageCfg.setDataRegionConfigurations(regionCfg);
+
+        // Applying the new configuration.
+        cfg.setDataStorageConfiguration(storageCfg);
+        //end::swap[]
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/TDE.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/TDE.java
new file mode 100644
index 0000000..c973dcd
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/TDE.java
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.lang.IgniteFuture;
+import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionSpi;
+import org.junit.jupiter.api.Test;
+
+public class TDE {
+
+    @Test
+    void configuration() {
+        //tag::config[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        KeystoreEncryptionSpi encSpi = new KeystoreEncryptionSpi();
+
+        encSpi.setKeyStorePath("/home/user/ignite-keystore.jks");
+        encSpi.setKeyStorePassword("secret".toCharArray());
+
+        cfg.setEncryptionSpi(encSpi);
+        //end::config[]
+
+        Ignite ignite = Ignition.start(cfg);
+
+        //tag::cache[]
+        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<Long, String>("encrypted-cache");
+
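+        // Data, indexes, and WAL records of this cache are encrypted on disk.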
+        ccfg.setEncryptionEnabled(true);
+
+        ignite.createCache(ccfg);
+
+        //end::cache[]
+
+        //tag::master-key-rotation[]
+        // Gets the current master key name.
+        String name = ignite.encryption().getMasterKeyName();
+
+        // Starts master key change process.
+        IgniteFuture<Void> future = ignite.encryption().changeMasterKey("newMasterKeyName");
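+
+        // Block until every node has re-encrypted its cache group keys
+        // with the new master key.
+        future.get();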
+        //end::master-key-rotation[]
+
+        ignite.close();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/TcpIpDiscovery.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/TcpIpDiscovery.java
new file mode 100644
index 0000000..be7d426
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/TcpIpDiscovery.java
@@ -0,0 +1,335 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.io.PrintWriter;
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.sql.SQLFeatureNotSupportedException;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.logging.Logger;
+import javax.sql.DataSource;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
+import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
+import org.apache.ignite.spi.discovery.tcp.ipfinder.jdbc.TcpDiscoveryJdbcIpFinder;
+import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder;
+import org.apache.ignite.spi.discovery.tcp.ipfinder.sharedfs.TcpDiscoverySharedFsIpFinder;
+import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.spi.discovery.tcp.ipfinder.zk.TcpDiscoveryZookeeperIpFinder;
+import org.junit.jupiter.api.Test;
+
+public class TcpIpDiscovery {
+
+    @Test
+    void multicastIpFinderDemo() {
+        //tag::multicast[]
+        TcpDiscoverySpi spi = new TcpDiscoverySpi();
+
+        TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
+
+        ipFinder.setMulticastGroup("228.10.10.157");
+
+        spi.setIpFinder(ipFinder);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Override default discovery SPI.
+        cfg.setDiscoverySpi(spi);
+
+        // Start the node.
+        Ignite ignite = Ignition.start(cfg);
+        //end::multicast[]
+        ignite.close();
+    }
+
+    @Test
+    void failureDetectionTimeout() {
+        //tag::failure-detection-timeout[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        cfg.setFailureDetectionTimeout(5_000);
+
+        cfg.setClientFailureDetectionTimeout(10_000);
+        //end::failure-detection-timeout[]
+    }
+
+    @Test
+    void staticIpFinderDemo() {
+        //tag::static[]
+        TcpDiscoverySpi spi = new TcpDiscoverySpi();
+
+        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
+
+        // Set initial IP addresses.
+        // Note that you can optionally specify a port or a port range.
+        ipFinder.setAddresses(Arrays.asList("1.2.3.4", "1.2.3.5:47500..47509"));
+
+        spi.setIpFinder(ipFinder);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Override default discovery SPI.
+        cfg.setDiscoverySpi(spi);
+
+        // Start a node.
+        Ignite ignite = Ignition.start(cfg);
+        //end::static[]
+        ignite.close();
+    }
+
+    @Test
+    void multicastAndStaticDemo() {
+        //tag::multicastAndStatic[]
+        TcpDiscoverySpi spi = new TcpDiscoverySpi();
+
+        TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
+
+        // Set Multicast group.
+        ipFinder.setMulticastGroup("228.10.10.157");
+
+        // Set initial IP addresses.
+        // Note that you can optionally specify a port or a port range.
+        ipFinder.setAddresses(Arrays.asList("1.2.3.4", "1.2.3.5:47500..47509"));
+
+        spi.setIpFinder(ipFinder);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Override default discovery SPI.
+        cfg.setDiscoverySpi(spi);
+
+        // Start a node.
+        Ignite ignite = Ignition.start(cfg);
+        //end::multicastAndStatic[]
+        ignite.close();
+    }
+
+    @Test
+    void isolatedClustersDemo() {
+        //tag::isolated1[]
+        IgniteConfiguration firstCfg = new IgniteConfiguration();
+
+        firstCfg.setIgniteInstanceName("first");
+
+        // Explicitly configure TCP discovery SPI to provide list of initial nodes
+        // from the first cluster.
+        TcpDiscoverySpi firstDiscoverySpi = new TcpDiscoverySpi();
+
+        // Initial local port to listen to.
+        firstDiscoverySpi.setLocalPort(48500);
+
+        // Changing local port range. This is an optional action.
+        firstDiscoverySpi.setLocalPortRange(20);
+
+        TcpDiscoveryVmIpFinder firstIpFinder = new TcpDiscoveryVmIpFinder();
+
+        // Addresses and port range of the nodes from the first cluster.
+        // 127.0.0.1 can be replaced with actual IP addresses or host names.
+        // The port range is optional.
+        firstIpFinder.setAddresses(Collections.singletonList("127.0.0.1:48500..48520"));
+
+        // Overriding IP finder.
+        firstDiscoverySpi.setIpFinder(firstIpFinder);
+
+        // Explicitly configure TCP communication SPI by changing local port number for
+        // the nodes from the first cluster.
+        TcpCommunicationSpi firstCommSpi = new TcpCommunicationSpi();
+
+        firstCommSpi.setLocalPort(48100);
+
+        // Overriding discovery SPI.
+        firstCfg.setDiscoverySpi(firstDiscoverySpi);
+
+        // Overriding communication SPI.
+        firstCfg.setCommunicationSpi(firstCommSpi);
+
+        // Starting a node.
+        Ignition.start(firstCfg);
+        //end::isolated1[]
+
+        //tag::isolated2[]
+        IgniteConfiguration secondCfg = new IgniteConfiguration();
+
+        secondCfg.setIgniteInstanceName("second");
+
+        // Explicitly configure TCP discovery SPI to provide list of initial nodes
+        // from the second cluster.
+        TcpDiscoverySpi secondDiscoverySpi = new TcpDiscoverySpi();
+
+        // Initial local port to listen to.
+        secondDiscoverySpi.setLocalPort(49500);
+
+        // Changing local port range. This is an optional action.
+        secondDiscoverySpi.setLocalPortRange(20);
+
+        TcpDiscoveryVmIpFinder secondIpFinder = new TcpDiscoveryVmIpFinder();
+
+        // Addresses and port range of the nodes from the second cluster.
+        // 127.0.0.1 can be replaced with actual IP addresses or host names.
+        // The port range is optional.
+        secondIpFinder.setAddresses(Collections.singletonList("127.0.0.1:49500..49520"));
+
+        // Overriding IP finder.
+        secondDiscoverySpi.setIpFinder(secondIpFinder);
+
+        // Explicitly configure TCP communication SPI by changing local port number for
+        // the nodes from the second cluster.
+        TcpCommunicationSpi secondCommSpi = new TcpCommunicationSpi();
+
+        secondCommSpi.setLocalPort(49100);
+
+        // Overriding discovery SPI.
+        secondCfg.setDiscoverySpi(secondDiscoverySpi);
+
+        // Overriding communication SPI.
+        secondCfg.setCommunicationSpi(secondCommSpi);
+
+        // Starting a node.
+        Ignition.start(secondCfg);
+        //end::isolated2[]
+
+        Ignition.ignite("first").close();
+        Ignition.ignite("second").close();
+    }
+
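+    // A no-op DataSource stub that exists only to make the JDBC IP finder snippet
+    // compile; a real deployment would supply a database-backed DataSource.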
+    static class MySampleDataSource implements DataSource {
+
+        @Override
+        public Connection getConnection() throws SQLException {
+            return null;
+        }
+
+        @Override
+        public Connection getConnection(String username, String password) throws SQLException {
+            return null;
+        }
+
+        @Override
+        public PrintWriter getLogWriter() throws SQLException {
+            return null;
+        }
+
+        @Override
+        public void setLogWriter(PrintWriter out) throws SQLException {
+
+        }
+
+        @Override
+        public void setLoginTimeout(int seconds) throws SQLException {
+
+        }
+
+        @Override
+        public int getLoginTimeout() throws SQLException {
+            return 0;
+        }
+
+        @Override
+        public Logger getParentLogger() throws SQLFeatureNotSupportedException {
+            return null;
+        }
+
+        @Override
+        public <T> T unwrap(Class<T> iface) throws SQLException {
+            return null;
+        }
+
+        @Override
+        public boolean isWrapperFor(Class<?> iface) throws SQLException {
+            return false;
+        }
+    }
+
+    @Test
+    void jdbcIpFinderDemo() {
+        //tag::jdbc[]
+        TcpDiscoverySpi spi = new TcpDiscoverySpi();
+
+        // Configure your DataSource.
+        DataSource someDs = new MySampleDataSource();
+
+        TcpDiscoveryJdbcIpFinder ipFinder = new TcpDiscoveryJdbcIpFinder();
+
+        ipFinder.setDataSource(someDs);
+
+        spi.setIpFinder(ipFinder);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Override default discovery SPI.
+        cfg.setDiscoverySpi(spi);
+
+        // Start the node.
+        Ignite ignite = Ignition.start(cfg);
+        //end::jdbc[]
+        ignite.close();
+    }
+
+    void sharedFileSystemIpFinderDemo() {
+
+        //tag::sharedFS[]
+        // Configuring discovery SPI.
+        TcpDiscoverySpi spi = new TcpDiscoverySpi();
+
+        // Configuring IP finder.
+        TcpDiscoverySharedFsIpFinder ipFinder = new TcpDiscoverySharedFsIpFinder();
+
+        ipFinder.setPath("/var/ignite/addresses");
+
+        spi.setIpFinder(ipFinder);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Override default discovery SPI.
+        cfg.setDiscoverySpi(spi);
+
+        // Start the node.
+        Ignite ignite = Ignition.start(cfg);
+        //end::sharedFS[]
+        ignite.close();
+    }
+
+    @Test
+    void zookeeperIpFinderDemo() {
+
+        //tag::zk[]
+        TcpDiscoverySpi spi = new TcpDiscoverySpi();
+
+        TcpDiscoveryZookeeperIpFinder ipFinder = new TcpDiscoveryZookeeperIpFinder();
+
+        // Specify ZooKeeper connection string.
+        ipFinder.setZkConnectionString("127.0.0.1:2181");
+
+        spi.setIpFinder(ipFinder);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Override default discovery SPI.
+        cfg.setDiscoverySpi(spi);
+
+        // Start the node.
+        Ignite ignite = Ignition.start(cfg);
+        //end::zk[]
+
+        ignite.close();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Tracing.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Tracing.java
new file mode 100644
index 0000000..d62ee3f
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/Tracing.java
@@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.IgniteTransactions;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.CacheAtomicityMode;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.tracing.Scope;
+import org.apache.ignite.spi.tracing.TracingConfigurationCoordinates;
+import org.apache.ignite.spi.tracing.TracingConfigurationParameters;
+import org.apache.ignite.transactions.Transaction;
+import org.junit.jupiter.api.Test;
+
+import io.opencensus.exporter.trace.zipkin.ZipkinExporterConfiguration;
+import io.opencensus.exporter.trace.zipkin.ZipkinTraceExporter;
+
+public class Tracing {
+
+    @Test
+    void config() {
+        //tag::config[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        cfg.setTracingSpi(new org.apache.ignite.spi.tracing.opencensus.OpenCensusTracingSpi());
+
+        Ignite ignite = Ignition.start(cfg);
+        //end::config[]
+
+        ignite.close();
+    }
+
+    @Test
+    void enableSampling() {
+        //tag::enable-sampling[]
+        Ignite ignite = Ignition.start();
+
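+        // A sampling rate of 1 traces 100% of transactions
+        // (withSamplingRate accepts values from 0 to 1).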
+        ignite.tracingConfiguration().set(
+                new TracingConfigurationCoordinates.Builder(Scope.TX).build(),
+                new TracingConfigurationParameters.Builder().withSamplingRate(1).build());
+
+        //end::enable-sampling[]
+        ignite.close();
+    }
+
+    void exportToZipkin() {
+        //tag::export-to-zipkin[]
+        //register Zipkin exporter
+        ZipkinTraceExporter.createAndRegister(
+                ZipkinExporterConfiguration.builder().setV2Url("http://localhost:9411/api/v2/spans")
+                        .setServiceName("ignite-cluster").build());
+
+        IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true)
+                .setTracingSpi(new org.apache.ignite.spi.tracing.opencensus.OpenCensusTracingSpi());
+
+        Ignite ignite = Ignition.start(cfg);
+
+        //enable trace sampling for transactions with 100% sampling rate
+        ignite.tracingConfiguration().set(
+                new TracingConfigurationCoordinates.Builder(Scope.TX).build(),
+                new TracingConfigurationParameters.Builder().withSamplingRate(1).build());
+
+        //create a transactional cache
+        IgniteCache<Integer, String> cache = ignite
+                .getOrCreateCache(new CacheConfiguration<Integer, String>("myCache")
+                        .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));
+
+        IgniteTransactions transactions = ignite.transactions();
+
+        // start a transaction
+        try (Transaction tx = transactions.txStart()) {
+            //do some operations
+            cache.put(1, "test value");
+
+            System.out.println(cache.get(1));
+
+            cache.put(1, "second value");
+
+            tx.commit();
+        }
+
+        try {
+            //Wait for the trace to be exported to Zipkin.
+            //If your application keeps running after this point, this delay is not needed.
+            Thread.sleep(5_000);
+        } catch (InterruptedException e) {
+            e.printStackTrace();
+        }
+        
+        //end::export-to-zipkin[]
+        ignite.close();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/UnderstandingConfiguration.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/UnderstandingConfiguration.java
new file mode 100644
index 0000000..1e572cb
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/UnderstandingConfiguration.java
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.cache.CacheMode;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+
+public class UnderstandingConfiguration {
+
+    public static void configurationDemo() {
+        //tag::cfg[]
+        //tag::dir[]
+        IgniteConfiguration igniteCfg = new IgniteConfiguration();
+        //end::dir[]
+        //setting a work directory
+        //tag::dir[]
+        igniteCfg.setWorkDirectory("/path/to/work/directory");
+        //end::dir[]
+
+        //defining a partitioned cache
+        CacheConfiguration cacheCfg = new CacheConfiguration("myCache");
+        cacheCfg.setCacheMode(CacheMode.PARTITIONED);
+
+        igniteCfg.setCacheConfiguration(cacheCfg);
+        //end::cfg[]
+    }
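+
+    //Not part of the original snippet: a minimal sketch showing how such a
+    //configuration is typically passed to Ignition.start(...).
+    public static void startWithConfigurationDemo() {
+        IgniteConfiguration igniteCfg = new IgniteConfiguration();
+        igniteCfg.setWorkDirectory("/path/to/work/directory");
+
+        //start a node with this configuration and stop it when done
+        try (org.apache.ignite.Ignite ignite = org.apache.ignite.Ignition.start(igniteCfg)) {
+            System.out.println("Started node: " + ignite.name());
+        }
+    }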
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/UserCodeDeployment.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/UserCodeDeployment.java
new file mode 100644
index 0000000..7b7a568
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/UserCodeDeployment.java
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.Arrays;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.deployment.uri.UriDeploymentSpi;
+import org.junit.jupiter.api.Test;
+
+public class UserCodeDeployment {
+
+    @Test
+    void fromUrl() {
+        //tag::from-url[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        UriDeploymentSpi deploymentSpi = new UriDeploymentSpi();
+
+        deploymentSpi.setUriList(Arrays
+                .asList("http://username:password;freq=10000@www.mysite.com:110/ignite/user_libs"));
+
+        cfg.setDeploymentSpi(deploymentSpi);
+
+        try (Ignite ignite = Ignition.start(cfg)) {
+            //execute the task represented by a class deployed from the "user_libs" URL
+            ignite.compute().execute("org.mycompany.HelloWorldTask", "My Args");
+        }
+        //end::from-url[]
+    }
+
+    @Test
+    void fromDirectory() {
+        //tag::from-local-dir[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        UriDeploymentSpi deploymentSpi = new UriDeploymentSpi();
+
+        deploymentSpi.setUriList(Arrays.asList("file://freq=2000@localhost/home/username/user_libs"));
+
+        cfg.setDeploymentSpi(deploymentSpi);
+
+        try (Ignite ignite = Ignition.start(cfg)) {
+            //execute the task represented by a class located in the "user_libs" directory 
+            ignite.compute().execute("org.mycompany.HelloWorldTask", "My Args");
+        }
+        //end::from-local-dir[]
+    }
+
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/UsingContinuousQueries.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/UsingContinuousQueries.java
new file mode 100644
index 0000000..4d49a4d
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/UsingContinuousQueries.java
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import javax.cache.Cache;
+import javax.cache.configuration.Factory;
+import javax.cache.configuration.FactoryBuilder;
+import javax.cache.event.CacheEntryEvent;
+import javax.cache.event.CacheEntryEventFilter;
+import javax.cache.event.CacheEntryListenerException;
+import javax.cache.event.CacheEntryUpdatedListener;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.query.ContinuousQuery;
+import org.apache.ignite.cache.query.ContinuousQueryWithTransformer;
+import org.apache.ignite.cache.query.QueryCursor;
+import org.apache.ignite.cache.query.ScanQuery;
+import org.apache.ignite.lang.IgniteClosure;
+
+public class UsingContinuousQueries {
+
+    public static void runAll() {
+        initialQueryExample();
+        localListenerExample();
+        remoteFilterExample();
+        remoteTransformerExample();
+    }
+
+    public static void initialQueryExample() {
+        try (Ignite ignite = Ignition.start()) {
+
+            //tag::initialQry[]
+            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
+
+            //end::initialQry[]
+            cache.put(100, "100");
+
+            //tag::initialQry[]
+            ContinuousQuery<Integer, String> query = new ContinuousQuery<>();
+
+            // Setting an optional initial query.
+            // The query will return entries for the keys greater than 10.
+            query.setInitialQuery(new ScanQuery<>((k, v) -> k > 10));
+
+            //mandatory local listener
+            query.setLocalListener(events -> {
+            });
+
+            try (QueryCursor<Cache.Entry<Integer, String>> cursor = cache.query(query)) {
+                // Iterating over the entries returned by the initial query 
+                for (Cache.Entry<Integer, String> e : cursor)
+                    System.out.println("key=" + e.getKey() + ", val=" + e.getValue());
+            }
+            //end::initialQry[]
+        }
+    }
+
+    public static void localListenerExample() {
+        try (Ignite ignite = Ignition.start()) {
+
+            //tag::localListener[]
+            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
+
+            ContinuousQuery<Integer, String> query = new ContinuousQuery<>();
+
+            query.setLocalListener(new CacheEntryUpdatedListener<Integer, String>() {
+
+                @Override
+                public void onUpdated(Iterable<CacheEntryEvent<? extends Integer, ? extends String>> events)
+                    throws CacheEntryListenerException {
+                    // react to the update events here
+                }
+            });
+
+            cache.query(query);
+
+            //end::localListener[]
+        }
+    }
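+
+    //Not part of the original snippet: a hedged sketch of buffering options.
+    //ContinuousQuery.setPageSize(...) and setTimeInterval(...) control how
+    //update notifications are batched before they are sent to the listener.
+    public static void bufferingSketch(ContinuousQuery<Integer, String> query) {
+        //send notifications in batches of 50 entries...
+        query.setPageSize(50);
+
+        //...or at least once per second, whichever happens first
+        query.setTimeInterval(1000);
+    }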
+
+    public static void remoteFilterExample() {
+        try (Ignite ignite = Ignition.start()) {
+
+            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
+
+            //tag::remoteFilter[]
+            ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
+
+            qry.setLocalListener(events ->
+                events.forEach(event -> System.out.format("Entry: key=[%s] value=[%s]\n", event.getKey(), event.getValue()))
+            );
+
+            qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Integer, String>>() {
+                @Override
+                public CacheEntryEventFilter<Integer, String> create() {
+                    return new CacheEntryEventFilter<Integer, String>() {
+                        @Override
+                        public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> e) {
+                            System.out.format("the value for key [%s] was updated from [%s] to [%s]\n", e.getKey(), e.getOldValue(), e.getValue());
+                            return true;
+                        }
+                    };
+                }
+            });
+
+            //end::remoteFilter[]
+            cache.query(qry);
+            cache.put(1, "1");
+
+        }
+    }
+
+    public static void remoteTransformerExample() {
+        try (Ignite ignite = Ignition.start()) {
+
+            //tag::transformer[]
+            IgniteCache<Integer, Person> cache = ignite.getOrCreateCache("myCache");
+
+            // Create a new continuous query with a transformer.
+            ContinuousQueryWithTransformer<Integer, Person, String> qry = new ContinuousQueryWithTransformer<>();
+
+            // Factory to create transformers.
+            Factory factory = FactoryBuilder.factoryOf(
+                // Return one field of a complex object.
+                // Only this field will be sent over to the local listener.
+                (IgniteClosure<CacheEntryEvent, String>)
+                    event -> ((Person)event.getValue()).getName()
+            );
+
+            qry.setRemoteTransformerFactory(factory);
+
+            // Listener that will receive transformed data.
+            qry.setLocalListener(names -> {
+                for (String name : names)
+                    System.out.println("New person name: " + name);
+            });
+            //end::transformer[]
+
+            cache.query(qry);
+            cache.put(1, new Person(1, "Vasya"));
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/UsingScanQueries.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/UsingScanQueries.java
new file mode 100644
index 0000000..49a9f53
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/UsingScanQueries.java
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.List;
+import javax.cache.Cache;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.query.QueryCursor;
+import org.apache.ignite.cache.query.ScanQuery;
+import org.apache.ignite.lang.IgniteBiPredicate;
+import org.apache.ignite.lang.IgniteClosure;
+import org.junit.jupiter.api.Test;
+
+public class UsingScanQueries {
+
+    @Test
+    void localQuery() {
+        try (Ignite ignite = Ignition.start()) {
+            IgniteCache<Integer, Person> cache = ignite.getOrCreateCache("myCache");
+            //tag::localQuery[]
+            QueryCursor<Cache.Entry<Integer, Person>> cursor = cache
+                    .query(new ScanQuery<Integer, Person>().setLocal(true));
+            //end::localQuery[]
+        }
+    }
+
+    @Test
+    void executingScanQueriesExample() {
+        try (Ignite ignite = Ignition.start()) {
+            //tag::scanQry[]
+            //tag::predicate[]
+            //tag::transformer[]
+            IgniteCache<Integer, Person> cache = ignite.getOrCreateCache("myCache");
+            //end::scanQry[]
+            //end::predicate[]
+            //end::transformer[]
+
+            Person person = new Person(1, "Vasya Ivanov");
+            person.setSalary(2000);
+            cache.put(1, person);
+            //tag::scanQry[]
+
+            QueryCursor<Cache.Entry<Integer, Person>> cursor = cache.query(new ScanQuery<>());
+            //end::scanQry[]
+            System.out.println("Scan query output:" + cursor.getAll().get(0).getValue().getName());
+
+            //tag::predicate[]
+
+            // Find the persons who earn more than 1,000.
+            IgniteBiPredicate<Integer, Person> filter = (key, p) -> p.getSalary() > 1000;
+
+            try (QueryCursor<Cache.Entry<Integer, Person>> qryCursor = cache.query(new ScanQuery<>(filter))) {
+                qryCursor.forEach(
+                        entry -> System.out.println("Key = " + entry.getKey() + ", Value = " + entry.getValue()));
+            }
+            //end::predicate[]
+
+            //tag::transformer[]
+
+            // Get only keys for persons earning more than 1,000.
+            List<Integer> keys = cache.query(new ScanQuery<>(
+                    // Remote filter
+                    (IgniteBiPredicate<Integer, Person>) (k, p) -> p.getSalary() > 1000),
+                    // Transformer
+                    (IgniteClosure<Cache.Entry<Integer, Person>, Integer>) Cache.Entry::getKey).getAll();
+            //end::transformer[]
+
+            System.out.println("Transformer example output:" + keys.get(0));
+        }
+    }
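+
+    //Not part of the original snippet: a hedged sketch of page size tuning.
+    //setPageSize(...) controls how many entries are fetched from remote
+    //nodes per network request.
+    void pageSizeSketch(IgniteCache<Integer, Person> cache) {
+        ScanQuery<Integer, Person> qry = new ScanQuery<>();
+        qry.setPageSize(100);
+
+        try (QueryCursor<Cache.Entry<Integer, Person>> cursor = cache.query(qry)) {
+            cursor.forEach(entry -> System.out.println(entry.getKey()));
+        }
+    }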
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/WAL.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/WAL.java
new file mode 100644
index 0000000..9c127cf
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/WAL.java
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.DataStorageConfiguration;
+import org.apache.ignite.configuration.DiskPageCompression;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.junit.jupiter.api.Test;
+
+public class WAL {
+
+    @Test
+    void walRecordsCompression() {
+        //tag::records-compression[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        DataStorageConfiguration dsCfg = new DataStorageConfiguration();
+        dsCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
+
+        //WAL page compression parameters
+        dsCfg.setWalPageCompression(DiskPageCompression.LZ4);
+        dsCfg.setWalPageCompressionLevel(8);
+
+        cfg.setDataStorageConfiguration(dsCfg);
+        Ignite ignite = Ignition.start(cfg);
+        //end::records-compression[]
+        
+        ignite.close();
+    }
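+
+    //Not part of the original snippet: a hedged sketch of toggling WAL for a
+    //cache around bulk loading (IgniteCluster.disableWal/enableWal).
+    void walToggleSketch(Ignite ignite) {
+        //temporarily disable WAL to speed up the initial data load
+        ignite.cluster().disableWal("myCache");
+
+        //... load the data ...
+
+        //re-enable WAL to restore the durability guarantees
+        ignite.cluster().enableWal("myCache");
+    }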
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/WorkingWithBinaryObjects.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/WorkingWithBinaryObjects.java
new file mode 100644
index 0000000..1dd2395
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/WorkingWithBinaryObjects.java
@@ -0,0 +1,183 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteBinary;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.binary.BinaryField;
+import org.apache.ignite.binary.BinaryIdMapper;
+import org.apache.ignite.binary.BinaryNameMapper;
+import org.apache.ignite.binary.BinaryObject;
+import org.apache.ignite.binary.BinaryObjectBuilder;
+import org.apache.ignite.binary.BinaryObjectException;
+import org.apache.ignite.binary.BinaryReader;
+import org.apache.ignite.binary.BinarySerializer;
+import org.apache.ignite.binary.BinaryTypeConfiguration;
+import org.apache.ignite.binary.BinaryWriter;
+import org.apache.ignite.cache.CacheEntryProcessor;
+import org.apache.ignite.configuration.BinaryConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+
+public class WorkingWithBinaryObjects {
+
+    public static void runAll() {
+        binaryObjectsDemo();
+        binaryFieldsExample();
+        configuringBinaryObjects();
+    }
+
+    public static void binaryObjectsDemo() {
+        try (Ignite ignite = Ignition.start()) {
+            IgniteCache<Integer, Person> cache = ignite.createCache("personCache");
+
+            //tag::enablingBinary[]
+            // Create a regular Person object and put it into the cache.
+            Person person = new Person(1, "FirstPerson");
+            ignite.cache("personCache").put(1, person);
+
+            // Get an instance of binary-enabled cache.
+            IgniteCache<Integer, BinaryObject> binaryCache = ignite.cache("personCache").withKeepBinary();
+            BinaryObject binaryPerson = binaryCache.get(1);
+            //end::enablingBinary[]
+            System.out.println("Binary object:" + binaryPerson);
+
+            //tag::binaryBuilder[]
+            BinaryObjectBuilder builder = ignite.binary().builder("org.apache.ignite.snippets.Person");
+
+            builder.setField("id", 2L);
+            builder.setField("name", "SecondPerson");
+
+            binaryCache.put(2, builder.build());
+            //end::binaryBuilder[]
+            System.out.println("Value from binary builder:" + cache.get(2).getName());
+
+            //tag::cacheEntryProc[]
+            // The EntryProcessor is to be executed for this key.
+            int key = 1;
+            ignite.cache("personCache").<Integer, BinaryObject>withKeepBinary().invoke(key, (entry, arguments) -> {
+                // Create a builder from the old value.
+                BinaryObjectBuilder bldr = entry.getValue().toBuilder();
+
+                //Update the field in the builder.
+                bldr.setField("name", "Ignite");
+
+                // Set new value to the entry.
+                entry.setValue(bldr.build());
+
+                return null;
+            });
+            //end::cacheEntryProc[]
+            System.out.println("EntryProcessor output:" + cache.get(1).getName());
+        }
+    }
+
+    public static void binaryFieldsExample() {
+        try (Ignite ignite = Ignition.start()) {
+            //tag::binaryField[]
+            Collection<BinaryObject> persons = getPersons();
+
+            BinaryField salary = null;
+            double total = 0;
+            int count = 0;
+
+            for (BinaryObject person : persons) {
+                if (salary == null) {
+                    salary = person.type().field("salary");
+                }
+
+                total += (float) salary.value(person);
+                count++;
+            }
+
+            double avg = total / count;
+            //end::binaryField[]
+            System.out.println("binary fields example:" + avg);
+        }
+    }
+
+    private static Collection<BinaryObject> getPersons() {
+        IgniteBinary binary = Ignition.ignite().binary();
+        Person p1 = new Person(1, "name1");
+        p1.setSalary(1);
+        Person p2 = new Person(2, "name2");
+        p2.setSalary(2);
+        return Arrays.asList(binary.toBinary(p1), binary.toBinary(p2));
+    }
+
+    public static void configuringBinaryObjects() {
+        //tag::cfg[]
+        IgniteConfiguration igniteCfg = new IgniteConfiguration();
+
+        BinaryConfiguration binaryConf = new BinaryConfiguration();
+        binaryConf.setNameMapper(new MyBinaryNameMapper());
+        binaryConf.setIdMapper(new MyBinaryIdMapper());
+
+        BinaryTypeConfiguration binaryTypeCfg = new BinaryTypeConfiguration();
+        binaryTypeCfg.setTypeName("org.apache.ignite.snippets.*");
+        binaryTypeCfg.setSerializer(new ExampleSerializer());
+
+        binaryConf.setTypeConfigurations(Collections.singleton(binaryTypeCfg));
+
+        igniteCfg.setBinaryConfiguration(binaryConf);
+        //end::cfg[]
+
+    }
+
+    private static class MyBinaryNameMapper implements BinaryNameMapper {
+
+        @Override
+        public String typeName(String clsName) {
+            return clsName;
+        }
+
+        @Override
+        public String fieldName(String fieldName) {
+            return fieldName;
+        }
+    }
+
+    private static class MyBinaryIdMapper implements BinaryIdMapper {
+
+        @Override
+        public int typeId(String typeName) {
+            return typeName.hashCode();
+        }
+
+        @Override
+        public int fieldId(int typeId, String fieldName) {
+            return typeId + fieldName.hashCode();
+        }
+    }
+
+    private static class ExampleSerializer implements BinarySerializer {
+
+        @Override
+        public void writeBinary(Object obj, BinaryWriter writer) throws BinaryObjectException {
+            //write the object's fields using the BinaryWriter here
+        }
+
+        @Override
+            //populate the object's fields from the BinaryReader here
+
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ZookeeperDiscovery.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ZookeeperDiscovery.java
new file mode 100644
index 0000000..43990f6
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/ZookeeperDiscovery.java
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi;
+
+public class ZookeeperDiscovery {
+
+    void zookeeperDiscoveryConfigurationExample() {
+        //tag::cfg[]
+        ZookeeperDiscoverySpi zkDiscoverySpi = new ZookeeperDiscoverySpi();
+
+        zkDiscoverySpi.setZkConnectionString("127.0.0.1:34076,127.0.0.1:43310,127.0.0.1:36745");
+        zkDiscoverySpi.setSessionTimeout(30_000);
+
+        zkDiscoverySpi.setZkRootPath("/ignite");
+        zkDiscoverySpi.setJoinTimeout(10_000);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        //Override default discovery SPI.
+        cfg.setDiscoverySpi(zkDiscoverySpi);
+
+        // Start the node.
+        Ignite ignite = Ignition.start(cfg);
+        //end::cfg[]
+        ignite.close();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/k8s/K8s.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/k8s/K8s.java
new file mode 100644
index 0000000..7068076
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/k8s/K8s.java
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets.k8s;
+
+import org.apache.ignite.Ignition;
+import org.apache.ignite.client.ClientCache;
+import org.apache.ignite.client.IgniteClient;
+import org.apache.ignite.configuration.ClientConfiguration;
+
+public class K8s {
+
+    public static void connectThinClient() throws Exception {
+        // tag::connectThinClient[]
+        ClientConfiguration cfg = new ClientConfiguration().setAddresses("13.86.186.145:10800");
+        IgniteClient client = Ignition.startClient(cfg);
+
+        ClientCache<Integer, String> cache = client.getOrCreateCache("test_cache");
+
+        cache.put(1, "first test value");
+
+        System.out.println(cache.get(1));
+
+        client.close();
+        // end::connectThinClient[]
+    }
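+
+    //Not part of the original snippet: when several cluster IPs are exposed,
+    //multiple addresses can be passed so the client can fail over between
+    //them (a sketch; the addresses below are placeholders).
+    public static ClientConfiguration multiAddressConfiguration() {
+        return new ClientConfiguration()
+                .setAddresses("13.86.186.145:10800", "13.86.186.146:10800");
+    }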
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/plugin/MyPlugin.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/plugin/MyPlugin.java
new file mode 100644
index 0000000..dd3d8b6
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/plugin/MyPlugin.java
@@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets.plugin;
+
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.plugin.IgnitePlugin;
+import org.apache.ignite.plugin.PluginContext;
+
+/**
+ * A plugin that periodically prints cache size information to the console.
+ */
+public class MyPlugin implements IgnitePlugin, Runnable {
+
+    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
+
+    private PluginContext context;
+
+    private long interval;
+
+    /**
+     * @param interval Time interval in seconds between printouts.
+     * @param context  Plugin context used to access the running caches.
+     */
+    public MyPlugin(long interval, PluginContext context) {
+        this.interval = interval;
+        this.context = context;
+    }
+
+    private void print0() {
+        StringBuilder sb = new StringBuilder("\nCache Information: \n");
+
+        //get the names of all caches
+        context.grid().cacheNames().forEach(cacheName -> {
+            //get the specific cache
+            IgniteCache cache = context.grid().cache(cacheName);
+            if (cache != null) {
+                sb.append("  cacheName=").append(cacheName).append(", size=").append(cache.size())
+                        .append("\n");
+            }
+        });
+
+        System.out.print(sb.toString());
+    }
+
+    /**
+     * Prints the information about caches to console.
+     */
+    public void printCacheInfo() {
+        print0();
+    }
+
+    @Override
+    public void run() {
+        print0();
+    }
+
+    void start() {
+        scheduler.scheduleAtFixedRate(this, interval, interval, TimeUnit.SECONDS);
+    }
+
+    void stop() {
+        scheduler.shutdownNow();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/plugin/MyPluginProvider.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/plugin/MyPluginProvider.java
new file mode 100644
index 0000000..b49c435
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/plugin/MyPluginProvider.java
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets.plugin;
+
+import java.io.Serializable;
+import java.util.UUID;
+
+import org.apache.ignite.IgniteCheckedException;
+import org.apache.ignite.cluster.ClusterNode;
+import org.apache.ignite.plugin.CachePluginContext;
+import org.apache.ignite.plugin.CachePluginProvider;
+import org.apache.ignite.plugin.ExtensionRegistry;
+import org.apache.ignite.plugin.PluginConfiguration;
+import org.apache.ignite.plugin.PluginContext;
+import org.apache.ignite.plugin.PluginProvider;
+import org.apache.ignite.plugin.PluginValidationException;
+import org.jetbrains.annotations.Nullable;
+
+public class MyPluginProvider implements PluginProvider<PluginConfiguration> {
+
+    /**
+     * The time interval in seconds for printing cache size information. 
+     */
+    private long interval = 10;
+
+    private MyPlugin plugin;
+
+    public MyPluginProvider() {
+    }
+
+    /**
+     * 
+     * @param interval Time interval in seconds
+     */
+    public MyPluginProvider(long interval) {
+        this.interval = interval;
+    }
+
+    @Override
+    public String name() {
+        //the name of the plugin
+        return "MyPlugin";
+    }
+
+    @Override
+    public String version() {
+        return "1.0";
+    }
+
+    @Override
+    public String copyright() {
+        return "MyCompany";
+    }
+
+    @Override
+    public MyPlugin plugin() {
+        return plugin;
+    }
+
+    @Override
+    public void initExtensions(PluginContext ctx, ExtensionRegistry registry)
+            throws IgniteCheckedException {
+        plugin = new MyPlugin(interval, ctx);
+    }
+
+    @Override
+    public void onIgniteStart() throws IgniteCheckedException {
+        //start the plugin when Ignite is started
+        plugin.start();
+    }
+
+    @Override
+    public void onIgniteStop(boolean cancel) {
+        //stop the plugin
+        plugin.stop();
+    }
+
+    /**
+     * @return The time interval (in seconds) for printing cache size information.
+     */
+    public long getInterval() {
+        return interval;
+    }
+
+    /**
+     * Sets the time interval (in seconds) for printing cache size information.
+     *
+     * @param interval Time interval in seconds.
+     */
+    public void setInterval(long interval) {
+        this.interval = interval;
+    }
+
+    // other no-op methods of PluginProvider 
+    //tag::no-op-methods[]
+    @Override
+    public <T> @Nullable T createComponent(PluginContext ctx, Class<T> cls) {
+        return null;
+    }
+
+    @Override
+    public CachePluginProvider createCacheProvider(CachePluginContext ctx) {
+        return null;
+    }
+
+    @Override
+    public void start(PluginContext ctx) throws IgniteCheckedException {
+    }
+
+    @Override
+    public void stop(boolean cancel) throws IgniteCheckedException {
+    }
+
+    @Override
+    public @Nullable Serializable provideDiscoveryData(UUID nodeId) {
+        return null;
+    }
+
+    @Override
+    public void receiveDiscoveryData(UUID nodeId, Serializable data) {
+    }
+
+    @Override
+    public void validateNewNode(ClusterNode node) throws PluginValidationException {
+    }
+    //end::no-op-methods[]
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/plugin/PluginExample.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/plugin/PluginExample.java
new file mode 100644
index 0000000..393298c
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/plugin/PluginExample.java
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets.plugin;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.junit.jupiter.api.Test;
+
+public class PluginExample {
+    
+    @Test
+    void registerPlugin() {
+        //tag::example[]
+        //tag::register-plugin[]
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        //register a plugin that prints the cache size information every 100 seconds 
+        cfg.setPluginProviders(new MyPluginProvider(100));
+
+        //start the node
+        Ignite ignite = Ignition.start(cfg);
+        //end::register-plugin[]
+        
+        //tag::access-plugin[]
+        //get an instance of the plugin
+        MyPlugin p = ignite.plugin("MyPlugin");
+        
+        //print the cache size information
+        p.printCacheInfo();
+        //end::access-plugin[]
+        
+        IgniteCache cache = ignite.getOrCreateCache(new CacheConfiguration("test_cache").setBackups(1));
+        
+        for (int i = 0; i < 10; i++) {
+           cache.put(i, "value " + i); 
+        }
+
+        //print the cache size information
+        p.printCacheInfo();
+        //end::example[]
+        ignite.close();
+    }
+    
+    public static void main(String[] args) {
+       PluginExample pe = new PluginExample(); 
+       pe.registerPlugin();
+    }
+}
+
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/services/MyCounterService.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/services/MyCounterService.java
new file mode 100644
index 0000000..3c3b503
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/services/MyCounterService.java
@@ -0,0 +1,32 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets.services;
+
+import javax.cache.CacheException;
+
+public interface MyCounterService {
+    /**
+     * Increment counter value and return the new value.
+     */
+    int increment() throws CacheException;
+
+    /**
+     * Get current counter value.
+     */
+    int get() throws CacheException;
+
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/services/MyCounterServiceImpl.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/services/MyCounterServiceImpl.java
new file mode 100644
index 0000000..2c01bfc
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/services/MyCounterServiceImpl.java
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets.services;
+
+import javax.cache.CacheException;
+import javax.cache.processor.EntryProcessor;
+import javax.cache.processor.MutableEntry;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.resources.IgniteInstanceResource;
+import org.apache.ignite.services.Service;
+import org.apache.ignite.services.ServiceContext;
+
+public class MyCounterServiceImpl implements MyCounterService, Service {
+
+    @IgniteInstanceResource
+    private Ignite ignite;
+
+    private IgniteCache<String, Integer> cache;
+
+    /** Service name. */
+    private String svcName;
+
+    /**
+     * Service initialization.
+     */
+    @Override
+    public void init(ServiceContext ctx) {
+
+        cache = ignite.getOrCreateCache("myCounterCache");
+
+        svcName = ctx.name();
+
+        System.out.println("Service was initialized: " + svcName);
+    }
+
+    /**
+     * Cancel this service.
+     */
+    @Override
+    public void cancel(ServiceContext ctx) {
+        // Remove counter from the cache.
+        cache.remove(svcName);
+
+        System.out.println("Service was cancelled: " + svcName);
+    }
+
+    /**
+     * Start service execution.
+     */
+    @Override
+    public void execute(ServiceContext ctx) {
+        // Since our service is simply represented by a counter value stored in a cache,
+        // there is nothing we need to do in order to start it up.
+        System.out.println("Executing distributed service: " + svcName);
+    }
+
+    @Override
+    public int get() throws CacheException {
+        Integer i = cache.get(svcName);
+
+        return i == null ? 0 : i;
+    }
+
+    @Override
+    public int increment() throws CacheException {
+        return cache.invoke(svcName, new CounterEntryProcessor());
+    }
+
+    /**
+     * Entry processor which atomically increments value currently stored in cache.
+     */
+    private static class CounterEntryProcessor implements EntryProcessor<String, Integer, Integer> {
+        @Override
+        public Integer process(MutableEntry<String, Integer> e, Object... args) {
+            int newVal = e.exists() ? e.getValue() + 1 : 1;
+
+            // Update cache.
+            e.setValue(newVal);
+
+            return newVal;
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/services/ServiceExample.java b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/services/ServiceExample.java
new file mode 100644
index 0000000..d7cd753
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/java/org/apache/ignite/snippets/services/ServiceExample.java
@@ -0,0 +1,177 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ignite.snippets.services;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteServices;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cluster.ClusterNode;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.lang.IgnitePredicate;
+import org.apache.ignite.services.ServiceConfiguration;
+import org.junit.jupiter.api.Test;
+
+public class ServiceExample {
+
+    @Test
+    void serviceExample() {
+        //tag::start-with-method[]
+        Ignite ignite = Ignition.start();
+
+        //get the services interface associated with all server nodes
+        IgniteServices services = ignite.services();
+
+        //deploy a cluster-wide singleton instance of the service
+        services.deployClusterSingleton("myCounterService", new MyCounterServiceImpl());
+        //end::start-with-method[]
+
+        //tag::access-service[]
+        //access the service by name
+        MyCounterService counterService = ignite.services().serviceProxy("myCounterService",
+                MyCounterService.class, false); //non-sticky proxy
+
+        //call a service method
+        counterService.increment();
+        //end::access-service[]
+
+        // Print the latest counter value from our counter service.
+        System.out.println("Incremented value : " + counterService.get());
+        
+        //tag::undeploy[]
+        services.cancel("myCounterService");
+        //end::undeploy[]
+        
+        ignite.close();
+    }
+
+    @Test
+    void deployWithClusterGroup() {
+        //tag::deploy-with-cluster-group[]
+        Ignite ignite = Ignition.start();
+
+        //deploy the service to the nodes that host the cache named "myCache"
+        ignite.services(ignite.cluster().forCacheNodes("myCache"))
+                .deployNodeSingleton("myCounterService", new MyCounterServiceImpl());
+
+        //end::deploy-with-cluster-group[]
+        ignite.close();
+    }
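+
+    //Not part of the original snippet: a hedged sketch of deployMultiple,
+    //which caps the total and per-node instance counts in a single call.
+    void deployMultipleSketch(Ignite ignite) {
+        //at most 4 instances cluster-wide and at most 2 per node
+        ignite.services().deployMultiple("myCounterService",
+                new MyCounterServiceImpl(), 4, 2);
+    }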
+
+    //tag::node-filter[]
+    public static class ServiceFilter implements IgnitePredicate<ClusterNode> {
+        @Override
+        public boolean apply(ClusterNode node) {
+            // The service will be deployed on the server nodes
+            // that have the 'west.coast.node' attribute.
+            return !node.isClient() && node.attributes().containsKey("west.coast.node");
+        }
+    }
+    //end::node-filter[]
+
+    @Test
+    void affinityKey() {
+        
+        //tag::deploy-by-key[]
+        Ignite ignite = Ignition.start();
+
+        //making sure the cache exists
+        ignite.getOrCreateCache("orgCache");
+
+        ServiceConfiguration serviceCfg = new ServiceConfiguration();
+
+        // Setting service instance to deploy.
+        serviceCfg.setService(new MyCounterServiceImpl());
+
+        // Setting service name.
+        serviceCfg.setName("serviceName");
+        serviceCfg.setTotalCount(1);
+
+        // Specifying the cache name and key for the affinity based deployment.
+        serviceCfg.setCacheName("orgCache");
+        serviceCfg.setAffinityKey(123);
+
+        IgniteServices services = ignite.services();
+
+        // Deploying the service.
+        services.deploy(serviceCfg);
+        //end::deploy-by-key[]
+        ignite.close();
+    }
+
+    @Test
+    void deployingWithNodeFilter() {
+
+        System.setProperty("west.coast.node", "true");
+
+        //tag::deploy-with-node-filter[]
+        Ignite ignite = Ignition.start();
+
+        ServiceConfiguration serviceCfg = new ServiceConfiguration();
+
+        // Setting service instance to deploy.
+        serviceCfg.setService(new MyCounterServiceImpl());
+        serviceCfg.setName("serviceName");
+        serviceCfg.setMaxPerNodeCount(1);
+
+        // Setting the nodes filter.
+        serviceCfg.setNodeFilter(new ServiceFilter());
+
+        // Getting an instance of IgniteService.
+        IgniteServices services = ignite.services();
+
+        // Deploying the service.
+        services.deploy(serviceCfg);
+        //end::deploy-with-node-filter[]
+        ignite.close();
+    }
+
+    @Test
+    void startWithConfig() {
+        //tag::start-with-service-config[]
+        Ignite ignite = Ignition.start();
+
+        ServiceConfiguration serviceCfg = new ServiceConfiguration();
+
+        serviceCfg.setName("myCounterService");
+        serviceCfg.setMaxPerNodeCount(1);
+        serviceCfg.setTotalCount(1);
+        serviceCfg.setService(new MyCounterServiceImpl());
+
+        ignite.services().deploy(serviceCfg);
+        //end::start-with-service-config[]
+
+        ignite.close();
+    }
+
+    @Test
+    void serviceConfiguration() {
+        //tag::service-configuration[]
+        ServiceConfiguration serviceCfg = new ServiceConfiguration();
+
+        serviceCfg.setName("myCounterService");
+        serviceCfg.setMaxPerNodeCount(1);
+        serviceCfg.setTotalCount(1);
+        serviceCfg.setService(new MyCounterServiceImpl());
+
+        IgniteConfiguration igniteCfg = new IgniteConfiguration()
+                .setServiceConfiguration(serviceCfg);
+
+        // Start the node.
+        Ignite ignite = Ignition.start(igniteCfg);
+        //end::service-configuration[]
+        ignite.close();
+    }
+}
diff --git a/docs/_docs/code-snippets/java/src/main/resources/config/ignite-jdbc.xml b/docs/_docs/code-snippets/java/src/main/resources/config/ignite-jdbc.xml
new file mode 100644
index 0000000..5f17d3f
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/resources/config/ignite-jdbc.xml
@@ -0,0 +1,39 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans"
+       xmlns:util="http://www.springframework.org/schema/util"
+       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+       xsi:schemaLocation="http://www.springframework.org/schema/beans
+                           http://www.springframework.org/schema/beans/spring-beans.xsd
+                           http://www.springframework.org/schema/util
+                           http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::config-block[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="clientMode" value="true"/> 
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+              <property name="reconnectCount" value="1"/>
+                <property name="ipFinder">
+
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+    </bean>
+    <!-- end::config-block[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/java/src/main/resources/keystore/node.jks b/docs/_docs/code-snippets/java/src/main/resources/keystore/node.jks
new file mode 100644
index 0000000..006ecec
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/resources/keystore/node.jks
Binary files differ
diff --git a/docs/_docs/code-snippets/java/src/main/resources/keystore/trust.jks b/docs/_docs/code-snippets/java/src/main/resources/keystore/trust.jks
new file mode 100644
index 0000000..a00f125
--- /dev/null
+++ b/docs/_docs/code-snippets/java/src/main/resources/keystore/trust.jks
Binary files differ
diff --git a/docs/_docs/code-snippets/k8s/cluster-role.yaml b/docs/_docs/code-snippets/k8s/cluster-role.yaml
new file mode 100644
index 0000000..8d30884
--- /dev/null
+++ b/docs/_docs/code-snippets/k8s/cluster-role.yaml
@@ -0,0 +1,45 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#tag::config-block[]
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: ignite
+  namespace: ignite
+rules:
+- apiGroups:
+  - ""
+  resources: # Here are the resources you can access
+  - pods
+  - endpoints
+  verbs: # That is what you can do with them
+  - get
+  - list
+  - watch
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: ignite 
+roleRef:
+  kind: ClusterRole
+  name: ignite 
+  apiGroup: rbac.authorization.k8s.io
+subjects:
+- kind: ServiceAccount
+  name: ignite
+  namespace: ignite
+#end::config-block[]
diff --git a/docs/_docs/code-snippets/k8s/service-account.yaml b/docs/_docs/code-snippets/k8s/service-account.yaml
new file mode 100644
index 0000000..0e2e63a
--- /dev/null
+++ b/docs/_docs/code-snippets/k8s/service-account.yaml
@@ -0,0 +1,22 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#tag::config-block[]
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+    name: ignite
+    namespace: ignite
+#end::config-block[]
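+
+# Equivalent to the imperative form used in setup.sh:
+#   kubectl create sa ignite -n ignite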
diff --git a/docs/_docs/code-snippets/k8s/service.yaml b/docs/_docs/code-snippets/k8s/service.yaml
new file mode 100644
index 0000000..6858bb7
--- /dev/null
+++ b/docs/_docs/code-snippets/k8s/service.yaml
@@ -0,0 +1,43 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#tag::config-block[]
+apiVersion: v1
+kind: Service
+metadata: 
+  # The name must be equal to TcpDiscoveryKubernetesIpFinder.serviceName
+  name: ignite-service
+  # The name must be equal to TcpDiscoveryKubernetesIpFinder.namespace
+  namespace: ignite
+  labels:
+    app: ignite
+spec:
+  type: LoadBalancer
+  ports:
+    - name: rest
+      port: 8080
+      targetPort: 8080
+    - name: thinclients
+      port: 10800
+      targetPort: 10800
+  # Optional: uncomment 'sessionAffinity' if client applications connect to the
+  # cluster from outside Kubernetes; it is not needed when the cluster and the
+  # applications are both deployed within Kubernetes.
+  #  sessionAffinity: ClientIP
+  selector:
+    # Must be equal to the label set for pods.
+    app: ignite
+status:
+  loadBalancer: {}
+#end::config-block[]
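+
+# Usage sketch (assumes kubectl already targets your cluster):
+#   kubectl create -f service.yaml
+# The LoadBalancer exposes the REST (8080) and thin client (10800) ports, and
+# TcpDiscoveryKubernetesIpFinder resolves node IPs via this service's endpoints.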
diff --git a/docs/_docs/code-snippets/k8s/setup.sh b/docs/_docs/code-snippets/k8s/setup.sh
new file mode 100755
index 0000000..b890baa
--- /dev/null
+++ b/docs/_docs/code-snippets/k8s/setup.sh
@@ -0,0 +1,96 @@
+#!/bin/bash
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+ver=2.9.1
+
+if [ $# -eq 0 ]
+  then
+    echo "Usage: $0 <stateless|stateful>"
+    exit 1
+fi
+
+mode=$1
+
+kubectl delete configmap ignite-config -n ignite --ignore-not-found
+
+# Clean up workloads from any previous run, regardless of mode.
+kubectl delete deployment ignite-cluster -n ignite --ignore-not-found
+kubectl delete statefulset ignite-cluster -n ignite --ignore-not-found
+
+kubectl delete service ignite-service -n ignite --ignore-not-found
+
+kubectl delete clusterrole ignite -n ignite --ignore-not-found
+
+kubectl delete clusterrolebinding ignite -n ignite --ignore-not-found
+
+kubectl delete namespace ignite --ignore-not-found
+
+
+# tag::create-namespace[]
+kubectl create namespace ignite
+# end::create-namespace[]
+
+# tag::create-service[]
+kubectl create -f service.yaml
+# end::create-service[]
+
+
+# tag::create-service-account[]
+kubectl create sa ignite -n ignite
+# end::create-service-account[]
+
+# tag::create-cluster-role[]
+kubectl create -f cluster-role.yaml
+# end::create-cluster-role[]
+
+
+if [ "$mode" = "stateful" ]; then
+    cd stateful
+    sed -e "s/{version}/$ver/" statefulset-template.yaml > statefulset.yaml
+else
+    cd stateless
+    sed -e "s/{version}/$ver/" deployment-template.yaml > deployment.yaml
+fi
+
+# tag::create-configmap[]
+kubectl create configmap ignite-config -n ignite --from-file=node-configuration.xml
+# end::create-configmap[]
+
+if [ "$mode" = "stateful" ]; then
+
+  # tag::create-statefulset[]
+  kubectl create -f statefulset.yaml
+  # end::create-statefulset[]
+  rm statefulset.yaml
+
+else
+
+  # tag::create-deployment[]
+  kubectl create -f deployment.yaml
+  # end::create-deployment[]
+  rm deployment.yaml
+
+fi
+
+
+# tag::get-pods[]
+kubectl get pods -n ignite
+# end::get-pods[]
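+
+# Usage sketch:
+#   ./setup.sh stateful    # persistent cluster backed by a StatefulSet
+#   ./setup.sh stateless   # in-memory cluster backed by a Deployment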
diff --git a/docs/_docs/code-snippets/k8s/stateful/node-configuration.xml b/docs/_docs/code-snippets/k8s/stateful/node-configuration.xml
new file mode 100644
index 0000000..d4d6292
--- /dev/null
+++ b/docs/_docs/code-snippets/k8s/stateful/node-configuration.xml
@@ -0,0 +1,55 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
+    xsi:schemaLocation="http://www.springframework.org/schema/beans         
+    http://www.springframework.org/schema/beans/spring-beans.xsd">
+
+    <!-- tag::config-block[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+        <property name="workDirectory" value="/ignite/work"/>
+
+        <property name="dataStorageConfiguration">
+            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
+                <property name="defaultDataRegionConfiguration">
+                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
+                        <property name="persistenceEnabled" value="true"/>
+                    </bean>
+                </property>
+
+                <property name="walPath" value="/ignite/wal"/>
+                <property name="walArchivePath" value="/ignite/walarchive"/>
+            </bean>
+
+        </property>
+
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
+                        <property name="namespace" value="ignite"/>
+                        <property name="serviceName" value="ignite-service"/>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+
+    </bean>
+    <!-- end::config-block[] -->
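+    <!-- Note: /ignite/work, /ignite/wal and /ignite/walarchive above must match
+         the volumeMounts declared in statefulset-template.yaml. -->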
+</beans>
diff --git a/docs/_docs/code-snippets/k8s/stateful/statefulset-template.yaml b/docs/_docs/code-snippets/k8s/stateful/statefulset-template.yaml
new file mode 100644
index 0000000..8c665e8
--- /dev/null
+++ b/docs/_docs/code-snippets/k8s/stateful/statefulset-template.yaml
@@ -0,0 +1,96 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#tag::config-block[]
+# An example of a Kubernetes configuration for pod deployment.
+apiVersion: apps/v1 
+kind: StatefulSet 
+metadata:
+  # Cluster name.
+  name: ignite-cluster
+  namespace: ignite
+spec:
+  # The initial number of pods to be started by Kubernetes.
+  replicas: 2
+  serviceName: ignite
+  selector:
+    matchLabels:
+      app: ignite
+  template:
+    metadata:
+      labels:
+        app: ignite 
+    spec:
+      serviceAccountName: ignite 
+      terminationGracePeriodSeconds: 60000 
+      containers:
+        # Custom pod name.
+      - name: ignite-node
+        image: apacheignite/ignite:{version}
+        env:
+        - name: OPTION_LIBS
+          value: ignite-kubernetes,ignite-rest-http
+        - name: CONFIG_URI
+          value: file:///ignite/config/node-configuration.xml
+        - name: JVM_OPTS
+          value: "-DIGNITE_WAL_MMAP=false"
+        ports:
+        # Ports to open.
+        - containerPort: 47100 # communication SPI port
+        - containerPort: 47500 # discovery SPI port
+        - containerPort: 49112 # JMX port
+        - containerPort: 10800 # thin clients/JDBC driver port
+        - containerPort: 8080 # REST API
+        volumeMounts:
+        - mountPath: /ignite/config
+          name: config-vol
+        - mountPath: /ignite/work
+          name: work-vol
+        - mountPath: /ignite/wal
+          name: wal-vol
+        - mountPath: /ignite/walarchive
+          name: walarchive-vol
+      securityContext:
+        fsGroup: 2000 # try removing this if you have permission issues
+      volumes:
+      - name: config-vol
+        configMap:
+          name: ignite-config
+  volumeClaimTemplates:
+  - metadata:
+      name: work-vol
+    spec:
+      accessModes: [ "ReadWriteOnce" ]
+#      storageClassName: "ignite-persistence-storage-class"
+      resources:
+        requests:
+          storage: "1Gi" # make sure to provide enough space for your application data
+  - metadata:
+      name: wal-vol
+    spec:
+      accessModes: [ "ReadWriteOnce" ]
+#      storageClassName: "ignite-wal-storage-class"
+      resources:
+        requests:
+          storage: "1Gi" 
+  - metadata:
+      name: walarchive-vol
+    spec:
+      accessModes: [ "ReadWriteOnce" ]
+#      storageClassName: "ignite-wal-storage-class"
+      resources:
+        requests:
+          storage: "1Gi"
+#end::config-block[]
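+
+# Each replica gets its own work/wal/walarchive PersistentVolumeClaim from the
+# volumeClaimTemplates above. Scaling sketch (assumes the 'ignite' namespace):
+#   kubectl scale statefulset ignite-cluster -n ignite --replicas=3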
diff --git a/docs/_docs/code-snippets/k8s/stateless/deployment-template.yaml b/docs/_docs/code-snippets/k8s/stateless/deployment-template.yaml
new file mode 100644
index 0000000..fe388d8
--- /dev/null
+++ b/docs/_docs/code-snippets/k8s/stateless/deployment-template.yaml
@@ -0,0 +1,60 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#tag::config-block[]
+# An example of a Kubernetes configuration for pod deployment.
+apiVersion: apps/v1 
+kind: Deployment
+metadata:
+  # Cluster name.
+  name: ignite-cluster
+  namespace: ignite
+spec:
+  # The initial number of pods to be started by Kubernetes.
+  replicas: 2
+  selector:
+    matchLabels:
+      app: ignite
+  template:
+    metadata:
+      labels:
+        app: ignite 
+    spec:
+      serviceAccountName: ignite 
+      terminationGracePeriodSeconds: 60000 
+      containers:
+        # Custom pod name.
+      - name: ignite-node
+        image: apacheignite/ignite:{version}
+        env:
+        - name: OPTION_LIBS
+          value: ignite-kubernetes,ignite-rest-http
+        - name: CONFIG_URI
+          value: file:///ignite/config/node-configuration.xml
+        ports:
+        # Ports to open.
+        - containerPort: 47100 # communication SPI port
+        - containerPort: 47500 # discovery SPI port
+        - containerPort: 49112 # default JMX port
+        - containerPort: 10800 # thin clients/JDBC driver port
+        - containerPort: 8080 # REST API
+        volumeMounts:
+        - mountPath: /ignite/config
+          name: config-vol
+      volumes:
+      - name: config-vol
+        configMap:
+          name: ignite-config
+#end::config-block[]
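+
+# Stateless nodes keep data in RAM only; new replicas join the cluster through
+# the Kubernetes IP finder. Scaling sketch:
+#   kubectl scale deployment ignite-cluster -n ignite --replicas=3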
diff --git a/docs/_docs/code-snippets/k8s/stateless/node-configuration.xml b/docs/_docs/code-snippets/k8s/stateless/node-configuration.xml
new file mode 100644
index 0000000..d0f56f0
--- /dev/null
+++ b/docs/_docs/code-snippets/k8s/stateless/node-configuration.xml
@@ -0,0 +1,39 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans"
+       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+       xsi:schemaLocation="
+        http://www.springframework.org/schema/beans
+        http://www.springframework.org/schema/beans/spring-beans.xsd">
+
+    <!-- tag::config-block[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
+                        <property name="namespace" value="ignite"/>
+                        <property name="serviceName" value="ignite-service"/>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+    </bean>
+    <!-- end::config-block[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/nodejs/authentication.js b/docs/_docs/code-snippets/nodejs/authentication.js
new file mode 100644
index 0000000..6c00548
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/authentication.js
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+const IgniteClient = require('apache-ignite-client');
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+
+async function connectClient() {
+    const igniteClient = new IgniteClient(onStateChanged);
+    try {
+        //tag::auth[]
+        const ENDPOINT = 'localhost:10800';
+        const USER_NAME = 'ignite';
+        const PASSWORD = 'ignite';
+
+        const igniteClientConfiguration = new IgniteClientConfiguration(
+            ENDPOINT).setUserName(USER_NAME).setPassword(PASSWORD);
+        //end::auth[]
+        // Connect to Ignite node
+        await igniteClient.connect(igniteClientConfiguration);
+    } catch (err) {
+        console.log(err.message);
+    }
+}
+
+function onStateChanged(state, reason) {
+    if (state === IgniteClient.STATE.CONNECTED) {
+        console.log('Client is started');
+    } else if (state === IgniteClient.STATE.CONNECTING) {
+        console.log('Client is connecting');
+    } else if (state === IgniteClient.STATE.DISCONNECTED) {
+        console.log('Client is stopped');
+        if (reason) {
+            console.log(reason);
+        }
+    }
+}
+
+connectClient();
+//end::example-block[]
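+// Note: username/password authentication requires the server to run with
+// persistence enabled and IgniteConfiguration.authenticationEnabled set to
+// true; 'ignite'/'ignite' are the default superuser credentials.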
diff --git a/docs/_docs/code-snippets/nodejs/binary-types.js b/docs/_docs/code-snippets/nodejs/binary-types.js
new file mode 100644
index 0000000..c6b9867
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/binary-types.js
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+const IgniteClient = require('apache-ignite-client');
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+const ObjectType = IgniteClient.ObjectType;
+const CacheEntry = IgniteClient.CacheEntry;
+const ComplexObjectType = IgniteClient.ComplexObjectType;
+
+class Person {
+    constructor(id = null, name = null, salary = null) {
+        this.id = id;
+        this.name = name;
+        this.salary = salary;
+    }
+}
+
+async function putGetComplexAndBinaryObjects() {
+    const igniteClient = new IgniteClient();
+    try {
+        await igniteClient.connect(new IgniteClientConfiguration('127.0.0.1:10800'));
+        const cache = await igniteClient.getOrCreateCache('myPersonCache');
+        // Complex Object type for JavaScript Person class instances
+        const personComplexObjectType = new ComplexObjectType(new Person(0, '', 0))
+            .setFieldType('id', ObjectType.PRIMITIVE_TYPE.INTEGER);
+        // Set cache key and value types
+        cache.setKeyType(ObjectType.PRIMITIVE_TYPE.INTEGER)
+            .setValueType(personComplexObjectType);
+        // Put Complex Objects to the cache
+        await cache.put(1, new Person(1, 'John Doe', 1000));
+        await cache.put(2, new Person(2, 'Jane Roe', 2000));
+        // Get Complex Object; returned value is an instance of Person class
+        const person = await cache.get(1);
+        console.log(person);
+
+        // New CacheClient instance of the same cache to operate with BinaryObjects
+        const binaryCache = igniteClient.getCache('myPersonCache')
+            .setKeyType(ObjectType.PRIMITIVE_TYPE.INTEGER);
+        // Get Complex Object from the cache in a binary form, returned value is an instance of BinaryObject class
+        let binaryPerson = await binaryCache.get(2);
+        console.log('Binary form of Person:');
+        for (let fieldName of binaryPerson.getFieldNames()) {
+            let fieldValue = await binaryPerson.getField(fieldName);
+            console.log(fieldName + ' : ' + fieldValue);
+        }
+        // Modify Binary Object and put it to the cache
+        binaryPerson.setField('id', 3, ObjectType.PRIMITIVE_TYPE.INTEGER)
+            .setField('name', 'Mary Major');
+        await binaryCache.put(3, binaryPerson);
+
+        // Get Binary Object from the cache and convert it to JavaScript Object
+        binaryPerson = await binaryCache.get(3);
+        console.log(await binaryPerson.toObject(personComplexObjectType));
+
+        await igniteClient.destroyCache('myPersonCache');
+    }
+    catch (err) {
+        console.log(err.message);
+    }
+    finally {
+        igniteClient.disconnect();
+    }
+}
+
+putGetComplexAndBinaryObjects();
+//end::example-block[]
diff --git a/docs/_docs/code-snippets/nodejs/conf1.js b/docs/_docs/code-snippets/nodejs/conf1.js
new file mode 100644
index 0000000..ea39c61
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/conf1.js
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+//tag::conf1[]
+const IgniteClient = require('apache-ignite-client');
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+
+const igniteClientConfiguration = new IgniteClientConfiguration('127.0.0.1:10800');
+
+const igniteClient = new IgniteClient(function onStateChanged(state, reason) {
+    if (state === IgniteClient.STATE.CONNECTED) {
+        console.log('Client is started');
+    } else if (state === IgniteClient.STATE.DISCONNECTED) {
+        console.log('Client is stopped');
+        if (reason) {
+            console.log(reason);
+        }
+    }
+});
+//end::conf1[]
+igniteClient.connect(igniteClientConfiguration).then(() => igniteClient.disconnect());
+//end::example-block[]
diff --git a/docs/_docs/code-snippets/nodejs/conf2.js b/docs/_docs/code-snippets/nodejs/conf2.js
new file mode 100644
index 0000000..da9c7e3
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/conf2.js
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+//tag::conf2[]
+const IgniteClient = require('apache-ignite-client');
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+
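+// The first setConnectionOptions() argument toggles TLS; the second carries
+// connection options such as 'timeout' (0 disables the operation timeout).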
+const igniteClientConfiguration = new IgniteClientConfiguration('127.0.0.1:10800')
+    .setUserName('ignite')
+    .setPassword('ignite')
+    .setConnectionOptions(false, {'timeout': 0});
+//end::conf2[]
+
+const igniteClient = new IgniteClient(function onStateChanged(state, reason) {
+    if (state === IgniteClient.STATE.CONNECTED) {
+        console.log('Client is started');
+    } else if (state === IgniteClient.STATE.DISCONNECTED) {
+        console.log('Client is stopped');
+        if (reason) {
+            console.log(reason);
+        }
+    }
+});
+igniteClient.connect(igniteClientConfiguration).then(() => igniteClient.disconnect());
+//end::example-block[]
diff --git a/docs/_docs/code-snippets/nodejs/configuring-cache-1.js b/docs/_docs/code-snippets/nodejs/configuring-cache-1.js
new file mode 100644
index 0000000..42e0f86
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/configuring-cache-1.js
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+const IgniteClient = require('apache-ignite-client');
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+
+async function getOrCreateCacheByName() {
+    const igniteClient = new IgniteClient();
+    try {
+        await igniteClient.connect(new IgniteClientConfiguration('127.0.0.1:10800'));
+        // Get or create cache by name
+        const cache = await igniteClient.getOrCreateCache('myCache');
+
+        // Perform cache key-value operations
+        // ...
+
+        // Destroy cache
+        await igniteClient.destroyCache('myCache');
+    }
+    catch (err) {
+        console.log(err.message);
+    }
+    finally {
+        igniteClient.disconnect();
+    }
+}
+
+getOrCreateCacheByName();
+//end::example-block[]
diff --git a/docs/_docs/code-snippets/nodejs/configuring-cache-2.js b/docs/_docs/code-snippets/nodejs/configuring-cache-2.js
new file mode 100644
index 0000000..ded0f0b
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/configuring-cache-2.js
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+const IgniteClient = require('apache-ignite-client');
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+const CacheConfiguration = IgniteClient.CacheConfiguration;
+
+async function createCacheByConfiguration() {
+    const igniteClient = new IgniteClient();
+    try {
+        await igniteClient.connect(new IgniteClientConfiguration('127.0.0.1:10800'));
+        // Create cache by name and configuration
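+        // Unlike getOrCreateCache(), createCache() fails if a cache with this
+        // name already exists.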
+        const cache = await igniteClient.createCache(
+            'myCache',
+            new CacheConfiguration().setSqlSchema('PUBLIC'));
+    }
+    catch (err) {
+        console.log(err.message);
+    }
+    finally {
+        igniteClient.disconnect();
+    }
+}
+
+createCacheByConfiguration();
+//end::example-block[]
diff --git a/docs/_docs/code-snippets/nodejs/connecting.js b/docs/_docs/code-snippets/nodejs/connecting.js
new file mode 100644
index 0000000..6143940
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/connecting.js
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+const IgniteClient = require('apache-ignite-client');
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+
+async function connectClient() {
+    const igniteClient = new IgniteClient(onStateChanged);
+    try {
+        const igniteClientConfiguration = new IgniteClientConfiguration(
+            '127.0.0.1:10800', '127.0.0.1:10801', '127.0.0.1:10802');
+        // Connect to Ignite node
+        await igniteClient.connect(igniteClientConfiguration);
+    }
+    catch (err) {
+        console.log(err.message);
+    }
+}
+
+function onStateChanged(state, reason) {
+    if (state === IgniteClient.STATE.CONNECTED) {
+        console.log('Client is started');
+    }
+    else if (state === IgniteClient.STATE.CONNECTING) {
+        console.log('Client is connecting');
+    }
+    else if (state === IgniteClient.STATE.DISCONNECTED) {
+        console.log('Client is stopped');
+        if (reason) {
+            console.log(reason);
+        }
+    }
+}
+
+connectClient();
+//end::example-block[]
diff --git a/docs/_docs/code-snippets/nodejs/enabling-debug.js b/docs/_docs/code-snippets/nodejs/enabling-debug.js
new file mode 100644
index 0000000..31bd5ce
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/enabling-debug.js
@@ -0,0 +1,22 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+const IgniteClient = require('apache-ignite-client');
+
+const igniteClient = new IgniteClient();
+igniteClient.setDebug(true);
+//end::example-block[]
diff --git a/docs/_docs/code-snippets/nodejs/get-existing-cache.js b/docs/_docs/code-snippets/nodejs/get-existing-cache.js
new file mode 100644
index 0000000..4c0e6e8
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/get-existing-cache.js
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+const IgniteClient = require('apache-ignite-client');
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+
+async function getExistingCache() {
+    const igniteClient = new IgniteClient();
+    try {
+        await igniteClient.connect(new IgniteClientConfiguration('127.0.0.1:10800'));
+        // Get existing cache by name
+        const cache = igniteClient.getCache('myCache');
+    }
+    catch (err) {
+        console.log(err.message);
+    }
+    finally {
+        igniteClient.disconnect();
+    }
+}
+
+getExistingCache();
+//end::example-block[]
diff --git a/docs/_docs/code-snippets/nodejs/initialize.js b/docs/_docs/code-snippets/nodejs/initialize.js
new file mode 100644
index 0000000..c2d8f4b
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/initialize.js
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+const IgniteClient = require('apache-ignite-client');
+
+const igniteClient = new IgniteClient(onStateChanged);
+
+function onStateChanged(state, reason) {
+    if (state === IgniteClient.STATE.CONNECTED) {
+        console.log('Client is started');
+    }
+    else if (state === IgniteClient.STATE.DISCONNECTED) {
+        console.log('Client is stopped');
+        if (reason) {
+            console.log(reason);
+        }
+    }
+}
+//end::example-block[]
diff --git a/docs/_docs/code-snippets/nodejs/key-value.js b/docs/_docs/code-snippets/nodejs/key-value.js
new file mode 100644
index 0000000..f7d5165
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/key-value.js
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+const IgniteClient = require('apache-ignite-client');
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+const ObjectType = IgniteClient.ObjectType;
+const CacheEntry = IgniteClient.CacheEntry;
+
+async function performCacheKeyValueOperations() {
+    const igniteClient = new IgniteClient();
+    try {
+        await igniteClient.connect(new IgniteClientConfiguration('127.0.0.1:10800'));
+        const cache = (await igniteClient.getOrCreateCache('myCache'))
+            .setKeyType(ObjectType.PRIMITIVE_TYPE.INTEGER);
+        // Put and get value
+        await cache.put(1, 'abc');
+        const value = await cache.get(1);
+        console.log(value);
+
+        // Put and get multiple values using putAll()/getAll() methods
+        await cache.putAll([new CacheEntry(2, 'value2'), new CacheEntry(3, 'value3')]);
+        const values = await cache.getAll([1, 2, 3]);
+        console.log(values.map(val => val.getValue()));
+
+        // Removes all entries from the cache
+        await cache.clear();
+    }
+    catch (err) {
+        console.log(err.message);
+    }
+    finally {
+        igniteClient.disconnect();
+    }
+}
+
+performCacheKeyValueOperations();
+//end::example-block[]
diff --git a/docs/_docs/code-snippets/nodejs/scan-query.js b/docs/_docs/code-snippets/nodejs/scan-query.js
new file mode 100644
index 0000000..fa4e281
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/scan-query.js
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+const IgniteClient = require('apache-ignite-client');
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+const ObjectType = IgniteClient.ObjectType;
+const CacheEntry = IgniteClient.CacheEntry;
+const ScanQuery = IgniteClient.ScanQuery;
+
+async function performScanQuery() {
+    const igniteClient = new IgniteClient();
+    try {
+        await igniteClient.connect(new IgniteClientConfiguration('127.0.0.1:10800'));
+        const cache = (await igniteClient.getOrCreateCache('myCache')).setKeyType(ObjectType.PRIMITIVE_TYPE.INTEGER);
+
+        // Put multiple values using putAll()
+        await cache.putAll([
+            new CacheEntry(1, 'value1'),
+            new CacheEntry(2, 'value2'),
+            new CacheEntry(3, 'value3')]);
+
+        // Create and configure scan query
+        const scanQuery = new ScanQuery()
+            .setPageSize(1);
+        // Obtain scan query cursor
+        const cursor = await cache.query(scanQuery);
+        // Get all cache entries returned by the scan query
+        for (let cacheEntry of await cursor.getAll()) {
+            console.log(cacheEntry.getValue());
+        }
+
+        await igniteClient.destroyCache('myCache');
+    } catch (err) {
+        console.log(err.message);
+    } finally {
+        igniteClient.disconnect();
+    }
+}
+
+performScanQuery();
+//end::example-block[]
diff --git a/docs/_docs/code-snippets/nodejs/scanquery.js b/docs/_docs/code-snippets/nodejs/scanquery.js
new file mode 100644
index 0000000..dd0e4c1
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/scanquery.js
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+const IgniteClient = require('apache-ignite-client');
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+const ObjectType = IgniteClient.ObjectType;
+const CacheEntry = IgniteClient.CacheEntry;
+const ScanQuery = IgniteClient.ScanQuery;
+
+async function performScanQuery() {
+    const igniteClient = new IgniteClient();
+    try {
+        await igniteClient.connect(new IgniteClientConfiguration('127.0.0.1:10800'));
+
+        //tag::scan-query[]
+        const cache = (await igniteClient.getOrCreateCache('myCache'))
+            .setKeyType(ObjectType.PRIMITIVE_TYPE.INTEGER);
+
+        // Put multiple values using putAll()
+        await cache.putAll([
+            new CacheEntry(1, 'value1'),
+            new CacheEntry(2, 'value2'),
+            new CacheEntry(3, 'value3')]);
+
+        // Create and configure scan query
+        const scanQuery = new ScanQuery()
+            .setPageSize(1);
+        // Obtain scan query cursor
+        const cursor = await cache.query(scanQuery);
+        // Get all cache entries returned by the scan query
+        for (let cacheEntry of await cursor.getAll()) {
+            console.log(cacheEntry.getValue());
+        }
+
+        //end::scan-query[]
+
+        await igniteClient.destroyCache('myCache');
+    }
+    catch (err) {
+        console.log(err.message);
+    }
+    finally {
+        igniteClient.disconnect();
+    }
+}
+
+performScanQuery();
+//end::example-block[]
diff --git a/docs/_docs/code-snippets/nodejs/sql-fields-query.js b/docs/_docs/code-snippets/nodejs/sql-fields-query.js
new file mode 100644
index 0000000..d9ebacad
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/sql-fields-query.js
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+const IgniteClient = require('apache-ignite-client');
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+const CacheConfiguration = IgniteClient.CacheConfiguration;
+const ObjectType = IgniteClient.ObjectType;
+const SqlFieldsQuery = IgniteClient.SqlFieldsQuery;
+
+async function performSqlFieldsQuery() {
+    const igniteClient = new IgniteClient();
+    try {
+        await igniteClient.connect(new IgniteClientConfiguration('127.0.0.1:10800'));
+        const cache = await igniteClient.getOrCreateCache('myPersonCache', new CacheConfiguration()
+            .setSqlSchema('PUBLIC'));
+
+        // Create table using SqlFieldsQuery
+        (await cache.query(new SqlFieldsQuery(
+            'CREATE TABLE Person (id INTEGER PRIMARY KEY, firstName VARCHAR, lastName VARCHAR, salary DOUBLE)'))).getAll();
+
+        // Insert data into the table
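+        // setArgTypes() pins the Ignite type of the leading argument(s); the
+        // remaining arguments use the default JavaScript-to-Ignite mapping.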
+        const insertQuery = new SqlFieldsQuery('INSERT INTO Person (id, firstName, lastName, salary) values (?, ?, ?, ?)')
+            .setArgTypes(ObjectType.PRIMITIVE_TYPE.INTEGER);
+        (await cache.query(insertQuery.setArgs(1, 'John', 'Doe', 1000))).getAll();
+        (await cache.query(insertQuery.setArgs(2, 'Jane', 'Roe', 2000))).getAll();
+
+        // Obtain sql fields cursor
+        const sqlFieldsCursor = await cache.query(
+            new SqlFieldsQuery("SELECT concat(firstName, ' ', lastName), salary from Person").setPageSize(1));
+
+        // Iterate over elements returned by the query
+        do {
+            console.log(await sqlFieldsCursor.getValue());
+        } while (sqlFieldsCursor.hasMore());
+
+        // Drop the table
+        (await cache.query(new SqlFieldsQuery("DROP TABLE Person"))).getAll();
+    } catch (err) {
+        console.log(err.message);
+    } finally {
+        igniteClient.disconnect();
+    }
+}
+
+performSqlFieldsQuery();
+//end::example-block[]
diff --git a/docs/_docs/code-snippets/nodejs/sql.js b/docs/_docs/code-snippets/nodejs/sql.js
new file mode 100644
index 0000000..59c28d9
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/sql.js
@@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+const IgniteClient = require('apache-ignite-client');
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+const CacheConfiguration = IgniteClient.CacheConfiguration;
+const QueryEntity = IgniteClient.QueryEntity;
+const QueryField = IgniteClient.QueryField;
+const ObjectType = IgniteClient.ObjectType;
+const ComplexObjectType = IgniteClient.ComplexObjectType;
+const CacheEntry = IgniteClient.CacheEntry;
+const SqlQuery = IgniteClient.SqlQuery;
+
+async function performSqlQuery() {
+    const igniteClient = new IgniteClient();
+    try {
+        await igniteClient.connect(new IgniteClientConfiguration('127.0.0.1:10800'));
+        //tag::sql[]
+        // Cache configuration required for sql query execution
+        const cacheConfiguration = new CacheConfiguration().
+            setQueryEntities(
+                new QueryEntity().
+                    setValueTypeName('Person').
+                    setFields([
+                        new QueryField('name', 'java.lang.String'),
+                        new QueryField('salary', 'java.lang.Double')
+                    ]));
+        const cache = (await igniteClient.getOrCreateCache('sqlQueryPersonCache', cacheConfiguration)).
+            setKeyType(ObjectType.PRIMITIVE_TYPE.INTEGER).
+            setValueType(new ComplexObjectType({ 'name' : '', 'salary' : 0 }, 'Person'));
+
+        // Put multiple values using putAll()
+        await cache.putAll([
+            new CacheEntry(1, { 'name' : 'John Doe', 'salary' : 1000 }),
+            new CacheEntry(2, { 'name' : 'Jane Roe', 'salary' : 2000 }),
+            new CacheEntry(3, { 'name' : 'Mary Major', 'salary' : 1500 })]);
+
+        // Create and configure sql query
+        const sqlQuery = new SqlQuery('Person', 'salary > ? and salary <= ?').
+            setArgs(900, 1600);
+        // Obtain sql query cursor
+        const cursor = await cache.query(sqlQuery);
+        // Get all cache entries returned by the sql query
+        for (let cacheEntry of await cursor.getAll()) {
+            console.log(cacheEntry.getValue());
+        }
+
+        //end::sql[]
+
+        await igniteClient.destroyCache('sqlQueryPersonCache');
+    }
+    catch (err) {
+        console.log(err.message);
+    }
+    finally {
+        igniteClient.disconnect();
+    }
+}
+
+performSqlQuery();
+//end::example-block[]
diff --git a/docs/_docs/code-snippets/nodejs/tls.js b/docs/_docs/code-snippets/nodejs/tls.js
new file mode 100644
index 0000000..b205f9d
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/tls.js
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+const tls = require('tls');
+
+const FS = require('fs');
+const IgniteClient = require("apache-ignite-client");
+const ObjectType = IgniteClient.ObjectType;
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+
+const ENDPOINT = 'localhost:10800';
+const USER_NAME = 'ignite';
+const PASSWORD = 'ignite';
+
+const TLS_KEY_FILE_NAME = __dirname + '/certs/client.key';
+const TLS_CERT_FILE_NAME = __dirname + '/certs/client.crt';
+const TLS_CA_FILE_NAME = __dirname + '/certs/ca.crt';
+
+const CACHE_NAME = 'AuthTlsExample_cache';
+
+// This example demonstrates how to establish a secure connection to an Ignite node
+// and use username/password authentication, as well as basic key-value operations
+// for primitive types:
+// - connects to a node using TLS, providing a username and password
+// - creates a cache if it doesn't exist
+//   - specifies the key and value types of the cache
+// - puts data of primitive types into the cache
+// - gets data from the cache
+// - destroys the cache
+class AuthTlsExample {
+
+    async start() {
+        const igniteClient = new IgniteClient(this.onStateChanged.bind(this));
+        igniteClient.setDebug(true);
+        try {
+            const connectionOptions = {
+                'key' : FS.readFileSync(TLS_KEY_FILE_NAME),
+                'cert' : FS.readFileSync(TLS_CERT_FILE_NAME),
+                'ca' : FS.readFileSync(TLS_CA_FILE_NAME)
+            };
+            await igniteClient.connect(new IgniteClientConfiguration(ENDPOINT)
+                .setUserName(USER_NAME)
+                .setPassword(PASSWORD)
+                .setConnectionOptions(true, connectionOptions));
+
+            const cache = (await igniteClient.getOrCreateCache(CACHE_NAME))
+                .setKeyType(ObjectType.PRIMITIVE_TYPE.INTEGER)
+                .setValueType(ObjectType.PRIMITIVE_TYPE.SHORT_ARRAY);
+
+            await this.putGetData(cache);
+
+            await igniteClient.destroyCache(CACHE_NAME);
+        }
+        catch (err) {
+            console.log('ERROR: ' + err.message);
+        }
+        finally {
+            igniteClient.disconnect();
+        }
+    }
+
+    async putGetData(cache) {
+        let keys = [1, 2, 3];
+        let values = keys.map(key => this.generateValue(key));
+
+        // Put multiple values in parallel (no inner 'await', otherwise the
+        // puts would complete sequentially before Promise.all even sees them)
+        await Promise.all([
+            cache.put(keys[0], values[0]),
+            cache.put(keys[1], values[1]),
+            cache.put(keys[2], values[2])
+        ]);
+        console.log('Cache values put successfully');
+
+        // Get values sequentially
+        let value;
+        for (let i = 0; i < keys.length; i++) {
+            value = await cache.get(keys[i]);
+            if (!this.compareValues(value, values[i])) {
+                console.log('Unexpected cache value!');
+                return;
+            }
+        }
+        console.log('Cache values retrieved successfully');
+    }
+
+    compareValues(array1, array2) {
+        return array1.length === array2.length &&
+            array1.every((value1, index) => value1 === array2[index]);
+    }
+
+    generateValue(key) {
+        const length = key + 5;
+        const result = new Array(length);
+        for (let i = 0; i < length; i++) {
+            result[i] = key * 10 + i;
+        }
+        return result;
+    }
+
+    onStateChanged(state, reason) {
+        if (state === IgniteClient.STATE.CONNECTED) {
+            console.log('Client is started');
+        }
+        else if (state === IgniteClient.STATE.DISCONNECTED) {
+            console.log('Client is stopped');
+            if (reason) {
+                console.log(reason);
+            }
+        }
+    }
+}
+
+const authTlsExample = new AuthTlsExample();
+authTlsExample.start().then();
+//end::example-block[]
diff --git a/docs/_docs/code-snippets/nodejs/types-mapping-configuration.js b/docs/_docs/code-snippets/nodejs/types-mapping-configuration.js
new file mode 100644
index 0000000..003c6ef
--- /dev/null
+++ b/docs/_docs/code-snippets/nodejs/types-mapping-configuration.js
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::example-block[]
+const IgniteClient = require('apache-ignite-client');
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+const ObjectType = IgniteClient.ObjectType;
+const MapObjectType = IgniteClient.MapObjectType;
+
+async function setCacheKeyValueTypes() {
+    const igniteClient = new IgniteClient();
+    try {
+        await igniteClient.connect(new IgniteClientConfiguration('127.0.0.1:10800'));
+        //tag::mapping[]
+        const cache = await igniteClient.getOrCreateCache('myCache');
+        // Set cache key/value types
+        cache.setKeyType(ObjectType.PRIMITIVE_TYPE.INTEGER)
+            .setValueType(new MapObjectType(
+                MapObjectType.MAP_SUBTYPE.LINKED_HASH_MAP,
+                ObjectType.PRIMITIVE_TYPE.SHORT,
+                ObjectType.PRIMITIVE_TYPE.BYTE_ARRAY));
+        //end::mapping[]
+        await cache.get(1);
+    } catch (err) {
+        console.log(err.message);
+    } finally {
+        igniteClient.disconnect();
+    }
+}
+
+setCacheKeyValueTypes();
+//end::example-block[]
diff --git a/docs/_docs/code-snippets/php/ConnectingToCluster.php b/docs/_docs/code-snippets/php/ConnectingToCluster.php
new file mode 100644
index 0000000..bf44b01
--- /dev/null
+++ b/docs/_docs/code-snippets/php/ConnectingToCluster.php
@@ -0,0 +1,39 @@
+<?php
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+require_once "/home/abudnikov/tmp/php-thin-client/vendor/autoload.php";
+
+#tag::connecting[]
+use Apache\Ignite\Client;
+use Apache\Ignite\ClientConfiguration;
+use Apache\Ignite\Exception\ClientException;
+
+function connectClient(): void
+{
+    $client = new Client();
+    try {
+        $clientConfiguration = new ClientConfiguration(
+            '127.0.0.1:10800', '127.0.0.1:10801', '127.0.0.1:10802');
+        // Connect to Ignite node
+        $client->connect($clientConfiguration);
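+        // Release the connection when the client is no longer needed
+        $client->disconnect();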
+    } catch (ClientException $e) {
+        echo($e->getMessage());
+    }
+}
+
+connectClient();
+#end::connecting[]
diff --git a/docs/_docs/code-snippets/php/Security.php b/docs/_docs/code-snippets/php/Security.php
new file mode 100644
index 0000000..2196f12
--- /dev/null
+++ b/docs/_docs/code-snippets/php/Security.php
@@ -0,0 +1,45 @@
+<?php
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+require_once "/home/abudnikov/tmp/php-thin-client/vendor/autoload.php";
+
+use Apache\Ignite\Client;
+use Apache\Ignite\ClientConfiguration;
+
+//tag::tls[]
+$tlsOptions = [
+    'local_cert' => '/path/to/client/cert',
+    'cafile' => '/path/to/ca/file',
+    'local_pk' => '/path/to/key/file'
+];
+
+$config = new ClientConfiguration('localhost:10800');
+$config->setTLSOptions($tlsOptions);
+
+$client = new Client();
+$client->connect($config);
+//end::tls[]
+
+//tag::authentication[]
+$config = new ClientConfiguration('localhost:10800');
+$config->setUserName('ignite');
+$config->setPassword('ignite');
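+// Note: authentication must be enabled on the server nodes
+// (authenticationEnabled=true, which requires native persistence).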
+//$config->setTLSOptions($tlsOptions);
+
+$client = new Client();
+$client->connect($config);
+//end::authentication[]
diff --git a/docs/_docs/code-snippets/php/UsingKeyValueApi.php b/docs/_docs/code-snippets/php/UsingKeyValueApi.php
new file mode 100644
index 0000000..949076e
--- /dev/null
+++ b/docs/_docs/code-snippets/php/UsingKeyValueApi.php
@@ -0,0 +1,134 @@
+<?php
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+require_once "/home/abudnikov/tmp/php-thin-client/vendor/autoload.php";
+
+use Apache\Ignite\Cache\CacheConfiguration;
+use Apache\Ignite\Cache\CacheEntry;
+use Apache\Ignite\Client;
+use Apache\Ignite\ClientConfiguration;
+use Apache\Ignite\Query\ScanQuery;
+use Apache\Ignite\Query\SqlFieldsQuery;
+
+$client = new Client();
+$clientConfiguration = new ClientConfiguration('127.0.0.1:10800');
+// Connect to Ignite node
+$client->connect($clientConfiguration);
+
+//tag::createCache[]
+$cacheCfg = new CacheConfiguration();
+$cacheCfg->setCacheMode(CacheConfiguration::CACHE_MODE_REPLICATED);
+$cacheCfg->setWriteSynchronizationMode(CacheConfiguration::WRITE_SYNC_MODE_FULL_SYNC);
+
+$cache = $client->getOrCreateCache('References', $cacheCfg);
+//end::createCache[]
+
+//tag::basicOperations[]
+$val = array();
+$keys = range(1, 100);
+foreach ($keys as $number) {
+    $val[] = new CacheEntry($number, strval($number));
+}
+$cache->putAll($val);
+
+$replace = $cache->replaceIfEquals(1, '2', '3');
+echo $replace ? 'true' : 'false'; //false
+echo "\r\n";
+
+$value = $cache->get(1);
+echo $value; //1
+echo "\r\n";
+
+$replace = $cache->replaceIfEquals(1, "1", 3);
+echo $replace ? 'true' : 'false'; //true
+echo "\r\n";
+
+$value = $cache->get(1);
+echo $value; //3
+echo "\r\n";
+
+$cache->put(101, '101');
+
+$cache->removeKeys($keys);
+$sizeIsOne = $cache->getSize() == 1;
+echo $sizeIsOne ? 'true' : 'false'; //true
+echo "\r\n";
+
+$value = $cache->get(101);
+echo $value; //101
+echo "\r\n";
+
+$cache->removeAll();
+$sizeIsZero = $cache->getSize() == 0;
+echo $sizeIsZero ? 'true' : 'false'; //true
+echo "\r\n";
+
+//end::basicOperations[]
+
+class Person
+{
+    public $id;
+    public $name;
+
+    public function __construct($id, $name)
+    {
+        $this->id = $id;
+        $this->name = $name;
+    }
+}
+
+//tag::scanQry[]
+$cache = $client->getOrCreateCache('personCache');
+
+$cache->put(1, new Person(1, 'John Smith'));
+$cache->put(2, new Person(2, 'John Johnson'));
+
+$qry = new ScanQuery();
+$cursor = $cache->query($qry);
+//end::scanQry[]
+
+$cache->removeAll();
+
+//tag::executingSql[]
+$create_table = new SqlFieldsQuery(
+    sprintf('CREATE TABLE IF NOT EXISTS Person (id INT PRIMARY KEY, name VARCHAR) WITH "VALUE_TYPE=%s"', Person::class)
+);
+$create_table->setSchema('PUBLIC');
+$cache->query($create_table)->getAll();
+
+$key = 1;
+$val = new Person(1, 'Person 1');
+
+$insert = new SqlFieldsQuery('INSERT INTO Person(id, name) VALUES(?, ?)');
+$insert->setArgs($val->id, $val->name);
+$insert->setSchema('PUBLIC');
+$cache->query($insert)->getAll();
+
+$select = new SqlFieldsQuery('SELECT name FROM Person WHERE id = ?');
+$select->setArgs($key);
+$select->setSchema('PUBLIC');
+$cursor = $cache->query($select);
+// Get the results; getAll() closes the cursor, so you do not have to call $cursor->close()
+$results = $cursor->getAll();
+
+if (count($results) > 0) {
+    echo 'name = ' . $results[0][0];
+    echo "\r\n";
+}
+
+//end::executingSql[]
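+
+// Cleanup (a hedged addition): drop the table and release the connection
+$drop = new SqlFieldsQuery('DROP TABLE IF EXISTS Person');
+$drop->setSchema('PUBLIC');
+$cache->query($drop)->getAll();
+$client->disconnect();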
diff --git a/docs/_docs/code-snippets/python/auth.py b/docs/_docs/code-snippets/python/auth.py
new file mode 100644
index 0000000..ce50457
--- /dev/null
+++ b/docs/_docs/code-snippets/python/auth.py
@@ -0,0 +1,33 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#tag::example-block[]
+from pyignite import Client
+import ssl
+
+#tag::no-ssl[]
+client = Client(username='ignite', password='ignite', use_ssl=False)
+#end::no-ssl[]
+
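+# When username and password are supplied, pyignite enables SSL by
+# default, so the certificate parameters below take effect.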
+client = Client(
+    ssl_cert_reqs=ssl.CERT_REQUIRED,
+    ssl_keyfile='/path/to/key/file',
+    ssl_certfile='/path/to/client/cert',
+    ssl_ca_certfile='/path/to/trusted/cert/or/chain',
+    username='ignite',
+    password='ignite',
+)
+
+client.connect('localhost', 10800)
+#end::example-block[]
diff --git a/docs/_docs/code-snippets/python/basic_operations.py b/docs/_docs/code-snippets/python/basic_operations.py
new file mode 100644
index 0000000..9f407a6
--- /dev/null
+++ b/docs/_docs/code-snippets/python/basic_operations.py
@@ -0,0 +1,42 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#tag::example-block[]
+from pyignite import Client
+
+client = Client()
+client.connect('127.0.0.1', 10800)
+
+# Create cache
+my_cache = client.create_cache('my cache')
+
+# Put value in cache
+my_cache.put('my key', 42)
+
+# Get value from cache
+result = my_cache.get('my key')
+print(result)  # 42
+
+result = my_cache.get('non-existent key')
+print(result)  # None
+
+# Get multiple values from cache
+result = my_cache.get_all([
+    'my key',
+    'non-existent key',
+    'other-key',
+])
+print(result)  # {'my key': 42}
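+
+# Cleanup (a hedged addition): remove the key and close the connection
+my_cache.remove_key('my key')
+client.close()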
+#end::example-block[]
diff --git a/docs/_docs/code-snippets/python/client_reconnect.py b/docs/_docs/code-snippets/python/client_reconnect.py
new file mode 100644
index 0000000..1aeca27
--- /dev/null
+++ b/docs/_docs/code-snippets/python/client_reconnect.py
@@ -0,0 +1,50 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#tag::example-block[]
+from pyignite import Client
+from pyignite.datatypes.cache_config import CacheMode
+from pyignite.datatypes.prop_codes import *
+from pyignite.exceptions import SocketError
+
+nodes = [
+    ('127.0.0.1', 10800),
+    ('217.29.2.1', 10800),
+    ('200.10.33.1', 10800),
+]
+
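+# With several addresses, the client fails over to the next node
+# when the current connection is lost.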
+client = Client(timeout=40.0)
+client.connect(nodes)
+print('Connected to {}'.format(client))
+
+my_cache = client.get_or_create_cache({
+    PROP_NAME: 'my_cache',
+    PROP_CACHE_MODE: CacheMode.REPLICATED,
+})
+my_cache.put('test_key', 0)
+
+# Abstract main loop
+while True:
+    try:
+        # Do the work
+        test_value = my_cache.get('test_key')
+        my_cache.put('test_key', test_value + 1)
+    except (OSError, SocketError) as e:
+        # Recover from the error (repeat the last command, check data
+        # consistency, or just continue - depends on the task)
+        print('Error: {}'.format(e))
+        print('Last value: {}'.format(my_cache.get('test_key')))
+        print('Reconnected to {}'.format(client))
+#end::example-block[]
diff --git a/docs/_docs/code-snippets/python/client_ssl.py b/docs/_docs/code-snippets/python/client_ssl.py
new file mode 100644
index 0000000..3904d0f
--- /dev/null
+++ b/docs/_docs/code-snippets/python/client_ssl.py
@@ -0,0 +1,29 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#tag::example-block[]
+from pyignite import Client
+import ssl
+
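+# Server-side prerequisite: SSL must be enabled for the client connector
+# (sslEnabled and an SslContextFactory in ClientConnectorConfiguration).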
+client = Client(
+    use_ssl=True,
+    ssl_cert_reqs=ssl.CERT_REQUIRED,
+    ssl_keyfile='/path/to/key/file',
+    ssl_certfile='/path/to/client/cert',
+    ssl_ca_certfile='/path/to/trusted/cert/or/chain',
+)
+
+client.connect('localhost', 10800)
+#end::example-block[]
diff --git a/docs/_docs/code-snippets/python/connect.py b/docs/_docs/code-snippets/python/connect.py
new file mode 100644
index 0000000..27a0bc8
--- /dev/null
+++ b/docs/_docs/code-snippets/python/connect.py
@@ -0,0 +1,22 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#tag::example-block[]
+from pyignite import Client
+
+# Open a connection
+client = Client()
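+# 10800 is the default port of the thin client connector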
+client.connect('127.0.0.1', 10800)
+#end::example-block[]
diff --git a/docs/_docs/code-snippets/python/create_cache.py b/docs/_docs/code-snippets/python/create_cache.py
new file mode 100644
index 0000000..be58156
--- /dev/null
+++ b/docs/_docs/code-snippets/python/create_cache.py
@@ -0,0 +1,25 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#tag::example-block[]
+from pyignite import Client
+
+# Open a connection
+client = Client()
+client.connect('127.0.0.1', 10800)
+
+# Create a cache
+my_cache = client.create_cache('myCache')
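+
+# get_or_create_cache('myCache') would return the same cache if it
+# already exists; my_cache.destroy() drops it.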
+#end::example-block[]
diff --git a/docs/_docs/code-snippets/python/create_cache_with_properties.py b/docs/_docs/code-snippets/python/create_cache_with_properties.py
new file mode 100644
index 0000000..2051beb
--- /dev/null
+++ b/docs/_docs/code-snippets/python/create_cache_with_properties.py
@@ -0,0 +1,52 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#tag::example-block[]
+from collections import OrderedDict
+
+from pyignite import Client, GenericObjectMeta
+from pyignite.datatypes import *
+from pyignite.datatypes.prop_codes import *
+
+# Open a connection
+client = Client()
+client.connect('127.0.0.1', 10800)
+
+cache_config = {
+    PROP_NAME: 'my_cache',
+    PROP_BACKUPS_NUMBER: 2,
+    PROP_CACHE_KEY_CONFIGURATION: [
+        {
+            'type_name': 'PersonKey',
+            'affinity_key_field_name': 'companyId'
+        }
+    ]
+}
+
+my_cache = client.create_cache(cache_config)
+
+
+class PersonKey(metaclass=GenericObjectMeta, type_name='PersonKey', schema=OrderedDict([
+    ('personId', IntObject),
+    ('companyId', IntObject),
+])):
+    pass
+
+
+personKey = PersonKey(personId=1, companyId=1)
+my_cache.put(personKey, 'test')
+
+print(my_cache.get(personKey))
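+
+# Because companyId is the affinity key, entries that share a companyId
+# are stored on the same node.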
+#end::example-block[]
diff --git a/docs/_docs/code-snippets/python/scan.py b/docs/_docs/code-snippets/python/scan.py
new file mode 100644
index 0000000..b048ca6
--- /dev/null
+++ b/docs/_docs/code-snippets/python/scan.py
@@ -0,0 +1,59 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#tag::example-block[]
+from pyignite import Client
+
+client = Client()
+client.connect('127.0.0.1', 10800)
+
+my_cache = client.create_cache('myCache')
+
+my_cache.put_all({'key_{}'.format(v): v for v in range(20)})
+# {
+#     'key_0': 0,
+#     'key_1': 1,
+#     'key_2': 2,
+#     ... 20 elements in total...
+#     'key_18': 18,
+#     'key_19': 19
+# }
+
+result = my_cache.scan()
+
+for k, v in result:
+    print(k, v)
+# 'key_17' 17
+# 'key_10' 10
+# 'key_6' 6
+# ... 20 elements in total...
+# 'key_16' 16
+# 'key_12' 12
+
+
+# tag::dict[]
+result = my_cache.scan()
+print(dict(result))
+# {
+#     'key_17': 17,
+#     'key_10': 10,
+#     'key_6': 6,
+#     ... 20 elements in total...
+#     'key_16': 16,
+#     'key_12': 12
+# }
+# end::dict[]
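+
+# scan() returns a generator; materialize it into a dict only when
+# the result set fits in memory.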
+
+#end::example-block[]
diff --git a/docs/_docs/code-snippets/python/sql.py b/docs/_docs/code-snippets/python/sql.py
new file mode 100644
index 0000000..bad59c4
--- /dev/null
+++ b/docs/_docs/code-snippets/python/sql.py
@@ -0,0 +1,66 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#tag::example-block[]
+from pyignite import Client
+
+client = Client()
+client.connect('127.0.0.1', 10800)
+
+CITY_CREATE_TABLE_QUERY = '''CREATE TABLE City (
+    ID INT(11),
+    Name CHAR(35),
+    CountryCode CHAR(3),
+    District CHAR(20),
+    Population INT(11),
+    PRIMARY KEY (ID, CountryCode)
+) WITH "affinityKey=CountryCode"'''
+
+client.sql(CITY_CREATE_TABLE_QUERY)
+
+CITY_CREATE_INDEX = '''CREATE INDEX idx_country_code ON city (CountryCode)'''
+
+client.sql(CITY_CREATE_INDEX)
+
+CITY_INSERT_QUERY = '''INSERT INTO City(
+    ID, Name, CountryCode, District, Population
+) VALUES (?, ?, ?, ?, ?)'''
+
+CITY_DATA = [
+    [3793, 'New York', 'USA', 'New York', 8008278],
+    [3794, 'Los Angeles', 'USA', 'California', 3694820],
+    [3795, 'Chicago', 'USA', 'Illinois', 2896016],
+    [3796, 'Houston', 'USA', 'Texas', 1953631],
+    [3797, 'Philadelphia', 'USA', 'Pennsylvania', 1517550],
+    [3798, 'Phoenix', 'USA', 'Arizona', 1321045],
+    [3799, 'San Diego', 'USA', 'California', 1223400],
+    [3800, 'Dallas', 'USA', 'Texas', 1188580],
+]
+
+for row in CITY_DATA:
+    client.sql(CITY_INSERT_QUERY, query_args=row)
+
+CITY_SELECT_QUERY = "SELECT * FROM City"
+
+cities = client.sql(CITY_SELECT_QUERY)
+for city in cities:
+    print(*city)
+
+#tag::field-names[]
+field_names = next(client.sql(CITY_SELECT_QUERY, include_field_names=True))
+print(field_names)
+#end::field-names[]
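+
+# Cleanup (a hedged addition): drop the table and close the connection
+client.sql('DROP TABLE City')
+client.close()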
+
+#end::example-block[]
diff --git a/docs/_docs/code-snippets/python/type_hints.py b/docs/_docs/code-snippets/python/type_hints.py
new file mode 100644
index 0000000..d31332b
--- /dev/null
+++ b/docs/_docs/code-snippets/python/type_hints.py
@@ -0,0 +1,48 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#tag::example-block[]
+from pyignite import Client
+from pyignite.datatypes import CharObject, ShortObject
+
+client = Client()
+client.connect('127.0.0.1', 10800)
+
+my_cache = client.get_or_create_cache('my cache')
+
+my_cache.put('my key', 42)
+# Value '42' takes 9 bytes of memory as a LongObject
+
+my_cache.put('my key', 42, value_hint=ShortObject)
+# Value '42' takes only 3 bytes as a ShortObject
+
+my_cache.put('a', 1)
+# 'a' is a key of type String
+
+my_cache.put('a', 2, key_hint=CharObject)
+# Another key 'a' of type CharObject is created
+
+value = my_cache.get('a')
+print(value)  # 1
+
+value = my_cache.get('a', key_hint=CharObject)
+print(value)  # 2
+
+# Now let us delete both keys at once
+my_cache.remove_keys([
+    'a',  # a default type key
+    ('a', CharObject),  # a key of type CharObject
+])
+#end::example-block[]
diff --git a/docs/_docs/code-snippets/xml/affinity-backup-filter.xml b/docs/_docs/code-snippets/xml/affinity-backup-filter.xml
new file mode 100644
index 0000000..47051a9
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/affinity-backup-filter.xml
@@ -0,0 +1,65 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <!-- tag::node-attribute[] -->
+        <property name="userAttributes">
+            <map>
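+                <!-- Nodes in other availability zones must set a different value here -->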
+                <entry key="AVAILABILITY_ZONE" value="us-east-1a"/>
+            </map>
+        </property>
+        <!-- end::node-attribute[] -->
+        <property name="cacheConfiguration">
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <property name="name" value="myCache"/>
+                <property name="backups" value="1"/>
+                <property name="affinity">
+                    <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
+                        <property name="affinityBackupFilter">
+                            <bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
+                                <constructor-arg>
+                                    <array value-type="java.lang.String">
+                                        <!-- Backups must go to different availability zones -->
+                                        <value>AVAILABILITY_ZONE</value>
+                                    </array>
+                                </constructor-arg>
+                            </bean>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/attribute-node-filter.xml b/docs/_docs/code-snippets/xml/attribute-node-filter.xml
new file mode 100644
index 0000000..80a1360
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/attribute-node-filter.xml
@@ -0,0 +1,58 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <!-- tag::node-attribute[] -->
+        <property name="userAttributes">
+            <map>
+                <entry key="host_myCache" value="true"/>
+            </map>
+        </property>
+        <!-- end::node-attribute[] -->
+        <!-- tag::cache-config[]  -->
+        <property name="cacheConfiguration">
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <property name="name" value="myCache"/>
+                <property name="nodeFilter">
+                    <bean class="org.apache.ignite.util.AttributeNodeFilter">
+                        <constructor-arg value="host_myCache"/>
+                        <constructor-arg value="true"/>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::cache-config[]  -->
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/binary-objects.xml b/docs/_docs/code-snippets/xml/binary-objects.xml
new file mode 100644
index 0000000..9ec5783
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/binary-objects.xml
@@ -0,0 +1,54 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+        <property name="binaryConfiguration">
+            <bean class="org.apache.ignite.configuration.BinaryConfiguration">
+                <property name="nameMapper" ref="globalNameMapper"/>
+                <property name="idMapper" ref="globalIdMapper"/>
+                <property name="typeConfigurations">
+                    <list>
+                        <bean class="org.apache.ignite.binary.BinaryTypeConfiguration">
+                            <property name="typeName" value="org.apache.ignite.examples.*"/>
+                            <property name="serializer" ref="exampleSerializer"/>
+                        </bean>
+                    </list>
+                </property>
+            </bean>
+        </property>
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/cache-configuration.xml b/docs/_docs/code-snippets/xml/cache-configuration.xml
new file mode 100644
index 0000000..398f938
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/cache-configuration.xml
@@ -0,0 +1,49 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="cacheConfiguration">
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <property name="name" value="myCache"/>
+                <property name="cacheMode" value="PARTITIONED"/>
+                <property name="backups" value="2"/>
+                <property name="rebalanceMode" value="SYNC"/>
+                <property name="writeSynchronizationMode" value="FULL_SYNC"/>
+                <property name="partitionLossPolicy" value="READ_ONLY_SAFE"/>
+                <!-- Other parameters -->
+            </bean>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/cache-groups.xml b/docs/_docs/code-snippets/xml/cache-groups.xml
new file mode 100644
index 0000000..03871f7
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/cache-groups.xml
@@ -0,0 +1,56 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="cacheConfiguration">
+            <list>
+                <!-- Partitioned cache for Persons data. -->
+                <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                    <property name="name" value="Person"/>
+                    <property name="backups" value="1"/>
+                    <!-- Group the cache belongs to. -->
+                    <property name="groupName" value="group1"/>
+                </bean>
+                <!-- Partitioned cache for Organizations data. -->
+                <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                    <property name="name" value="Organization"/>
+                    <property name="backups" value="1"/>
+                    <!-- Group the cache belongs to. -->
+                    <property name="groupName" value="group1"/>
+                </bean>
+            </list>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/cache-jdbc-pojo-store.xml b/docs/_docs/code-snippets/xml/cache-jdbc-pojo-store.xml
new file mode 100644
index 0000000..1cd948e
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/cache-jdbc-pojo-store.xml
@@ -0,0 +1,114 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans              http://www.springframework.org/schema/beans/spring-beans.xsd              http://www.springframework.org/schema/util              http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- Data source bean -->
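+    <!-- The MySQL JDBC driver must be on the classpath of the node -->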
+    <bean class="com.mysql.cj.jdbc.MysqlDataSource" id="mysqlDataSource">
+        <property name="URL" value="jdbc:mysql://[host]:[port]/[database]"/>
+        <property name="user" value="YOUR_USER_NAME"/>
+        <property name="password" value="YOUR_PASSWORD"/>
+    </bean>
+    <!-- Ignite Configuration -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="cacheConfiguration">
+            <list>
+                <!-- Configuration for PersonCache -->
+                <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                    <property name="name" value="PersonCache"/>
+                    <property name="cacheMode" value="PARTITIONED"/>
+                    <property name="atomicityMode" value="ATOMIC"/>
+                    <property name="cacheStoreFactory">
+                        <bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
+                            <property name="dataSourceBean" value="mysqlDataSource"/>
+                            <property name="dialect">
+                                <bean class="org.apache.ignite.cache.store.jdbc.dialect.MySQLDialect"/>
+                            </property>
+                            <property name="types">
+                                <list>
+                                    <bean class="org.apache.ignite.cache.store.jdbc.JdbcType">
+                                        <property name="cacheName" value="PersonCache"/>
+                                        <property name="keyType" value="java.lang.Integer"/>
+                                        <property name="valueType" value="org.apache.ignite.snippets.Person"/>
+                                        <!--Specify the schema if applicable -->
+                                        <!--property name="databaseSchema" value="MY_DB_SCHEMA"/-->
+                                        <property name="databaseTable" value="PERSON"/>
+                                        <property name="keyFields">
+                                            <list>
+                                                <bean class="org.apache.ignite.cache.store.jdbc.JdbcTypeField">
+                                                    <constructor-arg>
+                                                        <util:constant static-field="java.sql.Types.INTEGER"/>
+                                                    </constructor-arg>
+                                                    <constructor-arg value="id"/>
+                                                    <constructor-arg value="int"/>
+                                                    <constructor-arg value="id"/>
+                                                </bean>
+                                            </list>
+                                        </property>
+                                        <property name="valueFields">
+                                            <list>
+                                                <bean class="org.apache.ignite.cache.store.jdbc.JdbcTypeField">
+                                                    <constructor-arg>
+                                                        <util:constant static-field="java.sql.Types.INTEGER"/>
+                                                    </constructor-arg>
+                                                    <constructor-arg value="id"/>
+                                                    <constructor-arg value="int"/>
+                                                    <constructor-arg value="id"/>
+                                                </bean>
+                                                <bean class="org.apache.ignite.cache.store.jdbc.JdbcTypeField">
+                                                    <constructor-arg>
+                                                        <util:constant static-field="java.sql.Types.VARCHAR"/>
+                                                    </constructor-arg>
+                                                    <constructor-arg value="name"/>
+                                                    <constructor-arg value="java.lang.String"/>
+                                                    <constructor-arg value="name"/>
+                                                </bean>
+                                            </list>
+                                        </property>
+                                    </bean>
+                                </list>
+                            </property>
+                        </bean>
+                    </property>
+                    <property name="readThrough" value="true"/>
+                    <property name="writeThrough" value="true"/>
+                    <!-- Configure query entities if you want to use SQL queries -->
+                    <property name="queryEntities">
+                        <list>
+                            <bean class="org.apache.ignite.cache.QueryEntity">
+                                <property name="keyType" value="java.lang.Integer"/>
+                                <property name="valueType" value="org.apache.ignite.snippets.Person"/>
+                                <property name="keyFieldName" value="id"/>
+                                <property name="keyFields">
+                                    <list>
+                                        <value>id</value>
+                                    </list>
+                                </property>
+                                <property name="fields">
+                                    <map>
+                                        <entry key="name" value="java.lang.String"/>
+                                        <entry key="id" value="java.lang.Integer"/>
+                                    </map>
+                                </property>
+                            </bean>
+                        </list>
+                    </property>
+                </bean>
+                <!-- Provide similar configurations for other caches/tables -->
+            </list>
+        </property>
+    </bean>
+</beans>
diff --git a/docs/_docs/code-snippets/xml/cache-template.xml b/docs/_docs/code-snippets/xml/cache-template.xml
new file mode 100644
index 0000000..7ad5741
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/cache-template.xml
@@ -0,0 +1,49 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="cacheConfiguration">
+            <list>
+                <bean abstract="true" class="org.apache.ignite.configuration.CacheConfiguration" id="cache-template-bean">
+                    <!-- when you create a template via XML configuration, you must add an asterisk to the name of the template -->
+                    <property name="name" value="myCacheTemplate*"/>
+                    <property name="cacheMode" value="PARTITIONED"/>
+                    <property name="backups" value="2"/>
+                    <!-- Other cache parameters -->
+                </bean>
+            </list>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/client-behind-nat.xml b/docs/_docs/code-snippets/xml/client-behind-nat.xml
new file mode 100644
index 0000000..9a5ae3e
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/client-behind-nat.xml
@@ -0,0 +1,44 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="clientMode" value="true"/>
+        <property name="communicationSpi">
+            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
+                <property name="forceClientToServerConnections" value="true"/>
+            </bean>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/client-node.xml b/docs/_docs/code-snippets/xml/client-node.xml
new file mode 100644
index 0000000..5616550
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/client-node.xml
@@ -0,0 +1,50 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="clientMode" value="true"/>
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <!-- prevent this client from reconnecting on connection loss -->
+                <property name="clientReconnectDisabled" value="true"/>
+                <property name="ipFinder">
+
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+        <!-- tag::slow-client[] -->
+        <property name="communicationSpi">
+            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
+                <property name="slowClientQueueLimit" value="1000"/>
+            </bean>
+        </property>
+        <!-- end::slow-client[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
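
For readers who configure nodes in code rather than Spring XML, a minimal Java sketch of the same client-node settings follows (the wrapper class is illustrative and not part of this patch; the setters mirror the XML properties above):

    import java.util.Collections;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

    public class ClientNodeConfig {
        public static void main(String[] args) {
            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setClientMode(true);

            // Static IP finder with the same address range as the XML snippet.
            TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
            ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));

            TcpDiscoverySpi discovery = new TcpDiscoverySpi();
            discovery.setClientReconnectDisabled(true); // do not reconnect on connection loss
            discovery.setIpFinder(ipFinder);
            cfg.setDiscoverySpi(discovery);

            // Disconnect slow clients whose outbound message queue exceeds the limit.
            TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
            commSpi.setSlowClientQueueLimit(1000);
            cfg.setCommunicationSpi(commSpi);

            Ignition.start(cfg);
        }
    }
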
diff --git a/docs/_docs/code-snippets/xml/configure-backups.xml b/docs/_docs/code-snippets/xml/configure-backups.xml
new file mode 100644
index 0000000..7d50dee
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/configure-backups.xml
@@ -0,0 +1,54 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="cacheConfiguration">
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <!-- Set the cache name. -->
+                <property name="name" value="cacheName"/>
+                <!-- tag::cache-mode[] -->
+                <!-- Set the cache mode. -->
+                <property name="cacheMode" value="PARTITIONED"/>
+                <!-- end::cache-mode[] -->
+                <!-- Number of backup copies -->
+                <property name="backups" value="1"/>
+                <!-- tag::sync-mode[] -->
+
+                <property name="writeSynchronizationMode" value="FULL_SYNC"/>
+                <!-- end::sync-mode[] -->
+            </bean>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
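
A hedged Java equivalent of the backup and write-synchronization settings above (discovery left at its defaults for brevity; the class name is illustrative):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.cache.CacheWriteSynchronizationMode;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class BackupsConfig {
        public static void main(String[] args) {
            CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("cacheName");
            cacheCfg.setCacheMode(CacheMode.PARTITIONED);
            cacheCfg.setBackups(1); // one backup copy of each partition
            cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setCacheConfiguration(cacheCfg);
            Ignition.start(cfg);
        }
    }
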
diff --git a/docs/_docs/code-snippets/xml/configuring-metrics.xml b/docs/_docs/code-snippets/xml/configuring-metrics.xml
new file mode 100644
index 0000000..2199a00
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/configuring-metrics.xml
@@ -0,0 +1,89 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
+        <property name="dataStorageConfiguration">
+            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
+                <!-- tag::data-storage-metrics[] -->
+
+                <property name="metricsEnabled" value="true"/>
+
+                <!-- end::data-storage-metrics[] -->
+                <!-- tag::data-region-metrics[] -->
+                <property name="defaultDataRegionConfiguration">
+                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
+                        <!-- Enable metrics for the default data region. -->
+                        <property name="metricsEnabled" value="true"/>
+                        <!-- other properties -->
+                    </bean>
+                </property>
+                <property name="dataRegionConfigurations">
+                    <list>
+                        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
+                            <!-- Custom region name. -->
+                            <property name="name" value="myDataRegion"/>
+                            <!-- Enable metrics for this data region  -->
+                            <property name="metricsEnabled" value="true"/>
+
+                            <property name="persistenceEnabled" value="true"/>
+                            <!-- other properties -->
+                        </bean>
+                    </list>
+                </property>
+                <!-- end::data-region-metrics[] -->
+            </bean>
+        </property>
+        <!-- tag::cache-metrics[] -->
+        <property name="cacheConfiguration">
+            <list>
+                <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                    <property name="name" value="mycache"/>
+                    <!-- Enable statistics for the cache. -->
+                    <property name="statisticsEnabled" value="true"/>
+                </bean>
+            </list>
+        </property>
+        <!-- end::cache-metrics[] -->
+        <!-- tag::discovery[] -->
+        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <!--
+                        Ignite provides several options for automatic discovery that can be used
+                        instead of static IP-based discovery. For information on all options, refer
+                        to our documentation: https://ignite.apache.org/docs/latest/
+                    -->
+                    <!-- Static IP finder used for discovery of the initial nodes. -->
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with the actual host IP address. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
\ No newline at end of file
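
A hedged Java counterpart to the metrics switches above (class name is illustrative; each setter corresponds to a property in the XML):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.*;

    public class MetricsConfig {
        public static void main(String[] args) {
            DataStorageConfiguration storageCfg = new DataStorageConfiguration();
            storageCfg.setMetricsEnabled(true); // data storage metrics

            DataRegionConfiguration dfltRegion = new DataRegionConfiguration();
            dfltRegion.setMetricsEnabled(true); // metrics for the default data region
            storageCfg.setDefaultDataRegionConfiguration(dfltRegion);

            DataRegionConfiguration myRegion = new DataRegionConfiguration();
            myRegion.setName("myDataRegion");
            myRegion.setMetricsEnabled(true); // metrics for the custom data region
            myRegion.setPersistenceEnabled(true);
            storageCfg.setDataRegionConfigurations(myRegion);

            CacheConfiguration<?, ?> cacheCfg = new CacheConfiguration<>("mycache");
            cacheCfg.setStatisticsEnabled(true); // cache metrics

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDataStorageConfiguration(storageCfg);
            cfg.setCacheConfiguration(cacheCfg);
            Ignition.start(cfg);
        }
    }
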
diff --git a/docs/_docs/code-snippets/xml/custom-keys.xml b/docs/_docs/code-snippets/xml/custom-keys.xml
new file mode 100644
index 0000000..ad24fd0
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/custom-keys.xml
@@ -0,0 +1,70 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="cacheConfiguration">
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <property name="name" value="personCache"/>
+                <!-- Configure query entities -->
+                <property name="queryEntities">
+                    <list>
+                        <bean class="org.apache.ignite.cache.QueryEntity">
+                            <!-- Registering key's class. -->
+                            <property name="keyType" value="CustomKey"/>
+                            <!-- Registering value's class. -->
+                            <property name="valueType" value="org.apache.ignite.examples.Person"/>
+                            <!-- Defining all the fields that will be accessible from DML. -->
+                            <property name="fields">
+                                <map>
+                                    <entry key="firstName" value="java.lang.String"/>
+                                    <entry key="lastName" value="java.lang.String"/>
+                                    <entry key="intKeyField" value="java.lang.Integer"/>
+                                    <entry key="strKeyField" value="java.lang.String"/>
+                                </map>
+                            </property>
+                            <!-- Defining the subset of key's fields -->
+                            <property name="keyFields">
+                                <set>
+                                    <value>intKeyField</value>
+                                    <value>strKeyField</value>
+                                </set>
+                            </property>
+                        </bean>
+                    </list>
+                </property>
+            </bean>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
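
The query-entity registration above maps to Java roughly as follows (a sketch; the wrapper class is illustrative, while QueryEntity, setFields, and setKeyFields are the standard API):

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashSet;
    import java.util.LinkedHashMap;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.QueryEntity;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class CustomKeysConfig {
        public static void main(String[] args) {
            QueryEntity entity = new QueryEntity("CustomKey", "org.apache.ignite.examples.Person");

            // All fields that will be accessible from DML.
            LinkedHashMap<String, String> fields = new LinkedHashMap<>();
            fields.put("firstName", "java.lang.String");
            fields.put("lastName", "java.lang.String");
            fields.put("intKeyField", "java.lang.Integer");
            fields.put("strKeyField", "java.lang.String");
            entity.setFields(fields);

            // The subset of fields that belongs to the key.
            entity.setKeyFields(new HashSet<>(Arrays.asList("intKeyField", "strKeyField")));

            CacheConfiguration<?, ?> cacheCfg = new CacheConfiguration<>("personCache");
            cacheCfg.setQueryEntities(Collections.singletonList(entity));

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setCacheConfiguration(cacheCfg);
            Ignition.start(cfg);
        }
    }
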
diff --git a/docs/_docs/code-snippets/xml/data-regions-configuration.xml b/docs/_docs/code-snippets/xml/data-regions-configuration.xml
new file mode 100644
index 0000000..fb3614b
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/data-regions-configuration.xml
@@ -0,0 +1,90 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
+        <property name="dataStorageConfiguration">
+            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
+                <!-- tag::default[] -->
+                <!--
+                Default memory region that grows endlessly. Any cache will be bound to this memory region
+                unless another region is set in the cache's configuration.
+                -->
+                <property name="defaultDataRegionConfiguration">
+                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
+                        <property name="name" value="Default_Region"/>
+                        <!-- 100 MB memory region with disabled eviction. -->
+                        <property name="initialSize" value="#{100 * 1024 * 1024}"/>
+                    </bean>
+                </property>
+                <!-- end::default[] -->
+                <!-- tag::data-region[] -->
+                <property name="dataRegionConfigurations">
+                    <list>
+                        <!--
+                        40MB memory region with eviction enabled.
+                        -->
+                        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
+                            <property name="name" value="40MB_Region_Eviction"/>
+                            <!-- Memory region of 20 MB initial size. -->
+                            <property name="initialSize" value="#{20 * 1024 * 1024}"/>
+                            <!-- Maximum size is 40 MB. -->
+                            <property name="maxSize" value="#{40 * 1024 * 1024}"/>
+                            <!-- Enabling eviction for this memory region. -->
+                            <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
+                        </bean>
+                    </list>
+                </property>
+                <!-- end::data-region[] -->
+            </bean>
+        </property>
+        <!-- tag::caches[] -->
+        <property name="cacheConfiguration">
+            <list>
+                <!-- Cache that is mapped to a specific data region. -->
+                <bean class="org.apache.ignite.configuration.CacheConfiguration">
+
+                    <property name="name" value="SampleCache"/>
+                    <!--
+                    Assigning the cache to the `40MB_Region_Eviction` region.
+                    -->
+                    <property name="dataRegionName" value="40MB_Region_Eviction"/>
+                </bean>
+            </list>
+        </property>
+        <!-- end::caches[] -->
+        <!-- other properties -->
+        <!-- tag::discovery[] -->
+        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
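
A minimal Java counterpart for the data-region setup (sketch; class name is illustrative):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.*;

    public class DataRegionsConfig {
        public static void main(String[] args) {
            DataStorageConfiguration storageCfg = new DataStorageConfiguration();

            // Default region with a 100 MB initial size.
            DataRegionConfiguration dflt = new DataRegionConfiguration();
            dflt.setName("Default_Region");
            dflt.setInitialSize(100L * 1024 * 1024);
            storageCfg.setDefaultDataRegionConfiguration(dflt);

            // 40 MB region with page eviction enabled.
            DataRegionConfiguration region = new DataRegionConfiguration();
            region.setName("40MB_Region_Eviction");
            region.setInitialSize(20L * 1024 * 1024);
            region.setMaxSize(40L * 1024 * 1024);
            region.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);
            storageCfg.setDataRegionConfigurations(region);

            // Cache pinned to the eviction-enabled region.
            CacheConfiguration<?, ?> cacheCfg = new CacheConfiguration<>("SampleCache");
            cacheCfg.setDataRegionName("40MB_Region_Eviction");

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDataStorageConfiguration(storageCfg);
            cfg.setCacheConfiguration(cacheCfg);
            Ignition.start(cfg);
        }
    }
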
diff --git a/docs/_docs/code-snippets/xml/deployment.xml b/docs/_docs/code-snippets/xml/deployment.xml
new file mode 100644
index 0000000..93e609f
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/deployment.xml
@@ -0,0 +1,55 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="deploymentSpi">
+            <bean class="org.apache.ignite.spi.deployment.uri.UriDeploymentSpi">
+                <property name="temporaryDirectoryPath" value="/tmp/temp_ignite_libs"/>
+                <property name="uriList">
+                    <list>
+                        <!--tag::from-local-dir[] -->
+                        <value>file://freq=2000@localhost/home/username/user_libs</value>
+                        <!--end::from-local-dir[] -->
+                        <!--tag::from-url[] -->
+                        <value>http://username:password;freq=10000@www.mysite.com:110/ignite/user_libs</value>
+                        <!--end::from-url[] -->
+                    </list>
+                </property>
+            </bean>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
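
The UriDeploymentSpi settings translate to Java roughly as follows (a sketch; it assumes the ignite-urideploy module is on the classpath, and the class name is illustrative):

    import java.util.Arrays;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.deployment.uri.UriDeploymentSpi;

    public class UriDeploymentConfig {
        public static void main(String[] args) {
            UriDeploymentSpi deploymentSpi = new UriDeploymentSpi();
            deploymentSpi.setTemporaryDirectoryPath("/tmp/temp_ignite_libs");
            deploymentSpi.setUriList(Arrays.asList(
                // Local directory scanned every 2000 ms.
                "file://freq=2000@localhost/home/username/user_libs",
                // Remote URL scanned every 10000 ms.
                "http://username:password;freq=10000@www.mysite.com:110/ignite/user_libs"));

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDeploymentSpi(deploymentSpi);
            Ignition.start(cfg);
        }
    }
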
diff --git a/docs/_docs/code-snippets/xml/discovery-multicast.xml b/docs/_docs/code-snippets/xml/discovery-multicast.xml
new file mode 100644
index 0000000..6c1da5c
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/discovery-multicast.xml
@@ -0,0 +1,36 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
+                        <property name="multicastGroup" value="228.10.10.157"/>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
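
A short Java sketch of the multicast-based discovery above (class name is illustrative):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder;

    public class MulticastDiscoveryConfig {
        public static void main(String[] args) {
            TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
            ipFinder.setMulticastGroup("228.10.10.157");

            TcpDiscoverySpi discovery = new TcpDiscoverySpi();
            discovery.setIpFinder(ipFinder);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDiscoverySpi(discovery);
            Ignition.start(cfg);
        }
    }
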
diff --git a/docs/_docs/code-snippets/xml/discovery-static-and-multicast.xml b/docs/_docs/code-snippets/xml/discovery-static-and-multicast.xml
new file mode 100644
index 0000000..170d3ba
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/discovery-static-and-multicast.xml
@@ -0,0 +1,45 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
+                        <property name="multicastGroup" value="228.10.10.157"/>
+                        <!-- List of static IP addresses. -->
+                        <property name="addresses">
+                            <list>
+                                <value>1.2.3.4</value>
+                                <!--
+                                  IP Address and optional port range.
+                                  You can also optionally specify an individual port.
+                                 -->
+                                <value>1.2.3.5:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
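
Combining multicast with a static address list looks like this in Java (sketch; TcpDiscoveryMulticastIpFinder extends the VM IP finder, so setAddresses is available on it):

    import java.util.Arrays;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder;

    public class StaticAndMulticastDiscoveryConfig {
        public static void main(String[] args) {
            TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
            ipFinder.setMulticastGroup("228.10.10.157");
            // Static addresses are used in addition to any discovered via multicast.
            ipFinder.setAddresses(Arrays.asList("1.2.3.4", "1.2.3.5:47500..47509"));

            TcpDiscoverySpi discovery = new TcpDiscoverySpi();
            discovery.setIpFinder(ipFinder);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDiscoverySpi(discovery);
            Ignition.start(cfg);
        }
    }
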
diff --git a/docs/_docs/code-snippets/xml/discovery-static.xml b/docs/_docs/code-snippets/xml/discovery-static.xml
new file mode 100644
index 0000000..1452ac2
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/discovery-static.xml
@@ -0,0 +1,48 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <!--
+                                  Explicitly specifying the address of the local node lets it start and
+                                  operate normally even if there are no other nodes in the cluster yet.
+                                  You can also optionally specify an individual port or port range.
+                                -->
+                                <value>1.2.3.4</value>
+                                <!--
+                                  IP Address and optional port range of a remote node.
+                                  You can also optionally specify an individual port.
+                                  -->
+                                <value>1.2.3.5:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/disk-compression.xml b/docs/_docs/code-snippets/xml/disk-compression.xml
new file mode 100644
index 0000000..388b991
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/disk-compression.xml
@@ -0,0 +1,59 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="dataStorageConfiguration">
+            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
+                <property name="pageSize" value="#{4096 * 2}"/>
+                <property name="defaultDataRegionConfiguration">
+                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
+                        <property name="persistenceEnabled" value="true"/>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <property name="cacheConfiguration">
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <property name="name" value="myCache"/>
+                <!-- enable disk page compression for this cache -->
+                <property name="diskPageCompression" value="LZ4"/>
+                <!-- optionally set the compression level -->
+                <property name="diskPageCompressionLevel" value="10"/>
+            </bean>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
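
Disk page compression in Java (a sketch; as in the XML, persistence is enabled and the page size is doubled, and the compression module with its LZ4 dependency is assumed to be on the classpath):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.*;

    public class DiskCompressionConfig {
        public static void main(String[] args) {
            DataRegionConfiguration region = new DataRegionConfiguration();
            region.setPersistenceEnabled(true);

            DataStorageConfiguration storageCfg = new DataStorageConfiguration();
            storageCfg.setPageSize(4096 * 2); // doubled page size, as in the XML above
            storageCfg.setDefaultDataRegionConfiguration(region);

            CacheConfiguration<?, ?> cacheCfg = new CacheConfiguration<>("myCache");
            cacheCfg.setDiskPageCompression(DiskPageCompression.LZ4);
            cacheCfg.setDiskPageCompressionLevel(10); // optional compression level

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDataStorageConfiguration(storageCfg);
            cfg.setCacheConfiguration(cacheCfg);
            Ignition.start(cfg);
        }
    }
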
diff --git a/docs/_docs/code-snippets/xml/events.xml b/docs/_docs/code-snippets/xml/events.xml
new file mode 100644
index 0000000..a168424
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/events.xml
@@ -0,0 +1,54 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans"
+    xmlns:util="http://www.springframework.org/schema/util" 
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
+    xsi:schemaLocation="         http://www.springframework.org/schema/beans         
+    http://www.springframework.org/schema/beans/spring-beans.xsd         
+    http://www.springframework.org/schema/util         
+    http://www.springframework.org/schema/util/spring-util.xsd">
+
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+        <property name="includeEventTypes">
+            <list>
+                <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
+                <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_READ"/>
+                <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED"/>
+                <util:constant static-field="org.apache.ignite.events.EventType.EVT_NODE_LEFT"/>
+                <util:constant static-field="org.apache.ignite.events.EventType.EVT_NODE_JOINED"/>
+            </list>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+
+</beans>
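
Enabling the same event types in Java (sketch; setIncludeEventTypes takes the int constants that the util:constant entries reference):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.events.EventType;

    public class EventsConfig {
        public static void main(String[] args) {
            IgniteConfiguration cfg = new IgniteConfiguration();

            // Only the event types listed here are recorded by the node.
            cfg.setIncludeEventTypes(
                EventType.EVT_CACHE_OBJECT_PUT,
                EventType.EVT_CACHE_OBJECT_READ,
                EventType.EVT_CACHE_OBJECT_REMOVED,
                EventType.EVT_NODE_LEFT,
                EventType.EVT_NODE_JOINED);

            Ignition.start(cfg);
        }
    }
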
diff --git a/docs/_docs/code-snippets/xml/eviction.xml b/docs/_docs/code-snippets/xml/eviction.xml
new file mode 100644
index 0000000..2276873
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/eviction.xml
@@ -0,0 +1,58 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <!-- Memory configuration. -->
+        <property name="dataStorageConfiguration">
+            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
+                <property name="dataRegionConfigurations">
+                    <list>
+                        <!-- Defining a data region that will consume up to 20 GB of RAM. -->
+                        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
+                            <!-- Custom region name. -->
+                            <property name="name" value="20GB_Region"/>
+                            <!-- 500 MB initial size (RAM). -->
+                            <property name="initialSize" value="#{500L * 1024 * 1024}"/>
+                            <!-- 20 GB maximum size (RAM). -->
+                            <property name="maxSize" value="#{20L * 1024 * 1024 * 1024}"/>
+                            <!-- Enabling RANDOM_LRU eviction for this region.  -->
+                            <property name="pageEvictionMode" value="RANDOM_LRU"/>
+                        </bean>
+                    </list>
+                </property>
+            </bean>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
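
The eviction-enabled region in Java (sketch; sizes match the XML comments):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.*;

    public class EvictionConfig {
        public static void main(String[] args) {
            DataRegionConfiguration region = new DataRegionConfiguration();
            region.setName("20GB_Region");
            region.setInitialSize(500L * 1024 * 1024);   // 500 MB initial size (RAM)
            region.setMaxSize(20L * 1024 * 1024 * 1024); // 20 GB maximum size (RAM)
            region.setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU);

            DataStorageConfiguration storageCfg = new DataStorageConfiguration();
            storageCfg.setDataRegionConfigurations(region);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDataStorageConfiguration(storageCfg);
            Ignition.start(cfg);
        }
    }
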
diff --git a/docs/_docs/code-snippets/xml/expiry.xml b/docs/_docs/code-snippets/xml/expiry.xml
new file mode 100644
index 0000000..756c0a7
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/expiry.xml
@@ -0,0 +1,56 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="cacheConfiguration">
+            <!-- tag::cache-with-expiry[] -->
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <property name="name" value="myCache"/>
+                <property name="expiryPolicyFactory">
+                    <bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
+                        <constructor-arg>
+                            <bean class="javax.cache.expiry.Duration">
+                                <constructor-arg value="MINUTES"/>
+                                <constructor-arg value="5"/>
+                            </bean>
+                        </constructor-arg>
+                    </bean>
+                </property>
+            </bean>
+
+            <!-- end::cache-with-expiry[] -->
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
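
The created-expiry policy in Java (sketch; the expiry types come from the standard JCache javax.cache.expiry package):

    import java.util.concurrent.TimeUnit;
    import javax.cache.expiry.CreatedExpiryPolicy;
    import javax.cache.expiry.Duration;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class ExpiryConfig {
        public static void main(String[] args) {
            CacheConfiguration<?, ?> cacheCfg = new CacheConfiguration<>("myCache");

            // Entries expire 5 minutes after creation.
            cacheCfg.setExpiryPolicyFactory(
                CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 5)));

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setCacheConfiguration(cacheCfg);
            Ignition.start(cfg);
        }
    }
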
diff --git a/docs/_docs/code-snippets/xml/failover-always.xml b/docs/_docs/code-snippets/xml/failover-always.xml
new file mode 100644
index 0000000..b565af1
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/failover-always.xml
@@ -0,0 +1,45 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+        <property name="failoverSpi">
+            <bean class="org.apache.ignite.spi.failover.always.AlwaysFailoverSpi">
+                <property name="maximumFailoverAttempts" value="5"/>
+            </bean>
+        </property>
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
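
The failover SPI in Java (sketch; NeverFailoverSpi from failover-never.xml below plugs in the same way, minus the attempts setting):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.failover.always.AlwaysFailoverSpi;

    public class FailoverConfig {
        public static void main(String[] args) {
            AlwaysFailoverSpi failSpi = new AlwaysFailoverSpi();
            failSpi.setMaximumFailoverAttempts(5); // give up after five failover attempts

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setFailoverSpi(failSpi);
            Ignition.start(cfg);
        }
    }
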
diff --git a/docs/_docs/code-snippets/xml/failover-never.xml b/docs/_docs/code-snippets/xml/failover-never.xml
new file mode 100644
index 0000000..fcfe55c
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/failover-never.xml
@@ -0,0 +1,43 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        
+        <property name="failoverSpi">
+            <bean class="org.apache.ignite.spi.failover.never.NeverFailoverSpi"/>
+        </property>
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/http-configuration.xml b/docs/_docs/code-snippets/xml/http-configuration.xml
new file mode 100644
index 0000000..294488c
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/http-configuration.xml
@@ -0,0 +1,50 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one or more
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership.
+    The ASF licenses this file to You under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with
+    the License.  You may obtain a copy of the License at
+
+         http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+  -->
+<!--
+    Ignite configuration that exposes the Ignite REST API over HTTP via an embedded Jetty server.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
+        <!-- tag::http-configuration[] -->
+        <property name="connectorConfiguration">
+            <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
+                <property name="jettyPath" value="jetty.xml"/>
+            </bean>
+        </property>
+        <!-- end::http-configuration[] -->
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with the actual host IP address. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+
+    <!-- end::ignite-config[] -->
+</beans>
\ No newline at end of file
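
Pointing the REST connector at the Jetty configuration in Java (sketch; the HTTP endpoint additionally requires the ignite-rest-http module on the classpath):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.ConnectorConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class HttpConnectorConfig {
        public static void main(String[] args) {
            ConnectorConfiguration connectorCfg = new ConnectorConfiguration();
            connectorCfg.setJettyPath("jetty.xml"); // path to the Jetty configuration file

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setConnectorConfiguration(connectorCfg);
            Ignition.start(cfg);
        }
    }
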
diff --git a/docs/_docs/code-snippets/xml/ignite-authentication.xml b/docs/_docs/code-snippets/xml/ignite-authentication.xml
new file mode 100644
index 0000000..3c8beec5
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/ignite-authentication.xml
@@ -0,0 +1,58 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="dataStorageConfiguration">
+            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
+                <property name="defaultDataRegionConfiguration">
+                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
+                        <property name="persistenceEnabled" value="true"/>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+
+        <property name="authenticationEnabled" value="true"/>
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <!--
+                        Ignite provides several options for automatic discovery that can be used
+                        instead of static IP-based discovery. For information on all options, refer
+                        to our documentation: https://ignite.apache.org/docs/latest/
+                    -->
+                    <!-- Static IP finder used for discovery of the initial nodes. -->
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with the actual host IP address. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
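
Turning on authentication in Java (sketch; as the XML shows, persistence must be enabled for the default data region for authentication to work):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.*;

    public class AuthenticationConfig {
        public static void main(String[] args) {
            DataRegionConfiguration region = new DataRegionConfiguration();
            region.setPersistenceEnabled(true);

            DataStorageConfiguration storageCfg = new DataStorageConfiguration();
            storageCfg.setDefaultDataRegionConfiguration(region);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDataStorageConfiguration(storageCfg);
            cfg.setAuthenticationEnabled(true);
            Ignition.start(cfg);
        }
    }
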
diff --git a/docs/_docs/code-snippets/xml/jcl.xml b/docs/_docs/code-snippets/xml/jcl.xml
new file mode 100644
index 0000000..6efcf56
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/jcl.xml
@@ -0,0 +1,57 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<!--
+    Ignite configuration that routes Ignite logging through Jakarta Commons Logging (JCL).
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::jcl[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
+        <property name="gridLogger">
+            <bean class="org.apache.ignite.logger.jcl.JclLogger"/>
+        </property>
+
+        <!-- other properties --> 
+
+        <!-- tag::discovery[] -->
+        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <!--
+                        Ignite provides several options for automatic discovery that can be used
+                        instead of static IP-based discovery. For information on all options, refer
+                        to our documentation: https://ignite.apache.org/docs/latest/
+                    -->
+                    <!-- Static IP finder used for discovery of the initial nodes. -->
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with actual host IP addresses. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::jcl[] -->
+</beans>
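
The same logger wiring in Java — a minimal sketch, assuming the ignite-jcl module is on the classpath:

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.logger.jcl.JclLogger;

    IgniteConfiguration cfg = new IgniteConfiguration();

    // Route Ignite log output through Jakarta Commons Logging.
    cfg.setGridLogger(new JclLogger());
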
diff --git a/docs/_docs/code-snippets/xml/jetty.xml b/docs/_docs/code-snippets/xml/jetty.xml
new file mode 100644
index 0000000..f4de41c
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/jetty.xml
@@ -0,0 +1,69 @@
+<?xml version="1.0"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
+<Configure id="Server" class="org.eclipse.jetty.server.Server">
+    <Arg name="threadPool">
+        <!-- Default queued blocking thread pool -->
+        <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
+            <Set name="minThreads">20</Set>
+            <Set name="maxThreads">200</Set>
+        </New>
+    </Arg>
+    <New id="httpCfg" class="org.eclipse.jetty.server.HttpConfiguration">
+        <Set name="secureScheme">https</Set>
+        <Set name="securePort">8443</Set>
+        <Set name="sendServerVersion">true</Set>
+        <Set name="sendDateHeader">true</Set>
+    </New>
+    <Call name="addConnector">
+        <Arg>
+            <New class="org.eclipse.jetty.server.ServerConnector">
+                <Arg name="server"><Ref refid="Server"/></Arg>
+                <Arg name="factories">
+                    <Array type="org.eclipse.jetty.server.ConnectionFactory">
+                        <Item>
+                            <New class="org.eclipse.jetty.server.HttpConnectionFactory">
+                                <Ref refid="httpCfg"/>
+                            </New>
+                        </Item>
+                    </Array>
+                </Arg>
+                <Set name="host">
+                  <SystemProperty name="IGNITE_JETTY_HOST" default="localhost"/>
+                </Set>
+                <Set name="port">
+                  <SystemProperty name="IGNITE_JETTY_PORT" default="8080"/>
+                </Set>
+                <Set name="idleTimeout">30000</Set>
+                <Set name="reuseAddress">true</Set>
+            </New>
+        </Arg>
+    </Call>
+    <Set name="handler">
+        <New id="Handlers" class="org.eclipse.jetty.server.handler.HandlerCollection">
+            <Set name="handlers">
+                <Array type="org.eclipse.jetty.server.Handler">
+                    <Item>
+                        <New id="Contexts" class="org.eclipse.jetty.server.handler.ContextHandlerCollection"/>
+                    </Item>
+                </Array>
+            </Set>
+        </New>
+    </Set>
+    <Set name="stopAtShutdown">false</Set>
+</Configure>
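
This Jetty descriptor configures the embedded server used by the REST HTTP module. One way to point a node at it is via ConnectorConfiguration — a sketch, assuming ignite-rest-http is on the classpath and the descriptor is saved at the (hypothetical) path shown:

    import org.apache.ignite.configuration.ConnectorConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    IgniteConfiguration cfg = new IgniteConfiguration();

    ConnectorConfiguration connCfg = new ConnectorConfiguration();

    // Hypothetical location of the jetty.xml descriptor above.
    connCfg.setJettyPath("config/jetty.xml");

    cfg.setConnectorConfiguration(connCfg);
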
diff --git a/docs/_docs/code-snippets/xml/job-scheduling-fifo.xml b/docs/_docs/code-snippets/xml/job-scheduling-fifo.xml
new file mode 100644
index 0000000..2f5c50d
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/job-scheduling-fifo.xml
@@ -0,0 +1,46 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+        <property name="collisionSpi">
+            <bean class="org.apache.ignite.spi.collision.fifoqueue.FifoQueueCollisionSpi">
+                <!-- Execute one job at a time. -->
+                <property name="parallelJobsNumber" value="1"/>
+            </bean>
+        </property>
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
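
The equivalent FIFO collision SPI setup in Java, as a minimal sketch:

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.collision.fifoqueue.FifoQueueCollisionSpi;

    IgniteConfiguration cfg = new IgniteConfiguration();

    // FIFO ordering; execute one job at a time, matching the XML above.
    FifoQueueCollisionSpi colSpi = new FifoQueueCollisionSpi();
    colSpi.setParallelJobsNumber(1);

    cfg.setCollisionSpi(colSpi);
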
diff --git a/docs/_docs/code-snippets/xml/job-scheduling-priority.xml b/docs/_docs/code-snippets/xml/job-scheduling-priority.xml
new file mode 100644
index 0000000..3d39f9d
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/job-scheduling-priority.xml
@@ -0,0 +1,47 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+        <property name="collisionSpi">
+            <bean class="org.apache.ignite.spi.collision.priorityqueue.PriorityQueueCollisionSpi">
+                <!-- Change the parallel job number if needed.
+                     The default is twice the number of CPU cores. -->
+                <property name="parallelJobsNumber" value="5"/>
+            </bean>
+        </property>
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
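
The same priority-queue setup in Java, as a sketch:

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.collision.priorityqueue.PriorityQueueCollisionSpi;

    IgniteConfiguration cfg = new IgniteConfiguration();

    // Priority-based ordering with at most five jobs executing in parallel.
    PriorityQueueCollisionSpi colSpi = new PriorityQueueCollisionSpi();
    colSpi.setParallelJobsNumber(5);

    cfg.setCollisionSpi(colSpi);
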
diff --git a/docs/_docs/code-snippets/xml/job-stealing.xml b/docs/_docs/code-snippets/xml/job-stealing.xml
new file mode 100644
index 0000000..e4c140e
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/job-stealing.xml
@@ -0,0 +1,66 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans          http://www.springframework.org/schema/beans/spring-beans.xsd          http://www.springframework.org/schema/util          http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <!-- Enabling the required Failover SPI. -->
+        <property name="failoverSpi">
+            <bean class="org.apache.ignite.spi.failover.jobstealing.JobStealingFailoverSpi"/>
+        </property>
+        <!-- Enabling the JobStealingCollisionSpi for late load balancing. -->
+        <property name="collisionSpi">
+            <bean class="org.apache.ignite.spi.collision.jobstealing.JobStealingCollisionSpi">
+                <property name="activeJobsThreshold" value="50"/>
+                <property name="waitJobsThreshold" value="0"/>
+                <property name="messageExpireTime" value="1000"/>
+                <property name="maximumStealingAttempts" value="10"/>
+                <property name="stealingEnabled" value="true"/>
+                <property name="stealingAttributes">
+                    <map>
+                        <entry key="node.segment" value="foobar"/>
+                    </map>
+                </property>
+            </bean>
+        </property>
+        <!-- tag::discovery[] -->
+        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <!--
+                        Ignite provides several options for automatic discovery that can be used
+                        instead of static IP-based discovery. For information on all options refer
+                        to our documentation: https://ignite.apache.org/docs/latest/
+                    -->
+                    <!-- Static IP finder enables static-based discovery of initial nodes. -->
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <!--bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"-->
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with actual host IP addresses. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
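
Programmatically, the same job-stealing setup looks roughly as follows (a sketch; the thresholds mirror the XML):

    import java.util.Collections;

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.collision.jobstealing.JobStealingCollisionSpi;
    import org.apache.ignite.spi.failover.jobstealing.JobStealingFailoverSpi;

    IgniteConfiguration cfg = new IgniteConfiguration();

    // Late load balancing: underloaded nodes steal queued jobs from busy ones.
    JobStealingCollisionSpi colSpi = new JobStealingCollisionSpi();
    colSpi.setActiveJobsThreshold(50);
    colSpi.setWaitJobsThreshold(0);
    colSpi.setMessageExpireTime(1000);
    colSpi.setMaximumStealingAttempts(10);
    colSpi.setStealingEnabled(true);
    colSpi.setStealingAttributes(Collections.singletonMap("node.segment", "foobar"));

    cfg.setCollisionSpi(colSpi);

    // Job stealing requires the matching failover SPI.
    cfg.setFailoverSpi(new JobStealingFailoverSpi());
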
diff --git a/docs/_docs/code-snippets/xml/lifecycle.xml b/docs/_docs/code-snippets/xml/lifecycle.xml
new file mode 100644
index 0000000..f42d462
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/lifecycle.xml
@@ -0,0 +1,43 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="lifecycleBeans">
+            <list>
+                <bean class="org.apache.ignite.snippets.MyLifecycleBean"/>
+            </list>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
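
The lifecycle bean referenced above can be any implementation of LifecycleBean. A minimal sketch; MyLifecycleBean here is illustrative, standing in for the class named in the XML:

    import org.apache.ignite.IgniteException;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.lifecycle.LifecycleBean;
    import org.apache.ignite.lifecycle.LifecycleEventType;

    // Illustrative stand-in for org.apache.ignite.snippets.MyLifecycleBean.
    class MyLifecycleBean implements LifecycleBean {
        @Override public void onLifecycleEvent(LifecycleEventType evt) throws IgniteException {
            if (evt == LifecycleEventType.AFTER_NODE_START)
                System.out.println("Node is started.");
        }
    }

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setLifecycleBeans(new MyLifecycleBean());
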
diff --git a/docs/_docs/code-snippets/xml/log4j-config.xml b/docs/_docs/code-snippets/xml/log4j-config.xml
new file mode 100644
index 0000000..f8c1a37
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/log4j-config.xml
@@ -0,0 +1,107 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<!DOCTYPE log4j:configuration PUBLIC "-//APACHE//DTD LOG4J 1.2//EN"
+    "http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/xml/doc-files/log4j.dtd">
+
+<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="false">
+
+    <!--
+        Logs all ERROR messages to console.
+    -->
+    <appender name="CONSOLE_ERR" class="org.apache.log4j.ConsoleAppender">
+        <!-- Log to STDERR. -->
+        <param name="Target" value="System.err"/>
+
+        <!-- Log from ERROR and higher (change to WARN if needed). -->
+        <param name="Threshold" value="ERROR"/>
+
+        <!-- The default pattern: Date Priority [Category] Message\n -->
+        <layout class="org.apache.log4j.PatternLayout">
+            <param name="ConversionPattern" value="[%d{ISO8601}][%-5p][%t][%c{1}] %m%n"/>
+        </layout>
+    </appender>
+
+    <!--
+        Logs all output to specified file.
+        By default, the logging goes to the IGNITE_HOME/work/log folder.
+    -->
+    <appender name="FILE" class="org.apache.ignite.logger.log4j.Log4jRollingFileAppender">
+        <param name="Threshold" value="DEBUG"/>
+        <param name="File" value="${IGNITE_HOME}/work/log/ignite.log"/>
+        <param name="Append" value="true"/>
+        <param name="MaxFileSize" value="10MB"/>
+        <param name="MaxBackupIndex" value="10"/>
+        <layout class="org.apache.log4j.PatternLayout">
+            <param name="ConversionPattern" value="[%d{ISO8601}][%-5p][%t][%c{1}] %m%n"/>
+        </layout>
+    </appender>
+
+    <!--
+    <category name="org.apache.ignite">
+        <level value="DEBUG"/>
+    </category>
+    -->
+
+    <!--
+        Uncomment to disable courtesy notices, such as SPI configuration
+        consistency warnings.
+    -->
+    <!--
+    <category name="org.apache.ignite.CourtesyConfigNotice">
+        <level value="OFF"/>
+    </category>
+    -->
+
+    <category name="org.springframework">
+        <level value="WARN"/>
+    </category>
+
+    <category name="org.eclipse.jetty">
+        <level value="WARN"/>
+    </category>
+
+    <!--
+        Avoid warnings about failed bind attempts when multiple nodes are running on the same host.
+    -->
+    <category name="org.eclipse.jetty.util.log">
+        <level value="ERROR"/>
+    </category>
+
+    <category name="org.eclipse.jetty.util.component">
+        <level value="ERROR"/>
+    </category>
+
+    <category name="com.amazonaws">
+        <level value="WARN"/>
+    </category>
+
+    <!-- Default settings. -->
+    <root>
+        <!-- Print out all info by default. -->
+        <level value="INFO"/>
+
+        <!-- Uncomment to enable logging to console. -->
+        <!--
+        <appender-ref ref="CONSOLE"/>
+        -->
+
+        <appender-ref ref="CONSOLE_ERR"/>
+        <appender-ref ref="FILE"/>
+    </root>
+</log4j:configuration>
diff --git a/docs/_docs/code-snippets/xml/log4j.xml b/docs/_docs/code-snippets/xml/log4j.xml
new file mode 100644
index 0000000..fdb6203
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/log4j.xml
@@ -0,0 +1,59 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<!--
+    Ignite configuration with all defaults and the Log4j logger enabled.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::log4j[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
+        <property name="gridLogger">
+            <bean class="org.apache.ignite.logger.log4j.Log4JLogger">
+                <!-- log4j configuration file -->
+                <constructor-arg type="java.lang.String" value="log4j-config.xml"/>
+            </bean>
+        </property>
+
+        <!-- other properties --> 
+
+        <!-- tag::discovery[] -->
+        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <!--
+                        Ignite provides several options for automatic discovery that can be used
+                        instead of static IP-based discovery. For information on all options refer
+                        to our documentation: https://ignite.apache.org/docs/latest/
+                    -->
+                    <!-- Static IP finder enables static-based discovery of initial nodes. -->
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <!--bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"-->
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with actual host IP addresses. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::log4j[] -->
+</beans>
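
The same Log4j wiring in Java — a sketch assuming the ignite-log4j module and the log4j-config.xml shown earlier; note that the Log4JLogger(String) constructor declares a checked IgniteCheckedException:

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.logger.log4j.Log4JLogger;

    IgniteConfiguration cfg = new IgniteConfiguration();

    // Path to the Log4j configuration file shown above.
    cfg.setGridLogger(new Log4JLogger("log4j-config.xml"));
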
diff --git a/docs/_docs/code-snippets/xml/log4j2-config.xml b/docs/_docs/code-snippets/xml/log4j2-config.xml
new file mode 100644
index 0000000..2b41228
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/log4j2-config.xml
@@ -0,0 +1,79 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<Configuration monitorInterval="60">
+    <Appenders>
+        <Console name="CONSOLE" target="SYSTEM_OUT">
+            <PatternLayout pattern="[%d{ISO8601}][%-5p][%t][%c{1}]%notEmpty{[%markerSimpleName]} %m%n"/>
+            <ThresholdFilter level="ERROR" onMatch="DENY" onMismatch="ACCEPT"/>
+        </Console>
+
+        <Console name="CONSOLE_ERR" target="SYSTEM_ERR">
+            <PatternLayout pattern="[%d{ISO8601}][%-5p][%t][%c{1}]%notEmpty{[%markerSimpleName]} %m%n"/>
+        </Console>
+
+        <Routing name="FILE">
+            <Routes pattern="$${sys:nodeId}">
+                <Route>
+                    <RollingFile name="Rolling-${sys:nodeId}" fileName="${sys:IGNITE_HOME}/work/log/ignite-${sys:nodeId}.log"
+                                 filePattern="${sys:IGNITE_HOME}/work/log/ignite-${sys:nodeId}-%i-%d{yyyy-MM-dd}.log.gz">
+                        <PatternLayout pattern="[%d{ISO8601}][%-5p][%t][%c{1}]%notEmpty{[%markerSimpleName]} %m%n"/>
+                        <Policies>
+                            <TimeBasedTriggeringPolicy interval="6" modulate="true" />
+                            <SizeBasedTriggeringPolicy size="10 MB" />
+                        </Policies>
+                    </RollingFile>
+                </Route>
+            </Routes>
+        </Routing>
+    </Appenders>
+
+    <Loggers>
+        <!--
+        <Logger name="org.apache.ignite" level="DEBUG"/>
+        -->
+
+        <!--
+            Uncomment to disable courtesy notices, such as SPI configuration
+            consistency warnings.
+        -->
+        <!--
+        <Logger name="org.apache.ignite.CourtesyConfigNotice" level="OFF"/>
+        -->
+
+        <Logger name="org.springframework" level="WARN"/>
+        <Logger name="org.eclipse.jetty" level="WARN"/>
+
+        <!--
+        Avoid warnings about failed bind attempts when multiple nodes are running on the same host.
+        -->
+        <Logger name="org.eclipse.jetty.util.log" level="ERROR"/>
+        <Logger name="org.eclipse.jetty.util.component" level="ERROR"/>
+
+        <Logger name="com.amazonaws" level="WARN"/>
+
+        <Root level="INFO">
+            <!-- Uncomment to enable logging to console. -->
+            <!--
+            <AppenderRef ref="CONSOLE" level="DEBUG"/>
+            -->
+
+            <AppenderRef ref="CONSOLE_ERR" level="ERROR"/>
+            <AppenderRef ref="FILE" level="DEBUG"/>
+        </Root>
+    </Loggers>
+</Configuration>
diff --git a/docs/_docs/code-snippets/xml/log4j2.xml b/docs/_docs/code-snippets/xml/log4j2.xml
new file mode 100644
index 0000000..41620cf
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/log4j2.xml
@@ -0,0 +1,59 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<!--
+    Ignite configuration with all defaults and the Log4j2 logger enabled.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::log4j2[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
+        <property name="gridLogger">
+            <bean class="org.apache.ignite.logger.log4j2.Log4J2Logger">
+                <!-- log4j2 configuration file -->
+                <constructor-arg type="java.lang.String" value="log4j2-config.xml"/>
+            </bean>
+        </property>
+
+        <!-- other properties --> 
+
+        <!-- tag::discovery[] -->
+        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <!--
+                        Ignite provides several options for automatic discovery that can be used
+                        instead of static IP-based discovery. For information on all options refer
+                        to our documentation: https://ignite.apache.org/docs/latest/
+                    -->
+                    <!-- Static IP finder enables static-based discovery of initial nodes. -->
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <!--bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"-->
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with actual host IP addresses. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::log4j2[] -->
+</beans>
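
And the Log4j2 counterpart — a sketch assuming the ignite-log4j2 module and the log4j2-config.xml shown earlier; Log4J2Logger(String) likewise declares IgniteCheckedException:

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.logger.log4j2.Log4J2Logger;

    IgniteConfiguration cfg = new IgniteConfiguration();

    // Path to the Log4j2 configuration file shown above.
    cfg.setGridLogger(new Log4J2Logger("log4j2-config.xml"));
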
diff --git a/docs/_docs/code-snippets/xml/metrics.xml b/docs/_docs/code-snippets/xml/metrics.xml
new file mode 100644
index 0000000..0172c33
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/metrics.xml
@@ -0,0 +1,56 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="metricExporterSpi">
+            <list>
+                <!-- tag::jmx-exporter[] -->
+                <bean class="org.apache.ignite.spi.metric.jmx.JmxMetricExporterSpi"/>
+                <!-- end::jmx-exporter[] -->
+                <!-- tag::sql-exporter[] -->
+                <bean class="org.apache.ignite.spi.metric.sql.SqlViewMetricExporterSpi"/>
+                <!-- end::sql-exporter[] -->
+                <!-- tag::log-exporter[] -->
+                <bean class="org.apache.ignite.spi.metric.log.LogExporterSpi"/>
+                <!-- end::log-exporter[] -->
+                <!-- tag::opencensus-exporter[] -->
+                <bean class="org.apache.ignite.spi.metric.opencensus.OpenCensusMetricExporterSpi"/>
+                <!-- end::opencensus-exporter[] -->
+            </list>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <!-- prevent this client from reconnecting on connection loss -->
+                <property name="clientReconnectDisabled" value="true"/>
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
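
The exporter list maps to IgniteConfiguration.setMetricExporterSpi, which accepts any subset of the SPIs; a sketch enabling two of them:

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.metric.jmx.JmxMetricExporterSpi;
    import org.apache.ignite.spi.metric.log.LogExporterSpi;

    IgniteConfiguration cfg = new IgniteConfiguration();

    // JMX beans plus periodic log output; add other exporters as needed.
    cfg.setMetricExporterSpi(new JmxMetricExporterSpi(), new LogExporterSpi());
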
diff --git a/docs/_docs/code-snippets/xml/mvcc.xml b/docs/_docs/code-snippets/xml/mvcc.xml
new file mode 100644
index 0000000..2213bff
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/mvcc.xml
@@ -0,0 +1,46 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+        <property name="cacheConfiguration">
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <property name="name" value="myCache"/>
+                <property name="atomicityMode" value="TRANSACTIONAL_SNAPSHOT"/>
+            </bean>
+        </property>
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
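
The equivalent MVCC-enabled cache in Java, as a sketch:

    import org.apache.ignite.cache.CacheAtomicityMode;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    IgniteConfiguration cfg = new IgniteConfiguration();

    // TRANSACTIONAL_SNAPSHOT enables MVCC for this cache.
    CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<>("myCache");
    cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT);

    cfg.setCacheConfiguration(cacheCfg);
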
diff --git a/docs/_docs/code-snippets/xml/near-cache-config.xml b/docs/_docs/code-snippets/xml/near-cache-config.xml
new file mode 100644
index 0000000..cbecd2a
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/near-cache-config.xml
@@ -0,0 +1,52 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
+        <property name="cacheConfiguration">
+            <!-- tag::cache-with-near-cache[] -->
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <property name="name" value="myCache"/>
+                <property name="nearConfiguration">
+                    <bean class="org.apache.ignite.configuration.NearCacheConfiguration">
+                        <property name="nearEvictionPolicyFactory">
+                            <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
+                                <property name="maxSize" value="100000"/>
+                            </bean>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+
+            <!-- end::cache-with-near-cache[] -->
+        </property>
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <!--bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"-->
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with actual host IP addresses. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+    </bean>
+</beans>
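
The near-cache block above in Java — a sketch with the same LRU cap:

    import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.NearCacheConfiguration;

    // Near cache with an LRU eviction policy capped at 100,000 entries.
    NearCacheConfiguration<Long, String> nearCfg = new NearCacheConfiguration<>();
    nearCfg.setNearEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100_000));

    CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<>("myCache");
    cacheCfg.setNearConfiguration(nearCfg);
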
diff --git a/docs/_docs/code-snippets/xml/network-configuration.xml b/docs/_docs/code-snippets/xml/network-configuration.xml
new file mode 100644
index 0000000..50ce8bc
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/network-configuration.xml
@@ -0,0 +1,46 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <!-- tag::failure-detection-timeout[] -->
+
+        <property name="failureDetectionTimeout" value="5000"/>
+
+        <property name="clientFailureDetectionTimeout" value="10000"/>
+        <!-- end::failure-detection-timeout[] -->
+        <!-- tag::discovery[] -->
+
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="localPort" value="8300"/>  
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+        <!-- tag::communication-spi[] -->
+
+        <property name="communicationSpi">
+            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
+                <property name="localPort" value="4321"/> 
+            </bean>
+        </property>
+        <!-- end::communication-spi[] -->
+
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
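
The same timeouts and ports expressed in Java, as a sketch:

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

    IgniteConfiguration cfg = new IgniteConfiguration();

    // Failure detection budgets for server and client nodes.
    cfg.setFailureDetectionTimeout(5000);
    cfg.setClientFailureDetectionTimeout(10000);

    // Custom discovery and communication ports, as in the XML above.
    TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
    discoverySpi.setLocalPort(8300);
    cfg.setDiscoverySpi(discoverySpi);

    TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
    commSpi.setLocalPort(4321);
    cfg.setCommunicationSpi(commSpi);
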
diff --git a/docs/_docs/code-snippets/xml/odbc-cache-config.xml b/docs/_docs/code-snippets/xml/odbc-cache-config.xml
new file mode 100644
index 0000000..9f30d8c
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/odbc-cache-config.xml
@@ -0,0 +1,95 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans          http://www.springframework.org/schema/beans/spring-beans.xsd          http://www.springframework.org/schema/util          http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="cacheConfiguration">
+            <list>
+                <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                    <property name="name" value="Person"/>
+                    <property name="cacheMode" value="PARTITIONED"/>
+                    <property name="atomicityMode" value="TRANSACTIONAL"/>
+                    <property name="writeSynchronizationMode" value="FULL_SYNC"/>
+                    <property name="queryEntities">
+                        <list>
+                            <bean class="org.apache.ignite.cache.QueryEntity">
+                                <property name="keyType" value="java.lang.Long"/>
+                                <property name="keyFieldName" value="id"/>
+                                <property name="valueType" value="Person"/>
+                                <property name="fields">
+                                    <map>
+                                        <entry key="id" value="java.lang.Long"/>
+                                        <entry key="firstName" value="java.lang.String"/>
+                                        <entry key="lastName" value="java.lang.String"/>
+                                        <entry key="salary" value="java.lang.Double"/>
+                                    </map>
+                                </property>
+                            </bean>
+                        </list>
+                    </property>
+                </bean>
+                <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                    <property name="name" value="Organization"/>
+                    <property name="cacheMode" value="PARTITIONED"/>
+                    <property name="atomicityMode" value="TRANSACTIONAL"/>
+                    <property name="writeSynchronizationMode" value="FULL_SYNC"/>
+                    <property name="queryEntities">
+                        <list>
+                            <bean class="org.apache.ignite.cache.QueryEntity">
+                                <property name="keyType" value="java.lang.Long"/>
+                                <property name="keyFieldName" value="id"/>
+                                <property name="valueType" value="Organization"/>
+                                <property name="fields">
+                                    <map>
+                                        <entry key="id" value="java.lang.Long"/>
+                                        <entry key="name" value="java.lang.String"/>
+                                    </map>
+                                </property>
+                                <property name="indexes">
+                                    <list>
+                                        <bean class="org.apache.ignite.cache.QueryIndex">
+                                            <constructor-arg value="name"/>
+                                        </bean>
+                                    </list>
+                                </property>
+                            </bean>
+                        </list>
+                    </property>
+                </bean>
+            </list>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <!--bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"-->
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with actual host IP addresses. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
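
The Person cache above can also be declared in Java; a sketch of the first cache (the Organization cache is analogous, with a QueryIndex on its name field):

    import java.util.Collections;
    import java.util.LinkedHashMap;

    import org.apache.ignite.cache.CacheAtomicityMode;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.cache.CacheWriteSynchronizationMode;
    import org.apache.ignite.cache.QueryEntity;
    import org.apache.ignite.configuration.CacheConfiguration;

    // SQL-visible fields of the Person type, in declaration order.
    LinkedHashMap<String, String> fields = new LinkedHashMap<>();
    fields.put("id", "java.lang.Long");
    fields.put("firstName", "java.lang.String");
    fields.put("lastName", "java.lang.String");
    fields.put("salary", "java.lang.Double");

    QueryEntity person = new QueryEntity("java.lang.Long", "Person");
    person.setKeyFieldName("id");
    person.setFields(fields);

    CacheConfiguration<Long, Object> personCache = new CacheConfiguration<>("Person");
    personCache.setCacheMode(CacheMode.PARTITIONED);
    personCache.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    personCache.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
    personCache.setQueryEntities(Collections.singletonList(person));
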
diff --git a/docs/_docs/code-snippets/xml/odbc.xml b/docs/_docs/code-snippets/xml/odbc.xml
new file mode 100644
index 0000000..8db4f57
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/odbc.xml
@@ -0,0 +1,52 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans          http://www.springframework.org/schema/beans/spring-beans.xsd          http://www.springframework.org/schema/util          http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <!-- Enabling ODBC. -->
+        <property name="clientConnectorConfiguration">
+            <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
+                <property name="host" value="127.0.0.1"/>
+                <property name="port" value="10800"/>
+                <property name="portRange" value="5"/>
+                <property name="maxOpenCursorsPerConnection" value="512"/>
+                <property name="socketSendBufferSize" value="65536"/>
+                <property name="socketReceiveBufferSize" value="131072"/>
+                <property name="threadPoolSize" value="4"/>
+            </bean>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <!--bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"-->
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with actual host IP addresses. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
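
The same client connector settings in Java, as a sketch:

    import org.apache.ignite.configuration.ClientConnectorConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    IgniteConfiguration cfg = new IgniteConfiguration();

    // Connector for ODBC/JDBC/thin clients, listening on ports 10800-10804.
    ClientConnectorConfiguration clientCfg = new ClientConnectorConfiguration();
    clientCfg.setHost("127.0.0.1");
    clientCfg.setPort(10800);
    clientCfg.setPortRange(5);
    clientCfg.setMaxOpenCursorsPerConnection(512);

    cfg.setClientConnectorConfiguration(clientCfg);
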
diff --git a/docs/_docs/code-snippets/xml/on-heap-cache.xml b/docs/_docs/code-snippets/xml/on-heap-cache.xml
new file mode 100644
index 0000000..89a7790
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/on-heap-cache.xml
@@ -0,0 +1,44 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="cacheConfiguration">
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <property name="name" value="myCache"/>
+                <property name="onheapCacheEnabled" value="true"/>
+            </bean>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
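
The on-heap flag in Java, as a sketch:

    import org.apache.ignite.configuration.CacheConfiguration;

    // Keep an on-heap copy of entries in addition to off-heap storage.
    CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<>("myCache");
    cacheCfg.setOnheapCacheEnabled(true);
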
diff --git a/docs/_docs/code-snippets/xml/partition-loss-policy.xml b/docs/_docs/code-snippets/xml/partition-loss-policy.xml
new file mode 100644
index 0000000..9e5a83b
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/partition-loss-policy.xml
@@ -0,0 +1,49 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+        <property name="cacheConfiguration">
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <property name="name" value="myCache"/>
+
+                <property name="partitionLossPolicy" value="READ_ONLY_SAFE"/>
+            </bean>
+        </property>
+        <!-- other properties -->
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
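
The equivalent partition loss policy in Java, as a sketch:

    import org.apache.ignite.cache.PartitionLossPolicy;
    import org.apache.ignite.configuration.CacheConfiguration;

    // Lost partitions stay available for reads but reject writes.
    CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<>("myCache");
    cacheCfg.setPartitionLossPolicy(PartitionLossPolicy.READ_ONLY_SAFE);
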
diff --git a/docs/_docs/code-snippets/xml/peer-class-loading.xml b/docs/_docs/code-snippets/xml/peer-class-loading.xml
new file mode 100644
index 0000000..236624f
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/peer-class-loading.xml
@@ -0,0 +1,44 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+        <!-- Enable peer class loading. -->
+        <property name="peerClassLoadingEnabled" value="true"/>
+        <!-- Set deployment mode. -->
+        <property name="deploymentMode" value="CONTINUOUS"/>
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/persistence-metrics.xml b/docs/_docs/code-snippets/xml/persistence-metrics.xml
new file mode 100644
index 0000000..52e12b9
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/persistence-metrics.xml
@@ -0,0 +1,64 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
+        <property name="dataStorageConfiguration">
+            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
+
+                <!-- Persistent storage metrics. -->
+                <property name="metricsEnabled" value="true"/>
+
+                <property name="defaultDataRegionConfiguration">
+                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
+                        <property name="persistenceEnabled" value="true"/>
+
+                        <!-- Uncomment to enable metrics for the default data region. -->
+                        <!--property name="metricsEnabled" value="true"/-->
+                        <!-- other properties -->
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- tag::discovery[] -->
+        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <!--
+                        Ignite provides several options for automatic discovery that can be used
+                        instead of static IP based discovery. For information on all options refer
+                        to our documentation: https://ignite.apache.org/docs/latest/clustering/clustering
+                    -->
+                    <!-- The static IP finder is used here for discovery of the initial nodes. -->
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <!--bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"-->
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with actual host IP addresses. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/persistence-tuning.xml b/docs/_docs/code-snippets/xml/persistence-tuning.xml
new file mode 100644
index 0000000..744bf27
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/persistence-tuning.xml
@@ -0,0 +1,81 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <!-- tag::ds[] -->
+        <property name="dataStorageConfiguration">
+            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
+
+                <!-- tag::page-size[] -->
+                <!-- Set the page size to 8 KB -->
+                <property name="pageSize" value="#{8 * 1024}"/>
+                <!-- end::page-size[] -->
+                <!-- tag::paths[] -->
+                <!--
+                    Sets a path to the root directory where data and indexes are
+                    to be persisted. It is assumed that the directory is on a separate SSD.
+                -->
+                <property name="storagePath" value="/opt/persistence"/>
+                <property name="walPath" value="/opt/wal"/>
+                <property name="walArchivePath" value="/opt/wal-archive"/>
+                <!-- end::paths[] -->
+                <!-- tag::page-write-throttling[] -->
+                <property name="writeThrottlingEnabled" value="true"/>
+
+                <!-- end::page-write-throttling[] -->
+                <!-- tag::data-region[] -->
+                <property name="defaultDataRegionConfiguration">
+                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
+                        <!-- Enabling persistence. -->
+                        <property name="persistenceEnabled" value="true"/>
+                        <!-- Increasing the buffer size to 1 GB. -->
+                        <property name="checkpointPageBufferSize" value="#{1024L * 1024 * 1024}"/>
+                    </bean>
+                </property>
+
+                <!-- end::data-region[] -->
+            </bean>
+        </property>
+        <!-- end::ds[] -->
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <!--
+                        Ignite provides several options for automatic discovery that can be used
+                        instead of static IP based discovery. For information on all options refer
+                        to our documentation: https://ignite.apache.org/docs/latest/clustering/clustering
+                    -->
+                    <!-- The static IP finder is used here for discovery of the initial nodes. -->
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <!--bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"-->
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with actual host IP addresses. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/persistence.xml b/docs/_docs/code-snippets/xml/persistence.xml
new file mode 100644
index 0000000..6ec5c25
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/persistence.xml
@@ -0,0 +1,50 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="dataStorageConfiguration">
+            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
+                <property name="defaultDataRegionConfiguration">
+                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
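+                        <!-- Enable Ignite native persistence for the default data region. -->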
+                        <property name="persistenceEnabled" value="true"/>
+                    </bean>
+                </property>
+                <!-- tag::storage-path[] -->
+                <property name="storagePath" value="/opt/storage"/>
+                <!-- end::storage-path[] -->
+            </bean>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/plugins.xml b/docs/_docs/code-snippets/xml/plugins.xml
new file mode 100644
index 0000000..9f8b950
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/plugins.xml
@@ -0,0 +1,47 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
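+        <!-- Register a custom plugin provider. MyPluginProvider and its 'interval' property come from the docs code snippets. -->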
+        <property name="pluginProviders">
+            <bean class="org.apache.ignite.snippets.plugin.MyPluginProvider">
+               <property name="interval" value="100"/> 
+            </bean>
+        </property>
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <!-- prevent this client from reconnecting on connection loss -->
+                <property name="clientReconnectDisabled" value="true"/>
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/query-entities.xml b/docs/_docs/code-snippets/xml/query-entities.xml
new file mode 100644
index 0000000..5f2f64a
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/query-entities.xml
@@ -0,0 +1,71 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
+        <property name="cacheConfiguration">
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <property name="name" value="Person"/>
+                <!-- Configure query entities -->
+                <property name="queryEntities">
+                    <list>
+                        <bean class="org.apache.ignite.cache.QueryEntity">
+                            <!-- Setting the type of the key. -->
+                            <property name="keyType" value="java.lang.Long"/>
+
+                            <property name="keyFieldName" value="id"/>
+
+                            <!-- Setting the type of the value. -->
+                            <property name="valueType" value="org.apache.ignite.examples.Person"/>
+
+                            <!-- Defining fields that will be either indexed or queryable.
+                                 Indexed fields are added to the 'indexes' list below. -->
+                            <property name="fields">
+                                <map>
+                                    <entry key="id" value="java.lang.Long"/>
+                                    <entry key="name" value="java.lang.String"/>
+                                    <entry key="salary" value="java.lang.Float"/>
+                                </map>
+                            </property>
+                            <!-- Defining indexed fields.-->
+                            <property name="indexes">
+                                <list>
+                                    <!-- Single-field (column) index. -->
+                                    <bean class="org.apache.ignite.cache.QueryIndex">
+                                        <constructor-arg value="name"/>
+                                    </bean>
+                                    <!-- Group index. -->
+                                    <bean class="org.apache.ignite.cache.QueryIndex">
+                                        <constructor-arg>
+                                            <list>
+                                                <value>id</value>
+                                                <value>salary</value>
+                                            </list>
+                                        </constructor-arg>
+                                        <constructor-arg value="SORTED"/>
+                                    </bean>
+                                </list>
+                            </property>
+                        </bean>
+                    </list>
+                </property>
+            </bean>
+        </property>
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/rebalancing-config.xml b/docs/_docs/code-snippets/xml/rebalancing-config.xml
new file mode 100644
index 0000000..44b6c32
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/rebalancing-config.xml
@@ -0,0 +1,65 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
+        <!-- tag::pool-size[] -->
+
+        <property name="rebalanceThreadPoolSize" value="4"/>
+
+        <!-- end::pool-size[] -->
+        <property name="cacheConfiguration">
+            <list>
+                <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                    <property name="name" value="mycache"/>
+                    <!-- tag::mode[] -->
+                    <!-- enable synchronous rebalance mode -->
+                    <property name="rebalanceMode" value="SYNC"/>
+                    <!-- end::mode[] -->
+                    <!-- tag::throttling[] -->
+                    <!-- Set batch size. -->
+                    <property name="rebalanceBatchSize" value="#{2 * 1024 * 1024}"/>
+                    <!-- Set throttle interval. -->
+                    <property name="rebalanceThrottle" value="100"/>
+                    <!-- end::throttling[] -->
+                </bean>
+            </list>
+        </property>
+        <!-- tag::discovery[] -->
+        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+
+                    <!-- The static IP finder is used here for discovery of the initial nodes. -->
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <!--bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"-->
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with actual host IP addresses. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/round-robin-load-balancing.xml b/docs/_docs/code-snippets/xml/round-robin-load-balancing.xml
new file mode 100644
index 0000000..6dca126
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/round-robin-load-balancing.xml
@@ -0,0 +1,69 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans"
+    xmlns:util="http://www.springframework.org/schema/util" 
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
+    xsi:schemaLocation="         http://www.springframework.org/schema/beans
+         http://www.springframework.org/schema/beans/spring-beans.xsd
+         http://www.springframework.org/schema/util
+         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="includeEventTypes">
+            <list>
+                <!--these events are required for the per-task mode-->
+                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
+                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
+                <util:constant static-field="org.apache.ignite.events.EventType.EVT_JOB_MAPPED"/>
+            </list>
+        </property>
+
+        <property name="loadBalancingSpi">
+            <bean class="org.apache.ignite.spi.loadbalancing.roundrobin.RoundRobinLoadBalancingSpi">
+                <!-- Activate the per-task round-robin mode. -->
+                <property name="perTask" value="true"/>
+            </bean>
+        </property>
+
+        <!-- tag::discovery[] -->
+        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <!--
+                        Ignite provides several options for automatic discovery that can be used
+                        instead of static IP based discovery. For information on all options refer
+                        to our documentation: https://ignite.apache.org/docs/latest/clustering/clustering
+                    -->
+                    <!-- The static IP finder is used here for discovery of the initial nodes. -->
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <!--bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"-->
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with actual host IP addresses. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/schemas.xml b/docs/_docs/code-snippets/xml/schemas.xml
new file mode 100644
index 0000000..cea1009
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/schemas.xml
@@ -0,0 +1,48 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="sqlConfiguration">
+            <bean class="org.apache.ignite.configuration.SqlConfiguration">
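+                <!-- Custom SQL schemas created at startup, in addition to the default PUBLIC schema. -->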
+                <property name="sqlSchemas">
+                    <list>
+                        <value>MY_SCHEMA</value>
+                        <value>MY_SECOND_SCHEMA</value>
+                    </list>
+                </property>
+            </bean>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/services.xml b/docs/_docs/code-snippets/xml/services.xml
new file mode 100644
index 0000000..66cadfa
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/services.xml
@@ -0,0 +1,52 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+        <property name="serviceConfiguration">
+            <list>
+                <bean class="org.apache.ignite.services.ServiceConfiguration">
+                    <property name="name" value="myCounterService"/>
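+                    <!-- totalCount=1 with maxPerNodeCount=1 deploys the service as a cluster singleton. -->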
+                    <property name="maxPerNodeCount" value="1"/>
+                    <property name="totalCount" value="1"/>
+                    <property name="service">
+                        <bean class="org.apache.ignite.snippets.services.MyCounterServiceImpl"/>
+                    </property>
+                </bean>
+            </list>
+        </property>
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/slf4j.xml b/docs/_docs/code-snippets/xml/slf4j.xml
new file mode 100644
index 0000000..8802832
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/slf4j.xml
@@ -0,0 +1,57 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<!--
+    Ignite configuration with the SLF4J logger enabled.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::slf4j[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
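+        <!-- Use SLF4J as the Ignite logger implementation (requires the ignite-slf4j module and an SLF4J binding on the classpath). -->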
+        <property name="gridLogger">
+            <bean class="org.apache.ignite.logger.slf4j.Slf4jLogger">
+            </bean>
+        </property>
+
+        <!-- other properties --> 
+
+        <!-- tag::discovery[] -->
+        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <!--
+                        Ignite provides several options for automatic discovery that can be used
+                        instead of static IP based discovery. For information on all options refer
+                        to our documentation: https://ignite.apache.org/docs/latest/clustering/clustering
+                    -->
+                    <!-- The static IP finder is used here for discovery of the initial nodes. -->
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <!--bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"-->
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with actual host IP addresses. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::slf4j[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/snapshots.xml b/docs/_docs/code-snippets/xml/snapshots.xml
new file mode 100644
index 0000000..f2e9d98
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/snapshots.xml
@@ -0,0 +1,52 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <!--
+           Sets a path to the root directory where snapshot files will be persisted.
+           By default, the `snapshots` directory is placed under `IGNITE_HOME/db`.
+        -->
+        <property name="snapshotPath" value="/snapshots"/>
+
+        <!-- tag::cache[] -->
+        <property name="cacheConfiguration">
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <property name="name" value="snapshot-cache"/>
+            </bean>
+        </property>
+        <!-- end::cache[] -->
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/sql-on-heap-cache.xml b/docs/_docs/code-snippets/xml/sql-on-heap-cache.xml
new file mode 100644
index 0000000..c555bc4
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/sql-on-heap-cache.xml
@@ -0,0 +1,44 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="cacheConfiguration">
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <property name="name" value="myCache"/>
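+                <!-- Enable the on-heap cache for SQL rows so that hot rows are kept in Java heap instead of being deserialized from off-heap memory on each access. -->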
+                <property name="sqlOnheapCacheEnabled" value="true"/>
+            </bean>
+        </property>
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/ssl-without-validation.xml b/docs/_docs/code-snippets/xml/ssl-without-validation.xml
new file mode 100644
index 0000000..b8cbc07
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/ssl-without-validation.xml
@@ -0,0 +1,58 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
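+        <!-- SSL context factory with certificate validation disabled: any remote certificate is trusted. For testing only. -->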
+        <property name="sslContextFactory">
+            <bean class="org.apache.ignite.ssl.SslContextFactory">
+                <property name="keyStoreFilePath" value="keystore/node.jks"/>
+                <property name="keyStorePassword" value="123456"/>
+                <property name="trustManagers">
+                    <bean class="org.apache.ignite.ssl.SslContextFactory" factory-method="getDisabledTrustManager"/>
+                </property>
+            </bean>
+        </property>
+
+        <!-- tag::discovery[] -->
+        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <!--
+                        Ignite provides several options for automatic discovery that can be used
+                        instead of static IP based discovery. For information on all options refer
+                        to our documentation: https://ignite.apache.org/docs/latest/clustering/clustering
+                    -->
+                    <!-- The static IP finder is used here for discovery of the initial nodes. -->
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <!--bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"-->
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with actual host IP addresses. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/ssl.xml b/docs/_docs/code-snippets/xml/ssl.xml
new file mode 100644
index 0000000..5932f02
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/ssl.xml
@@ -0,0 +1,58 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
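+        <!-- Keystore-based SSL context factory used to secure connections between cluster nodes. -->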
+        <property name="sslContextFactory">
+            <bean class="org.apache.ignite.ssl.SslContextFactory">
+                <property name="keyStoreFilePath" value="keystore/node.jks"/>
+                <property name="keyStorePassword" value="123456"/>
+                <property name="trustStoreFilePath" value="keystore/trust.jks"/>
+                <property name="trustStorePassword" value="123456"/>
+                <property name="protocol" value="TLSv1.3"/>
+            </bean>
+        </property>
+
+        <!-- tag::discovery[] -->
+        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <!--
+                        Ignite provides several options for automatic discovery that can be used
+                        instead of static IP based discovery. For information on all options refer
+                        to our documentation: https://ignite.apache.org/docs/latest/clustering/clustering
+                    -->
+                    <!-- The static IP finder is used here for discovery of the initial nodes. -->
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <!--bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"-->
+                        <property name="addresses">
+                            <list>
+                                <!-- In a distributed environment, replace with actual host IP addresses. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/swap.xml b/docs/_docs/code-snippets/xml/swap.xml
new file mode 100644
index 0000000..4e0a602
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/swap.xml
@@ -0,0 +1,47 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::swap[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <!-- Durable memory configuration. -->
+        <property name="dataStorageConfiguration">
+            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
+                <property name="dataRegionConfigurations">
+                    <list>
+                        <!--
+                        Defining a data region that can grow up to 5 GB
+                        with swap space enabled.
+                        -->
+                        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
+                            <!-- Custom region name. -->
+                            <property name="name" value="500MB_Region"/>
+                            <!-- 100 MB initial size. -->
+                            <property name="initialSize" value="#{100L * 1024 * 1024}"/>
+                            <!-- Setting the region max size equal to the physical RAM size (5 GB). -->
+                            <property name="maxSize" value="#{5L * 1024 * 1024 * 1024}"/>
+                            <!-- Enabling swap space for the region. -->
+                            <property name="swapPath" value="/path/to/some/directory"/>
+                        </bean>
+                    </list>
+                </property>
+            </bean>
+        </property>
+        <!-- Other configurations. -->
+    </bean>
+    <!-- end::swap[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/tcp-ip-discovery.xml b/docs/_docs/code-snippets/xml/tcp-ip-discovery.xml
new file mode 100644
index 0000000..7f28904
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/tcp-ip-discovery.xml
@@ -0,0 +1,45 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
+
+        <!-- tag::failure-detection-timeout[] -->
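+        <!-- Timeout for detecting failures of server nodes, in milliseconds. -->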
+        <property name="failureDetectionTimeout" value="5000"/>
+
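+        <!-- Timeout for detecting failures of client nodes, in milliseconds. -->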
+        <property name="clientFailureDetectionTimeout" value="10000"/>
+        <!-- end::failure-detection-timeout[] -->
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/tde.xml b/docs/_docs/code-snippets/xml/tde.xml
new file mode 100644
index 0000000..46a457f
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/tde.xml
@@ -0,0 +1,61 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <!-- We need to configure EncryptionSpi to enable encryption feature. -->
+        <property name="encryptionSpi">
+            <!-- Using EncryptionSpi implementation based on java keystore. -->
+            <bean class="org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionSpi">
+                <!-- Path to the keystore file. -->
+                <property name="keyStorePath" value="ignite_keystore.jks"/>
+                <!-- Password for keystore file. -->
+                <property name="keyStorePassword" value="mypassw0rd"/>
+                <!-- Name of the key in keystore to be used as a master key. -->
+                <property name="masterKeyName" value="ignite.master.key"/>
+                <!-- Size of the cache encryption keys in bits. Can be 128, 192, or 256 bits.-->
+                <property name="keySize" value="256"/>
+            </bean>
+        </property>
+        <!-- tag::cache[] -->
+        <property name="cacheConfiguration">
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <property name="name" value="encrypted-cache"/>
+                <property name="encryptionEnabled" value="true"/>
+            </bean>
+        </property>
+        <!-- end::cache[] -->
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/thin-client-cluster-config.xml b/docs/_docs/code-snippets/xml/thin-client-cluster-config.xml
new file mode 100644
index 0000000..67b2f89
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/thin-client-cluster-config.xml
@@ -0,0 +1,65 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one or more
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership.
+    The ASF licenses this file to You under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with
+    the License.  You may obtain a copy of the License at
+
+         http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+  -->
+<!--
+      Ignite configuration with all defaults, with p2p deployment and events enabled.
+  -->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+  <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
+
+    <!-- tag::ssl-configuration[] -->
+    <property name="clientConnectorConfiguration">
+        <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
+            <property name="sslEnabled" value="true"/>
+            <property name="useIgniteSslContextFactory" value="false"/>
+            <property name="sslContextFactory">
+                <bean class="org.apache.ignite.ssl.SslContextFactory">
+                    <property name="keyStoreFilePath" value="/path/to/server.jks"/>
+                    <property name="keyStorePassword" value="123456"/>
+                    <property name="trustStoreFilePath" value="/path/to/trust.jks"/>
+                    <property name="trustStorePassword" value="123456"/>
+                </bean>
+            </property>
+        </bean>
+    </property>
+    <!-- end::ssl-configuration[] -->
+
+
+       <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
+    <property name="discoverySpi">
+      <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+        <property name="ipFinder">
+          <!--
+              Ignite provides several options for automatic discovery that can be used
+              instead of static IP-based discovery. For information on all options, refer
+              to the documentation: https://ignite.apache.org/docs/latest/
+          -->
+          <!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
+          <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+            <!--bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"-->
+            <property name="addresses">
+              <list>
+                <!-- In distributed environment, replace with actual host IP address. -->
+                <value>127.0.0.1:47500..47509</value>
+              </list>
+            </property>
+          </bean>
+        </property>
+      </bean>
+    </property>
+  </bean>
+</beans>
diff --git a/docs/_docs/code-snippets/xml/thread-pool.xml b/docs/_docs/code-snippets/xml/thread-pool.xml
new file mode 100644
index 0000000..c5c84d5
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/thread-pool.xml
@@ -0,0 +1,48 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+        <property name="executorConfiguration">
+            <list>
+                <bean class="org.apache.ignite.configuration.ExecutorConfiguration">
+                    <property name="name" value="myPool"/>
+                    <property name="size" value="16"/>
+                </bean>
+            </list>
+        </property>
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/tracing.xml b/docs/_docs/code-snippets/xml/tracing.xml
new file mode 100644
index 0000000..ff51071
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/tracing.xml
@@ -0,0 +1,45 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+        <property name="tracingSpi">
+            <bean class="org.apache.ignite.spi.tracing.opencensus.OpenCensusTracingSpi"/>
+        </property>
+
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <!-- prevent this client from reconnecting on connection loss -->
+                <property name="clientReconnectDisabled" value="true"/>
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/transactions.xml b/docs/_docs/code-snippets/xml/transactions.xml
new file mode 100644
index 0000000..ec85cfd
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/transactions.xml
@@ -0,0 +1,57 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        
+        <!-- tag::cache[] -->
+        <property name="cacheConfiguration">
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <property name="name" value="myCache"/>
+                <property name="atomicityMode" value="TRANSACTIONAL"/>
+            </bean>
+        </property>
+
+        <!-- end::cache[] -->
+        <!-- tag::configuration[] -->
+        <property name="transactionConfiguration">
+            <bean class="org.apache.ignite.configuration.TransactionConfiguration">
+                <!-- Set the transaction timeout on partition map exchange to 20 seconds. -->
+                <property name="txTimeoutOnPartitionMapExchange" value="20000"/>
+            </bean>
+        </property>
+
+        <!-- end::configuration[] -->
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/wal.xml b/docs/_docs/code-snippets/xml/wal.xml
new file mode 100644
index 0000000..b11534b
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/wal.xml
@@ -0,0 +1,57 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="         http://www.springframework.org/schema/beans         http://www.springframework.org/schema/beans/spring-beans.xsd         http://www.springframework.org/schema/util         http://www.springframework.org/schema/util/spring-util.xsd">
+
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
+
+        <!-- tag::segment-size[] -->
+        <property name="dataStorageConfiguration">
+            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
+
+                <!-- Set the size of WAL segments to 128 MB. -->
+                <property name="walSegmentSize" value="#{128 * 1024 * 1024}"/>
+
+                <property name="defaultDataRegionConfiguration">
+                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
+                        <property name="persistenceEnabled" value="true"/>
+                    </bean>
+                </property>
+
+            </bean>
+        </property>
+        <!-- end::segment-size[] -->
+        <!-- tag::discovery[] -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/code-snippets/xml/weighted-load-balancing.xml b/docs/_docs/code-snippets/xml/weighted-load-balancing.xml
new file mode 100644
index 0000000..b2429f3
--- /dev/null
+++ b/docs/_docs/code-snippets/xml/weighted-load-balancing.xml
@@ -0,0 +1,59 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<beans xmlns="http://www.springframework.org/schema/beans"
+    xmlns:util="http://www.springframework.org/schema/util" 
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
+    xsi:schemaLocation="         http://www.springframework.org/schema/beans
+         http://www.springframework.org/schema/beans/spring-beans.xsd
+         http://www.springframework.org/schema/util
+         http://www.springframework.org/schema/util/spring-util.xsd">
+    <!-- tag::ignite-config[] -->
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="loadBalancingSpi">
+            <bean class="org.apache.ignite.spi.loadbalancing.weightedrandom.WeightedRandomLoadBalancingSpi">
+                <property name="useWeights" value="true"/>
+                <property name="nodeWeight" value="10"/>
+            </bean>
+        </property>
+        <!-- tag::discovery[] -->
+        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <!--
+                        Ignite provides several options for automatic discovery that can be used
+                        instead of static IP-based discovery. For information on all options, refer
+                        to the documentation: https://ignite.apache.org/docs/latest/
+                    -->
+                    <!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <!--bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"-->
+                        <property name="addresses">
+                            <list>
+                                <!-- In distributed environment, replace with actual host IP address. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        <!-- end::discovery[] -->
+    </bean>
+    <!-- end::ignite-config[] -->
+</beans>
diff --git a/docs/_docs/configuring-caches/atomicity-modes.adoc b/docs/_docs/configuring-caches/atomicity-modes.adoc
new file mode 100644
index 0000000..6820e8f
--- /dev/null
+++ b/docs/_docs/configuring-caches/atomicity-modes.adoc
@@ -0,0 +1,113 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Atomicity Modes
+
+By default, a cache supports only atomic operations, and bulk operations such as `putAll()` or `removeAll()` are executed as a sequence of individual puts and removes.
+You can enable transactional support and group multiple cache operations, on one or more keys, into a single atomic transaction.
+These operations are executed without any other interleaved operations on the specified keys, and either all succeed or all fail.
+There is no partial execution of the operations.
+
+To enable support for transactions for a cache, set the `atomicityMode` parameter in the cache configuration to `TRANSACTIONAL`.
+
+CAUTION: If you configure multiple caches within one link:configuring-caches/cache-groups[cache group], the caches must be either all atomic, or all transactional. You cannot have both TRANSACTIONAL and ATOMIC caches in one cache group.
+
+Ignite supports 3 atomicity modes, which are described in the following table.
+
+[cols="30%,70%",opts="autowidth"]
+|===
+| Atomicity Mode | Description
+
+| ATOMIC | The default mode.
+All operations are performed atomically, one at a time.
+Transactions are not supported.
+The `ATOMIC` mode provides better performance by avoiding transactional locks, while still guaranteeing data atomicity and consistency for each individual operation.
+Bulk writes, such as the `putAll(...)` and `removeAll(...)` methods, are not executed in one transaction and can partially fail.
+If this happens, a `CachePartialUpdateException` is thrown and contains a list of keys for which the update failed.
+| TRANSACTIONAL
+a| Enables support for ACID-compliant transactions executed via the key-value API.
+SQL transactions are not supported.
+Transactions in this mode can have different link:key-value-api/transactions#concurrency-modes-and-isolation-levels[concurrency modes and isolation levels].
+Enable this mode only if you need support for ACID-compliant operations.
+For more information about transactions, see link:key-value-api/transactions[Performing Transactions].
+
+[NOTE]
+====
+[discrete]
+=== Performance Considerations
+The `TRANSACTIONAL` mode adds a performance cost to cache operations and should be enabled only if you need transactions.
+====
+
+| TRANSACTIONAL_SNAPSHOT
+
+a| An experimental mode that implements multiversion concurrency control (MVCC) and supports both key-value transactions and SQL transactions. See link:transactions/mvcc[Multiversion Concurrency Control] for details about and limitations of this mode.
+
+[WARNING]
+====
+The MVCC implementation is in beta and should not be used in production.
+====
+
+
+|===
+
+
+You can enable transactions for a cache in the cache configuration.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+    <property name="cacheConfiguration">
+        <bean class="org.apache.ignite.configuration.CacheConfiguration">
+            <property name="name" value="myCache"/>
+
+            <property name="atomicityMode" value="TRANSACTIONAL"/>
+        </bean>
+    </property>
+
+    <!-- Optional transaction configuration. -->
+    <property name="transactionConfiguration">
+        <bean class="org.apache.ignite.configuration.TransactionConfiguration">
+            <!-- Configure TM lookup here. -->
+        </bean>
+    </property>
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/PerformingTransactions.java[tags=enabling,!exclude,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+var cfg = new IgniteConfiguration
+{
+    CacheConfiguration = new[]
+    {
+        new CacheConfiguration("txCache")
+        {
+            AtomicityMode = CacheAtomicityMode.Transactional
+        }
+    },
+    TransactionConfiguration = new TransactionConfiguration
+    {
+        DefaultTransactionConcurrency = TransactionConcurrency.Optimistic
+    }
+};
+----
+tab:C++[unsupported]
+--
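+
+For reference, below is a minimal Java sketch of the same setup combined with a simple transaction. It is illustrative only: the cache name `myCache` and the values are placeholders, not part of the snippets above.
+
+[source,java]
+----
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.CacheAtomicityMode;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.transactions.Transaction;
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+CacheConfiguration<Integer, Integer> ccfg = new CacheConfiguration<>("myCache");
+ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
+cfg.setCacheConfiguration(ccfg);
+
+try (Ignite ignite = Ignition.start(cfg)) {
+    IgniteCache<Integer, Integer> cache = ignite.cache("myCache");
+
+    // Group two updates into one atomic transaction: both succeed or both fail.
+    try (Transaction tx = ignite.transactions().txStart()) {
+        cache.put(1, 100);
+        cache.put(2, 200);
+        tx.commit();
+    }
+}
+----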
diff --git a/docs/_docs/configuring-caches/cache-groups.adoc b/docs/_docs/configuring-caches/cache-groups.adoc
new file mode 100644
index 0000000..2ad71d6
--- /dev/null
+++ b/docs/_docs/configuring-caches/cache-groups.adoc
@@ -0,0 +1,80 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Cache Groups
+
+For each cache deployed in the cluster, there is always overhead: the cache is split into partitions whose state must be tracked on every cluster node.
+
+If link:persistence/native-persistence[Native Persistence] is enabled, then for every partition there is an open file on the disk that Ignite actively writes to and reads from. Thus, the more caches and partitions you have:
+
+* The more Java heap is occupied by partition maps. Every cache has its own partition map.
+* The longer it might take for a new node to join the cluster.
+* The longer it might take to initiate rebalancing if a node leaves the cluster.
+* The more partition files are kept open, and the worse the checkpointing performance might be.
+
+Usually, you will not notice any of these problems in deployments with dozens or even several hundred caches. However, with thousands of caches, the impact can be noticeable.
+
+To avoid this impact, consider using cache groups. Caches within a single cache group share various internal structures such as partition maps, which speeds up topology event processing and decreases overall memory usage. Note that from the API standpoint, there is no difference between a cache that is part of a group and one that is not.
+
+You can create a cache group by setting the `groupName` property of `CacheConfiguration`.
+Here is an example of how to assign caches to a specific group:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/cache-groups.xml[tags=ignite-config;!discovery, indent=0]
+
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/DataPartitioning.java[tag=cfg,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DataModellingDataPartitioning.cs[tag=partitioning,indent=0]
+----
+tab:C++[unsupported]
+--
+
+In the above example, the `Person` and `Organization` caches belong to `group1`.
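+
+For reference, a minimal Java sketch of this configuration follows. It is illustrative only: the key/value types are placeholders for your own classes.
+
+[source,java]
+----
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.CacheConfiguration;
+
+Ignite ignite = Ignition.start();
+
+// Both caches share partition maps, B+trees, and partition files via "group1".
+CacheConfiguration<Long, Object> personCfg = new CacheConfiguration<>("Person");
+personCfg.setGroupName("group1");
+
+CacheConfiguration<Long, Object> orgCfg = new CacheConfiguration<>("Organization");
+orgCfg.setGroupName("group1");
+
+ignite.createCache(personCfg);
+ignite.createCache(orgCfg);
+----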
+
+[NOTE]
+====
+[discrete]
+=== How are key-value pairs distinguished?
+
+If a cache is assigned to a cache group, its data is stored in the internal structures of the shared partitions.
+Every key you put into the cache is enriched with the unique ID of the cache the key belongs to.
+The ID is derived from the cache name.
+This happens automatically and makes it possible to store the data of different caches in the same partitions and B+tree structures.
+====
+
+The reason for grouping caches is simple: if you group 1000 caches, you end up with 1000x fewer structures that store partition data, partition maps, and open partition files.
+
+
+[NOTE]
+====
+[discrete]
+=== Should cache groups be used all the time?
+
+With all the benefits cache groups provide, they might impact the performance of read operations and index lookups.
+This is because the data and indexes of all the caches in a group are mixed in shared data structures (partition maps, B+trees), so querying over them takes more time.
+
+Thus, consider using cache groups if you have a cluster with hundreds of nodes and caches, and you observe increased Java heap usage by internal structures, a drop in checkpointing performance, or slow joining of new nodes to the cluster.
+====
+
diff --git a/docs/_docs/configuring-caches/configuration-overview.adoc b/docs/_docs/configuring-caches/configuration-overview.adoc
new file mode 100644
index 0000000..1a6c955
--- /dev/null
+++ b/docs/_docs/configuring-caches/configuration-overview.adoc
@@ -0,0 +1,153 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Overview
+
+This chapter explains how you can set cache configuration parameters.
+Once a cache is created, you cannot change its configuration parameters.
+
+[NOTE]
+====
+[discrete]
+=== Caches vs. Tables in Ignite
+
+The cache-driven configuration approach is one of the available options. You can also configure caches/tables using
+standard SQL commands such as `CREATE TABLE`. Refer to the link:data-modeling/data-modeling#key-value-cache-vs-sql-table[Caches vs. Tables]
+section to learn how caches relate to tables in Ignite.
+====
+
+
+== Configuration Example
+Below is an example of a cache configuration.
+
+[tabs]
+--
+
+tab:XML[]
+
+[source,xml]
+----
+include::code-snippets/xml/cache-configuration.xml[tags=ignite-config;!discovery, indent=0]
+----
+
+//tag::params[]
+For the full list of parameters, refer to the javadoc:org.apache.ignite.configuration.CacheConfiguration[] javadoc.
+
+[cols="1,3,1",options="header",separator=|]
+|===
+|Parameter|Description|Default Value
+
+| `name` | The cache name. | None.
+
+|`cacheMode`
+a| The `cacheMode` parameter defines the way data is distributed in the cluster.
+
+In the `PARTITIONED` mode (default), the overall data set is divided into partitions and all partitions are split between participating nodes in a balanced manner.
+
+In the `REPLICATED` mode, all the data is replicated to every node in the cluster.
+
+See the link:data-modeling/data-partitioning#partitionedreplicated-mode[Partitioned/Replicated Mode] section for more details.
+| `PARTITIONED`
+
+| `writeSynchronizationMode` | Write synchronization mode. Refer to the link:configuring-caches/configuring-backups[Configuring Partition Backups] section. | `PRIMARY_SYNC`
+
+|`rebalanceMode`
+a| This parameter controls the way the rebalancing process is performed. Possible values include:
+
+* `SYNC` -- Any requests to the cache's API are blocked until rebalancing is completed.
+* `ASYNC` (default) -- Rebalancing is performed in the background.
+* `NONE` -- Rebalancing is not triggered.
+| `ASYNC`
+
+|`backups`
+|The number of link:data-modeling/data-partitioning#backup-partitions[backup partitions] for the cache.
+| `0`
+
+|`partitionLossPolicy`
+| link:configuring-caches/partition-loss-policy[Partition loss policy].
+| `IGNORE`
+
+|`readFromBackup`
+| [[readfrombackup]] Read the requested cache entry from a backup partition if it is available on the local node, instead of requesting it from the primary partition (which can be located on a remote node).
+|  `true`
+
+|`queryParallelism` | The number of threads in a single node to process a SQL query executed on the cache. Refer to the link:SQL/sql-tuning#query-parallelism[Query Parallelism] section in the Performance guide for more information.
+| 1
+|===
+
+//end::params[]
+
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/ConfiguringCaches.java[tag=cfg,indent=0]
+----
+
+include::configuring-caches/configuration-overview.adoc[tag=params]
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DataModellingConfiguringCaches.cs[tag=cfg,indent=0]
+----
+tab:SQL[]
+[source, sql]
+----
+CREATE TABLE IF NOT EXISTS Person (
+  id int,
+  city_id int,
+  name varchar,
+  age int,
+  company varchar,
+  PRIMARY KEY (id, city_id)
+) WITH "cache_name=myCache,template=partitioned,backups=2";
+----
+
+
+For the full list of parameters, refer to the link:sql-reference/ddl#create-table[CREATE TABLE] section.
+tab:C++[unsupported]
+--
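+
+For reference, here is a minimal Java sketch that sets the parameters listed in the table above. All values are illustrative defaults or examples, not recommendations.
+
+[source,java]
+----
+import org.apache.ignite.cache.CacheMode;
+import org.apache.ignite.cache.CacheRebalanceMode;
+import org.apache.ignite.cache.CacheWriteSynchronizationMode;
+import org.apache.ignite.configuration.CacheConfiguration;
+
+CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
+
+ccfg.setCacheMode(CacheMode.PARTITIONED);
+ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
+ccfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
+ccfg.setBackups(2);
+ccfg.setReadFromBackup(true);
+ccfg.setQueryParallelism(1);
+----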
+
+
+== Cache Templates
+A cache template is an instance of `CacheConfiguration` that can be registered in the cluster and used later as a basis for creating new caches or SQL tables. A cache or table created from a template inherits all the properties of the template.
+
+Templates are useful when creating a table using the link:sql-reference/ddl#create-table[CREATE TABLE] command, because the command does not support all available cache parameters.
+
+NOTE: Currently, templates are supported for the CREATE TABLE and REST commands.
+
+To create a template, define a cache configuration and add it to the `Ignite` instance, as shown below. If you want to define a cache template in the XML configuration file, you must add an asterisk to the template's name. This is required to indicate that the configuration is a template and not an actual cache.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/cache-template.xml[tags=ignite-config;!discovery, indent=0]
+
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/ConfiguringCaches.java[tag=template,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DataModellingConfiguringCaches.cs[tag=template,indent=0]
+----
+tab:C++[unsupported]
+--
+
+Once the cache template is registered in the cluster, as shown in the code snippet above, you can use it to create another cache with the same configuration.
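+
+For reference, a hedged Java sketch of the flow described above follows. It assumes a started node instance `ignite`; the template name `myTemplate`, the backup count, and the table definition are illustrative.
+
+[source,java]
+----
+import org.apache.ignite.Ignite;
+import org.apache.ignite.configuration.CacheConfiguration;
+
+// Register a cache template in the cluster. The asterisk suffix is only
+// required for templates defined in the XML configuration file.
+CacheConfiguration<?, ?> tpl = new CacheConfiguration<>("myTemplate");
+tpl.setBackups(2);
+ignite.addCacheConfiguration(tpl);
+
+// The template can now be referenced from SQL, for example:
+// CREATE TABLE City (id int PRIMARY KEY, name varchar) WITH "template=myTemplate";
+----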
diff --git a/docs/_docs/configuring-caches/configuring-backups.adoc b/docs/_docs/configuring-caches/configuring-backups.adoc
new file mode 100644
index 0000000..30535eb
--- /dev/null
+++ b/docs/_docs/configuring-caches/configuring-backups.adoc
@@ -0,0 +1,92 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Configuring Partition Backups
+
+include::data-modeling/data-partitioning.adoc[tag=partition-backups]
+
+== Configuring Backups
+
+To configure the number of backup copies, set the `backups` property in the cache configuration.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/configure-backups.xml[tags=ignite-config;!discovery;!sync-mode, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/ConfiguringCaches.java[tag=backups,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DataModellingConfiguringCaches.cs[tag=backups,indent=0]
+----
+tab:C++[unsupported]
+
+--
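+
+For reference, a minimal Java sketch (the cache name and the backup count are illustrative):
+
+[source,java]
+----
+import org.apache.ignite.cache.CacheMode;
+import org.apache.ignite.configuration.CacheConfiguration;
+
+CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
+ccfg.setCacheMode(CacheMode.PARTITIONED);
+
+// Keep one backup copy of every partition on another node.
+ccfg.setBackups(1);
+----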
+
+== Synchronous and Asynchronous Backups
+////
+TODO: explain this better
+////
+
+You can configure whether updates of the primary and backup copies are synchronous or asynchronous by specifying a write synchronization mode.
+You set this mode in the cache configuration:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/configure-backups.xml[tags=ignite-config;!discovery;!cache-mode, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/ConfiguringCaches.java[tag=synchronization-mode,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DataModellingConfiguringCaches.cs[tag=synchronization-mode,indent=0]
+----
+tab:C++[unsupported]
+--
+
+The write synchronization mode can be set to the following values:
+
+[cols="1,5",opts="stretch,header"]
+|===
+| Value |  Description
+| FULL_SYNC
+| The client node waits for the write or commit to complete on all participating remote nodes (primary and backup).
+
+| FULL_ASYNC
+| The client node does not wait for responses from the participating nodes. Remote nodes may therefore be updated slightly after a cache write method or the `Transaction.commit()` method completes.
+
+| PRIMARY_SYNC
+| This is the default mode. The client node waits for the write or commit to complete on the primary node, but does not wait for backups to be updated.
+
+|===
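+
+For reference, a minimal Java sketch of setting the mode (choosing `FULL_SYNC` here is just an example):
+
+[source,java]
+----
+import org.apache.ignite.cache.CacheWriteSynchronizationMode;
+import org.apache.ignite.configuration.CacheConfiguration;
+
+CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
+
+// Wait for both the primary and backup copies to be updated on every write.
+ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
+----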
+
+
+//NOTE: Regardless of write synchronization mode, cache data will always remain fully consistent across all participating nodes when using transactions.
+
+
diff --git a/docs/_docs/configuring-caches/expiry-policies.adoc b/docs/_docs/configuring-caches/expiry-policies.adoc
new file mode 100644
index 0000000..4ddde9f
--- /dev/null
+++ b/docs/_docs/configuring-caches/expiry-policies.adoc
@@ -0,0 +1,90 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Expiry Policies
+
+== Overview
+An expiry policy specifies the amount of time that must pass before an entry is considered expired. The time can be counted from the creation, last access, or modification time.
+
+Depending on the memory configuration, the expiration policies remove entries from either RAM or disk:
+
+* *In-Memory Mode* (data is stored solely in RAM): expired entries are purged from RAM.
+* *In-Memory + Native persistence*: expired entries are removed from both memory and disk. Note that expiry policies remove entries from the partition files on disk without freeing up space. The space is reused to write subsequent entries.
+* *In-Memory + External Storage*: expired entries are removed from memory only (in Ignite) and left untouched in the external storage (RDBMS, NoSQL, and other databases).
+* *In-Memory + Swap*: expired entries are removed from both RAM and swap files.
+
+To set up an expiration policy, you can use any of the standard implementations of `javax.cache.expiry.ExpiryPolicy` or implement your own.
+
+== Configuration
+Below is an example expiry policy configuration.
+
+
+[tabs]
+--
+tab:XML[]
+
+[source,xml]
+----
+include::code-snippets/xml/expiry.xml[tags=cache-with-expiry, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/ExpiryPolicies.java[tag=cfg,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ExpiryPolicies.cs[tag=cfg,indent=0]
+----
+tab:C++[unsupported]
+--
+
+You can also set or change the expiry policy for individual cache operations. The policy is applied to each operation invoked on the returned cache instance.
+
+[source,java]
+----
+include::{javaCodeDir}/ExpiryPolicies.java[tag=expiry2,indent=0]
+----
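+
+For example, here is a minimal sketch, assuming a started node instance `ignite` and an existing cache named `myCache`, that applies a creation-based expiry of 5 minutes to operations issued through the returned instance:
+
+[source,java]
+----
+import java.util.concurrent.TimeUnit;
+
+import javax.cache.expiry.CreatedExpiryPolicy;
+import javax.cache.expiry.Duration;
+
+import org.apache.ignite.IgniteCache;
+
+IgniteCache<Integer, String> cache = ignite.cache("myCache");
+
+// Entries written through this proxy expire 5 minutes after creation.
+IgniteCache<Integer, String> cacheWithExpiry =
+    cache.withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.MINUTES, 5)));
+
+cacheWithExpiry.put(1, "expires in 5 minutes");
+----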
+
+== Eager TTL
+
+Entries that are expired can be removed from the cache either eagerly or when they are accessed by a cache operation. If there is at least one cache configured with eager TTL enabled, Ignite creates a single thread to clean up expired entries in the background.
+
+If eager TTL is disabled (`eagerTtl` is set to `false`), expired entries are not removed immediately. Instead, they are removed by the thread that executes the cache operation that requests them.
+
+Eager TTL can be enabled or disabled via the `CacheConfiguration.eagerTtl` property (the default value is `true`):
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.CacheConfiguration">
+    <property name="eagerTtl" value="true"/>
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/ExpiryPolicies.java[tag=eagerTtl,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ExpiryPolicies.cs[tag=eagerTTL,indent=0]
+----
+tab:C++[unsupported]
+--
diff --git a/docs/_docs/configuring-caches/near-cache.adoc b/docs/_docs/configuring-caches/near-cache.adoc
new file mode 100644
index 0000000..c50889f
--- /dev/null
+++ b/docs/_docs/configuring-caches/near-cache.adoc
@@ -0,0 +1,102 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Near Caches
+
+A near cache is a local cache that stores the most recently or most frequently accessed data on the local node. Let's say your application launches a client node and regularly queries reference data, such as country codes. Because client nodes do not store data, these queries always fetch data from the remote nodes. You can configure a near cache to keep the country codes on the local node while your application is running.
+
+A near cache is configured for a specific regular cache and holds data for that cache only.
+
+Near caches store data in on-heap memory. You can configure the maximum size of a near cache and the eviction policy for its entries.
+
+[NOTE]
+====
+Near caches are fully transactional and get updated or invalidated automatically whenever the data changes on the server nodes.
+====
+
+== Configuring Near Cache
+
+You can configure a near cache for a particular cache in the cache configuration.
+
+:javaCodeFile: {javaCodeDir}/NearCache.java
+:xmlFile: code-snippets/xml/near-cache-config.xml
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tag=cache-with-near-cache,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeFile}[tag=nearCacheConfiguration,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/NearCaches.cs[tag=nearCacheConf,indent=0]
+----
+tab:C++[unsupported]
+--
+
+Once configured in this way, the near cache is created on any node that requests data from the underlying cache, including both server nodes and client nodes.
+When you get an instance of the cache, as shown in the following example, the data requests go through the near cache.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+IgniteCache<Integer, Integer> cache = ignite.cache("myCache");
+
+int value = cache.get(1);
+----
+--
+
+Most of the parameters in the underlying cache configuration that are relevant to the near cache are inherited by it.
+For example, if the underlying cache has an link:configuring-caches/expiry-policies[expiry policy] configured, entries in the near cache are expired based on the same policy.
+
+The parameters listed in the table below are not inherited from the underlying cache configuration.
+
+[cols="1,3,1",opts="autowidth.stretch,header"]
+|===
+|Parameter | Description | Default Value
+|nearEvictionPolicy| The eviction policy for the near cache. See the link:memory-configuration/eviction-policies[Eviction policies] page for details. | none
+|nearStartSize| The initial capacity of the near cache (the number of entries it can hold). | 375,000
+|===
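+
+For reference, a hedged Java sketch of setting these two parameters (the LRU policy choice and the sizes are illustrative):
+
+[source,java]
+----
+import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
+import org.apache.ignite.configuration.NearCacheConfiguration;
+
+NearCacheConfiguration<Integer, Integer> nearCfg = new NearCacheConfiguration<>();
+
+// Evict near-cache entries with an LRU policy once 100,000 entries are held.
+LruEvictionPolicy<Integer, Integer> evictionPlc = new LruEvictionPolicy<>();
+evictionPlc.setMaxSize(100_000);
+nearCfg.setNearEvictionPolicy(evictionPlc);
+
+// Start with a smaller initial capacity than the 375,000 default.
+nearCfg.setNearStartSize(50_000);
+----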
+
+== Creating Near Cache Dynamically On Client Nodes
+When making requests from a client node to a cache that has not been configured to use a near cache, you can create a near cache for that cache dynamically.
+This increases performance by storing "hot" data locally on the client side.
+This cache is operable only on the node where it was created.
+
+To do this, create a near cache configuration and pass it as an argument to the method that gets the instance of the cache.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaCodeFile}[tag=createNearCacheDynamically,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/NearCaches.cs[tag=nearCacheClientNode,indent=0]
+----
+tab:C++[unsupported]
+--
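+
+For reference, a minimal sketch, assuming a client-node instance `ignite` and an existing cache named `myCache`:
+
+[source,java]
+----
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.configuration.NearCacheConfiguration;
+
+// Attach a near cache to the existing cache dynamically on this client node.
+IgniteCache<Integer, Integer> cache =
+    ignite.getOrCreateNearCache("myCache", new NearCacheConfiguration<>());
+
+// Subsequent reads of hot keys are served from the local near cache.
+Integer value = cache.get(1);
+----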
+
diff --git a/docs/_docs/configuring-caches/on-heap-caching.adoc b/docs/_docs/configuring-caches/on-heap-caching.adoc
new file mode 100644
index 0000000..648a194
--- /dev/null
+++ b/docs/_docs/configuring-caches/on-heap-caching.adoc
@@ -0,0 +1,182 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= On-Heap Caching
+
+Ignite uses off-heap memory to allocate memory regions outside of the Java heap. However, you can enable on-heap caching by setting `CacheConfiguration.setOnheapCacheEnabled(true)`.
+
+On-heap caching is useful in scenarios when you do a lot of cache reads on server nodes that work with cache entries in link:data-modeling/data-modeling#binary-object-format[binary form] or that deserialize cache entries. For instance, this might happen when a distributed computation or deployed service gets some data from caches for further processing.
+
+
+[tabs]
+--
+tab:XML[]
+
+[source,xml]
+----
+include::code-snippets/xml/on-heap-cache.xml[tags=ignite-config;!discovery,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/OnHeapCaching.java[tag=onHeap,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/OnHeapCaching.cs[tag=onheap,indent=0]
+----
+tab:C++[unsupported]
+--
+
+
+== Configuring Eviction Policy
+
+When on-heap caching is enabled, you can use one of the on-heap eviction policies to manage the growing on-heap cache.
+
+Eviction policies control the maximum number of elements that can be stored in a cache's on-heap memory. Whenever the maximum on-heap cache size is reached, entries are evicted from the Java heap.
+
+NOTE: The on-heap eviction policies remove cache entries from the Java heap only. The entries stored in the off-heap region of the memory are not affected.
+
+Some eviction policies support batch eviction and eviction by memory size limit. If batch eviction is enabled, then eviction starts when cache size becomes `batchSize` elements greater than the maximum cache size. In this case, `batchSize` entries are evicted. If eviction by memory size limit is enabled, then eviction starts when the size of cache entries in bytes becomes greater than the maximum memory size.
+
+NOTE: Batch eviction is supported only if the maximum memory limit isn't set.
+
+Eviction policies are pluggable and are controlled via the `EvictionPolicy` interface. The implementation of eviction policy is notified of every cache change and defines the algorithm of choosing the entries to evict from the on-heap cache.
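+
+As an illustration of these options, here is a hedged Java sketch that enables batch eviction for an LRU policy. The names and sizes are illustrative, and recall that batch eviction applies only when no maximum memory size is set.
+
+[source,java]
+----
+import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
+import org.apache.ignite.configuration.CacheConfiguration;
+
+LruEvictionPolicy<Object, Object> plc = new LruEvictionPolicy<>();
+
+// Eviction starts once the cache grows batchSize entries past maxSize,
+// and evicts batchSize entries at a time.
+plc.setMaxSize(1_000_000);
+plc.setBatchSize(1_000);
+
+CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("myCache");
+ccfg.setOnheapCacheEnabled(true);
+ccfg.setEvictionPolicy(plc);
+----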
+
+=== Least Recently Used (LRU)
+
+LRU eviction policy, based on the link:http://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used[Least Recently Used (LRU)] algorithm, ensures that the least recently used entry (i.e. the entry that has not been touched for the longest time) is evicted first.
+
+NOTE: The LRU eviction policy fits most on-heap caching use cases. Use it when in doubt.
+
+This eviction policy can be enabled in the cache configuration as shown in the example below. It supports batch eviction and eviction by memory size limit.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.cache.CacheConfiguration">
+  <property name="name" value="myCache"/>
+
+  <!-- Enabling on-heap caching for this distributed cache. -->
+  <property name="onheapCacheEnabled" value="true"/>
+
+  <property name="evictionPolicy">
+    <!-- LRU eviction policy. -->
+    <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicy">
+        <!-- Set the maximum cache size to 1 million (default is 100,000). -->
+      <property name="maxSize" value="1000000"/>
+    </bean>
+  </property>
+
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/EvictionPolicies.java[tag=LRU,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/EvictionPolicies.cs[tag=LRU,indent=0]
+----
+tab:C++[unsupported]
+--
+
+
+=== First In First Out (FIFO)
+
+FIFO eviction policy, based on the https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)[First-In-First-Out (FIFO)] algorithm, ensures that the entry that has been in the on-heap cache for the longest time is evicted first.
+It is different from `LruEvictionPolicy` because it ignores the order in which the entries are accessed.
+
+This eviction policy can be enabled in the cache configuration as shown in the example below.
+It supports batch eviction and eviction by memory size limit.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.cache.CacheConfiguration">
+  <property name="name" value="myCache"/>
+
+  <!-- Enabling on-heap caching for this distributed cache. -->
+  <property name="onheapCacheEnabled" value="true"/>
+
+  <property name="evictionPolicy">
+    <!-- FIFO eviction policy. -->
+    <bean class="org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy">
+        <!-- Set the maximum cache size to 1 million (default is 100,000). -->
+      <property name="maxSize" value="1000000"/>
+    </bean>
+  </property>
+
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/EvictionPolicies.java[tag=FIFO,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/EvictionPolicies.cs[tag=FIFO,indent=0]
+----
+tab:C++[unsupported]
+--
+
+=== Sorted
+
+The sorted eviction policy is similar to the FIFO policy, except that the order of entries is defined by a default or user-defined comparator. It ensures that the minimal entry (for example, the entry with the smallest integer key) is evicted first.
+
+The default comparator compares cache entries by key, which requires keys to implement the `Comparable` interface. You can provide your own comparator implementation that uses keys, values, or both for comparison.
+
+Enable sorted eviction policy in the cache configuration as shown below. It supports batch eviction and eviction by memory size limit.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.cache.CacheConfiguration">
+  <property name="name" value="myCache"/>
+
+  <!-- Enabling on-heap caching for this distributed cache. -->
+  <property name="onheapCacheEnabled" value="true"/>
+
+  <property name="evictionPolicy">
+    <!-- Sorted eviction policy. -->
+    <bean class="org.apache.ignite.cache.eviction.sorted.SortedEvictionPolicy">
+      <!--
+      Set the maximum cache size to 1 million (default is 100,000)
+      and use default comparator.
+      -->
+      <property name="maxSize" value="1000000"/>
+    </bean>
+  </property>
+
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/EvictionPolicies.java[tag=sorted,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
diff --git a/docs/_docs/configuring-caches/partition-loss-policy.adoc b/docs/_docs/configuring-caches/partition-loss-policy.adoc
new file mode 100644
index 0000000..63a8acd
--- /dev/null
+++ b/docs/_docs/configuring-caches/partition-loss-policy.adoc
@@ -0,0 +1,196 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Partition Loss Policy
+:javaFile:  {javaCodeDir}/PartitionLossPolicyExample.java
+
+Throughout the cluster’s lifecycle, it may happen that some data partitions are lost due to the failure of the primary and backup nodes for the partitions.
+Such a situation leads to a partial data loss and needs to be addressed according to your use case.
+
+A partition is lost when both the primary copy and all backup copies of the partition are unavailable to the cluster, i.e. when the primary and backup nodes for the partition become unavailable. This means that, for a given cache, you cannot afford to lose more than `number_of_backups` nodes.
+You can set the number of backup partitions for a cache in the link:configuring-caches/configuring-backups[cache configuration].
+
+When the cluster topology changes, Ignite checks if the change resulted in a partition loss, and, depending on the configured partition loss policy and baseline autoadjustment settings, allows or prohibits operations on caches.
+See the description of each policy in the next section.
+
+For pure in-memory caches, when a partition is lost, the data from the partition cannot be recovered unless you load it into the cluster again.
+For persistent caches, the data is not physically lost, because it has been persisted to disk.
+When the nodes that failed or disconnected return to the cluster (after a restart), the data is loaded from the disk.
+In this case, you need to reset the state of the lost partitions in order to continue to use the data. See <<Handling Partition Loss>>.
+
+
+== Configuring Partition Loss Policy
+Ignite supports the following partition loss policies:
+
+[cols="1,5",opts="header",stripes=none]
+|===
+| Policy | Description
+| `IGNORE` | Partition loss is ignored. The cluster treats lost partitions as if they are empty. When you request data from such partitions, the cluster returns empty values as if the data was never there.
+
+This policy can only be used in pure in-memory clusters where baseline autoadjustment is enabled with a zero timeout, and it is the default for such configurations.
+In all other configurations (clusters with at least one persistent data region), the `IGNORE` policy is replaced with `READ_WRITE_SAFE` even if you explicitly set `IGNORE` in the cache configuration.
+
+| `READ_WRITE_SAFE` | Any attempt to read from or write to a lost partition of the cache results in an exception. However, you can read/write to the available partitions.
+| `READ_ONLY_SAFE` | The cache is available in read-only mode. Write operations to the cache result in an exception. Read operations from the lost partitions result in an exception as well. See the <<Handling Partition Loss>> section below.
+|===
+
+
+Partition loss policy is configured per cache.
+
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+include::code-snippets/xml/partition-loss-policy.xml[tags=ignite-config;!discovery,indent=0]
+----
+
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tag=cfg,indent=0]
+----
+
+tab:C#/.NET[]
+
+tab:C++[]
+--
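+
+For reference, a minimal programmatic sketch of the same setting (the cache name is hypothetical):
+
+[source,java]
+----
+CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");
+
+// Operations on lost partitions will fail with an exception,
+// while the remaining partitions stay available for reads and writes.
+cacheCfg.setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE);
+----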
+
+== Listening to Partition Loss Events
+
+You can listen to the `EVT_CACHE_REBALANCE_PART_DATA_LOST` event to be notified when a partition loss occurs.
+This event is fired for every partition that is lost and contains the number of the lost partition and the ID of the node that held the partition.
+Partition loss events are triggered only when either `READ_WRITE_SAFE` or `READ_ONLY_SAFE` policy is used.
+
+Enable the event in the cluster configuration first.
+See link:events/listening-to-events#enabling-events[Enabling Events].
+
+[tabs]
+--
+
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=events,indent=0]
+----
+
+tab:C#/.NET[]
+tab:C++[]
+--
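+
+For illustration, a minimal local listener sketch. It assumes the event type is enabled in the node configuration, as described above:
+
+[source,java]
+----
+Ignite ignite = Ignition.ignite();
+
+// Print the lost partition and the cache it belongs to whenever
+// a partition loss is detected.
+ignite.events().localListen(evt -> {
+    CacheRebalancingEvent cacheEvt = (CacheRebalancingEvent) evt;
+
+    System.out.println("Lost partition " + cacheEvt.partition()
+        + " of cache " + cacheEvt.cacheName());
+
+    return true; // Keep listening.
+}, EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST);
+----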
+
+See link:events/events#cache-rebalancing-events[Cache Rebalancing Events] for information about other events related to partition rebalancing.
+
+== Handling Partition Loss
+
+If data is not physically lost, you can return the nodes that left the cluster and reset the state of the lost partitions so that you can continue working with the data.
+You can reset the state of the lost partitions by calling `Ignite.resetLostPartitions(cacheNames)` for specific caches or via the control script.
+
+[tabs]
+--
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=reset, indent=0]
+----
+
+tab:C#/.NET[]
+tab:C++[]
+--
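+
+An inline equivalent of the call above (the cache name is hypothetical):
+
+[source,java]
+----
+// Reset the lost state of the given caches once at least one owner of
+// every lost partition is back online.
+ignite.resetLostPartitions(Arrays.asList("myCache"));
+----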
+
+The control script command:
+
+
+[source, shell]
+----
+control.sh --cache reset_lost_partitions myCache
+----
+
+
+If you don't reset lost partitions, read and write operations on the lost partitions (depending on the policy configured for the cache) throw a `CacheException`.
+You can check whether the exception is caused by the state of the partitions by analyzing its root cause, like so:
+
+[tabs]
+--
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=exception,indent=0]
+----
+
+tab:C#/.NET[]
+tab:C++[]
+--
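+
+For illustration, a minimal defensive sketch of such a check. The exact exception type in the cause chain is version-specific, so this hypothetical example matches on the message text:
+
+[source,java]
+----
+try {
+    cache.get(key);
+}
+catch (CacheException e) {
+    // Walk the cause chain looking for a lost-partition failure.
+    for (Throwable t = e; t != null; t = t.getCause()) {
+        if (t.getMessage() != null && t.getMessage().contains("lost partition"))
+            System.out.println("Operation failed on a lost partition: " + t.getMessage());
+    }
+}
+----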
+
+
+You can get the list of lost partitions for a cache via `IgniteCache.lostPartitions()`.
+
+[tabs]
+--
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=lost-partitions,indent=0]
+----
+
+
+tab:C#/.NET[]
+tab:C++[]
+--
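+
+Inline, this looks as follows (the cache name is hypothetical):
+
+[source,java]
+----
+// Partition IDs that are currently marked as lost for this cache.
+Collection<Integer> lostParts = ignite.cache("myCache").lostPartitions();
+----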
+
+== Recovering From a Partition Loss
+
+The following sections explain how you can recover from a partition loss in different cluster configurations.
+
+=== Pure In-memory Cluster with IGNORE policy
+
+In this configuration, the `IGNORE` policy is only applicable when baseline autoadjustment is enabled with a 0 timeout, which is the default setting for in-memory clusters.
+For such configurations, partition loss is ignored.
+The cache continues to be operational with the lost partitions treated as empty.
+
+When baseline autoadjustment is disabled or when the timeout is greater than 0, the `IGNORE` policy is replaced with `READ_WRITE_SAFE`.
+
+=== Pure In-memory Cluster with READ_WRITE_SAFE or READ_ONLY_SAFE policy
+
+User operations are blocked until you reset the lost partitions.
+After the reset, you can continue using the cache, but the data from the lost partitions is lost.
+
+When baseline autoadjustment is disabled or when the timeout is greater than 0, you must return the nodes (at least one partition owner for each partition) to the baseline topology before resetting the lost partitions.
+Otherwise, `Ignite.resetLostPartitions(cacheNames)` throws a `ClusterTopologyCheckedException` with a message like `Cannot reset lost partitions because no baseline nodes are online [cache=someCache, partition=someLostPart]`, indicating that safe recovery is not possible.
+If you cannot return the nodes for some reason (e.g. hardware failure), exclude them from the baseline topology manually before attempting to reset the lost partitions.
+
+=== Clusters with Persistence
+
+In clusters where all data regions are configured to persist data on disk (there are no in-memory regions), there are two ways to recover from a partition loss (provided the data is not physically damaged):
+
+. Return _all_ nodes to the baseline topology,
+. Reset lost partitions (call `Ignite.resetLostPartitions(...)` for all caches).
+
+or
+
+. Stop all nodes,
+. Start all nodes including those that failed and activate the cluster.
+
+If some nodes cannot be returned, exclude them from the baseline topology before attempting to reset the state of lost partitions.
+
+=== Clusters with Both In-memory and Persistent Caches
+
+In clusters where there are both in-memory regions and persistent regions, in-memory caches are treated the same way as in pure in-memory clusters with partition loss policy set to `READ_WRITE_SAFE`, and persistent caches are treated the same way as in persistent clusters.
diff --git a/docs/_docs/cpp-specific/cpp-objects-lifetime.adoc b/docs/_docs/cpp-specific/cpp-objects-lifetime.adoc
new file mode 100644
index 0000000..ceaff70
--- /dev/null
+++ b/docs/_docs/cpp-specific/cpp-objects-lifetime.adoc
@@ -0,0 +1,92 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Objects Lifetime in Ignite.C++
+
+== Ignite Objects
+
+Apache Ignite objects, such as `Ignite` or `Cache`, that are created using Ignite public APIs are implemented as thin
+handles over an internal/underlying object and can be safely and quickly copied or passed to functions by value. This is also
+the recommended way to pass Ignite objects from one function to another, because the underlying object lives as long as
+at least one handle object is alive.
+
+[tabs]
+--
+tab:C++[]
+[source,cpp]
+----
+// Fast and safe passing of the ignite::Ignite instance to the function.
+// Here 'val' points to the same underlying node instance even though
+// the Ignite object gets copied on call.
+// It's guaranteed that the underlying object will live as long as the
+// 'val' object is alive.
+void Foo(ignite::Ignite val)
+{
+  ...
+}
+----
+--
+
+== Custom Objects
+Your application can pass custom objects to Ignite whose lifetime cannot be easily
+determined at compile time. For example, when a `ContinuousQuery` instance is created, you are required to
+provide the continuous query with an instance of the local listener - `CacheEntryEventListener`. In such a case, it is
+unclear whether it is the responsibility of Apache Ignite or the application to manage the local listener's lifetime and
+release it once it is no longer needed.
+
+Apache Ignite C{pp} is flexible in this regard. It uses the `ignite::Reference` class to address the custom object ownership
+problem. Refer to the code below to see how this class can be used in practice.
+
+[tabs]
+--
+tab:C++[]
+[source,cpp]
+----
+// Ignite function that takes a value of 'SomeType'.
+void Foo(ignite::Reference<SomeType> val);
+
+//...
+
+// Defining an object.
+SomeType obj1;
+
+// Passing a simple reference to the function.
+// Ignite will not get ownership over the instance.
+// The application is responsible for keeping the instance alive while
+// it's used by Ignite and for releasing it once it is no longer needed.
+Foo(ignite::MakeReference(obj1));
+
+// Passing the object by copy.
+// Ignite gets a copy of the object instance and manages
+// its lifetime by itself.
+// 'SomeType' is required to have a copy constructor.
+Foo(ignite::MakeReferenceFromCopy(obj1));
+
+// Defining another object.
+SomeType* obj2 = new SomeType;
+
+// Passing the object's ownership to the function.
+// Ignite will release the object once it's no longer needed.
+// The application must not use the pointer once it has been passed
+// to Ignite, as it might be released at any point in time.
+Foo(ignite::MakeReferenceFromOwningPointer(obj2));
+
+std::shared_ptr<SomeType> obj3 = std::make_shared<SomeType>();
+
+// Passing the object by smart pointer.
+// In this case, the Reference class behaves just like the underlying
+// smart pointer type.
+Foo(ignite::MakeReferenceFromSmartPointer(obj3));
+----
+--
diff --git a/docs/_docs/cpp-specific/cpp-platform-interoperability.adoc b/docs/_docs/cpp-specific/cpp-platform-interoperability.adoc
new file mode 100644
index 0000000..bc22ca2
--- /dev/null
+++ b/docs/_docs/cpp-specific/cpp-platform-interoperability.adoc
@@ -0,0 +1,250 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite.C++ and Platform Interoperability
+
+== Overview
+
+When using Apache Ignite C++, it is quite common to have several C++ and Java nodes running in a single cluster. To seamlessly
+interoperate between C++ and Java nodes, you need to take several aspects into consideration. Let's review them.
+
+== Binary Marshaller Configuration
+
+Ignite uses its binary marshaller to serialize and deserialize data, logic, and messages. Due to architectural differences,
+Java and C++ nodes start with different default binary marshaller settings, which can lead to exceptions like
+the one below during node startup if you try to set up a heterogeneous cluster:
+
+[tabs]
+--
+tab:Java[]
+[source,text]
+----
+class org.apache.ignite.spi.IgniteSpiException: Local node's
+binary configuration is not equal to remote node's binary configuration
+[locNodeId=b3f0367d-3c2b-47b4-865f-a62c656b5d3f,
+rmtNodeId=556a3f41-eab1-4d9f-b67c-d94d77ddd89d,
+locBinaryCfg={globIdMapper=org.apache.ignite.binary.BinaryBasicIdMapper,
+compactFooter=false, globSerializer=null}, rmtBinaryCfg=null]
+----
+--
+
+To avoid the exception and to make sure Java and C++ nodes can co-exist in a single cluster, add the following binary
+marshaller settings to the Java node configuration:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<?xml version="1.0" encoding="UTF-8"?>
+
+<beans xmlns="http://www.springframework.org/schema/beans"
+       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+       xsi:schemaLocation="http://www.springframework.org/schema/beans
+        http://www.springframework.org/schema/beans/spring-beans.xsd">
+
+    <bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+        ...
+        <property name="binaryConfiguration">
+            <bean class="org.apache.ignite.configuration.BinaryConfiguration">
+                <property name="compactFooter" value="false"/>
+
+                <property name="idMapper">
+                    <bean class="org.apache.ignite.binary.BinaryBasicIdMapper">
+                        <property name="lowerCase" value="true"/>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+        ...
+    </bean>
+</beans>
+----
+--
+
+== Basic Types Compatibility
+
+Your C++ application can put a value into the cluster and another Java application can read it back. The table below
+shows how the types are matched between Java and C++:
+
+[opts="header"]
+|===
+|Java Type | C++ Type
+
+| `boolean`, `java.lang.Boolean`| `bool`
+| `byte`, `java.lang.Byte`| `int8_t`
+| `short`, `java.lang.Short`| `int16_t`
+| `int`, `java.lang.Integer`| `int32_t`
+| `long`, `java.lang.Long`| `int64_t`
+| `float`, `java.lang.Float`| `float`
+| `double`, `java.lang.Double`| `double`
+| `char`, `java.lang.Character`| `uint16_t`
+| `java.lang.String`| `std::string`, `char[]`
+| `java.util.Date`| `ignite::Date`
+| `java.sql.Time`| `ignite::Time`
+| `java.sql.Timestamp`| `ignite::Timestamp`
+| `java.util.UUID`| `ignite::Guid`
+|===
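+
+For example, a Java application can read back a value written by a C++ node. Below is a minimal sketch; the cache name and key are hypothetical:
+
+[source,java]
+----
+Ignite ignite = Ignition.ignite();
+
+// A C++ application wrote an std::string under an int32_t key;
+// on the Java side it comes back as java.lang.String under an Integer key.
+IgniteCache<Integer, String> cache = ignite.cache("myCache");
+
+String val = cache.get(42);
+----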
+
+== Custom Types Compatibility
+
+To get access to the same application-specific object on both Java and C++ nodes, you need to describe it similarly in
+both languages. This includes the same type name, type ID, field IDs, and hash code algorithm, as well as read/write functions
+for the type.
+
+To do this on the C++ end, you need to use the `ignite::binary::BinaryType` class template.
+
+Let's consider the following example that defines a Java class that will later be read by a C++ application:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+package org.apache.ignite.examples;
+
+import org.apache.ignite.binary.BinaryObjectException;
+import org.apache.ignite.binary.BinaryReader;
+import org.apache.ignite.binary.BinaryWriter;
+import org.apache.ignite.binary.Binarylizable;
+
+public class CrossClass implements Binarylizable {
+    private long id;
+
+    private int idPart;
+
+    public void readBinary(BinaryReader reader) throws BinaryObjectException {
+        id = reader.readLong("id");
+        idPart = reader.readInt("idPart");
+    }
+
+    public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
+        writer.writeLong("id", id);
+        writer.writeInt("idPart", idPart);
+    }
+}
+----
+--
+
+Next, you create a counterpart on the C++ end:
+
+[tabs]
+--
+tab:C++[]
+[source,cpp]
+----
+namespace ignite
+{
+  namespace binary
+  {
+    template<>
+    struct BinaryType<CrossClass>
+    {
+      static int32_t GetTypeId()
+      {
+        return GetBinaryStringHashCode("CrossClass");
+      }
+
+      static void GetTypeName(std::string& name)
+      {
+        name = "CrossClass";
+      }
+
+      static int32_t GetFieldId(const char* name)
+      {
+        return GetBinaryStringHashCode(name);
+      }
+
+      static bool IsNull(const CrossClass& obj)
+      {
+        return false;
+      }
+
+      static void GetNull(CrossClass& dst)
+      {
+        dst = CrossClass();
+      }
+
+      static void Read(BinaryReader& reader, CrossClass& dst)
+      {
+        dst.id = reader.ReadInt64("id");
+        dst.idPart = reader.ReadInt32("idPart");
+      }
+
+      static void Write(BinaryWriter& writer, const CrossClass& obj)
+      {
+        writer.WriteInt64("id", obj.id);
+        writer.WriteInt32("idPart", obj.idPart);
+      }
+    };
+  }
+}
+----
+--
+
+Finally, you need to use the following `BinaryConfiguration` for **both** Java and C++ nodes:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<?xml version="1.0" encoding="UTF-8"?>
+
+<beans xmlns="http://www.springframework.org/schema/beans"
+       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+       xsi:schemaLocation="http://www.springframework.org/schema/beans
+        http://www.springframework.org/schema/beans/spring-beans.xsd">
+
+    <bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+        ...
+        <property name="binaryConfiguration">
+            <bean class="org.apache.ignite.configuration.BinaryConfiguration">
+                <property name="compactFooter" value="false"/>
+
+                <property name="idMapper">
+                    <bean class="org.apache.ignite.binary.BinaryBasicIdMapper">
+                        <property name="lowerCase" value="true"/>
+                    </bean>
+                </property>
+
+                <property name="nameMapper">
+                    <bean class="org.apache.ignite.binary.BinaryBasicNameMapper">
+                        <property name="simpleName" value="true"/>
+                    </bean>
+                </property>
+
+                <property name="classNames">
+                    <list>
+                        <value>org.apache.ignite.examples.CrossClass</value>
+                    </list>
+                </property>
+            </bean>
+        </property>
+        ...
+    </bean>
+</beans>
+----
+--
+
+[CAUTION]
+====
+[discrete]
+It is especially important to implement the `GetTypeName()` and `GetTypeId()` methods consistently for the types that
+are used as keys.
+====
+
+[CAUTION]
+====
+[discrete]
+The C++ function `GetBinaryStringHashCode()` always calculates the hash the same way `BinaryBasicIdMapper` does when its `lowerCase` property is set
+to `true`. So make sure `BinaryBasicIdMapper` is configured accordingly if you are going to use this
+function to calculate the type ID in C++.
+====
+
diff --git a/docs/_docs/cpp-specific/cpp-serialization.adoc b/docs/_docs/cpp-specific/cpp-serialization.adoc
new file mode 100644
index 0000000..300719a
--- /dev/null
+++ b/docs/_docs/cpp-specific/cpp-serialization.adoc
@@ -0,0 +1,266 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Serialization in Ignite.C++
+
+== BinaryType Templates
+
+Most user-defined classes that go through the Ignite C{pp} API are passed over the wire to other cluster nodes. These classes
+include your data records, compute tasks, and other objects.
+
+Passing objects of these classes over the wire requires serialization. For Ignite C{pp}, this is achieved by providing
+a `BinaryType` class template specialization for your type:
+
+[tabs]
+--
+tab:C++[]
+[source,cpp]
+----
+class Address
+{
+  friend struct ignite::binary::BinaryType<Address>;
+public:
+  Address() { }
+
+  Address(const std::string& street, int32_t zip) :
+  street(street), zip(zip) { }
+
+  const std::string& GetStreet() const
+  {
+    return street;
+  }
+
+  int32_t GetZip() const
+  {
+    return zip;
+  }
+
+private:
+  std::string street;
+  int32_t zip;
+};
+
+template<>
+struct ignite::binary::BinaryType<Address>
+{
+  static int32_t GetTypeId()
+  {
+    return GetBinaryStringHashCode("Address");
+  }
+
+  static void GetTypeName(std::string& name)
+  {
+    name = "Address";
+  }
+
+  static int32_t GetFieldId(const char* name)
+  {
+    return GetBinaryStringHashCode(name);
+  }
+
+  static bool IsNull(const Address& obj)
+  {
+    return obj.GetZip() == 0 && obj.GetStreet().empty();
+  }
+
+  static void GetNull(Address& dst)
+  {
+    dst = Address();
+  }
+
+  static void Write(BinaryWriter& writer, const Address& obj)
+  {
+    writer.WriteString("street", obj.GetStreet());
+    writer.WriteInt32("zip", obj.GetZip());
+  }
+
+  static void Read(BinaryReader& reader, Address& dst)
+  {
+    dst.street = reader.ReadString("street");
+    dst.zip = reader.ReadInt32("zip");
+  }
+};
+----
+--
+
+Also, you can use the raw serialization mode, which does not store the names of the object's fields in the serialized form. This
+mode is more compact and faster, but it disables SQL queries, which require the field names to be kept in the serialized form:
+
+[tabs]
+--
+tab:C++[]
+[source,cpp]
+----
+template<>
+struct ignite::binary::BinaryType<Address>
+{
+  static int32_t GetTypeId()
+  {
+    return GetBinaryStringHashCode("Address");
+  }
+
+  static void GetTypeName(std::string& name)
+  {
+    name = "Address";
+  }
+
+  static int32_t GetFieldId(const char* name)
+  {
+    return GetBinaryStringHashCode(name);
+  }
+
+  static bool IsNull(const Address& obj)
+  {
+    return false;
+  }
+
+  static void GetNull(Address& dst)
+  {
+    dst = Address();
+  }
+
+  static void Write(BinaryWriter& writer, const Address& obj)
+  {
+    BinaryRawWriter rawWriter = writer.RawWriter();
+
+    rawWriter.WriteString(obj.GetStreet());
+    rawWriter.WriteInt32(obj.GetZip());
+  }
+
+  static void Read(BinaryReader& reader, Address& dst)
+  {
+    BinaryRawReader rawReader = reader.RawReader();
+
+    dst.street = rawReader.ReadString();
+    dst.zip = rawReader.ReadInt32();
+  }
+};
+----
+--
+
+== Serialization Macros
+
+Ignite C{pp} defines a set of utility macros that can be used to simplify `BinaryType` specializations. Here is a list of these macros with descriptions:
+
+* `IGNITE_BINARY_TYPE_START(T)` - Starts the binary type's specialization.
+* `IGNITE_BINARY_TYPE_END` - Ends the binary type's specialization.
+* `IGNITE_BINARY_GET_TYPE_ID_AS_CONST(id)` - Implementation of `GetTypeId()` that returns the predefined constant `id`.
+* `IGNITE_BINARY_GET_TYPE_ID_AS_HASH(T)` - Implementation of `GetTypeId()` that returns the hash of the passed type name.
+* `IGNITE_BINARY_GET_TYPE_NAME_AS_IS(T)` - Implementation of `GetTypeName()` that returns the type name as is.
+* `IGNITE_BINARY_GET_FIELD_ID_AS_HASH` - Default implementation of `GetFieldId()` that returns the Java-style hash code of the string.
+* `IGNITE_BINARY_IS_NULL_FALSE(T)` - Implementation of `IsNull()` that always returns `false`.
+* `IGNITE_BINARY_IS_NULL_IF_NULLPTR(T)` - Implementation of `IsNull()` that returns `true` if the passed object is a null pointer.
+* `IGNITE_BINARY_GET_NULL_DEFAULT_CTOR(T)` - Implementation of `GetNull()` that returns an instance created with the default constructor.
+* `IGNITE_BINARY_GET_NULL_NULLPTR(T)` - Implementation of `GetNull()` that returns a `NULL` pointer.
+
+You can describe the `Address` class declared earlier using these macros:
+
+[tabs]
+--
+tab:C++[]
+[source,cpp]
+----
+namespace ignite
+{
+  namespace binary
+  {
+    IGNITE_BINARY_TYPE_START(Address)
+      IGNITE_BINARY_GET_TYPE_ID_AS_HASH(Address)
+      IGNITE_BINARY_GET_TYPE_NAME_AS_IS(Address)
+      IGNITE_BINARY_GET_NULL_DEFAULT_CTOR(Address)
+      IGNITE_BINARY_GET_FIELD_ID_AS_HASH
+
+      static bool IsNull(const Address& obj)
+      {
+        return obj.GetZip() == 0 && obj.GetStreet().empty();
+      }
+
+      static void Write(BinaryWriter& writer, const Address& obj)
+      {
+        writer.WriteString("street", obj.GetStreet());
+        writer.WriteInt32("zip", obj.GetZip());
+      }
+
+      static void Read(BinaryReader& reader, Address& dst)
+      {
+        dst.street = reader.ReadString("street");
+        dst.zip = reader.ReadInt32("zip");
+      }
+
+    IGNITE_BINARY_TYPE_END
+  }
+}
+----
+--
+
+== Reading and Writing Values
+
+There are several ways to write and read data. The first is to use an object's value directly:
+
+
+[tabs]
+--
+tab:Writing[]
+[source,cpp]
+----
+CustomType val;
+
+// some application code here
+// ...
+
+writer.WriteObject<CustomType>("field_name", val);
+----
+tab:Reading[]
+[source,cpp]
+----
+CustomType val = reader.ReadObject<CustomType>("field_name");
+----
+--
+
+The second approach does the same but uses a pointer to the object:
+
+[tabs]
+--
+tab:Writing[]
+[source,cpp]
+----
+// Writing null as the value of an integer field.
+writer.WriteObject<int32_t*>("int_field_name", nullptr);
+
+// Writing a value of the custom type by pointer.
+CustomType *val;
+
+// some application code here
+// ...
+
+writer.WriteObject<CustomType*>("field_name", val);
+----
+tab:Reading[]
+[source,cpp]
+----
+// Reading value which can be null.
+CustomType* nullableVal = reader.ReadObject<CustomType*>("field_name");
+if (nullableVal) {
+  // ...
+}
+
+// You can use a smart pointer as well.
+std::unique_ptr<CustomType> nullablePtr(reader.ReadObject<CustomType*>("field_name"));
+if (nullablePtr) {
+  // ...
+}
+----
+--
+
+An advantage of the pointer-based technique is that it allows writing or reading `null` values.
diff --git a/docs/_docs/cpp-specific/index.adoc b/docs/_docs/cpp-specific/index.adoc
new file mode 100644
index 0000000..8e8f720
--- /dev/null
+++ b/docs/_docs/cpp-specific/index.adoc
@@ -0,0 +1,22 @@
+---
+layout: toc
+---
+
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite.C++ Specific Capabilities
+
+This section covers Ignite features, configuration approaches, and architectural nuances that are specific to C++
+applications.
diff --git a/docs/_docs/data-modeling/affinity-collocation.adoc b/docs/_docs/data-modeling/affinity-collocation.adoc
new file mode 100644
index 0000000..e6576fa
--- /dev/null
+++ b/docs/_docs/data-modeling/affinity-collocation.adoc
@@ -0,0 +1,123 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Affinity Colocation
+
+In many cases it is beneficial to colocate different entries if they are often accessed together.
+In this way, multi-entry queries are executed on one node (where the objects are stored).
+This concept is known as _affinity colocation_.
+
+Entries are assigned to partitions by the affinity function.
+The objects that have the same affinity keys go to the same partitions.
+This allows you to design your data model in such a way that related entries are stored together.
+"Related" here refers to the objects that are in a parent-child relationship or objects that are often queried together.
+
+For example, let's say you have `Person` and `Company` objects, and each person has the `companyId` field that indicates the company the person works for.
+By specifying `Person.companyId` and `Company.ID` as affinity keys, you ensure that all the persons working for the same company are stored on the same node, where the company object is stored as well.
+Queries that request persons working for a specific company are processed on a single node.
+
+////
+The following image shows how data is distributed with the default affinity configuration:
+
+*TODO*
+
+And here is how data is distributed when you colocate persons with the companies:
+
+*TODO image*
+////
+
+You can also colocate a computation task with the data. See link:distributed-computing/collocated-computations[Colocating Computations With Data].
+////
+*TODO: add examples and use cases*
+////
+== Configuring Affinity Key
+
+If you do not specify the affinity key explicitly, the cache key is used as the default affinity key.
+If you create your caches as SQL tables using SQL statements, the PRIMARY KEY is the default affinity key.
+
+If you want to colocate data from two caches by a different field, you have to use a complex object as the key. That object usually contains a field that uniquely identifies the object in that cache and a field that you want to use for colocation.
+
+There are several ways to configure a custom affinity field within the custom key, which are described below.
+
+The following example illustrates how you can colocate the person objects with the company objects using a custom key class and the `@AffinityKeyMapped` annotation.
+
+:javaSourceFile: {javaCodeDir}/AffinityCollocationExample.java
+:dotnetSourceFile: code-snippets/dotnet/AffinityCollocation.cs
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tags=collocation;!config-with-key-configuration;!affinity-key-class,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetSourceFile}[tag=affinityCollocation,indent=0]
+----
+tab:C++[unsupported]
+
+tab:SQL[]
+[source,sql]
+----
+CREATE TABLE IF NOT EXISTS Person (
+  id int,
+  city_id int,
+  name varchar,
+  company_id varchar,
+  PRIMARY KEY (id, city_id)
+) WITH "template=partitioned,backups=1,affinity_key=company_id";
+
+CREATE TABLE IF NOT EXISTS Company (
+  id int,
+  name varchar,
+  PRIMARY KEY (id)
+) WITH "template=partitioned,backups=1";
+----
+--
+
+You can also configure the affinity key field in the cache configuration by using the `CacheKeyConfiguration` class.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=config-with-key-configuration,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetSourceFile}[tag=config-with-key-configuration,indent=0]
+----
+tab:C++[unsupported]
+--
+
+Instead of defining a custom key class, you can use the `AffinityKey` class, which is designed specifically for the purpose of using custom affinity mapping.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=affinity-key-class,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetSourceFile}[tag=affinity-key-class,indent=0]
+----
+tab:C++[unsupported]
+--
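+
+For illustration, a minimal usage sketch of `AffinityKey`; the cache name, the `Person` type, and the variables are hypothetical:
+
+[source,java]
+----
+// The person ID is the primary part of the key; the company ID is the
+// affinity part, so persons are colocated with their company.
+IgniteCache<AffinityKey<Long>, Person> cache = ignite.cache("persons");
+
+AffinityKey<Long> key = new AffinityKey<>(personId, companyId);
+
+cache.put(key, person);
+----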
diff --git a/docs/_docs/data-modeling/binary-marshaller.adoc b/docs/_docs/data-modeling/binary-marshaller.adoc
new file mode 100644
index 0000000..bef6dc0
--- /dev/null
+++ b/docs/_docs/data-modeling/binary-marshaller.adoc
@@ -0,0 +1,299 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Binary Marshaller
+
+== Basic Concepts
+
+Binary Marshaller is a component of Ignite that is responsible for data serialization. It has the following advantages:
+
+* It enables you to read an arbitrary field from an object's serialized form without full object deserialization.
+This ability completely removes the requirement to have the cache key and value classes deployed on the server node's classpath.
+* It enables you to add and remove fields from objects of the same type. Given that server nodes do not have model classes
+definitions, this ability allows dynamic change to an object's structure, and even allows multiple clients with different versions of class definitions to co-exist.
+* It enables you to construct new objects based on a type name without having class definitions at all, hence
+allowing dynamic type creation.
+
+Binary objects can be used only when the default binary marshaller is used (i.e., no other marshaller is set in the configuration explicitly).
+
+[NOTE]
+====
+[discrete]
+=== Restrictions
+There are several restrictions that are implied by the BinaryObject format implementation:
+
+* Internally, Ignite does not write field and type names but uses a lower-case name hash to identify a field or a type.
+It means that fields or types with the same name hash are not allowed. Even though serialization will not work out-of-the-box
+in the case of hash collision, Ignite provides a way to resolve this collision at the configuration level.
+* For the same reason, BinaryObject format does not allow identical field names on different levels of a class hierarchy.
+* If a class implements `Externalizable` interface, Ignite will use `OptimizedMarshaller` instead of the binary one.
+The `OptimizedMarshaller` uses `writeExternal()` and `readExternal()` methods to serialize and deserialize objects of
+this class which requires adding classes of `Externalizable` objects to the classpath of server nodes.
+====
+
+The `IgniteBinary` facade, which can be obtained from an instance of Ignite, contains all the necessary methods to work with binary objects.
+
+[NOTE]
+====
+[discrete]
+=== Automatic Hash Code Calculation and Equals Implementation
+
+If an object can be serialized into the binary form, then Ignite calculates its hash code during serialization and
+writes it to the resulting binary array. Ignite also provides a custom implementation of the equals method for binary
+object comparison. This means that you do not need to override the `hashCode` and `equals` methods of your custom
+keys and values in order for them to be used in Ignite, unless they cannot be serialized into the binary form.
+For instance, objects of an `Externalizable` type cannot be serialized into the binary form and require you to implement
+the `hashCode` and `equals` methods manually. See the Restrictions section above for more details.
+====
+
+== Configuring Binary Objects
+
+In the vast majority of use cases, there is no need to additionally configure binary objects.
+
+However, if you need to override the default type and field ID calculation, or to plug in a `BinarySerializer`,
+define a `BinaryConfiguration` object in `IgniteConfiguration`. This object allows you to specify a global
+name mapper, a global ID mapper, and a global binary serializer, as well as per-type mappers and serializers. Wildcards
+are supported for per-type configuration, in which case the provided configuration is applied to all types
+that match the type name template.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+
+  <property name="binaryConfiguration">
+    <bean class="org.apache.ignite.configuration.BinaryConfiguration">
+
+      <property name="nameMapper" ref="globalNameMapper"/>
+      <property name="idMapper" ref="globalIdMapper"/>
+
+      <property name="typeConfigurations">
+        <list>
+          <bean class="org.apache.ignite.binary.BinaryTypeConfiguration">
+            <property name="typeName" value="org.apache.ignite.examples.*"/>
+            <property name="serializer" ref="exampleSerializer"/>
+          </bean>
+        </list>
+      </property>
+    </bean>
+  </property>
+</bean>
+----
+--
+
+== BinaryObject API
+
+By default, Ignite works with deserialized values, as this is the most common use case. To enable `BinaryObject`
+processing, obtain an instance of `IgniteCache` using the `withKeepBinary()` method. When enabled,
+this flag ensures that objects returned from the cache are in the `BinaryObject` format, when possible. The same
+applies to values being passed to the `EntryProcessor` and `CacheInterceptor`.
+
+[NOTE]
+====
+[discrete]
+=== Platform Types
+Note that not all types are represented as `BinaryObject` when the `withKeepBinary()` flag is enabled. There is a
+set of 'platform' types that includes primitive types, String, UUID, Date, Timestamp, BigDecimal, Collections,
+Maps, and arrays of these, which are never represented as a `BinaryObject`.
+
+Note that in the example below the key type `Integer` does not change because it is a platform type.
+====
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+// Create a regular Person object and put it to the cache.
+Person person = buildPerson(personId);
+ignite.cache("myCache").put(personId, person);
+
+// Get an instance of binary-enabled cache.
+IgniteCache<Integer, BinaryObject> binaryCache = ignite.cache("myCache").withKeepBinary();
+
+// Get the above person object in the BinaryObject format.
+BinaryObject binaryPerson = binaryCache.get(personId);
+----
+--
+
+== Modifying Binary Objects Using BinaryObjectBuilder
+
+`BinaryObject` instances are immutable. An instance of `BinaryObjectBuilder` must be used in order to update fields and
+create a new `BinaryObject`.
+
+An instance of `BinaryObjectBuilder` can be obtained from the `IgniteBinary` facade. The builder may be created using a type
+name, in which case the returned builder contains no fields, or using an existing `BinaryObject`,
+in which case the returned builder copies all the fields from the given `BinaryObject`.
+
+Another way to get an instance of `BinaryObjectBuilder` is to call `toBuilder()` on an existing instance of a `BinaryObject`.
+This will also copy all data from the `BinaryObject` to the created builder.
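+
+For illustration, a minimal sketch of building a binary object from a type name alone; the type and field names are hypothetical:
+
+[source,java]
+----
+IgniteBinary binary = ignite.binary();
+
+// Construct a binary object without having the class definition at all.
+BinaryObject person = binary.builder("org.example.Person")
+    .setField("name", "Ignite")
+    .setField("salary", 1000.0)
+    .build();
+----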
+
+[NOTE]
+====
+[discrete]
+=== Limitations
+
+* You cannot change the types of existing fields.
+* You cannot change the order of enum values or add new constants at the beginning or in the middle of the list of enum's
+values. You can add new constants to the end of the list though.
+====
+
+Below is an example of using the `BinaryObject` API to process data on server nodes without having user classes deployed
+on servers and without actual data deserialization.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+// The EntryProcessor is to be executed for this key.
+int key = 101;
+
+cache.<Integer, BinaryObject>withKeepBinary().invoke(
+  key, new CacheEntryProcessor<Integer, BinaryObject, Object>() {
+    public Object process(MutableEntry<Integer, BinaryObject> entry,
+                          Object... objects) throws EntryProcessorException {
+        // Create a builder from the old value.
+        BinaryObjectBuilder bldr = entry.getValue().toBuilder();
+
+        // Update the field in the builder.
+        bldr.setField("name", "Ignite");
+
+        // Set new value to the entry.
+        entry.setValue(bldr.build());
+
+        return null;
+     }
+  });
+----
+--
+
+== BinaryObject Type Metadata
+
+As mentioned above, the binary object structure may be changed at runtime, hence it may also be useful to get
+information about a particular type that is stored in a cache, such as field names, field type names, and the affinity
+field name. Ignite facilitates this requirement via the `BinaryType` interface.
+
+This interface also introduces a faster version of the field getter called `BinaryField`. The concept is similar to Java
+reflection and allows you to cache certain information about the field being read in the `BinaryField` instance, which is
+useful when reading the same field from a large collection of binary objects.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+Collection<BinaryObject> persons = getPersons();
+
+BinaryField salary = null;
+
+double total = 0;
+int cnt = 0;
+
+for (BinaryObject person : persons) {
+    if (salary == null)
+        salary = person.type().field("salary");
+
+    total += salary.value(person);
+    cnt++;
+}
+
+double avg = total / cnt;
+----
+--
+
+== BinaryObject and CacheStore
+
+Setting `withKeepBinary()` on the cache API does not affect the way user objects are passed to a `CacheStore`. This is
+intentional because in most cases a single `CacheStore` implementation works either with deserialized classes, or with
+`BinaryObject` representations. To control the way objects are passed to the store, the `storeKeepBinary` flag on
+`CacheConfiguration` should be used. When this flag is set to `false`, deserialized values will be passed to the store,
+otherwise `BinaryObject` representations will be used.
+
+Below is an example pseudo-code implementation of a store working with `BinaryObject`:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+public class CacheExampleBinaryStore extends CacheStoreAdapter<Integer, BinaryObject> {
+    @IgniteInstanceResource
+    private Ignite ignite;
+
+    /** {@inheritDoc} */
+    @Override public BinaryObject load(Integer key) {
+        IgniteBinary binary = ignite.binary();
+
+        List<?> rs = loadRow(key);
+
+        BinaryObjectBuilder bldr = binary.builder("Person");
+
+        for (int i = 0; i < rs.size(); i++)
+            bldr.setField(name(i), rs.get(i));
+
+        return bldr.build();
+    }
+
+    /** {@inheritDoc} */
+    @Override public void write(Cache.Entry<? extends Integer, ? extends BinaryObject> entry) {
+        BinaryObject obj = entry.getValue();
+
+        BinaryType type = obj.type();
+
+        Collection<String> fields = type.fieldNames();
+
+        List<Object> row = new ArrayList<>(fields.size());
+
+        for (String fieldName : fields)
+            row.add(obj.field(fieldName));
+
+        saveRow(entry.getKey(), row);
+    }
+}
+----
+--
+
+== Binary Name Mapper and Binary ID Mapper
+
+Internally, Ignite never writes full strings for field or type names. Instead, for performance reasons, Ignite writes
+integer hash codes for type and field names. Testing has indicated that hash code conflicts for the type names or the
+field names within the same type are virtually non-existent and, to gain performance, it is safe to work with hash codes.
+For the cases when hash codes for different types or fields actually do collide, `BinaryNameMapper` and `BinaryIdMapper`
+support overriding the automatically generated hash code IDs for the type and field names.
+
+* `BinaryNameMapper` - maps type/class and field names to different names.
+* `BinaryIdMapper` - maps the type and field names provided by `BinaryNameMapper` to the IDs that Ignite uses internally.
+
+Ignite provides the following out-of-the-box mapper implementations:
+
+* `BinaryBasicNameMapper` - a basic implementation of `BinaryNameMapper` that returns a full or a simple name of a given
+class depending on whether the `setSimpleName(boolean useSimpleName)` property is set.
+* `BinaryBasicIdMapper` - a basic implementation of `BinaryIdMapper`. It has a configuration property called
+`setLowerCase(boolean isLowerCase)`. If the property is set to `false` then a hash code of given type or field name
+will be returned. If the property is set to `true` then a hash code of given type or field name in lower case will be returned.
+
+If you are using Java or .NET clients and do not specify mappers in `BinaryConfiguration`, then Ignite uses
+`BinaryBasicNameMapper` with the `simpleName` property set to `false` and `BinaryBasicIdMapper` with the
+`lowerCase` property set to `true`.
+
+If you are using the C{pp} client and do not specify mappers in `BinaryConfiguration`, then Ignite uses
+`BinaryBasicNameMapper` with the `simpleName` property set to `true` and `BinaryBasicIdMapper` with the
+`lowerCase` property set to `true`.
+
+By default, there is no need to configure anything if you use Java, .NET, or C{pp}. Mappers need to be configured only if
+a non-trivial name conversion is required for platform interoperability.
diff --git a/docs/_docs/data-modeling/data-modeling.adoc b/docs/_docs/data-modeling/data-modeling.adoc
new file mode 100644
index 0000000..54732f0
--- /dev/null
+++ b/docs/_docs/data-modeling/data-modeling.adoc
@@ -0,0 +1,74 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Data Modeling
+
+A well-designed data model can improve your application's performance, utilize resources more efficiently, and help achieve your business goals. When designing a data model, it is important to understand how data is distributed in an Ignite cluster and the different ways you can access the data.
+
+In this chapter, we discuss important components of the Ignite data distribution model, including partitioning and affinity colocation, as well as the two distinct interfaces that you can use to access your data (key-value API and SQL).
+
+== Overview
+
+To understand how data is stored and used in Ignite, it is useful to draw a distinction between the physical organization of data in a cluster and the logical representation of data, i.e. how users are going to view their data in their applications.
+
+On the physical level, each data entry (either cache entry or table row) is stored in the form of a <<Binary Object Format,binary object>>, and the entire data set is divided into smaller sets called _partitions_. The partitions are evenly distributed between all the nodes. The way data is divided into partitions and partitions into nodes is controlled by the link:data-modeling/affinity-collocation[affinity function].
+
+On the logical level, data should be represented in a way that is easy to work with and convenient for end users to use in their applications.
+Ignite provides two distinct logical representations of data: _key-value cache_ and _SQL tables (schema)_.
+Although these two representations may seem different, in reality they are equivalent and can represent the same set of data.
+
+IMPORTANT: Keep in mind that, in Ignite, the concepts of a SQL table and a key-value cache are two equivalent representations of the same (internal) data structure. You can access your data using either the key-value API or SQL statements, or both.
+
+== Key-Value Cache vs. SQL Table
+
+A cache is a collection of key-value pairs that can be accessed through the key-value API. A SQL table in Ignite corresponds to the notion of tables in traditional RDBMSs with some additional constraints; for example, each SQL table must have a primary key.
+
+A table with a primary key can be presented as a key-value cache, in which the primary key column serves as the key, and the rest of the table columns represent the fields of the object (the value).
+
+image:images/cache_table.png[Key-value cache vs SQL table]
+
+The difference between these two representations is in the way you access the data. The key-value cache allows you to work with objects via supported programming languages. SQL tables support traditional SQL syntax and can help you, for example, migrate from an existing database. You can combine the two approaches and use either — or both — depending on your use case.
+
+The cache API supports the following features:
+
+* Support for JCache (JSR 107) specification
+* ACID Transactions
+* Continuous Queries
+* Events
+
+NOTE: Even after you get your cluster up and running, you can create both key-value caches and SQL tables link:key-value-api/basic-cache-operations#creating-caches-dynamically[dynamically].
+
+== Binary Object Format
+
+Ignite stores data entries in a specific format called _binary objects_. This serialization format provides several advantages:
+
+ * You can read an arbitrary field from a serialized object without full object deserialization. This completely removes the requirement to have the key and value classes deployed on the server node's classpath.
+ * You can add or remove fields from objects of the same type. Given that server nodes do not have model classes' definitions, this ability allows dynamic change to the object's structure, and even allows multiple clients with different versions of class definitions to co-exist.
+ * You can construct new objects based on a type name without having class definitions at all, which allows dynamic type creation.
+ * Binary objects enable seamless interoperability between the Java, .NET, and C++ platforms.
+
+Binary objects can be used only when the default binary marshaller is used (i.e., no other marshaller is set in the configuration).
+
+For more information on how to configure and use binary objects, refer to the link:key-value-api/binary-objects[Working with Binary Objects] page.
+
+
+== Data Partitioning
+
+Data partitioning is a method of subdividing large sets of data into smaller chunks and distributing them between all server nodes in a balanced manner. Data partitioning is discussed at length in the link:data-modeling/data-partitioning[Data Partitioning] section.
+
diff --git a/docs/_docs/data-modeling/data-partitioning.adoc b/docs/_docs/data-modeling/data-partitioning.adoc
new file mode 100644
index 0000000..633b1d2
--- /dev/null
+++ b/docs/_docs/data-modeling/data-partitioning.adoc
@@ -0,0 +1,140 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Data Partitioning
+
+Data partitioning is a method of subdividing large sets of data into smaller chunks and distributing them between all server nodes in a balanced manner.
+
+Partitioning is controlled by the _affinity function_.
+The affinity function determines the mapping between keys and partitions.
+Each partition is identified by a number from a limited set (0 to 1023 by default).
+The set of partitions is distributed between the server nodes available at the moment.
+Thus, each key is mapped to a specific node and is stored on that node.
+When the number of nodes in the cluster changes, the partitions are re-distributed — through a process called <<rebalancing,rebalancing>> — between the new set of nodes.
+
+image:images/partitioning.png[Data Partitioning]
+
+The affinity function takes the _affinity key_ as an argument.
+The affinity key can be any field of the objects stored in the cache (any column in the SQL table).
+If the affinity key is not specified, the default key is used (in case of SQL tables, it is the PRIMARY KEY column).
+
+Partitioning boosts performance by distributing both read and write operations.
+Moreover, you can design your data model in such a way that the data entries that are used together are stored together (i.e., in one partition).
+When you request that data, only a small number of partitions is scanned.
+This technique is called link:data-modeling/affinity-collocation[Affinity Colocation].
+
+Partitioning helps achieve linear scalability at virtually any scale.
+You can add more nodes to the cluster as your data set grows, and Ignite makes sure that the data is distributed "equally" among all the nodes.
+
+== Affinity Function
+
+The affinity function controls how data entries are mapped onto partitions and partitions onto nodes.
+The default affinity function implements the _rendezvous hashing_ algorithm.
+It allows a bit of discrepancy in the partition-to-node mapping (i.e., some nodes may be responsible for a slightly larger number of partitions than others).
+However, the affinity function guarantees that when the topology changes, partitions are migrated only to the new node that joined or from the node that left.
+No data exchange happens between the remaining nodes.
+
+
+////////////////////////////////////////////////////////////////////////////////
+
+TODO:
+You can implement a custom affinity function if you want to control the way data is distributed in the cluster.
+See the link:advanced-topics/affinity-function[Affinity Function] section in Advanced Topics.
+
+////////////////////////////////////////////////////////////////////////////////
+
+== Partitioned/Replicated Mode
+
+When creating a cache or SQL table, you can choose between the partitioned and replicated modes of cache operation. The two modes are designed for different use cases and provide different performance and availability benefits.
+
+
+=== PARTITIONED
+
+In this mode, all partitions are split equally between all server nodes.
+This mode is the most scalable distributed cache mode and allows you to store as much data as fits in the total memory (RAM and disk) available across all nodes.
+Essentially, the more nodes you have, the more data you can store.
+
+Unlike the `REPLICATED` mode, where updates are expensive because every node in the cluster needs to be updated, in `PARTITIONED` mode updates become cheap because only one primary node (and optionally one or more backup nodes) needs to be updated for every key. However, reads are somewhat more expensive because only certain nodes have the data cached.
+
+NOTE: Partitioned caches are ideal when data sets are large and updates are frequent.
+
+The picture below illustrates the distribution of a partitioned cache. Essentially, we have key A assigned to a node running in JVM1, key B assigned to a node running in JVM3, etc.
+
+image:images/partitioned_cache.png[]
+
+
+=== REPLICATED
+
+In the `REPLICATED` mode, all the data (every partition) is replicated to every node in the cluster. This cache mode provides the utmost availability of data as it is available on every node. However, every data update must be propagated to all other nodes, which can impact performance and scalability.
+
+NOTE: Replicated caches are ideal when data sets are small and updates are infrequent.
+
+In the diagram below, the node running in JVM1 is the primary node for key A, but it also stores backup copies of all other keys (B, C, D).
+
+image:images/replicated_cache.png[]
+
+Because the same data is stored on all cluster nodes, the size of a replicated cache is limited by the amount of memory (RAM and disk) available on the node. This mode is ideal for scenarios where cache reads are a lot more frequent than cache writes, and data sets are small. If your system does cache lookups over 80% of the time, then you should consider using the `REPLICATED` cache mode.
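+
+As an illustration, the cache mode can be set through the cache configuration. The snippet below is a minimal sketch; the cache name is arbitrary:
+
+[source, java]
+----
+import org.apache.ignite.cache.CacheMode;
+import org.apache.ignite.configuration.CacheConfiguration;
+
+CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");
+
+// Use REPLICATED mode; the default mode is PARTITIONED.
+cacheCfg.setCacheMode(CacheMode.REPLICATED);
+
+ignite.getOrCreateCache(cacheCfg);
+----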
+
+== Backup Partitions [[backup-partitions]]
+
+//tag::partition-backups[]
+
+By default, Ignite keeps a single copy of each partition (a single copy of the entire data set). In this case, if one or multiple nodes become unavailable, you lose access to partitions stored on these nodes. To avoid this, you can configure Ignite to maintain backup copies of each partition.
+
+IMPORTANT: By default, backups are disabled.
+
+Backup copies are configured per cache (table).
+If you configure 2 backup copies, the cluster maintains 3 copies of each partition.
+One of the partitions is called the _primary_ partition, and the other two are called _backup_ partitions.
+By extension, the node that has the primary partition is called the _primary node for the keys stored in the partition_.
+The node with backup partitions is called the _backup node_.
+
+When a node with the primary partition for some key leaves the cluster, Ignite triggers the partition map exchange (PME) process.
+PME labels one of the backup partitions (if they are configured) for the key as primary.
+
+Backup partitions increase the availability of your data and, in some cases, the speed of read operations, since Ignite reads data from backup partitions when they are available on the local node (this default behavior can be disabled; see link:configuring-caches/configuration-overview#readfrombackup[Cache Configuration] for details). However, they also increase memory consumption and, if persistence is enabled, the size of the persistent storage.
+
+//end::partition-backups[]
+
+////////////////////////////////////////////////////////////////////////////////
+*TODO: draw a diagram that illustrates backup partition distribution*
+////////////////////////////////////////////////////////////////////////////////
+
+NOTE: Backup partitions can be configured in PARTITIONED mode only. Refer to the link:configuring-caches/configuring-backups[Configuring Partition Backups] section.
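+
+As a quick illustration, the sketch below configures two backup copies of each partition through `CacheConfiguration`, resulting in three copies of each partition (the cache name is arbitrary):
+
+[source, java]
+----
+CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<>("myCache");
+
+// Keep 2 backup copies of each partition in addition to the primary copy.
+cacheCfg.setBackups(2);
+
+ignite.getOrCreateCache(cacheCfg);
+----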
+
+== Partition Map Exchange
+Partition map exchange (PME) is a process of sharing information about partition distribution (partition map) across the cluster so that every node knows where to look for specific keys. PME is required whenever the partition distribution for any cache changes, for example, when new nodes are added to the topology or old nodes leave the topology (whether on user request or due to a failure).
+
+Examples of events that trigger PME include (but are not limited to):
+
+* A new node joins/leaves the topology.
+* A new cache starts/stops.
+* An index is created.
+
+When one of the PME-triggering events occurs, the cluster waits for all ongoing transactions to complete and then starts PME. Also, during PME, new transactions are postponed until the process finishes.
+
+The PME process works in the following way: The coordinator node requests from all nodes the information about the partitions they own. Each node sends this information to the coordinator. Once the coordinator node receives the messages from all nodes, it merges the information into a full partition map and sends it to all nodes. When the coordinator has received confirmation messages from all nodes, PME is considered completed.
+
+== Rebalancing
+////
+*TODO: the information from the https://apacheignite.readme.io/docs/rebalancing[data rebalancing] page can be useful*
+////
+
+Refer to the link:data-rebalancing[Data Rebalancing] page for details.
+
+== Partition Loss Policy
+
+It may happen that throughout the cluster’s lifecycle, some of the data partitions are lost due to the failure of both the primary node and all backup nodes that held copies of those partitions. Such a situation leads to partial data loss and needs to be addressed according to your use case. For detailed information about partition loss policies, see link:configuring-caches/partition-loss-policy[Partition Loss Policy].
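+
+For illustration, a loss policy can be set per cache via `CacheConfiguration`. A minimal sketch, with an arbitrary cache name:
+
+[source, java]
+----
+import org.apache.ignite.cache.PartitionLossPolicy;
+
+CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<>("myCache");
+
+// Operations on lost partitions fail with an exception; the rest of the data stays available.
+cacheCfg.setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE);
+----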
+
+
diff --git a/docs/_docs/data-rebalancing.adoc b/docs/_docs/data-rebalancing.adoc
new file mode 100644
index 0000000..1f92807
--- /dev/null
+++ b/docs/_docs/data-rebalancing.adoc
@@ -0,0 +1,151 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Data Rebalancing
+
+== Overview
+
+When a new node joins the cluster, some of the partitions are relocated to the new node so that the data remains distributed equally in the cluster. This process is called _data rebalancing_.
+
+If an existing node permanently leaves the cluster and backups are not configured, you lose the partitions stored on this node.
+When backups are configured, one of the backup copies of the lost partitions becomes a primary partition and the rebalancing process is initiated.
+
+[CAUTION]
+====
+Data rebalancing is triggered by changes in the link:clustering/baseline-topology[Baseline Topology].
+In pure in-memory clusters, the default behavior is to start rebalancing immediately when a node leaves or joins the cluster (the baseline topology changes automatically).
+In clusters with persistence, the baseline topology has to be changed manually (default behavior), or can be changed automatically when link:clustering/baseline-topology#baseline-topology-autoadjustment[automatic baseline adjustment] is enabled.
+====
+
+Rebalancing is configured per cache.
+
+== Configuring Rebalancing Mode
+
+Ignite supports both synchronous and asynchronous rebalancing.
+In the synchronous mode, any operation on the cache data is blocked until rebalancing is finished.
+In the asynchronous mode, the rebalancing process is done asynchronously.
+You can also disable rebalancing for a particular cache.
+
+To change the rebalancing mode, set one of the following values in the cache configuration.
+
+- `SYNC` — Synchronous rebalancing mode. In this mode, any call to the cache public API is blocked until rebalancing is finished.
+- `ASYNC` — Asynchronous rebalancing mode. Distributed caches are available immediately and load all necessary data from other available cluster nodes in the background.
+- `NONE` — In this mode, no rebalancing takes place, which means that caches are either loaded on demand from the persistent storage whenever data is accessed, or populated explicitly.
+
+:javaFile: {javaCodeDir}/RebalancingConfiguration.java
+:xmlFile: code-snippets/xml/rebalancing-config.xml
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;mode,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tags=!*;ignite-config;mode,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DataRebalancing.cs[tag=RebalanceMode,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Configuring Rebalance Thread Pool
+
+By default, rebalancing is performed in one thread on each node.
+This means that, at any given point in time, only one thread is used to transfer batches from one node to another or to process batches coming from a remote node.
+////
+For example, if the cluster has two nodes and a single cache, all the cache's partitions will be re-balanced sequentially, one by one.
+If the cluster has two nodes and two caches, then the caches will be re-balanced in-parallel *TODO*
+////
+
+You can increase the number of threads that are taken from the system thread pool and used for rebalancing.
+A system thread is taken from the pool every time a node needs to send a batch of data to a remote node or needs to process a batch that came from a remote node.
+The thread is relinquished after the batch is processed.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;pool-size,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tags=!*;ignite-config;pool-size,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+
+CAUTION: The system thread pool is used extensively by all cache-related operations (put, get, etc.), the SQL engine, and other modules. Setting the rebalancing thread pool size to a large value may significantly increase rebalancing performance at the cost of decreased throughput of regular cache operations.
+
+
+== Rebalance Message Throttling [[throttling]]
+
+When data is transferred from one node to another, the whole data set is split into batches and each batch is sent in a separate message.
+You can configure the batch size and the amount of time the node waits between messages.
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;throttling,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tags=!*;ignite-config;throttling,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DataRebalancing.cs[tag=RebalanceThrottle,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Other Properties
+
+The following table lists the properties of `CacheConfiguration` related to rebalancing:
+
+
+[cols="1,4,1",opts="header"]
+|===
+| Property | Description  | Default Value
+| `rebalanceDelay` | A delay in milliseconds before the rebalancing process starts after a node joins or leaves the topology. Rebalancing delay is useful if you plan to restart nodes or start multiple nodes at once or one after another and don't want to repartition and rebalance the data until all nodes are started.
+|0 (no delay)
+
+|`rebalanceBatchSize` | The size in bytes of a single rebalance message. The rebalancing algorithm splits the data on every node into multiple batches prior to sending it to other nodes. | 512KB
+
+|`rebalanceThrottle`  | See <<#throttling>>.| 0 (throttling disabled)
+
+| `rebalanceOrder` | The order in which rebalancing should be done. Rebalance order can be set to a non-zero value for caches with SYNC or ASYNC rebalance modes only. Rebalancing for caches with smaller rebalance order is completed first. By default, rebalancing is not ordered. | 0
+
+|`rebalanceTimeout` | Timeout for pending rebalancing messages when they are exchanged between the nodes. | 10 seconds
+|===
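+
+For illustration, these properties map to plain `CacheConfiguration` setters. A minimal sketch, with arbitrary values:
+
+[source, java]
+----
+CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");
+
+// Wait 10 seconds after a topology change before rebalancing starts.
+cacheCfg.setRebalanceDelay(10_000);
+
+// Send rebalancing data in 1 MB batches instead of the default 512 KB.
+cacheCfg.setRebalanceBatchSize(1024 * 1024);
+----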
+
+
+== Monitoring Rebalancing Process
+
+You can monitor the link:monitoring-metrics/metrics#monitoring-rebalancing[rebalancing process for specific caches using JMX].
diff --git a/docs/_docs/data-streaming.adoc b/docs/_docs/data-streaming.adoc
new file mode 100644
index 0000000..8736ec5
--- /dev/null
+++ b/docs/_docs/data-streaming.adoc
@@ -0,0 +1,190 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Data Streaming
+
+:javaFile: {javaCodeDir}/DataStreaming.java
+
+== Overview
+
+Ignite provides a Data Streaming API that can be used to inject large amounts of continuous streams of data into an Ignite cluster.
+The Data Streaming API is designed to be scalable and fault-tolerant, and provides _at-least-once_ delivery semantics for the data streamed into Ignite, meaning each entry is processed at least once.
+
+Data is streamed into a cache via a <<Data Streamers, data streamer>> associated with the cache. Data streamers automatically buffer the data and group it into batches for better performance and send it in parallel to multiple nodes.
+
+The Data Streaming API provides the following features:
+
+* The data that is added to a data streamer is automatically partitioned and distributed between the nodes.
+* You can process the data concurrently in a colocated fashion.
+* Clients can perform concurrent SQL queries on the data as it is being streamed in.
+
+image:images/data_streaming.png[Data Streaming]
+
+== Data Streamers
+A data streamer is associated with a specific cache and provides an interface for streaming data into the cache.
+
+In a typical scenario, you obtain a data streamer and use one of its methods to stream data into the cache, and Ignite takes care of data partitioning and colocation by batching data entries according to partitioning rules to avoid unnecessary data movement.
+
+You can obtain the data streamer for a specific cache as follows:
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tag=dataStreamer1,indent=0]
+----
+
+In the Java version of Ignite, a data streamer is an implementation of the `IgniteDataStreamer` interface. `IgniteDataStreamer` provides a number of `addData(...)` methods for adding key-value pairs to caches. Refer to the link:{javadoc_base_url}/org/apache/ignite/IgniteDataStreamer.html[IgniteDataStreamer] javadoc for the complete list of methods.
+
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DataStreaming.cs[tag=dataStreamer1,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+== Overwriting Existing Keys
+
+By default, data streamers do not overwrite existing data and skip entries that are already in the cache. You can change that behavior by setting the `allowOverwrite` property of the data streamer to `true`.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=dataStreamer2,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DataStreaming.cs[tag=dataStreamer2,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+NOTE: When `allowOverwrite` is set to `false` (default), the updates are not propagated to the link:persistence/external-storage[external storage] (if it is used).
+
+== Processing Data
+In cases when you need to execute custom logic before adding new data, you can use a stream receiver.
+A stream receiver is used to process the data in a colocated manner before it is stored into the cache.
+The logic implemented in a stream receiver is executed on the node where data is to be stored.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=streamReceiver,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DataStreaming.cs[tag=streamReceiver,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+NOTE: A stream receiver does not put data into the cache automatically; you need to call one of the `put(...)` methods explicitly.
+
+[IMPORTANT]
+====
+The class definitions of the stream receivers to be executed on remote nodes must be available on the nodes. This can be achieved in two ways:
+
+* Add the classes to the classpath of the nodes;
+* Enable link:code-deployment/peer-class-loading[peer class loading].
+====
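+
+For illustration, the sketch below filters the incoming data in a custom receiver. This is a minimal sketch under a few assumptions: the cache name is arbitrary, and both keys and values are of type `Integer`:
+
+[source, java]
+----
+import java.util.Collection;
+import java.util.Map;
+
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.IgniteDataStreamer;
+import org.apache.ignite.IgniteException;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.stream.StreamReceiver;
+
+Ignite ignite = Ignition.ignite();
+
+try (IgniteDataStreamer<Integer, Integer> stmr = ignite.dataStreamer("myCache")) {
+    stmr.receiver(new StreamReceiver<Integer, Integer>() {
+        @Override public void receive(IgniteCache<Integer, Integer> cache,
+            Collection<Map.Entry<Integer, Integer>> entries) throws IgniteException {
+            for (Map.Entry<Integer, Integer> entry : entries) {
+                // Custom logic: keep only even values; the receiver must call put() explicitly.
+                if (entry.getValue() % 2 == 0)
+                    cache.put(entry.getKey(), entry.getValue());
+            }
+        }
+    });
+
+    stmr.addData(1, 10);
+    stmr.addData(2, 11);
+}
+----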
+
+=== Stream Transformer
+A stream transformer is a convenient implementation of a stream receiver that updates the data in the stream.
+Stream transformers take advantage of the colocation feature and update the data on the node where it is going to be stored.
+
+In the example below, we use a stream transformer to increment a counter for each distinct word found in the text stream.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=streamTransformer,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DataStreaming.cs[tag=streamTransformer,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+=== Stream Visitor
+
+A stream visitor is another implementation of a stream receiver, which visits every key-value pair in the stream. The visitor does not update the cache. If a pair needs to be stored in the cache, one of the `put(...)` methods must be called explicitly.
+
+In the example below, we have two caches: "marketData" and "instruments". We receive market data ticks and put them into the streamer for the "marketData" cache. The stream visitor for the "marketData" streamer is invoked on the cluster member mapped to the particular market symbol. Upon receiving individual market ticks, it updates the "instruments" cache with the latest market price.
+
+Note that we do not update the "marketData" cache at all, leaving it empty. We simply use it for colocated processing of the market data within the cluster, directly on the nodes where the data is stored.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=stream-visitor,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DataStreaming.cs[tag=streamVisitor,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+== Configuring Data Streamer Thread Pool Size
+The data streamer thread pool is dedicated to process messages coming from the data streamers.
+
+The default pool size is `max(8, total number of cores)`.
+Use `IgniteConfiguration.setDataStreamerThreadPoolSize(...)` to change the pool size.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+    <property name="dataStreamerThreadPoolSize" value="10"/>
+
+    <!-- other properties -->
+
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=pool-size,indent=0]
+----
+
+tab:C#/.NET[unsupported]
+
+tab:C++[unsupported]
+--
diff --git a/docs/_docs/data-structures/atomic-sequence.adoc b/docs/_docs/data-structures/atomic-sequence.adoc
new file mode 100644
index 0000000..f5422b7
--- /dev/null
+++ b/docs/_docs/data-structures/atomic-sequence.adoc
@@ -0,0 +1,38 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Atomic Sequence
+
+:javaFile: {javaCodeDir}/DataStructures.java
+
+== Overview
+
+The distributed atomic sequence, provided by the `IgniteAtomicSequence` interface, is similar to the distributed atomic long, but its value can only go up. It also supports reserving a range of values to avoid costly network trips or cache updates every time a sequence must provide the next value. That is, when you perform `incrementAndGet()` (or any other atomic operation) on an atomic sequence, the data structure reserves ahead a range of values, which are guaranteed to be unique across the cluster for this sequence instance.
+
+Here is an example of how an atomic sequence can be created:
+
+
+[source, java]
+----
+include::{javaFile}[tags=atomic-sequence, indent=0]
+----
+
+== Sequence Reserve Size
+
+The key parameter of `IgniteAtomicSequence` is `atomicSequenceReserveSize`, which is the number of sequence values reserved per node. When a node tries to obtain an instance of `IgniteAtomicSequence`, a number of sequence values are reserved for that node, and subsequent increments of the sequence happen locally without communication with other nodes, until the next reservation has to be made.
+
+The default value for `atomicSequenceReserveSize` is `1000`. This default setting can be changed by modifying the `atomicSequenceReserveSize` property of `AtomicConfiguration`.
+
+Refer to link:data-structures/atomic-types#atomic-configuration[Atomic Configuration] for more information on various atomic configuration properties.
+
diff --git a/docs/_docs/data-structures/atomic-types.adoc b/docs/_docs/data-structures/atomic-types.adoc
new file mode 100644
index 0000000..0bd9d6c
--- /dev/null
+++ b/docs/_docs/data-structures/atomic-types.adoc
@@ -0,0 +1,63 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Atomic Types
+
+:javaFile: {javaCodeDir}/DataStructures.java
+
+Ignite supports distributed atomic long and atomic reference, similar to `java.util.concurrent.atomic.AtomicLong` and `java.util.concurrent.atomic.AtomicReference` respectively.
+
+Atomics in Ignite are distributed across the cluster, essentially enabling atomic operations (such as increment-and-get or compare-and-set) on the same globally visible value. For example, you could update the value of an atomic long on one node and read it from another node.
+
+Features:
+
+  * Retrieve current value.
+  * Atomically modify current value.
+  * Atomically increment or decrement current value.
+  * Atomically compare-and-set the current value to a new value.
+
+Distributed atomic long and atomic reference can be obtained via `IgniteAtomicLong` and `IgniteAtomicReference` interfaces respectively, as shown below:
+
+
+
+.AtomicLong:
+[source, java]
+----
+include::{javaFile}[tags=atomic-long, indent=0]
+----
+
+.AtomicReference:
+[source, java]
+----
+include::{javaFile}[tags=atomic-reference, indent=0]
+----
+
+All atomic operations provided by `IgniteAtomicLong` and `IgniteAtomicReference` are synchronous. The time an atomic operation will take depends on the number of nodes performing concurrent operations with the same instance of atomic long, the intensity of these operations, and network latency.
+
+
+== Atomic Configuration
+
+Atomics in Ignite can be configured via the `atomicConfiguration` property of `IgniteConfiguration`.
+
+The following table lists available configuration parameters:
+
+[cols="1,1,1",opts="header"]
+|===
+| Setter | Description | Default
+| `setBackups(int)` | The number of backups. | 0
+| `setCacheMode(CacheMode)` | Cache mode for all atomic types. | `PARTITIONED`
+| `setAtomicSequenceReserveSize(int)` | Sets the number of sequence values reserved for `IgniteAtomicSequence` instances. |  1000
+|===
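+
+For illustration, the sketch below configures one backup copy for all atomic types before the node starts. This is a minimal sketch using the setters listed above:
+
+[source, java]
+----
+AtomicConfiguration atomicCfg = new AtomicConfiguration();
+
+// Keep one backup copy of each atomic value to survive a node failure.
+atomicCfg.setBackups(1);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setAtomicConfiguration(atomicCfg);
+
+Ignite ignite = Ignition.start(cfg);
+----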
+
+
diff --git a/docs/_docs/data-structures/countdownlatch.adoc b/docs/_docs/data-structures/countdownlatch.adoc
new file mode 100644
index 0000000..50f2e58
--- /dev/null
+++ b/docs/_docs/data-structures/countdownlatch.adoc
@@ -0,0 +1,39 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= CountDownLatch
+
+:javaFile: {javaCodeDir}/DataStructures.java
+
+`IgniteCountDownLatch` provides functionality that is similar to that of `java.util.concurrent.CountDownLatch` and allows you to synchronize operations across cluster nodes.
+
+A distributed CountDownLatch can be created as follows:
+
+[source, java]
+----
+include::{javaFile}[tags=count-down-latch, indent=0]
+----
+
+
+After the above code is executed, all nodes in the cluster will be able to synchronize on the latch named `latchName`.
+Below is a code example of such synchronization:
+
+
+[source, java]
+----
+include::{javaFile}[tags=sync-on-latch, indent=0]
+----
+
+
diff --git a/docs/_docs/data-structures/id-generator.adoc b/docs/_docs/data-structures/id-generator.adoc
new file mode 100644
index 0000000..8695bdf
--- /dev/null
+++ b/docs/_docs/data-structures/id-generator.adoc
@@ -0,0 +1,76 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Distributed ID Generator
+
+== Overview
+
+The distributed atomic sequence provided by the `IgniteAtomicSequence` interface is similar to the distributed atomic long,
+but its value can only go up. It also supports reserving a range of values to avoid costly network trips or cluster updates
+every time a sequence must provide the next value. That is, when you perform `incrementAndGet()` (or any other atomic operation)
+on an atomic sequence, the data structure reserves ahead a range of values, which are guaranteed to be unique across the
+cluster for this sequence instance.
+
+As a result, the atomic sequence is a suitable and efficient data structure for the implementation of a
+distributed ID generator. For instance, such a generator can be used to produce unique primary keys across the whole cluster.
+
+Here is an example of how an atomic sequence can be created:
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+Ignite ignite = Ignition.ignite();
+
+IgniteAtomicSequence seq = ignite.atomicSequence(
+    "seqName", // Sequence name.
+    0,       // Initial value for sequence.
+    true     // Create if it does not exist.
+);
+----
+--
+
+Below is a simple usage example:
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+Ignite ignite = Ignition.ignite();
+
+// Initialize atomic sequence.
+final IgniteAtomicSequence seq = ignite.atomicSequence("seqName", 0, true);
+
+// Increment atomic sequence.
+for (int i = 0; i < 20; i++) {
+  long currentValue = seq.get();
+  long newValue = seq.incrementAndGet();
+
+  // Use newValue as a cluster-wide unique ID here.
+}
+----
+--
+
+== Sequence Reserve Size
+
+The key parameter of `IgniteAtomicSequence` is `atomicSequenceReserveSize`, which is the number of sequence values reserved
+per node. When a node tries to obtain an instance of `IgniteAtomicSequence`, a number of sequence values are reserved
+for that node, and subsequent increments of the sequence happen locally without communication with other nodes, until
+the next reservation has to be made.
+
+The default value for `atomicSequenceReserveSize` is `1000`. This default setting can be changed by modifying the
+`atomicSequenceReserveSize` property of `AtomicConfiguration`.
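+
+For example, a larger reserve size can be configured through `AtomicConfiguration` before the node starts (a minimal sketch; the value is arbitrary):
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+AtomicConfiguration atomicCfg = new AtomicConfiguration();
+
+// Reserve 5000 sequence values per node to reduce network round trips.
+atomicCfg.setAtomicSequenceReserveSize(5000);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setAtomicConfiguration(atomicCfg);
+
+Ignite ignite = Ignition.start(cfg);
+----
+--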
+
diff --git a/docs/_docs/data-structures/queue-and-set.adoc b/docs/_docs/data-structures/queue-and-set.adoc
new file mode 100644
index 0000000..6091b27
--- /dev/null
+++ b/docs/_docs/data-structures/queue-and-set.adoc
@@ -0,0 +1,81 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Queue and Set
+:javaFile: {javaCodeDir}/DataStructures.java
+== Overview
+
+In addition to providing standard key-value map-like storage, Ignite also provides an implementation of a fast Distributed Blocking Queue and Distributed Set.
+
+`IgniteQueue` and `IgniteSet`, implementations of the `java.util.concurrent.BlockingQueue` and `java.util.Set` interfaces respectively, also support all operations from the `java.util.Collection` interface.
+Both types can be created in either collocated or non-collocated mode.
+
+Below is an example of how to create a distributed queue and set.
+
+.Queue:
+[source, java]
+----
+include::{javaFile}[tags=queue, indent=0]
+----
+
+
+.Set:
+[source, java]
+----
+include::{javaFile}[tags=set, indent=0]
+----
+
+== Collocated vs. Non-Collocated Mode
+
+If you plan to create just a few queues or sets containing lots of data, then create them in non-collocated mode. This ensures that an approximately equal portion of each queue or set is stored on each cluster node. On the other hand, if you plan to have many queues or sets that are relatively small in size (compared to the whole cache), then you would most likely create them in collocated mode. In this mode, all elements of a queue or set are stored on the same cluster node, but an approximately equal number of queues/sets is assigned to every node.
+
+Non-collocated mode only makes sense for, and is only supported by, `PARTITIONED` caches.
+
+A collocated queue and set can be created by setting the `collocated` property of `CollectionConfiguration`, like so:
+
+.Queue:
+[source, java]
+----
+include::{javaFile}[tags=colocated-queue, indent=0]
+----
+
+
+.Set:
+[source, java]
+----
+include::{javaFile}[tags=colocated-set, indent=0]
+----
+
+== Cache Queues and Load Balancing
+
+Given that elements remain in the queue until someone takes them, and that no two nodes can ever receive the same element from the queue, cache queues can be used as an alternative work distribution and load balancing approach within Ignite.
+
+For example, you could simply add computations, such as instances of `IgniteRunnable`, to a queue, and have threads on remote nodes call the `IgniteQueue.take()` method, which blocks if the queue is empty. Once the `take()` method returns a job, a thread processes it and calls `take()` again to get the next job. With this approach, threads on remote nodes start working on the next job only when they have completed the previous one, creating an ideally balanced system where every node takes only the number of jobs it can process, and no more.
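+
+A minimal sketch of this pattern is shown below, under a few assumptions: the queue name is arbitrary, and the job classes are available on all nodes through the classpath or peer class loading:
+
+[source, java]
+----
+Ignite ignite = Ignition.ignite();
+
+// Producer: distribute work by adding jobs to an unbounded distributed queue.
+IgniteQueue<IgniteRunnable> jobs = ignite.queue("jobQueue", 0, new CollectionConfiguration());
+
+jobs.add(() -> System.out.println("Executing a job..."));
+
+// Consumer (on any node): take the next job and run it.
+// take() blocks while the queue is empty.
+IgniteRunnable job = jobs.take();
+
+job.run();
+----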
+
+== Collection Configuration
+
+Ignite collections can be configured via the `CollectionConfiguration` class (see the examples above). The following configuration parameters can be used:
+
+
+
+[cols="1,1,1",opts="header"]
+|===
+| Setter | Description |  Default
+| `setCollocated(boolean)` | Sets collocation mode. | `false`
+|`setCacheMode(CacheMode)` | Sets underlying cache mode (`PARTITIONED`, `REPLICATED` or `LOCAL`). | `PARTITIONED`
+| `setAtomicityMode(CacheAtomicityMode)` | Sets underlying cache atomicity mode (`ATOMIC` or `TRANSACTIONAL`). | `ATOMIC`
+| `setOffHeapMaxMemory(long)` | Sets offheap maximum memory size. | `0` (unlimited)
+| `setBackups(int)` |  Sets number of backups. | `0`
+|`setNodeFilter(IgnitePredicate<ClusterNode>)` | Sets optional predicate specifying on which nodes entries should be stored. |
+|===
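+
+For illustration, several of these parameters can be combined in a single configuration (a minimal sketch; the set name is arbitrary):
+
+[source, java]
+----
+CollectionConfiguration colCfg = new CollectionConfiguration();
+
+// Store all elements of the collection on one node and keep one backup copy.
+colCfg.setCollocated(true);
+colCfg.setBackups(1);
+
+IgniteSet<String> set = Ignition.ignite().set("userIds", colCfg);
+----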
diff --git a/docs/_docs/data-structures/semaphore.adoc b/docs/_docs/data-structures/semaphore.adoc
new file mode 100644
index 0000000..508c017
--- /dev/null
+++ b/docs/_docs/data-structures/semaphore.adoc
@@ -0,0 +1,33 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Semaphore
+
+:javaFile: {javaCodeDir}/DataStructures.java
+
+Ignite's distributed counting semaphore is similar in behavior to the well-known `java.util.concurrent.Semaphore`. Like any other semaphore, it maintains a set of permits that are taken with the `acquire()` method and released with the `release()` counterpart, allowing you to restrict access to a logical or physical resource or to synchronize execution flow. The only difference is that Ignite's semaphore lets you perform these actions not only within the boundaries of a single JVM, but cluster-wide, across many remote nodes.
+
+You can create a distributed semaphore as follows:
+
+[source, java]
+----
+include::{javaFile}[tags=semaphore, indent=0]
+----
+
+Once the semaphore is created, it can be used concurrently by multiple cluster nodes to implement distributed logic or restrict access to a distributed resource, as in the following example:
+
+[source, java]
+----
+include::{javaFile}[tags=use-semaphore, indent=0]
+----
+
diff --git a/docs/_docs/distributed-computing/cluster-groups.adoc b/docs/_docs/distributed-computing/cluster-groups.adoc
new file mode 100644
index 0000000..d8acc4f
--- /dev/null
+++ b/docs/_docs/distributed-computing/cluster-groups.adoc
@@ -0,0 +1,62 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Cluster Groups
+
+:javaFile: {javaCodeDir}/ClusterAPI.java
+
+The `ClusterGroup` interface represents a logical group of nodes, which can be used in many of Ignite's APIs when you want to limit the scope of specific operations to a subset of nodes (instead of the whole cluster). For example, you may wish to deploy a service only on remote nodes or execute a job only on the set of nodes that have a specific attribute.
+////
+TODO: explain attributes
+////
+TIP: Note that the `IgniteCluster` interface is also a cluster group which includes all the nodes in the cluster.
+
+You can limit job execution, service deployment, messaging, events, and
+other tasks to run only on a specific set of nodes. For example, here is
+how to broadcast a job only to remote nodes (excluding the local node).
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=remote-nodes,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ClusterGroups.cs[tag=broadcastAction,indent=0]
+----
+tab:C++[unsupported]
+--
+
+
+For convenience, Ignite comes with a number of predefined cluster groups.
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=group-examples,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ClusterGroups.cs[tag=clusterGroups,indent=0]
+----
+tab:C++[unsupported]
+--
+
diff --git a/docs/_docs/distributed-computing/collocated-computations.adoc b/docs/_docs/distributed-computing/collocated-computations.adoc
new file mode 100644
index 0000000..47bd72ff
--- /dev/null
+++ b/docs/_docs/distributed-computing/collocated-computations.adoc
@@ -0,0 +1,179 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+:javaSourceFile: {javaCodeDir}/CollocatedComputations.java
+:dotnetSourceFile: code-snippets/dotnet/CollocationgComputationsWithData.cs
+= Colocating Computations with Data
+
+Colocated computation is a type of distributed data processing wherein the computational task you want to perform over a specific data set is sent to the nodes where the required data is located, and only the results of the computations are sent back. This approach minimizes data transfer between nodes and can significantly reduce task execution time.
+
+Ignite provides several ways to perform colocated computations, all of which use the affinity function to determine the location of the data.
+
+The compute interface provides the `affinityCall(...)` and `affinityRun(...)` methods that colocate a task with data either by key or by partition.
+
+[IMPORTANT]
+====
+The `affinityCall(...)` and `affinityRun(...)` methods guarantee that the data for the given key or partition is present on the target node for the duration of the task.
+====
+
+[IMPORTANT]
+====
+The class definitions of the task to be executed on remote nodes must be available on the nodes.
+You can ensure this in two ways:
+
+* Add the classes to the classpath of the nodes;
+* Enable link:code-deployment/peer-class-loading[peer class loading].
+====
+
+== Colocating by Key
+To send a computational task to the node where a given key is located, use the following methods:
+
+- `IgniteCompute.affinityCall(String cacheName, Object key, IgniteCallable<R> job)`
+- `IgniteCompute.affinityRun(String cacheName, Object key, IgniteRunnable job)`
+
+Ignite calls the configured affinity function to determine the location of the given key.
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=collocating-by-key,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetSourceFile}[tag=affinityRun,indent=0]
+----
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/affinity_run.cpp[tag=affinity-run,indent=0]
+----
+--
+
+== Colocating by Partition
+
+The `affinityCall(Collection<String> cacheNames, int partId, IgniteCallable<R> job)` and `affinityRun(Collection<String> cacheNames, int partId, IgniteRunnable job)` methods send a given task to the node where the partition with the given ID is located. This is useful when you need to retrieve objects for multiple keys and you know that the keys belong to the same partition. In this case, you can create one task instead of multiple tasks, one for each key.
+
+For example, let's say you want to calculate the arithmetic mean of a specific field for a specific subset of keys.
+If you want to distribute the computation, you can group the keys by partitions and send each group of keys to the node where the partition is located to get the values.
+The number of groups and, therefore, the number of tasks is no more than the total number of partitions (default is 1024).
+Below is a code snippet that illustrates this example.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=calculate-average,indent=0]
+----
+tab:C#/.NET[unsupported]
+The `affinityCall(...)` method with a partition ID parameter is not supported in Ignite.NET.
+
+tab:C++[unsupported]
+--
+
+If you want to process all the data in the cache, you can iterate over all cache partitions and send tasks that process the data stored on each individual partition.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=sum-by-partition,indent=0]
+----
+tab:C#/.NET[unsupported]
+
+tab:C++[unsupported]
+--
+
+
+[IMPORTANT]
+====
+[discrete]
+=== Performance Considerations
+Colocated computations yield performance benefits when the amount of the data you want to process is sufficiently large. In some cases, when the amount of data is small, a link:key-value-api/using-scan-queries[scan query] may perform better.
+
+====
+
+
+== Entry Processor
+
+// This section should probably be expanded with more details and examples
+
+An entry processor is used to process cache entries on the nodes where they are stored and return the result of the processing. With an entry processor, you do not have to transfer the entire object to perform an operation with it, you can perform the operation remotely and only transfer the results.
+
+If an entry processor sets the value for an entry that does not exist, the entry is added to the cache.
+
+Entry processors are executed atomically within a lock on the given cache key.
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=entry-processor,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+void CacheInvoke()
+{
+    var ignite = Ignition.Start();
+
+    var cache = ignite.GetOrCreateCache<int, int>("myCache");
+
+    var proc = new Processor();
+
+    // Increment cache value 10 times
+    for (int i = 0; i < 10; i++)
+        cache.Invoke(1, proc, 5);
+}
+
+class Processor : ICacheEntryProcessor<int, int, int, int>
+{
+    public int Process(IMutableCacheEntry<int, int> entry, int arg)
+    {
+        // Add the argument to the current value, or initialize the entry if it does not exist.
+        entry.Value = entry.Exists ? entry.Value + arg : arg;
+
+        return entry.Value;
+    }
+}
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/invoke.cpp[tag=invoke,indent=0]
+----
+
+--
+
+////
+
+TODO: the importance of this section is questionable
+
+== Cache Interceptor
+
+Ignite lets you execute custom logic before or after specific operations on a cache. You can:
+
+- change the returned value of the `get` operation;
+- process an entry before or after any `put`/`remove` operation.
+
+
+////
diff --git a/docs/_docs/distributed-computing/distributed-computing.adoc b/docs/_docs/distributed-computing/distributed-computing.adoc
new file mode 100644
index 0000000..ea2b9e1
--- /dev/null
+++ b/docs/_docs/distributed-computing/distributed-computing.adoc
@@ -0,0 +1,388 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Distributed Computing
+
+:javaFile: {javaCodeDir}/DistributedComputing.java
+
+Ignite provides an API for distributing computations across cluster nodes in a balanced and fault-tolerant manner. You can submit individual tasks for execution as well as implement the MapReduce pattern with automatic task splitting. The API provides fine-grained control over the link:distributed-computing/load-balancing[job distribution strategy].
+
+
+////
+*TODO: recommendation: define tasks for execution in seprate classes (not as nested clusses or closures), because the enclosing class will be peer-deployed as well and you may not want this*
+////
+
+== Getting the Compute Interface
+
+The main entry point for running distributed computations is the compute interface, which can be obtained from an instance of `Ignite`.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=get-compute,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DistributedComputingApi.cs[tag=gettingCompute,indent=0]
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/compute_get.cpp[tag=compute-get,indent=0]
+----
+--
+
+The compute interface provides methods for distributing different types of tasks over cluster nodes and running link:distributed-computing/collocated-computations[colocated computations].
+
+== Specifying the Set of Nodes for Computations
+
+Each instance of the compute interface is associated with a link:distributed-computing/cluster-groups[set of nodes] on which the tasks are executed.
+When called without arguments, `ignite.compute()` returns the compute interface that is associated with all server nodes.
+To obtain an instance for a specific subset of nodes, use `Ignite.compute(ClusterGroup group)`.
+In the following example, the compute interface is bound to the remote nodes only, i.e. all nodes except for the one that runs this code.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=get-compute-for-nodes,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DistributedComputingApi.cs[tag=forRemotes,indent=0]
+----
+tab:C++[unsupported]
+--
+
+
+== Executing Tasks
+
+Ignite provides three interfaces that can be implemented to represent a task and executed via the compute interface:
+
+- `IgniteRunnable` — an extension of `java.lang.Runnable` that can be used to implement calculations that do not have input parameters and return no result.
+- `IgniteCallable` — an extension of `java.util.concurrent.Callable` that returns a specific value.
+- `IgniteClosure` — a functional interface that accepts a parameter and returns a value.
+
+
+You can execute a task once (on one of the nodes) or broadcast it to all nodes.
+
+[IMPORTANT]
+====
+In order to run tasks on the remote nodes, make sure the class definitions of the tasks are available on the nodes.
+You can do this in two ways:
+
+- Add the classes to the classpath of the nodes;
+- Enable link:code-deployment/peer-class-loading[peer class loading].
+====
+
+=== Executing a Runnable Task
+
+To execute a runnable task, use the `run(...)` method of the compute interface. The task is sent to one of the nodes associated with the compute instance.
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=execute-runnable,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DistributedComputingApi.cs[tag=computeAction,indent=0]
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/compute_run.cpp[tag=compute-run,indent=0]
+----
+--
+
+
+=== Executing a Callable Task
+
+To execute a callable task, use the `call(...)` method of the compute interface.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=execute-callable,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DistributedComputingApi.cs[tag=computeFunc,indent=0]
+----
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/compute_call.cpp[tag=compute-call,indent=0]
+----
+--
+
+=== Executing an IgniteClosure
+
+To execute an `IgniteClosure`, use the `apply(...)` method of the compute interface. The method accepts a task and an input parameter for the task. The parameter is passed to the given `IgniteClosure` at the execution time.
+
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tag=execute-closure,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DistributedComputingApi.cs[tag=computeFuncApply,indent=0]
+----
+tab:C++[unsupported]
+--
+
+
+=== Broadcasting a Task
+The `broadcast()` method executes a task on _all nodes_ associated with the compute instance.
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=broadcast,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DistributedComputingApi.cs[tag=broadcast,indent=0]
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/compute_broadcast.cpp[tag=compute-broadcast,indent=0]
+----
+--
+
+=== Asynchronous Execution
+
+All methods described in the previous sections have asynchronous counterparts:
+
+- `callAsync(...)`
+- `runAsync(...)`
+- `applyAsync(...)`
+- `broadcastAsync(...)`
+
+The asynchronous methods return an `IgniteFuture` that represents the result of the operation. In the following example, a collection of callable tasks is executed asynchronously.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=async,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DistributedComputingApi.cs[tag=async,indent=0]
+----
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/compute_call_async.cpp[tag=compute-call-async,indent=0]
+----
+--
+
+== Task Execution Timeout
+
+You can set a timeout for task execution.
+If the task does not finish within the given time frame, it is stopped and all jobs produced by this task are cancelled.
+
+To execute a task with a timeout, use the `withTimeout(...)` method of the compute interface.
+The method returns a compute interface that executes the first task given to it in a time-limited manner.
+Subsequent tasks do not have a timeout: you need to call `withTimeout(...)` for every task that should have a timeout.
+
+//TODO: code samples for other languages
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=timeout,indent=0]
+----
+--
+
+== Sharing State Between Jobs on Local Node
+It is often useful to share state between different compute jobs executed on one node. For this purpose, a shared concurrent local map is available on each node.
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=get-map,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+Node-local values are similar to thread-local variables in that these values are not distributed and are kept only on the local node.
+Node-local data can be used to share the state between compute jobs.
+It can also be used by deployed services.
+
+In the following example, a job increments a node-local counter every time it executes on some node. As a result, the node-local counter on each node tells us how many times the job has executed on that node.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=job-counter,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+== Accessing Data from Computational Tasks
+
+If your computational task needs to access the data stored in caches, you can do it via the instance of `Ignite`:
+
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tag=access-data,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DistributedComputingApi.cs[tag=instanceResource,indent=0]
+----
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/compute_acessing_data.cpp[tag=compute-acessing-data,indent=0]
+----
+--
+
+Note that the example shown above may not be the most efficient approach.
+The reason is that the person object that corresponds to key `1` may be located on a node other than the node where the task is executed.
+In this case, the object is fetched over the network. This can be avoided by link:distributed-computing/collocated-computations[colocating the task with the data].
+
+[CAUTION]
+====
+If you want to use the key and value objects inside `IgniteCallable` and `IgniteRunnable` tasks, make sure the key and value classes are deployed on all cluster nodes.
+====
+
+
+////////////////////////////////////////////////////////////////////////////////
+
+
+
+In the cases where you do not need to colocate computations with data but simply want to process all data remotely, you can run local cache queries inside the `call()` method. Consider the following example.
+
+Let's say we have a cache that stores information about persons and we want to calculate the average age of all persons. One way to accomplish this is to run a link:key-value-api/querying[scan query] that will fetch the ages of all persons to the local node, where you can calculate the average age.
+
+A more efficient way, however, is to avoid network calls to other nodes by running the query locally on each remote node and aggregating the result on the local node.
+
+This task can be easily split into jobs that are executed on each node:
+
+[source, java]
+-------------------------------------------------------------------------------
+private class AverageAgeJob implements IgniteCallable<Double> {
+
+    @IgniteInstanceResource
+    private Ignite ignite;
+
+    @Override
+    public Double call() throws Exception {
+
+        IgniteCache<Long, Person> cache = ignite.cache("person");
+
+        double localSum = 0;
+        try (QueryCursor<Cache.Entry<Long, Person>> cursor = cache
+                .query(new ScanQuery<Long, Person>().setLocal(true))) {
+            for (Cache.Entry<Long, Person> entry : cursor) {
+                localSum += entry.getValue().getAge();
+            }
+        }
+
+        // Each node returns its share of the global average: the local sum of
+        // ages divided by the total number of entries in the cache.
+        return localSum / cache.size();
+    }
+}
+
+-------------------------------------------------------------------------------
+Note that the scan query is executed in local mode. This means that it only fetches objects from the Person cache that are stored locally and does not request data from other nodes.
+
+If you broadcast this task to all nodes, all person objects are processed (each locally), and the results are sent to the node that initiated the task.
+
+[source, java]
+-------------------------------------------------------------------------------
+Ignite ignite = Ignition.ignite();
+
+double average = ignite.compute().broadcast(new AverageAgeJob()).stream().reduce(0D, (a, b) -> a + b);
+-------------------------------------------------------------------------------
+
+
+The task is executed on every node, where it queries all persons stored locally and calculates the local average. The results are then sent to the node that initiated the task and summed up. In this implementation, person objects are not transferred over the network.
+
+
+
+////////////////////////////////////////////////////////////////////////////////
+
+
+////////////////////////////////////////////////////////////////////////////////
+
+
+
+== Accessing Ignite Resources from Tasks
+
+Ignite provides a number of resources that can be injected into a task, such as an instance of `Ignite`.
+
+`TaskSessionResource` - this
+
+`IgniteInstanceResource` -
+
+`LoggerResource` -
+
+`SpringApplicationContextResource` -
+
+`SpringResource` -
+
+[source, java]
+-------------------------------------------------------------------------------
+example
+-------------------------------------------------------------------------------
+
+
+////////////////////////////////////////////////////////////////////////////////
diff --git a/docs/_docs/distributed-computing/executor-service.adoc b/docs/_docs/distributed-computing/executor-service.adoc
new file mode 100644
index 0000000..fe5498e
--- /dev/null
+++ b/docs/_docs/distributed-computing/executor-service.adoc
@@ -0,0 +1,39 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Executor Service
+
+:javaFile: {javaCodeDir}/IgniteExecutorService.java
+
+Ignite provides a distributed implementation of `java.util.concurrent.ExecutorService` that submits tasks to a cluster's server nodes for execution.
+The tasks are load balanced across the cluster nodes and are guaranteed to be executed as long as there is at least one node in the cluster.
+
+////
+TODO: C# unsupported?
+////
+An executor service can be obtained from an instance of `Ignite`:
+
+[source, java]
+----
+include::{javaFile}[tag=execute,indent=0]
+----
+
+You can also limit the set of nodes available for the executor service by specifying a link:distributed-computing/cluster-groups[cluster group]:
+
+[source, java]
+-------------------------------------------------------------------------------
+include::{javaFile}[tag=cluster-group,indent=0]
+-------------------------------------------------------------------------------
+
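+For illustration, submitting a closure through the distributed executor might look like this minimal sketch:
+
+[source, java]
+----
+Ignite ignite = Ignition.ignite();
+
+// A distributed executor service backed by the cluster's server nodes.
+ExecutorService exec = ignite.executorService();
+
+// The callable is serialized and executed on one of the server nodes.
+Future<Integer> fut = exec.submit((IgniteCallable<Integer>)() -> "Hello Ignite".length());
+
+// Checked exceptions omitted for brevity.
+Integer len = fut.get();
+----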
+
diff --git a/docs/_docs/distributed-computing/fault-tolerance.adoc b/docs/_docs/distributed-computing/fault-tolerance.adoc
new file mode 100644
index 0000000..685fba9
--- /dev/null
+++ b/docs/_docs/distributed-computing/fault-tolerance.adoc
@@ -0,0 +1,65 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Fault Tolerance
+:javaFile: {javaCodeDir}/FaultTolerance.java
+
+Ignite supports automatic job failover.
+In case of a node crash, jobs are automatically transferred to other available nodes for re-execution.
+As long as there is at least one node standing, no job is ever lost.
+
+The global failover strategy is controlled by the `IgniteConfiguration.failoverSpi` property.
+
+Available implementations:
+
+* `AlwaysFailoverSpi` — This implementation always reroutes a failed job to another node, and is used by default.
++
+When a job from a compute task fails, an attempt is made to reroute the failed job to a node that has not executed any other job from the same task. If no such node is available, then an attempt is made to reroute the failed job to one of the nodes that may be running other jobs from the same task. If none of the above attempts succeeds, then the job is not failed over.
++
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/failover-always.xml[tags=ignite-config;!discovery, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=always,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+
+* `NeverFailoverSpi` — This implementation never fails over failed jobs.
++
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/failover-never.xml[tags=ignite-config;!discovery, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=never,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+* `JobStealingFailoverSpi` — This implementation must be used only if you want to enable link:distributed-computing/load-balancing#job-stealing[job stealing].
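+
+Whichever implementation you choose, the failover SPI is set on the node configuration. For illustration, a minimal Java sketch for the default `AlwaysFailoverSpi` (the attempt limit is an arbitrary example value):
+
+[source, java]
+----
+AlwaysFailoverSpi failSpi = new AlwaysFailoverSpi();
+
+// Reroute a failed job at most 5 times before giving up on it.
+failSpi.setMaximumFailoverAttempts(5);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+cfg.setFailoverSpi(failSpi);
+
+Ignite ignite = Ignition.start(cfg);
+----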
diff --git a/docs/_docs/distributed-computing/job-scheduling.adoc b/docs/_docs/distributed-computing/job-scheduling.adoc
new file mode 100644
index 0000000..c242a69
--- /dev/null
+++ b/docs/_docs/distributed-computing/job-scheduling.adoc
@@ -0,0 +1,78 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Job Scheduling
+
+:javaFile: {javaCodeDir}/JobScheduling.java
+
+When jobs arrive at the destination node, they are submitted to a thread pool and scheduled for execution in random order.
+However, you can change job ordering by configuring `CollisionSpi`.
+The `CollisionSpi` interface provides a way to control how jobs are scheduled for processing on each node.
+
+Ignite provides several implementations of the `CollisionSpi` interface:
+
+- `FifoQueueCollisionSpi` — simple FIFO ordering in multiple threads. This implementation is used by default;
+- `PriorityQueueCollisionSpi` — priority ordering;
+- `JobStealingCollisionSpi` — use this implementation to enable link:distributed-computing/load-balancing#job-stealing[job stealing] (it must be paired with `JobStealingFailoverSpi`).
+
+To enable a specific collision SPI, set the `IgniteConfiguration.collisionSpi` property.
+
+== FIFO Ordering
+
+`FifoQueueCollisionSpi` provides FIFO ordering of jobs as they arrive. The jobs are executed in multiple threads. The number of threads is controlled by the `parallelJobsNumber` parameter. The default value equals 2 times the number of processor cores.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/job-scheduling-fifo.xml[tags=ignite-config;!discovery, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=fifo,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
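+
+As a quick sketch, tuning the number of parallel jobs might look like this (the value 4 is an arbitrary example):
+
+[source, java]
+----
+FifoQueueCollisionSpi colSpi = new FifoQueueCollisionSpi();
+
+// Execute at most 4 jobs in parallel on each node.
+colSpi.setParallelJobsNumber(4);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+cfg.setCollisionSpi(colSpi);
+
+Ignite ignite = Ignition.start(cfg);
+----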
+
+
+== Priority Ordering
+
+Use `PriorityQueueCollisionSpi` to assign priorities to individual jobs, so that jobs with higher priority are executed ahead of lower priority jobs. You can also specify the number of threads to process jobs.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/job-scheduling-priority.xml[tags=ignite-config;!discovery, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=priority,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+Task priorities are set in the link:distributed-computing/map-reduce#distributed-task-session[task session] via the `grid.task.priority` attribute. If no priority is assigned to a task, then the default priority of 0 is used.
+
+
+[source, java]
+----
+include::{javaFile}[tag=task-priority,indent=0]
+----
+
diff --git a/docs/_docs/distributed-computing/load-balancing.adoc b/docs/_docs/distributed-computing/load-balancing.adoc
new file mode 100644
index 0000000..863c091
--- /dev/null
+++ b/docs/_docs/distributed-computing/load-balancing.adoc
@@ -0,0 +1,127 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Load Balancing
+
+:javaFile: {javaCodeDir}/LoadBalancing.java
+
+Ignite automatically load balances jobs produced by a link:distributed-computing/map-reduce[compute task] as well as individual tasks submitted via the distributed computing API. Individual tasks submitted via `IgniteCompute.run(...)` and other compute methods are treated as tasks producing a single job.
+
+////////////////////////////////////////////////////////////////////////////////
+
+IgniteCompute.run(IgniteRunnable task)  -> produces one job
+
+IgniteCompute.call(IgniteCallable task) -> produces one job
+
+IgniteCompute.execute(ComputeTask) -> splits into multiple jobs
+
+////////////////////////////////////////////////////////////////////////////////
+
+By default, Ignite uses a round-robin algorithm (`RoundRobinLoadBalancingSpi`), which distributes jobs in sequential order across the nodes specified for the compute task.
+
+[NOTE]
+====
+Load balancing does not apply to link:distributed-computing/collocated-computations[colocated computations].
+====
+
+The load balancing algorithm is controlled by the `IgniteConfiguration.loadBalancingSpi` property.
+
+== Round-Robin Load Balancing
+
+`RoundRobinLoadBalancingSpi` iterates through the available nodes in a round-robin fashion and picks the next sequential node. The available nodes are defined when you link:distributed-computing/distributed-computing#getting-the-compute-interface[get the compute instance] through which you execute your tasks.
+
+Round-Robin load balancing supports two modes of operation: per-task and global.
+
+When configured in per-task mode, the implementation picks a random node at the beginning of every task execution and then sequentially iterates through all the nodes in the topology starting from that node. For cases when the split size of a task is equal to the number of nodes, this mode guarantees that all nodes will participate in job execution.
+
+[IMPORTANT]
+====
+The per-task mode requires that the following event types be enabled: `EVT_TASK_FAILED`, `EVT_TASK_FINISHED`, `EVT_JOB_MAPPED`.
+====
+
+
+When configured in global mode, a single sequential queue of nodes is maintained for all tasks and the next node in the queue is picked every time. In this mode (unlike in per-task mode), it is possible that even if the split size of a task is equal to the number of nodes, some jobs within the same task will be assigned to the same node whenever multiple tasks are executing concurrently.
+
+The global mode is used by default.
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/round-robin-load-balancing.xml[tags=!discovery,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=load-balancing,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
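+
+The per-task mode is not enabled by default. A minimal sketch of switching to it, including the event types it requires:
+
+[source, java]
+----
+RoundRobinLoadBalancingSpi spi = new RoundRobinLoadBalancingSpi();
+
+// Switch from the default global mode to the per-task mode.
+spi.setPerTask(true);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+cfg.setLoadBalancingSpi(spi);
+
+// The per-task mode requires these event types to be enabled.
+cfg.setIncludeEventTypes(EventType.EVT_TASK_FAILED, EventType.EVT_TASK_FINISHED,
+    EventType.EVT_JOB_MAPPED);
+----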
+
+
+== Random and Weighted Load Balancing
+`WeightedRandomLoadBalancingSpi` picks a random node from the list of available nodes. You can also optionally assign weights to nodes, so that nodes with larger weights get proportionally more jobs routed to them. By default, all nodes get a weight of 10.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/weighted-load-balancing.xml[tags=ignite-config;!discovery,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=weighted,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
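+
+For illustration, a minimal sketch of enabling weights and raising this node's weight above the default of 10:
+
+[source, java]
+----
+WeightedRandomLoadBalancingSpi spi = new WeightedRandomLoadBalancingSpi();
+
+// Weights are ignored unless explicitly enabled (pure random balancing by default).
+spi.setUseWeights(true);
+
+// This node gets twice the default weight, so roughly twice as many jobs.
+spi.setNodeWeight(20);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+cfg.setLoadBalancingSpi(spi);
+----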
+
+== Job Stealing
+
+Clusters are often deployed across many computers, some of which are more powerful or less utilized than others. Enabling `JobStealingCollisionSpi` helps avoid jobs being stuck on an over-utilized node, as they will be stolen by an under-utilized node.
+
+`JobStealingCollisionSpi` supports job stealing from over-utilized nodes to under-utilized nodes. This SPI is especially useful if you have some jobs that complete quickly, while others are sitting in the waiting queue on over-utilized nodes. In such a case, the waiting jobs will be stolen from the slower node and moved to the fast/under-utilized node.
+
+`JobStealingCollisionSpi` adopts a "late" load balancing technique, which allows reassigning a job from node A to node B after the job has already been scheduled for execution on node A.
+
+[IMPORTANT]
+====
+If you want to enable job stealing, you have to configure `JobStealingFailoverSpi` as the failover SPI. See link:distributed-computing/fault-tolerance[Fault Tolerance] for details.
+====
+
+
+Here is an example of how to configure `JobStealingCollisionSpi`:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/job-stealing.xml[tags=ignite-config;!discovery,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=job-stealing,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
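+
+For illustration, a minimal sketch combining both SPIs (the threshold values are arbitrary examples):
+
+[source, java]
+----
+JobStealingCollisionSpi colSpi = new JobStealingCollisionSpi();
+
+// A node executing more jobs than this becomes a candidate to be stolen from.
+colSpi.setActiveJobsThreshold(25);
+
+// Steal jobs as soon as any job has to wait in the queue.
+colSpi.setWaitJobsThreshold(0);
+
+// Job stealing requires the matching failover SPI.
+JobStealingFailoverSpi failSpi = new JobStealingFailoverSpi();
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+cfg.setCollisionSpi(colSpi);
+cfg.setFailoverSpi(failSpi);
+----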
+
+
diff --git a/docs/_docs/distributed-computing/map-reduce.adoc b/docs/_docs/distributed-computing/map-reduce.adoc
new file mode 100644
index 0000000..c229885
--- /dev/null
+++ b/docs/_docs/distributed-computing/map-reduce.adoc
@@ -0,0 +1,140 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= MapReduce API
+
+:javaFile: {javaCodeDir}/MapReduce.java
+
+== Overview
+
+Ignite provides an API for performing simplified MapReduce operations.
+The MapReduce pattern is based on the assumption that the task that you
+want to execute can be split into multiple jobs (the mapping phase),
+with each job executed separately. The results produced by each job are
+aggregated into the final results (the reducing phase).
+
+In a distributed system such as Ignite, the jobs are distributed between
+the nodes according to the preconfigured link:distributed-computing/load-balancing[load balancing strategy] and the results are aggregated on the node that submitted the task.
+
+The MapReduce pattern is provided by the `ComputeTask` interface.
+
+[NOTE]
+====
+Use `ComputeTask` only when you need fine-grained control over the
+job-to-node mapping or custom failover logic. For all other cases you
+should use link:distributed-computing/distributed-computing#executing-an-igniteclosure[simple closures].
+====
+
+== Understanding Compute Task Interface
+
+The `ComputeTask` interface provides a way to implement custom map and reduce logic. The interface has three methods: `map(...)`, `result()`, and `reduce()`.
+
+The `map()` method should be implemented to create the compute jobs based on the input parameter and map them to worker nodes. The method receives the collection of cluster nodes on which the task is to be run and the task's input parameter. The method returns a map with jobs as keys and mapped worker nodes as values. The jobs are then sent to the mapped nodes and executed there.
+
+The `result()` method is called after completion of each job and returns an instance of `ComputeJobResultPolicy` indicating how to proceed with the task. The method receives the results of the job and the list of all the job results received so far. The method may return one of the following values:
+
+- `WAIT` - wait for all remaining jobs to complete (if any);
+- `REDUCE` - immediately move to the reduce step, discarding all the remaining jobs and results not yet received;
+- `FAILOVER` - failover the job to another node (see link:distributed-computing/fault-tolerance[Fault Tolerance]).
+
+The `reduce()` method is called during the reduce step, when all the jobs have completed (or the `result()` method returned the `REDUCE` result policy for a particular job). The method receives a list with all completed results and returns the final result of the computation.
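+
+For illustration, a skeletal task following this contract might look like the sketch below (the word-length logic is purely illustrative; see also the adapters and the full example later on this page):
+
+[source, java]
+----
+public class WordLengthTask extends ComputeTaskAdapter<String, Integer> {
+    @Override
+    public Map<? extends ComputeJob, ClusterNode> map(List<ClusterNode> nodes, String arg) {
+        Map<ComputeJob, ClusterNode> jobs = new HashMap<>();
+
+        int i = 0;
+
+        // The map phase: one job per word, assigned to the nodes in round-robin order.
+        for (String word : arg.split(" ")) {
+            ClusterNode node = nodes.get(i++ % nodes.size());
+
+            jobs.put(new ComputeJobAdapter() {
+                @Override public Object execute() {
+                    return word.length();
+                }
+            }, node);
+        }
+
+        return jobs;
+    }
+
+    @Override
+    public Integer reduce(List<ComputeJobResult> results) {
+        // The reduce phase: sum up the lengths returned by all jobs.
+        int sum = 0;
+
+        for (ComputeJobResult res : results)
+            sum += res.<Integer>getData();
+
+        return sum;
+    }
+}
+----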
+
+//When you submit a compute task for execution via the `IgniteCompute.execute()` method, ..
+
+== Executing a Compute Task
+
+To execute a compute task, call the `IgniteCompute.execute(...)` method and pass the input parameter for the compute task as the last argument.
+////
+TODO: should we provide the full example for C#?
+////
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tags=execute-compute-task;!exclude,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/MapReduceApi.cs[tag=mapReduceComputeTask,indent=0]
+----
+tab:C++[unsupported]
+--
+
+You can limit the execution of jobs to a subset of nodes by using a link:distributed-computing/cluster-groups[cluster group].
+
+
+== Handling Job Failures
+
+If a node crashes or becomes unavailable during task execution, all jobs scheduled for the node are automatically sent to another available node (due to the built-in failover mechanism). However, if a job throws an exception, you can treat the job as failed and fail it over to another node for re-execution. To do this, return `FAILOVER` in the `result(...)` method:
+
+[source, java]
+----
+include::{javaFile}[tags=failover,indent=0]
+----
+
+
+== Compute Task Adapters
+There are several helper classes that provide the most commonly used implementations of the `result(...)` and `map(...)` methods.
+
+* `ComputeTaskAdapter` — This class implements the `result()` method to return the `FAILOVER` policy if a job throws an exception and the `WAIT` policy otherwise. This means the implementation waits for all jobs to finish and return a result.
+
+* `ComputeTaskSplitAdapter` — This class extends `ComputeTaskAdapter` and implements the `map(...)` method to automatically assign jobs to nodes. It introduces a new `split(...)` method that implements the logic of producing jobs based on the input data.
+
+See link:{githubUrl}/modules/core/src/main/java/org/apache/ignite/compute/ComputeTaskSplitAdapter.java[ComputeTaskSplitAdapter.java,window=_blank] and link:{githubUrl}/modules/core/src/main/java/org/apache/ignite/compute/ComputeTaskAdapter.java[ComputeTaskAdapter.java,window=_blank] for details.
+
+== Distributed Task Session
+
+NOTE: Not available in .NET/C#/C++.
+
+For each task, Ignite creates a distributed session that holds information about the task and is visible to the task itself and to all jobs spawned by it. You can use this session to share attributes between jobs. Attributes can be assigned before or during job execution and become visible to other jobs in the same order in which they were set.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tags=session;!exclude,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
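+
+For illustration, a job might use the session to coordinate with other jobs of the same task. A minimal sketch (the attribute names are illustrative; note that the owning task class must be annotated with `@ComputeTaskSessionFullSupport` for session attributes to work):
+
+[source, java]
+----
+public class PhaseAwareJob extends ComputeJobAdapter {
+    // Ignite injects the distributed session of the task that owns this job.
+    @TaskSessionResource
+    private ComputeTaskSession ses;
+
+    @Override public Object execute() {
+        try {
+            // Publish an attribute visible to all jobs of this task.
+            ses.setAttribute("phase", "READY");
+
+            // Block (up to 10 seconds) until another job sets the "go" attribute.
+            return ses.waitForAttribute("go", 10_000);
+        }
+        catch (InterruptedException e) {
+            throw new IgniteException(e);
+        }
+    }
+}
+----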
+
+
+////////////////////////////////////////////////////////////////////////////////
+== Streaming Jobs Continuously
+
+TODO
+////////////////////////////////////////////////////////////////////////////////
+
+== Compute Task Example
+The following example demonstrates a simple character counting application that splits a given string into words and calculates the length of each word in an individual job. The jobs are distributed to all cluster nodes.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/ComputeTaskExample.java[tag=compute-task-example,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/MapReduceApi.cs[tag=computeTaskExample,indent=0]
+----
+tab:C++[unsupported]
+--
+
diff --git a/docs/_docs/distributed-locks.adoc b/docs/_docs/distributed-locks.adoc
new file mode 100644
index 0000000..48c4f4a
--- /dev/null
+++ b/docs/_docs/distributed-locks.adoc
@@ -0,0 +1,59 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Distributed Locks
+
+== Overview
+
+Ignite transactions acquire distributed locks implicitly. However, there are certain use cases when you might need to
+acquire the locks explicitly. The `lock()` method of the `IgniteCache` API returns an instance of `java.util.concurrent.locks.Lock`
+that lets you define explicit distributed locks for any given key. Locks can also be acquired on a collection of objects using the
+`IgniteCache.lockAll()` method.
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+IgniteCache<String, Integer> cache = ignite.cache("myCache");
+
+// Create a lock for the given key
+Lock lock = cache.lock("keyLock");
+try {
+    // Acquire the lock
+    lock.lock();
+
+    cache.put("Hello", 11);
+    cache.put("World", 22);
+}
+finally {
+    // Release the lock
+    lock.unlock();
+}
+----
+--
+
+[NOTE]
+====
+[discrete]
+=== Atomicity Mode
+In Ignite, locks are supported only for the `TRANSACTIONAL` atomicity mode, which can be set via the
+`CacheConfiguration.atomicityMode` parameter.
+====
+
+== Locks and Transactions
+
+Explicit locks are not transactional and cannot be used from within transactions (an exception will be thrown).
+If you do need explicit locking within transactions, use the `TransactionConcurrency.PESSIMISTIC` concurrency
+control, which acquires explicit locks for the relevant cluster data requests.
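+
+For illustration, a pessimistic transaction that locks a key on access might look like this minimal sketch:
+
+[source, java]
+----
+IgniteCache<String, Integer> cache = ignite.cache("myCache");
+
+try (Transaction tx = ignite.transactions().txStart(
+    TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
+
+    // In PESSIMISTIC mode, the lock on "Hello" is acquired here
+    // and held until the transaction commits or rolls back.
+    Integer val = cache.get("Hello");
+
+    cache.put("Hello", val == null ? 1 : val + 1);
+
+    tx.commit();
+}
+----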
diff --git a/docs/_docs/events/events.adoc b/docs/_docs/events/events.adoc
new file mode 100644
index 0000000..a6c7cc9
--- /dev/null
+++ b/docs/_docs/events/events.adoc
@@ -0,0 +1,342 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Events
+:javaFile: {javaCodeDir}/Events.java
+
+:events_url: {javadoc_base_url}/org/apache/ignite/events
+
+
+
+This page describes different event types, when and where they are generated, and how you can use them.
+
+You can always find the most complete and up-to-date list of events in the javadoc:org.apache.ignite.events.EventType[] javadoc.
+
+== General Information
+
+All events implement the `Event` interface.
+You may want to cast each event to its specific class to get extended information about the action that triggered the event.
+For example, the 'cache update' action triggers an event that is an instance of the `CacheEvent` class, which contains information about the modified data, the ID of the subject that triggered the event, etc.
+
+
+All events contain information about the node where the event was generated.
+For example, when you execute an `IgniteClosure` job, the `EVT_JOB_STARTED` and `EVT_JOB_FINISHED` events contain the information about the node where the closure was executed.
+
+[source, java]
+----
+include::{javaFile}[tags=get-node,indent=0]
+----
+////
+When an event is generated by another event, the second event will contain the information about the first event.
+For example, the cache rebalancing event can be triggered by
+The rebalancing event will contain the information about the cause
+////
+
+
+[CAUTION]
+====
+[discrete]
+=== Event Ordering
+
+The order of events in the event listener is not guaranteed to be the same as the order in which they were generated.
+
+====
+
+=== SubjectID
+
+Some events contain the `subjectID` field, which represents the ID of the entity that initiated the action:
+
+* When the action is initiated by a server or client node, the `subjectID` is the ID of that node.
+* When the action is performed by a thin client or a JDBC/ODBC/REST client, the `subjectID` is generated when the client connects to the cluster and remains the same for as long as the client stays connected.
+
+Check the specific event class to learn if the `subjectID` field is present.
+
+
+== Cluster State Changed Events
+
+Cluster state changed events are instances of the javadoc:org.apache.ignite.events.ClusterStateChangeEvent[] class.
+
+Cluster state changed events are generated when the cluster state changes, either on auto-activation or when a user changes the state manually.
+The events contain the new and old states, and the list of baseline nodes after the change.
+
+[cols="2,5,3",opts="header"]
+|===
+|Event Type | Event Description | Where Event Is Fired
+|EVT_CLUSTER_STATE_CHANGED | The cluster state changed.  | All cluster nodes.
+|===
+
+== Cache Lifecycle Events
+
+Cache Lifecycle events are instances of the link:{events_url}/CacheEvent.html[CacheEvent, window=_blank] class.
+Each cache lifecycle event is associated with a specific cache and has a field that contains the name of the cache.
+
+[cols="2,5,3",opts="header"]
+|===
+| Event Type | Event Description | Where Event Is Fired
+| EVT_CACHE_STARTED
+a| A cache is started on a specific node.
+Each server node holds an internal instance of a cache.
+This event is fired when the instance is created, which includes the following actions:
+
+* A cluster with existing caches is activated. The event is generated for every cache on all server nodes where the cache is configured.
+* A server node joins the cluster with existing caches (the caches are started on that node).
+* When you create a new cache dynamically by calling `Ignite.getOrCreateCache(...)` or similar methods. The event is fired on all nodes that host the cache.
+* When you obtain an instance of a cache on a client node.
+* When you create a cache via the link:sql-reference/ddl#create-table[CREATE TABLE] command.
+
+
+| All nodes where the cache is started.
+| EVT_CACHE_STOPPED a| This event happens when a cache is stopped, which includes the following actions:
+
+* The cluster is deactivated. All caches on all server nodes are stopped.
+* `IgniteCache.close()` is called. The event is triggered on the node where the method is called.
+* A SQL table is dropped.
+* If you call `cache = Ignite.getOrCreateCache(...)` and then call `Ignite.close()`, the `cache` is also closed on that node.
+
+|All nodes where the cache is stopped.
+
+| EVT_CACHE_NODES_LEFT | All nodes that host a specific cache have left the cluster. This can happen when a cache is deployed on a subset of server nodes or when all server nodes leave the cluster and only client nodes remain. | All remaining nodes.
+|===
+
+
+== Cache Events
+Cache events are instances of the link:{events_url}/CacheEvent.html[CacheEvent] class and
+represent the operations on cache objects, such as 'get', 'put', 'remove', 'lock', etc.
+
+Each event contains the information about the cache, the key that is accessed by the operation, the value before and after the operation (if applicable), etc.
+
+Cache events are also generated when you use DML commands.
+
+
+[cols="2,5,3",opts="header"]
+|===
+| Event Type | Event Description | Where Event Is Fired
+| EVT_CACHE_OBJECT_PUT | An object is put to a cache. This event is fired for every invocation of `IgniteCache.put()`. The bulk operations, such as `putAll(...)`, produce multiple events of this type.
+
+| The primary and backup nodes for the entry.
+
+| EVT_CACHE_OBJECT_READ
+| An object is read from a cache.
+This event is not emitted when you use link:key-value-api/using-scan-queries[scan queries] (use <<Cache Query Events>> to monitor scan queries).
+| The node where the read operation is executed.
+It can be either the primary or backup node (the latter case is only possible when link:configuring-caches/configuration-overview#readfrombackup[reading from backups is enabled]).
+In transactional caches, the event can be generated on both the primary and backup nodes depending on the concurrency and isolation levels.
+
+| EVT_CACHE_OBJECT_REMOVED | An object is removed from a cache. |The primary and backup nodes for the entry.
+
+| EVT_CACHE_OBJECT_LOCKED
+a| A lock is acquired on a specific key.
+Locks can be acquired only on keys in transactional caches.
+User actions that acquire a lock include the following cases:
+
+* The user explicitly acquires a lock by calling `IgniteCache.lock()` or `IgniteCache.lockAll()`.
+* A lock is acquired for every atomic (non-transactional) data modifying operation (put, update, remove).
+In this case, the event is triggered on both primary and backup nodes for the key.
+* Locks are acquired on the keys accessed within a transaction (depending on the link:key-value-api/transactions#concurrency-modes-and-isolation-levels[concurrency and isolation levels]).
+
+| The primary or/and backup nodes for the entry depending on the link:key-value-api/transactions#concurrency-modes-and-isolation-levels[concurrency and isolation levels].
+
+| EVT_CACHE_OBJECT_UNLOCKED | A lock on a key is released. | The primary node for the entry.
+
+| EVT_CACHE_OBJECT_EXPIRED | The event is fired when a cache entry expires. This happens only if an link:configuring-caches/expiry-policies[expiry policy] is configured.  | The primary and backup nodes for the entry.
+| EVT_CACHE_ENTRY_CREATED | This event is triggered when Ignite creates an internal entry for working with a specific object from a cache. We don't recommend using this event. If you want to monitor cache put operations, the `EVT_CACHE_OBJECT_PUT` event should be enough for most cases. | The primary and backup nodes for the entry.
+
+| EVT_CACHE_ENTRY_DESTROYED
+|  This event is triggered when Ignite destroys an internal entry that was created for working with a specific object from a cache.
+We don't recommend using it.
+Destroying the internal entry does not remove any data from the cache.
+If you want to monitor cache remove operations, use the `EVT_CACHE_OBJECT_REMOVED` event.
+| The primary and backup nodes for the entry.
+|===
+
+== Cache Query Events
+
+There are two types of events that are related to cache queries:
+
+* Cache query object read events, which are instances of the link:{events_url}/CacheQueryReadEvent.html[CacheQueryReadEvent, window=_blank] class.
+* Cache query executed events, which are instances of the link:{events_url}/CacheQueryExecutedEvent.html[CacheQueryExecutedEvent, window=_blank] class.
+
+
+[cols="2,5,3",opts="header"]
+|===
+| Event Type | Event Description | Where Event Is Fired
+| EVT_CACHE_QUERY_OBJECT_READ | An object is read as part of a query execution. This event is generated for every object that matches the link:key-value-api/using-scan-queries#executing-scan-queries[query filter]. | The primary node of the object that is read.
+| EVT_CACHE_QUERY_EXECUTED  |  This event is generated when a query is executed. | All server nodes that host the cache.
+|===
+
+////
+== Checkpointing Events
+
+Related to checkpointingspi in map-reduce
+
+Checkpointing events are instances of the link:{events_url}/CheckpointEvent.html[CheckpointEvent] class.
+
+[cols="2,5,3",opts="header"]
+|===
+| Event Type | Event Description | Where Event Is Fired
+| EVT_CHECKPOINT_LOADED |  | The node
+| EVT_CHECKPOINT_REMOVED | |
+| EVT_CHECKPOINT_SAVED | |
+|===
+////
+
+== Class and Task Deployment Events
+
+Deployment events are instances of the link:{events_url}/DeploymentEvent.html[DeploymentEvent] class.
+
+[cols="2,5,3",opts="header"]
+|===
+| Event Type | Event Description | Where Event Is Fired
+| EVT_CLASS_DEPLOYED | A class (non-task) is deployed on a specific node. | The node where the class is deployed.
+| EVT_CLASS_UNDEPLOYED | A class is undeployed. | The node where the class is undeployed.
+| EVT_CLASS_DEPLOY_FAILED | Class deployment failed. |The node where the class is to be deployed.
+| EVT_TASK_DEPLOYED | A task class is deployed on a specific node. | The node where the class is deployed.
+| EVT_TASK_UNDEPLOYED | A task class is undeployed on a specific node.|The node where the class is undeployed.
+| EVT_TASK_DEPLOY_FAILED | Class deployment failed.|The node where the class is to be deployed.
+|===
+
+== Discovery Events
+
+Discovery events occur when nodes (both servers and clients) join or leave the cluster, including cases when nodes leave due to a failure.
+
+Discovery events are instances of the link:{events_url}/DiscoveryEvent.html[DiscoveryEvent] class.
+
+[cols="2,5,3",opts="header"]
+|===
+| Event Type | Event Description | Where Event Is Fired
+| EVT_NODE_JOINED | A node joins the cluster. | All nodes in the cluster (other than the one that joined).
+| EVT_NODE_LEFT | A node leaves the cluster. |All remaining nodes in the cluster.
+| EVT_NODE_FAILED | The cluster detects that a node left the cluster in a non-graceful way. | All remaining nodes in the cluster.
+| EVT_NODE_SEGMENTED | A node determines that it has been segmented from the rest of the cluster. | The node that is segmented.
+| EVT_CLIENT_NODE_DISCONNECTED | A client node loses connection to the cluster.  | The client node that disconnected from the cluster.
+| EVT_CLIENT_NODE_RECONNECTED | A client node reconnects to the cluster.| The client node that reconnected to the cluster.
+|===
+
+== Task Execution Events
+
+Task execution events are associated with different stages of link:distributed-computing/map-reduce[task execution].
+They are also generated when you execute link:distributed-computing/distributed-computing[simple closures] because internally a closure is treated as a task that produces a single job.
+
+////
+This is what happens when you execute a task through the compute interface:
+
+. Task is deployed on all nodes (associated with the compute interface)
+. Task is started (the map stage)
+. Jobs are executed on the remote nodes
+. The reduce stage
+////
+
+Task Execution events are instances of the link:{events_url}/TaskEvent.html[TaskEvent] class.
+
+[cols="2,5,3",opts="header"]
+|===
+| Event Type | Event Description | Where Event Is Fired
+| EVT_TASK_STARTED | A task is started, i.e. `IgniteCompute.execute()` or another compute method is called. | The node that initiated the task.
+| EVT_TASK_REDUCED | This event represents the 'reduce' stage of the task execution flow.  | The node where the task was started.
+| EVT_TASK_FINISHED | The execution of the task finishes. | The node where the task was started.
+| EVT_TASK_FAILED | The task failed. | The node where the task was started.
+| EVT_TASK_TIMEDOUT | The execution of the task timed out. This can happen when you use `Ignite.compute().withTimeout(...)` to execute tasks. When a task times out, it cancels all jobs that are being executed. It also generates the `EVT_TASK_FAILED` event.| The node where the task was started.
+| EVT_TASK_SESSION_ATTR_SET | A job sets an attribute in the link:distributed-computing/map-reduce#distributed-task-session[session]. | The node where the job is executed.
+|===
+
+{sp}+
+
+Job Execution events are instances of the link:{events_url}/JobEvent.html[JobEvent] class.
+The job execution events are generated at different stages of job execution and are associated with particular instances of the job.
+The event contains information about the task that produced the job (task name, task class, etc.).
+
+[cols="2,5,3",opts="header"]
+|===
+| Event Type | Event Description | Where Event Is Fired
+
+| EVT_JOB_MAPPED | A job is mapped to a specific node. Mapping happens on the node where the task is started. This event is generated for every job produced in the "map" stage. | The node that started the task.
+
+| EVT_JOB_QUEUED | The job is added to the queue on the node to which it was mapped. | The node where the job is scheduled for execution.
+
+| EVT_JOB_STARTED | Execution of the job started.| The node where the job is executed.
+
+| EVT_JOB_FINISHED | Execution of the job finished. This also includes cases when the job is cancelled.| The node where the job is executed.
+
+| EVT_JOB_RESULTED | The job returned a result to the node from which it was sent. | The node where the task was started.
+
+| EVT_JOB_FAILED | Execution of a job fails. If the job failover strategy is configured (default), this event is accompanied by the `EVT_JOB_FAILED_OVER` event. | The node where the job is executed.
+
+| EVT_JOB_FAILED_OVER | The job was failed over to another node. | The node that started the task.
+
+| EVT_JOB_TIMEDOUT | The job timed out. |
+
+| EVT_JOB_REJECTED | The job is rejected. The job can be rejected if a link:distributed-computing/job-scheduling[collision spi] is configured. | The node where the job is rejected.
+
+| EVT_JOB_CANCELLED | The job was cancelled. | The node where the job is being executed.
+|===
+
+
+== Cache Rebalancing Events
+
+Cache Rebalancing events (all except for `EVT_CACHE_REBALANCE_OBJECT_LOADED` and `EVT_CACHE_REBALANCE_OBJECT_UNLOADED`) are instances of the link:{events_url}/CacheRebalancingEvent.html[CacheRebalancingEvent] class.
+
+Rebalancing occurs on a per-cache basis; therefore, each rebalancing event corresponds to a specific cache.
+The event contains the name of the cache.
+
+The process of moving a single cache partition from Node A to Node B consists of the following steps:
+
+. Node A supplies a partition (REBALANCE_PART_SUPPLIED). The objects from the partition start to move to node B.
+. Node B receives the partition data (REBALANCE_PART_LOADED).
+. Node A removes the partition from its storage (REBALANCE_PART_UNLOADED).
+
+[cols="2,5,3",opts="header"]
+|===
+| Event Type | Event Description | Where Event Is Fired
+| EVT_CACHE_REBALANCE_STARTED | The rebalancing of a cache starts. | All nodes that host the cache.
+| EVT_CACHE_REBALANCE_STOPPED | The rebalancing of a cache stops. | All nodes that host the cache.
+| EVT_CACHE_REBALANCE_PART_LOADED | A cache's partition is loaded on the new node. This event is fired for every partition that participates in the cache rebalancing.| The node where the partition is loaded.
+| EVT_CACHE_REBALANCE_PART_UNLOADED |A cache's partition is removed from the node after it has been loaded to its new destination. | The node where the partition was held before the rebalancing process started.
+| EVT_CACHE_REBALANCE_OBJECT_LOADED | An object is moved to a new node as part of cache rebalancing. | The node where the object is loaded.
+| EVT_CACHE_REBALANCE_OBJECT_UNLOADED | An object is removed from a node after it has been moved to a new node.| The node from which the object is removed.
+| EVT_CACHE_REBALANCE_PART_DATA_LOST | A partition that is to be rebalanced is lost, for example, due to a node failure. |
+| EVT_CACHE_REBALANCE_PART_SUPPLIED | A node supplies a cache partition as part of the rebalancing process. | The node that owns the partition.
+//| EVT_CACHE_REBALANCE_PART_MISSED | *TODO*|
+|===
+
+== Transaction Events
+
+Transaction events are instances of the link:{events_url}/TransactionStateChangedEvent.html[TransactionStateChangedEvent] class.
+They allow you to get notification about different stages of transaction execution. Each event contains the `Transaction` object this event is associated with.
+
+
+[cols="2,5,3",opts="header"]
+|===
+| Event Type | Event Description | Where Event Is Fired
+| EVT_TX_STARTED | A transaction is started. Note that in transactional caches, each atomic operation executed outside a transaction is considered a transaction with a single operation. | The node where the transaction was started.
+| EVT_TX_COMMITTED | A transaction is committed. |  The node where the transaction was started.
+| EVT_TX_ROLLED_BACK | A transaction is rolled back. |The node where the transaction was executed.
+| EVT_TX_SUSPENDED |  A transaction is suspended.|The node where the transaction was started.
+| EVT_TX_RESUMED | A transaction is resumed. |The node where the transaction was started.
+|===
+
+
+////
+== Management Task Events
+
+Management task events represent the tasks that are executed by Visor or Web Console.
+This event type can be used to monitor a link:security/cluster-monitor-audit[Web Console activity].
+
+[cols="2,5,3",opts="header"]
+|===
+| Event Type | Event Description | Where Event Is Fired
+| EVT_MANAGEMENT_TASK_STARTED | A task from Visor or Web Console starts. | The node where the task is executed.
+|===
+
+
+////
diff --git a/docs/_docs/events/listening-to-events.adoc b/docs/_docs/events/listening-to-events.adoc
new file mode 100644
index 0000000..ea61c35
--- /dev/null
+++ b/docs/_docs/events/listening-to-events.adoc
@@ -0,0 +1,268 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Working with Events
+:javaFile: {javaCodeDir}/Events.java
+:xmlFile: code-snippets/xml/events.xml
+
+== Overview
+Ignite can generate events for a variety of operations happening in the cluster and notify your application about those operations. There are many types of events, including cache events, node discovery events, and distributed task execution events.
+
+The list of events is available in the link:events/events[Events] section.
+
+== Enabling Events
+By default, events are disabled, and you have to enable each event type explicitly if you want to use it in your application.
+To enable specific event types, list them in the `includeEventTypes` property of `IgniteConfiguration` as shown below:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tags=**;!discovery, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=enabling-events,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/WorkingWithEvents.cs[tag=enablingEvents,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Getting the Events Interface
+
+The events functionality is available through the events interface, which provides methods for listening to cluster events. The events interface can be obtained from an instance of `Ignite` as follows:
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=get-events,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/WorkingWithEvents.cs[tag=gettingEventsInterface1,indent=0]
+----
+tab:C++[unsupported]
+--
+
+The events interface can be associated with a link:distributed-computing/cluster-groups[set of nodes]. This means that you can access events that happen on a given set of nodes. In the following example, the events interface is obtained for the set of nodes that host the data for the Person cache.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=get-events-for-cache,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/WorkingWithEvents.cs[tag=gettingEventsInterface2,indent=0]
+----
+tab:C++[unsupported]
+--
+
+
+== Listening to Events
+
+You can listen to either local or remote events. Local events are events that are generated on the node where the listener is registered. Remote events are events that happen on other nodes.
+
+Note that some events may be fired on multiple nodes even if the corresponding real-world event happens only once. For example, when a node leaves the cluster, the `EVT_NODE_LEFT` event is generated on every remaining node.
+
+Another example is when you put an object into a cache. In this case, the `EVT_CACHE_OBJECT_PUT` event occurs on the node that hosts the link:data-modeling/data-partitioning#backup-partitions[primary partition] into which the object is actually written, which may be different from the node where the `put(...)` method is called. In addition, the event is fired on all nodes that hold the link:data-modeling/data-partitioning#backup-partitions[backup partitions] for the cache if they are configured.
+
+The events interface provides methods for listening to local events only, and for listening to both local and remote events.
+
+=== Listening to Local Events
+
+To listen to local events, use the `localListen(listener, eventTypes...)` method, as shown below. The method accepts an event listener that is called every time an event of the given type occurs on the local node.
+
+To unregister the local listener, return `false` from it.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=local,indent=0]
+----
+
+The event listener is an object of the `IgnitePredicate<T>` class with a type argument that matches the type of events the listener is going to process.
+For example, cache events (`EVT_CACHE_OBJECT_PUT`, `EVT_CACHE_OBJECT_READ`, etc.) correspond to the link:{javadoc_base_url}/org/apache/ignite/events/CacheEvent.html[CacheEvent] class, discovery events (`EVT_NODE_LEFT`, `EVT_NODE_JOINED`, etc.) correspond to
+the link:{javadoc_base_url}/org/apache/ignite/events/DiscoveryEvent.html[DiscoveryEvent,window=_blank] class, and so on.
+If you want to listen to events of different types, you can use the generic link:{javadoc_base_url}/org/apache/ignite/events/Event.html[Event,window=_blank] interface:
+
+[source, java]
+-------------------------------------------------------------------------------
+IgnitePredicate<Event> localListener = evt -> {
+    // process the event
+    return true;
+};
+-------------------------------------------------------------------------------
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/WorkingWithEvents.cs[tag=localListen,indent=0]
+----
+tab:C++[unsupported]
+--
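+
+For illustration, a local listener for cache put events might look like this minimal sketch:
+
+[source, java]
+----
+IgnitePredicate<CacheEvent> locLsnr = evt -> {
+    System.out.println("Local put: key=" + evt.key() + ", newValue=" + evt.newValue());
+
+    return true; // return false to unregister the listener
+};
+
+// The listener is called only for events generated on the local node.
+ignite.events().localListen(locLsnr, EventType.EVT_CACHE_OBJECT_PUT);
+----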
+
+=== Listening to Remote Events
+
+The `IgniteEvents.remoteListen(localListener, filter, types)` method can be used to register a listener that listens for both remote and local events.
+It accepts a local listener, a filter, and a list of event types you want to listen to.
+
+The filter is deployed to all the nodes associated with the events interface, including the local node. The events that pass the filter are sent to the local listener.
+
+The method returns a unique identifier that can be used to unregister the listener and filters. To do this, call `IgniteEvents.stopRemoteListen(uuid)`. Another way to unregister the listener is to return `false` in the `apply()` method.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=remote,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
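+
+For illustration, a remote listener setup might look like this minimal sketch (the cache name is illustrative):
+
+[source, java]
+----
+// The filter is deployed to all nodes of the cluster group and runs there;
+// only the events that pass it are sent to the local listener.
+IgnitePredicate<CacheEvent> rmtFilter = evt -> "myCache".equals(evt.cacheName());
+
+IgniteBiPredicate<UUID, CacheEvent> locLsnr = (nodeId, evt) -> {
+    System.out.println("Put on node " + nodeId + ": key=" + evt.key());
+
+    return true; // keep listening
+};
+
+UUID lsnrId = ignite.events().remoteListen(locLsnr, rmtFilter, EventType.EVT_CACHE_OBJECT_PUT);
+
+// Unregister the listener and undeploy the remote filters later.
+ignite.events().stopRemoteListen(lsnrId);
+----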
+
+////////////////////////////////////////////////////////////////////////////////
+TODO
+The `IgniteEvents.remoteListen(...)` has an asynchronous counterpart that will register the given listener asynchronously.
+
+++++
+<code-tabs>
+<code-tab data-tab="Java">
+++++
+[source,java]
+----
+
+----
+++++
+</code-tab>
+<code-tab data-tab="C#/.NET">
+++++
+[source,csharp]
+----
+
+----
+++++
+</code-tab>
+</code-tabs>
+++++
+
+////////////////////////////////////////////////////////////////////////////////
+
+=== Batching Events
+
+Each activity in a cache can result in an event notification being generated and sent. For systems with high cache activity, getting notified of every event could be network-intensive, possibly degrading the performance of cache operations.
+
+Event notifications can be grouped together and sent in batches or at timed intervals to mitigate the impact on performance. Here is an example of how this can be done:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=batching,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+== Storing and Querying Events
+
+You can configure an event storage that will keep events on the nodes where they occur. You can then query events in your application.
+
+The event storage can be configured to keep events for a specific period, keep only the most recent events, or keep the events that satisfy a specific filter. See the link:{javadoc_base_url}/org/apache/ignite/spi/eventstorage/memory/MemoryEventStorageSpi.html[MemoryEventStorageSpi,window=_blank] javadoc for details.
+
+Below is an example of event storage configuration:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+    <property name="eventStorageSpi" >
+        <bean class="org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi">
+            <property name="expireAgeMs" value="600000"/>
+        </bean>
+    </property>
+
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=event-storage,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/WorkingWithEvents.cs[tag=storingEvents,indent=0]
+----
+tab:C++[unsupported]
+--
+
+=== Querying Local Events
+
+The following example shows how you can query local `EVT_CACHE_OBJECT_PUT` events stored in the event storage.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=query-local-events,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/WorkingWithEvents.cs[tag=queryLocal,indent=0]
+----
+tab:C++[unsupported]
+--
+
+
+=== Querying Remote Events
+Here is an example of querying remote events:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=query-remote-events,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/WorkingWithEvents.cs[tag=queryRemote,indent=0]
+----
+tab:C++[unsupported]
+--
+
diff --git a/docs/_docs/extensions-and-integrations/cassandra/configuration.adoc b/docs/_docs/extensions-and-integrations/cassandra/configuration.adoc
new file mode 100644
index 0000000..4d85ccf
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/cassandra/configuration.adoc
@@ -0,0 +1,588 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite Cassandra Integration Configuration
+
+== Overview
+
+To set up Cassandra as a persistent store, you need to set `CacheStoreFactory` for your Ignite caches to
+`org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory`.
+
+This can be done using a Spring context configuration like this:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+    <property name="cacheConfiguration">
+        <list>
+            ...
+            <!-- Configuring persistence for "cache1" cache -->
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <property name="name" value="cache1"/>
+                <!-- Tune on Read-Through and Write-Through mode -->
+                <property name="readThrough" value="true"/>
+                <property name="writeThrough" value="true"/>
+                <!-- Specifying CacheStoreFactory -->
+                <property name="cacheStoreFactory">
+                    <bean class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
+                        <!-- Datasource configuration bean which is responsible for Cassandra connection details -->
+                        <property name="dataSourceBean" value="cassandraDataSource"/>
+                        <!-- Persistent settings bean which is responsible for the details of how objects will be persisted to Cassandra -->
+                        <property name="persistenceSettingsBean" value="cache1_persistence_settings"/>
+                    </bean>
+                </property>
+            </bean>
+            ...
+        </list>
+        ...
+    </property>
+</bean>
+----
+--
+
+There are two main properties which should be specified for `CassandraCacheStoreFactory`:
+
+* `dataSourceBean` - instance of the `org.apache.ignite.cache.store.cassandra.datasource.DataSource` class responsible for
+all the aspects of the Cassandra database connection (credentials, contact points, read/write consistency level, load balancing policy, etc.)
+* `persistenceSettingsBean` - instance of the `org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings`
+class responsible for all the aspects of how objects should be persisted into Cassandra (keyspace and its options, table
+and its options, partition and cluster key options, POJO fields mapping, secondary indexes, serializer for BLOB objects, etc.)
+
+The sections below describe these two beans and their configuration settings in detail.
+
+== DataSourceBean
+
+This bean stores all the details required for the Cassandra database connection and CRUD operations. The table below lists all the bean properties:
+
+[cols="20%,70%,10%",opts="header"]
+|===
+| Property | Description | Default
+| `user`| User name used to connect to Cassandra|
+| `password`| User password used to connect to Cassandra|
+| `credentials`| Credentials bean providing `username` and `password`|
+| `authProvider`| Use the specified `AuthProvider` when connecting to Cassandra. Use this property when a custom authentication scheme is in place.|
+| `port`| Port to use to connect to Cassandra (if it's not provided in connection point specification)|
+| `contactPoints`| Array of contact points (`hostname:[port]`) to use for the Cassandra connection|
+| `maxSchemaAgreementWaitSeconds`| Maximum time to wait for schema agreement before returning from a DDL query| `10` seconds
+| `protocolVersion`| Specifies what version of Cassandra driver protocol should be used (could be helpful for backward compatibility with old versions of Cassandra)| `3`
+| `compression`| Compression to use for the transport. Supported compressions: `snappy`, `lz4`|
+| `useSSL`| Enables the use of SSL| `false`
+| `sslOptions`| Enables the use of SSL using the provided options|`false`
+| `collectMetrix`| Enables metrics collection|`false`
+| `jmxReporting`| Enables JMX reporting of the metrics|`false`
+| `fetchSize`| Specifies the query fetch size. Fetch size controls how many result rows are retrieved simultaneously.|
+| `readConsistency`| Specifies consistency level for READ queries|
+| `writeConsistency`| Specifies consistency level for WRITE/DELETE/UPDATE queries|
+| `loadBalancingPolicy`| Specifies load balancing policy to use| `TokenAwarePolicy`
+| `reconnectionPolicy`| Specifies reconnection policy to use| `ExponentialReconnectionPolicy`
+| `retryPolicy`| Specifies retry policy to use| `DefaultRetryPolicy`
+| `addressTranslater`| Specifies address translater to use| `IdentityTranslater`
+| `speculativeExecutionPolicy`| Specifies speculative execution policy to use| `NoSpeculativeExecutionPolicy`
+| `poolingOptions`| Specifies connection pooling options|
+| `socketOptions`| Specifies low-level socket options for the connections kept to the Cassandra hosts|
+| `nettyOptions`| Hooks that allow clients to customize Cassandra driver's underlying Netty layer|
+|===
+
+
+== PersistenceSettingsBean
+
+This bean stores all the details (keyspace, table, partition options, POJO fields mapping, etc.) of how objects
+(keys and values) should be persisted into the Cassandra database.
+
+The constructor of `org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings` allows you to create such
+a bean from a string containing an XML configuration document of a specific structure (see below) or from a resource pointing to an XML document.
+
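+For example, here is a minimal sketch of creating the bean programmatically from both sources (the descriptor format itself
+is described right below); the descriptor content and the classpath resource name are hypothetical:
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+import org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings;
+import org.springframework.core.io.ClassPathResource;
+
+public class PersistenceSettingsExample {
+    public static void main(String[] args) {
+        // From an XML string containing a persistence descriptor.
+        KeyValuePersistenceSettings fromString = new KeyValuePersistenceSettings(
+            "<persistence keyspace=\"test1\" table=\"blob_test1\">" +
+            "    <keyPersistence class=\"java.lang.Integer\" strategy=\"PRIMITIVE\"/>" +
+            "    <valuePersistence strategy=\"BLOB\"/>" +
+            "</persistence>");
+
+        // From a Spring resource pointing to an XML descriptor on the classpath.
+        KeyValuePersistenceSettings fromResource = new KeyValuePersistenceSettings(
+            new ClassPathResource("persistence-settings.xml"));
+    }
+}
+----
+--
+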
+Here is a generic example of an XML configuration document (*persistence descriptor*) which specifies how Ignite cache
+keys and values should be serialized/deserialized to/from Cassandra:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<!--
+Root container for persistence settings configuration.
+
+Note: required element
+
+Attributes:
+  1) keyspace [required] - specifies keyspace for Cassandra tables which should be used to store key/value pairs
+  2) table    [required] - specifies Cassandra table which should be used to store key/value pairs
+  3) ttl      [optional] - specifies expiration period for the table rows (in seconds)
+-->
+<persistence keyspace="my_keyspace" table="my_table" ttl="86400">
+    <!--
+    Specifies Cassandra keyspace options which should be used to create provided keyspace if it doesn't exist.
+
+    Note: optional element
+    -->
+    <keyspaceOptions>
+        REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor' : 3}
+        AND DURABLE_WRITES = true
+    </keyspaceOptions>
+
+    <!--
+    Specifies Cassandra table options which should be used to create provided table if it doesn't exist.
+
+    Note: optional element
+    -->
+    <tableOptions>
+        comment = 'A most excellent and useful table'
+        AND read_repair_chance = 0.2
+    </tableOptions>
+
+    <!--
+    Specifies persistent settings for Ignite cache keys.
+
+    Note: required element
+
+    Attributes:
+      1) class      [required] - java class name for Ignite cache key
+      2) strategy   [required] - one of three possible persistent strategies:
+            a) PRIMITIVE - stores key value as is, by mapping it to Cassandra table column with corresponding type.
+                Should be used only for simple java types (int, long, String, double, Date) which could be mapped
+                to corresponding Cassandra types.
+            b) BLOB - stores key value as BLOB, by mapping it to Cassandra table column with blob type.
+                Could be used for any java object. Conversion of java object to BLOB is handled by "serializer"
+                which could be specified in serializer attribute (see below).
+            c) POJO - stores each field of an object as a column having corresponding type in Cassandra table.
+                Provides ability to utilize Cassandra secondary indexes for object fields.
+      3) serializer [optional] - specifies serializer class for BLOB strategy. Shouldn't be used for PRIMITIVE and
+        POJO strategies. Available implementations:
+            a) org.apache.ignite.cache.store.cassandra.serializer.JavaSerializer - uses standard Java
+                serialization framework
+            b) org.apache.ignite.cache.store.cassandra.serializer.KryoSerializer - uses Kryo
+                serialization framework
+      4) column     [optional] - specifies column name for PRIMITIVE and BLOB strategies where to store key value.
+        If not specified column having 'key' name will be used. Shouldn't be used for POJO strategy.
+    -->
+    <keyPersistence class="org.mycompany.MyKeyClass" strategy="..." serializer="..." column="...">
+        <!--
+        Specifies partition key fields if POJO strategy used.
+
+        Note: optional element, only required for POJO strategy in case you want to manually specify
+            POJO fields to Cassandra columns mapping, instead of relying on dynamic discovering of
+            POJO fields and mapping them to the same columns of Cassandra table.
+        -->
+        <partitionKey>
+            <!--
+             Specifies mapping from POJO field to Cassandra table column.
+
+             Note: required element
+
+             Attributes:
+               1) name   [required] - POJO field name
+               2) column [optional] - Cassandra table column name. If not specified lowercase
+                  POJO field name will be used.
+            -->
+            <field name="companyCode" column="company" />
+            ...
+            ...
+        </partitionKey>
+
+        <!--
+        Specifies cluster key fields if POJO strategy used.
+
+        Note: optional element, only required for POJO strategy in case you want to manually specify
+            POJO fields to Cassandra columns mapping, instead of relying on dynamic discovering of
+            POJO fields and mapping them to the same columns of Cassandra table.
+        -->
+        <clusterKey>
+            <!--
+             Specifies mapping from POJO field to Cassandra table column.
+
+             Note: required element
+
+             Attributes:
+               1) name   [required] - POJO field name
+               2) column [optional] - Cassandra table column name. If not specified lowercase
+                  POJO field name will be used.
+               3) sort   [optional] - specifies sort order (asc or desc)
+            -->
+            <field name="personNumber" column="number" sort="desc"/>
+            ...
+            ...
+        </clusterKey>
+    </keyPersistence>
+
+    <!--
+    Specifies persistent settings for Ignite cache values.
+
+    Note: required element
+
+    Attributes:
+      1) class      [required] - java class name for Ignite cache value
+      2) strategy   [required] - one of three possible persistent strategies:
+            a) PRIMITIVE - stores key value as is, by mapping it to Cassandra table column with corresponding type.
+                Should be used only for simple java types (int, long, String, double, Date) which could be mapped
+                to corresponding Cassandra types.
+            b) BLOB - stores key value as BLOB, by mapping it to Cassandra table column with blob type.
+                Could be used for any java object. Conversion of java object to BLOB is handled by "serializer"
+                which could be specified in serializer attribute (see below).
+            c) POJO - stores each field of an object as a column having corresponding type in Cassandra table.
+                Provides ability to utilize Cassandra secondary indexes for object fields.
+      3) serializer [optional] - specifies serializer class for BLOB strategy. Shouldn't be used for PRIMITIVE and
+        POJO strategies. Available implementations:
+            a) org.apache.ignite.cache.store.cassandra.serializer.JavaSerializer - uses standard Java
+                serialization framework
+            b) org.apache.ignite.cache.store.cassandra.serializer.KryoSerializer - uses Kryo
+                serialization framework
+      4) column     [optional] - specifies column name for PRIMITIVE and BLOB strategies where to store value.
+        If not specified column having 'value' name will be used. Shouldn't be used for POJO strategy.
+    -->
+    <valuePersistence class="org.mycompany.MyValueClass" strategy="..." serializer="..." column="">
+        <!--
+         Specifies mapping from POJO field to Cassandra table column.
+
+         Note: required element
+
+         Attributes:
+           1) name         [required] - POJO field name
+           2) column       [optional] - Cassandra table column name. If not specified lowercase
+              POJO field name will be used.
+           3) static       [optional] - boolean flag which specifies that column is static within a given partition
+           4) index        [optional] - boolean flag specifying that secondary index should be created for the field
+           5) indexClass   [optional] - custom index java class name if you want to use custom index
+           6) indexOptions [optional] - custom index options
+        -->
+        <field name="firstName" column="first_name" static="..." index="..." indexClass="..." indexOptions="..."/>
+        ...
+        ...
+    </valuePersistence>
+</persistence>
+----
+--
+
+The following sections describe the persistence descriptor configuration and its elements in detail:
+
+=== persistence
+
+[CAUTION]
+====
+[discrete]
+=== ! Required Element
+Root container for persistence settings configuration.
+====
+
+[cols="20%,20%,60%",opts="header"]
+|===
+| Attribute | Required | Description
+| `keyspace`| yes | Keyspace for the Cassandra tables which should be used to store key/value pairs. If the keyspace doesn't
+exist, it will be created (if the specified Cassandra account has appropriate permissions).
+| `table`| no | Cassandra table which should be used to store key/value pairs. If the table doesn't exist, it will be created
+(if the specified Cassandra account has appropriate permissions). If the table name is not specified, the Ignite cache name is used as the table name.
+| `ttl`| no | Expiration period for the table rows (in seconds).
+|===
+
+The next sections describe the child elements that can be placed inside the persistence settings container.
+
+=== keyspaceOptions
+
+[NOTE]
+====
+[discrete]
+=== Optional Element
+Options to create Cassandra keyspace specified in the `keyspace` attribute of persistence settings container.
+====
+
+The keyspace will be created only if it doesn't exist and if the account used to connect to Cassandra has appropriate permissions.
+
+The text specified in this XML element is just the chunk of the
+http://docs.datastax.com/en/cql/3.0/cql/cql_reference/create_keyspace_r.html[CREATE KEYSPACE, window=_blank] Cassandra DDL statement that goes after the *WITH* keyword.
+
+=== tableOptions
+
+[NOTE]
+====
+[discrete]
+=== Optional Element
+Options to create Cassandra table specified in the table attribute of persistence settings container.
+====
+
+The table will be created only if it doesn't exist and if the account used to connect to Cassandra has appropriate permissions.
+
+The text specified in this XML element is just the chunk of the
+http://docs.datastax.com/en/cql/3.0/cql/cql_reference/create_table_r.html[CREATE TABLE, window=_blank] Cassandra DDL statement that goes after the *WITH* keyword.
+
+=== keyPersistence
+
+[CAUTION]
+====
+[discrete]
+=== ! Required Element
+Persistent settings for Ignite cache keys.
+====
+
+These settings specify how key objects from the Ignite cache should be stored/loaded to/from the Cassandra table:
+
+[cols="20%,20%,60%",opts="header"]
+|===
+| Attribute | Required | Description
+
+| `class`
+| yes
+| Java class name for Ignite cache keys.
+
+| `strategy`
+| yes
+| Specifies one of three possible persistence strategies (see below) which controls how the object is persisted/loaded to/from the Cassandra table.
+
+| `serializer`
+| no
+| Serializer class for BLOB strategy (see below for available implementations). Shouldn't be used for PRIMITIVE and POJO strategies.
+
+| `column`
+| no
+| Column name where the key is stored for the PRIMITIVE and BLOB strategies. If not specified, a column named `key` will be
+used. The attribute shouldn't be specified for the POJO strategy.
+|===
+
+Persistence strategies:
+
+[cols="1,3",opts="header"]
+|===
+| Name | Description
+
+| `PRIMITIVE`
+| Stores the object as is, by mapping it to a Cassandra table column with the corresponding type. Should be used only for simple java types
+(int, long, String, double, Date) which could be directly mapped to corresponding Cassandra types. Use this
+https://docs.datastax.com/en/developer/java-driver/4.4/manual/core/#cql-to-java-type-mapping[link, window=_blank] to figure out the Java to Cassandra types mapping.
+
+| `BLOB`
+| Stores the object as a BLOB, by mapping it to a Cassandra table column with the blob type. Could be used for any java object.
+Conversion of the java object to a BLOB is handled by the "serializer" which could be specified in the serializer attribute of the *keyPersistence* container.
+
+| `POJO`
+| Stores each field of an object as a column having the corresponding type in the Cassandra table. Provides the ability to utilize
+Cassandra secondary indexes for object fields. Could be used only for POJO objects following the Java Beans convention and
+having their fields of a https://docs.datastax.com/en/developer/java-driver/4.4/manual/core/#cql-to-java-type-mapping[simple java type which could be directly mapped to corresponding Cassandra types, window=_blank].
+|===
+
+Available serializer implementations:
+
+[cols="1,3",opts="header"]
+|===
+| Class | Description
+
+| `org.apache.ignite.cache.store.cassandra.serializer.JavaSerializer`
+| Uses standard Java serialization framework
+
+| `org.apache.ignite.cache.store.cassandra.serializer.KryoSerializer`
+| Uses Kryo serialization framework
+|===
+
+If you are using the `PRIMITIVE` or `BLOB` persistence strategy, you don't need to specify the internal elements of the `keyPersistence`
+tag, because the idea of these two strategies is that the whole object is persisted into one column of the Cassandra table
+(which could be specified by the `column` attribute).
+
+If you are using the `POJO` persistence strategy you have two options:
+
+* Leave the `keyPersistence` tag empty - in such a case, all the fields of the POJO class will be detected automatically using the following rules (see the annotated sketch after this list):
+ ** Only fields having simple java types which could be directly mapped to
+http://docs.datastax.com/en/developer/java-driver/1.0/java-driver/reference/javaClass2Cql3Datatypes_r.html[appropriate Cassandra types, window=_blank]
+will be detected.
+ ** The fields discovery mechanism takes the `@QuerySqlField` annotation into account:
+  *** If the `name` attribute is specified, it will be used as the column name for the Cassandra table. Otherwise, the field name in lowercase will be used as the column name.
+  *** If the `descending` attribute is specified for a field mapped to a *cluster key* column, it will be used to set the sort order for the column.
+ ** The fields discovery mechanism takes the `@AffinityKeyMapped` annotation into account. All the fields marked by this annotation
+will be treated as http://docs.datastax.com/en/cql/3.0/cql/ddl/ddl_compound_keys_c.html[partition key, window=_blank]
+fields (in the order they are declared in the class). All other fields will be treated as
+http://docs.datastax.com/en/cql/3.0/cql/ddl/ddl_compound_keys_c.html[cluster key, window=_blank] fields.
+ ** If there are no fields annotated with `@AffinityKeyMapped`, all the discovered fields will be treated as
+http://docs.datastax.com/en/cql/3.0/cql/ddl/ddl_compound_keys_c.html[partition key, window=_blank] fields.
+* Specify persistence details inside the `keyPersistence` tag - in such a case, you have to specify the *partition key* fields
+mapping to Cassandra table columns inside the `partitionKey` tag. This tag is used just as a container for mapping settings
+and doesn't have any attributes. Optionally (if you are going to use a cluster key) you can also specify the *cluster key*
+fields mapping to the appropriate Cassandra table columns inside the `clusterKey` tag. This tag is used just as a container for
+mapping settings and doesn't have any attributes.
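+
+To illustrate the automatic discovery rules above, here is a minimal sketch of a key class (the class and field names are
+hypothetical) relying on the `@AffinityKeyMapped` and `@QuerySqlField` annotations instead of an explicit mapping:
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+import org.apache.ignite.cache.affinity.AffinityKeyMapped;
+import org.apache.ignite.cache.query.annotations.QuerySqlField;
+
+public class PersonKey {
+    // Treated as partition key fields (in declaration order).
+    @AffinityKeyMapped
+    private String companyCode;
+
+    @AffinityKeyMapped
+    private String departmentCode;
+
+    // Treated as a cluster key field; mapped to the 'number' column
+    // with descending sort order.
+    @QuerySqlField(name = "number", descending = true)
+    private int personNumber;
+}
+----
+--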
+
+The next two sections provide a detailed specification of the `partition` and `cluster` key field mappings (which is relevant
+if you choose the second option from the list above).
+
+=== partitionKey
+
+[NOTE]
+====
+[discrete]
+=== Optional Element
+Container for `field` elements specifying Cassandra partition key.
+====
+
+Defines which Ignite cache KEY object fields should be used as *partition key* fields in the Cassandra
+table, and specifies the field mappings to table columns.
+
+Mappings are specified using the `<field>` tag, which has the following attributes:
+
+[cols="20%,20%,60%",opts="header"]
+|===
+| Attribute | Required | Description
+
+| `name`
+| yes
+| POJO object field name.
+
+| `column`
+| no
+| Cassandra table column name. If not specified, the lowercase POJO field name will be used.
+|===
+
+=== clusterKey
+
+[NOTE]
+====
+[discrete]
+=== Optional Element
+Container for `field` elements specifying Cassandra cluster key.
+====
+
+Defines which Ignite cache KEY object fields should be used as *cluster key* fields in the Cassandra
+table, and specifies the field mappings to table columns.
+
+Mappings are specified using the `<field>` tag, which has the following attributes:
+
+[cols="20%,20%,60%",opts="header"]
+|===
+| Attribute | Required | Description
+
+| `name`
+| yes
+| POJO object field name.
+
+| `column`
+| no
+| Cassandra table column name. If not specified, the lowercase POJO field name will be used.
+
+
+| `sort`
+| no
+| Specifies sort order for the field (`asc` or `desc`).
+|===
+
+=== valuePersistence
+
+[CAUTION]
+====
+[discrete]
+=== ! Required Element
+Persistent settings for Ignite cache values.
+====
+
+These settings specify how value objects from the Ignite cache should be stored/loaded to/from the Cassandra table. The settings attributes
+look very similar to the corresponding settings for Ignite cache keys:
+
+[cols="20%,20%,60%",opts="header"]
+|===
+| Attribute | Required | Description
+
+| `class`
+| yes
+| Java class name for Ignite cache values.
+
+| `strategy`
+| yes
+| Specifies one of three possible persistence strategies (see below) which controls how the object is persisted/loaded to/from the Cassandra table.
+
+| `serializer`
+| no
+| Serializer class for BLOB strategy (see below for available implementations). Shouldn't be used for `PRIMITIVE` and `POJO` strategies.
+
+| `column`
+| no
+| Column name where the value is stored for the `PRIMITIVE` and `BLOB` strategies. If not specified, a column named `value` will be used.
+The attribute shouldn't be specified for the POJO strategy.
+|===
+
+Persistence strategies (same as for key persistence settings):
+
+[cols="1,3",opts="header"]
+|===
+| Name | Description
+
+| `PRIMITIVE`
+| Stores the object as is, by mapping it to a Cassandra table column with the corresponding type. Should be used only for simple java types
+(int, long, String, double, Date) which could be directly mapped to corresponding Cassandra types. Use this
+http://docs.datastax.com/en/developer/java-driver/2.0/java-driver/reference/javaClass2Cql3Datatypes_r.html[link, window=_blank] to figure out the Java to Cassandra types mapping.
+
+| `BLOB`
+| Stores the object as a `BLOB`, by mapping it to a Cassandra table column with the blob type. Could be used for any java object. Conversion of
+the java object to a `BLOB` is handled by the "serializer" which could be specified in the serializer attribute of the `valuePersistence` container.
+
+| `POJO`
+| Stores each field of an object as a column having a corresponding type in the Cassandra table. Provides the ability to utilize Cassandra
+secondary indexes for object fields. Could be used only for POJO objects following the Java Beans convention and having their fields
+of a http://docs.datastax.com/en/developer/java-driver/1.0/java-driver/reference/javaClass2Cql3Datatypes_r.html[simple java type which could be directly mapped to corresponding Cassandra types, window=_blank].
+|===
+
+Available serializer implementations (same as for key persistence settings):
+
+[cols="1,3",opts="header"]
+|===
+| Class | Description
+
+| `org.apache.ignite.cache.store.cassandra.serializer.JavaSerializer`
+| Uses standard Java serialization framework.
+
+| `org.apache.ignite.cache.store.cassandra.serializer.KryoSerializer`
+| Uses Kryo serialization framework.
+|===
+
+If you are using the `PRIMITIVE` or `BLOB` persistence strategy, you don't need to specify the internal elements of the `valuePersistence`
+tag, because the idea of these two strategies is that the whole object is persisted into one column of the Cassandra table
+(which could be specified by the `column` attribute).
+
+If you are using the `POJO` persistence strategy you have two options (similar to the options for keys):
+
+* Leave the `valuePersistence` tag empty - in such a case, all the fields of the POJO class will be detected automatically using the following rules (see the annotated sketch after this list):
+ ** Only fields having simple java types which could be directly mapped to
+http://docs.datastax.com/en/developer/java-driver/1.0/java-driver/reference/javaClass2Cql3Datatypes_r.html[appropriate Cassandra types, window=_blank] will be detected.
+ ** The fields discovery mechanism takes the `@QuerySqlField` annotation into account:
+  *** If the `name` attribute is specified, it will be used as the column name for the Cassandra table. Otherwise, the field name in lowercase will be used as the column name.
+  *** If the `index` attribute is specified, a secondary index will be created for the corresponding column in the Cassandra table (if such a table doesn't exist).
+* Specify persistence details inside the `valuePersistence` tag - in such a case, you have to specify your POJO fields mapping to Cassandra table columns
+inside the `valuePersistence` tag.
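+
+To illustrate the automatic discovery rules above, here is a minimal sketch of a value class (the class and field names
+are hypothetical) that uses the `@QuerySqlField` annotation to rename a column and to request a secondary index:
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+import org.apache.ignite.cache.query.annotations.QuerySqlField;
+
+public class PersonValue {
+    // Mapped to the 'first_name' column instead of the default 'firstname'.
+    @QuerySqlField(name = "first_name")
+    private String firstName;
+
+    // A secondary index will be created for the 'married' column.
+    @QuerySqlField(index = true)
+    private boolean married;
+
+    // No annotation: mapped to the lowercase field name, 'age'.
+    private int age;
+}
+----
+--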
+
+If you selected the second option from the list above, you have to use the `<field>` tag to specify the POJO fields to Cassandra
+table columns mapping. The tag has the following attributes:
+
+[cols="20%,20%,60%",opts="header"]
+|===
+| Attribute | Required | Description
+
+| `name`
+| yes
+| POJO object field name.
+
+| `column`
+| no
+| Cassandra table column name. If not specified, the lowercase POJO field name will be used.
+
+| `static`
+| no
+| Boolean flag which specifies that the column is static within a given partition.
+
+| `index`
+| no
+| Boolean flag specifying that secondary index should be created for the field.
+
+| `indexClass`
+| no
+| Custom index java class name, in case you want to use custom index.
+
+| `indexOptions`
+| no
+| Custom index options.
+|===
diff --git a/docs/_docs/extensions-and-integrations/cassandra/ddl-generator.adoc b/docs/_docs/extensions-and-integrations/cassandra/ddl-generator.adoc
new file mode 100644
index 0000000..878c6f7
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/cassandra/ddl-generator.adoc
@@ -0,0 +1,99 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= DDL Generator
+
+== Overview
+
+One of the benefits of the Ignite Cassandra integration is that you don't need to care about Cassandra DDL syntax for
+table creation and Java to Cassandra type mapping details.
+
+You just need to create an XML configuration which specifies how Ignite cache keys and values should be serialized/deserialized to/from Cassandra.
+Based on these settings, all the absent Cassandra keyspaces and tables will be created automatically. The only requirement for all this "magic" to work:
+
+[CAUTION]
+====
+[discrete]
+=== ! Required Permissions
+In the Cassandra connection settings, you should specify a user that has enough permissions to create keyspaces/tables.
+====
+
+However, for some deployments this is not possible because of a very strict security policy. Thus, the only solution in
+such a situation is to provide DDL scripts for the DevOps team to create all the necessary Cassandra keyspaces/tables in advance.
+
+That's the exact use case for the DDL generator utility, which generates DDL from
+link:extensions-and-integrations/cassandra/configuration#persistencesettingsbean[PersistenceSettingsBean settings].
+
+Below is a sample invocation of the Cassandra DDL generator:
+
+[tabs]
+--
+tab:Shell[]
+[source, shell]
+----
+java org.apache.ignite.cache.store.cassandra.utils.DDLGenerator /opt/dev/ignite/persistence-settings-1.xml /opt/dev/ignite/persistence-settings-2.xml
+----
+--
+
+The generated DDL can look as follows:
+
+[tabs]
+--
+tab:Generated Cassandra DDL[]
+[source, sql]
+----
+-------------------------------------------------------------
+DDL for keyspace/table from file: /opt/dev/ignite/persistence-settings-1.xml
+-------------------------------------------------------------
+
+create keyspace if not exists test1
+with replication = {'class' : 'SimpleStrategy', 'replication_factor' : 3} and durable_writes = true;
+
+create table if not exists test1.primitive_test1
+(
+ key int,
+ value int,
+ primary key ((key))
+);
+
+-------------------------------------------------------------
+DDL for keyspace/table from file: /opt/dev/ignite/persistence-settings-2.xml
+-------------------------------------------------------------
+
+create keyspace if not exists test1
+with REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor' : 3} AND DURABLE_WRITES = true;
+
+create table if not exists test1.pojo_test3
+(
+ company text,
+ department text,
+ number int,
+ first_name text,
+ last_name text,
+ age int,
+ married boolean,
+ height bigint,
+ weight float,
+ birth_date timestamp,
+ phones blob,
+ primary key ((company, department), number)
+)
+with comment = 'A most excellent and useful table' AND read_repair_chance = 0.2 and clustering order by (number desc);
+----
+--
+
+Just don't forget to set the `CLASSPATH` environment variable correctly:
+
+. Include the jar file for the Ignite Cassandra module (`ignite-cassandra-<version-number>.jar`) in your `CLASSPATH`.
+. If you are using the `POJO` persistence strategy for some of your custom java classes, you need to include the jars with these classes in your `CLASSPATH` as well.
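+
+If it's more convenient, the generator can also be invoked from a small launcher class, since the utility exposes a standard
+`main` method; this is a minimal sketch using the same hypothetical descriptor paths as the shell example above:
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+import org.apache.ignite.cache.store.cassandra.utils.DDLGenerator;
+
+public class GenerateDdl {
+    public static void main(String[] args) {
+        // Prints the generated DDL for each persistence descriptor to stdout.
+        DDLGenerator.main(new String[] {
+            "/opt/dev/ignite/persistence-settings-1.xml",
+            "/opt/dev/ignite/persistence-settings-2.xml"
+        });
+    }
+}
+----
+--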
diff --git a/docs/_docs/extensions-and-integrations/cassandra/overview.adoc b/docs/_docs/extensions-and-integrations/cassandra/overview.adoc
new file mode 100644
index 0000000..38b91cc
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/cassandra/overview.adoc
@@ -0,0 +1,54 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Apache Cassandra Acceleration With Apache Ignite
+
+== Overview
+
+The Ignite Cassandra integration implements the link:persistence/external-storage#overview[CacheStore] interface, allowing
+you to deploy Ignite as a high-performance caching layer on top of Cassandra.
+
+Some observations in regards to the integration:
+
+. The integration uses Cassandra http://www.datastax.com/dev/blog/java-driver-async-queries[asynchronous queries, window=_blank]
+for `CacheStore` batch operations such as `loadAll()`, `writeAll()` and `deleteAll()` to provide extremely high performance.
+. The integration automatically creates all necessary tables (and keyspaces) in Cassandra if they are absent. Also, it
+automatically detects all the necessary fields for Ignite key-value tuples that will be stored as POJOs, and creates an
+appropriate table structure. Thus you don't need to care about the Cassandra DDL syntax for table creation and Java to
+Cassandra type mapping details.
+. You can optionally specify the settings (replication factor, replication strategy, bloom filter, etc.) for the Cassandra
+tables and keyspaces which should be created.
+. Combines the functionality of BLOB and POJO storage, allowing you to specify how you prefer to store (as a BLOB or as a POJO)
+key-value tuples from your Ignite cache.
+. Supports standard https://docs.oracle.com/javase/tutorial/jndi/objects/serial.html[Java, window=_blank] and
+https://github.com/EsotericSoftware/kryo[Kryo, window=_blank] serialization for key-values which should be stored as BLOBs in Cassandra.
+. Supports Cassandra http://docs.datastax.com/en/cql/3.0/cql/cql_reference/create_index_r.html[secondary indexes, window=_blank] (including custom indexes)
+through persistence configuration settings for a particular Ignite cache; such settings can also be detected automatically
+if you configured link:SQL/indexes#configuring-indexes-using-annotations[SQL Indexes by Annotations] using the `@QuerySqlField(index = true)` annotation.
+. Supports sort order for Cassandra cluster key fields through persistence configuration settings; such settings can also be
+detected automatically if you are using the `@QuerySqlField(descending = true)` annotation.
+. Supports link:data-modeling/affinity-collocation[affinity co-location] for POJO key classes having one of their fields
+annotated with `@AffinityKeyMapped`. This way, key-value tuples which are stored on one node in an Ignite cache will
+also be stored (co-located) on one node in Cassandra.
+
+[CAUTION]
+====
+[discrete]
+=== Ignite SQL Queries and Cassandra
+Note that in order to execute SQL queries you need to have all the data loaded from Cassandra into the Ignite cluster.
+The Ignite SQL engine assumes that all the records are available in memory and won't try to query Cassandra.
+
+An alternative would be to use Ignite Native Persistence - a distributed, ACID, and SQL-compliant disk store that allows
+performing SQL queries on the data stored in-memory as well as on disk.
+====
diff --git a/docs/_docs/extensions-and-integrations/cassandra/usage-examples.adoc b/docs/_docs/extensions-and-integrations/cassandra/usage-examples.adoc
new file mode 100644
index 0000000..0a49952
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/cassandra/usage-examples.adoc
@@ -0,0 +1,691 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite Cassandra Integration Usage Examples
+
+== Overview
+
+As described in the link:extensions-and-integrations/cassandra/configuration[configuration section], to configure Cassandra
+as a cache store you need to set `CacheStoreFactory` for your Ignite caches to `org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory`.
+
+Below is an example of a typical configuration for an Ignite cache that uses Cassandra as a cache store. Further down, we will go
+step by step through all the configuration items. The example is taken from the unit tests resource file
+`store/src/test/resources/org/apache/ignite/tests/persistence/blob/ignite-config.xml` of the Cassandra module source code.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<?xml version="1.0" encoding="UTF-8"?>
+<beans xmlns="http://www.springframework.org/schema/beans"
+       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+       xsi:schemaLocation="
+        http://www.springframework.org/schema/beans
+        http://www.springframework.org/schema/beans/spring-beans.xsd">
+
+    <!-- Cassandra connection settings -->
+    <import resource="classpath:org/apache/ignite/tests/cassandra/connection-settings.xml" />
+
+    <!-- Persistence settings for 'cache1' -->
+    <bean id="cache1_persistence_settings" class="org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings">
+        <constructor-arg type="org.springframework.core.io.Resource" value="classpath:org/apache/ignite/tests/persistence/blob/persistence-settings-1.xml" />
+    </bean>
+
+    <!-- Persistence settings for 'cache2' -->
+    <bean id="cache2_persistence_settings" class="org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings">
+        <constructor-arg type="org.springframework.core.io.Resource" value="classpath:org/apache/ignite/tests/persistence/blob/persistence-settings-3.xml" />
+    </bean>
+
+    <!-- Ignite configuration -->
+    <bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="cacheConfiguration">
+            <list>
+                <!-- Configuring persistence for "cache1" cache -->
+                <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                    <property name="name" value="cache1"/>
+                    <property name="readThrough" value="true"/>
+                    <property name="writeThrough" value="true"/>
+                    <property name="cacheStoreFactory">
+                        <bean class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
+                            <property name="dataSourceBean" value="cassandraAdminDataSource"/>
+                            <property name="persistenceSettingsBean" value="cache1_persistence_settings"/>
+                        </bean>
+                    </property>
+                </bean>
+
+                <!-- Configuring persistence for "cache2" cache -->
+                <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                    <property name="name" value="cache2"/>
+                    <property name="readThrough" value="true"/>
+                    <property name="writeThrough" value="true"/>
+                    <property name="cacheStoreFactory">
+                        <bean class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
+                            <property name="dataSourceBean" value="cassandraAdminDataSource"/>
+                            <property name="persistenceSettingsBean" value="cache2_persistence_settings"/>
+                        </bean>
+                    </property>
+                </bean>
+            </list>
+        </property>
+
+        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <!--
+                        Ignite provides several options for automatic discovery that can be used
+                        instead os static IP based discovery. For information on all options refer
+                        to our documentation: https://ignite.apache.org/docs/latest/clustering/clustering
+                    -->
+                    <!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
+                    <!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <!-- In distributed environment, replace with actual host IP address. -->
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+            </bean>
+        </property>
+    </bean>
+</beans>
+----
+--
+
+The example above configures two Ignite caches: `cache1` and `cache2`. So let's look at the configuration details.
+
+Let's start with the cache configuration. It is pretty similar for both caches (`cache1` and `cache2`) and looks like this:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<bean class="org.apache.ignite.configuration.CacheConfiguration">
+    <property name="name" value="cache1"/>
+    <property name="readThrough" value="true"/>
+    <property name="writeThrough" value="true"/>
+    <property name="cacheStoreFactory">
+        <bean class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
+            <property name="dataSourceBean" value="cassandraAdminDataSource"/>
+            <property name="persistenceSettingsBean" value="cache1_persistence_settings"/>
+        </bean>
+    </property>
+</bean>
+----
+--
+
+First of all, we can see that the `read-through` and `write-through` options are enabled:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<property name="readThrough" value="true"/>
+<property name="writeThrough" value="true"/>
+----
+--
+
+which is required for an Ignite cache if you plan to use a persistent store for cache entries that have expired.
+
+You can optionally specify the `write-behind` setting if you prefer persistent store to be updated asynchronously:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<property name="readThrough" value="true"/>
+<property name="writeThrough" value="true"/>
+----
+--
+
+The next important thing is `CacheStoreFactory` configuration:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<property name="cacheStoreFactory">
+    <bean class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
+        <property name="dataSourceBean" value="cassandraAdminDataSource"/>
+        <property name="persistenceSettingsBean" value="cache1_persistence_settings"/>
+    </bean>
+</property>
+----
+--
+
+You should use `org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory` as a `CacheStoreFactory` for your
+Ignite caches to utilize Cassandra as a persistent store. For `CassandraCacheStoreFactory` you should specify two required properties:
+
+* `dataSourceBean` - name of the Spring bean, which specifies all the details about Cassandra database connection.
+
+* `persistenceSettingsBean` - name of the Spring bean, which specifies all the details about how objects should be persisted into Cassandra database.
+
+In this example, `cassandraAdminDataSource` is a data source bean, which is imported into the Ignite cache config file using this directive:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<import resource="classpath:org/apache/ignite/tests/cassandra/connection-settings.xml" />
+----
+--
+
+and `cache1_persistence_settings` is a persistence settings bean, which is defined in the Ignite cache config file using the following directive:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<bean id="cache1_persistence_settings" class="org.apache.ignite.cache.store.cassandra.utils.persistence.KeyValuePersistenceSettings">
+    <constructor-arg type="org.springframework.core.io.Resource" value="classpath:org/apache/ignite/tests/persistence/blob/persistence-settings-1.xml" />
+</bean>
+----
+--
+
+Now let's look at the specification of `cassandraAdminDataSource` from the `store/src/test/resources/org/apache/ignite/tests/cassandra/connection-settings.xml`
+test resource.
+
+Specifically, `CassandraAdminCredentials` and `CassandraRegularCredentials` are classes which implement
+`org.apache.ignite.cache.store.cassandra.datasource.Credentials`. You are welcome to implement these classes and reference them afterwards.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<?xml version="1.0" encoding="UTF-8"?>
+<beans xmlns="http://www.springframework.org/schema/beans"
+       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+       xsi:schemaLocation="
+        http://www.springframework.org/schema/beans
+        http://www.springframework.org/schema/beans/spring-beans.xsd">
+
+    <bean id="cassandraAdminCredentials" class="org.my.project.CassandraAdminCredentials"/>
+    <bean id="cassandraRegularCredentials" class="org.my.project.CassandraRegularCredentials"/>
+
+    <bean id="loadBalancingPolicy" class="com.datastax.driver.core.policies.TokenAwarePolicy">
+        <constructor-arg type="com.datastax.driver.core.policies.LoadBalancingPolicy">
+            <bean class="com.datastax.driver.core.policies.RoundRobinPolicy"/>
+        </constructor-arg>
+    </bean>
+
+    <bean id="contactPoints" class="org.apache.ignite.tests.utils.CassandraHelper" factory-method="getContactPointsArray"/>
+
+    <bean id="cassandraAdminDataSource" class="org.apache.ignite.cache.store.cassandra.datasource.DataSource">
+        <property name="credentials" ref="cassandraAdminCredentials"/>
+        <property name="contactPoints" ref="contactPoints"/>
+        <property name="readConsistency" value="ONE"/>
+        <property name="writeConsistency" value="ONE"/>
+        <property name="loadBalancingPolicy" ref="loadBalancingPolicy"/>
+    </bean>
+
+    <bean id="cassandraRegularDataSource" class="org.apache.ignite.cache.store.cassandra.datasource.DataSource">
+        <property name="credentials" ref="cassandraRegularCredentials"/>
+        <property name="contactPoints" ref="contactPoints"/>
+        <property name="readConsistency" value="ONE"/>
+        <property name="writeConsistency" value="ONE"/>
+        <property name="loadBalancingPolicy" ref="loadBalancingPolicy"/>
+    </bean>
+</beans>
+----
+--
+
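+Since the `Credentials` contract only needs to produce a user name and a password, a minimal sketch of such a class could
+look like this (the class name matches the bean above; reading from environment variables is just an illustrative choice,
+and the `getUser`/`getPassword` method names are assumed from the interface):
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+import org.apache.ignite.cache.store.cassandra.datasource.Credentials;
+
+public class CassandraAdminCredentials implements Credentials {
+    // Resolve credentials from the environment instead of hard-coding them.
+    @Override public String getUser() {
+        return System.getenv("CASSANDRA_ADMIN_USER");
+    }
+
+    @Override public String getPassword() {
+        return System.getenv("CASSANDRA_ADMIN_PASSWORD");
+    }
+}
+----
+--
+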
+For more details about the Cassandra data source connection configuration, visit the link:extensions-and-integrations/cassandra/configuration[integration configuration page].
+
+Finally, the last piece that hasn't been described yet is the persistence settings configuration. Let's look at
+`cache1_persistence_settings` from the `org/apache/ignite/tests/persistence/blob/persistence-settings-1.xml` test resource.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<persistence keyspace="test1" table="blob_test1">
+    <keyPersistence class="java.lang.Integer" strategy="PRIMITIVE" />
+    <valuePersistence strategy="BLOB"/>
+</persistence>
+----
+--
+
+In the configuration above, we can see that the Cassandra `test1.blob_test1` table will be used to store key/value objects for
+the **cache1** cache. Key objects of the cache will be stored as **integer** in the `key` column. Value objects of the cache will be
+stored as **blob** in the `value` column. For more information about persistence settings configuration, visit the
+link:extensions-and-integrations/cassandra/configuration[integration configuration page].
+
+The next sections provide examples of persistence settings configuration for different kinds of persistence strategies
+(see more details about persistence strategies on the link:extensions-and-integrations/cassandra/configuration[integration configuration page]).
+
+== Example 1
+
+Persistence settings for an Ignite cache with keys of `Integer` type to be persisted as `int` in Cassandra and values of
+`String` type to be persisted as `text` in Cassandra.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<persistence keyspace="test1" table="my_table">
+    <keyPersistence class="java.lang.Integer" strategy="PRIMITIVE" column="my_key"/>
+    <valuePersistence class="java.lang.String" strategy="PRIMITIVE" />
+</persistence>
+----
+--
+
+Keys will be stored in the `my_key` column. Values will be stored in the `value` column (which is used by default when the `column` attribute isn't specified).
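+
+With these settings in place (and assuming the cache from the earlier configuration is named `cache1`, with a hypothetical
+configuration file path), regular cache operations transparently read and write the mapped Cassandra rows; a minimal usage sketch:
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.Ignition;
+
+public class Example1Usage {
+    public static void main(String[] args) {
+        try (Ignite ignite = Ignition.start("ignite-config.xml")) {
+            IgniteCache<Integer, String> cache = ignite.cache("cache1");
+
+            // Write-through: also inserts a row into test1.my_table (my_key=42, value='Hello').
+            cache.put(42, "Hello");
+
+            // Read-through: loads the row from Cassandra if it's not in memory.
+            String value = cache.get(42);
+        }
+    }
+}
+----
+--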
+
+== Example 2
+
+Persistence settings for an Ignite cache with keys of `Integer` type to be persisted as `int` in Cassandra and values of **any**
+type (you don't need to specify the type for the **BLOB** persistence strategy) to be persisted as `blob` in Cassandra.
+Since the values can be of any type, the only option is to store them as a `BLOB` in the Cassandra table.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<persistence keyspace="test1" table="my_table">
+    <keyPersistence class="java.lang.Integer" strategy="PRIMITIVE" />
+    <valuePersistence strategy="BLOB"/>
+</persistence>
+----
+--
+
+Keys will be stored in the `key` column (which is used by default when the `column` attribute isn't specified). Values will be stored in the `value` column.
+
+== Example 3
+
+Persistence settings for an Ignite cache with keys of `Integer` type and values of **any** type, both to be persisted as `BLOB` in Cassandra.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<persistence keyspace="test1" table="my_table">
+    <!-- By default Java standard serialization is going to be used -->
+    <keyPersistence class="java.lang.Integer"
+                    strategy="BLOB"/>
+
+    <!-- Kryo serialization specified to be used -->
+    <valuePersistence class="org.apache.ignite.tests.pojos.Person"
+                      strategy="BLOB"
+                      serializer="org.apache.ignite.cache.store.cassandra.serializer.KryoSerializer"/>
+</persistence>
+----
+--
+
+Keys will be stored in the `key` column having the `blob` type, using
+https://docs.oracle.com/javase/tutorial/jndi/objects/serial.html[Java standard serialization, window=_blank]. Values will be stored in the
+`value` column having the `blob` type, using https://github.com/EsotericSoftware/kryo[Kryo serialization, window=_blank].
+
+== Example 4
+
+Persistence settings for an Ignite cache with keys of `Integer` type to be persisted as `int` in Cassandra and values of the custom
+POJO `org.apache.ignite.tests.pojos.Person` type to be dynamically analyzed and persisted into a set of table columns,
+so that each POJO field is mapped to the appropriate table column. For more details about dynamic POJO fields discovery,
+refer to the link:extensions-and-integrations/cassandra/configuration#persistencesettingsbean[PersistenceSettingsBean] documentation section.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<persistence keyspace="test1" table="my_table">
+    <keyPersistence class="java.lang.Integer" strategy="PRIMITIVE"/>
+    <valuePersistence class="org.apache.ignite.tests.pojos.Person" strategy="POJO"/>
+</persistence>
+----
+--
+
+Keys will be stored in the `key` column having the `int` type.
+
+Now let's imagine that the `org.apache.ignite.tests.pojos.Person` class has the following implementation:
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+public class Person {
+    private String firstName;
+    private String lastName;
+    private int age;
+    private boolean married;
+    private long height;
+    private float weight;
+    private Date birthDate;
+    private List<String> phones;
+
+    public void setFirstName(String name) {
+        firstName = name;
+    }
+
+    public String getFirstName() {
+        return firstName;
+    }
+
+    public void setLastName(String name) {
+        lastName = name;
+    }
+
+    public String getLastName() {
+        return lastName;
+    }
+
+    public void setAge(int age) {
+        this.age = age;
+    }
+
+    public int getAge() {
+        return age;
+    }
+
+    public void setMarried(boolean married) {
+        this.married = married;
+    }
+
+    public boolean getMarried() {
+        return married;
+    }
+
+    public void setHeight(long height) {
+        this.height = height;
+    }
+
+    public long getHeight() {
+        return height;
+    }
+
+    public void setWeight(float weight) {
+        this.weight = weight;
+    }
+
+    public float getWeight() {
+        return weight;
+    }
+
+    public void setBirthDate(Date date) {
+        birthDate = date;
+    }
+
+    public Date getBirthDate() {
+        return birthDate;
+    }
+
+    public void setPhones(List<String> phones) {
+        this.phones = phones;
+    }
+
+    public List<String> getPhones() {
+        return phones;
+    }
+}
+----
+--
+
+In this case, Ignite cache values of the `org.apache.ignite.tests.pojos.Person` type will be persisted into a set of
+Cassandra table columns using the following dynamically configured mapping rules:
+
+[opts="header"]
+|===
+| POJO field    | Table column     | Column type
+| firstName     | firstname        | text
+| lastName      | lastname         | text
+| age           | age              | int
+| married       | married          | boolean
+| height        | height           | bigint
+| weight        | weight           | float
+| birthDate     | birthdate        | timestamp
+|===
+
+As you can see from the table above, the `phones` field will not be persisted into the table. That's because it's not of a simple
+java type which could be directly mapped to an http://docs.datastax.com/en/developer/java-driver/1.0/java-driver/reference/javaClass2Cql3Datatypes_r.html[appropriate, window=_blank] Cassandra type.
+Such fields can be persisted into Cassandra only if you manually specify all the mapping details for the object type
+and if the field type itself implements the `java.io.Serializable` interface. In such a case, the field will be persisted into a
+separate table column as a `blob`. See more details in the next example.
+
+== Example 5
+
+Persistence settings for an Ignite cache with keys of the custom POJO `org.apache.ignite.tests.pojos.PersonId` type and values of
+the custom POJO `org.apache.ignite.tests.pojos.Person` type, both to be persisted into a set of table columns based on
+manually specified mapping rules.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<persistence keyspace="test1" table="my_table" ttl="86400">
+    <!-- Cassandra keyspace options which should be used to create provided keyspace if it doesn't exist -->
+    <keyspaceOptions>
+        REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor' : 3}
+        AND DURABLE_WRITES = true
+    </keyspaceOptions>
+
+    <!-- Cassandra table options which should be used to create provided table if it doesn't exist -->
+    <tableOptions>
+        comment = 'A most excellent and useful table'
+        AND read_repair_chance = 0.2
+    </tableOptions>
+
+    <!-- Persistent settings for Ignite cache keys -->
+    <keyPersistence class="org.apache.ignite.tests.pojos.PersonId" strategy="POJO">
+        <!-- Partition key fields if POJO strategy used -->
+        <partitionKey>
+            <!-- Mapping from POJO field to Cassandra table column -->
+            <field name="companyCode" column="company" />
+            <field name="departmentCode" column="department" />
+        </partitionKey>
+
+        <!-- Cluster key fields if POJO strategy used -->
+        <clusterKey>
+            <!-- Mapping from POJO field to Cassandra table column -->
+            <field name="personNumber" column="number" sort="desc"/>
+        </clusterKey>
+    </keyPersistence>
+
+    <!-- Persistent settings for Ignite cache values -->
+    <valuePersistence class="org.apache.ignite.tests.pojos.Person"
+                      strategy="POJO"
+                      serializer="org.apache.ignite.cache.store.cassandra.serializer.KryoSerializer">
+        <!-- Mapping from POJO field to Cassandra table column -->
+        <field name="firstName" column="first_name" />
+        <field name="lastName" column="last_name" />
+        <field name="age" />
+        <field name="married" index="true"/>
+        <field name="height" />
+        <field name="weight" />
+        <field name="birthDate" column="birth_date" />
+        <field name="phones" />
+    </valuePersistence>
+</persistence>
+----
+--
+
+These persistence settings look rather complicated. Let's go step by step and analyze them.
+
+Let's first look at the root tag:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<persistence keyspace="test1" table="my_table" ttl="86400">
+----
+--
+
+It specifies that Ignite cache keys and values should be stored in the `test1.my_table` table and that the data in each row
+http://docs.datastax.com/en/cql/3.1/cql/cql_using/use_expire_c.html[expires, window=_blank] after `86400` seconds, which is `24` hours.
+
+Then we can see the advanced settings for the Cassandra keyspace. These settings will be used to create the keyspace if it doesn't exist.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<keyspaceOptions>
+    REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor' : 3}
+    AND DURABLE_WRITES = true
+</keyspaceOptions>
+----
+--
+
+Then, by analogy with the keyspace settings, we can see the advanced table settings, which will be used only for table creation.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<tableOptions>
+    comment = 'A most excellent and useful table'
+    AND read_repair_chance = 0.2
+</tableOptions>
+----
+--
+
+The next section specifies how Ignite cache keys should be persisted:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<keyPersistence class="org.apache.ignite.tests.pojos.PersonId" strategy="POJO">
+    <!-- Partition key fields if POJO strategy used -->
+    <partitionKey>
+        <!-- Mapping from POJO field to Cassandra table column -->
+        <field name="companyCode" column="company" />
+        <field name="departmentCode" column="department" />
+    </partitionKey>
+
+    <!-- Cluster key fields if POJO strategy used -->
+    <clusterKey>
+        <!-- Mapping from POJO field to Cassandra table column -->
+        <field name="personNumber" column="number" sort="desc"/>
+    </clusterKey>
+</keyPersistence>
+----
+--
+
+Let's assume that `org.apache.ignite.tests.pojos.PersonId` is implemented as follows:
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+public class PersonId {
+    private String companyCode;
+    private String departmentCode;
+    private int personNumber;
+
+    public void setCompanyCode(String code) {
+        companyCode = code;
+    }
+
+    public String getCompanyCode() {
+        return companyCode;
+    }
+
+    public void setDepartmentCode(String code) {
+        departmentCode = code;
+    }
+
+    public String getDepartmentCode() {
+        return departmentCode;
+    }
+
+    public void setPersonNumber(int number) {
+        personNumber = number;
+    }
+
+    public int getPersonNumber() {
+        return personNumber;
+    }
+}
+----
+--
+
+In this case, Ignite cache keys of the `org.apache.ignite.tests.pojos.PersonId` type will be persisted into a set of Cassandra
+table columns representing the `PARTITION` and `CLUSTER` key, using this mapping:
+
+[opts="header"]
+|===
+| POJO field     | Table column | Column type
+| companyCode    | company      | text
+| departmentCode | department   | text
+| personNumber   | number       | int
+|===
+
+In addition, the combination of the `(company, department)` columns will be used as the Cassandra `PARTITION` key, and the
+`number` column will be used as a `CLUSTER` key sorted in descending order.
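+
+To make the mapping concrete, the key part of the table that would be created from these settings is roughly equivalent to the following CQL (a sketch for illustration; the actual DDL is generated by the Ignite Cassandra module and also contains the value columns described below):
+
+[tabs]
+--
+tab:CQL[]
+[source, sql]
+----
+CREATE TABLE IF NOT EXISTS test1.my_table (
+    company text,
+    department text,
+    number int,
+    -- ... value columns, see the next section ...
+    PRIMARY KEY ((company, department), number)
+) WITH CLUSTERING ORDER BY (number DESC);
+----
+--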
+
+Finally, let's move to the last section, which specifies the persistence settings for Ignite cache values:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+<valuePersistence class="org.apache.ignite.tests.pojos.Person"
+                  strategy="POJO"
+                  serializer="org.apache.ignite.cache.store.cassandra.serializer.KryoSerializer">
+    <!-- Mapping from POJO field to Cassandra table column -->
+    <field name="firstName" column="first_name" />
+    <field name="lastName" column="last_name" />
+    <field name="age" />
+    <field name="married" index="true"/>
+    <field name="height" />
+    <field name="weight" />
+    <field name="birthDate" column="birth_date" />
+    <field name="phones" />
+</valuePersistence>
+----
+--
+
+Let's assume that the `org.apache.ignite.tests.pojos.Person` class has the same implementation as in link:extensions-and-integrations/cassandra/usage-examples#example-4[Example 4].
+In this case, Ignite cache values of the `org.apache.ignite.tests.pojos.Person` type will be persisted into a set of Cassandra
+table columns using this mapping:
+
+[opts="header"]
+|===
+| POJO field | Table column | Column type
+| firstName  | first_name   | text
+| lastName   | last_name    | text
+| age        | age          | int
+| married    | married      | boolean
+| height     | height       | bigint
+| weight     | weight       | float
+| birthDate  | birth_date   | timestamp
+| phones     | phones       | blob
+|===
+
+Compared to link:extensions-and-integrations/cassandra/usage-examples#example-4[Example 4], the `phones` field will now be
+serialized to the `phones` column of `blob` type using the https://github.com/EsotericSoftware/kryo[Kryo, window=_blank] serializer.
+In addition, a Cassandra secondary index will be created for the `married` column.
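+
+The secondary index from the last setting corresponds to the following CQL (a sketch):
+
+[tabs]
+--
+tab:CQL[]
+[source, sql]
+----
+CREATE INDEX IF NOT EXISTS ON test1.my_table (married);
+----
+--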
diff --git a/docs/_docs/extensions-and-integrations/hibernate-l2-cache.adoc b/docs/_docs/extensions-and-integrations/hibernate-l2-cache.adoc
new file mode 100644
index 0000000..b054de9
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/hibernate-l2-cache.adoc
@@ -0,0 +1,308 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Apache Ignite Hibernate L2 Cache
+
+== Overview
+
+Apache Ignite can be used as a http://hibernate.org[Hibernate, window=_blank] second-level cache,
+which can significantly speed up the persistence layer of your application.
+
+All work with Hibernate database-mapped objects is done within a session, usually bound to a worker thread or a Web session.
+By default, Hibernate uses only a per-session (L1) cache, so objects cached in one session are not visible in another.
+However, an L2 cache may be used, in which cached objects are visible to all sessions that use
+the same L2 cache configuration. This usually gives a significantly greater performance gain, because each newly-created
+session can take full advantage of the data already present in the L2 cache (which outlives any session-local L1 cache).
+
+image::images/integrations/hibernate-l2-cache.png[Ignite Cluster]
+
+While the L1 cache is always enabled and fully implemented by Hibernate internally, the L2 cache is optional and can have
+multiple pluggable implementations. Ignite can easily be plugged in as an L2 cache implementation and can be used in all
+access modes (`READ_ONLY`, `READ_WRITE`, `NONSTRICT_READ_WRITE`, and `TRANSACTIONAL`), supporting a wide range of related features:
+
+* caching to memory and disk, as well as off-heap memory;
+* cache transactions, which make the `TRANSACTIONAL` mode possible;
+* clustering, with two different replication modes: `REPLICATED` and `PARTITIONED`.
+
+To start using Apache Ignite as a Hibernate L2 cache, you need to perform three simple steps:
+
+* Add Ignite libraries to your application's classpath.
+* Enable L2 cache and specify Ignite implementation class in L2 cache configuration.
+* Configure Ignite caches for L2 cache regions and start the embedded Ignite node (and, optionally, external Ignite nodes).
+
+In the section below we cover these steps in more detail.
+
+== L2 Cache Configuration
+
+To configure Ignite as a Hibernate L2 cache, without any changes to the existing Hibernate code, you need to:
+
+* Add either the `ignite-hibernate_5.3` or the `ignite-hibernate_4.2` module as a dependency to your project, depending on whether
+Hibernate 5 or Hibernate 4 is used. Alternatively, you can copy the JAR files of the same name from
+`+{apache_ignite_relese}/libs/optional+` to the `+{apache_ignite_relese}/libs+` folder if you start an Apache Ignite node
+from the command line.
+* Configure Hibernate itself to use Ignite as an L2 cache.
+* Configure Ignite caches appropriately.
+
+=== Maven Configuration
+
+To add the Apache Ignite Hibernate integration to your project, add the following dependency to your `pom.xml` file:
+
+[tabs]
+--
+tab:Hibernate 5[]
+[source,xml]
+----
+<dependency>
+  <groupId>org.apache.ignite</groupId>
+  <artifactId>ignite-hibernate_5.3</artifactId>
+  <version>${ignite.version}</version>
+</dependency>
+----
+tab:Hibernate 4[]
+[source,xml]
+----
+<dependency>
+  <groupId>org.apache.ignite</groupId>
+  <artifactId>ignite-hibernate_4.2</artifactId>
+  <version>${ignite.version}</version>
+</dependency>
+----
+--
+
+=== Hibernate Configuration Example
+
+A typical Hibernate configuration for L2 cache with Ignite would look like the one below:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<hibernate-configuration>
+    <session-factory>
+        ...
+        <!-- Enable L2 cache. -->
+        <property name="cache.use_second_level_cache">true</property>
+
+        <!-- Generate L2 cache statistics. -->
+        <property name="generate_statistics">true</property>
+
+        <!-- Specify Ignite as L2 cache provider. -->
+        <property name="cache.region.factory_class">org.apache.ignite.cache.hibernate.HibernateRegionFactory</property>
+
+        <!-- Specify the name of the grid that will be used for second level caching. -->
+        <property name="org.apache.ignite.hibernate.ignite_instance_name">hibernate-grid</property>
+
+        <!-- Set default L2 cache access type. -->
+        <property name="org.apache.ignite.hibernate.default_access_type">READ_ONLY</property>
+
+        <!-- Specify the entity classes for mapping. -->
+        <mapping class="com.mycompany.MyEntity1"/>
+        <mapping class="com.mycompany.MyEntity2"/>
+
+        <!-- Per-class L2 cache settings. -->
+        <class-cache class="com.mycompany.MyEntity1" usage="read-only"/>
+        <class-cache class="com.mycompany.MyEntity2" usage="read-only"/>
+        <collection-cache collection="com.mycompany.MyEntity1.children" usage="read-only"/>
+        ...
+    </session-factory>
+</hibernate-configuration>
+----
+--
+
+Here, we do the following:
+
+* Enable L2 cache (and, optionally, the L2 cache statistics generation).
+* Specify Ignite as L2 cache implementation.
+* Specify the name of the caching grid (should correspond to the one in Ignite configuration).
+* Specify the entity classes and configure caching for each class (a corresponding cache region should be configured in Ignite).
+
+=== Ignite Configuration Example
+A typical Ignite configuration for Hibernate L2 caching looks like this:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<!-- Basic configuration for atomic cache. -->
+<bean id="atomic-cache" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
+    <property name="cacheMode" value="PARTITIONED"/>
+    <property name="atomicityMode" value="ATOMIC"/>
+    <property name="writeSynchronizationMode" value="FULL_SYNC"/>
+</bean>
+
+<!-- Basic configuration for transactional cache. -->
+<bean id="transactional-cache" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
+    <property name="cacheMode" value="PARTITIONED"/>
+    <property name="atomicityMode" value="TRANSACTIONAL"/>
+    <property name="writeSynchronizationMode" value="FULL_SYNC"/>
+</bean>
+
+<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+    <!--
+        Specify the name of the caching grid (should correspond to the
+        one in Hibernate configuration).
+    -->
+    <property name="igniteInstanceName" value="hibernate-grid"/>
+    ...
+    <!--
+        Specify cache configuration for each L2 cache region (which corresponds
+        to a full class name or a full association name).
+    -->
+    <property name="cacheConfiguration">
+        <list>
+            <!--
+                Configurations for entity caches.
+            -->
+            <bean parent="transactional-cache">
+                <property name="name" value="com.mycompany.MyEntity1"/>
+            </bean>
+            <bean parent="transactional-cache">
+                <property name="name" value="com.mycompany.MyEntity2"/>
+            </bean>
+            <bean parent="transactional-cache">
+                <property name="name" value="com.mycompany.MyEntity1.children"/>
+            </bean>
+
+            <!-- Configuration for update timestamps cache. -->
+            <bean parent="atomic-cache">
+                <property name="name" value="org.hibernate.cache.spi.UpdateTimestampsCache"/>
+            </bean>
+
+            <!-- Configuration for query result cache. -->
+            <bean parent="atomic-cache">
+                <property name="name" value="org.hibernate.cache.internal.StandardQueryCache"/>
+            </bean>
+        </list>
+    </property>
+    ...
+</bean>
+----
+--
+
+Here, we specify the cache configuration for each L2 cache region:
+
+* We use a `PARTITIONED` cache to split the data between caching nodes. Another possible strategy is to enable `REPLICATED` mode,
+thus replicating the full dataset between all caching nodes. See Cache Distribution Models for more information.
+* We specify the cache name that corresponds to an L2 cache region name (either a full class name or a full association name).
+* We use the `TRANSACTIONAL` atomicity mode to take advantage of cache transactions.
+* We enable the `FULL_SYNC` write synchronization mode so that writes are always fully synchronized with backup nodes.
+
+Additionally, we specify a cache for update timestamps, which may be `ATOMIC`, for better performance.
+
+Having configured the Ignite caching node, we can start it from within our code in the following way:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+Ignition.start("my-config-folder/my-ignite-configuration.xml");
+----
+--
+
+After the above line is executed, the internal Ignite node is started and is ready to cache the data. We can also start
+additional standalone nodes by running the following command from the console:
+
+[tabs]
+--
+tab:Unix[]
+[source,shell]
+----
+$IGNITE_HOME/bin/ignite.sh my-config-folder/my-ignite-configuration.xml
+----
+tab:Windows[]
+[source,shell]
+----
+$IGNITE_HOME\bin\ignite.bat my-config-folder\my-ignite-configuration.xml
+----
+--
+
+[NOTE]
+====
+The nodes may be started on other hosts as well, forming a distributed caching cluster.
+Be sure to specify the right network settings in the Ignite configuration file for that.
+====
+
+== Query Cache
+
+In addition to the L2 cache, Hibernate offers a query cache. This cache stores the results of queries (either HQL or Criteria)
+with a given set of parameters, so when you repeat a query with the same parameter set, it hits the cache without going to the database.
+
+The query cache may be useful if you have a number of queries that repeat with the same parameter values.
+As with the L2 cache, Hibernate relies on a third-party cache implementation here, and Ignite can be used as one.
+
+== Query Cache Configuration
+
+The configuration information above fully applies to the query cache, but some additional configuration and code changes are required.
+
+=== Hibernate Configuration
+To enable the query cache in Hibernate, you only need one additional line in the configuration file:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<!-- Enable query cache. -->
+<property name="cache.use_query_cache">true</property>
+----
+--
+
+Additionally, a code modification is required: for each query that you want to cache, you should enable the `cacheable` flag by calling `setCacheable(true)`:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+Session ses = ...;
+
+// Create Criteria query.
+Criteria criteria = ses.createCriteria(cls);
+
+// Enable cacheable flag.
+criteria.setCacheable(true);
+
+...
+----
+--
+
+After this is done, your query results will be cached.
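+
+The same applies to HQL queries; a minimal sketch, reusing the `MyEntity1` mapping from the configuration above:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+Session ses = ...;
+
+// Create an HQL query and mark it as cacheable.
+List entities = ses.createQuery("from MyEntity1")
+    .setCacheable(true)
+    .list();
+----
+--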
+
+=== Ignite Configuration
+To enable Hibernate query caching in Ignite, you need to specify an additional cache configuration:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<property name="cacheConfiguration">
+    <list>
+        ...
+        <!-- Query cache (refers to atomic cache defined in above example). -->
+        <bean parent="atomic-cache">
+            <property name="name" value="org.hibernate.cache.internal.StandardQueryCache"/>
+        </bean>
+    </list>
+</property>
+----
+--
+
+== Example
+
+See a complete https://github.com/apache/ignite/blob/master/examples/src/main/java-lgpl/org/apache/ignite/examples/datagrid/hibernate/HibernateL2CacheExample.java[example, window=_blank]
+that is available on GitHub and in every Apache Ignite distribution.
diff --git a/docs/_docs/extensions-and-integrations/ignite-for-spark/ignite-dataframe.adoc b/docs/_docs/extensions-and-integrations/ignite-for-spark/ignite-dataframe.adoc
new file mode 100644
index 0000000..c4edacf
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/ignite-for-spark/ignite-dataframe.adoc
@@ -0,0 +1,380 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite DataFrame
+
+== Overview
+
+The Apache Spark DataFrame API introduced the concept of a schema to describe the data, allowing Spark to manage the schema and organize the data into a tabular format. To put it simply, a DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database and allows Spark to leverage the Catalyst query optimizer to produce much more efficient query execution plans in comparison to RDDs, which are just collections of elements partitioned across the nodes of the cluster.
+
+Ignite expands DataFrame, simplifying development and improving data access times whenever Ignite is used as memory-centric storage for Spark. Benefits include:
+
+* Ability to share data and state across Spark jobs by writing and reading DataFrames to/from Ignite.
+* Faster SparkSQL queries, achieved by optimizing Spark query execution plans with the Ignite SQL engine, which includes advanced indexing and avoids data movement across the network from Ignite to Spark.
+
+== Integration
+
+`IgniteRelationProvider` is an implementation of the Spark `RelationProvider` and `CreatableRelationProvider` interfaces. The `IgniteRelationProvider` can talk directly to Ignite tables through the Spark SQL interface. The data is loaded and exchanged via `IgniteSQLRelation`, which executes filtering operations on the Ignite side. For now, grouping, joining, and ordering operations are performed on the Spark side. These operations will be optimized and processed on the Ignite side in link:https://issues.apache.org/jira/browse/IGNITE-7077[upcoming releases^]. `IgniteSQLRelation` utilizes the partitioned nature of Ignite's architecture and provides partitioning information to Spark.
+
+== Spark Session
+
+To use the Apache Spark DataFrame API, it is necessary to create an entry point for programming with Spark. This is achieved through the use of a `SparkSession` object, as shown in the following example:
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+// Creating spark session.
+SparkSession spark = SparkSession.builder()
+  .appName("Example Program")
+  .master("local")
+  .config("spark.executor.instances", "2")
+  .getOrCreate();
+----
+
+tab:Scala[]
+[source, scala]
+----
+// Creating spark session.
+implicit val spark = SparkSession.builder()
+  .appName("Example Program")
+  .master("local")
+  .config("spark.executor.instances", "2")
+  .getOrCreate()
+----
+--
+
+== Reading DataFrames
+
+In order to read data from Ignite, you need to specify its format and the path to the Ignite configuration file. For example, assume a table named `person` is created and deployed in Ignite, as follows:
+
+
+[source, sql]
+----
+CREATE TABLE person (
+    id LONG,
+    name VARCHAR,
+    city_id LONG,
+    PRIMARY KEY (id, city_id)
+) WITH "backups=1, affinityKey=city_id";
+----
+
+The following Spark code finds all the rows in the `person` table where the name is 'Mary Major':
+
+[tabs]
+--
+
+tab:Java[]
+
+[source, java]
+----
+SparkSession spark = ...
+String cfgPath = "path/to/config/file";
+
+Dataset<Row> df = spark.read()
+  .format(IgniteDataFrameSettings.FORMAT_IGNITE())              //Data source
+  .option(IgniteDataFrameSettings.OPTION_TABLE(), "person")     //Table to read.
+  .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), cfgPath) //Ignite config.
+  .load();
+
+df.createOrReplaceTempView("person");
+
+Dataset<Row> igniteDF = spark.sql(
+  "SELECT * FROM person WHERE name = 'Mary Major'");
+----
+
+
+tab:Scala[]
+
+[source, scala]
+----
+val spark: SparkSession = ...
+val cfgPath: String = "path/to/config/file"
+
+val df = spark.read
+  .format(FORMAT_IGNITE)               // Data source type.
+  .option(OPTION_TABLE, "person")      // Table to read.
+  .option(OPTION_CONFIG_FILE, cfgPath) // Ignite config.
+  .load()
+
+df.createOrReplaceTempView("person")
+
+val igniteDF = spark.sql("SELECT * FROM person WHERE name = 'Mary Major'")
+----
+--
+
+
+
+== Saving DataFrames
+
+[NOTE]
+====
+[discrete]
+=== Implementation notes
+Internally, all inserts are done through `IgniteDataStreamer`. Several optional parameters exist to configure the internal streamer. Please see <<Ignite DataFrame Options>> for the list of available options.
+====
+
+
+Ignite can serve as storage for DataFrames created or updated in Spark. The following save modes determine how a DataFrame is processed in Ignite:
+
+* `Append` - the DataFrame will be appended to an existing table. Set `OPTION_STREAMER_ALLOW_OVERWRITE=true` if you want to update existing entries with the data of the DataFrame.
+* `Overwrite` - the following steps will be executed:
+** If the table already exists in Ignite, it will be dropped.
+** A new table will be created using the schema of the DataFrame and the provided options.
+** The DataFrame content will be inserted into the new table.
+* `ErrorIfExists` (default) - an exception is thrown if the table already exists in Ignite. If the table does not exist:
+** A new table will be created using the schema of the DataFrame and the provided options.
+** The DataFrame content will be inserted into the new table.
+* `Ignore` - the operation is ignored if the table already exists in Ignite. If the table does not exist:
+** A new table will be created using the schema of the DataFrame and the provided options.
+** The DataFrame content will be inserted into the new table.
+
+The save mode can be specified using the `mode(SaveMode mode)` method. For more information, please see the link:https://spark.apache.org/docs/2.2.0/api/scala/index.html#org.apache.spark.sql.DataFrameWriter@mode&lpar;saveMode:org.apache.spark.sql.SaveMode&rpar;:org.apache.spark.sql.DataFrameWriter%5BT%5D[Spark Documentation^]. Here is a code example that shows this method:
+
+
+[tabs]
+--
+tab:Java[]
+
+[source, java]
+----
+SparkSession spark = ...
+
+String cfgPath = "path/to/config/file";
+
+Dataset<Row> jsonDataFrame = spark.read().json("path/to/file.json");
+
+jsonDataFrame.write()
+  .format(IgniteDataFrameSettings.FORMAT_IGNITE())
+  .mode(SaveMode.Append) // SaveMode.
+//... other options
+   .save();
+----
+
+tab:Scala[]
+
+[source, scala]
+----
+val spark: SparkSession = ...
+
+val cfgPath: String = "path/to/config/file"
+
+val jsonDataFrame = spark.read.json("path/to/file.json")
+
+jsonDataFrame.write
+  .format(FORMAT_IGNITE)
+  .mode(SaveMode.Append) // SaveMode.
+//... other options
+  .save()
+----
+--
+
+You must define the following Ignite-specific options if a new table is to be created by a DataFrame's save routines:
+
+* `OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS` - a primary key is required for every Ignite table. This option has to contain a comma-separated list of fields/columns that represent a primary key.
+* `OPTION_CREATE_TABLE_PARAMETERS` - additional parameters to use upon Ignite table creation. The parameters are those that are supported by the link:sql-reference/ddl#create-table[CREATE TABLE] command.
+
+The following example shows how to write the content of a JSON file into Ignite:
+
+[tabs]
+--
+tab:Java[]
+
+[source, java]
+----
+SparkSession spark = ...
+
+String cfgPath = "path/to/config/file";
+
+Dataset<Row> jsonDataFrame = spark.read().json("path/to/file.json");
+
+jsonDataFrame.write()
+  .format(IgniteDataFrameSettings.FORMAT_IGNITE())
+  .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), cfgPath)
+  .option(IgniteDataFrameSettings.OPTION_TABLE(), "json_table")
+  .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), "id")
+  .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PARAMETERS(), "template=replicated")
+  .save();
+----
+
+tab:Scala[]
+
+[source, scala]
+----
+val spark: SparkSession = ...
+
+val cfgPath: String = "path/to/config/file"
+
+val jsonDataFrame = spark.read.json("path/to/file.json")
+
+jsonDataFrame.write
+  .format(FORMAT_IGNITE)
+  .option(OPTION_CONFIG_FILE, cfgPath)
+  .option(OPTION_TABLE, "json_table")
+  .option(OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS, "id")
+  .option(OPTION_CREATE_TABLE_PARAMETERS, "template=replicated")
+  .save()
+----
+
+--
+
+== IgniteSparkSession and IgniteExternalCatalog
+
+Spark introduces the entity called `catalog` to read and store meta-information about known data sources, such as tables and views. Ignite provides its own implementation of this catalog, called `IgniteExternalCatalog`.
+
+`IgniteExternalCatalog` can read information about all existing SQL tables deployed in the Ignite cluster. `IgniteExternalCatalog` is also required to build an `IgniteSparkSession` object.
+
+`IgniteSparkSession` is an extension of the regular `SparkSession` that stores `IgniteContext` and injects the `IgniteExternalCatalog` instance into Spark objects.
+
+`IgniteSparkSession.builder()` must be used to create `IgniteSparkSession`. For example, if the following two tables are created in Ignite:
+
+
+
+[source, sql]
+----
+CREATE TABLE city (
+    id LONG PRIMARY KEY,
+    name VARCHAR
+) WITH "template=replicated";
+
+CREATE TABLE person (
+    id LONG,
+    name VARCHAR,
+    city_id LONG,
+    PRIMARY KEY (id, city_id)
+) WITH "backups=1, affinityKey=city_id";
+----
+
+
+Then executing the following code provides table meta-information:
+
+
+[tabs]
+--
+tab:Java[]
+
+[source, java]
+----
+// Using SparkBuilder provided by Ignite.
+IgniteSparkSession igniteSession = IgniteSparkSession.builder()
+  .appName("Spark Ignite catalog example")
+  .master("local")
+  .config("spark.executor.instances", "2")
+  //Only additional option to refer to Ignite cluster.
+  .igniteConfig("/path/to/ignite/config.xml")
+  .getOrCreate();
+
+// This will print out info about all SQL tables existing in Ignite.
+igniteSession.catalog().listTables().show();
+
+// This will print out schema of PERSON table.
+igniteSession.catalog().listColumns("person").show();
+
+// This will print out schema of CITY table.
+igniteSession.catalog().listColumns("city").show();
+----
+
+
+tab:Scala[]
+
+[source, scala]
+----
+// Using SparkBuilder provided by Ignite.
+val igniteSession = IgniteSparkSession.builder()
+  .appName("Spark Ignite catalog example")
+  .master("local")
+  .config("spark.executor.instances", "2")
+  //Only additional option to refer to Ignite cluster.
+  .igniteConfig("/path/to/ignite/config.xml")
+  .getOrCreate()
+
+// This will print out info about all SQL tables existing in Ignite.
+igniteSession.catalog.listTables().show()
+
+// This will print out schema of PERSON table.
+igniteSession.catalog.listColumns("person").show()
+
+// This will print out schema of CITY table.
+igniteSession.catalog.listColumns("city").show()
+----
+--
+
+The code output should be similar to the following:
+
+
+
+[source, text]
+----
++------+--------+-----------+---------+-----------+
+|  name|database|description|tableType|isTemporary|
++------+--------+-----------+---------+-----------+
+|  CITY|        |       null| EXTERNAL|      false|
+|PERSON|        |       null| EXTERNAL|      false|
++------+--------+-----------+---------+-----------+
+
+PERSON table description:
+
++-------+-----------+--------+--------+-----------+--------+
+|   name|description|dataType|nullable|isPartition|isBucket|
++-------+-----------+--------+--------+-----------+--------+
+|   NAME|       null|  string|    true|      false|   false|
+|     ID|       null|  bigint|   false|       true|   false|
+|CITY_ID|       null|  bigint|   false|       true|   false|
++-------+-----------+--------+--------+-----------+--------+
+
+CITY table description:
+
++----+-----------+--------+--------+-----------+--------+
+|name|description|dataType|nullable|isPartition|isBucket|
++----+-----------+--------+--------+-----------+--------+
+|NAME|       null|  string|    true|      false|   false|
+|  ID|       null|  bigint|   false|       true|   false|
++----+-----------+--------+--------+-----------+--------+
+----
+
+
+== Ignite DataFrame Options
+
+
+[cols="1,2",opts="header"]
+|===
+| Name  | Description
+| `FORMAT_IGNITE`|   Name of the Ignite Data Source
+|`OPTION_CONFIG_FILE` | Path to the config file
+|`OPTION_TABLE`   | Table name
+|`OPTION_CREATE_TABLE_PARAMETERS` | Additional parameters for a newly created table. The value of this option is used for the `WITH` part of a `CREATE TABLE` query.
+|`OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS`|  Comma separated list of primary key fields.
+|`OPTION_STREAMER_ALLOW_OVERWRITE` |If `true`, then an existing row will be overwritten with DataFrame content. If `false`, then the row will be skipped if the primary key already exists in the table.
+|`OPTION_STREAMER_FLUSH_FREQUENCY`| Automatic flush frequency. This is the time after which the streamer will make an attempt to submit all data added so far to remote nodes. See link:data-streaming[Data Streaming].
+|`OPTION_STREAMER_PER_NODE_BUFFER_SIZE`| The size of the per-node key-value pairs buffer. See link:data-streaming[Data Streaming].
+|`OPTION_STREAMER_PER_NODE_PARALLEL_OPERATIONS`| The maximum number of parallel stream operations for a single node.
+|`OPTION_SCHEMA`| The Ignite SQL schema name in which the specified table exists. When `OPTION_SCHEMA` is not specified, all schemas will be scanned to find a table with a matching name. This option can be used to differentiate two tables of the same name in different Ignite SQL schemas.
+
+When creating new tables, `OPTION_SCHEMA` must be specified as `PUBLIC`; otherwise an exception will be thrown, because currently Ignite SQL can issue `CREATE TABLE` statements within the `PUBLIC` schema only.
+
+|===
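+
+For instance, `OPTION_SCHEMA` can be passed alongside the other read options (a sketch in Scala, reusing the `person` table and `cfgPath` from the examples above):
+
+[source, scala]
+----
+val df = spark.read
+  .format(FORMAT_IGNITE)               // Ignite data source.
+  .option(OPTION_CONFIG_FILE, cfgPath) // Ignite config.
+  .option(OPTION_TABLE, "person")      // Table to read.
+  .option(OPTION_SCHEMA, "PUBLIC")     // SQL schema that holds the table.
+  .load()
+----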
+
+== Examples
+
+There are several examples available on GitHub that demonstrate how to use Spark DataFrames with Ignite:
+
+* link:{githubUrl}/examples/src/main/spark/org/apache/ignite/examples/spark/IgniteDataFrameExample.scala[DataFrame]
+* link:{githubUrl}/examples/src/main/spark/org/apache/ignite/examples/spark/IgniteDataFrameWriteExample.scala[Saving DataFrame]
+* link:{githubUrl}/examples/src/main/spark/org/apache/ignite/examples/spark/IgniteCatalogExample.scala[Catalog]
diff --git a/docs/_docs/extensions-and-integrations/ignite-for-spark/ignitecontext-and-rdd.adoc b/docs/_docs/extensions-and-integrations/ignite-for-spark/ignitecontext-and-rdd.adoc
new file mode 100644
index 0000000..a214970
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/ignite-for-spark/ignitecontext-and-rdd.adoc
@@ -0,0 +1,106 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= IgniteContext and IgniteRDD
+
+== IgniteContext
+
+`IgniteContext` is the main entry point to the Spark-Ignite integration. To create an instance of the Ignite context, the user must provide an instance of `SparkContext` and a closure creating an `IgniteConfiguration` (a configuration factory). The Ignite context will make sure that server or client Ignite nodes exist in all involved job instances. Alternatively, a path to an XML configuration file can be passed to the `IgniteContext` constructor; it will be used to configure the nodes being started.
+
+When creating an `IgniteContext` instance, an optional boolean `client` argument (defaulting to `true`) can be passed to the context constructor. This is typically used in a Shared Deployment installation. When `client` is set to `false`, the context will operate in embedded mode and will start server nodes on all workers during the context construction. This is required in an Embedded Deployment installation. See link:ignite-for-spark/installation[Installation] for information on deployment configurations.
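+
+For example, a context for embedded mode could be created like this (a sketch; the last constructor argument corresponds to the `client` flag described above):
+
+[source, scala]
+----
+// Passing false makes the context start server nodes on all workers (embedded mode).
+val igniteContext = new IgniteContext(sparkContext,
+    () => new IgniteConfiguration(), false)
+----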
+
+[CAUTION]
+====
+[discrete]
+=== Embedded Mode Deprecation
+Embedded mode implies starting Ignite server nodes within Spark executors which can cause unexpected rebalancing or even data loss. Therefore this mode is currently deprecated and will be eventually discontinued. Consider starting a separate Ignite cluster and using standalone mode to avoid data consistency and performance issues.
+====
+
+Once an `IgniteContext` is created, instances of `IgniteRDD` may be obtained using the `fromCache` methods. It is not required that the requested cache exist in the Ignite cluster when the RDD is created. If a cache with the given name does not exist, it will be created using the provided configuration or a template configuration (see the sketch after the context examples below).
+
+For example, the following code will create an Ignite context with the default Ignite configuration:
+
+
+[source, scala]
+----
+val igniteContext = new IgniteContext(sparkContext,
+    () => new IgniteConfiguration())
+----
+
+The following code will create an Ignite context configured from a file `example-shared-rdd.xml`:
+
+
+[source, scala]
+----
+val igniteContext = new IgniteContext(sparkContext,
+    "examples/config/spark/example-shared-rdd.xml")
+----
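+
+An `IgniteRDD` over an explicitly configured cache can then be obtained via `fromCache` (a sketch, assuming the overload that accepts a `CacheConfiguration`):
+
+[source, scala]
+----
+// Creates the "partitioned" cache from this configuration if it does not exist yet.
+val cacheRdd = igniteContext.fromCache(
+    new CacheConfiguration[Integer, Integer]().setName("partitioned").setBackups(1))
+----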
+
+
+== IgniteRDD
+
+`IgniteRDD` is an implementation of the Spark RDD abstraction representing a live view of an Ignite cache. `IgniteRDD` is not immutable; all changes in the Ignite cache (regardless of whether they were caused by another RDD or by external changes to the cache) will be visible to RDD users immediately.
+
+`IgniteRDD` utilizes the partitioned nature of Ignite caches and provides partitioning information to the Spark executor. The number of partitions in `IgniteRDD` equals the number of partitions in the underlying Ignite cache. `IgniteRDD` also provides affinity information to Spark via the `getPreferredLocations` method so that RDD computations use data locality.
+
+== Reading values from Ignite
+Since `IgniteRDD` is a live view of an Ignite cache, there is no need to explicitly load data from Ignite into the Spark application. All RDD methods are available to use right away after an instance of `IgniteRDD` is created.
+
+For example, assuming an Ignite cache named "partitioned" contains string values, the following code will find all values that contain the word "Ignite":
+
+
+[source, scala]
+----
+val cache = igniteContext.fromCache("partitioned")
+val result = cache.filter(_._2.contains("Ignite")).collect()
+----
+
+
+== Saving values to Ignite
+
+Since Ignite caches operate on key-value pairs, the most straightforward way to save values to an Ignite cache is to use a Spark tuple RDD and the `savePairs` method. This method will take advantage of the RDD partitioning and store values to the cache in parallel, if possible.
+
+It is also possible to save a value-only RDD into an Ignite cache using the `saveValues` method. In this case `IgniteRDD` will generate a unique affinity-local key for each value being stored into the cache; see the sketch after the following example.
+
+For example, the following code will store pairs of integers from 1 to 10000 into the cache named "partitioned" using 10 parallel store operations:
+
+
+[source, scala]
+----
+val cacheRdd = igniteContext.fromCache("partitioned")
+
+cacheRdd.savePairs(sparkContext.parallelize(1 to 10000, 10).map(i => (i, i)))
+----
+
+
+== Running SQL queries against Ignite cache
+
+When an Ignite cache is configured with the indexing subsystem enabled, it is possible to run SQL queries against the cache using the `objectSql` and `sql` methods. See link:SQL/sql-introduction[Working with SQL] for more information about Ignite SQL queries.
+
+For example, assuming the "partitioned" cache is configured to index pairs of integers, the following code will get all integers in the range (10, 100):
+
+
+[source, scala]
+----
+val cacheRdd = igniteContext.fromCache("partitioned")
+
+val result = cacheRdd.sql("select _val from Integer where _val > ? and _val < ?", 10, 100)
+----
+
+== Example
+
+There are a couple of examples available on GitHub that demonstrate the usage of `IgniteRDD`:
+
+* link:{githubUrl}/examples/src/main/scala/org/apache/ignite/scalar/examples/spark/ScalarSharedRDDExample.scala[Scala Example^]
+* link:{githubUrl}/examples/src/main/spark/org/apache/ignite/examples/spark/SharedRDDExample.java[Java Example^]
diff --git a/docs/_docs/extensions-and-integrations/ignite-for-spark/installation.adoc b/docs/_docs/extensions-and-integrations/ignite-for-spark/installation.adoc
new file mode 100644
index 0000000..235df40
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/ignite-for-spark/installation.adoc
@@ -0,0 +1,171 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Installation
+
+== Shared Deployment
+
+Shared deployment implies that Apache Ignite nodes are running independently of Apache Spark applications and store state even after Apache Spark jobs die. Similarly to Apache Spark, there are three ways to deploy Apache Ignite to the cluster.
+
+=== Standalone Deployment
+
+In the Standalone deployment mode, Ignite nodes should be deployed together with Spark Worker nodes. Instructions on Ignite installation can be found link:installation[here]. After you install Ignite on all worker nodes, start a node on each Spark worker with your config using the `ignite.sh` script.
+
+
+=== Adding Ignite libraries to Spark classpath by default
+
+The Spark application deployment model allows dynamic jar distribution at application start. This model, however, has some drawbacks:
+
+* The Spark dynamic class loader does not implement `getResource` methods, so you will not be able to access resources located in jar files.
+* The Java logger uses the application class loader (not the context class loader) to load log handlers, which results in a `ClassNotFoundException` when using Java logging in Ignite.
+
+There is a way to alter the default Spark classpath for each launched application (this should be done on each machine of the Spark cluster, including master, worker, and driver nodes):
+
+. Locate the `$SPARK_HOME/conf/spark-env.sh` file. If this file does not exist, create it from the template `$SPARK_HOME/conf/spark-env.sh.template`.
+. Add the following lines to the end of the `spark-env.sh` file (uncomment the line setting `IGNITE_HOME` in case you do not have it set globally):
+
+
+
+[source, shell]
+----
+# Optionally set IGNITE_HOME here.
+# IGNITE_HOME=/path/to/ignite
+
+IGNITE_LIBS="${IGNITE_HOME}/libs/*"
+
+for file in ${IGNITE_HOME}/libs/*
+do
+    if [ -d ${file} ] && [ "${file}" != "${IGNITE_HOME}"/libs/optional ]; then
+        IGNITE_LIBS=${IGNITE_LIBS}:${file}/*
+    fi
+done
+
+export SPARK_CLASSPATH=$IGNITE_LIBS
+----
+
+
+Copy any folders required from the `$IGNITE_HOME/libs/optional` folder, such as `ignite-log4j`, to the `$IGNITE_HOME/libs` folder.
+
+You can verify that the Spark classpath is changed by running `bin/spark-shell` and typing a simple import statement:
+
+
+
+[source, shell]
+----
+scala> import org.apache.ignite.configuration._
+import org.apache.ignite.configuration._
+----
+
+== Embedded Deployment
+
+[CAUTION]
+====
+[discrete]
+=== Embedded Mode Deprecation
+Embedded mode implies starting Ignite server nodes within Spark executors which can cause unexpected rebalancing or even data loss. Therefore this mode is currently deprecated and will be eventually discontinued. Consider starting a separate Ignite cluster and using standalone mode to avoid data consistency and performance issues.
+====
+
+
+Embedded deployment means that Apache Ignite nodes are started inside the Apache Spark job processes and are stopped when the job dies. There is no need for additional deployment steps in this case. Apache Ignite code will be distributed to the worker machines using the Apache Spark deployment mechanism, and nodes will be started on all workers as part of the `IgniteContext` initialization.
+
+
+== Maven
+
+Ignite's Spark artifact is link:http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.ignite%22[hosted in Maven Central^]. Depending on the Scala version you use, include the artifact using one of the dependencies shown below.
+
+.Scala 2.11
+[source, xml]
+----
+<dependency>
+  <groupId>org.apache.ignite</groupId>
+  <artifactId>ignite-spark</artifactId>
+  <version>${ignite.version}</version>
+</dependency>
+----
+
+.Scala 2.10
+[source, xml]
+----
+<dependency>
+  <groupId>org.apache.ignite</groupId>
+  <artifactId>ignite-spark_2.10</artifactId>
+  <version>${ignite.version}</version>
+</dependency>
+----
+
+== SBT
+
+If SBT is used as a build tool for a Scala application, then Ignite's Spark artifact can be added into `build.sbt` with one of the commands below:
+
+.Scala 2.11
+[source, scala]
+----
+libraryDependencies += "org.apache.ignite" % "ignite-spark" % "ignite.version"
+----
+
+
+.Scala 2.10
+[source, scala]
+----
+libraryDependencies += "org.apache.ignite" % "ignite-spark_2.10" % "ignite.version"
+----
+
+
+== Classpath Configuration
+
+When the IgniteRDD or Ignite DataFrames APIs are used, make sure that Spark executors and drivers have all the required Ignite jars available in their classpath. Spark provides several ways to modify the classpath of both the driver and the executor processes.
+
+
+=== Parameters Configuration
+
+Ignite jars can be added to Spark using configuration parameters such as
+`spark.driver.extraClassPath` and `spark.executor.extraClassPath`. Refer to the link:https://spark.apache.org/docs/latest/configuration.html#runtime-environment[Spark official documentation] for all available options.
+
+The following shows how to fill in the `spark.executor.extraClassPath` parameter (the driver parameter is set the same way):
+
+
+[source, shell]
+----
+spark.executor.extraClassPath /opt/ignite/libs/*:/opt/ignite/libs/optional/ignite-spark/*:/opt/ignite/libs/optional/ignite-log4j/*:/opt/ignite/libs/optional/ignite-yarn/*:/opt/ignite/libs/ignite-spring/*
+----
+
+=== Source Code Configuration
+
+Spark provides APIs to set up extra libraries from the application code. You can provide Ignite jars in the following way:
+
+
+
+[source, scala]
+----
+private val MAVEN_HOME = "/home/user/.m2/repository"
+
+val spark = SparkSession.builder()
+       .appName("Spark Ignite data sources example")
+       .master("spark://172.17.0.2:7077")
+       .getOrCreate()
+
+spark.sparkContext.addJar(MAVEN_HOME + "/org/apache/ignite/ignite-core/2.4.0/ignite-core-2.4.0.jar")
+spark.sparkContext.addJar(MAVEN_HOME + "/org/apache/ignite/ignite-spring/2.4.0/ignite-spring-2.4.0.jar")
+spark.sparkContext.addJar(MAVEN_HOME + "/org/apache/ignite/ignite-log4j/2.4.0/ignite-log4j-2.4.0.jar")
+spark.sparkContext.addJar(MAVEN_HOME + "/org/apache/ignite/ignite-spark/2.4.0/ignite-spark-2.4.0.jar")
+spark.sparkContext.addJar(MAVEN_HOME + "/org/apache/ignite/ignite-indexing/2.4.0/ignite-indexing-2.4.0.jar")
+spark.sparkContext.addJar(MAVEN_HOME + "/org/springframework/spring-beans/4.3.7.RELEASE/spring-beans-4.3.7.RELEASE.jar")
+spark.sparkContext.addJar(MAVEN_HOME + "/org/springframework/spring-core/4.3.7.RELEASE/spring-core-4.3.7.RELEASE.jar")
+spark.sparkContext.addJar(MAVEN_HOME + "/org/springframework/spring-context/4.3.7.RELEASE/spring-context-4.3.7.RELEASE.jar")
+spark.sparkContext.addJar(MAVEN_HOME + "/org/springframework/spring-expression/4.3.7.RELEASE/spring-expression-4.3.7.RELEASE.jar")
+spark.sparkContext.addJar(MAVEN_HOME + "/javax/cache/cache-api/1.0.0/cache-api-1.0.0.jar")
+spark.sparkContext.addJar(MAVEN_HOME + "/com/h2database/h2/1.4.195/h2-1.4.195.jar")
+----
+
+
diff --git a/docs/_docs/extensions-and-integrations/ignite-for-spark/overview.adoc b/docs/_docs/extensions-and-integrations/ignite-for-spark/overview.adoc
new file mode 100644
index 0000000..fff0ce9
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/ignite-for-spark/overview.adoc
@@ -0,0 +1,49 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite for Spark
+
+Apache Ignite is a distributed memory-centric database and caching platform that is used by Apache Spark users to:
+
+* Achieve true in-memory performance at scale and avoid data movement from a data source to Spark workers and applications.
+* Boost DataFrame and SQL performance.
+* More easily share state and data among Spark jobs.
+
+image::images/spark_integration.png[Spark Integration]
+
+
+== Ignite RDDs
+
+Apache Ignite provides an implementation of the Spark RDD which allows any data and state to be shared in memory as RDDs across Spark jobs. The Ignite RDD provides a shared, mutable view of the same data in-memory in Ignite across different Spark jobs, workers, or applications. Native Spark RDDs cannot be shared across Spark jobs or applications.
+
+An link:ignite-for-spark/ignitecontext-and-rdd[IgniteRDD] is implemented as a view over a distributed Ignite table (a.k.a. cache). It can be deployed with an Ignite node either within the Spark job executing process, on a Spark worker, or in a separate Ignite cluster. This means that, depending on the chosen deployment mode, the shared state may either exist only during the lifespan of a Spark application (embedded mode) or outlive the Spark application (standalone mode).
+
+While Apache SparkSQL supports a fairly rich SQL syntax, it doesn't implement any indexing. As a result, Spark queries may take minutes even on moderately small data sets because they have to do full data scans. With Ignite, Spark users can configure primary and secondary indexes that can bring up to 1000x performance gains.
+
+
+== Ignite DataFrames
+
+The Apache Spark DataFrame API introduced the concept of a schema to describe the data, allowing Spark to manage the schema and organize the data into a tabular format. To put it simply, a DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database and allows Spark to leverage the Catalyst query optimizer to produce much more efficient query execution plans in comparison to RDDs, which are just collections of elements partitioned across the nodes of the cluster.
+
+Ignite expands link:ignite-for-spark/ignite-dataframe[DataFrame], simplifying development and improving data access times whenever Ignite is used as memory-centric storage for Spark. Benefits include:
+
+* Ability to share data and state across Spark jobs by writing and reading DataFrames to/from Ignite.
+* Faster SparkSQL queries, achieved by optimizing Spark query execution plans with the Ignite SQL engine, which includes advanced indexing and avoids data movement across the network from Ignite to Spark.
+
+== Supported Spark Version
+
+Apache Ignite comes with two modules that support different versions of Apache Spark:
+
+* `ignite-spark` — integration with Spark 2.3
+* `ignite-spark-2.4` — integration with Spark 2.4
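+
+For example, the Spark 2.4 integration can be added to a Maven project as follows (a sketch; `${ignite.version}` is your Ignite version property):
+
+[source, xml]
+----
+<dependency>
+  <groupId>org.apache.ignite</groupId>
+  <artifactId>ignite-spark-2.4</artifactId>
+  <version>${ignite.version}</version>
+</dependency>
+----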
diff --git a/docs/_docs/extensions-and-integrations/ignite-for-spark/spark-shell.adoc b/docs/_docs/extensions-and-integrations/ignite-for-spark/spark-shell.adoc
new file mode 100644
index 0000000..237fa8a
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/ignite-for-spark/spark-shell.adoc
@@ -0,0 +1,202 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Testing Ignite with Spark-shell
+
+== Starting up the cluster
+
+Here we will briefly cover the process of Spark and Ignite cluster startup. Refer to the link:https://spark.apache.org/docs/latest/[Spark documentation] for more details.
+
+For testing you will need a Spark master process and at least one Spark worker. Usually the Spark master and workers run on separate machines, but for test purposes you can start a worker on the same machine as the master.
+
+. Download and unpack Spark binary distribution to the same location (let it be `SPARK_HOME`) on all nodes.
+. Download and unpack Ignite binary distribution to the same location (let it be `IGNITE_HOME`) on all nodes.
+. On master node `cd` to `$SPARK_HOME` and run the following command:
++
+--
+[source, shell]
+----
+sbin/start-master.sh
+----
+
+The script should output the path to the log file of the started process. Check the log file for the master URL, which has the following format: `spark://master_host:master_port`. Also check the log file for the Web UI URL (usually `http://master_host:8080`).
+--
+. On each of the worker nodes `cd` to `$SPARK_HOME` and run the following command:
++
+[source, shell]
+----
+bin/spark-class org.apache.spark.deploy.worker.Worker spark://master_host:master_port
+----
+where `spark://master_host:master_port` is the master URL you grabbed from the master log file. After the workers have started, check the master Web UI; it should show all of your workers registered with the status `ALIVE`.
+. On each of the worker nodes `cd` to `$IGNITE_HOME` and start an Ignite node by running the following command:
++
+[source, shell]
+----
+bin/ignite.sh
+----
+
+
+You should see the Ignite nodes discover each other using the default configuration. If your network does not allow multicast traffic, you will need to change the default configuration file and configure TCP discovery, for example as sketched below.
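+
+A minimal static-IP TCP discovery configuration might look like this (a sketch; adjust the hosts and port range to your environment):
+
+[source, xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+    <property name="discoverySpi">
+        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+            <property name="ipFinder">
+                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                    <property name="addresses">
+                        <list>
+                            <!-- Host and discovery port range of each Ignite node. -->
+                            <value>master_host:47500..47509</value>
+                        </list>
+                    </property>
+                </bean>
+            </property>
+        </bean>
+    </property>
+</bean>
+----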
+
+
+== Working with Spark-Shell
+
+Now that you have your cluster up and running, you can run `spark-shell` and check the integration.
+
+1. Start the Spark shell:
++
+--
+* Either by providing Maven coordinates of the Ignite artifacts (you can use `--repositories` if needed, but it may be omitted):
++
+[source, shell]
+----
+./bin/spark-shell \
+  --packages org.apache.ignite:ignite-spark:1.8.0 \
+  --master spark://master_host:master_port \
+  --repositories http://repo.maven.apache.org/maven2/org/apache/ignite
+----
+* Or by providing paths to the Ignite jar files using the `--jars` parameter:
++
+[source, shell]
+----
+./bin/spark-shell --jars path/to/ignite-core.jar,path/to/ignite-spark.jar,path/to/cache-api.jar,path/to/ignite-log4j.jar,path/to/log4j.jar --master spark://master_host:master_port
+----
+
+You should see the Spark shell start up.
+
+Note that if you are planning to use Spring configuration loading, you will need to add the `ignite-spring` dependency as well:
+
+[source, shell]
+----
+./bin/spark-shell \
+  --packages org.apache.ignite:ignite-spark:1.8.0,org.apache.ignite:ignite-spring:1.8.0 \
+  --master spark://master_host:master_port
+----
+--
+2. Let's create an instance of the Ignite context using the default configuration:
++
+--
+
+[source, scala]
+----
+import org.apache.ignite.spark._
+import org.apache.ignite.configuration._
+
+val ic = new IgniteContext(sc, () => new IgniteConfiguration())
+----
+
+You should see something like this:
+
+
+[source, text]
+----
+ic: org.apache.ignite.spark.IgniteContext = org.apache.ignite.spark.IgniteContext@62be2836
+----
+
+An alternative way to create an instance of `IgniteContext` is to use a configuration file. Note that if the path to the configuration is specified in a relative form, then the `IGNITE_HOME` environment variable should be set globally in the system, as the path is resolved relative to `IGNITE_HOME`:
+
+
+[source, scala]
+----
+import org.apache.ignite.spark._
+import org.apache.ignite.configuration._
+
+val ic = new IgniteContext(sc, "examples/config/spark/example-shared-rdd.xml")
+----
+--
+3. Let's now create an instance of `IgniteRDD` using the "partitioned" cache in the default configuration:
++
+--
+
+[source, scala]
+----
+val sharedRDD = ic.fromCache[Integer, Integer]("partitioned")
+----
+
+
+You should see an instance of RDD created for partitioned cache:
+
+
+[source, text]
+----
+sharedRDD: org.apache.ignite.spark.IgniteRDD[Integer,Integer] = IgniteRDD[0] at RDD at IgniteAbstractRDD.scala:27
+----
+
+
+Note that the creation of an RDD is a local operation and will not create a cache in the Ignite cluster.
+--
+4. Let's now actually ask Spark to do something with our RDD; for example, get all pairs where the value is less than 10:
++
+--
+
+[source, scala]
+----
+sharedRDD.filter(_._2 < 10).collect()
+----
+
+
+As our cache has not been filled yet, the result will be an empty array:
+
+
+[source, text]
+----
+res0: Array[(Integer, Integer)] = Array()
+----
+
+
+Check the logs of the remote Spark workers to see how the Ignite context starts clients on all remote workers in the cluster. You can also start the command-line Visor and check that the "partitioned" cache has been created.
+
+--
+5. Let's now save some values into Ignite:
++
+--
+
+[source, scala]
+----
+sharedRDD.savePairs(sc.parallelize(1 to 100000, 10).map(i => (i, i)))
+----
+
+After running this command, you can check with the command-line Visor that the cache holds 100000 elements.
+
+--
+6. We can now check how the state we created survives a job restart. Shut down the Spark shell and repeat steps 1-3. You should again have an instance of the Ignite context and an RDD for the "partitioned" cache. We can now check how many keys in our RDD have a value greater than 50000:
++
+--
+
+[source, scala]
+----
+sharedRDD.filter(_._2 > 50000).count
+----
+
+Since we filled the cache with a sequence of numbers from 1 to 100000 inclusive, we should see `50000` as the result:
+
+
+[source, text]
+----
+res0: Long = 50000
+----
+--
+
diff --git a/docs/_docs/extensions-and-integrations/ignite-for-spark/troubleshooting.adoc b/docs/_docs/extensions-and-integrations/ignite-for-spark/troubleshooting.adoc
new file mode 100644
index 0000000..f083081
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/ignite-for-spark/troubleshooting.adoc
@@ -0,0 +1,23 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Troubleshooting
+
+* My Spark application or Spark shell hangs when I invoke any action on IgniteRDD
+
+This happens if you have created `IgniteContext` in client mode (which is the default mode) and do not have any Ignite server nodes running. In this case, the Ignite client will wait until server nodes start, or fail after the cluster join timeout has elapsed. You should start at least one Ignite server node when using `IgniteContext` in client mode.
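+
+For instance, a server node can be started programmatically before the Spark job runs. The following is a minimal sketch; the configuration file path is an illustrative assumption:
+
+[source,java]
+----
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+
+// Start a standalone Ignite server node that client-mode
+// IgniteContext instances can join.
+Ignite server = Ignition.start("examples/config/spark/example-shared-rdd.xml");
+----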
+
+* I am getting a `java.lang.ClassNotFoundException` for `org.apache.ignite.logger.java.JavaLoggerFileHandler` when using IgniteContext
+
+This issue appears when you do not have any loggers included in the classpath and Ignite tries to use standard Java logging. By default, Spark loads all user jar files using a separate class loader. The Java logging framework, on the other hand, uses the application class loader to initialize log handlers. To resolve this, you can either add the `ignite-log4j` module to the list of used jars so that Ignite uses Log4j as its logging subsystem, or alter the default Spark classpath as described link:ignite-for-spark/installation[here].
diff --git a/docs/_docs/extensions-and-integrations/mybatis-l2-cache.adoc b/docs/_docs/extensions-and-integrations/mybatis-l2-cache.adoc
new file mode 100644
index 0000000..bdbc81a
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/mybatis-l2-cache.adoc
@@ -0,0 +1,55 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Apache Ignite as MyBatis L2 Cache
+
+Apache Ignite can be used as a MyBatis L2 cache that distributes and caches data across a cluster of machines.
+
+If you are an Apache Maven user, simply add the following dependency to the `pom.xml`:
+
+[tabs]
+--
+tab:pom.xml[]
+[source,xml]
+----
+<dependencies>
+  ...
+  <dependency>
+    <groupId>org.mybatis.caches</groupId>
+    <artifactId>mybatis-ignite</artifactId>
+    <version>1.0.5</version>
+  </dependency>
+  ...
+</dependencies>
+----
+--
+
+Alternatively, you can download the https://github.com/mybatis/ignite-cache/releases[zip bundle, window=_blank],
+decompress it, and add the jars to the classpath.
+
+Then, just specify it in the mapper XML as follows:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<mapper namespace="org.acme.FooMapper">
+  <cache type="org.mybatis.caches.ignite.IgniteCacheAdapter" />
+</mapper>
+----
+--
+
+Then configure your Ignite cache in `config/default-config.xml`. (Simple reference configurations are available on
+https://github.com/mybatis/ignite-cache/tree/master/config[GitHub, window=_blank].)
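+
+Once the cache adapter is in place, caching is transparent to the application code. Below is a minimal, hypothetical usage sketch: the `selectFoo` query method and the `mybatis-config.xml` resource name are assumptions, not part of the integration itself.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+import org.apache.ibatis.io.Resources;
+import org.apache.ibatis.session.SqlSession;
+import org.apache.ibatis.session.SqlSessionFactory;
+import org.apache.ibatis.session.SqlSessionFactoryBuilder;
+
+SqlSessionFactory factory = new SqlSessionFactoryBuilder()
+    .build(Resources.getResourceAsStream("mybatis-config.xml"));
+
+try (SqlSession session = factory.openSession()) {
+    // FooMapper is the mapper from the example above; selectFoo is a
+    // hypothetical query method. The first call executes the SQL;
+    // repeated calls with the same arguments are served from the
+    // distributed Ignite L2 cache.
+    FooMapper mapper = session.getMapper(FooMapper.class);
+    Object foo = mapper.selectFoo(42);
+}
+----
+--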
diff --git a/docs/_docs/extensions-and-integrations/php-pdo.adoc b/docs/_docs/extensions-and-integrations/php-pdo.adoc
new file mode 100644
index 0000000..5561c11
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/php-pdo.adoc
@@ -0,0 +1,247 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Using PHP PDO With Apache Ignite
+
+== Overview
+
+PHP provides a lightweight, consistent interface for accessing databases named PHP Data Objects - PDO. This extension works
+with several database-specific PDO drivers. One of them is http://php.net/manual/en/ref.pdo-odbc.php[PDO_ODBC, window=_blank],
+which allows connecting to any database that provides its own ODBC driver implementation.
+
+Using Apache Ignite's ODBC driver, it is possible to connect to an Apache Ignite cluster from a PHP application,
+accessing and modifying the data stored there.
+
+== Setting Up ODBC Driver
+
+Apache Ignite conforms to the ODBC protocol and ships with its own ODBC driver.
+This is the driver that the PHP PDO framework will use to connect to an Apache Ignite cluster.
+
+Refer to the Ignite link:SQL/ODBC/odbc-driver[ODBC Driver] documentation to configure and install the driver
+on a target system. Once the driver is installed and functional, move on to the next sections of this guide.
+
+== Installing and Configuring PHP PDO
+
+To install PHP, PDO, and the PDO_ODBC driver, refer to the generic PHP resources:
+
+* http://php.net/downloads.php[Download, window=_blank] and install the desired PHP version. Note that the PDO driver is
+enabled by default as of PHP 5.1.0. On Windows, you can download the PHP binary from the
+http://windows.php.net/download[following page, window=_blank].
+* http://php.net/manual/en/book.pdo.php[Configure, window=_blank] PHP PDO framework.
+* http://php.net/manual/en/ref.pdo-odbc.php[Enable, window=_blank] PDO_ODBC driver.
+  ** On Windows, you may need to uncomment the `extension=php_pdo_odbc.dll` line in `php.ini` and make sure that `extension_dir`
+points to a directory that contains `php_pdo_odbc.dll`. This directory also has to be added to the `PATH` environment variable.
+  ** On Unix-based systems, it is usually enough to install a dedicated PHP ODBC package. For instance, the `php5-odbc`
+package has to be installed on Ubuntu 14.04.
+* If necessary, http://php.net/manual/en/ref.pdo-odbc.php#ref.pdo-odbc.installation[configure, window=_blank] and build the PDO_ODBC driver
+for a specific system that does not fall under the general case. In most cases, however, a simple installation of both PHP
+and the PDO_ODBC driver is enough.
+
+== Starting Ignite Cluster
+
+After PHP PDO is installed and ready to use, let's start an Ignite cluster with an example configuration and connect
+to the cluster from a PHP application to update and query the cluster's data:
+
+* First, the ODBC processor has to be enabled cluster-wide. To do so, add the `odbcConfiguration` property to the
+`IgniteConfiguration` of every cluster node.
+
+* Next, list the configurations for all the caches related to specific data models inside `IgniteConfiguration`.
+Since we're going to execute SQL queries over the cluster from the PHP PDO side, every cache configuration needs to contain
+a `QueryEntity` definition. Alternatively, you can define SQL tables and indexes with Ignite DDL commands (see the sketch after the configuration template).
+
+* Finally, use the configuration template below to start an Ignite cluster
++
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<?xml version="1.0" encoding="UTF-8"?>
+
+<beans xmlns="http://www.springframework.org/schema/beans"
+       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+       xmlns:util="http://www.springframework.org/schema/util"
+       xsi:schemaLocation="
+        http://www.springframework.org/schema/beans
+        http://www.springframework.org/schema/beans/spring-beans.xsd
+        http://www.springframework.org/schema/util
+        http://www.springframework.org/schema/util/spring-util.xsd">
+  <bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+
+    <!-- Enabling ODBC. -->
+    <property name="odbcConfiguration">
+      <bean class="org.apache.ignite.configuration.OdbcConfiguration"></bean>
+    </property>
+
+    <!-- Configuring cache. -->
+    <property name="cacheConfiguration">
+      <list>
+        <bean class="org.apache.ignite.configuration.CacheConfiguration">
+          <property name="name" value="Person"/>
+          <property name="cacheMode" value="PARTITIONED"/>
+          <property name="atomicityMode" value="TRANSACTIONAL"/>
+          <property name="writeSynchronizationMode" value="FULL_SYNC"/>
+
+          <property name="queryEntities">
+            <list>
+              <bean class="org.apache.ignite.cache.QueryEntity">
+                <property name="keyType" value="java.lang.Long"/>
+                <property name="valueType" value="Person"/>
+
+                <property name="fields">
+                  <map>
+                    <entry key="firstName" value="java.lang.String"/>
+                    <entry key="lastName" value="java.lang.String"/>
+                    <entry key="resume" value="java.lang.String"/>
+                    <entry key="salary" value="java.lang.Integer"/>
+                  </map>
+                </property>
+
+                <property name="indexes">
+                  <list>
+                    <bean class="org.apache.ignite.cache.QueryIndex">
+                      <constructor-arg value="salary"/>
+                    </bean>
+                  </list>
+                </property>
+              </bean>
+            </list>
+          </property>
+        </bean>
+      </list>
+    </property>
+  </bean>
+</beans>
+----
+--
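+
+As an alternative to the `QueryEntity`-based configuration above, the table and index could be created with Ignite DDL commands instead. The following is a minimal, illustrative sketch from the Java side; the exact statements and the `utilityCache` name are assumptions mirroring the cache above:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.query.SqlFieldsQuery;
+
+Ignite ignite = Ignition.start();
+
+// Any cache instance can be used to execute DDL statements.
+IgniteCache<Object, Object> cache = ignite.getOrCreateCache("utilityCache");
+
+cache.query(new SqlFieldsQuery(
+    "CREATE TABLE IF NOT EXISTS Person (" +
+    "id LONG PRIMARY KEY, firstName VARCHAR, lastName VARCHAR, " +
+    "resume VARCHAR, salary INT)")).getAll();
+
+cache.query(new SqlFieldsQuery(
+    "CREATE INDEX IF NOT EXISTS personSalaryIdx ON Person (salary)")).getAll();
+----
+--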
+
+== Connecting From PHP to Ignite Cluster
+
+To connect to Ignite from the PHP PDO side, the DSN has to be properly configured for Ignite.
+Refer to the link:SQL/ODBC/connection-string-dsn#configuring-dsn[Configuring DSN] documentation page for details.
+
+In the examples below, it is assumed that the DSN name is "LocalApacheIgniteDSN". Once everything is configured, it's
+time to connect to the Apache Ignite cluster from a PHP PDO application and execute a number of
+queries like the ones shown below.
+[tabs]
+--
+tab:Insert[]
+[source,php]
+----
+<?php
+try {
+    // Connecting to Ignite using pre-configured DSN.
+    $dbh = new PDO('odbc:LocalApacheIgniteDSN');
+
+    // Changing PDO error mode.
+    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
+
+    // Preparing query.
+    $dbs = $dbh->prepare('INSERT INTO Person (_key, firstName, lastName, resume, salary)
+        VALUES (?, ?, ?, ?, ?)');
+
+    // Declaring parameters.
+    $key = 777;
+    $firstName = "James";
+    $lastName = "Bond";
+    $resume = "Secret Service agent";
+    $salary = 65000;
+
+    // Binding parameters.
+    $dbs->bindParam(1, $key);
+    $dbs->bindParam(2, $firstName);
+    $dbs->bindParam(3, $lastName);
+    $dbs->bindParam(4, $resume);
+    $dbs->bindParam(5, $salary);
+
+    // Executing the query.
+    $dbs->execute();
+
+} catch (PDOException $e) {
+    print "Error!: " . $e->getMessage() . "\n";
+    die();
+}
+?>
+----
+tab:Update[]
+[source,php]
+----
+<?php
+try {
+    // Connecting to Ignite using pre-configured DSN.
+    $dbh = new PDO('odbc:LocalApacheIgniteDSN');
+
+    // Changing PDO error mode.
+    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
+
+    // Executing the query. The salary field is an indexed field.
+    $dbh->query('UPDATE Person SET salary = 42000 WHERE salary > 50000');
+
+} catch (PDOException $e) {
+    print "Error!: " . $e->getMessage() . "\n";
+    die();
+}
+?>
+----
+tab:Select[]
+[source,php]
+----
+<?php
+try {
+    // Connecting to Ignite using pre-configured DSN.
+    $dbh = new PDO('odbc:LocalApacheIgniteDSN');
+
+    // Changing PDO error mode.
+    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
+
+    // Executing the query and getting a result set. The salary field is an indexed field.
+    $res = $dbh->query('SELECT firstName, lastName, resume, salary from Person
+        WHERE salary > 12000');
+
+    if ($res == FALSE)
+        print_r("Exception");
+
+    // Printing results.
+    foreach($res as $row) {
+        print_r($row);
+    }
+
+} catch (PDOException $e) {
+    print "Error!: " . $e->getMessage() . "\n";
+    die();
+}
+?>
+----
+tab:Delete[]
+[source,php]
+----
+<?php
+try {
+    // Connecting to Ignite using pre-configured DSN.
+    $dbh = new PDO('odbc:LocalApacheIgniteDSN');
+
+    // Changing PDO error mode.
+    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
+
+    // Performing query. Both firstName and lastName are non indexed fields.
+    $dbh->query('DELETE FROM Person WHERE firstName = \'James\' and lastName = \'Bond\'');
+
+} catch (PDOException $e) {
+    print "Error!: " . $e->getMessage() . "\n";
+    die();
+}
+?>
+----
+--
+
diff --git a/docs/_docs/extensions-and-integrations/spring/spring-boot.adoc b/docs/_docs/extensions-and-integrations/spring/spring-boot.adoc
new file mode 100644
index 0000000..b5022eb
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/spring/spring-boot.adoc
@@ -0,0 +1,210 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Apache Ignite With Spring Boot
+
+== Overview
+
+https://spring.io/projects/spring-boot[Spring Boot, window="_blank"] is a widely used Java framework that makes it easy
+to create stand-alone Spring-based applications.
+
+Apache Ignite provides two extensions that automate Ignite configuration within the Spring Boot environment:
+
+* `ignite-spring-boot-autoconfigure-ext` - autoconfigures Ignite server and client nodes within Spring Boot.
+* `ignite-spring-boot-thin-client-autoconfigure-ext` - autoconfigures link:thin-clients/java-thin-client[Ignite Thin Client] with Spring Boot.
+
+== Autoconfiguration of Apache Ignite Servers and Clients
+
+You need to use the `ignite-spring-boot-autoconfigure-ext` extension to autoconfigure Ignite servers or clients (a.k.a. thick clients) with Spring Boot.
+
+The extension can be added with Maven as follows:
+
+[tabs]
+--
+tab:pom.xml[]
+[source,xml]
+----
+<dependency>
+  <groupId>org.apache.ignite</groupId>
+  <artifactId>ignite-spring-boot-autoconfigure-ext</artifactId>
+  <version>1.0.0</version>
+</dependency>
+----
+--
+
+Once added, Spring will create an Ignite instance on start-up automatically.
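+
+Once the node is autoconfigured, it can be injected into application components like any other Spring bean. The following is a minimal sketch, not part of the extension itself; the service and cache names are assumptions:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+import org.apache.ignite.Ignite;
+import org.springframework.stereotype.Service;
+
+@Service
+public class AccountService {
+    private final Ignite ignite;
+
+    // The autoconfigured Ignite node is injected by Spring.
+    public AccountService(Ignite ignite) {
+        this.ignite = ignite;
+    }
+
+    public void put(long id, double amount) {
+        ignite.getOrCreateCache("accounts").put(id, amount);
+    }
+}
+----
+--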
+
+=== Set Ignite Up Via Spring Boot Configuration
+
+You can use a regular Spring Boot configuration to set Ignite-specific settings. Use `ignite` as a prefix:
+
+[tabs]
+--
+tab:application.yml[]
+[source,yaml]
+----
+ignite:
+  igniteInstanceName: properties-instance-name
+  communicationSpi:
+    localPort: 5555
+  dataStorageConfiguration:
+    defaultDataRegionConfiguration:
+      initialSize: 10485760 #10MB
+    dataRegionConfigurations:
+      - name: my-dataregion
+        initialSize: 104857600 #100MB
+  cacheConfiguration:
+    - name: accounts
+      queryEntities:
+      - tableName: ACCOUNTS
+        keyFieldName: ID
+        keyType: java.lang.Long
+        valueType: java.lang.Object
+        fields:
+          ID: java.lang.Long
+          amount: java.lang.Double
+          updateDate: java.util.Date
+    - name: my-cache2
+----
+--
+
+=== Set Ignite Up Programmatically
+
+There are two ways to configure Ignite programmatically.
+
+**1. Create IgniteConfiguration Bean**
+
+Just create a method that returns an `IgniteConfiguration` bean; it will be used to initialize an Ignite node with the settings you specify:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+@Bean
+public IgniteConfiguration igniteConfiguration() {
+    // If you provide a whole IgniteConfiguration bean, the Spring Boot configuration properties will not be used.
+    IgniteConfiguration cfg = new IgniteConfiguration();
+    cfg.setIgniteInstanceName("my-ignite");
+    return cfg;
+}
+----
+--
+
+**2. Customize IgniteConfiguration Created With Spring Boot Configuration**
+
+If you want to customize the `IgniteConfiguration` that was initially created from the Spring Boot configuration file,
+provide an implementation of the `IgniteConfigurer` interface in your application context.
+
+First, `IgniteConfiguration` will be loaded from the Spring Boot configuration, and then that instance will be passed to the configurer for extra settings.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+@Bean
+public IgniteConfigurer nodeConfigurer() {
+    return cfg -> {
+        // Setting some property.
+        // Others will come from `application.yml`.
+        cfg.setIgniteInstanceName("my-ignite");
+    };
+}
+----
+--
+
+== Autoconfiguration of Apache Ignite Thin Client
+
+You need to use the `ignite-spring-boot-thin-client-autoconfigure-ext` extension to autoconfigure the Ignite Thin Client with Spring Boot.
+
+[tabs]
+--
+tab:pom.xml[]
+[source,xml]
+----
+<dependency>
+    <groupId>org.apache.ignite</groupId>
+    <artifactId>ignite-spring-boot-thin-client-autoconfigure-ext</artifactId>
+    <version>1.0.0</version>
+</dependency>
+----
+--
+
+Once added, Spring will automatically create an Ignite thin client connection on start-up.
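+
+As with the server extension, the autoconfigured `IgniteClient` can then be injected as a regular bean. A minimal sketch follows; the component and cache names are assumptions:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+import org.apache.ignite.client.IgniteClient;
+import org.springframework.stereotype.Service;
+
+@Service
+public class AccountClientService {
+    private final IgniteClient client;
+
+    // The autoconfigured thin client connection is injected by Spring.
+    public AccountClientService(IgniteClient client) {
+        this.client = client;
+    }
+
+    public Object get(long id) {
+        return client.getOrCreateCache("accounts").get(id);
+    }
+}
+----
+--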
+
+=== Set Thin Client Up Via Spring Boot Configuration
+
+You can use a regular Spring Boot configuration to configure the `IgniteClient` object. Use `ignite-client` as a prefix:
+
+[tabs]
+--
+tab:application.yml[]
+[source,yaml]
+----
+ignite-client:
+  addresses: 127.0.0.1:10800 # this is a mandatory property!
+  timeout: 10000
+  tcpNoDelay: false
+----
+--
+
+=== Set Thin Client Up Programmatically
+
+There are two ways to configure the `IgniteClient` object programmatically.
+
+**1. Create ClientConfiguration bean**
+
+Just create a method that returns a `ClientConfiguration` bean. The `IgniteClient` object will use that bean upon startup:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+@Bean
+public ClientConfiguration clientConfiguration() {
+    // If you provide a whole ClientConfiguration bean then configuration properties will not be used.
+    ClientConfiguration cfg = new ClientConfiguration();
+    cfg.setAddresses("127.0.0.1:10800");
+    return cfg;
+}
+----
+--
+
+**2. Customize ClientConfiguration Created With Spring Boot Configuration**
+
+If you want to customize the `ClientConfiguration` bean created from the Spring Boot configuration file, provide an
+implementation of the `IgniteClientConfigurer` interface in your application context.
+
+First, `ClientConfiguration` will be loaded from the Spring Boot configuration, and then the instance will be passed to the configurer.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+@Bean
+IgniteClientConfigurer configurer() {
+    // Setting some property.
+    // Others will come from `application.yml`.
+    return cfg -> cfg.setSendBufferSize(64 * 1024);
+}
+----
+--
+
+== Examples
+
+Refer to several available https://github.com/apache/ignite-extensions/tree/master/modules/spring-boot-autoconfigure-ext/examples/main[examples, window=_blank]
+for more details.
diff --git a/docs/_docs/extensions-and-integrations/spring/spring-caching.adoc b/docs/_docs/extensions-and-integrations/spring/spring-caching.adoc
new file mode 100644
index 0000000..bc57399
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/spring/spring-caching.adoc
@@ -0,0 +1,232 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Using Spring Cache With Apache Ignite
+
+== Overview
+
+Ignite is shipped with `SpringCacheManager` - an implementation of http://docs.spring.io/spring/docs/current/spring-framework-reference/html/cache.html[Spring Cache Abstraction, window=_blank].
+It provides an annotation-based way to enable caching for Java methods so that the result of a method execution is stored
+in an Ignite cache. Later, if the same method is called with the same set of parameter values, the result will be retrieved
+from the cache instead of actually executing the method.
+
+== Enabling Ignite for Spring Caching
+
+Only two simple steps are required to plug an Ignite cache into your Spring-based application:
+
+* Start an Ignite node with proper configuration in embedded mode (i.e., in the same JVM where the application is running). It can already have predefined caches, but it's not required - caches will be created automatically on first access if needed.
+* Configure `SpringCacheManager` as the cache manager in the Spring application context.
+
+The embedded node can be started by `SpringCacheManager` itself. In this case you will need to provide a path to either
+the Ignite configuration XML file or `IgniteConfiguration` bean via `configurationPath` or `configuration`
+properties respectively (see examples below). Note that setting both is illegal and results in `IllegalArgumentException`.
+
+[tabs]
+--
+tab:configuration path[]
+[source,xml]
+----
+<beans xmlns="http://www.springframework.org/schema/beans"
+       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+       xmlns:cache="http://www.springframework.org/schema/cache"
+       xsi:schemaLocation="
+         http://www.springframework.org/schema/beans
+         http://www.springframework.org/schema/beans/spring-beans.xsd
+         http://www.springframework.org/schema/cache
+         http://www.springframework.org/schema/cache/spring-cache.xsd">
+    <!-- Provide configuration file path. -->
+    <bean id="cacheManager" class="org.apache.ignite.cache.spring.SpringCacheManager">
+        <property name="configurationPath" value="examples/config/spring-cache.xml"/>
+    </bean>
+
+    <!-- Enable annotation-driven caching. -->
+    <cache:annotation-driven/>
+</beans>
+----
+tab:configuration bean[]
+[source,xml]
+----
+<beans xmlns="http://www.springframework.org/schema/beans"
+       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+       xmlns:cache="http://www.springframework.org/schema/cache"
+       xsi:schemaLocation="
+         http://www.springframework.org/schema/beans
+         http://www.springframework.org/schema/beans/spring-beans.xsd
+         http://www.springframework.org/schema/cache
+         http://www.springframework.org/schema/cache/spring-cache.xsd">
+    <!-- Provide configuration bean. -->
+    <bean id="cacheManager" class="org.apache.ignite.cache.spring.SpringCacheManager">
+        <property name="configuration">
+            <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+                 ...
+            </bean>
+        </property>
+    </bean>
+
+    <!-- Enable annotation-driven caching. -->
+    <cache:annotation-driven/>
+</beans>
+----
+
+--
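+
+If you prefer Java-based Spring configuration, an equivalent setup might look like the sketch below. This is an illustration assuming Spring's standard `@EnableCaching` annotation; the configuration class name is arbitrary:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+import org.apache.ignite.cache.spring.SpringCacheManager;
+import org.springframework.cache.annotation.EnableCaching;
+import org.springframework.context.annotation.Bean;
+import org.springframework.context.annotation.Configuration;
+
+@Configuration
+@EnableCaching
+public class CacheConfig {
+    // The cache manager starts an embedded Ignite node
+    // from the given Spring XML configuration file.
+    @Bean
+    public SpringCacheManager cacheManager() {
+        SpringCacheManager mgr = new SpringCacheManager();
+        mgr.setConfigurationPath("examples/config/spring-cache.xml");
+        return mgr;
+    }
+}
+----
+--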
+
+It's possible that you already have an Ignite node running when the cache manager is initialized (e.g., it was started using
+`ServletContextListenerStartup`). In this case, simply provide the grid name via the `gridName` property.
+Note that if you don't set the grid name, the cache manager will try to use the default Ignite instance
+(the one with the `null` name). Here is an example:
+
+[tabs]
+--
+tab:Using an already started Ignite instance[]
+[source,xml]
+----
+<beans xmlns="http://www.springframework.org/schema/beans"
+       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+       xmlns:cache="http://www.springframework.org/schema/cache"
+       xsi:schemaLocation="
+         http://www.springframework.org/schema/beans
+         http://www.springframework.org/schema/beans/spring-beans.xsd
+         http://www.springframework.org/schema/cache
+         http://www.springframework.org/schema/cache/spring-cache.xsd">
+    <!-- Provide grid name. -->
+    <bean id="cacheManager" class="org.apache.ignite.cache.spring.SpringCacheManager">
+        <property name="gridName" value="myGrid"/>
+    </bean>
+
+    <!-- Enable annotation-driven caching. -->
+    <cache:annotation-driven/>
+</beans>
+----
+--
+
+[NOTE]
+====
+[discrete]
+Keep in mind that the node started inside your application is an entry point to the whole topology it connects to.
+You can start as many remote standalone nodes as you need and all these nodes will participate in caching the data.
+====
+
+== Dynamic Caches
+
+While you can have all required caches predefined in the Ignite configuration, it's not required. If Spring wants to use a
+cache that doesn't exist, `SpringCacheManager` will automatically create it.
+
+Unless otherwise specified, a new cache will be created with all defaults. To customize it, you can provide a configuration
+template via the `dynamicCacheConfiguration` property. For example, if you want to use `REPLICATED` caches instead of
+`PARTITIONED`, you should configure `SpringCacheManager` like this:
+
+[tabs]
+--
+tab:Dynamic cache configuration[]
+[source,xml]
+----
+<bean id="cacheManager" class="org.apache.ignite.cache.spring.SpringCacheManager">
+    ...
+
+    <property name="dynamicCacheConfiguration">
+        <bean class="org.apache.ignite.configuration.CacheConfiguration">
+            <property name="cacheMode" value="REPLICATED"/>
+        </bean>
+    </property>
+</bean>
+----
+--
+
+You can also utilize near caches on the client side. To achieve this, simply provide a near cache configuration via the
+`dynamicNearCacheConfiguration` property. By default, a near cache is not created. Here is an example:
+
+[tabs]
+--
+tab:Dynamic near cache configuration[]
+[source,xml]
+----
+<bean id="cacheManager" class="org.apache.ignite.cache.spring.SpringCacheManager">
+    ...
+
+    <property name="dynamicNearCacheConfiguration">
+        <bean class="org.apache.ignite.configuration.NearCacheConfiguration">
+            <property name="nearStartSize" value="1000"/>
+        </bean>
+    </property>
+</bean>
+----
+--
+
+== Example
+
+Once you have added `SpringCacheManager` to your Spring application context, you can enable caching for any Java method by simply attaching an annotation to it.
+
+Usually, you would use caching for heavy operations, like database access. For example, let's assume you have a DAO class with
+`averageSalary(...)` method that calculates the average salary of all employees in an organization. You can use `@Cacheable`
+annotation to enable caching for this method:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+private JdbcTemplate jdbc;
+
+@Cacheable("averageSalary")
+public long averageSalary(int organizationId) {
+    String sql =
+        "SELECT AVG(e.salary) " +
+        "FROM Employee e " +
+        "WHERE e.organizationId = ?";
+
+    return jdbc.queryForObject(sql, Long.class, organizationId);
+}
+----
+--
+
+When this method is called for the first time, `SpringCacheManager` will automatically create an `averageSalary` cache.
+It will also look up the pre-calculated average value in this cache and return it right away if it's there. If the average
+for this organization has not been calculated yet, the method will be called and the result will be stored in the cache. So the next
+time you request the average salary for this organization, you will not need to query the database.
+
+If the salary of one of the employees changes, you may want to remove the average value for the organization this
+employee belongs to, because otherwise the `averageSalary(...)` method will return an obsolete cached result. This can be
+achieved by attaching the `@CacheEvict` annotation to a method that updates the employee's salary:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+private JdbcTemplate jdbc;
+
+@CacheEvict(value = "averageSalary", key = "#e.organizationId")
+public void updateSalary(Employee e) {
+    String sql =
+        "UPDATE Employee " +
+        "SET salary = ? " +
+        "WHERE id = ?";
+
+    jdbc.update(sql, e.getSalary(), e.getId());
+}
+----
+--
+
+After this method is called, the average value for the provided employee's organization will be evicted from the `averageSalary` cache.
+This forces `averageSalary(...)` to recalculate the value the next time it's called.
+
+[NOTE]
+====
+[discrete]
+Note that this method receives an employee as a parameter, while average values are saved in the cache by `organizationId`.
+To explicitly specify what is used as the cache key, we used the `key` parameter of the annotation and the Spring Expression Language.
+
+The `#e.organizationId` expression means that we need to extract the value of the `organizationId` property from the `e` variable.
+Essentially, the `getOrganizationId()` method will be called on the provided employee object and the returned value will be used as the cache key.
+====
diff --git a/docs/_docs/extensions-and-integrations/spring/spring-data.adoc b/docs/_docs/extensions-and-integrations/spring/spring-data.adoc
new file mode 100644
index 0000000..8216a59
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/spring/spring-data.adoc
@@ -0,0 +1,234 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Apache Ignite With Spring Data
+
+== Overview
+
+The Spring Data Framework provides a unified, widely used API that abstracts the underlying data storage from the
+application layer. Spring Data helps you avoid lock-in to a specific database vendor, making it easy to switch from one
+database to another with minimal effort. Apache Ignite integrates with Spring Data by implementing the Spring Data `CrudRepository` interface.
+
+== Maven Configuration
+
+The easiest way to start working with Apache Ignite's Spring Data repository is by adding the following Maven dependency
+to an application's `pom.xml` file:
+
+[tabs]
+--
+tab:pom.xml[]
+[source,xml]
+----
+<dependency>
+    <groupId>org.apache.ignite</groupId>
+    <artifactId>ignite-spring-data_2.2</artifactId>
+    <version>{ignite.version}</version>
+</dependency>
+----
+--
+
+[NOTE]
+====
+If your Spring Data version is earlier than 2.2, set `ignite-spring-data_2.0`
+or `ignite-spring-data` as the `artifactId` in the pom.xml configuration.
+====
+
+== Apache Ignite Repository
+
+Apache Ignite introduces a special `IgniteRepository` interface that extends the default `CrudRepository`. This interface
+should be extended by all custom Spring Data repositories that need to store and query data located in an Apache Ignite cluster.
+
+For instance, let's create the first custom repository named `PersonRepository`:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+@RepositoryConfig(cacheName = "PersonCache")
+public interface PersonRepository extends IgniteRepository<Person, Long> {
+    /**
+     * Gets all the persons with the given name.
+     * @param name Person name.
+     * @return A list of Persons with the given first name.
+     */
+    public List<Person> findByFirstName(String name);
+
+    /**
+     * Returns top Person with the specified surname.
+     * @param name Person surname.
+     * @return Person that satisfies the query.
+     */
+    public Cache.Entry<Long, Person> findTopByLastNameLike(String name);
+
+    /**
+     * Gets the ids of all the Persons satisfying the custom query from the {@link Query} annotation.
+     *
+     * @param orgId Query parameter.
+     * @param pageable Pageable interface.
+     * @return A list of Persons' ids.
+     */
+    @Query("SELECT id FROM Person WHERE orgId > ?")
+    public List<Long> selectId(long orgId, Pageable pageable);
+}
+----
+--
+
+* The `@RepositoryConfig` annotation should be specified to map a repository to a distributed cache. In the above example, `PersonRepository` is mapped to `PersonCache`.
+* Signatures of custom methods like `findByFirstName(name)` and `findTopByLastNameLike(name)` are automatically processed and turned
+into SQL queries when the methods are executed. In addition, the `@Query(queryString)` annotation can be used if a concrete SQL
+query needs to be executed as a result of a method call.
+
+
+[CAUTION]
+====
+[discrete]
+=== Unsupported CRUD Operations
+
+Some operations of the `CrudRepository` interface are not currently supported. These are the operations that do not require providing the key as a parameter:
+
+* save(S entity)
+* save(Iterable<S> entities)
+* delete(T entity)
+* delete(Iterable<? extends T> entities)
+
+Instead of these operations, you can use Ignite-specific counterparts available via the `IgniteRepository` interface:
+
+* save(ID key, S entity)
+* save(Map<ID, S> entities)
+* deleteAll(Iterable<ID> ids)
+
+====
+
+== Spring Data and Apache Ignite Configuration
+
+
+To enable Apache Ignite backed repositories in Spring Data, mark an application configuration with `@EnableIgniteRepositories`
+annotation, as shown below:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+@Configuration
+@EnableIgniteRepositories
+public class SpringAppCfg {
+    /**
+     * Creates an Apache Ignite instance bean. The bean will be passed
+     * to IgniteRepositoryFactoryBean to initialize all Ignite-based Spring Data repositories and connect to the cluster.
+     */
+    @Bean
+    public Ignite igniteInstance() {
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Setting some custom name for the node.
+        cfg.setIgniteInstanceName("springDataNode");
+
+        // Enabling peer-class loading feature.
+        cfg.setPeerClassLoadingEnabled(true);
+
+        // Defining and creating a new cache to be used by Ignite Spring Data
+        // repository.
+        CacheConfiguration ccfg = new CacheConfiguration("PersonCache");
+
+        // Setting SQL schema for the cache.
+        ccfg.setIndexedTypes(Long.class, Person.class);
+
+        cfg.setCacheConfiguration(ccfg);
+
+        return Ignition.start(cfg);
+    }
+}
+----
+--
+
+The configuration has to instantiate an Apache Ignite bean (node) that will be passed to `IgniteRepositoryFactoryBean`
+and used by all the Apache Ignite repositories to connect to the cluster.
+
+In the example above, the bean is initialized directly by the application and is named `igniteInstance`.
+Alternatively, either of the following beans can be registered in your configuration, and an Apache Ignite node will be started automatically (see the sketch after this list):
+
+* An `IgniteConfiguration` object exposed as a bean named `igniteCfg`.
+* A path to Apache Ignite's Spring XML configuration exposed as a bean named `igniteSpringCfgPath`.
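+
+Below is a minimal sketch of the `igniteCfg` alternative. It is an illustration reusing the cache setup from the sample above; with this variant, the node is started automatically rather than by the application:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+@Configuration
+@EnableIgniteRepositories
+public class SpringAppCfg {
+    /** An IgniteConfiguration bean named "igniteCfg"; the Ignite node is started automatically from it. */
+    @Bean
+    public IgniteConfiguration igniteCfg() {
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Defining and creating a new cache to be used by the Ignite
+        // Spring Data repository.
+        CacheConfiguration ccfg = new CacheConfiguration("PersonCache");
+        ccfg.setIndexedTypes(Long.class, Person.class);
+
+        cfg.setCacheConfiguration(ccfg);
+
+        return cfg;
+    }
+}
+----
+--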
+
+== Using Apache Ignite Repositories
+
+Once all the configurations and repositories are ready to be used, you can register the configuration in an application context and get a reference to the repository.
+The following example shows how to register `SpringAppCfg` - our sample configuration from the section above - in an application context and get a reference to `PersonRepository`:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
+
+// Explicitly registering the Spring configuration.
+ctx.register(SpringAppCfg.class);
+
+ctx.refresh();
+
+// Getting a reference to PersonRepository.
+PersonRepository repo = ctx.getBean(PersonRepository.class);
+----
+--
+
+Now, you can put data in Ignite using Spring Data API:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+TreeMap<Long, Person> persons = new TreeMap<>();
+
+persons.put(1L, new Person(1L, 2000L, "John", "Smith", 15000, "Worked for Apple"));
+
+persons.put(2L, new Person(2L, 2000L, "Brad", "Pitt", 16000, "Worked for Oracle"));
+
+persons.put(3L, new Person(3L, 1000L, "Mark", "Tomson", 10000, "Worked for Sun"));
+
+// Adding data into the repository.
+repo.save(persons);
+----
+--
+
+To query the data, we can use basic CRUD operations or methods that will be automatically turned into Apache Ignite SQL queries:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+List<Person> persons = repo.findByFirstName("John");
+
+for (Person person: persons)
+    System.out.println("   >>>   " + person);
+
+Cache.Entry<Long, Person> topPerson = repo.findTopByLastNameLike("Smith");
+
+System.out.println("\n>>> Top Person with surname 'Smith': " +
+        topPerson.getValue());
+----
+--
+
+== Example
+
+The complete example is available on https://github.com/apache/ignite-extensions/tree/master/modules/spring-data-2.0-ext/examples/main[GitHub, window=_blank].
+
+== Tutorial
+
+Follow the tutorial that shows how to build a https://www.gridgain.com/docs/tutorials/spring/spring-ignite-tutorial[RESTful web service with Apache Ignite and Spring Data, window=_blank].
+
diff --git a/docs/_docs/extensions-and-integrations/streaming/camel-streamer.adoc b/docs/_docs/extensions-and-integrations/streaming/camel-streamer.adoc
new file mode 100644
index 0000000..a421293
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/streaming/camel-streamer.adoc
@@ -0,0 +1,153 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Apache Camel Streamer
+
+== Overview
+
+This documentation page focuses on the Apache Camel streamer, which can be thought of as a universal streamer because it
+allows you to consume from any technology or protocol supported by Camel into an Ignite cache.
+
+image::images/integrations/camel-streamer.png[Camel Streamer]
+
+With this streamer, you can ingest entries straight into an Ignite cache based on:
+
+* Calls received on a Web Service (SOAP or REST), by extracting the body or headers.
+* Listening on a TCP or UDP channel for messages.
+* The content of files received via FTP or written to the local filesystem.
+* Email messages received via POP3 or IMAP.
+* A MongoDB tailable cursor.
+* An AWS SQS queue.
+* And many others.
+
+This streamer supports two modes of ingestion: **direct ingestion** and **mediated ingestion**.
+
+[NOTE]
+====
+[discrete]
+=== The Ignite Camel Component
+There is also the https://camel.apache.org/components/latest/ignite-summary.html[camel-ignite, window=_blank] component, if what you are looking for is
+to interact with Ignite Caches, Compute, Events, Messaging, etc. from within a Camel route.
+====
+
+== Maven Dependency
+
+To make use of the `ignite-camel-ext` streamer, you need to add the following dependency:
+
+[tabs]
+--
+tab:pom.xml[]
+[source,xml]
+----
+<dependency>
+    <groupId>org.apache.ignite</groupId>
+    <artifactId>ignite-camel-ext</artifactId>
+    <version>${ignite-camel-ext.version}</version>
+</dependency>
+----
+--
+
+It will also pull in `camel-core` as a transitive dependency.
+
+== Direct Ingestion
+
+**Direct ingestion** allows you to consume from any Camel endpoint straight into Ignite with the help of a
+tuple extractor.
+
+Here is a code sample:
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+// Start Apache Ignite.
+Ignite ignite = Ignition.start();
+
+// Create a streamer pipe which ingests into the 'mycache' cache.
+IgniteDataStreamer<String, String> pipe = ignite.dataStreamer("mycache");
+
+// Create a Camel streamer and connect it.
+CamelStreamer<String, String> streamer = new CamelStreamer<>();
+streamer.setIgnite(ignite);
+streamer.setStreamer(pipe);
+
+// This endpoint starts a Jetty server and consumes from all network interfaces on port 8080 and context path /ignite.
+streamer.setEndpointUri("jetty:http://0.0.0.0:8080/ignite?httpMethodRestrict=POST");
+
+// This is the tuple extractor. We'll assume each message contains only one tuple.
+// If your message contains multiple tuples, use a StreamMultipleTupleExtractor.
+// The Tuple Extractor receives the Camel Exchange and returns a Map.Entry<?,?> with the key and value.
+streamer.setSingleTupleExtractor(new StreamSingleTupleExtractor<Exchange, String, String>() {
+    @Override public Map.Entry<String, String> extract(Exchange exchange) {
+        String stationId = exchange.getIn().getHeader("X-StationId", String.class);
+        String temperature = exchange.getIn().getBody(String.class);
+        return new GridMapEntry<>(stationId, temperature);
+    }
+});
+
+// Start the streamer.
+streamer.start();
+----
+--
+
+== Mediated Ingestion
+
+For more sophisticated scenarios, you can also create a Camel route that performs complex processing on incoming messages (e.g., transformations, validations, splitting, aggregating, idempotency, resequencing, enrichment) and **ingest only the result into the Ignite cache**.
+
+We call this **mediated ingestion**.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+// Create a CamelContext with a custom route that will:
+//  (1) consume from our Jetty endpoint,
+//  (2) transform incoming JSON into a Java object with Jackson,
+//  (3) validate the object with JSR 303 Bean Validation, and
+//  (4) dispatch to the direct:ignite.ingest endpoint, which the streamer consumes from.
+CamelContext context = new DefaultCamelContext();
+context.addRoutes(new RouteBuilder() {
+    @Override
+    public void configure() throws Exception {
+        from("jetty:http://0.0.0.0:8080/ignite?httpMethodRestrict=POST")
+            .unmarshal().json(JsonLibrary.Jackson)
+            .to("bean-validator:validate")
+            .to("direct:ignite.ingest");
+    }
+});
+
+// Remember our Streamer is now consuming from the Direct endpoint above.
+streamer.setEndpointUri("direct:ignite.ingest");
+----
+--
+
+== Setting a Response
+
+By default, the response sent back to the caller (if it is a synchronous endpoint) is simply an echo of the original request.
+If you want to customize the response, set a Camel `Processor` as a `responseProcessor`:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+streamer.setResponseProcessor(new Processor() {
+    @Override public void process(Exchange exchange) throws Exception {
+        exchange.getOut().setHeader(Exchange.HTTP_RESPONSE_CODE, 200);
+        exchange.getOut().setBody("OK");
+    }
+});
+----
+--
diff --git a/docs/_docs/extensions-and-integrations/streaming/flink-streamer.adoc b/docs/_docs/extensions-and-integrations/streaming/flink-streamer.adoc
new file mode 100644
index 0000000..92ab398
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/streaming/flink-streamer.adoc
@@ -0,0 +1,78 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Apache Flink Streamer
+
+The Apache Ignite Flink Sink module is a streaming connector that emits Flink data into an Ignite cache. When creating
+a sink, an Ignite cache name and an Ignite grid configuration file have to be provided.
+
+You can start transferring data to an Ignite cache with the following steps.
+
+. Import the Ignite Flink Sink module in your Maven project.
+If you are using Maven to manage the dependencies of your project, you can add the Flink module
+dependency like this (replace `${ignite-flink-ext.version}` with the actual Ignite Flink Extension version you are
+interested in):
++
+[tabs]
+--
+tab:pom.xml[]
+[source,xml]
+----
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+                        http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    ...
+    <dependencies>
+        ...
+        <dependency>
+            <groupId>org.apache.ignite</groupId>
+            <artifactId>ignite-flink-ext</artifactId>
+            <version>${ignite-flink-ext.version}</version>
+        </dependency>
+        ...
+    </dependencies>
+    ...
+</project>
+----
+--
+. Create an Ignite configuration file and make sure it is accessible from the sink.
+. Make sure your data input to the sink is specified and start the sink.
++
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+IgniteSink igniteSink = new IgniteSink("myCache", "ignite.xml");
+
+igniteSink.setAllowOverwrite(true);
+igniteSink.setAutoFlushFrequency(10);
+igniteSink.start();
+
+DataStream<Map> stream = ...;
+
+// Sink data into the grid.
+stream.addSink(igniteSink);
+
+try {
+    env.execute();
+} catch (Exception e) {
+    // Exception handling.
+} finally {
+    igniteSink.stop();
+}
+----
+--
+
+Refer to the Javadocs of the `ignite-flink-ext` module for more info on the available options.
diff --git a/docs/_docs/extensions-and-integrations/streaming/flume-sink.adoc b/docs/_docs/extensions-and-integrations/streaming/flume-sink.adoc
new file mode 100644
index 0000000..3697c7c
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/streaming/flume-sink.adoc
@@ -0,0 +1,79 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Apache Flume Sink
+
+== Overview
+
+Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large
+amounts of log data (https://github.com/apache/flume).
+
+`IgniteSink` is a Flume sink that extracts events from an associated Flume channel and injects them into an Ignite cache.
+
+`IgniteSink` and its dependencies have to be included in the agent's classpath, as described in the following subsection,
+before starting the Flume agent.
+
+== Setting Up
+
+. Create a transformer by implementing the `EventTransformer` interface (see the sketch after these steps).
+. Create an `ignite` directory inside the `plugins.d` directory, which is located in `$\{FLUME_HOME}`. If the `plugins.d` directory
+is not there, create it.
+. Build the transformer jar and copy it to `$\{FLUME_HOME}/plugins.d/ignite-sink/lib`.
+. Copy the other Ignite-related jar files from the Apache Ignite distribution to `$\{FLUME_HOME}/plugins.d/ignite-sink/libext` so that
+the layout looks as shown below.
++
+----
+plugins.d/
+`-- ignite
+ |-- lib
+ |   `-- ignite-flume-transformer-x.x.x.jar <-- your jar
+ `-- libext
+     |-- cache-api-1.0.0.jar
+     |-- ignite-core-x.x.x.jar
+     |-- ignite-flume-ext.x.x.x.jar <-- IgniteSink
+     |-- ignite-spring-x.x.x.jar
+     |-- spring-aop-4.1.0.RELEASE.jar
+     |-- spring-beans-4.1.0.RELEASE.jar
+     |-- spring-context-4.1.0.RELEASE.jar
+     |-- spring-core-4.1.0.RELEASE.jar
+     `-- spring-expression-4.1.0.RELEASE.jar
+----
+
+. In the Flume configuration file, specify the location of the Ignite configuration XML file with cache properties
+(see `flume/src/test/resources/example-ignite.xml` for a basic example), with the cache name specified for cache creation.
+Also specify the cache name (the same as in the Ignite configuration file), your `EventTransformer` implementation class, and,
+optionally, the batch size. All the properties are shown in the table below.
++
+[cols="20%,45%,35%",opts="header"]
+|===
+|Property Name |Description | Default Value
+|channel| | -
+|type| The component type name. Needs to be `org.apache.ignite.stream.flume.IgniteSink` | -
+|igniteCfg| Ignite configuration XML file | -
+|cacheName| Cache name. Same as in igniteCfg | -
+|eventTransformer| Your implementation of `org.apache.ignite.stream.flume.EventTransformer` | -
+|batchSize| Number of events to be written per transaction| 100
+|===
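+
+Below is a minimal sketch of an `EventTransformer` implementation. The class name matches the `my.company.MyEventTransformer` used in the configuration example that follows; the "key,value" event body format is an assumption for illustration:
+
+[source,java]
+----
+package my.company;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.flume.Event;
+import org.apache.ignite.stream.flume.EventTransformer;
+
+public class MyEventTransformer implements EventTransformer<Event, String, String> {
+    /** Turns each event body of the (assumed) form "key,value" into a cache entry. */
+    @Override public Map<String, String> transform(List<Event> events) {
+        Map<String, String> entries = new HashMap<>();
+
+        for (Event event : events) {
+            String[] tokens = new String(event.getBody()).split(",");
+
+            entries.put(tokens[0], tokens[1]);
+        }
+
+        return entries;
+    }
+}
+----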
+
+The sink configuration part of an agent named `a1` can look like this:
+
+----
+a1.sinks.k1.type = org.apache.ignite.stream.flume.IgniteSink
+a1.sinks.k1.igniteCfg = /some-path/ignite.xml
+a1.sinks.k1.cacheName = testCache
+a1.sinks.k1.eventTransformer = my.company.MyEventTransformer
+a1.sinks.k1.batchSize = 100
+----
+
+After specifying your source and channel (see Flume's docs), you are ready to run a Flume agent.
diff --git a/docs/_docs/extensions-and-integrations/streaming/jms-streamer.adoc b/docs/_docs/extensions-and-integrations/streaming/jms-streamer.adoc
new file mode 100644
index 0000000..b3f9be9
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/streaming/jms-streamer.adoc
@@ -0,0 +1,123 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= JMS Streamer
+
+== Overview
+
+Ignite offers a JMS Data Streamer to consume messages from JMS brokers, convert them into cache tuples, and insert them into Ignite.
+
+This data streamer supports the following features:
+
+* Consumes from queues or topics.
+* Supports durable subscriptions from topics.
+* Concurrent consumers are supported via the `threads` parameter.
+ ** When consuming from queues, this component starts as many `Session` objects as configured threads, each with its own `MessageListener` instance, therefore achieving _natural_ concurrency.
+ ** When consuming from topics, we cannot start multiple sessions, as that would lead to consuming duplicate messages. Therefore, concurrency is achieved in a _virtualized_ manner through an internal thread pool.
+* Transacted sessions are supported through the `transacted` parameter.
+* Batched consumption is possible via the `batched` parameter, which groups message reception within the scope of a local JMS transaction (XA is not supported). Depending on the broker, this technique can provide higher throughput, as it decreases the number of message acknowledgment round trips that are necessary, albeit at the expense of possible duplicate messages (especially if an incident occurs in the middle of a transaction). See the batching sketch after this list.
+ ** Batches are committed when the `batchClosureMillis` time has elapsed, or when a Session has received at least `batchClosureSize` messages.
+ ** Time-based closure fires with the specified frequency and applies to all ``Session``s in parallel.
+ ** Size-based closure applies to each individual `Session` (as transactions are `Session`-bound in JMS), so it fires when that `Session` has processed that many messages.
+ ** Both options are compatible with each other. You can disable either, but not both if batching is enabled.
+* Supports specifying the destination with implementation-specific `Destination` objects or with names.
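+
+A minimal sketch of the batching-related settings follows. This is an illustration; the setter names mirror the `transacted`, `batched`, `batchClosureMillis`, and `batchClosureSize` parameters described above:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+JmsStreamer<TextMessage, String, String> jmsStreamer = new JmsStreamer<>();
+
+// Group message reception into local JMS transactions.
+jmsStreamer.setTransacted(true);
+jmsStreamer.setBatched(true);
+
+// Commit a batch when 1 second has elapsed, or when a single Session
+// has received 50 messages, whichever happens first.
+jmsStreamer.setBatchClosureMillis(1000);
+jmsStreamer.setBatchClosureSize(50);
+----
+--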
+
+We have tested our implementation against http://activemq.apache.org[Apache ActiveMQ, window=_blank], but any JMS broker
+is supported as long as its client library implements the http://download.oracle.com/otndocs/jcp/7195-jms-1.1-fr-spec-oth-JSpec/[JMS 1.1 specification, window=_blank].
+
+== Instantiating JMS Streamer
+
+When you instantiate the JMS streamer, you need to concretize the following generic types:
+
+* `T extends Message` \=> the type of JMS `Message` this streamer will receive. If it can receive multiple, use the generic `Message` type.
+* `K` \=> the type of the cache key.
+* `V` \=> the type of the cache value.
+
+To configure the JMS streamer, you will need to provide the following compulsory properties:
+
+* `connectionFactory` \=> an instance of your `ConnectionFactory` duly configured as required by the broker. It can be a pooled `ConnectionFactory`.
+* `destination` or (`destinationName` and `destinationType`) \=> a `Destination` object (normally a broker-specific implementation of the JMS `Queue` or `Topic` interfaces), or the combination of a destination name (queue or topic name) and the type as a `Class` reference to either `Queue` or `Topic`. In the latter case, the streamer will use either `Session.createQueue(String)` or `Session.createTopic(String)` to get a hold of the destination.
+* `transformer` \=> an implementation of `MessageTransformer<T, K, V>` that digests a JMS message of type `T` and produces a `Map<K, V>` of cache entries to add. It can also return `null` or an empty `Map` to ignore the incoming message.
+
+== Example
+
+The example in this section populates a cache with `String` keys and `String` values, consuming `TextMessages` with this format:
+
+----
+raulk,Raul Kripalani
+dsetrakyan,Dmitriy Setrakyan
+sv,Sergi Vladykin
+gm,Gianfranco Murador
+----
+
+Here is the code:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+// create a data streamer
+IgniteDataStreamer<String, String> dataStreamer = ignite.dataStreamer("mycache");
+dataStreamer.allowOverwrite(true);
+
+// create a JMS streamer and plug the data streamer into it
+JmsStreamer<TextMessage, String, String> jmsStreamer = new JmsStreamer<>();
+jmsStreamer.setIgnite(ignite);
+jmsStreamer.setStreamer(dataStreamer);
+jmsStreamer.setConnectionFactory(connectionFactory);
+jmsStreamer.setDestination(destination);
+jmsStreamer.setTransacted(true);
+jmsStreamer.setTransformer(new MessageTransformer<TextMessage, String, String>() {
+    @Override
+    public Map<String, String> apply(TextMessage message) {
+        final Map<String, String> answer = new HashMap<>();
+        String text;
+        try {
+            text = message.getText();
+        }
+        catch (JMSException e) {
+            LOG.warn("Could not parse message.", e);
+            return Collections.emptyMap();
+        }
+        for (String s : text.split("\n")) {
+            String[] tokens = s.split(",");
+            answer.put(tokens[0], tokens[1]);
+        }
+        return answer;
+    }
+});
+
+jmsStreamer.start();
+
+// on application shutdown
+jmsStreamer.stop();
+dataStreamer.close();
+----
+--
+
+To use this component, you have to import the following module through your build system (Maven, Ivy, Gradle, sbt, etc.):
+
+[tabs]
+--
+tab:pom.xml[]
+[source,xml]
+----
+<dependency>
+    <groupId>org.apache.ignite</groupId>
+    <artifactId>ignite-jms11-ext</artifactId>
+    <version>${ignite-jms11-ext.version}</version>
+</dependency>
+----
+--
diff --git a/docs/_docs/extensions-and-integrations/streaming/kafka-streamer.adoc b/docs/_docs/extensions-and-integrations/streaming/kafka-streamer.adoc
new file mode 100644
index 0000000..a45fa4d
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/streaming/kafka-streamer.adoc
@@ -0,0 +1,221 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Apache Kafka Streamer
+
+== Overview
+
+The Apache Ignite Kafka Streamer module provides streaming from Kafka to an Ignite cache.
+Either of the following two methods can be used to achieve such streaming:
+
+* using Kafka Connect functionality with Ignite sink
+* importing the Kafka Streamer module in your Maven project and instantiating KafkaStreamer for data streaming
+
+== Streaming Data via Kafka Connect
+
+`IgniteSinkConnector` will help you export data from Kafka to Ignite cache by polling data from Kafka topics and writing
+it to your specified cache. The connector can be found in the `optional/ignite-kafka` module. It and its dependencies
+have to be on the classpath of a running Kafka instance, as described in the following subsection. _For more information
+on Kafka Connect, see http://kafka.apache.org/documentation.html#connect[Kafka Documentation, window=_blank]._
+
+=== Setting up and Running
+
+. Add the `IGNITE_HOME/libs/optional/ignite-kafka` module to the application classpath.
+
+. Prepare worker configurations, e.g.,
++
+[tabs]
+--
+tab:Configuration[]
+[source,yaml]
+----
+bootstrap.servers=localhost:9092
+
+key.converter=org.apache.kafka.connect.storage.StringConverter
+value.converter=org.apache.kafka.connect.storage.StringConverter
+key.converter.schemas.enable=false
+value.converter.schemas.enable=false
+
+internal.key.converter=org.apache.kafka.connect.storage.StringConverter
+internal.value.converter=org.apache.kafka.connect.storage.StringConverter
+internal.key.converter.schemas.enable=false
+internal.value.converter.schemas.enable=false
+
+offset.storage.file.filename=/tmp/connect.offsets
+offset.flush.interval.ms=10000
+----
+--
+
+. Prepare connector configurations, e.g.,
++
+[tabs]
+--
+tab:Configuration[]
+[source,yaml]
+----
+# connector
+name=my-ignite-connector
+connector.class=org.apache.ignite.stream.kafka.connect.IgniteSinkConnector
+tasks.max=2
+topics=someTopic1,someTopic2
+
+# cache
+cacheName=myCache
+cacheAllowOverwrite=true
+igniteCfg=/some-path/ignite.xml
+singleTupleExtractorCls=my.company.MyExtractor
+----
+--
++
+* `cacheName` is the name of the cache you specify in `/some-path/ignite.xml`, into which the data from `someTopic1,someTopic2`
+is pulled and stored.
+* `cacheAllowOverwrite` can be set to `true` if you want to enable overwriting existing values in the cache.
+* If you need to parse the incoming data and decide on your new key and value, implement the parsing as a `StreamSingleTupleExtractor` and specify it via `singleTupleExtractorCls` (see the sketch after these steps).
+* You can also set `cachePerNodeDataSize` and `cachePerNodeParOps` to adjust the per-node buffer size and the maximum number of parallel stream operations for a single node.
+
+. Start the connector, for instance, in standalone mode as follows:
++
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+bin/connect-standalone.sh myconfig/connect-standalone.properties myconfig/ignite-connector.properties
+----
+--
+
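+For illustration, a hypothetical `my.company.MyExtractor` might look like the sketch below. It assumes the connector
+hands the extractor Kafka Connect `SinkRecord` instances whose string value carries the data in `key,value` form:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+package my.company;
+
+import java.util.AbstractMap;
+import java.util.Map;
+import org.apache.ignite.stream.StreamSingleTupleExtractor;
+import org.apache.kafka.connect.sink.SinkRecord;
+
+public class MyExtractor implements StreamSingleTupleExtractor<SinkRecord, String, String> {
+    @Override public Map.Entry<String, String> extract(SinkRecord record) {
+        // Derive the cache key and value from the record value, assumed to be "key,value".
+        String[] parts = String.valueOf(record.value()).split(",");
+
+        return new AbstractMap.SimpleEntry<>(parts[0], parts[1]);
+    }
+}
+----
+--
+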
+=== Checking the Flow
+
+To perform a very basic functionality check, you can do the following:
+
+. Start Zookeeper
++
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+bin/zookeeper-server-start.sh config/zookeeper.properties
+----
+--
+. Start Kafka server
++
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+bin/kafka-server-start.sh config/server.properties
+----
+--
+. Provide some data input to the Kafka server
++
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --property parse.key=true --property key.separator=,
+k1,v1
+----
+--
+. Start the connector
++
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+bin/connect-standalone.sh myconfig/connect-standalone.properties myconfig/ignite-connector.properties
+----
+--
+. Check that the value is in the cache, for example, via the REST API:
++
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+http://node1:8080/ignite?cmd=size&cacheName=myCache
+----
+--
+
+== Streaming Data with Ignite Kafka Streamer Module
+
+If you are using Maven to manage the dependencies of your project, first add the Kafka Streamer module
+dependency like this (replace `${ignite-kafka-ext.version}` with the actual Ignite Kafka Extension version you are interested in):
+
+[tabs]
+--
+tab:pom.xml[]
+[source,xml]
+----
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+                        http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    ...
+    <dependencies>
+        ...
+        <dependency>
+            <groupId>org.apache.ignite</groupId>
+            <artifactId>ignite-kafka-ext</artifactId>
+            <version>${ignite-kafka-ext.version}</version>
+        </dependency>
+        ...
+    </dependencies>
+    ...
+</project>
+----
+--
+
+Given a cache with `String` keys and `String` values, the streamer can be started as follows:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+KafkaStreamer<String, String, String> kafkaStreamer = new KafkaStreamer<>();
+
+IgniteDataStreamer<String, String> stmr = ignite.dataStreamer("myCache");
+
+// allow overwriting cache data
+stmr.allowOverwrite(true);
+
+kafkaStreamer.setIgnite(ignite);
+kafkaStreamer.setStreamer(stmr);
+
+// set the topic
+kafkaStreamer.setTopic(someKafkaTopic);
+
+// set the number of threads to process Kafka streams
+kafkaStreamer.setThreads(4);
+
+// set Kafka consumer configurations
+kafkaStreamer.setConsumerConfig(kafkaConsumerConfig);
+
+// set extractor
+kafkaStreamer.setSingleTupleExtractor(strExtractor);
+
+kafkaStreamer.start();
+
+...
+
+// stop on shutdown
+kafkaStreamer.stop();
+
+stmr.close();
+----
+--
+
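+The `kafkaConsumerConfig` object above holds regular Kafka consumer properties. A minimal sketch of how it could be
+built (assuming `setConsumerConfig` accepts a standard `java.util.Properties` object; adjust the values for your deployment):
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+Properties kafkaConsumerConfig = new Properties();
+
+// Kafka broker(s) and consumer group to read from.
+kafkaConsumerConfig.put("bootstrap.servers", "localhost:9092");
+kafkaConsumerConfig.put("group.id", "ignite-streamer-group");
+
+// Deserializers matching the String keys and values streamed into the cache.
+kafkaConsumerConfig.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+kafkaConsumerConfig.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+----
+--
+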
+For detailed information on Kafka consumer properties, refer to the http://kafka.apache.org/documentation.html[Kafka documentation, window=_blank].
diff --git a/docs/_docs/extensions-and-integrations/streaming/mqtt-streamer.adoc b/docs/_docs/extensions-and-integrations/streaming/mqtt-streamer.adoc
new file mode 100644
index 0000000..1339c97
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/streaming/mqtt-streamer.adoc
@@ -0,0 +1,76 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= MQTT Streamer
+
+== Overview
+
+This streamer consumes from an MQTT topic and feeds key-value pairs into an `IgniteDataStreamer` instance, using
+https://eclipse.org/paho/[Eclipse Paho, window=_blank] as an MQTT client.
+
+You need to provide a stream tuple extractor (either a single-entry or multiple-entries extractor) to process the incoming
+message and extract the tuple to insert.
+
+This streamer supports:
+
+* Subscribing to a single topic or multiple topics at once.
+* Specifying the subscriber's QoS for a single topic or for multiple topics.
+* Setting https://www.eclipse.org/paho/files/javadoc/org/eclipse/paho/client/mqttv3/MqttConnectOptions.html[MqttConnectOptions, window=_blank]
+to enable features like _last will and testament_, _persistent sessions_, etc.
+* Specifying the client ID. A random one will be generated and maintained throughout reconnections if the user does not provide one.
+* (Re-)Connection retries powered by the https://github.com/rholder/guava-retrying[guava-retrying library, window=_blank].
+_Retry wait_ and _retry stop_ policies can be configured, as sketched after this list.
+* Blocking the `start()` method until the client is connected for the first time.
+
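+For illustration, the retry policies could be tuned as in the following sketch (it assumes `setRetryWaitStrategy` and
+`setRetryStopStrategy` setters that accept guava-retrying strategies; the values are arbitrary):
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+import java.util.concurrent.TimeUnit;
+import com.github.rholder.retry.StopStrategies;
+import com.github.rholder.retry.WaitStrategies;
+
+MqttStreamer<Integer, String> streamer = new MqttStreamer<>();
+
+// Wait exponentially longer between reconnection attempts, capped at 30 seconds.
+streamer.setRetryWaitStrategy(WaitStrategies.exponentialWait(30, TimeUnit.SECONDS));
+
+// Give up after 10 failed attempts.
+streamer.setRetryStopStrategy(StopStrategies.stopAfterAttempt(10));
+----
+--
+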
+== Example
+
+Here's a trivial code sample showing how to use this streamer:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+// Start Ignite.
+Ignite ignite = Ignition.start();
+
+// Get a data streamer reference.
+IgniteDataStreamer<Integer, String> dataStreamer = ignite.dataStreamer("mycache");
+
+// Create an MQTT data streamer
+MqttStreamer<Integer, String> streamer = new MqttStreamer<>();
+streamer.setIgnite(ignite);
+streamer.setStreamer(dataStreamer);
+streamer.setBrokerUrl(brokerUrl);
+streamer.setBlockUntilConnected(true);
+
+// Set a single tuple extractor to extract items in the format 'key,value' where key => Int, and value => String
+// (using Guava here).
+streamer.setSingleTupleExtractor(new StreamSingleTupleExtractor<MqttMessage, Integer, String>() {
+    @Override public Map.Entry<Integer, String> extract(MqttMessage msg) {
+        List<String> s = Splitter.on(",").splitToList(new String(msg.getPayload()));
+
+        return new GridMapEntry<>(Integer.parseInt(s.get(0)), s.get(1));
+    }
+});
+
+// Consume from multiple topics at once.
+streamer.setTopics(Arrays.asList("def", "ghi", "jkl", "mno"));
+
+// Start the MQTT Streamer.
+streamer.start();
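+
+// On application shutdown, stop the streamer to disconnect from the broker
+// (mirroring the lifecycle of the other streamer examples).
+streamer.stop();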
+----
+--
+
+Refer to the Javadocs of the `ignite-mqtt-ext` module for more info on the available options.
diff --git a/docs/_docs/extensions-and-integrations/streaming/rocketmq-streamer.adoc b/docs/_docs/extensions-and-integrations/streaming/rocketmq-streamer.adoc
new file mode 100644
index 0000000..a302ca7
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/streaming/rocketmq-streamer.adoc
@@ -0,0 +1,85 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= RocketMQ Streamer
+
+This streamer module provides streaming from https://github.com/apache/incubator-rocketmq[Apache RocketMQ, window=_blank]
+to Ignite.
+
+To use the Ignite RocketMQ Streamer module:
+
+. Import it into your Maven project. If you are using Maven to manage the dependencies of your project, add the Ignite
+RocketMQ module dependency like this (replace `${ignite-rocketmq-ext.version}` with the actual Ignite RocketMQ Extension version you are interested in):
++
+[tabs]
+--
+tab:pom.xml[]
+[source,xml]
+----
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+                        http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    ...
+    <dependencies>
+        ...
+        <dependency>
+            <groupId>org.apache.ignite</groupId>
+            <artifactId>ignite-rocketmq-ext</artifactId>
+            <version>${ignite-rocketmq-ext.version}</version>
+        </dependency>
+        ...
+    </dependencies>
+    ...
+</project>
+----
+--
+
+. Implement either `StreamSingleTupleExtractor` or `StreamMultipleTupleExtractor` for the streamer (shown
+as `MyTupleExtractor` in the code sample below). For a simple implementation, refer to `RocketMQStreamerTest.java`.
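++
+For illustration, a minimal `MyTupleExtractor` sketch (it assumes RocketMQ delivers message batches as `List<MessageExt>`
+and keys each cache entry by the message ID):
++
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import org.apache.ignite.stream.StreamMultipleTupleExtractor;
+import org.apache.rocketmq.common.message.MessageExt;
+
+public class MyTupleExtractor implements StreamMultipleTupleExtractor<List<MessageExt>, String, byte[]> {
+    @Override public Map<String, byte[]> extract(List<MessageExt> msgs) {
+        Map<String, byte[]> entries = new HashMap<>();
+
+        // Use the RocketMQ message ID as the cache key and the raw payload as the value.
+        for (MessageExt msg : msgs)
+            entries.put(msg.getMsgId(), msg.getBody());
+
+        return entries;
+    }
+}
+----
+--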
+
+. Initialize and start the streamer
++
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+IgniteDataStreamer<String, byte[]> dataStreamer = ignite.dataStreamer(MY_CACHE);
+
+dataStreamer.allowOverwrite(true);
+dataStreamer.autoFlushFrequency(10);
+
+streamer = new RocketMQStreamer<>();
+
+// Configure the streamer.
+streamer.setIgnite(ignite);
+streamer.setStreamer(dataStreamer);
+streamer.setNameSrvAddr(NAMESERVER_IP_PORT);
+streamer.setConsumerGrp(CONSUMER_GRP);
+streamer.setTopic(TOPIC_NAME);
+streamer.setMultipleTupleExtractor(new MyTupleExtractor());
+
+streamer.start();
+
+...
+
+// stop on shutdown
+streamer.stop();
+
+dataStreamer.close();
+----
+--
+
+Refer to the Javadocs for more info on the available options.
diff --git a/docs/_docs/extensions-and-integrations/streaming/storm-streamer.adoc b/docs/_docs/extensions-and-integrations/streaming/storm-streamer.adoc
new file mode 100644
index 0000000..887712e
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/streaming/storm-streamer.adoc
@@ -0,0 +1,62 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Apache Storm Streamer
+
+Apache Ignite Storm Streamer module provides streaming via http://storm.apache.org/[Storm, window=_blank] to Ignite.
+
+To start transferring data to Ignite, follow the steps below.
+
+. Import the Ignite Storm Streamer module into your Maven project. If you are using Maven to manage the dependencies of your project,
+you can add the Storm module dependency like this (replace `${ignite-storm-ext.version}` with the actual Ignite Storm Extension version you are interested in):
++
+[tabs]
+--
+tab:pom.xml[]
+[source,xml]
+----
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+                        http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    ...
+    <dependencies>
+        ...
+        <dependency>
+            <groupId>org.apache.ignite</groupId>
+            <artifactId>ignite-storm-ext</artifactId>
+            <version>${ignite-storm-ext.version}</version>
+        </dependency>
+        ...
+    </dependencies>
+    ...
+</project>
+----
+--
+
+. Create an Ignite configuration file (see the example at `modules/storm/src/test/resources/example-ignite.xml`)
+and make sure it is accessible from the streamer.
+. Make sure your key-value data input to the streamer is specified with the field named `ignite` (or a different one you
+configure with `StormStreamer.setIgniteTupleField(...)`).
+See `TestStormSpout.declareOutputFields(...)` for an example.
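++
+For illustration, a spout could declare that field as in the sketch below (a hypothetical snippet using the standard
+Storm API; the declared field is expected to carry the key-value data to be streamed):
++
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+import org.apache.storm.topology.OutputFieldsDeclarer;
+import org.apache.storm.tuple.Fields;
+
+// Inside your spout implementation:
+@Override public void declareOutputFields(OutputFieldsDeclarer declarer) {
+    // "ignite" is the default tuple field name consumed by the streamer.
+    declarer.declare(new Fields("ignite"));
+}
+----
+--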
+. Create a topology with the streamer, build a jar file with all dependencies, and run the following:
++
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+storm jar ignite-storm-streaming-jar-with-dependencies.jar my.company.ignite.MyStormTopology
+----
+--
diff --git a/docs/_docs/extensions-and-integrations/streaming/twitter-streamer.adoc b/docs/_docs/extensions-and-integrations/streaming/twitter-streamer.adoc
new file mode 100644
index 0000000..4f47c60
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/streaming/twitter-streamer.adoc
@@ -0,0 +1,65 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Twitter Streamer
+
+The Ignite Twitter Streamer module consumes tweets from Twitter and feeds the transformed key-value pairs `<tweetId, text>` into Ignite.
+
+To stream data from Twitter into Ignite, you need to:
+
+. Import the Ignite Twitter module with Maven and replace `${ignite-twitter-ext.version}` with the actual Ignite Twitter Extension version you are interested in.
++
+[tabs]
+--
+tab:pom.xml[]
+[source,xml]
+----
+<dependency>
+  <groupId>org.apache.ignite</groupId>
+  <artifactId>ignite-twitter-ext</artifactId>
+  <version>${ignite-twitter-ext.version}</version>
+</dependency>
+----
+--
+
+. In your code, set the necessary parameters and start the streamer, like so:
++
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+IgniteDataStreamer<Integer, String> dataStreamer = ignite.dataStreamer("myCache");
+dataStreamer.allowOverwrite(true);
+dataStreamer.autoFlushFrequency(10);
+
+OAuthSettings oAuthSettings = new OAuthSettings("setting1", "setting2", "setting3", "setting4");
+
+TwitterStreamer<Integer, String> streamer = new TwitterStreamer<>(oAuthSettings);
+streamer.setIgnite(ignite);
+streamer.setStreamer(dataStreamer);
+
+Map<String, String> params = new HashMap<>();
+params.put("track", "apache, twitter");
+params.put("follow", "3004445758");
+
+streamer.setApiParams(params); // Twitter Streaming API parameters.
+streamer.setEndpointUrl(endpointUrl); // Twitter Streaming API endpoint.
+streamer.setThreadsCount(8);
+
+streamer.start();
+----
+--
+
+Refer to https://dev.twitter.com/streaming/overview[Twitter streaming API, window=_blank] documentation for more information on various streaming parameters.
diff --git a/docs/_docs/extensions-and-integrations/streaming/zeromq-streamer.adoc b/docs/_docs/extensions-and-integrations/streaming/zeromq-streamer.adoc
new file mode 100644
index 0000000..918c0e8
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/streaming/zeromq-streamer.adoc
@@ -0,0 +1,67 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= ZeroMQ Streamer
+
+The Apache Ignite ZeroMQ Streamer module streams data into Ignite via http://zeromq.org/[ZeroMQ, window=_blank].
+
+To start streaming into Ignite, you need to do the following:
+
+. Add the Ignite ZeroMQ Streamer module to your Maven `pom.xml` file.
++
+[tabs]
+--
+tab:pom.xml[]
+[source,xml]
+----
+<dependencies>
+    ...
+    <dependency>
+        <groupId>org.apache.ignite</groupId>
+        <artifactId>ignite-zeromq-ext</artifactId>
+        <version>${ignite-zeromq-ext.version}</version>
+    </dependency>
+    ...
+</dependencies>
+----
+--
+
+. Implement either the https://github.com/apache/ignite/blob/f2f82f09b35368f25e136c9fce5e7f2198a91171/modules/core/src/main/java/org/apache/ignite/stream/StreamSingleTupleExtractor.java[StreamSingleTupleExtractor, window=_blank] or
+the https://github.com/apache/ignite/blob/f2f82f09b35368f25e136c9fce5e7f2198a91171/modules/core/src/main/java/org/apache/ignite/stream/StreamMultipleTupleExtractor.java[StreamMultipleTupleExtractor, window=_blank] for ZeroMQ streamer.
+Refer to https://github.com/apache/ignite/blob/7492843ad9e22c91764fb8d0c3a096b8ce6c653e/modules/zeromq/src/test/java/org/apache/ignite/stream/zeromq/ZeroMqStringSingleTupleExtractor.java[this sample implementation, window=_blank] for more details.
+. Set the extractor and initiate the streaming as shown below:
++
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+try (IgniteDataStreamer<Integer, String> dataStreamer =
+     grid().dataStreamer("myCacheName")) {
+
+    dataStreamer.allowOverwrite(true);
+    dataStreamer.autoFlushFrequency(1);
+
+    try (IgniteZeroMqStreamer streamer = new IgniteZeroMqStreamer(
+      1, ZeroMqTypeSocket.PULL, "tcp://localhost:5671", null)) {
+      streamer.setIgnite(grid());
+      streamer.setStreamer(dataStreamer);
+
+      streamer.setSingleTupleExtractor(new ZeroMqStringSingleTupleExtractor());
+
+      streamer.start();
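+
+      // ... the streamer feeds incoming ZeroMQ messages into the data streamer
+      // until the try-with-resources block closes it on exit.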
+    }
+}
+----
+--
diff --git a/docs/_docs/images/111.gif b/docs/_docs/images/111.gif
new file mode 100644
index 0000000..dc5f668
--- /dev/null
+++ b/docs/_docs/images/111.gif
Binary files differ
diff --git a/docs/_docs/images/222.gif b/docs/_docs/images/222.gif
new file mode 100644
index 0000000..05a097c
--- /dev/null
+++ b/docs/_docs/images/222.gif
Binary files differ
diff --git a/docs/_docs/images/333.gif b/docs/_docs/images/333.gif
new file mode 100644
index 0000000..828f448
--- /dev/null
+++ b/docs/_docs/images/333.gif
Binary files differ
diff --git a/docs/_docs/images/555.gif b/docs/_docs/images/555.gif
new file mode 100644
index 0000000..1d5ef9a
--- /dev/null
+++ b/docs/_docs/images/555.gif
Binary files differ
diff --git a/docs/_docs/images/666.gif b/docs/_docs/images/666.gif
new file mode 100644
index 0000000..983e35b
--- /dev/null
+++ b/docs/_docs/images/666.gif
Binary files differ
diff --git a/docs/_docs/images/bagging.png b/docs/_docs/images/bagging.png
new file mode 100644
index 0000000..5664051
--- /dev/null
+++ b/docs/_docs/images/bagging.png
Binary files differ
diff --git a/docs/_docs/images/cache_table.png b/docs/_docs/images/cache_table.png
new file mode 100644
index 0000000..383502d
--- /dev/null
+++ b/docs/_docs/images/cache_table.png
Binary files differ
diff --git a/docs/_docs/images/checkpointing-chainsaw.png b/docs/_docs/images/checkpointing-chainsaw.png
new file mode 100644
index 0000000..06a65ee
--- /dev/null
+++ b/docs/_docs/images/checkpointing-chainsaw.png
Binary files differ
diff --git a/docs/_docs/images/checkpointing-persistence.png b/docs/_docs/images/checkpointing-persistence.png
new file mode 100644
index 0000000..887301a
--- /dev/null
+++ b/docs/_docs/images/checkpointing-persistence.png
Binary files differ
diff --git a/docs/_docs/images/client-to-aws.png b/docs/_docs/images/client-to-aws.png
new file mode 100644
index 0000000..8694dd5
--- /dev/null
+++ b/docs/_docs/images/client-to-aws.png
Binary files differ
diff --git a/docs/_docs/images/collocated_joins.png b/docs/_docs/images/collocated_joins.png
new file mode 100644
index 0000000..04fffaa
--- /dev/null
+++ b/docs/_docs/images/collocated_joins.png
Binary files differ
diff --git a/docs/_docs/images/data_streaming.png b/docs/_docs/images/data_streaming.png
new file mode 100644
index 0000000..c407447
--- /dev/null
+++ b/docs/_docs/images/data_streaming.png
Binary files differ
diff --git a/docs/_docs/images/defragmented.png b/docs/_docs/images/defragmented.png
new file mode 100644
index 0000000..1079cfa
--- /dev/null
+++ b/docs/_docs/images/defragmented.png
Binary files differ
diff --git a/docs/_docs/images/durable-memory-diagram.png b/docs/_docs/images/durable-memory-diagram.png
new file mode 100644
index 0000000..0ee572a
--- /dev/null
+++ b/docs/_docs/images/durable-memory-diagram.png
Binary files differ
diff --git a/docs/_docs/images/durable-memory-overview.png b/docs/_docs/images/durable-memory-overview.png
new file mode 100644
index 0000000..ce58af7
--- /dev/null
+++ b/docs/_docs/images/durable-memory-overview.png
Binary files differ
diff --git a/docs/_docs/images/external_storage.png b/docs/_docs/images/external_storage.png
new file mode 100644
index 0000000..cdc516e
--- /dev/null
+++ b/docs/_docs/images/external_storage.png
Binary files differ
diff --git a/docs/_docs/images/fragmented.png b/docs/_docs/images/fragmented.png
new file mode 100644
index 0000000..01a5488
--- /dev/null
+++ b/docs/_docs/images/fragmented.png
Binary files differ
diff --git a/docs/_docs/images/ignite_clustering.png b/docs/_docs/images/ignite_clustering.png
new file mode 100644
index 0000000..25edce7
--- /dev/null
+++ b/docs/_docs/images/ignite_clustering.png
Binary files differ
diff --git a/docs/_docs/images/ijfull.png b/docs/_docs/images/ijfull.png
new file mode 100644
index 0000000..99dd5ae
--- /dev/null
+++ b/docs/_docs/images/ijfull.png
Binary files differ
diff --git a/docs/_docs/images/ijimport.png b/docs/_docs/images/ijimport.png
new file mode 100644
index 0000000..72ef2ed
--- /dev/null
+++ b/docs/_docs/images/ijimport.png
Binary files differ
diff --git a/docs/_docs/images/ijrun.png b/docs/_docs/images/ijrun.png
new file mode 100644
index 0000000..dea4aac
--- /dev/null
+++ b/docs/_docs/images/ijrun.png
Binary files differ
diff --git a/docs/_docs/images/integrations/camel-streamer.png b/docs/_docs/images/integrations/camel-streamer.png
new file mode 100644
index 0000000..cff36dc
--- /dev/null
+++ b/docs/_docs/images/integrations/camel-streamer.png
Binary files differ
diff --git a/docs/_docs/images/integrations/hibernate-l2-cache.png b/docs/_docs/images/integrations/hibernate-l2-cache.png
new file mode 100644
index 0000000..42f83c5
--- /dev/null
+++ b/docs/_docs/images/integrations/hibernate-l2-cache.png
Binary files differ
diff --git a/docs/_docs/images/jconsole.png b/docs/_docs/images/jconsole.png
new file mode 100644
index 0000000..120a309
--- /dev/null
+++ b/docs/_docs/images/jconsole.png
Binary files differ
diff --git a/docs/_docs/images/k8s/aks-node-number.png b/docs/_docs/images/k8s/aks-node-number.png
new file mode 100644
index 0000000..9959e28
--- /dev/null
+++ b/docs/_docs/images/k8s/aks-node-number.png
Binary files differ
diff --git a/docs/_docs/images/k8s/create-aks-cluster.png b/docs/_docs/images/k8s/create-aks-cluster.png
new file mode 100644
index 0000000..1581c74
--- /dev/null
+++ b/docs/_docs/images/k8s/create-aks-cluster.png
Binary files differ
diff --git a/docs/_docs/images/logistic-regression.png b/docs/_docs/images/logistic-regression.png
new file mode 100644
index 0000000..4531071
--- /dev/null
+++ b/docs/_docs/images/logistic-regression.png
Binary files differ
diff --git a/docs/_docs/images/logistic-regression2.png b/docs/_docs/images/logistic-regression2.png
new file mode 100644
index 0000000..f55c151
--- /dev/null
+++ b/docs/_docs/images/logistic-regression2.png
Binary files differ
diff --git a/docs/_docs/images/machine_learning.png b/docs/_docs/images/machine_learning.png
new file mode 100644
index 0000000..800fc1a
--- /dev/null
+++ b/docs/_docs/images/machine_learning.png
Binary files differ
diff --git a/docs/_docs/images/memory-segment.png b/docs/_docs/images/memory-segment.png
new file mode 100644
index 0000000..d127bf1
--- /dev/null
+++ b/docs/_docs/images/memory-segment.png
Binary files differ
diff --git a/docs/_docs/images/naive-bayes.png b/docs/_docs/images/naive-bayes.png
new file mode 100644
index 0000000..660c866
--- /dev/null
+++ b/docs/_docs/images/naive-bayes.png
Binary files differ
diff --git a/docs/_docs/images/naive-bayes2.png b/docs/_docs/images/naive-bayes2.png
new file mode 100644
index 0000000..7e3e29a
--- /dev/null
+++ b/docs/_docs/images/naive-bayes2.png
Binary files differ
diff --git a/docs/_docs/images/naive-bayes3.png b/docs/_docs/images/naive-bayes3.png
new file mode 100644
index 0000000..cc02903
--- /dev/null
+++ b/docs/_docs/images/naive-bayes3.png
Binary files differ
diff --git a/docs/_docs/images/naive-bayes3png b/docs/_docs/images/naive-bayes3png
new file mode 100644
index 0000000..cc02903
--- /dev/null
+++ b/docs/_docs/images/naive-bayes3png
Binary files differ
diff --git a/docs/_docs/images/net-view-details.png b/docs/_docs/images/net-view-details.png
new file mode 100644
index 0000000..b74c945
--- /dev/null
+++ b/docs/_docs/images/net-view-details.png
Binary files differ
diff --git a/docs/_docs/images/network_segmentation.png b/docs/_docs/images/network_segmentation.png
new file mode 100644
index 0000000..26876b0
--- /dev/null
+++ b/docs/_docs/images/network_segmentation.png
Binary files differ
diff --git a/docs/_docs/images/non_collocated_joins.png b/docs/_docs/images/non_collocated_joins.png
new file mode 100644
index 0000000..7e30eb2
--- /dev/null
+++ b/docs/_docs/images/non_collocated_joins.png
Binary files differ
diff --git a/docs/_docs/images/odbc_dsn_configuration.png b/docs/_docs/images/odbc_dsn_configuration.png
new file mode 100644
index 0000000..a689117
--- /dev/null
+++ b/docs/_docs/images/odbc_dsn_configuration.png
Binary files differ
diff --git a/docs/_docs/images/off_heap_memory_eviction.png b/docs/_docs/images/off_heap_memory_eviction.png
new file mode 100644
index 0000000..25b9b8e
--- /dev/null
+++ b/docs/_docs/images/off_heap_memory_eviction.png
Binary files differ
diff --git a/docs/_docs/images/partitionawareness01.png b/docs/_docs/images/partitionawareness01.png
new file mode 100644
index 0000000..51c11a7
--- /dev/null
+++ b/docs/_docs/images/partitionawareness01.png
Binary files differ
diff --git a/docs/_docs/images/partitionawareness02.png b/docs/_docs/images/partitionawareness02.png
new file mode 100644
index 0000000..d6698be
--- /dev/null
+++ b/docs/_docs/images/partitionawareness02.png
Binary files differ
diff --git a/docs/_docs/images/partitioned_cache.png b/docs/_docs/images/partitioned_cache.png
new file mode 100644
index 0000000..0dab468
--- /dev/null
+++ b/docs/_docs/images/partitioned_cache.png
Binary files differ
diff --git a/docs/_docs/images/partitioning.png b/docs/_docs/images/partitioning.png
new file mode 100644
index 0000000..bf4cd04
--- /dev/null
+++ b/docs/_docs/images/partitioning.png
Binary files differ
diff --git a/docs/_docs/images/persistent_store_structure.png b/docs/_docs/images/persistent_store_structure.png
new file mode 100644
index 0000000..02d7b3e
--- /dev/null
+++ b/docs/_docs/images/persistent_store_structure.png
Binary files differ
diff --git a/docs/_docs/images/preprocessing.png b/docs/_docs/images/preprocessing.png
new file mode 100644
index 0000000..3601b59
--- /dev/null
+++ b/docs/_docs/images/preprocessing.png
Binary files differ
diff --git a/docs/_docs/images/preprocessing2.png b/docs/_docs/images/preprocessing2.png
new file mode 100644
index 0000000..07fda7c
--- /dev/null
+++ b/docs/_docs/images/preprocessing2.png
Binary files differ
diff --git a/docs/_docs/images/replicated_cache.png b/docs/_docs/images/replicated_cache.png
new file mode 100644
index 0000000..89f19aa
--- /dev/null
+++ b/docs/_docs/images/replicated_cache.png
Binary files differ
diff --git a/docs/_docs/images/segmentation_resolved.png b/docs/_docs/images/segmentation_resolved.png
new file mode 100644
index 0000000..b28d6d2
--- /dev/null
+++ b/docs/_docs/images/segmentation_resolved.png
Binary files differ
diff --git a/docs/_docs/images/set-streaming.png b/docs/_docs/images/set-streaming.png
new file mode 100644
index 0000000..a448b59
--- /dev/null
+++ b/docs/_docs/images/set-streaming.png
Binary files differ
diff --git a/docs/_docs/images/span.png b/docs/_docs/images/span.png
new file mode 100644
index 0000000..ae05b72
--- /dev/null
+++ b/docs/_docs/images/span.png
Binary files differ
diff --git a/docs/_docs/images/spark_integration.png b/docs/_docs/images/spark_integration.png
new file mode 100644
index 0000000..466c6a3
--- /dev/null
+++ b/docs/_docs/images/spark_integration.png
Binary files differ
diff --git a/docs/_docs/images/split_brain.png b/docs/_docs/images/split_brain.png
new file mode 100644
index 0000000..a49c986
--- /dev/null
+++ b/docs/_docs/images/split_brain.png
Binary files differ
diff --git a/docs/_docs/images/split_brain_resolved.png b/docs/_docs/images/split_brain_resolved.png
new file mode 100644
index 0000000..ef9635f
--- /dev/null
+++ b/docs/_docs/images/split_brain_resolved.png
Binary files differ
diff --git a/docs/_docs/images/tools/gg-control-center.png b/docs/_docs/images/tools/gg-control-center.png
new file mode 100644
index 0000000..d884adb
--- /dev/null
+++ b/docs/_docs/images/tools/gg-control-center.png
Binary files differ
diff --git a/docs/_docs/images/tools/informatica-import-tables.png b/docs/_docs/images/tools/informatica-import-tables.png
new file mode 100644
index 0000000..e1f4cfc
--- /dev/null
+++ b/docs/_docs/images/tools/informatica-import-tables.png
Binary files differ
diff --git a/docs/_docs/images/tools/informatica-rel-connection.png b/docs/_docs/images/tools/informatica-rel-connection.png
new file mode 100644
index 0000000..097a009
--- /dev/null
+++ b/docs/_docs/images/tools/informatica-rel-connection.png
Binary files differ
diff --git a/docs/_docs/images/tools/pentaho-ignite-connection.png b/docs/_docs/images/tools/pentaho-ignite-connection.png
new file mode 100644
index 0000000..1b15d6a
--- /dev/null
+++ b/docs/_docs/images/tools/pentaho-ignite-connection.png
Binary files differ
diff --git a/docs/_docs/images/tools/pentaho-new-transformation.png b/docs/_docs/images/tools/pentaho-new-transformation.png
new file mode 100644
index 0000000..58bbc4c
--- /dev/null
+++ b/docs/_docs/images/tools/pentaho-new-transformation.png
Binary files differ
diff --git a/docs/_docs/images/tools/pentaho-running-and-inspecting-data.png b/docs/_docs/images/tools/pentaho-running-and-inspecting-data.png
new file mode 100644
index 0000000..e138ef4
--- /dev/null
+++ b/docs/_docs/images/tools/pentaho-running-and-inspecting-data.png
Binary files differ
diff --git a/docs/_docs/images/tools/tableau-choose_dsn_01.png b/docs/_docs/images/tools/tableau-choose_dsn_01.png
new file mode 100644
index 0000000..5719d7a
--- /dev/null
+++ b/docs/_docs/images/tools/tableau-choose_dsn_01.png
Binary files differ
diff --git a/docs/_docs/images/tools/tableau-choose_dsn_02.png b/docs/_docs/images/tools/tableau-choose_dsn_02.png
new file mode 100644
index 0000000..95cfed4
--- /dev/null
+++ b/docs/_docs/images/tools/tableau-choose_dsn_02.png
Binary files differ
diff --git a/docs/_docs/images/tools/tableau-choosing_driver_01.png b/docs/_docs/images/tools/tableau-choosing_driver_01.png
new file mode 100644
index 0000000..03f9c9e
--- /dev/null
+++ b/docs/_docs/images/tools/tableau-choosing_driver_01.png
Binary files differ
diff --git a/docs/_docs/images/tools/tableau-creating_dataset.png b/docs/_docs/images/tools/tableau-creating_dataset.png
new file mode 100644
index 0000000..33dc98d
--- /dev/null
+++ b/docs/_docs/images/tools/tableau-creating_dataset.png
Binary files differ
diff --git a/docs/_docs/images/tools/tableau-edit_connection.png b/docs/_docs/images/tools/tableau-edit_connection.png
new file mode 100644
index 0000000..ec35e68
--- /dev/null
+++ b/docs/_docs/images/tools/tableau-edit_connection.png
Binary files differ
diff --git a/docs/_docs/images/tools/tableau-visualizing_data.png b/docs/_docs/images/tools/tableau-visualizing_data.png
new file mode 100644
index 0000000..e5351ed
--- /dev/null
+++ b/docs/_docs/images/tools/tableau-visualizing_data.png
Binary files differ
diff --git a/docs/_docs/images/tools/visor-cmd.png b/docs/_docs/images/tools/visor-cmd.png
new file mode 100644
index 0000000..55249eb
--- /dev/null
+++ b/docs/_docs/images/tools/visor-cmd.png
Binary files differ
diff --git a/docs/_docs/images/trace_in_zipkin.png b/docs/_docs/images/trace_in_zipkin.png
new file mode 100644
index 0000000..074b264
--- /dev/null
+++ b/docs/_docs/images/trace_in_zipkin.png
Binary files differ
diff --git a/docs/_docs/images/zookeeper.png b/docs/_docs/images/zookeeper.png
new file mode 100644
index 0000000..8db3997
--- /dev/null
+++ b/docs/_docs/images/zookeeper.png
Binary files differ
diff --git a/docs/_docs/images/zookeeper_split.png b/docs/_docs/images/zookeeper_split.png
new file mode 100644
index 0000000..9cb643a
--- /dev/null
+++ b/docs/_docs/images/zookeeper_split.png
Binary files differ
diff --git a/docs/_docs/includes/cpp-linux-build-prerequisites.adoc b/docs/_docs/includes/cpp-linux-build-prerequisites.adoc
new file mode 100644
index 0000000..581a52f
--- /dev/null
+++ b/docs/_docs/includes/cpp-linux-build-prerequisites.adoc
@@ -0,0 +1,45 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+The following packages need to be installed:
+
+- C++ compiler
+- CMake 3.6+
+- JDK
+- OpenSSL, including header files
+- unixODBC
+
+Installation instructions for several popular distributions are listed below:
+[tabs]
+--
+tab:Ubuntu 18.04/20.04[]
+[source,bash,subs="attributes,specialchars"]
+----
+sudo apt-get install -y build-essential cmake openjdk-11-jdk unixodbc-dev libssl-dev
+----
+
+tab:CentOS/RHEL 7[]
+[source,shell,subs="attributes,specialchars"]
+----
+sudo yum install -y epel-release
+sudo yum install -y java-11-openjdk-devel cmake3 unixODBC-devel openssl-devel make gcc-c++
+----
+
+tab:CentOS/RHEL 8[]
+[source,shell,subs="attributes,specialchars"]
+----
+sudo yum install -y java-11-openjdk-devel cmake3 unixODBC-devel openssl-devel make gcc-c++
+----
+
+--
diff --git a/docs/_docs/includes/cpp-prerequisites.adoc b/docs/_docs/includes/cpp-prerequisites.adoc
new file mode 100644
index 0000000..7b91f10
--- /dev/null
+++ b/docs/_docs/includes/cpp-prerequisites.adoc
@@ -0,0 +1,23 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+[width="100%",cols="1,3"]
+|===
+|JDK|Oracle JDK 8 and later, Open JDK 8 and later, IBM JDK 8 and later
+|OS|Windows Vista, Windows Server 2008 and later versions, Ubuntu (18.04 64 bit)
+|Network|No restrictions (10G recommended)
+|Hardware|No restrictions
+|C++ compiler|MS Visual C++ (10.0 and up), g++ (4.4.0 and up)
+|Visual Studio| 2010 and above
+|===
diff --git a/docs/_docs/includes/dotnet-prerequisites.adoc b/docs/_docs/includes/dotnet-prerequisites.adoc
new file mode 100644
index 0000000..489615b
--- /dev/null
+++ b/docs/_docs/includes/dotnet-prerequisites.adoc
@@ -0,0 +1,20 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+[width="100%",cols="1,3"]
+|===
+|JDK |Oracle JDK 8 and later, Open JDK 8 and later, IBM JDK 8 and later
+|.NET Framework |.NET 4.0+, .NET Core 2.0+
+//|IDE |Visual Studio 2010+, Rider, Visual Studio Code
+|===
diff --git a/docs/_docs/includes/exampleprojects.adoc b/docs/_docs/includes/exampleprojects.adoc
new file mode 100644
index 0000000..a94564c
--- /dev/null
+++ b/docs/_docs/includes/exampleprojects.adoc
@@ -0,0 +1,37 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+Your Ignite installation includes additional examples, which are shipped as part of the Ignite package you downloaded above.
+
+To run the examples project, follow these steps (they are given for IntelliJ IDEA, but should apply to similar IDEs such as Eclipse):
+
+. Start IntelliJ IDEA, click the "Import Project" button:
++
+image::images/ijimport.png[Importing a Project in IntelliJ]
+
+. Navigate to the `{IGNITE_HOME}/examples` folder and select the `{IGNITE_HOME}/examples/pom.xml` file. Click "OK".
+
+. Click "Next" on each of the following screens and apply the suggested defaults to the project. Click "Finish".
+
+. Wait while IntelliJ IDEA finishes setting up Maven, resolving dependencies, and loading modules.
+
+. Set up JDK if needed.
+
+. Run `src/main/java/org/apache/ignite/examples/datagrid/CacheApiExample`:
++
+image::images/ijrun.png[Run a project in IntelliJ]
++
+. Make sure that the example has been started and executed successfully, as shown in the image below.
++
+image::images/ijfull.png[Project in IntelliJ]
diff --git a/docs/_docs/includes/install-ignite.adoc b/docs/_docs/includes/install-ignite.adoc
new file mode 100644
index 0000000..75e941a
--- /dev/null
+++ b/docs/_docs/includes/install-ignite.adoc
@@ -0,0 +1,26 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+To get started with the Apache Ignite binary distribution:
+
+.  Download the https://ignite.apache.org/download.cgi#binaries[Ignite binary, window="_blank"]
+as a zip archive.
+.  Unzip the zip archive into the installation folder in your system.
+. (Optional) Enable required link:setup#enabling-modules[modules].
+. (Optional) Set the `IGNITE_HOME` environment variable or Windows PATH to
+point to the installation folder and make sure there is no trailing `/` (or
+`\` for Windows) in the path.
+
+
+
diff --git a/docs/_docs/includes/install-nodejs-npm.adoc b/docs/_docs/includes/install-nodejs-npm.adoc
new file mode 100644
index 0000000..e4c1da7
--- /dev/null
+++ b/docs/_docs/includes/install-nodejs-npm.adoc
@@ -0,0 +1,19 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+[source,shell]
+----
+npm install -g apache-ignite-client
+----
+
diff --git a/docs/_docs/includes/install-php-composer.adoc b/docs/_docs/includes/install-php-composer.adoc
new file mode 100644
index 0000000..7402efb
--- /dev/null
+++ b/docs/_docs/includes/install-php-composer.adoc
@@ -0,0 +1,25 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+[source,shell]
+----
+composer require apache/apache-ignite-client
+----
+
+To use the client in your application, include the `vendor/autoload.php` file, generated by Composer, in your source code, e.g.:
+
+[source,php]
+----
+require_once __DIR__ . '/vendor/autoload.php';
+----
diff --git a/docs/_docs/includes/install-python-pip.adoc b/docs/_docs/includes/install-python-pip.adoc
new file mode 100644
index 0000000..a4308c3
--- /dev/null
+++ b/docs/_docs/includes/install-python-pip.adoc
@@ -0,0 +1,29 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+[tabs]
+--
+tab:pip3[]
+[source,shell]
+----
+pip3 install pyignite
+----
+
+tab:pip[]
+[source,shell]
+----
+pip install pyignite
+----
+--
+
diff --git a/docs/_docs/includes/intro-languages.adoc b/docs/_docs/includes/intro-languages.adoc
new file mode 100644
index 0000000..58739c9
--- /dev/null
+++ b/docs/_docs/includes/intro-languages.adoc
@@ -0,0 +1,47 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+Ignite is available for Java, .NET/C#, {cpp} and other programming languages. The Java version provides the richest API.
+The .NET/C#, {cpp}, Python, and other versions may have limited functionality. To make the Ignite documentation intuitive for all application developers,
+we adhere to the following conventions:
+
+* The information provided in this documentation applies to all programming languages unless noted otherwise.
+* Code samples for different languages are provided in different tabs as shown below. For example, if you are a .NET developer, click on the .NET tab in the code examples to see .NET specific code.
++
+[tabs]
+--
+tab:XML[]
+[source,text]
+----
+This is a place where an example of XML configuration is provided.
+Click on other tabs to view an equivalent programmatic configuration.
+----
+tab:Java[]
+[source,text]
+----
+Code sample in Java. Click on other tabs to view the same example in other languages.
+----
+tab:C#/.NET[]
+[source,text]
+----
+Code sample in .NET. Click on other tabs to view the same example in other languages.
+----
+tab:C++[]
+[source,text]
+----
+Code sample in C++. Click on other tabs to view the same example in other languages.
+----
+--
+
+* If there is no tab for a specific language, this most likely means that the functionality is not supported in that language.
diff --git a/docs/_docs/includes/java9.adoc b/docs/_docs/includes/java9.adoc
new file mode 100644
index 0000000..e40abf8
--- /dev/null
+++ b/docs/_docs/includes/java9.adoc
@@ -0,0 +1,42 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+To run Ignite with Java 11 or later, follow these steps:
+
+1.  Set the `JAVA_HOME` environment variable to point to the Java installation
+directory.
+2.  Ignite uses proprietary SDK APIs that are not available by
+default. You need to pass specific flags to the JVM to make these APIs
+available. If you use the start-up script `ignite.sh` (or `ignite.bat` for Windows), you do not need
+to do anything because these flags are already set up in the script.
+Otherwise, provide the following parameters to the JVM of your
+application:
++
+[source,shell]
+----
+--add-exports=java.base/jdk.internal.misc=ALL-UNNAMED
+--add-exports=java.base/sun.nio.ch=ALL-UNNAMED
+--add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED
+--add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED
+--add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED
+--add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED
+--illegal-access=permit
+----
+
+3.  TLSv1.3, which is available in Java 11, is not supported at the
+moment. Consider adding `-Djdk.tls.client.protocols=TLSv1.2` if SSL
+between nodes is used.
+
+
+
diff --git a/docs/_docs/includes/nodes-and-clustering.adoc b/docs/_docs/includes/nodes-and-clustering.adoc
new file mode 100644
index 0000000..8a8c8e7
--- /dev/null
+++ b/docs/_docs/includes/nodes-and-clustering.adoc
@@ -0,0 +1,33 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+There are two types of nodes: servers and clients.
+
+A _server node_ is the base computational and data storage unit. Typically, you start a single server
+node per machine or container and it will scale vertically by utilizing all of the CPU, RAM, and other resources
+available unless specified differently. Those resources are pooled and become available to Ignite applications
+once the server node joins a cluster of other server nodes.
+
+image::images/ignite_clustering.png[Ignite Deployment]
+
+A _cluster_ is a group of server nodes interconnected in order to provide shared resources like RAM and
+CPU to your applications.
+
+Operations executed by applications (key-value queries, SQL, computations, etc.) are directed to and performed by
+server nodes. If you need more computational power or data storage, scale out your cluster by adding more server
+nodes to it.
+
+_Client nodes_ are your connection endpoints and gateways from the application layer to the cluster of
+server nodes. You embed a client into your application code and call the required APIs. Clients shield
+application developers from the complexity of Ignite's distributed nature, presenting the cluster as a single unit. Connecting is as simple as using an RDBMS via a JDBC driver or the Spring Data framework.
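+
+For illustration, a client node can be started from Java by enabling client
+mode in the node configuration; a minimal sketch using the standard Ignite API:
+
+[source, java]
+----
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+
+// Start a client node that joins the cluster as a lightweight
+// connection endpoint (it does not store data).
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setClientMode(true);
+
+Ignite client = Ignition.start(cfg);
+----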
diff --git a/docs/_docs/includes/note-on-deactivation.adoc b/docs/_docs/includes/note-on-deactivation.adoc
new file mode 100644
index 0000000..9674dd3
--- /dev/null
+++ b/docs/_docs/includes/note-on-deactivation.adoc
@@ -0,0 +1,19 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+[WARNING]
+====
+Deactivation deallocates all memory resources, including your application data, on all cluster nodes and disables public cluster API.
+If you have in-memory caches that are not backed up by a persistent storage (neither link:persistence/native-persistence[native persistent storage] nor link:persistence/external-storage[external storage]), you will lose the data and will have to repopulate these caches.
+====
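+
+For reference, deactivation can also be triggered from the Java API; a minimal
+sketch using the `ClusterState` API, assuming `ignite` is a started `Ignite`
+instance:
+
+[source, java]
+----
+import org.apache.ignite.cluster.ClusterState;
+
+// Deallocates memory resources on all nodes and disables the public API.
+ignite.cluster().state(ClusterState.INACTIVE);
+----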
diff --git a/docs/_docs/includes/partition-awareness.adoc b/docs/_docs/includes/partition-awareness.adoc
new file mode 100644
index 0000000..1d1389e
--- /dev/null
+++ b/docs/_docs/includes/partition-awareness.adoc
@@ -0,0 +1,40 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+Partition awareness allows the thin client to send query requests directly to the node that owns the queried data.
+
+Without partition awareness, an application that is connected to the cluster via a thin client executes all queries and operations via a single server node that acts as a proxy for the incoming requests.
+These operations are then re-routed to the node that stores the data that is being requested.
+This results in a bottleneck that could prevent the application from scaling linearly.
+
+image::images/partitionawareness01.png[Without Partition Awareness]
+
+Notice how queries must pass through the proxy server node, where they are routed to the correct node.
+
+With partition awareness in place, the thin client can directly route queries and operations to the primary nodes that own the data required for the queries.
+This eliminates the bottleneck, allowing the application to scale more easily.
+
+image::images/partitionawareness02.png[With Partition Awareness]
+
+[WARNING]
+====
+[discrete]
+Note that presently you need to provide addresses of all the server nodes in the connection properties.
+This also means that if a new server node joins the cluster, you should add the server's address to the connection properties and reconnect the thin client.
+Otherwise, the thin client will not be able to send direct requests to this server.
+This limitation is planned to be addressed before the GA release of the feature.
+====
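+
+As an illustration, with the Java thin client the list of server addresses is
+supplied through `ClientConfiguration`, and partition awareness is switched on
+explicitly. A minimal sketch; the host names and cache name are placeholders:
+
+[source, java]
+----
+import org.apache.ignite.Ignition;
+import org.apache.ignite.client.ClientCache;
+import org.apache.ignite.client.IgniteClient;
+import org.apache.ignite.configuration.ClientConfiguration;
+
+ClientConfiguration cfg = new ClientConfiguration()
+    // List every server node so the client can route requests directly.
+    .setAddresses("node1.example.com:10800", "node2.example.com:10800")
+    .setPartitionAwarenessEnabled(true);
+
+try (IgniteClient client = Ignition.startClient(cfg)) {
+    ClientCache<Integer, String> cache = client.cache("myCache");
+    // The request goes straight to the primary node for key 1.
+    String value = cache.get(1);
+}
+----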
+
+
+
diff --git a/docs/_docs/includes/prereqs.adoc b/docs/_docs/includes/prereqs.adoc
new file mode 100644
index 0000000..f984162
--- /dev/null
+++ b/docs/_docs/includes/prereqs.adoc
@@ -0,0 +1,23 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+[width="100%",cols="1,3"]
+|===
+|JDK |Oracle JDK 8 and later, OpenJDK 8 and later, IBM JDK 8 and later
+|OS |Linux (any flavor), macOS (10.6 and up), Windows (XP and up),
+Windows Server (2008 and up), Oracle Solaris
+|ISA |x86, x64, SPARC, PowerPC
+
+|Network |No restrictions (10G recommended)
+|===
diff --git a/docs/_docs/includes/starting-node.adoc b/docs/_docs/includes/starting-node.adoc
new file mode 100644
index 0000000..b22a25a
--- /dev/null
+++ b/docs/_docs/includes/starting-node.adoc
@@ -0,0 +1,93 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+You can start a node from the command line using the default configuration or by passing a custom configuration file.
+You can start as many nodes as you like and they will all automatically discover each other.
+
+Navigate into the `bin` folder of the Ignite installation directory from the command shell.
+Your command might look like this:
+
+[tabs]
+--
+
+tab:Unix[]
+[source,shell]
+----
+cd {IGNITE_HOME}/bin/
+----
+
+tab:Windows[]
+[source,shell]
+----
+cd {IGNITE_HOME}\bin\
+----
+
+--
+
+
+Start a node with a custom configuration file that is passed as a parameter to `ignite.sh|bat` like this:
+
+
+[tabs]
+--
+
+tab:Unix[]
+[source,shell]
+----
+./ignite.sh ../examples/config/example-ignite.xml
+----
+
+tab:Windows[]
+[source,shell]
+----
+ignite.bat ..\examples\config\example-ignite.xml
+----
+--
+
+
+You will see output similar to this:
+
+....
+[08:53:45] Ignite node started OK (id=7b30bc8e)
+[08:53:45] Topology snapshot [ver=1, locNode=7b30bc8e, servers=1, clients=0, state=ACTIVE, CPUs=4, offheap=1.6GB, heap=2.0GB]
+....
+
+Open another tab from your command shell and run the same command again:
+
+[tabs]
+--
+tab:Unix[]
+[source,shell]
+----
+./ignite.sh ../examples/config/example-ignite.xml
+----
+
+tab:Windows[]
+[source,shell]
+----
+ignite.bat ..\examples\config\example-ignite.xml
+----
+
+--
+
+Check the `Topology snapshot` line in the output.
+Now you have a cluster of two server nodes with more CPUs and RAM available cluster-wide:
+
+....
+[08:54:34] Ignite node started OK (id=3a30b7a4)
+[08:54:34] Topology snapshot [ver=2, locNode=3a30b7a4, servers=2, clients=0, state=ACTIVE, CPUs=4, offheap=3.2GB, heap=4.0GB]
+....
+
+
+NOTE: By default, `ignite.sh|bat` starts a node with the default configuration file: `config/default-config.xml`.
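+
+A node can also be started from application code instead of the script; a
+minimal Java sketch (the configuration path is resolved relative to
+`IGNITE_HOME` or the working directory):
+
+[source, java]
+----
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+
+// Starts a server node with the same example configuration file.
+Ignite ignite = Ignition.start("examples/config/example-ignite.xml");
+----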
diff --git a/docs/_docs/includes/thick-and-thin-clients.adoc b/docs/_docs/includes/thick-and-thin-clients.adoc
new file mode 100644
index 0000000..324b2b5
--- /dev/null
+++ b/docs/_docs/includes/thick-and-thin-clients.adoc
@@ -0,0 +1,42 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+Ignite clients come in several different flavors, each with various capabilities.
+link:SQL/JDBC/jdbc-driver[JDBC] and link:SQL/ODBC/odbc-driver[ODBC] drivers
+are useful for SQL-only applications and SQL-based tools. Thick and thin clients go beyond SQL capabilities and
+support many more APIs. Finally, data access frameworks such as Spring Data and Hibernate are also integrated with Ignite and
+can be used as an access point to your cluster.
+
+Let's review the difference between thick and thin clients by comparing their capabilities.
+
+*Thick* clients (client nodes) join the cluster via an internal protocol, receive all of the cluster-wide
+updates such as topology changes, are aware of data distribution, and can direct a query/operation to a server node
+that owns a required data set. Plus, thick clients support all of the Ignite APIs.
+
+*Thin* clients (a.k.a. lightweight clients) connect to the cluster via a binary protocol with a well-defined
+message format. This type of client supports a limited set of APIs (presently, key-value and SQL operations only) but
+in return:
+
+- Makes it easy to enable programming language support for Ignite. Java, .NET, C++, Python, Node.JS, and
+  PHP are supported out of the box.
+
+- Doesn't have any dependency on the JVM. For instance, the .NET and C++ _thick_ clients have a richer feature set but
+  start and use a JVM internally.
+
+- Requires at least one port opened on the cluster end. Note that more ports need to be opened if
+  partition awareness is used for a thin client.
+
+TIP: The ODBC driver uses a protocol similar to the thin client's. The JDBC driver comes in two flavors:
+a thick version that utilizes a Java thick client internally, and a thin counterpart based on the thin
+client's protocol.
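+
+To make the contrast concrete, here is a minimal Java sketch of both
+connection styles (the address and cache name are placeholders):
+
+[source, java]
+----
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.client.IgniteClient;
+import org.apache.ignite.configuration.ClientConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+
+// Thick client: a client node that joins the cluster topology.
+IgniteConfiguration nodeCfg = new IgniteConfiguration().setClientMode(true);
+try (Ignite thickClient = Ignition.start(nodeCfg)) {
+    thickClient.getOrCreateCache("myCache").put(1, "one");
+}
+
+// Thin client: a lightweight connection over the binary protocol.
+ClientConfiguration thinCfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
+try (IgniteClient thinClient = Ignition.startClient(thinCfg)) {
+    thinClient.getOrCreateCache("myCache").put(2, "two");
+}
+----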
diff --git a/docs/_docs/index.adoc b/docs/_docs/index.adoc
new file mode 100644
index 0000000..1e8aadc
--- /dev/null
+++ b/docs/_docs/index.adoc
@@ -0,0 +1,53 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Apache Ignite Documentation
+
+Apache Ignite is a distributed database for in-memory speed at petabyte scale.
+
+The technical documentation introduces you to the key capabilities, shows how to use specific features, and explains
+how to approach cluster optimization and troubleshooting. If you are new to Ignite, start with the
+link:quick-start/java[Quick Start Guides] and build your first application in a matter of 5-10 minutes.
+Otherwise, select the topic of your interest to get your questions answered.
+Good luck with your Ignite journey!
+
+== APIs
+
+API reference for various programming languages.
+
+*Latest Stable Version*
+
+* link:/releases/latest/javadoc/[JavaDoc]
+* link:/releases/latest/dotnetdoc/api/[C#/.NET]
+* link:/releases/latest/cppdoc/[C++]
+* link:/releases/latest/scaladoc/scalar/index.html[Scala]
+
+*Older Versions*
+
+* Use the top-level navigation menu to change the Ignite version and select a version-specific API from the APIs drop-down list.
+* Or, go to the link:/download.cgi[downloads page] for a full archive of the versions.
+
+== Examples
+
+The Apache Ignite GitHub repository contains a number of runnable examples that illustrate various aspects of Ignite functionality.
+
+* link:{githubUrl}/examples[Java^]
+* link:{githubUrl}/modules/platforms/dotnet/examples[C#/.NET^]
+* link:{githubUrl}/modules/platforms/cpp/examples[C++^]
+* link:{githubUrl}/modules/platforms/python/examples[Python^]
+* link:{githubUrl}/modules/platforms/nodejs/examples[Node.JS^]
+* link:{githubUrl}/modules/platforms/php/examples[PHP^]
+
+== Programming Languages
+include::includes/intro-languages.adoc[]
diff --git a/docs/_docs/installation/deb-rpm.adoc b/docs/_docs/installation/deb-rpm.adoc
new file mode 100644
index 0000000..60a441e
--- /dev/null
+++ b/docs/_docs/installation/deb-rpm.adoc
@@ -0,0 +1,95 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Installing Using DEB and RPM Packages
+
+Apache Ignite can be installed from the official link:https://www.apache.org/dist/ignite/rpm[RPM] or link:https://www.apache.org/dist/ignite/deb[DEB] repositories.
+
+== Installing Deb Package
+
+Configure the repository:
+
+[source, shell]
+----
+sudo apt update
+sudo apt install dirmngr --no-install-recommends
+----
+
+
+[source, shell]
+----
+sudo bash -c 'cat <<EOF > /etc/apt/sources.list.d/ignite.list
+deb http://apache.org/dist/ignite/deb/ apache-ignite main
+EOF'
+sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 379CE192D401AB61
+sudo apt update
+----
+
+Install the Apache Ignite package:
+
+[source, shell]
+----
+sudo apt install apache-ignite --no-install-recommends
+----
+
+This will install the following files into your system:
+
+[cols="1,1,1",opts="header"]
+|===
+
+|Folder|  Mapped To|   Description
+|/usr/share/apache-ignite||        The root of Apache Ignite's installation
+|/usr/share/apache-ignite/bin||        Bin folder (scripts and executables)
+|/etc/apache-ignite | /usr/share/apache-ignite/config| Default configuration files
+|/var/log/apache-ignite|  /var/lib/apache-ignite/log|  Log directory
+|/usr/lib/apache-ignite|  /usr/share/apache-ignite/libs|   Core and optional libraries
+|/var/lib/apache-ignite|  /usr/share/apache-ignite/work|   Ignite work directory
+|/usr/share/doc/apache-ignite     ||   Documentation
+|/usr/share/license/apache-ignite-{version} ||     Licenses
+|/etc/systemd/system || systemd service configuration
+
+|===
+
+== Running Ignite as a Service
+
+NOTE: If running on Windows 10 WSL or Docker, you should start Apache Ignite as a stand-alone process (not as a service).
+//See the next section.
+
+To start an Ignite node with a custom configuration, run the following command:
+
+[source, shell]
+----
+sudo systemctl start apache-ignite@<config_name>
+----
+
+The `<config_name>` parameter specifies the path to the configuration file relative to the `/etc/apache-ignite` folder.
+
+To launch the node at system startup, run the following command:
+
+[source, shell]
+----
+sudo systemctl enable apache-ignite@<config_name>
+----
+
+
+////
+== Running Ignite as a Stand-Alone Process
+
+Use the commands below to start Ignite as a stand-alone process (cd to /usr/share/apache-ignite previously).
+To change the default configuration, you can update the /etc/apache-ignite/default-config.xml file.
+The default configuration uses Multicast IP Finder; if you want to use Static IP Finder, you need to change the default config file.
+Learn more about TCP/IP Discovery in the corresponding page.
+
+////
+
diff --git a/docs/_docs/installation/index.adoc b/docs/_docs/installation/index.adoc
new file mode 100644
index 0000000..8c850f4
--- /dev/null
+++ b/docs/_docs/installation/index.adoc
@@ -0,0 +1,21 @@
+---
+layout: toc
+---
+
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+= Installation
+
diff --git a/docs/_docs/installation/installing-using-docker.adoc b/docs/_docs/installation/installing-using-docker.adoc
new file mode 100644
index 0000000..149d092
--- /dev/null
+++ b/docs/_docs/installation/installing-using-docker.adoc
@@ -0,0 +1,212 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Installing Using Docker
+
+== Considerations
+
+*In-memory vs. Persistent Cluster*::
++
+--
+When deploying a persistent Ignite cluster, you should always mount a persistent volume or local directory.
+If you do not use a persistent volume, Ignite stores the data in the container's file system,
+which means that the data is erased when you remove the container. To save the data in a permanent location, <<Running Persistent Cluster,mount a persistent volume>>.
+--
+
+*Networking*::
++
+--
+By default, the Ignite Docker image exposes the following ports: 11211, 47100, 47500, 49112.
+You can expose more ports as needed by adding `-p <port>` to the `docker run` command.
+For example, to connect a thin client to the node running inside a Docker container, open port 10800:
+
+[source, shell]
+----
+docker run -d -p 10800:10800 apacheignite/ignite
+----
+--
+
+== Downloading Ignite Docker Image
+
+Assuming that you already have Docker installed on your machine, you can pull
+and run the Ignite Docker image using the following commands.
+
+Open a command shell and use the following command to pull the Ignite Docker image.
+[source,shell]
+----
+# Pull latest version
+sudo docker pull apacheignite/ignite
+----
+
+By default, the latest version is downloaded, but you can download a specific version too.
+[source,shell,subs="attributes,specialchars"]
+----
+# Pull a specific Ignite version
+sudo docker pull apacheignite/ignite:{version}
+----
+
+== Running In-Memory Cluster
+
+Run Ignite in a docker container using the `docker run` command.
+
+[source,shell]
+----
+# Run the latest version
+sudo docker run -d apacheignite/ignite
+----
+
+This command will launch a single Ignite node.
+
+To run a specific version of Ignite, use the following command:
+
+[source,shell,subs="attributes,specialchars"]
+----
+# Run a specific Ignite version
+sudo docker run -d apacheignite/ignite:{version}
+----
+
+== Running Persistent Cluster
+
+If you use link:persistence/native-persistence[Native Persistence], Ignite stores the user data under the default work directory (`{IGNITE_HOME}/work`) in the file system of the container. This directory is erased if you remove the container. To avoid this, you can:
+
+- Use a persistent volume to store the data; or
+- Mount a local directory
+
+These two options are described in the following sections.
+
+=== Using Persistent Volume
+
+
+To create a persistent volume, run the following command:
+
+[source, shell]
+----
+sudo docker volume create storage-volume
+----
+
+We will mount this volume to a specific directory when running the Ignite Docker image. This directory has to be passed to Ignite in one of two ways:
+
+- Using the `IGNITE_WORK_DIR` system property
+- In the node configuration file
+
+The following command launches the Ignite Docker image and passes the work directory to Ignite via the system property:
+
+
+[source,shell]
+----
+docker run -d \
+  -v storage-volume:/storage \
+  -e IGNITE_WORK_DIR=/storage \
+  apacheignite/ignite
+----
+
+=== Using Local Directory
+
+Instead of creating a volume, you can mount a local directory to the container in which the Ignite image is running and use this directory to store persistent data.
+When restarting the container with the same command, Ignite will load the data from the directory.
+
+
+[source, shell]
+----
+mkdir work_dir
+
+docker run -d \
+  -v ${PWD}/work_dir:/storage \
+  -e IGNITE_WORK_DIR=/storage \
+  apacheignite/ignite
+----
+
+The `-v` option mounts a local directory under the `/storage` path in the container.
+The `-e IGNITE_WORK_DIR=/storage` option tells Ignite to use this folder as the work directory.
+
+
+== Providing Configuration File
+When you run the image, it starts a node with the default configuration file.
+You can pass a custom configuration file by using the `CONFIG_URI` environment variable:
+
+[source, shell]
+----
+docker run -d \
+  -e CONFIG_URI=http://myserver/config.xml  \
+  apacheignite/ignite
+----
+
+You can also use a file from your local file system.
+You need to mount the file first under a specific path inside the container by using the `-v` option.
+Then, use this path in the `CONFIG_URI` option:
+
+[source, shell]
+----
+docker run -d \
+  -v /local/dir/config.xml:/config-file.xml \
+  -e CONFIG_URI=/config-file.xml \
+  apacheignite/ignite
+----
+
+== Deploying User Libraries
+
+When starting, a node adds all libraries found in the `{IGNITE_HOME}/libs` directory to the classpath (ignoring the "optional" directory).
+If you want to deploy user libraries, you can mount a directory from your local machine to a path under `/opt/ignite/apache-ignite/libs/` in the container by using the `-v` option.
+
+The following command mounts a directory on your machine to `libs/user_libs` in the container.
+All files located in the directory are added to the classpath of the node.
+
+[source, shell]
+----
+docker run -v /local_path/to/dir_with_libs/:/opt/ignite/apache-ignite/libs/user_libs apacheignite/ignite
+----
+
+Another option is to use the `EXTERNAL_LIBS` variable if your libraries are available via a URL.
+
+[source, shell]
+----
+docker run -e "EXTERNAL_LIBS=http://url_to_your_jar" apacheignite/ignite
+----
+
+
+== Enabling Modules
+
+To enable specific link:setup#enabling-modules[modules], specify their names in the `OPTION_LIBS` environment variable as follows:
+
+[source, shell]
+----
+sudo docker run -d \
+  -e "OPTION_LIBS=ignite-rest-http,ignite-aws" \
+  apacheignite/ignite
+----
+
+By default, the Ignite Docker image starts with the following modules enabled:
+
+- ignite-log4j
+- ignite-spring
+- ignite-indexing
+
+== Environment Variables
+
+The following parameters can be passed as environment variables to the Docker container:
+
+[cols="1,2,1", options="header"]
+|===
+| Parameter Name |Description |Default
+| `CONFIG_URI` | URL to the Ignite configuration file (can also be relative to the META-INF folder on the class path).
+The downloaded config file is saved to `./ignite-config.xml`. | N/A
+
+| `OPTION_LIBS` | A list of link:setup#enabling-modules[modules] that will be enabled for the node. | ignite-log4j, ignite-spring, ignite-indexing
+
+| `JVM_OPTS` | JVM arguments passed to the Ignite instance.| N/A
+
+| `EXTERNAL_LIBS` | A list of URLs to external libraries. Refer to <<Deploying User Libraries>>.| N/A
+
+|===
+
diff --git a/docs/_docs/installation/installing-using-zip.adoc b/docs/_docs/installation/installing-using-zip.adoc
new file mode 100644
index 0000000..4eaddfa
--- /dev/null
+++ b/docs/_docs/installation/installing-using-zip.adoc
@@ -0,0 +1,27 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Installing Using ZIP Archive
+
+== Prerequisites
+
+Ignite was tested on:
+
+include::includes/prereqs.adoc[]
+
+== Installing Using ZIP Archive
+
+include::includes/install-ignite.adoc[]
+
+
diff --git a/docs/_docs/installation/kubernetes/amazon-eks-deployment.adoc b/docs/_docs/installation/kubernetes/amazon-eks-deployment.adoc
new file mode 100644
index 0000000..41bf77b
--- /dev/null
+++ b/docs/_docs/installation/kubernetes/amazon-eks-deployment.adoc
@@ -0,0 +1,68 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Amazon EKS Deployment
+:command: kubectl
+:soft_name: Kubernetes
+:serviceName: Amazon EKS
+:configDir: ../../code-snippets/k8s
+:script: ../../code-snippets/k8s/setup.sh
+:javaFile: ../../{javaCodeDir}/k8s/K8s.java
+
+
+This page is a step-by-step guide on how to deploy an Ignite cluster on Amazon EKS.
+
+include::installation/kubernetes/generic-configuration.adoc[tag=intro]
+
+include::installation/kubernetes/generic-configuration.adoc[tag=kube-version]
+
+In this guide, we will use the `eksctl` command line tool to create a Kubernetes cluster.
+Follow link:https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html[this guide,window=_blank] to install the required resources and get familiar with the tool.
+
+
+== Creating an Amazon EKS Cluster
+
+First, you need to create an Amazon EKS cluster that will provide resources for your Kubernetes pods.
+You can create a cluster using the following command:
+
+[source, shell]
+----
+eksctl create cluster --name ignitecluster --nodes 2 --nodes-min 1 --nodes-max 4
+----
+
+Check the link:https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html[EKS documentation,window=_blank] for the full list of options.
+The provisioning of a cluster can take up to 15 minutes.
+Check the status of the cluster using the following command:
+
+[source, shell]
+----
+$ eksctl get cluster -n ignitecluster
+NAME            VERSION STATUS  CREATED                 VPC                     SUBNETS                                                                                                 SECURITYGROUPS
+ignitecluster 1.14    ACTIVE  2019-12-16T09:57:09Z    vpc-0ebf4a6ee3de12c63   subnet-00fa7e85aaebcd54d,subnet-06134ae545a5cc04c,subnet-063d9fdb481e727d2,subnet-0a087062ddc47c341     sg-06a6800a67ea95528
+----
+
+When the status of the cluster becomes `ACTIVE`, you can start creating Kubernetes resources.
+
+Verify that your `kubectl` is configured correctly:
+
+[source, shell]
+----
+$ kubectl get svc
+NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
+kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   6m49s
+----
+
+== Kubernetes Configuration
+
+include::installation/kubernetes/generic-configuration.adoc[tag=kubernetes-config]
diff --git a/docs/_docs/installation/kubernetes/azure-deployment.adoc b/docs/_docs/installation/kubernetes/azure-deployment.adoc
new file mode 100644
index 0000000..bf317bd
--- /dev/null
+++ b/docs/_docs/installation/kubernetes/azure-deployment.adoc
@@ -0,0 +1,84 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Microsoft Azure Kubernetes Service Deployment
+
+:serviceName: AKS
+:soft_name: Kubernetes
+:command: kubectl
+:configDir: ../../code-snippets/k8s
+:script: ../../code-snippets/k8s/setup.sh
+:javaFile: ../../{javaCodeDir}/k8s/K8s.java
+
+This page is a step-by-step guide on how to deploy an Ignite cluster on Microsoft Azure Kubernetes Service.
+
+include::installation/kubernetes/generic-configuration.adoc[tag=intro]
+
+include::installation/kubernetes/generic-configuration.adoc[tag=kube-version]
+
+== Creating the AKS Cluster
+
+The first step is to configure the Azure Kubernetes Service (AKS) cluster by following one of the Microsoft guidelines:
+
+*  link:https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal[Deploy an AKS cluster using the Azure portal,window=_blank]
+*  link:https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough[Deploy an AKS cluster using the Azure CLI,window=_blank]
+
+
+In this guide, we'll be using the Azure portal.
+
+1. Create a Microsoft account if you do not have one. Navigate to https://portal.azure.com[,window=_blank] and choose *Create a resource > Kubernetes Service > Create*.
+2. On the screen that appears, specify the general parameters for your deployment: set the cluster name to "IgniteCluster" and the resource group name to "Ignite".
++
+--
+image::images/k8s/create-aks-cluster.png[]
+--
+3. On the same screen, pick the required number of nodes for your AKS cluster:
++
+--
+image::images/k8s/aks-node-number.png[]
+--
+4. Configure other parameters as required.
+5. When finished with the configuration, click the *Review + create* button.
+6. Double-check the configuration parameters and click *Create*. Give Azure some time to deploy the cluster.
+7. Go to *All Resources > IgniteCluster* to view the state of the cluster.
+
+== Connecting to the AKS Cluster
+
+To configure `kubectl` to connect to your Kubernetes cluster, use the following command:
+
+[source, shell]
+----
+az aks get-credentials --resource-group Ignite --name IgniteCluster
+----
+
+If you encounter any problems, check out the https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough#connect-to-the-cluster[official documentation,window=_blank].
+
+Using the following command, check that all the nodes are in the "Ready" state:
+
+[source, shell]
+----
+$ kubectl get nodes
+
+NAME                                STATUS   ROLES   AGE     VERSION
+aks-agentpool-25545244-vmss000000   Ready    agent   6h23m   v1.14.8
+aks-agentpool-25545244-vmss000001   Ready    agent   6h23m   v1.14.8
+aks-agentpool-25545244-vmss000002   Ready    agent   6h23m   v1.14.8
+----
+
+Now you can start creating Kubernetes resources.
+
+== Kubernetes Configuration
+
+include::installation/kubernetes/generic-configuration.adoc[tag=kubernetes-config]
+
diff --git a/docs/_docs/installation/kubernetes/generic-configuration.adoc b/docs/_docs/installation/kubernetes/generic-configuration.adoc
new file mode 100644
index 0000000..3dc9f3f
--- /dev/null
+++ b/docs/_docs/installation/kubernetes/generic-configuration.adoc
@@ -0,0 +1,402 @@
+---
+published: false
+---
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Generic Kubernetes Instructions
+:safe: unsafe
+:command: kubectl
+:soft_name: Kubernetes
+:serviceName:
+
+
+//tag::kube-version[]
+CAUTION: This guide was written using `kubectl` version 1.17.
+//end::kube-version[]
+
+== Introduction
+
+//tag::intro[]
+We will consider two deployment modes: stateful and stateless.
+Stateless deployments are suitable for in-memory use cases where your cluster keeps the application data in RAM for better performance.
+A stateful deployment differs from a stateless deployment in that it includes setting up persistent volumes for the cluster's storage.
+
+CAUTION: This guide focuses on deploying server nodes on Kubernetes. If you want to run client nodes on Kubernetes while your cluster is deployed elsewhere, you need to enable the communication mode designed for client nodes running behind a NAT. Refer to link:clustering/running-client-nodes-behind-nat[this section].
+
+//end::intro[]
+
+
+== {soft_name} Configuration
+
+//tag::kubernetes-config[]
+
+{soft_name} configuration involves creating the following resources:
+
+* A namespace
+* A cluster role
+* A ConfigMap for the node configuration file
+* A service to be used for discovery and load balancing when external apps connect to the cluster
+* A configuration for pods running Ignite nodes
+
+=== Creating Namespace
+
+Create a unique namespace for your deployment.
+In our case, the namespace is called “ignite”.
+
+Create the namespace using the following command:
+
+[source, shell]
+----
+include::{script}[tags=create-namespace]
+----
+
+=== Creating Service
+
+The {soft_name} service is used for auto-discovery and as a load-balancer for external applications that will connect to your cluster.
+
+Every time a new node is started (in a separate pod), the IP finder connects to the service via the Kubernetes API to obtain the list of the existing pods' addresses.
+Using these addresses, the new node discovers all cluster nodes.
+
+.service.yaml
+[source, yaml]
+----
+include::{configDir}/service.yaml[tag=config-block]
+----
+
+Create the service:
+
+[source, shell]
+----
+include::{script}[tags=create-service]
+----
+
+=== Creating Cluster Role and Service Account
+
+Create a service account:
+
+[source, shell]
+----
+include::{script}[tags=create-service-account]
+----
+
+A cluster role is used to grant access to pods. The following file is an example of a cluster role:
+
+.cluster-role.yaml
+[source, yaml]
+----
+include::{configDir}/cluster-role.yaml[tag=config-block]
+----
+
+Run the following command to create the role and a role binding:
+
+[source, shell]
+----
+include::{script}[tags=create-cluster-role]
+----
+
+=== Creating ConfigMap for Node Configuration File
+We will create a ConfigMap that keeps the node configuration file for every node to use.
+This allows you to maintain a single instance of the configuration file for all nodes.
+
+Let's create a configuration file first.
+Choose one of the tabs below, depending on whether you use persistence or not.
+
+:kubernetes-ip-finder-description: This IP finder connects to the service via the Kubernetes API and obtains the list of the existing pods' addresses. Using these addresses, the new node discovers all other cluster nodes.
+
+
+[tabs]
+--
+tab:Configuration without persistence[]
+We must use the `TcpDiscoveryKubernetesIpFinder` IP finder for node discovery.
+{kubernetes-ip-finder-description}
+
+The file will look like this:
+
+.node-configuration.xml
+[source, xml]
+----
+include::{configDir}/stateless/node-configuration.xml[tag=config-block]
+----
+
+tab:Configuration with persistence[]
+In the configuration file, we will:
+
+* Enable link:persistence/native-persistence[native persistence] and specify the `workDirectory`, `walPath`, and `walArchivePath`. These directories are mounted in each pod that runs an Ignite node. Volume configuration is part of the <<Creating Pod Configuration,pod configuration>>.
+* Use the `TcpDiscoveryKubernetesIpFinder` IP finder. {kubernetes-ip-finder-description}
+
+The file looks like this:
+
+.node-configuration.xml
+[source, xml]
+----
+include::{configDir}/stateful/node-configuration.xml[tag=config-block]
+----
+--
+
+
+The `namespace` and `serviceName` properties of the IP finder must be the same as specified in the <<Creating Service,service configuration>>.
+Add other properties as required for your use case.
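+
+For reference, the same IP finder can be configured programmatically from Java
+(this requires the `ignite-kubernetes` module on the classpath). A minimal
+sketch; the namespace and service name must match the resources created above:
+
+[source, java]
+----
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
+import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;
+
+// Resolve cluster node addresses through the Kubernetes API.
+TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder();
+ipFinder.setNamespace("ignite");
+ipFinder.setServiceName("ignite-service");
+
+IgniteConfiguration cfg = new IgniteConfiguration()
+    .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
+----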
+
+To create the ConfigMap, run the following command in the directory with the `node-configuration.xml` file:
+
+[source, shell]
+----
+include::{script}[tags=create-configmap]
+----
+
+
+=== Creating Pod Configuration
+
+Now we will create a configuration for pods.
+In the case of stateless deployment, we will use a link:https://kubernetes.io/docs/concepts/workloads/controllers/deployment/[Deployment,window=_blank].
+For a stateful deployment, we will use a link:https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/[StatefulSet,window=_blank].
+
+
+[tabs]
+--
+tab:Configuration without persistence[]
+
+Our Deployment configuration will deploy a ReplicaSet with two pods running Ignite {version}.
+
+In the container's configuration, we will:
+
+* Enable the “ignite-kubernetes” and “ignite-rest-http” link:installation/installing-using-docker#enabling-modules[modules].
+* Use the configuration file from the ConfigMap we created earlier.
+* Open a number of ports:
+** 47100 — the communication port
+** 47500 — the discovery port
+** 49112 — the default JMX port
+** 10800 — thin client/JDBC/ODBC port
+** 8080 — REST API port
+
+The deployment configuration file might look as follows:
+
+.deployment.yaml
+[source, yaml,subs="attributes,specialchars"]
+----
+include::{configDir}/stateless/deployment-template.yaml[tag=config-block]
+----
+
+Create a deployment by running the following command:
+
+[source, shell]
+----
+include::{script}[tags=create-deployment]
+----
+
+tab:Configuration with persistence[]
+
+Our StatefulSet configuration deploys two pods running Ignite {version}.
+
+In the container's configuration, we will:
+
+* Enable the “ignite-kubernetes” and “ignite-rest-http” link:installation/installing-using-docker#enabling-modules[modules].
+* Use the configuration file from the ConfigMap we created earlier.
+* Mount volumes for the work directory (where application data is stored), WAL files, and WAL archive.
+* Open a number of ports:
+** 47100 — the communication port
+** 47500 — the discovery port
+** 49112 — the default JMX port
+** 10800 — thin client/JDBC/ODBC port
+** 8080 — REST API port
+
+The StatefulSet configuration file might look as follows:
+
+.statefulset.yaml
+[source,yaml,subs="attributes,specialchars"]
+----
+include::{configDir}/stateful/statefulset-template.yaml[tag=config-block]
+----
+
+Note the `spec.volumeClaimTemplates` section, which defines persistent volumes provisioned by a persistent volume provisioner.
+The volume type depends on the cloud provider.
+You can have more control over the volume type by defining https://kubernetes.io/docs/concepts/storage/storage-classes/[storage classes,window=_blank].
+
+Create the StatefulSet by running the following command:
+
+[source, shell]
+----
+include::{script}[tags=create-statefulset]
+----
+
+--
+
+Check if the pods were deployed correctly:
+
+[source, shell,subs="attributes"]
+----
+$ {command} get pods -n ignite
+NAME                                READY   STATUS    RESTARTS   AGE
+ignite-cluster-5b69557db6-lcglw   1/1     Running   0          44m
+ignite-cluster-5b69557db6-xpw5d   1/1     Running   0          44m
+----
+
+Check the logs of the nodes:
+
+[source, shell,subs="attributes"]
+----
+$ {command} logs ignite-cluster-5b69557db6-lcglw -n ignite
+...
+[14:33:50] Ignite documentation: http://gridgain.com
+[14:33:50]
+[14:33:50] Quiet mode.
+[14:33:50]   ^-- Logging to file '/opt/gridgain/work/log/ignite-b8622b65.0.log'
+[14:33:50]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
+[14:33:50]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}
+[14:33:50]
+[14:33:50] OS: Linux 4.19.81 amd64
+[14:33:50] VM information: OpenJDK Runtime Environment 1.8.0_212-b04 IcedTea OpenJDK 64-Bit Server VM 25.212-b04
+[14:33:50] Please set system property '-Djava.net.preferIPv4Stack=true' to avoid possible problems in mixed environments.
+[14:33:50] Initial heap size is 30MB (should be no less than 512MB, use -Xms512m -Xmx512m).
+[14:33:50] Configured plugins:
+[14:33:50]   ^-- None
+[14:33:50]
+[14:33:50] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]]]
+[14:33:50] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
+[14:33:50] Security status [authentication=off, tls/ssl=off]
+[14:34:00] Nodes started on local machine require more than 80% of physical RAM what can lead to significant slowdown due to swapping (please decrease JVM heap size, data region size or checkpoint buffer size) [required=918MB, available=1849MB]
+[14:34:01] Performance suggestions for grid  (fix if possible)
+[14:34:01] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
+[14:34:01]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
+[14:34:01]   ^-- Specify JVM heap max size (add '-Xmx<size>[g|G|m|M|k|K]' to JVM options)
+[14:34:01]   ^-- Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)
+[14:34:01]   ^-- Disable processing of calls to System.gc() (add '-XX:+DisableExplicitGC' to JVM options)
+[14:34:01] Refer to this page for more performance suggestions: https://ignite.apache.org/docs/latest/perf-and-troubleshooting/general-perf-tips
+[14:34:01]
+[14:34:01] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
+[14:34:01] Data Regions Configured:
+[14:34:01]   ^-- default [initSize=256.0 MiB, maxSize=370.0 MiB, persistence=false, lazyMemoryAllocation=true]
+[14:34:01]
+[14:34:01] Ignite node started OK (id=b8622b65)
+[14:34:01] Topology snapshot [ver=2, locNode=b8622b65, servers=2, clients=0, state=ACTIVE, CPUs=2, offheap=0.72GB, heap=0.88GB]
+----
+
+The string `servers=2` in the last line indicates that the two nodes have joined into a single cluster.
+
+== Activating the Cluster
+
+If you deployed a stateless cluster, skip this step: a cluster without persistence does not require activation.
+
+If you are using persistence, you must activate the cluster after it is started. To do that, connect to one of the pods:
+
+[source, shell,subs="attributes,specialchars"]
+----
+{command} exec -it <pod_name> -n ignite -- /bin/bash
+----
+
+Execute the following command:
+
+[source, shell]
+----
+/opt/ignite/apache-ignite/bin/control.sh --set-state ACTIVE --yes
+----
+
+You can also activate the cluster using the link:restapi#change-cluster-state[REST API].
+Refer to the <<Connecting to the Cluster>> section for details about connecting to the cluster's REST API.
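+
+Alternatively, if your application already runs a node that joined the
+cluster, activation can be performed from the Java API; a minimal sketch,
+assuming `ignite` is a started `Ignite` instance:
+
+[source, java]
+----
+import org.apache.ignite.cluster.ClusterState;
+
+// Moves the cluster to the ACTIVE state so public APIs become available.
+ignite.cluster().state(ClusterState.ACTIVE);
+----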
+
+== Scaling the Cluster
+
+You can add more nodes to the cluster by using the `{command} scale` command.
+
+CAUTION: Make sure your {serviceName} cluster has enough resources to add new pods.
+
+In the following example, we bring up one more node (we started with two).
+
+
+[tabs]
+--
+tab:Configuration without persistence[]
+To scale your Deployment, run the following command:
+
+[source, shell,subs="attributes,specialchars"]
+----
+{command} scale deployment ignite-cluster --replicas=3 -n ignite
+----
+
+tab:Configuration with persistence[]
+To scale your StatefulSet, run the following command:
+[source, shell,subs="attributes,specialchars"]
+----
+{command} scale sts ignite-cluster --replicas=3 -n ignite
+----
+
+After scaling the cluster, link:control-script#activation-deactivation-and-topology-management[change the baseline topology] accordingly.
+
+--
+
+CAUTION: If you reduce the number of nodes by more than the link:configuring-caches/configuring-backups[number of partition backups], you may lose data. The proper way to scale down is to redistribute the data after removing a node by changing the link:control-script#removing-nodes-from-baseline-topology[baseline topology].
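+
+For persistent clusters, the baseline can be reset to the current topology
+from the Java API after scaling; a minimal sketch, assuming `ignite` is a node
+that has joined the cluster:
+
+[source, java]
+----
+// Make the current topology version the new baseline topology.
+long topVer = ignite.cluster().topologyVersion();
+ignite.cluster().setBaselineTopology(topVer);
+----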
+
+== Connecting to the Cluster
+
+If your application is also running in {soft_name}, you can use either thin clients or client nodes to connect to the cluster.
+
+Get the public IP of the service:
+
+[source, shell,subs="attributes,specialchars"]
+----
+$ {command} describe svc ignite-service -n ignite
+Name:                     ignite-service
+Namespace:                ignite
+Labels:                   app=ignite
+Annotations:              <none>
+Selector:                 app=ignite
+Type:                     LoadBalancer
+IP:                       10.0.144.19
+LoadBalancer Ingress:     13.86.186.145
+Port:                     rest  8080/TCP
+TargetPort:               8080/TCP
+NodePort:                 rest  31912/TCP
+Endpoints:                10.244.1.5:8080
+Port:                     thinclients  10800/TCP
+TargetPort:               10800/TCP
+NodePort:                 thinclients  31345/TCP
+Endpoints:                10.244.1.5:10800
+Session Affinity:         None
+External Traffic Policy:  Cluster
+----
+
+
+You can use the `LoadBalancer Ingress` address to connect to one of the open ports.
+The ports are also listed in the output of the command.
+
+
+=== Connecting Client Nodes
+
+A client node requires connectivity to every node in the cluster. The only way to achieve this is to start the client node within {soft_name}.
+You will need to configure the discovery mechanism to use `TcpDiscoveryKubernetesIpFinder`, as described in the <<Creating ConfigMap for Node Configuration File>> section.
+
+
+=== Connecting with Thin Clients
+
+The following code snippet illustrates how to connect to your cluster using the link:thin-clients/java-thin-client[Java thin client]. You can use other thin clients in the same way.
+Note that we use the external IP address (LoadBalancer Ingress) of the service.
+
+[source, java]
+----
+include::{javaFile}[tags=connectThinClient, indent=0]
+----
+
+=== Connecting to REST API
+
+Connect to the cluster's REST API as follows:
+
+[source,shell,subs="attributes,specialchars"]
+----
+$ curl http://13.86.186.145:8080/ignite?cmd=version
+{"successStatus":0,"error":null,"response":"{version}","sessionToken":null}
+----
+
+
+//end::kubernetes-config[]
diff --git a/docs/_docs/installation/kubernetes/gke-deployment.adoc b/docs/_docs/installation/kubernetes/gke-deployment.adoc
new file mode 100644
index 0000000..0c75d79
--- /dev/null
+++ b/docs/_docs/installation/kubernetes/gke-deployment.adoc
@@ -0,0 +1,78 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Google Kubernetes Engine Deployment
+
+:serviceName: GKE
+:soft_name: Kubernetes
+:command: kubectl
+:configDir: ../../code-snippets/k8s
+:script: ../../code-snippets/k8s/setup.sh
+:javaFile: ../../{javaCodeDir}/k8s/K8s.java
+
+
+This page explains how to deploy an Ignite cluster on Google Kubernetes Engine.
+
+include::installation/kubernetes/generic-configuration.adoc[tag=intro]
+
+include::installation/kubernetes/generic-configuration.adoc[tag=kube-version]
+
+== Creating a GKE Cluster
+A cluster in GKE is a set of nodes that provision resources for the applications that are deployed in the cluster.
+You must create a GKE cluster with enough resources (CPU, RAM, and storage) for your use case.
+
+* link:https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster[Create a cluster,window=_blank]
+* link:https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl[Configure _kubectl_,window=_blank]
+
+The easiest way to create a cluster is to use the `gcloud` command line tool:
+
+[source, txt]
+----
+$ gcloud container clusters create my-cluster --zone us-west1
+...
+Creating cluster my-cluster in us-west1... Cluster is being health-checked (master is healthy)...done.
+Created [https://container.googleapis.com/v1/projects/gmc-development/zones/us-west1/clusters/my-cluster].
+To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-west1/my-cluster?project=my-project
+kubeconfig entry generated for my-cluster.
+NAME        LOCATION  MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION    NUM_NODES  STATUS
+my-cluster  us-west1  1.14.10-gke.27  35.230.126.102  n1-standard-1  1.14.10-gke.27  9          RUNNING
+----
+
+Verify that your `kubectl` is configured correctly:
+
+
+[source, shell]
+----
+$ kubectl get nodes
+NAME                                        STATUS   ROLES    AGE   VERSION
+gke-my-cluster-default-pool-6e9f3e45-8k0w   Ready    <none>   73s   v1.14.10-gke.27
+gke-my-cluster-default-pool-6e9f3e45-b7lb   Ready    <none>   72s   v1.14.10-gke.27
+gke-my-cluster-default-pool-6e9f3e45-cmzc   Ready    <none>   74s   v1.14.10-gke.27
+gke-my-cluster-default-pool-a2556b36-85z6   Ready    <none>   73s   v1.14.10-gke.27
+gke-my-cluster-default-pool-a2556b36-xlbj   Ready    <none>   72s   v1.14.10-gke.27
+gke-my-cluster-default-pool-a2556b36-z8fp   Ready    <none>   74s   v1.14.10-gke.27
+gke-my-cluster-default-pool-e93974f2-hwkj   Ready    <none>   72s   v1.14.10-gke.27
+gke-my-cluster-default-pool-e93974f2-jqj3   Ready    <none>   72s   v1.14.10-gke.27
+gke-my-cluster-default-pool-e93974f2-v8xv   Ready    <none>   74s   v1.14.10-gke.27
+----
+
+Now you are ready to create Kubernetes resources.
+
+== Kubernetes Configuration
+
+include::installation/kubernetes/generic-configuration.adoc[tag=kubernetes-config]
+
+
+
+
diff --git a/docs/_docs/installation/vmware-installation.adoc b/docs/_docs/installation/vmware-installation.adoc
new file mode 100644
index 0000000..18948f7
--- /dev/null
+++ b/docs/_docs/installation/vmware-installation.adoc
@@ -0,0 +1,59 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Installing Apache Ignite in VMware
+
+== Overview
+
+Apache Ignite can be deployed in virtual and cloud environments managed by VMware. There are no VMware-specific
+requirements; however, we recommend that you pin each Ignite VM to a single dedicated host, which allows you to:
+
+* Avoid the "noisy neighbor" problem, where the Ignite VM competes for host resources with other applications,
+which might cause performance spikes in your Ignite cluster.
+* Ensure high availability: if a host goes down while two or more Ignite server node VMs are pinned to it, you could lose data.
+
+The following section covers the use of vMotion for migrating Ignite nodes.
+
+== Cluster Nodes Migration With vMotion
+
+vMotion migrates a live VM from one host to another. Ignite relies on several basic conditions to
+continue normal operation after the migration:
+
+* Memory state on the new host is identical.
+* Disk state is identical (or the new host uses the same disk).
+* IP addresses, available ports, and other networking parameters are not changed.
+* All network resources are available, and TCP connections are not interrupted.
+
+If vMotion is set up and operates in accordance with the above conditions, an Ignite node will continue to function normally.
+
+However, vMotion migration will impact the performance of the Ignite VM: during the transfer, a significant share of
+resources -- mainly CPU and network capacity -- serves vMotion needs.
+
+To avoid negative performance spikes and unresponsive/frozen periods of the cluster state, we recommend the following:
+
+* Perform migration during the periods of low activity and load on your Ignite cluster. This ensures faster transfer with
+minimal impact on the cluster performance.
+* Perform migration of the nodes sequentially, one by one, if several nodes have to be migrated.
+* Set the `IgniteConfiguration.failureDetectionTimeout` parameter to a value higher than the possible downtime of the Ignite VM
+(see the configuration sketch after this list). This is because vMotion stops the CPU of your Ignite VM when a small chunk
+of state is left for transfer. If transferring that chunk takes longer than `failureDetectionTimeout`, the node
+is removed from the cluster.
+* Use a high-throughput network. It's better if the vMotion migrator and Ignite cluster are using different networks to
+avoid network saturation.
+* If you can choose between more nodes with less RAM and fewer nodes with more RAM, go for the first option.
+Less RAM per Ignite VM means faster vMotion migration, and faster migration means more stable operation of the Ignite cluster.
+* If it's applicable for your use case, you can even consider the migration with a downtime of the Ignite VM. Given that
+there are backup copies of the data on other nodes in the cluster, the node can be shut down and brought back up after the
+vMotion migration is over. This may result in better overall performance (both performance of the cluster and the vMotion
+transfer time) than with a live migration.
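+
+The following sketch shows how the failure detection timeout can be set programmatically; the value is an illustrative assumption and should exceed your measured vMotion downtime:
+
+[source,java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+// Assumed value: must exceed the expected stop-and-copy downtime of the vMotion transfer.
+cfg.setFailureDetectionTimeout(30_000);
+
+Ignition.start(cfg);
+----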
diff --git a/docs/_docs/key-value-api/basic-cache-operations.adoc b/docs/_docs/key-value-api/basic-cache-operations.adoc
new file mode 100644
index 0000000..5c6d66a
--- /dev/null
+++ b/docs/_docs/key-value-api/basic-cache-operations.adoc
@@ -0,0 +1,421 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Basic Cache Operations
+
+:javaFile: {javaCodeDir}/BasicCacheOperations.java
+
+== Getting an Instance of a Cache
+
+All operations on a cache are performed through an instance of `IgniteCache`.
+You can obtain an `IgniteCache` instance for an existing cache, or you can create a cache dynamically.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=getCache,indent=0]
+
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+IIgnite ignite = Ignition.Start();
+
+// Obtain an instance of cache named "myCache".
+// Note that generic arguments are only for your convenience.
+// You can work with any cache in terms of any generic arguments.
+// However, an attempt to retrieve an entry of an incompatible type
+// results in an exception.
+ICache<int, string> cache = ignite.GetCache<int, string>("myCache");
+----
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/cache_getting_instance.cpp[tag=cache-getting-instance,indent=0]
+----
+--
+
+== Creating Caches Dynamically
+
+You can also create a cache dynamically:
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=createCache,indent=0]
+----
+
+Refer to the link:configuring-caches/configuration-overview[Cache Configuration] section for the list of cache parameters.
+tab:C#/.NET[]
+[source,csharp]
+----
+IIgnite ignite = Ignition.Start();
+
+// Create cache with given name, if it does not exist.
+var cache = ignite.GetOrCreateCache<int, string>("myNewCache");
+----
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/cache_creating_dynamically.cpp[tag=cache-creating-dynamically,indent=0]
+----
+--
+
+
+The methods that create a cache throw an `org.apache.ignite.IgniteCheckedException` when called while the baseline topology is being changed:
+
+
+[source, shell]
+----
+javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: Failed to start/stop cache, cluster state change is in progress.
+        at org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1323)
+        at org.apache.ignite.internal.IgniteKernal.createCache(IgniteKernal.java:3001)
+        at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheCreateWithNameRequest.process(ClientCacheCreateWithNameRequest.java:48)
+        at org.apache.ignite.internal.processors.platform.client.ClientRequestHandler.handle(ClientRequestHandler.java:51)
+        at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:173)
+        at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:47)
+        at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:278)
+        at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:108)
+        at org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:96)
+        at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:119)
+
+        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
+        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
+        at java.base/java.lang.Thread.run(Thread.java:834)
+----
+
+You may want to retry the operation if you catch this exception.
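+
+For example, a minimal retry sketch (the cache name and the back-off delay are illustrative assumptions):
+
+[source,java]
+----
+IgniteCache<Integer, String> cache = null;
+
+while (cache == null) {
+    try {
+        cache = ignite.getOrCreateCache("myNewCache");
+    }
+    catch (CacheException e) {
+        // The cluster state change may still be in progress; back off and retry.
+        try {
+            Thread.sleep(500);
+        }
+        catch (InterruptedException ignored) {
+            Thread.currentThread().interrupt();
+
+            break;
+        }
+    }
+}
+----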
+
+
+== Destroying Caches
+To delete a cache from all cluster nodes, call the `destroy()` method.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=destroyCache,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+
+== Atomic Operations
+Once you get an instance of a cache, you can start performing get/put operations on it.
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=atomic1,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/BasicCacheOperations.cs[tag=atomicOperations1,indent=0]
+----
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/cache_get_put.cpp[tag=cache-get-put,indent=0]
+----
+--
+
+[NOTE]
+====
+Bulk operations such as `putAll()` or `removeAll()` are executed as a sequence of atomic operations and can partially fail.
+If this happens, a `CachePartialUpdateException` is thrown and contains a list of keys for which the update failed.
+
+To update a collection of entries within a single operation, consider using link:key-value-api/transactions[transactions].
+====
+
+Below are more examples of basic atomic operations:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=atomic2,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/BasicCacheOperations.cs[tag=atomicOperations2,indent=0]
+----
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/cache_atomic_operations.cpp[tag=cache-atomic-operations,indent=0]
+----
+--
+
+
+== Asynchronous Execution
+Most of the cache operations have asynchronous counterparts that have the "Async" suffix in their names.
+
+[tabs]
+--
+tab:Java[]
+
+[source,java]
+----
+// a synchronous get
+V get(K key);
+
+// an asynchronous get
+IgniteFuture<V> getAsync(K key);
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+// a synchronous get
+TV Get(TK key);
+
+// an asynchronous get
+Task<TV> GetAsync(TK key);
+
+----
+tab:C++[]
+[source,cpp]
+----
+// a synchronous get
+V Get(K key);
+
+// an asynchronous get
+Future<V> GetAsync(K key);
+----
+--
+
+The asynchronous operations return an object that represents the result of the operation. You can wait for the completion of the operation in either a blocking or a non-blocking manner.
+
+
+To wait for the results in a non-blocking fashion, register a closure using the `IgniteFuture.listen()` or `IgniteFuture.chain()` method. The closure is called when the operation is completed.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=async,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/BasicCacheOperations.cs[tag=asyncExec,indent=0]
+----
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/cache_asynchronous_execution.cpp[tag=cache-asynchronous-execution,indent=0]
+----
+--
+
+
+[NOTE]
+====
+[discrete]
+=== Closures Execution and Thread Pools
+
+////////////////////////////////////////////////////////////////////////////////
+This is java specific
+////////////////////////////////////////////////////////////////////////////////
+
+
+If an asynchronous operation is completed by the time the closure is passed to either the `IgniteFuture.listen()` or `IgniteFuture.chain()` method, then the closure is executed synchronously by the calling thread. Otherwise, the closure is executed asynchronously when the operation is completed.
+
+Depending on the type of operation, the closure might be called by a thread from the system pool (asynchronous cache operations) or by a thread from the public pool (asynchronous compute operations). Therefore, you should avoid calling synchronous cache and compute operations from inside the closure, because doing so may lead to a deadlock due to pool starvation.
+
+To achieve nested execution of asynchronous compute operations, you can take advantage of link:perf-troubleshooting-guide/thread-pools-tuning#creating-custom-thread-pool[custom thread pools].
+====
+
+
+
+////////////////////////////////////////////////////////////////////////////////
+
+
+
+
+== Resource Injection
+
+Ignite allows dependency injection of pre-defined Ignite resources, and supports field-based as well as method-based injection. Resources with proper annotations will be injected into the corresponding task, job, closure, or SPI before it is initialized.
+
+
+You can inject resources by annotating either a field or a method. When you annotate a field, Ignite simply sets the value of the field at injection time (regardless of the field's access modifier). If you annotate a method with a resource annotation, the method should accept an input parameter of the type corresponding to the injected resource. If it does, the method is invoked at injection time with the appropriate resource passed as an input argument.
+
+Below is an example of a field injection.
+
+++++
+<code-tabs>
+<code-tab data-tab="Java">
+++++
+[source,java]
+----
+Ignite ignite = Ignition.ignite();
+
+Collection<String> res = ignite.compute().broadcast(new IgniteCallable<String>() {
+    // Inject Ignite instance.
+    @IgniteInstanceResource
+    private Ignite ignite;
+
+    @Override
+    public String call() throws Exception {
+        IgniteCache<Object, Object> cache = ignite.getOrCreateCache(CACHE_NAME);
+
+        // Do some stuff with the cache.
+    }
+});
+----
+++++
+</code-tab>
+</code-tabs>
+++++
+
+
+And this is an example of a method-based injection:
+
+++++
+<code-tabs>
+<code-tab data-tab="Java">
+++++
+[source,java]
+----
+public class MyClusterJob implements ComputeJob {
+
+    private Ignite ignite;
+
+    // Inject an Ignite instance.
+    @IgniteInstanceResource
+    public void setIgnite(Ignite ignite) {
+        this.ignite = ignite;
+    }
+
+}
+----
+++++
+</code-tab>
+</code-tabs>
+++++
+
+
+There are a number of pre-defined resources that you can inject:
+
+[width="100%",cols="30%,70%",options="header",]
+|===
+|Resource | Description
+
+|`CacheNameResource`
+
+|Injects the cache name provided via `CacheConfiguration.getName()`.
+
+|`CacheStoreSessionResource`
+
+|Injects the current `CacheStoreSession` instance.
+
+|`IgniteInstanceResource`
+
+|Injects the current instance of `Ignite`.
+
+|`JobContextResource`
+
+|Injects an instance of `ComputeJobContext`. A job context holds useful information about a particular job execution. For example, you can get the name of the cache containing the entry for which a job was colocated.
+
+|`LoadBalancerResource`
+
+|Injects an instance of `ComputeLoadBalancer` that can be used by a task to do the load balancing.
+
+|`ServiceResource`
+
+|Injects the service specified by the given name.
+
+|`SpringApplicationContextResource`
+
+|Injects Spring's `ApplicationContext` resource.
+
+|`SpringResource`
+
+|Injects resource from Spring's `ApplicationContext`. Use it whenever you would like to access a bean specified in Spring's application context XML configuration.
+
+|`TaskContinuousMapperResource`
+
+|Injects an instance of `ComputeTaskContinuousMapper`. Continuous mapping allows emitting jobs from the task at any point, even after the initial map phase.
+
+|`TaskSessionResource`
+
+|Injects an instance of the `ComputeTaskSession` resource, which defines a distributed session for a particular task execution.
+|===
+
+
+////////////////////////////////////////////////////////////////////////////////
+
+////
+
+TODO: the importance of this section is questionable
+
+== Cache Interceptor
+
+Ignite lets you execute custom logic before or after specific operations on a cache. You can:
+
+- change the returned value of the `get` operation;
+- process an entry before or after any `put`/`remove` operation.
+
+++++
+<code-tabs>
+<code-tab data-tab="Java">
+++++
+[source,java]
+----
+
+----
+++++
+</code-tab>
+<code-tab data-tab="C#/.NET">
+++++
+[source,csharp]
+----
+TODO
+----
+++++
+</code-tab>
+<code-tab data-tab="C++">
+++++
+[source,cpp]
+----
+TODO
+----
+++++
+</code-tab>
+</code-tabs>
+++++
+
+////
diff --git a/docs/_docs/key-value-api/binary-objects.adoc b/docs/_docs/key-value-api/binary-objects.adoc
new file mode 100644
index 0000000..228e4fe
--- /dev/null
+++ b/docs/_docs/key-value-api/binary-objects.adoc
@@ -0,0 +1,236 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Working with Binary Objects
+
+== Overview
+In Ignite, data is stored in link:data-modeling/data-modeling#binary-object-format[binary format] and is deserialized into objects every time you call cache methods. However, you can work directly with binary objects, avoiding deserialization.
+
+
+A binary object is a wrapper over the binary representation of an entry stored in a cache. Each binary object has the `field(name)` method which returns the value of the given field and the `type()` method that extracts the <<Binary Type and Binary Fields,information about the type of the object>>.
+Binary objects are useful when you want to work only with some fields of the objects and do not need to deserialize the entire object.
+
+
+
+You do not need to have the class definition to work with binary objects and can <<Creating and Modifying Binary Objects,change the structure of objects dynamically>> without restarting the cluster.
+
+The binary object format is universal for all supported platforms, i.e. Java, .NET, and {cpp}. You can start a Java cluster, then connect to it from .NET or {cpp} clients and use binary objects in those platforms with no need to define classes on the client side.
+
+[IMPORTANT]
+====
+[discrete]
+=== Restrictions
+
+There are several restrictions that are implied by the binary object format implementation:
+
 * Internally, the type and fields of a binary object are identified by their IDs. The IDs are calculated as the hash codes of the corresponding string names. Consequently, fields or types whose names have identical hash codes are not allowed. However, you can <<Configuring Binary Objects,provide your own implementation of ID generation>> via the configuration.
+ * For the same reason, the binary object format does not allow identical field names on different levels of class hierarchy.
+ * If a class implements the `Externalizable` interface, Ignite uses `OptimizedMarshaller` instead of the binary one. `OptimizedMarshaller` uses the `writeExternal()` and `readExternal()` methods to serialize and deserialize objects; therefore, the class must be added to the classpath of the server nodes.
+====
+
+== Enabling Binary Mode for Caches
+
+By default, when you request entries from a cache, they are returned in the deserialized format.
+To work with the binary format, obtain an instance of the cache using the `withKeepBinary()` method.
+This instance returns objects in the binary format (when possible).
+//and also passes binary objects to link:distributed-computing/collocated-computations#entry-processor[entry processors], if any are used.
+// and cache interceptors.
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/WorkingWithBinaryObjects.java[tag=enablingBinary,indent=0]
+----
+
+Note that not all objects are converted to the binary object format.
+The following classes are never converted (i.e., the `toBinary(Object)` method returns the original object, and instances of these classes are stored without changes):
+
+* All primitives (`byte`, `int`, etc.) and their wrapper classes (`Byte`, `Integer`, etc.)
+* Arrays of primitives (`byte[]`, `int[]`, etc.)
+* `String` and arrays of `String`
+* `UUID` and arrays of `UUID`
+* `Date` and arrays of `Date`
+* `Timestamp` and arrays of `Timestamp`
+* Enums and arrays of enums
+* Maps, collections, and arrays of objects (but the objects inside them are reconverted if they are binary)
+
+tab:C#/.NET[]
+[source,csharp]
+----
+ICache<int, IBinaryObject> binaryCache = cache.WithKeepBinary<int, IBinaryObject>();
+IBinaryObject binaryPerson = binaryCache[1];
+string name = binaryPerson.GetField<string>("Name");
+
+IBinaryObjectBuilder builder = binaryPerson.ToBuilder();
+builder.SetField("Name", name + " - Copy");
+
+IBinaryObject binaryPerson2 = builder.Build();
+binaryCache[2] = binaryPerson2;
+----
+
+Note that not all types can be represented as `IBinaryObject`. Primitive types, `string`, `Guid`, `DateTime`, and collections and arrays of these types are always returned as is.
+
+tab:C++[unsupported]
+--
+
+== Creating and Modifying Binary Objects
+
+Instances of binary objects are immutable. To update fields or create a new binary object, use a binary object builder. A binary object builder is a utility class that allows you to modify the fields of binary objects without having the class definition of the objects.
+
+
+[NOTE]
+====
+[discrete]
+=== Limitations
+
+* You cannot change the types of existing fields.
+* You cannot change the order of enum values or add new constants at the beginning or in the middle of the list of enum's values. You can add new constants to the end of the list though.
+
+====
+
+You can obtain an instance of the binary object builder for a specific type as follows:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/WorkingWithBinaryObjects.java[tag=binaryBuilder,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+IIgnite ignite = Ignition.Start();
+
+IBinaryObjectBuilder builder = ignite.GetBinary().GetBuilder("Book");
+
+IBinaryObject book = builder
+  .SetField("ISBN", "xyz")
+  .SetField("Title", "War and Peace")
+  .Build();
+----
+tab:C++[unsupported]
+--
+
+Builders created in this way contain no fields.
+You can add fields by calling the `setField(...)` method.
+
+You can also obtain a binary object builder from an existing binary object by calling the `toBuilder()` method.
+In this case, all field values are copied from the binary object to the builder.
+
+
+In the following example, we use an entry processor to update an object on the server node without having the object's class deployed on that node and without full object deserialization.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/WorkingWithBinaryObjects.java[tag=cacheEntryProc,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/WorkingWithBinaryObjects.cs[tag=entryProcessor,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Binary Type and Binary Fields
+
+Binary objects hold the information about the type of objects they represent. The type information includes the field names, field types and the affinity field name.
+
+The type of each field is represented by a `BinaryField` object.
+Once obtained, a `BinaryField` object can be reused multiple times if you need to read the same field from each object in a collection.
+Reusing a `BinaryField` object is faster than reading the field value directly from each binary object.
+Below is an example of using a binary field.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/WorkingWithBinaryObjects.java[tag=binaryField,indent=0]
+----
+
+tab:C#/.NET[unsupported]
+
+tab:C++[unsupported]
+--
+
+
+== Recommendations on Binary Objects Tuning
+
+Ignite keeps a _schema_ for every binary object type, which specifies the fields present in the objects as well as their order and types.
+Schemas are replicated to all cluster nodes.
+Binary objects that have the same fields but in a different order are considered to have different schemas.
+We strongly recommend that you always add fields to binary objects in the same order.
+
+A null field normally takes five bytes to store: four bytes for the field ID plus one byte for the field length.
+Memory-wise, it is preferable not to include a field at all rather than include a null field.
+However, if you do not include a field, Ignite creates a new schema for this object, and that schema is different from the schema of the objects that do include the field.
+If you have multiple fields that are set to `null` in random combinations, Ignite maintains a different binary object schema for each combination, and your heap may be exhausted by the total size of the schemas.
+It is better to have a few schemas for your binary objects, each with the same set of fields of the same types, set in the same order.
+When creating a binary object, choose one of these schemas by supplying the same set of fields, even if some of them are null.
+This is also the reason you need to supply the field type for a null field.
+
+You can also nest your binary objects if you have a subset of fields that are optional but either all absent or all present.
+Put them in a separate binary object that is either stored in a field of the parent object or set to null.
+
+If you have a large number of fields that are optional in arbitrary combinations and are often null, you can store them in a map field.
+In this case, your value object has several fixed fields and one map for extra properties.
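+
+As an illustration, the following sketch (the type and field names are hypothetical) always supplies the same fields in the same order and passes typed `null` values instead of omitting fields, so that all objects of the type share a single schema:
+
+[source,java]
+----
+IgniteBinary binary = ignite.binary();
+
+// Set the same fields in the same order for every object of this type.
+BinaryObject person = binary.builder("Person")
+    .setField("name", "John", String.class)
+    // A typed null keeps the field in the schema; omitting the field
+    // would create a second schema for the "Person" type.
+    .setField("middleName", null, String.class)
+    .setField("age", 25, int.class)
+    .build();
+----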
+
+
+== Configuring Binary Objects
+
+In the vast majority of use cases, there is no need to configure binary objects. However, if you need to change how type and field IDs are generated or to plug in a custom serializer, you can do this via the configuration.
+
+The type and fields of a binary object are identified by their IDs. The IDs are calculated as the hash codes of the corresponding string names and are stored in each binary object. You can define your own implementation of ID generation in the configuration.
+
+The name-to-ID conversion is done in two steps.
+First, the type name (class name) or a field name is transformed by a name mapper, then an ID mapper calculates the IDs.
+You can specify a global name mapper, a global ID mapper, and a global binary serializer as well as per-type mappers and serializers.
+Wildcards are supported for per-type configuration, in which case the provided configuration is applied to all types that match the type name template.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/binary-objects.xml[tags=ignite-config;!discovery, indent=0]
+
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/WorkingWithBinaryObjects.java[tag=cfg,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/WorkingWithBinaryObjects.cs[tag=binaryCfg,indent=0]
+----
+tab:C++[unsupported]
+--
+
diff --git a/docs/_docs/key-value-api/continuous-queries.adoc b/docs/_docs/key-value-api/continuous-queries.adoc
new file mode 100644
index 0000000..101fb51
--- /dev/null
+++ b/docs/_docs/key-value-api/continuous-queries.adoc
@@ -0,0 +1,177 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Using Continuous Queries
+
+:javaFile: {javaCodeDir}/UsingContinuousQueries.java
+
+A continuous query is a query that monitors data modifications occurring in a cache.
+Once a continuous query is started, you get notified of all the data changes that fall into your query filter.
+
+All update events are propagated to the <<Local Listener,local listener>> that must be registered in the query.
+The continuous query implementation guarantees exactly-once delivery of an event to the local listener.
+
+You can also specify a remote filter to narrow down the range of entries that are monitored for updates.
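+
+The following sketch puts these pieces together; the cache name and the filter condition are illustrative assumptions:
+
+[source,java]
+----
+IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
+
+ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
+
+// Local listener, called on the node that started the query.
+qry.setLocalListener(events -> {
+    for (CacheEntryEvent<? extends Integer, ? extends String> e : events)
+        System.out.println("key=" + e.getKey() + ", val=" + e.getValue());
+});
+
+// Optional remote filter, evaluated on the nodes where updates occur.
+qry.setRemoteFilterFactory(() ->
+    (CacheEntryEventFilter<Integer, String>)evt -> evt.getKey() > 10);
+
+// Updates are delivered for as long as the returned cursor stays open.
+QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry);
+----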
+
+[CAUTION]
+====
+[discrete]
+=== Continuous Queries and MVCC
+Continuous queries have a number of link:transactions/mvcc[functional limitations] when used with MVCC-enabled caches.
+====
+
+
+== Local Listener
+
+When a cache gets modified (an entry is inserted, updated, or deleted), an event is sent to the continuous query's local listener so that your application can react accordingly.
+The local listener is executed on the node that initiated the query.
+
+Note that the continuous query throws an exception if started without a local listener.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=localListener,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ContiniuosQueries.cs[tag=localListener,indent=0]
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/continuous_query_listener.cpp[tag=continuous-query-listener,indent=0]
+----
+--
+
+== Initial Query
+
+You can specify an initial query that is executed before the continuous query gets registered in the cluster and before you start to receive updates.
+To specify an initial query, use the `ContinuousQuery.setInitialQuery(...)` method.
+
+Just like scan queries, a continuous query is executed via the `query()` method that returns a cursor. When an initial query is set, you can use that cursor to iterate over the results of the initial query.
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=initialQry,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/continuous_query.cpp[tag=continuous-query,indent=0]
+----
+--
+
+
+== Remote Filter
+
+This filter is executed for each updated key and evaluates whether the update should be propagated to the query's local listener.
+If the filter returns `true`, then the local listener is notified about the update.
+
+For redundancy reasons, the filter is executed for both the primary and backup versions of the key (if backups are configured).
+Because of this, a remote filter can also be used as a remote listener for update events.
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=remoteFilter,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ContiniuosQueries.cs[tag=remoteFilter,indent=0]
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/continuous_query_filter.cpp[tag=continuous-query-filter,indent=0]
+----
+--
+
+
+[NOTE]
+====
+In order to use remote filters, make sure the class definitions of the filters are available on the server nodes.
+You can do this in two ways:
+
+* Add the classes to the classpath of every server node;
+* link:code-deployment/peer-class-loading[Enable peer class loading].
+====
+
+
+== Remote Transformer
+
+By default, continuous queries send the whole updated object to the local listener. This can lead to excessive network usage, especially if the object is very large. Moreover, applications often need only a subset of the object's fields.
+
+To address these cases, you can use a continuous query with a transformer. A transformer is a function that is executed on remote nodes for every updated object and sends back only the results of the transformation.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=transformer,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+[NOTE]
+====
+In order to use transformers, make sure the class definitions of the transformers are available on the server nodes.
+You can do this in two ways:
+
+* Add the classes to the classpath of every server node;
+* link:code-deployment/peer-class-loading[Enable peer class loading].
+====
+
+== Events Delivery Guarantees
+
+Continuous queries ensure exactly-once semantics for the delivery of events to the clients' local listeners.
+
+Both primary and backup nodes maintain an update queue that holds the events that have been processed by continuous queries
+on the server side but not yet delivered to the clients. If a primary node crashes
+or the cluster topology changes for any reason, every backup node flushes the content of its update
+queue to the client, making sure that every event is delivered to the client's local listener.
+
+Ignite maintains a special per-partition update counter that helps avoid duplicate notifications. Once an entry in
+some partition is updated, the counter for this partition is incremented on both the primary and backup nodes. The value of
+this counter is also sent along with the event notification to the client, so the client can skip already-processed
+events. Once the client confirms that an event has been received, the primary and backup nodes remove the record for this event
+from their update queues.
+
+
+== Examples
+
+The following application examples show typical usage of continuous queries.
+
+link:{githubUrl}/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheContinuousQueryExample.java[CacheContinuousQueryExample.java,window=_blank]
+
+link:{githubUrl}/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheContinuousAsyncQueryExample.java[CacheContinuousAsyncQueryExample.java,window=_blank]
+
+link:{githubUrl}/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheContinuousQueryWithTransformerExample.java[CacheContinuousQueryWithTransformerExample.java,window=_blank]
diff --git a/docs/_docs/key-value-api/transactions.adoc b/docs/_docs/key-value-api/transactions.adoc
new file mode 100644
index 0000000..ab208eb
--- /dev/null
+++ b/docs/_docs/key-value-api/transactions.adoc
@@ -0,0 +1,330 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Performing Transactions
+
+:javaFile: {javaCodeDir}/PerformingTransactions.java
+
+== Overview
+
+To enable transactional support for a specific cache, set the `atomicityMode` parameter in the cache configuration to `TRANSACTIONAL`.
+See link:configuring-caches/atomicity-modes[Atomicity Modes] for details.
+
+Transactions allow you to group multiple cache operations, on one or more keys, into a single atomic transaction.
+These operations are executed without any other interleaved operations on the specified keys, and either all succeed or all fail.
+There is no partial execution of the operations.
+
+You can enable transactions for a specific cache in the cache configuration.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/transactions.xml[tags=ignite-config;!discovery;!cache, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tags=enabling,!exclude,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+var cfg = new IgniteConfiguration
+{
+    CacheConfiguration = new[]
+    {
+        new CacheConfiguration("txCache")
+        {
+            AtomicityMode = CacheAtomicityMode.Transactional
+        }
+    },
+    TransactionConfiguration = new TransactionConfiguration
+    {
+        DefaultTransactionConcurrency = TransactionConcurrency.Optimistic
+    }
+};
+----
+tab:C++[unsupported]
+--
+
+== Executing Transactions
+
+The key-value API provides an interface for starting and completing transactions as well as getting transaction-related metrics. The interface can be obtained from an instance of `Ignite`.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tags=executing,!exclude,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/PerformingTransactions.cs[tag=executingTransactions,indent=0]
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/transactions.cpp[tag=transactions-execution,indent=0]
+----
+
+--
+
+
+== Concurrency Modes and Isolation Levels
+
+Caches with the `TRANSACTIONAL` atomicity mode support both `OPTIMISTIC` and `PESSIMISTIC` concurrency modes for transactions. The concurrency mode determines when an entry-level transaction lock is acquired: at the time of data access or during the prepare phase. Locking prevents concurrent access to an object. For example, when you attempt to update a ToDo list item with pessimistic locking, the server places a lock on the object until you either commit or roll back the transaction, and no other transaction or operation is allowed to update the same entry. Regardless of the concurrency mode used in a transaction, there is a moment in time when all entries enlisted in the transaction are locked before the commit.
+
+Isolation level defines how concurrent transactions 'see' and handle operations on the same keys. Ignite supports `READ_COMMITTED`, `REPEATABLE_READ` and `SERIALIZABLE` isolation levels.
+
+All combinations of concurrency modes and isolation levels are allowed. Below is the description of the system behavior and the guarantees provided by each concurrency-isolation combination.
+
+=== Pessimistic Transactions
+
+In `PESSIMISTIC` transactions, locks are acquired during the first read or write access (depending on the isolation level) and held by the transaction until it is committed or rolled back. In this mode locks are acquired on primary nodes first and then promoted to backup nodes during the prepare stage. The following isolation levels can be configured with the `PESSIMISTIC` concurrency mode:
+
+* `READ_COMMITTED` - Data is read without a lock and is never cached in the transaction itself. The data may be read from a backup node if this is allowed in the cache configuration. In this isolation mode, you can have so-called non-repeatable reads because a concurrent transaction can change the data when you are reading the data twice in your transaction. The lock is only acquired at the time of first write access (this includes `EntryProcessor` invocation). This means that an entry that has been read during the transaction may have a different value by the time the transaction is committed. No exception is thrown in this case.
+
+* `REPEATABLE_READ` - Entry lock is acquired and data is fetched from the primary node on the first read or write access and stored in the local transactional map. All consecutive access to the same data is local and returns the last read or updated transaction value. This means no other concurrent transactions can make changes to the locked data, and you are getting Repeatable Reads for your transaction.
+
+* `SERIALIZABLE` - In the `PESSIMISTIC` mode, this isolation level works the same way as `REPEATABLE_READ`.
+
+Note that in the `PESSIMISTIC` mode, the order of locking is important. Moreover, locks are acquired sequentially and exactly in the specified order.
+
+[IMPORTANT]
+====
+[discrete]
+=== Topology Change Restrictions
+
+Note that if at least one `PESSIMISTIC` transaction lock is acquired, it is impossible to change the cache topology until the transaction is committed or rolled back.
+Therefore, you should avoid holding transaction locks for long periods of time.
+====
+
+
+=== Optimistic Transactions
+
+In `OPTIMISTIC` transactions, entry locks are acquired on primary nodes during the first phase of 2PC, at the `prepare` step, and then promoted to backup nodes and released once the transaction is committed. The locks are never acquired if you roll back the transaction and no commit attempt was made. The following isolation levels can be configured with the `OPTIMISTIC` concurrency mode:
+
+* `READ_COMMITTED` - Changes that should be applied to the cache are collected on the originating node and applied upon the transaction commit. Transaction data is read without a lock and is never cached in the transaction. The data may be read from a backup node if this is allowed in the cache configuration. In this isolation mode, you can have so-called non-repeatable reads because a concurrent transaction can change the data when you are reading the data twice in your transaction. This mode combination does not check if the entry value has been modified since the first read or write access and never raises an optimistic exception.
+
+* `REPEATABLE_READ` - Transactions at this isolation level work similar to `OPTIMISTIC` `READ_COMMITTED` transactions with only one difference: read values are cached on the originating node and all subsequent reads are guaranteed to be local. This mode combination does not check if the entry value has been modified since the first read or write access and never raises an optimistic exception.
+
+* `SERIALIZABLE` - Stores an entry version upon first read access. Ignite fails a transaction at the commit stage if the Ignite engine detects that at least one of the entries used as part of the initiated transaction has been modified. In short, this means that if Ignite detects that there is a conflict at the commit stage of a transaction, it fails the transaction, throwing `TransactionOptimisticException` and rolling back any changes made. Make sure you handle this exception and retry the transaction.
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tags=optimistic,!exclude,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/PerformingTransactions.cs[tag=optimisticTx,indent=0]
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/transactions.cpp[tag=transactions-optimistic,indent=0]
+----
+--
+
+Another important point to note here is that a transaction fails even if an entry was read without being modified (`cache.get(...)`), since the value of the entry could be important to the logic within the initiated transaction.
+
+Note that the key order is important for `READ_COMMITTED` and `REPEATABLE_READ` transactions since the locks are still acquired sequentially in these modes.
+
+=== Read Consistency
+
+In order to achieve full read consistency in PESSIMISTIC mode, read-locks need to be acquired. This means that full consistency between reads in the PESSIMISTIC mode can be achieved only with PESSIMISTIC REPEATABLE_READ (or SERIALIZABLE) transactions.
+
+When using OPTIMISTIC transactions, full read consistency can be achieved by disallowing potential conflicts between reads.
+This behavior is provided by OPTIMISTIC SERIALIZABLE mode.
+Note, however, that until the commit happens you can still read a partial transaction state, so the transaction logic must protect against it.
+Only during the commit phase, in case of a conflict, is a `TransactionOptimisticException` thrown, allowing you to retry the transaction.
+
+IMPORTANT: If you are not using PESSIMISTIC REPEATABLE_READ or SERIALIZABLE transactions or OPTIMISTIC SERIALIZABLE transactions, then it is possible to see a partial transaction state. This means that if one transaction updates objects A and B, then another transaction may see the new value for A and the old value for B.
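+
+The following sketch shows one way to retry an OPTIMISTIC SERIALIZABLE transaction when a conflict is detected; the cache name and the update logic are illustrative assumptions:
+
+[source,java]
+----
+IgniteTransactions transactions = ignite.transactions();
+IgniteCache<Integer, Integer> cache = ignite.cache("myCache");
+
+while (true) {
+    try (Transaction tx = transactions.txStart(
+        TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
+        Integer val = cache.get(1);
+
+        cache.put(1, val == null ? 1 : val + 1);
+
+        tx.commit();
+
+        break;
+    }
+    catch (TransactionOptimisticException e) {
+        // A concurrent transaction modified one of the entries; retry.
+    }
+}
+----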
+
+
+
+== Deadlock Detection
+
+One major rule that you must follow when working with distributed transactions is that locks for the keys participating in a transaction must be acquired in the same order. Violating this rule can lead to a distributed deadlock.
+
+Ignite does not avoid distributed deadlocks, but rather has built-in functionality that makes it easier to debug and fix such situations.
+
+In the code snippet below, a transaction has been started with a timeout.
+If the timeout expires, the deadlock detection procedure tries to find a possible deadlock that might have caused the timeout.
+When the timeout expires, a `TransactionTimeoutException` is generated and propagated to the application code as the cause of a `CacheException`, regardless of whether a deadlock actually occurred.
+However, if a deadlock is detected, the cause of the returned `TransactionTimeoutException` is a `TransactionDeadlockException` (at least for one transaction involved in the deadlock).
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=deadlock,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/PerformingTransactions.cs[tag=deadlock,indent=0]
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/transactions_pessimistic.cpp[tag=transactions-pessimistic,indent=0]
+----
+--
+
+The `TransactionDeadlockException` message contains useful information that can help you find the reason for the deadlock.
+
+
+
+[source,shell]
+----
+Deadlock detected:
+
+K1: TX1 holds lock, TX2 waits lock.
+K2: TX2 holds lock, TX1 waits lock.
+
+Transactions:
+
+TX1 [txId=GridCacheVersion [topVer=74949328, time=1463469328421, order=1463469326211, nodeOrder=1], nodeId=ad68354d-07b8-4be5-85bb-f5f2362fbb88, threadId=73]
+TX2 [txId=GridCacheVersion [topVer=74949328, time=1463469328421, order=1463469326210, nodeOrder=1], nodeId=ad68354d-07b8-4be5-85bb-f5f2362fbb88, threadId=74]
+
+Keys:
+
+K1 [key=1, cache=default]
+K2 [key=2, cache=default]
+----
+
+
+Deadlock detection is a multi-step procedure that can take many iterations depending on the number of nodes in the cluster, keys, and transactions that are involved in a possible deadlock. A deadlock detection initiator is a node where a transaction was started and failed with a `TransactionTimeoutException`.
+This node investigates if a deadlock has occurred by exchanging requests/responses with other remote nodes, and then prepares a deadlock related report that is provided with the `TransactionDeadlockException`.
+Each such message (request/response) is known as an iteration.
+
+Since a transaction is not rolled back until the deadlock detection procedure is completed, it sometimes makes sense to tune the following parameters if you want the rollback time of a transaction to be predictable:
+
+- `IgniteSystemProperties.IGNITE_TX_DEADLOCK_DETECTION_MAX_ITERS` - Specifies the maximum number of iterations for the deadlock detection procedure. If the value of this property is less than or equal to zero, deadlock detection is disabled (1000 by default);
+- `IgniteSystemProperties.IGNITE_TX_DEADLOCK_DETECTION_TIMEOUT` - Specifies the timeout for the deadlock detection mechanism (1 minute by default).
+
+Note that if there are too few iterations, you may get an incomplete deadlock report.
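+
+For instance, a minimal sketch of setting both properties before node startup; the values are illustrative assumptions:
+
+[source,java]
+----
+// Limit the deadlock detection procedure to 500 iterations.
+System.setProperty(IgniteSystemProperties.IGNITE_TX_DEADLOCK_DETECTION_MAX_ITERS, "500");
+
+// Give up on deadlock detection after 30 seconds.
+System.setProperty(IgniteSystemProperties.IGNITE_TX_DEADLOCK_DETECTION_TIMEOUT, "30000");
+----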
+
+
+== Deadlock-free Transactions
+
+For `OPTIMISTIC` `SERIALIZABLE` transactions, locks are not acquired sequentially. In this mode, keys can be accessed in any order because transaction locks are acquired in parallel with an additional check allowing Ignite to avoid deadlocks.
+
+We need to introduce some concepts in order to describe how locks in `SERIALIZABLE` transactions work.
+In Ignite, each transaction is assigned a comparable version called `XidVersion`.
+Upon transaction commit, each entry that is written in the transaction is assigned a new comparable version called `EntryVersion`.
+An `OPTIMISTIC` `SERIALIZABLE` transaction with version `XidVersionA` fails with a `TransactionOptimisticException` if:
+
+ * There is an ongoing `PESSIMISTIC` or non-serializable `OPTIMISTIC` transaction holding a lock on an entry of the `SERIALIZABLE` transaction.
+ * There is another ongoing `OPTIMISTIC` `SERIALIZABLE` transaction with version `XidVersionB` such that `XidVersionB > XidVersionA` and this transaction holds a lock on an entry of the `SERIALIZABLE` transaction.
+ * By the time the `OPTIMISTIC` `SERIALIZABLE` transaction acquires all required locks, there exists an entry with the current version different from the observed version before commit.
+
+
+[NOTE]
+====
+In a highly concurrent environment, optimistic locking might lead to a high transaction failure rate, while pessimistic locking can lead to deadlocks if locks are acquired in a different order by transactions.
+
+However, in a contention-free environment optimistic serializable locking may provide better performance for large transactions because the number of network trips depends only on the number of nodes that the transaction spans and does not depend on the number of keys in the transaction.
+====
+
+
+== Handling Failed Transactions
+A transaction might fail with the following exceptions:
+
+[cols="",opts="autowidth,header"]
+|===
+| Exception | Description | Solution
+| `CacheException` caused by `TransactionTimeoutException` | `TransactionTimeoutException` is generated if the transaction times out.  | To solve this exception, increase the timeout or make the transaction shorter.
+
+| `CacheException` caused by `TransactionTimeoutException`, which is caused by `TransactionDeadlockException`
+| This exception is thrown when the deadlock detection procedure determines that the transaction timed out because of a deadlock. | Make sure that transactions acquire locks on the keys in the same order, and rerun the transaction.
+
+| `TransactionOptimisticException`
+| This exception is thrown if the optimistic transaction fails for some reason. In most of the scenarios, this exception occurs when the data the transaction was trying to update was changed concurrently.
+| Rerun the transaction.
+
+|`TransactionRollbackException`
+| This exception occurs when a transaction is rolled back (automatically or manually). In this case, the data is consistent.
+| Since the data is in a consistent state, you can retry the transaction.
+
+| `TransactionHeuristicException`
+| An unlikely exception that happens due to an unexpected internal or communication issue. The exception exists to report problematic scenarios that were not foreseen by the transactional subsystem and were not handled by it properly.
+| The data might not stay consistent if this exception occurs. Reload the data and report the problem to the Ignite development community.
+|===
+
+
+== Long Running Transactions Termination
+
+Some cluster events trigger the partition map exchange process and data rebalancing within an Ignite cluster to ensure even data distribution cluster-wide. An example of such an event is the cluster topology change that takes place whenever a new node joins the cluster or an existing one leaves it. In addition, the partition map exchange is triggered every time a new cache or SQL table is created.
+
+When the partition map exchange starts, Ignite acquires a global lock at a particular stage. The lock cannot be obtained while incomplete transactions are running in parallel. These transactions prevent the partition map exchange process from moving forward, thus blocking some operations, such as a new node joining the cluster.
+
+Use the `TransactionConfiguration.setTxTimeoutOnPartitionMapExchange(...)` method to set the maximum time for which long-running transactions are allowed to block the partition map exchange.
+Once the timeout fires, all incomplete transactions are rolled back, letting the partition map exchange proceed.
+
+This example shows how to configure the timeout:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/transactions.xml[tags=ignite-config;configuration;!cache;!discovery, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=timeout,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/PerformingTransactions.cs[tag=pmeTimeout,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Monitoring Transactions
+
+Refer to the link:monitoring-metrics/metrics#monitoring-transactions[Monitoring Transactions] section for the list of metrics that expose some transaction-related information.
+
+For the information on how to trace transactions, refer to the link:monitoring-metrics/tracing[Tracing] section.
+
+You can also use the link:control-script#transaction-management[control script] to get information about, or cancel, specific transactions being executed in the cluster.
diff --git a/docs/_docs/key-value-api/using-scan-queries.adoc b/docs/_docs/key-value-api/using-scan-queries.adoc
new file mode 100644
index 0000000..5463e6f
--- /dev/null
+++ b/docs/_docs/key-value-api/using-scan-queries.adoc
@@ -0,0 +1,124 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Using Scan Queries
+
+:javaFile: {javaCodeDir}/UsingScanQueries.java
+:dotnetFile: code-snippets/dotnet/UsingScanQueries.cs
+
+== Overview
+`IgniteCache` has several query methods, all of which receive a subclass of the `Query` class and return a `QueryCursor`.
+
+A `Query` represents an abstract paginated query to be executed on a cache.
+The page size is configurable via the `Query.setPageSize(...)` method (default is 1024).
+
+
+`QueryCursor` represents the query result set and allows for transparent page-by-page iteration.
+When a user starts iterating over the last page, `QueryCursor` automatically requests the next page in the background.
+For cases when pagination is not needed, you can use the `QueryCursor.getAll()` method, which fetches the entries and stores them in a collection.
+
+[NOTE]
+====
+[discrete]
+=== Closing Cursors
+Cursors close automatically when you call the `QueryCursor.getAll()` method. If you are iterating over the cursor in a for loop or explicitly getting an `Iterator`, you must close the cursor explicitly or use a try-with-resources statement.
+====
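+
+For example, a minimal sketch of the try-with-resources pattern, assuming a cache with integer keys and string values:
+
+[source,java]
+----
+try (QueryCursor<Cache.Entry<Integer, String>> cursor =
+    cache.query(new ScanQuery<Integer, String>())) {
+    for (Cache.Entry<Integer, String> entry : cursor)
+        System.out.println(entry.getKey() + " -> " + entry.getValue());
+}
+// The cursor is closed automatically at the end of the try block.
+----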
+
+
+== Executing Scan Queries
+
+A scan query is a simple search query used to retrieve data from a cache in a distributed manner. When executed without parameters, a scan query returns all entries from the cache.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=scanQry,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetFile}[tag=scanQry1,indent=0]
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/scan_query.cpp[tag=query-cursor,indent=0]
+----
+--
+
+
+Scan queries return entries that match a predicate, if specified. The predicate is applied on the remote nodes.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=predicate,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetFile}[tag=scanQry2,indent=0]
+----
+tab:C++[unsupported]
+--
+
+Scan queries also support an optional transformer closure, which lets you convert the entry on the server node before sending it back. This is useful, for example, when you want to fetch only a few fields of a large object and want to minimize the network traffic. The example below shows how to fetch only the keys, without sending the values.
+
+[tabs]
+--
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=transformer,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+== Local Scan Query
+
+By default, a scan query is distributed to all nodes.
+However, you can execute the query locally, in which case the query runs against the data stored on the local node (i.e. the node where the query is executed).
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=localQuery,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetFile}[tag=scanQryLocal,indent=0]
+----
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/scan_query.cpp[tag=set-local,indent=0]
+----
+--
+
+== Related Topics
+
+* link:restapi#sql-scan-query-execute[Execute scan query via REST API]
+* link:events/events#cache-query-events[Cache Query Events]
diff --git a/docs/_docs/key-value-api/with-expiry-policy.adoc b/docs/_docs/key-value-api/with-expiry-policy.adoc
new file mode 100644
index 0000000..2a9439b
--- /dev/null
+++ b/docs/_docs/key-value-api/with-expiry-policy.adoc
@@ -0,0 +1,40 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Expiry Policy for Individual Entries
+:page-published: false
+
+You can set an expiry policy for individual cache operations rather than for the whole cache.
+
+If you have a cache without an expiry policy, you can obtain an instance of that cache that applies a given expiry policy to all operations performed through it (via `IgniteCache.withExpiryPolicy(...)`).
+
+Every new entry added via this instance of the cache expires based on the given policy.
+
+The expiry policy applies only if the cache does not already have an entry with the given key.
+
+
+[tabs]
+--
+tab:Java[]
+
+[source, java]
+----
+include::{javaCodeDir}/ExpiryPolicies.java[tag=expiry2,indent=0]
+----
+
+
+tab:C#/.NET[]
+tab:C++[]
+--
+
diff --git a/docs/_docs/logging.adoc b/docs/_docs/logging.adoc
new file mode 100644
index 0000000..96019f4
--- /dev/null
+++ b/docs/_docs/logging.adoc
@@ -0,0 +1,184 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Configuring Logging
+
+== Overview
+
+Ignite supports a number of logging libraries and frameworks:
+
+- JUL (default),
+- Log4j,
+- Log4j2,
+- JCL,
+- SLF4J.
+
+This section shows you how to set up the logger.
+
+When a node starts, it outputs start-up information to the console, including information about the configured logging library. Each logging library has its own configuration parameters and should be set up according to its official documentation. Besides library-specific configuration, there are a number of system properties that allow you to tune logging. These properties are presented in the following table.
+
+
+[cols="1,3,1",opts="stretch,header"]
+|===
+| System Property | Description | Default Value
+| `IGNITE_LOG_INSTANCE_NAME` | If the property is set, Ignite includes its instance name in log messages. |  Not set
+| `IGNITE_QUIET` | Set to `false` to disable the quiet mode and enable the verbose mode.
+In the verbose mode, the node logs a lot more information. | `true`
+| `IGNITE_LOG_DIR` | The directory where Ignite writes log files. | `$IGNITE_HOME/work/log`
+| `IGNITE_DUMP_THREADS_ON_FAILURE` | Set to `true` to output thread dumps to the log when a critical error is caught. | `true`
+|===
+
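+For example, the following command (a hedged illustration; the log directory is arbitrary) sets two of these properties when starting a node:
+
+[source, shell]
+----
+./ignite.sh -J-DIGNITE_LOG_DIR=/opt/ignite/logs -J-DIGNITE_QUIET=false
+----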
+
+== Default Logging
+By default, Ignite uses the java.util.logging (JUL) framework.
+If you start Ignite using the `ignite.sh|bat` script from the distribution package, Ignite uses `$IGNITE_HOME/config/java.util.logging.properties` as the default logging configuration file and outputs all messages to log files in the `$IGNITE_HOME/work/log` directory.
+You can override the default logging directory by specifying the `IGNITE_LOG_DIR` system property.
+
+If you use Ignite as a library in your application, the default logging configuration includes only a console handler at the INFO level.
+You can provide a custom configuration file via the `java.util.logging.config.file` system property.
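+
+For instance (a hedged illustration; the file path and main class are assumptions), you can point the JVM at a custom JUL configuration file when starting your application:
+
+[source, shell]
+----
+java -Djava.util.logging.config.file=/path/to/custom.logging.properties -cp myapp.jar com.example.App
+----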
+
+== Using Log4j
+
+NOTE: Before using Log4j, enable the link:setup#enabling-modules[ignite-log4j] module.
+
+To enable the Log4j logger, set the `gridLogger` property of `IgniteConfiguration`, as shown in the following example:
+
+:javaFile: {javaCodeDir}/Logging.java
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/log4j.xml[tags=log4j;!discovery, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=log4j, indent=0]
+----
+tab:.NET[unsupported]
+tab:C++[unsupported]
+--
+
+In the above example, the path to `log4j-config.xml` can be an absolute path, a path relative to the META-INF folder on the classpath, or a path relative to `IGNITE_HOME`. An example log4j configuration file can be found in the distribution package (`$IGNITE_HOME/config/ignite-log4j.xml`).
+
+== Using Log4j2
+NOTE: Before using Log4j2, enable the link:setup#enabling-modules[ignite-log4j2] module.
+
+To enable the Log4j2 logger, set the `gridLogger` property of `IgniteConfiguration`, as shown below:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/log4j2.xml[tags=log4j2;!discovery, indent=0]
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=log4j2, indent=0]
+----
+
+tab:.NET[unsupported]
+
+
+tab:C++[unsupported]
+--
+
+In the above example, the path to `log4j2-config.xml` can be an absolute path, a path relative to the META-INF folder on the classpath, or a path relative to `IGNITE_HOME`. An example log4j2 configuration file can be found in the distribution package (`$IGNITE_HOME/config/ignite-log4j2.xml`).
+
+NOTE: Log4j2 supports runtime reconfiguration, i.e. changes in the configuration file are applied without restarting the application.
+
+== Using JCL
+NOTE: Before using JCL, enable the link:setup#enabling-modules[ignite-jcl] module.
+
+NOTE: JCL simply forwards logging messages to an underlying logging system, which needs to be properly configured. Refer to the link:https://commons.apache.org/proper/commons-logging/guide.html#Configuration[JCL official documentation] for more information. For example, if you want to use Log4j, make sure you add the required libraries to your classpath.
+
+To enable the JCL logger, set the `gridLogger` property of `IgniteConfiguration`, as shown below:
+
+[tabs]
+--
+
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/jcl.xml[tags=jcl;!discovery, indent=0]
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=jcl, indent=0]
+----
+
+tab:.NET[unsupported]
+tab:C++[unsupported]
+--
+
+== Using SLF4J
+
+NOTE: Before using SLF4J, enable the link:setup#enabling-modules[ignite-slf4j] module.
+
+To enable the SLF4J logger, set the `gridLogger` property of `IgniteConfiguration`, as shown below:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/slf4j.xml[tags=slf4j;!discovery, indent=0]
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=slf4j, indent=0]
+----
+
+tab:.NET[unsupported]
+
+tab:C++[unsupported]
+--
+
+Refer to the link:https://www.slf4j.org/docs.html[SLF4J user manual] for more information.
+
+== Suppressing Sensitive Information
+
+Logs can include the content of cache entries, system properties, startup options, etc.
+In some cases, those can contain sensitive information.
+You can prevent such information from being written to the log by setting the `IGNITE_TO_STRING_INCLUDE_SENSITIVE` system property to `false`.
+
+[source, shell]
+----
+./ignite.sh -J-DIGNITE_TO_STRING_INCLUDE_SENSITIVE=false
+----
+
+See link:starting-nodes#setting-jvm-options[Setting JVM Options] to learn about different ways to set system properties.
+
+== Logging Configuration Example
+
+The following steps guide you through the process of configuring logging. This should be suitable for most cases.
+
+. Use either Log4j or Log4j2 as the logging framework. To enable it, follow the instructions provided in the corresponding section above.
+. If you use the default configuration file (either `ignite-log4j.xml` or `ignite-log4j2.xml`), uncomment the CONSOLE appender.
+. In the log4j configuration file, set the path to the log file. The default location is `${IGNITE_HOME}/work/log/ignite.log`.
+. Start the nodes in verbose mode:
+   - If you use `ignite.sh` to start nodes, specify the `-v` option.
+   - If you start nodes from Java code, set the `IGNITE_QUIET=false` system property, as shown in the sketch below.
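+
+The following is a hedged sketch of the second option (the configuration path is an assumption); the property must be set before the node starts:
+
+[source, java]
+----
+// Disable the quiet mode programmatically. This must run before
+// Ignition.start(), otherwise the setting has no effect.
+System.setProperty("IGNITE_QUIET", "false");
+
+Ignite ignite = Ignition.start("config/ignite-config.xml");
+----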
+
+
diff --git a/docs/_docs/machine-learning/binary-classification/ann.adoc b/docs/_docs/machine-learning/binary-classification/ann.adoc
new file mode 100644
index 0000000..6d6e415
--- /dev/null
+++ b/docs/_docs/machine-learning/binary-classification/ann.adoc
@@ -0,0 +1,87 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= ANN (Approximate Nearest Neighbor)
+
+An approximate nearest neighbor search algorithm is allowed to return points whose distance from the query is at most *c* times the distance from the query to its nearest points.
+
+The appeal of this approach is that, in many cases, an approximate nearest neighbor is almost as good as the exact one. In particular, if the distance measure accurately captures the notion of user quality, then small differences in the distance should not matter.
+
+The ANN algorithm is able to solve multi-class classification tasks. The Apache Ignite implementation is a heuristic algorithm that searches a small, limited set of *N* candidate points (internally it uses a distributed KMeans clustering algorithm to find the centroids), which can vote for class labels like the KNN algorithm.
+
+The difference between KNN and ANN is in the prediction phase: KNN searches for the k-nearest neighbors among all training points, whereas ANN runs this search only over the small subset of candidate points.
+
+NOTE: If *N* is set to the size of the training set, ANN reduces to KNN with an enormous amount of time spent in the training phase. Instead, choose *N* comparable with *k* (e.g. 10 x k, 100 x k, and so on).
+
+== Model
+
+ANN classification output represents a class membership. An object is classified by the majority vote of its neighbors. The object is assigned to the class that is most common among its *k* nearest neighbors. *k* is a positive integer, typically small. There is a special case when *k* is 1; then the object is simply assigned to the class of that single nearest neighbor.
+At present, Ignite supports the following parameters for the ANN classification algorithm:
+
+  * k - the number of nearest neighbors.
+  * distanceMeasure - one of the distance metrics provided by the Machine Learning (ML) framework, such as Euclidean, Hamming or Manhattan.
+  * isWeighted - false by default; if true, it enables the weighted KNN algorithm.
+
+
+[source, java]
+----
+NNClassificationModel knnMdl = trainer.fit(
+...
+).withK(5)
+ .withDistanceMeasure(new EuclideanDistance())
+ .withWeighted(true);
+
+
+// Make a prediction.
+double prediction = knnMdl.predict(observation);
+----
+
+== Trainer
+
+The trainer of the ANN model uses KMeans to calculate the candidate subset, which is why it exposes the same parameters as the KMeans algorithm for tuning its hyperparameters. It builds not only the set of candidates but also their class-label distributions, which are used to vote for the class label during the prediction phase.
+
+At present, Ignite supports the following parameters for `ANNClassificationTrainer`:
+
+  * k - the number of possible clusters.
+  * maxIterations - one stop criterion (the other one is epsilon).
+  * epsilon - the delta of convergence (the delta between the old and new centroid values).
+  * distance - one of the distance metrics provided by the ML framework, such as Euclidean, Hamming or Manhattan.
+  * seed - one of the initialization parameters which helps to reproduce models (the trainer has a random initialization step to get the first centroids).
+
+
+[source, java]
+----
+// Set up the trainer
+ANNClassificationTrainer trainer = new ANNClassificationTrainer()
+  .withDistance(new ManhattanDistance())
+  .withK(50)
+  .withMaxIterations(1000)
+  .withSeed(1234L)
+  .withEpsilon(1e-2);
+
+// Build the model
+NNClassificationModel knnMdl = trainer.fit(
+  ignite,
+  dataCache,
+  vectorizer
+).withK(5)
+ .withDistanceMeasure(new EuclideanDistance())
+ .withWeighted(true);
+----
+
+== Example
+
+
+To see how ANNClassificationModel can be used in practice, try this https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/knn/ANNClassificationExample.java[example] that is available on GitHub and delivered with every Apache Ignite distribution. The training dataset is the Iris dataset that can be loaded from the https://archive.ics.uci.edu/ml/datasets/iris[UCI Machine Learning Repository].
+
diff --git a/docs/_docs/machine-learning/binary-classification/decision-trees.adoc b/docs/_docs/machine-learning/binary-classification/decision-trees.adoc
new file mode 100644
index 0000000..57ab7bf
--- /dev/null
+++ b/docs/_docs/machine-learning/binary-classification/decision-trees.adoc
@@ -0,0 +1,77 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Decision Trees
+
+Decision trees and their ensembles are popular methods for the machine learning tasks of classification and regression. Decision trees are widely used since they are easy to interpret, handle categorical features, extend to the multiclass classification setting, do not require feature scaling, and are able to capture non-linearities and feature interactions. Tree ensemble algorithms such as random forests and boosting are among the top performers for classification and regression tasks.
+
+== Overview
+
+Decision trees are a simple yet powerful model in supervised machine learning. The main idea is to split a feature space into regions such that the value in each region varies little. The measure of the values' variation in a region is called the impurity of the region.
+
+Apache Ignite provides an implementation of the algorithm optimized for data stored in rows (see link:machine-learning/partition-based-dataset[Partition Based Dataset]).
+
+Splits are done recursively and every region created from a split can be split further. Therefore, the whole process can be described by a binary tree, where each node is a particular region and its children are the regions derived from it by another split.
+
+Let each sample from a training set belong to some space `S` and let `p_i` be the projection onto the feature with index `i`. Then a split by a continuous feature with index `i` has the form:
+
+image::images/555.gif[]
+
+and a split by a categorical feature with values from some set `X` has the form:
+
+image::images/666.gif[]
+
+Here `X_0` is a subset of `X`.
+
+The model works as follows: the split process stops when either the algorithm has reached the configured maximum depth, or splitting of any region does not result in a significant impurity loss. The prediction of a value for a point `s` from `S` is a traversal of the tree down to the node that corresponds to the region containing `s`, returning the value associated with that leaf.
+
+
+== Model
+
+The Model in a decision tree classification is represented by the class `DecisionTreeNode`. We can make a prediction for a given vector of features in the following way:
+
+
+[source, java]
+----
+DecisionTreeNode mdl = ...;
+
+double prediction = mdl.apply(observation);
+----
+
+The model is a fully independent object and after the training it can be saved, serialized and restored.
+
+== Trainer
+
+A Decision Tree algorithm can be used for classification and regression depending upon the impurity measure and node instantiation approach.
+
+=== Classification
+
+The Classification Decision Tree uses the https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity[Gini] impurity measure and you can use it in the following way:
+
+[source, java]
+----
+// Create decision tree classification trainer.
+DecisionTreeClassificationTrainer trainer = new DecisionTreeClassificationTrainer(
+    4, // Max depth.
+    0  // Min impurity decrease.
+);
+
+// Train model.
+DecisionTreeNode mdl = trainer.fit(ignite, dataCache, vectorizer);
+----
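+
+=== Regression
+
+For regression tasks, the trainer relies on a variance-based impurity measure. The following is a hedged sketch, assuming that `DecisionTreeRegressionTrainer` mirrors the classification trainer's constructor:
+
+[source, java]
+----
+// Create a decision tree regression trainer.
+DecisionTreeRegressionTrainer trainer = new DecisionTreeRegressionTrainer(
+    4, // Max depth.
+    0  // Min impurity decrease.
+);
+
+// Train the model.
+DecisionTreeNode mdl = trainer.fit(ignite, dataCache, vectorizer);
+----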
+
+
+== Examples
+
+To see how the Decision Tree can be used in practice, try this https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/tree/DecisionTreeClassificationTrainerExample.java[classification example] that is available on GitHub and delivered with every Apache Ignite distribution.
diff --git a/docs/_docs/machine-learning/binary-classification/introduction.adoc b/docs/_docs/machine-learning/binary-classification/introduction.adoc
new file mode 100644
index 0000000..eccdfaf
--- /dev/null
+++ b/docs/_docs/machine-learning/binary-classification/introduction.adoc
@@ -0,0 +1,36 @@
+---
+layout: toc
+---
+
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+= Introduction
+
+In machine learning and statistics, classification is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations (or instances) whose category membership is known.
+
+All existing training algorithms presented in this section are designed to solve binary classification tasks:
+
+
+* Linear SVM (Support Vector Machines)
+* Decision Trees
+* Multilayer Perceptron
+* Logistic Regression
+* k-NN Classification
+* ANN (Approximate Nearest Neighbor)
+* Naive Bayes
+
+
+Binary or binomial classification is the task of classifying the elements of a given set into two groups (predicting which group each one belongs to) on the basis of a classification rule.
diff --git a/docs/_docs/machine-learning/binary-classification/knn-classification.adoc b/docs/_docs/machine-learning/binary-classification/knn-classification.adoc
new file mode 100644
index 0000000..e5d1315
--- /dev/null
+++ b/docs/_docs/machine-learning/binary-classification/knn-classification.adoc
@@ -0,0 +1,63 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= k-NN Classification
+
+The Apache Ignite Machine Learning component provides two versions of the widely used k-NN (k-nearest neighbors) algorithm - one for classification tasks and the other for regression tasks.
+
+This documentation reviews k-NN as a solution for classification tasks.
+
+== Trainer and Model
+
+The k-NN algorithm is a non-parametric method whose input consists of the k-closest training examples in the feature space.
+
+k-NN classification output represents a class membership. An object is classified by the majority vote of its neighbors. The object is assigned to the class that is most common among its k nearest neighbors. `k` is a positive integer, typically small. There is a special case when `k` is `1`; then the object is simply assigned to the class of that single nearest neighbor.
+
+Presently, Ignite supports the following parameters for the k-NN classification algorithm:
+
+* `k` - the number of nearest neighbors
+* `distanceMeasure` - one of the distance metrics provided by the ML framework, such as Euclidean, Hamming or Manhattan.
+* `isWeighted` - false by default; if true, it enables the weighted KNN algorithm.
+* `dataCache` - holds the training set of objects for which the class is already known.
+* `indexType` - the distributed spatial index; has three values: ARRAY, KD_TREE, BALL_TREE.
+
+
+[source, java]
+----
+// Create the trainer.
+KNNClassificationTrainer trainer = new KNNClassificationTrainer()
+  .withK(3)
+  .withIdxType(SpatialIndexType.BALL_TREE)
+  .withDistanceMeasure(new EuclideanDistance())
+  .withWeighted(true);
+
+// Train model.
+KNNClassificationModel knnMdl = trainer.fit(
+  ignite,
+  dataCache,
+  vectorizer
+);
+
+// Make a prediction.
+double prediction = knnMdl.predict(observation);
+----
+
+== Example
+
+To see how kNN Classification can be used in practice, try this https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/knn/KNNClassificationExample.java[example] that is available on GitHub and delivered with every Apache Ignite distribution.
+
+The training dataset is the Iris dataset which can be loaded from the https://archive.ics.uci.edu/ml/datasets/iris[UCI Machine Learning Repository].
diff --git a/docs/_docs/machine-learning/binary-classification/linear-svm.adoc b/docs/_docs/machine-learning/binary-classification/linear-svm.adoc
new file mode 100644
index 0000000..c582818
--- /dev/null
+++ b/docs/_docs/machine-learning/binary-classification/linear-svm.adoc
@@ -0,0 +1,52 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Linear SVM (Support Vector Machine)
+
+Support Vector Machines (SVMs) are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis.
+
+Given a set of training examples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier.
+
+The Apache Ignite Machine Learning module supports only Linear SVM. For more information, see SVM on link:https://en.wikipedia.org/wiki/Support_vector_machine[Wikipedia].
+
+== Model
+
+A Model in the case of SVM is represented by the class `SVMLinearClassificationModel`. It enables a prediction to be made for a given vector of features, in the following way:
+
+
+[source, java]
+----
+SVMLinearClassificationModel model = ...;
+
+double prediction = model.predict(observation);
+----
+
+Presently, Ignite supports the following parameters for `SVMLinearClassificationModel`:
+
+* `isKeepingRawLabels` - controls the output label format: -1 and +1 when false, or the raw distance from the separating hyperplane when true (default value: false)
+* `threshold` - the threshold above which the raw value is assigned the +1 label (default value: 0.0)
+
+
+[source, java]
+----
+SVMLinearClassificationModel model = ...;
+
+double prediction = model
+  .withRawLabels(true)
+  .withThreshold(5)
+  .predict(observation);
+----
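+
+== Trainer
+
+The following is a hedged sketch of training such a model; it assumes that `SVMLinearClassificationTrainer` follows the same `fit(ignite, dataCache, vectorizer)` pattern as the other Ignite ML trainers:
+
+[source, java]
+----
+// Set up the trainer with default settings.
+SVMLinearClassificationTrainer trainer = new SVMLinearClassificationTrainer();
+
+// Build the model.
+SVMLinearClassificationModel mdl = trainer.fit(ignite, dataCache, vectorizer);
+
+// Make a prediction.
+double prediction = mdl.predict(observation);
+----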
+
+
+
diff --git a/docs/_docs/machine-learning/binary-classification/logistic-regression.adoc b/docs/_docs/machine-learning/binary-classification/logistic-regression.adoc
new file mode 100644
index 0000000..73e40d6
--- /dev/null
+++ b/docs/_docs/machine-learning/binary-classification/logistic-regression.adoc
@@ -0,0 +1,85 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Logistic Regression
+
+Binary Logistic Regression is a special type of regression where a binary response variable is related to a set of explanatory variables, which can be discrete and/or continuous. The important point to note is that in linear regression, the expected values of the response variable are modeled based on a combination of values taken by the predictors. In logistic regression, the probability (or odds) of the response taking a particular value is modeled based on the combination of values taken by the predictors.
+
+In the Apache Ignite ML module, logistic regression is implemented via `LogisticRegressionModel`, which solves the binary classification problem. It is a linear method with the loss function given by the logistic loss:
+
+image::images/logistic-regression.png[]
+
+For binary classification problems, the algorithm outputs a binary logistic regression model. Given a new data point, denoted by x, the model makes predictions by applying the logistic function:
+
+
+image::images/logistic-regression2.png[]
+
+By default, if `f(w^T x) > 0.5`, the outcome is positive; otherwise, it is negative. However, unlike linear SVMs, the raw output of the logistic regression model `f(z)` has a probabilistic interpretation (i.e., the probability that the outcome is positive).
+
+== Model
+
+The model is represented by the class `LogisticRegressionModel` and keeps the weight vector. It enables a prediction to be made for a given vector of features, in the following way:
+
+
+[source, java]
+----
+LogisticRegressionModel mdl = ...;
+
+double prediction = mdl.predict(observation);
+----
+
+Ignite supports several parameters for `LogisticRegressionModel`:
+
+* `isKeepingRawLabels` - controls the output label format: 0 and 1 when false, or the raw distance from the separating hyperplane when true (default value: false)
+* `threshold` - the threshold above which the observation is assigned the label 1 (default value: 0.5)
+
+
+
+[source, java]
+----
+LogisticRegressionModel mdl = ...;
+
+double prediction = mdl.withRawLabels(true).withThreshold(0.5).predict(observation);
+----
+
+== Trainer
+
+The trainer of the binary logistic regression model builds a one-level MLP trainer under the hood.
+
+Ignite supports the following parameters for LogisticRegressionSGDTrainer:
+
+  * updatesStgy - the update strategy
+  * maxIterations - the maximum number of iterations before convergence
+  * batchSize - the size of the learning batch
+  * locIterations - the number of local iterations of the SGD algorithm
+  * seed - the seed for internal randomness, used to reproduce training results
+
+
+Set up the trainer:
+
+[source, java]
+----
+LogisticRegressionSGDTrainer trainer = new LogisticRegressionSGDTrainer()
+  .withUpdatesStgy(UPDATES_STRATEGY)
+  .withAmountOfIterations(MAX_ITERATIONS)
+  .withAmountOfLocIterations(LOC_ITERATIONS)
+  .withBatchSize(BATCH_SIZE)
+  .withSeed(SEED);
+
+// Build the model
+LogisticRegressionModel mdl = trainer.fit(ignite, dataCache, vectorizer);
+----
+
+
+== Example
+
+To see how `LogRegressionMultiClassModel` can be used in practice, try this link:https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/regression/logistic/multiclass/LogRegressionMultiClassClassificationExample.java[example, window=_blank], available on GitHub and delivered with every Apache Ignite distribution.
diff --git a/docs/_docs/machine-learning/binary-classification/multilayer-perceptron.adoc b/docs/_docs/machine-learning/binary-classification/multilayer-perceptron.adoc
new file mode 100644
index 0000000..0308b32
--- /dev/null
+++ b/docs/_docs/machine-learning/binary-classification/multilayer-perceptron.adoc
@@ -0,0 +1,78 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Multilayer Perceptron
+
+Multilayer Perceptron (MLP) is the basic form of neural network. It consists of one input layer and zero or more transformation layers. Each transformation layer depends on the previous layer in the following way:
+
+image::images/333.gif[]
+
+In the above equation, the dot operator is the dot product of two vectors, functions denoted by `sigma` are called activators, vectors denoted by `w` are called weights, and vectors denoted by `b` are called biases. Each transformation layer has associated weights, activator, and optionally biases. The set of all weights and biases of MLP is the set of MLP parameters.
+
+
+== Model
+
+
+The model in the case of a neural network is represented by the class `MultilayerPerceptron`. It allows you to make a prediction for a given vector of features in the following way:
+
+
+[source, java]
+----
+MultilayerPerceptron mlp = ...
+
+Matrix prediction = mlp.apply(observation);
+----
+
+The model is a fully independent object and after the training it can be saved, serialized and restored.
+
+== Trainer
+
+One of the popular ways for supervised model training is batch training. In this approach, training is done in iterations; during each iteration we extract a subpart (batch) of labeled data (data consisting of inputs of the approximated function and the corresponding values of this function, often called the 'ground truth') and update the model parameters using this subpart. Updates are made to minimize the loss function on the batches.
+
+Apache Ignite `MLPTrainer` is used for distributed batch training, which works in a map-reduce way. Each iteration (let's call it a global iteration) consists of several parallel iterations, which in turn consist of several local steps. Each local iteration is executed by its own worker and performs the specified number of local steps (called the synchronization period) to compute its update of the model parameters. Then all updates are accumulated on the node that started the training and transformed into a global update, which is sent back to all workers. This process continues until the stop criterion is reached.
+
+`MLPTrainer` can be parameterized by neural network architecture, loss function, update strategy (`SGD`, `RProp` or `Nesterov`), max number of iterations, batch size, number of local iterations and seed.
+
+
+[source, java]
+----
+// Define a layered architecture.
+MLPArchitecture arch = new MLPArchitecture(2).
+    withAddedLayer(10, true, Activators.RELU).
+    withAddedLayer(1, false, Activators.SIGMOID);
+
+// Define a neural network trainer.
+MLPTrainer<SimpleGDParameterUpdate> trainer = new MLPTrainer<>(
+    arch,
+    LossFunctions.MSE,
+    new UpdatesStrategy<>(
+        new SimpleGDUpdateCalculator(0.1),
+        SimpleGDParameterUpdate::sumLocal,
+        SimpleGDParameterUpdate::avg
+    ),
+    3000,   // Max iterations.
+    4,      // Batch size.
+    50,     // Local iterations.
+    123L    // Random seed.
+);
+
+// Train model.
+MultilayerPerceptron mlp = trainer.fit(ignite, dataCache, vectorizer);
+----
+
+
+== Example
+
+To see how Deep Learning can be used in practice, try link:https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/nn/MLPTrainerExample.java[this example, window=_blank], available on GitHub and delivered with every Apache Ignite distribution.
+
diff --git a/docs/_docs/machine-learning/binary-classification/naive-bayes.adoc b/docs/_docs/machine-learning/binary-classification/naive-bayes.adoc
new file mode 100644
index 0000000..43307da
--- /dev/null
+++ b/docs/_docs/machine-learning/binary-classification/naive-bayes.adoc
@@ -0,0 +1,109 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Naive Bayes
+
+== Overview
+
+Naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features.
+In all trainers, prior probabilities can be preset or calculated. Also, there is an option to use equal probabilities.
+
+
+
+== Gaussian Naive Bayes
+
+The Gaussian Naive Bayes algorithm is based on https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Gaussian_naive_Bayes[this description^].
+
+When dealing with continuous data, a typical assumption is that the continuous values associated with each class are distributed according to a normal (or Gaussian) distribution.
+
+The model predicts that the result value y belongs to the class C_k, k in [0..K], as
+
+image::images/naive-bayes.png[]
+
+Where
+
+image::images/naive-bayes2.png[]
+
+
+The model returns the number (index) of the most probable class.
+The trainer computes the mean and variance for each class.
+
+
+[source, java]
+----
+GaussianNaiveBayesTrainer trainer = new GaussianNaiveBayesTrainer();
+
+GaussianNaiveBayesModel mdl = trainer.fit(ignite, dataCache, vectorizer);
+----
+
+The full example can be found https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/naivebayes/GaussianNaiveBayesTrainerExample.java[here].
+
+== Discrete (Bernoulli) Naive Bayes
+
+The Discrete Naive Bayes algorithm works over Bernoulli or multinomial distributions; see https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Multinomial_naive_Bayes[this description] for background.
+
+It can be used for non-continuous features. The thresholds that convert a feature into a discrete value should be set on the trainer. If the features are binary, the discrete Bayes becomes Bernoulli.
+
+The model predicts that the result value y belongs to the class C_k, k in [0..K], as
+
+image::images/naive-bayes3.png[]
+
+Where x_i is a discrete feature, p_ki is the probability of observing feature i in the class C_k, and p(C_k) is the prior probability of the class.
+
+The model returns the number (index) of the most probable class.
+
+
+[source, java]
+----
+double[][] thresholds = new double[][] {{.5}, {.5}, {.5}, {.5}, {.5}};
+
+DiscreteNaiveBayesTrainer trainer = new DiscreteNaiveBayesTrainer()
+  .setBucketThresholds(thresholds);
+
+DiscreteNaiveBayesModel mdl = trainer.fit(ignite, dataCache, vectorizer);
+----
+
+
+The full example can be found https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/naivebayes/DiscreteNaiveBayesTrainerExample.java[here].
+
+
+== Compound Naive Bayes
+
+Compound Naive Bayes is a composition of several Naive Bayes classifiers where each classifier represents a subset of features of one type.
+
+The model contains both Gaussian and Discrete Bayes. A user can select which set of features will be trained on each model.
+
+The model returns the number (index) of the most probable class.
+
+
+
+[source, java]
+----
+double[] priorProbabilities = new double[] {.5, .5};
+
+double[][] thresholds = new double[][] {{.5}, {.5}, {.5}, {.5}, {.5}};
+
+CompoundNaiveBayesTrainer trainer = new CompoundNaiveBayesTrainer()
+  .withPriorProbabilities(priorProbabilities)
+  .withGaussianNaiveBayesTrainer(new GaussianNaiveBayesTrainer())
+  .withGaussianFeatureIdsToSkip(asList(3, 4, 5, 6, 7))
+  .withDiscreteNaiveBayesTrainer(new DiscreteNaiveBayesTrainer()
+                                 .setBucketThresholds(thresholds))
+  .withDiscreteFeatureIdsToSkip(asList(0, 1, 2));
+
+CompoundNaiveBayesModel mdl = trainer.fit(ignite, dataCache, vectorizer);
+----
+
+The full example can be found https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/naivebayes/CompoundNaiveBayesExample.java[here].
+
diff --git a/docs/_docs/machine-learning/clustering/gaussian-mixture.adoc b/docs/_docs/machine-learning/clustering/gaussian-mixture.adoc
new file mode 100644
index 0000000..9c241e9
--- /dev/null
+++ b/docs/_docs/machine-learning/clustering/gaussian-mixture.adoc
@@ -0,0 +1,71 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Gaussian mixture (GMM)
+
+A Gaussian mixture model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters.
+
+NOTE: You could think of mixture models as generalizing k-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians.
+
+== Model
+
+This algorithm represents a soft clustering model where each cluster is a Gaussian distribution with its own mean value and covariance matrix. Such a model can predict a cluster for a given vector using the maximum likelihood principle.
+
+It defines the labels in the following way:
+
+
+[source, java]
+----
+GmmModel mdl = trainer.fit(
+    ignite,
+    dataCache,
+    vectorizer
+);
+
+double clusterLabel = mdl.predict(inputVector);
+----
+
+
+== Trainer
+
+
+GMM is an unsupervised learning algorithm. The GMM trainer implements the expectation-maximization (EM) algorithm for fitting mixture-of-Gaussians models. It can compute the Bayesian Information Criterion to assess the number of clusters in the data.
+
+Presently, Ignite ML supports the following parameters for the GMM clustering algorithm:
+
+* `maxCountOfClusters` - the maximum number of possible clusters
+* `maxCountOfIterations` - one stop criterion (the other one is epsilon)
+* `epsilon` - the delta of convergence (the delta between the old and new centroid values)
+* `countOfComponents` - the number of components
+* `maxLikelihoodDivergence` - the maximum allowed divergence between the likelihood of a vector and the maximum likelihood in the dataset, used to identify anomalies
+* `minElementsForNewCluster` - the minimum number of anomalies (in terms of maxLikelihoodDivergence) required to create a new cluster
+* `minClusterProbability` - the minimum cluster probability
+
+
+[source, java]
+----
+// Set up the trainer
+GmmTrainer trainer = new GmmTrainer(COUNT_OF_COMPONENTS);
+
+// Build the model
+GmmModel mdl = trainer
+    .withMaxCountIterations(MAX_COUNT_ITERATIONS)
+    .withMaxCountOfClusters(MAX_AMOUNT_OF_CLUSTERS)
+    .fit(ignite, dataCache, vectorizer);
+----
+
+== Example
+
+To see how GMM clustering can be used in practice, try this https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/clustering/GmmClusterizationExample.java[example] that is available on GitHub and delivered with every Apache Ignite distribution.
+
diff --git a/docs/_docs/machine-learning/clustering/introduction.adoc b/docs/_docs/machine-learning/clustering/introduction.adoc
new file mode 100644
index 0000000..eff08b0
--- /dev/null
+++ b/docs/_docs/machine-learning/clustering/introduction.adoc
@@ -0,0 +1,22 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Introduction
+
+The Apache Ignite Machine Learning module provides K-Means and GMM algorithms to group the unlabeled data into clusters.
+
+All existing training algorithms presented in this section are designed to solve unsupervised (clustering) tasks:
+
+* K-Means Clustering
+* Gaussian mixture (GMM)
diff --git a/docs/_docs/machine-learning/clustering/k-means-clustering.adoc b/docs/_docs/machine-learning/clustering/k-means-clustering.adoc
new file mode 100644
index 0000000..eba9ec7
--- /dev/null
+++ b/docs/_docs/machine-learning/clustering/k-means-clustering.adoc
@@ -0,0 +1,80 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= K-Means Clustering
+
+K-means is one of the most commonly used clustering algorithms that clusters the data points into a predefined number of clusters.
+
+== Model
+
+K-Means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster.
+
+The model holds a vector of the k centers and one of the distance metrics provided by the ML framework, such as Euclidean, Hamming or Manhattan.
+
+It creates the label as follows:
+
+
+
+[source, java]
+----
+KMeansModel mdl = trainer.fit(
+    ignite,
+    dataCache,
+    vectorizer
+);
+
+
+double clusterLabel = mdl.predict(inputVector);
+----
+
+== Trainer
+
+
+KMeans is an unsupervised learning algorithm. It solves a clustering task which is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters).
+
+KMeans is a parametrized iterative algorithm which calculates the new means to be the centroids of the observations in the clusters on each iteration.
+
+Presently, Ignite supports the following parameters for the KMeans clustering algorithm:
+
+* `k` - the number of possible clusters
+* `maxIterations` - one stop criterion (the other one is epsilon)
+* `epsilon` - the delta of convergence (the delta between the old and new centroid values)
+* `distance` - one of the distance metrics provided by the ML framework, such as Euclidean, Hamming or Manhattan
+* `seed` - one of the initialization parameters which helps to reproduce models (the trainer has a random initialization step to get the first centroids)
+
+
+[source, java]
+----
+// Set up the trainer
+KMeansTrainer trainer = new KMeansTrainer()
+   .withDistance(new EuclideanDistance())
+   .withK(AMOUNT_OF_CLUSTERS)
+   .withMaxIterations(MAX_ITERATIONS)
+   .withEpsilon(PRECISION);
+
+// Build the model
+KMeansModel mdl = trainer.fit(
+    ignite,
+    dataCache,
+    vectorizer
+);
+----
+
+
+== Example
+
+
+To see how K-Means clustering can be used in practice, try this https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/clustering/KMeansClusterizationExample.java[example^] that is available on GitHub and delivered with every Apache Ignite distribution.
+
+The training dataset is a subset of the Iris dataset (classes with labels 1 and 2, which form a linearly separable two-class dataset) which can be loaded from the https://archive.ics.uci.edu/ml/datasets/iris[UCI Machine Learning Repository].
diff --git a/docs/_docs/machine-learning/ensemble-methods/bagging.adoc b/docs/_docs/machine-learning/ensemble-methods/bagging.adoc
new file mode 100644
index 0000000..3722a48
--- /dev/null
+++ b/docs/_docs/machine-learning/ensemble-methods/bagging.adoc
@@ -0,0 +1,56 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Bagging
+
+Bagging stands for bootstrap aggregation. One way to reduce the variance of an estimate is to average together multiple estimates. For example, we can train M different trees on different subsets of the data (chosen randomly with replacement) and compute the ensemble:
+
+image::images/bagging.png[]
+
+Bagging uses bootstrap sampling to obtain the data subsets for training the base learners. For aggregating the outputs of base learners, bagging uses voting for classification and averaging for regression.
+
+
+[source, java]
+----
+// Define the weak classifier.
+DecisionTreeClassificationTrainer trainer = new DecisionTreeClassificationTrainer(5, 0);
+
+// Set up the bagging process.
+BaggedTrainer<Double> baggedTrainer = TrainerTransformers.makeBagged(
+  trainer, // Trainer for making bagged
+  10,      // Size of ensemble
+  0.6,     // Subsample ratio to whole dataset
+  4,       // Feature vector dimensionality
+  3,       // Feature subspace dimensionality
+  new OnMajorityPredictionsAggregator())
+  .withEnvironmentBuilder(LearningEnvironmentBuilder
+                          .defaultBuilder()
+                          .withRNGSeed(1)
+                         );
+
+// Train the Bagged Model.
+BaggedModel mdl = baggedTrainer.fit(
+  ignite,
+  dataCache,
+  vectorizer
+);
+----
+
+
+TIP: A commonly used class of ensemble algorithms is forests of randomized trees.
+
+== Example
+
+The full example can be found as part of the Titanic tutorial https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/tutorial/Step_10_Bagging.java[here].
+
diff --git a/docs/_docs/machine-learning/ensemble-methods/gradient-boosting.adoc b/docs/_docs/machine-learning/ensemble-methods/gradient-boosting.adoc
new file mode 100644
index 0000000..ce92be0
--- /dev/null
+++ b/docs/_docs/machine-learning/ensemble-methods/gradient-boosting.adoc
@@ -0,0 +1,99 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Gradient Boosting
+
+In machine learning, boosting is an ensemble meta-algorithm for primarily reducing bias (and also variance) in supervised learning, and a family of machine learning algorithms that convert weak learners into strong ones.
+
+[NOTE]
+====
+[discrete]
+=== Question posed by Kearns and Valiant (1988, 1989)
+"Can a set of weak learners create a single strong learner?"
+
+A weak learner is defined to be a classifier that is only slightly correlated with the true classification (it can label examples better than random guessing). In contrast, a strong learner is a classifier that is arbitrarily well-correlated with the true classification.
+====
+
+In 1990, Robert Schapire answered this question in the affirmative, which led to the development of the boosting technique.
+
+Boosting is presented in the Ignite ML library as Gradient Boosting (the most popular boosting implementation).
+
+== Overview
+
+
+Gradient boosting is a machine learning technique that produces a prediction model in the form of an https://en.wikipedia.org/wiki/Ensemble_learning[ensemble] of weak prediction models. A gradient boosting algorithm tries to solve the minimization error problem on learning samples in a functional space where each function is a model. Each model in this composition tries to predict a gradient of error for points in a feature space and these predictions will be summed with some weight to model an answer. This algorithm may be used for regression and classification problems. For more information please see https://en.wikipedia.org/wiki/Gradient_boosting[Wikipedia].
+
+In Ignite ML there is an implementation of the general GDB algorithm and of the GDB-on-trees algorithm. General GDB (`GDBRegressionTrainer` and `GDBBinaryClassifierTrainer`) allows any trainer to train each model in the composition. GDB on trees uses some optimizations specific to trees, such as indexes, to avoid sorting during the decision tree build phase.
+
+
+== Model
+
+In Apache Ignite ML, all implementations of the GDB algorithm use `GDBModel`, which wraps `ModelsComposition` to represent a composition of models. `ModelsComposition` implements the common `Model` interface and can be used as follows:
+
+
+[source, java]
+----
+GDBModel model = ...;
+
+double prediction = model.predict(observation);
+----
+
+`GDBModel` uses `WeightedPredictionsAggregator` as the model answer reducer. This aggregator computes the answer of the meta-model as `result = bias + p1*w1 + p2*w2 + ...`, where
+
+ * `pi` - the answer of the i-th model.
+ * `wi` - the weight of that model in the composition.
+
+GDB uses the mean value of labels for the bias-parameter in the aggregator.
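+
+As a hedged illustration of this formula (the constructor and the apply signature are assumptions based on the aggregator being a function over model predictions), the following sketch aggregates the answers of two models:
+
+[source, java]
+----
+// Weights for the two models in the composition, plus the bias term.
+double[] weights = {0.3, 0.7};
+double bias = 1.0;
+
+PredictionsAggregator aggregator = new WeightedPredictionsAggregator(weights, bias);
+
+// result = bias + p1*w1 + p2*w2 = 1.0 + 2.0*0.3 + 4.0*0.7 = 4.4
+double result = aggregator.apply(new double[] {2.0, 4.0});
+----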
+
+== Trainer
+
+Training of GDB is represented by `GDBRegressionTrainer`, `GDBBinaryClassifierTrainer` for general GDB and `GDBRegressionOnTreesTrainer`, `GDBBinaryClassifierOnTreesTrainer` for GDB on trees. All trainers have the following parameters:
+
+  * `gradStepSize` - sets the constant weight of each model in the composition; in future versions of Ignite ML this parameter may be computed dynamically.
+  * `cntOfIterations` - sets the maximum number of models in the composition after training.
+  * `checkConvergenceFactory` - sets the factory that constructs the convergence checker, used to prevent overfitting and training many useless models.
+
+For classifier trainers, there is an additional parameter:
+
+  * `loss` - sets loss computer on some learning example from a training dataset.
+
+There are several factories for convergence checkers:
+
+  * `ConvergenceCheckerStubFactory` creates a checker that always returns false for the convergence check. In this case, the model composition will contain cntOfIterations models.
+  * `MeanAbsValueConvergenceCheckerFactory` creates a checker that computes the mean of the absolute gradient values over the examples in a dataset and returns true if it is less than the user-defined threshold.
+  * `MedianOfMedianConvergenceCheckerFactory` creates a checker that computes the median of the median absolute gradient values on each data partition. This method is less sensitive to anomalies in the learning dataset, but GDB may take longer to converge.
+
+Example of training:
+
+
+
+[source, java]
+----
+// Set up trainer
+GDBTrainer trainer = new GDBBinaryClassifierOnTreesTrainer(
+  learningRate, countOfIterations, new LogLoss()
+).withCheckConvergenceStgyFactory(new MedianOfMedianConvergenceCheckerFactory(precision));
+
+// Build the model
+GDBModel mdl = trainer.fit(
+  ignite,
+  dataCache,
+  vectorizer
+);
+----
+
+
+== Example
+
+To see how GDB Classifier can be used in practice, try this https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/tree/boosting/GDBOnTreesClassificationTrainerExample.java[example] that is available on GitHub and delivered with every Apache Ignite distribution.
diff --git a/docs/_docs/machine-learning/ensemble-methods/introduction.adoc b/docs/_docs/machine-learning/ensemble-methods/introduction.adoc
new file mode 100644
index 0000000..aa34380
--- /dev/null
+++ b/docs/_docs/machine-learning/ensemble-methods/introduction.adoc
@@ -0,0 +1,25 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Introduction
+
+In statistics and machine learning, ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Typically, an ML ensemble consists of a concrete, finite set of alternative models.
+
+Ensemble methods are meta-algorithms that combine several machine learning techniques into one predictive model in order to decrease variance (bagging), bias (boosting), or improve predictions (stacking).
+
+The most popular ensemble models are supported in Apache Ignite ML:
+
+* Stacking
+* Boosting via GradientBoosting
+* Bagging (Bootstrap aggregating) and RandomForest as a special case
diff --git a/docs/_docs/machine-learning/ensemble-methods/random-forest.adoc b/docs/_docs/machine-learning/ensemble-methods/random-forest.adoc
new file mode 100644
index 0000000..7c57eaf
--- /dev/null
+++ b/docs/_docs/machine-learning/ensemble-methods/random-forest.adoc
@@ -0,0 +1,85 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Random Forest
+
+== Random Forest in Apache Ignite
+
+Random forest is an ensemble learning method for classification and regression problems. Random forest training builds a model composition (ensemble) of one model type and aggregates the answers of the individual models. Each model is trained on a part of the training dataset defined according to the bagging and feature subspace methods. More information about these concepts may be found here: https://en.wikipedia.org/wiki/Random_forest, https://en.wikipedia.org/wiki/Bootstrap_aggregating and https://en.wikipedia.org/wiki/Random_subspace_method.
+
+There are several implementations of aggregation algorithms in Apache Ignite ML:
+
+* `MeanValuePredictionsAggregator` - computes the answer of the random forest as the mean value of the predictions from all models in the given composition. This is typically used for regression tasks.
+* `OnMajorityPredictionsAggregator` - returns the mode of the predictions from all models in the given composition. This is useful for classification tasks. NOTE: This aggregator supports multiclass classification tasks.
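+
+For illustration, the two aggregation rules reduce to the following arithmetic (a standalone sketch, not the aggregator API itself; the predictions are made-up numbers):
+
+[source, java]
+----
+double[] predictions = {0.0, 1.0, 1.0, 1.0, 0.0}; // answers of 5 trees
+
+// Mean value (regression): (0 + 1 + 1 + 1 + 0) / 5 = 0.6
+double sum = 0;
+for (double p : predictions)
+    sum += p;
+double mean = sum / predictions.length;
+
+// Majority vote (binary classification): label 1.0 occurs 3 times out of 5, so it wins
+int ones = 0;
+for (double p : predictions)
+    if (p == 1.0)
+        ones++;
+double mode = ones > predictions.length - ones ? 1.0 : 0.0;
+----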
+
+
+== Model
+
+The random forest algorithm is implemented in Ignite ML as a special case of a model composition with specific aggregators for different problems (`MeanValuePredictionsAggregator` for regression, `OnMajorityPredictionsAggregator` for classification).
+
+Here is an example of model usage:
+
+
+[source, java]
+----
+ModelsComposition randomForest = ...;
+
+double prediction = randomForest.predict(featuresVector);
+
+----
+
+
+== Trainer
+
+The random forest training algorithm is implemented by the `RandomForestRegressionTrainer` and `RandomForestClassifierTrainer` trainers with the following parameters:
+
+`meta` - the features meta, a list of feature type descriptions, each consisting of:
+
+  * `featureId` - the index in the features vector.
+  * `isCategoricalFeature` - a flag that is true if the feature is categorical.
+  * `featureName`.
+
+This meta-information is important for the random forest training algorithm because it builds feature histograms, and categorical features should be represented in the histograms for all feature values. The remaining parameters are:
+
+  * `featuresCountSelectionStrgy` - sets the strategy defining the count of random features used for learning one tree. The SQRT, LOG2, ALL and ONE_THIRD strategies are implemented in the `FeaturesCountSelectionStrategies` class.
+  * `maxDepth` - sets the maximum tree depth.
+  * `minImpurityDelta` - a node in a decision tree is split into two nodes only if the decrease in impurity achieved by the split is greater than this value.
+  * `subSampleSize` - a value lying in the [0; MAX_DOUBLE] interval. This parameter defines the count of sample repetitions when uniformly sampling with replacement.
+  * `seed` - the seed value used in random generators.
+
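+The features meta might be assembled as in the following sketch (the `FeatureMeta` constructor shown here - feature name, feature index, categorical flag - is an assumption; check the `org.apache.ignite.ml.dataset.feature` package for the exact signature):
+
+[source, java]
+----
+List<FeatureMeta> featuresMeta = new ArrayList<>();
+
+featuresMeta.add(new FeatureMeta("age", 0, false)); // continuous feature
+featuresMeta.add(new FeatureMeta("sex", 1, true));  // categorical feature
+----
+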
+Random forest training may be used as follows:
+
+
+[source, java]
+----
+RandomForestClassifierTrainer trainer = new RandomForestClassifierTrainer(featuresMeta)
+  .withCountOfTrees(101)
+  .withFeaturesCountSelectionStrgy(FeaturesCountSelectionStrategies.ONE_THIRD)
+  .withMaxDepth(4)
+  .withMinImpurityDelta(0.)
+  .withSubSampleSize(0.3)
+  .withSeed(0);
+
+ModelsComposition rfModel = trainer.fit(
+  ignite,
+  dataCache,
+  vectorizer
+);
+----
+
+
+
+== Example
+
+To see how the Random Forest Classifier can be used in practice, try this https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/tree/randomforest/RandomForestClassificationExample.java[example] that is available on GitHub and delivered with every Apache Ignite distribution. In this example, the Wine recognition dataset is used. A description of this dataset and the data are available from the https://archive.ics.uci.edu/ml/datasets/wine[UCI Machine Learning Repository].
diff --git a/docs/_docs/machine-learning/ensemble-methods/stacking.adoc b/docs/_docs/machine-learning/ensemble-methods/stacking.adoc
new file mode 100644
index 0000000..439ac2a
--- /dev/null
+++ b/docs/_docs/machine-learning/ensemble-methods/stacking.adoc
@@ -0,0 +1,49 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Stacking
+
+Stacking (sometimes called stacked generalization) involves training a learning algorithm to combine the predictions of several other learning algorithms.
+
+First, all of the other algorithms are trained using the available data, then a combiner algorithm is trained to make a final prediction using all the predictions of the other algorithms as additional inputs. If an arbitrary combiner algorithm is used, then stacking can theoretically represent any of the widely known ensemble techniques, although, in practice, a logistic regression model is often used as the combiner, as in the example below.
+
+
+[source, java]
+----
+DecisionTreeClassificationTrainer trainer = new DecisionTreeClassificationTrainer(5, 0);
+DecisionTreeClassificationTrainer trainer1 = new DecisionTreeClassificationTrainer(3, 0);
+DecisionTreeClassificationTrainer trainer2 = new DecisionTreeClassificationTrainer(4, 0);
+
+LogisticRegressionSGDTrainer aggregator = new LogisticRegressionSGDTrainer()
+  .withUpdatesStgy(new UpdatesStrategy<>(new SimpleGDUpdateCalculator(0.2),
+                                         SimpleGDParameterUpdate.SUM_LOCAL,
+                                         SimpleGDParameterUpdate.AVG));
+
+StackedModel<Vector, Vector, Double, LogisticRegressionModel> mdl = new StackedVectorDatasetTrainer<>(aggregator)
+  .addTrainerWithDoubleOutput(trainer)
+  .addTrainerWithDoubleOutput(trainer1)
+  .addTrainerWithDoubleOutput(trainer2)
+  .fit(ignite,
+       dataCache,
+       vectorizer
+      );
+
+----
+
+NOTE: The Evaluator works well with the StackedModel.
+
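+For instance, the stacked model could be evaluated as follows (a sketch reusing the `Evaluator` API described in the Evaluator section of this documentation):
+
+[source, java]
+----
+EvaluationResult res = Evaluator
+  .evaluateBinaryClassification(dataCache, mdl, vectorizer);
+
+double accuracy = res.get(MetricName.ACCURACY);
+----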
+
+== Example
+
+The full example can be found as part of the Titanic tutorial https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/tutorial/Step_9_Scaling_With_Stacking.java[here].
diff --git a/docs/_docs/machine-learning/importing-model/introduction.adoc b/docs/_docs/machine-learning/importing-model/introduction.adoc
new file mode 100644
index 0000000..a49c2e5
--- /dev/null
+++ b/docs/_docs/machine-learning/importing-model/introduction.adoc
@@ -0,0 +1,26 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Introduction
+
+Since version 2.8, Apache Ignite supports importing machine learning models from external platforms, including Apache Spark ML and XGBoost. By working with imported models, you can:
+
+- store imported models in Ignite for further inference,
+- use imported models as part of pipelines,
+- apply ensembling methods such as boosting, bagging, or stacking to those models.
+
+Also, imported pre-trained models can be updated inside Apache Ignite.
+
+Apache Ignite provides an API for distributed inference for models trained in Apache Spark ML, XGBoost, and H2O.
+
diff --git a/docs/_docs/machine-learning/importing-model/model-import-from-apache-spark.adoc b/docs/_docs/machine-learning/importing-model/model-import-from-apache-spark.adoc
new file mode 100644
index 0000000..92992f8
--- /dev/null
+++ b/docs/_docs/machine-learning/importing-model/model-import-from-apache-spark.adoc
@@ -0,0 +1,84 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Import Model from Apache Spark
+
+Starting with Ignite 2.8, it's possible to import the following models of Apache Spark ML:
+
+- Logistic regression (`org.apache.spark.ml.classification.LogisticRegressionModel`)
+- Linear regression (`org.apache.spark.ml.regression.LinearRegressionModel`)
+- Decision tree (`org.apache.spark.ml.classification.DecisionTreeClassificationModel`)
+- Support Vector Machine (`org.apache.spark.ml.classification.LinearSVCModel`)
+- Random forest (`org.apache.spark.ml.classification.RandomForestClassificationModel`)
+- K-Means (`org.apache.spark.ml.clustering.KMeansModel`)
+- Decision tree regression (`org.apache.spark.ml.regression.DecisionTreeRegressionModel`)
+- Random forest regression (`org.apache.spark.ml.regression.RandomForestRegressionModel`)
+- Gradient boosted trees regression (`org.apache.spark.ml.regression.GBTRegressionModel`)
+- Gradient boosted trees (`org.apache.spark.ml.classification.GBTClassificationModel`)
+
+This feature works with models saved in _snappy.parquet_ files.
+
+Supported and tested Spark version: 2.3.0.
+It may also work with Spark versions 2.1, 2.2, 2.3 and 2.4, but this has not been tested.
+
+To get a model from Spark ML, save the model built as a result of training in Spark ML to a parquet file, as in the example below:
+
+
+[source, scala]
+----
+val spark: SparkSession = TitanicUtils.getSparkSession
+
+val passengers = TitanicUtils.readPassengersWithCasting(spark)
+    .select("survived", "pclass", "sibsp", "parch", "sex", "embarked", "age")
+
+// Step - 1: Make Vectors from the dataframe's columns using the special VectorAssembler
+val assembler = new VectorAssembler()
+    .setInputCols(Array("pclass", "sibsp", "parch", "survived"))
+    .setOutputCol("features")
+
+// Step - 2: Transform dataframe to vectorized dataframe with dropping rows
+val output = assembler.transform(
+    passengers.na.drop(Array("pclass", "sibsp", "parch", "survived", "age"))
+).select("features", "age")
+
+
+val lr = new LinearRegression()
+    .setMaxIter(100)
+    .setRegParam(0.1)
+    .setElasticNetParam(0.1)
+    .setLabelCol("age")
+    .setFeaturesCol("features")
+
+// Fit the model
+val model = lr.fit(output)
+model.write.overwrite().save("/home/models/titanic/linreg")
+----
+
+
+To load the model into Ignite ML, use the `SparkModelParser` class via a `parse()` method call:
+
+
+[source, java]
+----
+DecisionTreeNode mdl = (DecisionTreeNode)SparkModelParser.parse(
+   SPARK_MDL_PATH,
+   SupportedSparkModels.DECISION_TREE
+);
+----
+
+You can see more examples of using this API in the examples module in the package: `org.apache.ignite.examples.ml.inference.spark.modelparser`
+
+NOTE: Loading from a Spark PipelineModel is not supported.
+Intermediate feature transformers from Spark are not supported either, due to the different nature of preprocessing on the Ignite and Spark sides.
+
diff --git a/docs/_docs/machine-learning/importing-model/model-import-from-gxboost.adoc b/docs/_docs/machine-learning/importing-model/model-import-from-gxboost.adoc
new file mode 100644
index 0000000..a42ef38
--- /dev/null
+++ b/docs/_docs/machine-learning/importing-model/model-import-from-gxboost.adoc
@@ -0,0 +1,35 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Import Model from XGBoost
+
+Using Apache Ignite you can import pre-trained XGBoost models, which are translated into Apache Ignite ML models, and run local or distributed inference on them.
+
+The difference between translating the model into an Apache Ignite ML model and performing distributed inference lies in the parser implementation. The following example shows how to import a model from XGBoost and translate it into an Apache Ignite ML model for distributed inference:
+
+
+[source, java]
+----
+File mdlRsrc = IgniteUtils.resolveIgnitePath(TEST_MODEL_RES);
+
+ModelReader reader = new FileSystemModelReader(mdlRsrc.getPath());
+
+XGModelParser parser = new XGModelParser();
+
+AsyncModelBuilder mdlBuilder = new IgniteDistributedModelBuilder(ignite, 4, 4);
+
+Model<NamedVector, Future<Double>> mdl = mdlBuilder.build(reader, parser);
+
+----
+
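+The built model is asynchronous: `predict` returns a `Future` with the answer. A usage sketch (blocking on the future and closing the model are assumptions about typical use of this asynchronous API; `observation` is a hypothetical `NamedVector` with the model's feature names):
+
+[source, java]
+----
+// Blocks until the distributed prediction completes
+// (may throw InterruptedException/ExecutionException).
+double prediction = mdl.predict(observation).get();
+
+// Release the resources of the distributed model when done.
+mdl.close();
+----
+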
diff --git a/docs/_docs/machine-learning/machine-learning.adoc b/docs/_docs/machine-learning/machine-learning.adoc
new file mode 100644
index 0000000..9fd95fa
--- /dev/null
+++ b/docs/_docs/machine-learning/machine-learning.adoc
@@ -0,0 +1,139 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Machine Learning
+
+== Overview
+
+Apache Ignite Machine Learning (ML) is a set of simple, scalable and efficient tools that allow building predictive machine learning models without costly data transfers.
+
+The rationale for adding machine and deep learning (DL) to Apache Ignite is quite simple. Today's data scientists have to deal with two major factors that keep ML from mainstream adoption:
+
+* First, the models are trained and deployed (after the training is over) in different systems. Data scientists have to wait for ETL or some other data transfer process to move the data into a system like Apache Mahout or Apache Spark for training. Then they have to wait while this process completes and redeploy the models in a production environment. The whole process can take hours, moving terabytes of data from one system to another. Moreover, the training part usually happens over an old data set.
+
+* The second factor is related to scalability. ML and DL algorithms have to process data sets that no longer fit within a single server unit and are constantly growing. This urges data scientists to come up with sophisticated solutions or turn to distributed computing platforms such as Apache Spark and TensorFlow. However, those platforms mostly solve only a part of the puzzle, which is the model training, making it a burden of the developers to decide how to deploy the models in production later.
+
+
+image::images/machine_learning.png[]
+
+
+=== Zero ETL and Massive Scalability
+
+Ignite Machine Learning relies on Ignite's memory-centric storage that brings massive scalability for ML and DL tasks and eliminates the wait imposed by ETL between the different systems. For instance, it allows users to run ML/DL training and inference directly on data stored across memory and disk in an Ignite cluster. Next, Ignite provides a host of ML and DL algorithms that are optimized for Ignite's collocated distributed processing. These implementations deliver in-memory speed and unlimited horizontal scalability when running in place against massive data sets or incrementally against incoming data streams, without requiring the data to be moved into another store. By eliminating the data movement and the long processing wait times, Ignite Machine Learning enables continuous learning that can improve decisions based on the latest data as it arrives in real time.
+
+
+=== Fault Tolerance and Continuous Learning
+
+Apache Ignite Machine Learning is tolerant to node failures. This means that in the case of node failures during the learning process, all recovery procedures are transparent to the user: learning processes are not interrupted, and results are obtained in a time similar to the case when all nodes work correctly. For more information please see link:machine-learning/partition-based-dataset[Partition Based Dataset].
+
+
+== Algorithms and Applicability
+
+=== Classification
+
+Identifying to which category a new observation belongs, on the basis of a training set.
+
+*Applicability:* spam detection, image recognition, credit scoring, disease identification.
+
+*Algorithms:* link:machine-learning/binary-classification/logistic-regression[Logistic Regression], link:machine-learning/binary-classification/linear-svm[Linear SVM (Support Vector Machine)], link:machine-learning/binary-classification/knn-classification[k-NN Classification], link:machine-learning/binary-classification/naive-bayes[Naive Bayes], link:machine-learning/binary-classification/decision-trees[Decision Trees], link:machine-learning/binary-classification/random-forest[Random Forest], link:machine-learning/binary-classification/multilayer-perceptron[Multilayer perceptron], link:machine-learning/ensemble-methods/gradient-boosting[Gradient Boosting], link:machine-learning/binary-classification/ann[ANN (Approximate Nearest Neighbor)].
+
+
+=== Regression
+
+Modeling the relationship between a scalar dependent variable (y) and one or more explanatory variables or independent variables (x).
+
+
+*Applicability:* drug response, stock prices, supermarket revenue.
+
+*Algorithms:* Linear Regression, Decision Trees Regression, k-NN Regression.
+
+=== Clustering
+
+Grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters).
+
+*Applicability:* customer segmentation, grouping experiment outcomes, grouping of shopping items.
+
+*Algorithms:* K-Means Clustering, Gaussian mixture (GMM).
+
+=== Recommendation
+
+Building a recommendation system, which is a subclass of information filtering systems that seeks to predict the "rating" or "preference" a user would give to an item.
+
+*Applicability:* playlist generators for video and music services, product recommenders for online services.
+
+*Algorithms:* link:machine-learning/recommendation-systems[Matrix Factorization].
+
+=== Preprocessing
+
+Feature extraction and normalization.
+
+*Applicability:* transforming input data, such as text, for use with machine learning algorithms; extracting the features to fit on; normalizing input data.
+
+*Algorithms:* Apache Ignite ML supports custom preprocessing using partition based dataset capabilities and has default link:machine-learning/preprocessing[preprocessors] such as normalization preprocessor, one-hot-encoder, min-max scaler and so on.
+
+
+== Getting Started
+
+The fastest way to get started with Machine Learning is to build and run the existing examples, study their output, and keep coding. The ML examples are located in the https://github.com/apache/ignite/tree/master/examples/src/main/java/org/apache/ignite/examples/ml[examples] folder of every Apache Ignite distribution.
+
+Follow the steps below to try out the examples:
+
+. Download Apache Ignite version 2.8 or later.
+. Open the `examples` project in an IDE, such as IntelliJ IDEA or Eclipse.
+. Go to the `src/main/java/org/apache/ignite/examples/ml` folder in the IDE and run an ML example.
+
+The examples do not require any special configuration. All ML examples will launch, run and stop successfully without any user intervention and provide meaningful output on the console. Additionally, the Tracer API example will launch a web browser and generate HTML output.
+
+=== Get it With Maven
+
+Add the Maven dependency below to your project in order to include the ML functionality provided by Ignite:
+
+[source, xml]
+----
+<dependency>
+    <groupId>org.apache.ignite</groupId>
+    <artifactId>ignite-ml</artifactId>
+    <version>${ignite.version}</version>
+</dependency>
+
+----
+
+
+Replace `${ignite.version}` with an actual Ignite version.
+
+=== Build From Sources
+
+The latest Apache Ignite Machine Learning jar is always uploaded to the Maven repository. If you need to take the jar and deploy it in a custom environment, then it can be either downloaded from Maven or built from scratch. To build the Machine Learning component from sources:
+
+1. Download the latest Apache Ignite source release.
+2. Clean the local Maven repository (this is to ensure that older Maven builds don’t impact the build).
+3. Build and install Apache Ignite from the project's root directory:
++
+[source, shell]
+----
+mvn clean install -DskipTests -Dmaven.javadoc.skip=true
+----
+
+4. Locate the Machine Learning jar in your local Maven repository under the path `{user_dir}/.m2/repository/org/apache/ignite/ignite-ml/{ignite-version}/ignite-ml-{ignite-version}.jar`.
+
+5. If you want to build ML or DL examples from sources, execute the following commands:
++
+[source, shell]
+----
+cd examples
+mvn clean package -DskipTests
+----
+
+
+If needed, refer to `DEVNOTES.txt` in the project's root folder and the `README` files in the `ignite-ml` component for more details.
diff --git a/docs/_docs/machine-learning/model-selection/cross-validation.adoc b/docs/_docs/machine-learning/model-selection/cross-validation.adoc
new file mode 100644
index 0000000..8e64c68
--- /dev/null
+++ b/docs/_docs/machine-learning/model-selection/cross-validation.adoc
@@ -0,0 +1,90 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Cross-Validation
+
+Cross-validation functionality in Apache Ignite is represented by the `CrossValidation` class. This is a calculator parameterized by the model type, the label type, and the key-value types of the data. After instantiation (the constructor doesn't accept any additional parameters) we can use the score methods to perform cross-validation.
+
+Let's imagine that we have a trainer and a training set, and we want to perform cross-validation using accuracy as the metric and 4 folds. Apache Ignite allows us to do this as shown in the following example:
+
+
+== Cross-Validation (without Pipeline API usage)
+
+[source, java]
+----
+// Create classification trainer
+DecisionTreeClassificationTrainer trainer = new DecisionTreeClassificationTrainer(4, 0);
+
+// Create cross-validation instance
+CrossValidation<DecisionTreeNode, Integer, Vector> scoreCalculator
+  = new CrossValidation<>();
+
+// Set up the cross-validation process
+scoreCalculator
+    .withIgnite(ignite)
+    .withUpstreamCache(trainingSet)
+    .withTrainer(trainer)
+    .withMetric(MetricName.ACCURACY)
+    .withPreprocessor(vectorizer)
+    .withAmountOfFolds(4)
+    .isRunningOnPipeline(false);
+
+// Calculate accuracy for each fold
+double[] accuracyScores = scoreCalculator.scoreByFolds();
+----
+
+In this example we specify the trainer and the metric as parameters, then pass common training arguments such as a reference to the Ignite instance, the cache and the vectorizer, and finally specify the number of folds. The `scoreByFolds` method returns an array containing the chosen metric for every split of the training set.
+
+== Cross-Validation (with Pipeline API usage)
+
+Define a pipeline and pass it as a parameter to the `CrossValidation` instance to run cross-validation on the pipeline.
+
+CAUTION: The Pipeline API is experimental and could be changed in the next releases.
+
+
+[source, java]
+----
+// Create classification trainer
+DecisionTreeClassificationTrainer trainer = new DecisionTreeClassificationTrainer(4, 0);
+
+Pipeline<Integer, Vector, Integer, Double> pipeline
+  = new Pipeline<Integer, Vector, Integer, Double>()
+    .addVectorizer(vectorizer)
+    .addPreprocessingTrainer(new ImputerTrainer<Integer, Vector>())
+    .addPreprocessingTrainer(new MinMaxScalerTrainer<Integer, Vector>())
+    .addTrainer(trainer);
+
+
+// Create cross-validation instance
+CrossValidation<DecisionTreeNode, Integer, Vector> scoreCalculator
+  = new CrossValidation<>();
+
+// Set up the cross-validation process
+scoreCalculator
+    .withIgnite(ignite)
+    .withUpstreamCache(trainingSet)
+    .withPipeline(pipeline)
+    .withMetric(MetricName.ACCURACY)
+    .withPreprocessor(vectorizer)
+    .withAmountOfFolds(4)
+    .isRunningOnPipeline(true);
+
+// Calculate accuracy for each fold
+double[] accuracyScores = scoreCalculator.scoreByFolds();
+----
+
+
+== Example
+
+To see how the Cross Validation can be used in practice, try https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/selection/cv/CrossValidationExample.java[this example] and see step https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/tutorial/Step_8_CV_with_Param_Grid_and_pipeline.java[8 of ML Tutorial] that are available on GitHub and delivered with every Apache Ignite distribution.
diff --git a/docs/_docs/machine-learning/model-selection/evaluator.adoc b/docs/_docs/machine-learning/model-selection/evaluator.adoc
new file mode 100644
index 0000000..0660f41
--- /dev/null
+++ b/docs/_docs/machine-learning/model-selection/evaluator.adoc
@@ -0,0 +1,107 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Evaluator
+
+Apache Ignite ML comes with a number of machine learning algorithms that can be used to learn from and make predictions on data. When these algorithms are applied to build machine learning models, there is a need to evaluate the performance of the model on some criteria, which depends on the application and its requirements. Apache Ignite ML also provides a suite of classification and regression metrics for the purpose of evaluating the performance of machine learning models.
+
+== Classification model evaluation
+
+While there are many different types of classification algorithms, the evaluation of classification models all share similar principles. In a supervised classification problem, there exists a true output and a model-generated predicted output for each data point. For this reason, the results for each data point can be assigned to one of four categories:
+
+* True Positive (TP) - label is positive and prediction is also positive
+* True Negative (TN) - label is negative and prediction is also negative
+* False Positive (FP) - label is negative but prediction is positive
+* False Negative (FN) - label is positive but prediction is negative
+
+These metrics are especially important for binary classification.
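+For example, accuracy is computed as `(TP + TN) / (TP + TN + FP + FN)`.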
+
+CAUTION: Multiclass classification evaluation is not supported yet in Apache Ignite ML.
+
+The full list of binary classification metrics supported in Apache Ignite ML is as follows:
+
+* Accuracy
+* Balanced accuracy
+* F-Measure
+* FallOut
+* FN
+* FP
+* FDR
+* MissRate
+* NPV
+* Precision
+* Recall
+* Specificity
+* TN
+* TP
+
+The explanation and formulas for these metrics can be found https://en.wikipedia.org/wiki/Evaluation_of_binary_classifiers[here].
+
+
+[source, java]
+----
+// Define the vectorizer.
+Vectorizer<Integer, Vector, Integer, Double> vectorizer = new DummyVectorizer<Integer>()
+   .labeled(Vectorizer.LabelCoordinate.FIRST);
+
+// Define the trainer.
+SVMLinearClassificationTrainer trainer = new SVMLinearClassificationTrainer();
+
+// Train the model.
+SVMLinearClassificationModel mdl = trainer.fit(ignite, dataCache, vectorizer);
+
+// Calculate all classification metrics.
+EvaluationResult res = Evaluator
+  .evaluateBinaryClassification(dataCache, mdl, vectorizer);
+
+double accuracy = res.get(MetricName.ACCURACY);
+----
+
+
+== Regression model evaluation
+
+Regression analysis is used when predicting a continuous output variable from a number of independent variables.
+
+The full list of regression metrics supported in Apache Ignite ML is as follows:
+
+* MAE
+* R2
+* RMSE
+* RSS
+* MSE
+
+
+[source, java]
+----
+// Define the vectorizer.
+Vectorizer<Integer, Vector, Integer, Double> vectorizer = new DummyVectorizer<Integer>()
+   .labeled(Vectorizer.LabelCoordinate.FIRST);
+
+// Define the trainer.
+KNNRegressionTrainer trainer = new KNNRegressionTrainer()
+    .withK(5)
+    .withDistanceMeasure(new ManhattanDistance())
+    .withIdxType(SpatialIndexType.BALL_TREE)
+    .withWeighted(true);
+
+// Train the model.
+KNNRegressionModel knnMdl = trainer.fit(ignite, dataCache, vectorizer);
+
+// Calculate all regression metrics.
+EvaluationResult res = Evaluator
+  .evaluateRegression(dataCache, knnMdl, vectorizer);
+
+double mse = res.get(MetricName.MSE);
+----
+
diff --git a/docs/_docs/machine-learning/model-selection/hyper-parameter-tuning.adoc b/docs/_docs/machine-learning/model-selection/hyper-parameter-tuning.adoc
new file mode 100644
index 0000000..268a4cc
--- /dev/null
+++ b/docs/_docs/machine-learning/model-selection/hyper-parameter-tuning.adoc
@@ -0,0 +1,65 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Hyper-parameter tuning
+
+In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process. By contrast, the values of other parameters (typically node weights) are learned.
+
+In Apache Ignite ML you can tune a model by changing its hyper-parameters (both the preprocessor's and the trainer's hyper-parameters).
+
+The main object that holds all possible hyper-parameter values is the `ParamGrid` object.
+
+
+[source, java]
+----
+DecisionTreeClassificationTrainer trainerCV = new DecisionTreeClassificationTrainer();
+
+ParamGrid paramGrid = new ParamGrid()
+    .addHyperParam("maxDeep", trainerCV::withMaxDeep,
+                   new Double[] {1.0, 2.0, 3.0, 4.0, 5.0, 10.0})
+    .addHyperParam("minImpurityDecrease", trainerCV::withMinImpurityDecrease,
+                   new Double[] {0.0, 0.25, 0.5});
+----
+
+There are a few approaches to find the optimal set of hyper-parameters:
+
+* *BruteForce (GridSearch)* - the traditional way of performing hyperparameter optimization: an exhaustive search through a manually specified subset of the hyperparameter space of a learning algorithm.
+* *Random search* - replaces the exhaustive enumeration of all combinations by selecting combinations randomly.
+* *Evolutionary optimization* - a methodology for the global optimization of noisy black-box functions. In hyperparameter optimization, evolutionary optimization uses evolutionary algorithms to search the space of hyperparameters for a given algorithm.
+
+A `ParamGrid` with the random search strategy can be set up as follows:
+
+
+[source, java]
+----
+ParamGrid paramGrid = new ParamGrid()
+    .withParameterSearchStrategy(
+         new RandomStrategy()
+             .withMaxTries(10)
+             .withSeed(12L))
+    .addHyperParam("p", normalizationTrainer::withP,
+                   new Double[] {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0})
+    .addHyperParam("maxDeep", trainerCV::withMaxDeep,
+                   new Double[] {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0})
+    .addHyperParam("minImpurityDecrease", trainerCV::withMinImpurityDecrease,
+                   new Double[] {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 1.0});
+----
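+
+Once the grid is defined, the tuning itself is run through the cross-validation calculator (a sketch; it assumes the `CrossValidation` setup shown in the Cross-Validation and Pipelines API sections):
+
+[source, java]
+----
+CrossValidationResult crossValidationRes = scoreCalculator
+    .withParamGrid(paramGrid)
+    .tuneHyperParameters();
+
+// Prints the best set of hyper-parameters and the achieved scores.
+System.out.println(crossValidationRes);
+----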
+
+
+[TIP]
+====
+Performance Tip:
+
+The GridSearch (BruteForce) and Evolutionary optimization methods can be easily parallelized because all training runs are independent of each other.
+====
diff --git a/docs/_docs/machine-learning/model-selection/introduction.adoc b/docs/_docs/machine-learning/model-selection/introduction.adoc
new file mode 100644
index 0000000..c609101
--- /dev/null
+++ b/docs/_docs/machine-learning/model-selection/introduction.adoc
@@ -0,0 +1,32 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Introduction
+
+This section describes how to use Ignite ML for tuning ML algorithms and link:machine-learning/model-selection/pipeline-api[Pipelines]. Built-in cross-validation and other tooling allow users to optimize link:machine-learning/model-selection/hyper-parameter-tuning[hyper-parameters] in algorithms and pipelines.
+
+Model selection is a set of tools that provide the ability to prepare and link:machine-learning/model-selection/evaluator[evaluate] models efficiently. Use it to link:machine-learning/model-selection/split-the-dataset-on-test-and-train-datasets[split] data into training and test data as well as to perform cross-validation.
+
+
+== Overview
+
+It is not good practice to learn the parameters of a prediction function and validate it on the same data. This leads to overfitting. To avoid this problem, one of the most efficient solutions is to save part of the training data as a validation set. However, by partitioning the available data and excluding one or more parts from the training set, we significantly reduce the number of samples which can be used for learning the model and the results can depend on a particular random choice for the pair of (train, validation) sets.
+
+A solution to this problem is a procedure called link:machine-learning/model-selection/cross-validation[Cross-Validation]. In the basic approach, called k-fold CV, the training set is split into k smaller sets, after which the following procedure is used: a model is trained using k-1 of the folds (parts) as training data, and the resulting model is validated on the remaining part of the data (it is used as a test set to compute metrics such as accuracy).
+
+Apache Ignite provides cross-validation functionality that allows you to parameterize the trainer to be validated, the metrics to be calculated for the model trained on every step, and the number of folds the training data should be split into.
+
+
+
+
diff --git a/docs/_docs/machine-learning/model-selection/pipeline-api.adoc b/docs/_docs/machine-learning/model-selection/pipeline-api.adoc
new file mode 100644
index 0000000..7f0cb93
--- /dev/null
+++ b/docs/_docs/machine-learning/model-selection/pipeline-api.adoc
@@ -0,0 +1,125 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Pipelines API
+
+Apache Ignite ML standardizes APIs for machine learning algorithms to make it easier to combine multiple algorithms into a single pipeline, or workflow. This section covers the key concepts introduced by the Pipelines API, where the pipeline concept is mostly inspired by the scikit-learn and Apache Spark projects.
+
+* **Preprocessor Model** - An algorithm which can transform one DataSet into another DataSet.
+
+* **Preprocessor Trainer** - An algorithm which can be fit on a DataSet to produce a PreprocessorModel.
+
+* **Pipeline** - A Pipeline chains multiple Trainers and Preprocessors together to specify an ML workflow.
+
+* **Parameter** - All ML Trainers and Preprocessor Trainers now share a common API for specifying parameters.
+
+CAUTION: The Pipeline API is experimental and could be changed in the next releases.
+
+
+A Pipeline can replace sequences of `.fit()` method calls, as in the following examples:
+
+
+[tabs]
+--
+tab:Without Pipeline API[]
+
+[source, java]
+----
+final Vectorizer<Integer, Vector, Integer, Double> vectorizer = new DummyVectorizer<Integer>(0, 3, 4, 5, 6, 8, 10).labeled(1);
+
+TrainTestSplit<Integer, Vector> split = new TrainTestDatasetSplitter<Integer, Vector>()
+  .split(0.75);
+
+Preprocessor<Integer, Vector> imputingPreprocessor = new ImputerTrainer<Integer, Vector>()
+  .fit(ignite,
+       dataCache,
+       vectorizer
+      );
+
+Preprocessor<Integer, Vector> minMaxScalerPreprocessor = new MinMaxScalerTrainer<Integer, Vector>()
+  .fit(ignite,
+       dataCache,
+       imputingPreprocessor
+      );
+
+Preprocessor<Integer, Vector> normalizationPreprocessor = new NormalizationTrainer<Integer, Vector>()
+  .withP(1)
+  .fit(ignite,
+       dataCache,
+       minMaxScalerPreprocessor
+      );
+
+// Tune hyper-parameters with K-fold Cross-Validation on the split training set.
+
+DecisionTreeClassificationTrainer trainerCV = new DecisionTreeClassificationTrainer();
+
+CrossValidation<DecisionTreeNode, Integer, Vector> scoreCalculator = new CrossValidation<>();
+
+ParamGrid paramGrid = new ParamGrid()
+  .addHyperParam("maxDeep", trainerCV::withMaxDeep, new Double[] {1.0, 2.0, 3.0, 4.0, 5.0, 10.0})
+  .addHyperParam("minImpurityDecrease", trainerCV::withMinImpurityDecrease, new Double[] {0.0, 0.25, 0.5});
+
+scoreCalculator
+  .withIgnite(ignite)
+  .withUpstreamCache(dataCache)
+  .withTrainer(trainerCV)
+  .withMetric(MetricName.ACCURACY)
+  .withFilter(split.getTrainFilter())
+  .isRunningOnPipeline(false)
+  .withPreprocessor(normalizationPreprocessor)
+  .withAmountOfFolds(3)
+  .withParamGrid(paramGrid);
+
+CrossValidationResult crossValidationRes = scoreCalculator.tuneHyperParameters();
+----
+
+tab:With Pipeline API[]
+
+[source, java]
+----
+final Vectorizer<Integer, Vector, Integer, Double> vectorizer = new DummyVectorizer<Integer>(0, 4, 5, 6, 8).labeled(1);
+
+TrainTestSplit<Integer, Vector> split = new TrainTestDatasetSplitter<Integer, Vector>()
+  .split(0.75);
+
+DecisionTreeClassificationTrainer trainer = new DecisionTreeClassificationTrainer();
+
+Pipeline<Integer, Vector, Integer, Double> pipeline = new Pipeline<Integer, Vector, Integer, Double>()
+  .addVectorizer(vectorizer)
+  .addPreprocessingTrainer(new ImputerTrainer<Integer, Vector>())
+  .addPreprocessingTrainer(new MinMaxScalerTrainer<Integer, Vector>())
+  .addTrainer(trainer);
+
+CrossValidation<DecisionTreeNode, Integer, Vector> scoreCalculator = new CrossValidation<>();
+
+ParamGrid paramGrid = new ParamGrid()
+  .addHyperParam("maxDeep", trainer::withMaxDeep, new Double[] {1.0, 2.0, 3.0, 4.0, 5.0, 10.0})
+  .addHyperParam("minImpurityDecrease", trainer::withMinImpurityDecrease, new Double[] {0.0, 0.25, 0.5});
+
+scoreCalculator
+  .withIgnite(ignite)
+  .withUpstreamCache(dataCache)
+  .withPipeline(pipeline)
+  .withMetric(MetricName.ACCURACY)
+  .withFilter(split.getTrainFilter())
+  .withAmountOfFolds(3)
+  .withParamGrid(paramGrid);
+
+
+CrossValidationResult crossValidationRes = scoreCalculator.tuneHyperParameters();
+----
+--
+
+The full code can be found in the https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/tutorial/Step_8_CV_with_Param_Grid_and_pipeline.java[Titanic tutorial].
+
diff --git a/docs/_docs/machine-learning/model-selection/split-the-dataset-on-test-and-train-datasets.adoc b/docs/_docs/machine-learning/model-selection/split-the-dataset-on-test-and-train-datasets.adoc
new file mode 100644
index 0000000..a463cea
--- /dev/null
+++ b/docs/_docs/machine-learning/model-selection/split-the-dataset-on-test-and-train-datasets.adoc
@@ -0,0 +1,66 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Split the dataset into test and train datasets
+
+Data splitting is meant to split the data stored in a cache into two parts: the training part that is used to train the model, and the test part that is used to estimate the model quality.
+
+All `fit()` methods have a special parameter that allows passing a filter condition to the cache.
+
+[NOTE]
+====
+Due to the distributed and lazy nature of dataset operations, the dataset split is a lazy operation too. It can be defined as a filter condition that is applied to the initial cache to form both the train and test datasets.
+====
+
+In the example below the model is trained on only 75% of the initial dataset. The filter parameter value is the result of `split.getTrainFilter()`, which accepts or rejects each row of the initial dataset for use during training.
+
+
+[source, java]
+----
+// Define the cache.
+IgniteCache<Integer, Vector> dataCache = ...;
+
+// Define the percentage of the train sub-set of the initial dataset.
+TrainTestSplit<Integer, Vector> split = new TrainTestDatasetSplitter<>().split(0.75);
+
+IgniteModel<Vector, Double> mdl = trainer
+  .fit(ignite, dataCache, split.getTrainFilter(), vectorizer);
+----
+
+
+The `split.getTestFilter()` can be used to validate the model on the test data.
+Below is an example of working with the cache directly: it prints the predicted and real regression values for the test subset of the initial dataset.
+
+
+[source, java]
+----
+// Define the cache query and set the filter.
+ScanQuery<Integer, Vector> qry = new ScanQuery<>();
+qry.setFilter(split.getTestFilter());
+
+
+try (QueryCursor<Cache.Entry<Integer, Vector>> observations = dataCache.query(qry)) {
+    for (Cache.Entry<Integer, Vector> observation : observations) {
+         Vector val = observation.getValue();
+         Vector inputs = val.copyOfRange(1, val.size());
+         double groundTruth = val.get(0);
+
+         double prediction = mdl.predict(inputs);
+
+         System.out.printf(">>> | %.4f\t\t| %.4f\t\t|\n", prediction, groundTruth);
+    }
+}
+----
+
+
diff --git a/docs/_docs/machine-learning/multiclass-classification.adoc b/docs/_docs/machine-learning/multiclass-classification.adoc
new file mode 100644
index 0000000..c553d37
--- /dev/null
+++ b/docs/_docs/machine-learning/multiclass-classification.adoc
@@ -0,0 +1,55 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Multiclass Classification
+
+In machine learning, multiclass or multinomial classification is the problem of classifying instances into one of three or more classes.
+
+Currently, Apache Ignite ML supports the most popular method of multiclass classification, known as One-vs-Rest.
+
+The One-vs-Rest strategy involves training a single classifier per class, with the samples of that class as positive samples and all other samples as negatives.
+
+Internally it uses one dataset, with the labels changed for each trained classifier. If you have N classes, N classifiers will be trained and combined into a MultiClassModel.
+
+MultiClassModel uses a soft-margin technique to predict the real label: it returns the label of the class that best suits the predicted vector.
+
+
+== Example
+
+To see how the One-vs-Rest trainer, parameterized by a binary SVM classifier, can be used in practice, try this https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/multiclass/OneVsRestClassificationExample.java[example] that is available on GitHub and delivered with every Apache Ignite distribution.
+
+The preprocessed Glass dataset is from the https://archive.ics.uci.edu/ml/datasets/Glass+Identification[UCI Machine Learning Repository].
+
+There are 3 classes with labels: 1 (building_windows_float_processed), 3 (vehicle_windows_float_processed), 7 (headlamps) and feature names: 'Na-Sodium', 'Mg-Magnesium', 'Al-Aluminum', 'Ba-Barium', 'Fe-Iron'.
+
+
+[source, java]
+----
+OneVsRestTrainer<SVMLinearClassificationModel> trainer
+                    = new OneVsRestTrainer<>(new SVMLinearClassificationTrainer()
+                    .withAmountOfIterations(20)
+                    .withAmountOfLocIterations(50)
+                    .withLambda(0.2)
+                    .withSeed(1234L)
+                );
+
+MultiClassModel<SVMLinearClassificationModel> mdl = trainer.fit(
+                    ignite,
+                    dataCache,
+                    new DummyVectorizer<Integer>().labeled(0)
+                );
+
+double prediction = mdl.predict(inputVector);
+----
+
diff --git a/docs/_docs/machine-learning/partition-based-dataset.adoc b/docs/_docs/machine-learning/partition-based-dataset.adoc
new file mode 100644
index 0000000..79a3904
--- /dev/null
+++ b/docs/_docs/machine-learning/partition-based-dataset.adoc
@@ -0,0 +1,100 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Partition Based Dataset
+
+== Overview
+
+The partition-based dataset is an abstraction layer on top of the Apache Ignite storage and computational capabilities that allows us to build algorithms in accordance with the link:machine-learning/machine-learning#section-zero-etl-and-massive-scalability[zero ETL] and link:machine-learning/machine-learning#section-fault-tolerance-and-continuous-learning[fault tolerance] principles.
+
+The main idea behind partition-based datasets is the classic MapReduce approach, implemented using the Compute Grid in Ignite.
+
+The most important advantage of MapReduce is the ability to perform computations on data distributed across the cluster without involving significant data transfers over the network. This idea is adopted in the partition-based datasets in the following way:
+
+  * Every dataset is spread across partitions;
+  * Partitions hold a persistent *training context* and recoverable *training data* stored locally on every node;
+  * Computations that need to be performed on a dataset are split into *Map* operations, which execute on every partition, and *Reduce* operations, which reduce the results of the *Map* operations to one final result.
+
+**Training Context (Partition Context)** is a persistent part of the partition which is kept in Apache Ignite, so that all changes made in this part are consistently maintained until the partition-based dataset is closed. The training context survives node failures but requires additional time to read and write, so it should be used only when it's not possible to use partition data.
+
+**Training Data (Partition Data)** is the part of the partition that can be recovered from the upstream data and the context at any time. Because of this, it is not necessary to maintain partition data in persistent storage; partition data is kept on every node in local storage (on-heap, off-heap or even in GPU memory) and, in case of node failure, is recovered from the upstream data and context on another node.
+
+Why have partitions been selected as dataset and learning building blocks instead of cluster nodes?
+
+One of the fundamental ideas of Apache Ignite is that partitions are atomic, which means that they cannot be split between multiple nodes. As a result, in the case of rebalancing or node failure, a partition is recovered on another node with the same data it contained on the previous node.
+
+In the case of a machine learning algorithm, this is vital because most ML algorithms are iterative and require some context to be maintained between iterations. This context cannot be split or merged and should be maintained in a consistent state during the whole learning process.
+
+== Usage
+
+To build a partition-based dataset you need to specify:
+
+* Upstream Data Source which can be an Ignite Cache or just a Map with data;
+* Partition Context Builder that defines how to build a partition context from upstream data rows corresponding to this partition;
+* Partition Data Builder that defines how to build partition data from upstream data rows corresponding to this partition.
+
+
+.Cache-based Dataset
+[source, java]
+----
+Dataset<MyPartitionContext, MyPartitionData> dataset =
+    new CacheBasedDatasetBuilder<>(
+        ignite,                            // Upstream Data Source
+        upstreamCache
+    ).build(
+        new MyPartitionContextBuilder<>(), // Training Context Builder
+        new MyPartitionDataBuilder<>()     // Training Data Builder
+    );
+----
+
+
+.Local Dataset
+[source, java]
+----
+Dataset<MyPartitionContext, MyPartitionData> dataset =
+    new LocalDatasetBuilder<>(
+        upstreamMap,                       // Upstream Data Source
+        10
+    ).build(
+        new MyPartitionContextBuilder<>(), // Partition Context Builder
+        new MyPartitionDataBuilder<>()     // Partition Data Builder
+    );
+----
+
+After this, you can perform different computations on the dataset in a MapReduce manner.
+
+
+[source, java]
+----
+int numberOfRows = dataset.compute(
+    (partitionData, partitionIdx) -> partitionData.getRows(), // Map operation.
+    (a, b) -> a == null ? b : a + b                           // Reduce operation.
+);
+----
+
+And finally, when all computations are completed, it is important to close the dataset and free its resources.
+
+
+[source, java]
+----
+dataset.close();
+----
+
+== Example
+
+To see how the Partition Based Dataset can be used in practice, try this https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/dataset/AlgorithmSpecificDatasetExample.java[example] that is available on GitHub and delivered with every Apache Ignite distribution.
+
+
+
+
diff --git a/docs/_docs/machine-learning/preprocessing.adoc b/docs/_docs/machine-learning/preprocessing.adoc
new file mode 100644
index 0000000..6879d6e
--- /dev/null
+++ b/docs/_docs/machine-learning/preprocessing.adoc
@@ -0,0 +1,253 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Preprocessing
+
+Preprocessing is required to transform raw data stored in an Ignite cache to the dataset of feature vectors suitable for further use in a machine learning pipeline.
+
+This section covers algorithms for working with features, roughly divided into the following groups:
+
+  * Extracting features from “raw” data
+  * Scaling features
+  * Converting features
+  * Modifying features
+
+NOTE: Preprocessing usually starts with label and feature extraction via a vectorizer and can be extended with other preprocessing stages.
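+
+For instance, labels and features can be extracted with one of the built-in vectorizers. This is a minimal sketch (assuming the cached values are `Vector` instances whose first coordinate holds the label; `DummyVectorizer` simply takes all vector coordinates as features):
+
+[source, java]
+----
+// Use all coordinates of the stored vector as features and
+// treat the first coordinate as the label.
+Vectorizer<Integer, Vector, Integer, Double> vectorizer =
+    new DummyVectorizer<Integer>().labeled(Vectorizer.LabelCoordinate.FIRST);
+----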
+
+== Normalization preprocessor
+
+The normal flow is to extract features and labels from Ignite data via a vectorizer, transform the features, and then normalize them.
+
+In addition to the ability to build any custom preprocessor, Apache Ignite provides a built-in normalization preprocessor, which normalizes each vector using the p-norm.
+
+For normalization, you need to create a NormalizationTrainer and fit a normalization preprocessor as follows:
+
+
+[source, java]
+----
+// Train the preprocessor on the given data
+Preprocessor<Integer, Vector> preprocessor = new NormalizationTrainer<Integer, Vector>()
+  .withP(1)
+  .fit(ignite, data, vectorizer);
+
+// Create linear regression trainer.
+LinearRegressionLSQRTrainer trainer = new LinearRegressionLSQRTrainer();
+
+// Train model.
+LinearRegressionModel mdl = trainer.fit(
+    ignite,
+    upstreamCache,
+    preprocessor
+);
+----
+
+
+== Examples
+
+To see how the Normalization Preprocessor can be used in practice, try this https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/preprocessing/NormalizationExample.java[example] that is available on GitHub and delivered with every Apache Ignite distribution.
+
+== Binarization preprocessor
+
+Binarization is the process of thresholding numerical features to binary (0/1) features.
+Feature values greater than the threshold are binarized to 1.0; values equal to or less than the threshold are binarized to 0.0.
+
+It contains only one significant parameter, which is the threshold.
+
+
+[source, java]
+----
+// Create binarization trainer.
+BinarizationTrainer<Integer, Vector> binarizationTrainer
+    = new BinarizationTrainer<>().withThreshold(40);
+
+// Build the preprocessor.
+Preprocessor<Integer, Vector> preprocessor = binarizationTrainer
+    .fit(ignite, data, vectorizer);
+----
+
+To see how the Binarization Preprocessor can be used in practice, try this https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/preprocessing/BinarizationExample.java[example].
+
+
+== Imputer preprocessor
+
+
+The Imputer preprocessor completes missing values in a dataset, using either the mean or another statistic computed over the column in which the missing values are located. Missing values should be presented as `Double.NaN`, and the input dataset column should be of type `Double`. Currently, the Imputer preprocessor does not support categorical features and may create incorrect values for columns containing categorical features.
+
+During the training phase, the Imputer Trainer collects statistics about the preprocessing dataset and in the preprocessing phase it changes the data according to the collected statistics.
+
+The Imputer Trainer contains only one parameter, `imputingStgy`, presented as the *ImputingStrategy* enum with two available values (future releases may support more):
+
+  * MEAN: The default strategy. Replaces missing values with the mean of the numeric features along the axis.
+  * MOST_FREQUENT: Replaces missing values with the most frequent value along the axis.
+
+
+[source, java]
+----
+// Create imputer trainer.
+ImputerTrainer<Integer, Vector> imputerTrainer =
+    new ImputerTrainer<Integer, Vector>().withImputingStrategy(ImputingStrategy.MOST_FREQUENT);
+
+// Train imputer preprocessor.
+Preprocessor<Integer, Vector> preprocessor = imputerTrainer
+    .fit(ignite, data, vectorizer);
+----
+
+To see how the Imputer Preprocessor can be used in practice, try this https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/preprocessing/ImputingExample.java[example].
+
+== One-Hot Encoder preprocessor
+
+One-hot encoding maps a categorical feature, represented as a label index (Double or String value), to a binary vector with at most a single one-value indicating the presence of a specific feature value from among the set of all feature values.
+
+This preprocessor can transform multiple columns in which indices are handled during the training process. These indexes could be defined via a `withEncodedFeature(featureIndex)` call.
+
+[NOTE]
+====
+Each one-hot encoded binary vector adds its cells to the end of the current feature vector.
+
+  * This preprocessor always creates a separate column for NULL values.
+  * The index value associated with NULL will be located in a binary vector according to the frequency of NULL values.
+====
+
+`StringEncoderPreprocessor` and `OneHotEncoderPreprocessor` use the same `EncoderTrainer` to collect data about categorical features during the training phase. To preprocess the dataset with the One-Hot Encoder preprocessor, set the `encoderType` to `EncoderType.ONE_HOT_ENCODER` as shown in the code snippet below:
+
+
+[source, java]
+----
+Preprocessor<Integer, Object[]> encoderPreprocessor = new EncoderTrainer<Integer, Object[]>()
+   .withEncoderType(EncoderType.ONE_HOT_ENCODER)
+   .withEncodedFeature(0)
+   .withEncodedFeature(1)
+   .withEncodedFeature(4)
+   .fit(ignite,
+       dataCache,
+       vectorizer
+);
+----
+
+== String Encoder preprocessor
+
+The String Encoder encodes string values (categories) into double values in the range [0.0, amountOfCategories), where the most frequent value is encoded as 0.0 and the least frequent as amountOfCategories - 1.
+
+This preprocessor can transform multiple columns in which indices are handled during the training process. These indexes could be defined via a `withEncodedFeature(featureIndex)` call.
+
+NOTE: It doesn’t add a new column but changes data in-place.
+
+*Example*
+
+Assume that we have the following dataset with the `Id` and `Category` features:
+
+
+[cols="1,1",opts="header"]
+|===
+|Id| Category
+|0|   a
+|1|   b
+|2|   c
+|3|   a
+|4|   a
+|5|   c
+|===
+
+After encoding, the `Category` values are replaced by the assigned indices:
+
+[cols="1,1",opts="header"]
+|===
+|Id|  Category
+|0|   0.0
+|1|   2.0
+|2|   1.0
+|3|   0.0
+|4|   0.0
+|5|   1.0
+|===
+
+“a” gets index 0 because it is the most frequent, followed by “c” with index 1 and “b” with index 2.
+
+[NOTE]
+====
+There is only one strategy for handling unseen labels when you have to fit a StringEncoder on one dataset and then use it to transform another: unseen labels are put in a special additional bucket, at the index equal to `amountOfCategories`.
+====
+
+`StringEncoderPreprocessor` and `OneHotEncoderPreprocessor` use the same `EncoderTrainer` to collect data about categorical features during the training phase. To preprocess the dataset with the `StringEncoderPreprocessor`, set the `encoderType` to `EncoderType.STRING_ENCODER` as shown in the code snippet below:
+
+
+[source, java]
+----
+Preprocessor<Integer, Object[]> encoderPreprocessor
+  = new EncoderTrainer<Integer, Object[]>()
+   .withEncoderType(EncoderType.STRING_ENCODER)
+   .withEncodedFeature(1)
+   .withEncodedFeature(4)
+   .fit(ignite,
+       dataCache,
+       vectorizer
+);
+----
+
+
+To see how the String Encoder or One-Hot Encoder can be used in practice, try this https://github.com/apache/ignite/tree/master/examples/src/main/java/org/apache/ignite/examples/ml/preprocessing/encoding[example].
+
+
+== MinMax Scaler preprocessor
+
+The MinMax Scaler transforms the given dataset, rescaling each feature to a specific range.
+
+From a mathematical point of view, it is the following function which is applied to every element in the dataset:
+
+image::images/preprocessing.png[]
+
+for each column `i`, where `max_i` is the maximum value in the column and `min_i` is the minimum value in the column.
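+
+In plain notation, this is presumably the standard min-max transformation shown in the image above:
+
+[source, latex]
+----
+x'_i = \frac{x_i - \min_i}{\max_i - \min_i}
+----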
+
+
+[source, java]
+----
+// Create min-max scaler trainer.
+MinMaxScalerTrainer<Integer, Vector> trainer = new MinMaxScalerTrainer<>();
+
+// Build the preprocessor.
+Preprocessor<Integer, Vector> preprocessor = trainer
+    .fit(ignite, data, vectorizer);
+----
+
+`MinMaxScalerTrainer` computes summary statistics on a dataset and produces a `MinMaxScalerPreprocessor`.
+The preprocessor can then transform each feature individually so that it falls in the given range.
+
+To see how the `MinMaxScalerPreprocessor` can be used in practice, try https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/preprocessing/MinMaxScalerExample.java[this] tutorial example.
+
+
+== MaxAbsScaler Preprocessor
+
+The MaxAbsScaler transforms the given dataset, rescaling each feature to the range [-1, 1] by dividing through the maximum absolute value in each feature.
+
+NOTE: It does not shift or center the data, and thus does not destroy any sparsity.
+
+
+[source, java]
+----
+// Create max-abs trainer.
+MaxAbsScalerTrainer<Integer, Vector> trainer = new MaxAbsScalerTrainer<>();
+
+// Build the preprocessor.
+Preprocessor<Integer, Vector> preprocessor = trainer
+    .fit(ignite, data, vectorizer);
+----
+
+From a mathematical point of view, it is the following function, which is applied to every element in the dataset:
+
+image::images/preprocessing2.png[]
+
+for each column `i`, where `maxabs_i` is the maximum absolute value in the column.
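+
+In plain notation, this is presumably the standard maximum-absolute-value scaling shown in the image above:
+
+[source, latex]
+----
+x'_i = \frac{x_i}{\mathrm{maxabs}_i}
+----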
+
+`MaxAbsScalerTrainer` computes summary statistics on a dataset and produces a `MaxAbsScalerPreprocessor`.
+
+To see how the `MaxAbsScalerPreprocessor` can be used in practice, try https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/preprocessing/MaxAbsScalerExample.java[this] tutorial example.
diff --git a/docs/_docs/machine-learning/recommendation-systems.adoc b/docs/_docs/machine-learning/recommendation-systems.adoc
new file mode 100644
index 0000000..1ee3818
--- /dev/null
+++ b/docs/_docs/machine-learning/recommendation-systems.adoc
@@ -0,0 +1,71 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Recommendation Systems
+
+CAUTION: This is an experimental API that could be changed in the next releases.
+
+Collaborative filtering is commonly used for recommender systems. These techniques aim to fill in the missing entries of a user-item association matrix. Apache Ignite ML currently supports model-based collaborative filtering, in which users and products are described by a small set of latent factors that can be used to predict missing entries.
+
+The standard approach to matrix factorization-based collaborative filtering treats the entries in the user-item matrix as explicit preferences given by the user to the item, for example, users giving ratings to movies.
+
+The following is an example of a recommendation system based on the https://grouplens.org/datasets/movielens[MovieLens dataset]:
+
+
+
+[source, java]
+----
+IgniteCache<Integer, RatingPoint> movielensCache = loadMovieLensDataset(ignite, 10_000);
+
+RecommendationTrainer trainer = new RecommendationTrainer()
+  .withMaxIterations(-1)
+  .withMinMdlImprovement(10)
+  .withBatchSize(10)
+  .withLearningRate(10)
+  .withLearningEnvironmentBuilder(envBuilder)
+  .withTrainerEnvironment(envBuilder.buildForTrainer());
+
+RecommendationModel<Integer, Integer> mdl = trainer.fit(new CacheBasedDatasetBuilder<>(ignite, movielensCache));
+----
+
+CAUTION: The Evaluator does not support recommendation systems yet.
+
+The next example demonstrates how to calculate metrics over the given cache manually and locally on the client node:
+
+
+[source, java]
+----
+double mean = 0;
+
+try (QueryCursor<Cache.Entry<Integer, RatingPoint>> cursor = movielensCache.query(new ScanQuery<>())) {
+  for (Cache.Entry<Integer, RatingPoint> e : cursor) {
+    ObjectSubjectRatingTriplet<Integer, Integer> triplet = e.getValue();
+    mean += triplet.getRating();
+  }
+  mean /= movielensCache.size();
+}
+
+double tss = 0, rss = 0;
+
+try (QueryCursor<Cache.Entry<Integer, RatingPoint>> cursor = movielensCache.query(new ScanQuery<>())) {
+  for (Cache.Entry<Integer, RatingPoint> e : cursor) {
+    ObjectSubjectRatingTriplet<Integer, Integer> triplet = e.getValue();
+    tss += Math.pow(triplet.getRating() - mean, 2);
+    rss += Math.pow(triplet.getRating() - mdl.predict(triplet), 2);
+  }
+}
+
+double r2 = 1.0 - rss / tss;
+----
+
diff --git a/docs/_docs/machine-learning/regression/decision-trees-regression.adoc b/docs/_docs/machine-learning/regression/decision-trees-regression.adoc
new file mode 100644
index 0000000..48f9d5c
--- /dev/null
+++ b/docs/_docs/machine-learning/regression/decision-trees-regression.adoc
@@ -0,0 +1,75 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Decision Trees Regression
+
+Decision trees and their ensembles are popular methods for the machine learning tasks of classification and regression. Decision trees are widely used since they are easy to interpret, handle categorical features, extend to the multiclass classification setting, do not require feature scaling, and are able to capture non-linearities and feature interactions. Tree ensemble algorithms such as random forests and boosting are among the top performers for classification and regression tasks.
+
+== Overview
+
+Decision trees are a simple yet powerful model in supervised machine learning. The main idea is to split a feature space into regions such that the value in each region varies little. The measure of the values' variation in a region is called the `impurity` of the region.
+
+Apache Ignite provides an implementation of the algorithm optimized for data stored in rows (see link:machine-learning/partition-based-dataset[partition-based dataset]).
+
+Splits are done recursively and every region created from a split can be split further. Therefore, the whole process can be described by a binary tree, where each node is a particular region and its children are the regions derived from it by another split.
+
+Let each sample from a training set belong to some space `S` and let `p_i` be the projection on the feature with index `i`; then a split by a continuous feature with index `i` has the form:
+
+
+image::images/555.gif[]
+
+and a split by a categorical feature with values from some set `X` has the form:
+
+image::images/666.gif[]
+
+Here `X_0` is a subset of `X`.
+
+The model works as follows: the split process stops when either the algorithm has reached the configured maximum depth, or splitting of any region no longer results in a significant impurity loss. The prediction of a value for a point `s` from `S` is a traversal of the tree down to the node that corresponds to the region containing `s`, returning the value associated with that leaf.
+
+== Model
+
+The model in decision tree regression is represented by the class `DecisionTreeNode`. We can make a prediction for a given vector of features in the following way:
+
+
+[source, java]
+----
+DecisionTreeNode mdl = ...;
+
+double prediction = mdl.predict(observation);
+----
+
+The model is a fully independent object, and after training it can be saved, serialized, and restored.
+
+== Trainer
+
+A Decision Tree algorithm can be used for classification and regression depending upon the impurity measure and node instantiation approach.
+
+The Regression Decision Tree uses the https://en.wikipedia.org/wiki/Mean_squared_error[MSE^] impurity measure and you can use it in the following way:
+
+
+[source, java]
+----
+// Create decision tree regression trainer.
+DecisionTreeRegressionTrainer trainer = new DecisionTreeRegressionTrainer(
+    4, // Max depth.
+    0  // Min impurity decrease.
+);
+
+// Train model.
+DecisionTreeNode mdl = trainer.fit(ignite, dataCache, vectorizer);
+----
+
+== Examples
+
+To see how the Decision Tree can be used in practice, try this https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/tree/DecisionTreeRegressionTrainerExample.java[regression example^] that is available on GitHub and delivered with every Apache Ignite distribution.
diff --git a/docs/_docs/machine-learning/regression/introduction.adoc b/docs/_docs/machine-learning/regression/introduction.adoc
new file mode 100644
index 0000000..490a17e
--- /dev/null
+++ b/docs/_docs/machine-learning/regression/introduction.adoc
@@ -0,0 +1,23 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Introduction
+
+Regression is an ML algorithm that can be trained to predict real-numbered outputs, such as temperature, stock price, etc. Regression is based on a hypothesis that can be linear, quadratic, polynomial, non-linear, etc. The hypothesis is a function of some hidden parameters and the input values.
+
+All existing training algorithms presented in this section are designed to solve regression tasks:
+
+* Linear Regression
+* Decision Trees Regression
+* k-NN Regression
diff --git a/docs/_docs/machine-learning/regression/knn-regression.adoc b/docs/_docs/machine-learning/regression/knn-regression.adoc
new file mode 100644
index 0000000..94349cc
--- /dev/null
+++ b/docs/_docs/machine-learning/regression/knn-regression.adoc
@@ -0,0 +1,63 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= k-NN Regression
+
+The Apache Ignite Machine Learning component provides two versions of the widely used k-NN (k-nearest neighbors) algorithm - one for classification tasks and the other for regression tasks.
+
+This documentation reviews k-NN as a solution for regression tasks.
+
+== Trainer and Model
+
+The k-NN regression algorithm is a non-parametric method whose input consists of the k closest training examples in the feature space. Each training example has a numerical property value associated with it.
+
+The k-NN regression algorithm uses the entire training set to predict a property value for the given test sample.
+The predicted property value is an average of the values of the k nearest neighbors. If `k` is `1`, then the test sample is simply assigned the property value of its single nearest neighbor.
+
+Presently, Ignite supports the following parameters for the k-NN regression algorithm:
+
+* `k` - the number of nearest neighbors
+* `distanceMeasure` - one of the distance metrics provided by the ML framework, such as Euclidean, Hamming, or Manhattan
+* `isWeighted` - `false` by default; if `true`, enables a weighted k-NN algorithm
+* `dataCache` - holds a training set of objects for which the class is already known
+* `indexType` - distributed spatial index; has three values: `ARRAY`, `KD_TREE`, `BALL_TREE`
+
+
+[source, java]
+----
+// Create trainer
+KNNRegressionTrainer trainer = new KNNRegressionTrainer()
+  .withK(5)
+  .withIdxType(SpatialIndexType.BALL_TREE)
+  .withDistanceMeasure(new ManhattanDistance())
+  .withWeighted(true);
+
+// Train model.
+KNNRegressionModel knnMdl = trainer.fit(
+  ignite,
+  dataCache,
+  vectorizer
+);
+
+// Make a prediction.
+double prediction = knnMdl.predict(observation);
+----
+
+
+== Example
+
+
+To see how kNN Regression can be used in practice, try this https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/knn/KNNRegressionExample.java[example^] that is available on GitHub and delivered with every Apache Ignite distribution.
+
+The training dataset is the Iris dataset which can be loaded from the https://archive.ics.uci.edu/ml/datasets/iris[UCI Machine Learning Repository^].
diff --git a/docs/_docs/machine-learning/regression/linear-regression.adoc b/docs/_docs/machine-learning/regression/linear-regression.adoc
new file mode 100644
index 0000000..4afa98d
--- /dev/null
+++ b/docs/_docs/machine-learning/regression/linear-regression.adoc
@@ -0,0 +1,99 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Linear Regression
+
+== Overview
+
+Apache Ignite supports the ordinary least squares Linear Regression algorithm - one of the most basic and powerful machine learning algorithms. This documentation describes how the algorithm works and how it is implemented in Apache Ignite.
+
+The basic idea behind the Linear Regression algorithm is an assumption that a dependent variable `y` and an explanatory variable `x` are in the following relationship:
+
+image::images/111.gif[]
+
+
+WARNING: Be aware that the documentation below uses the dot product of vectors `x` and `b` and explicitly avoids a constant term. This is mathematically correct when the vector `x` is supplemented with one value equal to 1.
+
+The above assumption allows us to make a prediction based on a feature vector `x` if a vector `b` is known. This fact is reflected in Apache Ignite in the `LinearRegressionModel` class responsible for making predictions.
+
+
+== Model
+
+A Model in the case of linear regression is represented by the class `LinearRegressionModel`. It enables a prediction to be made for a given vector of features, in the following way:
+
+
+[source, java]
+----
+LinearRegressionModel model = ...;
+
+double prediction = model.predict(observation);
+----
+
+The model is a fully independent object, and after training it can be saved, serialized, and restored.
+
+== Trainers
+
+Linear Regression is a supervised learning algorithm. This means that to find parameters (vector `b`), we need to train on a training dataset and minimize the loss function:
+
+image::images/222.gif[]
+
+Apache Ignite provides two linear regression trainers: one based on the LSQR algorithm and another based on the Stochastic Gradient Descent method.
+
+=== LSQR Trainer
+
+The LSQR algorithm finds the least-squares solution to a large, sparse, linear system of equations. The Apache Ignite implementation is a distributed version of this algorithm.
+
+
+[source, java]
+----
+// Create linear regression trainer.
+LinearRegressionLSQRTrainer trainer = new LinearRegressionLSQRTrainer();
+
+// Train model.
+LinearRegressionModel mdl = trainer.fit(ignite, dataCache, vectorizer);
+
+// Make a prediction.
+double prediction = mdl.predict(coordinates);
+----
+
+
+=== SGD Trainer
+
+Another Linear Regression Trainer uses the stochastic gradient descent method to find a minimum of the loss function. The configuration of this trainer is similar to the link:machine-learning/binary-classification/multilayer-perceptron[multilayer perceptron trainer] configuration, and we can specify the type of updater (`SGD`, `RProp`, or `Nesterov`), the maximum number of iterations, the batch size, the number of local iterations, and the seed.
+
+[source, java]
+----
+// Create linear regression trainer.
+LinearRegressionSGDTrainer<?> trainer = new LinearRegressionSGDTrainer<>(
+    new UpdatesStrategy<>(
+        new RPropUpdateCalculator(),
+        RPropParameterUpdate::sumLocal,
+        RPropParameterUpdate::avg
+    ),
+    100000,  // Max iterations.
+    10,      // Batch size.
+    100,     // Local iterations.
+    123L     // Random seed.
+);
+
+// Train model.
+LinearRegressionModel mdl = trainer.fit(ignite, dataCache, vectorizer);
+
+// Make a prediction.
+double prediction = mdl.predict(coordinates);
+----
+
+== Examples
+
+To see how the Linear Regression can be used in practice, try these https://github.com/apache/ignite/tree/master/examples/src/main/java/org/apache/ignite/examples/ml/regression/linear[examples] that are available on GitHub and delivered with every Apache Ignite distribution.
diff --git a/docs/_docs/machine-learning/updating-trained-models.adoc b/docs/_docs/machine-learning/updating-trained-models.adoc
new file mode 100644
index 0000000..a030f60
--- /dev/null
+++ b/docs/_docs/machine-learning/updating-trained-models.adoc
@@ -0,0 +1,77 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Updating Trained Models
+
+The model updating interface in Ignite ML supports retraining an already trained model on a new portion of data, using the state of the previously trained model. The interface is represented in the `DatasetTrainer` class and mirrors the training interface, with the already learned model as the first parameter:
+
+* `M update(M mdl, DatasetBuilder<K, V> datasetBuilder, IgniteBiFunction<K, V, Vector> featureExtractor, IgniteBiFunction<K, V, L> lbExtractor)`
+* `M update(M mdl, Ignite ignite, IgniteCache<K, V> cache, IgniteBiFunction<K, V, Vector> featureExtractor, IgniteBiFunction<K, V, L> lbExtractor)`
+* `M update(M mdl, Ignite ignite, IgniteCache<K, V> cache, IgniteBiPredicate<K, V> filter, IgniteBiFunction<K, V, Vector> featureExtractor, IgniteBiFunction<K, V, L> lbExtractor)`
+* `M update(M mdl, Map<K, V> data, int parts, IgniteBiFunction<K, V, Vector> featureExtractor, IgniteBiFunction<K, V, L> lbExtractor)`
+* `M update(M mdl, Map<K, V> data, IgniteBiPredicate<K, V> filter, int parts, IgniteBiFunction<K, V, Vector> featureExtractor, IgniteBiFunction<K, V, L> lbExtractor)`
+
+The interface supports online learning and online batch learning. Online learning means that you can train a model and, when you get a new training example (such as a click on a website), update the model as if it had been trained on this example too. Batch online learning requires a batch of examples instead of a single training example for a model update. Some models allow both update strategies and some allow only batch updating; it depends upon the learning algorithm. Further details of model update capabilities in terms of online and batch online learning can be found below.
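+
+For example, a cache-based update might look like the following minimal sketch (it mirrors the second overload listed above; the K-Means parameters and the extractor and cache names are illustrative):
+
+[source, java]
+----
+// Initial training on the first portion of data.
+KMeansTrainer trainer = new KMeansTrainer().withAmountOfClusters(2);
+
+KMeansModel mdl = trainer.fit(ignite, dataCache, featureExtractor, lbExtractor);
+
+// Later, when a new portion of data has been loaded into the cache,
+// update the model instead of retraining it from scratch.
+mdl = trainer.update(mdl, ignite, newDataCache, featureExtractor, lbExtractor);
+----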
+
+[NOTE]
+====
+The new portion of data should be compatible, in terms of feature vector size and feature value distributions, with the parameters of the first trainer and the dataset used for previous training. For example, if you train an ANN model, you should provide the trainer with the same distance measure and candidates parameter count as at the first learning stage. If you update k-means, the new dataset should contain at least k rows.
+====
+
+Each model has a special implementation of this interface. Read the next section to get more information about the updating process for each algorithm.
+
+
+== KMeans
+
+Model updating takes the already learned centroids and updates them with new rows. We recommend using batch online learning for this model: first, the dataset should be at least as large as the k-value; second, a dataset with a small number of rows can move centroids to invalid positions.
+
+== KNN
+
+Model updating just adds a new dataset to the old dataset. In this case, model updating isn’t restricted.
+
+== ANN
+
+As in the case of KNN, the new trainer should provide the same distance measure and k-value. These parameters are important because internally ANN uses KMeans and statistics over the centroids provided by KMeans. During an update, the trainer gets the statistics over centroids from the last learning stage and updates them with new observations. From this point of view, ANN allows "mini-batch" online learning where the batch size is equal to the k-parameter.
+
+== Neural Network (NN)
+
+NN updating gets the current neural network state and updates it according to the gradient of the error on the new dataset. In this case, the NN requires only feature vector compatibility between datasets.
+
+== Logistic Regression
+
+Logistic regression inherits all restrictions from the neural network trainer because it uses perceptron internally.
+
+== Linear Regression
+
+The LinearRegressionSGD trainer inherits all restrictions from the neural network trainer. LinearRegressionLSQRTrainer restores the state from the last learning stage and uses it as the first approximation when learning on the new dataset. In this way, LinearRegressionLSQRTrainer also requires only feature vector compatibility.
+
+== SVM
+
+The SVM trainer uses the state of the learned model as the first approximation during the training process. From this point of view, the algorithm requires only feature vector compatibility.
+
+== Decision Tree
+
+There is no correct implementation of decision tree updating; updating simply learns a new model on the given dataset.
+
+== GDB
+
+GDB trainer updating takes the already learned models from the composition and tries to minimize the error gradient on the given dataset by learning new models that predict the gradient. It also uses a convergence checker: if the error on the new dataset is not large, GDB skips the update stage. From this point of view, GDB requires only feature vector compatibility.
+
+NOTE: Every update can increase the model composition size. All models depend upon each other. So, frequent updating based upon small datasets can produce an enormous model that requires a lot of memory.
+
+== Random Forest (RF)
+
+The RF trainer learns new decision trees on the given dataset and adds them to the already learned composition. In this way, RF requires feature vector compatibility, and the dataset should contain more than one element because a decision tree cannot be trained on a smaller dataset. In contrast to GDB, the models in a trained RF composition do not depend upon each other, so if the composition grows too big, a user can manually remove some models.
diff --git a/docs/_docs/memory-architecture.adoc b/docs/_docs/memory-architecture.adoc
new file mode 100644
index 0000000..4f8fe42
--- /dev/null
+++ b/docs/_docs/memory-architecture.adoc
@@ -0,0 +1,93 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Memory Architecture
+
+== Overview
+
+Ignite memory architecture allows storing and processing data and indexes both in memory and on disk, and helps achieve in-memory performance with the durability of disk.
+
+image::images/durable-memory-overview.png[Memory architecture]
+
+The multi-tiered storage operates in a way similar to the virtual memory of operating systems, such as Linux.
+However, one significant difference between these two types of architecture is that the multi-tiered storage always treats the disk as the superset of the data (if persistence is enabled), capable of surviving crashes and restarts, while the traditional virtual memory uses the disk only as a swap extension, which gets erased once the process stops.
+
+== Memory Architecture
+
+The multi-tiered storage uses a page-based memory architecture in which memory is split into pages of fixed size. The pages are stored in _managed off-heap regions_ in RAM (outside of the Java heap) and are organized in a special hierarchy on disk.
+
+Ignite maintains the same binary data representation both in memory and on disk. This removes the need for costly serialization when moving data between memory and disk.
+
+The picture below illustrates the architecture of the multi-tiered storage.
+
+image::images/durable-memory-diagram.png[height=700px]
+
+=== Memory Segments
+
+Every data region starts with an initial size and has a maximum size it can grow to. The region expands to its maximum size by allocating continuous memory segments.
+
+A memory segment is a contiguous byte array of physical memory allocated from the operating system. The array is divided into pages of fixed size. There are several types of pages that can reside in the segment, as shown in the picture below.
+
+image::images/memory-segment.png["Memory Segment"]
+
+=== Data Pages
+
+A data page stores entries you put into caches from the application side.
+
+Usually, a single data page holds multiple key-value entries in order to use the memory as efficiently as possible and avoid memory fragmentation.
+When a new entry is added to a cache, Ignite looks for an optimal page that can fit the whole key-value entry.
+
+However, if an entry's total size exceeds the page size configured via the `DataStorageConfiguration.setPageSize(..)` property, then the entry occupies more than one data page.
+
+[NOTE]
+====
+If you have many cache entries that do not fit in a single page, then it makes sense to increase the page size configuration parameter.
+====
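+
+For example, a larger page size can be set as follows (a minimal sketch; the page size must be a power of 2 between 1 KB and 16 KB):
+
+[source, java]
+----
+DataStorageConfiguration storageCfg = new DataStorageConfiguration();
+
+// Increase the page size to 8 KB (the default is 4 KB).
+storageCfg.setPageSize(8 * 1024);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setDataStorageConfiguration(storageCfg);
+----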
+
+If during an update an entry size expands beyond the free space available in its data page, then Ignite searches for a new data page that has enough room to take the updated entry and moves the entry there.
+
+
+=== Memory Defragmentation
+
+Ignite performs memory defragmentation automatically and does not require any explicit action from a user.
+
+Over time, an individual data page might be updated multiple times by different CRUD operations.
+This can lead to the page and overall memory fragmentation.
+To minimize memory fragmentation, Ignite uses _page compaction_ whenever a page becomes too fragmented.
+
+A compacted data page looks like the one in the picture below:
+
+image:images/defragmented.png[]
+
+The page has a header that stores information needed for internal usage. All key-value entries are always added from right to left. In the picture, there are three entries (1, 2, and 3) stored in the page. These entries might have different sizes.
+
+The offsets (or references) to the entries' locations inside the page are stored left-to-right and are always of fixed size. The offsets are used as pointers to look up the key-value entries in a page.
+
+The space in the middle is a free space and is filled in whenever more data is pushed into the cluster.
+
+Next, let's assume that over time entry 2 was removed, which resulted in a non-continuous free space in the page:
+
+image:images/fragmented.png[]
+
+
+This is what a fragmented page looks like.
+
+However, when all the free space available in the page is needed, or some fragmentation threshold is reached, the compaction process defragments the page, turning it into the state shown in the first picture above, where the free space is continuous. This process is automatic and doesn't require any action from the user.
+
+== Persistence
+
+Ignite provides a number of features that let you persist your data on disk with consistency guarantees.
+You can restart the cluster without losing the data, be resilient to crashes, and provide storage for data when the amount of RAM is not sufficient. When native persistence is enabled, Ignite always stores all the data on disk and loads as much data as it can into RAM for processing. Refer to the link:persistence/native-persistence[Ignite Persistence] section for further information.
+
diff --git a/docs/_docs/memory-configuration/data-regions.adoc b/docs/_docs/memory-configuration/data-regions.adoc
new file mode 100644
index 0000000..f2646ad
--- /dev/null
+++ b/docs/_docs/memory-configuration/data-regions.adoc
@@ -0,0 +1,84 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Configuring Data Regions
+
+== Overview
+Ignite uses the concept of _data regions_ to control the amount of RAM available to a cache or a group of caches. A data region is a logical extendable area in RAM in which cached data resides. You can control the initial size of the region and the maximum size it can occupy. In addition to the size, data regions control link:persistence/native-persistence[persistence settings] for caches.
+
+By default, there is one data region that can take up to 20% of RAM available to the node, and all caches you create are placed in that region; but you can add as many regions as you want. There are a couple of reasons why you may want to have multiple regions:
+
+* Regions allow you to configure the amount of RAM available to a cache or a group of caches.
+* Persistence parameters are configured per region. If you want to have both in-memory-only caches and caches that persist their content to disk, you need to configure two (or more) data regions with different persistence settings: one for in-memory caches and one for persistent caches.
+* Some memory parameters, such as link:memory-configuration/eviction-policies[eviction policies], are configured per data region.
+
+See the following section to learn how to change the parameters of the default data region or configure multiple data regions.
+
+== Configuring Default Data Region
+
+By default, a new cache is added to the default data region. If you want to change the properties of the default data region, you can do so in the data storage configuration.
+
+
+:xmlFile: code-snippets/xml/data-regions-configuration.xml
+:javaFile: {javaCodeDir}/DataRegionConfigurationExample.java
+
+[tabs]
+--
+tab:XML[]
+
+[source,xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;default;!discovery,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tags=!*;ignite-config;default,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/MemoryArchitecture.cs[tag=DefaultDataReqion,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Adding Custom Data Regions
+
+In addition to the default data region, you can add more data regions with custom settings.
+In the following example, we configure a data region that can take up to 40 MB and uses the link:memory-configuration/eviction-policies#random-2-lru[Random-2-LRU] eviction policy.
+Note that further below in the configuration, we create a cache that resides in the new data region.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;data-region;default;caches;!discovery,indent=0]
+----
+
+For the full list of properties, refer to the link:{javadoc_base_url}/org/apache/ignite/configuration/DataStorageConfiguration.html[DataStorageConfiguration] javadoc.
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tags=ignite-config,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/MemoryArchitecture.cs[tag=mem,indent=0]
+----
+tab:C++[unsupported]
+--
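+
+For reference, the essential steps of this configuration can also be sketched programmatically (a minimal sketch mirroring the snippets included above; the region and cache names are illustrative):
+
+[source, java]
+----
+// Define a custom data region of up to 40 MB with the Random-2-LRU eviction policy.
+DataRegionConfiguration regionCfg = new DataRegionConfiguration()
+    .setName("40MB_Region")
+    .setInitialSize(20L * 1024 * 1024)
+    .setMaxSize(40L * 1024 * 1024)
+    .setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);
+
+DataStorageConfiguration storageCfg = new DataStorageConfiguration();
+storageCfg.setDataRegionConfigurations(regionCfg);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setDataStorageConfiguration(storageCfg);
+
+// Place a cache into the new data region.
+CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");
+cacheCfg.setDataRegionName("40MB_Region");
+cfg.setCacheConfiguration(cacheCfg);
+----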
+
diff --git a/docs/_docs/memory-configuration/eviction-policies.adoc b/docs/_docs/memory-configuration/eviction-policies.adoc
new file mode 100644
index 0000000..38921ef
--- /dev/null
+++ b/docs/_docs/memory-configuration/eviction-policies.adoc
@@ -0,0 +1,177 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Eviction Policies
+
+When link:persistence/native-persistence[Native Persistence] is off, Ignite holds all cache entries in the off-heap memory and allocates pages as new data comes in.
+When a memory limit is reached and Ignite cannot allocate a page, some of the data must be purged from memory to avoid OutOfMemory errors.
+This process is called _eviction_. Eviction prevents the system from running out of memory but at the cost of losing data and having to reload it when you need it again.
+
+Eviction is used in the following cases:
+
+* for off-heap memory when link:persistence/native-persistence[Native Persistence] is off;
+* for off-heap memory when Ignite is used with an link:persistence/external-storage[external storage];
+* for link:configuring-caches/on-heap-caching[on-heap caches];
+* for link:configuring-caches/near-cache[near caches] if configured.
+
+When Native Persistence is on, a similar process — called _page replacement_ — is used to free up off-heap memory when Ignite cannot allocate a new page.
+The difference is that the data is not lost (because it is stored in the persistent storage), and therefore you are less concerned about losing data than about efficiency.
+Page replacement is automatically handled by Ignite and is not user-configurable.
+
+== Off-Heap Memory Eviction
+
+Off-heap memory eviction is implemented as follows.
+
+When memory usage exceeds the preset limit, Ignite applies one of the preconfigured algorithms to select a memory page that is most suitable for eviction.
+Then, each cache entry contained in the page is removed from the page.
+However, if an entry is locked by a transaction, it is retained.
+Thus, either the entire page or a large chunk of it is emptied and is ready to be reused.
+
+image::images/off_heap_memory_eviction.png[Off-Heap Memory Eviction Mechanism]
+
+By default, off-heap memory eviction is disabled, which means that the used memory constantly grows until it reaches its limit.
+To enable eviction, specify the page eviction mode in the link:memory-configuration/data-regions/[data region configuration].
+Note that off-heap memory eviction is configured per link:memory-configuration/data-regions[data region].
+If you don't use data regions, you have to explicitly add default data region parameters in your configuration to be able to configure eviction.
+
+By default, eviction starts when the overall RAM consumption by a region gets to 90%.
+Use the `DataRegionConfiguration.setEvictionThreshold(...)` parameter if you need to initiate eviction earlier or later.
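+
+For example (a minimal sketch; the region name is illustrative, and the threshold defaults to 0.9):
+
+[source, java]
+----
+DataRegionConfiguration regionCfg = new DataRegionConfiguration()
+    .setName("20GB_Region")
+    .setMaxSize(20L * 1024 * 1024 * 1024)
+    .setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU)
+    // Start evicting pages when 80% of the region is filled.
+    .setEvictionThreshold(0.8);
+----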
+
+Ignite supports two page selection algorithms:
+
+* Random-LRU
+* Random-2-LRU
+
+The differences between the two are explained below.
+
+=== Random-LRU
+
+To enable the Random-LRU eviction algorithm, configure the data region as shown below:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+  <!-- Memory configuration. -->
+  <property name="dataStorageConfiguration">
+    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
+      <property name="dataRegionConfigurations">
+        <list>
+          <!--
+              Defining a data region that consumes up to 20 GB of RAM.
+          -->
+          <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
+            <!-- Custom region name. -->
+            <property name="name" value="20GB_Region"/>
+
+            <!-- 500 MB initial size (RAM). -->
+            <property name="initialSize" value="#{500L * 1024 * 1024}"/>
+
+            <!-- 20 GB maximum size (RAM). -->
+            <property name="maxSize" value="#{20L * 1024 * 1024 * 1024}"/>
+
+            <!-- Enabling RANDOM_LRU eviction for this region.  -->
+            <property name="pageEvictionMode" value="RANDOM_LRU"/>
+          </bean>
+        </list>
+      </property>
+    </bean>
+  </property>
+
+  <!-- The rest of the configuration. -->
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/EvictionPolicies.java[tag=randomLRU,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/EvictionPolicies.cs[tag=randomLRU,indent=0]
+----
+tab:C++[unsupported]
+--
+
+The Random-LRU algorithm works as follows:
+
+* Once a memory region defined by a memory policy is configured, an off-heap array is allocated to track the 'last usage' timestamp for every individual data page.
+* When a data page is accessed, its timestamp gets updated in the tracking array.
+* When it is time to evict a page, the algorithm randomly chooses 5 indexes from the tracking array and evicts the page with the oldest timestamp. If some of the indexes point to non-data pages (index or system pages), then the algorithm picks another page.
+
+=== Random-2-LRU
+
+To enable Random-2-LRU eviction algorithm, which is a scan-resistant version of Random-LRU, configure the data region, as shown in the example below:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+  <!-- Memory configuration. -->
+  <property name="dataStorageConfiguration">
+    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
+      <property name="dataRegionConfigurations">
+        <list>
+          <!--
+              Defining a data region that consumes up to 20 GB of RAM.
+          -->
+          <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
+            <!-- Custom region name. -->
+            <property name="name" value="20GB_Region"/>
+
+            <!-- 500 MB initial size (RAM). -->
+            <property name="initialSize" value="#{500L * 1024 * 1024}"/>
+
+            <!-- 20 GB maximum size (RAM). -->
+            <property name="maxSize" value="#{20L * 1024 * 1024 * 1024}"/>
+
+            <!-- Enabling RANDOM_2_LRU eviction for this region.  -->
+            <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
+          </bean>
+        </list>
+      </property>
+    </bean>
+  </property>
+
+  <!-- The rest of the configuration. -->
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/EvictionPolicies.java[tag=random2LRU,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/EvictionPolicies.cs[tag=random2LRU,indent=0]
+----
+tab:C++[unsupported]
+--
+
+In Random-2-LRU, the two most recent access timestamps are stored for every data page. At eviction time, the algorithm randomly chooses 5 indexes from the tracking array; for each candidate page, the older of its two stored timestamps is taken, and the page with the smallest such value is evicted.
+
+Random-2-LRU outperforms Random-LRU by resolving the "one-hit wonder" problem: under Random-LRU, a data page that is accessed rarely but happens to be accessed once is protected from eviction for a long time.
+
+== On-Heap Cache Eviction
+
+Refer to the link:configuring-caches/on-heap-caching#configuring-eviction-policy[Configuring Eviction Policy for On-Heap Caches] section for the instruction on how to configure eviction policy for on-heap caches.
diff --git a/docs/_docs/memory-configuration/index.adoc b/docs/_docs/memory-configuration/index.adoc
new file mode 100644
index 0000000..14fe978
--- /dev/null
+++ b/docs/_docs/memory-configuration/index.adoc
@@ -0,0 +1,21 @@
+---
+layout: toc
+---
+
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+= Memory Configuration
+
diff --git a/docs/_docs/messaging.adoc b/docs/_docs/messaging.adoc
new file mode 100644
index 0000000..12fa89b
--- /dev/null
+++ b/docs/_docs/messaging.adoc
@@ -0,0 +1,106 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Topic-Based Messaging With Apache Ignite
+
+== Overview
+
+Ignite distributed messaging enables topic-based cluster-wide communication between all nodes. Messages sent to a specified
+message topic are delivered to all nodes (or to a sub-group of nodes) that have subscribed to that topic.
+
+Ignite messaging is based on the publish-subscribe paradigm where publishers and subscribers are tethered together with
+a common topic. When one of the nodes sends a message `A` for topic `T`, it is published on all nodes that have subscribed to `T`.
+
+[NOTE]
+====
+[discrete]
+Any new node joining the cluster automatically gets subscribed to all the topics that other nodes in the cluster
+(or link:distributed-computing/cluster-groups[cluster group]) are subscribed to.
+====
+
+== IgniteMessaging
+
+Distributed messaging functionality in Ignite is available via the `IgniteMessaging` interface. You can get an instance
+of `IgniteMessaging` like so:
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+Ignite ignite = Ignition.ignite();
+
+// Messaging instance over this cluster.
+IgniteMessaging msg = ignite.message();
+
+// Messaging instance over given cluster group (in this case, remote nodes).
+IgniteMessaging rmtMsg = ignite.message(ignite.cluster().forRemotes());
+----
+--
+
+== Publish Messages
+
+The send methods publish messages with a specified message topic to all subscribed nodes. Messages can be sent in an
+_ordered_ or _unordered_ manner.
+
+=== Ordered Messages
+
+The `sendOrdered(...)` method can be used if you want messages to be received in the order they were sent. The timeout parameter
+specifies how long a message stays in the queue waiting for the messages that were sent before it. If the timeout expires,
+all the messages that have not yet arrived for the given topic on that node are ignored.
+
+=== Unordered Messages
+
+The `send(...)` methods do not guarantee message ordering. This means that when you sequentially send message `A` and
+message `B`, you are not guaranteed that the target node receives `A` before `B`.
+
+== Subscribe for Messages
+
+The listen methods subscribe to messages. When one of these methods is called, a listener with the specified message
+topic is registered on all (or a sub-group of) nodes to listen for new messages. The listen methods take a predicate
+that returns a boolean value telling the listener whether to continue or stop listening for new messages.
+
+=== Local Listen
+
+The `localListen(...)` method registers a message listener with the specified topic only on the local node and listens for
+messages from any node in the _given_ cluster group.
+
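+For illustration, below is a minimal sketch of a local listener (the topic name is arbitrary; imports omitted):
+
+[source, java]
+----
+Ignite ignite = Ignition.ignite();
+
+// Listen on the local node for messages sent to "MyTopic" from any node.
+ignite.message().localListen("MyTopic", (nodeId, received) -> {
+    System.out.println("Received [msg=" + received + ", from=" + nodeId + ']');
+
+    return true; // Return true to continue listening.
+});
+----
+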
+=== Remote Listen
+
+The `remoteListen(...)` method registers message listeners with the specified topic on all nodes in the _given_ cluster group
+and listens for messages from any node in _this_ cluster group.
+
+== Example
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+Ignite ignite = Ignition.ignite();
+
+IgniteMessaging rmtMsg = ignite.message(ignite.cluster().forRemotes());
+
+// Add listener for ordered messages on all remote nodes.
+rmtMsg.remoteListen("MyOrderedTopic", (nodeId, msg) -> {
+    System.out.println("Received ordered message [msg=" + msg + ", from=" + nodeId + ']');
+
+    return true; // Return true to continue listening.
+});
+
+// Send ordered messages to remote nodes.
+for (int i = 0; i < 10; i++)
+    rmtMsg.sendOrdered("MyOrderedTopic", Integer.toString(i), 0);
+----
+--
diff --git a/docs/_docs/monitoring-metrics/cluster-id.adoc b/docs/_docs/monitoring-metrics/cluster-id.adoc
new file mode 100644
index 0000000..26bb561
--- /dev/null
+++ b/docs/_docs/monitoring-metrics/cluster-id.adoc
@@ -0,0 +1,62 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Cluster ID and Tag
+
+A cluster ID is a unique identifier of the cluster that is generated automatically when the cluster starts for the first time.
+A cluster tag is a user-friendly name that you can assign to your cluster.
+You can use these values to identify your cluster in the monitoring system you use.
+
+The default cluster tag is generated automatically, but you can change it using one of the available methods.
+The length of the tag is limited to 280 characters.
+
+You can use the following methods to view the cluster ID and view or change the cluster tag:
+
+* Via the link:control-script#cluster-id-and-tag[control script].
+* JMX Bean:
++
+--
+----
+group=IgniteCluster,name=IgniteClusterMXBeanImpl
+----
+[cols="3,2,8", opts="header"]
+|===
+| Attribute | Type | Description
+|Id| String | The cluster ID.
+|Tag | String | The cluster tag.
+|===
+
+[cols="4,9", opts="header"]
+|===
+| Operation | Description
+| Tag(String) | Set the new cluster tag.
+|===
+--
+* Programmatically:
++
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+include::{javaCodeDir}/ClusterAPI.java[tags=cluster-tag, indent=0]
+----
+
+tab:C#/.NET[]
+tab:C++[]
+--
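+
+For reference, a minimal sketch of the programmatic approach (assuming the `IgniteCluster.id()` and `IgniteCluster.tag(...)` methods introduced in Ignite 2.8; imports omitted):
+
+[source, java]
+----
+Ignite ignite = Ignition.ignite();
+
+// Read the cluster ID and the current tag.
+UUID clusterId = ignite.cluster().id();
+String clusterTag = ignite.cluster().tag();
+
+try {
+    // Assign a new, user-friendly tag (up to 280 characters).
+    ignite.cluster().tag("production-cluster-eu");
+}
+catch (IgniteCheckedException e) {
+    // The tag could not be updated (e.g., it exceeds the length limit).
+}
+----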
+
+
+
+
diff --git a/docs/_docs/monitoring-metrics/cluster-states.adoc b/docs/_docs/monitoring-metrics/cluster-states.adoc
new file mode 100644
index 0000000..1848941
--- /dev/null
+++ b/docs/_docs/monitoring-metrics/cluster-states.adoc
@@ -0,0 +1,97 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Cluster States
+
+:javaFile: {javaCodeDir}/ClusterAPI.java
+
+== Overview
+
+An Ignite cluster can be in one of the three states: `ACTIVE`, `ACTIVE_READ_ONLY`, and `INACTIVE`.
+
+When you start a pure in-memory cluster (no persistent data regions) for the first time, the cluster is in the `ACTIVE` state.
+When you start a cluster with persistent data regions for the first time, the cluster is `INACTIVE`.
+
+
+* `INACTIVE`: All operations are prohibited.
++
+--
+When you change the cluster state from active to `INACTIVE` (deactivation), the cluster deallocates all memory resources.
+
+include::includes/note-on-deactivation.adoc[]
+
+--
+* `ACTIVE`: This is the normal mode of the cluster. You can execute any operation.
+
+* `ACTIVE_READ_ONLY`: The read-only mode. Only read operations are allowed.
++
+--
+Any attempt to create a cache or modify the data in an existing cache results in an `IgniteClusterReadOnlyException`.
+DDL or DML statements that modify the data are prohibited as well.
+--
+
+
+== Changing Cluster State
+
+You can change the cluster state in multiple ways:
+
+* link:control-script#getting-cluster-state[Control script]:
++
+[source, shell]
+----
+control.sh --set-state ACTIVE_READ_ONLY
+----
+
+* link:restapi#change-cluster-state[REST command]:
++
+--
+
+[source, url]
+----
+http://localhost:8080/ignite?cmd=setstate&state=ACTIVE_READ_ONLY
+----
+
+--
+* Programmatically:
++
+[tabs]
+--
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=change-state, indent=0]
+----
+
+
+tab:C#/.NET[]
+tab:C++[]
+--
+
+* JMX Bean:
++
+--
+
+Mbean's Object Name: ::
+----
+group="Kernal",name=IgniteKernal
+----
+[cols="1,4",opts="header"]
+|===
+|Operation | Description
+
+| `clusterState()` | Get the current cluster state.
+| `clusterState(String)` | Set the cluster state.
+|===
+--
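+
+For reference, a minimal sketch of the programmatic approach (assuming the `ClusterState` enum and the `IgniteCluster.state(...)` methods available since Ignite 2.9; imports omitted):
+
+[source, java]
+----
+Ignite ignite = Ignition.ignite();
+
+// Check the current state.
+ClusterState state = ignite.cluster().state();
+
+// Switch the cluster to read-only mode.
+ignite.cluster().state(ClusterState.ACTIVE_READ_ONLY);
+----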
diff --git a/docs/_docs/monitoring-metrics/configuring-metrics.adoc b/docs/_docs/monitoring-metrics/configuring-metrics.adoc
new file mode 100644
index 0000000..7d784b5
--- /dev/null
+++ b/docs/_docs/monitoring-metrics/configuring-metrics.adoc
@@ -0,0 +1,149 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Configuring Metrics
+
+:javaFile: {javaCodeDir}/ConfiguringMetrics.java
+:xmlFile: code-snippets/xml/configuring-metrics.xml
+:dotnetFile: code-snippets/dotnet/ConfiguringMetrics.cs
+
+
+Metrics collection is not a free operation and might affect the performance of an application.
+For this reason, some metrics are disabled by default.
+
+
+== Enabling Cache Metrics
+
+Cache metrics show statistics on the amount of data stored in caches, the total number and frequency of cache operations, etc., as well as some cache configuration properties for information purposes.
+
+To enable cache metrics, use one of the methods described below for each cache you want to monitor.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tag=cache-metrics,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=cache-metrics,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetFile}[tag=cache-metrics,indent=0]
+----
+tab:C++[unsupported]
+--
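+
+For illustration, a minimal Java sketch of the same setting (the cache name is arbitrary):
+
+[source, java]
+----
+CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");
+
+// Enable collection of cache metrics for this cache.
+cacheCfg.setStatisticsEnabled(true);
+----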
+
+For each cache on a node, Ignite creates two JMX Beans: one with cache information specific to the node, and one with global (cluster-wide) information about the cache.
+
+*Local cache information MBean:*::
++
+....
+group=<Cache_Name>,name="org.apache.ignite.internal.processors.cache.CacheLocalMetricsMXBeanImpl"
+....
+
+
+*Global cache information MBean:*::
++
+----
+group=<Cache_Name>,name="org.apache.ignite.internal.processors.cache.CacheClusterMetricsMXBeanImpl"
+----
+
+//See link:monitoring-metrics/monitoring-with-jconsole[Monitoring with JConsole] for the information on how to access JMX beans.
+
+
+== Enabling Data Region Metrics
+Data region metrics expose information about data regions, including memory and storage size of the region.
+Enable data region metrics for every region you want to collect the metrics for.
+
+Data region metrics can be enabled in two ways:
+
+* in the link:memory-configuration/data-regions[configuration of the region]
+* via JMX Beans
+
+The following example illustrates how to enable metrics for the default data region and one custom data region.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tags=ignite-config;data-region-metrics;!data-storage-metrics;!cache-metrics;!discovery,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tags=data-region-metrics,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetFile}[tags=data-region-metrics,indent=0]
+----
+tab:C++[unsupported]
+--
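+
+For illustration, a minimal Java sketch (the region name is arbitrary):
+
+[source, java]
+----
+DataRegionConfiguration regionCfg = new DataRegionConfiguration();
+regionCfg.setName("myDataRegion");
+
+// Enable metrics collection for this data region.
+regionCfg.setMetricsEnabled(true);
+
+DataStorageConfiguration storageCfg = new DataStorageConfiguration();
+storageCfg.setDataRegionConfigurations(regionCfg);
+----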
+
+Data region metrics can be enabled/disabled at runtime via the following JMX Bean:
+
+*Data Region MBean*::
++
+----
+org.apache:group=DataRegionMetrics,name=<Data Region Name>
+----
+
+== Enabling Persistence-related Metrics
+Persistence-related metrics can be enabled/disabled in the data storage configuration:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/persistence-metrics.xml[tags=!*;ignite-config,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=data-storage-metrics,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetFile}[tags=data-storage-metrics,indent=0]
+----
+tab:C++[unsupported]
+--
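+
+For illustration, a minimal Java sketch:
+
+[source, java]
+----
+DataStorageConfiguration storageCfg = new DataStorageConfiguration();
+
+// Enable persistence-related (data storage) metrics.
+storageCfg.setMetricsEnabled(true);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setDataStorageConfiguration(storageCfg);
+----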
+
+
+You can enable "Persistent Store" metrics at runtime via the following MXBean:
+
+*Persistent Store MBean*::
++
+--
+----
+org.apache:group="Persistent Store",name=DataStorageMetrics
+----
+[cols="1,4",opts="header"]
+|===
+| Operation | Description
+| EnableMetrics | Enable persistent data storage metrics.
+|===
+--
+
+
+
diff --git a/docs/_docs/monitoring-metrics/intro.adoc b/docs/_docs/monitoring-metrics/intro.adoc
new file mode 100644
index 0000000..d495f6c
--- /dev/null
+++ b/docs/_docs/monitoring-metrics/intro.adoc
@@ -0,0 +1,58 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Introduction: Monitoring and Metrics
+
+This chapter covers monitoring and metrics for Ignite. We'll start with an overview of the methods available for monitoring, and then we'll delve into the Ignite specifics, including a list of JMX metrics and MBeans.
+
+== Overview
+The basic task of monitoring in Ignite involves metrics. You have several approaches for accessing metrics:
+
+- Via link:monitoring-metrics/metrics[JMX]
+- Programmatically
+- Via link:monitoring-metrics/system-views[System views]
+
+
+== What to Monitor
+You can start by monitoring:
+
+  - Each node in isolation
+  - The connection between nodes
+  - The system as a whole
+
+Note that a node consists of several layers: hardware, the operating system, the virtual machine (JVM, etc.), and the application. You need to check all of these levels, as well as the *network* surrounding them.
+
+  - Hardware (Hypervisor): CPU/Memory/Disk => System Logs/Cloud Provider's Logs
+  - Operating System
+  - JVM: GC Logs, JMX, Java Flight Recorder, Thread Dumps, Heap dumps, etc.
+  - Application: Logs, JMX, Throughput/Latency, Test queries
+      * For log-based monitoring, the key is to act proactively: watch the logs for trends and anomalies instead of waiting until something breaks.
+  - Network: ping monitoring, network hardware monitoring, TCP dumps
+
+This should give you a good starting point for setting up monitoring of your hardware, operating system, and network. To monitor the application layer (the nodes that make up your in-memory computing solution), you'll need to perform Ignite-specific monitoring via metrics you access with JMX/Beans or programmatically.
+
+
+== Global vs. Node-specific Metrics
+
+The information exposed through different metrics has a different scope (applicability) and may differ depending on the node where you obtain the metrics.
+The following list explains the different metric scopes.
+
+*Global metrics*:: Provide information about the cluster in general, for example: the number of nodes, the state of the cluster. This information is available on any node of the cluster.
+
+*Node-specific metrics*:: Provide information specific to the node on which you obtain the metrics, for example: memory consumption, data region metrics, WAL size, queue size, etc.
+
+Cache-related metrics can be global as well as node-specific.
+For example, the total number of entries in a cache is a global metric, and you can obtain it on any node.
+You can also get the number of entries of the cache that are stored on a specific node, in which case it will be a node-specific metric.
+
diff --git a/docs/_docs/monitoring-metrics/metrics.adoc b/docs/_docs/monitoring-metrics/metrics.adoc
new file mode 100644
index 0000000..ddecb56
--- /dev/null
+++ b/docs/_docs/monitoring-metrics/metrics.adoc
@@ -0,0 +1,507 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= JMX Metrics
+
+:table_opts: cols="3,2,8,2", opts="stretch,header"
+
+== Overview
+
+Ignite exposes a large number of metrics useful for monitoring your cluster or application.
+You can use JMX and a monitoring tool, such as JConsole, to access these metrics.
+You can also access them programmatically.
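+
+For illustration, a minimal sketch of programmatic access (imports omitted; the cache name is arbitrary, and metrics collection must be enabled for the cache):
+
+[source, java]
+----
+Ignite ignite = Ignition.ignite();
+
+// Node-specific cluster metrics, such as heap usage.
+ClusterMetrics clusterMetrics = ignite.cluster().localNode().metrics();
+System.out.println("Heap used: " + clusterMetrics.getHeapMemoryUsed());
+
+// Cache metrics for a specific cache.
+CacheMetrics cacheMetrics = ignite.cache("myCache").metrics();
+System.out.println("Gets: " + cacheMetrics.getCacheGets());
+----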
+
+On this page, we've collected the most useful metrics and grouped them into various common categories based on the monitoring task.
+
+// link:monitoring-metrics/configuring-metrics[Configuring Metrics]
+
+== Understanding MBean's ObjectName
+
+Every JMX Mbean has an https://docs.oracle.com/javase/8/docs/api/javax/management/ObjectName.html[ObjectName,window=_blank].
+The ObjectName is used to identify the bean.
+The ObjectName consists of a domain and a list of key properties, and can be represented as a string as follows:
+
+   domain:key1=value1,key2=value2
+
+All Ignite metrics have the same domain: `org.apache.<classloaderId>` where the classloader ID is optional (omitted if you set `IGNITE_MBEAN_APPEND_CLASS_LOADER_ID=false`). In addition, each metric has two properties: `group` and `name`.
+For example:
+
+    org.apache:group=SPIs,name=TcpDiscoverySpi
+
+This MBean provides various metrics related to node discovery.
+
+The MBean ObjectName can be used to identify the bean in UI tools like JConsole.
+For example, JConsole displays MBeans in a tree-like structure where all beans are first grouped by domain and then by the 'group' property:
+
+image::images/jconsole.png[]
+
+{sp}+
+
+== Monitoring the Amount of Data
+
+If you do not use link:persistence/native-persistence[Native persistence] (i.e., all your data is kept in memory), you will want to monitor RAM usage.
+If you use Native persistence, in addition to RAM, you should monitor the size of the data storage on disk.
+
+The size of the data loaded into a node is available at different levels of aggregation. You can monitor for:
+
+* The total size of the data the node keeps on disk or in RAM. This amount is the sum of the size of each configured data region (in the simplest case, only the default data region) plus the sizes of the system data regions.
+* The size of a specific link:memory-configuration/data-regions[data region] on that node. The data region size is the sum of the sizes of all cache groups.
+* The size of a specific cache/cache group on that node, including the backup partitions.
+
+These metrics can be enabled/disabled for each level separately and are exposed via different JMX beans listed below.
+
+
+=== Allocated Space vs. Actual Size of Data
+
+There is no way to get the exact size of the data, either in RAM or on disk. Instead, there are two ways to estimate it.
+
+You can get the size of the space _allocated_ for storing the data.
+(The "space" here refers either to the space in RAM or on disk depending on whether you use Native persistence or not.)
+Space is allocated when the existing storage fills up and more entries need to be added.
+However, when you remove entries from caches, the space is not deallocated.
+It is reused when new entries need to be added to the storage on subsequent write operations. Therefore, the allocated size does not decrease when you remove entries from the caches.
+The allocated size is available at the level of data storage, data region, and cache group metrics.
+The metric is called `TotalAllocatedSize`.
+
+You can also get an estimate of the actual size of data by multiplying the number of link:memory-centric-storage#data-pages[data pages] in use by the fill factor. The fill factor is the ratio of the size of data in a page to the page size, averaged over all pages. The number of pages in use and the fill factor are available at the level of data <<Data Region Size,region metrics>>.
+
+Add up the estimated size of all data regions to get the estimated total amount of data on the node.
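+
+For example, if a data region reports `TotalUsedPages` = 1,000,000, the page size is the default 4 KB, and `PagesFillFactor` = 0.8, the estimated data size is 1,000,000 × 4096 bytes × 0.8 ≈ 3.3 GB.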
+
+
+:allocsize_note: Note that when Native persistence is disabled, this metric shows the total size of the allocated space in RAM.
+
+=== Monitoring RAM Memory Usage
+The amount of data in RAM can be monitored for each data region through the following MBeans:
+
+Mbean's Object Name: ::
++
+--
+----
+group=DataRegionMetrics,name=<Data Region name>
+----
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+
+| PagesFillFactor| float | The average size of data in pages as a ratio of the page size. When Native persistence is enabled, this metric is applicable only to the persistent storage (i.e. pages on disk). | Node
+| TotalUsedPages | long | The number of data pages that are currently in use. When Native persistence is enabled, this metric is applicable only to the persistent storage (i.e. pages on disk).| Node
+| PhysicalMemoryPages |long | The number of the allocated pages in RAM. | Node
+| PhysicalMemorySize |long |The size of the allocated space in RAM in bytes. | Node
+|===
+--
+
+If you have multiple data regions, add up the sizes of all data regions to get the total size of the data on the node.
+
+=== Monitoring Storage Size
+
+Persistent storage, when enabled, saves all application data on disk.
+The total amount of data each node keeps on disk consists of the persistent storage (application data), the link:persistence/native-persistence#write-ahead-log[WAL files], and link:persistence/native-persistence#wal-archive[WAL Archive] files.
+
+==== Persistent Storage Size
+To monitor the size of the persistent storage on disk, use the following metrics:
+
+Mbean's Object Name: ::
++
+--
+----
+group="Persistent Store",name=DataStorageMetrics
+----
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+| TotalAllocatedSize | long  | The size of the space allocated on disk for the entire data storage (in bytes). {allocsize_note} | Node
+| WalTotalSize | long | Total size of the WAL files in bytes, including the WAL archive files. | Node
+| WalArchiveSegments | int | The number of WAL segments in the archive.  | Node
+|===
+
+[cols="1,4",opts="header"]
+|===
+|Operation | Description
+| enableMetrics | Enable collection of metrics related to the persistent storage at runtime.
+| disableMetrics | Disable metrics collection.
+|===
+--
+
+==== Data Region Size
+
+For each configured data region, Ignite creates a separate JMX Bean that exposes specific information about the region. Metrics collection for data regions is disabled by default. You can link:monitoring-metrics/configuring-metrics#enabling-data-region-metrics[enable it in the data region configuration, or via JMX at runtime] (see the Bean's operations below).
+
+The size of the data region on a node comprises the size of all partitions (including backup partitions) that this node owns for all caches in that data region.
+
+Data region metrics are available in the following MBean:
+
+Mbean's Object Name: ::
++
+--
+----
+group=DataRegionMetrics,name=<Data Region name>
+----
+
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+
+| TotalAllocatedSize | long  | The size of the space allocated for this data region (in bytes). {allocsize_note} | Node
+| PagesFillFactor| float | The average amount of data in pages as a ratio of the page size. | Node
+| TotalUsedPages | long | The number of data pages that are currently in use. | Node
+| PhysicalMemoryPages |long |The number of data pages in this data region held in RAM. | Node
+| PhysicalMemorySize | long |The size of the allocated space in RAM in bytes.| Node
+|===
+
+[cols="1,4",opts="header"]
+|===
+|Operation | Description
+| enableMetrics | Enable metrics collection for this data region.
+| disableMetrics | Disable metrics collection for this data region.
+|===
+--
+
+==== Cache Group Size
+
+If you don't use link:configuring-caches/cache-groups[cache groups], each cache will be its own group.
+There is a separate JMX bean for each cache group.
+The name of the bean corresponds to the name of the group.
+
+Mbean's Object Name: ::
++
+--
+----
+group="Cache groups",name=<Cache group name>
+----
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+|TotalAllocatedSize |long | The amount of space allocated for the cache group on this node. | Node
+|===
+--
+
+== Monitoring Checkpointing Operations
+Checkpointing may slow down cluster operations.
+You may want to monitor how much time each checkpoint operation takes, so that you can tune the properties that affect checkpointing.
+You may also want to monitor the disk performance to see if the slow-down is caused by external reasons.
+
+See link:persistence/persistence-tuning#pages-writes-throttling[Pages Writes Throttling] and link:persistence/persistence-tuning#adjusting-checkpointing-buffer-size[Checkpointing Buffer Size] for performance tips.
+
+Mbean's Object Name: ::
++
+--
+    group="Persistent Store",name=DataStorageMetrics
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+| DirtyPages  | long | The number of pages in memory that have been changed but not yet synchronized to disk. Those will be written to disk during the next checkpoint. | Node
+|LastCheckpointDuration | long | The time in milliseconds it took to create the last checkpoint. | Node
+|CheckpointBufferSize | long | The size of the checkpointing buffer. | Global
+|===
+--
+
+
+== Monitoring Rebalancing
+link:data-rebalancing[Rebalancing] is the process of moving partitions between the cluster nodes so that the data is always distributed in a balanced manner. Rebalancing is triggered when a new node joins, or an existing node leaves the cluster.
+
+If you have multiple caches, they will be rebalanced sequentially.
+There are several metrics that you can use to monitor the progress of the rebalancing process for a specific cache.
+
+Mbean's Object Name: ::
++
+--
+----
+group=<cache name>,name=org.apache.ignite.internal.processors.cache.CacheLocalMetricsMXBeanImpl
+----
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+|RebalancingStartTime | long | This metric shows the time when rebalancing of local partitions started for the cache. This metric will return 0 if the local partitions do not participate in the rebalancing. The time is returned in milliseconds. | Node
+| EstimatedRebalancingFinishTime | long | Expected time of completion of the rebalancing process. |  Node
+| KeysToRebalanceLeft | long | The number of keys on the node that remain to be rebalanced.  You can monitor this metric to learn when the rebalancing process finishes.| Node
+|===
+--
+
+
+== Monitoring Topology
+Topology refers to the set of nodes in a cluster. There are a number of metrics that expose the information about the topology of the cluster. If the topology changes too frequently or has a size that is different from what you expect, you may want to look into whether there are network problems.
+
+
+Mbean's Object Name: ::
++
+--
+----
+group=Kernal,name=ClusterMetricsMXBeanImpl
+----
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+| TotalServerNodes| long  |The number of server nodes in the cluster.| Global
+| TotalClientNodes| long |The number of client nodes in the cluster. | Global
+| TotalBaselineNodes | long | The number of nodes that are registered in the link:clustering/baseline-topology[baseline topology]. When a node goes down, it remains registered in the baseline topology and you need to remove it manually. |  Global
+| ActiveBaselineNodes | long | The number of nodes that are currently active in the baseline topology.  |  Global
+|===
+--
+
+Mbean's Object Name: ::
++
+--
+----
+group=SPIs,name=TcpDiscoverySpi
+----
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+| Coordinator | String | The node ID of the current coordinator node.| Global
+| CoordinatorNodeFormatted|String a|
+Detailed information about the coordinator node.
+....
+TcpDiscoveryNode [id=e07ad289-ff5b-4a73-b3d4-d323a661b6d4,
+consistentId=fa65ff2b-e7e2-4367-96d9-fd0915529c25,
+addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 172.25.4.200],
+sockAddrs=[mymachine.local/172.25.4.200:47500,
+/0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500], discPort=47500,
+order=2, intOrder=2, lastExchangeTime=1568187777249, loc=false,
+ver=8.7.5#20190520-sha1:d159cd7a, isClient=false]
+....
+
+| Global
+|===
+--
+
+== Monitoring Caches
+
+For each cache, Ignite creates two JMX MBeans that expose metrics specific to the cache. One MBean shows cluster-wide information about the cache, such as the total number of entries in the cache. The other MBean shows local information about the cache, such as the number of entries of the cache that are located on the local node.
+
+
+Global Cache Mbean's Object Name: ::
++
+--
+....
+group=<Cache_Name>,name="org.apache.ignite.internal.processors.cache.CacheClusterMetricsMXBeanImpl"`
+....
+
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+| CacheSize | long | The total number of entries in the cache across all nodes. | Global
+|===
+--
+
+Local Cache Mbean's Object Name: ::
++
+--
+----
+group=<Cache Name>,name="org.apache.ignite.internal.processors.cache.CacheLocalMetricsMXBeanImpl"
+----
+
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+| CacheSize | long | The number of entries of the cache that are stored on the local node. | Node
+|===
+--
+
+== Monitoring Transactions
+Note that if a transaction spans multiple nodes (i.e., if the keys that are changed as a result of the transaction execution are located on multiple nodes), the counters increase on each node. For example, the `TransactionsCommittedNumber` counter increases on each node where the keys affected by the transaction are stored.
+
+Mbean's Object Name: ::
++
+--
+----
+group=TransactionMetrics,name=TransactionMetricsMxBeanImpl
+----
+
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+| LockedKeysNumber | long  | The number of keys locked on the node. | Node
+| TransactionsCommittedNumber |long | The number of transactions that have been committed on the node. | Node
+| TransactionsRolledBackNumber | long | The number of transactions that were rolled back. | Node
+| OwnerTransactionsNumber | long |  The number of transactions initiated on the node. | Node
+| TransactionsHoldingLockNumber | long | The number of open transactions that hold a lock on at least one key on the node.| Node
+|===
+--
+
+////
+this isn't in 8.7.6 yet
+{sp}+
+
+Mbean's Object Name: ::
+`group=Transactions,name=TransactionsMXBeanImpl`
+*Attributes:*::
+{sp}
++
+--
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+| TotalNodeSystemTime  | long | system time | Node
+| TotalNodeUserTime |  | |  Node
+| NodeSystemTimeHistogram | | | Node
+| NodeUserTimeHistogram | |  | Node
+|===
+--
+
+////
+
+
+////
+{sp}+
+
+
+== Monitoring Compute Jobs
+
+Mbean's Object Name: ::
+`group= ,name=`
+*Attributes:*::
+{sp}
++
+--
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+|  |  | |
+|===
+--
+
+////
+
+
+////
+== Monitoring Snapshots
+
+Mbean's Object Name: ::
++
+--
+----
+group=TODO ,name= TODO
+----
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+| LastSnapshotOperation |  | |
+| LastSnapshotStartTime || |
+| SnapshotInProgress | | |
+|===
+--
+////
+
+== Monitoring Data Center Replication
+
+Refer to the link:data-center-replication/managing-and-monitoring#dr_jmx[Managing and Monitoring Replication] page.
+
+
+////
+== Monitoring Memory Consumption
+
+JVM memory
+
+Mbean's Object Name: ::
++
+----
+group=Kernal,name=ClusterMetricsMXBeanImpl
+----
+*Attributes:*::
++
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+| HeapMemoryUsed | long  | The Java heap size on the node. | Node
+|===
+
+////
+
+
+== Monitoring Client Connections
+Metrics related to JDBC/ODBC or thin client connections.
+
+Mbean's Object Name: ::
++
+--
+----
+group=Clients,name=ClientListenerProcessor
+----
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+| Connections | java.util.List<String> a| A list of strings, each string containing information about a connection:
+
+....
+JdbcClient [id=4294967297, user=<anonymous>,
+rmtAddr=127.0.0.1:39264, locAddr=127.0.0.1:10800]
+....
+| Node
+|===
+
+[cols="1,4",opts="header"]
+|===
+|Operation | Description
+| dropConnection (id)| Disconnect a specific client.
+| dropAllConnections | Disconnect all clients.
+|===
+--
+
+
+== Monitoring Message Queues
+When thread pool queues are growing, it means that the node cannot keep up with the load, or there was an error while processing messages in the queue.
+Continuous growth of the queue size can lead to OOM errors.
+
+
+=== Communication Message Queue
+The queue of outgoing communication messages contains messages that are waiting to be sent to other nodes.
+If the queue size is growing, it means there is a problem.
+
+Mbean's Object Name: ::
++
+--
+----
+group=SPIs,name=TcpCommunicationSpi
+----
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+| OutboundMessagesQueueSize  | int | The size of the queue of outgoing communication messages. | Node
+|===
+--
+
+=== Discovery Messages Queue
+
+The queue of discovery messages.
+
+Mbean's Object Name: ::
++
+--
+----
+group=SPIs,name=TcpDiscoverySpi
+----
+[{table_opts}]
+|===
+| Attribute | Type | Description | Scope
+| MessageWorkerQueueSize | int | The size of the queue of discovery messages that are waiting to be sent to other nodes. | Node
+|AvgMessageProcessingTime|long| Average message processing time. | Node
+|===
+--
+
+////
+
+== Monitoring Executor Queue Size
+
+There is a number of executor thread pools running within each node that are dedicated to specific tasks.
+You may want to monitor the size of the executor's queues.
+You can read more about the thread pools on the link:perf-troubleshooting-guide/thread-pools-tuning[Thread Tuning Page]
+
+There is a JMX Bean for each thread pool.
+
+////
+
+
+
+
+
diff --git a/docs/_docs/monitoring-metrics/new-metrics-system.adoc b/docs/_docs/monitoring-metrics/new-metrics-system.adoc
new file mode 100644
index 0000000..39f6013
--- /dev/null
+++ b/docs/_docs/monitoring-metrics/new-metrics-system.adoc
@@ -0,0 +1,220 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= New Metrics System
+
+:javaFile: {javaCodeDir}/ConfiguringMetrics.java
+
+== Overview
+
+WARNING: Experimental
+
+Ignite 2.8 introduced a new mechanism for collecting metrics, which is intended to replace the link:monitoring-metrics/metrics[legacy metrics system].
+This section explains the new system and how you can use it to monitor your cluster.
+//the types of metrics and how to export them, but first let's explore the basic concepts of the new metrics mechanism in Ignite.
+
+Let's explore the basic concepts of the new metrics system in Ignite.
+First, there are the metrics themselves.
+Each metric has a name and a return value.
+The return value can be a simple value like `String`, `long`, or `double`, or can represent a Java object.
+Some metrics represent <<histograms>>.
+
+And then there are different ways to export the metrics, which we call _exporters_.
+To put it another way, exporters are the different ways you can access the metrics.
+Each exporter always gives access to all available metrics.
+
+Ignite includes the following exporters:
+
+* JMX
+* SQL Views
+* Log files
+* OpenCensus
+
+You can create a custom exporter by implementing the javadoc:org.apache.ignite.spi.metric.MetricExporterSpi[] interface.
+
+
+== Metric Registers
+
+Metrics are grouped into categories (called _registers_).
+Each register has a name.
+The full name of a specific metric within the register consists of the register name followed by a dot, followed by the name of the metric: `<register_name>.<metric_name>`.
+For example, the register for data storage metrics is called `io.datastorage`.
+The metric that returns the storage size is called `io.datastorage.StorageSize`.
+
+The list of all registers and the metrics they contain is described link:monitoring-metrics/new-metrics[here].
+
+== Metric Exporters
+
+If you want to enable metrics, configure one or multiple metric exporters in the node configuration.
+This is a node-specific configuration, which means it enables metrics only on the node where it is specified.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::code-snippets/xml/metrics.xml[tags=ignite-config;!discovery, indent=0]
+----
+
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=new-metric-framework, indent=0]
+----
+
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+--
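+
+For illustration, a minimal Java sketch of enabling two exporters (imports omitted):
+
+[source, java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+// Expose metrics via JMX beans and via the SYS.METRICS SQL view.
+cfg.setMetricExporterSpi(new JmxMetricExporterSpi(), new SqlViewMetricExporterSpi());
+
+Ignite ignite = Ignition.start(cfg);
+----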
+
+The following sections describe the exporters available in Ignite by default.
+
+
+=== JMX
+
+`org.apache.ignite.spi.metric.jmx.JmxMetricExporterSpi` exposes metrics via JMX beans.
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=metrics-filter, indent=0]
+----
+
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+--
+
+
+=== SQL View
+
+`org.apache.ignite.spi.metric.sql.SqlViewMetricExporterSpi` exposes metrics via the `SYS.METRICS` view.
+Each metric is displayed as a single record.
+You can use any supported SQL tool to view the metrics:
+
+[source, shell,subs="attributes"]
+----
+> select name, value from SYS.METRICS where name LIKE 'cache.myCache.%';
++-----------------------------------+--------------------------------+
+|                NAME               |             VALUE              |
++-----------------------------------+--------------------------------+
+| cache.myCache.CacheTxRollbacks    | 0                              |
+| cache.myCache.OffHeapRemovals     | 0                              |
+| cache.myCache.QueryCompleted      | 0                              |
+| cache.myCache.QueryFailed         | 0                              |
+| cache.myCache.EstimatedRebalancingKeys | 0                         |
+| cache.myCache.CacheEvictions      | 0                              |
+| cache.myCache.CommitTime          | [J@2eb66498                    |
+....
+----
+
+This is how you can configure the SQL View exporter:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::code-snippets/xml/metrics.xml[tags=!*;ignite-config;sql-exporter, indent=0]
+----
+
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=sql-exporter, indent=0]
+----
+
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+--
+
+=== Log
+
+`org.apache.ignite.spi.metric.log.LogExporterSpi` prints the metrics to the log file at regular intervals (1 min by default) at INFO level.
+
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+include::code-snippets/xml/metrics.xml[tags=!*;ignite-config;log-exporter, indent=0]
+----
+
+
+tab:Java[]
+
+If you use programmatic configuration, you can change the print frequency as follows:
+
+[source, java]
+----
+include::{javaFile}[tags=log-exporter, indent=0]
+----
+
+tab:C#/.NET[]
+tab:C++[]
+--
+
+=== OpenCensus
+
+`org.apache.ignite.spi.metric.opencensus.OpenCensusMetricExporterSpi` adds integration with the OpenCensus library.
+
+To use the OpenCensus exporter:
+
+. link:setup#enabling-modules[Enable the 'ignite-opencensus' module].
+. Add `org.apache.ignite.spi.metric.opencensus.OpenCensusMetricExporterSpi` to the list of exporters in the node configuration.
+. Configure OpenCensus StatsCollector to export to a specific system. See link:{githubUrl}/examples/src/main/java/org/apache/ignite/examples/opencensus/OpenCensusMetricsExporterExample.java[OpenCensusMetricsExporterExample.java] for an example and OpenCensus documentation for additional information.
+
+
+Configuration parameters:
+
+* `filter` - predicate that filters metrics.
+* `period` - export period.
+* `sendInstanceName` - if enabled, a tag with the Ignite instance name is added to each metric.
+* `sendNodeId` - if enabled, a tag with the Ignite node id is added to each metric.
+* `sendConsistentId` - if enabled, a tag with the Ignite node consistent id is added to each metric.
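+
+For illustration, a sketch of the SPI configuration, assuming the standard setters for the parameters listed above (imports omitted):
+
+[source, java]
+----
+OpenCensusMetricExporterSpi openCensusSpi = new OpenCensusMetricExporterSpi();
+
+// Export every 10 seconds and tag each metric with the node ID.
+openCensusSpi.setPeriod(10_000);
+openCensusSpi.setSendNodeId(true);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setMetricExporterSpi(openCensusSpi);
+----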
+
+
+
+
+== Histograms
+
+The metrics that represent histograms are available in the JMX exporter only.
+Histogram metrics are exported as a set of values where each value corresponds to a specific bucket and is available through a separate JMX bean attribute.
+The attribute names of a histogram metric have the following format:
+
+```
+{metric_name}_{low_bound}_{high_bound}
+```
+
+where
+
+* `{metric_name}` - the name of the metric.
+* `{low_bound}` - the lower bound of the bucket (`0` for the first bucket).
+* `{high_bound}` - the upper bound of the bucket (`inf` for the last bucket).
+
+
+Example of the metric names if the bounds are [10,100]:
+
+* `histogram_0_10` - less than 10.
+* `histogram_10_100` - between 10 and 100.
+* `histogram_100_inf` - more than 100.
+
+
+
diff --git a/docs/_docs/monitoring-metrics/new-metrics.adoc b/docs/_docs/monitoring-metrics/new-metrics.adoc
new file mode 100644
index 0000000..5266ff0
--- /dev/null
+++ b/docs/_docs/monitoring-metrics/new-metrics.adoc
@@ -0,0 +1,342 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Metrics
+
+This page describes metrics registers (categories) and the metrics available in each register.
+
+
+== System
+
+
+System metrics such as JVM or CPU metrics.
+
+Register name: `sys`
+
+[cols="2,1,3",opts="header"]
+|===
+|Name    |Type|    Description
+|CpuLoad| double|  CPU load.
+|CurrentThreadCpuTime  |  long|    ThreadMXBean.getCurrentThreadCpuTime()
+|CurrentThreadUserTime|   long   | ThreadMXBean.getCurrentThreadUserTime()
+|DaemonThreadCount|   integer| ThreadMXBean.getDaemonThreadCount()
+|GcCpuLoad   |double|  GC CPU load.
+|PeakThreadCount |integer| ThreadMXBean.getPeakThreadCount
+|SystemLoadAverage|   java.lang.Double|    OperatingSystemMXBean.getSystemLoadAverage()
+|ThreadCount |integer| ThreadMXBean.getThreadCount
+|TotalExecutedTasks  |long|    Total executed tasks.
+|TotalStartedThreadCount |long|    ThreadMXBean.getTotalStartedThreadCount
+|UpTime|  long  |  RuntimeMxBean.getUptime()
+|memory.heap.committed|   long|    MemoryUsage.getHeapMemoryUsage().getCommitted()
+|memory.heap.init |   long|    MemoryUsage.getHeapMemoryUsage().getInit()
+|memory.heap.used    |long|    MemoryUsage.getHeapMemoryUsage().getUsed()
+|memory.nonheap.committed|    long|    MemoryUsage.getNonHeapMemoryUsage().getCommitted()
+|memory.nonheap.init |long  |  MemoryUsage.getNonHeapMemoryUsage().getInit()
+|memory.nonheap.max  |long  |  MemoryUsage.getNonHeapMemoryUsage().getMax()
+|memory.nonheap.used |long  |  MemoryUsage.getNonHeapMemoryUsage().getUsed()
+|===
+
+
+== Caches
+
+Cache metrics.
+
+Register name: `cache.{cache_name}.{near}`
+
+[cols="2,1,3",opts="header"]
+|===
+|Name | Type | Description
+|CacheEvictions | long|The total number of evictions from the cache.
+|CacheGets   |long|The total number of gets to the cache.
+|CacheHits   |long|The number of get requests that were satisfied by the cache.
+|CacheMisses |long|A miss is a get request that is not satisfied.
+|CachePuts   |long|The total number of puts to the cache.
+|CacheRemovals  | long|The total number of removals from the cache.
+|CacheTxCommits | long|Total number of transaction commits.
+|CacheTxRollbacks |long|Total number of transaction rollbacks.
+|CommitTime  |histogram  | Commit time in nanoseconds.
+|CommitTimeTotal |long| The total time of commit, in nanoseconds.
+|EntryProcessorHits | long|The total number of invocations on keys that exist in the cache.
+|EntryProcessorInvokeTimeNanos | long|The total time of cache invocations, in nanoseconds.
+|EntryProcessorMaxInvocationTime |long|The maximum time of a single cache invocation so far.
+|EntryProcessorMinInvocationTime |long|The minimum time of a single cache invocation so far.
+|EntryProcessorMisses |long|The total number of invocations on keys that don't exist in the cache.
+|EntryProcessorPuts   |long|The total number of cache invocations that caused an update.
+|EntryProcessorReadOnlyInvocations   |long|The total number of cache invocations that caused no updates.
+|EntryProcessorRemovals  |long|The total number of cache invocations that caused removals.
+|EstimatedRebalancingKeys|long|Estimated number of keys to be rebalanced.
+|GetTime |histogram|   Get time in nanoseconds.
+|GetTimeTotal|long|The total time of cache gets, in nanoseconds.
+|IsIndexRebuildInProgress|boolean | True if index rebuild is in progress.
+|OffHeapEvictions|long|The total number of evictions from the off-heap memory.
+|OffHeapGets |long|The total number of get requests to the off-heap memory.
+|OffHeapHits |long|The number of get requests that were satisfied by the off-heap memory.
+|OffHeapMisses   |long|A miss is a get request that is not satisfied by off-heap memory.
+|OffHeapPuts |long|The total number of put requests to the off-heap memory.
+|OffHeapRemovals |long|The total number of removals from the off-heap memory.
+|PutTime | histogram|   Put time in nanoseconds.
+|PutTimeTotal|long|The total time of cache puts, in nanoseconds.
+|QueryCompleted  |long|Count of completed queries.
+|QueryExecuted   |long|Count of executed queries.
+|QueryFailed |long|Count of failed queries.
+|QueryMaximumTime |long| Maximum query execution time.
+|QueryMinimalTime |long| Minimum query execution time.
+|QuerySumTime |long| Query summary time.
+|RebalanceClearingPartitionsLeft |long| Number of partitions that need to be cleared before the actual rebalance starts.
+|RebalanceStartTime  |long| Rebalance start time.
+|RebalancedKeys |long| Number of already rebalanced keys.
+|RebalancingBytesRate|long|Estimated rebalancing speed in bytes.
+|RebalancingKeysRate |long|Estimated rebalancing speed in keys.
+|RemoveTime  |histogram|   Remove time in nanoseconds.
+|RemoveTimeTotal |long|The total time of cache removal, in nanoseconds.
+|RollbackTime|histogram|   Rollback time in nanoseconds.
+|RollbackTimeTotal   |long|The total time of rollback, in nanoseconds.
+|TotalRebalancedBytes|long|Number of already rebalanced bytes.
+|===
+
+== Cache Groups
+
+
+Register name: `cacheGroups.{group_name}`
+
+[cols="2,1,3",opts="header"]
+|===
+|Name | Type | Description
+|AffinityPartitionsAssignmentMap |java.util.Map|  Affinity partitions assignment map.
+|Caches  |java.util.ArrayList| List of caches
+|IndexBuildCountPartitionsLeft |  long|    Number of partitions that remain to be processed to finish index creation or rebuilding.
+|LocalNodeMovingPartitionsCount  |integer| Count of partitions with state MOVING for this cache group located on this node.
+|LocalNodeOwningPartitionsCount  |integer| Count of partitions with state OWNING for this cache group located on this node.
+|LocalNodeRentingEntriesCount |   long|    Count of entries remaining to be evicted from RENTING partitions located on this node for this cache group.
+|LocalNodeRentingPartitionsCount |integer| Count of partitions with state RENTING for this cache group located on this node.
+|MaximumNumberOfPartitionCopies | integer| Maximum number of partition copies for all partitions of this cache group.
+|MinimumNumberOfPartitionCopies  |integer| Minimum number of partition copies for all partitions of this cache group.
+|MovingPartitionsAllocationMap   |java.util.Map|  Allocation map of partitions with state MOVING in the cluster.
+|OwningPartitionsAllocationMap   |java.util.Map | Allocation map of partitions with state OWNING in the cluster.
+|PartitionIds    |java.util.ArrayList| Local partition ids.
+|SparseStorageSize  | long|    Storage space allocated for group adjusted for possible sparsity, in bytes.
+|StorageSize |long|    Storage space allocated for group, in bytes.
+|TotalAllocatedPages |long|    Cache group total allocated pages.
+|TotalAllocatedSize  |long|    Total size of memory allocated for group, in bytes.
+|===
+
+
+== Transactions
+
+Transaction metrics.
+
+Register name: `tx`
+
+[cols="2,1,3",opts="header"]
+|===
+|Name   | Type |    Description
+|AllOwnerTransactions|    java.util.HashMap|   Map of transactions owned by the local node.
+|LockedKeysNumber   | long|    The number of keys locked on the node.
+|OwnerTransactionsNumber |long|    The number of active transactions for which this node is the initiator.
+|TransactionsHoldingLockNumber |  long|    The number of active transactions holding at least one key lock.
+|LastCommitTime  |long|    Last commit time.
+|nodeSystemTimeHistogram| histogram|   Transactions system times on node represented as histogram.
+|nodeUserTimeHistogram|   histogram|   Transactions user times on node represented as histogram.
+|LastRollbackTime|    long|    Last rollback time.
+|totalNodeSystemTime |long|    Total transactions system time on node.
+|totalNodeUserTime   |long|    Total transactions user time on node.
+|txCommits   |integer| Number of transaction commits.
+|txRollbacks |integer| Number of transaction rollbacks.
+|===
+
+
+== Partition Map Exchange
+
+Partition map exchange metrics.
+
+Register name: `pme`
+
+[cols="2,1,3",opts="header"]
+|===
+|Name    |Type |   Description
+|CacheOperationsBlockedDuration  |long  |  Duration, in milliseconds, for which cache operations have been blocked by the current PME.
+|CacheOperationsBlockedDurationHistogram |histogram |  Histogram of PME durations that blocked cache operations, in milliseconds.
+|Duration    |long |   Current PME duration in milliseconds.
+|DurationHistogram |  histogram  | Histogram of PME durations in milliseconds.
+|===
+
+
+== Compute Jobs
+
+Register name: `compute.jobs`
+
+[cols="2,1,3",opts="header"]
+|===
+|Name|    Type|    Description
+|compute.jobs.Active  |long|    Number of active jobs currently executing.
+|compute.jobs.Canceled    |long|    Number of cancelled jobs that are still running.
+|compute.jobs.ExecutionTime   |long|    Total execution time of jobs.
+|compute.jobs.Finished    |long|    Number of finished jobs.
+|compute.jobs.Rejected    |long|    Number of jobs rejected after the most recent collision resolution operation.
+|compute.jobs.Started |long|    Number of started jobs.
+|compute.jobs.Waiting |long|    Number of currently queued jobs waiting to be executed.
+|compute.jobs.WaitingTime |long|    Total time jobs have spent in the waiting queue.
+|===
+
+== Thread Pools
+
+Register name: `threadPools.{thread_pool_name}`
+
+[cols="2,1,3",opts="header"]
+|===
+|Name |   Type |   Description
+|ActiveCount |long  |  Approximate number of threads that are actively executing tasks.
+|CompletedTaskCount|  long |   Approximate total number of tasks that have completed execution.
+|CorePoolSize    |long  |  The core number of threads.
+|KeepAliveTime|   long  |  Thread keep-alive time, which is the amount of time which threads in excess of the core pool size may remain idle before being terminated.
+|LargestPoolSize| long  |  Largest number of threads that have ever simultaneously been in the pool.
+|MaximumPoolSize |long  |  The maximum allowed number of threads.
+|PoolSize    |long|    Current number of threads in the pool.
+|QueueSize   |long |   Current size of the execution queue.
+|RejectedExecutionHandlerClass|   string | Class name of current rejection handler.
+|Shutdown  |  boolean| True if this executor has been shut down.
+|TaskCount |  long |   Approximate total number of tasks that have been scheduled for execution.
+|Terminated  |boolean| True if all tasks have completed following shut down.
+|Terminating |long|    True if terminating but not yet terminated.
+|ThreadFactoryClass|  string|  Class name of thread factory used to create new threads.
+|===
+
+
+== Cache Group IO
+
+Register name: `io.statistics.cacheGroups.{group_name}`
+
+
+[cols="2,1,3",opts="header"]
+|===
+|Name |   Type |   Description
+|LOGICAL_READS  | long |   Number of logical reads
+|PHYSICAL_READS | long |   Number of physical reads
+|grpId  | integer | Group id
+|name  |  string | Name of the index
+|startTime  | long |   Statistics collection start time
+|===
+
+
+== Sorted Indexes
+
+Register name: `io.statistics.sortedIndexes.{cache_name}.{index_name}`
+
+[cols="2,1,3",opts="header"]
+|===
+|Name |    Type |    Description
+|LOGICAL_READS_INNER |long|    Number of logical reads for inner tree node
+|LOGICAL_READS_LEAF | long  |  Number of logical reads for leaf tree node
+|PHYSICAL_READS_INNER|    long|    Number of physical reads for inner tree node
+|PHYSICAL_READS_LEAF| long|    Number of physical reads for leaf tree node
+|indexName|   string|  Name of the index
+|name|    string|  Name of the cache
+|startTime|   long|    Statistics collection start time
+|===
+
+
+== Hash Indexes
+
+Register name: `io.statistics.hashIndexes.{cache_name}.{index_name}`
+
+
+[cols="2,1,3",opts="header"]
+|===
+|Name |   Type|    Description
+|LOGICAL_READS_INNER| long|    Number of logical reads for inner tree node
+|LOGICAL_READS_LEAF|  long|    Number of logical reads for leaf tree node
+|PHYSICAL_READS_INNER|    long|    Number of physical reads for inner tree node
+|PHYSICAL_READS_LEAF| long|    Number of physical reads for leaf tree node
+|indexName|   string|  Name of the index
+|name|    string|  Name of the cache
+|startTime|   long|    Statistics collection start time
+|===
+
+
+== Communication IO
+
+Register name: `io.communication`
+
+
+[cols="2,1,3",opts="header"]
+|===
+|Name|    Type|    Description
+|OutboundMessagesQueueSize|   integer| Outbound messages queue size.
+|SentMessagesCount  | integer| Sent messages count.
+|SentBytesCount | long  |  Sent bytes count.
+|ReceivedBytesCount|  long|    Received bytes count.
+|ReceivedMessagesCount|   integer| Received messages count.
+|===
+
+
+== Data Region IO
+
+Register name: `io.dataregion.{data_region_name}`
+
+[cols="2,1,3",opts="header"]
+|===
+|Name |    Type |    Description
+|AllocationRate | long|    Allocation rate (pages per second) averaged across rateTimeInterval.
+|CheckpointBufferSize |    long |    Checkpoint buffer size in bytes.
+|DirtyPages |  long|    Number of pages in memory not yet synchronized with persistent storage.
+|EmptyDataPages|  long|    Number of empty data pages in the region. Counts only completely free pages that can be reused (e.g., pages contained in the reuse bucket of the free list).
+|EvictionRate|    long|    Eviction rate (pages per second).
+|LargeEntriesPagesCount|  long|    Number of pages fully occupied by large entries that exceed the page size.
+|OffHeapSize| long|    Offheap size in bytes.
+|OffheapUsedSize| long|    Offheap used size in bytes.
+|PagesFillFactor| double|  The percentage of the used space.
+|PagesRead|   long|    Number of pages read from last restart.
+|PagesReplaceAge| long|    Average age at which pages in memory are replaced with pages from persistent storage (milliseconds).
+|PagesReplaceRate|    long|    Rate at which pages in memory are replaced with pages from persistent storage (pages per second).
+|PagesReplaced|   long|    Number of pages replaced from last restart.
+|PagesWritten|    long|    Number of pages written from last restart.
+|PhysicalMemoryPages| long|    Number of pages residing in physical RAM.
+|PhysicalMemorySize | long|    Total size of pages loaded into RAM, in bytes.
+|TotalAllocatedPages |long|    Total number of allocated pages.
+|TotalAllocatedSize|  long  |  Total size of the memory allocated in the data region, in bytes.
+|TotalThrottlingTime| long|    Total time for which threads were throttled, in milliseconds. Ignite throttles threads that generate dirty pages during an ongoing checkpoint.
+|UsedCheckpointBufferSize  |  long|    Used checkpoint buffer size, in bytes.
+
+|===
+
+
+== Data Storage
+
+Data Storage metrics.
+
+Register name: `io.datastorage`
+
+[cols="2,1,3",opts="header"]
+|===
+|Name |    Type |    Description
+|CheckpointTotalTime| long |   Total duration of checkpoints.
+|LastCheckpointCopiedOnWritePagesNumber|  long |   Number of pages copied to a temporary checkpoint buffer during the last checkpoint.
+|LastCheckpointDataPagesNumber|   long  |  Total number of data pages written during the last checkpoint.
+|LastCheckpointDuration | long  |  Duration of the last checkpoint in milliseconds.
+|LastCheckpointFsyncDuration| long  |  Duration of the sync phase of the last checkpoint in milliseconds.
+|LastCheckpointLockWaitDuration|  long|    Duration of the checkpoint lock wait in milliseconds.
+|LastCheckpointMarkDuration | long  |  Duration of the mark phase of the last checkpoint in milliseconds.
+|LastCheckpointPagesWriteDuration|    long|    Duration of the checkpoint pages write in milliseconds.
+|LastCheckpointTotalPagesNumber|  long|    Total number of pages written during the last checkpoint.
+|SparseStorageSize  | long|    Storage space allocated adjusted for possible sparsity, in bytes.
+|StorageSize | long|    Storage space allocated, in bytes.
+|WalArchiveSegments | integer| Current number of WAL segments in the WAL archive.
+|WalBuffPollSpinsRate|    long  |  Number of WAL buffer poll spins over the last time interval.
+|WalFsyncTimeDuration |   long |   Total duration of WAL fsync operations.
+|WalFsyncTimeNum |long  |  Total count of WAL fsync operations.
+|WalLastRollOverTime |long |   Time of the last WAL segment rollover.
+|WalLoggingRate | long|    Average number of WAL records per second written during the last time interval.
+|WalTotalSize|    long  |  Total size of the WAL files, in bytes.
+|WalWritingRate|  long  |  Average number of bytes per second written during the last time interval.
+|===
diff --git a/docs/_docs/monitoring-metrics/system-views.adoc b/docs/_docs/monitoring-metrics/system-views.adoc
new file mode 100644
index 0000000..ac45667
--- /dev/null
+++ b/docs/_docs/monitoring-metrics/system-views.adoc
@@ -0,0 +1,705 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= System Views
+
+WARNING: The system views are an experimental feature and can be changed in future releases.
+
+Ignite provides a number of built-in SQL views that contain information about cluster nodes and node metrics.
+The views are available in the SYS schema.
+See the link:SQL/schemas[Understanding Schemas] page for the information on how to access a non-default schema.
+
+[IMPORTANT]
+====
+[discrete]
+=== Limitations
+. You cannot create objects in the SYS schema.
+. System views from the SYS schema cannot be joined with user tables.
+====
+
+
+== Querying System Views
+
+
+To query the system views using the link:tools/sqlline[SQLLine] tool, connect to the SYS schema as follows:
+
+[source, shell]
+----
+./sqlline.sh -u jdbc:ignite:thin://127.0.0.1/SYS
+----
+
+If your node is running on a remote server, replace `127.0.0.1` with the IP address of the server.
+
+Run a query:
+
+[source, sql]
+----
+-- get the list of nodes
+select * from NODES;
+
+-- view the CPU load as a percentage for a specific node
+select CUR_CPU_LOAD * 100 from NODE_METRICS where NODE_ID = 'a1b77663-b37f-4ddf-87a6-1e2d684f3bae';
+----
+
+The same example using link:thin-clients/java-thin-client[Java Thin Client]:
+
+[source, java]
+----
+include::{javaCodeDir}/JavaThinClient.java[tag=system-views,indent=0]
+----
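+
+The include above pulls the example from the documentation code samples. As a rough, self-contained sketch (the class name and the default thin client port `10800` are assumptions), the query can be issued like this:
+
+[source, java]
+----
+import java.util.List;
+
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.query.SqlFieldsQuery;
+import org.apache.ignite.client.IgniteClient;
+import org.apache.ignite.configuration.ClientConfiguration;
+
+public class QuerySystemViews {
+    public static void main(String[] args) throws Exception {
+        // Connect to a node listening on the default thin client port.
+        ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
+
+        try (IgniteClient client = Ignition.startClient(cfg)) {
+            // Query the NODES system view from the SYS schema.
+            SqlFieldsQuery qry = new SqlFieldsQuery("SELECT NODE_ID, ADDRESSES FROM SYS.NODES");
+
+            for (List<?> row : client.query(qry).getAll())
+                System.out.println(row);
+        }
+    }
+}
+----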
+
+
+:table_opts: cols="2,1,4",opts="header"
+
+== CACHES
+
+[{table_opts}]
+|===
+| Column | Type |    Description
+|CACHE_NAME | string |  Cache name
+|CACHE_ID | int | Cache ID
+|CACHE_TYPE | string |  Cache type
+|CACHE_MODE | string |  Cache mode
+|ATOMICITY_MODE | string |  Atomicity mode
+|CACHE_GROUP_NAME | string |  Cache group name
+|AFFINITY | string |  toString representation of affinity function
+|AFFINITY_MAPPER | string |  toString representation of affinity mapper
+|BACKUPS | int | Backup count
+|CACHE_GROUP_ID | int | Cache group ID
+|CACHE_LOADER_FACTORY | string |  toString representation of cache loader factory
+|CACHE_STORE_FACTORY | string |  toString representation of cache store factory
+|CACHE_WRITER_FACTORY | string |  toString representation of cache writer factory
+|DATA_REGION_NAME | string |  Data region name
+|DEFAULT_LOCK_TIMEOUT | long |    Lock timeout in milliseconds
+|EVICTION_FILTER | string |  toString representation of eviction filter
+|EVICTION_POLICY_FACTORY | string |  toString representation of eviction policy factory
+|EXPIRY_POLICY_FACTORY | string |  toString representation of expiry policy factory
+|INTERCEPTOR | string |  toString representation of interceptor
+|IS_COPY_ON_READ | boolean | Flag indicating whether a copy of the value is made when the entry is read from the on-heap cache
+|IS_EAGER_TTL | boolean | Flag indicating whether expired cache entries will be eagerly removed from cache
+|IS_ENCRYPTION_ENABLED | boolean | True if cache data encrypted
+|IS_EVENTS_DISABLED | boolean | True if events disabled for this cache
+|IS_INVALIDATE | boolean | True if values will be invalidated (nullified) upon commit in near cache
+|IS_LOAD_PREVIOUS_VALUE | boolean | True if value should be loaded from store if it is not in the cache
+|IS_MANAGEMENT_ENABLED | boolean| True if management is enabled for this cache
+|IS_NEAR_CACHE_ENABLED |   boolean| True if near cache enabled
+|IS_ONHEAP_CACHE_ENABLED | boolean | True if on heap cache enabled
+|IS_READ_FROM_BACKUP | boolean | True if read operation should be performed from backup node
+|IS_READ_THROUGH | boolean | True if read from third party storage enabled
+|IS_SQL_ESCAPE_ALL | boolean | If true, all SQL table and field names are escaped with double quotes
+|IS_SQL_ONHEAP_CACHE_ENABLED | boolean | If true, the SQL on-heap cache is enabled. When enabled, Ignite caches SQL rows as they are accessed by the query engine. Rows are invalidated and evicted from the cache when the corresponding cache entry is changed or evicted.
+|IS_STATISTICS_ENABLED | boolean| True if statistics collection is enabled for this cache
+|IS_STORE_KEEP_BINARY |    boolean| Flag indicating that the `CacheStore` implementation works with binary objects instead of deserialized Java objects.
+|IS_WRITE_BEHIND_ENABLED | boolean | Flag indicating whether Ignite should use write-behind behaviour for the cache store
+|IS_WRITE_THROUGH | boolean | True if write to third party storage enabled
+|MAX_CONCURRENT_ASYNC_OPERATIONS | int | Maximum number of allowed concurrent asynchronous operations. If 0, the number of concurrent asynchronous operations is unlimited
+|MAX_QUERY_ITERATORS_COUNT | int | Maximum number of query iterators that can be stored. Iterators are stored to support query pagination, where each page of data is sent to the user's node only on demand
+|NEAR_CACHE_EVICTION_POLICY_FACTORY | string |  toString representation of near cache eviction policy factory
+|NEAR_CACHE_START_SIZE | int | Initial cache size for near cache which will be used to pre-create internal hash table after start.
+|NODE_FILTER | string |  toString representation of node filter
+|PARTITION_LOSS_POLICY | string |  toString representation of partition loss policy
+|QUERY_DETAIL_METRICS_SIZE | int | Size of the query detail metrics history stored in memory for monitoring purposes. If 0, the history is not collected.
+|QUERY_PARALLELISM | int | Hint to query execution engine on desired degree of parallelism within a single node
+|REBALANCE_BATCH_SIZE | int | Size (in bytes) to be loaded within a single rebalance message
+|REBALANCE_BATCHES_PREFETCH_COUNT | int | Number of batches generated by supply node at rebalancing start
+|REBALANCE_DELAY | long |    Rebalance delay in milliseconds
+|REBALANCE_MODE | string |  Rebalance mode
+|REBALANCE_ORDER | int | Rebalance order
+|REBALANCE_THROTTLE | long |    Time in milliseconds to wait between rebalance messages to avoid overloading of CPU or network
+|REBALANCE_TIMEOUT | long |    Rebalance timeout in milliseconds
+|SQL_INDEX_MAX_INLINE_SIZE | int | Index inline size in bytes
+|SQL_ONHEAP_CACHE_MAX_SIZE | int | Maximum SQL on-heap cache size, measured in number of rows. When the maximum size is reached, the oldest cached rows are evicted.
+|SQL_SCHEMA | string |  Schema name
+|TOPOLOGY_VALIDATOR | string |  toString representation of topology validator
+|WRITE_BEHIND_BATCH_SIZE | int | Maximum batch size for write-behind cache store operations
+|WRITE_BEHIND_COALESCING | boolean | Write coalescing flag for write-behind cache store operations. Store operations (get or remove) with the same key are combined (coalesced) into a single resulting operation to reduce the pressure on the underlying cache store
+|WRITE_BEHIND_FLUSH_FREQUENCY | long |    Frequency with which write-behind cache is flushed to the cache store in milliseconds
+|WRITE_BEHIND_FLUSH_SIZE | int | Maximum size of the write-behind cache. If cache size exceeds this value, all cached items are flushed to the cache store and write cache is cleared
+|WRITE_BEHIND_FLUSH_THREAD_COUNT | int | Number of threads that will perform cache flushing
+|WRITE_SYNCHRONIZATION_MODE | string |  Gets write synchronization mode
+|===
+
+
+== CACHE_GROUPS
+
+
+The CACHE_GROUPS view contains information about the link:configuring-caches/cache-groups[cache groups].
+
+[{table_opts}]
+|===
+|Column|Data Type|Description
+
+
+|AFFINITY| VARCHAR | The string representation (as returned by the `toString()` method) of the affinity function defined for the cache group.
+|ATOMICITY_MODE | VARCHAR | The link:configuring-caches/atomicity-modes[atomicity mode] of the cache group.
+|BACKUPS|INT | The number of link:configuring-caches/configuring-backups[backup partitions] configured for the cache group.
+|CACHE_COUNT|INT | The number of caches in the cache group.
+|CACHE_GROUP_ID|INT | The ID of the cache group.
+|CACHE_GROUP_NAME | VARCHAR | The name of the cache group.
+|CACHE_MODE | VARCHAR | The cache mode.
+|DATA_REGION_NAME | VARCHAR | The name of the link:memory-configuration/data-regions[data region].
+|IS_SHARED|BOOLEAN | True if the group contains more than one cache.
+|NODE_FILTER | VARCHAR | The string representation (as returned by the `toString()` method) of the node filter defined for the cache group.
+|PARTITION_LOSS_POLICY | VARCHAR | link:configuring-caches/partition-loss-policy[Partition loss policy].
+|PARTITIONS_COUNT|INT | The number of partitions.
+|REBALANCE_DELAY|LONG | link:data-rebalancing#other-properties[Rebalancing delay].
+|REBALANCE_MODE | VARCHAR  | link:data-rebalancing#configuring-rebalancing-mode[Rebalancing mode].
+|REBALANCE_ORDER|INT | link:data-rebalancing#other-properties[Rebalancing order].
+|TOPOLOGY_VALIDATOR | VARCHAR |  The string representation (as returned by the `toString()` method) of the topology validator defined for the cache group.
+|===
+
+
+
+== TASKS
+
+This view exposes information about currently running compute tasks started by a node. For instance, assume that an
+application started a compute task using an Ignite thick client and the task's job was executed on one of the server nodes.
+In this case, the thick client reports statistics related to the task via this system view, while the server node keeps
+the thick client updated with task-related execution details.
+
+[{table_opts}]
+|===
+|NAME | TYPE |    DESCRIPTION
+|ID | UUID | Task ID
+|SESSION_ID | UUID | Session ID
+|TASK_NODE_ID | UUID | ID of the node that originated the task
+|TASK_NAME | string | Task name
+|TASK_CLASS_NAME | string | Task class name
+|AFFINITY_PARTITION_ID | int | Cache partition id
+|AFFINITY_CACHE_NAME | string | Cache name
+|START_TIME | long | Start time
+|END_TIME | long | End time
+|EXEC_NAME | string | Name of the thread pool that executes the task
+|INTERNAL | boolean | True if task is internal
+|USER_VERSION | string | Task user version
+|===
+
+== JOBS
+
+This system view shows a list of compute jobs started by a node as part of a compute task.
+To view the status of the compute task, refer to the `TASKS` system view.
+
+[{table_opts}]
+|===
+|NAME | TYPE |    DESCRIPTION
+|ID | UUID | Job ID
+|SESSION_ID | UUID | Job's session ID. Note, `SESSION_ID` is equal to `TASKS.SESSION_ID` for the jobs belonging to a specific task.
+|ORIGIN_NODE_ID | UUID | The id of the node that started the job
+|TASK_NAME | string | The name of the task
+|TASK_CLASSNAME | string | Class name of the task
+|AFFINITY_CACHE_IDS | string | IDs of one or more caches if the job was submitted via one of the `IgniteCompute.affinity..`
+methods. The parameter is empty if you use `IgniteCompute` APIs that don't target specific caches.
+|AFFINITY_PARTITION_ID | int | ID of the partition if the job was submitted via one of the `IgniteCompute.affinity..`
+methods. The parameter is empty if you use `IgniteCompute` APIs that don't target specific partitions.
+|CREATE_TIME | long | Job's creation time
+|START_TIME | long | Job's start time
+|FINISH_TIME | long | Job's finish time
+|EXECUTOR_NAME | string | The name of the task's executor
+|IS_FINISHING | boolean | `True` if the job is finishing
+|IS_INTERNAL | boolean | `True` if the job is internal
+|IS_STARTED | boolean | `True` if the job has been started
+|IS_TIMEDOUT | boolean | `True` if the job timed out before completing
+|STATE | string | Possible values: +
+`ACTIVE` - Job is being executed. +
+`PASSIVE` - Job is added to the execution queue. See `CollisionSpi` for more details. +
+`CANCELED` - Job is canceled.
+|===
+
+== SERVICES
+
+[{table_opts}]
+|===
+|NAME | TYPE |    DESCRIPTION
+|AFFINITY_KEY | string |  Affinity key value for the service
+|CACHE_NAME | string |  Cache name
+|MAX_PER_NODE_COUNT | int | Maximum number of service instances per node
+|NAME | string |  Service name
+|NODE_FILTER | string |  toString representation of node filter
+|ORIGIN_NODE_ID | UUID |    Originating node ID
+|SERVICE_CLASS | string |  Service class name
+|SERVICE_ID | UUID |    Service ID
+|STATICALLY_CONFIGURED | boolean | True if the service is statically configured
+|TOTAL_COUNT | int | Total count of service instances
+|===
+
+
+== TRANSACTIONS
+
+This view exposes information about currently running transactions.
+
+[{table_opts}]
+|===
+|NAME | TYPE |    DESCRIPTION
+|ORIGINATING_NODE_ID | UUID | ID of the node that initiated the transaction
+|STATE | string | Transaction state
+|XID | UUID | Transaction ID
+|LABEL | string | Transaction label
+|START_TIME | long | Transaction start time
+|ISOLATION | string | Isolation level
+|CONCURRENCY | string | Concurrency mode
+|KEYS_COUNT | int | Number of keys enlisted in the transaction
+|CACHE_IDS | string | IDs of the caches participating in the transaction
+|COLOCATED | boolean | True if the transaction is colocated
+|DHT | boolean | True if this is a DHT transaction
+|DURATION | long | Transaction duration
+|IMPLICIT | boolean | True if the transaction was started implicitly
+|IMPLICIT_SINGLE | boolean | True if this is an implicit single-key transaction
+|INTERNAL | boolean | True if the transaction is internal
+|LOCAL | boolean | True if the transaction is local
+|LOCAL_NODE_ID | UUID | ID of the local node
+|NEAR | boolean | True if this is a near transaction
+|ONE_PHASE_COMMIT | boolean | True if the transaction uses one-phase commit
+|OTHER_NODE_ID | UUID |
+|SUBJECT_ID | UUID | ID of the user who started the transaction
+|SYSTEM | boolean | True if this is a system transaction
+|THREAD_ID | long | ID of the thread that started the transaction
+|TIMEOUT | long | Transaction timeout
+|TOP_VER | string | Topology version
+|===
+
+== NODES
+
+
+The NODES view contains information about the cluster nodes.
+
+[cols="1,1,2",opts="header"]
+|===
+| Column | Data Type |Description
+| IS_LOCAL| BOOLEAN| Whether the node is local.
+|ADDRESSES |VARCHAR |The addresses of the node.
+|CONSISTENT_ID |VARCHAR |Node's consistent ID.
+|HOSTNAMES |VARCHAR |The host names of the node.
+|IS_CLIENT |BOOLEAN |Indicates whether the node is a client.
+|IS_DAEMON |BOOLEAN |Indicates whether the node is a daemon node.
+|NODE_ID |UUID |Node ID.
+|NODE_ORDER |INT |Node order within the topology.
+|VERSION |VARCHAR |Node version.
+|===
+
+== NODE_ATTRIBUTES
+
+
+The NODE_ATTRIBUTES view contains the attributes of all nodes.
+
+
+[{table_opts}]
+|===
+| Column |Data Type |Description
+
+|NODE_ID |UUID |Node ID.
+|NAME |VARCHAR |Attribute name.
+|VALUE |VARCHAR |Attribute value.
+
+|===
+
+== BASELINE_NODES
+
+
+The BASELINE_NODES view contains information about the nodes that are part of the current baseline topology.
+
+[{table_opts}]
+|===
+| Column |Data Type |Description
+|CONSISTENT_ID |VARCHAR |Node consistent ID.
+|ONLINE |BOOLEAN |Indicates whether the node is up and running.
+
+|===
+
+
+== CLIENT_CONNECTIONS
+
+This view exposes information about currently open client connections: JDBC, ODBC, and thin clients.
+
+[{table_opts}]
+|===
+|NAME | TYPE |    DESCRIPTION
+|CONNECTION_ID | long |    ID of the connection
+|LOCAL_ADDRESS | IP address | IP address of the local node
+|REMOTE_ADDRESS | IP address | IP address of the remote node
+|TYPE | string |  Type of the connection
+|USER | string |  User name
+|VERSION | string |  Protocol version
+|===
+
+== STRIPED_THREADPOOL_QUEUE
+
+This view exposes information about tasks waiting for execution in the system striped thread pool.
+
+[{table_opts}]
+|===
+|NAME | TYPE |    DESCRIPTION
+|DESCRIPTION | string |  toString representation of the task
+|STRIPE_INDEX | int | Index of the stripe thread
+|TASK_NAME | string |  Class name of the task
+|THREAD_NAME | string |  Name of the stripe thread
+|===
+
+== DATASTREAM_THREADPOOL_QUEUE
+
+This view exposes information about tasks waiting for execution in the data streamer striped thread pool.
+
+[{table_opts}]
+|===
+|NAME | TYPE |    DESCRIPTION
+|DESCRIPTION | string |  toString representation of the task
+|STRIPE_INDEX | int | Index of the stripe thread
+|TASK_NAME | string |  Class name of the task
+|THREAD_NAME | string |  Name of the stripe thread
+|===
+
+== SCAN_QUERIES
+
+This view exposes information about currently running scan queries.
+
+[{table_opts}]
+|===
+|NAME | TYPE |    DESCRIPTION
+|CACHE_GROUP_ID | int | Cache group ID
+|CACHE_GROUP_NAME | string |  Cache group name
+|CACHE_ID | int | Cache ID
+|CACHE_NAME | string |  Cache name
+|CANCELED | boolean | True if canceled
+|DURATION | long |    Query duration
+|FILTER | string |  toString representation of filter
+|KEEP_BINARY | boolean | True if keepBinary enabled
+|LOCAL | boolean | True if the query is local only
+|ORIGIN_NODE_ID | UUID |    ID of the node that started the query
+|PAGE_SIZE | int | Page size
+|PARTITION | int | Query partition ID
+|QUERY_ID | long |    Query ID
+|START_TIME | long |    Query start time
+|SUBJECT_ID | UUID |    ID of the user who started the query
+|TASK_NAME | string |
+|TOPOLOGY | string |  Topology version
+|TRANSFORMER | string |  toString representation of transformer
+|===
+
+
+== CONTINUOUS_QUERIES
+
+This view exposes information about currently running continuous queries.
+
+[{table_opts}]
+|===
+|NAME | TYPE |    DESCRIPTION
+|AUTO_UNSUBSCRIBE | boolean | True if the query should be stopped when the node disconnects or the originating node leaves the cluster
+|BUFFER_SIZE | int | Event batch buffer size
+|CACHE_NAME | string |  Cache name
+|DELAYED_REGISTER | boolean | True if the query will be started when the corresponding cache is started
+|INTERVAL | long |    Notify interval
+|IS_EVENTS | boolean | True if used for subscription to remote events
+|IS_MESSAGING | boolean | True if used for subscription to messages.
+|IS_QUERY | boolean | True if user started continuous query.
+|KEEP_BINARY | boolean | True if keepBinary enabled
+|LAST_SEND_TIME | long |    Last time an event batch was sent to the query originating node
+|LOCAL_LISTENER | string |  toString representation of local listener
+|LOCAL_TRANSFORMED_LISTENER | string |  toString representation of local transformed listener
+|NODE_ID | UUID |    Originating node id
+|NOTIFY_EXISTING | boolean | True if listener should be notified about existing entries
+|OLD_VALUE_REQUIRED | boolean | True if old entry value should be included in event
+|REMOTE_FILTER | string |  toString representation of remote filter
+|REMOTE_TRANSFORMER | string |  toString representation of remote transformer
+|ROUTINE_ID | UUID |    Query ID
+|TOPIC | string |  Query topic name
+|===
+
+
+
+== SQL_QUERIES
+
+This view exposes information about currently running SQL queries.
+
+[{table_opts}]
+|===
+|NAME | TYPE |    DESCRIPTION
+|DURATION | long |    Query execution duration
+|LOCAL | boolean | True if local only
+|ORIGIN_NODE_ID | UUID |    ID of the node that started the query
+|QUERY_ID | UUID |    Query ID
+|SCHEMA_NAME | string |  Schema name
+|SQL | string |  Query text
+|START_TIME | date |    Query start time
+|===
+
+== SQL_QUERIES_HISTORY
+
+[{table_opts}]
+|===
+|NAME | TYPE |    DESCRIPTION
+|SCHEMA_NAME | string |  Schema name
+|SQL | string |  Query text
+|LOCAL | boolean | True if local only
+|EXECUTIONS | long |    Count of executions
+|FAILURES | long |    Count of failures
+|DURATION_MIN | long |    Minimum execution duration
+|DURATION_MAX | long |    Maximum execution duration
+|LAST_START_TIME | date |    Last execution start time
+|===
+
+
+== SCHEMAS
+
+This view exposes information about SQL schemas.
+
+[{table_opts}]
+|===
+|NAME |    TYPE |    DESCRIPTION
+|NAME  |  string|  Name of the schema
+|PREDEFINED |  boolean | True if the schema is predefined
+|===
+
+== NODE_METRICS
+
+The NODE_METRICS view provides various metrics about the state of the nodes, such as resource consumption and job execution statistics.
+
+[cols="2,1,4",opts="header,stretch"]
+|===
+|Column|Data Type|Description
+|NODE_ID|UUID| Node ID.
+|LAST_UPDATE_TIME|TIMESTAMP|Last time the metrics were updated.
+|MAX_ACTIVE_JOBS|INT|  Maximum number of concurrent jobs this node ever had at one time.
+|CUR_ACTIVE_JOBS|INT| Number of currently active jobs running on the node.
+|AVG_ACTIVE_JOBS|FLOAT| Average number of active jobs concurrently executing on the node.
+|MAX_WAITING_JOBS|INT|Maximum number of waiting jobs this node ever had at one time.
+|CUR_WAITING_JOBS|INT|Number of queued jobs currently waiting to be executed.
+|AVG_WAITING_JOBS|FLOAT| Average number of jobs waiting to be executed on this node.
+|MAX_REJECTED_JOBS|INT| Maximum number of jobs rejected at once during a single collision resolution operation.
+|CUR_REJECTED_JOBS|INT|Number of jobs rejected as a result of the most recent collision resolution operation.
+|AVG_REJECTED_JOBS|FLOAT| Average number of jobs this node rejected as a result of collision resolution operations.
+|TOTAL_REJECTED_JOBS|INT| Total number of jobs this node has rejected as a result of collision resolution operations since the node startup.
+|MAX_CANCELED_JOBS|INT| Maximum number of cancelled jobs this node ever had running concurrently.
+|CUR_CANCELED_JOBS|INT| Number of cancelled jobs that are still running.
+|AVG_CANCELED_JOBS|FLOAT| Average number of cancelled jobs this node ever had running concurrently.
+|TOTAL_CANCELED_JOBS|INT| Number of jobs cancelled since the node startup.
+|MAX_JOBS_WAIT_TIME|TIME| Maximum time a job ever spent waiting in a queue before being executed.
+|CUR_JOBS_WAIT_TIME|TIME| Longest wait time among the jobs that are currently waiting for execution.
+|AVG_JOBS_WAIT_TIME|TIME| Average time jobs spend in the queue before being executed.
+|MAX_JOBS_EXECUTE_TIME|TIME|  Maximum job execution time.
+|CUR_JOBS_EXECUTE_TIME|TIME|  Longest time a current job has been executing for.
+|AVG_JOBS_EXECUTE_TIME|TIME| Average job execution time on this node.
+|TOTAL_JOBS_EXECUTE_TIME|TIME|Total time all finished jobs took to execute on this node since the node startup.
+|TOTAL_EXECUTED_JOBS|INT|  Total number of jobs handled by the node since the node startup.
+|TOTAL_EXECUTED_TASKS|INT| Total number of tasks handled by the node.
+|TOTAL_BUSY_TIME|TIME| Total time this node spent executing jobs.
+|TOTAL_IDLE_TIME|TIME| Total time this node spent idling (not executing any jobs).
+|CUR_IDLE_TIME|TIME| Time this node has spent idling since executing the last job.
+|BUSY_TIME_PERCENTAGE|FLOAT|Percentage of job execution vs idle time.
+|IDLE_TIME_PERCENTAGE|FLOAT|Percentage of idle vs job execution time.
+|TOTAL_CPU|INT| Number of CPUs available to the Java Virtual Machine.
+|CUR_CPU_LOAD|DOUBLE| Percentage of CPU usage expressed as a fraction in the range [0, 1].
+|AVG_CPU_LOAD|DOUBLE| Average percentage of CPU usage expressed as a fraction in the range [0, 1].
+|CUR_GC_CPU_LOAD|DOUBLE| Fraction of CPU time spent on garbage collection since the last update of the metrics, expressed as a value in the range [0, 1]. By default, metrics are updated every 2 seconds.
+|HEAP_MEMORY_INIT|LONG| Amount of heap memory in bytes that the JVM initially requests from the operating system for memory management. Shows `-1` if the initial memory size is undefined.
+|HEAP_MEMORY_USED|LONG| Current heap size that is used for object allocation. The heap consists of one or more memory pools. This value is the sum of used heap memory values of all heap memory pools.
+|HEAP_MEMORY_COMMITED|LONG|  Amount of heap memory in bytes that is committed for the JVM to use. This amount of memory is guaranteed for the JVM to use. The heap consists of one or more memory pools. This value is the sum of committed heap memory values of all heap memory pools.
+|HEAP_MEMORY_MAX|LONG|  Maximum amount of heap memory in bytes that can be used for memory management. The column displays `-1` if the maximum memory size is undefined.
+|HEAP_MEMORY_TOTAL|LONG| Total amount of heap memory in bytes. The column displays `-1` if the total memory size is undefined.
+|NONHEAP_MEMORY_INIT|LONG| Amount of non-heap memory in bytes that the JVM initially requests from the operating system for memory management. The column displays `-1` if the initial memory size is undefined.
+|NONHEAP_MEMORY_USED|LONG|  Current non-heap memory size that is used by Java VM. The non-heap memory consists of one or more memory pools. This value is the sum of used non-heap memory values of all non-heap memory pools.
+|NONHEAP_MEMORY_COMMITED|LONG| Amount of non-heap memory in bytes that is committed for the JVM to use. This amount of memory is guaranteed for the JVM to use. The non-heap memory consists of one or more memory pools. This value is the sum of committed non-heap memory values of all non-heap memory pools.
+|NONHEAP_MEMORY_MAX|LONG| Returns the maximum amount of non-heap memory in bytes that can be used for memory management. The column displays `-1` if the maximum memory size is undefined.
+|NONHEAP_MEMORY_TOTAL|LONG| Total amount of non-heap memory in bytes that can be used for memory management. The column displays `-1` if the total memory size is undefined.
+|UPTIME|TIME|Uptime of the JVM.
+|JVM_START_TIME|TIMESTAMP|Start time of the JVM.
+|NODE_START_TIME|TIMESTAMP| Start time of the node.
+|LAST_DATA_VERSION|LONG| Ignite assigns incremental versions to all cache operations. This column contains the latest data version on the node.
+|CUR_THREAD_COUNT|INT|  Number of live threads including both daemon and non-daemon threads.
+|MAX_THREAD_COUNT|INT| Maximum live thread count since the JVM started or peak was reset.
+|TOTAL_THREAD_COUNT|LONG| Total number of threads started since the JVM started.
+|CUR_DAEMON_THREAD_COUNT|INT|Number of live daemon threads.
+|SENT_MESSAGES_COUNT|INT|Number of node communication messages sent.
+|SENT_BYTES_COUNT|LONG|  Amount of bytes sent.
+|RECEIVED_MESSAGES_COUNT|INT|Number of node communication messages received.
+|RECEIVED_BYTES_COUNT|LONG| Amount of bytes received.
+|OUTBOUND_MESSAGES_QUEUE|INT|  Outbound messages queue size.
+
+|===
+
+
+
+== TABLES
+
+
+The TABLES view contains information about the SQL tables.
+
+[{table_opts}]
+|===
+|Column|Data Type|Description
+
+|AFFINITY_KEY_COLUMN | string |  Affinity key column name
+|CACHE_ID | int | Cache ID for the table
+|CACHE_NAME | string |  Cache name for the table
+|IS_INDEX_REBUILD_IN_PROGRESS | boolean | True if an index rebuild is in progress for this table
+|KEY_ALIAS | string |  Key column alias
+|KEY_TYPE_NAME | string |  Key type name
+|SCHEMA_NAME | string |  Schema name of the table
+|TABLE_NAME | string |  Name of the table
+|VALUE_ALIAS | string |  Value column alias
+|VALUE_TYPE_NAME | string |  Value type name
+
+|===
+
+
+== TABLE_COLUMNS
+
+This view exposes information about SQL table columns.
+
+[{table_opts}]
+|===
+|NAME | TYPE |    DESCRIPTION
+|AFFINITY_COLUMN | boolean | True if the column is the affinity key
+|AUTO_INCREMENT | boolean | True if the column is auto-incremented
+|COLUMN_NAME | string |  Column name
+|DEFAULT_VALUE | string |  Default column value
+|NULLABLE | boolean | True if nullable
+|PK | boolean | True if primary key
+|PRECISION | int | Column precision
+|SCALE | int | Column scale
+|SCHEMA_NAME | string |  Schema name
+|TABLE_NAME | string |  Table name
+|TYPE | string |  Column type
+|===
+
+
+
+== VIEWS
+
+This view exposes information about SQL views.
+
+[{table_opts}]
+|===
+|NAME | TYPE |    DESCRIPTION
+|NAME | string |  Name
+|SCHEMA | string |  Schema
+|DESCRIPTION | string |  Description
+|===
+
+
+== VIEW_COLUMNS
+
+This view exposes information about SQL view columns.
+
+[{table_opts}]
+|===
+|NAME | TYPE |    DESCRIPTION
+|COLUMN_NAME | string |  Name of the column
+|DEFAULT_VALUE | string |  Column default value
+|NULLABLE | boolean | True if column nullable
+|PRECISION | int | Column precision
+|SCALE | int | Column scale
+|SCHEMA_NAME | string |  Name of the view
+|TYPE | string |  Column type
+|VIEW_NAME | string |  Name of the view
+|===
+
+== INDEXES
+
+The INDEXES view contains information about SQL indexes.
+
+[{table_opts}]
+|===
+|Column|Data Type|Description
+|INDEX_NAME | string |  Name of the index
+|INDEX_TYPE | string |  Type of the index
+|COLUMNS | string |  Columns included in index
+|SCHEMA_NAME | string |  Schema name
+|TABLE_NAME | string |  Table name
+|CACHE_NAME | string |  Cache name
+|CACHE_ID | int | Cache ID
+|INLINE_SIZE | int | Inline size in bytes
+|IS_PK | boolean | True if primary key index
+|IS_UNIQUE | boolean | True if unique index
+|===
+
+
+== PAGE_LISTS
+
+The page list is a data structure used to store lists of partially free data pages (free lists) and fully free allocated
+pages (reuse lists). The purpose of the free lists and reuse lists is to quickly locate a page with enough free space
+to store an entry, or to determine that no such page exists and a new page should be allocated.
+The page lists are organized in buckets, where each bucket references pages with approximately the same amount of free space.
+
+If Ignite persistence is enabled, page lists are created for each partition of each cache group; to view them,
+use the `CACHE_GROUP_PAGE_LISTS` system view. If Ignite persistence is disabled, page lists are created for each data region;
+in this case, use the `DATA_REGION_PAGE_LISTS` system view. These views contain information about each bucket
+of each page list, which is useful for understanding how much data can be inserted into a cache without allocating new pages,
+and for detecting skews in page list utilization.
+
+
+=== CACHE_GROUP_PAGE_LISTS
+
+[{table_opts}]
+|===
+|Column | Data type |  Description
+|CACHE_GROUP_ID |  int| Cache group ID
+|PARTITION_ID |    int| Partition ID
+|NAME |    string|  Page list name
+|BUCKET_NUMBER |   int| Bucket number
+|BUCKET_SIZE | long  |  Count of pages in the bucket
+|STRIPES_COUNT |   int| Count of stripes used by this bucket. Stripes are used to avoid contention.
+|CACHED_PAGES_COUNT |  int| Count of pages in an on-heap page list cache for this bucket.
+|===
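+
+For example, the following sketch (reusing the Java thin client; the class name and default port `10800` are assumptions) sums the bucket sizes to estimate how many pages are currently available for reuse in each cache group:
+
+[source, java]
+----
+import java.util.List;
+
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.query.SqlFieldsQuery;
+import org.apache.ignite.client.IgniteClient;
+import org.apache.ignite.configuration.ClientConfiguration;
+
+public class PageListStats {
+    public static void main(String[] args) throws Exception {
+        try (IgniteClient client = Ignition.startClient(
+                new ClientConfiguration().setAddresses("127.0.0.1:10800"))) {
+            // Total number of pages kept in page lists, per cache group.
+            SqlFieldsQuery qry = new SqlFieldsQuery(
+                "SELECT CACHE_GROUP_ID, SUM(BUCKET_SIZE) " +
+                "FROM SYS.CACHE_GROUP_PAGE_LISTS GROUP BY CACHE_GROUP_ID");
+
+            for (List<?> row : client.query(qry).getAll())
+                System.out.println(row);
+        }
+    }
+}
+----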
+
+=== DATA_REGION_PAGE_LISTS
+
+[{table_opts}]
+|===
+|Column | Data type |  Description
+|NAME |    string|  Page list name
+|BUCKET_NUMBER |   int| Bucket number
+|BUCKET_SIZE | long  |  Count of pages in the bucket
+|STRIPES_COUNT |   int| Count of stripes used by this bucket. Stripes are used to avoid contention.
+|CACHED_PAGES_COUNT |  int| Count of pages in an on-heap page list cache for this bucket.
+|===
+
+== PARTITION_STATES
+
+This view exposes information about the distribution of cache group partitions across cluster nodes.
+
+[{table_opts}]
+|===
+|Column | Data type |  Description
+|CACHE_GROUP_ID |  int| Cache group ID
+|PARTITION_ID |    int| Partition ID
+|NODE_ID | UUID | Node ID
+|STATE | string | Partition state. Possible states: MOVING - the partition is being loaded from another node; OWNING - this node is either a primary or a backup owner; RENTING - this node is neither a primary nor a backup owner (the partition is being evicted); EVICTED - the partition has been evicted; LOST - the partition state is invalid and the partition should not be used.
+|IS_PRIMARY | boolean  | Primary partition flag
+|===
+
+== BINARY_METADATA
+
+This view exposes information about all available binary types.
+
+[{table_opts}]
+|===
+|Column | Data type |  Description
+|TYPE_ID | int | Type ID
+|TYPE_NAME | string | Type name
+|AFF_KEY_FIELD_NAME | string | Affinity key field name
+|FIELDS_COUNT | int | Fields count
+|FIELDS | string | Recorded object fields
+|SCHEMAS_IDS | string | Schema IDs registered for this type
+|IS_ENUM | boolean | Whether this is an enum type
+|===
+
+== METASTORAGE
+
+This view exposes the contents of the metastorage cache.
+
+[{table_opts}]
+|===
+|Column | Data type |  Description
+|NAME | string | Name
+|VALUE | string | String representation of the element, or its raw binary form if the data could not be deserialized for some reason
+|===
diff --git a/docs/_docs/monitoring-metrics/tracing.adoc b/docs/_docs/monitoring-metrics/tracing.adoc
new file mode 100644
index 0000000..440873d
--- /dev/null
+++ b/docs/_docs/monitoring-metrics/tracing.adoc
@@ -0,0 +1,183 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Tracing
+
+:javaFile: {javaCodeDir}/Tracing.java
+
+WARNING: This feature is experimental.
+
+A number of APIs in Ignite are instrumented for tracing with OpenCensus.
+You can collect distributed traces of various tasks executed in your cluster and use this information to diagnose latency problems.
+
+We suggest you familiarize yourself with the OpenCensus tracing documentation before reading this chapter: https://opencensus.io/tracing/[^].
+
+The following Ignite APIs are instrumented for tracing:
+
+* Discovery
+* Communication
+* Exchange
+* Transactions
+
+
+To view traces, you must export them into an external system.
+You can use one of the OpenCensus exporters or write your own, but in any case, you will have to write code that registers an exporter in Ignite.
+Refer to <<Exporting Traces>> for details.
+
+
+== Configuring Tracing
+
+Enable OpenCensus tracing in the node configuration. All nodes in the cluster must use the same tracing configuration.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::code-snippets/xml/tracing.xml[tags=ignite-config;!discovery, indent=0]
+----
+
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=config, indent=0]
+----
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+--
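+
+For reference, a minimal programmatic setup might look like the sketch below; it assumes the `ignite-opencensus` module, which provides `OpenCensusTracingSpi`, is on the classpath:
+
+[source, java]
+----
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.tracing.opencensus.OpenCensusTracingSpi;
+
+public class TracingSetup {
+    public static void main(String[] args) {
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Register the OpenCensus tracing SPI. Remember that all nodes
+        // in the cluster must use the same tracing configuration.
+        cfg.setTracingSpi(new OpenCensusTracingSpi());
+
+        Ignite ignite = Ignition.start(cfg);
+    }
+}
+----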
+
+
+== Enabling Trace Sampling
+
+When you start your cluster with the above configuration, Ignite does not collect traces.
+You have to enable trace sampling for a specific API at runtime.
+You can turn trace sampling on and off at will, for example, only for the period when you are troubleshooting a problem.
+
+You can do this in two ways:
+
+* via the control script from the command line
+* programmatically
+
+Traces are collected at a given probabilistic sampling rate.
+The rate is specified as a value between 0.0 and 1.0 inclusive: `0` means no sampling, `1` means always sampling.
+
+When the sampling rate is set to a value greater than 0, Ignite collects traces.
+To disable trace collection, set the sampling rate to 0.
+
+The following sections describe the two ways of enabling trace sampling.
+
+=== Using Control Script
+
+Go to the `{IGNITE_HOME}/bin` directory of your Ignite installation.
+Enable experimental commands in the control script:
+
+[source, shell]
+----
+export IGNITE_ENABLE_EXPERIMENTAL_COMMAND=true
+----
+
+Enable tracing for a specific API:
+
+[source, shell]
+----
+./control.sh --tracing-configuration set --scope TX --sampling-rate 1
+----
+
+Refer to the link:control-script#tracing-configuration[Control Script] sections for the list of all parameters.
+
+=== Programmatically
+
+Once you start the node, you can enable trace sampling as follows:
+
+[source, java]
+----
+include::{javaFile}[tags=enable-sampling, indent=0]
+----
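+
+The included snippet boils down to a call like the following sketch (the wrapper class is an assumption); it enables sampling of transaction traces with a rate of 1, i.e. every transaction is sampled:
+
+[source, java]
+----
+import org.apache.ignite.Ignite;
+import org.apache.ignite.spi.tracing.Scope;
+import org.apache.ignite.spi.tracing.TracingConfigurationCoordinates;
+import org.apache.ignite.spi.tracing.TracingConfigurationParameters;
+
+public class EnableTxTracing {
+    public static void enable(Ignite ignite) {
+        // Sample every transaction trace (sampling rate 1 = always).
+        ignite.tracingConfiguration().set(
+            new TracingConfigurationCoordinates.Builder(Scope.TX).build(),
+            new TracingConfigurationParameters.Builder()
+                .withSamplingRate(1).build());
+    }
+}
+----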
+
+
+The `--scope` parameter specifies the API you want to trace.
+The following APIs are instrumented for tracing:
+
+* `DISCOVERY` — discovery events
+* `EXCHANGE` —  exchange events
+* `COMMUNICATION` — communication events
+* `TX` — transactions
+
+The `--sampling-rate` is the probabilistic sampling rate, a number between `0` and `1`:
+
+* `0` means no sampling,
+* `1` means always sampling.
+
+
+== Exporting Traces
+
+To view traces, you need to export them to an external backend using one of the available exporters.
+OpenCensus supports a number of exporters out-of-the-box, and you can write a custom one.
+Refer to the link:https://opencensus.io/exporters/[OpenCensus Exporters^] for details.
+
+In this section, we will show how to export traces to link:https://zipkin.io[Zipkin^].
+
+. Follow link:https://zipkin.io/pages/quickstart.html[this guide^] to launch Zipkin on your machine.
+. Register `ZipkinTraceExporter` in the application where you start Ignite:
++
+--
+[source, java]
+----
+include::{javaFile}[tags=export-to-zipkin, indent=0]
+----
+--
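++
+For reference, the registration typically looks like the following sketch (the service name and the wrapper class are assumptions):
++
+[source, java]
+----
+import io.opencensus.exporter.trace.zipkin.ZipkinExporterConfiguration;
+import io.opencensus.exporter.trace.zipkin.ZipkinTraceExporter;
+
+public class ZipkinSetup {
+    public static void registerExporter() {
+        // Register a Zipkin exporter before starting the node so that
+        // collected spans are sent to the local Zipkin instance.
+        ZipkinTraceExporter.createAndRegister(
+            ZipkinExporterConfiguration.builder()
+                .setV2Url("http://localhost:9411/api/v2/spans")
+                .setServiceName("ignite-node")
+                .build());
+    }
+}
+----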
+
+
+. Open http://localhost:9411/zipkin[^] in your browser and click the search icon.
++
+--
+This is what a trace of the transaction looks like:
+
+image::images/trace_in_zipkin.png[]
+--
+
+== Analyzing Trace Data
+
+A trace is recorded information about the execution of a specific event.
+Each trace consists of a tree of _spans_.
+A span is an individual unit of work performed by the system in order to process the event.
+
+Because of the distributed nature of Ignite, an operation usually involves multiple nodes.
+Therefore, a trace can include spans from multiple nodes.
+Each span always contains the information about the node where the corresponding operation was executed.
+
+In the image of the transaction trace presented above, you can see that the trace contains the spans associated with the following operations:
+
+* acquire locks (`transactions.colocated.lock.map`),
+* get (`transactions.near.enlist.read`),
+* put (`transactions.near.enlist.write`),
+* commit (`transactions.commit`), and
+* close (`transactions.close`).
+
+The commit operation, in turn, consists of two operations: prepare and finish.
+
+You can click on each span to view the annotations and tags attached to it.
+
+
+image::images/span.png[Span]
+
+////
+TODO: describe annotations and tags
+=== Annotations
+
+=== Tags
+
+The `node.id` and `node.consistentId` are the ID and consistent ID of the node where the root operation started.
+////
diff --git a/docs/_docs/net-specific/asp-net-output-caching.adoc b/docs/_docs/net-specific/asp-net-output-caching.adoc
new file mode 100644
index 0000000..aaaadc9
--- /dev/null
+++ b/docs/_docs/net-specific/asp-net-output-caching.adoc
@@ -0,0 +1,93 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= ASP.NET Output Caching
+
+== Overview
+
+An Ignite cache can be used as an ASP.NET output cache. This works especially well for web farms, where the cached output is
+shared between web servers.
+
+== Installation
+
+* *Binary distribution*: add a reference to `Apache.Ignite.AspNet.dll`
+* *NuGet*: `Install-Package Apache.Ignite.AspNet`
+
+== Launching Ignite Automatically
+
+To start Ignite automatically for output caching, configure it
+link:net-specific/configuration-options#configure-with-application-or-web-config-files[in the web.config file via IgniteConfigurationSection]:
+
+[tabs]
+--
+tab:web.config[]
+[source,xml]
+----
+<configuration>
+    <configSections>
+        <section name="igniteConfiguration" type="Apache.Ignite.Core.IgniteConfigurationSection, Apache.Ignite.Core" />
+    </configSections>
+
+    <igniteConfiguration autoGenerateIgniteInstanceName="true">
+        <cacheConfiguration>
+            <cacheConfiguration name='myWebCache' />
+        </cacheConfiguration>
+    </igniteConfiguration>
+</configuration>
+----
+--
+
+Enable the caching in the `web.config` settings:
+
+[tabs]
+--
+tab:web.config[]
+[source,xml]
+----
+<system.web>
+  <caching>
+    <outputCache defaultProvider="apacheIgnite">
+      <providers>
+          <add name="apacheIgnite" type="Apache.Ignite.AspNet.IgniteOutputCacheProvider, Apache.Ignite.AspNet" igniteConfigurationSectionName="igniteConfiguration" cacheName="myWebCache" />
+      </providers>
+    </outputCache>
+  </caching>
+</system.web>
+----
+--
+
+== Launching Ignite Manually
+
+You can start an Ignite instance manually and specify its name in the provider configuration:
+
+[tabs]
+--
+tab:web.config[]
+[source,xml]
+----
+<system.web>
+  <caching>
+    <outputCache defaultProvider="apacheIgnite">
+      <providers>
+          <add name="apacheIgnite" type="Apache.Ignite.AspNet.IgniteOutputCacheProvider, Apache.Ignite.AspNet" cacheName="myWebCache" />
+      </providers>
+    </outputCache>
+  </caching>
+</system.web>
+----
+--
+
+The Ignite instance must be started before any request is served. Typically, this is done in the `Application_Start` method of `global.asax`.
+
+See link:net-specific/deployment-options#asp-net-deployment[ASP.NET Deployment] for web deployment specifics related to the `IGNITE_HOME` variable.
diff --git a/docs/_docs/net-specific/asp-net-session-state-caching.adoc b/docs/_docs/net-specific/asp-net-session-state-caching.adoc
new file mode 100644
index 0000000..4c3e9d1
--- /dev/null
+++ b/docs/_docs/net-specific/asp-net-session-state-caching.adoc
@@ -0,0 +1,81 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= ASP.NET Session State Caching
+
+== Overview
+
+ASP.NET session state caching allows you to store user session data in different sources.
+By default, session state values and information are stored in memory within the ASP.NET process.
+
+Ignite.NET implements a session state store provider that stores session data in a distributed Ignite cluster, spreading
+the session data across multiple servers to provide high availability, load balancing, and fault tolerance.
+
+[CAUTION]
+====
+[discrete]
+=== Development and Debugging
+During development and debugging, IIS will dynamically detect code changes when you build and run your web application.
+This, however, does not restart the embedded Ignite instance and can cause exceptions and undesired behavior.
+Make sure to restart IIS manually when using the Ignite Session State Cache.
+====
+
+== Installation
+
+* *Binary distribution*: add a reference to Apache.Ignite.AspNet.dll
+* *NuGet*: `Install-Package Apache.Ignite.AspNet`
+
+== Configuration
+
+To enable the Ignite-based session state storage, modify the `web.config` file as follows:
+
+[tabs]
+--
+tab:web.config[]
+[source,xml]
+----
+<system.web>
+  ...
+  <sessionState mode="Custom" customProvider="IgniteSessionStateProvider">
+    <providers>
+      <add name="IgniteSessionStateProvider"
+           type="Apache.Ignite.AspNet.IgniteSessionStateStoreProvider, Apache.Ignite.AspNet"
+           igniteConfigurationSectionName="igniteConfiguration"
+           applicationId="myApp"
+           gridName="myGrid"
+           cacheName="aspNetSessionCache" />
+    </providers>
+  </sessionState>
+  ...
+</system.web>
+----
+--
+
+While the `name` and `type` attributes are required, the other attributes listed below are optional:
+
+[cols="1,3",opts="header"]
+|===
+|Attribute |Description
+|`igniteConfigurationSectionName`| The `web.config` section name defined in `configSections`. See
+link:net-specific/configuration-options#configure-with-application-or-web-config-files[Configuration: web.config] for
+more details. This configuration will be used to start Ignite if it is not started yet.
+|`applicationId`| Should only be used when multiple web applications share the same Ignite session state cache. Assign
+different ID strings to avoid session data conflicts between applications. It is recommended to use a separate cache
+for each application via the `cacheName` attribute.
+|`gridName`| The session state provider calls `Ignition.TryGetIgnite` with this grid name to check whether Ignite is already started.
+|`cacheName`| Session state cache name. Default is `ASPNET_SESSION_STATE`.
+|===
+
+For more details on how to start Ignite within an ASP.NET application, refer to link:net-specific/asp-net-output-caching[ASP.NET Output Caching].
+Also, see link:net-specific/deployment-options#asp-net-deployment[ASP.NET Deployment] for web deployment specifics related to the `IGNITE_HOME` variable.
diff --git a/docs/_docs/net-specific/index.adoc b/docs/_docs/net-specific/index.adoc
new file mode 100644
index 0000000..a165d08
--- /dev/null
+++ b/docs/_docs/net-specific/index.adoc
@@ -0,0 +1,23 @@
+---
+layout: toc
+---
+
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+= Ignite.NET Specific Capabilities of Ignite
+
+This section covers Ignite features, configuration approaches and architectural nuances that are specific for C# and .NET
+applications.
diff --git a/docs/_docs/net-specific/net-configuration-options.adoc b/docs/_docs/net-specific/net-configuration-options.adoc
new file mode 100644
index 0000000..bf07835
--- /dev/null
+++ b/docs/_docs/net-specific/net-configuration-options.adoc
@@ -0,0 +1,190 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite.NET Configuration Options
+
+== Overview
+
+Ignite.NET nodes can be configured in a variety of ways and then started via configuration-specific `Ignition.Start*` methods.
+
+== Configure Programmatically in C#
+
+Use the `Ignition.Start(IgniteConfiguration)` method to configure an Ignite.NET node from your C# application.
+
+[tabs]
+--
+tab:Sample C# Configuration[]
+[source,csharp]
+----
+Ignition.Start(new IgniteConfiguration
+{
+    DiscoverySpi = new TcpDiscoverySpi
+    {
+        IpFinder = new TcpDiscoveryStaticIpFinder
+        {
+            Endpoints = new[] {"127.0.0.1:47500..47509"}
+        },
+        SocketTimeout = TimeSpan.FromSeconds(0.3)
+    },
+    IncludedEventTypes = EventType.CacheAll,
+    JvmOptions = new[] { "-Xms1024m", "-Xmx1024m" }
+});
+----
+--
+
+== Configure With Application or Web Config Files
+
+`Ignition.StartFromApplicationConfiguration` methods read configuration from `Apache.Ignite.Core.IgniteConfigurationSection`
+of the app.config or web.config files.
+
+The `IgniteConfigurationSection.xsd` schema file can be found next to `Apache.Ignite.Core.dll` in the binary distribution,
+and in the `Apache.Ignite.Schema` NuGet package. Include it in your project with the `None` build action to enable IntelliSense
+in Visual Studio while editing `IgniteConfigurationSection` in the config files.
+
+[tabs]
+--
+tab:Configure in app.config[]
+[source,xml]
+----
+<configuration>
+    <configSections>
+        <section name="igniteConfiguration" type="Apache.Ignite.Core.IgniteConfigurationSection, Apache.Ignite.Core" />
+    </configSections>
+
+    <runtime>
+        <gcServer enabled="true"/>
+    </runtime>
+
+    <igniteConfiguration xmlns="http://ignite.apache.org/schema/dotnet/IgniteConfigurationSection" gridName="myGrid1">
+        <discoverySpi type="TcpDiscoverySpi">
+            <ipFinder type="TcpDiscoveryStaticIpFinder">
+                <endpoints>
+                    <string>127.0.0.1:47500..47509</string>
+                </endpoints>
+            </ipFinder>
+        </discoverySpi>
+
+        <cacheConfiguration>
+            <cacheConfiguration cacheMode='Replicated' readThrough='true' writeThrough='true' />
+            <cacheConfiguration name='secondCache' />
+        </cacheConfiguration>
+
+        <includedEventTypes>
+            <int>42</int>
+            <int>TaskFailed</int>
+            <int>JobFinished</int>
+        </includedEventTypes>
+
+        <userAttributes>
+            <pair key='myNode' value='true' />
+        </userAttributes>
+
+        <JvmOptions>
+          <string>-Xms1024m</string>
+          <string>-Xmx1024m</string>
+        </JvmOptions>
+    </igniteConfiguration>
+</configuration>
+----
+tab:Use in C#[]
+[source,csharp]
+----
+var ignite = Ignition.StartFromApplicationConfiguration("igniteConfiguration");
+----
+--
+
+[NOTE]
+====
+[discrete]
+To add the `IgniteConfigurationSection.xsd` schema file to a Visual Studio project, go to the `Project` menu and click the
+`Add Existing Item...` menu item. Then locate `IgniteConfigurationSection.xsd` in the Apache Ignite distribution
+and select it. Alternatively, install the NuGet package: `Install-Package Apache.Ignite.Schema`. This adds the xsd file to
+the project automatically. To improve editing, make sure the `Statement Completion` options are enabled in
+`Tools - Options - Text Editor - XML`.
+====
+
+=== Ignite Configuration Section Syntax
+
+The configuration section maps directly to `IgniteConfiguration` class:
+
+* Simple properties (strings, primitives, enums) map to XML attributes (attribute name = camelCased C# property name).
+* Complex properties map to nested XML elements (element name = camelCased C# property name).
+* When a complex property is an interface or abstract class, the `type` attribute is used to specify the type, using the *assembly-qualified name*. For built-in types (like `TcpDiscoverySpi` in the code sample above), the assembly name and namespace can be omitted.
+* When in doubt, consult the schema in `IgniteConfigurationSection.xsd`.
+
+== Configure With Spring XML
+
+Spring XML enables the native Java-based Ignite configuration method. A Spring config file can be provided via
+the `Ignition.Start(string)` method or the `IgniteConfiguration.SpringConfigUrl` property. This configuration method
+is useful when some Java property is not natively supported by Ignite.NET.
+
+When the `IgniteConfiguration.SpringConfigUrl` property is used, the Spring config is loaded first, and other
+`IgniteConfiguration` properties are applied on top of it.
+
+[tabs]
+--
+tab:Configure With Spring XML[]
+[source,xml]
+----
+<?xml version="1.0" encoding="UTF-8"?>
+
+<beans xmlns="http://www.springframework.org/schema/beans"
+       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+       xmlns:util="http://www.springframework.org/schema/util"
+       xsi:schemaLocation="http://www.springframework.org/schema/beans
+                           http://www.springframework.org/schema/beans/spring-beans.xsd
+                           http://www.springframework.org/schema/util
+                           http://www.springframework.org/schema/util/spring-util.xsd">
+    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="localHost" value="127.0.0.1"/>
+        <property name="gridName" value="grid1"/>
+        <property name="userAttributes">
+            <map>
+                <entry key="my_attr" value="value1"/>
+            </map>
+        </property>
+
+        <property name="cacheConfiguration">
+            <list>
+                <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                    <property name="name" value="cache1"/>
+                    <property name="startSize" value="10"/>
+                </bean>
+            </list>
+        </property>
+
+        <property name="discoverySpi">
+            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+                <property name="ipFinder">
+                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                        <property name="addresses">
+                            <list>
+                                <value>127.0.0.1:47500..47509</value>
+                            </list>
+                        </property>
+                    </bean>
+                </property>
+                <property name="socketTimeout" value="300" />
+            </bean>
+        </property>
+    </bean>
+</beans>
+----
+tab:Use in C#[]
+[source,csharp]
+----
+var ignite = Ignition.Start("spring-config.xml");
+----
+--
+
diff --git a/docs/_docs/net-specific/net-cross-platform-support.adoc b/docs/_docs/net-specific/net-cross-platform-support.adoc
new file mode 100644
index 0000000..b343665
--- /dev/null
+++ b/docs/_docs/net-specific/net-cross-platform-support.adoc
@@ -0,0 +1,65 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Cross-Platform Support
+
+== Overview
+
+Starting with version 2.4, Ignite.NET supports .NET Core. It is possible to run .NET nodes and develop Ignite.NET
+applications for Linux and macOS, as well as Windows.
+
+== .NET Core
+
+*Requirements:*
+
+* https://www.microsoft.com/net/download/[.NET Core SDK 2.0+, window=_blank]
+* http://www.oracle.com/technetwork/java/javase/downloads/index.html[Java 8+, window=_blank] (macOS requires a JDK; on other platforms a JRE works)
+
+*Running Examples*
+
+The https://ignite.apache.org/download.cgi#binaries[binary distribution, window=_blank] includes .NET Core examples:
+
+* Download https://ignite.apache.org/download.cgi#binaries[binary distribution, window=_blank] from the Ignite website and extract into any directory.
+* `cd platforms/dotnet/examples/dotnetcore`
+* `dotnet run`
+
+== Java Detection
+
+Ignite.NET looks for a Java installation directory in the following places:
+
+* `HKLM\Software\JavaSoft\Java Runtime Environment` (Windows)
+* `/usr/bin/java` (Linux)
+* `/Library/Java/JavaVirtualMachines` (macOS)
+
+If you changed the default location of Java, specify the actual path using one of the methods below (see the sketch after this list):
+
+* Set the `IgniteConfiguration.JvmDllPath` property
+* or set the `JAVA_HOME` environment variable
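+
+For instance, a minimal sketch (the `jvm.dll` path below is illustrative; adjust it to your actual Java installation):
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var cfg = new IgniteConfiguration
+{
+    // Path to the JVM library inside the Java installation.
+    JvmDllPath = @"C:\Program Files\Java\jdk1.8.0_144\jre\bin\server\jvm.dll"
+};
+
+var ignite = Ignition.Start(cfg);
+----
+--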
+
+== Known Issues
+
+*No Java runtime present, requesting install*
+
+Java `8u151` has a known bug on macOS: https://bugs.openjdk.java.net/browse/JDK-7131356[JDK-7131356, window=_blank]. Make sure to install `8u152` or later.
+
+*Serializing delegates is not supported on this platform*
+
+.NET Core does not support delegate serialization: `System.MulticastDelegate.GetObjectData`
+just https://github.com/dotnet/coreclr/blob/master/src/mscorlib/src/System/MulticastDelegate.cs#L52[throws an exception, window=_blank],
+so Ignite.NET cannot serialize delegates or objects containing them.
+
+*Could not load file or assembly 'System.Configuration.ConfigurationManager'*
+
+This is a known https://github.com/dotnet/standard/issues/506[.NET issue (506), window=_blank]; in some cases an additional package reference is required:
+
+* `dotnet add package System.Configuration.ConfigurationManager`
diff --git a/docs/_docs/net-specific/net-deployment-options.adoc b/docs/_docs/net-specific/net-deployment-options.adoc
new file mode 100644
index 0000000..752a78d
--- /dev/null
+++ b/docs/_docs/net-specific/net-deployment-options.adoc
@@ -0,0 +1,152 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Application Deployment Options
+
+== Overview
+
+Apache Ignite.NET consists of .NET assemblies and Java JAR files. The .NET assemblies are referenced by your project and
+are copied to the output folder automatically during the build. The JAR files have to be copied manually;
+Ignite.NET discovers them via the `IgniteHome` or `JvmClasspath` settings.
+
+This page introduces the most commonly used deployment options for Ignite.NET nodes.
+
+== Full Binary Package Deployment
+
+* Copy the https://ignite.apache.org[whole Ignite distribution package, window=_blank] along with your application
+* Set the `IGNITE_HOME` environment variable or `IgniteConfiguration.IgniteHome` setting to point to that folder
+
+== NuGet Deployment
+
+The post-build event is updated automatically during the Ignite.NET NuGet package installation to copy the JAR files to
+the `Libs` folder in the output directory (see link:quick-start/dotnet[Getting Started]).
+Make sure to include that `Libs` folder when distributing your binaries.
+
+Make sure `IGNITE_HOME` is not set globally. Normally you don't need to set `IGNITE_HOME` with NuGet, except for
+ASP.NET deployments (see below).
+
+[tabs]
+--
+tab:Post-Build Event[]
+[source,shell]
+----
+if not exist "$(TargetDir)Libs" md "$(TargetDir)Libs"
+xcopy /s /y "$(SolutionDir)packages\Apache.Ignite.1.6.0\Libs\*.*" "$(TargetDir)Libs"
+----
+--
+
+== Custom Deployment
+
+The JAR files are located in the `libs` folder of the binary distribution and NuGet package.
+The minimum set of JAR files for Ignite.NET is:
+
+* `ignite-core-{VER}.jar`
+* `cache-api-1.0.0.jar`
+* `ignite-indexing` folder (if SQL queries are used)
+* `ignite-spring` folder (if a Spring XML configuration is used)
+
+=== Deploying JAR Files to the Default Location
+
+* Copy the JAR files to the `Libs` folder next to `Apache.Ignite.Core.dll`
+* Do not set the `IgniteConfiguration.JvmClasspath` and `IgniteConfiguration.IgniteHome` properties or the `IGNITE_HOME` environment variable
+
+=== Deploying JAR Files to an Arbitrary Location
+
+* Copy the JAR files to any location
+* Set the `IgniteConfiguration.JvmClasspath` property to a semicolon-separated string of paths to each JAR file
+* Do not set the `IGNITE_HOME` environment variable or the `IgniteConfiguration.IgniteHome` property
+
+[tabs]
+--
+tab:IgniteConfiguration.JvmClasspath Example[]
+[source,shell]
+----
+c:\ignite-jars\ignite-core-1.5.0.final.jar;c:\ignite-jars\cache-api-1.0.0.jar
+----
+--
+
+== ASP.NET Deployment
+
+Either `JvmClasspath` or `IgniteHome` has to be set explicitly when using Ignite in a web environment (IIS and IIS Express),
+because the DLL files are copied to temporary folders, and Ignite cannot locate the JAR files automatically.
+
+You can set `IgniteHome` like this in an ASP.NET environment:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+Ignition.Start(new IgniteConfiguration
+{
+    IgniteHome = HttpContext.Current.Server.MapPath(@"~\bin\")
+});
+----
+--
+
+Alternatively, `IGNITE_HOME` can be set globally. Add this line at the top of the `Application_Start` method in `Global.asax.cs`:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+Environment.SetEnvironmentVariable("IGNITE_HOME", HttpContext.Current.Server.MapPath(@"~\bin\"));
+----
+--
+
+Finally, you can use the following method to populate `JvmClasspath`:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+static string GetDefaultWebClasspath()
+{
+    var dir = HttpContext.Current.Server.MapPath(@"~\bin\libs");
+
+    return string.Join(";", Directory.GetFiles(dir, "*.jar"));
+}
+----
+--
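+
+For instance, a short sketch using the helper above:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+Ignition.Start(new IgniteConfiguration
+{
+    JvmClasspath = GetDefaultWebClasspath()
+});
+----
+--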
+
+== IIS Application Pool Lifecycle, AppDomains, and Ignite.NET
+
+There is a known problem with IIS: when a web application is restarted due to code changes or a manual restart,
+the application pool process remains alive while the AppDomain gets recycled.
+
+Ignite.NET automatically stops when the AppDomain is unloaded. However, a new domain may be started while the old one is still
+unloading, so the node from the old domain can have an `IgniteConfiguration.IgniteInstanceName` conflict with a node from the new domain.
+
+To fix this issue, either assign a unique `IgniteInstanceName` or set the
+`IgniteConfiguration.AutoGenerateIgniteInstanceName` property to `true`.
+
+[tabs]
+--
+tab:Use in C#[]
+[source,csharp]
+----
+var cfg = new IgniteConfiguration { AutoGenerateIgniteInstanceName = true };
+----
+tab:web.config[]
+[source,xml]
+----
+<igniteConfiguration autoGenerateIgniteInstanceName="true">
+  ...
+</igniteConfiguration>
+----
+--
+
+Refer to the http://stackoverflow.com/questions/42961879/how-do-i-retrieve-a-started-ignite-instance-when-a-website-restart-occurs-in-iis/[following StackOverflow discussion, window=_blank]
+for more details.
diff --git a/docs/_docs/net-specific/net-entity-framework-cache.adoc b/docs/_docs/net-specific/net-entity-framework-cache.adoc
new file mode 100644
index 0000000..5d2de15
--- /dev/null
+++ b/docs/_docs/net-specific/net-entity-framework-cache.adoc
@@ -0,0 +1,198 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Entity Framework 2nd Level Cache
+
+== Overview
+
+Entity Framework, like most other ORMs, can use caching on multiple levels.
+
+* First-level caching is performed by `DbContext` on the entity level (entities are cached within the corresponding `DbSet`).
+* Second-level caching operates on the `DataReader` level and holds raw query data (however, Entity Framework 6 has no
+out-of-the-box second-level caching mechanism).
+
+Ignite.NET provides an EF6 second level caching solution that stores data in a distributed Ignite cache. This is ideal
+for scenarios with multiple application servers using a single SQL database via Entity Framework - cached queries are
+shared between all machines in the cluster.
+
+== Installation
+* *Binary distribution*: add a reference to `Apache.Ignite.EntityFramework.dll`
+* *NuGet*: `Install-Package Apache.Ignite.EntityFramework`
+
+== Configuration
+
+Ignite.NET provides a custom `DbConfiguration` implementation that enables second-level caching: `Apache.Ignite.EntityFramework.IgniteDbConfiguration`.
+There are a number of ways to apply a `DbConfiguration` to the Entity Framework `DbContext`. See the following MSDN document
+for details: https://msdn.microsoft.com/en-us/library/jj680699[msdn.microsoft.com/en-us/library/jj680699, window=_blank].
+
+The simplest way to implement this is to use the `[DbConfigurationType]` attribute:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+[DbConfigurationType(typeof(IgniteDbConfiguration))]
+class MyContext : DbContext
+{
+  public virtual DbSet<Foo> Foos { get; set; }
+  public virtual DbSet<Bar> Bars { get; set; }
+}
+----
+--
+
+To customize caching behavior, create a class that inherits `IgniteDbConfiguration` and call one of the base constructors.
+The example below uses the most flexible base constructor:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+private class MyDbConfiguration : IgniteDbConfiguration
+{
+  public MyDbConfiguration()
+    : base(
+      // IIgnite instance to use
+      Ignition.Start(),
+      // Metadata cache configuration (small cache, does not tolerate data loss)
+      // Should be replicated or partitioned with backups
+      new CacheConfiguration("metaCache")
+      {
+        CacheMode = CacheMode.Replicated
+      },
+      // Data cache configuration (large cache, holds actual query results,
+      // tolerates data loss). Can have no backups.
+      new CacheConfiguration("dataCache")
+      {
+        CacheMode = CacheMode.Partitioned,
+        Backups = 0
+      },
+      // Custom caching policy.
+      new MyCachingPolicy())
+    {
+      // No-op.
+    }
+}
+
+// Apply custom configuration to the DbContext
+[DbConfigurationType(typeof(MyDbConfiguration))]
+class MyContext : DbContext
+{
+  ...
+}
+----
+--
+
+=== Caching Policy
+
+The caching policy controls the caching mode, expiration, and which entity sets are cached. With the default
+`null` policy, all entity sets are cached in the `ReadWrite` mode with no expiration. A caching policy can be configured
+by implementing the `IDbCachingPolicy` interface or inheriting the `DbCachingPolicy` class. The example below shows a sample implementation:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+public class MyCachingPolicy : IDbCachingPolicy
+{
+  /// <summary>
+  /// Determines whether the specified query can be cached.
+  /// </summary>
+  public virtual bool CanBeCached(DbQueryInfo queryInfo)
+  {
+    // This method is called before database call.
+    // Cache only Persons.
+    return queryInfo.AffectedEntitySets.All(x => x.Name == "Person");
+  }
+
+  /// <summary>
+  /// Determines whether specified number of rows should be cached.
+  /// </summary>
+  public virtual bool CanBeCached(DbQueryInfo queryInfo, int rowCount)
+  {
+    // This method is called after database call.
+    // Cache only queries that return less than 1000 rows.
+    return rowCount < 1000;
+  }
+
+  /// <summary>
+  /// Gets the absolute expiration timeout for a given query.
+  /// </summary>
+  public virtual TimeSpan GetExpirationTimeout(DbQueryInfo queryInfo)
+  {
+    // Cache for 5 minutes.
+    return TimeSpan.FromMinutes(5);
+  }
+
+  /// <summary>
+  /// Gets the caching strategy for a given query.
+  /// </summary>
+  public virtual DbCachingMode GetCachingMode(DbQueryInfo queryInfo)
+  {
+    // Cache with invalidation.
+    return DbCachingMode.ReadWrite;
+  }
+}
+----
+--
+
+=== Caching Modes
+
+[cols="1,3",opts="header"]
+|===
+|DbCachingMode |Description
+|`ReadOnly`| Read-only mode, never invalidates. Database updates are ignored in this mode. Once query results have been
+cached, they are kept in cache until expired (forever when no expiration is specified). This mode is suitable for data
+that is not expected to change (like a list of countries and other dictionary data).
+|`ReadWrite`| Read-write mode. Cached data is invalidated when the underlying entity set changes. This is the "normal" cache mode
+that always provides correct query results. Keep in mind that this mode works correctly only when all database changes
+are performed via a `DbContext` with Ignite caching configured. Other database updates are not tracked.
+|===
+
+== app.config & web.config
+
+Ignite caching can be enabled in the config files by providing an assembly-qualified type name of `IgniteDbConfiguration` (or your class that inherits it):
+
+[tabs]
+--
+tab:app.config[]
+[source,xml]
+----
+<entityFramework codeConfigurationType="Apache.Ignite.EntityFramework.IgniteDbConfiguration, Apache.Ignite.EntityFramework">
+    ...Your EF config...
+</entityFramework>
+----
+--
+
+== Advanced Configuration
+
+When it is not possible to inherit `IgniteDbConfiguration` (because your configuration class already inherits some other class), call the
+`IgniteDbConfiguration.InitializeIgniteCaching` static method from the constructor, passing `this` as the first argument:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+private class MyDbConfiguration : OtherDbConfiguration
+{
+  public MyDbConfiguration() : base(...)
+  {
+    IgniteDbConfiguration.InitializeIgniteCaching(this, Ignition.GetIgnite(), null, null, null);
+  }
+}
+----
+--
diff --git a/docs/_docs/net-specific/net-java-services-execution.adoc b/docs/_docs/net-specific/net-java-services-execution.adoc
new file mode 100644
index 0000000..19624fa
--- /dev/null
+++ b/docs/_docs/net-specific/net-java-services-execution.adoc
@@ -0,0 +1,116 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Java Services Execution from Ignite.NET
+
+== Overview
+
+Ignite.NET can work with Java services the same way as with .NET services. To call a Java service from a .NET application,
+you need to know the interface of the service.
+
+== Example
+
+Let's review how to use this capability with a usage example.
+
+=== Create Java Service
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+public class MyJavaService implements Service {
+  // Service method to be called from .NET
+  public String testToUpper(String x) {
+    return x.toUpperCase();
+  }
+
+  // Service interface implementation
+  @Override public void cancel(ServiceContext context) { /* No-op. */ }
+  @Override public void init(ServiceContext context) throws Exception { /* No-op. */ }
+  @Override public void execute(ServiceContext context) throws Exception { /* No-op. */ }
+}
+----
+--
+
+This Java service can be deployed on any node (.NET, C{pp}, or Java-only), so there are no restrictions on deployment options:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+ignite.services().deployClusterSingleton("myJavaSvc", new MyJavaService());
+----
+--
+
+=== Call Java Service From .NET
+
+Create a version of the service interface for .NET:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+// Interface can have any name
+interface IJavaService
+{
+  // Method must have the same name (case-sensitive) and same signature:
+  // argument types and order.
+  // Argument names and return type do not matter.
+  string testToUpper(string str);
+}
+----
+--
+
+Get the service proxy and invoke the method:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var config = new IgniteConfiguration
+{
+  // Make sure the Java service class is on the classpath of all nodes, including the .NET ones.
+  JvmClasspath = @"c:\my-project\src\Java\target\classes\"
+};
+
+var ignite = Ignition.Start(config);
+
+// Make sure to use the same service name as in deployment
+var prx = ignite.GetServices().GetServiceProxy<IJavaService>("myJavaSvc");
+string result = prx.testToUpper("invoking Java service...");
+Console.WriteLine(result);
+----
+--
+
+== Interface Methods Mapping
+
+The .NET service interface is mapped to its Java counterpart dynamically, at the time of the method invocation:
+
+* It is not necessary to specify all Java service methods in the .NET interface.
+* The .NET interface can have members that are not present in the Java service. No exception is thrown until you call one of these missing methods.
+
+The Java methods are resolved in the following way:
+
+* Ignite looks for a method with the specified name and parameter count. If only one such method exists, Ignite uses it.
+* Among the matched methods, Ignite looks for a method with compatible arguments (via `Class.isAssignableFrom`).
+Ignite invokes the matched method or throws an exception in case of ambiguity.
+* The method return type is ignored, since neither .NET nor Java allows identical methods that differ only in return type.
+
+See link:net-specific/platform-interoperability[Platform Interoperability, Type Compatibility section] for details on
+method argument and result mapping. Note that `params`/varargs are also supported, since in both .NET and Java these are
+syntactic sugar for object arrays.
diff --git a/docs/_docs/net-specific/net-linq.adoc b/docs/_docs/net-specific/net-linq.adoc
new file mode 100644
index 0000000..006535a
--- /dev/null
+++ b/docs/_docs/net-specific/net-linq.adoc
@@ -0,0 +1,256 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Apache Ignite.NET LINQ Provider
+
+== Overview
+
+Apache Ignite.NET includes a LINQ provider that is integrated with the Ignite SQL APIs. You can avoid dealing with SQL
+syntax directly and write queries in C# with LINQ. The Ignite LINQ provider supports all features of ANSI-99 SQL, including
+distributed joins, groupings, aggregates, and sorting.
+
+== Installation
+
+* If you use the Ignite *binary distribution*: add a reference to `Apache.Ignite.Linq.dll`
+* If you use *NuGet*: `Install-Package Apache.Ignite.Linq`
+
+== Configuration
+
+SQL indexes need to be configured in the same way as for regular SQL queries; see the link:SQL/indexes[Defining Indexes section]
+for details.
+
+== Usage
+
+The `Apache.Ignite.Linq.CacheLinqExtensions` class is the entry point for the LINQ provider.
+Obtain a queryable instance over an Ignite cache by calling the `AsCacheQueryable` method and use LINQ on it:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+ICache<EmployeeKey, Employee> employeeCache = ignite.GetCache<EmployeeKey, Employee>(CacheName);
+
+IQueryable<ICacheEntry<EmployeeKey, Employee>> queryable = employeeCache.AsCacheQueryable();
+
+Employee[] interns = queryable.Where(emp => emp.Value.IsIntern).Select(emp => emp.Value).ToArray();
+----
+--
+
+[CAUTION]
+====
+[discrete]
+You can use LINQ directly on the cache instance, without calling `AsCacheQueryable()`. However, this results in a LINQ
+to Objects query that fetches and processes the entire cache data set locally, which is very inefficient.
+====
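+
+For contrast, a short sketch of both approaches, reusing the `employeeCache` instance from the example above:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+// LINQ to Objects: fetches and filters the entire cache locally - avoid this.
+var local = employeeCache.Where(emp => emp.Value.IsIntern).ToArray();
+
+// Ignite LINQ provider: the filter is translated to SQL and executed in the cluster.
+var distributed = employeeCache.AsCacheQueryable().Where(emp => emp.Value.IsIntern).ToArray();
+----
+--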
+
+== Introspection
+
+The Ignite LINQ provider uses `ICache.QueryFields` under the hood. You can examine the produced `SqlFieldsQuery` by casting the
+`IQueryable` to `ICacheQueryable` at any point before materializing the query (`ToList`, `ToArray`, etc.):
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+// Create query
+var query = ignite.GetCache<EmployeeKey, Employee>(CacheName).AsCacheQueryable().Where(emp => emp.Value.IsIntern);
+
+// Cast to ICacheQueryable
+var cacheQueryable = (ICacheQueryable) query;
+
+// Get resulting fields query
+SqlFieldsQuery fieldsQuery = cacheQueryable.GetFieldsQuery();
+
+// Examine generated SQL
+Console.WriteLine(fieldsQuery.Sql);
+
+// Output: select _T0._key, _T0._val from "persons".Person as _T0 where _T0.IsIntern
+----
+--
+
+== Projections
+
+Simple `Where` queries operate on `ICacheEntry` objects. You can select Key, Value, or any of the Key and Value fields
+separately. Multiple fields can be selected using anonymous types.
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var query = ignite.GetCache<EmployeeKey, Employee>(CacheName).AsCacheQueryable().Where(emp => emp.Value.IsIntern);
+
+IQueryable<EmployeeKey> keys = query.Select(emp => emp.Key);
+
+IQueryable<Employee> values = query.Select(emp => emp.Value);
+
+IQueryable<string> names = values.Select(emp => emp.Name);
+
+var custom = query.Select(emp => new {Id = emp.Key, Name = emp.Value.Name, Age = emp.Value.Age});
+----
+--
+
+== Compiled Queries
+
+The LINQ provider incurs a certain overhead from expression parsing and SQL generation. You may want to eliminate this
+overhead for frequently used queries.
+
+The `Apache.Ignite.Linq.CompiledQuery` class supports query compilation. Call the `Compile` method to create a delegate
+that represents the compiled query. All query parameters must be expressed as delegate parameters.
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var queryable = ignite.GetCache<EmployeeKey, Employee>(CacheName).AsCacheQueryable();
+
+// Regular query
+var persons = queryable.Where(emp => emp.Value.Age > 21);
+var result = persons.ToArray();
+
+// Corresponding compiled query
+var compiledQuery = CompiledQuery.Compile((int age) => queryable.Where(emp => emp.Value.Age > age));
+IQueryCursor<ICacheEntry<EmployeeKey, Employee>> cursor = compiledQuery(21);
+result = cursor.ToArray();
+----
+--
+
+Refer to the https://ptupitsyn.github.io/LINQ-vs-SQL-in-Ignite/[LINQ vs SQL blog post, window=_blank] for more details
+on the LINQ provider performance.
+
+== Joins
+
+The LINQ provider supports JOINs that span several caches/tables and nodes.
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var persons = ignite.GetCache<int, Person>("personCache").AsCacheQueryable();
+var orgs = ignite.GetCache<int, Organization>("orgCache").AsCacheQueryable();
+
+// SQL join on Person and Organization to find persons working for Apache
+var qry = from person in persons from org in orgs
+          where person.Value.OrgId == org.Value.Id
+          && org.Value.Name == "Apache"
+          select person;
+
+foreach (var cacheEntry in qry)
+    Console.WriteLine(cacheEntry.Value);
+
+// Same query with method syntax
+qry = persons.Join(orgs, person => person.Value.OrgId, org => org.Value.Id,
+(person, org) => new {person, org}).Where(p => p.org.Name == "Apache").Select(p => p.person);
+----
+--
+
+== Contains
+
+`ICollection.Contains` is supported, which is useful when you want to retrieve data by a set of IDs, for example:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var persons = ignite.GetCache<int, Person>("personCache").AsCacheQueryable();
+var ids = new int[] {1, 20, 56};
+
+var personsByIds = persons.Where(p => ids.Contains(p.Value.Id));
+----
+--
+
+This query translates into the `... where Id IN (?, ?, ?)` command. However, keep in mind that this form cannot be used
+in compiled queries because the argument count varies. A better alternative is to use `Join` on the `ids` collection:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var persons = ignite.GetCache<int, Person>("personCache").AsCacheQueryable();
+var ids = new int[] {1, 20, 56};
+
+var personsByIds = persons.Join(ids,
+                                person => person.Value.Id,
+                                id => id,
+                                (person, id) => person);
+----
+--
+
+This LINQ query translates to a temp table join:
+`select _T0._KEY, _T0._VAL from "person".Person as _T0 inner join table (F0 int = ?) _T1 on (_T1.F0 = _T0.ID)`,
+and has a single array parameter, so the plan can be cached properly, and compiled queries are also allowed.
+
+== Supported SQL Functions
+
+Below is a list of .NET functions and their SQL equivalents that are supported by the Ignite LINQ provider.
+
+[width="100%",cols="1,3",opts="header"]
+|===
+|.NET function |SQL equivalent
+|`String.Length`| `LENGTH`
+|`String.ToLower`| `LOWER`
+|`String.ToUpper`| `UPPER`
+|`String.StartsWith("foo")`| `LIKE 'foo%'`
+|`String.EndsWith("foo")`| `LIKE '%foo'`
+|`String.Contains("foo")`| `LIKE '%foo%'`
+|`String.IndexOf("abc")`| `INSTR(MyField, 'abc') - 1`
+|`String.IndexOf("abc", 3)`| `INSTR(MyField, 'abc', 3) - 1`
+|`String.Substring("abc", 4)`| `SUBSTRING(MyField, 4 + 1)`
+|`String.Substring("abc", 4, 7)`| `SUBSTRING(MyField, 4 + 1, 7)`
+|`String.Trim()`| `TRIM`
+|`String.TrimStart()`| `LTRIM`
+|`String.TrimEnd()`| `RTRIM`
+|`String.Trim('x')`| `TRIM(MyField, 'x')`
+|`String.TrimStart('x')`| `LTRIM(MyField, 'x')`
+|`String.TrimEnd('x')`| `RTRIM(MyField, 'x')`
+|`String.Replace`| `REPLACE`
+|`String.PadLeft`| `LPAD`
+|`String.PadRight`| `RPAD`
+|`Regex.Replace`| `REGEXP_REPLACE`
+|`Regex.IsMatch`| `REGEXP_LIKE`
+|`Math.Abs`| `ABS`
+|`Math.Acos`| `ACOS`
+|`Math.Asin`| `ASIN`
+|`Math.Atan`| `ATAN`
+|`Math.Atan2`| `ATAN2`
+|`Math.Ceiling`| `CEILING`
+|`Math.Cos`| `COS`
+|`Math.Cosh`| `COSH`
+|`Math.Exp`| `EXP`
+|`Math.Floor`| `FLOOR`
+|`Math.Log`| `LOG`
+|`Math.Log10`| `LOG10`
+|`Math.Pow`| `POWER`
+|`Math.Round`| `ROUND`
+|`Math.Sign`| `SIGN`
+|`Math.Sin`| `SIN`
+|`Math.Sinh`| `SINH`
+|`Math.Sqrt`| `SQRT`
+|`Math.Tan`| `TAN`
+|`Math.Tanh`| `TANH`
+|`Math.Truncate`| `TRUNCATE`
+|`DateTime.Year`| `YEAR`
+|`DateTime.Month`| `MONTH`
+|`DateTime.Day`| `DAY_OF_MONTH`
+|`DateTime.DayOfYear`| `DAY_OF_YEAR`
+|`DateTime.DayOfWeek`| `DAY_OF_WEEK - 1`
+|`DateTime.Hour`| `HOUR`
+|`DateTime.Minute`| `MINUTE`
+|`DateTime.Second`| `SECOND`
+|===
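+
+For instance, a hedged sketch of how these translations surface in a query (the SQL in the comment is approximate):
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var people = ignite.GetCache<int, Person>("personCache").AsCacheQueryable();
+
+// Translates roughly to: ... where UPPER(_T0.NAME) LIKE 'A%'
+var result = people
+    .Where(p => p.Value.Name.ToUpper().StartsWith("A"))
+    .ToArray();
+----
+--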
diff --git a/docs/_docs/net-specific/net-logging.adoc b/docs/_docs/net-specific/net-logging.adoc
new file mode 100644
index 0000000..f499124
--- /dev/null
+++ b/docs/_docs/net-specific/net-logging.adoc
@@ -0,0 +1,133 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite.NET Logging
+
+== Overview
+By default, Ignite uses the underlying Java log4j logging system. Log messages from both .NET and Java are recorded there.
+You can also write to this log via the `IIgnite.Logger` instance:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var ignite = Ignition.Start();
+ignite.Logger.Info("Hello World!");
+----
+--
+
+The `LoggerExtensions` class provides convenient shortcuts for the `ILogger.Log` method.
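+For example (a short sketch; the exact set of shortcut overloads may vary between versions):
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+ignite.Logger.Debug("Starting cache warm-up");
+ignite.Logger.Warn("Node memory is running low");
+----
+--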
+
+== Custom Logger
+
+You can provide a logger implementation via the `IgniteConfiguration.Logger` and `ILogger` interface.
+Messages from both .NET and Java will be redirected there.
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var cfg = new IgniteConfiguration
+{
+  Logger = new MemoryLogger()
+};
+
+var ignite = Ignition.Start(cfg);
+
+class MemoryLogger : ILogger
+{
+  // Logger can be called from multiple threads, use concurrent collection
+  private readonly ConcurrentBag<string> _messages = new ConcurrentBag<string>();
+
+  public void Log(LogLevel level, string message, object[] args,
+                  IFormatProvider formatProvider, string category,
+                  string nativeErrorInfo, Exception ex)
+  {
+    _messages.Add(message);
+  }
+
+  public bool IsEnabled(LogLevel level)
+  {
+    // Accept any level.
+    return true;
+  }
+}
+----
+tab:app.config[]
+[source,xml]
+----
+<igniteConfiguration>
+  <logger type="MyNamespace.MemoryLogger, MyAssembly" />
+</igniteConfiguration>
+----
+--
+
+== NLog & log4net Loggers
+
+Ignite.NET provides `ILogger` implementations for http://nlog-project.org/[NLog, window=_blank] and https://logging.apache.org/log4net/[Apache log4net, window=_blank].
+They are included in the binary package (`Apache.Ignite.NLog.dll` and `Apache.Ignite.Log4Net.dll`) and can be installed via NuGet:
+
+* `Install-Package Apache.Ignite.NLog`
+* `Install-Package Apache.Ignite.Log4Net`
+
+NLog and Log4Net use statically defined configuration, so there is nothing to configure in Ignite besides `IgniteConfiguration.Logger`:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var cfg = new IgniteConfiguration
+{
+  Logger = new IgniteNLogLogger()  // or IgniteLog4NetLogger
+};
+
+var ignite = Ignition.Start(cfg);
+----
+tab:app.config[]
+[source,xml]
+----
+<igniteConfiguration>
+  <logger type="Apache.Ignite.NLog.IgniteNLogLogger, Apache.Ignite.NLog" />
+</igniteConfiguration>
+----
+--
+
+Simple file-based logging with NLog can be set up like this:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var nlogConfig = new LoggingConfiguration();
+
+var fileTarget = new FileTarget
+{
+  FileName = "ignite_nlog.log"
+};
+nlogConfig.AddTarget("logfile", fileTarget);
+
+nlogConfig.LoggingRules.Add(new LoggingRule("*", LogLevel.Trace, fileTarget));
+LogManager.Configuration = nlogConfig;
+
+var igniteConfig = new IgniteConfiguration
+{
+  Logger = new IgniteNLogLogger()
+};
+Ignition.Start(igniteConfig);
+----
+--
diff --git a/docs/_docs/net-specific/net-platform-cache.adoc b/docs/_docs/net-specific/net-platform-cache.adoc
new file mode 100644
index 0000000..73a09ca
--- /dev/null
+++ b/docs/_docs/net-specific/net-platform-cache.adoc
@@ -0,0 +1,125 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= .NET Platform Cache
+
+CAUTION: Experimental API
+
+Ignite.NET provides an additional layer of caching in the link:https://docs.microsoft.com/en-us/dotnet/standard/clr[CLR] heap. The platform cache keeps a deserialized copy of every cache entry that is present on the current node, thus greatly improving cache read performance at the cost of increased memory usage.
+
+[NOTE]
+====
+Platform caches are bypassed within transactions: when a transaction is active, `cache.Get` and the other APIs listed below do not use the platform cache. Transaction support is coming soon.
+====
+
+
+== Configuring Platform Cache
+
+:dotnetCodeFile: code-snippets/dotnet/PlatformCache.cs
+
+
+=== Server Nodes
+
+The platform cache is configured once for all server nodes by setting `CacheConfiguration.PlatformCacheConfiguration` to a non-null value. On server nodes, the platform cache stores *all primary and backup cache entries assigned to the given node* in .NET memory.
+Entries are updated in real time and are guaranteed to be up to date at any given moment, even before user code accesses them.
+
+CAUTION: The platform cache effectively doubles memory usage on server nodes: every cache entry is stored twice, serialized in unmanaged (off-heap) memory and deserialized in the CLR heap.
+
+[tabs]
+--
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetCodeFile}[tag=platformCacheConf,indent=0]
+----
+--
+
+
+=== Client Nodes
+
+Platform caches on client nodes require a link:configuring-caches/near-cache[Near Cache] to be configured, since client nodes do not store data otherwise. The platform cache on a client node keeps the same set of entries as the near cache on that node, so the near cache eviction policy effectively applies to the platform cache as well.
+
+[tabs]
+--
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetCodeFile}[tag=platformCacheConfClient,indent=0]
+----
+--
+
+
+== Supported APIs
+
+The following `ICache<K, V>` APIs use a platform cache (including the corresponding async versions):
+
+* `Get`, `TryGet`, indexer (`ICache[k]`)
+* `GetAll` (reads from the platform cache first and falls back to the distributed cache when necessary)
+* `ContainsKey`, `ContainsKeys`
+* `LocalPeek`, `TryLocalPeek`
+* `GetLocalEntries`
+* `GetLocalSize`
+* `Query` with `ScanQuery`
+** Uses the platform cache to pass values to `ScanQuery.Filter`
+** Iterates over the platform cache directly when `ScanQuery.Local` is `true` and `ScanQuery.Partition` is not null
+
+
+== Access Platform Cache Data Directly
+
+You don't need to change your code to take advantage of a platform cache. Existing calls to `ICache.Get` and the other APIs listed above are served from the platform cache when possible, improving performance. When a given entry is not present in the platform cache, Ignite falls back to the normal path and retrieves the value from the cluster.
+
+However, in some cases you may wish to access the platform cache exclusively, avoiding Java and network calls. The `Local` APIs in combination with `CachePeekMode.Platform` allow you to do just that:
+
+[tabs]
+--
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetCodeFile}[tag=platformCacheAccess,indent=0]
+----
+--
+
+
+== Advanced Configuration
+
+=== Binary Mode
+
+In order to use link:key-value-api/binary-objects[Binary Objects] together with platform cache, set `PlatformCacheConfiguration.KeepBinary` to `true`:
+
+[tabs]
+--
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetCodeFile}[tag=advancedConfigBinaryMode,indent=0]
+----
+--
+
+
+=== Key and Value Types
+
+When using Ignite cache with link:https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/value-types[Value Types], you should set `PlatformCacheConfiguration.KeyTypeName` and `ValueTypeName` accordingly to achieve maximum performance and reduce GC pressure:
+
+[tabs]
+--
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetCodeFile}[tag=advancedConfigKeyValTypes,indent=0]
+----
+--
+
+Ignite uses `ConcurrentDictionary<object, object>` by default to store platform cache data, because the actual types are unknown beforehand. This results in link:https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/types/boxing-and-unboxing[boxing and unboxing] for value types, reducing performance and allocating more memory. When `KeyTypeName` and `ValueTypeName` are set in `PlatformCacheConfiguration`, Ignite uses those types instead of `object` when creating the internal `ConcurrentDictionary`.
+
+CAUTION: Incorrect `KeyTypeName` and/or `ValueTypeName` settings can cause runtime cast exceptions.
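+
+For reference, a minimal inline configuration sketch, assuming `int` keys and `string` values:
+
+[tabs]
+--
+tab:C#/.NET[]
+[source,csharp]
+----
+var cacheCfg = new CacheConfiguration("myCache")
+{
+    PlatformCacheConfiguration = new PlatformCacheConfiguration
+    {
+        // Avoids boxing: the internal dictionary becomes ConcurrentDictionary<int, string>.
+        KeyTypeName = typeof(int).FullName,
+        ValueTypeName = typeof(string).FullName
+    }
+};
+----
+--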
diff --git a/docs/_docs/net-specific/net-platform-interoperability.adoc b/docs/_docs/net-specific/net-platform-interoperability.adoc
new file mode 100644
index 0000000..a9cc397
--- /dev/null
+++ b/docs/_docs/net-specific/net-platform-interoperability.adoc
@@ -0,0 +1,195 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite.NET and Platform Interoperability
+
+Ignite allows different platforms, such as .NET, Java and C{pp}, to interoperate with each other.
+Classes and objects defined and written to Ignite by one platform can be read and used by another platform.
+
+== Identifiers
+
+To achieve interoperability, Ignite writes objects in a common binary format. This format encodes object types and
+fields using integer identifiers.
+
+To transform an object's type and field names into integer values, Ignite passes them through two stages:
+
+* Name transformation: the full type name and the field names are passed to the `IBinaryNameMapper` interface and converted to a common form.
+* ID transformation: the resulting strings are passed to `IBinaryIdMapper` to produce a type ID or a field ID.
+
+Mappers can be set either globally in `BinaryConfiguration` or for a concrete type in `BinaryTypeConfiguration`.
+
+Java has the same interfaces, `BinaryNameMapper` and `BinaryIdMapper`, which are set on `BinaryConfiguration` or `BinaryTypeConfiguration`.
+
+.NET and Java types must map to the same type ID and relevant fields must map to the same field ID.
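+
+For illustration, a minimal custom name mapper sketch (a hand-rolled equivalent of the simple-name mapping shown further below):
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+class SimpleNameMapper : IBinaryNameMapper
+{
+    // Strips the namespace, so 'MyCompany.Model.Person' maps to 'Person'.
+    public string GetTypeName(string name)
+    {
+        var idx = name.LastIndexOf('.');
+        return idx < 0 ? name : name.Substring(idx + 1);
+    }
+
+    // Field names are passed through unchanged.
+    public string GetFieldName(string name)
+    {
+        return name;
+    }
+}
+----
+--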
+
+== Default Behavior
+
+The .NET part of Ignite.NET applies the following conversions by default:
+
+* Name transformation: the `System.Type.FullName` property is used for non-generic types; field and property names are unchanged.
+* ID transformation: names are converted to lower case, and the ID is calculated in the same way as by the `java.lang.String.hashCode()` method in Java.
+
+The Java part of Ignite.NET applies the following conversions by default:
+
+* Name transformation: the `Class.getName()` method is used to get the class name; field names are unchanged.
+* ID transformation: names are converted to lower case and then `java.lang.String.hashCode()` is used to calculate IDs.
+
+For example, the following two types automatically map to each other if they are outside of namespaces (.NET) and packages (Java):
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+class Person
+{
+    public int Id { get; set; }
+    public string Name { get; set; }
+    public byte[] Data { get; set; }
+}
+----
+tab:Java[]
+[source,java]
+----
+class Person
+{
+    public int id;
+    public String name;
+    public byte[] data;
+}
+----
+--
+
+However, types normally reside within some namespace or package, and the naming conventions for packages and namespaces
+differ between Java and .NET, so it may be problematic to keep the .NET namespace identical to the Java package.
+
+A simple name mapper (which ignores the namespace) can be used to avoid this problem. It should be configured on both the .NET and Java sides:
+
+[tabs]
+--
+tab:Java Spring XML[]
+[source,xml]
+----
+<bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+    ...
+    <property name="binaryConfiguration">
+        <bean class="org.apache.ignite.configuration.BinaryConfiguration">
+            <property name="nameMapper">
+                <bean class="org.apache.ignite.binary.BinaryBasicNameMapper">
+                    <property name="simpleName" value="true"/>
+                </bean>
+            </property>
+        </bean>
+    </property>
+    ...
+</bean>
+----
+tab:C#[]
+[source,csharp]
+----
+var cfg = new IgniteConfiguration
+{
+  BinaryConfiguration = new BinaryConfiguration
+  {
+    NameMapper = new BinaryBasicNameMapper {IsSimpleName = true}
+  }
+};
+----
+tab:app.config[]
+[source,xml]
+----
+<igniteConfiguration>
+  <binaryConfiguration>
+    <nameMapper type="Apache.Ignite.Core.Binary.BinaryBasicNameMapper, Apache.Ignite.Core" isSimpleName="true" />
+  </binaryConfiguration>
+</igniteConfiguration>
+----
+--
+
+== Types Compatibility
+
+[width="100%",cols="1,3",opts="header"]
+|===
+|`C#`| `Java`
+|`bool`| `boolean`
+|`byte (*), sbyte`| `byte`
+|`short, ushort (*)`| `short`
+|`int, uint (*)`| `int`
+|`long, ulong (*)`| `long`
+|`char`| `char`
+|`float`| `float`
+|`double`| `double`
+|`decimal`| `java.math.BigDecimal (**)`
+|`string`| `java.lang.String`
+|`Guid`| `java.util.UUID`
+|`DateTime`| `java.util.Date, java.sql.Timestamp`
+|===
+(*) `byte`, `ushort`, `uint`, and `ulong` do not have Java counterparts and are mapped directly byte-by-byte (no range check).
+For example, a `byte` value of `200` in C# results in a signed `byte` value of `-56` in Java.
+
+(**) Java `BigDecimal` has arbitrary size and precision, while the C# `decimal` is fixed to 16 bytes and 28-29 digits of precision. Ignite.NET throws a `BinaryObjectException` if a `BigDecimal` value does not fit into `decimal` on deserialization.
+
+`Enum`: in Java, Ignite's `writeEnum` can only write ordinal values, while in .NET any numeric value can be assigned to an enum member.
+Note that custom enum-to-primitive value bindings are therefore not taken into account.
+
+[CAUTION]
+====
+[discrete]
+=== DateTime Serialization
+A .NET `DateTime` can be Local or UTC, while a Java `Timestamp` can only be UTC. Because of that, Ignite.NET can serialize
+`DateTime` in two ways: .NET style (works with non-UTC values, but does not work in SQL) or as a `Timestamp` (throws an
+exception on non-UTC values, but works properly in SQL). To enforce `Timestamp` serialization:
+
+* Reflective serialization: mark the field with `[QuerySqlField]`, or set `BinaryReflectiveSerializer.ForceTimestamp`
+to `true`; this can be done on a per-type basis, or globally like this:
+`IgniteConfiguration.BinaryConfiguration = new BinaryConfiguration { Serializer = new BinaryReflectiveSerializer { ForceTimestamp = true } }`
+
+* `IBinarizable`: use the `IBinaryWriter.WriteTimestamp` method.
+
+When it is not possible to modify the class to mark fields with `[QuerySqlField]` or implement `IBinarizable`, use the `IBinarySerializer` approach.
+See the link:net-specific/net-serialization[Serialization page] for more details.
+====
+
+== Collection Compatibility
+
+Arrays of simple types (from the table above) and arrays of objects are interoperable in all cases. For all other collections
+and arrays, the default behavior in Ignite.NET (with reflective serialization or `IBinaryWriter.WriteObject`) is to use `BinaryFormatter`,
+and the result cannot be read by Java code (this is done to properly support generics). To write collections in an interoperable
+format, implement the `IBinarizable` interface and use the `IBinaryWriter.WriteCollection`, `IBinaryWriter.WriteDictionary`,
+`IBinaryReader.ReadCollection`, and `IBinaryReader.ReadDictionary` methods.
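+
+A minimal sketch of the interoperable approach (the class and field names are illustrative):
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+using System.Collections.Generic;
+using System.Linq;
+using Apache.Ignite.Core.Binary;
+
+public class NamesHolder : IBinarizable
+{
+    public List<string> Names { get; set; }
+
+    public void WriteBinary(IBinaryWriter writer)
+    {
+        // Written in the interoperable format: readable from Java as a collection.
+        writer.WriteCollection("names", Names);
+    }
+
+    public void ReadBinary(IBinaryReader reader)
+    {
+        Names = reader.ReadCollection("names").Cast<string>().ToList();
+    }
+}
+----
+--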
+
+== Mixed-Platform Clusters
+
+Ignite, Ignite.NET, and Ignite.C{pp} nodes can join the same cluster.
+
+All platforms are built on top of Java, so any node can execute Java computations.
+However, .NET and C{pp} computations can be executed only by the corresponding nodes.
+
+The following Ignite.NET functionality is not supported when there is at least one non-.NET node in the cluster:
+
+* Scan Queries with a filter
+* Continuous Queries with a filter
+* `ICache.Invoke` methods
+* `ICache.LoadCache` with a filter
+* Services
+* `IMessaging.RemoteListen`
+* `IEvents.RemoteQuery`
+
+A blog post with a detailed walk-through: https://ptupitsyn.github.io/Ignite-Multi-Platform-Cluster/[Multi-Platform Ignite Cluster: Java + .NET, window=_blank]
+
+== Compute in Mixed-Platform Clusters
+
+The `ICompute.ExecuteJavaTask` methods work without limitations in any cluster. Other `ICompute` methods will execute
+closures only on .NET nodes.
diff --git a/docs/_docs/net-specific/net-plugins.adoc b/docs/_docs/net-specific/net-plugins.adoc
new file mode 100644
index 0000000..bc8211f
--- /dev/null
+++ b/docs/_docs/net-specific/net-plugins.adoc
@@ -0,0 +1,169 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Extending Ignite.NET With Custom Plugins
+
+== Overview
+
+The Ignite.NET plugin system allows you to extend the core Ignite.NET functionality with custom plugins. The best way to
+explain how Ignite plugins work is by looking at the life cycle of plugins.
+
+== IgniteConfiguration.PluginConfigurations
+
+First, an Apache Ignite plugin has to be registered via the `IgniteConfiguration.PluginConfigurations` property, which is
+a collection of `IPluginConfiguration` implementations. From a user's perspective, this is a manual process - a
+plugin's assembly has to be referenced and configured explicitly.
+
+The `IPluginConfiguration` interface has two members that interact with the Java part of Apache Ignite.NET; they are
+described in the next section. Besides those two members, an `IPluginConfiguration` implementation should contain all the
+other plugin-specific configuration properties.
+
+Another part of an `IPluginConfiguration` implementation is the mandatory `[PluginProviderType]` attribute that ties a
+plugin configuration to its plugin implementation. For example:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+[PluginProviderType(typeof(MyPluginProvider))]
+public class MyPluginConfiguration : IPluginConfiguration
+{
+    public string MyProperty { get; set; }  // Plugin-specific property
+
+    public int? PluginConfigurationClosureFactoryId
+    {
+        get { return null; }  // No Java part
+    }
+
+    public void WriteBinary(IBinaryRawWriter writer)
+    {
+        // No-op.
+    }
+}
+----
+--
+
+To recap, this is how plugins are added and initialized:
+
+* You add the `IPluginConfiguration` implementation instance to `IgniteConfiguration`.
+* You start an Ignite node with the prepared configuration.
+* Before the Ignite node initialization is finished, the Ignite plugin engine examines the `IPluginConfiguration` implementation
+for the `[PluginProviderType]` attribute and instantiates the specified class.
+
+== IPluginProvider
+
+The `IPluginProvider` implementation is the workhorse of the newly added plugin. It deals with the Ignite node life cycle
+by processing the calls to the `OnIgniteStart` and `OnIgniteStop` methods. In addition, it can provide an optional API
+to be used by an end user via the `GetPlugin<T>()` method.
+
+The first method to be invoked on the `IPluginProvider` implementation by the Ignite.NET engine is
+`Start(IPluginContext<TConfig> context)`, where `TConfig` is the plugin configuration type. `IPluginContext` provides access to the initial plugin
+configuration and all means of interacting with Ignite.
+
+When Ignite is being stopped, the `Stop` and `OnIgniteStop` methods are executed sequentially so that the plugin
+implementation can accomplish all cleanup and shutdown-related tasks.
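+
+A minimal provider sketch (`MyPlugin`, the user-facing API class, is hypothetical):
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+public class MyPluginProvider : IPluginProvider<MyPluginConfiguration>
+{
+    // Name is used to look the plugin up via IIgnite.GetPlugin.
+    public string Name { get { return "MyPlugin"; } }
+
+    public string Copyright { get { return ""; } }
+
+    public T GetPlugin<T>() where T : class
+    {
+        // Return the user-facing API instance.
+        return new MyPlugin() as T;
+    }
+
+    public void Start(IPluginContext<MyPluginConfiguration> context)
+    {
+        // Read context.PluginConfiguration and set up resources here.
+    }
+
+    public void Stop(bool cancel)
+    {
+        // Release resources here.
+    }
+
+    public void OnIgniteStart()
+    {
+        // The node is fully started at this point.
+    }
+
+    public void OnIgniteStop(bool cancel)
+    {
+        // The node is about to stop.
+    }
+}
+----
+--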
+
+== IIgnite.GetPlugin
+
+Plugins can expose a user-facing API, which is accessed via the `IIgnite.GetPlugin(string name)` method. The Ignite engine
+searches for the `IPluginProvider` with the passed name and calls `GetPlugin` on it:
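+
+A short usage sketch (assuming the hypothetical `MyPlugin` API class and provider name from above):
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+// The name must match the Name property of the plugin provider.
+var myPlugin = ignite.GetPlugin<MyPlugin>("MyPlugin");
+----
+--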
+
+== Interacting With Java
+
+The Ignite.NET plugin can interact with an Ignite Java plugin via the `PlatformTarget` & `IPlatformTarget` interface pair.
+
+=== Java-Specific Logic
+
+. Implement the `PlatformTarget` interface, which is a communication point with .NET:
++
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+class MyPluginTarget implements PlatformTarget {
+  @Override public long processInLongOutLong(int type, long val) throws IgniteCheckedException {
+    if (type == 1)
+        return val + 1;
+    else
+      return val - 1;
+  }
+  ...  // Other methods here.
+}
+----
+--
+
+. Implement the `PlatformPluginExtension` interface:
++
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+public class MyPluginExtension implements PlatformPluginExtension {
+  @Override public int id() {
+    return 42;  // Unique id to be used from .NET side.
+  }
+
+  @Override public PlatformTarget createTarget() {
+    return new MyPluginTarget();  // Return target from previous step.
+  }
+}
+----
+--
+
+. Implement the `PluginProvider.initExtensions` method and register the `PlatformPluginExtension` class:
++
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+@Override public void initExtensions(PluginContext ctx, ExtensionRegistry registry) {
+  registry.registerExtension(PlatformPluginExtension.class, new MyPluginExtension());
+}
+----
+--
+
+=== .NET-specific Logic
+
+Call `IPluginContext.GetExtension` with a corresponding id. This will invoke the `createTarget` call on the Java side:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+IPlatformTarget extension = pluginContext.GetExtension(42);
+
+long result = extension.InLongOutLong(1, 2);  // processInLongOutLong is called in Java
+----
+--
+
+Other `IPlatformTarget` methods provide an efficient way to exchange any kind of data between Java and .NET code.
+
+=== Callbacks from Java
+
+.NET \-> Java call mechanism is described above; you can also do Java \-> .NET calls:
+
+* Register callback handler with some ID on the .NET side via the `IPluginContext.RegisterCallback` method.
+* Call `PlatformCallbackGateway.pluginCallback` with that ID on the Java side.
+
+[NOTE]
+====
+[discrete]
+=== Complete Example
+A detailed walk-through plugin example can be found in https://ptupitsyn.github.io/Ignite-Plugin/[this blog post, window=_blank].
+====
diff --git a/docs/_docs/net-specific/net-remote-assembly-loading.adoc b/docs/_docs/net-specific/net-remote-assembly-loading.adoc
new file mode 100644
index 0000000..25639c9
--- /dev/null
+++ b/docs/_docs/net-specific/net-remote-assembly-loading.adoc
@@ -0,0 +1,154 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Remote Assembly Loading
+
+== Overview
+
+Many Ignite APIs involve remote code execution. For example, Ignite compute tasks are serialized, sent to remote nodes, and executed there.
+However, by default, the .NET assemblies (DLL files) containing those tasks must be loaded on the remote nodes in order to instantiate
+and deserialize the task instances.
+
+Before version 2.1 you had to load assemblies manually (using the `-assembly` switch of `Apache.Ignite.exe`, or in some other way).
+Starting with Ignite 2.1, you can take advantage of the remote assembly loading feature, which is enabled by setting the
+`IgniteConfiguration.PeerAssemblyLoadingMode` property to `CurrentAppDomain`. This configuration property needs to have the same value on all nodes
+in the cluster.
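+
+For example, a minimal configuration sketch:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var cfg = new IgniteConfiguration
+{
+    // Must have the same value on all nodes in the cluster.
+    PeerAssemblyLoadingMode = PeerAssemblyLoadingMode.CurrentAppDomain
+};
+
+var ignite = Ignition.Start(cfg);
+----
+--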
+
+== CurrentAppDomain Mode
+
+`PeerAssemblyLoadingMode.CurrentAppDomain` enables automatic on-demand assembly requests to other nodes in the cluster,
+loading assemblies into the https://msdn.microsoft.com/en-us/library/system.appdomain.aspx[AppDomain, window=_blank] where the Ignite node runs.
+
+Consider the following code:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+// Print Hello World on all cluster nodes.
+ignite.GetCompute().Broadcast(new HelloAction());
+
+class HelloAction : IComputeAction
+{
+  public void Invoke()
+  {
+    Console.WriteLine("Hello World!");
+  }
+}
+----
+--
+* Ignite serializes the `HelloAction` instance and broadcasts it to every node in the cluster.
+* Remote nodes attempt to deserialize the `HelloAction` instance. If there is no such class in the currently loaded or referenced assemblies,
+the nodes request an assembly with the class from the node that initiated the compute task or from other nodes (if necessary).
+* The assembly file is sent from the originating or other node as a byte array and loaded with the `Assembly.Load(byte[])` method.
+
+=== Versioning
+
+https://msdn.microsoft.com/en-us/library/system.type.assemblyqualifiedname.aspx[Assembly-qualified type name, window=_blank]
+includes the assembly version and is used to resolve types.
+
+Keep the cluster running, make the following change to the logic, and watch the assembly get reloaded automatically:
+
+* Modify `HelloAction` to print something else
+* Change https://msdn.microsoft.com/en-us/library/system.reflection.assemblyversionattribute.aspx[AssemblyVersion, window=_blank]
+* Recompile and run the application code
+* The new version of the assembly will be deployed and executed on other nodes.
+
+Note that if you keep the `AssemblyVersion` unchanged, Ignite will use the existing assembly that was previously loaded, since
+there are no changes in the assembly-qualified type name.
+
+Assemblies with different versions can co-exist and be used side by side. Some nodes can continue running old code, while
+other nodes can execute computations with a newer version of the same class.
+
+The `AssemblyVersion` attribute can include an asterisk (`*`) to enable auto-increment on build: `[assembly: AssemblyVersion("1.0.*")]`.
+This way you can keep the cluster running, repeatedly modify and run computations, and new assembly versions will be deployed every time.
+
+=== Dependencies
+
+Dependent assemblies are also loaded automatically, e.g. when a compute action calls code from a different assembly.
+Keep that in mind when using heavy frameworks and libraries: a single compute call can cause many assemblies to be sent over the network.
+
+=== Unloading
+
+.NET does not allow individual assemblies to be unloaded; only an entire `AppDomain` can be unloaded together with all of its assemblies.
+The currently available `CurrentAppDomain` mode uses the existing `AppDomain`, which means all peer-deployed assemblies stay
+loaded for as long as the current `AppDomain` lives. This may cause increased memory usage.
+
+== Example
+
+https://github.com/apache/ignite/blob/56975c266e7019f307bb9da42333a6db4e47365e/modules/platforms/dotnet/examples/Apache.Ignite.Examples/Compute/PeerAssemblyLoadingExample.cs[PeerAssemblyLoadingExample, window=_blank] can be used
+to try out the remote assembly loading feature in practice:
+
+* Create a new Console Application in Visual Studio
+* Install the Ignite.NET NuGet package: `Install-Package Apache.Ignite`
+* Open the `packages\Apache.Ignite.2.1\lib\net40` folder
+* Add the `peerAssemblyLoadingMode='CurrentAppDomain'` attribute to the `<igniteConfiguration>` element
+* Run `Apache.Ignite.exe` (one or more times) and leave the processes running
+* Change `[AssemblyVersion]` in `AssemblyInfo.cs` to `1.0.*`
+* Modify `Program.cs` in Visual Studio as shown below
++
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+using System;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Compute;
+using Apache.Ignite.Core.Deployment;
+
+namespace ConsoleApp
+{
+    class Program
+    {
+        static void Main(string[] args)
+        {
+            var cfg = new IgniteConfiguration
+            {
+                PeerAssemblyLoadingMode = PeerAssemblyLoadingMode.CurrentAppDomain
+            };
+
+            using (var ignite = Ignition.Start(cfg))
+            {
+                ignite.GetCompute().Broadcast(new HelloAction());
+            }
+        }
+
+        class HelloAction : IComputeAction
+        {
+            public void Invoke()
+            {
+                Console.WriteLine("Hello, World!");
+            }
+        }
+    }
+}
+----
+tab:Apache.Ignite.exe.config[]
+[source,xml]
+----
+<igniteConfiguration peerAssemblyLoadingMode='CurrentAppDomain' />
+----
+tab:AssemblyInfo.cs[]
+[source,csharp]
+----
+...
+[assembly: AssemblyVersion("1.0.*")]
+...
+----
+--
+* Run the project and observe the `"Hello, World!"` output in the console of all `Apache.Ignite.exe` windows.
+* Change the `"Hello, World!"` text to something else and run the program again.
+* Observe the different output on the nodes started with `Apache.Ignite.exe` earlier.
diff --git a/docs/_docs/net-specific/net-serialization.adoc b/docs/_docs/net-specific/net-serialization.adoc
new file mode 100644
index 0000000..eeb48a9
--- /dev/null
+++ b/docs/_docs/net-specific/net-serialization.adoc
@@ -0,0 +1,314 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Serialization in Ignite.NET
+
+Most of the user-defined classes that go through the Ignite .NET API will be transferred over the network to other cluster nodes. These classes include:
+
+* Cache keys and values
+* Cache processors and filters (`ICacheEntryProcessor`, `ICacheEntryFilter`, `ICacheEntryEventFilter`, `ICacheEntryEventListener`)
+* Compute functions (`IComputeFunc`), actions (`IComputeAction`) and jobs (`IComputeJob`)
+* Services (`IService`)
+* Event and Message handlers (`IEventListener`, `IEventFilter`, `IMessageListener`)
+
+Passing objects of these classes over the network requires serialization. Ignite .NET supports the following ways of serializing user data:
+
+* `Apache.Ignite.Core.Binary.IBinarizable` interface
+* `Apache.Ignite.Core.Binary.IBinarySerializer` interface
+* `System.Runtime.Serialization.ISerializable` interface
+* Ignite reflective serialization (when none of the above applies)
+
+== IBinarizable
+
+The `IBinarizable` approach provides fine-grained control over serialization. This is the preferred way for high-performance production code.
+
+First, implement the `IBinarizable` interface in your class:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+public class Address : IBinarizable
+{
+    public string Street { get; set; }
+
+    public int Zip { get; set; }
+
+    public void WriteBinary(IBinaryWriter writer)
+    {
+        // Alphabetic field order is required for SQL DML to work.
+        // Even if DML is not used, alphabetic order is recommended.
+        writer.WriteString("street", Street);
+        writer.WriteInt("zip", Zip);
+    }
+
+    public void ReadBinary(IBinaryReader reader)
+    {
+        // Read order does not matter, however, reading in the same order
+        // as writing improves performance.
+        Street = reader.ReadString("street");
+        Zip = reader.ReadInt("zip");
+    }
+}
+----
+--
+
+`IBinarizable` can also be implemented in raw mode, without field names. This provides the fastest and the most compact
+serialization, but disables SQL queries:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+public class Address : IBinarizable
+{
+    public string Street { get; set; }
+
+    public int Zip { get; set; }
+
+    public void WriteBinary(IBinaryWriter writer)
+    {
+        var rawWriter = writer.GetRawWriter();
+
+        rawWriter.WriteString(Street);
+        rawWriter.WriteInt(Zip);
+    }
+
+    public void ReadBinary(IBinaryReader reader)
+    {
+        // Read order must be the same as write order
+        var rawReader = reader.GetRawReader();
+
+        Street = rawReader.ReadString();
+        Zip = rawReader.ReadInt();
+    }
+}
+----
+--
+
+[NOTE]
+====
+[discrete]
+=== Automatic GetHashCode and Equals Implementation
+If an object can be serialized into a binary form, then Ignite will calculate its hash code during serialization and
+write it to the resulting binary array. Ignite also provides a custom implementation of the equals method for
+binary object comparison. This means that you do not need to override the `GetHashCode` and `Equals` methods of
+your custom keys and values in order for them to be used in Ignite.
+====
+
+== IBinarySerializer
+
+`IBinarySerializer` is similar to `IBinarizable`, but separates the serialization logic from the class implementation.
+This may be useful when the class code cannot be modified, when serialization logic is shared between multiple classes,
+and so on. The following code has exactly the same serialization as the `Address` example above:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+public class Address
+{
+    public string Street { get; set; }
+
+    public int Zip { get; set; }
+}
+
+public class AddressSerializer : IBinarySerializer
+{
+    public void WriteBinary(object obj, IBinaryWriter writer)
+    {
+        var addr = (Address) obj;
+
+        writer.WriteString("street", addr.Street);
+        writer.WriteInt("zip", addr.Zip);
+    }
+
+    public void ReadBinary(object obj, IBinaryReader reader)
+    {
+        var addr = (Address) obj;
+
+        addr.Street = reader.ReadString("street");
+        addr.Zip = reader.ReadInt("zip");
+    }
+}
+----
+--
+
+The `Serializer` should be specified in the configuration like this:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var cfg = new IgniteConfiguration
+{
+    BinaryConfiguration = new BinaryConfiguration
+    {
+        TypeConfigurations = new[]
+        {
+            new BinaryTypeConfiguration(typeof (Address))
+            {
+                Serializer = new AddressSerializer()
+            }
+        }
+    }
+};
+
+using (var ignite = Ignition.Start(cfg))
+{
+  ...
+}
+----
+--
+
+== ISerializable
+
+Types that implement the `System.Runtime.Serialization.ISerializable` interface will be serialized accordingly
+(by calling `GetObjectData` and the serialization constructor). All system features are supported: `IObjectReference`,
+`IDeserializationCallback`, `OnSerializingAttribute`, `OnSerializedAttribute`, `OnDeserializingAttribute`, `OnDeserializedAttribute`.
+
+The `GetObjectData` result is written into the Ignite binary format. The following three classes provide identical serialized representation:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+class Reflective
+{
+    public int Id { get; set; }
+    public string Name { get; set; }
+}
+
+class Binarizable : IBinarizable
+{
+    public int Id { get; set; }
+    public string Name { get; set; }
+
+    public void WriteBinary(IBinaryWriter writer)
+    {
+        writer.WriteInt("Id", Id);
+        writer.WriteString("Name", Name);
+    }
+
+    public void ReadBinary(IBinaryReader reader)
+    {
+        Id = reader.ReadInt("Id");
+        Name = reader.ReadString("Name");
+    }
+}
+
+class Serializable : ISerializable
+{
+    public int Id { get; set; }
+    public string Name { get; set; }
+
+    public Serializable() {}
+
+    protected Serializable(SerializationInfo info, StreamingContext context)
+    {
+        Id = info.GetInt32("Id");
+        Name = info.GetString("Name");
+    }
+
+    public void GetObjectData(SerializationInfo info, StreamingContext context)
+    {
+        info.AddValue("Id", Id);
+        info.AddValue("Name", Name);
+    }
+}
+----
+--
+
+== Ignite Reflective Serialization
+
+Ignite reflective serialization is essentially the `IBinarizable` approach where the interface is implemented automatically
+by reflecting over all fields and emitting write/read calls.
+
+There are no requirements for this mechanism: any class or struct can be serialized, including system types, delegates,
+expression trees, and anonymous types.
+
+Use the `[NonSerialized]` attribute to filter out specific fields during serialization.
+
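+A short sketch (the class and field below are hypothetical):
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+public class Session
+{
+    public string UserName;
+
+    // Skipped during serialization.
+    [NonSerialized]
+    private object _localState;
+}
+----
+--
+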
+The raw mode can be enabled by specifying `BinaryReflectiveSerializer` explicitly:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var binaryConfiguration = new BinaryConfiguration
+{
+    TypeConfigurations = new[]
+    {
+        new BinaryTypeConfiguration(typeof(MyClass))
+        {
+            Serializer = new BinaryReflectiveSerializer {RawMode = true}
+        }
+    }
+};
+----
+tab:app.config[]
+[source,xml]
+----
+<igniteConfiguration>
+    <binaryConfiguration>
+        <typeConfigurations>
+            <binaryTypeConfiguration typeName='Apache.Ignite.ExamplesDll.Binary.Address'>
+                <serializer type='Apache.Ignite.Core.Binary.BinaryReflectiveSerializer, Apache.Ignite.Core' rawMode='true' />
+            </binaryTypeConfiguration>
+        </typeConfigurations>
+    </binaryConfiguration>
+</igniteConfiguration>
+----
+--
+
+Otherwise, `BinaryConfiguration` is not required.
+
+Performance is identical to the manual `IBinarizable` approach. Reflection is only used on startup to iterate over the
+fields and emit efficient IL code.
+
+Types marked with the `[Serializable]` attribute that do not implement the `ISerializable` interface are written with the Ignite reflective serializer.
+
+== Using Entity Framework POCOs
+
+Entity Framework POCOs can be used directly with Ignite.
+
+However, https://msdn.microsoft.com/en-us/data/jj592886.aspx[POCO proxies, window=_blank] cannot be directly serialized
+or deserialized by Ignite, because the proxy type is a dynamic type.
+
+Make sure to disable proxy creation when using EF objects with Ignite:
+
+[tabs]
+--
+tab:Entity Framework 6[]
+[source,csharp]
+----
+ctx.Configuration.ProxyCreationEnabled = false;
+----
+tab:Entity Framework 5[]
+[source,csharp]
+----
+ctx.ContextOptions.ProxyCreationEnabled = false;
+----
+--
+
+== More Info
+
+See https://ptupitsyn.github.io/Ignite-Serialization-Performance/[Ignite Serialization Performance, window=_blank] blog
+post for more details on serialization performance of various modes introduced on this page.
diff --git a/docs/_docs/net-specific/net-standalone-nodes.adoc b/docs/_docs/net-specific/net-standalone-nodes.adoc
new file mode 100644
index 0000000..823ccca
--- /dev/null
+++ b/docs/_docs/net-specific/net-standalone-nodes.adoc
@@ -0,0 +1,130 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite.NET Standalone Nodes
+
+== Overview
+
+An Ignite.NET node can be started within the code of a .NET application by using `Ignition.Start()`, or as a separate
+process with the `Apache.Ignite.exe` executable located in the `{apache_ignite_release}\platforms\dotnet\bin` folder.
+Internally, `Apache.Ignite.exe` references `Apache.Ignite.Core.dll` and calls `Ignition.Start()` just as your own code would.
+It can be configured with the arguments listed below, passed either on the command line or set directly
+in the `Apache.Ignite.exe.config` file.
+
+Usually, you start server nodes in standalone mode. An Ignite cluster is a group of server nodes interconnected
+in order to provide shared resources like RAM and CPU to your applications.
+
+== Configure Standalone Node via Command Line
+
+Below are the basic Ignite parameters that can be passed as command line arguments when a node is started with the
+`Apache.Ignite.exe` executable:
+
+[width="100%",cols="1,3",opts="header"]
+|===
+|Command Line Argument |Description
+|`-IgniteHome`| A path to Ignite installation directory (if not provided, the `IGNITE_HOME` environment variable is used)
+|`-ConfigFileName`| A path to the app.config file (if not provided, `Apache.Ignite.exe.config` is used).
+|`-ConfigSectionName`| The name of the `IgniteConfigurationSection` from a configuration file.
+|`-SpringConfigUrl`| A path to a Spring configuration file.
+|`-JvmDllPath`| A path to JVM library `jvm.dll` (if not provided, `JAVA_HOME` environment variable is used).
+|`-JvmClasspath`| The classpath to pass to JVM started by Ignite.NET internally (use to enlist additional JAR files).
+|`-SuppressWarnings`| Whether or not to print warnings.
+|`-J<javaOption>`| Additional JVM options to be used during the initialization of the JVM.
+|`-Assembly`| Additional .NET assemblies to be loaded.
+|`-JvmInitialMemoryMB`| Initial Java heap size, in megabytes. Maps to the `-Xms` Java parameter.
+|`-JvmMaxMemoryMB`| Maximum Java heap size, in megabytes. Maps to the `-Xmx` Java parameter.
+|`/install`| Installs Ignite Windows service with provided options.
+|`/uninstall`| Uninstalls Ignite Windows service.
+|===
+
+
+[tabs]
+--
+tab:Example[]
+[source,shell]
+----
+Apache.Ignite.exe -ConfigFileName=c:\ignite\my-config.xml -ConfigSectionName=igniteConfiguration -Assembly=c:\ignite\my-code.dll -J-Xms1024m -J-Xmx2048m
+----
+--
+
+== Configure Standalone Node via XML Files
+
+A standalone node can be configured with app.config XML or Spring XML (or both). Every command line argument listed above
+can also be used in `Apache.Ignite.exe.config` under the `appSettings` section:
+
+[tabs]
+--
+tab:Apache.Ignite.exe.config[]
+[source,xml]
+----
+<configuration>
+  <configSections>
+    <section name="igniteConfiguration" type="Apache.Ignite.Core.IgniteConfigurationSection, Apache.Ignite.Core" />
+  </configSections>
+
+  <igniteConfiguration springConfigUrl="c:\ignite\spring.xml">
+    <cacheConfiguration name="myCache" cacheMode="Replicated" />
+  </igniteConfiguration>
+
+  <appSettings>
+    <add key="Ignite.Assembly.1" value="my-assembly.dll"/>
+    <add key="Ignite.Assembly.2" value="my-assembly2.dll"/>
+    <add key="Ignite.ConfigSectionName" value="igniteConfiguration" />
+  </appSettings>
+</configuration>
+----
+--
+
+This example defines the `igniteConfiguration` section and uses it to start Ignite via the `Ignite.ConfigSectionName` setting.
+It also references the Spring XML configuration file, whose settings will be added to the specified configuration.
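+
+When embedding a node instead of running `Apache.Ignite.exe`, the same section can be loaded programmatically. A rough
+sketch, assuming the `Ignition.StartFromApplicationConfiguration` overload that takes a section name:
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+// Reads the IgniteConfigurationSection named "igniteConfiguration" from app.config.
+using (var ignite = Ignition.StartFromApplicationConfiguration("igniteConfiguration"))
+{
+    // The node runs with the XML-defined configuration.
+}
+----
+--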
+
+== Load User Assemblies
+
+Some Ignite APIs involve remote code execution and require you to load the assemblies with your code into `Apache.Ignite.exe`
+via the `-Assembly` command line argument or the `Ignite.Assembly` app setting.
+
+The following functionality requires a corresponding assembly to be loaded on all nodes:
+
+* ICompute (supports automatic loading, see link:net-specific/remote-assembly-loading[Remote Assembly Loading])
+* Scan Queries with filter
+* Continuous Queries with filter
+* ICache.Invoke methods
+* ICache.LoadCache with filter
+* IServices
+* IMessaging.RemoteListen
+* IEvents.RemoteQuery
+
+[NOTE]
+====
+[discrete]
+=== Missing User Assemblies
+If a user assembly cannot be located, a `Could not load file or assembly 'MyAssembly' or one of its dependencies`
+exception will be thrown.
+
+Note that it is also necessary to add any dependencies of the user assembly to the list.
+====
+
+== Ignite.NET as Windows Service
+
+`Apache.Ignite.exe` can be installed as a Windows service via the `/install` command line argument, so that the node starts automatically.
+All other command line arguments will be preserved and used each time the service starts. Use `/uninstall` to uninstall the service.
+
+[tabs]
+--
+tab:Example[]
+[source,shell]
+----
+Apache.Ignite.exe /install -J-Xms513m -J-Xmx555m -ConfigSectionName=igniteConfiguration
+----
+--
diff --git a/docs/_docs/perf-and-troubleshooting/general-perf-tips.adoc b/docs/_docs/perf-and-troubleshooting/general-perf-tips.adoc
new file mode 100644
index 0000000..99ec7de
--- /dev/null
+++ b/docs/_docs/perf-and-troubleshooting/general-perf-tips.adoc
@@ -0,0 +1,49 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Generic Performance Tips
+
+Ignite, as a distributed storage and computing platform, requires certain optimization techniques. Before you dive
+into the more advanced techniques described in this and other articles, consider the following basic checklist:
+
+* Ignite is designed and optimized for distributed computing scenarios. Deploy and benchmark a multi-node cluster
+rather than a single-node one.
+
+* Ignite can scale horizontally and vertically equally well.
+Thus, consider allocating all the CPU and RAM resources available on a local machine to an Ignite node.
+A single node per physical machine is a recommended configuration.
+
+* In cases when Ignite is deployed in a virtual or cloud environment, it's ideal (but not strictly required) to
+pin an Ignite node to a single host. This provides two benefits:
+
+** It avoids the "noisy neighbor" problem, where the Ignite VM competes for host resources with other applications,
+which might cause performance spikes on your Ignite cluster.
+** It ensures high availability: if a host goes down while two or more Ignite server node VMs are pinned to it, data loss can occur.
+
+* If resources allow, store the entire data set in RAM. Even though Ignite can keep and work with on-disk data,
+its architecture is memory-first. In other words, _the more data you cache in RAM, the faster the performance_.
+link:perf-and-troubleshooting/memory-tuning[Configure and tune] memory appropriately.
+
+* It might seem counter to the bullet point above, but it's not enough to just put data in RAM and expect an
+order-of-magnitude performance improvement. Be ready to adjust your data model and existing applications, if any.
+Use the link:data-modeling/affinity-collocation[affinity colocation] concept during the data
+modelling phase for proper data distribution. For instance, if your data is properly colocated, you can run SQL
+queries with JOINs at massive scale and expect significant performance benefits.
+
+* If native persistence is used, then follow these link:perf-and-troubleshooting/persistence-tuning[persistence optimization techniques].
+
+* If you are going to run SQL with Ignite, then get to know link:perf-and-troubleshooting/sql-tuning[SQL-related optimizations].
+
+* Adjust link:data-rebalancing[data rebalancing settings] to ensure that rebalancing completes faster when your cluster topology changes.
+
diff --git a/docs/_docs/perf-and-troubleshooting/handling-exceptions.adoc b/docs/_docs/perf-and-troubleshooting/handling-exceptions.adoc
new file mode 100644
index 0000000..0510183
--- /dev/null
+++ b/docs/_docs/perf-and-troubleshooting/handling-exceptions.adoc
@@ -0,0 +1,248 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Handling Exceptions
+
+This section outlines basic exceptions that can be generated by Ignite, and explains how to set
+up and use the critical failures handler.
+
+== Handling Ignite Exceptions
+
+The exceptions that the Ignite API can generate, and the actions you can take for each of them, are described below.
+See the Javadoc _throws_ clause for checked exceptions.
+
+[cols="25%,35%,30%,10%", width="100%"]
+|=======================================================================
+|Exception	|Description	|Action	|Runtime exception
+
+| `CacheInvalidStateException`
+| Thrown when you try to perform an operation on a cache in which some partitions have been lost. Depending on the partition
+loss policy configured for the cache, this exception is thrown on read and/or write operations.
+See link:configuring-caches/partition-loss-policy[Partition Loss Policy] for details.
+| Reset lost partitions. You may want to restore the data by returning the nodes that caused the partition loss to the cluster.
+| Yes
+
+|`IgniteException`
+|Indicates an error condition in the cluster.
+|Operation failed. Exit from the method.
+|Yes
+
+|`IgniteClientDisconnectedException`
+|Thrown by the Ignite API when a client node gets disconnected from the cluster. Thrown from cache operations, the compute API, and data structures.
+|Wait and use retry logic.
+|Yes
+|`IgniteAuthenticationException`
+|Thrown when there is either a node authentication failure or security authentication failure.
+|Operation failed. Exit from the method.
+|No
+|`IgniteClientException`
+|Can be thrown from Cache operations.
+|Check exception message for the action to be taken.
+|Yes
+|`IgniteDeploymentException`
+|Thrown when the Ignite API fails to deploy a job or task on a node. Thrown from the Compute API.
+|Operation failed. Exit from the method.
+|Yes
+|`IgniteInterruptedException`
+|Used to wrap the standard `InterruptedException` into `IgniteException`.
+|Retry after clearing the interrupted flag.
+|Yes
+|`IgniteSpiException`
+|Thrown by various SPI (`CollisionSpi`, `LoadBalancingSpi`, `TcpDiscoveryIpFinder`, `FailoverSpi`, `UriDeploymentSpi`, etc.)
+|Operation failed. Exit from the method.
+|Yes
+|`IgniteSQLException`
+|Thrown when there is a SQL query processing error. This exception also provides query specific error codes.
+|Operation failed. Exit from the method.
+|Yes
+|`IgniteAccessControlException`
+|Thrown when there is an authentication / authorization failure.
+|Operation failed. Exit from the method.
+|No
+|`IgniteCacheRestartingException`
+|Thrown from Ignite cache API if a cache is restarting.
+|Wait and use retry logic.
+|Yes
+|`IgniteFutureTimeoutException`
+|Thrown when a future computation is timed out.
+|Either increase timeout limit or exit from the method.
+|Yes
+|`IgniteFutureCancelledException`
+|Thrown when a future computation cannot be retrieved because it was cancelled.
+|Use retry logic.
+|Yes
+|`IgniteIllegalStateException`
+|Indicates that the Ignite instance is in an invalid state for the requested operation.
+|Operation failed. Exit from the method.
+|Yes
+|`IgniteNeedReconnectException`
+|Indicates that a node should try to reconnect to the cluster.
+|Use retry logic.
+|No
+|`IgniteDataIntegrityViolationException`
+|Thrown if a data integrity violation is found.
+|Operation failed. Exit from the method.
+|Yes
+|`IgniteOutOfMemoryException`
+|Thrown when the system does not have enough memory to process Ignite operations. Thrown from Cache operations.
+|Operation failed. Exit from the method.
+|Yes
+|`IgniteTxOptimisticCheckedException`
+|Thrown when a transaction fails optimistically.
+|Use retry logic.
+|No
+|`IgniteTxRollbackCheckedException`
+|Thrown when a transaction has been automatically rolled back.
+|Use retry logic.
+|No
+|`IgniteTxTimeoutCheckedException`
+|Thrown when a transaction times out.
+|Use retry logic.
+|No
+|`ClusterTopologyException`
+|Indicates an error with the cluster topology (e.g., a crashed node). Thrown from the Compute and Events APIs.
+|Wait on future and use retry logic.
+|Yes
+|=======================================================================
+
+== Critical Failures Handling
+
+Ignite is a robust and fault-tolerant system. But in the real world, unpredictable issues and problems arise
+that can affect the state of an individual node as well as the whole cluster. Such issues can be detected at
+runtime and handled accordingly using a preconfigured critical failure handler.
+
+=== Critical Failures
+
+The following failures are treated as critical:
+
+* System critical errors (e.g. `OutOfMemoryError`).
+
+* Unintentional system worker termination (e.g. due to an unhandled exception).
+
+* System workers hanging.
+
+* Cluster nodes segmentation.
+
+A system critical error is an error that leads to the system's inoperability. For example:
+
+* File I/O errors - usually an `IOException` thrown by file read/write operations. These can occur when Ignite
+native persistence is enabled (e.g., when no space is left on the device or on a device error), and also in in-memory
+mode, because Ignite uses disk storage for keeping some metadata (e.g., when the file descriptor limit is
+exceeded or file access is prohibited).
+
+* Out of memory error - when Ignite memory management system fails to allocate more space
+(`IgniteOutOfMemoryException`).
+
+* Out of memory error - when a cluster node runs out of Java heap (`OutOfMemoryError`).
+
+=== Failures Handling
+
+When Ignite detects a critical failure, it handles the failure according to a preconfigured failure handler.
+The failure handler can be configured as follows:
+
+:javaFile: code-snippets/java/src/main/java/org/apache/ignite/snippets/FailureHandler.java
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+    <property name="failureHandler">
+        <bean class="org.apache.ignite.failure.StopNodeFailureHandler"/>
+    </property>
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=configure-handler,indent=0]
+----
+--
+
+Ignite supports the following failure handlers:
+
+[width=100%,cols="30%,70%"]
+|=======================================================================
+|Class |Description
+
+|`NoOpFailureHandler`
+|Ignores any failures. Useful for testing and debugging.
+|`RestartProcessFailureHandler`
+|A specific implementation that can be used only with `ignite.sh\|bat`. The process is terminated using the `Ignition.restart(true)` call.
+|`StopNodeFailureHandler`
+|Stops the node in case of critical errors by calling the `Ignition.stop(true)` or `Ignition.stop(nodeName, true)` methods.
+|`StopNodeOrHaltFailureHandler`
+|This is the default handler, which tries to stop the node. If the node can't be stopped, the handler terminates the JVM process.
+
+|=======================================================================
+
+=== Critical Workers Health Check
+
+Ignite has a number of internal workers that are essential for the cluster to function correctly. If one of them is
+terminated, the node can become inoperative.
+
+The following system workers are considered mission critical:
+
+* Discovery worker - discovery events handling.
+* TCP communication worker - peer-to-peer communication between nodes.
+* Exchange worker - partition map exchange.
+* Workers of the system's striped pool.
+* Data Streamer striped pool workers.
+* Timeout worker - timeouts handling.
+* Checkpoint thread - check-pointing in Ignite persistence.
+* WAL workers - write-ahead logging, segments archiving, and compression.
+* Expiration worker - TTL based expiration.
+* NIO workers - base networking.
+
+Ignite has an internal mechanism for verifying that critical workers are operational.
+Each worker is regularly checked to confirm that it is alive and updating its heartbeat timestamp.
+If a worker is not alive and updating, the worker is regarded as blocked and Ignite will print a message to the log file.
+You can set the period of inactivity via the `IgniteConfiguration.systemWorkerBlockedTimeout` property.
+
+Even though Ignite considers an unresponsive system worker to be a critical error, it doesn't handle this situation automatically,
+other than printing out a message to the log file.
+If you want to enable a particular failure handler for all types of unresponsive system workers, clear the
+`ignoredFailureTypes` property of the handler as shown below:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+    <property name="systemWorkerBlockedTimeout" value="#{60 * 60 * 1000}"/>
+
+    <property name="failureHandler">
+        <bean class="org.apache.ignite.failure.StopNodeFailureHandler">
+
+          <!-- Enable this handler to react to unresponsive critical workers. -->
+          <property name="ignoredFailureTypes">
+            <list>
+            </list>
+          </property>
+
+      </bean>
+
+    </property>
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=failure-types,indent=0]
+----
+--
+
diff --git a/docs/_docs/perf-and-troubleshooting/index.adoc b/docs/_docs/perf-and-troubleshooting/index.adoc
new file mode 100644
index 0000000..6b642c4
--- /dev/null
+++ b/docs/_docs/perf-and-troubleshooting/index.adoc
@@ -0,0 +1,18 @@
+---
+layout: toc
+---
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Performance and Troubleshooting Guide
diff --git a/docs/_docs/perf-and-troubleshooting/memory-tuning.adoc b/docs/_docs/perf-and-troubleshooting/memory-tuning.adoc
new file mode 100644
index 0000000..dfaedff
--- /dev/null
+++ b/docs/_docs/perf-and-troubleshooting/memory-tuning.adoc
@@ -0,0 +1,185 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Memory and JVM Tuning
+
+This article provides best practices for memory tuning that are relevant for deployments with and without native persistence or an external storage.
+Even though Ignite stores data and indexes off the Java heap, Java heap is still used to store objects generated by
+queries and operations executed by your applications.
+Thus, certain recommendations should be considered for JVM and garbage collection (GC) related optimizations.
+
+[NOTE]
+====
+[discrete]
+Refer to link:perf-and-troubleshooting/persistence-tuning[persistence] tuning article for disk-related
+optimization practices.
+====
+
+== Tune Swappiness Setting
+
+An operating system starts swapping pages from RAM to disk when overall RAM usage hits a certain threshold.
+Swapping can impact Ignite cluster performance.
+You can adjust the operating system's setting to prevent this from happening.
+For Unix, the best option is to either decrease the `vm.swappiness` parameter to `10`, or set it to `0` if native persistence is enabled:
+
+[source,shell]
+----
+sysctl -w vm.swappiness=0
+----
+
+A high value of this setting can prolong GC pauses as well. For instance, if your GC logs show `low user time, high
+system time, long GC pause` records, it might be caused by Java heap pages being swapped in and out. To
+address this, use the `swappiness` settings above.
+
+== Share RAM with OS and Apps
+
+An individual machine's RAM is shared among the operating system, Ignite, and other applications.
+As a general recommendation, if an Ignite cluster is deployed in pure in-memory mode (native
+persistence is disabled), then you should not allocate more than 90% of RAM capacity to Ignite nodes.
+
+On the other hand, if native persistence is used, then the OS requires extra RAM for its page cache in order to optimally sync up data to disk.
+If the page cache is not disabled, then you should not give more than 70% of the server's RAM to Ignite.
+
+Refer to link:memory-configuration/data-regions[memory configuration] for configuration examples.
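+
+As an illustration only (the sizes are hypothetical and assume a 32 GB server in pure in-memory mode), the off-heap
+cap is set via the data region configuration:
+
+[source,csharp]
+----
+var cfg = new IgniteConfiguration
+{
+    DataStorageConfiguration = new DataStorageConfiguration
+    {
+        DefaultDataRegionConfiguration = new DataRegionConfiguration
+        {
+            Name = "Default_Region",
+            // ~22 GB of 32 GB total: leaves room for the Java heap, the OS, and other apps.
+            MaxSize = 22L * 1024 * 1024 * 1024
+        }
+    }
+};
+----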
+
+In addition, because using native persistence might cause high page cache utilization, the `kswapd` daemon, which the page
+cache relies on for background page reclamation, might not keep up. As a result, this can cause high latencies due to direct page reclamation and lead to long GC pauses.
+
+To work around the effects caused by page memory reclamation on Linux, add extra bytes between `wmark_min` and `wmark_low` with `/proc/sys/vm/extra_free_kbytes`:
+
+[source,shell]
+----
+sysctl -w vm.extra_free_kbytes=1240000
+----
+
+Refer to link:https://events.static.linuxfound.org/sites/events/files/lcjp13_moriya.pdf[this resource, window=_blank]
+for more insight into the relationship between page cache settings, high latencies, and long GC pauses.
+
+== Java Heap and GC Tuning
+
+Even though Ignite keeps data in its own off-heap memory regions, invisible to Java garbage collectors, the Java
+heap is still used for objects generated by your applications' workloads.
+For instance, whenever you run SQL queries against an Ignite cluster, the queries will access data and indexes stored in
+the off-heap memory while the result sets of such queries will be kept in Java Heap until your application reads the result sets.
+Thus, depending on the throughput and type of operations, Java Heap can still be utilized heavily and this might require
+JVM and GC related tuning for your workloads.
+
+We've included some common recommendations and best practices below.
+Feel free to start with them and make further adjustments as necessary, depending on the specifics of your applications.
+
+[NOTE]
+====
+[discrete]
+Refer to link:perf-and-troubleshooting/troubleshooting#debugging-gc-issues[GC debugging techniques] sections for best
+practices on GC logs and heap dumps collection.
+====
+
+=== Generic GC Settings
+
+Below are sets of example JVM configurations for applications that can heavily utilize the Java heap on server nodes,
+thus triggering long (or frequent short) stop-the-world GC pauses.
+
+For JDK 1.8+ deployments, you should use the G1 garbage collector.
+The settings below are a good starting point if a 10 GB heap is more than enough for your server nodes:
+
+[source,shell]
+----
+-server
+-Xms10g
+-Xmx10g
+-XX:+AlwaysPreTouch
+-XX:+UseG1GC
+-XX:+ScavengeBeforeFullGC
+-XX:+DisableExplicitGC
+----
+
+If G1 does not work for you, consider using the CMS collector and starting with the following settings.
+Note that a 10 GB heap is used as an example; a smaller heap can be enough for your use case:
+
+[source,shell]
+----
+-server
+-Xms10g
+-Xmx10g
+-XX:+AlwaysPreTouch
+-XX:+UseParNewGC
+-XX:+UseConcMarkSweepGC
+-XX:+CMSClassUnloadingEnabled
+-XX:+CMSPermGenSweepingEnabled
+-XX:+ScavengeBeforeFullGC
+-XX:+CMSScavengeBeforeRemark
+-XX:+DisableExplicitGC
+----
+
+[NOTE]
+====
+//TODO: Is this still valid? What does it do?
+If you use link:persistence/native-persistence[Ignite native persistence], we recommend that you set the
+`MaxDirectMemorySize` JVM parameter to `walSegmentSize * 4`.
+With the default WAL settings, this value is equal to 256MB.
+====
+
+=== Advanced Memory Tuning
+
+In Linux and Unix environments, an application can face long GC pauses or lower performance because of
+I/O or memory starvation caused by kernel-specific settings.
+This section provides some guidelines on how to modify kernel settings in order to overcome long GC pauses.
+
+[WARNING]
+====
+[discrete]
+All the shell commands given below were tested on RedHat 7.
+They may differ for your Linux distribution.
+Before changing the kernel settings, make sure to check the system statistics/logs to confirm that you really have a problem.
+Consult your IT department before making changes at the Linux kernel level in production.
+====
+
+If GC logs show `low user time, high system time, long GC pause`, then most likely memory constraints are triggering swapping or scanning for free memory space.
+
+* Check and adjust the link:perf-and-troubleshooting/memory-tuning#tune-swappiness-setting[swappiness settings].
+* Add `-XX:+AlwaysPreTouch` to JVM settings on startup.
+* Disable NUMA zone-reclaim optimization.
++
+[source,shell]
+----
+sysctl -w vm.zone_reclaim_mode=0
+----
+
+* Turn off Transparent Huge Pages if a Red Hat distribution is used.
++
+[source,shell]
+----
+echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
+echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
+----
+
+=== Advanced I/O Tuning
+
+If GC logs show `low user time, low system time, long GC pause`, then GC threads might be spending too much time in the kernel space, blocked by various I/O activities.
+For instance, this can be caused by journal commits, gzip, or log roll over procedures.
+
+As a solution, you can try changing the page flushing interval from the default 30 seconds to 5 seconds:
+
+[source,shell]
+----
+sysctl -w vm.dirty_writeback_centisecs=500
+sysctl -w vm.dirty_expire_centisecs=500
+----
+
+[NOTE]
+====
+[discrete]
+Refer to the link:perf-and-troubleshooting/persistence-tuning[persistence tuning] section for the optimizations related to disk.
+Those optimizations can have a positive impact on GC.
+====
diff --git a/docs/_docs/perf-and-troubleshooting/persistence-tuning.adoc b/docs/_docs/perf-and-troubleshooting/persistence-tuning.adoc
new file mode 100644
index 0000000..54cd57d
--- /dev/null
+++ b/docs/_docs/perf-and-troubleshooting/persistence-tuning.adoc
@@ -0,0 +1,269 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Persistence Tuning
+:javaFile: code-snippets/java/src/main/java/org/apache/ignite/snippets/PersistenceTuning.java
+:xmlFile: code-snippets/xml/persistence-tuning.xml
+:dotnetFile: code-snippets/dotnet/PersistenceTuning.cs
+
+This article summarizes best practices for Ignite native persistence tuning.
+If you are using an external (3rd party) storage for persistence needs, please refer to performance guides from the 3rd party vendor.
+
+== Adjusting Page Size
+
+The `DataStorageConfiguration.pageSize` parameter should be no less than the lower of: the page size of your storage media (SSD, Flash, HDD, etc.) and the cache page size of your operating system.
+The default value is 4KB.
+
+The operating system's cache page size can be easily checked using
+link:https://unix.stackexchange.com/questions/128213/how-is-page-size-determined-in-virtual-address-space[system tools and parameters, window=_blank].
+
+The page size of a storage device such as an SSD is usually noted in the device specification. If the manufacturer does
+not disclose this information, try running SSD benchmarks to figure out the number.
+Many manufacturers have to adapt their drives to 4 KB random-write workloads, because a variety of standard
+benchmarks use 4 KB by default.
+link:https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ssd-server-storage-applications-paper.pdf[This white paper,window=_blank]
+from Intel confirms that 4 KB should be enough.
+
+Once you pick the most optimal page size, apply it in your cluster configuration:
+
+////
+TODO for .NET and other languages.
+////
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;ds;page-size,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=page-size,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetFile}[tag=page-size,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Keep WALs Separately
+
+Consider using separate drives for data files and link:persistence/native-persistence#write-ahead-log[Write-Ahead-Logging (WAL)].
+Ignite actively writes to both the data and WAL files.
+
+The example below shows how to configure separate paths for the data storage, WAL, and WAL archive:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;ds;paths,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=separate-wal,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetFile}[tag=separate-wal,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Increasing WAL Segment Size
+
+The default WAL segment size (64 MB) may be inefficient in high-load scenarios, because it causes the WAL to switch between segments too frequently, and switching/rotation is a costly operation. Setting the segment size to a higher value (up to 2 GB) may help reduce the number of switching operations. However, the tradeoff is that this increases the overall volume of the write-ahead log.
+
+See link:persistence/native-persistence#changing-wal-segment-size[Changing WAL Segment Size] for details.
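+
+A rough C# sketch of raising the segment size (the 512 MB value is an arbitrary example):
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var cfg = new IgniteConfiguration
+{
+    DataStorageConfiguration = new DataStorageConfiguration
+    {
+        // 512 MB segments instead of the default 64 MB.
+        WalSegmentSize = 512 * 1024 * 1024
+    }
+};
+----
+--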
+
+== Changing WAL Mode
+
+Consider other WAL modes as alternatives to the default mode. Each mode provides different degrees of reliability in
+case of node failure and that degree is inversely proportional to speed, i.e. the more reliable the WAL mode, the
+slower it is. Therefore, if your use case does not require high reliability, you can switch to a less reliable mode.
+
+See link:persistence/native-persistence#wal-modes[WAL Modes] for more details.
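+
+For instance, a sketch of switching to a less strict mode (assuming the `WalMode.Background` enum member, which trades
+durability guarantees for speed):
+
+[tabs]
+--
+tab:C#[]
+[source,csharp]
+----
+var cfg = new IgniteConfiguration
+{
+    DataStorageConfiguration = new DataStorageConfiguration
+    {
+        // Background mode: WAL records are flushed asynchronously.
+        WalMode = WalMode.Background
+    }
+};
+----
+--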
+
+== Disabling WAL
+
+//TODO: when should this be done?
+There are situations where link:persistence/native-persistence#disabling-wal[disabling the WAL] can help improve performance.
+
+== Pages Writes Throttling
+
+Ignite periodically starts the link:persistence/native-persistence#checkpointing[checkpointing process] that syncs
+dirty pages from memory to disk. A dirty page is a page that was updated in RAM but was not written to a respective
+partition file (an update was just appended to the WAL). This process happens in the background without affecting the application's logic.
+
+However, if a dirty page, scheduled for checkpointing, is updated before being written to disk, its previous state is
+copied to a special region called a checkpointing buffer.
+If this buffer overflows, Ignite stops processing all updates until the checkpointing is over.
+As a result, write performance can drop to zero, as shown in this diagram, until the checkpointing cycle is completed:
+
+image::images/checkpointing-chainsaw.png[Checkpointing Chainsaw]
+
+The same situation occurs if the dirty pages threshold is reached again while the checkpointing is in progress.
+This will force Ignite to schedule one more checkpointing execution and to halt all the update operations until the first checkpointing cycle is over.
+
+Both situations usually arise when either a disk device is slow or the update rate is too intensive.
+To mitigate and prevent these performance drops, consider enabling the pages write throttling algorithm.
+The algorithm brings the performance of update operations down to the speed of the disk device whenever the checkpointing buffer fills up too fast or the percentage of dirty pages soars rapidly.
+
+[NOTE]
+====
+[discrete]
+=== Pages Write Throttling in a Nutshell
+
+Refer to the link:https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-PagesWriteThrottling[Ignite wiki page, window=_blank]
+maintained by Apache Ignite persistence experts to get more details about throttling and its causes.
+====
+
+The example below shows how to enable write throttling:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;ds;page-write-throttling,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=throttling,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetFile}[tag=throttling,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Adjusting Checkpointing Buffer Size
+
+The size of the checkpointing buffer, explained in the previous section, is one of the checkpointing process triggers.
+
+The default buffer size is calculated as a function of the link:memory-configuration/data-regions[data region] size:
+
+[width=100%,cols="1,2",options="header"]
+|=======================================================================
+| Data Region Size |Default Checkpointing Buffer Size
+
+|< 1 GB | MIN (256 MB, Data_Region_Size)
+
+|between 1 GB and 8 GB | Data_Region_Size / 4
+
+|> 8 GB | 2 GB
+
+|=======================================================================
+
+The default buffer size can be suboptimal for write-intensive workloads because the page write
+throttling algorithm will slow down your writes whenever the size reaches the critical mark. To keep write
+performance at the desired pace while the checkpointing is in progress, consider increasing
+`DataRegionConfiguration.checkpointPageBufferSize` and enabling write throttling to prevent performance​ drops:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;ds;page-write-throttling;data-region,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=checkpointing-buffer-size,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetFile}[tag=checkpointing-buffer-size,indent=0]
+----
+tab:C++[unsupported]
+--
+
+In the example above, the checkpointing buffer size of the default region is set to 1 GB.
+
+////
+TODO: describe when checkpointing is triggered
+[NOTE]
+====
+[discrete]
+=== When is the Checkpointing Process Triggered?
+
+Checkpointing is started if either the dirty pages count goes beyond the `totalPages * 2 / 3` value or
+`DataRegionConfiguration.checkpointPageBufferSize` is reached. However, if page write throttling is used, then
+`DataRegionConfiguration.checkpointPageBufferSize` is never encountered because it cannot be reached due to the way the algorithm works.
+====
+////
+
+== Enabling Direct I/O
+//TODO: why is this not enabled by default?
+Usually, whenever an application reads data from disk, the OS gets the data and puts it in a file buffer cache first.
+Similarly, for every write operation, the OS first writes the data in the cache and transfers it to disk later. To
+eliminate this process, you can enable Direct I/O, in which case the data is read and written directly from/to the
+disk, bypassing the file buffer cache.
+
+The Direct I/O module in Ignite is used to speed up the checkpointing process, which writes dirty pages from RAM to disk.
+Consider using the Direct I/O plugin for write-intensive workloads.
+
+[NOTE]
+====
+[discrete]
+=== Direct I/O and WALs
+
+Note that Direct I/O cannot be enabled specifically for WAL files. However, enabling the Direct I/O module provides
+a slight benefit regarding the WAL files as well: the WAL data will not be stored in the OS buffer cache for too long;
+it will be flushed (depending on the WAL mode) at the next page cache scan and removed from the page cache.
+====
+
+To enable Direct I/O, move the `{ignite_dir}/libs/optional/ignite-direct-io` folder to the upper level `libs/` folder
+of your Ignite distribution, or add the module as a Maven dependency as described link:setup#enabling-modules[here].
+
+You can use the `IGNITE_DIRECT_IO_ENABLED` system property to enable or disable the plugin at runtime.
+
+Get more details from the link:https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-DirectI/O[Ignite Direct I/O Wiki section, window=_blank].
+
+== Purchase Production-Level SSDs
+
+Note that the performance of Ignite Native Persistence may drop after several hours of intensive write load due to
+the nature of how
+link:http://codecapsule.com/2014/02/12/coding-for-ssds-part-2-architecture-of-an-ssd-and-benchmarking[SSDs are designed and operate, window=_blank].
+Consider buying fast production-level SSDs to keep the performance high or switch to non-volatile memory devices like
+Intel Optane Persistent Memory.
+
+== SSD Over-provisioning
+
+The performance of random writes on a 50% filled disk is much better than on a 90% filled disk because of SSD over-provisioning
+(see link:https://www.seagate.com/tech-insights/ssd-over-provisioning-benefits-master-ti[https://www.seagate.com/tech-insights/ssd-over-provisioning-benefits-master-ti, window=_blank]).
+
+Consider buying SSDs with higher over-provisioning rates and make sure the manufacturer provides the tools to adjust it.
+
+[NOTE]
+====
+[discrete]
+=== Intel 3D XPoint
+
+Consider using 3D XPoint drives instead of regular SSDs to avoid the bottlenecks caused by a low over-provisioning
+setting and constant garbage collection at the SSD level.
+Read more link:http://dmagda.blogspot.com/2017/10/3d-xpoint-outperforms-ssds-verified-on.html[here, window=_blank].
+====
diff --git a/docs/_docs/perf-and-troubleshooting/sql-tuning.adoc b/docs/_docs/perf-and-troubleshooting/sql-tuning.adoc
new file mode 100644
index 0000000..695526a
--- /dev/null
+++ b/docs/_docs/perf-and-troubleshooting/sql-tuning.adoc
@@ -0,0 +1,525 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= SQL Performance Tuning
+
+This article outlines basic and advanced optimization techniques for Ignite SQL queries. Some of the sections are also useful for debugging and troubleshooting.
+
+== Basic Considerations: Ignite vs RDBMS
+
+Ignite is frequently compared to relational databases for its SQL capabilities, with the expectation that existing SQL
+queries, created for an RDBMS, will work out of the box and perform faster in Ignite without any
+changes. Usually, such an assumption is based on the fact that Ignite stores and processes data in-memory.
+However, it's not enough to simply put data in RAM and expect an order of magnitude increase in performance; generally,
+extra tuning is required. Below is a standard checklist of
+best practices to consider before you benchmark Ignite against an RDBMS or do any performance testing:
+
+* Ignite is optimized for _multi-node_ deployments with RAM as the primary storage. Don't
+try to compare a single-node Ignite cluster to a relational database. Instead, deploy a multi-node Ignite cluster that holds a full copy of the data in RAM.
+
+* Be ready to adjust your data model and existing SQL queries.
+Use the link:data-modeling/affinity-collocation[affinity colocation] concept during the data
+modeling phase for proper data distribution. Remember, it's not enough to simply put data in RAM: if your data is properly colocated, you can run SQL queries with JOINs at massive scale and expect significant performance benefits.
+
+* Define secondary indexes and use other standard, and Ignite-specific, tuning techniques described below.
+
+* Keep in mind that relational databases leverage local caching techniques and, depending on the total data size, an
+RDBMS can complete _some queries_ faster than Ignite, even in a multi-node configuration.
+If your data set is around 10-100GB and the RDBMS has enough RAM to cache the data locally, then it can, for instance,
+outperform a multi-node Ignite cluster because the latter utilizes the network. Store much more data in Ignite to see the difference.
+
+
+== Using the EXPLAIN Statement
+
+Ignite supports the `EXPLAIN` statement, which can be used to read the execution plan of a query.
+Use this command to analyze your queries for possible optimizations. Note that the plan contains multiple rows: the
+last one corresponds to the reducing side (usually your application), the others to the map nodes (usually server nodes).
+Read the link:SQL/sql-introduction#distributed-queries[Distributed Queries] section to learn how queries are executed in Ignite.
+
+[source,sql]
+----
+EXPLAIN SELECT name FROM Person WHERE age = 26;
+----
+
+The execution plan is generated by H2 as described link:http://www.h2database.com/html/performance.html#explain_plan[here, window=_blank].
+
+== OR Operator and Selectivity
+
+//*TODO*: is this still valid?
+
+If a query contains an `OR` operator, then indexes may not be used as expected depending on the complexity of the query.
+For example, for the query `select name from Person where gender='M' and (age = 20 or age = 30)`, an index on the `gender`
+field will be used instead of an index on the `age` field, although the latter is a more selective index.
+As a workaround for this issue, you can rewrite the query with `UNION ALL` (notice that `UNION` without `ALL` will return
+`DISTINCT` rows, which will change the query semantics and will further penalize your query performance):
+
+[source,sql]
+----
+SELECT name FROM Person WHERE gender='M' and age = 20
+UNION ALL
+SELECT name FROM Person WHERE gender='M' and age = 30
+----
+
+== Avoid Having Too Many Columns
+
+Avoid having too many columns in the result set of a `SELECT` query. Due to limitations of the H2 query parser, queries
+with 100+ columns may perform worse than expected.
+
+== Lazy Loading
+
+By default, Ignite attempts to load the whole result set into memory and send it back to the query initiator (which is
+usually your application). This approach provides optimal performance for queries with small or medium result sets.
+However, if the result set is too big to fit in the available memory, it can lead to prolonged GC pauses and even an `OutOfMemoryError`.
+
+To minimize memory consumption, at the cost of a moderate performance hit, you can load and process the result sets
+lazily by passing the `lazy` parameter in the JDBC and ODBC connection strings, or by using a similar method available for the Java, .NET, and C++ APIs:
+
+[tabs]
+--
+
+tab:Java[]
+[source,java]
+----
+SqlFieldsQuery query = new SqlFieldsQuery("SELECT * FROM Person WHERE id > 10");
+
+// Result set will be loaded lazily.
+query.setLazy(true);
+----
+tab:JDBC[]
+[source,text]
+----
+jdbc:ignite:thin://192.168.0.15?lazy=true
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+var query = new SqlFieldsQuery("SELECT * FROM Person WHERE id > 10")
+{
+    // Result set will be loaded lazily.
+    Lazy = true
+};
+----
+tab:C++[]
+--
+
+////
+*TODO* Add tabs for ODBC and other programming languages - C# and C++
+////
+
+== Querying Colocated Data
+
+When Ignite executes a distributed query, it sends sub-queries to individual cluster nodes to fetch the data and groups
+the results on the reducer node (usually your application).
+If you know in advance that the data you are querying is link:data-modeling/affinity-collocation[colocated]
+by the `GROUP BY` condition, you can use `SqlFieldsQuery.collocated = true` to tell the SQL engine to do the grouping on the remote nodes.
+This will reduce network traffic between the nodes and query execution time.
+When this flag is set to `true`, the query is executed on individual nodes first and the results are sent to the reducer node for final calculation.
+
+Consider the following example, in which we assume that the data is colocated by `department_id` (in other words, the
+`department_id` field is configured as the affinity key).
+
+[source,sql]
+----
+SELECT SUM(salary) FROM Employee GROUP BY department_id
+----
+
+Because of the nature of the SUM operation, Ignite will sum the salaries across the elements stored on individual nodes,
+and then send these sums to the reducer node where the final result will be calculated.
+This operation is already distributed, and enabling the `collocated` flag will only slightly improve performance.
+
+Let's take a slightly different example:
+
+[source,sql]
+----
+SELECT AVG(salary) FROM Employee GROUP BY department_id
+----
+
+In this example, Ignite has to fetch all (`salary`, `department_id`) pairs to the reducer node and calculate the results there.
+However, if employees are colocated by the `department_id` field, i.e. employee data for the same department
+is stored on the same node, setting `SqlFieldsQuery.collocated = true` will reduce query execution time because Ignite
+will calculate the averages for each department on the individual nodes and send the results to the reducer node for final calculation.
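+
+For instance, a minimal Java sketch of setting the flag (assuming an SQL-enabled cache that holds the `Employee` table from the examples above):
+
+[source, java]
+----
+SqlFieldsQuery qry = new SqlFieldsQuery(
+    "SELECT AVG(salary) FROM Employee GROUP BY department_id");
+
+// The data is colocated by department_id, so the grouping
+// can be performed on the remote nodes.
+qry.setCollocated(true);
+----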
+
+
+== Enforcing Join Order
+
+When the _enforce join order_ flag is set, the query optimizer does not reorder tables in joins.
+In other words, the order in which joins are applied during query execution will be the same as specified in the query.
+Without this flag, the query optimizer can reorder joins to improve performance.
+However, sometimes it might make an incorrect decision.
+This flag helps to control and explicitly specify the order of joins instead of relying on the optimizer.
+
+Consider the following example:
+
+[source, sql]
+----
+SELECT * FROM Person p
+JOIN Company c ON p.company = c.name where p.name = 'John Doe'
+AND p.age > 20
+AND p.id > 5000
+AND p.id < 100000
+AND c.name NOT LIKE 'O%';
+----
+
+This query contains a join between two tables: `Person` and `Company`.
+To get the best performance, we should understand which join will return the smallest result set.
+The table with the smaller result set size should be given first in the join pair.
+To get the size of each result set, let's test each part.
+
+.Q1:
+[source, sql]
+----
+SELECT count(*)
+FROM Person p
+where
+p.name = 'John Doe'
+AND p.age > 20
+AND p.id > 5000
+AND p.id < 100000;
+----
+
+.Q2:
+[source, sql]
+----
+SELECT count(*)
+FROM Company c
+where
+c.name NOT LIKE 'O%';
+----
+
+After running Q1 and Q2, we can get two different outcomes:
+
+Case 1:
+[cols="1,1",opts="stretch,autowidth",stripes=none]
+|===
+|Q1 | 30000
+|Q2 |100000
+|===
+
+Q2 returns more entries than Q1.
+In this case, we don't need to modify the original query, because the smaller result set is already on the left side of the join.
+
+Case 2:
+[cols="1,1",opts="stretch,autowidth",stripes=none]
+|===
+|Q1 | 50000
+|Q2 |10000
+|===
+
+Q1 returns more entries than Q2. So we need to change the initial query as follows:
+
+[source, sql]
+----
+SELECT *
+FROM Company c
+JOIN Person p
+ON p.company = c.name
+where
+p.name = 'John Doe'
+AND p.age > 20
+AND p.id > 5000
+AND p.id < 100000
+AND c.name NOT LIKE 'O%';
+----
+
+The force join order hint can be specified as follows:
+
+* link:SQL/JDBC/jdbc-driver#parameters[JDBC driver connection parameter]
+* link:SQL/ODBC/connection-string-dsn#supported-arguments[ODBC driver connection attribute]
+* If you use link:SQL/sql-api[SqlFieldsQuery] to execute SQL queries, you can set the enforce join order
+hint by calling the `SqlFieldsQuery.setEnforceJoinOrder(true)` method, as shown in the sketch below.
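+
+A minimal Java sketch (the query text is the reordered join from the example above, with an abbreviated `WHERE` clause):
+
+[source, java]
+----
+SqlFieldsQuery qry = new SqlFieldsQuery(
+    "SELECT * FROM Company c JOIN Person p ON p.company = c.name " +
+    "WHERE p.name = 'John Doe' AND c.name NOT LIKE 'O%'");
+
+// Keep the join order exactly as written in the query.
+qry.setEnforceJoinOrder(true);
+----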
+
+
+== Increasing Index Inline Size
+
+Every entry in the index has a constant size, which is calculated during index creation. This size is called the _index inline size_.
+Ideally, this size should be enough to store the full indexed entry in serialized form.
+When values are not fully included in the index, Ignite may need to perform additional data page reads during index lookups,
+which can impair performance if persistence is enabled.
+
+//If a value type allows, Ignite includes indexed values in the index itself to optimize querying and data updates.
+
+
+Here is how values are stored in the index:
+
+// the source code block below uses css-styles from the pygments library. If you change the highlighting library, you should change the syles as well.
+[source,java,subs="quotes"]
+----
+[tok-kt]#int#
+0     1       5
+| tag | value |
+[tok-k]#Total: 5 bytes#
+
+[tok-kt]#long#
+0     1       9
+| tag | value |
+[tok-k]#Total: 9 bytes#
+
+[tok-kt]#String#
+0     1      3             N
+| tag | size | UTF-8 value |
+[tok-k]#Total: 3 + string length#
+
+[tok-kt]#POJO (BinaryObject)#
+0     1         5
+| tag | BO hash |
+[tok-k]#Total: 5#
+----
+
+For primitive data types (bool, byte, short, int, etc.), Ignite automatically calculates the index inline size so that the values are included in full.
+For example, for `int` fields, the inline size is 5 (1 byte for the tag and 4 bytes for the value itself). For `long` fields, the inline size is 9 (1 byte for the tag + 8 bytes for the value).
+
+For binary objects, the index includes the hash of each object, which is enough to avoid collisions. The inline size is 5.
+
+For variable-length data, indexes include only the first several bytes of the value.
+//As you can see, indexes on `Strings` (and other variable-length types) only store first several bytes of the value.
+Therefore, when indexing fields with variable-length data, we recommend that you estimate the length of your field values and set the inline size to a value that covers most (about 95%) or all values.
+For example, if you have a `String` field with 95% of the values containing 10 characters or fewer, you can set the inline size for the index on that field to 13.
+
+//For example, when you create a table with a single column primary key, Ignite will automatically create an index on the primary key.
+
+The inline sizes explained above apply to single-field indexes.
+However, when you define an index on a field of the value object or on a non-primary key column, Ignite creates a _composite index_
+by appending the primary key to the indexed value.
+Therefore, when calculating the inline size for a composite index, add the inline size of the primary key to that of the indexed field.
+
+//To summarize, when creating indexes on a variable size data fields, choose the inline size to include most of the values that the field will hold. For other data types, Ignite will calculate the inline size automatically.
+
+Below is an example of index inline size calculation for a cache where both key and value are complex objects.
+
+[source, java]
+----
+public class Key {
+    @QuerySqlField
+    private long id;
+
+    @QuerySqlField
+    @AffinityKeyMapped
+    private long affinityKey;
+}
+
+public class Value {
+    @QuerySqlField(index = true)
+    private long longField;
+
+    @QuerySqlField(index = true)
+    private int intField;
+
+    @QuerySqlField(index = true)
+    private String stringField; // we suppose that 95% of the values are 10 symbols
+}
+----
+
+The following table summarizes the inline index sizes for the indexes defined in the example above.
+
+[cols="1,1,1,2",opts="stretch,header"]
+|===
+|Index | Kind | Recommended Inline Size | Comment
+
+| (_key)
+|Primary key index
+| 5
+|Inlined hash of a binary object (5)
+
+|(affinityKey, _key)
+|Affinity key index
+|14
+|Inlined long (9) + binary object's hash (5)
+
+|(longField, _key)
+|Secondary index
+|14
+|Inlined long (9) + binary object's hash (5)
+
+|(intField, _key)
+|Secondary index
+|10
|Inlined int (5) + binary object's hash (5)
+
+|(stringField, _key)
+|Secondary index
+|18
+|Inlined string (13) + binary object's hash (5) (assuming that the string is {tilde}10 symbols)
+
+|===
+//_
+
+//The inline size for the first two indexes is set via `CacheConfiguration.sqlIndexMaxInlineSize = 29` (because a single property is responsible for two indexes, we set it to the largest value).
+//The inline size for the rest of the indexes is set when you define a corresponding index.
+Note that you will only have to set the inline size for the index on `stringField`. For other indexes, Ignite will calculate the inline size automatically.
+
+Refer to the link:SQL/indexes#configuring-index-inline-size[Configuring Index Inline Size] section for the information on how to change the inline size.
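+
+For instance, with the annotation-based configuration shown above, the inline size can be set directly on the field. A minimal sketch, assuming the `inlineSize` attribute of the `@QuerySqlField` annotation:
+
+[source, java]
+----
+public class Value {
+    // 3 bytes of header + ~10 UTF-8 characters = 13 for the string part;
+    // the inlined hash of the binary key (5) brings the total to 18.
+    @QuerySqlField(index = true, inlineSize = 18)
+    private String stringField;
+}
+----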
+
+You can check the inline size of an existing index in the link:monitoring-metrics/system-views#indexes-view[INDEXES] system view.
+
+[WARNING]
+====
+Note that since Ignite encodes strings to `UTF-8`, some characters use more than 1 byte.
+====
+
+== Query Parallelism
+
+By default, a SQL query is executed in a single thread on each participating Ignite node. This approach is optimal for
+queries that return small result sets and involve index search. For example:
+
+[source,sql]
+----
+SELECT * FROM Person p WHERE p.id = ?;
+----
+
+Certain queries might benefit from being executed in multiple threads.
+This relates to queries with table scans and aggregations, which is often the case for HTAP and OLAP workloads.
+For example:
+
+[source,sql]
+----
+SELECT SUM(salary) FROM Person;
+----
+
+The number of threads created on a single node for query execution is configured per cache and equals 1 by default.
+You can change the value by setting the `CacheConfiguration.queryParallelism` parameter, as shown in the sketch below.
+If you create SQL tables using the CREATE TABLE command, you can use a link:configuring-caches/configuration-overview#cache-templates[cache template] to set this parameter.
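+
+A minimal Java sketch (the cache name and the value are illustrative):
+
+[source, java]
+----
+CacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<>("personCache");
+
+// Execute SQL queries that touch this cache in 4 threads on each node.
+cacheCfg.setQueryParallelism(4);
+----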
+
+If a query contains `JOINs`, then all the participating caches must have the same degree of parallelism.
+
+== Index Hints
+
+Index hints are useful in scenarios where you know that one index is more suitable for certain queries than another.
+You can use them to instruct the query optimizer to choose a more efficient execution plan.
+To do this, use the `USE INDEX(indexA,...,indexN)` clause, as shown in the following example.
+
+
+[source,sql]
+----
+SELECT * FROM Person USE INDEX(index_age)
+WHERE salary > 150000 AND age < 35;
+----
+
+
+== Partition Pruning
+
+Partition pruning is a technique that optimizes queries that use affinity keys in the `WHERE` condition. When
+executing such a query, Ignite will scan only those partitions where the requested data is stored. This will reduce
+query time because the query will be sent only to the nodes that store the requested partitions.
+
+In the following example, the employee objects are colocated by the `id` field (if an affinity key is not set
+explicitly then the primary key is used as the affinity key):
+
+
+[source,sql]
+----
+CREATE TABLE employee (id BIGINT PRIMARY KEY, department_id INT, name VARCHAR)
+
+/* This query is sent to the node where the requested key is stored */
+SELECT * FROM employee WHERE id=10;
+
+/* This query is sent to all nodes */
+SELECT * FROM employee WHERE department_id=10;
+----
+
+In the next example, the affinity key is set explicitly and, therefore, will be used to colocate data and direct
+queries to the nodes that keep primary copies of the data:
+
+
+[source,sql]
+----
+CREATE TABLE employee (id BIGINT PRIMARY KEY, department_id INT, name VARCHAR) WITH "AFFINITY_KEY=department_id"
+
+/* This query is sent to all nodes */
+SELECT * FROM employee WHERE id=10;
+
+/* This query is sent to the node where the requested key is stored */
+SELECT * FROM employee WHERE department_id=10;
+----
+
+
+[NOTE]
+====
+Refer to link:data-modeling/affinity-collocation[affinity colocation] page for more details
+on how data gets colocated and how it helps boost performance in distributed storages like Ignite.
+====
+
+== Skip Reducer on Update
+
+When Ignite executes a DML operation, it first fetches all the affected rows to the reducer
+node (usually your application) for analysis, and only then prepares batches of updated values to be sent to the remote nodes.
+
+This approach might affect performance and saturate the network if a DML operation has to move many entries.
+
+Use the `skipReducerOnUpdate` flag as a hint for the SQL engine to perform all the intermediate row analysis and updates "in-place" on the server nodes.
+The hint is supported for JDBC and ODBC connections.
+
+
+[tabs]
+--
+tab:JDBC Connection String[]
+[source,text]
+----
+//jdbc connection string
+jdbc:ignite:thin://192.168.0.15?skipReducerOnUpdate=true
+----
+--
+
+
+////
+*TODO* Add tabs for ODBC and other programming languages - C# and C++
+////
+
+== SQL On-heap Row Cache
+
+Ignite stores data and indexes in its own memory space outside of Java heap. This means that with every data
+access, a part of the data will be copied from the off-heap space to Java heap, potentially deserialized, and kept in
+the heap as long as your application or server node references it.
+
+The SQL on-heap row cache is intended to store hot rows (key-value objects) in Java heap, minimizing resources
+spent for data copying and deserialization. Each cached row refers to an entry in the off-heap region and can be
+invalidated when one of the following happens:
+
+* The master entry stored in the off-heap region is updated or removed.
+* The data page that stores the master entry is evicted from RAM.
+
+The on-heap row cache can be enabled for a specific cache/table (if you use CREATE TABLE to create SQL tables and caches,
+then the parameter can be passed via a link:configuring-caches/configuration-overview#cache-templates[cache template]):
+
+////
+TODO Add tabs for ODBC/JDBC and other programming languages - Java C# and C++
+////
+
+[source,xml]
+----
+include::code-snippets/xml/sql-on-heap-cache.xml[tags=ignite-config;!discovery,indent=0]
+----
+
+////
+*TODO* Add tabs for ODBC/JDBC and other programming languages - Java C# and C++
+////
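+
+A minimal Java sketch of the same setting (the cache name is illustrative):
+
+[source, java]
+----
+CacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<>("personCache");
+
+// Keep hot rows deserialized on the Java heap.
+cacheCfg.setSqlOnheapCacheEnabled(true);
+----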
+
+If the row cache is enabled, you might be able to trade RAM for performance. You might get up to a 2x performance increase for some SQL queries and use cases by allocating more RAM for rows caching purposes.
+
+[WARNING]
+====
+[discrete]
+=== SQL On-Heap Row Cache Size
+
+Presently, the cache is unlimited and can occupy as much RAM as allocated to your memory data regions. Make sure to:
+
+* Set the JVM max heap size equal to the total size of all the data regions that store caches for which this on-heap row cache is enabled.
+
+* link:perf-and-troubleshooting/memory-tuning#java-heap-and-gc-tuning[Tune] JVM garbage collection accordingly.
+====
+
+== Using TIMESTAMP instead of DATE
+
+//TODO: is this still valid?
+Use the `TIMESTAMP` type instead of `DATE` whenever possible. Presently, the `DATE` type is serialized/deserialized very
+inefficiently, resulting in performance degradation.
diff --git a/docs/_docs/perf-and-troubleshooting/thread-pools-tuning.adoc b/docs/_docs/perf-and-troubleshooting/thread-pools-tuning.adoc
new file mode 100644
index 0000000..6da456c
--- /dev/null
+++ b/docs/_docs/perf-and-troubleshooting/thread-pools-tuning.adoc
@@ -0,0 +1,117 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Thread Pools Tuning
+
+Ignite creates and maintains a variety of thread pools that are used for different purposes. In this section, we list some of the more common internal pools and show how you can create a custom one.
+
+////
+Refer to the *TODO Link to APIs/Javadoc/etc.* APIs documentation to get a full list of thread pools available in Ignite.
+////
+
+== System Pool
+
+The system pool handles all cache-related operations except for SQL and some other types of queries, which go to the queries pool.
+This pool is also responsible for processing the cancellation of compute tasks.
+
+The default pool size is `max(8, total number of cores)`.
+Use `IgniteConfiguration.setSystemThreadPoolSize(...)` or a similar API from your programming language to change the pool size.
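+
+For instance, a minimal Java sketch (the value is illustrative; the other pools described below are adjusted the same way through their respective setters):
+
+[source, java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+// Allocate 16 threads for cache-related operations.
+cfg.setSystemThreadPoolSize(16);
+----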
+
+== Queries Pool
+
+The queries pool takes care of all SQL, Scan, and SPI queries being sent and executed across the cluster.
+
+The default pool size is `max(8, total number of cores)`.
+Use `IgniteConfiguration.setQueryThreadPoolSize(...)` or a similar API from your programming language to change the pool size.
+
+== Public Pool
+
+The public pool is the workhorse of the Compute Grid: all computations are received and processed by this pool.
+
+The default pool size is `max(8, total number of cores)`. Use `IgniteConfiguration.setPublicThreadPoolSize(...)` or a similar API from your programming language to change the pool size.
+
+== Service Pool
+
+Service Grid calls go to the services' thread pool.
+Having dedicated pools for the Service and Compute components allows us to avoid thread starvation and deadlocks when a service implementation wants to call a computation, or vice versa.
+
+The default pool size is `max(8, total number of cores)`. Use `IgniteConfiguration.setServiceThreadPoolSize(...)` or a similar API from your programming language to change the pool size.
+
+== Striped Pool
+
+The striped pool helps accelerate basic cache operations and transactions by spreading operations execution across multiple stripes that don't contend with each other for resources.
+
+The default pool size is `max(8, total number of cores)`. Use `IgniteConfiguration.setStripedPoolSize(...)` or a similar API from your programming language to change the pool size.
+
+== Data Streamer Pool
+
+The data streamer pool processes all messages and requests coming from `IgniteDataStreamer` and a variety of streaming adapters that use `IgniteDataStreamer` internally.
+
+The default pool size is `max(8, total number of cores)`. Use `IgniteConfiguration.setDataStreamerThreadPoolSize(...)` or a similar API from your programming language to change the pool size.
+
+== Creating Custom Thread Pool
+
+It is possible to configure a custom thread pool for compute tasks.
+This is useful if you want to execute one compute task from another synchronously while avoiding deadlocks.
+To guarantee this, make sure that the nested task is executed in a thread pool separate from the parent task's pool.
+
+A custom pool is defined in `IgniteConfiguration` and must have a unique name:
+
+:javaFile: code-snippets/java/src/main/java/org/apache/ignite/snippets/CustomThreadPool.java
+
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+include::code-snippets/xml/thread-pool.xml[tags=ignite-config;!discovery,indent=0]
+----
+
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=pool-config,indent=0]
+----
+--
+
+Now, let's assume that you want to execute the following compute task in a thread from the `myPool` defined above:
+
+[source,java]
+----
+include::{javaFile}[tags=inner-runnable,indent=0]
+----
+
+To do that, use `IgniteCompute.withExecutor()`, which will execute the task immediately from the parent task, as shown below:
+
+[source,java]
+----
+include::{javaFile}[tags=outer-runnable,indent=0]
+----
+
+The parent task's execution might be triggered the following way and, in this scenario, it will be executed by the public pool:
+
+[source,java]
+----
+ignite.compute().run(new OuterRunnable());
+----
+
+[WARNING]
+====
+[discrete]
+=== Undefined Thread Pool
+
+If an application attempts to execute a compute task in a custom pool which is not defined in the configuration of the node, then a special warning message will be printed to the logs, and the task will be picked up by the public pool for execution.
+====
diff --git a/docs/_docs/perf-and-troubleshooting/troubleshooting.adoc b/docs/_docs/perf-and-troubleshooting/troubleshooting.adoc
new file mode 100644
index 0000000..2ed3956
--- /dev/null
+++ b/docs/_docs/perf-and-troubleshooting/troubleshooting.adoc
@@ -0,0 +1,164 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Troubleshooting and Debugging
+
+This article covers some common tips and tricks for debugging and troubleshooting Ignite deployments.
+
+== Debugging Tools: Consistency Check Command
+
+The `./control.sh|bat` utility includes a set of link:tools/control-script#consistency-check-commands[consistency check commands]
+that help with verifying internal data consistency invariants.
+
+== Persistence Files Disappear on Restart
+
+On some systems, the default location for Ignite persistence files might be under a `temp` folder. This can lead to situations where persistence files are removed by the operating system whenever a node process is restarted. To avoid this:
+
+* Ensure that the `WARN` logging level is enabled for Ignite. You will see a warning if the persistence files are written to the temporary directory.
+* Change the location of all persistence files using the `DataStorageConfiguration` APIs, such as `setStoragePath(...)`,
+`setWalPath(...)`, and `setWalArchivePath(...)`, as shown in the sketch below.
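+
+A minimal Java sketch of relocating the files (the paths are illustrative):
+
+[source, java]
+----
+DataStorageConfiguration dsCfg = new DataStorageConfiguration();
+
+// Keep the persistence files out of the temp directory.
+dsCfg.setStoragePath("/opt/ignite/storage");
+dsCfg.setWalPath("/opt/ignite/wal");
+dsCfg.setWalArchivePath("/opt/ignite/wal-archive");
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setDataStorageConfiguration(dsCfg);
+----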
+
+== Cluster Doesn't Start After Field Type Changes
+
+When developing your application, you may need to change the type of a custom
+object’s field. For instance, let’s say you have object `A` with field `A.range` of
+`int` type and then you decide to change the type of `A.range` to `long` right in
+the source code. When you do this, the cluster or the application will fail to
+restart because Ignite doesn't support field/column type changes.
+
+When this happens _and you are still in development_, you need to go into the
+file system and remove the following directories: `marshaller/`, `db/`, and `wal/`
+located in the Ignite working directory (`db` and `wal` might be located in other
+places if you have redefined their location).
+
+However, if you are _in production_, then instead of changing field types, add a
+new field with a different name to your object model and remove the old one. This operation is fully
+supported. In addition, the `ALTER TABLE` command can be used to add new
+columns or remove existing ones at run time.
+
+== Debugging GC Issues
+
+This section contains information that may be helpful when you need to debug and
+troubleshoot issues related to Java heap usage or GC pauses.
+
+=== Heap Dumps
+
+If the JVM throws `OutOfMemoryError`, configure it to dump the heap automatically the next time the error occurs.
+This helps if the root cause of the error is not clear and a deeper look at the heap state at the moment of failure is required:
+
+++++
+<code-tabs>
+<code-tab data-tab="Shell">
+++++
+[source,shell]
+----
+-XX:+HeapDumpOnOutOfMemoryError
+-XX:HeapDumpPath=/path/to/heapdump
+-XX:OnOutOfMemoryError="kill -9 %p"
+-XX:+ExitOnOutOfMemoryError
+----
+++++
+</code-tab>
+</code-tabs>
+++++
+
+=== Detailed GC Logs
+
+In order to capture detailed information about GC related activities, make sure you have the settings below configured
+in the JVM settings of your cluster nodes:
+
+++++
+<code-tabs>
+<code-tab data-tab="Shell">
+++++
+[source,shell]
+----
+-XX:+PrintGCDetails
+-XX:+PrintGCTimeStamps
+-XX:+PrintGCDateStamps
+-XX:+UseGCLogFileRotation
+-XX:NumberOfGCLogFiles=10
+-XX:GCLogFileSize=100M
+-Xloggc:/path/to/gc/logs/log.txt
+----
+++++
+</code-tab>
+</code-tabs>
+++++
+
+Replace `/path/to/gc/logs/` with an actual path on your file system.
+
+In addition, for the G1 collector, set the property below. It provides many additional details that are
+purposefully not included in the `-XX:+PrintGCDetails` output:
+
+++++
+<code-tabs>
+<code-tab data-tab="Shell">
+++++
+[source,shell]
+----
+-XX:+PrintAdaptiveSizePolicy
+----
+++++
+</code-tab>
+</code-tabs>
+++++
+
+=== Performance Analysis With Flight Recorder
+
+When you need to debug performance or memory issues, you can use Java Flight Recorder to continuously
+collect low-level runtime statistics, enabling after-the-fact incident analysis. To enable Java Flight Recorder, use the
+following settings:
+
+++++
+<code-tabs>
+<code-tab data-tab="Shell">
+++++
+[source,shell]
+----
+-XX:+UnlockCommercialFeatures
+-XX:+FlightRecorder
+-XX:+UnlockDiagnosticVMOptions
+-XX:+DebugNonSafepoints
+----
+++++
+</code-tab>
+</code-tabs>
+++++
+
+To start recording the state on a particular Ignite node use the following command:
+
+++++
+<code-tabs>
+<code-tab data-tab="Shell">
+++++
+[source,shell]
+----
+jcmd <PID> JFR.start name=<recording_name> duration=60s filename=/var/recording/recording.jfr settings=profile
+----
+++++
+</code-tab>
+</code-tabs>
+++++
+
+For Flight Recorder-related details, refer to Oracle's official documentation.
+
+=== JVM Pauses
+
+Occasionally you may see a warning message about the JVM being paused for too long. This can happen during bulk loading, for example.
+
+Adjusting the `IGNITE_JVM_PAUSE_DETECTOR_THRESHOLD` timeout setting may give the process time to finish without generating the warning. You can set the threshold via an environment variable, or pass it as a JVM argument (`-DIGNITE_JVM_PAUSE_DETECTOR_THRESHOLD=5000`) or as a parameter to ignite.sh (`-J-DIGNITE_JVM_PAUSE_DETECTOR_THRESHOLD=5000`).
+
+The value is in milliseconds.
+
diff --git a/docs/_docs/perf-and-troubleshooting/yardstick-benchmarking.adoc b/docs/_docs/perf-and-troubleshooting/yardstick-benchmarking.adoc
new file mode 100644
index 0000000..127e7b8
--- /dev/null
+++ b/docs/_docs/perf-and-troubleshooting/yardstick-benchmarking.adoc
@@ -0,0 +1,176 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Benchmarking Ignite With Yardstick Framework
+
+== Yardstick Ignite Benchmarks
+
+Apache Ignite benchmarks are written on top of the Yardstick framework, which allows you to measure the performance of
+various Apache Ignite components and modules. The documentation below describes how to execute and configure pre-assembled
+benchmarks. If you need to add new benchmarks
+or build existing ones, refer to the instructions in Ignite's `DEVNOTES.txt` file in the source directory.
+
+Visit the https://github.com/gridgain/yardstick[Yardstick Repository, window=_blank] for more details on the resulting graphs generation
+and how the framework works.
+
+== Running Ignite Benchmarks Locally
+
+The simplest way to start with benchmarking is to use one of the executable scripts available under the `benchmarks/bin` directory:
+
+[tabs]
+--
+tab:Shell[]
+[source, shell]
+----
+./bin/benchmark-run-all.sh config/benchmark-sample.properties
+----
+--
+
+The command above will benchmark the cache `put` operation for a distributed atomic cache. The results of the benchmark
+will be added to an auto-generated `output/results-{DATE-TIME}` directory.
+
+If the `./bin/benchmark-run-all.sh` command is executed as-is without any parameters and modifications in the configuration
+file, then all the available benchmarks will be executed on a local machine using the `config/benchmark.properties`
+configuration. In case of any issues, refer to the logs that are added to an auto-generated `output/logs-{DATE-TIME}` directory.
+
+For more information about available benchmarks and configuration parameters, refer to the
+<<existing-benchmarks,Existing Benchmarks>> and <<properties-and-command-line-arguments,Properties And Command Line Arguments>>
+sections below.
+
+== Running Ignite Benchmarks Remotely
+
+To benchmark Apache Ignite across several remote hosts:
+
+. Go to `config/ignite-remote-config.xml` and replace `<value>127.0.0.1:47500..47509</value>` with the actual IPs of all the remote
+hosts. If you prefer to use another kind of IP finder, refer to link:clustering/clustering[Cluster Configuration].
+. Go to `config/benchmark-remote-sample.properties` and replace `localhost` with the actual IPs of the remote hosts in the following places:
+`SERVERS=localhost,localhost`
+`DRIVERS=localhost,localhost`
+where a `DRIVER` is a host (usually an Ignite client node) that executes the benchmarking logic, and `SERVERS` are the Ignite nodes
+being benchmarked. If you plan to execute the full set of available benchmarks, replace the `localhost` occurrences in the same
+places in the `config/benchmark-remote.properties` file as well.
+. Upload the Ignite Yardstick benchmarks to one of your `DRIVERS` hosts, in its own working directory.
+. Log in on the remote host that will be the `DRIVER`, and execute the following command:
++
+[tabs]
+--
+tab:Shell[]
+[source, shell]
+----
+./bin/benchmark-run-all.sh config/benchmark-remote-sample.properties
+----
+--
+
+By default, all the necessary files are automatically uploaded from the host on which you run the command above to
+every other host, to the same path. If you prefer to do it manually, set the `AUTO_COPY` variable in the property file to `false`.
+
+The command above will benchmark the cache put operation for a distributed atomic cache. The results of the benchmark will
+be added to an auto-generated `output/results-{DATE-TIME}` directory.
+
+If you want to execute all the available benchmarks across the remote hosts, then execute the following command on the `DRIVER` side:
+[tabs]
+--
+tab:Shell[]
+[source, shell]
+----
+./bin/benchmark-run-all.sh config/benchmark-remote.properties
+----
+--
+
+== Existing Benchmarks
+
+The following benchmarks are provided by default:
+
+. `GetBenchmark` - benchmarks atomic distributed cache get operation.
+. `PutBenchmark` - benchmarks atomic distributed cache put operation.
+. `PutGetBenchmark` - benchmarks atomic distributed cache put and get operations together.
+. `PutTxBenchmark` - benchmarks transactional distributed cache put operation.
+. `PutGetTxBenchmark` - benchmarks transactional distributed cache put and get operations together.
+. `SqlQueryBenchmark` - benchmarks distributed SQL query over cached data.
+. `SqlQueryJoinBenchmark` - benchmarks distributed SQL query with a Join over cached data.
+. `SqlQueryPutBenchmark` - benchmarks distributed SQL query with simultaneous cache updates.
+. `AffinityCallBenchmark` - benchmarks affinity call operation.
+. `ApplyBenchmark` - benchmarks apply operation.
+. `BroadcastBenchmark` - benchmarks broadcast operations.
+. `ExecuteBenchmark` - benchmarks execute operations.
+. `RunBenchmark` - benchmarks running task operations.
+. `PutGetOffHeapBenchmark` - benchmarks atomic distributed cache put and get operations together off-heap.
+. `PutGetOffHeapValuesBenchmark` - benchmarks atomic distributed cache put and get value operations together off-heap.
+. `PutOffHeapBenchmark` - benchmarks atomic distributed cache put operations off-heap.
+. `PutOffHeapValuesBenchmark` - benchmarks atomic distributed cache put value operations off-heap.
+. `PutTxOffHeapBenchmark` - benchmarks transactional distributed cache put operation off-heap.
+. `PutTxOffHeapValuesBenchmark` - benchmarks transactional distributed cache put value operation off-heap.
+. `SqlQueryOffHeapBenchmark` - benchmarks distributed SQL query over cached data off-heap.
+. `SqlQueryJoinOffHeapBenchmark` - benchmarks distributed SQL query with a Join over cached data off-heap.
+. `SqlQueryPutOffHeapBenchmark` - benchmarks distributed SQL query with simultaneous cache updates off-heap.
+. `PutAllBenchmark` - benchmarks atomic distributed cache batch put operation.
+. `PutAllTxBenchmark` - benchmarks transactional distributed cache batch put operation.
+
+== Properties And Command Line Arguments
+
+Note that this section only describes the configuration parameters specific to Ignite benchmarks, not those of the Yardstick framework.
+To run Ignite benchmarks and generate graphs, you will need to run them using the Yardstick framework scripts in the `bin` folder.
+
+Refer to the https://github.com/gridgain/yardstick/blob/master/README.md[Yardstick Documentation, window=_blank] for common Yardstick
+properties and command line arguments for running Yardstick scripts.
+
+The following Ignite benchmark properties can be defined in the benchmark configuration:
+
+* `-b <num>` or `--backups <num>` - Number of backups for every key.
+* `-cfg <path>` or `--Config <path>` - Path to Ignite configuration file.
+* `-cs` or `--cacheStore` - Enable or disable cache store readThrough, writeThrough.
+* `-cl` or `--client` - Client flag. Use this flag if you are running more than one `DRIVER`; otherwise, the additional drivers would behave like servers.
+* `-nc` or `--nearCache` - Near cache flag.
+* `-nn <num>` or `--nodeNumber <num>` - Number of nodes (automatically set in `benchmark.properties`); used to wait for the specified number of nodes to start.
+* `-sm <mode>` or `--syncMode <mode>` - Synchronization mode (defined in `CacheWriteSynchronizationMode`).
+* `-r <num>` or `--range` - Range of keys that are randomly generated for cache operations.
+* `-rd` or `--restartdelay` - Restart delay in seconds.
+* `-rs` or `--restartsleep` - Restart sleep in seconds.
+* `-rth <host>` or `--restHost <host>` - REST TCP host.
+* `-rtp <num>` or `--restPort <num>` - REST TCP port; indicates that an Ignite node is ready to process Ignite clients.
+* `-ss` or `--syncSend` - Flag indicating whether synchronous send is used in `TcpCommunicationSpi`.
+* `-txc <value>` or `--txConcurrency <value>` - Cache transaction concurrency control, either `OPTIMISTIC` or `PESSIMISTIC` (defined in `CacheTxConcurrency`).
+* `-txi <value>` or `--txIsolation <value>` - Cache transaction isolation (defined in `CacheTxIsolation`).
+* `-wb` or `--writeBehind` - Enable or disable writeBehind for cache store.
+
+For example, if you want to run two `IgniteNode` servers on localhost with the `PutBenchmark` benchmark, the number of
+backups set to `1`, and the synchronization mode set to `PRIMARY_SYNC`, specify the following configuration
+in the `benchmark.properties` file:
+[tabs]
+--
+tab:Shell[]
+[source, shell]
+----
+SERVER_HOSTS=localhost,localhost
+...
+
+# Note that -dn and -sn, which stand for data node and server node,
+# are native Yardstick parameters and are documented in
+# Yardstick framework.
+CONFIGS="-b 1 -sm PRIMARY_SYNC -dn PutBenchmark -sn IgniteNode"
+----
+--
+
+== Building From Sources
+
+Run `mvn clean package -Pyardstick -pl modules/yardstick -am -DskipTests` in the Apache Ignite root directory.
+
+This command will compile the project and also unpack the scripts from `yardstick-resources.zip` file to `modules/yardstick/target/assembly/bin` directory.
+
+Artifacts can be found in the `modules/yardstick/target/assembly` directory.
+
+== Custom Ignite Benchmarks
+
+All benchmarks extend the `AbstractBenchmark` class. A new benchmark should also extend this abstract class and
+implement the `test` method (this is the method that actually tests performance).
diff --git a/docs/_docs/persistence/custom-cache-store.adoc b/docs/_docs/persistence/custom-cache-store.adoc
new file mode 100644
index 0000000..8847391
--- /dev/null
+++ b/docs/_docs/persistence/custom-cache-store.adoc
@@ -0,0 +1,103 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Implementing Custom Cache Store
+
+You can implement your own custom `CacheStore` and use it as an underlying data storage for the cache. The methods of `IgniteCache` that read or modify the data will call the corresponding methods of the `CacheStore` implementation.
+
+The following table describes the methods of the `CacheStore` interface.
+
+[cols="1,3",opts="header"]
+|===
+|Method | Description
+
+|`loadCache()` | The `loadCache(...)` method is called whenever `IgniteCache.loadCache(...)` is called and is usually used to preload data from the underlying database into memory. This method loads data on all nodes on which the cache is present.
+
+To load the data on a single node, call `IgniteCache.localLoadCache()` on that node.
+
+|`load()`, `write()`, `delete()` | The `load()`, `write()`, and `delete()` methods are called whenever the `get()`, `put()`, and `remove()` methods are called on the `IgniteCache` interface. These methods are used to enable the _read-through_ and _write-through_ behavior when working with individual cache entries.
+
+|`loadAll()`, `writeAll()`, `deleteAll()` | `loadAll()`, `writeAll()`, and `deleteAll()` in the `CacheStore` are called whenever methods `getAll()`, `putAll()`, and `removeAll()` are called on the `IgniteCache` interface. These methods are used to enable the read-through and write-through behavior when working with multiple cache entries and should generally be implemented using batch operations to provide better performance.
+|===
+
+
+== CacheStoreAdapter
+`CacheStoreAdapter` is an extension of `CacheStore` that provides default implementations for bulk operations, such as `loadAll(Iterable)`, `writeAll(Collection)`, and `deleteAll(Collection)`, by iterating through all entries and calling corresponding `load()`, `write()`, and `delete()` operations on individual entries.
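+
+Below is a minimal sketch of a store built on `CacheStoreAdapter`. The in-memory map stands in for a real database connection, and the `Person` class and all names are illustrative:
+
+[source, java]
+----
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import javax.cache.Cache;
+import org.apache.ignite.cache.store.CacheStoreAdapter;
+
+public class MapBackedPersonStore extends CacheStoreAdapter<Long, Person> {
+    // A stand-in for a real database connection.
+    private final ConcurrentMap<Long, Person> db = new ConcurrentHashMap<>();
+
+    // Called on cache misses when read-through is enabled.
+    @Override public Person load(Long key) {
+        return db.get(key);
+    }
+
+    // Called on cache updates when write-through is enabled.
+    @Override public void write(Cache.Entry<? extends Long, ? extends Person> entry) {
+        db.put(entry.getKey(), entry.getValue());
+    }
+
+    // Called on cache removals when write-through is enabled.
+    @Override public void delete(Object key) {
+        db.remove(key);
+    }
+}
+----
+
+To plug such a store into a cache, set it via `CacheConfiguration.setCacheStoreFactory(FactoryBuilder.factoryOf(MapBackedPersonStore.class))` and enable `setReadThrough(true)` and `setWriteThrough(true)`.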
+
+== CacheStoreSession
+Cache store sessions are used to hold the context between multiple operations on the store and are mainly employed to provide transactional support. The operations within one transaction are executed using the same database connection, and the connection is committed when the transaction commits.
+A cache store session is represented by an object of the `CacheStoreSession` class, which can be injected into your `CacheStore` implementation via the `@CacheStoreSessionResource` annotation.
+
+An example of how to implement a transactional cache store can be found on link:{githubUrl}/examples/src/main/java/org/apache/ignite/examples/datagrid/store/jdbc/CacheJdbcPersonStore.java[GitHub].
+
+== Example
+
+Below is an example of a non-transactional implementation of `CacheStore`. For an example of the implementation with support for transactions, please refer to the link:{githubUrl}/examples/src/main/java/org/apache/ignite/examples/datagrid/store/jdbc/CacheJdbcPersonStore.java[CacheJdbcPersonStore.java] file on GitHub.
+
+
+
+
+.JDBC non-transactional
+[source, java]
+----
+include::{javaCodeDir}/CacheJdbcPersonStore.java[tags=class, indent=0]
+
+----
+
+
+////
+== Cache Store and Binary Objects
+*TODO*
+////
+
+////
+The need for this section is questionable
+
+=== Partition-Aware Data Loading
+
+When you call `IgniteCache.loadCache()`, it delegates to the underlying `CacheStore.loadCache()`, which is called on all server nodes. The default implementation of that method simply iterates over all records and skips those keys that do not link:data-modeling/data-partitioning[belong to the node]. This is not very efficient because every node loads *TODO*
+
+
+
+To improve loading speed, you can take advantage of partitioning. Each node holds a subset of partitions and only needs to load the data for these partitions.
+
+You can use the <<affinity function>> to find how keys are assigned to partitions.
+
+
+Let's extend the example given above to make it partition aware. We add a field that will indicate the partition ID the key belongs to.
+
+[source,java]
+----
+IgniteCache cache = ignite.cache(cacheName);
+Affinity aff = ignite.affinity(cacheName);
+
+for (int personId = 0; personId < PERSONS_COUNT; personId++) {
+    // Get partition ID for the key under which person is stored in cache.
+    int partId = aff.partition(personId);
+
+    Person person = new Person(personId);
+    person.setPartitionId(partId);
+    // Fill other fields.
+
+    cache.put(personId, person);
+}
+----
+
+NOTE: If you alread have a database with large amount of data and want to use CacheStore as a caching layer, you can accelerate data loading
+
+
+////
+
+
+
diff --git a/docs/_docs/persistence/disk-compression.adoc b/docs/_docs/persistence/disk-compression.adoc
new file mode 100644
index 0000000..8d3ccdd
--- /dev/null
+++ b/docs/_docs/persistence/disk-compression.adoc
@@ -0,0 +1,62 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Disk Compression
+
+Disk compression refers to the process of compressing data pages when they are written to disk, reducing the size of the on-disk storage.
+The pages are kept in memory uncompressed, but when the data is flushed to disk it is compressed using the configured algorithm.
+This applies only to data pages that are stored to the persistent storage and does not compress indexes or WAL records.
+link:[WAL records compression] can be enabled separately.
+
+Disk page compression can be enabled on a per-cache basis in the cache configuration.
+The cache must reside in a persistent link:[data region].
+There is no option to enable disk page compression globally at the moment.
+Moreover, the following prerequisites must be met:
+
+* Set the `pageSize` property in your data storage configuration to at least twice the page size of your file system. This means that the page size must be either 8K or 16K.
+* Enable the `ignite-compress` module.
+
+To enable disk page compression for a cache, provide one of the available compression algorithms in the cache configuration, as shown in the following example:
+
+
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+include::code-snippets/xml/disk-compression.xml[tags=ignite-config;!discovery, indent=0]
+----
+
+tab:Java[]
+
+[source, java]
+----
+include::{javaCodeDir}/DiskCompression.java[tags=configuration, indent=0]
+----
+
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+
+--
+
+== Supported Algorithms
+
+The supported compression algorithms include:
+
+* `ZSTD` — supports compression levels from -131072 to 22 (default: 3).
+* `LZ4` — supports compression levels from 0 to 17 (default: 0).
+* `SNAPPY` —  the Snappy algorithm.
+* `SKIP_GARBAGE` — this algorithm only extracts useful data from half-filled pages and does not compress the data.
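+
+For instance, a minimal Java sketch of the cache-level setting (the cache name and the compression level are illustrative):
+
+[source, java]
+----
+CacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<>("personCache");
+
+// Compress data pages with ZSTD at level 3 when they are written to disk.
+cacheCfg.setDiskPageCompression(DiskPageCompression.ZSTD);
+cacheCfg.setDiskPageCompressionLevel(3);
+----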
diff --git a/docs/_docs/persistence/external-storage.adoc b/docs/_docs/persistence/external-storage.adoc
new file mode 100644
index 0000000..a7ab74f
--- /dev/null
+++ b/docs/_docs/persistence/external-storage.adoc
@@ -0,0 +1,224 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= External Storage
+:javaFile: {javaCodeDir}/ExternalStorage.java
+
+== Overview
+
+You can use Ignite as a caching layer on top of an existing database, such as an RDBMS or a NoSQL database, for example, Apache Cassandra or MongoDB.
+This use case accelerates the underlying database by employing in-memory processing.
+
+Ignite provides an out-of-the-box integration with Apache Cassandra.
+For other NoSQL databases for which integration is not available off-the-shelf, you can provide your own link:persistence/custom-cache-store[implementation of the `CacheStore` interface].
+
+The two main use cases where an external storage can be used include:
+
+* A caching layer to an existing database. In this scenario, you can improve the processing speed by loading data into memory. You can also bring SQL support to a database that does not have it (when all data is loaded into memory).
+
+* You want to persist the data in an external database (instead of using the link:persistence/native-persistence[native persistence]).
+
+image::images/external_storage.png[]
+
+////////////////////////////////////////////////////////////////////////////////
+What follows is a java-specific documentation
+////////////////////////////////////////////////////////////////////////////////
+
+
+The `CacheStore` interface extends both `javax.cache.integration.CacheLoader` and `javax.cache.integration.CacheWriter`, which are used for _read-through_ and _write-through_ features respectively. You can also implement each of the interfaces individually and provide them to the cache configuration separately.
+
+NOTE: In addition to key-value operations, Ignite writes through the results of SQL INSERT, UPDATE, and MERGE queries. However, SELECT queries never read through data from the external database.
+
+=== Read-Through and Write-Through
+
+Read-through means that the data is read from the underlying persistent store if it is not available in the cache.
+Note that this is true only for get operations made through the key-value API; SELECT queries never read through data from the external database.
+To execute SELECT queries, the data must be preloaded from the database into the cache by calling the `loadCache()` method.
+
+Write-through means that the data is automatically persisted when it is updated in the cache.
+All read-through and write-through operations participate in cache transactions and are committed or rolled back as a whole.
+
+=== Write-Behind Caching
+
+In a simple write-through mode, each put and remove operation involves a corresponding request to the persistent store; therefore, the overall duration of the update operation might be relatively long. Additionally, an intensive cache update rate can cause an extremely high storage load.
+
+For such cases, you can enable the _write-behind_ mode, in which update operations are performed asynchronously. The key concept of this approach is to accumulate updates and asynchronously flush them to the underlying database as a bulk operation.
+You can trigger the flushing of data based on time-based events (the maximum time that a data entry can reside in the queue is limited), queue-size events (the queue is flushed when its size reaches a particular threshold), or both of them (whichever occurs first), as shown in the sketch below.
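+
+A minimal Java sketch of enabling the mode (the cache name and the thresholds are illustrative):
+
+[source, java]
+----
+CacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<>("personCache");
+
+cacheCfg.setWriteThrough(true);
+
+// Accumulate updates in a queue and flush them to the store asynchronously in batches.
+cacheCfg.setWriteBehindEnabled(true);
+cacheCfg.setWriteBehindFlushSize(10240);     // flush when the queue holds 10240 entries...
+cacheCfg.setWriteBehindFlushFrequency(5000); // ...or every 5 seconds, whichever comes first
+----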
+
+[WARNING]
+====
+[discrete]
+=== Performance vs. Consistency
+
+Enabling write-behind caching increases performance by performing asynchronous updates, but this can lead to a potential drop in consistency as some updates could be lost due to node failures or crashes.
+
+====
+
+
+
+With the write-behind approach, only the last update to an entry is written to the underlying storage.
+If a cache entry with a key named `key1` is sequentially updated with values `value1`, `value2`, and `value3` respectively, then only a single store request for the `(key1, value3)` pair is propagated to the persistent store.
+
+[NOTE]
+====
+[discrete]
+=== Update Performance
+
+Batch operations are usually more efficient than a sequence of individual operations.
+You can exploit this feature by enabling batch operations in the write-behind mode.
+Update sequences of similar types (put or remove) can be grouped to a single batch.
+For example, if you put the pairs `(key1, value1)`, `(key2, value2)`, `(key3, value3)` into the cache sequentially, the three operations are batched into a single `CacheStore.putAll(...)` operation.
+====
+
+
+== RDBMS Integration
+
+To use an RDBMS as an underlying storage, you can use one of the following implementations of `CacheStore`.
+
+* `CacheJdbcPojoStore` -- stores objects as a set of fields using reflection. Use this implementation if you are adding Ignite on top of an existing database and want to use specific fields (or all of them) from the underlying table.
+* `CacheJdbcBlobStore` -- stores objects in the underlying database in the Blob format. This option is useful in scenarios when you use an external database as a persistent storage and want to store your data in a simple format.
+
+
+
+////////////////////////////////////////////////////////////////////////////////
+To configure a `CacheStore`:
+
+. Add the JDBC driver of the database you are using to the classpath of your application.
+. Set the `CacheConfiguration.cacheStoreFactory` property of `CacheConfiguration` to use one of the implementation of `CacheStore`. You will need to provide connection parameters in `cacheStoreFactory`.
+
+Once the configuration is set, you can use the `IgniteCache.loadCache(...)` method to load the data from the database into the respective caches.
+////////////////////////////////////////////////////////////////////////////////
+
+
+
+Below are configuration examples for both implementations of `CacheStore`.
+
+
+=== CacheJdbcPojoStore
+
+With `CacheJdbcPojoStore`, you can store objects as a set of fields and configure the mapping between table columns and object fields via the configuration.
+
+. Set the `CacheConfiguration.cacheStoreFactory` property to `org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory` and provide the following properties:
++
+--
+* `dataSourceBean` -- database connection credentials: URL, user, password.
+* `dialect` -- the class that implements the SQL dialect compatible with your database.
+Ignite provides out-of-the-box implementations for MySQL, Oracle, H2, SQLServer, and DB2 databases.
+These dialects can be found in the `org.apache.ignite.cache.store.jdbc.dialect` package of the Ignite distribution.
+* `types` -- this property is required to define mappings between the database table and the corresponding POJO (see POJO configuration example below).
+--
+. Optionally, configure link:SQL/sql-api#query-entities[query entities] if you want to execute SQL queries on the cache.
+
+The following example demonstrates how to configure an Ignite cache on top of a MySQL table.
+The table has two columns: `id` (INTEGER) and `name` (VARCHAR), which are mapped to objects of the `Person` class.
+
+
+You can configure `CacheJdbcPojoStore` via both the XML configuration and Java code.
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/cache-jdbc-pojo-store.xml[tags=, indent=0]
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=pojo,indent=0]
+----
+--
+
+.Person Class
+[source,java]
+----
+include::{javaFile}[tag=person,indent=0]
+----
+
+=== CacheJdbcBlobStore
+`CacheJdbcBlobStore` stores objects in the underlying database in the blob format.
+It creates a table named `ENTRIES`, with the `akey` and `val` columns (both have the `binary` type).
+
+You can change the default table definition by providing a custom create table query and DML queries used to load, delete, and update the data.
+Refer to javadoc:org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStore[] for details.
+
+In the example below, the objects of the Person class are stored as an array of bytes in a single column.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean id="mysqlDataSource" class="com.mysql.jdbc.jdbc2.optional.MysqlDataSource">
+  <property name="URL" value="jdbc:mysql://[host]:[port]/[database]"/>
+  <property name="user" value="YOUR_USER_NAME"/>
+  <property name="password" value="YOUR_PASSWORD"/>
+</bean>
+
+<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+   <property name="cacheConfiguration">
+     <list>
+       <bean class="org.apache.ignite.configuration.CacheConfiguration">
+           <property name="name" value="PersonCache"/>
+           <property name="cacheStoreFactory">
+             <bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStoreFactory">
+               <property name="dataSourceBean" value="mysqlDataSource"/>
+             </bean>
+           </property>
+       </bean>
+      </list>
+    </property>
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=blob1,indent=0]
+----
+
+tab:C#/.NET[]
+tab:C++[]
+--
+
+== Loading Data
+
+After you configure the cache store and start the cluster, load the data from the database into your cluster as follows:
+
+[source,java]
+----
+include::{javaFile}[tag=blob2,indent=0]
+----
+
+== NoSQL Database Integration
+You can integrate Ignite with any NoSQL database by implementing the `CacheStore` interface.
+
+CAUTION: Even though Ignite supports distributed transactions, it doesn't make your NoSQL database transactional, unless the database supports transactions out of the box.
+
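+A minimal sketch of such an implementation, based on the `CacheStoreAdapter` convenience class (the `MyNoSqlClient` type and its `get`/`put`/`remove` calls are hypothetical placeholders for your database's API):
+
+[source, java]
+----
+import javax.cache.Cache;
+import org.apache.ignite.cache.store.CacheStoreAdapter;
+
+public class MyNoSqlStore extends CacheStoreAdapter<Long, Person> {
+    // Hypothetical client for your NoSQL database.
+    private final MyNoSqlClient client = new MyNoSqlClient();
+
+    // Read-through: fetch the value from the NoSQL database on a cache miss.
+    @Override public Person load(Long key) {
+        return client.get(key);
+    }
+
+    // Write-through: persist the updated entry.
+    @Override public void write(Cache.Entry<? extends Long, ? extends Person> entry) {
+        client.put(entry.getKey(), entry.getValue());
+    }
+
+    // Propagate removals to the NoSQL database.
+    @Override public void delete(Object key) {
+        client.remove(key);
+    }
+}
+----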
+
+=== Cassandra Integration
+
+Ignite provides an out-of-the-box implementation of `CacheStore` that enables you to use Apache Cassandra as persistent
+storage. This implementation utilizes Cassandra's link:http://www.datastax.com/dev/blog/java-driver-async-queries[asynchronous queries, window=_blank]
+to provide high-performance batch operations such as `loadAll()`, `writeAll()` and `deleteAll()`, and automatically creates
+all necessary tables and namespaces in Cassandra.
+
+Refer to link:extensions-and-integrations/cassandra/overview[this documentation section] for configuration and usage guidelines.
+
+////
+== Implementing Custom CacheStore
+
+See link:advanced-topics/custom-cache-store[Implementing Custom Cache Store].
+////
diff --git a/docs/_docs/persistence/native-persistence.adoc b/docs/_docs/persistence/native-persistence.adoc
new file mode 100644
index 0000000..b6c0a23
--- /dev/null
+++ b/docs/_docs/persistence/native-persistence.adoc
@@ -0,0 +1,362 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite Persistence
+
+:javaFile: {javaCodeDir}/IgnitePersistence.java
+
+== Overview
+
+Ignite Persistence, or Native Persistence, is a set of features designed to provide persistent storage.
+When it is enabled, Ignite always stores all the data on disk, and loads as much data as it can into RAM for processing.
+For example, if there are 100 entries and RAM has the capacity to store only 20, then all 100 are stored on disk and only 20 are cached in RAM for better performance.
+
+When Native Persistence is turned off and no external storage is used, Ignite behaves as a pure in-memory store.
+
+When persistence is enabled, every server node persists a subset of the data that only includes the partitions that are assigned to that node (including link:data-modeling/data-partitioning#backup-partitions[backup partitions] if backups are enabled).
+
+The Native Persistence functionality is based on the following features:
+
+* Storing data partitions on disk
+* Write-ahead logging
+* Checkpointing
+* Usage of OS swap
+////
+*TODO: diagram: update operation + wal + checkpointing*
+////
+
+When persistence is enabled, Ignite stores each partition in a separate file on disk.
+The data format of the partition files is the same as that of the data when it is kept in memory.
+If partition backups are enabled, they are also saved on disk.
+In addition to data partitions, Ignite stores indexes and metadata.
+
+image::images/persistent_store_structure.png[]
+
+You can change the default location of data files in the <<Configuration Properties,configuration>>.
+
+////
+If your data set is very large and you use persistence, data rebalancing may take a long time.
+To avoid unnecessary data transfer, you can decide when you want to start rebalancing by changing the baseline topology manually.
+////
+
+////
+
+Because persistence is configured per link:memory-configuration/data-regions[data region], in-memory data regions differ from regions with persistence with respect to data rebalancing:
+
+[cols="1,1",options="header"]
+|===
+| In-memory data region | Data region with persistence
+| When a node joins/leaves the cluster, PME is triggered and followed by data rebalancing. | PME is performed. Data rebalancing is triggered when the baseline topology is changed.
+|===
+////
+
+
+
+////////////////////////////////////////////////////////////////////////////////
+* When you start the cluster for the first time, the baseline topology is empty and the cluster is inactive. Any CRUD operations with data are prohibited.
+* When you activate the cluster for the first time, all server nodes that are in the cluster at the moment will be added to the baseline topology.
+* When you restart the cluster with persistence, it is activated automatically as soon as all nodes that are registered in the baseline topology join in.
+////////////////////////////////////////////////////////////////////////////////
+
+
+== Enabling Persistent Storage
+
+Native Persistence is configured per link:memory-configuration/data-regions[data region].
+To enable persistent storage, set the `persistenceEnabled` property to `true` in the data region configuration.
+You can have in-memory data regions and data regions with persistence at the same time.
+
+The following example shows how to enable persistent storage for the default data region.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::code-snippets/xml/persistence.xml[tags=ignite-config;!storage-path;!discovery,indent=0]
+----
+
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=cfg;!storage-path,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/PersistenceIgnitePersistence.cs[tags=cfg;!storage-path,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Configuring Persistent Storage Directory
+
+When persistence is enabled, the node stores user data, indexes and WAL files in the `{IGNITE_WORK_DIR}/db` directory.
+This directory is referred to as the storage directory.
+You can change the storage directory by setting the `storagePath` property of the `DataStorageConfiguration` object, as shown below.
+
+Each node maintains the following sub-directories under the storage directory meant to store cache data, WAL files, and WAL archive files:
+
+
+[cols="3,4",opts="header"]
+|===
+|Subdirectory name | Description
+|{WORK_DIR}/db/{nodeId}  | This directory contains cache data and indexes.
+|{WORK_DIR}/db/wal/{nodeId} | This directory contains WAL files.
+|{WORK_DIR}/db/wal/archive/{nodeId}|  This directory contains WAL archive files.
+|===
+
+
+`nodeId` here is either the consistent node ID (if it's defined in the node configuration) or https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-SubfoldersGeneration[auto-generated node id,window=_blank]. It is used to ensure uniqueness of the directories for the node.
+If multiple nodes share the same work directory, they use different sub-directories.
+
+If the work directory contains persistence files for multiple nodes (there are multiple `{nodeId}` subdirectories with different node IDs), the node picks up the first subdirectory that is not being used.
+To make sure a node always uses a specific subdirectory and, thus, specific data partitions even after restarts, set `IgniteConfiguration.setConsistentId` to a cluster-wide unique value in the node configuration.
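+
+For example, a minimal sketch of pinning a node to a specific subdirectory (the value `node1` is illustrative; use any cluster-wide unique value):
+
+[source, java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+// Pin this node to a specific persistence subdirectory across restarts.
+cfg.setConsistentId("node1");
+
+Ignite ignite = Ignition.start(cfg);
+----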
+
+You can change the storage directory as follows:
+
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+include::code-snippets/xml/persistence.xml[tags=ignite-config;!discovery,indent=0]
+----
+
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=cfg,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/PersistenceIgnitePersistence.cs[tags=cfg,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+You can also change the WAL and WAL archive paths to point to directories outside of the storage directory. Refer to the next section for details.
+
+== Write-Ahead Log
+
+The write-ahead log is a log of all data modifying operations (including deletes) that happen on a node. When a page is updated in RAM, the update is not directly written to the partition file but is appended to the tail of the WAL.
+
+The purpose of the write-ahead log is to provide a recovery mechanism for scenarios where a single node or the whole cluster goes down. In case of a crash or restart, the cluster can always be recovered to the latest successfully committed transaction by relying on the content of the WAL.
+
+The WAL consists of several files (called active segments) and an archive. The active segments are filled out sequentially and are overwritten in a cyclical order. Once the first segment is full, its content is copied to the WAL archive (see the <<WAL Archive>> section below). While the first segment is being copied, the second segment is treated as an active WAL file and accepts all the updates coming from the application side. By default, there are 10 active segments.
+
+////////////////////////////////////////////////////////////////////////////////
+
+*TODO - Do we need this here? I think not. Move to the javadoc. (Garrett agrees, let's move this out)*
+Each update is written to a buffer before being written to the WAL file. The size of the buffer is specified by the `DataStorageConfiguration.walBuffSize` parameter. By default, the WAL buffer size equals the WAL segment size if the memory mapped file is enabled, and `(WAL segment size) / 4` if the memory-mapped file is disabled. Note that the memory mapped file is enabled by default. It can be turned off using the `IGNITE_WAL_MMAP` system property that can be passed to JVM as follows:  `-DIGNITE_WAL_MMAP=false`.
+
+////////////////////////////////////////////////////////////////////////////////
+
+=== WAL Modes
+There are three WAL modes. Each mode differs in how it affects performance and provides different consistency guarantees.
+
+[cols="20%,45%,35%",opts="header"]
+|===
+|Mode |Description | Consistency Guarantees
+|`FSYNC` | The changes are guaranteed to be persisted to disk for every atomic write or transactional commit.
+| Data updates are never lost; they survive OS and process crashes and power failures.
+
+|`LOG_ONLY` | The default mode.
+
+The changes are guaranteed to be flushed to either the OS buffer cache or a memory-mapped file for every atomic write or transactional commit.
+
+The memory-mapped file approach is used by default and can be switched off by setting the `IGNITE_WAL_MMAP` system property to `false`.
+
+| Data updates survive a process crash.
+
+| `BACKGROUND` | When the `IGNITE_WAL_MMAP` property is enabled (default), this mode behaves like the `LOG_ONLY` mode.
+
+If the memory-mapped file approach is disabled then the changes stay in the node's internal buffer and are periodically flushed to disk. The frequency of flushing is specified via the `walFlushFrequency` parameter.
+
+| When the `IGNITE_WAL_MMAP` property is enabled (default), the mode provides the same guarantees as `LOG_ONLY` mode.
+
+Otherwise, recent data updates may get lost in case of a process crash or other outages.
+
+| `NONE` | WAL is disabled. The changes are persisted only if you shut down the node gracefully.
+Use `Ignite.active(false)` to deactivate the cluster and shut down the node.
+
+| Data loss might occur.
+
+If a node is terminated abruptly during update operations, it is very likely that the data stored on the disk becomes out-of-sync or corrupted.
+
+|===
+
+
+=== WAL Archive
+The WAL archive is used to store WAL segments that may be needed to recover the node after a crash. The number of segments kept in the archive is such that the total size of all segments does not exceed the specified size of the WAL archive.
+
+By default, the maximum size of the WAL archive (total space it occupies on disk) is defined as 4 times the size of the link:persistence/persistence-tuning#adjusting-checkpointing-buffer-size[checkpointing buffer]. You can change that value in the <<Configuration Properties,configuration>>.
+
+CAUTION: Setting the WAL archive size to a value lower than the default may impact performance and should be tested before being used in production.
+
+:walXmlFile: code-snippets/xml/wal.xml
+
+=== Changing WAL Segment Size
+
+The default WAL segment size (64 MB) may be inefficient in high-load scenarios because it causes the WAL to switch between segments too frequently, and switching/rotation is a costly operation.
+Larger WAL segments can help increase performance under high loads at the cost of increasing the total size of the WAL files and WAL archive.
+
+You can change the size of the WAL segment files in the data storage configuration. The value must be between 512 KB and 2 GB.
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{walXmlFile}[tags=ignite-config;!discovery;segment-size, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tags=segment-size, indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+
+=== Disabling WAL
+There are situations when it is reasonable to have the WAL disabled to get better performance. For instance, it is useful to disable WAL during initial data loading and enable it after the pre-loading is complete.
+
+////
+todo: add c++ examples
+////
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=wal,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/PersistenceIgnitePersistence.cs[tag=disableWal,indent=0]
+----
+tab:SQL[]
+[source, sql]
+----
+ALTER TABLE Person NOLOGGING
+
+//...
+
+ALTER TABLE Person LOGGING
+----
+tab:C++[unsupported]
+--
+
+WARNING: If WAL is disabled and you restart a node, all data is removed from the persistent storage on that node. This behavior is intentional: without the WAL, data consistency cannot be guaranteed in case of a node crash or restart.
+
+=== WAL Archive Compaction
+You can enable WAL Archive compaction to reduce the space occupied by the WAL Archive.
+By default, WAL Archive contains segments for the last 20 checkpoints (this number is configurable).
+If compaction is enabled, all archived segments that are one checkpoint old are compressed in ZIP format.
+If the segments are needed (for example, to re-balance data between nodes), they are decompressed back to RAW format.
+
+See the <<Configuration Properties>> section below to learn how to enable WAL archive compaction.
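+
+As a quick illustration, a minimal sketch of enabling compaction programmatically (the property names match the <<Configuration Properties>> table below):
+
+[source, java]
+----
+DataStorageConfiguration dsCfg = new DataStorageConfiguration();
+
+// Compress archived WAL segments in ZIP format.
+dsCfg.setWalCompactionEnabled(true);
+
+// Optional: compression level (1 - fastest, 9 - best compression).
+dsCfg.setWalCompactionLevel(1);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setDataStorageConfiguration(dsCfg);
+----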
+
+=== WAL Records Compression
+
+As described in the link:https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-WAL[design document], physical and logical records that represent data updates are written to the WAL files before the user operation is acknowledged.
+Ignite can compress WAL records in memory before they are written to disk to save space.
+
+WAL Records Compression requires that the `ignite-compress` module be enabled. See link:setup#enabling-modules[Enabling Modules].
+
+By default, WAL records compression is disabled.
+To enable it, set the compression algorithm and compression level in the data storage configuration:
+
+[tabs]
+--
+tab:XML[]
+
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=wal-records-compression, indent=0]
+----
+
+tab:C#/.NET[]
+tab:C++[unsupported]
+--
+
+The supported compression algorithms are listed in javadoc:org.apache.ignite.configuration.DiskPageCompression[].
+
+=== Disabling WAL Archive
+
+In some cases, you may want to disable WAL archiving, for example, to reduce the overhead associated with copying WAL segments to the archive. There can be a situation where Ignite writes data to WAL segments faster than the segments are copied to the archive. This may create an I/O bottleneck that can freeze the operation of the node. If you experience such problems, try disabling WAL archiving.
+
+////
+It is safe to disable WAL archiving because a cluster without the WAL archive provides the same data retention guarantees as a cluster with a WAL archive. Moreover, disabling WAL archiving can provide better performance.
+////
+
+////
+*TODO: Artem, should we mention why someone would want to use WAL Archiving, if it can impact performance and a cluster without the archive has the same guarantees?*
+////
+
+To disable archiving, set the WAL path and the WAL archive path to the same value.
+In this case, Ignite does not copy segments to the archive; instead, it creates new segments in the WAL folder.
+Old segments are deleted as the WAL grows, based on the WAL Archive size setting.
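+
+A minimal sketch of this setup (the path is illustrative):
+
+[source, java]
+----
+DataStorageConfiguration dsCfg = new DataStorageConfiguration();
+
+// Pointing the WAL and the WAL archive to the same directory disables archiving.
+dsCfg.setWalPath("/opt/ignite/wal");
+dsCfg.setWalArchivePath("/opt/ignite/wal");
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setDataStorageConfiguration(dsCfg);
+----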
+
+
+== Checkpointing
+
+_Checkpointing_ is the process of copying dirty pages from RAM to partition files on disk. A dirty page is a page that was updated in RAM but was not written to the respective partition file (the update, however, was appended to the WAL).
+
+After a checkpoint is created, all changes are persisted to disk and will be available if the node crashes and is restarted.
+
+Checkpointing and write-ahead logging are designed to ensure durability of data and recovery in case of a node failure.
+
+image:images/checkpointing-persistence.png[]
+
+This process helps to use disk space frugally by keeping pages in the most up-to-date state on disk. After a checkpoint completes, you can delete the WAL segments that were created before that point in time.
+
+See the following related documentation:
+
+* link:monitoring-metrics/metrics#monitoring-checkpointing-operations[Monitoring Checkpointing Operations]
+* link:persistence/persistence-tuning#adjusting-checkpointing-buffer-size[Adjusting Checkpointing Buffer Size]
+
+== Configuration Properties
+
+The following table describes some properties of link:{javadoc_base_url}/org/apache/ignite/configuration/DataStorageConfiguration.html[DataStorageConfiguration].
+
+[width=100%,cols="1,2,1",options="header"]
+|=======================================================================
+| Property Name |Description |Default Value
+
+|`persistenceEnabled` | Set this property to `true` to enable Native Persistence. | `false`
+
+|`storagePath` | The path where data is stored. |  `${IGNITE_HOME}/work/db/node{IDX}-{UUID}`
+
+| `walPath` | The path to the directory where active WAL segments are stored. | `${IGNITE_HOME}/work/db/wal/`
+| `walArchivePath` | The path to the WAL archive.  | `${IGNITE_HOME}/work/db/wal/archive/`
+| `walCompactionEnabled` | Set to `true` to enable <<WAL Archive Compaction, WAL archive compaction>>. | `false`
+| `walSegmentSize` | The size of a WAL segment file in bytes. | 64MB
+|`walMode` | <<WAL Modes,Write-ahead logging mode>>. | `LOG_ONLY`
+
+| `walCompactionLevel` | WAL archive compression level. `1` indicates the fastest speed, and `9` indicates the best compression. | `1`
+|`maxWalArchiveSize`  | The maximum size (in bytes) the WAL archive can occupy on the file system. | Four times the size of the link:persistence/persistence-tuning#adjusting-checkpointing-buffer-size[checkpointing buffer].
+|=======================================================================
diff --git a/docs/_docs/persistence/persistence-tuning.adoc b/docs/_docs/persistence/persistence-tuning.adoc
new file mode 100644
index 0000000..293d9de
--- /dev/null
+++ b/docs/_docs/persistence/persistence-tuning.adoc
@@ -0,0 +1,258 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Persistence Tuning
+:javaFile: {javaCodeDir}/PersistenceTuning.java
+:xmlFile: code-snippets/xml/persistence-tuning.xml
+:dotnetFile: code-snippets/dotnet/PersistenceTuning.cs
+
+This article summarizes best practices for Ignite native persistence tuning.
+If you are using an external (third-party) storage for persistence needs, refer to the performance guides from the third-party vendor.
+
+== Adjusting Page Size
+
+The `DataStorageConfiguration.pageSize` parameter should be no less than the lower of two values: the page size of your storage media (SSD, Flash, HDD, etc.) and the cache page size of your operating system.
+The default value is 4 KB.
+
+The operating system's cache page size can be easily checked using
+link:https://unix.stackexchange.com/questions/128213/how-is-page-size-determined-in-virtual-address-space[system tools and parameters, window=_blank].
+
+The page size of the storage device such as SSD is usually noted in the device specification. If the manufacturer does not disclose this information, try to run SSD benchmarks to figure out the number.
+Many manufacturers have to adapt their drivers for 4 KB random-write workloads because a variety of standard
+benchmarks use 4 KB by default.
+link:https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ssd-server-storage-applications-paper.pdf[This white paper,window=_blank] from Intel confirms that 4 KB should be enough.
+
+Once you pick the most optimal page size, apply it in your cluster configuration:
+
+////
+TODO for .NET and other languages.
+////
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;ds;page-size,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=page-size,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetFile}[tag=page-size,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Keep WALs Separately
+
+Consider using separate drives for data files and link:persistence/native-persistence#write-ahead-log[Write-Ahead-Logging (WAL)].
+Ignite actively writes to both the data and WAL files.
+
+The example below shows how to configure separate paths for the data storage, WAL, and WAL archive:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;ds;paths,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=separate-wal,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetFile}[tag=separate-wal,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Increasing WAL Segment Size
+
+The default WAL segment size (64 MB) may be inefficient in high-load scenarios because it causes the WAL to switch between segments too frequently, and switching/rotation is a costly operation. Setting the segment size to a higher value (up to 2 GB) may help reduce the number of switching operations. However, the tradeoff is that this increases the overall volume of the write-ahead log.
+
+See link:persistence/native-persistence#changing-wal-segment-size[Changing WAL Segment Size] for details.
+
+== Changing WAL Mode
+
+Consider other WAL modes as alternatives to the default mode. Each mode provides different degrees of reliability in
+case of node failure and that degree is inversely proportional to speed, i.e. the more reliable the WAL mode, the
+slower it is. Therefore, if your use case does not require high reliability, you can switch to a less reliable mode.
+
+See link:persistence/native-persistence#wal-modes[WAL Modes] for more details.
+
+== Disabling WAL
+
+//TODO: when should this be done?
+There are situations where link:persistence/native-persistence#disabling-wal[disabling the WAL] can help improve performance.
+
+== Pages Writes Throttling
+
+Ignite periodically starts the link:persistence/native-persistence#checkpointing[checkpointing process] that syncs dirty pages from memory to disk. A dirty page is a page that was updated in RAM but was not written to the respective partition file (the update was only appended to the WAL). This process happens in the background without affecting the application's logic.
+
+However, if a dirty page, scheduled for checkpointing, is updated before being written to disk, its previous state is copied to a special region called a checkpointing buffer.
+If the buffer overflows, Ignite stops processing all updates until the checkpointing is over.
+As a result, write performance can drop to zero, as shown in this diagram, until the checkpointing cycle is completed:
+
+image::images/checkpointing-chainsaw.png[Checkpointing Chainsaw]
+
+The same situation occurs if the dirty pages threshold is reached again while the checkpointing is in progress.
+This forces Ignite to schedule another checkpointing execution and halt all update operations until the first checkpointing cycle is over.
+
+Both situations usually arise when either a disk device is slow or the update rate is too intensive.
+To mitigate and prevent these performance drops, consider enabling the pages write throttling algorithm.
+The algorithm brings the performance of update operations down to the speed of the disk device whenever the checkpointing buffer fills up too fast or the percentage of dirty pages soars rapidly.
+
+[NOTE]
+====
+[discrete]
+=== Pages Write Throttling in a Nutshell
+
+Refer to the link:https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-PagesWriteThrottling[Ignite wiki page, window=_blank] maintained by Apache Ignite persistence experts to get more details about throttling and its causes.
+====
+
+The example below shows how to enable write throttling:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;ds;page-write-throttling,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=throttling,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetFile}[tag=throttling,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Adjusting Checkpointing Buffer Size
+
+The size of the checkpointing buffer, explained in the previous section, is one of the checkpointing process triggers.
+
+The default buffer size is calculated as a function of the link:memory-configuration/data-regions[data region] size:
+
+[width=100%,cols="1,2",options="header"]
+|=======================================================================
+| Data Region Size |Default Checkpointing Buffer Size
+
+|< 1 GB | MIN (256 MB, Data_Region_Size)
+
+|between 1 GB and 8 GB | Data_Region_Size / 4
+
+|> 8 GB | 2 GB
+
+|=======================================================================
+
+The default buffer size can be suboptimal for write-intensive workloads because the page write throttling algorithm will slow down your writes whenever the size reaches the critical mark.
+To keep write performance at the desired pace while the checkpointing is in progress, consider increasing
+`DataRegionConfiguration.checkpointPageBufferSize` and enabling write throttling to prevent performance drops:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;ds;page-write-throttling;data-region,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=checkpointing-buffer-size,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetFile}[tag=checkpointing-buffer-size,indent=0]
+----
+tab:C++[unsupported]
+--
+
+In the example above, the checkpointing buffer size of the default region is set to 1 GB.
+
+////
+TODO: describe when checkpointing is triggered
+[NOTE]
+====
+[discrete]
+=== When is the Checkpointing Process Triggered?
+
+Checkpointing is started if either the dirty pages count goes beyond the `totalPages * 2 / 3` value or
+`DataRegionConfiguration.checkpointPageBufferSize` is reached. However, if page write throttling is used, then
+`DataRegionConfiguration.checkpointPageBufferSize` is never encountered because it cannot be reached due to the way the algorithm works.
+====
+////
+
+== Enabling Direct I/O
+//TODO: why is this not enabled by default?
+Usually, whenever an application reads data from disk, the OS gets the data and puts it in a file buffer cache first.
+Similarly, for every write operation, the OS first writes the data in the cache and transfers it to disk later. To
+eliminate this process, you can enable Direct I/O in which case the data is read and written directly from/to the
+disk, bypassing the file buffer cache.
+
+The Direct I/O module in Ignite is used to speed up the checkpointing process, which writes dirty pages from RAM to disk. Consider using the Direct I/O plugin for write-intensive workloads.
+
+[NOTE]
+====
+[discrete]
+=== Direct I/O and WALs
+
+Note that Direct I/O cannot be enabled specifically for WAL files. However, enabling the Direct I/O module provides
+a slight benefit regarding the WAL files as well: the WAL data will not be stored in the OS buffer cache for too long;
+it will be flushed (depending on the WAL mode) at the next page cache scan and removed from the page cache.
+====
+
+To enable Direct I/O, move the `{IGNITE_HOME}/libs/optional/ignite-direct-io` folder to the upper level `{IGNITE_HOME}/libs` folder in your Ignite distribution, or add the module as a Maven dependency as described link:setup#enabling-modules[here].
+
+You can use the `IGNITE_DIRECT_IO_ENABLED` system property to enable or disable the plugin at runtime.
+
+Get more details from the link:https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-DirectI/O[Ignite Direct I/O Wiki section, window=_blank].
+
+== Purchase Production-Level SSDs
+
+Note that the performance of Ignite Native Persistence may drop after several hours of intensive write load due to
+the nature of how link:http://codecapsule.com/2014/02/12/coding-for-ssds-part-2-architecture-of-an-ssd-and-benchmarking[SSDs are designed and operate, window=_blank].
+Consider buying fast production-level SSDs to keep the performance high, or switch to non-volatile memory devices like
+Intel Optane Persistent Memory.
+
+== SSD Over-provisioning
+
+Performance of random writes on a 50% filled disk is much better than on a 90% filled disk because of SSD over-provisioning (see link:https://www.seagate.com/tech-insights/ssd-over-provisioning-benefits-master-ti[https://www.seagate.com/tech-insights/ssd-over-provisioning-benefits-master-ti, window=_blank]).
+
+Consider buying SSDs with higher over-provisioning rates and make sure the manufacturer provides the tools to adjust it.
+
+[NOTE]
+====
+[discrete]
+=== Intel 3D XPoint
+
+Consider using 3D XPoint drives instead of regular SSDs to avoid the bottlenecks caused by a low over-provisioning
+setting and constant garbage collection at the SSD level.
+Read more link:http://dmagda.blogspot.com/2017/10/3d-xpoint-outperforms-ssds-verified-on.html[here, window=_blank].
+====
diff --git a/docs/_docs/persistence/snapshots.adoc b/docs/_docs/persistence/snapshots.adoc
new file mode 100644
index 0000000..b2d345b
--- /dev/null
+++ b/docs/_docs/persistence/snapshots.adoc
@@ -0,0 +1,208 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Cluster Snapshots
+
+== Overview
+
+Ignite provides the ability to create full cluster snapshots for deployments that use
+link:persistence/native-persistence[Ignite Persistence]. An Ignite snapshot includes a consistent cluster-wide copy of
+all data records persisted on disk and some other files needed for the restore procedure.
+
+The snapshot structure is similar to the layout of the
+link:persistence/native-persistence#configuring-persistent-storage-directory[Ignite Persistence storage directory],
+with several exceptions. Let's take this snapshot as an example to review the structure:
+[source,shell]
+----
+work
+└── snapshots
+    └── backup23012020
+        └── db
+            ├── binary_meta
+            │         ├── node1
+            │         ├── node2
+            │         └── node3
+            ├── marshaller
+            │         ├── node1
+            │         ├── node2
+            │         └── node3
+            ├── node1
+            │    └── my-sample-cache
+            │        ├── cache_data.dat
+            │        ├── part-3.bin
+            │        ├── part-4.bin
+            │        └── part-6.bin
+            ├── node2
+            │    └── my-sample-cache
+            │        ├── cache_data.dat
+            │        ├── part-1.bin
+            │        ├── part-5.bin
+            │        └── part-7.bin
+            └── node3
+                └── my-sample-cache
+                    ├── cache_data.dat
+                    ├── part-0.bin
+                    └── part-2.bin
+----
+* The snapshot is located under the `work/snapshots` directory and named `backup23012020`, where `work` is Ignite's work
+directory.
+* The snapshot is created for a 3-node cluster with all the nodes running on the same machine. In this example,
+the nodes are named `node1`, `node2`, and `node3`; in practice, the names are equal to the nodes'
+link:https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStoreunderthehood-SubfoldersGeneration[consistent IDs].
+* The snapshot keeps a copy of the `my-sample-cache` cache.
+* The `db` folder keeps a copy of data records in `part-N.bin` and `cache_data.dat` files. Write-ahead log and checkpoint
+files are not included in the snapshot because they are not required for the restore procedure.
+* The `binary_meta` and `marshaller` directories store metadata and marshaller-specific information.
+
+[NOTE]
+====
+[discrete]
+=== A Snapshot Is Usually Spread Across the Cluster
+
+The previous example shows a snapshot created for a cluster running on a single physical machine, so the whole
+snapshot is located in one place. In practice, the nodes run on different machines, and the snapshot data is spread
+across the cluster. Each node keeps a segment of the snapshot with the data belonging to that particular node.
+The link:persistence/snapshots#restoring-from-snapshot[restore procedure] explains how to join all the segments together during recovery.
+====
+
+== Configuring Snapshot Directory
+
+By default, a segment of the snapshot is stored in the work directory of the respective Ignite node and uses the same storage
+media where Ignite Persistence keeps data, index, WAL, and other files. Since the snapshot can consume as much space as
+the persistence files already occupy, and can affect your applications' performance by sharing disk I/O with the
+Ignite Persistence routines, it is recommended to store the snapshot and persistence files on different media.
+
+You can avoid this interference between Ignite Native Persistence and snapshotting
+by either changing link:persistence/native-persistence#configuring-persistent-storage-directory[storage directories of the persistence files]
+or overriding the default snapshots' location as shown below:
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::code-snippets/xml/snapshots.xml[tags=ignite-config;!discovery, indent=0]
+----
+tab:Java[]
+[source, java]
+----
+include::{javaCodeDir}/Snapshots.java[tags=config, indent=0]
+----
+--
+
+== Creating Snapshot
+
+Ignite provides several APIs for snapshot creation. Let's review all the options.
+
+=== Using Control Script
+
+Ignite ships with the link:control-script[control script] that supports the snapshot-related commands listed below:
+
+[source,shell]
+----
+#Create a cluster snapshot:
+control.(sh|bat) --snapshot create snapshot_name
+
+#Cancel a running snapshot:
+control.(sh|bat) --snapshot cancel snapshot_name
+
+#Kill a running snapshot:
+control.(sh|bat) --kill SNAPSHOT snapshot_name
+----
+
+=== Using JMX
+
+Use the `SnapshotMXBean` interface to perform the snapshot-specific procedures via JMX:
+
+[cols="1,1",opts="header"]
+|===
+|Method | Description
+|createSnapshot(String snpName) | Create a snapshot.
+|cancelSnapshot(String snpName) | Cancel a snapshot on the node that initiated its creation.
+|===
+
+=== Using Java API
+
+You can also create a snapshot programmatically in Java:
+
+[tabs]
+--
+tab:Java[]
+
+[source, java]
+----
+include::{javaCodeDir}/Snapshots.java[tags=create, indent=0]
+----
+--
+
+== Restoring From Snapshot
+
+Currently, the data restore procedure has to be performed manually. In a nutshell, you need to stop the cluster,
+replace persistence data and other files with the data from the snapshot, and restart the nodes.
+
+The detailed procedure looks as follows:
+
+. Stop the cluster you intend to restore.
+. Remove all files from the checkpoint `$IGNITE_HOME/work/cp` directory.
+. Do the following on each node. If the
+link:persistence/native-persistence#configuring-persistent-storage-directory[`db/{node_id}`] directory is not located
+under the Ignite `work` dir, clean it separately:
+    - Remove the files related to the `{node_id}` from the `$IGNITE_HOME/work/db/binary_meta` directory.
+    - Remove the files related to the `{node_id}` from the `$IGNITE_HOME/work/db/marshaller` directory.
+    - Remove the files and sub-directories related to the `{node_id}` under your `$IGNITE_HOME/work/db` directory.
+    - Copy the files belonging to the node with the `{node_id}` from the snapshot into the `$IGNITE_HOME/work/` directory.
+If the `db/{node_id}` directory is not located under the Ignite `work` dir, copy the data files there instead.
+. Restart the cluster.
+
+*Restoring on a Cluster of a Different Topology*
+
+Sometimes you might want to create a snapshot of an N-node cluster and restore it on an M-node cluster. The table
+below explains what options are supported:
+
+[cols="1,1",opts="header"]
+|===
+|Condition | Description
+|N == M | The *recommended* case. Create and use the snapshot on clusters of a similar topology.
+|N < M | Start the first N nodes of the M-node cluster and apply the snapshot. Add the rest of the M-cluster nodes to
+the topology and wait until the data is rebalanced and the indexes are rebuilt.
+|N > M | Unsupported.
+|===
+
+== Consistency Guarantees
+
+All snapshots are fully consistent in terms of concurrent cluster-wide operations as well as ongoing changes to Ignite
+Persistence data, indexes, schema, binary metadata, marshaller, and other files on the nodes.
+
+The cluster-wide snapshot consistency is achieved by triggering the link:https://cwiki.apache.org/confluence/display/IGNITE/%28Partition+Map%29+Exchange+-+under+the+hood[Partition-Map-Exchange]
+procedure. By doing that, the cluster eventually gets to a point in time when all previously started transactions are completed, and new
+ones are paused. Once this happens, the cluster initiates the snapshot creation procedure. The PME procedure ensures
+that the snapshot includes the primary and backup copies of partitions in a consistent state.
+
+The consistency between the Ignite Persistence files and their snapshot copies is achieved by copying the original
+files to the destination snapshot directory while tracking all concurrent ongoing changes. Tracking the changes
+might require extra space on the Ignite Persistence storage media (up to the size of the storage itself).
+
+== Current Limitations
+
+The snapshot procedure has some limitations that you should be aware of before using the feature in your production environment:
+
+* Snapshotting of specific caches/tables is unsupported. You always create a full cluster snapshot.
+* Caches/tables that are not persisted in Ignite Persistence are not included in the snapshot.
+* Encrypted caches are not included in the snapshot.
+* You can have only one snapshotting operation running at a time.
+* The snapshot procedure is interrupted if a server node leaves the cluster.
+* A snapshot can be restored only on a cluster with the same topology and the same node IDs.
+* An automatic restore procedure is not available yet; you have to restore snapshots manually.
+
+If any of these limitations prevents you from using the feature, consider alternative snapshotting implementations for
+Ignite provided by enterprise vendors.
diff --git a/docs/_docs/persistence/swap.adoc b/docs/_docs/persistence/swap.adoc
new file mode 100644
index 0000000..c176592
--- /dev/null
+++ b/docs/_docs/persistence/swap.adoc
@@ -0,0 +1,66 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Swapping
+
+== Overview
+
+When using pure in-memory storage, it is possible that the size of the data loaded into a node exceeds the physical RAM size, leading to out-of-memory errors (OOMEs).
+If you do not want to use the native persistence or an external storage, you can enable swapping, in which case the in-memory data is moved to the swap space located on disk.
+Please note that Ignite does not provide its own implementation of swap space.
+Instead, it takes advantage of the swapping functionality provided by the operating system (OS).
+
+When swap space is enabled, Ignite stores data in memory-mapped files (MMF) whose content is swapped to disk by the OS according to the current RAM consumption;
+however, in that scenario the data access time is longer.
+Moreover, there are no data durability guarantees, which means that the data in the swap space is available only as long as the node is alive.
+Once the node where the swap space exists shuts down, the data is lost.
+Therefore, you should use swap space as an extension to RAM only to give yourself enough time to add more nodes to the cluster in order to re-distribute data and avoid OOMEs which might happen if the cluster is not scaled in time.
+
+[CAUTION]
+====
+Since swap space is located on disk, it should not be considered as a replacement for native persistence.
+Data from the swap space is available as long as the node is active. Once the node shuts down, the data is lost.
+To ensure that data is always available, you should either enable link:persistence/native-persistence/[native persistence] or use an link:persistence/external-storage[external storage].
+====
+
+== Enabling Swapping
+
+The `maxSize` property of a data region defines the total maximum size of the region.
+You will get out-of-memory errors if your data size exceeds `maxSize` and neither native persistence nor an external database is used.
+To avoid this situation by means of swapping, you need to:
+
+* Set `maxSize` to a value that is bigger than the total RAM size. In this case, the OS takes care of the swapping.
+* Enable swapping in the data region configuration, as shown below.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/swap.xml[tag=swap,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/Swap.java[tag=swap,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/PersistenceIgnitePersistence.cs[tag=cfg-swap,indent=0]
+----
+tab:C++[unsupported]
+--
+
diff --git a/docs/_docs/plugins.adoc b/docs/_docs/plugins.adoc
new file mode 100644
index 0000000..b991d46
--- /dev/null
+++ b/docs/_docs/plugins.adoc
@@ -0,0 +1,129 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Plugins
+
+== Overview
+
+The Ignite plugin system allows you to extend the core functionality of Ignite.
+Plugins have access to different internal Ignite components, such as the security processor, and can extend the programmatic API of Ignite.
+
+To add a custom plugin, implement the `PluginProvider` interface and register the implementation in the node configuration.
+The following is an overview of the steps involved in creating a plugin:
+
+. Implement the `PluginProvider` interface. This is the main interface for creating plugins.
+
+. Implement the `IgnitePlugin` interface. If your plugin adds functionality that is meant to be triggered by end users, you should add public methods to this class. An instance of this class is available to end users at runtime via `Ignite.plugin(String pluginName)`.
+
+. Register the plugin in `IgniteConfiguration.setPluginProviders(...)` either programmatically or via XML configuration.
+
+. If your plugin has a public API, call `MyPlugin plugin = Ignite.plugin(pluginName)` at runtime and execute specific actions.
+
+The following section gives an example of a plugin and goes into details about how plugins work in Ignite.
+
+== Example Plugin
+
+Let's create a simple Ignite plugin that prints information about the number of entries in every cache to the console periodically, on each node.
+In addition, it exposes a public method that the users can call programmatically from their application to print the cache size information on demand.
+The plugin has one configuration parameter: the time interval for printing the cache size information.
+
+=== {counter:step}. Implement PluginProvider
+
+`PluginProvider` is the main interface for creating Ignite plugins.
+Ignite calls the methods of each registered plugin provider during initialization.
+
+The following methods must return non-null values. Other methods are optional.
+
+* `name()` - returns the name of the plugin
+* `plugin()` - returns the object of your plugin class
+
+Below is an example implementation of a plugin provider.
+We create an object of `MyPlugin` class (see next step) in the `initExtensions()` method.
+Ignite passes a `PluginContext` object as an argument to this method.
+`PluginContext` provides access to the Ignite APIs and node configuration.
+See the javadoc:org.apache.ignite.plugin.PluginContext[] javadoc for more information.
+Here we simply pass the `PluginContext` and the time interval to the `MyPlugin` constructor.
+
+.MyPluginProvider.java:
+[source, java]
+----
+include::{javaCodeDir}/plugin/MyPluginProvider.java[tags=!no-op-methods, indent=0]
+----
+
+The `onIgniteStart()` method is invoked when Ignite is started.
+We start the plugin by calling `MyPlugin.start()`, which simply schedules periodic execution of the task that prints cache size information.
+
+=== {counter:step}. Implement IgnitePlugin
+
+The implementation of the `IgnitePlugin` returned by the plugin provider is available to the users via `Ignite.plugin(String pluginName)`.
+If you want to provide public API to end users, the API should be exposed in the class that implements `IgnitePlugin`.
+
+Strictly speaking, this step is not necessary if your plugin does not provide a public API.
+Your plugin functionality may be implemented and initialized in the `PluginProvider` implementation, and the `PluginProvider.plugin()` method may return an empty implementation of the `IgnitePlugin` interface.
+
+
+In our case, we encapsulate the plugin functionality in the `MyPlugin` class and provide one public method (`MyPlugin.printCacheInfo()`).
+The `MyPlugin` class implements the `Runnable` interface.
+The `start()` and `stop()` methods schedule periodic printing of cache size information.
+
+
+.MyPlugin.java:
+[source, java]
+----
+include::{javaCodeDir}/plugin/MyPlugin.java[tags=, indent=0]
+----
+
+
+=== {counter:step}. Register your Plugin
+
+
+Programmatically:
+
+[source, java]
+----
+include::{javaCodeDir}/plugin/PluginExample.java[tags=register-plugin, indent=0]
+----
+
+
+Via XML Configuration:
+
+Compile your plugin source code and add the classes to the classpath on each node.
+Then, you can register the plugin as follows:
+
+[source, xml]
+----
+include::code-snippets/xml/plugins.xml[tags=ignite-config;!discovery, indent=0]
+----
+
+When you start the node, you should see the following message in the console:
+
+[source, text]
+----
+[11:00:49] Initial heap size is 248MB (should be no less than 512MB, use -Xms512m -Xmx512m).
+[11:00:49] Configured plugins:
+[11:00:49]   ^-- MyPlugin 1.0
+[11:00:49]   ^-- MyCompany
+[11:00:49]
+----
+
+=== {counter:step}. Access the Plugin at Runtime
+
+You can access the instance of the plugin by calling `Ignite.plugin(pluginName)`.
+The `pluginName` argument must be equal to the plugin name returned in `MyPluginProvider.name()`.
+
+[source, java]
+----
+include::{javaCodeDir}/plugin/PluginExample.java[tags=access-plugin, indent=0]
+----
+
diff --git a/docs/_docs/quick-start/cpp.adoc b/docs/_docs/quick-start/cpp.adoc
new file mode 100644
index 0000000..94e519e
--- /dev/null
+++ b/docs/_docs/quick-start/cpp.adoc
@@ -0,0 +1,131 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite for C++
+
+This chapter explains system requirements for running Ignite and how to install Ignite, start a cluster, and run a simple Hello World example in C++.
+
+== Prerequisites
+
+Ignite C++ was officially tested on:
+
+include::includes/cpp-prerequisites.adoc[]
+
+
+== Installing Ignite
+
+include::includes/install-ignite.adoc[]
+
+== Starting an Ignite Node
+
+include::includes/starting-node.adoc[]
+
+NOTE: Ignite for C++ supports a thick client and a thin client.
+Because this guide focuses on the thin client, you can run the examples below, connecting to the Java-based nodes you just started.
+
+Once the cluster is started, you can use the Ignite C++ thin client to perform cache operations (things like getting or putting data, or using SQL).
+
+== Getting Started with Ignite and C++
+
+Ignite ships with a robust {cpp} client.
+To get started with Ignite and {cpp}, you will need to be familiar with building {cpp} applications.
+
+. Install `openssl` and add it to your path.
+. If you haven't already, download/install <<Installing Ignite,Apache Ignite>>.
+. Navigate to the `{IGNITE_HOME}/platforms/cpp/project/vs` folder.
+. Launch the appropriate Visual Studio solution file for your system (`ignite.sln` is for 64-bit).
+. Build the solution.
+
+From here, you can create your own code, or run one of the existing examples located in the `{IGNITE_HOME}/platforms/cpp/examples/project/vs` directory.
+
+There is much more information about how to build, test, and use Ignite for {cpp} in the `readme.txt` and `DEVNOTES.txt` files located in the `{IGNITE_HOME}/platforms/cpp` folder.
+
+For information about the {cpp} thin client, see link:thin-clients/cpp-thin-client[C++ Thin Client].
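+
+For orientation, here is a minimal thin-client put/get sketch in {cpp}; this is a sketch rather than one of the bundled examples, and it assumes a node is listening on the default thin-client port 10800:
+
+[source,cpp]
+----
+#include <iostream>
+#include <string>
+
+#include <ignite/thin/ignite_client.h>
+#include <ignite/thin/ignite_client_configuration.h>
+
+using namespace ignite::thin;
+
+int main()
+{
+    // Point the client at a running node's thin-client endpoint.
+    IgniteClientConfiguration cfg;
+    cfg.SetEndPoints("127.0.0.1:10800");
+
+    IgniteClient client = IgniteClient::Start(cfg);
+
+    // Create (or get) a cache and perform a simple put/get.
+    cache::CacheClient<int32_t, std::string> cache =
+        client.GetOrCreateCache<int32_t, std::string>("myCache");
+
+    cache.Put(1, "Hello World");
+    std::cout << cache.Get(1) << std::endl;
+
+    return 0;
+}
+----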
+
+== C++ for Unix
+
+On Unix systems, you can use the command line to build and run the examples included in the Ignite distribution.
+
+=== Prerequisites
+include::includes/cpp-linux-build-prerequisites.adoc[]
+
+=== Building C++ Ignite
+
+- Download and unzip the Ignite binary release. We'll refer to the resulting directory as `${IGNITE_HOME}`.
+- Create a build directory for CMake. We'll refer to it as `${CPP_BUILD_DIR}`.
+- Build and install Ignite C++ by executing the following commands:
+
+[tabs]
+--
+tab:Ubuntu[]
+[source,bash,subs="attributes,specialchars"]
+----
+cd ${CPP_BUILD_DIR}
+cmake -DCMAKE_BUILD_TYPE=Release -DWITH_ODBC=ON -DWITH_THIN_CLIENT=ON ${IGNITE_HOME}/platforms/cpp 
+make
+sudo make install
+----
+
+tab:CentOS/RHEL[]
+[source,shell,subs="attributes,specialchars"]
+----
+cd ${CPP_BUILD_DIR}
+cmake3 -DCMAKE_BUILD_TYPE=Release -DWITH_ODBC=ON -DWITH_THIN_CLIENT=ON ${IGNITE_HOME}/platforms/cpp 
+make 
+sudo make install
+----
+
+--
+
+
+=== Building and running the Thick Client Example
+- Create a build directory for CMake. We'll refer to it as `${CPP_EXAMPLES_BUILD_DIR}`.
+- Build the examples by executing the following commands:
+
+[tabs]
+--
+tab:Ubuntu[]
+[source,bash,subs="attributes,specialchars"]
+----
+cd ${CPP_EXAMPLES_BUILD_DIR}
+cmake -DCMAKE_BUILD_TYPE=Release ${IGNITE_HOME}/platforms/cpp/examples && make
+cd ./put-get-example
+./ignite-put-get-example
+----
+
+tab:CentOS/RHEL[]
+[source,shell,subs="attributes,specialchars"]
+----
+cd ${CPP_EXAMPLES_BUILD_DIR}
+cmake3 -DCMAKE_BUILD_TYPE=Release ${IGNITE_HOME}/platforms/cpp/examples && make
+cd ./put-get-example
+./ignite-put-get-example
+----
+
+--
+
+== Next Steps
+
+From here, you may want to:
+
+* Check out the link:thin-clients/cpp-thin-client[C++ thin client] that provides a lightweight form of connectivity
+to Ignite clusters
+* Explore the link:{githubUrl}/modules/platforms/cpp/examples[additional C++ examples] included with Ignite
+* Refer to the link:cpp-specific[C{plus}{plus} specific section] of the documentation to learn more about capabilities
+that are available for C++ applications
+
+
+
+
+
diff --git a/docs/_docs/quick-start/dotnet.adoc b/docs/_docs/quick-start/dotnet.adoc
new file mode 100644
index 0000000..c5b1903
--- /dev/null
+++ b/docs/_docs/quick-start/dotnet.adoc
@@ -0,0 +1,95 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite for .NET/C#
+
+This chapter explains how to use .NET Core to build and run a simple Hello World example in .NET that starts a node, puts a value into a cache, and then gets the value back.
+
+
+== Prerequisites
+
+Ignite.NET was officially tested on:
+
+include::includes/dotnet-prerequisites.adoc[]
+
+
+== Running a Simple .NET Example
+
+[NOTE]
+====
+Ignite for .NET supports a thick client and a thin client. Because this guide focuses on the _thick_ client, you can run the example below after adding the Ignite library package. You do not need to download and install the Ignite distribution to run the example.
+
+For information about the .NET thin client, see link:thin-clients/dotnet-thin-client[.NET Thin Client].
+====
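+
+For comparison, here is a minimal thin-client sketch; it is not part of this guide's thick-client flow and assumes a node is already running and listening on the default thin-client port 10800:
+
+[source,csharp]
+----
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Client;
+
+// Connect to an already running node instead of starting one in-process.
+var cfg = new IgniteClientConfiguration { Endpoints = new[] { "127.0.0.1:10800" } };
+using (var client = Ignition.StartClient(cfg))
+{
+    var cache = client.GetOrCreateCache<int, string>("my-cache");
+    cache.Put(1, "Hello, World");
+}
+----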
+
+//TODO??: WARNING: If you use the thick client without downloading and installing Ignite distribution, some functionality (Logging, etc.) will be missing or not configured.
+
+. Install .NET Core SDK (version 2+): https://dotnet.microsoft.com/download
+
+. Use the CLI (Unix shell, Windows CMD or PowerShell, etc.) to run the following two commands:
++
+`> dotnet new console`
++
+This creates an empty project, which includes a project file with metadata and a .cs file with code.
++
+Then run:
++
+`> dotnet add package Apache.Ignite`
++
+This modifies the project file (`.csproj`) to add dependencies.
+
+. Open `Program.cs` in any text editor and replace the contents with the following:
++
+[tabs]
+--
+tab:C#/.NET[]
+[source,csharp]
+----
+using System;
+using Apache.Ignite.Core;
+
+namespace IgniteTest
+{
+    class Program
+    {
+        static void Main(string[] args)
+        {
+            var ignite = Ignition.Start();
+            var cache = ignite.GetOrCreateCache<int, string>("my-cache");
+            cache.Put(1, "Hello, World");
+            Console.WriteLine(cache.Get(1));
+        }
+    }
+}
+----
+--
+
+. Save and then run the program:
++
+`> dotnet run`
+
+And that's it! You should see a node launch and then display "Hello, World".
+
+
+== Next Steps
+
+From here, you may want to:
+
+* Check out the link:thin-clients/dotnet-thin-client[.NET thin client] that provides a lightweight form of connectivity
+to Ignite clusters
+* Explore the link:{githubUrl}/modules/platforms/dotnet/examples[additional examples] included with Ignite
+* Refer to the link:net-specific[.NET-specific section] of the documentation to learn more about capabilities
+that are available for C# and .NET applications.
+
+
diff --git a/docs/_docs/quick-start/index.adoc b/docs/_docs/quick-start/index.adoc
new file mode 100644
index 0000000..f15157a
--- /dev/null
+++ b/docs/_docs/quick-start/index.adoc
@@ -0,0 +1,18 @@
+---
+layout: toc
+---
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Quick Start Guides
diff --git a/docs/_docs/quick-start/java.adoc b/docs/_docs/quick-start/java.adoc
new file mode 100644
index 0000000..cbb911e
--- /dev/null
+++ b/docs/_docs/quick-start/java.adoc
@@ -0,0 +1,171 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite for Java
+
+This page explains system requirements for running Ignite and how to install Ignite, start a cluster, and run a simple Hello World example.
+
+== Prerequisites
+
+Ignite was officially tested on:
+
+include::includes/prereqs.adoc[]
+
+If you use Java version 11 or later, see <<Running Ignite with Java 11 or later>> for details.
+
+== Installing Ignite
+
+include::includes/install-ignite.adoc[]
+
+
+== Starting a Node
+
+include::includes/starting-node.adoc[]
+
+== Running Your First Application
+
+
+Once the cluster is started, follow the steps below to run a simple HelloWorld example.
+
+=== 1. Add Maven Dependency
+
+
+The easiest way to get started with Ignite in Java is to use Maven dependency management.
+
+Create a new Maven project with your favorite IDE and add the following dependencies to your project's pom.xml file.
+
+[source,xml,subs="attributes,specialchars"]
+----
+<properties>
+    <ignite.version>{version}</ignite.version>
+</properties>
+
+<dependencies>
+    <dependency>
+        <groupId>org.apache.ignite</groupId>
+        <artifactId>ignite-core</artifactId>
+        <version>${ignite.version}</version>
+    </dependency>
+    <dependency>
+        <groupId>org.apache.ignite</groupId>
+        <artifactId>ignite-spring</artifactId>
+        <version>${ignite.version}</version>
+    </dependency>
+</dependencies>
+----
+
+=== 2. HelloWorld.java
+
+
+Here is a sample HelloWorld.java file that prints 'Hello World' and some other environment details on all
+the server nodes of the cluster.
+The sample shows how to prepare a cluster configuration with Java APIs, create a sample cache with some data in it, and execute custom Java logic on the server nodes.
+
+[source,java]
+----
+public class HelloWorld {
+    public static void main(String[] args) throws IgniteException {
+        // Preparing IgniteConfiguration using Java APIs
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // The node will be started as a client node.
+        cfg.setClientMode(true);
+
+        // Classes of custom Java logic will be transferred over the wire from this app.
+        cfg.setPeerClassLoadingEnabled(true);
+
+        // Setting up an IP Finder to ensure the client can locate the servers.
+        TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
+        ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));
+        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
+
+        // Starting the node
+        Ignite ignite = Ignition.start(cfg);
+
+        // Create an IgniteCache and put some values in it.
+        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
+        cache.put(1, "Hello");
+        cache.put(2, "World!");
+
+        System.out.println(">> Created the cache and added the values.");
+
+        // Executing custom Java compute task on server nodes.
+        ignite.compute(ignite.cluster().forServers()).broadcast(new RemoteTask());
+
+        System.out.println(">> Compute task is executed, check for output on the server nodes.");
+
+        // Disconnect from the cluster.
+        ignite.close();
+    }
+
+    /**
+     * A compute task that prints out a node ID and some details about its OS and JRE.
+     * Plus, the code shows how to access data stored in a cache from the compute task.
+     */
+    private static class RemoteTask implements IgniteRunnable {
+        @IgniteInstanceResource
+        Ignite ignite;
+
+        @Override public void run() {
+            System.out.println(">> Executing the compute task");
+
+            System.out.println(
+                "   Node ID: " + ignite.cluster().localNode().id() + "\n" +
+                "   OS: " + System.getProperty("os.name") + "\n" +
+                "   JRE: " + System.getProperty("java.runtime.name"));
+
+            IgniteCache<Integer, String> cache = ignite.cache("myCache");
+
+            System.out.println(">> " + cache.get(1) + " " + cache.get(2));
+        }
+    }
+}
+----
+[NOTE]
+====
+Don't forget to add imports for HelloWorld.java. It should be trivial as long as Maven resolves all of the dependencies.
+
+Plus, you might need to add these settings to your pom.xml if the IDE keeps using a Java compiler version earlier than 1.8:
+[source,xml]
+----
+<build>
+    <plugins>
+        <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-compiler-plugin</artifactId>
+            <configuration>
+                <source>1.8</source>
+                <target>1.8</target>
+            </configuration>
+        </plugin>
+    </plugins>
+</build>
+----
+====
+
+
+=== 3. Run HelloWorld.java
+
+
+Run HelloWorld.java. You will see 'Hello World!' and other environment details printed on all the server nodes.
+
+
+== Further Examples
+
+include::includes/exampleprojects.adoc[]
+
+== Running Ignite with Java 11 or later
+
+include::includes/java9.adoc[]
+
diff --git a/docs/_docs/quick-start/nodejs.adoc b/docs/_docs/quick-start/nodejs.adoc
new file mode 100644
index 0000000..af0edaf
--- /dev/null
+++ b/docs/_docs/quick-start/nodejs.adoc
@@ -0,0 +1,104 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite for Node.js
+
+This chapter explains system requirements for running Ignite and how to install Ignite, start a cluster, and run a simple Hello World example using a thin client for Node.js.
+
+Thin Client is a lightweight Ignite connection mode.
+It does not participate in the cluster, hold any data, or perform computations.
+All it does is establish a socket connection to an individual Ignite node and perform all operations through that node.
+
+== Prerequisites
+
+Ignite was tested on:
+
+include::includes/prereqs.adoc[]
+
+and:
+
+[width="100%",cols="1,3"]
+|=======================================================================
+|Node.js |Version 8 or higher is required. Either download the Node.js pre-built binary for the target platform, or install Node.js via a package manager.
+|=======================================================================
+
+== Installing Ignite
+
+include::includes/install-ignite.adoc[]
+
+Once that's done, execute the following command to install the Node.js Thin Client package:
+
+include::includes/install-nodejs-npm.adoc[]
+
+== Starting a Node
+
+Before connecting to Ignite from the Node.js thin client, you must start at least one cluster node.
+
+include::includes/starting-node.adoc[]
+
+== Running Your First Application
+
+
+Once the cluster is started, you can use the Ignite Node.js thin client to perform cache operations.
+Your Ignite installation includes several ready-to-run Node.js examples in the `{IGNITE_HOME}/platforms/nodejs/examples` directory. For example,
+
+[source,shell]
+----
+cd {IGNITE_HOME}/platforms/nodejs/examples
+node CachePutGetExample.js
+----
+
+Assuming that the server node is running locally, and that you have completed
+all of the prerequisites listed above, here is a very simple _HelloWorld_
+example that puts and gets values from the cache. If you followed the
+instructions above and placed this Hello World example in your examples
+folder, it should work.
+
+[source,javascript]
+----
+const IgniteClient = require('apache-ignite-client');
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+const ObjectType = IgniteClient.ObjectType;
+const CacheEntry = IgniteClient.CacheEntry;
+
+async function performCacheKeyValueOperations() {
+    const igniteClient = new IgniteClient();
+    try {
+        await igniteClient.connect(new IgniteClientConfiguration('127.0.0.1:10800'));
+        const cache = (await igniteClient.getOrCreateCache('myCache')).
+            setKeyType(ObjectType.PRIMITIVE_TYPE.INTEGER);
+        // put and get value
+        await cache.put(1, 'Hello World');
+        const value = await cache.get(1);
+        console.log(value);
+
+    }
+    catch (err) {
+        console.log(err.message);
+    }
+    finally {
+        igniteClient.disconnect();
+    }
+}
+
+performCacheKeyValueOperations();
+----
+
+== Next Steps
+
+From here, you may want to:
+
+* Read more about using the Ignite Node.js thin client link:thin-clients/nodejs-thin-client[here]
+//* Explore the link:https://github.com/gridgain/nodejs-thin-client/tree/master/examples[additional examples] included with Ignite
+
diff --git a/docs/_docs/quick-start/php.adoc b/docs/_docs/quick-start/php.adoc
new file mode 100644
index 0000000..e4ac432
--- /dev/null
+++ b/docs/_docs/quick-start/php.adoc
@@ -0,0 +1,125 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite for PHP
+
+This chapter explains system requirements for running Ignite and how to install Ignite, start a cluster, and run a simple Hello World example using a thin client for PHP.
+
+Thin Client is a lightweight Ignite connection mode.
+It does not participate in the cluster, hold any data, or perform computations.
+All it does is establish a socket connection to an individual Ignite node and perform all operations through that node.
+
+== Prerequisites
+
+Ignite was tested on:
+
+include::includes/prereqs.adoc[]
+
+and:
+
+[cols="1,3"]
+|=======================================================================
+|PHP |Version 7.2 or higher, the Composer dependency manager, and the PHP Multibyte String extension. Depending on your PHP configuration, you may need to install/configure the extension separately.
+|=======================================================================
+
+
+== Installing Ignite
+
+include::includes/install-ignite.adoc[]
+
+Once that's done, go to `{IGNITE_HOME}/platforms/php` and install Ignite PHP Thin Client as a Composer package using the command below:
+
+[source, shell]
+----
+composer install --no-dev
+----
+
+You're almost ready to run your first application.
+
+== Starting a Node
+
+Before connecting to Ignite from the PHP thin client, you must start at least one Ignite cluster node.
+
+include::includes/starting-node.adoc[]
+
+== Running Your First Application
+
+Once at least one node is started, you can use the Ignite PHP thin client to perform cache operations.
+Your Ignite installation includes several ready-to-run PHP examples in the `{IGNITE_HOME}/platforms/php/examples` directory. For example,
+
+
+[tabs]
+--
+tab:Unix[]
+[source,shell]
+----
+cd {IGNITE_HOME}/platforms/php/examples
+php CachePutGetExample.php
+----
+
+tab:Windows[]
+[source,shell]
+----
+cd {IGNITE_HOME}\platforms\php\examples
+php CachePutGetExample.php
+----
+--
+
+
+Assuming that the server node is running locally, and that you have completed all of the prerequisites listed above, here is a very simple _HelloWorld_ example that puts and gets values from the cache.
+Note the `require_once` line; make sure the path is correct.
+If you followed the instructions above and placed this Hello World example in your examples folder, it should work.
+
+
+[source,php]
+----
+<?php
+
+require_once __DIR__ . '/../vendor/autoload.php';
+
+use Apache\Ignite\Client;
+use Apache\Ignite\ClientConfiguration;
+use Apache\Ignite\Type\ObjectType;
+use Apache\Ignite\Cache\CacheEntry;
+use Apache\Ignite\Exception\ClientException;
+
+function performCacheKeyValueOperations(): void
+{
+    $client = new Client();
+    try {
+        $client->connect(new ClientConfiguration('127.0.0.1:10800'));
+        $cache = $client->getOrCreateCache('myCache')->
+            setKeyType(ObjectType::INTEGER);
+
+        // put and get value
+        $cache->put(1, 'Hello World');
+        $value = $cache->get(1);
+        echo($value);
+    } catch (ClientException $e) {
+        echo($e->getMessage());
+    } finally {
+        $client->disconnect();
+    }
+}
+
+performCacheKeyValueOperations();
+----
+
+== Next Steps
+
+From here, you may want to:
+
+* Read more about using the link:thin-clients/php-thin-client[PHP Thin Client]
+//* Explore the link:https://github.com/gridgain/php-thin-client/tree/master/examples[additional examples] included with GridGain
+
diff --git a/docs/_docs/quick-start/python.adoc b/docs/_docs/quick-start/python.adoc
new file mode 100644
index 0000000..ae18b40
--- /dev/null
+++ b/docs/_docs/quick-start/python.adoc
@@ -0,0 +1,88 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Ignite for Python
+
+This chapter explains system requirements for running Ignite and how to install Ignite, start a cluster, and run a simple Hello World example using a thin link:thin-clients/python-thin-client[client for Python].
+
+Thin Client is a lightweight Ignite connection mode. It does not participate in the cluster, hold any data, or perform computations.
+All it does is establish a socket connection to one or multiple Ignite nodes and perform all operations through those nodes.
+
+== Prerequisites
+
+Ignite was tested on:
+
+include::includes/prereqs.adoc[]
+
+and:
+
+[cols="1,3"]
+|=======================================================================
+|Python |Version 3.4 or above
+|=======================================================================
+
+== Installing Ignite
+
+include::includes/install-ignite.adoc[]
+
+Once that's done, execute the following command to install the Python Thin Client package,
+which is named `pyignite`:
+
+include::includes/install-python-pip.adoc[]
+
+== Starting a Node
+
+Before connecting to Ignite via the Python thin client, you must start at least one Ignite cluster node.
+
+include::includes/starting-node.adoc[]
+
+== Running Your First Application
+
+Once the cluster is started, you can use the Ignite Python thin client to perform cache operations.
+
+Assuming that the server node is running locally, here is a _HelloWorld_ example that puts and gets values from the cache:
+
+.hello.py
+[source,python]
+----
+from pyignite import Client
+
+client = Client()
+client.connect('127.0.0.1', 10800)
+
+# Create cache
+my_cache = client.create_cache('my cache')
+
+# Put value in cache
+my_cache.put(1, 'Hello World')
+
+# Get value from cache
+result = my_cache.get(1)
+print(result)
+----
+
+To run this, you can save the example as a file (`hello.py`, for example) and run it from the command line:
+
+
+[source, shell]
+----
+python3 hello.py
+----
+
+Or you can enter the example into your Python interpreter/shell (IDLE on Windows, for example) and modify/execute it there.
+
+
+== Further Examples
+
+Explore more Ignite Python examples link:{githubUrl}/modules/platforms/python/examples[here^].
diff --git a/docs/_docs/quick-start/restapi.adoc b/docs/_docs/quick-start/restapi.adoc
new file mode 100644
index 0000000..f1e8111
--- /dev/null
+++ b/docs/_docs/quick-start/restapi.adoc
@@ -0,0 +1,96 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= REST API for Ignite
+
+This chapter explains system requirements for running Ignite and how to install Ignite, start a cluster, and run a simple Hello World example using Ignite's REST API.
+
+
+== Prerequisites
+
+Ignite was tested on:
+
+include::includes/prereqs.adoc[]
+
+
+== Installing Ignite
+
+include::includes/install-ignite.adoc[]
+
+Once that's done, you will need to enable HTTP connectivity.
+To do this, copy the `ignite-rest-http` module from `{IGNITE_HOME}/libs/optional/` to the `{IGNITE_HOME}/libs` folder.
+
+== Starting a Node
+
+Before connecting to Ignite via the REST API, you must start at least one cluster node.
+
+include::includes/starting-node.adoc[]
+
+== Running Your First Application
+
+Once the cluster is started, you can use the Ignite REST API to perform cache operations.
+
+You don't need to explicitly configure anything because the connector is initialized automatically, listening on port 8080.
+
+To verify the connector is ready, use curl:
+
+[source,shell]
+----
+curl "http://localhost:8080/ignite?cmd=version"
+----
+
+You should see a message like this:
+
+
+[source, shell,subs="attributes,specialchars"]
+-------------------------------------------------------------------------------
+$ curl "http://localhost:8080/ignite?cmd=version"
+{"successStatus":0,"error":null,"sessionToken":null,"response":"{version}"}
+-------------------------------------------------------------------------------
+
+You can see in the result that the Ignite version is {version}.
+
+Request parameters may be provided either as part of the URL or as form data:
+
+[source,shell]
+----
+curl 'http://localhost:8080/ignite?cmd=put&cacheName=myCache' -X POST -H 'Content-Type: application/x-www-form-urlencoded' -d 'key=testKey&val=testValue'
+----
+
+Assuming that the server node is running locally, here is a simple example that creates a cache (myCache), puts the string "Hello_World!" into it, and then gets the value back via the REST API:
+
+Create a cache:
+
+[source,shell]
+----
+curl "http://localhost:8080/ignite?cmd=getorcreate&cacheName=myCache"
+----
+
+Put data into the cache. The default type is "string" but you can specify a link:restapi#data-types[data type] via the `keyType` parameter.
+[source,shell]
+----
+curl "http://localhost:8080/ignite?cmd=put&key=1&val=Hello_World!&cacheName=myCache"
+----
+
+Get the data from the cache:
+[source,shell]
+----
+curl "http://localhost:8080/ignite?cmd=get&key=1&cacheName=myCache"
+----
+
+Now that you've seen a very basic example of accessing Ignite clusters via the REST API, you should probably keep the following in mind:
+
+- This is a very basic example. You will want to read more on the REST API link:restapi[here,window=_blank]. That page includes a listing of the various API calls and also covers important subjects like authentication.
+- The REST interface may not be suitable for all tasks. For example, you should use one of the language clients instead if you're trying to load bulk data or perform mission-critical tasks with millisecond latency.
+
diff --git a/docs/_docs/quick-start/sql.adoc b/docs/_docs/quick-start/sql.adoc
new file mode 100644
index 0000000..c1d1eed
--- /dev/null
+++ b/docs/_docs/quick-start/sql.adoc
@@ -0,0 +1,129 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Getting Started Quickly with SQL Via the Command Line
+
+If you just want to start up a cluster on the local machine and add a few rows of data without running Java or starting up an IDE, you can do some basic data loading and run some queries via the command line purely in SQL in less than 5 minutes.
+
+To do this, we'll use the `sqlline` utility (located in the `/bin` directory of your Ignite installation).
+
+NOTE: This example shows just one simple way to load data into Ignite quickly, for the sake of experimenting.
+For larger, production-scale work, you would want to use a more robust method of loading data (IgniteDataStreamer, Spark, advanced SQL, etc.).
+Refer to the link:persistence/external-storage[External Storage] page for information on how to load data from an RDBMS.
+
+== Installing Ignite
+
+Before we can get to any of that, we'll first need to install Ignite.
+
+include::includes/install-ignite.adoc[]
+
+
+== Running Ignite
+
+include::includes/starting-node.adoc[]
+
+This is the most basic startup method.
+It starts a node on the local machine, which gives us a place into which we can load data.
+
+Now just connect to the node and add data.
+
+== Using sqlline
+
+Using the `sqlline` utility is easy: you just need to connect to the node and then start entering SQL statements.
+
+. Open one more command shell tab and ensure you're in the `{IGNITE_HOME}/bin` folder.
+
+. Connect to the cluster with `sqlline`:
++
+[tabs]
+--
+tab:Unix[]
+[source,shell]
+----
+$ ./sqlline.sh -u jdbc:ignite:thin://127.0.0.1/
+----
+tab:Windows[]
+[source,shell]
+----
+$ sqlline -u jdbc:ignite:thin://127.0.0.1
+----
+--
+
+. Create two tables by running these two statements in `sqlline`:
++
+[source, sql]
+----
+CREATE TABLE City (id LONG PRIMARY KEY, name VARCHAR) WITH "template=replicated";
+
+CREATE TABLE Person (id LONG, name VARCHAR, city_id LONG, PRIMARY KEY (id, city_id))
+WITH "backups=1, affinityKey=city_id";
+----
+
+
+. Insert some rows by copy-pasting the statements below:
++
+[source, sql]
+----
+INSERT INTO City (id, name) VALUES (1, 'Forest Hill');
+INSERT INTO City (id, name) VALUES (2, 'Denver');
+INSERT INTO City (id, name) VALUES (3, 'St. Petersburg');
+INSERT INTO Person (id, name, city_id) VALUES (1, 'John Doe', 3);
+INSERT INTO Person (id, name, city_id) VALUES (2, 'Jane Roe', 2);
+INSERT INTO Person (id, name, city_id) VALUES (3, 'Mary Major', 1);
+INSERT INTO Person (id, name, city_id) VALUES (4, 'Richard Miles', 2);
+----
+
+. And then run some basic queries:
++
+[source, sql]
+----
+SELECT * FROM City;
+
++--------------------------------+--------------------------------+
+|               ID               |              NAME              |
++--------------------------------+--------------------------------+
+| 1                              | Forest Hill                    |
+| 2                              | Denver                         |
+| 3                              | St. Petersburg                 |
++--------------------------------+--------------------------------+
+3 rows selected (0.05 seconds)
+----
+
+. As well as queries with distributed JOINs:
++
+[source, sql]
+----
+SELECT p.name, c.name FROM Person p, City c WHERE p.city_id = c.id;
+
++--------------------------------+--------------------------------+
+|              NAME              |              NAME              |
++--------------------------------+--------------------------------+
+| Mary Major                     | Forest Hill                    |
+| Jane Roe                       | Denver                         |
+| John Doe                       | St. Petersburg                 |
+| Richard Miles                  | Denver                         |
++--------------------------------+--------------------------------+
+4 rows selected (0.011 seconds)
+----
+
+Easy!
+
+
+== Next Steps
+
+From here, you may want to:
+
+* Read more about using Ignite and link:SQL/sql-introduction[SQL]
+* Read more about using link:tools/sqlline[sqlline]
diff --git a/docs/_docs/read-repair.adoc b/docs/_docs/read-repair.adoc
new file mode 100644
index 0000000..50c7595
--- /dev/null
+++ b/docs/_docs/read-repair.adoc
@@ -0,0 +1,56 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Read Repair
+
+WARNING: Experimental API.
+
+
+"Read Repair" refers to a technique of repairing inconsistencies between primary and backup copies of data during normal read operations. When a specific key (or keys) is read by a user operation, Ignite checks the values for the given key in all backup copies.
+
+The Read Repair mode is designed to maintain consistency. However, read operations become {tilde}2 times more costly because backup copies are checked. It is generally not advisable to use this mode all the time, but rather occasionally.
+
+To enable Read Repair mode, obtain an instance of the cache that enables Read Repair reads as follows:
+
+[source, java]
+----
+include::{javaCodeDir}/BasicCacheOperations.java[tags=read-repair, indent=0]
+----
+
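+For reference, the call typically amounts to decorating the cache proxy; here is a minimal sketch, assuming the experimental `withReadRepair()` method on `IgniteCache`:
+
+[source, java]
+----
+IgniteCache<Integer, String> cache = ignite.cache("myCache");
+
+// Reads through this proxy check the value across all owning nodes.
+String value = cache.withReadRepair().get(1);
+----
+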
+A consistency check is incompatible with the following cache configurations:
+
+* Caches without backups.
+* Local caches.
+* Near caches.
+* Caches that use "read-through" mode.
+
+== Transactional Caches
+
+All values across the topology are replaced with the latest version:
+
+* Automatically for transactions that have the `TransactionConcurrency.OPTIMISTIC` concurrency mode or the `TransactionIsolation.READ_COMMITTED` isolation level
+* At the `commit()` phase for transactions that have the `TransactionConcurrency.PESSIMISTIC` concurrency mode and an isolation level other than `TransactionIsolation.READ_COMMITTED`
+
+When a backup inconsistency is detected, Ignite will generate a link:https://ignite.apache.org/releases/{version}/javadoc/org/apache/ignite/events/EventType.html#EVT_CONSISTENCY_VIOLATION[consistency violation event] (if the event is enabled in the configuration). You can listen to this event to get notified about inconsistency issues. Refer to the link:events/listening-to-events[Working with Events] section for information on how to listen to events.
+
+Read Repair does not guarantee an "all copies check" if the value was already cached inside the transaction.
+For example, if you use an isolation level other than `TransactionIsolation.READ_COMMITTED` and have already read the value or performed a write, you will get the cached value.
+
+== Atomic Caches
+
+The consistency violation exception is thrown if differences are found.
+
+Due to the nature of the atomic cache, false-positive results can be observed. For example, an attempt to check consistency under load may lead to a consistency violation exception. By default, the implementation tries to check the given key three times. The number of attempts can be changed by setting the `IGNITE_NEAR_GET_MAX_REMAPS` system property.
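+
+For example, to raise the number of attempts, you could pass the property to the JVM on startup (the value below is illustrative):
+
+[source, text]
+----
+-DIGNITE_NEAR_GET_MAX_REMAPS=5
+----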
+
+Be aware that the consistency violation event will not be fired for atomic caches.
diff --git a/docs/_docs/resources-injection.adoc b/docs/_docs/resources-injection.adoc
new file mode 100644
index 0000000..6792d29
--- /dev/null
+++ b/docs/_docs/resources-injection.adoc
@@ -0,0 +1,88 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Resources Injection
+
+== Overview
+
+Ignite supports the dependency injection of pre-defined Ignite resources, and supports field-based as well as method-based
+injection. Resources with proper annotations will be injected into the corresponding task, job, closure, or SPI before it is initialized.
+
+== Field-Based and Method-Based Injection
+
+You can inject resources by annotating either a field or a method. When you annotate a field, Ignite simply sets the
+value of the field at injection time (regardless of the field's access modifier). If you annotate a method with a
+resource annotation, it must accept an input parameter of the type corresponding to the injected resource. The method
+is then invoked at injection time with the appropriate resource passed as an input argument.
+
+[tabs]
+--
+tab:Field-Based Approach[]
+[source,java]
+----
+Ignite ignite = Ignition.ignite();
+
+Collection<String> res = ignite.compute().broadcast(new IgniteCallable<String>() {
+  // Inject Ignite instance.
+  @IgniteInstanceResource
+  private Ignite ignite;
+
+  @Override
+  public String call() throws Exception {
+    IgniteCache<Object, Object> cache = ignite.getOrCreateCache(CACHE_NAME);
+
+    // Do some work with the cache and return a result.
+    return "Cache size: " + cache.size();
+  }
+});
+----
+tab:Method-Based Approach[]
+[source,java]
+----
+public class MyClusterJob implements ComputeJob {
+    ...
+    private Ignite ignite;
+    ...
+    // Inject Ignite instance.
+    @IgniteInstanceResource
+    public void setIgnite(Ignite ignite) {
+        this.ignite = ignite;
+    }
+    ...
+}
+----
+--
+
+== Pre-defined Resources
+
+There are a number of pre-defined Ignite resources that you can inject:
+
+[cols="1,3",opts="header"]
+|===
+| Resource | Description
+
+| `CacheNameResource` | Injects grid cache name provided via `CacheConfiguration.getName()`.
+| `CacheStoreSessionResource` | Injects the current `CacheStoreSession` instance.
+| `IgniteInstanceResource` | Injects the Ignite node instance.
+| `JobContextResource` | Injects an instance of `ComputeJobContext`. The job context holds useful information about a
+particular job execution. For example, you can get the name of the cache containing the entry for which a job was co-located.
+| `LoadBalancerResource` | Injects an instance of `ComputeLoadBalancer` that can be used by a task to do load balancing.
+| `ServiceResource` | Injects an Ignite service by the specified service name.
+| `SpringApplicationContextResource` | Injects Spring's `ApplicationContext` resource.
+| `SpringResource` | Injects resource from Spring's `ApplicationContext`. Use it whenever you would like to access a bean
+specified in Spring's application context XML configuration.
+| `TaskContinuousMapperResource` | Injects an instance of `ComputeTaskContinuousMapper`. Continuous mapping allows the
+task to emit jobs at any point, even after the initial map phase.
+| `TaskSessionResource` | Injects an instance of the `ComputeTaskSession` resource, which defines a distributed session for a particular task execution.
+|===
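+
+As an illustration, here is a minimal sketch of service injection; the service name `myCounterService` and the `MyCounterService` interface are hypothetical:
+
+[source,java]
+----
+public class MyJob implements IgniteRunnable {
+    // Inject a proxy to a service deployed under the given name.
+    @ServiceResource(serviceName = "myCounterService")
+    private MyCounterService counterSvc;
+
+    @Override public void run() {
+        counterSvc.increment();
+    }
+}
+----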
diff --git a/docs/_docs/restapi.adoc b/docs/_docs/restapi.adoc
new file mode 100644
index 0000000..5bd630b
--- /dev/null
+++ b/docs/_docs/restapi.adoc
@@ -0,0 +1,2953 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= REST API
+:request_table_props: cols="15%,10%,10%,45%,20%",options="header"
+:response_table_props: cols="15%,15%,50%,20%",options="header"
+
+Ignite provides an HTTP REST interface that allows you to communicate with the cluster over the HTTP and HTTPS protocols. The REST API can be used to perform different operations like reading/writing from/to caches, executing tasks, getting various metrics, and more.
+
+Internally, Ignite uses Jetty to provide HTTP server features. See the <<Configuration>> section below for details on how to configure Jetty.
+
+== Getting Started
+
+To enable HTTP connectivity, make sure that the `ignite-rest-http` module is enabled.
+If you use the binary distribution, copy the `ignite-rest-http` module from `IGNITE_HOME/libs/optional/` to the `IGNITE_HOME/libs` folder.
+See link:setup#enabling-modules[Enabling modules] for details.
+
+Explicit configuration is not required; the connector starts up automatically and listens on port `8080`. You can check if it works with curl:
+
+[source,shell]
+----
+curl 'http://localhost:8080/ignite?cmd=version'
+----
+
+Request parameters may be provided either as part of the URL or as form data:
+
+[source,shell]
+----
+curl 'http://localhost:8080/ignite?cmd=put&cacheName=myCache' -X POST -H 'Content-Type: application/x-www-form-urlencoded' -d 'key=testKey&val=testValue'
+----
+
+=== Configuration
+
+You can change HTTP server parameters as follows:
+
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+include::code-snippets/xml/http-configuration.xml[tags=ignite-config;http-configuration;!discovery, indent=0]
+----
+tab:Java[]
+
+[source, java]
+----
+include::{javaCodeDir}/RESTConfiguration.java[tags=http-configuration, indent=0]
+----
+
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+--
+
+The following table describes the properties of `ConnectorConfiguration` that are related to the HTTP server:
+
+[width="100%", cols="30%,50%,10%,10%"]
+|=======
+| Parameter Name | Description |Optional |Default Value
+
+|`setSecretKey(String)`
+|Defines the secret key used for client authentication. When provided, the client request must contain the HTTP header `X-Signature` with the value "[1]:[2]", where [1] is a timestamp in milliseconds and [2] is the Base64-encoded SHA1 hash of the secret key.
+|Yes
+|`null`
+
+|`setPortRange(int)`
+|Port range for the Jetty server. If the port provided in the Jetty configuration or the `IGNITE_JETTY_PORT` system property is already in use, Ignite iteratively increments the port by 1 and tries to bind again until the provided port range is exceeded.
+|Yes
+|`100`
+
+|`setJettyPath(String)`
+|Path to Jetty configuration file. Should be either absolute or relative to `IGNITE_HOME`. If the path is not set, Ignite starts a Jetty server with a simple HTTP connector. This connector uses `IGNITE_JETTY_HOST` and `IGNITE_JETTY_PORT` system properties as `host` and `port` respectively. If `IGNITE_JETTY_HOST` is not provided, `localhost` is used as default. If `IGNITE_JETTY_PORT` is not provided, port `8080` is used.
+|Yes
+|`null`
+
+|`setMessageInterceptor(...)`
+|The interceptor transforms all objects exchanged via the REST protocol. For example, if you use custom serialization on the client, you can write an interceptor to transform the binary representations received from the client into Java objects and later access them from Java code directly.
+|Yes
+|`null`
+|=======
+
+==== Example Jetty XML Configuration
+
+The path to this configuration file should be passed to `ConnectorConfiguration.setJettyPath(String)` as explained above.
+
+[source,xml]
+----
+include::code-snippets/xml/jetty.xml[tags=, indent=0]
+----
+
+=== Security
+
+When link:security/authentication[authentication] is configured in the cluster, all applications that use the REST API must authenticate by providing security credentials.
+The authentication request returns a session token that can be used with any command within that session.
+
+There are two ways to authenticate:
+
+. Use the authenticate command with `ignite.login=[user]&ignite.password=[password]` parameters.
++
+--
+----
+https://[host]:[port]/ignite?cmd=authenticate&ignite.login=[user]&ignite.password=[password]
+----
+--
+. Use any REST command with `ignite.login=[user]&ignite.password=[password]` parameters in the path of your connection string. In our example below, we use the `version` command:
++
+--
+[source, shell]
+----
+http://[host]:[port]/ignite?cmd=version&ignite.login=[user]&ignite.password=[password]
+----
+--
+In both examples above, replace `[host]`, `[port]`, `[user]`, and `[password]` with actual values.
+
+Executing either of the above strings in a browser returns a response with a session token, which looks like this:
+
+----
+{"successStatus":0,"error":null,"sessionToken":"EF6013FF590348CE91DEAE9870183BEF","response":true}
+----
+
+Once you obtain the session token, use the `sessionToken` parameter with your connection string as shown in the example below:
+
+----
+http://[host]:[port]/ignite?cmd=top&sessionToken=[sessionToken]
+----
+
+In the above connection string, replace `[host]`, `[port]`, and `[sessionToken]` with actual values.
+
+[WARNING]
+====
+Either user credentials or a session token is required when authentication is enabled on the server.
+Failure to provide either a `sessionToken` or the `ignite.login` and `ignite.password` parameters in the REST connection string results in an error:
+
+[source, json]
+----
+{
+    "successStatus":2,
+    "sessionToken":null,
+    "error":"Failed to handle request - session token not found or invalid",
+    "response":null
+}
+----
+====
+
+
+[NOTE]
+====
+[discrete]
+=== Session Token Expiration
+
+A session token is valid only for 30 seconds. Using an expired session token results in an error, like the one below:
+
+[source, json]
+----
+{
+    "successStatus":1,
+    "error":"Failed to handle request - unknown session token (maybe expired session) [sesTok=12FFFD4827D149068E9FFF59700E5FDA]",
+    "sessionToken":null,
+    "response":null
+}
+----
+
+To set a custom expiration time, set the `IGNITE_REST_SESSION_TIMEOUT` system property (in seconds).
+
+[source, text]
+----
+-DIGNITE_REST_SESSION_TIMEOUT=3600
+----
+
+
+====
+
+== Data Types
+By default, the REST API exchanges query parameters in the `String` format, and the cluster treats the parameters as
+`String` objects.
+
+If the type of a parameter is different from `String`, you can use the `keyType` or `valueType` parameter to specify
+the actual type of the argument. The REST API supports both <<Java Types>> and <<Custom Types>>.
+
+=== Java Types
+
+[width="100%", cols="50%,50%"]
+|=======
+| REST KeyType/ValueType | Corresponding Java Type
+
+|`boolean`
+|`java.lang.Boolean`
+
+|`byte`
+|`java.lang.Byte`
+
+|`short`
+|`java.lang.Short`
+
+|`integer`
+|`java.lang.Integer`
+
+|`long`
+|`java.lang.Long`
+
+|`float`
+|`java.lang.Float`
+
+|`double`
+|`java.lang.Double`
+
+|`date`
+|`java.sql.Date`
+
+The date value should be in the format as specified in the `valueOf(String)` method in the link:https://docs.oracle.com/javase/8/docs/api/java/sql/Date.html#valueOf-java.lang.String-[Java documentation ,window=_blank]
+
+Example: 2018-01-01
+
+|`time`
+|`java.sql.Time`
+
+The time value should be in the format as specified in the `valueOf(String)` method in the link:https://docs.oracle.com/javase/8/docs/api/java/sql/Time.html#valueOf-java.lang.String-[Java documentation ,window=_blank]
+
+Example: 01:01:01
+
+|`timestamp`
+|`java.sql.Timestamp`
+
+The timestamp value should be in the format as specified in the `valueOf(String)` method in the link:https://docs.oracle.com/javase/8/docs/api/java/sql/Timestamp.html#valueOf-java.lang.String-[Java documentation ,window=_blank]
+
+Example: 2018-02-18%2001:01:01
+
+|`uuid`
+|`java.util.UUID`
+
+|`IgniteUuid`
+|`org.apache.ignite.lang.IgniteUuid`
+|=======
+
+The following example shows a `put` command with `keyType=int` and `valueType=date`:
+
+[source,text]
+----
+http://[host]:[port]/ignite?cmd=put&key=1&val=2018-01-01&cacheName=myCache&keyType=int&valueType=date
+----
+
+Similarly, the `get` command with `keyType=int` and `valueType=date` would be:
+
+[source,text]
+----
+http://[host]:[port]/ignite?cmd=get&key=1&cacheName=myCache&keyType=int&valueType=date
+----
+
+=== Custom Types
+
+The JSON format is used to exchange complex custom objects via the Ignite REST protocol.
+
+For example, let's assume you have a `Person` class, and below is the JSON representation of an object instance that
+you need to send to the cluster:
+
+[source,javascript]
+----
+ {
+  "uid": "7e51118b",
+  "name": "John Doe",
+  "orgId": 5678901,
+  "married": false,
+  "salary": 156.1
+ }
+----
+
+Next, you use this REST request to put the object in the cluster by setting the `valueType` parameter to `Person` and
+the `val` parameter to the URL-encoded JSON object:
+
+[source,text]
+----
+http://[host]:[port]/ignite?cacheName=testCache&cmd=put&keyType=int&key=1&valueType=Person
+&val=%7B%0A+++++%22uid%22%3A+%227e51118b%22%2C%0A+++++%22name%22%3A+%22John+Doe%22%2C%0A+++++%22orgId%22%3A+5678901%2C%0A+++++%22married%22%3A+false%2C%0A+++++%22salary%22%3A+156.1%0A++%7D&
+----
+
+Once a server receives the request, it converts the object from JSON into the internal
+link:/docs/data-modeling/data-modeling#binary-object-format[binary object] format following the conversion procedure below:
+
+* If the `Person` class exists and is available on the server's classpath, the JSON object is resolved to an instance of the `Person` class.
+
+* If the `Person` class is not available on the server’s classpath, but there is a `QueryEntity` object that defines
+the `Person`, then the JSON object is resolved to a binary object of that `Person` type:
++
+[%header, cols="2"]
+|===
+|Query entity|Binary Object (Person)
+a|
+[source,xml]
+----
+<bean class="org.apache.ignite.cache.QueryEntity">
+<property name="keyType" value="java.lang.Integer"/>
+<property name="valueType" value="Person"/>
+<property name="fields">
+<map>
+<entry key="uid"     value="java.util.UUID"/>
+<entry key="name"    value="java.lang.String"/>
+<entry key="orgId"   value="java.lang.Long"/>
+<entry key="married" value="java.lang.Boolean"/>
+<entry key="salary"  value="java.lang.Float"/>
+</map>
+</property>
+</bean>
+----
+a|
+[source,javascript]
+----
+"uid": "7e51118b",  // UUID
+"name": "John Doe", // string
+"orgId": 5678901,   // long
+"married": false,   // boolean
+"salary": 156.1     // float
+----
+|===
+
+* Otherwise, the JSON object’s field types are resolved following the regular JSON convention:
++
+[source,javascript]
+----
+"uid": "7e51118b",   // string
+"name": "John Doe",  // string
+"orgId": 5678901,    // int
+"married": false,    // boolean
+"salary": 156.1      // double
+----
+
+The same conversion rules apply when you have a custom key type set via the `keyType` parameter of the Ignite
+REST protocol.
+
+== Returned Value
+The HTTP REST request returns a JSON object which has a similar structure for each command:
+
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`affinityNodeId`
+|`string`
+|Affinity node ID.
+|`2bd7b049-3fa0-4c44-9a6d-b5c7a597ce37`
+
+|`error`
+|`string`
+|This field contains a description of the error if the server could not handle the request.
+|Specific to each command.
+
+|`sessionToken`
+|`string`
+|When authentication is enabled on the server, this field contains a session token that can be used with any command within that session. If authentication is off, this field contains `null`.
+|`EF6013FF590348CE91DEAE9870183BEF` when authentication is enabled; otherwise, `null`.
+
+|`response`
+|`jsonObject`
+|This field contains the result of the command.
+|Specific to each command.
+
+|`successStatus`
+|`integer`
+|Exit status code. It might have the following values:
+
+`success = 0`
+
+`failed = 1`
+
+`authorization failed = 2`
+
+`security check failed = 3`
+|`0`
+|=======
+
+== REST API Reference
+
+=== Version
+
+Returns the Ignite version.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=version
+----
+
+*Response:*::
++
+[source,json]
+----
+{
+  "error": "",
+  "response": "1.0.0",
+  "successStatus": 0
+}
+----
+
+=== Cluster State
+Returns the current link:monitoring-metrics/cluster-states[state of the cluster].
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=state
+----
+
+*Response:*::
++
+Returns the current cluster state, for example `ACTIVE`, `ACTIVE_READ_ONLY`, or `INACTIVE`.
++
+[source,json]
+----
+{
+  "successStatus":0,
+  "error":null,
+  "sessionToken":null,
+  "response": "ACTIVE_READ_ONLY"
+}
+----
+
+
+=== Change Cluster State
+
+The `setstate` command changes the link:monitoring-metrics/cluster-states[cluster state].
+
+*Request:*::
++
+--
+[source,shell]
+----
+http://host:port/ignite?cmd=setstate&state={new_state}
+----
+
+[cols="15%,10%,75%",options="header"]
+|===
+|Parameter
+|Type
+|Description
+
+|`state` | String a| New cluster state. One of the values:
+
+* `ACTIVE`: active state
+* `ACTIVE_READ_ONLY`: read-only state
+* `INACTIVE`: the cluster is deactivated
+
+include::includes/note-on-deactivation.adoc[]
+
+|===
+--
+
+*Response:*::
++
+[source,json]
+----
+{
+  "successStatus":0,
+  "error":null,
+  "sessionToken":null,
+  "response":"setstate done"
+}
+----
+
+////
+=== Deactivate
+Starts the deactivation process for a persistence-enabled cluster.
+
+include::includes/note-on-deactivation.adoc[]
+
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=deactivate
+----
+
+*Response:*::
++
+[source,json]
+----
+{
+  "successStatus":0,
+  "error":null,
+  "sessionToken":null,
+  "response":"deactivate started"
+}
+----
+////
+
+
+=== Increment
+
+Adds a delta to the given atomic long and returns the resulting value.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=incr&cacheName={cacheName}&key={incrKey}&init={initialValue}&delta={delta}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`key`
+| string
+|
+| The name of the atomic long.
+| counter
+
+|`init`
+|long
+| Yes
+| Initial value.
+| 15
+
+|`delta`
+| long
+|
+|Number to be added.
+| 42
+|=======
+
+*Response:*::
++
+The response contains the value after the operation.
++
+[source,json]
+----
+{
+  "affinityNodeId": "e05839d5-6648-43e7-a23b-78d7db9390d5",
+  "error": "",
+  "response": 42,
+  "successStatus": 0
+}
+----
+
+=== Decrement
+
+Subtracts a delta from the given atomic long and returns the resulting value.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=decr&cacheName={cacheName}&key={key}&init={init_value}&delta={delta}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+|`cacheName`
+| string
+|Yes
+|Cache name. If not provided, the default cache ("default") is used.
+|partitionedCache
+
+|`key`
+|string
+|
+|The name of the atomic long.
+|counter
+
+|`init`
+| long
+| Yes
+| Initial value.
+| `15`
+
+|`delta`
+|long
+|
+|Number to be subtracted.
+|`42`
+|=======
+
+*Response:*::
++
+The response contains the value after the operation.
++
+[source,json]
+----
+{
+  "affinityNodeId": "e05839d5-6648-43e7-a23b-78d7db9390d5",
+  "error": "",
+  "response": -42,
+  "successStatus": 0
+}
+----
+
+
+
+=== Cache Metrics
+
+Shows metrics for a cache.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=cache&cacheName={cacheName}&destId={nodeId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`destId`
+| string
+| Yes
+| Node ID for which the metrics are to be returned.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "affinityNodeId": "",
+  "error": "",
+  "response": {
+    "hits": 0,
+    "misses": 0,
+    "reads": 0,
+    "writes": 2
+  },
+  "successStatus": 0
+}
+----
++
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`response`
+| jsonObject
+| The JSON object contains cache metrics such as creation time, read count, and so on.
+a|
+`{
+ "createTime": 1415179251551, "hits": 0, "misses": 0, "readTime":1415179251551, "reads": 0,"writeTime": 1415179252198, "writes": 2
+}`
+|=======
+
+=== Cache Size
+Gets the number of all entries cached across all nodes.
+
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=size&cacheName={cacheName}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "affinityNodeId": "",
+  "error": "",
+  "response": 1,
+  "successStatus": 0
+}
+----
++
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`response`
+| number
+| Number of all entries cached across all nodes.
+| 5
+|=======
+
+=== Cache Metadata
+Gets metadata for a cache.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=metadata&cacheName={cacheName}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`cacheName`
+| String
+| Yes
+| Cache name. If not provided, metadata for all user caches is returned.
+| partitionedCache
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "error": "",
+  "response": {
+    "cacheName": "partitionedCache",
+    "types": [
+      "Person"
+    ],
+    "keyClasses": {
+      "Person": "java.lang.Integer"
+    },
+    "valClasses": {
+      "Person": "org.apache.ignite.Person"
+    },
+    "fields": {
+      "Person": {
+        "_KEY": "java.lang.Integer",
+        "_VAL": "org.apache.ignite.Person",
+        "ID": "java.lang.Integer",
+        "FIRSTNAME": "java.lang.String",
+        "LASTNAME": "java.lang.String",
+        "SALARY": "double"
+      }
+    },
+    "indexes": {
+      "Person": [
+        {
+          "name": "ID_IDX",
+          "fields": [
+            "id"
+          ],
+          "descendings": [],
+          "unique": false
+        },
+        {
+          "name": "SALARY_IDX",
+          "fields": [
+            "salary"
+          ],
+          "descendings": [],
+          "unique": false
+        }
+      ]
+    }
+  },
+  "sessionToken": "",
+  "successStatus": 0
+}
+----
+
+
+=== Compare-And-Swap
+
+Stores a given key-value pair in a cache only if the previous value is equal to the expected value passed in.
+
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=cas&key={key}&val={newVal}&val2={oldVal}&cacheName={cacheName}&destId={nodeId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`key`
+| string
+|
+|Key to store in cache.
+| name
+
+|`val`
+| string
+|
+| Value associated with the given key.
+| Jack
+
+|`val2`
+| string
+|
+| Expected value.
+| Bob
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+|=======
+
+*Response:*::
++
+The response returns `true` if the value was replaced, `false` otherwise.
++
+[source,json]
+----
+{
+  "affinityNodeId": "1bcbac4b-3517-43ee-98d0-874b103ecf30",
+  "error": "",
+  "response": true,
+  "successStatus": 0
+}
+----
+
+=== Append
+
+Appends the given string to the value associated with the given key.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=append&key={appendKey}&val={_suffix}&cacheName={cacheName}&destId={nodeId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+| `key`
+| string
+|
+| Key to store in cache.
+| name
+
+|`val`
+|string
+|
+| Value to be appended to the current value.
+| Jack
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "affinityNodeId": "1bcbac4b-3517-43ee-98d0-874b103ecf30",
+  "error": "",
+  "response": true,
+  "successStatus": 0
+}
+----
++
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`response`
+| boolean
+| `true` if the append happened, `false` otherwise.
+| true
+|=======
+
+
+=== Prepend
+
+Prepends the given string to the value associated with the given key.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=prepend&key={key}&val={value}&cacheName={cacheName}&destId={nodeId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| myCache
+
+|`key`
+|string
+|
+|Key to store in cache.
+|name
+
+|`val`
+|string
+|
+| The string to be prepended to the current value.
+|Name_
+
+|`destId`
+|string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "affinityNodeId": "1bcbac4b-3517-43ee-98d0-874b103ecf30",
+  "error": "",
+  "response": true,
+  "successStatus": 0
+}
+----
++
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`response`
+|boolean
+| `true` if the prepend happened, `false` otherwise.
+| true
+|=======
+
+
+=== Replace
+
+Stores a given key-value pair in a cache if the cache already contains the key.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=rep&key=repKey&val=newValue&cacheName={cacheName}&destId={nodeId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`key`
+| string
+|
+| Key to store in cache.
+| name
+
+|`val`
+| string
+|
+| Value associated with the given key.
+| Jack
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+
+| `exp`
+| long
+| Yes
+| Expiration time in milliseconds for the entry. When the parameter is set, the operation is executed with link:configuring-caches/expiry-policies[ModifiedExpiryPolicy].
+| 60000
+
+|=======
+
+*Response:*::
++
+The response contains `true` if the value was replaced, `false` otherwise.
++
+[source,json]
+----
+{
+  "affinityNodeId": "1bcbac4b-3517-43ee-98d0-874b103ecf30",
+  "error": "",
+  "response": true,
+  "successStatus": 0
+}
+----
+
+=== Get
+Retrieves the value mapped to a specified key from a cache.
+
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=get&key={getKey}&cacheName={cacheName}&destId={nodeId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`key`
+| string
+|
+| Key whose associated value is to be returned.
+| testKey
+
+|`keyType`
+| Java built-in type
+| Yes
+| See <<Data Types>> for more details.
+|
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+|=======
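+
+*Response:*::
++
+The response contains the value for the given key. A representative response (the value shown is illustrative):
++
+[source,json]
+----
+{
+  "affinityNodeId": "2bd7b049-3fa0-4c44-9a6d-b5c7a597ce37",
+  "error": "",
+  "response": "value",
+  "successStatus": 0
+}
+----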
+
+
+=== Get All
+Retrieves values mapped to the specified keys from a given cache.
+
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=getall&k1={getKey1}&k2={getKey2}&k3={getKey3}&cacheName={cacheName}&destId={nodeId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`k1...kN`
+| string
+|
+| Keys whose associated values are to be returned.
+| key1, key2, ..., keyN
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "affinityNodeId": "",
+  "error": "",
+  "response": {
+    "key1": "value1",
+    "key2": "value2"
+  },
+  "successStatus": 0
+}
+----
++
+[NOTE]
+====
+[discrete]
+=== Get output as array
+
+To obtain the output as an array, use the `IGNITE_REST_GETALL_AS_ARRAY=true` system property.
+Once the property is set, the `getall` command provides the response in the following format:
+
+`{"successStatus":0,"affinityNodeId":null,"error":null,"sessionToken":null,"response":[{"key":"key1","value":"value1"},{"key":"key2","value":"value2"}]}`
+====
+
+
+=== Get and Remove
+Removes the given key mapping from the cache and returns the previous value.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=getrmv&cacheName={cacheName}&destId={nodeId}&key={key}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`key`
+|string
+|
+| Key whose mapping is to be removed from the cache.
+| name
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "affinityNodeId": "1bcbac4b-3517-43ee-98d0-874b103ecf30",
+  "error": "",
+  "response": "value",
+  "successStatus": 0
+}
+----
++
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`response`
+| jsonObject
+| Value for the key.
+| `{"name": "bob"}`
+|=======
+
+=== Get and Put
+Stores a given key-value pair in a cache and returns the existing value if there is one.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=getput&key=getKey&val=newVal&cacheName={cacheName}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`key`
+| string
+|
+| Key to be associated with value.
+| name
+
+|`val`
+| string
+|
+| Value to be associated with key.
+| Jack
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+
+|=======
+
+*Response:*::
++
+The response contains the previous value for the key.
++
+[source,json]
+----
+{
+  "affinityNodeId": "2bd7b049-3fa0-4c44-9a6d-b5c7a597ce37",
+  "error": "",
+  "response": {"name": "bob"},
+  "successStatus": 0
+}
+----
+
+
+
+=== Get and Put If Absent
+Stores a given key-value pair in a cache only if the cache has no previous mapping for the key. If the cache previously contained a value for the given key, that value is returned.
+
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=getputifabs&key=getKey&val=newVal&cacheName={cacheName}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`key`
+| string
+|
+| Key to be associated with value.
+| name
+
+|`val`
+| string
+|
+| Value to be associated with key.
+| Jack
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "affinityNodeId": "2bd7b049-3fa0-4c44-9a6d-b5c7a597ce37",
+  "error": "",
+  "response": "value",
+  "successStatus": 0
+}
+----
++
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`response`
+| jsonObject
+| Previous value for the given key.
+|`{"name": "bob"}`
+|=======
+
+
+
+=== Get and Replace
+
+Stores a given key-value pair in a cache only if there is a previous mapping for the key.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=getrep&key={key}&val={val}&cacheName={cacheName}&destId={nodeId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`key`
+| string
+|
+| Key to store in cache.
+| name
+
+|`val`
+| string
+|
+| Value associated with the given key.
+| Jack
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+|=======
+
+*Response:*::
++
+The response contains the previous value associated with the specified key.
++
+[source,json]
+----
+{
+  "affinityNodeId": "1bcbac4b-3517-43ee-98d0-874b103ecf30",
+  "error": "",
+  "response": "oldValue",
+  "successStatus": 0
+}
+----
++
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`response`
+|jsonObject
+| The previous value associated with the specified key.
+| `{"name": "Bob"}`
+|=======
+
+=== Replace Value
+
+Replaces the entry for a key only if it is currently mapped to a given value.
+
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=repval&key={key}&val={newValue}&val2={oldVal}&cacheName={cacheName}&destId={nodeId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`key`
+| string
+|
+| Key to store in cache.
+| name
+
+|`val`
+| string
+|
+| Value associated with the given key.
+| Jack
+
+|`val2`
+| string
+|
+|Value expected to be associated with the specified key.
+|oldValue
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+|=======
+
+
+*Response:*::
++
+[source,json]
+----
+{
+  "affinityNodeId": "1bcbac4b-3517-43ee-98d0-874b103ecf30",
+  "error": "",
+  "response": true,
+  "successStatus": 0
+}
+----
++
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`response`
+| boolean
+| `true` if replace happened, `false` otherwise.
+|true
+|=======
+
+=== Remove
+
+Removes the given key mapping from the cache.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=rmv&key={rmvKey}&cacheName={cacheName}&destId={nodeId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`key`
+| string
+|
+| Key whose mapping is to be removed from the cache.
+| name
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "affinityNodeId": "1bcbac4b-3517-43ee-98d0-874b103ecf30",
+  "error": "",
+  "response": true,
+  "successStatus": 0
+}
+----
++
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`response`
+| boolean
+| `true` if the key mapping was removed, `false` otherwise.
+| true
+|=======
+
+
+
+=== Remove All
+
+Removes the given key mappings from a cache.
+
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=rmvall&k1={rmKey1}&k2={rmKey2}&k3={rmKey3}&cacheName={cacheName}&destId={nodeId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+| `cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+| `k1...kN`
+|string
+|
+|Keys whose mappings are to be removed from the cache.
+|name
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "affinityNodeId": "1bcbac4b-3517-43ee-98d0-874b103ecf30",
+  "error": "",
+  "response": true,
+  "successStatus": 0
+}
+----
++
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`response`
+| boolean
+| `true` if the mappings were removed, `false` otherwise.
+|true
+|=======
+
+=== Remove Value
+
+Removes the mapping for a key only if currently mapped to the given value.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=rmvval&key={rmvKey}&val={rmvVal}&cacheName={cacheName}&destId={nodeId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+| `cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+| `key`
+| string
+|
+| Key whose mapping is to be removed from the cache.
+| name
+
+|`val`
+| string
+|
+| Value expected to be associated with the specified key.
+| oldValue
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "affinityNodeId": "1bcbac4b-3517-43ee-98d0-874b103ecf30",
+  "error": "",
+  "response": true,
+  "successStatus": 0
+}
+----
++
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`response`
+| boolean
+| `true` if the mapping was removed; `false` if there was no matching key-value pair.
+|true
+|=======
+
+
+=== Add
+
+Stores a given key-value pair in a cache if the cache does not contain the key.
+
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=add&key=newKey&val=newValue&cacheName={cacheName}&destId={nodeId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`key`
+| string
+|
+| Key to be associated with the value.
+| name
+
+|`val`
+| string
+|
+| Value to be associated with the key.
+| Jack
+
+|`destId`
+|string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+
+|`exp`
+| long
+| Yes
+| Expiration time in milliseconds for the entry. When the parameter is set, the operation is executed with link:configuring-caches/expiry-policies[ModifiedExpiryPolicy].
+| 60000
+
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "affinityNodeId": "1bcbac4b-3517-43ee-98d0-874b103ecf30",
+  "error": "",
+  "response": true,
+  "successStatus": 0
+}
+----
++
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`response`
+| boolean
+| `true` if value was stored in cache, `false` otherwise.
+| true
+|=======
+
+
+=== Put
+
+Stores a given key-value pair in a cache.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=put&key=newKey&val=newValue&cacheName={cacheName}&destId={nodeId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`key`
+| string
+|
+| Key to be associated with the value.
+| name
+
+|`val`
+| string
+|
+| Value to be associated with the key.
+| Jack
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+
+|`exp`
+| long
+| Yes
+|Expiration time in milliseconds for the entry. When the parameter is set, the operation is executed with link:configuring-caches/expiry-policies[ModifiedExpiryPolicy].
+| 60000
+
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "affinityNodeId": "1bcbac4b-3517-43ee-98d0-874b103ecf30",
+  "error": "",
+  "response": true,
+  "successStatus": 0
+}
+----
++
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`response`
+| boolean
+| `true` if value was stored in cache, `false` otherwise.
+|true
+|=======
+
+
+=== Put All
+Stores the given key-value pairs in a cache.
+
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=putall&k1={putKey1}&k2={putKey2}&k3={putKey3}&v1={value1}&v2={value2}&v3={value3}&cacheName={cacheName}&destId={nodeId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`k1...kN`
+| string
+|
+| Keys to be associated with values.
+| name
+
+|`v1...vN`
+| string
+|
+| Values to be associated with keys.
+| Jack
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "affinityNodeId": "1bcbac4b-3517-43ee-98d0-874b103ecf30",
+  "error": "",
+  "response": true,
+  "successStatus": 0
+}
+----
++
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`response`
+|boolean
+|`true` if the values were stored in cache, `false` otherwise.
+|true
+|=======
+
+
+=== Put If Absent
+
+Stores a given key-value pair in a cache if the cache does not contain the given key.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=putifabs&key={getKey}&val={newVal}&cacheName={cacheName}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`key`
+| string
+|
+| Key to be associated with value.
+| name
+
+|`val`
+| string
+|
+| Value to be associated with key.
+| Jack
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+
+|`exp`
+| long
+| Yes
+| Expiration time in milliseconds for the entry. When the parameter is set, the operation is executed with link:configuring-caches/expiry-policies[ModifiedExpiryPolicy].
+| 60000
+
+|=======
+
+*Response:*::
++
+The response field contains `true` if the entry was put, `false` otherwise.
++
+[source,json]
+----
+{
+  "affinityNodeId": "2bd7b049-3fa0-4c44-9a6d-b5c7a597ce37",
+  "error": "",
+  "response": true,
+  "successStatus": 0
+}
+----
+
+
+=== Contains Key
+
+Determines if the cache contains an entry for the specified key.
+
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=conkey&key={getKey}&cacheName={cacheName}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`key`
+| string
+|
+| Key whose presence in this cache is to be tested.
+| testKey
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "affinityNodeId": "2bd7b049-3fa0-4c44-9a6d-b5c7a597ce37",
+  "error": "",
+  "response": true,
+  "successStatus": 0
+}
+----
++
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`response`
+| boolean
+| `true` if this cache contains a mapping for the specified key.
+| true
+|=======
+
+=== Contains Keys
+
+Determines if the cache contains entries for all of the specified keys.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=conkeys&k1={getKey1}&k2={getKey2}&cacheName={cacheName}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+
+|`k1...kN`
+| string
+|
+| Keys whose presence in this cache is to be tested.
+| key1, key2, ..., keyN
+
+|`destId`
+| string
+| Yes
+| ID of the node on which the command is executed.
+| `8daab5ea-af83-4d91-99b6-77ed2ca06647`
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "affinityNodeId": "2bd7b049-3fa0-4c44-9a6d-b5c7a597ce37",
+  "error": "",
+  "response": true,
+  "successStatus": 0
+}
+----
++
+[{response_table_props}]
+|=======
+|Field
+|Type
+|Description
+|Example
+
+|`response`
+| boolean
+| `true` if this cache contains mappings for all of the specified keys.
+| true
+|=======
+
+
+=== Get or Create Cache
+Creates a cache with the given name if it does not exist.
+
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=getorcreate&cacheName={cacheName}
+----
++
+[width="100%", cols="15%,15%,15%,55%", opts="header"]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+
+|`cacheName`
+| String
+| Yes
+| Cache name. If not provided, the default cache is used.
+
+|`backups`
+| int
+| Yes
+| Number of backups for cache data. Default is 0.
+
+|`dataRegion`
+| String
+| Yes
+| Name of the data region the cache should belong to.
+
+|`templateName`
+| String
+| Yes
+| Name of the cache template registered in Ignite to use as a configuration for the distributed cache. See the link:configuring-caches/configuration-overview#cache-templates[Cache Template, window=_blank] section for more information.
+
+|`cacheGroup`
+| String
+| Yes
+| Name of the group the cache should belong to.
+
+|`writeSynchronizationMode`
+| String
+| Yes
+a|Sets the write synchronization mode for the given cache:
+
+- `FULL_SYNC`
+- `FULL_ASYNC`
+- `PRIMARY_SYNC`
+|=======
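++
+For example, a request that creates a cache with two backups might look like this (the cache name is illustrative):
++
+[source,shell]
+----
+http://host:port/ignite?cmd=getorcreate&cacheName=myNewCache&backups=2
+----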
+
+*Response:*::
++
+[source,json]
+----
+{
+  "error": "",
+  "response": null,
+  "successStatus": 0
+}
+----
+
+
+=== Destroy Cache
+Destroys the cache with the given name.
+
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=destcache&cacheName={cacheName}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| partitionedCache
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "error": "",
+  "response": null,
+  "successStatus": 0
+}
+----
+
+=== Node
+
+Gets information about a node.
+
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=node&attr={includeAttributes}&mtr={includeMetrics}&id={nodeId}&caches={includeCaches}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`mtr`
+| boolean
+| Yes
+| The response includes metrics if this parameter is `true`.
+| true
+
+|`attr`
+| boolean
+| Yes
+| The response includes attributes if this parameter is `true`.
+| true
+
+|`ip`
+| string
+|
+| This parameter is optional if the `id` parameter is passed. The response is returned for the node with the given IP address.
+| 192.168.0.1
+
+|`id`
+| string
+|
+| This parameter is optional if the `ip` parameter is passed. The response is returned for the node with the given node ID.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+
+|`caches`
+|boolean
+|Yes
+| When set to `true`, the cache information returned for the node includes name, mode, and SQL schema.
+
+ When set to `false`, the node command does not return any cache information.
+
+ Default value is `true`.
+|true
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "error": "",
+  "response": {
+    "attributes": null,
+    "caches": {},
+    "consistentId": "127.0.0.1:47500",
+    "defaultCacheMode": "REPLICATED",
+    "metrics": null,
+    "nodeId": "2d0d6510-6fed-4fa3-b813-20f83ac4a1a9",
+    "replicaCount": 128,
+    "tcpAddresses": ["127.0.0.1"],
+    "tcpHostNames": [""],
+    "tcpPort": 11211
+  },
+  "successStatus": 0
+}
+----
+
+=== Log
+Shows server logs.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=log&from={from}&to={to}&path={pathToLogFile}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+|`from`
+|integer
+|Yes
+|Number of the line to start from. This parameter is mandatory if `to` is passed.
+|`0`
+
+|`path`
+|string
+|Yes
+|The path to the log file. If not provided, a default one is used.
+|`/log/cache_server.log`
+
+|`to`
+|integer
+|Yes
+|Number of the line to finish on. This parameter is mandatory if `from` is passed.
+|`1000`
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "error": "",
+  "response": ["[14:01:56,626][INFO ][test-runner][GridDiscoveryManager] Topology snapshot [ver=1, nodes=1, CPUs=8, heap=1.8GB]"],
+  "successStatus": 0
+}
+----
+
+
+=== Topology
+Gets information about the cluster topology.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=top&attr=true&mtr=true&id=c981d2a1-878b-4c67-96f6-70f93a4cd241
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`mtr`
+| boolean
+| Yes
+| The response will include metrics if this parameter is `true`.
+| true
+
+|`attr`
+| boolean
+| Yes
+| The response will include attributes if this parameter is `true`.
+| true
+
+|`ip`
+| string
+| Yes
+| This parameter is optional if the `id` parameter is passed. The response will be returned for the node with the given IP address.
+| 192.168.0.1
+
+|`id`
+| string
+| Yes
+| This parameter is optional if the `ip` parameter is passed. The response will be returned for the node with the given node ID.
+| 8daab5ea-af83-4d91-99b6-77ed2ca06647
+
+|`caches`
+| boolean
+| Yes
+| When set to `true`, the cache information returned by the `top` command includes `name`, `mode`, and `sqlSchema`.
+ When set to `false`, the `top` command does not return any cache information.
+ Default value is `true`.
+| true
+|=======
+
+*Response:*::
++
+[source,json]
+----
+{
+  "error": "",
+  "response": [
+    {
+      "attributes": {
+        ...
+      },
+      "caches": [
+        {
+          "name": "",
+          "mode": "PARTITIONED"
+        },
+        {
+          "name": "partitionedCache",
+          "mode": "PARTITIONED",
+          "sqlSchema": "partitionedCache"
+        }
+      ],
+      "consistentId": "127.0.0.1:47500",
+      "metrics": {
+        ...
+      },
+      "nodeId": "96baebd6-dedc-4a68-84fd-f804ee1ed995",
+      "replicaCount": 128,
+      "tcpAddresses": ["127.0.0.1"],
+      "tcpHostNames": [""],
+      "tcpPort": 11211
+    },
+    {
+      "attributes": {
+        ...
+      },
+      "caches": [
+        {
+          "name": "",
+          "mode": "REPLICATED"
+        }
+      ],
+      "consistentId": "127.0.0.1:47501",
+      "metrics": {
+        ...
+      },
+      "nodeId": "2bd7b049-3fa0-4c44-9a6d-b5c7a597ce37",
+      "replicaCount": 128,
+      "tcpAddresses": ["127.0.0.1"],
+      "tcpHostNames": [""],
+      "tcpPort": 11212
+    }
+  ],
+  "successStatus": 0
+}
+----
+
+=== Execute a Task
+Executes a given task in the cluster.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=exe&name=taskName&p1=param1&p2=param2&async=true
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+|`name`
+| string
+|
+| Name of the task to execute.
+| `summ`
+
+|`p1...pN`
+| string
+| Yes
+| Arguments for the task execution.
+| arg1...argN
+
+|`async`
+| boolean
+| Yes
+| Determines whether the task is performed asynchronously.
+| `true`
+|=======
+
+*Response:*::
++
+The response contains an error message (if any), the unique identifier of the task, and the status and result of the computation.
++
+[source,json]
+----
+{
+  "error": "",
+  "response": {
+    "error": "",
+    "finished": true,
+    "id": "~ee2d1688-2605-4613-8a57-6615a8cbcd1b",
+    "result": 4
+  },
+  "successStatus": 0
+}
+----
+
+=== Result of a Task
+
+Returns the computation result for a given task.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=res&id={taskId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+|`id`
+| string
+|
+| ID of the task whose result is to be returned.
+| 69ad0c48941-4689aae0-6b0e-4d52-8758-ce8fe26f497d{tilde}4689aae0-6b0e-4d52-8758-ce8fe26f497d
+|=======
+
+*Response:*::
++
+--
+The response contains information about errors (if any), the ID of the task, and the status and result of the computation.
+
+[source,json]
+----
+{
+  "error": "",
+  "response": {
+    "error": "",
+    "finished": true,
+    "id": "69ad0c48941-4689aae0-6b0e-4d52-8758-ce8fe26f497d~4689aae0-6b0e-4d52-8758-ce8fe26f497d",
+    "result": 4
+  },
+  "successStatus": 0
+}
+----
+--
+
+=== SQL Query Execute
+
+Runs a SQL query over a cache.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=qryexe&type={type}&pageSize={pageSize}&cacheName={cacheName}&arg1=1000&arg2=2000&qry={query}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`type`
+| string
+|
+|Type for the query.
+|String
+
+|`pageSize`
+| number
+|
+| Page size for the query.
+| 3
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| testCache
+
+|`arg1...argN`
+| string
+|
+| Query arguments.
+|1000,2000
+
+|`qry`
+| string
+|
+| URL-encoded SQL query.
+| `salary+%3E+%3F+and+salary+%3C%3D+%3F`
+
+|`keepBinary`
+| boolean
+| Yes
+| Do not deserialize link:/docs/data-modeling/data-modeling#binary-object-format[binary objects]; `false` by default.
+|`true`
+|=======
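++
+For instance, the encoded query in the example above corresponds to the plain-text fragment `salary > ? and salary <= ?`, so a complete request might look like this (the cache and type names are illustrative):
++
+[source,shell]
+----
+http://host:port/ignite?cmd=qryexe&type=Person&pageSize=10&cacheName=personCache&arg1=1000&arg2=2000&qry=salary+%3E+%3F+and+salary+%3C%3D+%3F
+----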
+
+*Response:*::
++
+The response object contains the items returned by the query, a flag indicating the last page, and `queryId`.
++
+[source,json]
+----
+{
+  "error":"",
+  "response":{
+    "fieldsMetadata":[],
+    "items":[
+      {"key":3,"value":{"name":"Jane","id":3,"salary":2000}},
+      {"key":0,"value":{"name":"John","id":0,"salary":2000}}],
+    "last":true,
+    "queryId":0},
+  "successStatus":0
+}
+----
+
+=== SQL Fields Query Execute
+Runs a SQL fields query over a cache.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=qryfldexe&pageSize=10&cacheName={cacheName}&qry={qry}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`pageSize`
+| number
+|
+| Page size for the query.
+| 3
+
+|`cacheName`
+| string
+| Yes
+| Cache name. If not provided, the default cache is used.
+| testCache
+
+|`arg1...argN`
+| string
+|
+| Query arguments.
+|1000,2000
+
+|`qry`
+| string
+|
+| URL-encoded SQL fields query.
+|`select+firstName%2C+lastName+from+Person`
+
+|`keepBinary`
+| boolean
+| Yes
+| Do not deserialize link:/docs/data-modeling/data-modeling#binary-object-format[binary objects]; `false` by default.
+|`true`
+|=======
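++
+For instance, the encoded query in the example above corresponds to the plain-text statement `select firstName, lastName from Person`, so a complete request might look like this (the cache name is illustrative):
++
+[source,shell]
+----
+http://host:port/ignite?cmd=qryfldexe&pageSize=10&cacheName=personCache&qry=select+firstName%2C+lastName+from+Person
+----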
+
+*Response:*::
++
+The response object contains the items returned by the query, fields query metadata, a flag indicating the last page, and `queryId`.
++
+[source,json]
+----
+{
+  "error": "",
+  "response": {
+    "fieldsMetadata": [
+      {
+        "fieldName": "FIRSTNAME",
+        "fieldTypeName": "java.lang.String",
+        "schemaName": "person",
+        "typeName": "PERSON"
+      },
+      {
+        "fieldName": "LASTNAME",
+        "fieldTypeName": "java.lang.String",
+        "schemaName": "person",
+        "typeName": "PERSON"
+      }
+    ],
+    "items": [["Jane", "Doe" ], ["John", "Doe"]],
+    "last": true,
+    "queryId": 0
+  },
+  "successStatus": 0
+}
+----
+
+
+=== SQL Scan Query Execute
+
+Runs a scan query over a cache.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=qryscanexe&pageSize={pageSize}&cacheName={cacheName}&className={className}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+|`pageSize`
+| Number
+|
+| Page size for the query.
+| 3
+
+|`cacheName`
+| String
+| Yes
+| Cache name. If not provided, the default cache is used.
+| testCache
+
+|`className`
+| String
+| Yes
+| Predicate class name for the scan query. The class should implement the `IgniteBiPredicate` interface.
+| `org.apache.ignite.filters.PersonPredicate`
+
+|`keepBinary`
+| boolean
+| Yes
+| Do not deserialize link:/docs/data-modeling/data-modeling#binary-object-format[binary objects]; `false` by default.
+|`true`
+|=======
+
+*Response:*::
++
+The response object contains the items returned by the scan query, fields metadata, a flag indicating the last page, and `queryId`.
++
+[source,json]
+----
+{
+  "error": "",
+  "response": {
+    "fieldsMetadata": [
+      {
+        "fieldName": "key",
+        "fieldTypeName": "",
+        "schemaName": "",
+        "typeName": ""
+      },
+      {
+        "fieldName": "value",
+        "fieldTypeName": "",
+        "schemaName": "",
+        "typeName": ""
+      }
+    ],
+    "items": [
+      {
+        "key": 1,
+        "value": {
+          "firstName": "Jane",
+          "id": 1,
+          "lastName": "Doe",
+          "salary": 1000
+        }
+      },
+      {
+        "key": 3,
+        "value": {
+          "firstName": "Jane",
+          "id": 3,
+          "lastName": "Smith",
+          "salary": 2000
+        }
+      }
+    ],
+    "last": true,
+    "queryId": 0
+  },
+  "successStatus": 0
+}
+----
+
+
+=== SQL Query Fetch
+Gets the next page for the query.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=qryfetch&pageSize={pageSize}&qryId={queryId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+|`pageSize`
+| number
+|
+| Page size for the query.
+| 3
+
+|`qryId`
+| number
+|
+| Query ID returned by the SQL Query Execute, SQL Fields Query Execute, or SQL Query Fetch commands.
+| 0
+|=======
+
+*Response:*::
++
+The response object contains the items returned by the query, a flag indicating the last page, and `queryId`.
++
+[source,json]
+----
+{
+  "error":"",
+  "response":{
+    "fieldsMetadata":[],
+    "items":[["Jane","Doe"],["John","Doe"]],
+    "last":true,
+    "queryId":0
+  },
+  "successStatus":0
+}
+----
+
+
+=== SQL Query Close
+
+Closes query resources.
+
+*Request:*::
++
+[source,shell]
+----
+http://host:port/ignite?cmd=qrycls&qryId={queryId}
+----
++
+[{request_table_props}]
+|=======
+|Parameter
+|Type
+|Optional
+|Description
+|Example
+
+
+|`qryId`
+|number
+|
+|Query ID returned by the SQL Query Execute, SQL Fields Query Execute, or SQL Query Fetch commands.
+|0
+|=======
+
+*Response:*::
++
+The command returns `true` if the query was closed successfully.
++
+[source,json]
+----
+{
+  "error":"",
+  "response":true,
+  "successStatus":0
+}
+----
+
+
diff --git a/docs/_docs/security/authentication.adoc b/docs/_docs/security/authentication.adoc
new file mode 100644
index 0000000..09f50fe
--- /dev/null
+++ b/docs/_docs/security/authentication.adoc
@@ -0,0 +1,65 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Authentication
+:javaFile: {javaCodeDir}/Security.java
+
+
+== Ignite Authentication
+
+
+You can enable Ignite Authentication by setting the `authenticationEnabled` property to `true` in the node's configuration.
+This type of authentication requires that link:persistence/native-persistence[persistent storage] be enabled for at least one data region.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::code-snippets/xml/ignite-authentication.xml[tags=ignite-config;!discovery, indent=0]
+----
+
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=ignite-authentication,indent=0]
+----
+
+tab:.NET/C#[]
+
+tab:C++[unsupported]
+--
+
+The first node that you start must have authentication enabled.
+Upon start-up, Ignite creates a user account with the name "ignite" and the password "ignite".
+Use this account to create the user accounts you need, and then delete the "ignite" account.
+
+You can manage users using the following SQL commands:
+
+* link:sql-reference/ddl#create-user[CREATE USER]
+* link:sql-reference/ddl#alter-user[ALTER USER]
+* link:sql-reference/ddl#drop-user[DROP USER]
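+
+For example, a minimal session that adds a new user and then removes the default account might look like this (the user name and password are illustrative):
+
+[source,sql]
+----
+CREATE USER admin WITH PASSWORD 'strongPassword';
+-- Reconnect as the new user before dropping the default account:
+DROP USER "ignite";
+----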
+
+
+== Supplying Credentials in Clients
+
+When authentication is configured in the cluster, all client applications must provide user credentials. Refer to the following pages for information about specific clients (a connection-string example follows this list):
+
+* link:thin-clients/getting-started-with-thin-clients#authentication[Thin clients]
+* link:SQL/JDBC/jdbc-driver#parameters[JDBC driver]
+* link:SQL/ODBC/connection-string-dsn#supported-arguments[ODBC driver]
+* link:restapi#security[REST API]
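+
+For example, a JDBC thin driver connection string that supplies credentials might look like this (the host and credentials are illustrative):
+
+[source,text]
+----
+jdbc:ignite:thin://127.0.0.1:10800?user=ignite&password=ignite
+----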
+
+
diff --git a/docs/_docs/security/index.adoc b/docs/_docs/security/index.adoc
new file mode 100644
index 0000000..3d52e95
--- /dev/null
+++ b/docs/_docs/security/index.adoc
@@ -0,0 +1,18 @@
+---
+layout: toc
+---
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Security 
diff --git a/docs/_docs/security/master-key-rotation.adoc b/docs/_docs/security/master-key-rotation.adoc
new file mode 100644
index 0000000..53e06f0
--- /dev/null
+++ b/docs/_docs/security/master-key-rotation.adoc
@@ -0,0 +1,131 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Master key rotation
+
+== Overview
+
+The master key encrypts cache keys, which are stored on disk in encrypted form. To learn more, see the link:security/tde[Transparent Data Encryption] page.
+
+Ignite 2.9 introduces the master key change process. It allows users to switch Ignite to a new master key, re-encrypting the cache keys in the process.
+
+Master key rotation is required if the key has been compromised or at the end of the crypto period (key validity period).
+
+== Prerequisites
+
+A new master key should be available to `EncryptionSpi` for each server node. The cluster should be active.
+
+== Configuration
+
+Master keys are identified by name. When the cluster starts for the first time, the master key name from the configuration will be used. See link:security/tde#configuration[TDE Configuration].
+
+Nodes save the master key name to the disk (local `MetaStorage`) on the first cluster activation and each master key change. If some node restarts, it will use the master key name from the local `MetaStorage`.
+
+== Changing master key
+
+NOTE: Cache starts and node joins during the key change process are prohibited and will be rejected.
+
+Ignite provides the ability to change the master key via the following interfaces:
+
+- link:#command-line-tool[command line tool]
+- link:#jmx[JMX]
+- link:#from-code[from code]
+
+=== Command line tool
+
+Ignite ships with a `control.sh|bat` script, located in the `$IGNITE_HOME/bin` folder, that can be used to manage the master key change process from the command line. The following commands can be used with `control.sh|bat`:
+
+[source,shell]
+----
+# Print the current master key name.
+control.sh|bat --encryption get_master_key_name
+
+# Change the master key.
+control.sh|bat --encryption change_master_key newMasterKeyName
+----
+
+=== JMX
+
+You can also manage the master key change process via the `EncryptionMXBean` interface:
+
+[cols="1,1",opts="header"]
+|===
+|Method | Description
+|getMasterKeyName() | Gets the current master key name.
+|changeMasterKey(String masterKeyName) | Starts the master key change process.
+|===
+
+=== From code
+
+The master key change process can be managed programmatically:
+
+[tabs]
+--
+tab:Java[]
+
+[source, java]
+----
+include::{javaCodeDir}/TDE.java[tags=master-key-rotation, indent=0]
+----
+--
+
+== Recovery of the master key on a failing node
+
+If a node is unavailable during the master key change process, it won't be able to join the cluster with the old master key. The node must re-encrypt its local group keys during recovery on startup. Set the new master key name via the `IGNITE_MASTER_KEY_NAME_TO_CHANGE_BEFORE_STARTUP` system property before the node starts. The node saves the key name to the local `MetaStorage` when the cluster is active.
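+
+For example, a node could be started with the new key name supplied as a JVM system property (the key name and configuration path are illustrative):
+
+[source,shell]
+----
+ignite.sh -J-DIGNITE_MASTER_KEY_NAME_TO_CHANGE_BEFORE_STARTUP=newMasterKeyName config/my-config.xml
+----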
+
+NOTE: It is recommended to delete the system property after a successful recovery. Otherwise, an invalid master key name may be used when the node restarts.
+
+== Additional master key generation example
+
+Ignite comes with `KeystoreEncryptionSpi`, which is based on the cipher algorithm implementations provided by the JDK. See the link:security/tde#master-key-generation-example[keystore master key generation example]. An additional master key can be generated using `keytool` as follows:
+
+[source,shell]
+----
+user:~/tmp:[]$ keytool \
+-storepass mypassw0rd \
+-storetype PKCS12 \
+-keystore ./ignite_keystore.jks \
+-list
+
+Keystore type: PKCS12
+Keystore provider: SunJSSE
+
+Your keystore contains 1 entry
+
+ignite.master.key, 15.01.2019, SecretKeyEntry,
+
+
+user:~/tmp:[]$ keytool -genseckey \
+-alias ignite.master.key2 \
+-keystore ./ignite_keystore.jks \
+-storetype PKCS12 \
+-keyalg aes \
+-storepass mypassw0rd \
+-keysize 256
+
+
+user:~/tmp:[]$ keytool \
+-storepass mypassw0rd \
+-storetype PKCS12 \
+-keystore ./ignite_keystore.jks \
+-list
+
+Keystore type: PKCS12
+Keystore provider: SunJSSE
+
+Your keystore contains 2 entries
+
+ignite.master.key, 15.01.2019, SecretKeyEntry,
+ignite.master.key2, 15.01.2019, SecretKeyEntry,
+----
diff --git a/docs/_docs/security/sandbox.adoc b/docs/_docs/security/sandbox.adoc
new file mode 100644
index 0000000..03a5632
--- /dev/null
+++ b/docs/_docs/security/sandbox.adoc
@@ -0,0 +1,94 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= The Ignite Sandbox
+
+== Overview
+Ignite allows executing custom logic via various APIs, including compute tasks, event filters, and message listeners.
+This user-defined logic can use Java APIs to access host resources. For example, it can create, update, or delete files or system properties,
+open network connections, and use reflection and other APIs to gain full control of the host environment.
+The Ignite Sandbox is based on the link:https://docs.oracle.com/en/java/javase/11/security/java-se-platform-security-architecture.html#GUID-C203D80F-C730-45C3-AB95-D4E61FD6D89C[Java Sandbox model,window=_blank]
+and allows you to restrict the scope of user-defined logic executed via Ignite APIs.
+
+== Ignite Sandbox Activation
+
+Activating the Ignite Sandbox involves installing a `SecurityManager` and providing a
+`GridSecurityProcessor` implementation.
+
+=== Install SecurityManager
+
+Because the Ignite Sandbox is based on the Java Sandbox model, and
+link:https://docs.oracle.com/javase/8/docs/technotes/guides/security/spec/security-spec.doc6.html#a19349[SecurityManager,window=_blank]
+is an essential part of that model, you need to have it installed.
+The SecurityManager is responsible for checking which security policy is currently in effect. It also performs access control checks.
+The security manager is not automatically installed when an application is running. If you run Ignite as a separate application,
+you must invoke the Java Virtual Machine with the `-Djava.security.manager` command-line argument (which sets the value of the `java.security.manager` property).
+There is also a `-Djava.security.policy` command-line argument that defines which policy files are used.
+If you don't include `-Djava.security.policy` in the command line, the policy files specified in the security properties file are used.
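+
+For example, a standalone Java application that embeds Ignite could be launched with the security manager and a custom policy file as follows (the paths and class name are illustrative):
+
+[source,shell]
+----
+java -Djava.security.manager -Djava.security.policy=/path/to/ignite.policy -cp app.jar com.example.MyIgniteApp
+----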
+
+NOTE: It may be convenient to add the security manager and policy command-line arguments to the `$IGNITE_HOME/bin/ignite.sh|ignite.bat` script.
+
+NOTE: Ignite itself should have enough permissions to work correctly.
+The most straightforward approach is to grant Ignite the `java.security.AllPermission` permission,
+but keep in mind the security principle of granting as few permissions as possible.
+
+=== Provide GridSecurityProcessor Implementation
+
+Currently, Apache Ignite does not provide an implementation of the `GridSecurityProcessor` interface out of the box.
+However, you can implement this interface as part of link:/docs/plugins[a custom plugin].
+
+The `GridSecurityProcessor` interface has a `sandboxEnabled` method that controls whether user-defined code is executed inside the Ignite Sandbox.
+By default, this method returns `false`, which means the sandbox is disabled.
+If you are going to use the Ignite Sandbox, your overridden `sandboxEnabled` method must return `true`.
+
+If the Ignite Sandbox is turned on, you can see the following trace line:
+[source,text]
+----
+[INFO] Security status [authentication=on, sandbox=on, tls/ssl=off]
+----
+
+== Permissions
+
+User-defined code is always executed on behalf of the security subject that initiates its execution.
+The security subject's sandbox link:https://docs.oracle.com/en/java/javase/11/security/java-se-platform-security-architecture.html#GUID-DEA8EAB1-CF00-4658-AA6D-D2C9754C8B37[permissions,window=_blank]
+define the actions that user-defined code can perform.
+The Ignite Sandbox retrieves those permissions using the `SecuritySubject#sandboxPermissions` method.
+
+NOTE: User-defined code running inside the Ignite Sandbox may use the public API of Ignite without any additional permissions being granted.
+
+If a security subject doesn't have enough permissions to perform a security-sensitive operation,
+an `AccessControlException` is thrown.
+
+[source,java]
+----
+// Get compute instance over all nodes in the cluster.
+IgniteCompute compute = Ignition.ignite().compute();
+
+compute.broadcast(() -> {
+    // If the Ignite Sandbox is turned on, the lambda code is executed with restrictions.
+
+    // You can use the public API of Ignite without granting any permissions.
+    Ignition.localIgnite().cache("some.cache").get("key");
+
+    // If the current security subject doesn't have the java.util.PropertyPermission("secret.property", "read") permission,
+    // a java.security.AccessControlException appears here.
+    System.getProperty("secret.property");
+});
+----
+
+When the code accesses the system property as shown in the snippet above, the following exception appears in the trace:
+[source,text]
+----
+java.security.AccessControlException: access denied ("java.util.PropertyPermission" "secret.property" "read")
+----
diff --git a/docs/_docs/security/ssl-tls.adoc b/docs/_docs/security/ssl-tls.adoc
new file mode 100644
index 0000000..b56b209
--- /dev/null
+++ b/docs/_docs/security/ssl-tls.adoc
@@ -0,0 +1,225 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= SSL/TLS
+
+:javaFile: {javaCodeDir}/Security.java
+:xmlFile: code-snippets/xml/ssl.xml
+
+This page explains how to configure SSL/TLS encryption between cluster nodes (both server and client nodes) and thin clients that connect to your cluster.
+
+== Considerations
+
+To ensure a sufficient level of security, we recommend that each node (server or client) have its own unique certificate in the node's keystore (including the private key).
+This certificate must be trusted by all other server nodes.
+//This configuration allows for an easier certificate replacement procedure (for example when they are expired).
+
+
+== SSL/TLS for Nodes
+
+To enable SSL/TLS for cluster nodes, configure an `SSLContext` factory in the node configuration.
+You can use the `org.apache.ignite.ssl.SslContextFactory`, which is the default factory that uses a configurable keystore to initialize the SSL context.
+//You can also implement your own `SSLContext` factory.
+
+[CAUTION]
+====
+Ensure that your version of the JVM addresses
+link:https://bugs.openjdk.java.net/browse/JDK-8219658[the following issue, window=_blank] that can cause deadlocks
+in SSL connections. If your JVM is affected but can't be updated, then set
+the link:clustering/network-configuration[`TcpDiscoverySpi.soLinger`] parameter to a non-negative value.
+====
+
+Below is an example of `SslContextFactory` configuration:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::{xmlFile}[tags=ignite-config;!discovery,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=ssl-context-factory,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+
+----
+tab:C++[unsupported]
+--
+
+The keystore must contain the node's certificate, including its private key.
+The trust store must contain the trusted certificates for all other cluster nodes.
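+
+For example, a self-signed certificate for testing could be generated and distributed with `keytool` (the aliases, file names, and passwords are illustrative):
+
+[source,shell]
+----
+# Generate a key pair and certificate for the node.
+keytool -genkeypair -alias node1 -keyalg RSA -keysize 2048 \
+  -keystore node1-keystore.jks -storepass changeit -dname "CN=node1"
+
+# Export the node's certificate.
+keytool -exportcert -alias node1 -keystore node1-keystore.jks \
+  -storepass changeit -file node1.cer
+
+# Import the certificate into the trust store used by the other nodes.
+keytool -importcert -alias node1 -file node1.cer \
+  -keystore truststore.jks -storepass changeit -noprompt
+----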
+
+You can define other properties, such as key algorithm, key store type, or trust manager. See the description of the properties in the <<SslContextFactory Properties>> section.
+
+After starting the node, you should see the following messages in the logs:
+
+
+[source, text,subs="verbatim,quotes"]
+----
+Security status [authentication=off, *tls/ssl=on*]
+----
+
+
+
+////
+== SSL/TLS for Thin Clients
+
+To enable SSL/TLS for thin clients, refer to the link:thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[thin client documentation].
+////
+
+== SSL/TLS for Thin Clients and JDBC/ODBC [[ssl-for-clients]]
+
+Ignite uses the same SSL/TLS properties for all clients, including thin clients and JDBC/ODBC connections. The properties are configured within the client connector configuration.
+The client connector configuration is defined via the `IgniteConfiguration.clientConnectorConfiguration` property.
+
+To enable SSL/TLS for client connections, set the `sslEnabled` property to `true` and provide an `SslContextFactory` in the client connector configuration.
+You can re-use the <<SSL/TLS for Nodes,SSLContextFactory configured for nodes>>, or you can configure an SSLContext factory that will be used for client connections only.
+
+Then, configure SSL on the client side in the same way. Refer to the specific client documentation for details.
+
+Here is an example configuration that sets `SslContextFactory` for client connection:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/thin-client-cluster-config.xml[tag=ssl-configuration,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/JavaThinClient.java[tag=cluster-ssl-configuration,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ThinClient.cs[tag=ssl,indent=0]
+----
+tab:C++[unsupported]
+--
+
+If you want to re-use the SSLContext factory configured for nodes, you only need to set the `sslEnabled` property to `true`; `ClientConnectorConfiguration` will then look for the SSLContext factory configured in `IgniteConfiguration`:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<property name="clientConnectorConfiguration">
+    <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
+        <property name="sslEnabled" value="true"/>
+    </bean>
+</property>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/JavaThinClient.java[tag=use-global-ssl,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+== Disabling Certificate Validation
+
+In some cases, it is useful to disable certificate validation, for example when connecting to a server with a self-signed certificate.
+This can be achieved by using a disabled trust manager, which can be obtained by calling the `SslContextFactory.getDisabledTrustManager()` method.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/ssl-without-validation.xml[tags=ignite-config;!discovery,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=disable-validation,indent=0]
+----
+--
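+
+In outline, disabling validation is a single extra call on the factory; a minimal Java sketch (the keystore path and password are placeholders):
+
+[source, java]
+----
+SslContextFactory factory = new SslContextFactory();
+
+factory.setKeyStoreFilePath("keystore/node.jks");
+factory.setKeyStorePassword("123456".toCharArray());
+
+// Disable certificate validation, e.g. for self-signed certificates.
+factory.setTrustManagers(SslContextFactory.getDisabledTrustManager());
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setSslContextFactory(factory);
+----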
+
+== Upgrading Certificates
+If your SSL certificates are about to expire or have been compromised, you can install new certificates without shutting down the whole cluster.
+
+The following procedure describes how to update certificates:
+
+. First, make sure the new certificates are trusted by all cluster nodes.
+This step may not be necessary if your trust stores contain the root certificate and the new certificates are signed by the same CA.
++
+--
+Repeat the following procedure for the nodes where the certificate is not trusted:
+
+.. Import the new certificate to the trusted store of the node.
+.. Gracefully restart the node.
+.. Repeat these steps for all server nodes.
+
+Now all nodes trust the new certificates.
+--
+
+. Import the new certificate (including the private key) to the key store of the corresponding node and remove the old certificate. Then gracefully restart the node. Repeat this procedure for all certificates you want to update.
+
+
+//Otherwise, first you have to push the trust store to all nodes one by one. It will contain trusts for both new and old certificates while you transition.
+
+== SslContextFactory Properties
+
+`SslContextFactory` supports the following properties:
+
+[width="100%", cols="30%, 60%, 10%"]
+|=================
+| Property | Description | Default
+
+|`keyAlgorithm`
+|The key manager algorithm that will be used to create a key manager.
+|`SunX509`
+
+|`keyStoreFilePath`
+|The path to the key store file. This is a mandatory parameter since the SSL context cannot be initialized without a key manager.
+|`N/A`
+
+|`keyStorePassword`
+|The key store password.
+|`N/A`
+
+|`keyStoreType`
+|The key store type.
+|`JKS`
+
+|`protocol`
+|The protocol for secure transport. https://docs.oracle.com/en/java/javase/11/docs/specs/security/standard-names.html#sslcontext-algorithms[Supported algorithms,window=_blank].
+|`TLS`
+
+|`trustStoreFilePath`
+|The path to the trust store file.
+|`N/A`
+
+|`trustStorePassword`
+|The trust store password.
+|`N/A`
+
+|`trustStoreType`
+|The trust store type.
+|`JKS`
+
+|`trustManagers`
+|A list of pre-configured trust managers.
+|`N/A`
+|=================
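+
+As an illustration of these properties, the following hedged Java sketch (file paths and passwords are placeholders) sets several of them programmatically:
+
+[source, java]
+----
+SslContextFactory factory = new SslContextFactory();
+
+// These three values match the defaults listed in the table above.
+factory.setKeyAlgorithm("SunX509");
+factory.setKeyStoreType("JKS");
+factory.setProtocol("TLS");
+
+factory.setKeyStoreFilePath("keystore/node.jks");
+factory.setKeyStorePassword("123456".toCharArray());
+
+factory.setTrustStoreFilePath("keystore/trust.jks");
+factory.setTrustStorePassword("123456".toCharArray());
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setSslContextFactory(factory);
+----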
diff --git a/docs/_docs/security/tde.adoc b/docs/_docs/security/tde.adoc
new file mode 100644
index 0000000..3f8250f
--- /dev/null
+++ b/docs/_docs/security/tde.adoc
@@ -0,0 +1,142 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Transparent Data Encryption
+
+WARNING: This feature is in beta and not recommended for use in production environments.
+
+== Overview
+Transparent data encryption (TDE) allows users to encrypt their data at rest.
+
+When link:persistence/native-persistence[Ignite persistence] is turned on, encryption can be enabled per cache/table, in which case the following data will be encrypted:
+
+- Data on disk
+- WAL records
+
+If you enable cache/table encryption, Ignite will generate a key (called _cache encryption key_) and will use this key to encrypt/decrypt the data in the cache.
+The cache encryption key is held in the system cache and cannot be accessed by users.
+When the cache encryption key is sent to other nodes or saved to disk (when the node goes down), it is encrypted using the _master key_.
+The master key must be specified by the user in the configuration.
+
+The _same_ master key must be specified via the configuration in every server node. One way to ensure you're using the same key is to copy the JKS file from one node to the other nodes. If you try to enable TDE with different keys, the nodes holding a different key will not be able to join the cluster (they are rejected because the key digests differ).
+
+Ignite uses JDK-provided encryption algorithms: "AES/CBC/PKCS5Padding" to encrypt WAL records and "AES/CBC/NoPadding" to encrypt data pages on disk. To learn more about implementation details, see link:{githubUrl}/modules/core/src/main/java/org/apache/ignite/spi/encryption/keystore/KeystoreEncryptionSpi.java[KeystoreEncryptionSpi, window=_blank].
+
+== Limitations
+
+Transparent Data Encryption has some limitations that you should be aware of before deploying it in your production environment.
+
+*Encryption*
+
+* No option to change the encryption key at runtime.
+* No option to encrypt/decrypt existing caches/tables.
+
+*Snapshots and Recovery*
+
+* No support for snapshots. Snapshots are not encrypted and it's not possible to recover from a snapshot that includes an encrypted table or cache.
+
+== Configuration
+To enable encryption in the cluster, provide a master key in the configuration of each server node. A configuration example is shown below.
+
+
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+include::code-snippets/xml/tde.xml[tags=ignite-config;!discovery, indent=0]
+
+----
+
+tab:Java[]
+
+[source, java]
+----
+include::{javaCodeDir}/TDE.java[tags=config, indent=0]
+
+----
+
+tab:C#/.NET[]
+tab:C++[unsupported]
+--
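+
+In outline, the Java configuration amounts to the following hedged sketch (keystore path and password are placeholders, and should match the keystore generated as shown later on this page):
+
+[source, java]
+----
+KeystoreEncryptionSpi encSpi = new KeystoreEncryptionSpi();
+
+// Keystore that holds the master key (see the generation example below).
+encSpi.setKeyStorePath("/path/to/ignite_keystore.jks");
+encSpi.setKeyStorePassword("mypassw0rd".toCharArray());
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setEncryptionSpi(encSpi);
+----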
+
+
+When the master key is configured, you can enable encryption for a cache as follows:
+
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+include::code-snippets/xml/tde.xml[tags=cache, indent=0]
+----
+
+tab:Java[]
+
+[source, java]
+----
+include::{javaCodeDir}/TDE.java[tags=cache, indent=0]
+
+----
+
+tab:SQL[]
+[source,sql]
+----
+CREATE TABLE encrypted(
+  ID BIGINT,
+  NAME VARCHAR(10),
+  PRIMARY KEY (ID))
+WITH "ENCRYPTED=true";
+----
+
+--
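+
+In Java, the flag is a single cache configuration property; a hedged sketch (the cache name is an example):
+
+[source, java]
+----
+CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("encryptedCache");
+
+// Encrypt this cache's data on disk and in the WAL.
+ccfg.setEncryptionEnabled(true);
+----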
+
+
+== Master Key Generation Example
+A keystore with a master key can be created using `keytool` as follows:
+
+.Master Key Generation Example
+[source,shell]
+----
+user:~/tmp:[]$ java -version
+java version "1.8.0_161"
+Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
+Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
+
+user:~/tmp:[]$ keytool -genseckey \
+-alias ignite.master.key \
+-keystore ./ignite_keystore.jks \
+-storetype PKCS12 \
+-keyalg aes \
+-storepass mypassw0rd \
+-keysize 256
+
+user:~/tmp:[]$ keytool \
+-storepass mypassw0rd \
+-storetype PKCS12 \
+-keystore ./ignite_keystore.jks \
+-list
+
+Keystore type: PKCS12
+Keystore provider: SunJSSE
+
+Your keystore contains 1 entry
+
+ignite.master.key, 07.11.2018, SecretKeyEntry,
+----
+
+== Source Code Example
+link:{githubUrl}/examples/src/main/java/org/apache/ignite/examples/encryption/EncryptedCacheExample.java[EncryptedCacheExample.java, window=_blank]
diff --git a/docs/_docs/services/services.adoc b/docs/_docs/services/services.adoc
new file mode 100644
index 0000000..1823473
--- /dev/null
+++ b/docs/_docs/services/services.adoc
@@ -0,0 +1,267 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Services
+
+:javaFile: {javaCodeDir}/services/ServiceExample.java
+
+== Overview
+A service is a piece of functionality that can be deployed to an Ignite cluster and execute specific operations.
+You can have multiple instances of a service on one or multiple nodes.
+
+Ignite services have the following features:
+
+Load balancing::
+In all cases other than singleton service deployment, Ignite automatically makes sure that approximately the same number of service instances is deployed on each node within the cluster. Whenever the cluster topology changes, Ignite re-evaluates service deployments and may re-deploy an already deployed service to another node for better load balancing.
+
+Fault tolerance::
+Ignite guarantees that services are continuously available and are deployed according to the specified configuration, regardless of topology changes or node crashes.
+
+Hot Redeployment::
+You can use Ignite's `DeploymentSpi` configuration to re-deploy services without restarting the cluster. See <<Re-deploying Services>>.
+
+
+Ignite services can be used as the backbone of a microservices-based solution or application. Learn more about this use case from the following series of articles:
+
+* link:https://dzone.com/articles/running-microservices-on-top-of-in-memory-data-gri[Microservices on Top of Apache Ignite - Part I^]
+* link:https://dzone.com/articles/running-microservices-on-top-of-in-memory-data-gri-1[Microservices on Top of Apache Ignite - Part II^]
+* link:https://dzone.com/articles/microservices-on-top-of-an-in-memory-data-grid-par[Microservices on Top of Apache Ignite - Part III^]
+
+Refer to a service example implementation in the Apache Ignite link:{githubUrl}/examples/src/main/java/org/apache/ignite/examples/servicegrid[code base^].
+
+== Implementing a Service
+
+A service implements the javadoc:org.apache.ignite.services.Service[Service] interface.
+The `Service` interface has three methods:
+
+* `init(ServiceContext)`: this method is called by Ignite before the service is deployed (and before the `execute()` method is called)
+* `execute(ServiceContext)`: starts execution of the service
+* `cancel(ServiceContext)`: cancels service execution
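+
+Here is a minimal sketch of a service implementation (the class, interface, and counter logic are illustrative, not part of the Ignite API):
+
+[source, java]
+----
+// Hypothetical interface through which clients call the service.
+public interface MyCounter {
+    int increment();
+}
+
+public class MyCounterService implements MyCounter, Service {
+    private AtomicInteger counter;
+
+    @Override public void init(ServiceContext ctx) {
+        // Called before the service is deployed; initialize resources here.
+        counter = new AtomicInteger();
+    }
+
+    @Override public void execute(ServiceContext ctx) {
+        // Main service logic; for request-driven services this can be a no-op.
+    }
+
+    @Override public void cancel(ServiceContext ctx) {
+        // Called when the service is undeployed; release resources here.
+    }
+
+    @Override public int increment() {
+        return counter.incrementAndGet();
+    }
+}
+----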
+
+//The service must be available in the classpash of all server nodes. *TODO: deployment options*
+
+//Each service is associated with a name.
+//Deploy the service at runtime by calling one of the `IgniteServices.deploy...` methods.
+//Or, specify the service in the node configuration, in which case the service is deployed when the node starts.
+
+== Deploying Services
+
+You can deploy your service either programmatically at runtime, or by providing a service configuration as part of the node configuration.
+In the latter case, the service is deployed when the cluster starts.
+
+=== Deploying Services at Runtime
+
+You can deploy services at runtime via an instance of `IgniteServices`, which can be obtained from an instance of `Ignite` by calling the `Ignite.services()` method.
+
+The `IgniteServices` interface has a number of methods for deploying services:
+
+* `deploy(ServiceConfiguration)` deploys a service defined by a given configuration.
+* `deployNodeSingleton(...)` ensures that an instance of the service is running on each server node.
+* `deployClusterSingleton(...)` deploys a single instance of the service per cluster. If the cluster node on which the service is deployed stops, Ignite automatically redeploys the service on another node.
+* `deployKeyAffinitySingleton(...)` deploys a single instance of the service on the primary node for a given cache key.
+* `deployMultiple(...)` deploys the given number of instances of the service.
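+
+For instance, a hedged sketch of deploying the hypothetical `MyCounterService` shown earlier with a fixed number of instances:
+
+[source, java]
+----
+IgniteServices services = ignite.services();
+
+// Deploy 4 instances in total, at most 1 per node.
+services.deployMultiple("myCounterService", new MyCounterService(), 4, 1);
+----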
+
+
+This is an example of cluster singleton deployment:
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=start-with-method, indent=0]
+----
+tab:C#/.NET[]
+tab:C++[]
+--
+
+
+And here is how to deploy a cluster singleton using `ServiceConfiguration`:
+[tabs]
+--
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=start-with-service-config, indent=0]
+----
+
+tab:C#/.NET[]
+
+tab:C++[]
+--
+
+
+=== Deploying Services at Node Startup
+
+You can specify your service as part of the node configuration and start the service together with the node.
+If your service is a node singleton, the service is started on each node of the cluster.
+If the service is a cluster singleton, it is started on the first cluster node and is redeployed to one of the other nodes if the first node terminates.
+The service must be available on the classpath of each node.
+
+Below is an example of configuring a cluster singleton service:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::code-snippets/xml/services.xml[tags=ignite-config;!discovery, indent=0]
+----
+
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=service-configuration, indent=0]
+----
+
+tab:C#/.NET[]
+
+tab:C++[]
+--
+
+
+== Deploying to a Subset of Nodes
+
+When you obtain the `IgniteServices` interface by calling `ignite.services()`, the `IgniteServices` instance is associated with all server nodes.
+It means that Ignite chooses where to deploy the service from the set of all server nodes.
+You can change the set of nodes considered for service deployment by using the approaches described below.
+
+=== Cluster Singleton
+
+A cluster singleton is a deployment strategy where there is only one instance of the service in the cluster, and Ignite guarantees that the instance is always available.
+In case the cluster node on which the service is deployed crashes or stops, Ignite automatically redeploys the instance to another node.
+
+=== ClusterGroup
+
+You can use the `ClusterGroup` interface to deploy services to a subset of nodes.
+If the service is a node singleton, the service is deployed on all nodes from the subset.
+If the service is a cluster singleton, it is deployed on one of the nodes from the subset.
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=deploy-with-cluster-group, indent=0]
+----
+tab:C#/.NET[]
+
+tab:C++[]
+--
+
+
+=== Node Filter
+
+You can use node attributes to define the subset of nodes meant for service deployment.
+This is achieved by using a node filter.
+A node filter is an `IgnitePredicate<ClusterNode>` that Ignite calls for each node associated with the `IgniteServices` instance.
+If the predicate returns `true` for a given node, the node is included.
+
+CAUTION: The class of the node filter must be present in the classpath of all nodes.
+
+Here is an example of a node filter.
+The filter includes the server nodes that have the "west.coast.node" attribute.
+
+[source, java]
+----
+include::{javaFile}[tags=node-filter, indent=0]
+----
+
+Deploy the service using the node filter:
+[source, java]
+----
+include::{javaFile}[tags=deploy-with-node-filter, indent=0]
+
+----
+
+=== Cache Key
+Affinity-based deployment allows you to deploy a service to the primary node for a specific key in a specific cache.
+Refer to the link:data-modeling/affinity-collocation[Affinity Colocation] section for details.
+For an affinity-based deployment, specify the desired cache and key in the service configuration.
+The cache does not have to contain the key. The node is determined by the affinity function.
+If the cluster topology changes in a way that the key is re-assigned to another node, the service is redeployed to that node as well.
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=deploy-by-key, indent=0]
+
+----
+tab:C#/.NET[]
+tab:C++[]
+--
+
+== Accessing Services
+
+You can access the service at runtime via a service proxy.
+Proxies can be either _sticky_ or _non-sticky_.
+A sticky proxy always connects to the same cluster node to access a remotely deployed service.
+A non-sticky proxy load-balances remote service invocations among all cluster nodes on which the service is deployed.
+
+The following code snippet obtains a non-sticky proxy to the service and calls a service method:
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+include::{javaCodeDir}/services/ServiceExample.java[tags=access-service, indent=0]
+----
+
+tab:C#/.NET[]
+
+tab:C++[]
+--
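+
+In outline, obtaining a proxy is a single call; a hedged sketch using the hypothetical `MyCounter` interface from the earlier example:
+
+[source, java]
+----
+// 'false' requests a non-sticky proxy; pass 'true' for a sticky one.
+MyCounter svc = ignite.services().serviceProxy("myCounterService", MyCounter.class, false);
+
+svc.increment();
+----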
+
+
+//== Accessing Services from Compute Tasks
+// TODO the @ServiceResource annotation
+
+== Un-deploying Services
+
+To undeploy a service, use the `IgniteServices.cancel(serviceName)` or `IgniteServices.cancelAll()` methods.
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=undeploy, indent=0]
+----
+
+tab:C#/.NET[]
+tab:C++[]
+--
+
+== Re-deploying Services
+
+If you want to update the implementation of a service without stopping the cluster, you can do so by using Ignite's link:code-deployment/deploying-user-code[DeploymentSPI configuration].
+
+Use the following procedure to redeploy the service:
+
+. Update the JAR file(s) in the location where the service is stored (pointed to by your `UriDeploymentSpi.uriList` property; a configuration sketch is shown after this procedure). Ignite will reload the new classes after the configured update period.
+. Add the service implementation to the classpath of a client node and start the client.
+. Call the `Ignite.services().cancel()` method on the client node to stop the service.
+. Deploy the service from the client node.
+. Stop the client node.
+
+In this way, you don't have to stop the server nodes, so you don't interrupt the operation of your cluster.
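+
+For step 1, a hedged Java sketch of a `UriDeploymentSpi` setup (the URI is a placeholder):
+
+[source, java]
+----
+UriDeploymentSpi deploymentSpi = new UriDeploymentSpi();
+
+// Directory that Ignite periodically scans for updated service JARs.
+deploymentSpi.setUriList(Arrays.asList("file:///opt/ignite/deployment"));
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setDeploymentSpi(deploymentSpi);
+----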
+
+
+// TODO: add how to  call java services from .NET
+
+
diff --git a/docs/_docs/setup.adoc b/docs/_docs/setup.adoc
new file mode 100644
index 0000000..f6eb990
--- /dev/null
+++ b/docs/_docs/setup.adoc
@@ -0,0 +1,303 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Setting Up
+
+[NOTE]
+====
+[discrete]
+=== Configuring .NET, Python, Node.JS and other programming languages
+
+* .NET developers: refer to the link:net-specific/net-configuration-options[Ignite.NET Configuration] section
+* Developers of Python, Node.JS, and other programming languages: use this page to configure your
+Java-powered Ignite cluster, and refer to the link:thin-clients/getting-started-with-thin-clients[thin clients] section to set up
+your language-specific applications that work with the cluster.
+====
+
+== System Requirements
+
+Ignite was tested on:
+
+include::includes/prereqs.adoc[]
+
+== Running Ignite with Java 11 or later
+
+include::includes/java9.adoc[]
+
+
+== Using Binary Distribution
+
+* Download the appropriate binary package from https://ignite.apache.org/download.cgi[Apache Ignite Downloads^].
+* Unzip the archive into a directory.
+* (Optional) Set the `IGNITE_HOME` environment variable to point to the
+installation folder and make sure there is no trailing `/` in the path.
+
+== Using Maven
+
+The easiest way to use Ignite is to add it to your pom.xml file:
+
+[source, xml,subs="attributes,specialchars" ]
+----
+
+<properties>
+    <ignite.version>{version}</ignite.version>
+</properties>
+
+<dependencies>
+    <dependency>
+        <groupId>org.apache.ignite</groupId>
+        <artifactId>ignite-core</artifactId>
+        <version>${ignite.version}</version>
+    </dependency>
+</dependencies>
+----
+
+The `ignite-core` library contains the core functionality of Ignite.
+Additional functionality is provided by various Ignite modules.
+
+The following are the two most commonly used modules:
+
+
+* `ignite-spring` (support for link:understanding-configuration#spring-xml-configuration[XML-based configuration])
+* `ignite-indexing` (support for SQL indexing)
+
+
+[source, xml]
+----
+<dependency>
+    <groupId>org.apache.ignite</groupId>
+    <artifactId>ignite-spring</artifactId>
+    <version>${ignite.version}</version>
+</dependency>
+<dependency>
+    <groupId>org.apache.ignite</groupId>
+    <artifactId>ignite-indexing</artifactId>
+    <version>${ignite.version}</version>
+</dependency>
+
+----
+
+
+
+
+== Using Docker
+
+If you want to run Ignite in Docker, refer to the link:installation/installing-using-docker[Docker Deployment] section.
+
+== Configuring Work Directory
+
+Ignite uses a work directory to store your application data (if you use the link:persistence/native-persistence[Native Persistence] feature), index files, metadata information, logs, and other files. The default work directory is as follows:
+
+* `$IGNITE_HOME/work`, if the `IGNITE_HOME` system property is defined. This is the case when you start Ignite using the `bin/ignite.sh` script from the distribution package.
+* `./ignite/work` if `IGNITE_HOME` is not set. This path is relative to the directory where you launch your application.
+
+There are several ways you can change the default work directory:
+
+. As an environment variable:
++
+[source, shell]
+----
+export IGNITE_WORK_DIR=/path/to/work/directory
+----
+
+. In the node configuration:
++
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+    <property name="workDirectory" value="/path/to/work/directory"/>
+    <!-- other properties -->
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/UnderstandingConfiguration.java[tag=dir,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UnderstandingConfiguration.cs[tag=SettingWorkDir,indent=0]
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/setting_work_directory.cpp[tag=setting-work-directory,indent=0]
+----
+--
+
+
+
+== Enabling Modules
+
+Ignite ships with a number of modules that provide various
+functionality. You can enable modules one by one, as required.
+
+All modules are included in the binary distribution, but by default they
+are disabled (except for the `ignite-core`, `ignite-spring`, and
+`ignite-indexing` modules). Modules can be found in the `lib/optional`
+directory of the distribution package (each module is located in a
+separate sub-directory).
+
+Depending on how you use Ignite, you can enable modules using one of
+the following methods:
+
+* If you use the binary distribution, move the
+`lib/optional/{module-dir}` to the `lib` directory before starting the
+node.
+* Add libraries from `lib/optional/{module-dir}` to the classpath of
+your application.
+* Add a module as a Maven dependency to your project.
++
+[source, xml]
+----
+<dependency>
+    <groupId>org.apache.ignite</groupId>
+    <artifactId>ignite-log4j2</artifactId>
+    <version>${ignite.version}</version>
+</dependency>
+----
+
+
+The following modules have LGPL dependencies and, therefore, can't be deployed on the Maven Central repository:
+
+* ignite-hibernate
+* ignite-geospatial
+* ignite-schedule
+
+To use these modules, you need to build them from source and add them to your project.
+For example, to install `ignite-hibernate` into your local repository, run this command in the Ignite source package:
+
+
+[source, shell]
+----
+mvn clean install -DskipTests -Plgpl -pl modules/hibernate -am
+----
+
+
+
+The following modules are available:
+
+[width="100%",cols="1,2",options="header",]
+|=======================================================================
+|Module’s artifactId |Description
+|ignite-aop | The Ignite AOP module provides the ability to turn any Java method into a distributed closure by
+adding the `@Gridify` annotation to it.
+
+|ignite-aws |Cluster discovery on AWS S3. Refer to link:clustering/discovery-in-the-cloud#amazon-s3-ip-finder[Amazon S3 IP Finder] for details.
+
+
+|ignite-cassandra-serializers | The Ignite Cassandra Serializers module provides additional serializers to store objects as BLOBs in Cassandra. The module can be used in conjunction with the Ignite Cassandra Store module.
+
+|ignite-cassandra-store | Ignite Cassandra Store provides a CacheStore implementation backed by the  Cassandra database.
+
+|ignite-cloud | Ignite Cloud provides Apache jclouds implementations of the IP finder for TCP discovery.
+
+
+|ignite-direct-io | Ignite Direct IO is a plugin that provides a page store with the ability to write and read cache partitions in O_DIRECT mode.
+
+|ignite-gce | Ignite GCE provides Google Cloud Storage based implementations of IP finder for TCP discovery.
+
+|ignite-indexing | link:SQL/indexes[SQL querying and indexing]
+
+|ignite-jcl |Support for the Jakarta Commons Logging (JCL) framework.
+
+|ignite-jta |Integration of Ignite transactions with JTA.
+
+|ignite-kafka | Ignite Kafka Streamer provides capability to stream data from Kafka to Ignite caches.
+
+|ignite-kubernetes | The Ignite Kubernetes module provides a TCP Discovery IP Finder that uses a dedicated Kubernetes service for IP address lookup of Ignite pods containerized by Kubernetes.
+
+|ignite-log4j |Support for Log4j
+
+|ignite-log4j2 |Support for Log4j2
+
+
+|ignite-ml | Ignite ML Grid provides machine learning features and relevant data structures and methods of linear algebra, including on-heap and off-heap, dense and sparse, local and distributed implementations.
+Refer to the link:machine-learning/ml[Machine Learning] documentation for details.
+
+|ignite-osgi | This module provides bridging components to make Ignite run seamlessly inside an OSGi container such as Apache Karaf.
+
+|ignite-osgi-karaf | This module contains a feature repository to facilitate installing Ignite into an Apache Karaf container.
+
+|ignite-osgi-paxlogging a|
+This module is an OSGi fragment that exposes the following packages from the Pax Logging API bundle:
+
+- org.apache.log4j.varia
+- org.apache.log4j.xml
+
+These packages are required when installing the ignite-log4j bundle, and are not exposed by default
+by the Pax Logging API - the logging framework used by Apache Karaf.
+
+|ignite-rest-http | Ignite REST-HTTP starts a Jetty-based server within a node that can be used to execute tasks and/or cache commands in the grid using HTTP-based link:restapi[RESTful APIs].
+
+|ignite-scalar | The Ignite Scalar module provides Scala-based DSL with extensions and shortcuts for Ignite API.
+
+|ignite-scalar_2.10 | Ignite Scalar module that supports Scala 2.10
+
+|ignite-schedule | This module provides functionality for scheduling jobs locally using UNIX cron-based syntax.
+
+|ignite-slf4j | Support for link:logging#using-slf4j[SLF4J logging framework].
+
+|ignite-spark | This module provides an implementation of Spark RDD abstraction that enables easy access to Ignite caches.
+
+|ignite-spring-data | Ignite Spring Data provides an integration with Spring Data framework.
+
+|ignite-spring-data_2.0 | Ignite Spring Data 2.0 provides an integration with Spring Data 2.0 framework.
+
+|ignite-ssh | The Ignite SSH module provides capabilities to start Ignite nodes on remote machines via SSH.
+
+|ignite-tensorflow | The Ignite TensorFlow Integration Module allows using TensorFlow with Ignite. In this scenario, Ignite serves as a data source for TensorFlow model training.
+
+|ignite-urideploy | Ignite URI Deploy module provides capabilities to deploy tasks from different sources such as File System, HTTP, or even Email.
+
+|ignite-visor-console |Open source command line management and monitoring tool
+
+|ignite-web | Ignite Web allows you to start nodes inside any web container based on servlet and servlet context listener. In addition, this module provides capabilities to cache web sessions in an Ignite cache.
+
+|ignite-zookeeper | Ignite ZooKeeper provides a TCP Discovery IP Finder that uses a ZooKeeper
+directory to discover other Ignite nodes.
+
+|=======================================================================
+
+
+== Configuration Recommendations
+
+Below are some recommended configuration tips aimed at making it easier for
+you to operate an Ignite cluster or develop applications with Ignite.
+
+=== Setting Work Directory
+
+If you are going to use either the binary distribution or Maven, you are
+encouraged to set up the work directory for Ignite.
+The work directory is used to store metadata information, index files, your application data (if you use the link:persistence/native-persistence[Native Persistence] feature), logs, and other files.
+We recommend you always set up the work directory.
+
+
+=== Recommended Logging Configuration
+
+Logs play an important role when it comes to troubleshooting and finding what went wrong. Here are a few general tips on how to manage your log files:
+
+* Start Ignite in verbose mode:
+   - If you use `ignite.sh`, specify the `-v` option.
+   - If you start Ignite from Java code, set the following system property: `IGNITE_QUIET=false`.
+* Do not store log files in the `/tmp` folder. This folder is cleared up every time the server is restarted.
+* Make sure that there is enough space available on the storage where the log files are stored.
+* Archive old log files periodically to save on storage space.
diff --git a/docs/_docs/sql-reference/aggregate-functions.adoc b/docs/_docs/sql-reference/aggregate-functions.adoc
new file mode 100644
index 0000000..7a724ea
--- /dev/null
+++ b/docs/_docs/sql-reference/aggregate-functions.adoc
@@ -0,0 +1,397 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Aggregate Functions
+
+== AVG
+
+
+[source,sql]
+----
+AVG ([DISTINCT] expression)
+----
+
+The average (mean) value. If no rows are selected, the result is `NULL`. Aggregates are only allowed in select statements. The returned value is of the same data type as the parameter.
+
+=== Parameters
+
+- `DISTINCT` - optional keyword. If present, only unique values are averaged.
+
+
+=== Examples
+Calculate the average age of players:
+
+
+[source,sql]
+----
+SELECT AVG(age) "AverageAge" FROM Players;
+----
+
+
+== BIT_AND
+
+
+[source,sql]
+----
+BIT_AND (expression)
+----
+
+The bitwise AND of all non-null values. If no rows are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+A logical AND operation is performed on each pair of corresponding bits of two binary expressions of equal length.
+
+For each pair, the result bit is 1 if both bits are 1; otherwise it is 0.
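+
+=== Example
+A hypothetical example (not from the original docs), assuming an `Items` table with an integer `flags` column:
+
+[source,sql]
+----
+SELECT item, BIT_AND(flags) FROM Items GROUP BY item;
+----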
+
+
+== BIT_OR
+
+
+[source,sql]
+----
+BIT_OR (expression)
+----
+
+The bitwise OR of all non-null values. If no rows are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+A logical OR operation is performed on each pair of corresponding bits of two binary expressions of equal length.
+
+For each pair, the result bit is 1 if at least one of the two bits is 1; otherwise it is 0.
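+
+=== Example
+A hypothetical example (not from the original docs), assuming a `Users` table with an integer `flags` column:
+
+[source,sql]
+----
+SELECT BIT_OR(flags) FROM Users;
+----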
+
+////
+== BOOL_AND
+
+[source,sql]
+----
+BOOL_AND (boolean)
+----
+
+Returns true if all expressions are true. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+=== Example
+
+[source,sql]
+----
+SELECT item, BOOL_AND(price > 10) FROM Items GROUP BY item;
+----
+
+== BOOL_OR
+
+[source,sql]
+----
+BOOL_AND  (boolean)
+----
+
+Returns true if any expression is true. If no entries​ are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+=== Example
+
+[source,sql]
+----
+SELECT BOOL_OR(CITY LIKE 'W%') FROM Users;
+----
+////
+
+== COUNT
+
+[source,sql]
+----
+COUNT (* | [DISTINCT] expression)
+----
+
+The count of all entries or of the non-null values. This method returns a long. If no entries are selected, the result is 0. Aggregates are only allowed in select statements.
+
+=== Example
+Calculate the number of players in every city:
+
+[source,sql]
+----
+SELECT city_id, COUNT(*) FROM Players GROUP BY city_id;
+----
+
+== FIRSTVALUE
+
+[source, sql]
+----
+FIRSTVALUE ([DISTINCT] <expression1>, <expression2>)
+----
+
+Returns the value of `expression1` associated with the smallest value of `expression2` for each group defined by the `group by` expression in the query.
+This function can only be used with colocated data and you have to use the `collocated` flag when executing the query.
+
+The colocated hint can be set as follows:
+
+* `SqlFieldsQuery.collocated = true` if you use the link:SQL/sql-api[SQL API] to execute queries.
+* link:SQL/JDBC/jdbc-driver#parameters[JDBC connection string parameter]
+* link:SQL/ODBC/connection-string-dsn#supported-arguments[ODBC connection string argument]
+
+
+=== Example
+The following example returns, for each company, the name of its youngest person:
+[source, sql]
+----
+select company_id, firstvalue(name, age) as youngest from person group by company_id;
+----
+
+== GROUP_CONCAT
+
+[source,sql]
+----
+GROUP_CONCAT([DISTINCT] expression || [expression || [expression ...]]
+  [ORDER BY expression [ASC|DESC], [[ORDER BY expression [ASC|DESC]]]
+  [SEPARATOR expression])
+----
+
+Concatenates strings with a separator. The default separator is a ',' (without whitespace). This method returns a string. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+The `expression` can be a concatenation of columns and strings using the `||` operator, for example: `column1 || "=" || column2`.
+
+=== Parameters
+- `DISTINCT` - filters the result set for unique sets of expressions.
+- `expression` - specifies an expression that may be a column name, a result of another function, or a math operation.
+- `ORDER BY` - orders rows by expression.
+- `SEPARATOR` - specifies a custom separator string. By default, the separator is the comma ','.
+
+NOTE: The `DISTINCT` and `ORDER BY` expressions inside the GROUP_CONCAT function are only supported if you group the results by the primary or affinity key (i.e. use `GROUP BY`). Moreover, you have to tell Ignite that your data is colocated by specifying the `collocated=true` property in the connection string or by calling `SqlFieldsQuery.setCollocated(true)` if you use the link:{javadoc_base_url}/org/apache/ignite/cache/query/SqlFieldsQuery.html#setCollocated-boolean-[Java API, window=_blank].
+
+
+=== Example
+Group all players' names in one row:
+
+
+[source,sql]
+----
+SELECT GROUP_CONCAT(name ORDER BY id SEPARATOR ', ') FROM Players;
+----
+
+
+== LASTVALUE
+
+[source, sql]
+----
+LASTVALUE ([DISTINCT] <expression1>, <expression2>)
+----
+
+Returns the value of `expression1` associated with the largest value of `expression2` for each group defined by the `group by` expression.
+This function can only be used with colocated data and you have to use the `collocated` flag when executing the query.
+
+The colocated hint can be set as follows:
+
+* `SqlFieldsQuery.collocated = true` if you use the link:SQL/sql-api[SQL API] to execute queries.
+* link:SQL/JDBC/jdbc-driver#parameters[JDBC connection string parameter]
+* link:SQL/ODBC/connection-string-dsn#supported-arguments[ODBC connection string argument]
+
+=== Example
+The following example returns, for each company, the name of its oldest person:
+
+[source, sql]
+----
+select company_id, lastvalue(name, age) as oldest from person group by company_id;
+----
+
+
+
+== MAX
+
+[source,sql]
+----
+MAX (expression)
+----
+
+Returns the highest value. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements. The returned value is of the same data type as the parameter.
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+
+=== Example
+Return the height of the tallest player:
+
+
+[source,sql]
+----
+SELECT MAX(height) FROM Players;
+----
+
+
+== MIN
+
+[source,sql]
+----
+MIN (expression)
+----
+
+Returns the lowest value. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements. The returned value is of the same data type as the parameter.
+
+
+
+=== Parameters
+- `expression` - may be a column name, the result of another function, or a math operation.
+
+=== Example
+Return the age of the youngest player:
+
+
+[source,sql]
+----
+SELECT MIN(age) FROM Players;
+----
+
+
+== SUM
+
+[source,sql]
+----
+SUM ([DISTINCT] expression)
+----
+
+Returns the sum of all values. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements. The data type of the returned value depends on the parameter data.
+
+
+=== Parameters
+- `DISTINCT` - accumulate unique values only.
+- `expression` - may be a column name, the result of another function, or a math operation.
+
+=== Example
+Get the total number of goals scored by all players:
+
+
+[source,sql]
+----
+SELECT SUM(goal) FROM Players;
+----
+
+////
+this function is not supported
+== SELECTIVITY
+
+[source,sql]
+----
+SELECTIVITY (expression)
+----
+Estimates the selectivity (0-100) of a value. The value is defined as `(100 * distinctCount / rowCount)`. The selectivity of 0 rows is 0 (unknown). Aggregates are only allowed in select statements.
+
+
+=== Parameters
+- `expression` - may be a column name.
+
+
+=== Example
+Calculate the selectivity of the `first_name` and `second_name` columns:
+
+
+[source,sql]
+----
+SELECT SELECTIVITY(first_name), SELECTIVITY(second_name) FROM Player
+  WHERE ROWNUM() < 20000;
+----
+
+
+== STDDEV_POP
+
+[source,sql]
+----
+STDDEV_POP ([DISTINCT] expression)
+----
+Returns the population standard deviation. This method returns a `double`. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+
+=== Parameters
+- `DISTINCT` - calculate unique value only.
+- `expression` - may be a column name.
+
+
+=== Example
+Calculate the standard deviation for Players' age:
+
+
+[source,sql]
+----
+SELECT STDDEV_POP(age) from Players;
+----
+
+
+== STDDEV_SAMP
+
+[source,sql]
+----
+STDDEV_SAMP ([DISTINCT] expression)
+----
+
+Calculates the sample standard deviation. This method returns a `double`. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+=== Parameters
+- `DISTINCT` - calculate unique values only.
+- `expression` - may be a column name.
+
+
+=== Example
+Calculates the sample standard deviation for Players' age:
+
+
+[source,sql]
+----
+SELECT STDDEV_SAMP(age) from Players;
+----
+
+
+== VAR_POP
+
+[source,sql]
+----
+VAR_POP ([DISTINCT] expression)
+----
+
+Calculates the _population variance_ (square of the population standard deviation). This method returns a `double`. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+
+=== Parameters
+- `DISTINCT` - calculate unique values only.
+- `expression` - may be a column name.
+
+
+=== Example
+Calculate the variance of Players' age:
+
+
+[source,sql]
+----
+SELECT VAR_POP (age) from Players;
+----
+
+
+
+== VAR_SAMP
+
+[source,sql]
+----
+VAR_SAMP ([DISTINCT] expression)
+----
+
+Calculates the _sample variance_ (square of the sample standard deviation). This method returns a `double`. If no entries are selected, the result is NULL. Aggregates are only allowed in select statements.
+
+
+=== Parameters
+- `DISTINCT` - calculate unique values only.
+- `expression` - may be a column name.
+
+
+=== Example
+Calculate the variance of Players' age:
+
+
+[source,sql]
+----
+SELECT VAR_SAMP(age) FROM Players;
+----
+////
diff --git a/docs/_docs/sql-reference/data-types.adoc b/docs/_docs/sql-reference/data-types.adoc
new file mode 100644
index 0000000..26ad378
--- /dev/null
+++ b/docs/_docs/sql-reference/data-types.adoc
@@ -0,0 +1,182 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Data Types
+
+
+This page lists the SQL data types available in Ignite, such as string, numeric, and date/time types.
+
+Every SQL type is mapped to a programming language or driver-specific type that is supported by Ignite natively.
+
+== BOOLEAN
+Possible values: TRUE and FALSE.
+
+Mapped to:
+
+- Java/JDBC: `java.lang.Boolean`
+- .NET/C#: `bool`
+- C/C++: `bool`
+- ODBC: `SQL_BIT`
+
+== BIGINT
+Possible values: [`-9223372036854775808`, `9223372036854775807`].
+
+Mapped to:
+
+- Java/JDBC: `java.lang.Long`
+- .NET/C#: `long`
+- C/C++: `int64_t`
+- ODBC: `SQL_BIGINT`
+
+== DECIMAL
+Possible values: Data type with fixed precision and scale.
+
+Mapped to:
+
+- Java/JDBC: `java.math.BigDecimal`
+- .NET/C#: `decimal`
+- C/C++: `ignite::Decimal`
+- ODBC: `SQL_DECIMAL`
+
+== DOUBLE
+Possible values: A floating point number.
+
+Mapped to:
+
+- Java/JDBC: `java.lang.Double`
+- .NET/C#: `double`
+- C/C++: `double`
+- ODBC: `SQL_DOUBLE`
+
+== INT
+Possible values: [`-2147483648`, `2147483647`].
+
+Mapped to:
+
+- Java/JDBC: `java.lang.Integer`
+- .NET/C#: `int`
+- C/C++: `int32_t`
+- ODBC: `SQL_INTEGER`
+
+== REAL
+Possible values: A single precision floating point number.
+
+Mapped to:
+
+- Java/JDBC: `java.lang.Float`
+- .NET/C#: `float`
+- C/C++: `float`
+- ODBC: `SQL_FLOAT`
+
+== SMALLINT
+Possible values: [`-32768`, `32767`].
+
+Mapped to:
+
+- Java/JDBC: `java.lang.Short`
+- .NET/C#: `short`
+- C/C++: `int16_t`
+- ODBC: `SQL_SMALLINT`
+
+== TINYINT
+Possible values: [`-128`, `127`].
+
+Mapped to:
+
+- Java/JDBC: `java.lang.Byte`
+- .NET/C#: `sbyte`
+- C/C++: `int8_t`
+- ODBC: `SQL_TINYINT`
+
+== CHAR
+Possible values: A Unicode string. This type is supported for compatibility with other databases and older applications.
+
+Mapped to:
+
+- Java/JDBC: `java.lang.String`
+- .NET/C#: `string`
+- C/C++: `std::string`
+- ODBC: `SQL_CHAR`
+
+== VARCHAR
+Possible values: A Unicode string.
+
+Mapped to:
+
+- Java/JDBC: `java.lang.String`
+- .NET/C#: `string`
+- C/C++: `std::string`
+- ODBC: `SQL_VARCHAR`
+
+== DATE
+Possible values: The date data type. The format is `yyyy-MM-dd`.
+
+Mapped to:
+
+- Java/JDBC: `java.sql.Date`
+- .NET/C#: N/A
+- C/C++: `ignite::Date`
+- ODBC: `SQL_TYPE_DATE`
+
+NOTE: Use the <<TIMESTAMP>> type instead of DATE whenever possible. The DATE type is serialized/deserialized very inefficiently, resulting in performance degradation.
+
+== TIME
+Possible values: The time data type. The format is `hh:mm:ss`.
+
+Mapped to:
+
+- Java/JDBC: `java.sql.Time`
+- .NET/C#: N/A
+- C/C++: `ignite::Time`
+- ODBC: `SQL_TYPE_TIME`
+
+== TIMESTAMP
+Possible values: The timestamp data type. The format is `yyyy-MM-dd hh:mm:ss[.nnnnnnnnn]`.
+
+Mapped to:
+
+- Java/JDBC: `java.sql.Timestamp`
+- .NET/C#: `System.DateTime`
+- C/C++: `ignite::Timestamp`
+- ODBC: `SQL_TYPE_TIMESTAMP`
+
+== BINARY
+Possible values: Represents a byte array.
+
+Mapped to:
+
+- Java/JDBC: `byte[]`
+- .NET/C#: `byte[]`
+- C/C++: `int8_t[]`
+- ODBC: `SQL_BINARY`
+
+== GEOMETRY
+Possible values: A spatial geometry type, based on the `com.vividsolutions.jts` library. Normally represented in a textual format using the WKT (well-known text) format.
+
+Mapped to:
+
+- Java/JDBC: Types from the `com.vividsolutions.jts` package.
+- .NET/C#: N/A
+- C/C++: N/A
+- ODBC: N/A
+
+== UUID
+Possible values: Universally unique identifier. This is a 128-bit value.
+
+Mapped to:
+
+- Java/JDBC: `java.util.UUID`
+- .NET/C#: `System.Guid`
+- C/C++: `ignite::Guid`
+- ODBC: `SQL_GUID`
diff --git a/docs/_docs/sql-reference/date-time-functions.adoc b/docs/_docs/sql-reference/date-time-functions.adoc
new file mode 100644
index 0000000..1f6b751
--- /dev/null
+++ b/docs/_docs/sql-reference/date-time-functions.adoc
@@ -0,0 +1,399 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Date and Time Functions
+
+
+== CURRENT_DATE
+
+[source,sql]
+----
+{CURRENT_DATE [()] | CURDATE() | SYSDATE | TODAY}
+----
+
+Returns the current date.
+When called multiple times within a transaction, this function returns the same value.
+
+Example: ::
+
+[source,sql]
+----
+CURRENT_DATE()
+----
+
+
+== CURRENT_TIME
+
+[source,sql]
+----
+{CURRENT_TIME [ () ] | CURTIME()}
+----
+
+Returns the current time.
+When called multiple times within a transaction, this function returns the same value.
+
+Example: ::
+
+[source,sql]
+----
+CURRENT_TIME()
+----
+
+
+
+== CURRENT_TIMESTAMP
+
+[source,sql]
+----
+{CURRENT_TIMESTAMP [([int])] | NOW([int])}
+----
+
+
+
+Returns the current timestamp. The optional `int` parameter specifies the precision of the fractional seconds, up to nanoseconds. This method always returns the same value within a transaction.
+
+Example: ::
+
+[source,sql]
+----
+CURRENT_TIMESTAMP()
+----
+
+
+== DATEADD
+
+[source,sql]
+----
+{DATEADD| TIMESTAMPADD} (unitString, addIntLong, timestamp)
+----
+
+
+
+Adds units to a timestamp. The string indicates the unit. Use negative values to subtract units. `addIntLong` may be a long value when manipulating milliseconds, otherwise its range is restricted to `int`. The same units as in the EXTRACT function are supported. The DATEADD method returns a timestamp. The TIMESTAMPADD method returns a long.
+
+Example: ::
+
+[source,sql]
+----
+DATEADD('MONTH', 1, DATE '2001-01-31')
+----
+
+
+== DATEDIFF
+
+[source,sql]
+----
+{DATEDIFF | TIMESTAMPDIFF} (unitString, aTimestamp, bTimestamp)
+----
+
+
+
+Returns the number of crossed unit boundaries between two timestamps. This method returns a `long`. The string indicates the unit. The same units as in the EXTRACT function are supported.
+
+Example: ::
+
+[source,sql]
+----
+DATEDIFF('YEAR', T1.CREATED, T2.CREATED)
+----
+
+
+== DAYNAME
+
+[source,sql]
+----
+DAYNAME(date)
+----
+
+
+
+Returns the name of the day (in English).
+
+Example: ::
+
+[source,sql]
+----
+DAYNAME(CREATED)
+----
+
+
+== DAY_OF_MONTH
+
+[source,sql]
+----
+DAY_OF_MONTH(date)
+----
+
+
+
+Returns the day of the month (1-31).
+
+Example: ::
+
+[source,sql]
+----
+DAY_OF_MONTH(CREATED)
+----
+
+
+== DAY_OF_WEEK
+
+[source,sql]
+----
+DAY_OF_WEEK(date)
+----
+
+
+
+Returns the day of the week (1 means Sunday).
+
+Example: ::
+
+[source,sql]
+----
+DAY_OF_WEEK(CREATED)
+----
+
+
+== DAY_OF_YEAR
+
+[source,sql]
+----
+DAY_OF_YEAR(date)
+----
+
+
+
+Returns the day of the year (1-366).
+
+Example: ::
+
+[source,sql]
+----
+DAY_OF_YEAR(CREATED)
+----
+
+
+== EXTRACT
+
+[source,sql]
+----
+EXTRACT ({EPOCH | YEAR | YY | QUARTER | MONTH | MM | WEEK | ISO_WEEK
+| DAY | DD | DAY_OF_YEAR | DOY | DAY_OF_WEEK | DOW | ISO_DAY_OF_WEEK
+| HOUR | HH | MINUTE | MI | SECOND | SS | MILLISECOND | MS
+| MICROSECOND | MCS | NANOSECOND | NS}
+FROM timestamp)
+----
+
+
+
+Returns a specific field from a timestamp. This method returns an `int`.
+
+Example: ::
+
+[source,sql]
+----
+EXTRACT(SECOND FROM CURRENT_TIMESTAMP)
+----
+
+
+== FORMATDATETIME
+
+[source,sql]
+----
+FORMATDATETIME (timestamp, formatString [,localeString [,timeZoneString]])
+----
+
+
+
+Formats a date, time, or timestamp as a string. The most important format characters are: `y` year, `M` month, `d` day, `H` hour, `m` minute, `s` second. For details about the format, see `java.text.SimpleDateFormat`. This method returns a `string`.
+
+Example: ::
+
+[source,sql]
+----
+FORMATDATETIME(TIMESTAMP '2001-02-03 04:05:06', 'EEE, d MMM yyyy HH:mm:ss z', 'en', 'GMT')
+----
+
+
+== HOUR
+
+[source,sql]
+----
+HOUR(timestamp)
+----
+
+
+
+Returns the hour (0-23) from a timestamp.
+
+Example: ::
+
+[source,sql]
+----
+HOUR(CREATED)
+----
+
+
+== MINUTE
+
+[source,sql]
+----
+MINUTE(timestamp)
+----
+
+
+
+Returns the minute (0-59) from a timestamp.
+
+Example: ::
+
+[source,sql]
+----
+MINUTE(CREATED)
+----
+
+
+== MONTH
+
+[source,sql]
+----
+MONTH(timestamp)
+----
+
+
+
+Returns the month (1-12) from a timestamp.
+
+Example: ::
+
+[source,sql]
+----
+MONTH(CREATED)
+----
+
+
+== MONTHNAME
+
+[source,sql]
+----
+MONTHNAME(date)
+----
+
+
+
+Returns the name of the month (in English).
+
+Example: ::
+
+[source,sql]
+----
+MONTHNAME(CREATED)
+----
+
+
+== PARSEDATETIME
+
+[source,sql]
+----
+PARSEDATETIME(string, formatString [, localeString [, timeZoneString]])
+----
+
+
+
+Parses a string and returns a `timestamp`. The most important format characters are: `y` year, `M` month, `d` day, `H` hour, `m` minute, `s` second. For details about the format, see `java.text.SimpleDateFormat`.
+
+Example: ::
+
+[source,sql]
+----
+PARSEDATETIME('Sat, 3 Feb 2001 03:05:06 GMT', 'EEE, d MMM yyyy HH:mm:ss z', 'en', 'GMT')
+----
+
+
+== QUARTER
+
+[source,sql]
+----
+QUARTER(timestamp)
+----
+
+
+
+Returns the quarter (1-4) from a timestamp.
+
+Example: ::
+
+[source,sql]
+----
+QUARTER(CREATED)
+----
+
+
+== SECOND
+
+[source,sql]
+----
+SECOND(timestamp)
+----
+
+
+
+Returns the second (0-59) from a timestamp.
+
+Example: ::
+
+[source,sql]
+----
+SECOND(CREATED)
+----
+
+
+== WEEK
+
+[source,sql]
+----
+WEEK(timestamp)
+----
+
+
+
+Returns the week (1-53) from a timestamp. This method uses the current system locale.
+
+Example: ::
+
+[source,sql]
+----
+WEEK(CREATED)
+----
+
+
+== YEAR
+
+[source,sql]
+----
+YEAR(timestamp)
+----
+
+
+
+Returns the year from a timestamp.
+
+Example: ::
+
+[source,sql]
+----
+YEAR(CREATED)
+----
+
diff --git a/docs/_docs/sql-reference/ddl.adoc b/docs/_docs/sql-reference/ddl.adoc
new file mode 100644
index 0000000..e55d757
--- /dev/null
+++ b/docs/_docs/sql-reference/ddl.adoc
@@ -0,0 +1,520 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Data Definition Language (DDL)
+
+:toclevels:
+
+This page covers all data definition language (DDL) commands supported by Ignite.
+
+== CREATE TABLE
+
+Create a new table and an underlying cache.
+
+[source,sql]
+----
+CREATE TABLE [IF NOT EXISTS] tableName (tableColumn [, tableColumn]...
+[, PRIMARY KEY (columnName [,columnName]...)])
+[WITH "paramName=paramValue [,paramName=paramValue]..."]
+
+tableColumn := columnName columnType [DEFAULT defaultValue] [PRIMARY KEY]
+----
+
+
+Parameters:
+
+* `tableName` - name of the table.
+* `tableColumn` - name and type of a column to be created in the new table.
+* `columnName` - name of a previously defined column.
+* `DEFAULT` - specifies a default value for the column. Only constant values are accepted.
+* `IF NOT EXISTS` - create the table only if a table with the same name does not exist.
+* `PRIMARY KEY` - specifies a primary key for the table that can consist of a single column or multiple columns.
+* `WITH` - accepts additional parameters not defined by ANSI-99 SQL:
+
+** `TEMPLATE=<cache's template name>` - case-sensitive name of a link:configuring-caches/configuration-overview#cache-templates[cache template]. A template is an instance of the `CacheConfiguration` class registered by calling `Ignite.addCacheConfiguration()`. Use predefined `TEMPLATE=PARTITIONED` or `TEMPLATE=REPLICATED` templates to create the cache with the corresponding replication mode. The rest of the parameters will be those that are defined in the `CacheConfiguration` object. By default, `TEMPLATE=PARTITIONED` is used if the template is not specified explicitly.
+** `BACKUPS=<number of backups>` - sets the number of link:configuring-caches/configuring-backups[partition backups]. If neither this nor the `TEMPLATE` parameter is set, then the cache is created with `0` backup copies.
+** `ATOMICITY=<ATOMIC | TRANSACTIONAL | TRANSACTIONAL_SNAPSHOT>` - sets link:key-value-api/transactions[atomicity mode] for the underlying cache. If neither this nor the `TEMPLATE` parameter is set, then the cache is created with the `ATOMIC` mode enabled. If `TRANSACTIONAL_SNAPSHOT` is specified, the table will link:transactions/mvcc[support transactions].
+** `WRITE_SYNCHRONIZATION_MODE=<PRIMARY_SYNC | FULL_SYNC | FULL_ASYNC>` -
+sets the write synchronization mode for the underlying cache. If neither this nor the `TEMPLATE` parameter is set, then the cache is created with `FULL_SYNC` mode enabled.
+** `CACHE_GROUP=<group name>` - specifies the link:configuring-caches/cache-groups[group name] the underlying cache belongs to.
+** `AFFINITY_KEY=<affinity key column name>` - specifies an link:data-modeling/affinity-collocation[affinity key] name which is a column of the `PRIMARY KEY` constraint.
+** `CACHE_NAME=<custom name of the new cache>` - the name of the underlying cache created by the command.
+** `DATA_REGION=<existing data region name>` - name of the link:memory-configuration/data-regions[data region] where table entries should be stored. By default, Ignite stores all the data in a default region.
+** `KEY_TYPE=<custom name of the key type>` - sets the name of the custom key type that is used from the key-value APIs in Ignite. The name should correspond to a Java, .NET, or C++ class, or it can be a random one if link:data-modeling/data-modeling#binary-object-format[BinaryObjects] is used instead of a custom class. The number of fields and their types in the custom key type has to correspond to the `PRIMARY KEY`. Refer to the <<Description>> section below for more details.
+** `VALUE_TYPE=<custom name of the value type of the new cache>` - sets the name of a custom value type that is used from the key-value and other non-SQL APIs in Ignite. The name should correspond to a Java, .NET, or C++ class, or it can be a random one if
+link:data-modeling/data-modeling#binary-object-format[BinaryObjects] is used instead of a custom class. The value type should include all the columns defined in the CREATE TABLE command except for those listed in the `PRIMARY KEY` constraint. Refer to the <<Description>> section below for more details.
+** `WRAP_KEY=<true | false>` - this flag controls whether a _single column_ `PRIMARY KEY` should be wrapped in the link:data-modeling/data-modeling#binary-object-format[BinaryObjects] format or not. By default, this flag is set to false. This flag does not have any effect on the `PRIMARY KEY` with multiple columns; it always gets wrapped regardless of the value of the parameter.
+** `WRAP_VALUE=<true | false>` - this flag controls whether a single column value of a primitive type should be wrapped in the link:data-modeling/data-modeling#binary-object-format[BinaryObjects] format or not. By default, this flag is set to true. This flag does not have any effect on the value with multiple columns; it always gets wrapped regardless of the value of the parameter. Set this parameter to false if you have a single column value and do not plan to add additional columns to the table. Note that once the parameter is set to false, you can't use the `ALTER TABLE ADD COLUMN` command for this specific table.
+
+The CREATE TABLE command creates a new Ignite cache and defines a SQL table on top of it. The cache stores the data in the form of key-value pairs while the table allows processing the data with SQL queries.
+
+The table will reside in the schema specified in the connection parameters. If no schema is specified, the PUBLIC schema will be used. See link:SQL/schemas[Schemas] for more information about schemas in Ignite.
+
+Note that the CREATE TABLE operation is synchronous and blocks the execution of other DDL commands that are issued while CREATE TABLE is still in progress. The execution of DML commands is not affected and can be performed in parallel.
+
+If you wish to access the data using the key-value APIs, then setting the `CACHE_NAME`, `KEY_TYPE`, and `VALUE_TYPE` parameters may be useful for the following reasons:
+
+- When the CREATE TABLE command is executed, the name of the cache is generated in the following format: `SQL_{SCHEMA_NAME}_{TABLE}`. Use the `CACHE_NAME` parameter to override the default name.
+- Additionally, the command creates two new binary types - for the key and value respectively. Ignite generates the type names randomly, including a UUID string, which complicates using these types from non-SQL APIs. Use `KEY_TYPE` and `VALUE_TYPE` to override the names with custom ones corresponding to your business model objects.
+
+Read more about the database architecture on the link:SQL/sql-introduction[SQL Introduction] page.
+
+
+Examples:
+
+Create Person table:
+
+[source,sql]
+----
+CREATE TABLE IF NOT EXISTS Person (
+  id int,
+  city_id int,
+  name varchar,
+  age int,
+  company varchar,
+  PRIMARY KEY (id, city_id)
+) WITH "template=partitioned,backups=1,affinity_key=city_id, key_type=PersonKey, value_type=MyPerson";
+----
+
+Once the CREATE TABLE command gets executed, the following happens:
+
+- A new distributed cache is created and named SQL_PUBLIC_PERSON. This cache stores objects of the `Person` type that corresponds to a specific Java, .NET, C++ class or BinaryObject. Furthermore, the key type (`PersonKey`) and value type (`MyPerson`) are defined explicitly assuming the data is to be processed by key-value and other non-SQL APIs.
+- A SQL table with all the specified parameters will be defined in the schema.
+- The data will be stored in the form of key-value pairs. The `PRIMARY KEY` columns will be used as the object's key; the rest of the columns will belong to the value.
+- Distributed cache related parameters are passed in the `WITH` clause of the statement. If the `WITH` clause is omitted, then the cache will be created with default parameters set in the CacheConfiguration object.
+
+The example below shows how to create the same table with `PRIMARY KEY` specified in the column definition and override some cache-related parameters:
+
+[source,sql]
+----
+CREATE TABLE Person (
+  id int PRIMARY KEY,
+  city_id int,
+  name varchar,
+  age int,
+  company varchar
+) WITH "atomicity=transactional,cachegroup=somegroup";
+----
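+
+A column default (the `DEFAULT` clause described above) can also be declared inline. A minimal sketch; the table and default value are illustrative:
+
+[source,sql]
+----
+CREATE TABLE City (
+  id int PRIMARY KEY,
+  name varchar,
+  population int DEFAULT 0
+);
+----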
+
+
+== ALTER TABLE
+
+Modify the structure of an existing table.
+
+[source,sql]
+----
+ALTER TABLE [IF EXISTS] tableName {alter_specification}
+
+alter_specification:
+    ADD [COLUMN] {[IF NOT EXISTS] tableColumn | (tableColumn [,...])}
+  | DROP [COLUMN] {[IF EXISTS] columnName | (columnName [,...])}
+  | {LOGGING | NOLOGGING}
+
+tableColumn := columnName columnType
+----
+
+[NOTE]
+====
+[discrete]
+=== Scope of ALTER TABLE
+Presently, Ignite only supports addition and removal of columns.
+====
+
+Parameters:
+
+- `tableName` - the name of the table.
+- `tableColumn` - the name and type of the column to be added to the table.
+- `columnName` - the name of the column to be added or removed.
+- `IF EXISTS` - if applied to TABLE, do not throw an error if a table with the specified table name does not exist. If applied to COLUMN, do not throw an error if a column with the specified name does not exist.
+- `IF NOT EXISTS` - do not throw an error if a column with the same name already exists.
+- `LOGGING` - enable link:persistence/native-persistence#write-ahead-log[write-ahead logging] for the table. Write-ahead logging is enabled by default. The command is relevant only if Ignite persistence is used.
+- `NOLOGGING` - disable write-ahead logging for the table. The command is relevant only if Ignite persistence is used.
+
+
+`ALTER TABLE ADD` adds a new column or several columns to a previously created table. Once a column is added, it can be accessed using link:sql-reference/dml[DML commands] and indexed with the <<CREATE INDEX>> statement.
+
+`ALTER TABLE DROP` removes an existing column or multiple columns from a table. Once a column is removed, it cannot be accessed within queries. Consider the following notes and limitations:
+
+- The command does not remove actual data from the cluster, which means that if the column 'name' is dropped, the value of 'name' is still stored in the cluster. This limitation is to be addressed in future releases.
+- If the column was indexed, the index has to be dropped manually using the 'DROP INDEX' command.
+- It is not possible to remove a column that is a primary key or a part of such a key.
+- It is not possible to remove a column if it represents the whole value stored in the cluster. The limitation is relevant for primitive values.
+Ignite stores data in the form of key-value pairs and all the new columns will belong to the value. It's not possible to change the set of columns of the key (`PRIMARY KEY`).
+
+Both DDL and DML commands targeting the same table are blocked for a short time while `ALTER TABLE` is in progress.
+
+Schema changes applied by this command are persisted on disk if link:persistence/native-persistence[Ignite persistence] is enabled. Thus, the changes can survive full cluster restarts.
+
+
+Examples:
+
+Add a column to the table:
+
+[source,sql]
+----
+ALTER TABLE Person ADD COLUMN city varchar;
+----
+
+
+Add a new column to the table only if a column with the same name does not exist:
+
+[source,sql]
+----
+ALTER TABLE City ADD COLUMN IF NOT EXISTS population int;
+----
+
+
+Add a column only if the table exists:
+
+[source,sql]
+----
+ALTER TABLE IF EXISTS Missing ADD number long;
+----
+
+
+Add several columns to the table at once:
+
+
+[source,sql]
+----
+ALTER TABLE Region ADD COLUMN (code varchar, gdp double);
+----
+
+
+Drop a column from the table:
+
+
+[source,sql]
+----
+ALTER TABLE Person DROP COLUMN city;
+----
+
+
+Drop a column from the table only if a column with that name exists:
+
+
+[source,sql]
+----
+ALTER TABLE Person DROP COLUMN IF EXISTS population;
+----
+
+
+Drop a column only if the table exists:
+
+
+[source,sql]
+----
+ALTER TABLE IF EXISTS Person DROP COLUMN number;
+----
+
+
+Drop several columns from the table at once:
+
+
+[source,sql]
+----
+ALTER TABLE Region DROP COLUMN (code, gdp);
+----
+
+
+Disable write-ahead logging:
+
+
+[source,sql]
+----
+ALTER TABLE Person NOLOGGING
+----
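+
+
+Re-enable write-ahead logging (the `LOGGING` option described above):
+
+
+[source,sql]
+----
+ALTER TABLE Person LOGGING;
+----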
+
+
+== DROP TABLE
+
+The `DROP TABLE` command drops an existing table.
+The underlying cache with all the data in it is destroyed, too.
+
+
+[source,sql]
+----
+DROP TABLE [IF EXISTS] tableName
+----
+
+Parameters:
+
+- `tableName` - the name of the table.
+- `IF EXISTS` - do not throw an error if a table with the specified name does not exist.
+
+
+Both DDL and DML commands targeting the same table are blocked while the `DROP TABLE` is in progress.
+Once the table is dropped, all pending commands will fail with appropriate errors.
+
+Schema changes applied by this command are persisted on disk if link:persistence/native-persistence[Ignite persistence] is enabled. Thus, the changes can survive full cluster restarts.
+
+Examples:
+
+Drop the Person table if it exists:
+
+[source,sql]
+----
+DROP TABLE IF EXISTS "Person";
+----
+
+== CREATE INDEX
+
+Create an index on the specified table.
+
+[source,sql]
+----
+CREATE [SPATIAL] INDEX [[IF NOT EXISTS] indexName] ON tableName
+    (columnName [ASC|DESC] [,...]) [(index_option [...])]
+
+index_option := {INLINE_SIZE size | PARALLEL parallelism_level}
+----
+
+Parameters:
+
+* `indexName` - the name of the index to be created.
+* `ASC` - specifies ascending sort order (default).
+* `DESC` - specifies descending sort order.
+* `SPATIAL` - create the spatial index. Presently, only geometry types are supported.
+* `IF NOT EXISTS` - do not throw an error if an index with the same name already exists. The database checks index names only, and does not consider column types or count.
+* `index_option` - additional options for index creation:
+** `INLINE_SIZE` - specifies index inline size in bytes. Depending on the size, Ignite will place the whole indexed value or a part of it directly into index pages, thus omitting extra calls to data pages and increasing queries' performance. Index inlining is enabled by default and the size is pre-calculated automatically based on the table structure. To disable inlining, set the size to 0 (not recommended). Refer to the link:SQL/sql-tuning#increasing-index-inline-size[Increasing Index Inline Size] section for more details.
+** `PARALLEL` - specifies the number of threads to be used in parallel for index creation. The greater the number, the faster the index is created and built. If the value exceeds the number of CPUs, then it will be decreased to the number of cores. If the parameter is not specified, then the number of threads is calculated as 25% of the CPU cores available. See the example below.
+
+
+`CREATE INDEX` creates a new index on the specified table. Regular indexes are stored in the internal B+tree data structures. The B+tree gets distributed across the cluster along with the actual data. A cluster node stores a part of the index for the data it owns.
+
+If `CREATE INDEX` is executed at runtime on live data, then the database iterates over the specified columns synchronously, indexing them. The rest of the DDL commands targeting the same table are blocked while `CREATE INDEX` is in progress. DML command execution is not affected and can be performed in parallel.
+
+Schema changes applied by this command are persisted on disk if link:persistence/native-persistence[Ignite persistence] is enabled. Thus, the changes can survive full cluster restarts.
+
+
+
+=== Indexes Tradeoffs
+There are multiple things you should consider when choosing indexes for your application.
+
+- Indexes are not free. They consume memory, and each index needs to be updated separately, thus the performance of write operations might drop if too many indexes are created. On top of that, if a lot of indexes are defined, the optimizer might make more mistakes by choosing the wrong index while building the execution plan.
++
+WARNING: It is a poor strategy to index everything.
+
+- Indexes are just sorted data structures (B+tree). If you define an index for the fields (a,b,c) then the records will be sorted first by a, then by b and only then by c.
++
+[NOTE]
+====
+[discrete]
+=== Example of Sorted Index
+[width="25%" cols="33l, 33l, 33l"]
+|=====
+| A | B | C
+| 1 | 2 | 3
+| 1 | 4 | 2
+| 1 | 4 | 4
+| 2 | 3 | 5
+| 2 | 4 | 4
+| 2 | 4 | 5
+|=====
+
+Any condition like `a = 1 and b > 3` can be viewed as a bounded range; both bounds can be quickly looked up in *log(N)* time, and the result is everything in between.
+
+The following conditions will be able to use the index:
+
+- `a = ?`
+- `a = ? and b = ?`
+- `a = ? and b = ? and c = ?`
+
+Condition `a = ? and c = ?` is no better than `a = ?` from the index point of view.
+Obviously half-bounded ranges like `a > ?` can be used as well.
+====
+
+- Indexes on single fields are no better than group indexes on multiple fields starting with the same field (an index on (a) is no better than an index on (a,b,c)). Thus, it is preferable to use group indexes.
+
+- When the `INLINE_SIZE` option is specified, indexes hold a prefix of field data in the B+tree pages. This improves search performance by requiring fewer row data retrievals; however, it substantially increases the size of the tree (with a moderate increase in tree height) and reduces data insertion and removal performance due to excessive page splits and merges. It's a good idea to consider the page size when choosing the inline size for the tree: each B+tree entry requires `16 + inline-size` bytes in the page (plus the page header and extra links).
+
+
+Examples:
+
+Create a regular index:
+
+[source,sql]
+----
+CREATE INDEX title_idx ON books (title);
+----
+
+Create a descending index only if it does not exist:
+
+[source,sql]
+----
+CREATE INDEX IF NOT EXISTS name_idx ON persons (firstName DESC);
+----
+
+Create a composite index:
+
+[source,sql]
+----
+CREATE INDEX city_idx ON sales (country, city);
+----
+
+Create an index specifying data inline size:
+
+[source,sql]
+----
+CREATE INDEX fast_city_idx ON sales (country, city) INLINE_SIZE 60;
+----
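+
+Create an index using several threads (the `PARALLEL` option described above; the thread count here is illustrative):
+
+[source,sql]
+----
+CREATE INDEX pop_idx ON sales (city) PARALLEL 8;
+----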
+
+Create a geospatial index:
+
+[source,sql]
+----
+CREATE SPATIAL INDEX idx_person_address ON Person (address);
+----
+
+
+== DROP INDEX
+
+`DROP INDEX` deletes an existing index.
+
+
+[source,sql]
+----
+DROP INDEX [IF EXISTS] indexName
+----
+
+Parameters:
+
+* `indexName` - the name of the index to drop.
+* `IF EXISTS` - do not throw an error if an index with the specified name does not exist. The database checks index names only, not considering column types or count.
+
+
+DDL commands targeting the same table are blocked while `DROP INDEX` is in progress. DML command execution is not affected and can be performed in parallel.
+
+Schema changes applied by this command are persisted on disk if link:persistence/native-persistence[Ignite persistence] is enabled. Thus, the changes can survive full cluster restarts.
+
+
+[discrete]
+=== Examples
+Drop an index:
+
+
+[source,sql]
+----
+DROP INDEX idx_person_name;
+----
+
+
+== CREATE USER
+
+The command creates a user with a given name and password.
+
+A new user can only be created using a superuser account when authentication for thin clients is enabled. Ignite creates the superuser account under the name `ignite` and password `ignite` on the first cluster start-up. Presently, you can't rename the superuser account or grant its privileges to any other account.
+
+
+
+[source,sql]
+----
+CREATE USER userName WITH PASSWORD 'password';
+----
+
+Parameters:
+
+* `userName` - new user's name. The name cannot be longer than 60 bytes in UTF8 encoding.
+* `password` - new user's password. An empty password is not allowed.
+
+To create a _case-sensitive_ username, enclose it in double quotes (").
+
+[NOTE]
+====
+[discrete]
+=== When Are Case-Sensitive Names Preferred?
+Username case-insensitivity is supported for the JDBC and ODBC interfaces only. If you plan to access Ignite from Java, .NET, or other programming language APIs, then the username has to be passed either in all upper-case letters or enclosed in double quotes (") from those interfaces.
+
+For instance, if `Test` was set as a username then:
+
+- You can use `Test`, `TEst`, `TEST` and other combinations from JDBC and ODBC.
+- You can use either `TEST` or `"Test"` as the username from Ignite's native SQL APIs designed for Java, .NET and other programming languages.
+
+Alternatively, use the case-sensitive username at all times to ensure name consistency across all the SQL interfaces.
+====
+
+Examples:
+
+Create a new user with `test` as the name and password:
+
+
+[source,sql]
+----
+CREATE USER test WITH PASSWORD 'test';
+----
+
+Create a case-sensitive username:
+
+
+[source,sql]
+----
+CREATE USER "TeSt" WITH PASSWORD 'test'
+----
+
+
+== ALTER USER
+
+The command changes an existing user's password.
+The password can be updated by the superuser (`ignite`, see <<CREATE USER>> for more details) or by the user themselves.
+
+
+[source,sql]
+----
+ALTER USER userName WITH PASSWORD 'newPassword';
+----
+
+
+Parameters:
+
+* `userName` - existing user's name.
+* `newPassword` - the new password to set for the user's account.
+
+
+Examples:
+
+Update a user's password:
+
+
+[source,sql]
+----
+ALTER USER test WITH PASSWORD 'test123';
+----
+
+
+== DROP USER
+
+The command removes an existing user.
+
+The user can be removed only by the superuser (`ignite`, see <<CREATE USER>> for more details).
+
+
+[source,sql]
+----
+DROP USER userName;
+----
+
+
+Parameters:
+
+* `userName` - the name of the user to remove.
+
+
+Examples:
+
+[source,sql]
+----
+DROP USER test;
+----
+
diff --git a/docs/_docs/sql-reference/dml.adoc b/docs/_docs/sql-reference/dml.adoc
new file mode 100644
index 0000000..327a92d
--- /dev/null
+++ b/docs/_docs/sql-reference/dml.adoc
@@ -0,0 +1,363 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Data Manipulation Language (DML)
+
+
+This page includes all data manipulation language (DML) commands supported by Ignite.
+
+== SELECT
+
+Retrieve data from a table or multiple tables.
+
+[source,sql]
+----
+SELECT
+    [TOP term] [DISTINCT | ALL] selectExpression [,...]
+    FROM tableExpression [,...] [WHERE expression]
+    [GROUP BY expression [,...]] [HAVING expression]
+    [{UNION [ALL] | MINUS | EXCEPT | INTERSECT} select]
+    [ORDER BY order [,...]]
+    [{ LIMIT expression [OFFSET expression]
+    [SAMPLE_SIZE rowCountInt]} | {[OFFSET expression {ROW | ROWS}]
+    [{FETCH {FIRST | NEXT} expression {ROW | ROWS} ONLY}]}]
+----
+
+=== Parameters
+- `DISTINCT` - removes duplicate rows from a result set.
+- `GROUP BY` - groups the result by the given expression(s).
+- `HAVING` - filters rows after grouping.
+- `ORDER BY` - sorts the result by the given column(s) or expression(s).
+- `LIMIT and FETCH FIRST/NEXT ROW(S) ONLY` - limits the number of rows returned by the query (no limit if null or smaller than zero).
+- `OFFSET` - specifies how many rows to skip.
+- `UNION, INTERSECT, MINUS, EXCEPT` - combines the result of this query with the results of another query.
+- `tableExpression` - Joins a table. The join expression is not supported for cross and natural joins. A natural join is an inner join, where the condition is automatically on the columns with the same name.
+
+[source,sql]
+----
+tableExpression = [[LEFT | RIGHT] [OUTER] | INNER | CROSS | NATURAL]
+JOIN tableExpression
+[ON expression]
+----
+
+- `LEFT` - LEFT JOIN performs a join starting with the first (left-most) table and then any matching second (right-most) table records.
+- `RIGHT` - RIGHT JOIN performs a join starting with the second (right-most) table and then any matching first (left-most) table records.
+- `OUTER` - Outer joins subdivide further into left outer joins, right outer joins, and full outer joins, depending on which table's rows are retained (left, right, or both).
+- `INNER` - An inner join requires each row in the two joined tables to have matching column values.
+- `CROSS` - CROSS JOIN returns the Cartesian product of rows from tables in the join.
+- `NATURAL` - The natural join is a special case of equi-join.
+- `ON` - Value or condition to join on.
+
+=== Description
+`SELECT` queries can be executed against both link:data-modeling/data-partitioning#replicated[replicated] and link:data-modeling/data-partitioning#partitioned[partitioned] data.
+
+When queries are executed against fully replicated data, Ignite sends a query to a single cluster node and runs it over the local data there.
+
+On the other hand, if a query is executed over partitioned data, then the execution flow will be the following:
+
+- The query will be parsed and split into multiple map queries and a single reduce query.
+- All the map queries are executed on all the nodes where required data resides.
+- All the nodes provide result sets of local execution to the query initiator (reducer) that, in turn, will accomplish the reduce phase by properly merging provided result sets.
+
+=== JOINs
+Ignite supports colocated and non-colocated distributed SQL joins. Furthermore, if the data resides in different tables (i.e., caches in Ignite), Ignite allows for cross-table joins as well.
+
+Joins between partitioned and replicated data sets always work without any limitations.
+
+However, if you join partitioned data sets, then you have to make sure that the keys you are joining on are colocated, or switch on the non-colocated joins parameter for the query, as shown in the sketch below.
+
+Refer to the link:SQL/distributed-joins[Distributed Joins] page for more details.
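+
+For example, with the Java API the non-colocated mode can be enabled per query. A minimal sketch, assuming an existing `IgniteCache` instance named `cache`; the query text is illustrative:
+
+[source,java]
+----
+// Enable non-colocated (distributed) joins for this query only.
+SqlFieldsQuery qry = new SqlFieldsQuery(
+    "SELECT p.name, c.name FROM Person p JOIN City c ON p.city_id = c.id")
+    .setDistributedJoins(true);
+
+// Iterate over the merged result set.
+try (QueryCursor<List<?>> cursor = cache.query(qry)) {
+    for (List<?> row : cursor)
+        System.out.println(row);
+}
+----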
+
+=== Group By and Order By Optimizations
+SQL queries with `ORDER BY` clauses do not require loading the whole result set to a query initiator (reducer) node in order to complete the sorting. Instead, every node to which a query will be mapped will sort its own part of the overall result set and the reducer will do the merge in a streaming fashion.
+
+The same optimization is implemented for sorted `GROUP BY` queries - there is no need to load the whole result set to the reducer in order to do the grouping before giving it to an application. In Ignite, partial result sets from the individual nodes can be streamed, merged, aggregated, and returned to the application gradually.
+
+[discrete]
+=== Examples
+
+Retrieve all rows from the `Person` table:
+
+[source,sql]
+----
+SELECT * FROM Person;
+----
+
+
+Get all rows in alphabetical order:
+
+[source,sql]
+----
+SELECT * FROM Person ORDER BY name;
+----
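+
+
+Return a page of rows using the `LIMIT` and `OFFSET` clauses from the syntax above:
+
+[source,sql]
+----
+SELECT * FROM Person ORDER BY name LIMIT 10 OFFSET 20;
+----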
+
+
+Calculate the number of `Persons` in each city:
+
+
+[source,sql]
+----
+SELECT city_id, COUNT(*) FROM Person GROUP BY city_id;
+----
+
+
+
+Join data stored in the `Person` and `City` tables:
+
+
+[source,sql]
+----
+SELECT p.name, c.name
+	FROM Person p, City c
+	WHERE p.city_id = c.id;
+----
+
+
+
+== INSERT
+
+Inserts data into a table.
+
+
+[source,sql]
+----
+INSERT INTO tableName
+  {[( columnName [,...])]
+  {VALUES {({DEFAULT | expression} [,...])} [,...] | [DIRECT] [SORTED] select}}
+  | {SET {columnName = {DEFAULT | expression}} [,...]}
+----
+
+
+=== Parameters
+- `tableName` - name of the table to be updated.
+- `columnName` - name of a column to be initialized with a value from the VALUES clause.
+
+=== Description
+`INSERT` adds an entry or entries into a table.
+
+Since Ignite stores all the data in the form of key-value pairs, all the `INSERT` statements are finally transformed into a set of key-value operations.
+
+If a single key-value pair is being added into a cache then, eventually, an `INSERT` statement will be converted into a `cache.putIfAbsent(...)` operation. In other cases, when multiple key-value pairs are inserted, the DML engine creates an `EntryProcessor` for each pair and uses `cache.invokeAll(...)` to propagate the data into a cache.
+
+////
+Refer to the *TODO* link:https://apacheignite-sql.readme.io/docs/how-ignite-sql-works#section-concurrent-modifications[concurrent modifications, window=_blank] section, which explains how the SQL engine solves concurrency issues.
+////
+
+[discrete]
+=== Examples
+Insert a new Person into the table:
+
+
+[source,sql]
+----
+INSERT INTO Person (id, name, city_id) VALUES (1, 'John Doe', 3);
+----
+
+
+
+Fill in the Person table with the data retrieved from the Account table:
+
+
+[source,sql]
+----
+INSERT INTO Person(id, name, city_id)
+   (SELECT a.id + 1000, concat(a.firstName, a.secondName), a.city_id
+   FROM Account a WHERE a.id > 100 AND a.id < 1000);
+----
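+
+
+The alternative `SET` form from the syntax above; a minimal sketch with illustrative values:
+
+
+[source,sql]
+----
+INSERT INTO Person SET id = 3, name = 'Mary Major', city_id = 5;
+----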
+
+
+== UPDATE
+
+Update data in a table.
+
+
+[source,sql]
+----
+UPDATE tableName [[AS] newTableAlias]
+  SET {{columnName = {DEFAULT | expression}} [,...]} |
+  {(columnName [,...]) = (select)}
+  [WHERE expression][LIMIT expression]
+----
+
+
+=== Parameters
+- `tableName` - the name of the table to be updated.
+- `columnName` - the name of a column to be updated with a value from a `SET` clause.
+
+=== Description
+`UPDATE` alters existing entries stored in a table.
+
+Since Ignite stores all the data in the form of key-value pairs, all the `UPDATE` statements are finally transformed into a set of key-value operations.
+
+Initially, the SQL engine generates and executes a `SELECT` query based on the `UPDATE WHERE` clause and only after that does it modify the existing values that satisfy the clause result.
+
+The modification is performed via a `cache.invokeAll(...)` operation. This means that once the result of the `SELECT` query is ready, the SQL engine will prepare a number of `EntryProcessors` and will execute all of them using a `cache.invokeAll(...)` operation. While the data is being modified using `EntryProcessors`, additional checks are performed to make sure that nobody has interfered between the `SELECT` and the actual update.
+
+////
+Refer to the *TODO* link:https://apacheignite-sql.readme.io/docs/how-ignite-sql-works#section-concurrent-modifications[concurrent modifications, window=_blank] section, which explains how the SQL engine solves concurrency issues.
+////
+
+=== Primary Keys Updates
+Ignite does not allow updating a primary key because the key statically defines the partition that it and its value belong to. While a partition, with all its data, can change cluster owners several times, the key always belongs to a single partition. The partition is calculated using a hash function applied to the key's value.
+
+Thus, if a key needs to be updated, it has to be removed and then re-inserted, as sketched below.
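+
+For instance (a sketch; the values are illustrative):
+
+[source,sql]
+----
+DELETE FROM Person WHERE id = 2;
+INSERT INTO Person (id, name, city_id) VALUES (200, 'John Black', 3);
+----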
+
+[discrete]
+=== Examples
+Update the `name` column of an entry:
+
+
+[source,sql]
+----
+UPDATE Person SET name = 'John Black' WHERE id = 2;
+----
+
+Update the `Person` table with the data taken from the `Account` table:
+
+[source,sql]
+----
+UPDATE Person p SET name = (SELECT a.first_name FROM Account a WHERE a.id = p.id)
+----
+
+
+== WITH
+
+Names a sub-query that can be referenced in other parts of the SQL statement.
+
+
+[source,sql]
+----
+WITH  { name [( columnName [,...] )] AS ( select ) [,...] }
+{ select | insert | update | merge | delete | createTable }
+----
+
+
+
+=== Parameters
+- `name` - the name of the sub-query to be created. The name assigned to the sub-query is treated as though it were an inline view or table.
+
+=== Description
+`WITH` creates a sub-query. One or more common table entries can be referred to by name. Column name declarations are optional - the column names will be inferred from the named select queries. The final action in a WITH statement can be a `select`, `insert`, `update`, `merge`, `delete`, or `create table`.
+
+[discrete]
+=== Example
+
+
+[source,sql]
+----
+WITH cte1 AS (
+        SELECT 1 AS FIRST_COLUMN
+), cte2 AS (
+        SELECT FIRST_COLUMN+1 AS FIRST_COLUMN FROM cte1
+)
+SELECT sum(FIRST_COLUMN) FROM cte2;
+----
+
+
+
+== MERGE
+
+Merge data into a table.
+
+
+[source,sql]
+----
+MERGE INTO tableName [(columnName [,...])]
+  [KEY (columnName [,...])]
+  {VALUES {({ DEFAULT | expression } [,...])} [,...] | select}
+----
+
+
+
+=== Parameters
+- `tableName` - the name of the table to be updated.
+- `columnName` - the name of a column to be initialized with a value from a `VALUES` clause.
+
+=== Description
+`MERGE` updates existing entries and inserts new entries.
+
+Because Ignite stores all the data in the form of key-value pairs, all the `MERGE` statements are transformed into a set of key-value operations.
+
+`MERGE` is one of the most straightforward operations because it is translated into `cache.put(...)` and `cache.putAll(...)` operations depending on the number of rows that need to be inserted or updated as part of the `MERGE` query.
+
+////
+Refer to the *TODO* link:https://apacheignite-sql.readme.io/docs/how-ignite-sql-works#section-concurrent-modifications[concurrent modifications, window=_blank] section, which explains how the SQL engine solves concurrency issues.
+////
+
+[discrete]
+=== Examples
+Merge some rows into the `Person` table:
+
+
+[source,sql]
+----
+MERGE INTO Person(id, name, city_id) VALUES
+	(1, 'John Smith', 5),
+        (2, 'Mary Jones', 5);
+----
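+
+
+Merge a row matching on an explicit key column (the `KEY` clause from the syntax above):
+
+
+[source,sql]
+----
+MERGE INTO Person (id, name, city_id) KEY (id) VALUES (1, 'John Smith', 5);
+----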
+
+
+Fill in the `Person` table with the data retrieved from the `Account` table:
+
+
+[source,sql]
+----
+MERGE INTO Person(id, name, city_id)
+   (SELECT a.id + 1000, concat(a.firstName, a.secondName), a.city_id
+   FROM Account a WHERE a.id > 100 AND a.id < 1000);
+----
+
+
+
+== DELETE
+
+Delete data from a table.
+
+
+[source,sql]
+----
+DELETE
+  [TOP term] FROM tableName
+  [WHERE expression]
+  [LIMIT term]
+----
+
+
+=== Parameters
+- `tableName` - the name of the table to delete data from.
+- `TOP, LIMIT` - specifies the number of entries to be deleted (no limit if null or smaller than zero).
+
+=== Description
+`DELETE` removes data from a table.
+
+Because Ignite stores all the data in the form of key-value pairs, all the `DELETE` statements are transformed into a set of key-value operations.
+
+A `DELETE` statement's execution is split into two phases and is similar to the execution of `UPDATE` statements.
+
+First, using a `SELECT` query, the SQL engine gathers those keys that satisfy the `WHERE` clause in the `DELETE` statement. Next, after having all those keys in place, it creates a number of `EntryProcessors` and executes them with `cache.invokeAll(...)`. While the data is being deleted, additional checks are performed to make sure that nobody has interfered between the `SELECT` and the actual removal of the data.
+
+////
+Refer to the *TODO* link:https://apacheignite-sql.readme.io/docs/how-ignite-sql-works#section-concurrent-modifications[concurrent modifications, window=_blank] section, which explains how the SQL engine solves concurrency issues.
+////
+
+[discrete]
+=== Examples
+Delete all the `Persons` with a specific name:
+
+
+[source,sql]
+----
+DELETE FROM Person WHERE name = 'John Doe';
+----
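+
+Delete at most 10 matching entries using the `LIMIT` clause from the syntax above:
+
+
+[source,sql]
+----
+DELETE FROM Person WHERE city_id = 5 LIMIT 10;
+----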
+
diff --git a/docs/_docs/sql-reference/index.adoc b/docs/_docs/sql-reference/index.adoc
new file mode 100644
index 0000000..e08968d
--- /dev/null
+++ b/docs/_docs/sql-reference/index.adoc
@@ -0,0 +1,18 @@
+---
+layout: toc
+---
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= SQL Reference
diff --git a/docs/_docs/sql-reference/numeric-functions.adoc b/docs/_docs/sql-reference/numeric-functions.adoc
new file mode 100644
index 0000000..f449303
--- /dev/null
+++ b/docs/_docs/sql-reference/numeric-functions.adoc
@@ -0,0 +1,981 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Numeric Functions
+
+== ABS
+
+[source,sql]
+----
+ABS (expression)
+----
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Returns the absolute value of an expression.
+
+[discrete]
+=== Example
+Calculate an absolute value:
+
+[source,sql]
+----
+SELECT transfer_id, ABS (price) from Transfers;
+----
+
+
+== ACOS
+
+[source,sql]
+----
+ACOS (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the arc cosine. This method returns a `double`.
+
+[discrete]
+=== Example
+Get arc cos value:
+
+
+[source,sql]
+----
+SELECT acos(angle) FROM Triangles;
+----
+
+
+== ASIN
+
+[source,sql]
+----
+ASIN (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the arc sine. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculate an arc sine:
+
+
+[source,sql]
+----
+SELECT asin(angle) FROM Triangles;
+----
+
+
+== ATAN
+
+[source,sql]
+----
+ATAN (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the arc tangent. This method returns a `double`.
+
+[discrete]
+=== Example
+Get an arc tangent:
+
+
+[source,sql]
+----
+SELECT atan(angle) FROM Triangles;
+----
+
+
+== COS
+
+[source,sql]
+----
+COS (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the trigonometric cosine. This method returns a `double`.
+
+[discrete]
+=== Example
+Get a cosine:
+
+
+[source,sql]
+----
+SELECT COS(angle) FROM Triangles;
+----
+
+
+== COSH
+
+[source,sql]
+----
+COSH (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the hyperbolic cosine. This method returns a `double`.
+
+[discrete]
+=== Example
+Get a hyperbolic cosine:
+
+
+[source,sql]
+----
+SELECT COSH(angle) FROM Triangles;
+----
+
+
+== COT
+
+[source,sql]
+----
+COT (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the trigonometric cotangent (1/TAN(ANGLE)). This method returns a `double`.
+
+[discrete]
+=== Example
+Get a trigonometric cotangent:
+
+
+[source,sql]
+----
+SELECT COT(angle) FROM Triangles;
+----
+
+
+== SIN
+
+[source,sql]
+----
+SIN (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the trigonometric sine. This method returns a `double`.
+
+[discrete]
+=== Example
+Get a trigonometric sine:
+
+
+[source,sql]
+----
+SELECT SIN(angle) FROM Triangles;
+----
+
+
+== SINH
+
+[source,sql]
+----
+SINH (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the hyperbolic sine. This method returns a `double`.
+
+[discrete]
+=== Example
+Get a hyperbolic sine:
+
+
+[source,sql]
+----
+SELECT SINH(angle) FROM Triangles;
+----
+
+
+== TAN
+
+[source,sql]
+----
+TAN (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the trigonometric tangent. This method returns a `double`.
+
+[discrete]
+=== Example
+Get a trigonometric tangent:
+
+
+[source,sql]
+----
+SELECT TAN(angle) FROM Triangles;
+----
+
+
+== TANH
+
+[source,sql]
+----
+TANH (expression)
+----
+
+
+=== Parameters
+- `expression` - may be a column name, a result of another function, or a math operation.
+
+=== Description
+Calculates the hyperbolic tangent. This method returns a `double`.
+
+[discrete]
+=== Example
+Get a hyperbolic tangent:
+
+
+[source,sql]
+----
+SELECT TANH(angle) FROM Triangles;
+----
+
+
+== ATAN2
+
+[source,sql]
+----
+ATAN2 (y, x)
+----
+
+
+=== Parameters
+- `x and y` - the arguments.
+
+=== Description
+Calculates the angle when converting the rectangular coordinates to polar coordinates. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculate the polar angle from rectangular coordinates:
+
+
+[source,sql]
+----
+SELECT ATAN2(X, Y) FROM Triangles;
+----
+
+
+== BITAND
+
+[source,sql]
+----
+BITAND (y, x)
+----
+
+
+=== Parameters
+- `x and y` - the arguments.
+
+=== Description
+The bitwise AND operation. This method returns a `long`.
+
+[discrete]
+=== Example
+
+[source,sql]
+----
+SELECT BITAND(X, Y) FROM Triangles;
+----
+
+
+== BITGET
+
+[source,sql]
+----
+BITGET (y, x)
+----
+
+
+=== Parameters
+- `x and y` - the arguments.
+
+=== Description
+Returns true if and only if the first parameter has a bit set in the position specified by the second parameter. This method returns a `boolean`. The second parameter is zero-indexed; the least significant bit has position 0.
+
+[discrete]
+=== Example
+Check whether the bit at position 3 is set:
+
+
+[source,sql]
+----
+SELECT BITGET(X, 3) from Triangles;
+----
+
+
+== BITOR
+
+[source,sql]
+----
+BITOR (y, x)
+----
+
+
+=== Parameters
+- `x and y` - the arguments.
+
+=== Description
+The bitwise OR operation. This method returns a `long`.
+
+[discrete]
+=== Example
+Calculate OR between two fields:
+
+
+[source,sql]
+----
+SELECT BITOR(X, Y) FROM Triangles;
+----
+
+
+== BITXOR
+
+[source,sql]
+----
+BITXOR (y, x)
+----
+
+
+=== Parameters
+- `x and y` - the arguments.
+
+=== Description
+The bitwise XOR operation. This method returns a `long`.
+
+[discrete]
+=== Example
+Calculate XOR between two fields:
+
+
+[source,sql]
+----
+SELECT BITXOR(X, Y) FROM Triangles;
+----
+
+
+== MOD
+
+[source,sql]
+----
+MOD (y, x)
+----
+
+
+=== Parameters
+- `x and y` - the arguments.
+
+=== Description
+The modulo operation. This method returns a `long`.
+
+[discrete]
+=== Example
+Calculate MOD between two fields:
+
+
+[source,sql]
+----
+SELECT MOD(X, Y) FROM Triangles;
+----
+
+
+== CEILING
+
+[source,sql]
+----
+CEIL (expression)
+CEILING (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+See also `Java Math.ceil`. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculate a ceiling price for items:
+
+
+[source,sql]
+----
+SELECT item_id, CEILING(price) FROM Items;
+----
+
+
+== DEGREES
+
+
+[source,sql]
+----
+DEGREES (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+See also `Java Math.toDegrees`. This method returns a `double`.
+
+[discrete]
+=== Example
+Convert the argument value to degrees:
+
+
+[source,sql]
+----
+SELECT DEGREES(X) FROM Triangles;
+----
+
+
+== EXP
+
+[source,sql]
+----
+EXP (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+See also `Java Math.exp`. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculate the exponential:
+
+
+[source,sql]
+----
+SELECT EXP(X) FROM Triangles;
+----
+
+
+== FLOOR
+
+[source,sql]
+----
+FLOOR (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+See also `Java Math.floor`. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculate a floor price:
+
+
+[source,sql]
+----
+SELECT FLOOR(X) FROM Items;
+----
+
+
+== LOG
+
+[source,sql]
+----
+LOG (expression)
+LN (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+See also `Java Math.log`. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculate LOG:
+
+
+[source,sql]
+----
+SELECT LOG(X) from Items;
+----
+
+
+== LOG10
+
+[source,sql]
+----
+LOG10 (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+See also `Java Math.log10` (in Java 5). This method returns a `double`.
+
+[discrete]
+=== Example
+Calculate LOG10:
+
+
+[source,sql]
+----
+SELECT LOG10(X) FROM Items;
+----
+
+
+== RADIANS
+
+[source,sql]
+----
+RADIANS (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+See also `Java Math.toRadians`. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculate RADIANS:
+
+
+[source,sql]
+----
+SELECT RADIANS(X) FROM Items;
+----
+
+
+== SQRT
+
+[source,sql]
+----
+SQRT (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+See also `Java Math.sqrt`. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculate SQRT:
+
+
+[source,sql]
+----
+SELECT SQRT(X) FROM Items;
+----
+
+
+== PI
+
+
+[source,sql]
+----
+PI ()
+----
+
+
+=== Parameters
+The function takes no parameters.
+
+=== Description
+See also `Java Math.PI`. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculate PI:
+
+
+[source,sql]
+----
+SELECT PI() FROM Items;
+----
+
+
+== POWER
+
+
+[source,sql]
+----
+POWER (X, Y)
+----
+
+
+=== Parameters
+- `X` - the base. Any valid numeric expression.
+- `Y` - the exponent. Any valid numeric expression.
+
+=== Description
+See also `Java Math.pow`. This method returns a `double`.
+
+[discrete]
+=== Example
+Calculate a power of 2:
+
+
+[source,sql]
+----
+SELECT POWER(2, n) FROM Rows;
+----
+
+
+== RAND
+
+[source,sql]
+----
+{RAND | RANDOM} ([expression])
+----
+
+
+=== Parameters
+- `expression` - optional; any valid numeric expression used to seed the session's random number generator.
+
+=== Description
+Calling the function without a parameter returns the next pseudo-random number. Calling it with a parameter seeds the session's random number generator. This method returns a `double` between 0 (including) and 1 (excluding).
+
+[discrete]
+=== Example
+Get a random number for every play:
+
+
+[source,sql]
+----
+SELECT random() FROM Play;
+----
+
+
+== RANDOM_UUID
+
+[source,sql]
+----
+{RANDOM_UUID | UUID} ()
+----
+
+
+=== Description
+Returns a new UUID with 122 pseudo random bits.
+
+[discrete]
+=== Example
+Get a random UUID for every Player:
+
+
+[source,sql]
+----
+SELECT UUID(),name FROM Player;
+----
+
+
+== ROUND
+
+[source,sql]
+----
+ROUND ( expression [, precision] )
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+- `precision` - the number of digits after the decimal to round to. Rounds to the nearest long if the number of digits is not set.
+
+=== Description
+Rounds to a number of digits, or to the nearest long if the number of digits is not set. This method returns a `numeric` (the same type as the input).
+
+[discrete]
+=== Example
+Convert every Player's age to an integer number:
+
+
+[source,sql]
+----
+SELECT name, ROUND(age) FROM Player;
+----
+
+
+== ROUNDMAGIC
+
+[source,sql]
+----
+ROUNDMAGIC (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+This function is good for rounding numbers, but it can be slow. It has special handling for numbers around 0. Only numbers smaller than or equal to `+/-1000000000000` are supported. The value is converted to a String internally, and then the last 4 characters are checked. '000x' becomes '0000' and '999x' becomes '999999', which is rounded automatically. This method returns a `double`.
+
+[discrete]
+=== Example
+Round every Player's age:
+
+
+[source,sql]
+----
+SELECT name, ROUNDMAGIC(AGE/3*3) FROM Player;
+----
+
+
+== SECURE_RAND
+
+[source,sql]
+----
+SECURE_RAND (int)
+----
+
+
+=== Parameters
+- `int` - the number of random bytes to generate.
+
+=== Description
+Generate a number of cryptographically secure random numbers. This method returns `bytes`.
+
+[discrete]
+=== Example
+Get a cryptographically secure random value:
+
+
+[source,sql]
+----
+SELECT name, SECURE_RAND(10) FROM Player;
+----
+
+
+== SIGN
+
+[source,sql]
+----
+SIGN (expression)
+----
+
+
+=== Parameters
+- `expression` - any valid numeric expression.
+
+=== Description
+Returns -1 if the value is smaller than 0, 0 if it is zero, and 1 otherwise.
+
+[discrete]
+=== Example
+Get a sign for every value:
+
+
+[source,sql]
+----
+SELECT name, SIGN(VALUE) FROM Player;
+----
+
+
+== ENCRYPT
+
+[source,sql]
+----
+ENCRYPT (algorithmString , keyBytes , dataBytes)
+----
+
+
+=== Parameters
+- `algorithmString` - sets a supported AES algorithm.
+- `keyBytes` - sets a key.
+- `dataBytes` - sets data.
+
+=== Description
+Encrypt data using a key. The supported algorithm is AES. The block size is 16 bytes. This method returns `bytes`.
+
+[discrete]
+=== Example
+Encrypt Players' names:
+
+
+[source,sql]
+----
+SELECT ENCRYPT('AES', '00', STRINGTOUTF8(Name)) FROM Player;
+----
+
+
+== DECRYPT
+
+[source,sql]
+----
+DECRYPT (algorithmString , keyBytes , dataBytes)
+----
+
+
+=== Parameters
+- `algorithmString` - sets a supported AES algorithm.
+- `keyBytes` - sets a key.
+- `dataBytes` - sets data.
+
+=== Description
+Decrypts data using a key. The supported algorithm is AES. The block size is 16 bytes. This method returns `bytes`.
+
+[discrete]
+=== Example
+Decrypt Players' names:
+
+
+[source,sql]
+----
+SELECT DECRYPT('AES', '00', '3fabb4de8f1ee2e97d7793bab2db1116') FROM Player;
+----
+
+
+== TRUNCATE
+
+
+[source,sql]
+----
+{TRUNC | TRUNCATE} ({numeric, digitsInt} | timestamp | date | timestampString)
+----
+
+
+=== Description
+Truncates to a number of digits (to the next value closer to 0). This method returns a `double`. When used with a timestamp, truncates the timestamp to a date (day) value. When used with a date, truncates the date to a date (day) value without the time part. When used with a timestamp passed as a string, truncates the timestamp to a date (day) value.
+
+[discrete]
+=== Example
+
+[source,sql]
+----
+TRUNCATE(VALUE, 2);
+----
+
+
+== COMPRESS
+
+[source,sql]
+----
+COMPRESS(dataBytes [, algorithmString])
+----
+
+
+=== Parameters
+- `dataBytes` - data to compress.
+- `algorithmString` - an algorithm to use for compression.
+
+=== Description
+Compress the data using the specified compression algorithm. Supported algorithms are: LZF (faster but lower compression; default), and DEFLATE (higher compression). Compression does not always reduce size. Very small objects and objects with little redundancy may get larger. This method returns `bytes`.
+
+[discrete]
+=== Example
+
+[source,sql]
+----
+COMPRESS(STRINGTOUTF8('Test'))
+----
+
+
+== EXPAND
+
+[source,sql]
+----
+EXPAND(dataBytes)
+----
+
+
+=== Parameters
+- `dataBytes` - data to expand.
+
+=== Description
+Expand data that was compressed using the COMPRESS function. This method returns `bytes`.
+
+[discrete]
+=== Example
+
+[source,sql]
+----
+UTF8TOSTRING(EXPAND(COMPRESS(STRINGTOUTF8('Test'))))
+----
+
+
+== ZERO
+
+[source,sql]
+----
+ZERO()
+----
+
+
+=== Description
+Return the value 0. This function can be used even if numeric literals are disabled.
+
+[discrete]
+=== Example
+
+[source,sql]
+----
+ZERO()
+----
+
diff --git a/docs/_docs/sql-reference/operational-commands.adoc b/docs/_docs/sql-reference/operational-commands.adoc
new file mode 100644
index 0000000..be7223f
--- /dev/null
+++ b/docs/_docs/sql-reference/operational-commands.adoc
@@ -0,0 +1,372 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Operational Commands
+
+
+Ignite supports the following operational commands:
+
+== COPY
+
+Copy data from a CSV file into a SQL table.
+
+[source,sql]
+----
+COPY FROM '/path/to/local/file.csv'
+INTO tableName (columnName, columnName, ...) FORMAT CSV [CHARSET '<charset-name>']
+----
+
+
+=== Parameters
+- `'/path/to/local/file.csv'` - actual path to your CSV file.
+- `tableName` - name of the table to which the data will be copied.
+- `columnName` - name of a column corresponding with the columns in the CSV file.
+
+=== Description
+`COPY` allows you to copy the content of a file in the local file system to the server and apply its data to a SQL table. Internally, `COPY` reads the file content in a binary form into data packets, and sends those packets to the server. Then, the file content is parsed and executed in a streaming mode. Use this mode if you have data dumped to a file.
+
+NOTE: Currently, `COPY` is only supported via the JDBC driver and can only work with the CSV format.
+
+=== Example
+`COPY` can be executed like so:
+
+[source,sql]
+----
+COPY FROM '/path/to/local/file.csv' INTO city (
+  ID, Name, CountryCode, District, Population) FORMAT CSV
+----
+
+In the above command, substitute `/path/to/local/file.csv` with the actual path to your CSV file. For instance, you can use `city.csv` which is shipped with the latest Ignite.
+You can find it in your `{IGNITE_HOME}/examples/src/main/resources/sql/` directory.
+
+== SET STREAMING
+
+Stream data in bulk from a file into a SQL table.
+
+[source,sql]
+----
+SET STREAMING [OFF|ON];
+----
+
+
+=== Description
+Using the `SET` command, you can stream data in bulk into a SQL table in your cluster. When streaming is enabled, the JDBC/ODBC driver will pack your commands in batches and send them to the server (Ignite cluster). On the server side, the batch is converted into a stream of cache update commands which are distributed asynchronously between server nodes. Performing this asynchronously increases peak throughput because at any given time all cluster nodes are busy with data loading.
+
+=== Usage
+To stream data into your cluster, prepare a file with the `SET STREAMING ON` command followed by `INSERT` commands for data that needs to be loaded. For example:
+
+[source,sql]
+----
+SET STREAMING ON;
+
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (1,'Kabul','AFG','Kabol',1780000);
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (2,'Qandahar','AFG','Qandahar',237500);
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (3,'Herat','AFG','Herat',186800);
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (4,'Mazar-e-Sharif','AFG','Balkh',127800);
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (5,'Amsterdam','NLD','Noord-Holland',731200);
+-- More INSERT commands --
+----
+
+Note that before executing the above statements, you should have the tables created in the cluster. Run `CREATE TABLE` commands, or provide the commands as part of the file that is used for inserting data, before the `SET STREAMING ON` command, like so:
+
+[source,sql]
+----
+CREATE TABLE City (
+  ID INT(11),
+  Name CHAR(35),
+  CountryCode CHAR(3),
+  District CHAR(20),
+  Population INT(11),
+  PRIMARY KEY (ID, CountryCode)
+) WITH "template=partitioned, backups=1, affinityKey=CountryCode, CACHE_NAME=City, KEY_TYPE=demo.model.CityKey, VALUE_TYPE=demo.model.City";
+
+SET STREAMING ON;
+
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (1,'Kabul','AFG','Kabol',1780000);
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (2,'Qandahar','AFG','Qandahar',237500);
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (3,'Herat','AFG','Herat',186800);
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (4,'Mazar-e-Sharif','AFG','Balkh',127800);
+INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (5,'Amsterdam','NLD','Noord-Holland',731200);
+-- More INSERT commands --
+----
+
+[NOTE]
+====
+[discrete]
+=== Flush All Data to the Cluster
+When you have finished loading data, make sure to close the JDBC/ODBC connection so that all data is flushed to the cluster.
+====
+
+=== Known Limitations
+While streaming mode allows you to load data much faster than other data loading techniques mentioned in this guide, it has some limitations:
+
+1. Only `INSERT` commands are allowed; any attempt to execute `SELECT` or any other DML or DDL command will cause an exception.
+2. Due to streaming mode's asynchronous nature, you cannot know update counts for every statement executed; all JDBC/ODBC commands returning update counts will return 0.
+
+=== Example
+As an example, you can use the sample `world.sql` file that is shipped with the latest Ignite distribution. It can be found in the `{IGNITE_HOME}/examples/sql/` directory. You can use the `run` command from link:tools/sqlline[SQLLine, window=_blank], as shown below:
+
+[source,shell]
+----
+!run /apache_ignite_version/examples/sql/world.sql
+----
+
+After executing the above command and *closing the JDBC connection*, all data will be loaded into the cluster and ready to be queried.
+
+image::images/set-streaming.png[]
+
+
+== KILL QUERY
+
+The `KILL QUERY` command allows you to cancel a running query. When a query is cancelled with the `KILL` command, all
+parts of the query running on all other nodes are terminated as well.
+
+[tabs]
+--
+
+tab:SQL[]
+[source,sql]
+----
+KILL QUERY [ASYNC] 'query_id'
+----
+
+tab:JMX[]
+[source,java]
+----
+QueryMXBean mxBean = ...;
+mxBean.cancelSQL(queryId);
+----
+
+tab:Unix[]
+[source,bash]
+----
+./control.sh --kill SQL query_id
+----
+
+tab:Windows[]
+[source,bash]
+----
+control.bat --kill SQL query_id
+----
+
+--
+
+=== Parameters
+
+* `query_id` - can be retrieved via the link:monitoring-metrics/system-views#sql_queries[SQL_QUERIES] view.
+* `ASYNC` - an optional parameter. If specified, the command returns control immediately, without waiting for the cancellation to complete.
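+
+=== Example
+
+For instance, after looking up the query id in the SQL_QUERIES view, you can cancel the query as follows (the id below is illustrative):
+
+[source,sql]
+----
+KILL QUERY '6fa749ee-7cf8-4635-be10-36a1c75267a7_54321'
+----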
+
+== KILL TRANSACTION
+
+The `KILL TRANSACTION` command allows you to cancel a running transaction.
+
+[tabs]
+--
+tab:SQL[]
+[source,sql]
+----
+KILL TRANSACTION 'xid'
+----
+
+tab:JMX[]
+[source,java]
+----
+TransactionMXBean mxBean = ...;
+mxBean.cancel(xid);
+----
+
+tab:Unix[]
+[source,bash]
+----
+./control.sh --kill TRANSACTION xid
+----
+
+tab:Windows[]
+[source,bash]
+----
+control.bat --kill TRANSACTION xid
+----
+--
+
+=== Parameters
+
+* `xid` - the transaction id that can be retrieved via the link:monitoring-metrics/system-views#transactions[TRANSACTIONS] view.
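+
+=== Example
+
+For instance, after retrieving the transaction id from the TRANSACTIONS view, you can cancel the transaction as follows (the id below is illustrative):
+
+[source,sql]
+----
+KILL TRANSACTION '6fa749ee-7cf8-4635-be10-36a1c75267a7_54321'
+----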
+
+
+== KILL SCAN
+
+The `KILL SCAN` command allows you to cancel a running scan query.
+
+[tabs]
+--
+
+tab:SQL[]
+[source,sql]
+----
+KILL SCAN 'origin_node_id' 'cache_name' query_id
+----
+
+tab:JMX[]
+[source,java]
+----
+QueryMXBean mxBean = ....;
+mxBean.cancelScan(originNodeId, cacheName, queryId);
+----
+
+tab:Unix[]
+[source,bash]
+----
+./control.sh --kill SCAN origin_node_id cache_name query_id
+----
+
+tab:Windows[]
+[source,bash]
+----
+control.bat --kill SCAN origin_node_id cache_name query_id
+----
+
+--
+
+=== Parameters
+
+* `origin_node_id`, `cache_name`, `query_id` - can be retrieved via the link:monitoring-metrics/system-views#scan_queries[SCAN_QUERIES] view.
+
+=== Example
+
+[source,sql]
+----
+KILL SCAN '6fa749ee-7cf8-4635-be10-36a1c75267a7_54321' 'cache-name' 1
+----
+
+== KILL COMPUTE
+
+The `KILL COMPUTE` command allows you to cancel a running compute.
+
+[tabs]
+--
+
+tab:SQL[]
+[source,sql]
+----
+KILL COMPUTE 'session_id'
+----
+
+tab:JMX[]
+[source,java]
+----
+ComputeMXBean mxBean = ...;
+mxBean.cancel(sessionId);
+----
+
+tab:Unix[]
+[source,bash]
+----
+./control.sh --kill COMPUTE session_id
+----
+
+tab:Windows[]
+[source,bash]
+----
+control.bat --kill COMPUTE session_id
+----
+
+--
+
+=== Parameters
+
+* `session_id` - can be retrieved via the link:monitoring-metrics/system-views#tasks[TASKS] or
+link:monitoring-metrics/system-views#jobs[JOBS] views.
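+
+=== Example
+
+For instance, after retrieving the session id from the TASKS view, you can cancel the compute task as follows (the id below is illustrative):
+
+[source,sql]
+----
+KILL COMPUTE '6fa749ee-7cf8-4635-be10-36a1c75267a7_54321'
+----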
+
+== KILL CONTINUOUS
+
+The `KILL CONTINUOUS` command allows you to cancel a running continuous query.
+
+[tabs]
+--
+
+tab:SQL[]
+[source,sql]
+----
+KILL CONTINUOUS 'origin_node_id' 'routine_id'
+----
+
+tab:JMX[]
+[source,java]
+----
+QueryMXBean mxBean = ...;
+mxBean.cancelContinuous(originNodeId, routineId);
+----
+
+tab:Unix[]
+[source,bash]
+----
+./control.sh --kill CONTINUOUS origin_node_id routine_id
+----
+
+tab:Windows[]
+[source,bash]
+----
+control.bat --kill CONTINUOUS origin_node_id routine_id
+----
+
+--
+
+=== Parameters
+
+* `origin_node_id` and `routine_id` - can be retrieved via the link:monitoring-metrics/system-views#continuous_queries[CONTINUOUS_QUERIES] view.
+
+=== Example
+
+[source,sql]
+----
+KILL CONTINUOUS '6fa749ee-7cf8-4635-be10-36a1c75267a7_54321' '6fa749ee-7cf8-4635-be10-36a1c75267a7_12345'
+----
+
+== KILL SERVICE
+
+The `KILL SERVICE` command allows you to cancel a running service.
+
+[tabs]
+--
+
+tab:SQL[]
+[source,sql]
+----
+KILL SERVICE 'name'
+----
+
+tab:JMX[]
+[source,java]
+----
+ServiceMXBean mxBean = ...;
+mxBean.cancel(name);
+----
+
+tab:Unix[]
+[source,bash]
+----
+./control.sh --kill SERVICE name
+----
+
+tab:Windows[]
+[source,bash]
+----
+control.bat --kill SERVICE name
+----
+
+--
+
+=== Parameters
+
+* `name` - the name you assigned to the service at deployment time.
+You can always find it with the link:monitoring-metrics/system-views#services[SERVICES] view.
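+
+=== Example
+
+For instance, assuming a service deployed under the name `myService` (the name below is illustrative):
+
+[source,sql]
+----
+KILL SERVICE 'myService'
+----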
diff --git a/docs/_docs/sql-reference/sql-conformance.adoc b/docs/_docs/sql-reference/sql-conformance.adoc
new file mode 100644
index 0000000..7cc2ed5
--- /dev/null
+++ b/docs/_docs/sql-reference/sql-conformance.adoc
@@ -0,0 +1,471 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= SQL Conformance
+
+Apache Ignite supports most of the major features of ANSI-99 out-of-the-box. The following table shows Ignite compliance to link:https://en.wikipedia.org/wiki/SQL_compliance[SQL:1999 (Core), window=_blank].
+
+
+
+[width="100%", cols="20%,80%a"]
+|=======
+|Feature ID, Name
+|Support
+
+| `E011` Numeric data types
+| Ignite fully supports the following sub-features:
+
+`E011–01` INTEGER and SMALLINT data types (including all spellings)
+
+`E011–02` REAL, DOUBLE PRECISION, and FLOAT data types
+
+`E011–05` Numeric comparison
+
+`E011–06` Implicit casting among the numeric data types
+
+Ignite provides partial support for the following sub-features:
+
+`E011–03` DECIMAL and NUMERIC data types. Fixed <scale> is not supported for DEC and NUMERIC, so there are violations for:
+
+7) If a <scale> is omitted, then a <scale> of 0 (zero) is implicit (6.1 <data type>)
+
+22) NUMERIC specifies the data type exact numeric, with the decimal precision and scale specified by the <precision> and <scale>.
+
+23) DECIMAL specifies the data type exact numeric, with the decimal scale specified by the <scale> and the implementation-defined decimal precision equal to or greater than the value of the specified <precision>.
+
+`E011–04` Arithmetic operator. See issue for feature E011–03
+
+| `E021` Character string types
+| Ignite fully supports the following sub-features:
+
+`E021–03` Character literals
+
+`E021–04` CHARACTER_LENGTH function
+
+`E021–05` OCTET_LENGTH function
+
+`E021–06` SUBSTRING function
+
+`E021–07` Character concatenation
+
+`E021–08` UPPER and LOWER functions
+
+`E021–09` TRIM function
+
+`E021–10` Implicit casting among the fixed-length and variable-length character string types
+
+`E021–11` POSITION function
+
+`E021–12` Character comparison
+
+Ignite provides partial support for the following sub-features:
+
+`E021–01` CHARACTER data type (including all its spellings).
+
+----
+<character string type> ::=
+CHARACTER [ <left paren> <length> <right paren> ]
+\| CHAR [ <left paren> <length> <right paren> ]
+\| CHARACTER VARYING <left paren> <length> <right paren>
+\| CHAR VARYING <left paren> <length> <right paren>
+\| VARCHAR <left paren> <length> <right paren>
+----
+
+<length> is not supported for CHARACTER and CHARACTER VARYING data type.
+
+`E021–02` CHARACTER VARYING data type (including all its spellings). See issue for feature E021–01
+
+| `E031` Identifiers
+| Ignite fully supports the following sub-features:
+
+`E031–01` Delimited identifiers
+
+`E031–02` Lower case identifiers
+
+`E031–03` Trailing underscore
+
+| `E051` Basic query specification
+| Ignite fully supports the following sub-features:
+
+`E051–01` SELECT DISTINCT
+
+`E051–04` GROUP BY can contain columns not in <select-list>
+
+`E051–05` Select list items can be renamed
+
+`E051–06` HAVING clause
+
+`E051–07` Qualified * in select list
+
+`E051–08` Correlation names in the FROM clause
+
+Ignite does not support the following sub-features:
+
+`E051–02` GROUP BY clause; No support for ROLLUP, CUBE, GROUPING SETS.
+
+`E051–09` Rename columns in the FROM clause. Some information about support from other products is link:http://modern-sql.com/feature/table-column-aliases[here, window=_blank].
+
+| `E061` Basic predicates and search conditions
+| Ignite fully supports the following sub-features:
+
+`E061–01` Comparison predicate
+
+`E061–02` BETWEEN predicate
+
+`E061–03` IN predicate with list of values
+
+`E061–06` NULL predicate
+
+`E061–08` EXISTS predicate
+
+`E061–09` Subqueries in comparison predicate
+
+`E061–11` Subqueries in IN predicate
+
+`E061–13` Correlated subqueries
+
+`E061–14` Search condition
+
+Ignite provides partial support for the following sub-features:
+
+`E061–04` LIKE predicate; There is support for <character like predicate>, but <octet like predicate> could not be checked because of link:https://issues.apache.org/jira/browse/IGNITE-7480[this issue, window=_blank].
+
+`E061–05` LIKE predicate: ESCAPE clause; There is support for <character like predicate>, but <octet like predicate> could not be checked because of link:https://issues.apache.org/jira/browse/IGNITE-7480[this issue, window=_blank].
+
+`E061–07` Quantified comparison predicate; Except ALL (see link:https://issues.apache.org/jira/browse/IGNITE-5749[issue, window=_blank]).
+
+Ignite does not support the following sub-feature:
+
+`E061–12` Subqueries in quantified comparison predicate.
+
+| `E071` Basic query expressions
+| Ignite provides partial support for the following sub-features:
+
+`E071–01` UNION DISTINCT table operator
+
+`E071–02` UNION ALL table operator
+
+`E071–03` EXCEPT DISTINCT table operator
+
+`E071–05` Columns combined via table operators need not have exactly the same data type
+
+`E071–06` Table operators in subqueries
+
+Note that there is no support for the non-recursive WITH clause in either H2 or Ignite. According to link:http://www.h2database.com/html/grammar.html#with[the H2 docs, window=_blank], the recursive WITH clause is supported in H2, but it fails in Ignite.
+
+| `E081` Basic privileges
+| Ignite does not support the following sub-features:
+
+`E081–01` SELECT privilege at the table level
+
+`E081–02` DELETE privilege
+
+`E081–03` INSERT privilege at the table level
+
+`E081–04` UPDATE privilege at the table level
+
+`E081–05` UPDATE privilege at the column level
+
+`E081–06` REFERENCES privilege at the table
+
+`E081–07` REFERENCES privilege at the column
+
+`E081–08` WITH GRANT OPTION
+
+`E081–09` USAGE privilege
+
+`E081–10` EXECUTE privilege
+
+| `E091` Set functions
+| Ignite provides partial support for the following sub-features:
+
+`E091–01` AVG
+
+`E091–02` COUNT
+
+`E091–03` MAX
+
+`E091–04` MIN
+
+`E091–05` SUM
+
+`E091–06` ALL quantifier
+
+`E091–07` DISTINCT quantifier
+
+Note that there is no support for:
+
+- GROUPING and ANY (both in H2 and Ignite).
+
+- EVERY and SOME functions. They are supported in H2, but fail in Ignite.
+
+| `E101` Basic data manipulation
+| Ignite fully supports the following sub-features:
+
+`E101–03` Searched UPDATE statement
+
+`E101–04` Searched DELETE statement
+
+Ignite provides partial support for the following sub-feature:
+
+`E101–01` INSERT statement. No support for DEFAULT values in Ignite. Works in H2.
+
+| `E111` Single row SELECT statement
+| Ignite does not support this feature.
+
+| `E121` Basic cursor support
+| Ignite does not support the following sub-features:
+
+`E121–01` DECLARE CURSOR
+
+`E121–02` ORDER BY columns need not be in select list
+
+`E121–03` Value expressions in ORDER BY clause
+
+`E121–04` OPEN statement
+
+`E121–06` Positioned UPDATE statement
+
+`E121–07` Positioned DELETE statement
+
+`E121–08` CLOSE statement
+
+`E121–10` FETCH statement: implicit NEXT
+
+`E121–17` WITH HOLD cursors
+
+| `E131` Null value support (nulls in lieu of values)
+| Ignite fully supports this feature.
+
+| `E141` Basic integrity constraints
+| Ignite fully supports the following sub-feature:
+
+`E141–01` NOT NULL constraints
+
+Ignite provides partial support for the following sub-features:
+
+`E141–03` PRIMARY KEY constraints. See link:https://issues.apache.org/jira/browse/IGNITE-7479[IGNITE-7479, window=_blank]
+
+`E141–08` NOT NULL inferred on PRIMARY KEY. See link:https://issues.apache.org/jira/browse/IGNITE-7479[IGNITE-7479, window=_blank]
+
+Ignite does not support the following sub-features:
+
+`E141–02` UNIQUE constraints of NOT NULL columns
+
+`E141–04` Basic FOREIGN KEY constraint with the NO ACTION default for both referential delete action and referential update action
+
+`E141–06` CHECK constraints
+
+`E141–07` Column defaults
+
+`E141–10` Names in a foreign key can be specified in any order
+
+| `E151` Transaction support
+| Ignite does not support the following sub-features:
+
+`E151–01` COMMIT statement
+
+`E151–02` ROLLBACK statement
+
+| `E152` Basic SET TRANSACTION statement
+| Ignite does not support the following sub-features:
+
+`E152–01` SET TRANSACTION statement: ISOLATION LEVEL SERIALIZABLE clause
+
+`E152–02` SET TRANSACTION statement: READ ONLY and READ WRITE clauses
+
+| `E153` Updatable queries with subqueries
+| Ignite fully supports this feature.
+
+| `E161` SQL comments using leading double minus
+| Ignite fully supports this feature.
+
+| `E171` SQLSTATE support
+| Ignite provides partial support for this feature, implementing a subset of the standard error codes and introducing custom ones. A full list of error codes supported by Ignite can be found here:
+
+link:SQL/JDBC/jdbc-driver#error-codes[JDBC Error Codes]
+
+link:SQL/ODBC/error-codes[ODBC Error Codes]
+
+| `E182` Host language Binding (previously "Module Language")
+| Ignite does not support this feature.
+
+| `F021` Basic information schema
+| Ignite does not support the following sub-features:
+
+`F021–01` COLUMNS view
+
+`F021–02` TABLES view
+
+`F021–03` VIEWS view
+
+`F021–04` TABLE_CONSTRAINTS
+
+`F021–05` REFERENTIAL_CONSTRAINTS view
+
+`F021–06` CHECK_CONSTRAINTS view
+
+| `F031` Basic schema manipulation
+| Ignite fully supports the following sub-feature:
+
+`F031–04` ALTER TABLE statement: ADD COLUMN clause
+
+Ignite provides partial support for the following sub-feature:
+
+`F031–01` CREATE TABLE statement to create persistent base tables.
+
+Basic syntax is supported. 'AS' is supported in H2 but not in Ignite. No support for privileges (INSERT, SELECT, UPDATE, DELETE).
+
+Ignite does not support the following sub-features:
+
+`F031–02` CREATE VIEW statement
+
+`F031–03` GRANT statement
+
+`F031–13` DROP TABLE statement: RESTRICT clause
+
+`F031–16` DROP VIEW statement: RESTRICT clause
+
+`F031–19` REVOKE statement: RESTRICT clause
+
+link:sql-reference/ddl[DDL, window=_blank] support is being actively developed; more features will be supported in upcoming releases.
+
+| `F041` Basic joined table
+| Ignite fully supports the following sub-features:
+
+`F041–01` Inner join (but not necessarily the INNER keyword)
+
+`F041–02` INNER keyword
+
+`F041–03` LEFT OUTER JOIN
+
+`F041–04` RIGHT OUTER JOIN
+
+`F041–05` Outer joins can be nested
+
+`F041–07` The inner table in a left or right outer join can also be used in an inner join
+
+`F041–08` All comparison operators are supported (rather than just =)
+
+| `F051` Basic date and time
+| Ignite fully supports the following sub-features:
+
+`F051–04` Comparison predicate on DATE, TIME, and TIMESTAMP data types
+
+`F051–05` Explicit CAST between datetime types and character string types
+
+`F051–06` CURRENT_DATE
+
+`F051–07` LOCALTIME
+
+`F051–08` LOCALTIMESTAMP
+
+Ignite provides partial support for the following sub-features:
+
+`F051–01` DATE data type (including support of DATE literal). See link:https://issues.apache.org/jira/browse/IGNITE-7360[IGNITE-7360, window=_blank].
+
+`F051–02` TIME data type (including support of TIME literal) with fractional seconds precision of at least 0. <precision> is not supported correctly for TIME data type. Also see link:https://issues.apache.org/jira/browse/IGNITE-7360[IGNITE-7360, window=_blank].
+
+`F051–03` TIMESTAMP data type (including support of TIMESTAMP literal) with fractional seconds precision of at least 0 and 6. <precision> is not supported correctly for the TIMESTAMP data type. Also see link:https://issues.apache.org/jira/browse/IGNITE-7360[IGNITE-7360, window=_blank].
+
+| `F081` UNION and EXCEPT in views
+| Ignite does not support this feature.
+
+| `F131` Grouped operations
+| Ignite does not support the following sub-features:
+
+`F131–01` WHERE, GROUP BY, and HAVING clauses supported in queries with grouped views
+
+`F131–02` Multiple tables supported in queries with grouped views
+
+`F131–03` Set functions supported in queries with grouped views
+
+`F131–04` Subqueries with GROUP BY and HAVING clauses and grouped views
+
+`F131–05` Single row SELECT with GROUP BY and HAVING clauses and grouped views
+
+| `F181` Multiple module support
+| Ignite does not support this feature.
+
+| `F201` CAST function
+| Ignite fully supports this feature.
+
+| `F221` Explicit defaults
+| Ignite fully supports this feature.
+
+| `F261` CASE expression
+| Ignite fully supports the following sub-features:
+
+`F261–01` Simple CASE
+
+`F261–02` Searched CASE
+
+`F261–03` NULLIF
+
+`F261–04` COALESCE
+
+| `F311` Schema definition statement
+| Ignite does not support the following sub-features:
+
+`F311–01` CREATE SCHEMA
+
+`F311–02` CREATE TABLE for persistent base tables
+
+`F311–03` CREATE VIEW
+
+`F311–04` CREATE VIEW: WITH CHECK OPTION
+
+`F311–05` GRANT statement
+
+| `F471` Scalar subquery values
+| Ignite fully supports this feature.
+
+| `F481` Expanded NULL predicate
+| Ignite fully supports this feature.
+
+| `F501` Features and conformance views
+| Ignite does not support the following sub-features:
+
+`F501–01` SQL_FEATURES view
+
+`F501–02` SQL_SIZING view
+
+`F501–03` SQL_LANGUAGES view
+
+| `F812` Basic flagging
+| Ignite does not support this feature.
+
+| `S011` Distinct data types
+| Ignite does not support the following sub-feature:
+
+`S011–01` USER_DEFINED_TYPES view
+
+| `T321` Basic SQL-invoked routines
+| Ignite does not support the following sub-features:
+
+`T321–01` User-defined functions with no overloading
+
+`T321–02` User-defined stored procedures with no overloading
+
+`T321–03` Function invocation
+
+`T321–04` CALL statement
+
+`T321–05` RETURN statement
+
+`T321–06` ROUTINES view
+
+`T321–07` PARAMETERS view
+
+|=======
diff --git a/docs/_docs/sql-reference/string-functions.adoc b/docs/_docs/sql-reference/string-functions.adoc
new file mode 100644
index 0000000..187acd2
--- /dev/null
+++ b/docs/_docs/sql-reference/string-functions.adoc
@@ -0,0 +1,942 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= String Functions
+
+:toclevels:
+
+== ASCII
+
+Returns the ASCII value of the first character in the string. This method returns an `int`.
+
+[source,sql]
+----
+ASCII(string)
+----
+
+
+Parameters:
+- `string` - an argument.
+
+
+Example:
+
+[source,sql]
+----
+select ASCII(name) FROM Players;
+----
+
+
+== BIT_LENGTH
+Returns the number of bits in a string. This method returns a `long`. For `BLOB`, `CLOB`, `BYTES`, and `JAVA_OBJECT`, the object's specified precision is used. Each character needs 16 bits.
+
+
+
+[source,sql]
+----
+BIT_LENGTH(string)
+----
+
+Parameters:
+- `string` - an argument.
+
+
+Example:
+
+[source,sql]
+----
+select BIT_LENGTH(name) FROM Players;
+----
+
+
+== LENGTH
+Returns the number of characters in a string. This method returns a `long`. For `BLOB`, `CLOB`, `BYTES`, and `JAVA_OBJECT`, the object's specified precision is used.
+
+
+
+[source,sql]
+----
+{LENGTH | CHAR_LENGTH | CHARACTER_LENGTH} (string)
+----
+
+
+Parameters:
+- `string` - an argument.
+
+
+Example:
+
+[source,sql]
+----
+SELECT LENGTH(name) FROM Players;
+----
+
+
+== OCTET_LENGTH
+Returns the number of bytes in a string. This method returns a `long`. For `BLOB`, `CLOB`, `BYTES` and `JAVA_OBJECT`, the object's specified precision is used. Each character needs 2 bytes.
+
+
+
+[source,sql]
+----
+OCTET_LENGTH(string)
+----
+
+
+Parameters:
+- `string` - an argument.
+
+
+Example:
+
+[source,sql]
+----
+SELECT OCTET_LENGTH(name) FROM Players;
+----
+
+
+== CHAR
+
+Returns the character that represents the ASCII value. This method returns a `string`.
+
+[source,sql]
+----
+{CHAR | CHR} (int)
+----
+
+
+Parameters:
+- `int` - an argument.
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT CHAR(65)||name FROM Players;
+----
+
+
+== CONCAT
+Combines strings. Unlike with the `||` operator, NULL parameters are ignored and do not cause the result to become NULL. This method returns a `string`.
+
+
+[source,sql]
+----
+CONCAT(string, string [,...])
+----
+
+
+Parameters:
+- `string` - an argument.
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT CONCAT(NAME, '!') FROM Players;
+----
+
+
+== CONCAT_WS
+Combines strings, dividing with a separator. Unlike with the `||` operator, NULL parameters are ignored and do not cause the result to become NULL. This method returns a `string`.
+
+
+[source,sql]
+----
+CONCAT_WS(separatorString, string, string [,...])
+----
+
+
+Parameters:
+- `separatorString` - separator.
+- `string` - an argument.
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT CONCAT_WS(',', NAME, '!') FROM Players;
+----
+
+
+== DIFFERENCE
+
+Returns the difference between the `SOUNDEX()` values of two strings. This method returns an `int`.
+
+[source,sql]
+----
+DIFFERENCE(X, Y)
+----
+
+
+Parameters:
+- `X`, `Y` - strings to compare.
+
+
+
+Example:
+Calculate the `SOUNDEX()` difference between two Players' names:
+
+
+[source,sql]
+----
+select DIFFERENCE(T1.NAME, T2.NAME) FROM players T1, players T2
+   WHERE T1.ID = 10 AND T2.ID = 11;
+----
+
+
+== HEXTORAW
+
+Converts a hex representation of a string to a string. 4 hex characters per string character are used.
+
+[source,sql]
+----
+HEXTORAW(string)
+----
+
+
+Parameters:
+- `string` - a hex string to use for the conversion.
+
+
+
+Example:
+Convert the hex representation stored in the `DATA` column to strings:
+
+
+[source,sql]
+----
+SELECT HEXTORAW(DATA) FROM Players;
+----
+
+
+== RAWTOHEX
+
+Converts a string to the hex representation. 4 hex characters per string character are used. This method returns a `string`.
+
+[source,sql]
+----
+RAWTOHEX(string)
+----
+
+Parameters:
+- `string` - a string to convert to the hex representation.
+
+
+
+Example:
+Convert the values of the `DATA` column to their hex representation:
+
+
+[source,sql]
+----
+SELECT RAWTOHEX(DATA) FROM Players;
+----
+
+
+== INSTR
+
+Returns the location of a search string in a string. If a start position is used, the characters before it are ignored. If position is negative, the rightmost location is returned. 0 is returned if the search string is not found. Please note this function is case sensitive, even if the parameters are not.
+
+
+
+[source,sql]
+----
+INSTR(string, searchString [, startInt])
+----
+
+
+Parameters:
+- `string` - any string.
+- `searchString` - any string to search for.
+- `startInt` - start position for the lookup.
+
+
+Example:
+Check if a string includes the "@" symbol:
+
+
+[source,sql]
+----
+SELECT INSTR(EMAIL,'@') FROM Players;
+----
+
+
+== INSERT
+
+Inserts an additional string into the original string at a specified start position. The length specifies the number of characters that are removed at the start position in the original string. This method returns a `string`.
+
+[source,sql]
+----
+INSERT(originalString, startInt, lengthInt, addString)
+----
+
+Parameters:
+
+* `originalString` - an original string.
+* `startInt` - start position.
+* `lengthInt` - the length.
+* `addString` - an additional string.
+
+
+Example:
+
+[source,sql]
+----
+SELECT INSERT(NAME, 1, 1, ' ') FROM Players;
+----
+
+
+== LOWER
+
+Converts a string to lowercase.
+
+[source,sql]
+----
+{LOWER | LCASE} (string)
+----
+
+
+Parameters:
+- `string` - an argument.
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT LOWER(NAME) FROM Players;
+----
+
+
+== UPPER
+
+Converts a string to uppercase.
+
+[source,sql]
+----
+{UPPER | UCASE} (string)
+----
+
+
+Parameters:
+- `string` - an argument.
+
+
+Example:
+The following example returns the last name in uppercase for each Player:
+
+
+[source,sql]
+----
+SELECT UPPER(last_name) "LastNameUpperCase" FROM Players;
+----
+
+
+== LEFT
+
+Returns the leftmost number of characters.
+
+[source,sql]
+----
+LEFT(string, int)
+----
+
+
+Parameters:
+- `string` - an argument.
+- `int` - a number of characters to extract.
+
+
+
+Example:
+Get 3 first letters of Players' names:
+
+
+[source,sql]
+----
+SELECT LEFT(NAME, 3) FROM Players;
+----
+
+
+== RIGHT
+
+Returns the rightmost number of characters.
+
+[source,sql]
+----
+RIGHT(string, int)
+----
+
+
+Parameters:
+- `string` - an argument.
+- `int` - a number of characters to extract.
+
+
+
+Example:
+Get the last 3 letters of Players' names:
+
+
+[source,sql]
+----
+SELECT RIGHT(NAME, 3) FROM Players;
+----
+
+
+== LOCATE
+
+Returns the location of a search string in a string. If a start position is used, the characters before it are ignored. If position is negative, the rightmost location is returned. 0 is returned if the search string is not found.
+
+[source,sql]
+----
+LOCATE(searchString, string [, startInt])
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT LOCATE('.', NAME) FROM Players;
+----
+
+
+== POSITION
+
+Returns the location of a search string in a string. See also <<LOCATE>>.
+
+[source,sql]
+----
+POSITION(searchString, string)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT POSITION('.', NAME) FROM Players;
+----
+
+
+== LPAD
+
+Left-pads the string to the specified length. If the specified length is shorter than the string, the string is truncated at the end. If the padding string is not set, spaces are used.
+
+[source,sql]
+----
+LPAD(string, int[, paddingString])
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT LPAD(AMOUNT, 10, '*') FROM Players;
+----
+
+
+== RPAD
+
+Right-pads the string to the specified length. If the specified length is shorter than the string, the string is truncated. If the padding string is not set, spaces are used.
+
+[source,sql]
+----
+RPAD(string, int[, paddingString])
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT RPAD(TEXT, 10, '-') FROM Players;
+----
+
+
+== LTRIM
+
+Removes all leading spaces from a string.
+
+[source,sql]
+----
+LTRIM(string)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT LTRIM(NAME) FROM Players;
+----
+
+
+== RTRIM
+
+Removes all trailing spaces from a string.
+
+[source,sql]
+----
+RTRIM(string)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT RTRIM(NAME) FROM Players;
+----
+
+
+== TRIM
+
+Removes all leading spaces, trailing spaces, or spaces at both ends, from a string. Other characters can be removed as well.
+
+[source,sql]
+----
+TRIM ([{LEADING | TRAILING | BOTH} [string] FROM] string)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT TRIM(BOTH '_' FROM NAME) FROM Players;
+----
+
+
+== REGEXP_REPLACE
+
+Replaces each substring that matches a regular expression. For details, see the Java `String.replaceAll()` method. If any parameter is null (except the optional `flagsString` parameter), the result is null.
+
+[source,sql]
+----
+REGEXP_REPLACE(inputString, regexString, replacementString [, flagsString])
+----
+
+
+Flag values are limited to 'i', 'c', 'n', 'm'. Other symbols cause an exception. Multiple symbols can be used in one `flagsString` parameter (for example, 'im'). Later flags override earlier ones; for example, 'ic' is equivalent to case-sensitive matching ('c').
+
+- 'i' enables case insensitive matching (Pattern.CASE_INSENSITIVE)
+
+- 'c' disables case insensitive matching (Pattern.CASE_INSENSITIVE)
+
+- 'n' allows the period to match the newline character (Pattern.DOTALL)
+
+- 'm' enables multiline mode (Pattern.MULTILINE)
+
+
+Example:
+
+[source,sql]
+----
+SELECT REGEXP_REPLACE(name, 'w+', 'W', 'i') FROM Players;
+----
+
+
+== REGEXP_LIKE
+
+Matches a string against a regular expression. For details, see the Java `Matcher.find()` method. If any parameter is null (except the optional `flagsString` parameter), the result is null.
+
+[source,sql]
+----
+REGEXP_LIKE(inputString, regexString [, flagsString])
+----
+
+
+
+Flag values are limited to 'i', 'c', 'n', 'm'. Other symbols cause an exception. Multiple symbols can be used in one `flagsString` parameter (for example, 'im'). Later flags override earlier ones; for example, 'ic' is equivalent to case-sensitive matching ('c').
+
+- 'i' enables case insensitive matching (Pattern.CASE_INSENSITIVE)
+
+- 'c' disables case insensitive matching (Pattern.CASE_INSENSITIVE)
+
+- 'n' allows the period to match the newline character (Pattern.DOTALL)
+
+- 'm' enables multiline mode (Pattern.MULTILINE)
+
+
+Example:
+
+[source,sql]
+----
+SELECT REGEXP_LIKE(name, '[A-Z ]*', 'i') FROM Players;
+----
+
+
+== REPEAT
+
+Returns a string repeated some number of times.
+
+[source,sql]
+----
+REPEAT(string, int)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT REPEAT(NAME || ' ', 10) FROM Players;
+----
+
+
+== REPLACE
+
+Replaces all occurrences of a search string in specified text with another string. If no replacement is specified, the search string is removed from the original string. If any parameter is null, the result is null.
+
+[source,sql]
+----
+REPLACE(string, searchString [, replacementString])
+----
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT REPLACE(NAME, ' ') FROM Players;
+----
+
+
+== SOUNDEX
+
+Returns a four character code representing the SOUNDEX of a string. See also link:http://www.archives.gov/genealogy/census/soundex.html[http://www.archives.gov/genealogy/census/soundex.html]. This method returns a `string`.
+
+[source,sql]
+----
+SOUNDEX(string)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT SOUNDEX(NAME) FROM Players;
+----
+
+
+== SPACE
+
+Returns a string consisting of the specified number of spaces.
+
+[source,sql]
+----
+SPACE(int)
+----
+
+
+
+
+Example:
+
+
+[source,sql]
+----
+SELECT name, SPACE(80) FROM Players;
+----
+
+
+== STRINGDECODE
+
+Converts an encoded string using the Java string literal encoding format. Special characters are `\b`, `\t`, `\n`, `\f`, `\r`, `\"`, `\`, `\<octal>`, `\u<unicode>`. This method returns a `string`.
+
+[source,sql]
+----
+STRINGDECODE(string)
+----
+
+Example:
+
+[source,sql]
+----
+STRINGENCODE(STRINGDECODE('Lines 1\nLine 2'));
+----
+
+
+== STRINGENCODE
+
+Encodes special characters in a string using the Java string literal encoding format. Special characters are `\b`, `\t`, `\n`, `\f`, `\r`, `\"`, `\`, `\<octal>`, `\u<unicode>`. This method returns a `string`.
+
+[source,sql]
+----
+STRINGENCODE(string)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+STRINGENCODE(STRINGDECODE('Lines 1\nLine 2'))
+----
+
+
+== STRINGTOUTF8
+
+Encodes a string to a byte array using the UTF8 encoding format. This method returns `bytes`.
+
+[source,sql]
+----
+STRINGTOUTF8(string)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT UTF8TOSTRING(STRINGTOUTF8(name)) FROM Players;
+----
+
+
+== SUBSTRING
+
+Returns a substring of a string starting at the specified position. If the start index is negative, then the start index is relative to the end of the string. The length is optional. Also supported is: `SUBSTRING(string [FROM start] [FOR length])`.
+
+[source,sql]
+----
+{SUBSTRING | SUBSTR} (string, startInt [, lengthInt])
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT SUBSTR(name, 2, 5) FROM Players;
+----
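+
+The alternative `FROM ... FOR` form mentioned above is equivalent; for example:
+
+[source,sql]
+----
+SELECT SUBSTRING(name FROM 2 FOR 5) FROM Players;
+----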
+
+
+== UTF8TOSTRING
+
+Decodes a byte array in UTF8 format to a string.
+
+[source,sql]
+----
+UTF8TOSTRING(bytes)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+SELECT UTF8TOSTRING(STRINGTOUTF8(name)) FROM Players;
+----
+
+
+== XMLATTR
+
+Creates an XML attribute element of the form name=value. The value is encoded as XML text. This method returns a `string`.
+
+[source,sql]
+----
+XMLATTR(nameString, valueString)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+XMLNODE('a', XMLATTR('href', 'http://h2database.com'))
+----
+
+
+== XMLNODE
+
+Creates an XML node element. An empty or null attribute string means no attributes are set. An empty or null content string means the node is empty. The content is indented by default if it contains a newline. This method returns a `string`.
+
+[source,sql]
+----
+XMLNODE(elementString [, attributesString [, contentString [, indentBoolean]]])
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+XMLNODE('a', XMLATTR('href', 'http://h2database.com'), 'H2')
+----
+
+
+== XMLCOMMENT
+
+Creates an XML comment. Two dashes (`--`) are converted to `- -`. This method returns a `string`.
+
+[source,sql]
+----
+XMLCOMMENT(commentString)
+----
+
+
+
+
+Example:
+
+[source,sql]
+----
+XMLCOMMENT('Test')
+----
+
+
+== XMLCDATA
+
+Creates an XML CDATA element. If the value contains `]]>`, an XML text element is created instead. This method returns a `string`.
+
+[source,sql]
+----
+XMLCDATA(valueString)
+----
+
+Example:
+
+[source,sql]
+----
+XMLCDATA('data')
+----
+
+
+== XMLSTARTDOC
+
+Returns the XML declaration. The result is always `<?xml version=1.0?>`.
+
+[source,sql]
+----
+XMLSTARTDOC()
+----
+
+
+
+
+
+Example:
+
+[source,sql]
+----
+XMLSTARTDOC()
+----
+
+
+== XMLTEXT
+
+
+Creates an XML text element. If enabled, newlines and linefeeds are converted to an XML entity (`&#`). This method returns a `string`.
+
+[source,sql]
+----
+XMLTEXT(valueString [, escapeNewlineBoolean])
+----
+
+
+
+
+
+Example:
+
+[source,sql]
+----
+XMLTEXT('test')
+----
+
+
+== TO_CHAR
+
+Formats a timestamp, number, or text.
+
+[source,sql]
+----
+TO_CHAR(value [, formatString[, nlsParamString]])
+----
+
+
+
+
+
+Example:
+
+[source,sql]
+----
+TO_CHAR(TIMESTAMP '2010-01-01 00:00:00', 'DD MON, YYYY')
+----
+
+
+== TRANSLATE
+
+Replaces a sequence of characters in a string with another set of characters.
+
+[source,sql]
+----
+TRANSLATE(value, searchString, replacementString)
+----
+
+
+
+
+
+Example:
+
+[source,sql]
+----
+TRANSLATE('Hello world', 'eo', 'EO')
+----
+
diff --git a/docs/_docs/sql-reference/system-functions.adoc b/docs/_docs/sql-reference/system-functions.adoc
new file mode 100644
index 0000000..5205d07
--- /dev/null
+++ b/docs/_docs/sql-reference/system-functions.adoc
@@ -0,0 +1,225 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= System Functions
+
+
+== COALESCE
+
+Returns the first value that is not null.
+
+[source,sql]
+----
+{COALESCE | NVL } (aValue, bValue [,...])
+----
+
+
+
+Examples:
+[source,sql]
+----
+COALESCE(A, B, C)
+----
+
+
+
+== DECODE
+
+Returns the first matching value. NULL is considered to match NULL. If no match was found, then NULL or the last parameter (if the parameter count is even) is returned.
+
+[source,sql]
+----
+DECODE(value, whenValue, thenValue [,...])
+----
+
+
+
+Examples:
+[source,sql]
+----
+DECODE(RAND()>0.5, 0, 'Red', 1, 'Black')
+----
+
+
+== GREATEST
+
+Returns the largest value that is not NULL, or NULL if all values are NULL.
+[source,sql]
+----
+GREATEST(aValue, bValue [,...])
+----
+
+
+
+Examples:
+[source,sql]
+----
+GREATEST(1, 2, 3)
+----
+
+
+== IFNULL
+
+Returns the value of 'a' if it is not null, otherwise 'b'.
+
+[source,sql]
+----
+IFNULL(aValue, bValue)
+----
+
+
+
+Examples:
+[source,sql]
+----
+IFNULL(NULL, '')
+----
+
+
+== LEAST
+
+Returns the smallest value that is not NULL, or NULL if all values are NULL.
+
+[source,sql]
+----
+LEAST(aValue, bValue [,...])
+----
+
+
+
+Examples:
+[source,sql]
+----
+LEAST(1, 2, 3)
+----
+
+
+== NULLIF
+
+Returns NULL if 'a' is equal to 'b', otherwise 'a'.
+
+[source,sql]
+----
+NULLIF(aValue, bValue)
+----
+
+
+
+Examples:
+[source,sql]
+----
+NULLIF(A, B)
+----
+
+
+== NVL2
+
+If the test value is null, then 'b' is returned. Otherwise, 'a' is returned. The data type of the returned value is the data type of 'a' if this is a text type.
+
+[source,sql]
+----
+NVL2(testValue, aValue, bValue)
+----
+
+
+
+Examples:
+[source,sql]
+----
+NVL2(X, 'not null', 'null')
+----
+
+
+== CASEWHEN
+
+Returns 'aValue' if the boolean expression is true, otherwise 'bValue'.
+
+[source,sql]
+----
+CASEWHEN(boolean, aValue, bValue)
+----
+
+
+
+Examples:
+[source,sql]
+----
+CASEWHEN(ID=1, 'A', 'B')
+----
+
+
+== CAST
+
+Converts a value to another data type. The following conversion rules are used:
+
+- When converting a number to a boolean, 0 is considered as false and every other value is true.
+- When converting a boolean to a number, false is 0 and true is 1.
+- When converting a number to a number of another type, the value is checked for overflow.
+- When converting a number to binary, the number of bytes will match the precision.
+- When converting a string to binary, it is hex encoded.
+- A hex string can be converted into the binary form and then to a number. If a direct conversion is not possible, the value is first converted to a string.
+
+
+
+[source,sql]
+----
+CAST (value AS dataType)
+----
+
+
+Examples:
+[source,sql]
+----
+CAST(NAME AS INT);
+CAST(65535 AS BINARY);
+CAST(CAST('FFFF' AS BINARY) AS INT);
+----
+
+
+== CONVERT
+
+Converts a value to another data type.
+
+[source,sql]
+----
+CONVERT(value, dataType)
+----
+
+
+
+Examples:
+[source,sql]
+----
+CONVERT(NAME, INT)
+----
+
+
+== TABLE
+
+
+Returns the result set. TABLE_DISTINCT removes duplicate rows.
+
+[source,sql]
+----
+{TABLE | TABLE_DISTINCT} (name dataType = expression [,...])
+----
+
+
+
+Examples:
+[source,sql]
+----
+SELECT * FROM TABLE(ID INT=(1, 2), NAME VARCHAR=('Hello', 'World'))
+----
+
diff --git a/docs/_docs/sql-reference/transactions.adoc b/docs/_docs/sql-reference/transactions.adoc
new file mode 100644
index 0000000..78fc40d
--- /dev/null
+++ b/docs/_docs/sql-reference/transactions.adoc
@@ -0,0 +1,66 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Transactions
+
+IMPORTANT: Support for link:transactions/mvcc[SQL transactions] is currently in the beta stage. For production use, consider key-value transactions.
+
+Ignite supports the following statements that allow users to start, commit, or roll back a transaction.
+
+[source,sql]
+----
+BEGIN [TRANSACTION]
+
+COMMIT [TRANSACTION]
+
+ROLLBACK [TRANSACTION]
+----
+
+- The `BEGIN` statement begins a new transaction.
+- `COMMIT` commits the current transaction.
+- `ROLLBACK` rolls back the current transaction.
+
+NOTE: DDL statements are not supported inside transactions.
+
+== Description
+
+The `BEGIN`, `COMMIT` and `ROLLBACK` commands allow you to manage SQL Transactions. A transaction is a sequence of SQL operations that starts with the `BEGIN` statement and ends with the `COMMIT` statement. Either all of the operations in a transaction succeed or they all fail.
+
+The `ROLLBACK [TRANSACTION]` statement undoes all updates made since the last time a `COMMIT` or `ROLLBACK` command was issued.
+
+== Example
+Add a person and update the city population by 1 in a single transaction.
+
+[source,sql]
+----
+BEGIN;
+
+INSERT INTO Person (id, name, city_id) VALUES (1, 'John Doe', 3);
+
+UPDATE City SET population = population + 1 WHERE id = 3;
+
+COMMIT;
+----
+
+Roll back the changes made by the previous commands.
+
+[source,sql]
+----
+BEGIN;
+
+INSERT INTO Person (id, name, city_id) VALUES (1, 'John Doe', 3);
+
+UPDATE City SET population = population + 1 WHERE id = 3;
+
+ROLLBACK;
+----
+
diff --git a/docs/_docs/starting-nodes.adoc b/docs/_docs/starting-nodes.adoc
new file mode 100644
index 0000000..6e78fad
--- /dev/null
+++ b/docs/_docs/starting-nodes.adoc
@@ -0,0 +1,262 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Starting and Stopping Nodes
+
+This chapter explains how to start server and client nodes.
+
+There are two types of nodes: _server nodes_ and _client nodes_. Client nodes are also referred to as _thick clients_ to
+distinguish them from the link:thin-clients/getting-started-with-thin-clients[thin clients]. Server nodes participate in caching, compute execution,
+stream processing, and so on. Thick clients provide the ability to connect to the servers remotely and
+expose the whole set of Ignite APIs, including near caching, transactions, compute, streaming,
+services, and more, from the client side.
+
+By default, all Ignite nodes are started as server nodes, and you should explicitly enable the client mode.
+
+//You can start a node by running the `ignite.sh` script.
+
+:javaFile: {javaCodeDir}/IgniteLifecycle.java
+:csharpFile: {csharpCodeDir}/IgniteLifecycle.cs
+
+== Starting Server Nodes
+
+To start a regular server node, use the following command or code snippet:
+
+
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+ignite.sh path/to/configuration.xml
+----
+
+tab:Java[]
+
+[source,java]
+----
+include::{javaFile}[tag=start,indent=0]
+----
+
+`Ignite` is an `AutoCloseable` object. You can use the _try-with-resource_ statement to close it automatically:
+
+[source, java]
+-------------------------------------------------------------------------------
+include::{javaFile}[tags=autoclose,indent=0]
+-------------------------------------------------------------------------------
+
+tab:C#/.NET[]
+[source, csharp]
+----
+include::{csharpFile}[tag=start,indent=0]
+----
+
+`Ignite` is an `IDisposable` object. You can use the _using_ statement to close it automatically:
+
+[source, csharp]
+-------------------------------------------------------------------------------
+include::{csharpFile}[tags=disposable,indent=0]
+-------------------------------------------------------------------------------
+tab:C++[]
+--
+
+== Starting Client Nodes
+
+To start a client node, simply enable the client mode in the node configuration:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/client-node.xml[tags=!*;ignite-config,indent=0]
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=client-node,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{csharpFile}[tag=start-client,indent=0]
+----
+tab:C++[]
+--
+
+Alternatively, for convenience, you can also enable or disable the client mode through the `Ignition` class, which allows clients and servers to reuse the same configuration.
+
+
+[tabs]
+--
+tab:Java[]
+
+[source,java]
+----
+include::{javaCodeDir}/ClusteringOverview.java[tag=clientModeIgnition,indent=0]
+----
+
+tab:C#/.NET[]
+
+[source,csharp]
+----
+include::code-snippets/dotnet/ClusteringOverview.cs[tag=ClientsAndServers,indent=0]
+----
+
+tab:C++[unsupported]
+
+--
+
+== Shutting Down Nodes
+
+When you perform a hard (forced) shutdown on a node, it can lead to data loss or data inconsistency and can even prevent the node from restarting.
+Non-graceful shutdowns should be used as a last resort when the node is not responding and it cannot be shut down gracefully.
+
+A graceful shutdown allows the node to finish critical operations and correctly complete its lifecycle.
+The proper procedure to perform a graceful shutdown is as follows:
+//is to use one of the following ways to stop the node and remove it from the link:clustering/baseline-topology[baseline topology]:
+//to remove the node from the link:clustering/baseline-topology[baseline topology] and
+
+. Stop the node using one of the following methods:
+//tag::stop-commands[]
+* programmatically call `Ignite.close()`
+* programmatically call `System.exit()`
+* send a user interrupt signal. Ignite uses a JVM shutdown hook to execute custom logic before the JVM stops.
+If you start the node by running `ignite.sh` and don't detach it from the terminal, you can stop the node by hitting `Ctrl+C`.
+//end::stop-commands[]
+. Remove the node from the link:clustering/baseline-topology[baseline topology]. This step may not be necessary if link:clustering/baseline-topology#baseline-topology-autoadjustment[baseline auto-adjustment] is enabled.
+
+
+
+Removing the node from the baseline topology starts the rebalancing process on the remaining nodes.
+If you plan to restart the node shortly after shutdown, you don't have to do the rebalancing.
+In this case, do not remove the nodes from the baseline topology.
+
+
+
+
+////
+
+=== Preventing Partition Loss on Shutdown
+
+TODO: is this section valid for ignite?
+
+WARNING: This feature is experimental.
+
+When you simultaneously stop more nodes than the number of partition backups, some partitions may become unavailable to the remaining nodes (because both the primary and backup copies of the partition happen to be on the nodes that were shut down).
+For example, if the number of backups for a cache is set to 1 and you stop 2 nodes, there is a chance that both the primary and backup copy of a partition becomes unavailable to the rest of the cluster.
+The proper way of dealing with this situation is to stop one node, rebalance the data, and then wait until the rebalancing is finished before stopping the next node, and so on.
+However, when the shutdown is triggered automatically (for example, when you do rolling upgrades or scaling the cluster in Kubernetes), you have no mechanism to wait for the completion of the rebalancing process and so you may lose data.
+
+To prevent this situation, you can define a system property (`IGNITE_WAIT_FOR_BACKUPS_ON_SHUTDOWN`) that delays the shutdown of a node until the shutdown does not lead to a partition loss.
+The node checks if all the partitions are available to the remaining nodes and only then exits the process.
+
+In other words, when the property is set, you won't be able to stop more than `min(CacheConfiguration.backups) + 1` nodes at a time without waiting until the data is rebalanced.
+This is only applicable if you have at least one cache with partition backups and the node is stopped properly, i.e. using the commands described in the previous section:
+
+include::{docfile}[tag=stop-commands]
+
+A non-graceful shutdown (`kill -9`) cannot be prevented.
+
+To enable partition loss prevention, set the `IGNITE_WAIT_FOR_BACKUPS_ON_SHUTDOWN` system property to `true`.
+
+[source, shell]
+----
+./ignite.sh -J-DIGNITE_WAIT_FOR_BACKUPS_ON_SHUTDOWN=true
+----
+
+CAUTION: If you have a cache without partition backups and you stop a node (even with this property set), you will lose the portion of the cache that was kept on this node.
+
+//The behavior of the node depend on whether the baseline topology is configured to be link:clustering/baseline-topology#baseline-topology-autoadjustment[adjusted automatically].
+
+When this property is set, the last node in the cluster will not stop gracefully.
+You will have to terminate the process by sending the `kill -9` signal.
+If you want to shut down the entire cluster, link:control-script#deactivating-cluster[deactivate] it and then stop all the nodes.
+Alternatively, you can stop all the nodes non-gracefully (by sending `kill -9`).
+However, the latter option is not recommended for clusters with persistence.
+////
+
+== Setting JVM Options
+
+There are several ways you can set JVM options when starting a node with the `ignite.sh` script.
+These ways are described in the following sections.
+
+=== JVM_OPTS Environment Variable
+
+You can set the `JVM_OPTS` environment variable:
+
+[source, shell]
+----
+export JVM_OPTS="$JVM_OPTS -Xmx6G -DIGNITE_TO_STRING_INCLUDE_SENSITIVE=false"; $IGNITE_HOME/bin/ignite.sh
+----
+
+
+
+=== Command Line Arguments
+
+You can also pass JVM options by using the `-J` prefix:
+
+[source, shell]
+----
+./ignite.sh -J-Xmx6G -J-DIGNITE_TO_STRING_INCLUDE_SENSITIVE=false
+----
+
+== Node Lifecycle Events
+
+
+Lifecycle events give you an opportunity to execute custom code at different stages of the node lifecycle.
+
+There are 4 lifecycle events:
+
+[cols="1,3",opts="header"]
+|===
+|Event Type |  Description
+|BEFORE_NODE_START |  Invoked before the node's startup routine is initiated.
+|AFTER_NODE_START  |  Invoked right after the node has started.
+|BEFORE_NODE_STOP |    Invoked right before the node's stop routine is initiated.
+|AFTER_NODE_STOP | Invoked right after the node has stopped.
+
+|===
+
+The following steps describe how to add a custom lifecycle event listener.
+
+. Create a custom lifecycle bean by implementing the `LifecycleBean` interface.
+The interface has the `onLifecycleEvent()` method, which is called for any lifecycle event.
++
+[source, java]
+----
+include::{javaCodeDir}/MyLifecycleBean.java[tags=bean, indent=0]
+----
+
+. Register the implementation in the node configuration.
++
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+include::code-snippets/xml/lifecycle.xml[tags=ignite-config;!discovery, indent=0]
+----
+tab:Java[]
+[source, java]
+----
+include::{javafile}[tags=lifecycle, indent=0]
+----
+
+tab:C#/.NET[]
+tab:C++[]
+--
+
diff --git a/docs/_docs/thin-client-comparison.csv b/docs/_docs/thin-client-comparison.csv
new file mode 100644
index 0000000..2e5fb45
--- /dev/null
+++ b/docs/_docs/thin-client-comparison.csv
@@ -0,0 +1,15 @@
+Thin Client Feature,Java,.NET,C++,Python,Node.js,PHP
+Scan Query,{yes},{yes},No,{yes},{yes},{yes}
+Scan Query with a filter,{yes},{yes},No,No,No,No
+SqlFieldsQuery,{yes},{yes},No,{yes},{yes},{yes}
+Binary Object API,{yes},{yes},No,No,{yes},{yes}
+Async Operations,No,{yes},No,{yes},{yes},{yes}
+SSL/TLS,{yes},{yes},{yes},{yes},{yes},{yes}
+Authentication,{yes},{yes},{yes},{yes},{yes},{yes}
+Partition Awareness,{yes},{yes},{yes},{yes},{yes},No
+Failover,{yes},{yes},{yes},{yes},{yes},{yes}
+Transactions,{yes},No,No,No,No,No
+Cluster API,{yes},{yes},No,No,No,No
+Compute,{yes},{yes},No,No,No,No
+Service invocation,{yes},No,No,No,No,No
+Server Discovery,No,{yes},No,No,No,No
\ No newline at end of file
diff --git a/docs/_docs/thin-clients/cpp-thin-client.adoc b/docs/_docs/thin-clients/cpp-thin-client.adoc
new file mode 100644
index 0000000..cdac782
--- /dev/null
+++ b/docs/_docs/thin-clients/cpp-thin-client.adoc
@@ -0,0 +1,117 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= C++ Thin Client
+
+== Prerequisites
+
+* C++ compiler: MS Visual C++ (10.0 and up), g++ (4.4.0 and up)
+* Visual Studio 2010 or newer
+
+
+== Installation
+The source code of the C++ thin client comes with the Ignite distribution package under the `${IGNITE_HOME}/platforms/cpp` directory.
+
+
+[tabs]
+--
+tab:Windows[]
+[source,bat]
+----
+cd %IGNITE_HOME%\platforms\cpp\project\vs
+
+msbuild ignite.sln /p:Configuration=Release /p:Platform=x64
+----
+
+tab:Ubuntu[]
+[source,bash,subs="attributes,specialchars"]
+----
+cd ${CPP_BUILD_DIR}
+cmake -DCMAKE_BUILD_TYPE=Release -DWITH_THIN_CLIENT=ON ${IGNITE_HOME}/platforms/cpp 
+make
+sudo make install
+----
+
+tab:CentOS/RHEL[]
+[source,shell,subs="attributes,specialchars"]
+----
+cd ${CPP_BUILD_DIR}
+cmake3 -DCMAKE_BUILD_TYPE=Release -DWITH_THIN_CLIENT=ON ${IGNITE_HOME}/platforms/cpp 
+make 
+sudo make install
+----
+
+--
+
+
+== Creating Client Instance
+The API provided by the thin client is located under the `ignite::thin` namespace.
+The main entry point to the API is the `IgniteClient::Start(IgniteClientConfiguration)` method, which returns an instance of the client.
+
+[source, cpp]
+----
+include::code-snippets/cpp/src/thin_creating_client_instance.cpp[tag=thin-creating-client-instance,indent=0]
+----
+
+=== Partition Awareness
+
+include::includes/partition-awareness.adoc[]
+
+The following code sample illustrates how to use the partition awareness feature with the C++ thin client.
+
+[source, cpp]
+----
+include::code-snippets/cpp/src/thin_partition_awareness.cpp[tag=thin-partition-awareness,indent=0]
+----
+
+== Using Key-Value API
+
+=== Getting Cache Instance
+
+To perform basic key-value operations on a cache, obtain an instance of the cache as follows:
+
+[source, cpp]
+----
+include::code-snippets/cpp/src/thin_client_cache.cpp[tag=thin-getting-cache-instance,indent=0]
+----
+
+The `GetOrCreateCache(cacheName)` method returns an instance of the cache if it exists, or creates it otherwise.
+
+=== Basic Cache Operations
+The following code snippet demonstrates how to execute basic cache operations on a specific cache.
+[source, cpp]
+----
+include::code-snippets/cpp/src/thin_client_cache.cpp[tag=basic-cache-operations,indent=0]
+----
+
+== Security
+
+=== SSL/TLS
+
+To use encrypted communication between the thin client and the cluster, you have to enable SSL/TLS both in the cluster configuration and the client configuration. Refer to the link:thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for instructions on the cluster configuration.
+
+[source, cpp]
+----
+include::code-snippets/cpp/src/thin_client_ssl.cpp[tag=thin-client-ssl,indent=0]
+----
+
+=== Authentication
+
+Configure link:security/authentication[authentication on the cluster side] and provide a valid user name and password in the client configuration.
+
+[source, cpp]
+----
+include::code-snippets/cpp/src/thin_authentication.cpp[tag=thin-authentication,indent=0]
+----
+
diff --git a/docs/_docs/thin-clients/dotnet-thin-client.adoc b/docs/_docs/thin-clients/dotnet-thin-client.adoc
new file mode 100644
index 0000000..fe551a3
--- /dev/null
+++ b/docs/_docs/thin-clients/dotnet-thin-client.adoc
@@ -0,0 +1,260 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= .NET Thin Client
+
+:sourceCodeFile: code-snippets/dotnet/ThinClient.cs
+
+== Prerequisites
+- Supported runtimes: .NET 4.0+, .NET Core 2.0+
+- Supported OS: Windows, Linux, macOS (any OS supported by .NET Core 2.0+)
+
+== Installation
+
+The .NET thin client API is provided by the Ignite.NET API library, which is located in the `{IGNITE_HOME}/platforms/dotnet` directory of the Ignite distribution package.
+The API is located in the `Apache.Ignite.Core` assembly.
+
+== Connecting to Cluster
+
+The thin client API entry point is the `Ignition.StartClient(IgniteClientConfiguration)` method.
+The `IgniteClientConfiguration.Endpoints` property is mandatory; it must point to the host where the server node is running.
+
+[source, csharp]
+----
+include::{sourceCodeFile}[tag=connecting,indent=0]
+----
+
+=== Failover
+
+You can provide multiple node addresses. In this case, the thin client connects to a random node from the list, and the *failover mechanism* is enabled: if a server node fails, the client tries the other known addresses and reconnects automatically.
+Note that an `IgniteClientException` can be thrown if a server node fails while a client operation is in progress;
+user code should handle this exception and implement retry logic accordingly.
+
+[[discovery]]
+=== Automatic Server Node Discovery
+
+The thin client can discover server nodes in the cluster automatically.
+This behavior is enabled when link:#partition_awareness[Partition Awareness] is enabled.
+
+Server discovery is an asynchronous process that happens in the background.
+Additionally, the thin client receives topology updates only when it performs operations, which minimizes server load and network traffic from idle connections.
+
+You can observe the discovery process by enabling logging and/or calling `IIgniteClient.GetConnections`:
+
+[source, csharp]
+----
+include::{sourceCodeFile}[tag=discovery,indent=0]
+----
+
+[WARNING]
+====
+[discrete]
+Server discovery may not work when the servers are behind NAT or a proxy.
+Server nodes report their addresses and ports to the client, but when the client is in a different subnet, those addresses are unreachable.
+====
+
+[[partition_awareness]]
+== Partition Awareness
+
+Partition awareness allows the thin client to send query requests directly to the node that owns the queried data.
+
+[WARNING]
+====
+[discrete]
+Partition awareness is an experimental feature whose API or design might change before a GA version is released.
+====
+
+Without partition awareness, an application that is connected to the cluster via a thin client executes all queries and operations via a single server node that acts as a proxy for the incoming requests.
+These operations are then re-routed to the node that stores the data that is being requested.
+This results in a bottleneck that could prevent the application from scaling linearly.
+
+image::images/partitionawareness01.png[Without Partition Awareness]
+
+Notice how queries must pass through the proxy server node, where they are routed to the correct node.
+
+With partition awareness in place, the thin client can directly route queries and operations to the primary nodes that own the data required for the queries.
+This eliminates the bottleneck, allowing the application to scale more easily.
+
+image::images/partitionawareness02.png[With Partition Awareness]
+
+To enable partition awareness, set the `IgniteClientConfiguration.EnablePartitionAwareness` property to `true`.
+This enables link:#discovery[server discovery] as well.
+If the client is behind NAT or a proxy, automatic server discovery may not work.
+In this case, provide the addresses of all server nodes in the client's connection configuration.
+
+
+== Using Key-Value API
+
+
+=== Getting Cache Instance
+The `ICacheClient` interface provides the key-value API. You can use the following methods to obtain an instance of `ICacheClient`:
+
+- `GetCache(cacheName)` — returns an instance of an existing cache.
+- `CreateCache(cacheName)` — creates a cache with the given name.
+- `GetOrCreateCache(CacheClientConfiguration)` — gets or creates a cache with the given configuration.
+
+[source, csharp]
+----
+include::{sourceCodeFile}[tag=createCache,indent=0]
+----
+
+Use `IIgniteClient.GetCacheNames()` to obtain a list of all existing caches.
+
+=== Basic Operations
+The following code snippet demonstrates how to execute basic cache operations on a specific cache.
+
+[source, csharp]
+----
+include::{sourceCodeFile}[tag=basicOperations,indent=0]
+----
+
+////
+=== Asynchronous Execution
+////
+
+
+=== Working With Binary Objects
+The .NET thin client supports the Binary Object API described in the link:key-value-api/binary-objects[Working with Binary Objects] section. Use `ICacheClient.WithKeepBinary()` to switch the cache to binary mode and start working directly with binary objects, avoiding serialization/deserialization. Use `IIgniteClient.GetBinary()` to get an instance of `IBinary` and build an object from scratch.
+
+[source, csharp]
+----
+include::{sourceCodeFile}[tag=binaryObj,indent=0]
+----
+
+== Scan Queries
+Use a scan query to get a set of entries that satisfy a given condition.
+The thin client sends the query to the cluster node where it is executed as a normal scan query.
+
+The query condition is specified by an `ICacheEntryFilter` object that is passed to the query constructor as an argument.
+
+Define a query filter as follows:
+[source, csharp]
+----
+include::{sourceCodeFile}[tag=scanQry,indent=0]
+----
+
+Then execute the scan query:
+[source, csharp]
+----
+include::{sourceCodeFile}[tag=scanQry2,indent=0]
+----
+
+
+== Executing SQL Statements
+
+The thin client provides a SQL API to execute SQL statements. SQL statements are declared using `SqlFieldsQuery` objects and executed through the `ICacheClient.Query(SqlFieldsQuery)` method.
+Alternatively, SQL queries can be performed via the Ignite LINQ provider.
+
+[source, csharp]
+----
+include::{sourceCodeFile}[tag=executingSql,indent=0]
+----
+
+
+== Using Cluster API
+
+The cluster APIs let you create a group of cluster nodes and run various operations against the group. The `IClientCluster`
+interface is the entry point to these APIs, which can be used to:
+
+* Get or change the state of a cluster
+* Get a list of all cluster nodes
+* Create logical groups out of cluster nodes and use other Ignite APIs to perform certain operations on the group
+
+Use the `IIgniteClient` instance to obtain a reference to the `IClientCluster` interface that comprises all cluster nodes, and then
+activate the whole cluster as well as write-ahead logging for the `my-cache` cache:
+[source, csharp]
+-------------------------------------------------------------------------------
+include::{sourceCodeFile}[tag=client-cluster,indent=0]
+-------------------------------------------------------------------------------
+
+=== Logical Nodes Grouping
+
+You can use the `IClientClusterGroup` interface of the cluster APIs to create various groups of cluster nodes. For instance,
+one group can comprise all server nodes, while another group can include only those nodes that match a specific
+TCP/IP address format. The example below shows how to create a group of server nodes located in the `dc1` data center:
+
+[source, csharp]
+-------------------------------------------------------------------------------
+include::{sourceCodeFile}[tag=client-cluster-groups,indent=0]
+-------------------------------------------------------------------------------
+
+Note that the `IClientCluster` instance implements `IClientClusterGroup`, which is the root cluster group that includes all
+nodes of the cluster.
+
+Refer to the main link:distributed-computing/cluster-groups[cluster groups] documentation page for more details on the capability.
+
+== Executing Compute Tasks
+
+Presently, the .NET thin client supports basic link:distributed-computing/distributed-computing[compute capabilities]
+by letting you execute those compute tasks that are *already deployed* in the cluster. You can run a task either across all
+cluster nodes or on a specific link:thin-clients/dotnet-thin-client#logical-nodes-grouping[cluster group].
+
+By default, execution of tasks triggered by the thin client is disabled on the cluster side. You need to set the
+`ThinClientConfiguration.MaxActiveComputeTasksPerConnection` parameter to a non-zero value in the configuration of your
+server nodes and thick clients:
+
+[tabs]
+--
+tab:Spring XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
+  <property name="clientConnectorConfiguration">
+    <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
+      <property name="thinClientConfiguration">
+        <bean class="org.apache.ignite.configuration.ThinClientConfiguration">
+          <property name="maxActiveComputeTasksPerConnection" value="100" />
+        </bean>
+      </property>
+    </bean>
+  </property>
+</bean>
+----
+tab:C#[]
+[source,csharp]
+----
+include::{sourceCodeFile}[tag=client-compute-setup,indent=0]
+----
+--
+
+The example below shows how to get access to the compute APIs via the `IComputeClient` interface and execute the compute
+task named `org.foo.bar.AddOneTask` passing `1` as an input parameter:
+[source, csharp]
+-------------------------------------------------------------------------------
+include::{sourceCodeFile}[tag=client-compute-task,indent=0]
+-------------------------------------------------------------------------------
+
+== Security
+
+=== SSL/TLS
+
+To use encrypted communication between the thin client and the cluster, you have to enable SSL/TLS in both the cluster configuration and the client configuration. Refer to the link:thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for instructions on the cluster configuration.
+
+The following code example demonstrates how to configure SSL parameters in the thin client.
+[source, csharp]
+----
+include::{sourceCodeFile}[tag=ssl,indent=0]
+----
+
+
+=== Authentication
+
+
+Configure link:security/authentication[authentication on the cluster side] and provide a valid user name and password in the client configuration.
+
+[source, csharp]
+----
+include::{sourceCodeFile}[tag=auth,indent=0]
+----
+
diff --git a/docs/_docs/thin-clients/getting-started-with-thin-clients.adoc b/docs/_docs/thin-clients/getting-started-with-thin-clients.adoc
new file mode 100644
index 0000000..5e0c37c
--- /dev/null
+++ b/docs/_docs/thin-clients/getting-started-with-thin-clients.adoc
@@ -0,0 +1,126 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Thin Clients Overview
+
+== Overview
+A thin client is a lightweight Ignite client that connects to the cluster via a standard socket connection.
+It does not become a part of the cluster topology, never holds any data, and is not used as a destination for compute grid calculations.
+It simply establishes a socket connection to a standard Ignite node and performs all operations through that node.
+
+Thin clients are based on the link:binary-client-protocol/binary-client-protocol[binary client protocol], which makes it possible to support Ignite connectivity from any programming language.
+
+Ignite provides the following thin clients:
+
+* link:thin-clients/java-thin-client[Java Thin Client]
+* link:thin-clients/dotnet-thin-client[.NET/C# Thin Client]
+* link:thin-clients/cpp-thin-client[C++ Thin Client]
+* link:thin-clients/python-thin-client[Python Thin Client]
+* link:thin-clients/nodejs-thin-client[Node.js Thin Client]
+* link:thin-clients/php-thin-client[PHP Thin Client]
+
+////
+*TODO: add a diagram of a thin client connecting to the cluster (multiple nodes) and how a request is rerouted to the node that hosts the data*
+////
+
+== Thin Client Features
+The following table outlines features supported by each client.
+
+:yes: pass:quotes[[.checkmark]#yes#]
+
+[%header,format=csv,cols="2,1,1,1,1,1,1"]
+|===
+include::thin-client-comparison.csv[]
+|===
+
+=== Client Connection Failover
+
+All thin clients support a connection failover mechanism, whereby the client automatically switches to an available node if the current node or connection fails.
+For this mechanism to work, you need to provide a list of node addresses you want to use for failover purposes in the client configuration.
+Refer to the specific client documentation for more details.
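+
+For example, with the Java thin client, the failover list is simply the set of addresses in the client configuration. The following is a minimal sketch, not one of the bundled snippets; the addresses are illustrative:
+
+[source, java]
+----
+// If the node behind the current connection fails, the client
+// transparently switches to another address from this list.
+ClientConfiguration cfg = new ClientConfiguration()
+    .setAddresses("node1.example.com:10800", "node2.example.com:10800");
+
+try (IgniteClient client = Ignition.startClient(cfg)) {
+    // Perform cache operations here.
+}
+----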
+
+[#partition-awareness]
+=== Partition Awareness
+
+As explained in the link:data-modeling/data-partitioning[Data Partitioning] section, data in the cluster is distributed between the nodes in a balanced manner for scalability and performance reasons.
+Each cluster node maintains a subset of the data and the partition distribution map, which is used to determine the node that keeps the primary/backup copy of requested entries.
+
+include::includes/partition-awareness.adoc[]
+
+Partition Awareness is available for the Java, .NET, C++, Python, and Node.js thin clients.
+Refer to the documentation of the specific client for more information.
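+
+For example, with the Java thin client, partition awareness is switched on in the client configuration. The following is a minimal sketch; the addresses are illustrative:
+
+[source, java]
+----
+// List all server nodes and enable partition awareness, so that requests
+// are routed directly to the node that owns the requested key.
+ClientConfiguration cfg = new ClientConfiguration()
+    .setAddresses("node1:10800", "node2:10800", "node3:10800")
+    .setPartitionAwarenessEnabled(true);
+
+IgniteClient client = Ignition.startClient(cfg);
+----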
+
+=== Authentication
+
+All thin clients support authentication on the cluster side. Authentication is link:security/authentication[configured in the cluster], and clients simply provide user credentials.
+Refer to the documentation of the specific client for more information.
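+
+For example, with the Java thin client, the credentials are part of the client configuration. The following is a minimal sketch; the user name and password are illustrative:
+
+[source, java]
+----
+// The credentials are verified on the cluster side, where
+// authentication must be enabled beforehand.
+ClientConfiguration cfg = new ClientConfiguration()
+    .setAddresses("127.0.0.1:10800")
+    .setUserName("ignite")
+    .setUserPassword("ignite");
+
+IgniteClient client = Ignition.startClient(cfg);
+----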
+
+== Cluster Configuration
+
+Thin client connection parameters are controlled by the client connector configuration.
+By default, Ignite accepts client connections on port 10800.
+You can change the port, connection buffer size and timeout, enable SSL/TLS, etc.
+
+=== Configuring Thin Client Connector
+
+The following example shows how to configure thin client connection parameters:
+
+:xmlConfigFile: code-snippets/xml/thin-client-cluster-config.xml
+:javaFile: {javaCodeDir}/JavaThinClient.java
+:dotnetFile: code-snippets/dotnet/ThinClient.cs
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
+    <property name="clientConnectorConfiguration">
+        <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
+            <property name="port" value="10000"/>
+        </bean>
+    </property>
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=clusterConfiguration,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{dotnetFile}[tag=clusterConfiguration,indent=0]
+----
+tab:C++[unsupported]
+--
+
+The following table describes some parameters that you may want to change.
+
+[cols="1,3,1",opts="header",width="100%"]
+|===
+| Parameter | Description  | Default Value
+| `thinClientEnabled`| Enables or disables thin client connectivity. | `true`
+| `port` | The port for thin client connections.  | 10800
+| `portRange`| This parameter sets a range of ports for thin client connections. For example, if `portRange` = 10, thin clients can connect to any port from the range 10800–10810. The node tries to bind to each port from the range, starting from `port`, until it finds an available one. If all ports are unavailable, the node won't start. | 100
+| `sslEnabled` | Set this property to `true` to enable SSL for thin client connections.  | `false`
+|===
+
+See the complete list of parameters in the link:{javadoc_base_url}/org/apache/ignite/configuration/ClientConnectorConfiguration.html[ClientConnectorConfiguration,window=_blank] javadoc.
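+
+For instance, the same parameters can also be set programmatically when starting a server node. The following is a minimal Java sketch; the port values are illustrative:
+
+[source, java]
+----
+// Accept thin client connections on the first free port in the range 10000-10010.
+ClientConnectorConfiguration clientConnectorCfg = new ClientConnectorConfiguration()
+    .setPort(10000)
+    .setPortRange(10);
+
+IgniteConfiguration cfg = new IgniteConfiguration()
+    .setClientConnectorConfiguration(clientConnectorCfg);
+
+Ignite ignite = Ignition.start(cfg);
+----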
+
+
+=== Enabling SSL/TLS for Thin Clients
+
+Refer to the  link:security/ssl-tls#ssl-for-clients[SSL for Thin Clients and JDBC/ODBC] section.
+
diff --git a/docs/_docs/thin-clients/java-thin-client.adoc b/docs/_docs/thin-clients/java-thin-client.adoc
new file mode 100644
index 0000000..81f1a86
--- /dev/null
+++ b/docs/_docs/thin-clients/java-thin-client.adoc
@@ -0,0 +1,329 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Java Thin Client
+
+:sourceCodeFile: {javaCodeDir}/JavaThinClient.java
+== Overview
+
+The Java thin client is a lightweight client that connects to the cluster via a standard socket connection. It does not become a part of the cluster topology, never holds any data, and is not used as a destination for compute calculations. The thin client simply establishes a socket connection to a standard node and performs all operations through that node.
+
+== Setting Up
+If you use Maven or Gradle, add the `ignite-core` dependency to your application:
+
+
+[tabs]
+--
+tab:Maven[]
+[source,xml,subs="attributes,specialchars"]
+----
+<properties>
+    <ignite.version>{version}</ignite.version>
+</properties>
+
+<dependencies>
+    <dependency>
+        <groupId>org.apache.ignite</groupId>
+        <artifactId>ignite-core</artifactId>
+        <version>${ignite.version}</version>
+    </dependency>
+</dependencies>
+----
+
+tab:Gradle[]
+[source,groovy,subs="attributes,specialchars"]
+----
+def igniteVersion = '{version}'
+
+dependencies {
+    compile group: 'org.apache.ignite', name: 'ignite-core', version: igniteVersion
+}
+----
+
+--
+
+Alternatively, you can use the `ignite-core-{version}.jar` library from the Ignite distribution package.
+
+== Connecting to Cluster
+
+To initialize a thin client, use the `Ignition.startClient(ClientConfiguration)` method. The method accepts a `ClientConfiguration` object, which defines client connection parameters.
+
+The method returns the `IgniteClient` interface, which provides various methods for accessing data. `IgniteClient` is an auto-closable resource. Use the _try-with-resources_ statement to close the thin client and release the resources associated with the connection.
+
+[source, java]
+-------------------------------------------------------------------------------
+include::{sourceCodeFile}[tag=clientConnection,indent=0]
+-------------------------------------------------------------------------------
+
+You can provide addresses of multiple nodes. In this case, the thin client tries the servers from the list in random order and throws a `ClientConnectionException` if none of them is available.
+
+[source, java]
+-------------------------------------------------------------------------------
+include::{sourceCodeFile}[tag=connect-to-many-nodes,indent=0]
+-------------------------------------------------------------------------------
+
+Note that the code above provides a failover mechanism in case of server node failures. Refer to the <<Handling Node Failures>> section for more information.
+
+== Partition Awareness
+
+include::includes/partition-awareness.adoc[]
+
+The following code sample illustrates how to use the partition awareness feature with the Java thin client.
+
+[source, java]
+----
+include::{sourceCodeFile}[tag=partition-awareness,indent=0]
+----
+
+== Using Key-Value API
+
+The Java thin client supports most of the key-value operations available in the thick client.
+To execute key-value operations on a specific cache, you need to get an instance of the cache and use one of its methods.
+
+=== Getting a Cache Instance
+
+The `ClientCache` interface provides the key-value API. You can use the following methods to obtain an instance of `ClientCache`:
+
+* `IgniteClient.cache(String)`: assumes a cache with the specified name exists. The method does not communicate with the cluster to check if the cache really exists. Subsequent cache operations fail if the cache does not exist.
+* `IgniteClient.getOrCreateCache(String)`, `IgniteClient.getOrCreateCache(ClientCacheConfiguration)`: gets an existing cache with the specified name or creates the cache if it does not exist. The former method creates a cache with the default configuration.
+* `IgniteClient.createCache(String)`, `IgniteClient.createCache(ClientCacheConfiguration)`: creates a cache with the specified name and fails if the cache already exists.
+
+Use `IgniteClient.cacheNames()` to list all existing caches.
+
+[source, java]
+-------------------------------------------------------------------------------
+include::{sourceCodeFile}[tag=getOrCreateCache,indent=0]
+-------------------------------------------------------------------------------
+
+=== Basic Cache Operations
+
+The following code snippet demonstrates how to execute basic cache operations from the thin client.
+
+[source, java]
+-------------------------------------------------------------------------------
+include::{sourceCodeFile}[tag=key-value-operations,indent=0]
+-------------------------------------------------------------------------------
+
+=== Executing Scan Queries
+Use the `ScanQuery<K, V>` class to get a set of entries that satisfy a given condition. The thin client sends the query to the cluster node where it is executed as a normal link:key-value-api/using-scan-queries[scan query].
+
+The query condition is specified by an `IgniteBiPredicate<K, V>` object that is passed to the query constructor as an argument. The predicate is applied on the server side. If you don't provide any predicate, the query returns all cache entries.
+
+NOTE: The classes of the predicates must be available on the server nodes of the cluster.
+
+The results of the query are transferred to the client page by page. Each page contains a specific number of entries and is fetched to the client only when the entries from that page are requested. To change the number of entries in a page, use the `ScanQuery.setPageSize(int pageSize)` method (default value is 1024).
+
+[source, java]
+-------------------------------------------------------------------------------
+include::{sourceCodeFile}[tag=scan-query,indent=0]
+-------------------------------------------------------------------------------
+
+The `ClientCache.query(ScanQuery)` method returns an instance of `QueryCursor`. Make sure to always close the cursor after you obtain all results.
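+
+Since the cursor is auto-closable, a _try-with-resources_ statement guarantees that it is closed. The following is a minimal sketch, assuming a cache named `myCache` exists:
+
+[source, java]
+----
+ClientCache<Integer, String> cache = client.cache("myCache");
+
+// The cursor is closed automatically when the block exits.
+try (QueryCursor<Cache.Entry<Integer, String>> cursor =
+    cache.query(new ScanQuery<Integer, String>())) {
+    for (Cache.Entry<Integer, String> entry : cursor)
+        System.out.println(entry.getKey() + " -> " + entry.getValue());
+}
+----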
+
+=== Transactions
+
+Client transactions are supported for caches with `AtomicityMode.TRANSACTIONAL` mode.
+
+==== Executing Transactions
+
+To start a transaction, obtain the `ClientTransactions` object from `IgniteClient`.
+`ClientTransactions` has a number of `txStart(...)` methods, each of which starts a new transaction and returns an object (`ClientTransaction`) that represents the transaction.
+Use this object to commit or rollback the transaction.
+
+[source, java]
+----
+include::{sourceCodeFile}[tags=tx,indent=0]
+----
+
+
+==== Transaction Configuration
+
+Client transactions can have different link:key-value-api/transactions#concurrency-modes-and-isolation-levels[concurrency modes, isolation levels], and execution timeouts, which can be set for all transactions or on a per-transaction basis.
+
+The `ClientConfiguration` object supports setting the default concurrency mode, isolation level, and timeout for all transactions started with this client interface.
+
+
+[source, java]
+----
+include::{sourceCodeFile}[tags=transaction-config,indent=0]
+----
+
+You can specify the concurrency mode, isolation level, and timeout when starting an individual transaction.
+In this case, the provided values override the default settings.
+
+
+[source, java]
+----
+include::{sourceCodeFile}[tags=tx-custom-properties,indent=0]
+----
+
+
+=== Working with Binary Objects
+The thin client fully supports Binary Object API described in the link:key-value-api/binary-objects[Working with Binary Objects] section.
+Use `ClientCache.withKeepBinary()` to switch the cache to binary mode and start working directly with binary objects to avoid serialization/deserialization.
+Use `IgniteClient.binary()` to get an instance of `IgniteBinary` and build an object from scratch.
+
+[source, java]
+-------------------------------------------------------------------------------
+include::{sourceCodeFile}[tag=binary-example,indent=0]
+-------------------------------------------------------------------------------
+
+Refer to the link:key-value-api/binary-objects[Working with Binary Objects] page for detailed information.
+
+== Executing SQL Statements
+
+The Java thin client provides a SQL API to execute SQL statements. SQL statements are declared using `SqlFieldsQuery` objects and executed through the `IgniteClient.query(SqlFieldsQuery)` method.
+
+[source, java]
+-------------------------------------------------------------------------------
+include::{sourceCodeFile}[tag=sql,indent=0]
+-------------------------------------------------------------------------------
+The `query(SqlFieldsQuery)` method returns an instance of `FieldsQueryCursor`, which can be used to iterate over the results. After getting the results, the cursor must be closed to release the resources associated with it.
+
+NOTE: The `getAll()` method retrieves the results from the cursor and closes it.
+
+Read more about using `SqlFieldsQuery` and SQL API in the link:SQL/sql-api[Using SQL API] section.
+
+== Using Cluster APIs
+
+The cluster APIs let you create a group of cluster nodes and run various operations against the group. The `ClientCluster`
+interface is the entry point to these APIs, which can be used to:
+
+* Get or change the state of a cluster
+* Get a list of all cluster nodes
+* Create logical groups out of cluster nodes and use other Ignite APIs to perform certain operations on the group
+
+Use the instance of `IgniteClient` to obtain a reference to the `ClientCluster` interface:
+[source, java]
+-------------------------------------------------------------------------------
+include::{sourceCodeFile}[tag=client-cluster,indent=0]
+-------------------------------------------------------------------------------
+
+=== Logical Nodes Grouping
+
+You can use the `ClientClusterGroup` interface of the cluster APIs to create various groups of cluster nodes. For instance,
+one group can comprise all server nodes, while another group can include only those nodes that match a specific
+TCP/IP address format. The example below shows how to create a group of server nodes located in the `dc1` data center:
+
+[source, java]
+-------------------------------------------------------------------------------
+include::{sourceCodeFile}[tag=client-cluster-groups,indent=0]
+-------------------------------------------------------------------------------
+
+Refer to the main link:distributed-computing/cluster-groups[cluster groups] documentation page for more details on the capability.
+
+== Executing Compute Tasks
+
+Presently, the Java thin client supports basic link:distributed-computing/distributed-computing[compute capabilities]
+by letting you execute those compute tasks that are *already deployed* in the cluster. You can run a task either across all
+cluster nodes or on a specific link:thin-clients/java-thin-client#logical-nodes-grouping[cluster group]. The deployment
+assumes that you create a JAR file with the compute tasks and add the JAR to the classpath of the cluster nodes.
+
+By default, execution of tasks triggered by the thin client is disabled on the cluster side. You need to set the
+`ThinClientConfiguration.maxActiveComputeTasksPerConnection` parameter to a non-zero value in the configuration of your
+server nodes and thick clients:
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
+    <property name="clientConnectorConfiguration">
+        <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
+            <property name="thinClientConfiguration">
+                <bean class="org.apache.ignite.configuration.ThinClientConfiguration">
+                    <property name="maxActiveComputeTasksPerConnection" value="100" />
+                </bean>
+            </property>
+        </bean>
+    </property>
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{sourceCodeFile}[tag=client-compute-setup,indent=0]
+----
+--
+
+The example below shows how to get access to the compute APIs via the `ClientCompute` interface and execute the compute
+task named `MyTask`:
+[source, java]
+-------------------------------------------------------------------------------
+include::{sourceCodeFile}[tag=client-compute-task,indent=0]
+-------------------------------------------------------------------------------
+
+== Executing Ignite Services
+
+You can use the `ClientServices` APIs of the Java thin client to invoke an link:services/services[Ignite Service] that
+is *already deployed* in the cluster.
+
+The example below shows how to invoke the service named `MyService`:
+[source, java]
+-------------------------------------------------------------------------------
+include::{sourceCodeFile}[tag=client-services,indent=0]
+-------------------------------------------------------------------------------
+
+== Handling Exceptions
+
+=== Handling Node Failures
+
+When you provide the addresses of multiple nodes in the client configuration, the client automatically switches to the next node if the current connection fails and retries any ongoing operation.
+
+In the case of atomic operations, failover to another node is transparent to the user. However, if you execute a scan query or a SELECT query, iteration over the query cursor may throw a `ClientConnectionException`. This can happen because queries return data in pages, and if the node that the client is connected to goes down while the client retrieves the pages, the exception is thrown to keep the query results consistent.
+
+If an explicit transaction has been started, cache operations bound to that transaction can also throw a `ClientException` if the connection to the server node fails.
+
+User code should handle these exceptions and implement retry logic accordingly.
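+
+For instance, a simple retry loop around an operation might look as follows. This is a minimal sketch; the cache name and the limit of three attempts are illustrative:
+
+[source, java]
+----
+ClientCache<Integer, String> cache = client.cache("myCache");
+
+for (int attempt = 1;; attempt++) {
+    try {
+        cache.put(1, "value");
+
+        break; // The operation succeeded.
+    }
+    catch (ClientConnectionException e) {
+        // The client reconnects to another configured address on the next call.
+        if (attempt >= 3)
+            throw e; // Give up after three attempts.
+    }
+}
+----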
+
+== Security
+
+=== SSL/TLS
+
+To use encrypted communication between the thin client and the cluster, you have to enable SSL/TLS in both the cluster configuration and the client configuration. Refer to the link:thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for instructions on the cluster configuration.
+
+To enable encrypted communication in the thin client, provide a keystore that contains the encryption key and a truststore with the trusted certificates in the thin client configuration.
+
+[source, java]
+-------------------------------------------------------------------------------
+include::{sourceCodeFile}[tag=ssl-configuration,indent=0]
+-------------------------------------------------------------------------------
+
+The following table explains encryption parameters of the client configuration:
+
+[cols="1,3,1",opts="header,stretch"]
+|===
+| Parameter | Description | Default Value
+| sslMode | Either `REQUIRED` or `DISABLED`. | `DISABLED`
+| sslClientCertificateKeyStorePath | The path to the keystore file with the private key. | N/A
+| sslClientCertificateKeyStoreType | The type of the keystore. | `JKS`
+| sslClientCertificateKeyStorePassword | The password to the keystore.| N/A
+| sslTrustCertificateKeyStorePath | The path to the truststore file.| N/A
+| sslTrustCertificateKeyStoreType | The type of the truststore. | `JKS`
+| sslTrustCertificateKeyStorePassword | The password to the truststore. | N/A
+| sslKeyAlgorithm| Sets the key manager algorithm that is used to create a key manager. | `SunX509`
+| sslTrustAll | If this parameter is set to `true`, the certificates are not validated. | N/A
+| sslProtocol | The name of the protocol that is used for data encryption. | `TLS`
+|===
+
+=== Authentication
+
+Configure link:security/authentication[authentication on the cluster side] and provide the user name and password in the client configuration.
+
+[source, java]
+-------------------------------------------------------------------------------
+include::{sourceCodeFile}[tag=client-authentication,indent=0]
+-------------------------------------------------------------------------------
+
+
diff --git a/docs/_docs/thin-clients/nodejs-thin-client.adoc b/docs/_docs/thin-clients/nodejs-thin-client.adoc
new file mode 100644
index 0000000..67ad09a
--- /dev/null
+++ b/docs/_docs/thin-clients/nodejs-thin-client.adoc
@@ -0,0 +1,240 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Node.js Thin Client
+
+:source_code_dir: code-snippets/nodejs
+
+== Prerequisites
+
+* Node.js version 8 or higher. Either download the Node.js https://nodejs.org/en/download/[pre-built binary] for the target platform, or install Node.js via a https://nodejs.org/en/download/package-manager[package manager].
+
+Once `node` and `npm` are installed, you can use one of the following installation options.
+
+== Installation
+
+The Node.js thin client is shipped as an `npm` package and as a zip archive. Use either method to install the client in your environment.
+
+=== Using NPM ===
+
+Use the following command to install the client from the NPM repository:
+
+[source,shell]
+----
+npm install -g apache-ignite-client
+----
+
+=== Using ZIP Archive ===
+
+The thin client can be installed from the zip archive available for download from the Ignite website:
+
+*  Download the link:https://ignite.apache.org/download.cgi#binaries[Apache Ignite binary package,window=_blank].
+*  Unpack the archive and navigate to the `{IGNITE_HOME}/platforms/nodejs` folder.
+*  Run the commands below to finish the installation.
+
+[source,shell]
+----
+npm link
+
+npm link apache-ignite-client
+----
+
+
+== Creating a Client Instance
+The `IgniteClient` class provides the thin client API. You can obtain an instance of the client as follows:
+
+[source, js]
+----
+include::{source_code_dir}/initialize.js[tag=example-block,indent=0]
+----
+
+The constructor accepts one optional parameter that represents a callback function, which is called every time the connection state changes (see below).
+
+You can create as many `IgniteClient` instances as needed. All of them will work independently.
+
+== Connecting to Cluster
+To connect the client to a cluster, use the `IgniteClient.connect()` method.
+It accepts an object of the `IgniteClientConfiguration` class that represents connection parameters. The connection parameters must contain a list of nodes (in the `host:port` format) that will be used for link:thin-clients/getting-started-with-thin-clients#client-connection-failover[failover purposes].
+
+[source, js]
+----
+include::{source_code_dir}/connecting.js[tag=example-block,indent=0]
+----
+
+The client has three connection states: `CONNECTING`, `CONNECTED`, `DISCONNECTED`.
+You can specify a callback function in the client configuration object, which will be called every time the connection state changes.
+
+Interactions with the cluster are only possible in the `CONNECTED` state.
+If the client loses the connection, it automatically switches to the `CONNECTING` state and tries to re-connect using the link:thin-clients/getting-started-with-thin-clients#client-connection-failover[failover mechanism]. If it fails to reconnect to all the endpoints from the provided list, the client switches to the `DISCONNECTED` state.
+
+You can call the `disconnect()` method to close the connection. This will switch the client to the `DISCONNECTED` state.
+
+== Partition Awareness
+
+include::includes/partition-awareness.adoc[]
+
+To enable partition awareness, set the `partitionAwareness` configuration parameter to `true` as shown in the following code snippet:
+
+[source, js]
+----
+const IgniteClient = require('apache-ignite-client');
+const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
+
+const ENDPOINTS = ['127.0.0.1:10800', '127.0.0.1:10801', '127.0.0.1:10802'];
+
+const igniteClient = new IgniteClient();
+const cfg = new IgniteClientConfiguration(...ENDPOINTS);
+
+// The third argument of setConnectionOptions enables partition awareness.
+const useTls = false;
+const partitionAwareness = true;
+cfg.setConnectionOptions(useTls, null, partitionAwareness);
+
+await igniteClient.connect(cfg);
+----
+
+
+== Enabling Debug
+
+////
+TODO: Artem, pls take a look here
+////
+
+[source, js]
+----
+include::{source_code_dir}/enabling-debug.js[tag=example-block,indent=0]
+----
+
+== Using Key-Value API
+
+=== Getting Cache Instance
+
+The key-value API is provided through an instance of a cache. The thin client provides several methods for obtaining a cache instance:
+
+- Get a cache by its name.
+- Create a cache with a specified name and optional cache configuration.
+- Get or create a cache, destroy a cache, etc.
+
+You can obtain as many cache instances as needed - for the same or different caches - and work with all of them in parallel.
+
+The following example shows how to get access to a cache by name and destroy it later:
+
+[source, js]
+----
+include::{source_code_dir}/configuring-cache-1.js[tag=example-block,indent=0]
+----
+
+=== Cache Configuration
+When creating a new cache, you can provide an instance of the cache configuration.
+
+////
+*TODO: need a better example*
+////
+[source, js]
+----
+include::{source_code_dir}/configuring-cache-2.js[tag=example-block,indent=0]
+----
+
+=== Type Mapping Configuration
+Node.js types do not always map uniquely to Java types, and in some cases you may want to explicitly specify the key and value types in the cache configuration.
+The client uses these types to convert keys and values between Java and JavaScript data types when executing read/write cache operations.
+
+If you don't specify the types, the client will use the <<Default Mapping>>.
+Here is an example of type mapping:
+[source, js]
+----
+include::{source_code_dir}/types-mapping-configuration.js[tag=mapping,indent=0]
+----
+
+
+=== Data Types
+
+The client supports type mapping between Ignite types and JavaScript types in two ways:
+
+- Explicit mapping
+- Default mapping
+
+==== Explicit Mapping
+
+A mapping occurs every time an application writes or reads a field to/from the cluster via the client's API. The field here is any data stored in Ignite - the whole key or value of an Ignite entry, an element of an array or set, a field of a complex object, etc.
+
+By using the client's API methods, an application can explicitly specify an Ignite type for a particular field. The client uses this information to transform the field between JavaScript and Java types during read/write operations: on reads, the field is transformed into the corresponding JavaScript type; on writes, the client validates that the input has the corresponding JavaScript type.
+
+If an application does not explicitly specify an Ignite type for a field, the client uses the default mapping during the field read/write operations.
+
+==== Default Mapping
+
+The default mapping is explained link:https://www.gridgain.com/sdk/nodejs-thin-client/latest/ObjectType.html[here].
+
+
+=== Basic Key-Value Operations
+The `CacheClient` class provides methods for working with the cache entries using key-value operations - put, get, put all, get all, replace and others.
+The following example shows how to do that:
+
+[source, js]
+----
+include::{source_code_dir}/key-value.js[tag=example-block,indent=0]
+----
+
+////
+=== Asynchronous Execution
+TODO
+////
+
+== Scan Queries
+The `query(ScanQuery)` method of the cache object can be used to fetch all entries from the cache.
+It returns a cursor object that can be used to iterate over the result set lazily or to get all results at once.
+
+To execute a scan query, create a `ScanQuery` object and pass it to the cache's `query()` method:
+
+[source, js]
+----
+include::{source_code_dir}/scanquery.js[tag="scan-query", indent=0]
+----
+
+== Executing SQL Statements
+The Node.js thin client supports all link:sql-reference[SQL commands] that are supported by Ignite.
+The commands are executed via the `query(SqlFieldsQuery)` method of the cache object.
+The method accepts an instance of `SqlFieldsQuery` that represents a SQL statement and returns an instance of the `SqlFieldsCursor` class. Use the cursor to iterate over the result set or get all results at once.
+
+[source, js]
+----
+include::{source_code_dir}/sql.js[tag="sql", indent=0]
+----
+
+////
+TODO: do we need this example?
+[source, js]
+----
+include::{source_code_dir}/sql-fields-query.js[tag=example-block,indent=0]
+----
+////
+
+== Security
+
+=== SSL/TLS
+To use encrypted communication between the thin client and the cluster, you have to enable SSL/TLS both in the cluster configuration and the client configuration. Refer to the link:thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for instructions on the cluster configuration.
+
+Here is an example configuration for enabling SSL in the thin client:
+
+[source, js]
+----
+include::{source_code_dir}/tls.js[tag=example-block,indent=0]
+----
+
+
+
+=== Authentication
+Configure link:security/authentication[authentication on the cluster side] and provide a valid user name and password in the client configuration.
+
+[source, js]
+----
+include::{source_code_dir}/authentication.js[tag=auth,indent=0]
+----
+
+
+
diff --git a/docs/_docs/thin-clients/php-thin-client.adoc b/docs/_docs/thin-clients/php-thin-client.adoc
new file mode 100644
index 0000000..819a5e9
--- /dev/null
+++ b/docs/_docs/thin-clients/php-thin-client.adoc
@@ -0,0 +1,149 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= PHP Thin Client
+
+== Prerequisites
+* link:http://php.net/manual/en/install.php[PHP version 7.2 or higher,window=_blank] and link:https://getcomposer.org/download[Composer Dependency Manager,window=_blank]
+* php-xml module
+* link:http://php.net/manual/en/mbstring.installation.php[PHP Multibyte String extension].
+Depending on your PHP configuration, you may need to additionally install/configure it.
+
+== Installation
+
+The thin client can be installed from the zip archive:
+
+*  Download the link:https://ignite.apache.org/download.cgi#binaries[Apache Ignite binary package,window=_blank].
+*  Unpack the archive and navigate to the `{IGNITE_HOME}/platforms/php` folder.
+*  Use the command below to install the package.
+
+[source,shell]
+----
+composer install --no-dev
+----
+
+To use the client in your application, include the `vendor/autoload.php` file, generated by Composer, in your source code.
+
+[source,php]
+----
+require_once "<php_client_root_dir>/vendor/autoload.php";
+----
+
+== Creating a Client Instance
+All operations of the PHP thin client are performed through a `Client` instance. You can create as many `Client` instances
+as needed. All of them will work independently.
+
+[source, php]
+----
+use Apache\Ignite\Client;
+
+$client = new Client();
+----
+
+== Connecting to Cluster
+
+To connect to a cluster, define a `ClientConfiguration` object with the desired connection parameters and use the `Client.connect(...)` method.
+
+[source, php]
+----
+include::code-snippets/php/ConnectingToCluster.php[tag=connecting,indent=0]
+----
+
+The `ClientConfiguration` constructor accepts a list of node endpoints. At least one endpoint must be specified. If you specify more than one, the thin client will use them for link:thin-clients/getting-started-with-thin-clients#client-connection-failover[failover purposes].
+
+If the client cannot connect to the cluster, a `NoConnectionException`  is thrown when attempting to perform any remote operation.
+
+If the client unexpectedly loses the connection before or during an operation, an `OperationStatusUnknownException` is thrown.
+In this case, it is not known if the operation has been actually executed in the cluster or not.
+The client will try to reconnect to the next node specified in the configuration when the next operation is called by the application.
+
+Call the `disconnect()` method to close the connection.
+
+
+== Using Key-Value API
+
+=== Getting/Creating a Cache Instance
+
+The client instance provides three methods for obtaining an instance of a cache:
+
+* `getCache(name)` — returns an existing cache by name. The method does not check whether the cache exists in the cluster; if it does not, you will get an exception when attempting to perform any operation on the cache.
+* `getOrCreateCache(name, config)` — returns an existing cache by name or creates a cache with the given configuration.
+* `createCache(name, config)` — creates a cache with the given name and parameters.
+
+This is how you can create a cache:
+
+[source, php]
+----
+include::code-snippets/php/UsingKeyValueApi.php[tag=createCache,indent=0]
+----
+
+=== Basic Key-Value Operations
+
+The following code snippet illustrates how to perform basic key-value operations with a cache instance:
+
+[source, php]
+----
+include::code-snippets/php/UsingKeyValueApi.php[tag=basicOperations,indent=0]
+----
+
+////
+TODO
+=== Asynchronous Execution
+////
+
+
+== Scan Queries
+The `Cache.query(ScanQuery)` method can be used to fetch all entries from the cache. It returns a cursor object with the standard PHP Iterator interface — use this cursor to iterate over the result set lazily, one by one. In addition, the cursor has methods to get all results at once.
+
+////
+*TODO: illustrate how to use the iterator in the example*
+////
+
+[source, php]
+----
+include::code-snippets/php/UsingKeyValueApi.php[tag=scanQry,indent=0]
+----
+
+== Executing SQL Statements
+The PHP thin client supports all link:sql-reference[SQL commands] that are supported by Ignite.
+The commands are executed via the `query(SqlFieldsQuery)` method of the cache object.
+The method accepts an instance of `SqlFieldsQuery` that represents a SQL statement.
+The `query()` method returns a cursor object with the standard PHP Iterator interface — use this cursor to iterate over the result set lazily, one by one. In addition, the cursor has methods to get all results at once.
+
+[source, php]
+----
+include::code-snippets/php/UsingKeyValueApi.php[tag=executingSql,indent=0]
+----
+
+== Security
+
+=== SSL/TLS
+To use encrypted communication between the thin client and the cluster, you have to enable it in both the cluster configuration and the client configuration. Refer to the link:thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for instructions on the cluster configuration.
+
+Here is an example configuration for enabling SSL in the thin client:
+[source, php]
+----
+include::code-snippets/php/Security.php[tag=tls,indent=0]
+----
+
+=== Authentication
+Configure link:security/authentication[authentication on the cluster side] and provide a valid user name and password in the client configuration.
+
+[source, php]
+----
+include::code-snippets/php/Security.php[tag=authentication,indent=0]
+----
+
+
+
diff --git a/docs/_docs/thin-clients/python-thin-client.adoc b/docs/_docs/thin-clients/python-thin-client.adoc
new file mode 100644
index 0000000..f04a567
--- /dev/null
+++ b/docs/_docs/thin-clients/python-thin-client.adoc
@@ -0,0 +1,488 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Python Thin Client
+
+:sourceFileDir: code-snippets/python
+
+== Prerequisites
+
+Python 3.4 or above.
+
+== Installation
+
+You can install the Python thin client either using `pip` or from a zip archive.
+
+=== Using PIP
+
+The Python thin client package is called `pyignite`. You can install it using the following command:
+
+include::includes/install-python-pip.adoc[]
+
+=== Using ZIP Archive
+
+The thin client can be installed from the zip archive:
+
+*  Download the link:https://ignite.apache.org/download.cgi#binaries[Apache Ignite binary package,window=_blank].
+*  Unpack the archive and navigate to the root folder.
+*  Install the client using the command below.
+
+
+[tabs]
+--
+tab:pip3[]
+[source,shell]
+----
+pip3 install .
+----
+
+tab:pip[]
+[source,shell]
+----
+pip install .
+----
+--
+
+This installs `pyignite` into your environment. To install the package in the so-called "develop" or "editable" mode instead, pass the `-e` flag to `pip`. Learn more
+about the mode from the https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs[official documentation,window=_blank].
+
+Check the `requirements` folder and install additional requirements, if needed, using the following command:
+
+
+[tabs]
+--
+tab:pip3[]
+[source,shell]
+----
+pip3 install -r requirements/<your task>.txt
+----
+tab:pip[]
+[source,shell]
+----
+pip install -r requirements/<your task>.txt
+----
+--
+
+Refer to the https://setuptools.readthedocs.io/en/latest/[Setuptools manual] for more details about `setup.py` usage.
+
+== Connecting to Cluster
+
+The distribution package contains runnable examples that demonstrate basic usage scenarios of the Python thin client.
+The examples are located in the `{ROOT_FOLDER}/examples` directory.
+
+The following code snippet shows how to connect to a cluster from the Python thin client:
+
+[source, python]
+-------------------------------------------------------------------------------
+include::{sourceFileDir}/connect.py[tag=example-block,indent=0]
+-------------------------------------------------------------------------------
+
+== Client Failover
+
+You can configure the client to automatically fail over to another node if the connection to the current node fails or times out.
+
+When the connection fails, the client propagates the initial exception (`OSError` or `SocketError`), but keeps its constructor’s parameters intact and tries to reconnect transparently.
+When the client fails to reconnect, it throws a special `ReconnectError` exception.
+
+In the following example, the client is given the addresses of three cluster nodes.
+
+[source, python]
+-------------------------------------------------------------------------------
+include::{sourceFileDir}/client_reconnect.py[tag=example-block,indent=0]
+-------------------------------------------------------------------------------
+
+
+== Partition Awareness
+
+include::includes/partition-awareness.adoc[]
+
+To enable partition awareness, set the `partition_aware` parameter to `True` in the client constructor and provide
+the addresses of all the server nodes in the connection string.
+
+
+[source, python]
+----
+from pyignite import Client
+
+client = Client(partition_aware=True)
+nodes = [
+    ('127.0.0.1', 10800),
+    ('217.29.2.1', 10800),
+    ('200.10.33.1', 10800),
+]
+
+client.connect(nodes)
+----
+
+== Creating a Cache
+
+You can get an instance of a cache using one of the following methods:
+
+* `get_cache(settings)` — creates a local Cache object with the given name or set of parameters. The cache must exist in the cluster; otherwise, an exception will be thrown when you attempt to perform operations on that cache.
+* `create_cache(settings)` — creates a cache with the given name or set of parameters.
+* `get_or_create_cache(settings)` — returns an existing cache or creates it if the cache does not exist.
+
+Each method accepts a cache name or a dictionary of properties that represents a cache configuration.
+
+[source, python]
+-------------------------------------------------------------------------------
+include::{sourceFileDir}/create_cache.py[tag=example-block,indent=0]
+-------------------------------------------------------------------------------
+
+Here is an example of creating a cache with a set of properties:
+
+[source, python]
+-------------------------------------------------------------------------------
+include::{sourceFileDir}/create_cache_with_properties.py[tag=example-block,indent=0]
+-------------------------------------------------------------------------------
+
+See the next section for the list of supported cache properties.
+
+=== Cache Configuration
+The list of property keys that you can specify is provided in the `prop_codes` module.
+
+[cols="3,1,5",opts="header",stripes=even,width="100%"]
+|===
+|Property name | Type | Description
+
+|PROP_NAME
+|str
+|Cache name. This is the only required property.
+
+|PROP_CACHE_MODE
+|int
+a| link:data-modeling/data-partitioning#partitionedreplicated-mode[Cache mode]:
+
+* REPLICATED=1,
+* PARTITIONED=2
+
+|PROP_CACHE_ATOMICITY_MODE
+|int
+a|link:configuring-caches/atomicity-modes[Cache atomicity mode]:
+
+* TRANSACTIONAL=0,
+* ATOMIC=1
+
+|PROP_BACKUPS_NUMBER
+|int
+|link:data-modeling/data-partitioning#backup-partitions[Number of backup partitions].
+
+|PROP_WRITE_SYNCHRONIZATION_MODE
+|int
+a|Write synchronization mode:
+
+* FULL_SYNC=0,
+* FULL_ASYNC=1,
+* PRIMARY_SYNC=2
+
+|PROP_COPY_ON_READ
+|bool
+|The copy on read flag. The default value is `true`.
+
+|PROP_READ_FROM_BACKUP
+|bool
+|The flag indicating whether entries will be read from the local backup partitions, when available, or will always be requested from the primary partitions. The default value is `true`.
+
+|PROP_DATA_REGION_NAME
+|str
+| link:memory-configuration/data-regions[Data region] name.
+
+|PROP_IS_ONHEAP_CACHE_ENABLED
+|bool
+|Enable link:configuring-caches/on-heap-caching[on-heap caching] for the cache.
+
+|PROP_QUERY_ENTITIES
+|list
+|A list of query entities. See the <<Query Entities>> section below for details.
+
+|PROP_QUERY_PARALLELISM
+|int
+|link:{javadoc_base_url}/org/apache/ignite/configuration/CacheConfiguration.html#getQueryParallelism[Query parallelism,window=_blank]
+
+|PROP_QUERY_DETAIL_METRIC_SIZE
+|int
+|Query detail metric size
+
+|PROP_SQL_SCHEMA
+|str
+|SQL Schema
+
+|PROP_SQL_INDEX_INLINE_MAX_SIZE
+|int
+|SQL index inline maximum size
+
+|PROP_SQL_ESCAPE_ALL
+|bool
+|Turns on SQL escapes
+
+|PROP_MAX_QUERY_ITERATORS
+|int
+|Maximum number of query iterators
+
+|PROP_REBALANCE_MODE
+|int
+a|Rebalancing mode:
+
+- SYNC=0,
+- ASYNC=1,
+- NONE=2
+
+|PROP_REBALANCE_DELAY
+|int
+|Rebalancing delay (ms)
+
+|PROP_REBALANCE_TIMEOUT
+|int
+|Rebalancing timeout (ms)
+
+|PROP_REBALANCE_BATCH_SIZE
+|int
+|Rebalancing batch size
+
+|PROP_REBALANCE_BATCHES_PREFETCH_COUNT
+|int
+|Rebalancing prefetch count
+
+|PROP_REBALANCE_ORDER
+|int
+|Rebalancing order
+
+|PROP_REBALANCE_THROTTLE
+|int
+|Rebalancing throttle interval (ms)
+
+|PROP_GROUP_NAME
+|str
+|Group name
+
+|PROP_CACHE_KEY_CONFIGURATION
+|list
+|Cache Key Configuration
+(see <<Cache key>>)
+
+|PROP_DEFAULT_LOCK_TIMEOUT
+|int
+|Default lock timeout (ms)
+
+|PROP_MAX_CONCURRENT_ASYNC_OPERATIONS
+|int
+|Maximum number of concurrent asynchronous operations
+
+|PROP_PARTITION_LOSS_POLICY
+|int
+a|link:configuring-caches/partition-loss-policy[Partition loss policy]:
+
+- READ_ONLY_SAFE=0,
+- READ_ONLY_ALL=1,
+- READ_WRITE_SAFE=2,
+- READ_WRITE_ALL=3,
+- IGNORE=4
+
+|PROP_EAGER_TTL
+|bool
+|link:configuring-caches/expiry-policies#eager-ttl[Eager TTL]
+
+|PROP_STATISTICS_ENABLED
+|bool
+|The flag that enables statistics.
+|===
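+
+As an illustration, here is a short sketch of creating a cache with some of these properties; it assumes the property constants are imported from the `pyignite.datatypes.prop_codes` module:
+
+[source, python]
+----
+from pyignite import Client
+from pyignite.datatypes.prop_codes import (
+    PROP_NAME, PROP_CACHE_MODE, PROP_BACKUPS_NUMBER,
+)
+
+client = Client()
+client.connect('127.0.0.1', 10800)
+
+cache = client.create_cache({
+    PROP_NAME: 'my_cache',
+    PROP_CACHE_MODE: 2,      # PARTITIONED
+    PROP_BACKUPS_NUMBER: 1,  # one backup copy per partition
+})
+----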
+
+==== Query Entities
+Query entities are objects that describe link:SQL/sql-api#configuring-queryable-fields[queryable fields], i.e. the fields of the cache objects that can be queried using SQL queries.
+
+- `table_name`: SQL table name.
+- `key_field_name`: name of the key field.
+- `key_type_name`: name of the key type (Java type or complex object).
+- `value_field_name`: name of the value field.
+- `value_type_name`: name of the value type.
+- `field_name_aliases`: a list of 0 or more dicts of aliases (see <<Field Name Aliases>>).
+- `query_fields`: a list of 0 or more query field names (see <<Query Fields>>).
+- `query_indexes`: a list of 0 or more query indexes (see <<Query Indexes>>).
+
+
+===== Field Name Aliases
+Field name aliases are used to give a convenient name in place of the full property name (e.g., object.name -> objectName).
+
+- `field_name`: field name.
+- `alias`: alias (str).
+
+===== Query Fields
+Query fields define the fields that are queryable.
+
+- `name`: field name.
+- `type_name`: name of Java type or complex object.
+- `is_key_field`: (optional) boolean value, False by default.
+- `is_notnull_constraint_field`: boolean value.
+- `default_value`: (optional) anything that can be converted to the `type_name` type. Defaults to None (Null).
+- `precision`: (optional) decimal precision: the total number of digits in the decimal value. Defaults to -1 (use the cluster default). Ignored for SQL types other than java.math.BigDecimal.
+- `scale`: (optional) decimal scale: the number of digits after the decimal point. Defaults to -1 (use the cluster default). Ignored for non-decimal SQL types.
+
+===== Query Indexes
+Query indexes define the fields that will be indexed.
+
+- `index_name`: index name.
+- `index_type`: index type code as an integer value in unsigned byte range.
+- `inline_size`: integer value.
+- `fields`: a list of 0 or more indexed fields (see Fields).
+
+===== Fields
+
+- `name`: field name.
+- `is_descending`: (optional) boolean value; False by default.
+
+===== Cache key
+
+- `type_name`: name of the complex object.
+- `affinity_key_field_name`: name of the affinity key field.
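+
+Putting the pieces together, below is a hedged sketch of a query entity that could be passed in the `PROP_QUERY_ENTITIES` property; the table, type, and field names are hypothetical:
+
+[source, python]
+----
+query_entity = {
+    'table_name': 'PERSON',
+    'key_field_name': 'ID',
+    'key_type_name': 'java.lang.Long',
+    'value_field_name': None,
+    'value_type_name': 'Person',
+    'field_name_aliases': [],
+    'query_fields': [
+        {'name': 'ID', 'type_name': 'java.lang.Long',
+         'is_key_field': True, 'is_notnull_constraint_field': True},
+        {'name': 'NAME', 'type_name': 'java.lang.String',
+         'is_notnull_constraint_field': False},
+    ],
+    'query_indexes': [
+        {'index_name': 'NAME_IDX', 'index_type': 0, 'inline_size': 10,
+         'fields': [{'name': 'NAME', 'is_descending': False}]},
+    ],
+}
+----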
+
+////
+== Data Type Mapping
+
+*TODO*
+////
+
+== Using Key-Value API
+
+The `pyignite.cache.Cache` class provides methods for working with cache entries by using key-value operations, such as put, get, put all, get all, replace, and others.
+The following example shows how to do that:
+
+[source, python]
+-------------------------------------------------------------------------------
+include::{sourceFileDir}/basic_operations.py[tag=example-block,indent=0]
+-------------------------------------------------------------------------------
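+
+For instance, here is a minimal sketch of basic key-value operations on a cache handle obtained as shown above:
+
+[source, python]
+----
+my_cache.put(1, 'one')
+print(my_cache.get(1))             # 'one'
+
+my_cache.put_all({2: 'two', 3: 'three'})
+print(my_cache.get_all([1, 2, 3]))
+
+my_cache.replace(1, 'uno')         # replaces the value only if the key exists
+----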
+
+=== Using type hints
+The pyignite methods that deal with a single value or key have an additional optional parameter, either `value_hint` or `key_hint`, that accepts a parser/constructor class.
+Nearly any structure element (inside dict or list) can be replaced with a 2-tuple `(the element, type hint)`.
+
+[source, python]
+----
+include::{sourceFileDir}/type_hints.py[tag=example-block,indent=0]
+----
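+
+As a brief illustration, the sketch below stores an integer as a Java short instead of the default long; it assumes the `ShortObject` type from the `pyignite.datatypes` module:
+
+[source, python]
+----
+from pyignite.datatypes import ShortObject
+
+# Without the hint, 42 would be stored as a Java long
+my_cache.put('answer', 42, value_hint=ShortObject)
+----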
+
+=== Asynchronous Execution
+
+
+== Scan Queries
+The `scan()` method of the cache object can be used to get all objects from the cache. It returns a generator that yields
+`(key, value)` tuples. You can iterate through the generated pairs as follows:
+
+[source, python]
+-------------------------------------------------------------------------------
+include::{sourceFileDir}/scan.py[tag=!dict, tag=example-block, indent=0]
+-------------------------------------------------------------------------------
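+
+For example, a minimal sketch of lazily consuming the generator:
+
+[source, python]
+----
+for key, value in my_cache.scan():
+    print(key, value)
+----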
+
+Alternatively, you can convert the generator to a dictionary in one go:
+
+[source, python]
+-------------------------------------------------------------------------------
+include::{sourceFileDir}/scan.py[tag=dict, indent=0]
+-------------------------------------------------------------------------------
+
+NOTE: Be cautious: if the cache contains a large set of data, the dictionary may consume too much memory!
+
+== Executing SQL Statements
+
+The Python thin client supports all link:sql-reference/index[SQL commands] that are supported by Ignite.
+The commands are executed via the `sql()` method of the cache object.
+The `sql()` method returns a generator that yields the resulting rows.
+
+Refer to the link:sql-reference/index[SQL Reference] section for the list of supported commands.
+
+[source, python]
+-------------------------------------------------------------------------------
+include::{sourceFileDir}/sql.py[tag=!field-names,indent=0]
+-------------------------------------------------------------------------------
+
+////
+TODO
+The `sql()` method supports a number of parameters that
+
+
+[cols="",opts="header", width="100%"]
+|===
+| Parameter | Description
+| `query_str` |
+| `page_size` |
+| `query_args` |
+| `schema` |
+| `statement_type` |
+| `distributed_joins` |
+| `local` |
+| `replicated_only` |
+| `enforce_join_order` |
+| `collocated` |
+| `lazy` |
+| `include_field_names` |
+| `max_rows` |
+| `timeout` |
+|===
+////
+
+Note that if you set the `include_field_names` argument to `True`, the `sql()` method yields a list of column names in the first yield. You can retrieve it with Python's built-in `next()` function.
+
+[source, python]
+----
+include::{sourceFileDir}/sql.py[tag=field-names,indent=0]
+----
+
+== Security
+
+=== SSL/TLS
+To use encrypted communication between the thin client and the cluster, you have to enable SSL/TLS both in the cluster configuration and the client configuration.
+Refer to the link:thin-clients/getting-started-with-thin-clients#enabling-ssltls-for-thin-clients[Enabling SSL/TLS for Thin Clients] section for instructions on configuring the cluster.
+
+Here is an example configuration for enabling SSL in the thin client:
+[source, python]
+----
+include::{sourceFileDir}/client_ssl.py[tag=example-block,indent=0]
+----
+
+Supported parameters:
+
+[cols="1,2",opts="autowidth,header",width="100%"]
+|===
+| Parameter |  Description
+| `use_ssl` | Set to True to enable SSL/TLS on the client.
+| `ssl_keyfile` | Path to the file containing the SSL key.
+| `ssl_certfile` | Path to the file containing the SSL certificate.
+| `ssl_ca_certfile` | The path to the file with trusted certificates.
+| `ssl_cert_reqs` a|
+* `ssl.CERT_NONE`: the remote certificate is ignored (default)
+* `ssl.CERT_OPTIONAL`: the remote certificate is validated if provided
+* `ssl.CERT_REQUIRED`: a valid remote certificate is required
+
+| `ssl_version` | The SSL/TLS protocol version to use.
+| `ssl_ciphers` | The list of cipher suites available for the connection.
+|===
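+
+For illustration, here is a sketch of a client configured with these parameters; the file paths are placeholders:
+
+[source, python]
+----
+import ssl
+
+from pyignite import Client
+
+client = Client(
+    use_ssl=True,
+    ssl_keyfile='/path/to/client.key',
+    ssl_certfile='/path/to/client.crt',
+    ssl_ca_certfile='/path/to/ca.crt',
+    ssl_cert_reqs=ssl.CERT_REQUIRED,
+)
+client.connect('127.0.0.1', 10800)
+----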
+
+=== Authentication
+Configure link:security/authentication[authentication on the cluster side] and provide a valid user name and password in the client configuration.
+
+[source, python]
+----
+include::{sourceFileDir}/auth.py[tag=!no-ssl,indent=0]
+----
+
+Note that supplying credentials automatically turns SSL on,
+because sending credentials over an insecure channel is strongly discouraged.
+If you still want to use authentication without securing the connection, explicitly disable SSL when creating the client object:
+
+[source, python]
+----
+include::{sourceFileDir}/auth.py[tag=no-ssl,indent=0]
+----
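+
+For example, a minimal sketch that passes credentials; the user name and password are placeholders:
+
+[source, python]
+----
+from pyignite import Client
+
+# SSL is turned on implicitly when credentials are supplied
+client = Client(username='ignite', password='ignite')
+client.connect('127.0.0.1', 10800)
+----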
+
diff --git a/docs/_docs/tools/control-script.adoc b/docs/_docs/tools/control-script.adoc
new file mode 100644
index 0000000..9870577
--- /dev/null
+++ b/docs/_docs/tools/control-script.adoc
@@ -0,0 +1,649 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Control Script
+
+
+Ignite provides a command line script — `control.sh|bat` — that you can use to monitor and control your clusters.
+The script is located under the `/bin/` folder of the installation directory.
+
+The control script syntax is as follows:
+
+[tabs]
+--
+tab:Linux[]
+[source, shell]
+----
+control.sh <connection parameters> <command> <arguments>
+----
+tab:Windows[]
+[source, shell]
+----
+control.bat <connection parameters> <command> <arguments>
+----
+--
+
+== Connecting to Cluster
+
+When executed without connection parameters, the control script tries to connect to a node running on localhost (`localhost:11211`).
+If you want to connect to a node that is running on a remote machine, specify the connection parameters.
+
+[cols="2,3,1",opts="header"]
+|===
+|Parameter | Description | Default Value
+
+| --host HOST_OR_IP | The host name or IP address of the node. | `localhost`
+
+| --port PORT | The port to connect to. | `11211`
+
+| --user USER | The user name. |
+| --password PASSWORD |The user password. |
+| --ping-interval PING_INTERVAL | The ping interval, in milliseconds. | 5000
+| --ping-timeout PING_TIMEOUT | The ping response timeout, in milliseconds. | 30000
+| --ssl-protocol PROTOCOL1, PROTOCOL2... | A list of SSL protocols to try when connecting to the cluster. link:https://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html#SunJSSE_Protocols[Supported protocols,window=_blank]. | `TLS`
+| --ssl-cipher-suites CIPHER1,CIPHER2...  | A list of SSL ciphers. link:https://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html#SupportedCipherSuites[Supported ciphers,window=_blank]. |
+| --ssl-key-algorithm ALG | The SSL key algorithm. | `SunX509`
+| --keystore-type KEYSTORE_TYPE | The keystore type. | `JKS`
+| --keystore KEYSTORE_PATH | The path to the keystore. Specify a keystore to enable SSL for the control script.|
+| --keystore-password KEYSTORE_PWD | The password to the keystore.  |
+| --truststore-type TRUSTSTORE_TYPE | The type of the truststore. | `JKS`
+| --truststore TRUSTSTORE_PATH | The path to the truststore. |
+| --truststore-password TRUSTSTORE_PWD | The password to the truststore. |
+|===
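+
+For example, a hypothetical invocation that connects to a remote node over SSL; the host address and keystore paths are placeholders:
+
+[source, shell]
+----
+control.sh --host 192.168.0.10 --port 11211 \
+    --keystore /path/to/keystore.jks --keystore-password changeit \
+    --state
+----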
+
+
+== Activation, Deactivation and Topology Management
+
+You can use the control script to activate or deactivate your cluster, and manage the link:clustering/baseline-topology[Baseline Topology].
+
+
+=== Getting Cluster State
+
+The cluster can be in one of three states: active, read-only, or inactive. Refer to link:monitoring-metrics/cluster-states[Cluster States] for details.
+
+To get the state of the cluster, run the following command:
+
+[tabs]
+--
+tab:Linux[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.sh --state
+----
+tab:Windows[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.bat --state
+----
+--
+
+=== Activating Cluster
+
+Activation sets the baseline topology of the cluster to the set of nodes available at the moment of activation.
+Activation is required only if you use link:persistence/native-persistence[native persistence].
+
+To activate the cluster, run the following command:
+
+[tabs]
+--
+tab:Linux[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.sh --set-state ACTIVE
+----
+tab:Windows[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.bat --set-state ACTIVE
+----
+--
+
+=== Deactivating Cluster
+
+include::includes/note-on-deactivation.adoc[]
+
+To deactivate the cluster, run the following command:
+
+[tabs]
+--
+tab:Linux[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.sh --set-state INACTIVE [--yes]
+----
+tab:Windows[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.bat --set-state INACTIVE [--yes]
+----
+--
+
+
+
+=== Getting Nodes Registered in Baseline Topology
+
+To get the list of nodes registered in the baseline topology, run the following command:
+
+[tabs]
+--
+tab:Linux[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.sh --baseline
+----
+
+tab:Windows[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.bat --baseline
+----
+--
+
+The output contains the current topology version, the list of consistent IDs of the nodes included in the baseline topology, and the list of nodes that joined the cluster but were not added to the baseline topology.
+
+[source, shell]
+----
+Command [BASELINE] started
+Arguments: --baseline
+--------------------------------------------------------------------------------
+Cluster state: active
+Current topology version: 3
+
+Current topology version: 3 (Coordinator: ConsistentId=dd3d3959-4fd6-4dc2-8199-bee213b34ff1, Order=1)
+
+Baseline nodes:
+    ConsistentId=7d79a1b5-cbbd-4ab5-9665-e8af0454f178, State=ONLINE, Order=2
+    ConsistentId=dd3d3959-4fd6-4dc2-8199-bee213b34ff1, State=ONLINE, Order=1
+--------------------------------------------------------------------------------
+Number of baseline nodes: 2
+
+Other nodes:
+    ConsistentId=30e16660-49f8-4225-9122-c1b684723e97, Order=3
+Number of other nodes: 1
+Command [BASELINE] finished with code: 0
+Control utility has completed execution at: 2019-12-24T16:53:08.392865
+Execution time: 333 ms
+----
+
+=== Adding Nodes to Baseline Topology
+
+To add a node to the baseline topology, run the command given below.
+After the node is added, the link:data-rebalancing[rebalancing process] starts.
+
+[tabs]
+--
+tab:Linux[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.sh --baseline add _consistentId1,consistentId2,..._ [--yes]
+----
+tab:Windows[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.bat --baseline add _consistentId1,consistentId2,..._ [--yes]
+----
+--
+
+=== Removing Nodes from Baseline Topology
+
+To remove a node from the baseline topology, use the `remove` command.
+Only offline nodes can be removed from the baseline topology: shut down the node first and then use the `remove` command.
+This operation starts the rebalancing process, which re-distributes the data across the nodes that remain in the baseline topology.
+
+[tabs]
+--
+tab:Linux[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.sh --baseline remove _consistentId1,consistentId2,..._ [--yes]
+----
+tab:Windows[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.bat --baseline remove _consistentId1,consistentId2,..._ [--yes]
+----
+--
+
+=== Setting Baseline Topology
+
+You can set the baseline topology by either providing a list of nodes (consistent IDs) or by specifying the desired version of the baseline topology.
+
+To set a list of nodes as the baseline topology, use the following command:
+
+[tabs]
+--
+tab:Linux[]
+
+[source,shell,subs="verbatim,quotes"]
+----
+control.sh --baseline set _consistentId1,consistentId2,..._ [--yes]
+----
+tab:Windows[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.bat --baseline set _consistentId1,consistentId2,..._ [--yes]
+----
+--
+
+
+To restore a specific version of the baseline topology, use the following command:
+
+[tabs]
+--
+tab:Linux[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.sh --baseline version _topologyVersion_ [--yes]
+----
+tab:Windows[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.bat --baseline version _topologyVersion_ [--yes]
+----
+--
+
+=== Enabling Baseline Topology Autoadjustment
+
+link:clustering/baseline-topology#baseline-topology-autoadjustment[Baseline topology autoadjustment] refers to the automatic update of the baseline topology after the topology has been stable for a specific amount of time.
+
+For in-memory clusters, autoadjustment is enabled by default with the timeout set to 0, which means that the baseline topology changes immediately after server nodes join or leave the cluster.
+For clusters with persistence, automatic baseline adjustment is disabled by default.
+To enable it, use the following command:
+
+[tabs]
+--
+tab:Linux[]
+
+[source, shell]
+----
+control.sh --baseline auto_adjust enable timeout 30000
+----
+tab:Windows[]
+[source, shell]
+----
+control.bat --baseline auto_adjust enable timeout 30000
+----
+--
+
+The timeout is set in milliseconds. The baseline is set to the current topology when a given number of milliseconds has passed after the last JOIN/LEFT/FAIL event.
+Every new JOIN/LEFT/FAIL event restarts the timeout countdown.
+
+To disable baseline autoadjustment, use the following command:
+
+[tabs]
+--
+tab:Linux[]
+
+[source, shell]
+----
+control.sh --baseline auto_adjust disable
+----
+tab:Windows[]
+[source, shell]
+----
+control.bat --baseline auto_adjust disable
+----
+--
+
+
+== Transaction Management
+
+The control script allows you to get information about the transactions being executed in the cluster.
+You can also cancel specific transactions.
+
+The following command returns a list of transactions that satisfy a given filter (or all transactions if no filter is provided):
+[tabs]
+--
+tab:Linux[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.sh --tx _<transaction filter>_ --info
+----
+tab:Windows[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.bat --tx _<transaction filter>_ --info
+----
+--
+
+The transaction filter parameters are listed in the following table.
+
+[cols="2,5",opts="header"]
+|===
+|Parameter | Description
+| --xid _XID_ | Transaction ID.
+| --min-duration _SECONDS_ | Minimum number of seconds a transaction has been executing.
+|--min-size _SIZE_ | Minimum size of a transaction.
+|--label _LABEL_ | User label for transactions. You can use a regular expression.
+|--servers\|--clients | Limit the scope of the operation to either server or client nodes.
+| --nodes _nodeId1,nodeId2..._ |  The list of consistent IDs of the nodes you want to get transactions from.
+|--limit _NUMBER_ | Limit the number of transactions to the given value.
+|--order DURATION\|SIZE\|START_TIME | The parameter that is used to sort the output.
+|===
+
+
+To cancel transactions, use the following command:
+
+[tabs]
+--
+tab:Linux[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.sh --tx _<transaction filter>_ --kill
+----
+tab:Windows[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.bat --tx _<transaction filter>_ --kill
+----
+--
+
+For example, to cancel the transactions that have been running for more than 100 seconds, execute the following command:
+
+[source, shell]
+----
+control.sh --tx --min-duration 100 --kill
+----
+
+== Contention Detection in Transactions
+
+The `contention` command detects when multiple transactions are in contention to create a lock for the same key. The command is useful if you have long-running or hanging transactions.
+
+Example:
+
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+# Reports all keys that are a point of contention for at least 5 transactions on all cluster nodes.
+control.sh|bat --cache contention 5
+
+# Reports all keys that are a point of contention for at least 5 transactions on a specific server node.
+control.sh|bat --cache contention 5 f2ea-5f56-11e8-9c2d-fa7a
+----
+--
+
+If there are any highly contended keys, the utility dumps extensive information including the keys, transactions, and nodes where the contention took place.
+
+Example:
+
+[source,text]
+----
+[node=TcpDiscoveryNode [id=d9620450-eefa-4ab6-a821-644098f00001, addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:47501], discPort=47501, order=2, intOrder=2, lastExchangeTime=1527169443913, loc=false, ver=2.5.0#20180518-sha1:02c9b2de, isClient=false]]
+
+// No contention on node d9620450-eefa-4ab6-a821-644098f00001.
+
+[node=TcpDiscoveryNode [id=03379796-df31-4dbd-80e5-09cef5000000, addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1527169443913, loc=false, ver=2.5.0#20180518-sha1:02c9b2de, isClient=false]]
+    TxEntry [cacheId=1544803905, key=KeyCacheObjectImpl [part=0, val=0, hasValBytes=false], queue=10, op=CREATE, val=UserCacheObjectImpl [val=0, hasValBytes=false], tx=GridNearTxLocal[xid=e9754629361-00000000-0843-9f61-0000-000000000001, xidVersion=GridCacheVersion [topVer=138649441, order=1527169439646, nodeOrder=1], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, state=ACTIVE, invalidate=false, rollbackOnly=false, nodeId=03379796-df31-4dbd-80e5-09cef5000000, timeout=0, duration=1247], other=[]]
+    TxEntry [cacheId=1544803905, key=KeyCacheObjectImpl [part=0, val=0, hasValBytes=false], queue=10, op=READ, val=null, tx=GridNearTxLocal[xid=8a754629361-00000000-0843-9f61-0000-000000000001, xidVersion=GridCacheVersion [topVer=138649441, order=1527169439656, nodeOrder=1], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, state=ACTIVE, invalidate=false, rollbackOnly=false, nodeId=03379796-df31-4dbd-80e5-09cef5000000, timeout=0, duration=1175], other=[]]
+    TxEntry [cacheId=1544803905, key=KeyCacheObjectImpl [part=0, val=0, hasValBytes=false], queue=10, op=READ, val=null, tx=GridNearTxLocal[xid=6a754629361-00000000-0843-9f61-0000-000000000001, xidVersion=GridCacheVersion [topVer=138649441, order=1527169439654, nodeOrder=1], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, state=ACTIVE, invalidate=false, rollbackOnly=false, nodeId=03379796-df31-4dbd-80e5-09cef5000000, timeout=0, duration=1175], other=[]]
+    TxEntry [cacheId=1544803905, key=KeyCacheObjectImpl [part=0, val=0, hasValBytes=false], queue=10, op=READ, val=null, tx=GridNearTxLocal[xid=7a754629361-00000000-0843-9f61-0000-000000000001, xidVersion=GridCacheVersion [topVer=138649441, order=1527169439655, nodeOrder=1], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, state=ACTIVE, invalidate=false, rollbackOnly=false, nodeId=03379796-df31-4dbd-80e5-09cef5000000, timeout=0, duration=1175], other=[]]
+    TxEntry [cacheId=1544803905, key=KeyCacheObjectImpl [part=0, val=0, hasValBytes=false], queue=10, op=READ, val=null, tx=GridNearTxLocal[xid=4a754629361-00000000-0843-9f61-0000-000000000001, xidVersion=GridCacheVersion [topVer=138649441, order=1527169439652, nodeOrder=1], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, state=ACTIVE, invalidate=false, rollbackOnly=false, nodeId=03379796-df31-4dbd-80e5-09cef5000000, timeout=0, duration=1175], other=[]]
+
+// Node 03379796-df31-4dbd-80e5-09cef5000000 is the point of contention for key KeyCacheObjectImpl [part=0, val=0, hasValBytes=false].
+----
+
+
+== Monitoring Cache State
+
+One of the most important commands that `control.sh|bat` provides is `--cache list`, which is used for cache monitoring. The command returns a list of deployed caches with their affinity and distribution parameters, as well as the distribution within cache groups. There is also a command for viewing existing atomic sequences.
+
+[source,shell]
+----
+# Displays a list of all caches
+control.sh|bat --cache list .
+
+# Displays a list of caches whose names start with "account-".
+control.sh|bat --cache list account-.*
+
+# Displays info about cache group distribution for all caches.
+control.sh|bat --cache list . --groups
+
+# Displays info about cache group distribution for the caches whose names start with "account-".
+control.sh|bat --cache list account-.* --groups
+
+# Displays info about all atomic sequences.
+control.sh|bat --cache list . --seq
+
+# Displays info about the atomic sequences whose names start with "counter-".
+control.sh|bat --cache list counter-.* --seq
+----
+
+== Resetting Lost Partitions
+
+You can use the control script to reset lost partitions for specific caches.
+Refer to link:configuring-caches/partition-loss-policy[Partition Loss Policy] for details.
+
+[source, shell]
+----
+control.sh --cache reset_lost_partitions cacheName1,cacheName2,...
+----
+
+
+== Consistency Check Commands
+
+`control.sh|bat` includes a set of consistency check commands that enable you to verify internal data consistency.
+
+First, the commands can be used for debugging and troubleshooting purposes, especially during active development.
+
+Second, if there is a suspicion that a query (such as an SQL query) returns an incomplete or wrong result set, the commands can verify whether there is inconsistency in the data.
+
+Finally, the consistency check commands can be utilized as part of regular cluster health monitoring.
+
+Let's review these usage scenarios in more detail.
+
+=== Verifying Partition Checksums
+
+//Even if update counters and size are equal on the primary and backup nodes, there might be a case when the primary and backup  diverge due to some critical failure.
+The `idle_verify` command compares the hashes of the primary partitions with those of the backup partitions and reports any differences.
+The differences might be the result of a node failure or an incorrect shutdown during an update operation.
+If any inconsistency is detected, we recommend removing the incorrect partitions.
+
+[source,shell]
+----
+# Verifies that the partitions of all caches contain the same data on the primary and backup nodes.
+control.sh|bat --cache idle_verify
+
+# Verifies that the partitions of the specified caches contain the same data on the primary and backup nodes.
+control.sh|bat --cache idle_verify cache1,cache2,cache3
+----
+
+If any partitions diverge, a list of conflict partitions is printed out, as follows:
+
+[source,text]
+----
+idle_verify check has finished, found 2 conflict partitions.
+
+Conflict partition: PartitionKey [grpId=1544803905, grpName=default, partId=5]
+Partition instances: [PartitionHashRecord [isPrimary=true, partHash=97506054, updateCntr=3, size=3, consistentId=bltTest1], PartitionHashRecord [isPrimary=false, partHash=65957380, updateCntr=3, size=2, consistentId=bltTest0]]
+Conflict partition: PartitionKey [grpId=1544803905, grpName=default, partId=6]
+Partition instances: [PartitionHashRecord [isPrimary=true, partHash=97595430, updateCntr=3, size=3, consistentId=bltTest1], PartitionHashRecord [isPrimary=false, partHash=66016964, updateCntr=3, size=2, consistentId=bltTest0]]
+----
+
+[WARNING]
+====
+[discrete]
+=== Cluster Should Be Idle During `idle_verify` Check
+All updates should be stopped while `idle_verify` calculates hashes; otherwise, it may report false-positive results. It is impossible to compare big datasets in a distributed system while they are being constantly updated.
+====
+
+=== Validating SQL Index Consistency
+The `validate_indexes` command validates the indexes of given caches on all cluster nodes.
+
+The following is checked by the validation process:
+
+. All the key-value entries that are referenced from a primary index have to be reachable from secondary SQL indexes.
+. All the key-value entries that are referenced from a primary index have to be reachable. A reference from the primary index must not point to nowhere.
+. All the key-value entries that are referenced from secondary SQL indexes have to be reachable from the primary index.
+
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+# Checks indexes of all caches on all cluster nodes.
+control.sh|bat --cache validate_indexes
+
+# Checks indexes of specific caches on all cluster nodes.
+control.sh|bat --cache validate_indexes cache1,cache2
+
+# Checks indexes of specific caches on the node with the given node ID.
+control.sh|bat --cache validate_indexes cache1,cache2 f2ea-5f56-11e8-9c2d-fa7a
+----
+--
+
+If indexes refer to non-existing entries (or some entries are not indexed), errors are dumped to the output, as follows:
+
+[source,text]
+----
+PartitionKey [grpId=-528791027, grpName=persons-cache-vi, partId=0] ValidateIndexesPartitionResult [updateCntr=313, size=313, isPrimary=true, consistentId=bltTest0]
+IndexValidationIssue [key=0, cacheName=persons-cache-vi, idxName=_key_PK], class org.apache.ignite.IgniteCheckedException: Key is present in CacheDataTree, but can't be found in SQL index.
+IndexValidationIssue [key=0, cacheName=persons-cache-vi, idxName=PERSON_ORGID_ASC_IDX], class org.apache.ignite.IgniteCheckedException: Key is present in CacheDataTree, but can't be found in SQL index.
+validate_indexes has finished with errors (listed above).
+----
+
+[WARNING]
+====
+[discrete]
+=== Cluster Should Be Idle During `validate_indexes` Check
+Like `idle_verify`, the index validation tool works correctly only if updates are stopped. Otherwise, there may be a race condition between the checker thread and the thread that updates an entry or index, which can result in a false-positive error report.
+====
+
+
+== Tracing Configuration
+
+You can enable or disable sampling of traces for a specific API by using the `--tracing-configuration` command.
+Refer to the link:monitoring-metrics/tracing[Tracing] section for details.
+
+Before using the command, enable experimental features of the control script:
+
+[source, shell]
+----
+export IGNITE_ENABLE_EXPERIMENTAL_COMMAND=true
+----
+
+To view the current tracing configuration, execute the following command:
+
+[source, shell]
+----
+control.sh --tracing-configuration
+----
+
+To enable trace sampling for a specific API:
+
+
+[source, shell]
+----
+control.sh --tracing-configuration set --scope <scope> --sampling-rate <rate> --label <label>
+----
+
+Parameters:
+
+[cols="1,3",opts="header"]
+|===
+| Parameter | Description
+| `--scope` a| The API you want to trace:
+
+* `DISCOVERY`: discovery events
+* `EXCHANGE`: exchange events
+* `COMMUNICATION`: communication events
+* `TX`: transactions
+
+| `--sampling-rate` a|  The probabilistic sampling rate, a number between `0.0` and `1.0` inclusive.
+`0` means no sampling (default), `1` means always sampling. For example, `0.5` means each trace is sampled with a probability of 50%.
+
+| `--label` | Only applicable to the `TX` scope. The parameter defines the sampling rate for the transactions with the given label.
+When the `--label` parameter is specified, Ignite will trace transactions with the given label. You can configure different sampling rates for different labels.
+
+Transaction traces with no label will be sampled at the default sampling rate.
+The default rate for the `TX` scope can be set by using this command without the `--label` parameter.
+|===
+
+
+Examples:
+
+* Trace all discovery events:
++
+[source, shell]
+----
+control.sh --tracing-configuration set --scope DISCOVERY --sampling-rate 1
+----
+* Trace all transactions:
++
+[source, shell]
+----
+control.sh --tracing-configuration set --scope TX --sampling-rate 1
+----
+* Trace transactions with label "report" at a 50% rate:
++
+[source, shell]
+----
+control.sh --tracing-configuration set --scope TX --sampling-rate 0.5 --label report
+----
+
+
+
+== Cluster ID and Tag
+
+A cluster ID is a unique identifier of the cluster that is generated automatically when the cluster starts for the first time. Read link:monitoring-metrics/cluster-id[Cluster ID and Tag] for more information.
+
+To view the cluster ID, run the `--state` command:
+
+[tabs]
+--
+tab:Linux[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.sh --state
+----
+tab:Windows[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.bat --state
+----
+--
+
+And check the output:
+
+[source, text]
+----
+Command [STATE] started
+Arguments: --state
+--------------------------------------------------------------------------------
+Cluster  ID: bf9764ea-995e-4ea9-b35d-8c6d078b0234
+Cluster tag: competent_black
+--------------------------------------------------------------------------------
+Cluster is active
+Command [STATE] finished with code: 0
+----
+
+A cluster tag is a user-friendly name that you can assign to your cluster.
+To change the tag, use the following command (the tag must contain no more than 280 characters):
+
+[tabs]
+--
+tab:Linux[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.sh --change-tag _<new-tag>_
+----
+tab:Windows[]
+[source,shell,subs="verbatim,quotes"]
+----
+control.bat --change-tag _<new-tag>_
+----
+--
+
+
diff --git a/docs/_docs/tools/gg-control-center.adoc b/docs/_docs/tools/gg-control-center.adoc
new file mode 100644
index 0000000..40809d0
--- /dev/null
+++ b/docs/_docs/tools/gg-control-center.adoc
@@ -0,0 +1,34 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Using GridGain Control Center With Apache Ignite
+
+== Overview
+
+https://www.gridgain.com/products/software/control-center[GridGain Control Center, window=_blank] is a management and
+monitoring tool designed for Apache Ignite that allows you to do the following:
+
+* Monitor the state of the cluster with customizable dashboards.
+* Define custom alerts to track and react to over 200 cluster, node, and storage metrics.
+* Execute and optimize SQL queries as well as monitor already running commands.
+* Perform OpenCensus-based root cause analysis with visual debugging of API calls as they execute on nodes across the cluster.
+* Take full, incremental, and continuous cluster backups to enable disaster recovery in the event of data loss or corruption.
+* And more...
+
+image::images/tools/gg-control-center.png[GridGain Control Center]
+
+== Installation and Usage
+
+Refer to the https://www.gridgain.com/docs/control-center/latest/overview[official documentation of GridGain Control Center, window=_blank]
+for detailed installation and usage instructions.
diff --git a/docs/_docs/tools/informatica.adoc b/docs/_docs/tools/informatica.adoc
new file mode 100644
index 0000000..93c95ec
--- /dev/null
+++ b/docs/_docs/tools/informatica.adoc
@@ -0,0 +1,304 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Using Informatica With Apache Ignite
+
+== Overview
+
+Informatica is a cloud data management and data integration tool. You can connect Informatica to Ignite through the ODBC driver.
+
+== Connecting from Informatica PowerCenter Designer
+
+You need to install the 32-bit Ignite ODBC driver to connect an Ignite cluster to PowerCenter Designer. Use the
+following links to build and install the driver:
+
+* link:SQL/ODBC/odbc-driver#installing-on-windows[Install the driver on Windows]
+* link:SQL/ODBC/connection-string-dsn#configuring-dsn[Configure DSN]
+
+Then do the following:
+
+. Select the `Sources` or `Targets` menu and choose `Import from Database...` to import tables from Ignite.
+. Connect to the cluster by choosing `Apache Ignite DSN` as the ODBC data source.
+
+image::images/tools/informatica-import-tables.png[Informatica Import Tables]
+
+== Installing Ignite ODBC on an Informatica Service Node
+
+Refer to the link:SQL/ODBC/odbc-driver#building-on-linux[Building on Linux] and
+link:SQL/ODBC/odbc-driver#installing-on-linux[Installing on Linux] instructions to install the Ignite ODBC on an Ignite service node.
+
+Informatica uses configuration files referenced by the `$ODBCINI` and `$ODBCINSTINI` environment variables
+(https://kb.informatica.com/howto/6/Pages/19/499306.aspx[Configure the UNIX environment for ODBC, window=_blank]). Configure
+the Ignite ODBC driver and create a new DSN as shown below:
+
+[tabs]
+--
+tab:odbc.ini[]
+[source,text]
+----
+[ApacheIgnite]
+Driver      = /usr/local/lib/libignite-odbc.so
+Description = Apache Ignite ODBC
+Address = 192.168.0.105
+User = ignite
+Password = ignite
+Schema = PUBLIC
+----
+tab:odbcinst.ini[]
+[source,text]
+----
+[ApacheIgnite]
+Driver  = /usr/local/lib/libignite-odbc.so
+----
+--
+
+To check the ODBC connection, use the `ssgodbc.linux64` utility included in the Informatica deployment, as shown below:
+
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+<INFORMATICA_HOME>/tools/debugtools/ssgodbc/linux64/ssgodbc.linux64 -d ApacheIgnite -u ignite -p ignite -v
+----
+--
+
+If the unixODBC or Ignite ODBC libraries are not installed in the default directory (`/usr/local/lib`), add their locations to `LD_LIBRARY_PATH`
+and then check the connection, like so:
+
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+UNIXODBC_LIB=/opt/unixodbc/lib/
+IGNITE_ODBC_LIB=/opt/igniteodbc/lib
+export LD_LIBRARY_PATH=$UNIXODBC_LIB:$IGNITE_ODBC_LIB:$LD_LIBRARY_PATH
+
+<INFORMATICA_HOME>/tools/debugtools/ssgodbc/linux64/ssgodbc.linux64 -d ApacheIgnite -u ignite -p ignite -v
+----
+--
+
+== Configuring Relation Connection
+
+Choose `Connections > Relational...` to open the Relational Connection Browser.
+
+Select the ODBC type and create a new connection.
+
+image::images/tools/informatica-rel-connection.png[Informatica Relational Connection]
+
+
+== Installing Ignite ODBC on SUSE 11.4
+
+Follow the steps below to build and install Ignite with the Ignite ODBC driver on SUSE 11.4:
+
+. Add the repositories: `oss`, `non-oss`, `openSUSE_Factory`, `devel_gcc`
++
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+sudo zypper ar http://download.opensuse.org/distribution/11.4/repo/oss/ oss
+sudo zypper ar http://download.opensuse.org/distribution/11.4/repo/non-oss/ non-oss
+sudo zypper ar https://download.opensuse.org/repositories/devel:/tools:/building/openSUSE_Factory/ openSUSE_Factory
+sudo zypper ar http://download.opensuse.org/repositories/devel:/gcc/SLE-11/  devel_gcc
+----
+--
+
+. Install `automake` and `autoconf`
++
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+sudo zypper install autoconf automake
+----
+--
+
+. Install `libtool`
++
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+sudo zypper install libtool-2.4.6-7.1.x86_64
+
+Loading repository data...
+Reading installed packages...
+Resolving package dependencies...
+
+Problem: nothing provides m4 >= 1.4.16 needed by libtool-2.4.6-7.1.x86_64
+ Solution 1: do not install libtool-2.4.6-7.1.x86_64
+ Solution 2: break libtool-2.4.6-7.1.x86_64 by ignoring some of its dependencies
+
+Choose from above solutions by number or cancel [1/2/c] (c): 2
+----
+--
+
+. Install OpenSSL
++
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+sudo zypper install openssl openssl-devel
+
+Loading repository data...
+Reading installed packages...
+'openssl-devel' not found in package names. Trying capabilities.
+Resolving package dependencies...
+
+Problem: libopenssl-devel-1.0.0c-17.1.x86_64 requires zlib-devel, but this requirement cannot be provided
+  uninstallable providers: zlib-devel-1.2.5-8.1.i586[oss]
+                   zlib-devel-1.2.5-8.1.x86_64[oss]
+ Solution 1: downgrade of zlib-1.2.7-0.12.3.x86_64 to zlib-1.2.5-8.1.x86_64
+ Solution 2: do not ask to install a solvable providing openssl-devel
+ Solution 3: do not ask to install a solvable providing openssl-devel
+ Solution 4: break libopenssl-devel-1.0.0c-17.1.x86_64 by ignoring some of its dependencies
+
+Choose from above solutions by number or cancel [1/2/3/4/c] (c): 1
+----
+--
+
+. Install the GCC Compiler
++
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+sudo zypper install gcc5 gcc5-c++
+
+Loading repository data...
+Reading installed packages...
+Resolving package dependencies...
+2 Problems:
+Problem: gcc5-5.5.0+r253576-1.1.x86_64 requires libgcc_s1 >= 5.5.0+r253576-1.1, but this requirement cannot be provided
+Problem: gcc5-c++-5.5.0+r253576-1.1.x86_64 requires gcc5 = 5.5.0+r253576-1.1, but this requirement cannot be provided
+
+Problem: gcc5-5.5.0+r253576-1.1.x86_64 requires libgcc_s1 >= 5.5.0+r253576-1.1, but this requirement cannot be provided
+  uninstallable providers: libgcc_s1-5.5.0+r253576-1.1.i586[devel_gcc]
+                   libgcc_s1-5.5.0+r253576-1.1.x86_64[devel_gcc]
+                   libgcc_s1-6.4.1+r251631-80.1.i586[devel_gcc]
+                   libgcc_s1-6.4.1+r251631-80.1.x86_64[devel_gcc]
+                   libgcc_s1-7.3.1+r258812-103.1.i586[devel_gcc]
+                   libgcc_s1-7.3.1+r258812-103.1.x86_64[devel_gcc]
+                   libgcc_s1-8.1.1+r260570-32.1.i586[devel_gcc]
+                   libgcc_s1-8.1.1+r260570-32.1.x86_64[devel_gcc]
+ Solution 1: install libgcc_s1-8.1.1+r260570-32.1.x86_64 (with vendor change)
+  SUSE LINUX Products GmbH, Nuernberg, Germany  -->  obs://build.opensuse.org/devel:gcc
+ Solution 2: do not install gcc5-5.5.0+r253576-1.1.x86_64
+ Solution 3: do not install gcc5-5.5.0+r253576-1.1.x86_64
+ Solution 4: break gcc5-5.5.0+r253576-1.1.x86_64 by ignoring some of its dependencies
+
+Choose from above solutions by number or skip, retry or cancel [1/2/3/4/s/r/c] (c): 1
+
+Problem: gcc5-c++-5.5.0+r253576-1.1.x86_64 requires gcc5 = 5.5.0+r253576-1.1, but this requirement cannot be provided
+  uninstallable providers: gcc5-5.5.0+r253576-1.1.i586[devel_gcc]
+                   gcc5-5.5.0+r253576-1.1.x86_64[devel_gcc]
+ Solution 1: install libgomp1-8.1.1+r260570-32.1.x86_64 (with vendor change)
+  SUSE LINUX Products GmbH, Nuernberg, Germany  -->  obs://build.opensuse.org/devel:gcc
+ Solution 2: do not install gcc5-c++-5.5.0+r253576-1.1.x86_64
+ Solution 3: do not install gcc5-c++-5.5.0+r253576-1.1.x86_64
+ Solution 4: break gcc5-c++-5.5.0+r253576-1.1.x86_64 by ignoring some of its dependencies
+
+Choose from above solutions by number or skip, retry or cancel [1/2/3/4/s/r/c] (c): 1
+Resolving dependencies...
+Resolving package dependencies...
+
+Problem: gcc5-c++-5.5.0+r253576-1.1.x86_64 requires libstdc++6-devel-gcc5 = 5.5.0+r253576-1.1, but this requirement cannot be provided
+  uninstallable providers: libstdc++6-devel-gcc5-5.5.0+r253576-1.1.i586[devel_gcc]
+                   libstdc++6-devel-gcc5-5.5.0+r253576-1.1.x86_64[devel_gcc]
+ Solution 1: install libstdc++6-8.1.1+r260570-32.1.x86_64 (with vendor change)
+  SUSE LINUX Products GmbH, Nuernberg, Germany  -->  obs://build.opensuse.org/devel:gcc
+ Solution 2: do not install gcc5-c++-5.5.0+r253576-1.1.x86_64
+ Solution 3: do not install gcc5-c++-5.5.0+r253576-1.1.x86_64
+ Solution 4: break gcc5-c++-5.5.0+r253576-1.1.x86_64 by ignoring some of its dependencies
+
+Choose from above solutions by number or cancel [1/2/3/4/c] (c): 1
+----
+--
+
+. Provide symlinks to compiler executables.
++
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+sudo rm /usr/bin/gcc
+sudo rm /usr/bin/g++
+
+sudo ln -s /usr/bin/g++-5 /usr/bin/g++
+sudo ln -s /usr/bin/gcc-5 /usr/bin/gcc
+----
+--
+
+. Install unixODBC from sources. Download and install the latest unixODBC (2.3.6 or later) library from http://www.unixodbc.org/.
+
+. Check that all the required libraries and tools are installed with the expected versions:
++
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+libtool --version
+libtool (GNU libtool) 2.4.6
+
+m4 --version
+m4 (GNU M4) 1.4.12
+
+autoconf --version
+autoconf (GNU Autoconf) 2.69
+
+automake --version
+automake (GNU automake) 1.16.1
+
+openssl version
+OpenSSL 1.0.0c 2 Dec 2010
+
+g++ --version
+g++ (SUSE Linux) 5.5.0 20171010 [gcc-5-branch revision 253640]
+
+# JDK 1.8 is also required
+--
+
+. Make sure the `JAVA_HOME` environment variable is set, then issue the following commands:
++
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+cd $IGNITE_HOME/platforms/cpp
+export LDFLAGS=-lrt
+
+libtoolize && aclocal && autoheader && automake --add-missing && autoreconf
+./configure --enable-odbc
+make
+sudo make install
+----
+--
+. Reboot the system.
+
+. Install the ODBC driver
++
+[tabs]
+--
+tab:Shell[]
+[source,shell]
+----
+sudo odbcinst -i -d -f $IGNITE_HOME/platforms/cpp/odbc/install/ignite-odbc-install.ini
+----
+--
diff --git a/docs/_docs/tools/pentaho.adoc b/docs/_docs/tools/pentaho.adoc
new file mode 100644
index 0000000..cf2ed1e
--- /dev/null
+++ b/docs/_docs/tools/pentaho.adoc
@@ -0,0 +1,65 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Using Pentaho With Apache Ignite
+
+== Overview
+
+http://www.pentaho.com[Pentaho, window=_blank] is a comprehensive platform that provides the ability to extract,
+transform, visualize, and analyze your data easily. Pentaho Data Integration uses the Java Database Connectivity (JDBC)
+API in order to connect to your database.
+
+Apache Ignite is shipped with its own implementation of the JDBC driver which makes it possible to connect to Ignite
+from the Pentaho platform and analyze the data stored in a distributed Ignite cluster.
+
+== Installation and Configuration
+
+* Download and Install Pentaho platform. Refer to the official https://help.pentaho.com/Documentation/7.1/Installation[Pentaho documentation, window=_blank].
+* After Pentaho is successfully installed, you will need to install the Apache Ignite JDBC Driver using the JDBC Distribution Tool.
+To do so, download Apache Ignite and locate `{apache-ignite}/libs/ignite-core-{version}.jar` and copy the file to the `{pentaho}/jdbc-distribution` directory.
+* Open a command line tool, navigate to the `{pentaho}/jdbc-distribution` directory and run the following script `./distribute-files.sh ignite-core-{version}.jar`
+
+== Ignite JDBC Driver Setup
+
+The next step is to set up the JDBC driver and connect to the cluster. Below you will find the minimal set of actions
+that need to be taken. Refer to the link:SQL/JDBC/jdbc-driver[JDBC Thin Driver] documentation for more details.
+
+. Open your command line tool, go to the `{pentaho}/design-tools/data-integration` directory and launch Pentaho Data Integration using the `./spoon.sh` script.
+. Once a screen like the one below appears, click the `File` menu and create a new transformation (`New -> Transformation`).
++
+image::images/tools/pentaho-new-transformation.png[Pentaho New Transformation]
+
+. Create a new Database Connection by setting the following parameters in Pentaho:
++
+[opts="header"]
+|===
+|Pentaho Property Name | Value
+
+| Connection Name| Set some custom name such as `IgniteConnection`
+| Connection Type| Select the `Generic database` option.
+| Access| Select the `Native (JDBC)` option.
+| Custom Connection URL| `jdbc:ignite:thin://localhost:10800` or the real address of a cluster node to connect to.
+| Custom Driver Class Name| `org.apache.ignite.IgniteJdbcThinDriver`
+|===
+
+. Click the `Test` button to check that the connection can be established:
++
+image::images/tools/pentaho-ignite-connection.png[Pentaho Ignite Connection]
+
+== Data Querying and Analysis
+
+Once the connection between Ignite and Pentaho is established, you can query, transform, and analyze the data in a
+variety of ways supported by Pentaho. For more details, refer to the official Pentaho documentation.
+
+image::images/tools/pentaho-running-and-inspecting-data.png[Pentaho Running Queries]
diff --git a/docs/_docs/tools/sqlline.adoc b/docs/_docs/tools/sqlline.adoc
new file mode 100644
index 0000000..f0c4fe8
--- /dev/null
+++ b/docs/_docs/tools/sqlline.adoc
@@ -0,0 +1,225 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Using SQLLine With Apache Ignite
+
+
+Command line tool for SQL connectivity.
+
+== Overview
+Apache Ignite is shipped with the SQLLine tool – a console-based utility for connecting to relational databases and executing SQL commands.
+This documentation describes how to connect SQLLine to your cluster, as well as various supported SQLLine commands.
+
+== Connecting to Ignite Cluster
+From your `{IGNITE_HOME}/bin` directory, run `sqlline.sh -u jdbc:ignite:thin://[host]` to connect SQLLine to the cluster. Substitute `[host]` with your actual value. For example:
+
+[tabs]
+--
+tab:Unix[]
+[source,shell]
+----
+./sqlline.sh --verbose=true -u jdbc:ignite:thin://127.0.0.1/
+----
+
+tab:Windows[]
+[source,shell]
+----
+sqlline.bat --verbose=true -u jdbc:ignite:thin://127.0.0.1/
+----
+
+--
+
+
+
+Use the `-h` or `--help` option to see the various options available with SQLLine:
+
+[tabs]
+--
+tab:Unix[]
+[source,shell]
+----
+./sqlline.sh -h
+./sqlline.sh --help
+----
+
+tab:Windows[]
+[source,shell]
+----
+sqlline.bat -h
+sqlline.bat --help
+----
+--
+
+
+=== Authentication
+If you have authentication enabled for your cluster, run SQLLine from your `{IGNITE_HOME}/bin` directory with the connection URL `jdbc:ignite:thin://[address]:[port];user=[username];password=[password]`. Substitute `[address]`, `[port]`, `[username]`, and `[password]` with your actual values. For example:
+
+
+[tabs]
+--
+tab:Unix[]
+[source,shell]
+----
+./sqlline.sh --verbose=true -u "jdbc:ignite:thin://127.0.0.1:10800;user=ignite;password=ignite"
+----
+
+tab:Windows[]
+[source,shell]
+----
+sqlline.bat --verbose=true -u "jdbc:ignite:thin://127.0.0.1:10800;user=ignite;password=ignite"
+----
+--
+
+If you do not have authentication set, omit `[username]` and `[password]`.
+
+[NOTE]
+====
+[discrete]
+=== Put JDBC URL in Quotes When Connecting from bash
+Make sure to put the connection URL in double quotes when connecting from a bash environment, as follows: "jdbc:ignite:thin://[address]:[port];user=[username];password=[password]"
+====
+
+== Commands
+Here is the list of supported link:http://sqlline.sourceforge.net#commands[SQLLine commands, window=_blank]:
+
+[width="100%", cols="25%, 75%"]
+|=======
+|Command |	Description
+
+|`!all`
+|Execute the specified SQL against all the current connections.
+
+|`!batch`
+|Start or execute a batch of SQL statements.
+
+|`!brief`
+|Enable terse output mode.
+
+|`!closeall`
+|Close all current open connections.
+
+|`!columns`
+|Display columns of a table.
+
+|`!connect`
+|Connect to a database.
+
+|`!dbinfo`
+|List metadata information about the current connection.
+
+|`!dropall`
+|Drop all tables in the database.
+
+|`!go`
+|Change to a different active connection.
+
+|`!help`
+|Display help information.
+
+|`!history`
+|Display the command history.
+
+|`!indexes`
+|Display indexes for a table.
+
+|`!list`
+|Display all active connections.
+
+|`!manual`
+|Display SQLLine manual.
+
+|`!metadata`
+|Invoke arbitrary metadata commands.
+
+|`!nickname`
+|Create a friendly name for the connection (updates command prompt).
+
+|`!outputformat`
+|Change the method for displaying SQL results.
+
+|`!primarykeys`
+|Display the primary key columns for a table.
+
+|`!properties`
+|Connect to the database defined in the specified properties file.
+
+|`!quit`
+|Exit SQLLine.
+
+|`!reconnect`
+|Reconnect to the current database.
+
+|`!record`
+|Begin recording all output from SQL commands.
+
+|`!run`
+|Execute a command script.
+
+|`!script`
+|Save executed commands to a file.
+
+|`!sql`
+|Execute SQL against a database.
+
+|`!tables`
+|List all the tables in the database.
+
+|`!verbose`
+|Enable verbose output mode.
+|=======
+
+Note that the above list may not be complete. Support for additional SQLLine commands can be added.
+
+== Example
+After connecting to the cluster, you can execute SQL statements and SQLLine commands:
+
+
+Create tables:
+[source,sql]
+----
+0: jdbc:ignite:thin://127.0.0.1/> CREATE TABLE City (id LONG PRIMARY KEY, name VARCHAR) WITH "template=replicated";
+No rows affected (0.301 seconds)
+
+0: jdbc:ignite:thin://127.0.0.1/> CREATE TABLE Person (id LONG, name VARCHAR, city_id LONG, PRIMARY KEY (id, city_id)) WITH "backups=1, affinityKey=city_id";
+No rows affected (0.078 seconds)
+
+0: jdbc:ignite:thin://127.0.0.1/> !tables
++-----------+--------------+--------------+-------------+-------------+
+| TABLE_CAT | TABLE_SCHEM  |  TABLE_NAME  | TABLE_TYPE  | REMARKS     |
++-----------+--------------+--------------+-------------+-------------+
+|           | PUBLIC       | CITY         | TABLE       |             |
+|           | PUBLIC       | PERSON       | TABLE       |             |
++-----------+--------------+--------------+-------------+-------------+
+----
+
+Define indexes:
+
+[source,sql]
+----
+0: jdbc:ignite:thin://127.0.0.1/> CREATE INDEX idx_city_name ON City (name);
+No rows affected (0.039 seconds)
+
+0: jdbc:ignite:thin://127.0.0.1/> CREATE INDEX idx_person_name ON Person (name);
+No rows affected (0.013 seconds)
+
+0: jdbc:ignite:thin://127.0.0.1/> !indexes
++-----------+--------------+--------------+-------------+-----------------+
+| TABLE_CAT | TABLE_SCHEM  |  TABLE_NAME  | NON_UNIQUE  | INDEX_QUALIFIER |
++-----------+--------------+--------------+-------------+-----------------+
+|           | PUBLIC       | CITY         | true        |                 |
+|           | PUBLIC       | PERSON       | true        |                 |
++-----------+--------------+--------------+-------------+-----------------+
+----
+
+You can also watch a link:https://www.youtube.com/watch?v=FKS8A86h-VY[screencast, window=_blank] to learn more about how to use SQLLine.
diff --git a/docs/_docs/tools/tableau.adoc b/docs/_docs/tools/tableau.adoc
new file mode 100644
index 0000000..07c5bef
--- /dev/null
+++ b/docs/_docs/tools/tableau.adoc
@@ -0,0 +1,66 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Using Tableau With Apache Ignite
+
+== Overview
+
+http://www.tableau.com[Tableau, window=_blank] is an interactive data-visualization tool focused on business intelligence.
+It uses ODBC APIs to connect to a variety of databases and data platforms, allowing you to analyze the data stored in them.
+
+You can use the link:SQL/ODBC/odbc-driver[Ignite ODBC driver] to interconnect Ignite with Tableau and analyze the data stored
+in the cluster.
+
+== Installation and Configuration
+
+To connect to an Apache Ignite cluster from Tableau, you need to do the following:
+
+* Download and install Tableau Desktop. Refer to the official Tableau documentation on http://www.tableau.com[the product's main website, window=_blank].
+* Install the Apache Ignite ODBC driver on a Windows or Unix-based operating system. Detailed instructions can be found on the link:SQL/ODBC/odbc-driver[driver's configuration page].
+* Finalize the driver configuration by link:SQL/ODBC/connection-string-dsn#configuring-dsn[setting up a DSN (Data Source Name)].
+Tableau will connect to the DSN configured at this step.
+* The ODBC driver communicates with the Ignite cluster through the ODBC processor. Make sure that this component is
+enabled on the link:SQL/ODBC/querying-modifying-data#configuring-the-cluster[cluster side].
+
+Once you have completed these steps, you can connect to the cluster and analyze its data.
+
+== Connecting to Ignite Cluster
+
+. Launch the Tableau application and find the `Other Databases (ODBC)` option under the `Connect` \-> `To a Server` \-> `+More...+` menu.
++
+image::images/tools/tableau-choosing_driver_01.png[Tableau Driver Selection]
+
+
+. Click the `Edit connection` link.
++
+image::images/tools/tableau-edit_connection.png[Tableau Edit Connection]
+
+. Set the `DSN` property to the name you configured earlier (`LocalApacheIgniteDSN` in the example below), and then
+click the `Connect` button.
++
+image::images/tools/tableau-choose_dsn_01.png[Tableau Choose DSN]
+
+. Tableau validates the settings by opening a temporary connection. If the validation succeeds, the `Sign In` button
+and additional connection-related fields become active. Click `Sign In` to finalize the connection process.
++
+image::images/tools/tableau-choose_dsn_02.png[Tableau Choose DSN]
+
+== Data Querying and Analysis
+
+Once the connection between Ignite and Tableau is established, you can query and analyze the data in any of the ways
+Tableau supports. For more details, refer to the http://www.tableau.com/learn/training[official Tableau documentation, window=_blank].
+
+image::images/tools/tableau-creating_dataset.png[Tableau Creating DataSet]
+
+image::images/tools/tableau-visualizing_data.png[Tableau Visualizing Data]
diff --git a/docs/_docs/tools/visor-cmd.adoc b/docs/_docs/tools/visor-cmd.adoc
new file mode 100644
index 0000000..590ec16
--- /dev/null
+++ b/docs/_docs/tools/visor-cmd.adoc
@@ -0,0 +1,68 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Visor CMD
+
+== Overview
+
+Visor Command Line Interface (CMD) is a command-line tool for monitoring Ignite clusters. It provides basic statistics
+about cluster nodes, caches, and compute tasks, and lets you manage the size of your cluster by starting or stopping nodes.
+
+[NOTE]
+====
+[discrete]
+=== Ignite Control Script
+The link:tools/control-script[Control Script] is another command-line tool developed by the Ignite community.
+It complements and expands the capabilities of Visor CMD.
+====
+
+image::images/tools/visor-cmd.png[Visor CMD]
+
+== Usage
+
+Ignite ships with the `IGNITE_HOME/bin/ignitevisorcmd.{sh|bat}` script that starts Visor CMD. To connect Visor CMD to a cluster,
+use the `open` command, as shown in the sketch below.
+
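+A minimal session sketch follows. The `-cpath` argument and the exact prompt text are assumptions based on common Visor usage; run `help open` in your environment to confirm the syntax:
+
+[source,shell]
+----
+$ $IGNITE_HOME/bin/ignitevisorcmd.sh
+
+# Connect to the cluster using an Ignite configuration file (the -cpath flag is assumed).
+visor> open -cpath=$IGNITE_HOME/config/default-config.xml
+
+# Print the current cluster topology.
+visor> top
+
+# Disconnect from the cluster and exit the console.
+visor> close
+visor> quit
+----
+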
+The following commands are supported by Visor. To get full information on a command, type `help "cmd"` or `? "cmd"`.
+
+[cols="15%,15%,70%", opts="header"]
+|===
+|Command | Alias | Description
+
+| `ack`| | Acks arguments on all remote nodes.
+| `alert`| | Alerts for user-defined events.
+| `cache`| | Prints cache statistics, clears caches, and prints a list of all entries in a cache.
+| `close`| | Disconnects Visor CMD console from the cluster.
+| `config`| | Prints nodes' configurations.
+| `deploy`| | Copies file or folder to remote host.
+| `disco`| | Prints topology change log.
+| `events`| | Prints events from a node.
+| `gc`| | Runs GC on remote nodes.
+| `help`| `?`| Visor CMD's help.
+| `kill`| | Kills or restarts a node.
+| `log`| | Starts or stops cluster-wide event logging.
+| `mclear`| | Clears Visor CMD's memory variables.
+| `mget`| | Gets Visor CMD's memory variables.
+| `mlist`| | Prints Visor CMD's memory variables.
+| `node`| | Prints node's statistics.
+| `open`| | Connects Visor CMD to the cluster.
+| `ping`| | Pings a node.
+| `quit`| | Quits the Visor CMD console.
+| `start`| | Starts or restarts remote nodes.
+| `status`| `!`| Prints detailed Visor CMD's status.
+| `tasks`| | Prints tasks' execution statistics.
+| `top`| | Prints the current cluster topology.
+| `vvm`| | Opens VisualVM for nodes in the cluster.
+|===
+
diff --git a/docs/_docs/transactions/mvcc.adoc b/docs/_docs/transactions/mvcc.adoc
new file mode 100644
index 0000000..86cfaf7
--- /dev/null
+++ b/docs/_docs/transactions/mvcc.adoc
@@ -0,0 +1,193 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Multiversion Concurrency Control
+
+IMPORTANT: MVCC is currently in beta.
+
+== Overview
+
+Caches with the `TRANSACTIONAL_SNAPSHOT` atomicity mode support SQL transactions as well as link:key-value-api/transactions[key-value transactions] and enable multiversion concurrency control (MVCC) for both types of transactions.
+
+
+== Multiversion Concurrency Control
+
+
+Multiversion Concurrency Control (MVCC) is a method of controlling the consistency of data accessed by multiple users concurrently. MVCC implements the https://en.wikipedia.org/wiki/Snapshot_isolation[snapshot isolation] guarantee which ensures that each transaction always sees a consistent snapshot of data.
+
+Each transaction obtains a consistent snapshot of data when it starts and can only view and modify data in this snapshot.
+When the transaction updates an entry, Ignite verifies that the entry has not been updated by other transactions and creates a new version of the entry.
+The new version becomes visible to other transactions only when and if this transaction commits successfully.
+If the entry has been updated, the current transaction fails with an exception (see the <<Concurrent Updates>> section for the information on how to handle update conflicts).
+
+The snapshots are not physical but logical snapshots generated by the MVCC coordinator: a cluster node that coordinates transactional activity in the cluster. The coordinator keeps track of all active transactions and is notified when each transaction finishes. All operations on an MVCC-enabled cache request a snapshot of data from the coordinator.
+
+== Enabling MVCC
+To enable MVCC for a cache, use the `TRANSACTIONAL_SNAPSHOT` atomicity mode in the cache configuration. If you create a table with the `CREATE TABLE` command, specify the atomicity mode as a parameter in the `WITH` part of the command:
+
+
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+include::code-snippets/xml/mvcc.xml[tags=ignite-config;!discovery, indent=0]
+----
+
+tab:SQL[]
+[source,sql]
+----
+CREATE TABLE Person (id INT PRIMARY KEY, name VARCHAR) WITH "ATOMICITY=TRANSACTIONAL_SNAPSHOT"
+----
+--
+
+NOTE: The `TRANSACTIONAL_SNAPSHOT` mode only supports the default concurrency mode (`PESSIMISTIC`) and default isolation level (`REPEATABLE_READ`). See link:key-value-api/transactions#concurrency-modes-and-isolation-levels[Concurrency modes and isolation levels] for details.
+
+
+== Concurrent Updates
+
+If an entry is read and then updated within a single transaction, it is possible that another transaction could be processed in between the two operations and update the entry first. In this case, an exception is thrown when the first transaction attempts to update the entry and the transaction is marked as "rollback only". You have to retry the transaction.
+
+This is how to tell that an update conflict has occurred:
+
+* When the Java transaction API is used, a `CacheException` is thrown with the message `Cannot serialize transaction due to write conflict (transaction is marked for rollback)`, and the `Transaction.rollbackOnly` flag is set to `true`.
+* When SQL transactions are executed through the JDBC or ODBC driver, the `SQLSTATE:40001` error code is returned.
+
+[tabs]
+--
+
+tab:Ignite Java[]
+[source,java]
+----
+for (int i = 1; i <= 5; i++) {
+    try (Transaction tx = Ignition.ignite().transactions().txStart()) {
+        System.out.println("attempt #" + i + ", value: " + cache.get(1));
+        try {
+            cache.put(1, "new value");
+            tx.commit();
+            System.out.println("attempt #" + i + " succeeded");
+            break;
+        } catch (CacheException e) {
+            if (!tx.isRollbackOnly()) {
+              // Transaction was not marked as "rollback only",
+              // so it's not a concurrent update issue.
+              // Process the exception here.
+                break;
+            }
+        }
+    }
+}
+----
+tab:JDBC[]
+[source,java]
+----
+Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
+
+// Open JDBC connection.
+Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
+
+PreparedStatement updateStmt = null;
+PreparedStatement selectStmt = null;
+
+try {
+    // starting a transaction
+    conn.setAutoCommit(false);
+
+    selectStmt = conn.prepareStatement("select name from Person where id = ?");
+    selectStmt.setInt(1, 1);
+    ResultSet rs = selectStmt.executeQuery();
+
+    if (rs.next())
+        System.out.println("name = " + rs.getString("name"));
+
+    updateStmt = conn.prepareStatement("update Person set name = ? where id = ? ");
+
+    updateStmt.setString(1, "New Name");
+    updateStmt.setInt(2, 1);
+    updateStmt.executeUpdate();
+
+    // committing the transaction
+    conn.commit();
+} catch (SQLException e) {
+    if ("40001".equals(e.getSQLState())) {
+        // retry the transaction
+    } else {
+        // process the exception
+    }
+} finally {
+    if (updateStmt != null) updateStmt.close();
+    if (selectStmt != null) selectStmt.close();
+}
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/SqlTransactions.cs[tag=mvccConcurrentUpdates,indent=0]
+----
+
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/concurrent_updates.cpp[tag=concurrent-updates,indent=0]
+----
+
+--
+
+
+
+== Limitations
+
+=== Cross-Cache Transactions
+The `TRANSACTIONAL_SNAPSHOT` mode is enabled per cache and does not permit caches with different atomicity modes within the same transaction. As a consequence, if you want to cover multiple tables in one SQL transaction, all tables must be created with the `TRANSACTIONAL_SNAPSHOT` mode.
+
+=== Nested Transactions
+Ignite supports three modes of handling nested SQL transactions. The mode is set via the `nestedTransactionsMode` JDBC/ODBC connection parameter.
+
+
+[source, shell]
+----
+jdbc:ignite:thin://127.0.0.1/?nestedTransactionsMode=COMMIT
+----
+
+When a nested transaction occurs within another transaction, the `nestedTransactionsMode` parameter dictates the system behavior:
+
+- `ERROR` — When the nested transaction is encountered, an error is thrown and the enclosing transaction is rolled back. This is the default behavior.
+- `COMMIT` — The enclosing transaction is committed; the nested transaction starts and is committed when its COMMIT statement is encountered. The rest of the statements in the enclosing transaction are executed as implicit transactions.
+- `IGNORE` — DO NOT USE THIS MODE. The beginning of the nested transaction is ignored, statements within the nested transaction will be executed as part of the enclosing transaction, and all changes will be committed with the commit of the nested transaction. The subsequent statements of the enclosing transaction will be executed as implicit transactions.
+
+
+=== Continuous Queries
+If you use link:key-value-api/continuous-queries[Continuous Queries] with an MVCC-enabled cache, there are several limitations that you should be aware of:
+
+* When an update event is received, subsequent reads of the updated key may return the old value for a period of time. This is because the update event is sent from the node where the key is updated as soon as the update happens, while the MVCC coordinator may not learn of the update until slightly later.
+* There is a limit on the number of keys per node that a single transaction can update when continuous queries are used. The updated values are kept in memory, and if there are too many updates, the node might not have enough RAM to hold all the objects. To avoid out-of-memory errors, each transaction is allowed to update at most 20,000 keys (the default value) on a single node. If this limit is exceeded, the transaction throws an exception and is rolled back. The limit can be changed via the `IGNITE_MVCC_TX_SIZE_CACHING_THRESHOLD` system property, as shown in the sketch below.
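+
+A minimal sketch of raising this limit at node startup, assuming the standard `-J` prefix that `ignite.sh` uses to pass arguments to the JVM (the value of 40,000 is an arbitrary example):
+
+[source,shell]
+----
+# Start a node with the per-transaction, per-node key limit raised to 40,000.
+ignite.sh -J-DIGNITE_MVCC_TX_SIZE_CACHING_THRESHOLD=40000 ignite-config.xml
+----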
+
+=== Other Limitations
+The following features are not supported for the MVCC-enabled caches. These limitations may be addressed in future releases.
+
+* link:configuring-caches/near-cache[Near Caches]
+* link:configuring-caches/expiry-policies[Expiry Policies]
+* link:events/listening-to-events[Events]
+* link:{javadoc_base_url}/org/apache/ignite/cache/CacheInterceptor.html[Cache Interceptors]
+* link:persistence/external-storage[External Storage]
+* link:configuring-caches/on-heap-caching[On-Heap Caching]
+* link:{javadoc_base_url}/org/apache/ignite/IgniteCache.html#lock-K-[Explicit Locks]
+* The link:{javadoc_base_url}/org/apache/ignite/IgniteCache.html#localEvict-java.util.Collection-[localEvict()] and link:{javadoc_base_url}/org/apache/ignite/IgniteCache.html#localPeek-K-org.apache.ignite.cache.CachePeekMode...-[localPeek()] methods
diff --git a/docs/_docs/understanding-configuration.adoc b/docs/_docs/understanding-configuration.adoc
new file mode 100644
index 0000000..6bd6d78
--- /dev/null
+++ b/docs/_docs/understanding-configuration.adoc
@@ -0,0 +1,111 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Understanding Configuration
+
+This chapter explains different ways of setting configuration parameters in an Ignite cluster. It covers the most
+common approaches for Java and C++ applications.
+
+[NOTE]
+====
+[discrete]
+=== Configuring .NET, Python, Node.js, and Other Programming Languages
+
+* .NET developers: refer to the link:net-specific/net-configuration-options[Ignite.NET Configuration] section
+* Developers of Python, Node.js, and other programming languages: use this page to configure your
+Java-powered Ignite cluster, and the link:thin-clients/getting-started-with-thin-clients[thin clients] section to set up
+your language-specific applications that work with the cluster.
+====
+
+== Overview
+
+You can specify custom configuration parameters by providing an instance of the javadoc:org.apache.ignite.configuration.IgniteConfiguration[] class to Ignite when starting the node.
+You can set the parameters either programmatically or via an XML configuration file.
+These two approaches are fully interchangeable.
+
+The XML configuration file is a Spring Bean definition file that must contain the `IgniteConfiguration` bean.
+When starting a node from the command line, pass the configuration file as a parameter to the `ignite.sh|bat` script, as follows:
+
+[source,shell]
+----
+ignite.sh ignite-config.xml
+----
+
+If you don't specify a configuration file, the default file `{IGNITE_HOME}/config/default-config.xml` is used.
+
+== Spring XML Configuration
+
+To create a configuration in Spring XML format, define the
+`IgniteConfiguration` bean and set the parameters that you want to change from their defaults. For detailed information on how to use XML Schema-based configuration, see the
+https://docs.spring.io/spring/docs/4.2.x/spring-framework-reference/html/xsd-configuration.html[official
+Spring documentation].
+
+In the example below, we create an `IgniteConfiguration` bean, set the `workDirectory` property, and configure a link:data-modeling/data-partitioning#partitioned[partitioned cache].
+
+[source,xml]
+----
+<?xml version="1.0" encoding="UTF-8"?>
+
+<beans xmlns="http://www.springframework.org/schema/beans"
+       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+       xsi:schemaLocation="
+        http://www.springframework.org/schema/beans
+        http://www.springframework.org/schema/beans/spring-beans.xsd">
+
+    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
+        <property name="workDirectory" value="/path/to/work/directory"/>
+
+        <property name="cacheConfiguration">
+            <bean class="org.apache.ignite.configuration.CacheConfiguration">
+                <!-- Set the cache name. -->
+                <property name="name" value="myCache"/>
+                <!-- Set the cache mode. -->
+                <property name="cacheMode" value="PARTITIONED"/>
+                <!-- Other cache parameters. -->
+            </bean>
+        </property>
+    </bean>
+</beans>
+----
+
+== Programmatic Configuration
+
+Create an instance of the `IgniteConfiguration` class and set the required
+parameters, as shown in the example below.
+
+[tabs]
+--
+
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/UnderstandingConfiguration.java[tag=cfg,indent=0]
+----
+
+See the link:{javadoc_base_url}/org/apache/ignite/configuration/IgniteConfiguration.html[IgniteConfiguration,window=_blank] javadoc for the complete list of parameters.
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UnderstandingConfiguration.cs[tag=UnderstandingConfigurationProgrammatic,indent=0]
+----
+
+See the https://ignite.apache.org/releases/{version}/dotnetdoc/api/Apache.Ignite.Core.IgniteConfiguration.html[API docs,window=_blank] for details.
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/setting_work_directory.cpp[tag=setting-work-directory,indent=0]
+----
+--
diff --git a/docs/_includes/copyright.html b/docs/_includes/copyright.html
new file mode 100644
index 0000000..00e196c
--- /dev/null
+++ b/docs/_includes/copyright.html
@@ -0,0 +1,22 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<div class="copyright">
+ © {{ "today" | date: "%Y" }} The Apache Software Foundation.<br/>
+Apache, Apache Ignite, the Apache feather and the Apache Ignite logo are either registered trademarks or trademarks of The Apache Software Foundation. 
+
+</div>
diff --git a/docs/_includes/footer.html b/docs/_includes/footer.html
new file mode 100644
index 0000000..76a3ffe
--- /dev/null
+++ b/docs/_includes/footer.html
@@ -0,0 +1,20 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+      {% assign doc_var = page.leftNav | append: "_var" %}
+      {% assign base_url = '' %}
+<footer>
+</footer>
diff --git a/docs/_includes/header.html b/docs/_includes/header.html
new file mode 100644
index 0000000..649dcba
--- /dev/null
+++ b/docs/_includes/header.html
@@ -0,0 +1,36 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<!--header>
+    <button type='button' class='menu' title='Docs menu'>
+	  <img src="{{'assets/images/menu-icon.svg'|relative_url}}"/>
+    </button>
+    
+    <nav>
+   
+    </nav>
+    <form class='search'>
+        <button class="search-close" type='button'><img src='{{"assets/images/cancel.svg"|relative_url}}'></button>
+        <input type="search" placeholder="Search…" id="search-input">
+    </form>
+	<button type='button' class='search-toggle'><img src='{{"assets/images/search.svg"|relative_url}}'></button>
+    <button type='button' class='top-nav-toggle'>⋮</button>
+    <a href="https://github.com/ignite" title='GitHub' class='github' target="_blank">
+        <img src="{{'assets/images/github-gray.svg'|relative_url}}" alt="GitHub logo">
+    </a>
+</header-->
+
diff --git a/docs/_includes/left-nav.html b/docs/_includes/left-nav.html
new file mode 100644
index 0000000..6e4d223
--- /dev/null
+++ b/docs/_includes/left-nav.html
@@ -0,0 +1,88 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+{% assign prefix = site.attrs.base_url  %}
+{% assign normalized_path = page.url | replace: ".html","" | remove_first: prefix %}
+{% if page.toc != false %}
+<nav class='left-nav' data-swiftype-index='false'>
+
+    {% for guide in site.data.toc %}
+    <li>
+        {% if guide.items %}
+
+        {% assign guide_class = 'collapsed' %}
+
+        {% capture submenu %}
+        {% for chapter in guide.items %}
+
+        {% assign chapter_class = 'collapsed' %}
+        {% assign normalized_chapter_url = chapter.url | prepend: "/" %}
+        {% if normalized_path == normalized_chapter_url %}
+        {% assign guide_class = 'expanded' %}
+        {% assign chapter_class = 'expanded' %}
+        {% endif %}
+
+    <li>
+        {% if chapter.items %}
+        {% assign matching_items_count = chapter.items | where: 'url', normalized_path | size %}
+        {% if matching_items_count != 0 %}
+        {% assign chapter_class = 'expanded parent' %}
+        {% endif %}
+        <button
+                type='button'
+                class='{{chapter_class}} {% if normalized_path == normalized_chapter_url %}active{% endif %}'>
+            {{chapter.title}}<img class="state-indicator" src="{{'assets/images/left-nav-arrow.svg' | relative_url}}">
+        </button>
+        <nav class="sub_pages {{chapter_class}}">
+
+            {% for subpage in chapter.items %}
+            {% assign normalized_subpage_url = subpage.url | prepend: "/" %}
+            {% if normalized_path == normalized_subpage_url %}
+            {% assign guide_class = 'expanded' %}
+            {% assign chapter_class = 'expanded' %}
+            {% endif %}
+
+    <li><a href="{{prefix}}/{{subpage.url}}"
+           class='{% if normalized_path == normalized_subpage_url %}active{% endif %}'>{{subpage.title}}</a></li>
+    {% endfor %}
+</nav>
+{% else %}
+<a href="{{prefix}}{{chapter.url|relative_url}}"
+
+   class='{% if normalized_path == normalized_chapter_url %}active{% endif %}'
+>{{chapter.title}}</a>
+{% endif %}
+</li>
+{% endfor %}
+
+{% endcapture %}
+
+<button type='button' data-guide-url="{{guide.url}}"
+        class='group-toggle {{guide_class}} {% if page.url contains guide.url %}parent{% endif %}'>{{guide.title}}<img
+        class="state-indicator" src="{{'assets/images/left-nav-arrow.svg'|relative_url}}"></button>
+<nav class='nav-group {{guide_class}}'>
+    {{ submenu }}
+</nav>
+{% else %}
+
+<a href="{{prefix}}{{guide.url|relative_url}}" class='{% if guide.url == normalized_path %}active{% endif %}'>{{guide.title}}</a>
+{% endif %}
+</li>
+{% endfor %}
+</nav>
+<div class="left-nav__overlay"></div>
+{% endif %}
diff --git a/docs/_includes/right-nav.html b/docs/_includes/right-nav.html
new file mode 100644
index 0000000..7fccc8a
--- /dev/null
+++ b/docs/_includes/right-nav.html
@@ -0,0 +1,21 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<nav class="right-nav" data-swiftype-index='false'>
+    {{ page.document | tocify_asciidoc: 6 }}
+    {% include footer.html %}
+</nav>
diff --git a/docs/_includes/section-toc.html b/docs/_includes/section-toc.html
new file mode 100644
index 0000000..8e793fc
--- /dev/null
+++ b/docs/_includes/section-toc.html
@@ -0,0 +1,31 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+{% assign s = include.section %}
+
+{% if include.title %}
+<a {% if s.url %} href="{{site.attrs.base_url}}/{{s.url}}" {% endif %}>{{s.title}}</a>
+{% endif %}
+
+{% if s.items %}
+    <ul class='sub_pages'>
+    {% for subpage in s.items %}
+        <li><a href="{{site.attrs.base_url}}/{{subpage.url}}" class=''>{{subpage.title}}</a></li>
+    {% endfor %}
+    </ul>
+{% endif %}
+
diff --git a/docs/_includes/toc.html b/docs/_includes/toc.html
new file mode 100644
index 0000000..683b400
--- /dev/null
+++ b/docs/_includes/toc.html
@@ -0,0 +1,63 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<div class="toc">
+{% assign path =  page.path | remove: "/index.adoc" | remove_first: "_" | remove_first:  page.collection | remove_first: "/"  %}
+
+{% assign current_guide = path | split: "/" | first %}
+{% assign section = path | remove: current_guide | remove : "/" %}
+
+
+{% if current_guide != nil %}
+
+    {% assign current_guide_url = current_guide | prepend: "/" %} 
+
+
+    {% assign guide = site.data.toc | where: "url", current_guide_url | first %} 
+
+
+    {% if section != "" %}
+       {% assign section_url = "/" | append: current_guide | append: "/" | append: section %}
+       {% assign sect = guide.items | where: "url", section_url | first %}
+
+       {% include section-toc.html section=sect title=false %}   
+    {% else %}
+        <ul>
+            {% for sect in guide.items %}
+               <li>
+                   {% include section-toc.html section=sect title=true%}   
+               </li>
+            {% endfor %}
+        </ul> 
+    {% endif %}
+{% else %}  
+    {% for guide in site.data.toc %}
+
+        <h2> 
+<a href="{{site.attrs.base_url}}{{guide.url}}" class=''>{{guide.title}}</a> </h2>
+        {% if guide.items %}
+        <ul>
+          {% for sect in guide.items %}
+              <li>
+                  {% include section-toc.html section=sect title=true%}   
+              </li>
+          {% endfor %}
+        </ul>
+        {% endif %}
+    {% endfor %}
+{% endif %}
+</div>
diff --git a/docs/_layouts/default.html b/docs/_layouts/default.html
new file mode 100644
index 0000000..9c7a42e
--- /dev/null
+++ b/docs/_layouts/default.html
@@ -0,0 +1,72 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<!DOCTYPE html>
+
+      {% assign doc_var = page.leftNav | append: "_var" %}
+      
+<html lang="en">
+<head>
+    <!-- Global site tag (gtag.js) - Google Analytics -->
+    <script async src="https://www.googletagmanager.com/gtag/js?id=UA-1382082-1"></script>
+    <script>
+    window.dataLayer = window.dataLayer || [];
+    function gtag(){dataLayer.push(arguments);}
+    gtag('js', new Date());
+
+    gtag('config', 'UA-61232409-1');
+    </script>
+
+    {% if page.content == nil or page.content == ""  %}
+<META NAME="ROBOTS" CONTENT="NOINDEX">
+{% endif %}
+
+    <meta charset="UTF-8">
+    <title>{{page.title}} | Ignite Documentation</title>
+    {% if site.attrs.base_url contains "/latest" %}
+    <link rel="canonical" href="{{page.id | replace_first: site.version, 'latest' }}" />
+    {% else %}
+    <link rel="canonical" href="{{page.id}}" />
+    {% endif %}
+	{% capture timestamp %}{{"now"| date: '%s'}}{% endcapture %}
+	<link rel="stylesheet" href="{{'assets/css/styles.css?'|append: timestamp |relative_url}}">
+    <link rel="stylesheet" href="{{'assets/css/asciidoc-pygments.css'|relative_url}}">
+    <link rel="shortcut icon" href="{{'/favicon.ico'|relative_url}}">
+    <meta name='viewport' content='width=device-width, height=device-height, initial-scale=1.0, minimum-scale=1.0'>
+
+	<link rel="stylesheet"
+	  href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
+
+    <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
+    <script type="text/javascript" src="{{'assets/js/jquery.swiftype.autocomplete.js?' |append: timestamp |relative_url}}"></script>
+    <script type="text/javascript" src="{{'assets/js/anchor.min.js?'|append: timestamp|relative_url}}"></script>
+    
+
+</head>
+<body>
+    {% include header.html %}
+    {{content}}
+    <script>
+    // inits deep anchors -- needs to be done here because of https://www.bryanbraun.com/anchorjs/#dont-run-it-too-late 
+    anchors.add('.page-docs h1, .page-docs h2, .page-docs h3:not(.discrete), .page-docs h4, .page-docs h5');
+    anchors.options = {
+        placement: 'right',
+        visible: 'always'
+    };
+    </script>
+</body>
+<script type='module' src='{{"assets/js/index.js?"|append: timestamp | relative_url}}' async></script>
+</html>
diff --git a/docs/_layouts/doc.html b/docs/_layouts/doc.html
new file mode 100644
index 0000000..9d5e831
--- /dev/null
+++ b/docs/_layouts/doc.html
@@ -0,0 +1,33 @@
+---
+layout: default
+---
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+	 <link rel="stylesheet" href="{{'assets/css/docs.css'|relative_url}}">
+<section class='page-docs'>
+    {% include left-nav.html %}
+    <article data-swiftype-index='true'>
+        <a class='edit-link' href="{{site.attrs.docSourceUrl}}/{{page.path}}" target="_blank">Edit</a>
+        {% if page.path contains ".adoc" %}
+            <h1>{{page.title}}</h1>
+        {% endif %}
+        {{content}}
+        {% include copyright.html %}
+    </article>
+    {% include right-nav.html %}    
+</section>
+<script type='module' src='{{"assets/js/code-copy-to-clipboard.js"|relative_url}}' async></script>
diff --git a/docs/_layouts/toc.html b/docs/_layouts/toc.html
new file mode 100644
index 0000000..682f77c
--- /dev/null
+++ b/docs/_layouts/toc.html
@@ -0,0 +1,32 @@
+---
+layout: default
+---
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+	 <link rel="stylesheet" href="{{'assets/css/docs.css'|relative_url}}">
+<section class='page-docs'>
+    {% include left-nav.html %}
+    <article data-swiftype-index='true'>
+        {% if page.path contains ".adoc" %}
+            <h1>{{page.title}}</h1>
+        {% endif %}
+        {{ content }}
+		{% include toc.html %}
+    </article>
+    {% include right-nav.html %}    
+</section>
+<script type='module' src='{{"assets/js/code-copy-to-clipboard.js"|relative_url}}' async></script>
diff --git a/docs/_plugins/asciidoctor-extensions.rb b/docs/_plugins/asciidoctor-extensions.rb
new file mode 100644
index 0000000..715d33d
--- /dev/null
+++ b/docs/_plugins/asciidoctor-extensions.rb
@@ -0,0 +1,180 @@
+# MIT License
+#
+# Copyright (C) 2012-2020 Dan Allen, Sarah White, Ryan Waldron, and the
+# individual contributors to Asciidoctor.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+
+require 'asciidoctor'
+require 'asciidoctor/extensions'
+require 'set'
+
+include Asciidoctor
+
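+# Block extension that turns an open block marked with [tabs] into a
+# <code-tabs> custom element. Each "tab:Name[options]" line starts a new
+# <code-tab>; the 'unsupported' option renders a placeholder message instead
+# of the tab content.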
+class TabsBlock < Asciidoctor::Extensions::BlockProcessor
+  use_dsl
+
+  named :tabs
+  on_context :open
+  parse_content_as :simple
+
+  def render_tab(parent, name, options, tab_content, use_xml)
+    if (options == 'unsupported')
+        content = Asciidoctor.convert "[source]\n----\nThis API is not presently available for #{name}."+( use_xml ? " You can use XML configuration." : "")+"\n----", parent: parent.document
+        return "<code-tab data-tab='#{name}' data-unavailable='true'>#{content}</code-tab>"
+    else 
+        if tab_content.empty?
+          warn "There is an empty tab (#{name}) on the " + parent.document.attributes['doctitle'] + " page: " + parent.document.attributes['docfile']
+         # File.write("log.txt", "There is an empty tab (#{name}) on the " + parent.document.attributes['doctitle'] + " page: " + parent.document.attributes['docfile'] + "\n", mode: "a")
+        end
+        content =  Asciidoctor.convert tab_content, parent: parent.document
+        return "<code-tab data-tab='#{name}'>#{content}</code-tab>"
+    end
+  end
+
+
+  def process parent, reader, attrs
+    lines = reader.lines
+
+    html = ''
+    tab_content = ''
+    name = ''
+    options = ''
+    tabs = Set.new
+    lines.each do |line| 
+       if (line =~ /^tab:.*\[.*\]/ ) 
+          # render the previous tab if there is one
+          unless name.empty?
+              html = html + render_tab(parent, name, options, tab_content, tabs.include?("XML"))
+          end
+
+          tab_content = ''; 
+          name = line[/tab:(.*)\[.*/,1] 
+          tabs << name
+          options = line[/tab:.*\[(.*)\]/,1] 
+       else
+          tab_content = tab_content + "\n" + line; 
+       end  
+    end 
+
+    unless name.empty?
+       html = html + render_tab(parent, name, options, tab_content, tabs.include?("XML"))
+    end
+
+
+    html = %(<code-tabs>#{html}</code-tabs>)
+
+    create_pass_block parent, html, attrs
+    
+  end
+end
+
+
+Asciidoctor::Extensions.register do
+  block TabsBlock
+end
+
+
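+# Inline macro that renders javadoc:org.example.SomeClass[optional text] as a
+# link to {javadoc_base_url}/org/example/SomeClass.html, opening in a new tab.
+# The link text defaults to the simple class name.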
+class JavadocUrlMacro < Extensions::InlineMacroProcessor
+  use_dsl
+
+  named :javadoc
+  name_positional_attributes 'text'
+
+  def process parent, target, attrs
+
+    parts = target.split('.')
+
+    if attrs['text'] == nil
+      text = parts.last();
+    else
+      text = attrs['text'] 
+    end
+
+    target = parent.document.attributes['javadoc_base_url'] + '/' + parts.join('/') + ".html" 
+    attrs.store('window', '_blank')
+
+    (create_anchor parent, text, type: :link, target: target, attributes: attrs).render
+  end
+end
+
+Asciidoctor::Extensions.register do
+  inline_macro JavadocUrlMacro  
+end
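+
+# Override of the built-in link: macro. Relative targets are prefixed with the
+# base_url document attribute; a trailing '^' or ',window=_blank' in the link
+# text makes the link open in a new tab.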
+Extensions.register do 
+ inline_macro do
+   named :link
+   parse_content_as :text
+
+   process do |parent, target, attrs|
+#     if(parent.document.attributes['latest'])
+#      base_url = parent.document.attributes['base_url'] + '/latest' 
+#     else
+#      base_url = parent.document.attributes['base_url'] + '/' + parent.document.attributes['version'] 
+#     end
+
+#    print parent.document.attributes
+    base_url = parent.document.attributes['base_url'] # + '/' + parent.document.attributes['version']
+   
+     if (text = attrs['text']).empty?
+       text = target
+     end
+
+     if text =~ /(\^|, *window=_?blank *)$/
+       text = text.sub(/\^$/,'')
+       text = text.sub(/, *window=_?blank *$/,'')
+       attrs.store('window', '_blank')
+     end
+
+     if target.start_with? 'http','ftp', '/', '#'
+     else 
+       target = base_url + '/' + %(#{target})
+     end
+
+     (create_anchor parent, text, type: :link, target: target, attributes: attrs).render
+   end
+ end
+end
+
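+# Tree processor that validates images after parsing: it warns when a referenced
+# image file does not exist and applies the image_width document attribute to
+# images that have no explicit width.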
+class ImageTreeProcessor < Extensions::Treeprocessor
+  def process document
+
+    image_width = (document.attr 'image_width', "")
+
+    imagedir = document.attributes['docdir']
+
+    # Scan for images.
+    (document.find_by context: :image).each do |img|
+
+       imagefile = imagedir + '/' + img.attributes['target']
+
+       if !File.file?(imagefile)
+          warn 'Image does not exist: ' + imagefile
+       end
+
+       if !(img.attributes['width'] || image_width.empty?)
+           img.attributes['width'] = image_width
+       end
+    end
+  end
+end
+
+Extensions.register do
+  treeprocessor ImageTreeProcessor 
+end
diff --git a/docs/_sass/callouts.scss b/docs/_sass/callouts.scss
new file mode 100644
index 0000000..2aad06f
--- /dev/null
+++ b/docs/_sass/callouts.scss
@@ -0,0 +1,75 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+aside {
+    border-left: 6px solid var(--callout-border);
+    background: var(--callout-background);
+    color: var(--callout-text);
+    margin-left: 0;
+    padding-right: 10px;
+	padding-left:20px;
+    position: relative;
+    display: flex;
+    margin-bottom: 16px;
+	
+	h3 {
+		font-weight: bold;
+		color:var(--callout-text);
+	}
+
+    &+aside {
+        margin-top: 1em;
+    }
+
+    &:before {
+       	font-size: 18px;
+        width: 78px;
+        flex: 0 0 auto;
+        align-self: center;
+        text-align: center;
+    }
+
+    .callout__margin-collapse-root {
+        margin-top: 16px;
+        margin-bottom: 16px;
+    }
+
+    &.note {
+        --callout-text: #723c81;
+        --callout-border: #723c81;
+        --callout-background: #f7f7f7;
+        --callout-icon: "\2B50";
+    }
+	
+    &.tip {
+        --callout-text: #af4e17;
+        --callout-border: #f18329;
+        --callout-background: #f7f7f7;
+        --callout-icon: "\2B50";
+    }
+
+    &.caution, &.important {
+        --callout-text: #65666a;
+        --callout-border: #e9502c;
+        --callout-background: #f7f7f7;
+        --callout-icon: "\01F449";
+    }
+
+    &.warning {
+        --callout-text: #df2226;
+        --callout-border: #df2226;
+        --callout-background: #f7f7f7;
+        --callout-icon: "\01F4CD";
+    }
+}
diff --git a/docs/_sass/code.scss b/docs/_sass/code.scss
new file mode 100644
index 0000000..d0e2eea
--- /dev/null
+++ b/docs/_sass/code.scss
@@ -0,0 +1,115 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+pre, pre.rouge {
+    padding: 8px 15px;
+    background: var(--block-code-background) !important;
+    border-radius: 5px;
+    border: 1px solid #e5e5e5;
+    overflow-x: auto;
+    // So code copy button doesn't overflow
+    min-height: 36px;
+	line-height: 18px;
+    color: #545454;
+}
+
+code {
+    color: #545454;
+}
+
+pre.rouge code {
+    background: none !important;
+}
+
+pre.rouge .tok-err {
+  	border: none !important;
+  }
+
+
+.highlight .err {
+  background: initial !important;
+  color: initial !important;
+}
+
+code-tabs.code-tabs__initialized {
+    display: block;
+    margin-bottom: 1.5em;
+
+    nav {
+        border-bottom: 1px solid #e0e0e0
+    }
+
+    nav button {
+        background: white;
+        color: inherit;
+        border: none;
+        padding: 0.7em 1em;
+        cursor: pointer;
+        transform: translateY(1px);
+		font-size: .9em;
+
+        &.active {
+            border-bottom: var(--orange-line-thickness) solid var(--link-color);
+        }
+		&.grey {
+		  color: grey;
+		}
+    }
+
+    code-tab:not([hidden]) {
+        display: block;
+    }
+}
+
+*:not(pre) > code {
+    background: var(--inline-code-background);
+    padding: 0.1em 0.5em;
+    background-clip: padding-box;
+    border-radius: 3px;
+    color: #545454;
+    font-size: 90%
+}
+
+// Required for copy button positioning
+.listingblock .content {
+    position: relative;
+}
+
+.copy-to-clipboard-button {
+    margin: 0;
+    padding: 0;
+    width: 36px;
+    height: 36px;
+    display: flex;
+    align-items: center;
+    justify-content: center;
+    background: none;
+    border: none;
+
+    position: absolute;
+    top: 0;
+    right: 0;
+    background: url('../images/copy-icon.svg') center center no-repeat;
+
+    &.copy-to-clipboard-button__success {
+        color: green;
+        background: none;
+        font-size: 20px;
+        font-weight: bold;
+    }
+
+    &:hover:not(.copy-to-clipboard-button__success) {
+        filter: var(--gg-orange-filter);
+    }
+}
diff --git a/docs/_sass/docs.scss b/docs/_sass/docs.scss
new file mode 100644
index 0000000..58dc0f2
--- /dev/null
+++ b/docs/_sass/docs.scss
@@ -0,0 +1,238 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+section.page-docs {
+    display: grid;
+    transition: grid-template-columns 0.15s;
+    grid-template-columns: auto 1fr auto;
+    grid-template-rows: 100%;
+    grid-template-areas: 'left-nav content right-nav';
+    line-height: 20px;
+    max-width: 1440px;
+    margin: auto;
+    width: 100%;
+
+    &>article {
+        // box-shadow: -1px 13px 20px 0 #696c70;
+        border-left: 1px solid #eeeeee;
+        background-color: #ffffff;
+        padding: 0 50px 30px;
+        grid-area: content;
+        overflow: hidden;
+        font-family: sans-serif;
+        font-size: 16px;
+        color: #545454;
+        line-height: 1.6em;
+
+        h1, h2, h3:not(.discrete), h4, h5, strong, th {
+            font-family: 'Open Sans';
+        }
+
+        li {
+            margin-bottom: 0.5em;
+
+            > p {
+                margin-top: 0;
+                margin-bottom: 0;
+            }
+        }
+
+        @media (max-width: 800px) {
+            padding-left: 15px;
+            padding-right: 15px
+        }
+    }
+
+    .edit-link {
+        position:relative;
+        top: 10px;
+        right:10px;
+        float: right;
+        padding-top: calc(var(--header-height) + var(--padding-top));
+        margin-top: calc((-1 * var(--header-height)));
+    }
+
+    h1, h2, h3:not(.discrete), h4, h5 {
+        margin-bottom: 0;
+
+        &[id] {
+            margin-top:  var(--margin-top);
+            margin-bottom: calc(var(--margin-top) * 0.5);
+            // padding-top: calc(var(--header-height) + var(--padding-top));
+            z-index: -1;
+        }
+    }
+
+    .toc > ul {
+        margin: 0;
+    }
+
+
+    .content > .pygments.highlight {
+        margin-top: 0px;
+    }
+
+    .title {
+        font-style: italic;
+    }
+
+    .checkmark:before {
+        content: '\f14a';
+        visibility: visible;
+        font-family: FontAwesome;
+        color: #00a100;
+    }
+    .checkmark {
+        visibility: hidden;
+    }
+
+    .stretch {width: 100%;}
+    h1[id] {
+        --margin-top: 1em;
+    }
+    h2[id] {
+        --margin-top: 1.2em;
+    }
+    .toc > h2 {
+        --margin-top: 1em;
+    }
+
+    h3[id] {
+        --margin-top: 1.2em;
+    }
+    h4[id] {
+        --margin-top: 0.5em;
+    }
+    h5[id] {
+        --margin-top: 1.67em;
+    }
+    .imageblock .content, .image {
+        text-align: center;
+        display: block;
+    }
+    .imageblock, .image {
+        img:not([width]):not([height]) {
+            width: auto;
+            height: auto;
+            max-width: 100%;
+            max-height: 450px;
+        }
+    }
+    strong {
+        color: #757575;
+    }
+
+    th.valign-top,td.valign-top {
+        vertical-align:top;
+    }
+
+    table {
+        margin: 16px 0;
+    }
+
+    table tr td {
+        hyphens: auto;
+    }
+
+    table thead,table tfoot {
+        background:#f7f8f7;
+        color: #757575;
+    }
+    table tr.even,table tr.alt{background:#f8f8f7}
+    table.stripes-all tr,table.stripes-odd tr:nth-of-type(odd),table.stripes-even tr:nth-of-type(even),table.stripes-hover tr:hover{background:#f8f8f7}
+
+}
+.copyright {
+    margin-top: 3em;
+    padding-top: 1em;
+    border-top: 1px solid #f0f0f0;
+    font-size: 0.9em;
+    line-height: 1.8em;
+    color: #757575;
+}
+
+body.hide-left-nav {
+    .left-nav {
+        display: none;
+    }
+}
+
+.left-nav {
+    top: 0;
+    bottom: 0;
+    position: -webkit-sticky;
+    position: sticky;
+}
+.left-nav {
+    max-height: calc(100vh );
+    grid-area: left-nav;
+}
+.right-nav {
+    grid-area: right-nav;
+}
+.left-nav__overlay {
+    display: none;
+    background: rgba(0, 0, 0, 0.50);
+    z-index: 1;
+    position: fixed;
+    top: var(--header-height);
+    bottom: 0;
+    left: 0;
+    right: 0;
+}
+@media (max-width: 990px) {
+    body:not(.hide-left-nav) {
+        .left-nav__overlay {
+            display: block;
+        }
+    }
+    nav.left-nav {
+        background: #fafafa;
+        grid-area: left-nav;
+        box-shadow: 0 4px 4px 0 rgba(0, 0, 0, 0.24), 0 0 4px 0 rgba(0, 0, 0, 0.12);
+        min-height: calc(100vh - var(--header-height));
+        max-height: calc(100vh - var(--header-height));
+        position: fixed;
+        bottom: 0;
+        top: var(--header-height);
+        z-index: 2;
+    }
+    section.page-docs > article {
+        grid-column-start: left-nav;
+        grid-column-end: content;
+        grid-row: content;
+    }
+}
+@media (max-width: 800px) {
+    nav.right-nav {
+        display: none;
+    }
+}
+
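+// Anchor-offset fix: gives every :target a transparent spacer equal to the
+// sticky header's height so in-page links don't land underneath the header.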
+:target:before {
+    content: "";
+    display: block;
+    margin-top: calc(var(--header-height) * -1);
+    height: var(--header-height);
+    width: 1px;
+}
+@media (min-width: 600px) and  (max-width: 900px) {
+    :target:before {
+        content: "";
+        display: block;
+        width: 1px;
+        margin-top: -150px;
+        height: 150px;
+    }
+}
diff --git a/docs/_sass/footer.scss b/docs/_sass/footer.scss
new file mode 100644
index 0000000..c8afea4
--- /dev/null
+++ b/docs/_sass/footer.scss
@@ -0,0 +1,48 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+body > footer {
+    border-top: 2px solid #dddddd;
+    height: var(--footer-height);
+    font-size: 16px;
+    color: #393939;
+    display: flex;
+    justify-content: space-between;
+    align-items: center;
+
+
+    @media (max-width: 570px) {
+        .copyright__extra {
+            display: none;
+        }
+    }
+}
+.right-nav footer {
+    font-size: 12px;
+    padding: calc(var(--footer-gap) * 0.3) 0 5px;
+    text-align: left;
+    margin: auto 0 0;
+
+    a {
+        margin: 0;
+    }
+
+    img {
+        width: 70px;
+    }
+
+    .copyright {
+        display: none;
+    }
+}
diff --git a/docs/_sass/github.scss b/docs/_sass/github.scss
new file mode 100644
index 0000000..069805c
--- /dev/null
+++ b/docs/_sass/github.scss
@@ -0,0 +1,223 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+.highlight table td { padding: 5px; }
+.highlight table pre { margin: 0; }
+.highlight .cm {
+  color: #777772;
+  font-style: italic;
+}
+.highlight .cp {
+  color: #797676;
+  font-weight: bold;
+}
+.highlight .c1 {
+  color: #777772;
+  font-style: italic;
+}
+.highlight .cs {
+  color: #797676;
+  font-weight: bold;
+  font-style: italic;
+}
+.highlight .c, .highlight .cd {
+  color: #777772;
+  font-style: italic;
+}
+.highlight .err {
+  color: #a61717;
+  background-color: #e3d2d2;
+}
+.highlight .gd {
+  color: #000000;
+  background-color: #ffdddd;
+}
+.highlight .ge {
+  color: #000000;
+  font-style: italic;
+}
+.highlight .gr {
+  color: #aa0000;
+}
+.highlight .gh {
+  color: #797676;
+}
+.highlight .gi {
+  color: #000000;
+  background-color: #ddffdd;
+}
+.highlight .go {
+  color: #888888;
+}
+.highlight .gp {
+  color: #555555;
+}
+.highlight .gs {
+  font-weight: bold;
+}
+.highlight .gu {
+  color: #aaaaaa;
+}
+.highlight .gt {
+  color: #aa0000;
+}
+.highlight .kc {
+  color: #000000;
+  font-weight: bold;
+}
+.highlight .kd {
+  color: #000000;
+  font-weight: bold;
+}
+.highlight .kn {
+  color: #000000;
+  font-weight: bold;
+}
+.highlight .kp {
+  color: #000000;
+  font-weight: bold;
+}
+.highlight .kr {
+  color: #000000;
+  font-weight: bold;
+}
+.highlight .kt {
+  color: #445588;
+  font-weight: bold;
+}
+.highlight .k, .highlight .kv {
+  color: #000000;
+  font-weight: bold;
+}
+.highlight .mf {
+  color: #009999;
+}
+.highlight .mh {
+  color: #009999;
+}
+.highlight .il {
+  color: #009999;
+}
+.highlight .mi {
+  color: #009999;
+}
+.highlight .mo {
+  color: #009999;
+}
+.highlight .m, .highlight .mb, .highlight .mx {
+  color: #009999;
+}
+.highlight .sb {
+  color: #d14;
+}
+.highlight .sc {
+  color: #d14;
+}
+.highlight .sd {
+  color: #d14;
+}
+.highlight .s2 {
+  color: #d14;
+}
+.highlight .se {
+  color: #d14;
+}
+.highlight .sh {
+  color: #d14;
+}
+.highlight .si {
+  color: #d14;
+}
+.highlight .sx {
+  color: #d14;
+}
+.highlight .sr {
+  color: #009926;
+}
+.highlight .s1 {
+  color: #d14;
+}
+.highlight .ss {
+  color: #990073;
+}
+.highlight .s {
+  color: #d14;
+}
+.highlight .na {
+  color: #008080;
+}
+.highlight .bp {
+  color: #797676;
+}
+.highlight .nb {
+  color: #0086B3;
+}
+.highlight .nc {
+  color: #445588;
+  font-weight: bold;
+}
+.highlight .no {
+  color: #008080;
+}
+.highlight .nd {
+  color: #3c5d5d;
+  font-weight: bold;
+}
+.highlight .ni {
+  color: #800080;
+}
+.highlight .ne {
+  color: #990000;
+  font-weight: bold;
+}
+.highlight .nf {
+  color: #990000;
+  font-weight: bold;
+}
+.highlight .nl {
+  color: #990000;
+  font-weight: bold;
+}
+.highlight .nn {
+  color: #555555;
+}
+.highlight .nt {
+  color: #000080;
+}
+.highlight .vc {
+  color: #008080;
+}
+.highlight .vg {
+  color: #008080;
+}
+.highlight .vi {
+  color: #008080;
+}
+.highlight .nv {
+  color: #008080;
+}
+.highlight .ow {
+  color: #000000;
+  font-weight: bold;
+}
+.highlight .o {
+  color: #000000;
+  font-weight: bold;
+}
+.highlight .w {
+  color: #bbbbbb;
+}
+.highlight {
+  background-color: #f8f8f8;
+}
diff --git a/docs/_sass/header.scss b/docs/_sass/header.scss
new file mode 100644
index 0000000..e45b349
--- /dev/null
+++ b/docs/_sass/header.scss
@@ -0,0 +1,374 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
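+// Sticky top bar: hamburger, logo, nav, search and the GitHub button are
+// placed on a single CSS-grid row via named template areas.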
+header {
+
+    min-height: var(--header-height);
+    background: white;
+    display: grid;
+    grid-template-columns: auto auto 1fr auto auto auto;
+    grid-template-areas: 'left-toggle home nav search gh gg';
+    grid-template-rows: 40px;
+    align-items: center;
+    justify-content: flex-start;
+    padding: 12px 20px;
+    box-shadow: 0 4px 4px 0 rgba(0, 0, 0, 0.24), 0 0 4px 0 rgba(0, 0, 0, 0.12);
+    z-index: 1;
+
+    a:hover, button:hover {
+        opacity: 0.85;
+    }
+
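+    // CSS-only dropdown: opens while the parent item is hovered, and on
+    // keyboard focus of the trigger link (adjacent-sibling selector).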
+    li:hover .dropdown, a:focus + .dropdown {
+        display: block;
+    }
+
+    .dropdown-arrow {
+        margin-left: 5px;
+        margin-bottom: 3px;
+
+        width: 8px;
+        height: 4px;
+    }
+
+    .dropdown {
+        display: none;
+        position: fixed;
+        top: calc(var(--header-height) - 12px);
+        width: max-content;
+        background: white;
+        box-shadow: 0 4px 4px 0 rgba(0, 0, 0, 0.24), 0 0 4px 0 rgba(0, 0, 0, 0.12);
+        border-radius: 4px;
+        padding-top: 10px;
+        padding-bottom: 12px;
+        z-index: 2;
+
+        li {
+            display: flex;
+        }
+
+        a {
+            color: grey !important;
+            font-size: 16px;
+            padding-top: 5px;
+            padding-bottom: 4px;
+            &:hover {
+                color: var(--gg-orange) !important;
+            }
+        }
+    }
+
+    .menu {
+        border: none;
+        background: none;
+        width: 40px;
+        height: 40px;
+        margin-right: 12px;
+        cursor: pointer;
+        grid-area: left-toggle;
+
+        img {
+            width: 18px;
+            height: 12px;
+        }
+    }
+
+    .search-toggle, .top-nav-toggle, .github, .search-close {
+        background: none;
+        border: none;
+        padding: 0;
+        width: 36px;
+        height: 36px;
+        display: inline-flex;
+        align-items: center;
+        justify-content: center;
+        color: var(--gg-dark-gray);
+        font-size: 26px;
+    }
+    .search-toggle {
+        grid-area: search;
+    }
+    .top-nav-toggle {
+        grid-area: top-toggle;
+    }
+
+    .home {
+        grid-area: home;
+        margin-right: auto;
+        img {
+            height: 36px;
+        }
+    }
+
+    .search {
+        margin-left: auto;
+        margin-right: 20px;
+        grid-area: search;
+
+        input[type='search'] {
+            color: var(--gg-dark-gray);
+            background: rgba(255, 255, 255, 0.8);
+            border: 1px solid #cccccc;
+            padding: 10px 15px;
+            font-family: inherit;
+            max-width: 148px;
+            height: 37px;
+            font-size: 14px;
+            -webkit-appearance: unset;
+            appearance: unset;
+
+            &[disabled] {
+                opacity: 0.5;
+                cursor: not-allowed;
+            }
+        }
+
+    }
+
+    & > nav {
+        grid-area: nav;
+        font-size: 18px;
+        display: flex;
+        flex-direction: row;
+        margin: 0 20px;
+
+        li {
+            list-style: none;
+            margin-right: 0.5em;
+            display: flex;
+        }
+
+        a {
+            padding: 9px 14px;
+            color: var(--gg-dark-gray) !important;
+            text-decoration: none;
+            white-space: nowrap;
+
+            &.active {
+                border-radius: 3px;
+                background-color: #f0f0f0;
+            }
+        }
+    }
+
+    .github {
+        grid-area: gh;
+    }
+
+
+    .search-close {
+        margin-right: 10px;
+    }
+
+
+    @media (max-width: 900px) {
+        // Seven named columns below, so a seventh track is needed here.
+        grid-template-columns: auto auto 1fr auto auto auto auto;
+        grid-template-areas:
+            'left-toggle home spacer top-toggle search gh gg'
+            'nav         nav  nav    nav        nav    nav nav';
+
+        nav {
+            justify-content: center;
+            margin: 20px 0 10px;
+        }
+
+        & > nav > li {
+            position: relative;
+        }
+
+        .dropdown {
+            top: calc(var(--header-height) + 25px);
+        }
+    }
+
+    @media (max-width: 600px) {
+        .search {
+            margin-right: 5px;
+            input[type='search'] {
+                max-width: 110px;
+            }
+        }
+    }
+
+    &:not(.narrow-header) {
+        .search-toggle, .top-nav-toggle, .search-close {
+            display: none;
+        }
+    }
+    &.narrow-header {
+        a.home {
+            top: 0;
+        }
+        &:not(.show-nav) {
+            nav {
+                display: none;
+            }
+        }
+        &.show-search {
+            .search-toggle, .home, .top-nav-toggle, .github, .menu {
+                display: none;
+            }
+            .search {
+                grid-column-start: home;
+                grid-column-end: gh; // the named area is 'gh'; 'github' matches no grid line
+                width: 100%;
+                display: flex;
+
+                input {
+                    max-width: initial;
+                    width: 100%;
+                }
+            }
+        }
+        &:not(.show-search) {
+            .search {
+                display: none;
+            }
+        }
+        nav {
+            flex-direction: column;
+            justify-content: stretch;
+
+            li {
+                display: flex;
+            }
+
+            a {
+                width: 100%;
+            }
+        }
+    }
+}
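+// Swiftype site-search: styling for the third-party autocomplete dropdown.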
+.swiftype-widget {
+
+    .autocomplete {
+        background-color: white;
+        display: block;
+        list-style-type: none;
+        margin: 0;
+        padding: 0;
+        box-shadow: 0px 0px 10px 1px rgba(0, 0, 0, 0.37);
+        position: absolute;
+        border-radius: 3px;
+        text-align: left;
+        right: 75px !important;
+        min-width: 350px;
+
+        ul {
+
+            background-color: white;
+            display: block;
+            list-style-type: none;
+            margin: 0;
+            padding: 0;
+            border-radius: 3px;
+            text-align: left;
+            max-height: 70vh;
+            overflow: auto;
+
+            li {
+                border-top: 1px solid #e5e5e5;
+                border-bottom: 1px solid #fff;
+                cursor: pointer;
+                padding: 10px 8px;
+                font-size: 13px;
+                list-style-type: none;
+                background-image: none;
+                margin: 0;
+              }
+
+            li.active {
+                border-top: 1px solid #cccccc;
+                border-bottom: 1px solid #cccccc;
+                background-color: #f0f0f0;
+            }
+
+            p {
+                font-size: 13px;
+                line-height: 16px;
+                margin: 0;
+                padding: 0;
+
+                &.url {
+                    font-size: 11px;
+                    color: #999;
+                }
+            }
+
+            a {
+                font-size: 15px;
+            }
+            em {
+                font-weight: bold;
+            }
+        }
+    }
+}
+section.hero {
+    background-image: url(../images/dev-internal-bg.jpg);
+    background-position: center;
+    background-position-x: left;
+    background-repeat: no-repeat;
+    background-size: cover;
+    display: grid;
+    grid-template-columns: 1fr auto;
+    grid-template-areas: 'title versions';
+    grid-template-rows: 60px;
+    align-items: center;
+    padding: 5px 30px;
+    flex: unset;
+
+    .title {
+        color: #f3f3f3;
+        text-transform: uppercase;
+        font-size: 22px;
+    }
+
+    select {
+        border-radius: 3px;
+        color: #333333;
+        line-height: 24px;
+        padding: 5px 10px;
+        white-space: nowrap;
+        font-size: 14px;
+        background: #f0f0f0 url("/assets/images/arrow-down.svg") no-repeat center right 5px;
+    }
+}
+
+@media (max-width: 450px) {
+    section.hero {
+        grid-template-rows: auto;
+        padding: 15px;
+
+        .title {
+            font-size: 18px;
+        }
+    }
+
+}
diff --git a/docs/_sass/layout.scss b/docs/_sass/layout.scss
new file mode 100644
index 0000000..cd0c288
--- /dev/null
+++ b/docs/_sass/layout.scss
@@ -0,0 +1,45 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+body {
+    --header-height: 64px;
+    --footer-height: 104px;
+    --footer-gap: 60px;
+
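+    // At mid-size viewports the header nav wraps to a second row, so the
+    // height variable (used for sticky offsets) grows to match.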
+    @media (min-width: 451px) and (max-width: 850px) {
+        --header-height: 111px;
+    }
+
+    padding: 0;
+    margin: 0;
+    display: flex;
+    flex-direction: column;
+    min-height: 100vh;
+
+    & > section {
+        flex: 1;
+    }
+}
+header {
+    position: -webkit-sticky;
+    position: sticky;
+    top: 0;
+    z-index: 2;
+}
+body > footer {
+    margin: var(--footer-gap) 30px 0;
+}
+* {
+    box-sizing: border-box;
+}
diff --git a/docs/_sass/left-nav.scss b/docs/_sass/left-nav.scss
new file mode 100644
index 0000000..0fd55fd
--- /dev/null
+++ b/docs/_sass/left-nav.scss
@@ -0,0 +1,109 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+.left-nav {
+    padding: 10px 20px;
+    width: 289px;
+    overflow-y: auto;
+/*    height: calc(100vh - var(--header-height)); */
+    font-family: 'Open Sans';
+    padding-top: var(--padding-top);
+
+    li {
+        list-style: none;
+    }
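+    // Buttons (the collapsible-group toggles) are stripped of native chrome
+    // so they render exactly like the plain links beside them.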
+    a, button {
+        text-decoration: none;
+        color: #757575;
+        font-size: 16px;
+        display: inline-flex;
+        width: 100%;
+        margin: 2px 0;
+        padding: 0.25em 0.375em;
+        background: none;
+        border: none;
+        cursor: pointer;
+        font: inherit;
+        text-align: left;
+
+        &.active, &:hover {
+            color: var(--link-color);
+        }
+    }
+
+    *:focus {
+        outline: none;
+    }
+
+    .nav-group {
+        margin-left: 6px;
+        font-size: 14px;
+    }
+
+    nav {
+        border-left: 2px solid #dddddd;
+//        margin-top: 5px;
+        margin-bottom: 5px;
+
+        &.collapsed {
+            display: none;
+        }
+    }
+
+    nav > li > a, nav > li > button {
+        padding-left: 20px;
+        text-align: left;
+
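+        // Trade padding for border width so the label doesn't shift
+        // sideways when the orange active bar appears.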
+        &.active {
+            border-left: var(--orange-line-thickness) solid var(--active-color);
+            padding-left: calc(20px - var(--orange-line-thickness));
+        }
+    }
+
+    nav.sub_pages {
+        border: none;
+    }
+
+    nav.sub_pages a, nav.sub_pages button {
+        padding-left: 32px;
+
+        &.active {
+            padding-left: calc(32px - var(--orange-line-thickness));
+        }
+    }
+
+    .parent {
+        color: #393939;
+    }
+
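+    // CSS filter chains re-tint the chevron icon (grey at rest, brand orange
+    // on hover/current) so a single SVG asset serves both states.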
+    .state-indicator {
+        margin-left: auto;
+        margin-top: 5px;
+        width: 6.2px;
+        height: 10px;
+        flex: 0 0 auto;
+        transition: transform 0.1s;
+        filter: invert(49%) sepia(4%) saturate(5%) hue-rotate(23deg) brightness(92%) contrast(90%);
+    }
+
+    button:hover .state-indicator,
+    button.current .state-indicator {
+        filter: invert(47%) sepia(61%) saturate(1950%) hue-rotate(345deg) brightness(100%) contrast(95%);
+    }
+
+    button.expanded .state-indicator {
+        transform: rotate(90deg);
+    }
+}
diff --git a/docs/_sass/right-nav.scss b/docs/_sass/right-nav.scss
new file mode 100644
index 0000000..68589c0
--- /dev/null
+++ b/docs/_sass/right-nav.scss
@@ -0,0 +1,73 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+.right-nav {
+    width: 289px;
+    padding: 12px 26px;
+    overflow-y: auto;
+    height: calc(100vh - var(--header-height));
+    top: 0;
+    position: -webkit-sticky;
+    position: sticky;
+    display: flex;
+    flex-direction: column;
+    font-family: 'Open Sans';
+    padding-top: var(--padding-top);
+
+    h6 {
+        margin: 12px 0;
+        font-size: 16px;
+        font-weight: normal;
+    }
+
+    ul {
+        list-style: none;
+        padding: 0;
+        margin: 0;
+        // margin-bottom: auto;
+    }
+
+    li {
+        padding: 0;
+    }
+
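+    // --border-width participates in the padding calc so that adding the
+    // active border doesn't shift the link text sideways.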
+    a {
+        --border-width: 0px;
+        font-size: 14px;
+        color: #757575;
+        padding-left: calc(15px * var(--nesting-level) + 8px - var(--border-width));
+        margin: 0.3em 0;
+        display: inline-block;
+
+        &:hover {
+          color: var(--link-color);
+        }
+
+        &.active {
+            --border-width: var(--orange-line-thickness);
+            border-left: var(--border-width) solid var(--link-color);
+            color: #393939;
+        }
+    }
+
+    .sectlevel1 {
+        border-left: 2px solid #dddddd;
+    }
+
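+    // Emits .sectlevel1 through .sectlevel6, assigning each TOC depth a
+    // --nesting-level of 0..5 for the padding-left calc() above.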
+    @for $i from 1 through 6 {
+        .sectlevel#{$i} {
+            --nesting-level: #{$i - 1};
+        }
+    }
+}
diff --git a/docs/_sass/rouge-base16-solarized.scss b/docs/_sass/rouge-base16-solarized.scss
new file mode 100644
index 0000000..10b1891
--- /dev/null
+++ b/docs/_sass/rouge-base16-solarized.scss
@@ -0,0 +1,99 @@
+//# MIT license.  See http://www.opensource.org/licenses/mit-license.php
+//
+//Copyright (c) 2012 Jeanine Adkisson.
+//
+//Permission is hereby granted, free of charge, to any person obtaining a copy
+//of this software and associated documentation files (the "Software"), to deal
+//in the Software without restriction, including without limitation the rights
+//to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+//copies of the Software, and to permit persons to whom the Software is
+//furnished to do so, subject to the following conditions:
+//
+//The above copyright notice and this permission notice shall be included in
+//all copies or substantial portions of the Software.
+//
+//THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+//IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+//FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+//AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+//LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+//OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+//THE SOFTWARE.
+
+.highlight table td { padding: 5px; }
+.highlight table pre { margin: 0; }
+.highlight, .highlight .w {
+  color: #586e75;
+}
+.highlight .err {
+  color: #002b36;
+  background-color: #dc322f;
+}
+.highlight .c, .highlight .cd, .highlight .cm, .highlight .c1, .highlight .cs {
+  color: #657b83;
+}
+.highlight .cp {
+  color: #b58900;
+}
+.highlight .nt {
+  color: #b58900;
+}
+.highlight .o, .highlight .ow {
+  color: #93a1a1;
+}
+.highlight .p, .highlight .pi {
+  color: #93a1a1;
+}
+.highlight .gi {
+  color: #859900;
+}
+.highlight .gd {
+  color: #dc322f;
+}
+.highlight .gh {
+  color: #268bd2;
+  background-color: #002b36;
+  font-weight: bold;
+}
+.highlight .k, .highlight .kn, .highlight .kp, .highlight .kr, .highlight .kv {
+  color: #6c71c4;
+}
+.highlight .kc {
+  color: #cb4b16;
+}
+.highlight .kt {
+  color: #cb4b16;
+}
+.highlight .kd {
+  color: #cb4b16;
+}
+.highlight .s, .highlight .sb, .highlight .sc, .highlight .sd, .highlight .s2, .highlight .sh, .highlight .sx, .highlight .s1 {
+  color: #859900;
+}
+.highlight .sr {
+  color: #2aa198;
+}
+.highlight .si {
+  color: #d33682;
+}
+.highlight .se {
+  color: #d33682;
+}
+.highlight .nn {
+  color: #b58900;
+}
+.highlight .nc {
+  color: #b58900;
+}
+.highlight .no {
+  color: #b58900;
+}
+.highlight .na {
+  color: #268bd2;
+}
+.highlight .m, .highlight .mf, .highlight .mh, .highlight .mi, .highlight .il, .highlight .mo, .highlight .mb, .highlight .mx {
+  color: #859900;
+}
+.highlight .ss {
+  color: #859900;
+}
diff --git a/docs/_sass/text.scss b/docs/_sass/text.scss
new file mode 100644
index 0000000..711ba7d
--- /dev/null
+++ b/docs/_sass/text.scss
@@ -0,0 +1,62 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+body {
+    font-family: 'Open Sans', sans-serif;
+}
+
+h1, h2, h3, h4 {
+    color: #000;
+    font-weight: normal;
+    font-family: 'Open Sans';
+}
+
+h1 {
+    font-size: 36px;
+    line-height: 40px;
+}
+
+a {
+    text-decoration: none;
+    color: var(--link-color);
+}
+
+
+section {
+    color: #545454;
+}
+
+table {
+    border-collapse: collapse;
+
+    td, th {
+        text-align: left;
+        padding: 5px 10px;
+        border-bottom: 1px solid hsl(0, 0%, 85%);
+        border-top: 1px solid hsl(0, 0%, 85%);
+    }
+
+    td p.tableblock {
+        margin-top: 0.5em;
+        margin-bottom: 0.5em;
+
+        &:first-child {
+            margin-top: 0.125em;
+        }
+
+        &:last-child {
+            margin-bottom: 0.125em;
+        }
+    }
+}
diff --git a/docs/_sass/variables.scss b/docs/_sass/variables.scss
new file mode 100644
index 0000000..9b63c5b
--- /dev/null
+++ b/docs/_sass/variables.scss
@@ -0,0 +1,33 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+:root {
+    --gg-red: #ec1c24;
+    --gg-orange: #ec1c24;
+    --gg-orange-dark: #bc440b;
+    --gg-orange-filter: invert(47%) sepia(61%) saturate(1950%) hue-rotate(345deg) brightness(100%) contrast(95%);
+    --gg-dark-gray: #333333;
+    --orange-line-thickness: 3px;
+    --block-code-background: rgba(241, 241, 241, 20%);
+    --inline-code-background: rgba(241, 241, 241, 90%);
+    --padding-top: 25px;
+    --link-color: #ec1c24;
+}
+
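+// NOTE: this @font-face rule declares no src, so by itself it loads nothing;
+// the 'Open Sans' files are presumably supplied by an external stylesheet.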
+@font-face {
+    font-family: 'Open Sans';
+    font-weight: 300;
+    font-display: swap;
+    font-style: normal;
+}
diff --git a/docs/assets/css/asciidoc-pygments.css b/docs/assets/css/asciidoc-pygments.css
new file mode 100644
index 0000000..6de1084
--- /dev/null
+++ b/docs/assets/css/asciidoc-pygments.css
@@ -0,0 +1,59 @@
+/*
+ * Copyright (C) 2013-2018 Dan Allen, Paul Rayner, and the Asciidoctor Project
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+pre.pygments .hll { background-color: #ffffcc }
+pre.pygments, pre.pygments code { background: #ffffff; }
+pre.pygments .tok-c { color: #008000 } /* Comment */
+pre.pygments .tok-err { border: 1px solid #FF0000 } /* Error */
+pre.pygments .tok-k { color: #0000ff } /* Keyword */
+pre.pygments .tok-ch { color: #008000 } /* Comment.Hashbang */
+pre.pygments .tok-cm { color: #008000 } /* Comment.Multiline */
+pre.pygments .tok-cp { color: #0000ff } /* Comment.Preproc */
+pre.pygments .tok-cpf { color: #008000 } /* Comment.PreprocFile */
+pre.pygments .tok-c1 { color: #008000 } /* Comment.Single */
+pre.pygments .tok-cs { color: #008000 } /* Comment.Special */
+pre.pygments .tok-ge { font-style: italic } /* Generic.Emph */
+pre.pygments .tok-gh { font-weight: bold } /* Generic.Heading */
+pre.pygments .tok-gp { font-weight: bold } /* Generic.Prompt */
+pre.pygments .tok-gs { font-weight: bold } /* Generic.Strong */
+pre.pygments .tok-gu { font-weight: bold } /* Generic.Subheading */
+pre.pygments .tok-kc { color: #0000ff } /* Keyword.Constant */
+pre.pygments .tok-kd { color: #0000ff } /* Keyword.Declaration */
+pre.pygments .tok-kn { color: #0000ff } /* Keyword.Namespace */
+pre.pygments .tok-kp { color: #0000ff } /* Keyword.Pseudo */
+pre.pygments .tok-kr { color: #0000ff } /* Keyword.Reserved */
+pre.pygments .tok-kt { color: #2b91af } /* Keyword.Type */
+pre.pygments .tok-s { color: #a31515 } /* Literal.String */
+pre.pygments .tok-nc { color: #2b91af } /* Name.Class */
+pre.pygments .tok-ow { color: #0000ff } /* Operator.Word */
+pre.pygments .tok-sa { color: #a31515 } /* Literal.String.Affix */
+pre.pygments .tok-sb { color: #a31515 } /* Literal.String.Backtick */
+pre.pygments .tok-sc { color: #a31515 } /* Literal.String.Char */
+pre.pygments .tok-dl { color: #a31515 } /* Literal.String.Delimiter */
+pre.pygments .tok-sd { color: #a31515 } /* Literal.String.Doc */
+pre.pygments .tok-s2 { color: #a31515 } /* Literal.String.Double */
+pre.pygments .tok-se { color: #a31515 } /* Literal.String.Escape */
+pre.pygments .tok-sh { color: #a31515 } /* Literal.String.Heredoc */
+pre.pygments .tok-si { color: #a31515 } /* Literal.String.Interpol */
+pre.pygments .tok-sx { color: #a31515 } /* Literal.String.Other */
+pre.pygments .tok-sr { color: #a31515 } /* Literal.String.Regex */
+pre.pygments .tok-s1 { color: #a31515 } /* Literal.String.Single */
+pre.pygments .tok-ss { color: #a31515 } /* Literal.String.Symbol */
diff --git a/docs/assets/css/docs.scss b/docs/assets/css/docs.scss
new file mode 100644
index 0000000..2e05a98
--- /dev/null
+++ b/docs/assets/css/docs.scss
@@ -0,0 +1,21 @@
+---
+---
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+// Docs-specific global styles
+body {
+}
diff --git a/docs/assets/css/styles.scss b/docs/assets/css/styles.scss
new file mode 100644
index 0000000..f23e704
--- /dev/null
+++ b/docs/assets/css/styles.scss
@@ -0,0 +1,30 @@
+---
+---
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+@import "variables";
+@import "header";
+@import "code";
+@import "rouge-base16-solarized";
+@import "text";
+@import "callouts";
+@import "layout";
+@import "left-nav";
+@import "right-nav";
+@import "footer";
+
+@import "docs";
diff --git a/docs/assets/images/apple-blob.svg b/docs/assets/images/apple-blob.svg
new file mode 100644
index 0000000..308bf94
--- /dev/null
+++ b/docs/assets/images/apple-blob.svg
@@ -0,0 +1,16 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="111" height="274" viewBox="0 0 111 274">
+    <defs>
+        <linearGradient id="a" x1="50%" x2="50%" y1="0%" y2="100%">
+            <stop offset="0%" stop-color="#FCA44F"/>
+            <stop offset="100%" stop-color="#F86B27"/>
+        </linearGradient>
+        <linearGradient id="b" x1="50%" x2="50%" y1="0%" y2="100%">
+            <stop offset="0%" stop-color="#F1474E"/>
+            <stop offset="100%" stop-color="#DF2226"/>
+        </linearGradient>
+    </defs>
+    <g fill="none" fill-rule="evenodd" transform="translate(-13.406 3.425)">
+        <path fill="url(#a)" d="M120.697 3.889L142.52 48.75c2.899 5.96.418 13.141-5.542 16.04a12 12 0 0 1-5.25 1.209H88.315c-6.628 0-12-5.373-12-12a12 12 0 0 1 1.187-5.204L99.093 3.934c2.875-5.972 10.046-8.483 16.017-5.609a12 12 0 0 1 5.587 5.564z" transform="rotate(-7 110 35)"/>
+        <rect width="195" height="196" x="27" y="60.979" fill="url(#b)" opacity=".9" rx="50" transform="rotate(-19 124.5 158.979)"/>
+    </g>
+</svg>
diff --git a/docs/assets/images/arrow-down-white.svg b/docs/assets/images/arrow-down-white.svg
new file mode 100644
index 0000000..12a5613
--- /dev/null
+++ b/docs/assets/images/arrow-down-white.svg
@@ -0,0 +1,3 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="8" height="4" viewBox="0 0 8 4">
+    <path fill="#f3f3f3" fill-rule="nonzero" d="M0 0l4 4 4-4z"/>
+</svg>
diff --git a/docs/assets/images/arrow-down.svg b/docs/assets/images/arrow-down.svg
new file mode 100644
index 0000000..170a167
--- /dev/null
+++ b/docs/assets/images/arrow-down.svg
@@ -0,0 +1,3 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="8" height="4" viewBox="0 0 8 4">
+    <path fill="#333" fill-rule="nonzero" d="M0 0l4 4 4-4z"/>
+</svg>
diff --git a/docs/assets/images/background-lines.svg b/docs/assets/images/background-lines.svg
new file mode 100644
index 0000000..50524eb
--- /dev/null
+++ b/docs/assets/images/background-lines.svg
@@ -0,0 +1,54 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="1440" height="998" viewBox="0 0 1440 998">
+    <g fill="none" fill-rule="evenodd" stroke="#979797" opacity=".06">
+        <path d="M1087.77 102.125C914.927 152.38 790.92 262.896 715.745 433.67 602.985 689.833 378.618 770.826 265.85 780.258c-112.768 9.433-535.598 120.267-668.67 272.941-88.715 101.783-151.124 182.837-187.229 243.16"/>
+        <path d="M1106.995 96.612C934.152 146.868 810.144 257.383 734.97 428.158 622.21 684.32 397.843 765.313 285.075 774.746c-112.769 9.432-535.599 120.266-668.67 272.94-88.716 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1126.22 91.1c-172.843 50.255-296.85 160.77-372.024 331.545C641.436 678.807 417.068 759.8 304.3 769.233c-112.769 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1145.446 85.586C972.603 135.842 848.594 246.358 773.42 417.132 660.66 673.294 436.294 754.288 323.525 763.72c-112.768 9.432-535.598 120.266-668.67 272.941-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1164.671 80.074C991.828 130.33 867.82 240.845 792.646 411.62 679.886 667.782 455.519 748.775 342.75 758.207c-112.768 9.433-535.598 120.267-668.67 272.941-88.715 101.783-151.124 182.837-187.229 243.16"/>
+        <path d="M1183.896 74.56c-172.843 50.257-296.851 160.772-372.025 331.547-112.76 256.162-337.127 337.155-449.895 346.588-112.769 9.432-535.599 120.266-668.67 272.94-88.716 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1203.122 69.048C1030.278 119.304 906.27 229.82 831.096 400.594 718.336 656.756 493.97 737.75 381.201 747.182c-112.769 9.432-535.599 120.266-668.67 272.94-88.715 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1222.347 63.535c-172.843 50.256-296.852 160.772-372.025 331.546-112.76 256.162-337.127 337.156-449.896 346.588-112.769 9.433-535.598 120.266-668.67 272.941-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1241.572 58.023C1068.73 108.279 944.72 218.794 869.547 389.569 756.787 645.73 532.42 726.724 419.65 736.156c-112.768 9.433-535.598 120.267-668.67 272.941-88.715 101.783-151.125 182.837-187.229 243.16"/>
+        <path d="M1260.797 52.51c-172.843 50.256-296.851 160.771-372.025 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.124 182.837-187.229 243.161"/>
+        <path d="M1280.023 46.997C1107.179 97.253 983.17 207.77 907.997 378.543 795.237 634.705 570.87 715.698 458.102 725.131c-112.769 9.432-535.599 120.266-668.67 272.94-88.715 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1299.248 41.484C1126.405 91.74 1002.396 202.256 927.223 373.03c-112.76 256.162-337.128 337.156-449.896 346.588-112.769 9.433-535.598 120.266-668.67 272.941-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1318.473 35.972c-172.843 50.256-296.852 160.771-372.025 331.546C833.688 623.68 609.32 704.673 496.552 714.105c-112.768 9.433-535.598 120.267-668.67 272.941-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1337.698 30.459c-172.843 50.256-296.851 160.771-372.025 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.124 182.837-187.229 243.161"/>
+        <path d="M1356.923 24.946c-172.843 50.256-296.851 160.772-372.025 331.546-112.76 256.162-337.127 337.155-449.895 346.588-112.769 9.432-535.599 120.266-668.67 272.94-88.716 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1376.149 19.433c-172.843 50.256-296.852 160.772-372.025 331.546-112.76 256.162-337.128 337.156-449.896 346.588C441.459 707 18.63 817.833-114.442 970.508c-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1395.374 13.92c-172.843 50.257-296.852 160.772-372.025 331.547-112.76 256.162-337.127 337.155-449.896 346.587-112.768 9.433-535.598 120.267-668.67 272.941-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1414.6 8.408c-172.844 50.256-296.852 160.771-372.026 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.124 182.837-187.229 243.161"/>
+        <path d="M1433.824 2.895c-172.843 50.256-296.851 160.772-372.025 331.546C949.04 590.603 724.672 671.596 611.904 681.03c-112.769 9.432-535.599 120.266-668.67 272.94-88.716 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1453.05-2.618c-172.844 50.256-296.852 160.772-372.025 331.546-112.76 256.162-337.128 337.156-449.896 346.588C518.36 684.95 95.53 795.782-37.541 948.457c-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1472.275-8.13c-172.843 50.256-296.852 160.771-372.025 331.546C987.49 579.578 763.123 660.57 650.354 670.003c-112.768 9.433-535.598 120.267-668.67 272.941-88.715 101.784-151.125 182.837-187.23 243.16"/>
+        <path d="M1491.5-13.643c-172.843 50.256-296.851 160.771-372.025 331.546-112.76 256.162-337.127 337.155-449.896 346.588C556.811 673.923 133.981 784.757.91 937.43c-88.715 101.784-151.125 182.837-187.229 243.161"/>
+        <path d="M1510.725-19.156C1337.882 31.1 1213.874 141.616 1138.7 312.39c-112.76 256.162-337.127 337.155-449.895 346.588-112.769 9.432-535.599 120.266-668.67 272.94-88.716 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1529.95-24.669c-172.843 50.256-296.851 160.772-372.025 331.546-112.76 256.162-337.127 337.156-449.895 346.588C595.26 662.898 172.43 773.731 39.36 926.406c-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1549.176-30.181c-172.843 50.256-296.852 160.771-372.025 331.546-112.76 256.162-337.128 337.155-449.896 346.587-112.769 9.433-535.598 120.267-668.67 272.941-88.715 101.784-151.125 182.837-187.23 243.16"/>
+        <path d="M1568.401-35.694c-172.843 50.256-296.852 160.771-372.025 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.837-187.229 243.161"/>
+        <path d="M1587.626-41.207C1414.783 9.05 1290.775 119.565 1215.601 290.34c-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94C8.32 1011.652-54.089 1092.705-90.194 1153.029"/>
+        <path d="M1606.852-46.72C1434.008 3.536 1310 114.052 1234.826 284.826 1122.066 540.988 897.7 621.982 784.931 631.414c-112.769 9.433-535.599 120.266-668.67 272.941-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1626.077-52.232c-172.843 50.256-296.852 160.771-372.025 331.546-112.76 256.162-337.128 337.155-449.896 346.587-112.769 9.433-535.598 120.267-668.67 272.941-88.715 101.784-151.125 182.837-187.23 243.16"/>
+        <path d="M1645.302-57.745C1472.459-7.49 1348.45 103.026 1273.277 273.8 1160.517 529.963 936.15 610.956 823.38 620.389c-112.768 9.432-535.598 120.266-668.67 272.94C65.996 995.114 3.586 1076.167-32.52 1136.49"/>
+        <path d="M1664.527-63.258c-172.843 50.256-296.851 160.772-372.025 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94C85.221 989.6 22.812 1070.655-13.293 1130.978"/>
+        <path d="M1683.752-68.77C1510.91-18.516 1386.901 92 1311.727 262.774 1198.967 518.937 974.6 599.931 861.832 609.363c-112.769 9.433-535.599 120.266-668.67 272.941-88.716 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1702.978-74.283c-172.844 50.256-296.852 160.771-372.025 331.546-112.76 256.162-337.128 337.155-449.896 346.587-112.769 9.433-535.598 120.267-668.67 272.941-88.715 101.784-151.125 182.837-187.23 243.16"/>
+        <path d="M1722.203-79.796C1549.36-29.54 1425.35 80.975 1350.178 251.75c-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1741.428-85.309c-172.843 50.256-296.851 160.772-372.025 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.124 182.838-187.229 243.161"/>
+        <path d="M1760.653-90.822C1587.81-40.566 1463.802 69.95 1388.628 240.724c-112.76 256.162-337.127 337.156-449.895 346.588-112.769 9.433-535.599 120.266-668.67 272.941-88.716 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1779.879-96.334c-172.844 50.256-296.852 160.771-372.026 331.546-112.76 256.162-337.127 337.155-449.895 346.587-112.769 9.433-535.598 120.267-668.67 272.941-88.715 101.784-151.125 182.837-187.23 243.16"/>
+        <path d="M1799.104-101.847c-172.843 50.256-296.852 160.771-372.025 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.769 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1818.33-107.36c-172.844 50.256-296.852 160.772-372.026 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.838-187.229 243.161"/>
+        <path d="M1837.554-112.873C1664.711-62.616 1540.703 47.9 1465.53 218.674c-112.76 256.161-337.127 337.155-449.896 346.587-112.768 9.433-535.598 120.266-668.67 272.941-88.715 101.783-151.124 182.837-187.229 243.16"/>
+        <path d="M1856.78-118.385C1683.936-68.13 1559.928 42.386 1484.754 213.16c-112.76 256.162-337.127 337.155-449.895 346.587-112.769 9.433-535.599 120.267-668.67 272.941-88.715 101.784-151.125 182.837-187.23 243.16"/>
+        <path d="M1876.005-123.898c-172.843 50.256-296.852 160.771-372.025 331.546-112.76 256.162-337.128 337.155-449.896 346.588-112.769 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1895.23-129.41c-172.843 50.255-296.852 160.77-372.025 331.545-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.838-187.23 243.161"/>
+        <path d="M1914.455-134.924c-172.843 50.257-296.851 160.772-372.025 331.547-112.76 256.161-337.127 337.155-449.896 346.587-112.768 9.433-535.598 120.266-668.67 272.941-88.715 101.783-151.124 182.837-187.229 243.16"/>
+        <path d="M1933.68-140.436C1760.838-90.18 1636.83 20.335 1561.656 191.11c-112.76 256.162-337.127 337.155-449.895 346.587C998.99 547.13 576.16 657.964 443.09 810.638c-88.716 101.784-151.125 182.837-187.23 243.16"/>
+        <path d="M1952.906-145.949c-172.843 50.256-296.852 160.771-372.025 331.546-112.76 256.162-337.128 337.155-449.896 346.588-112.769 9.432-535.598 120.266-668.67 272.94C373.6 906.91 311.19 987.963 275.084 1048.287"/>
+        <path d="M1972.131-151.462c-172.843 50.256-296.852 160.772-372.025 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.838-187.23 243.161"/>
+        <path d="M1991.356-156.975c-172.843 50.257-296.851 160.772-372.025 331.547-112.76 256.162-337.127 337.155-449.896 346.587-112.768 9.433-535.598 120.266-668.67 272.941-88.715 101.783-151.124 182.837-187.229 243.16"/>
+        <path d="M2010.581-162.487c-172.843 50.256-296.851 160.771-372.025 331.546-112.76 256.162-337.127 337.155-449.895 346.587-112.769 9.433-535.599 120.267-668.67 272.941-88.716 101.784-151.125 182.837-187.23 243.16"/>
+        <path d="M2029.807-168c-172.844 50.256-296.852 160.771-372.025 331.546-112.76 256.162-337.128 337.155-449.896 346.588-112.769 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.837-187.23 243.161"/>
+    </g>
+</svg>
diff --git a/docs/assets/images/cancel.svg b/docs/assets/images/cancel.svg
new file mode 100644
index 0000000..1ca9a6a
--- /dev/null
+++ b/docs/assets/images/cancel.svg
@@ -0,0 +1,11 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<svg width="14px" height="14px" viewBox="0 0 14 14" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+    <!-- Generator: Sketch 53.2 (72643) - https://sketchapp.com -->
+    <title>Cancel-icon</title>
+    <desc>Created with Sketch.</desc>
+    <g id="Page-1" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
+        <g id="Pathes" transform="translate(-718.000000, -27.000000)" fill="#FFFFFF">
+            <polyline id="Cancel-icon" points="732 28.41 730.59 27 725 32.59 719.41 27 718 28.41 723.59 34 718 39.59 719.41 41 725 35.41 730.59 41 732 39.59 726.41 34 732 28.41"></polyline>
+        </g>
+    </g>
+</svg>
\ No newline at end of file
diff --git a/docs/assets/images/checkmark-green.svg b/docs/assets/images/checkmark-green.svg
new file mode 100644
index 0000000..7f4bd06
--- /dev/null
+++ b/docs/assets/images/checkmark-green.svg
@@ -0,0 +1,3 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="20" height="15" viewBox="0 0 20 15">
+    <path fill="#157136" fill-rule="evenodd" d="M19.753.348a.846.846 0 0 0-1.189 0L6.666 12.167 1.436 7.01a.846.846 0 0 0-1.19 0 .827.827 0 0 0 0 1.177l5.828 5.745a.855.855 0 0 0 1.19 0l12.49-12.407a.826.826 0 0 0 0-1.177c-.329-.326.328.325 0 0z"/>
+</svg>
diff --git a/docs/assets/images/copy-icon.svg b/docs/assets/images/copy-icon.svg
new file mode 100644
index 0000000..9ee5957
--- /dev/null
+++ b/docs/assets/images/copy-icon.svg
@@ -0,0 +1,6 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="15" height="18" viewBox="0 0 15 18">
+    <defs>
+        <path id="a" d="M589 97.75h-9c-.825 0-1.5.675-1.5 1.5v10.5h1.5v-10.5h9v-1.5zm2.25 3H583c-.825 0-1.5.675-1.5 1.5v10.5c0 .825.675 1.5 1.5 1.5h8.25c.825 0 1.5-.675 1.5-1.5v-10.5c0-.825-.675-1.5-1.5-1.5zm0 12H583v-10.5h8.25v10.5z"/>
+    </defs>
+    <use fill="#757575" fill-rule="nonzero" transform="translate(-578 -97)" xlink:href="#a"/>
+</svg>
diff --git a/docs/assets/images/cpp.svg b/docs/assets/images/cpp.svg
new file mode 100644
index 0000000..2ad3e6d
--- /dev/null
+++ b/docs/assets/images/cpp.svg
@@ -0,0 +1,9 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="58" height="66" viewBox="0 0 58 66">
+    <g fill="none" fill-rule="nonzero">
+        <path fill="#00599C" d="M57.262 48.952c.455-.789.738-1.677.738-2.474V18.79c0-.797-.282-1.685-.738-2.474L29 32.634l28.262 16.318z"/>
+        <path fill="#004482" d="M31.511 64.67L55.49 50.829c.69-.4 1.318-1.088 1.773-1.876L29 32.634.738 48.952c.455.788 1.083 1.477 1.773 1.876L26.49 64.67c1.38.797 3.641.797 5.022 0z"/>
+        <path fill="#659AD2" d="M57.262 16.317c-.455-.788-1.083-1.477-1.773-1.876L31.51.598c-1.38-.797-3.641-.797-5.022 0L2.51 14.441C1.131 15.24 0 17.196 0 18.791v27.687c0 .797.283 1.685.738 2.474L29 32.634l28.262-16.317z"/>
+        <path fill="#FFF" d="M29 51.968c-10.66 0-19.333-8.673-19.333-19.334 0-10.66 8.673-19.333 19.333-19.333 6.879 0 13.294 3.702 16.742 9.66l-8.366 4.842A9.708 9.708 0 0 0 29 22.968c-5.33 0-9.667 4.336-9.667 9.666S23.67 42.301 29 42.301c3.44 0 6.65-1.853 8.376-4.836l8.367 4.841c-3.448 5.96-9.864 9.662-16.743 9.662z"/>
+        <path fill="#FFF" d="M48.333 31.56h-2.148v-2.148h-2.148v2.148H41.89v2.148h2.148v2.149h2.148v-2.149h2.148zM56.389 31.56H54.24v-2.148h-2.148v2.148h-2.149v2.148h2.149v2.149h2.148v-2.149h2.148z"/>
+    </g>
+</svg>
diff --git a/docs/assets/images/dev-internal-bg.jpg b/docs/assets/images/dev-internal-bg.jpg
new file mode 100644
index 0000000..847a3f5
--- /dev/null
+++ b/docs/assets/images/dev-internal-bg.jpg
Binary files differ
diff --git a/docs/assets/images/dotnet.svg b/docs/assets/images/dotnet.svg
new file mode 100644
index 0000000..5801271
--- /dev/null
+++ b/docs/assets/images/dotnet.svg
@@ -0,0 +1,9 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="57" height="57" viewBox="0 0 57 57">
+    <g fill="none" fill-rule="nonzero">
+        <path fill="#672572" d="M.118.118h56.763v56.763H.118z"/>
+        <g fill="#FFF">
+            <path d="M36.649 21.809c.29-.258.436-.58.436-.967 0-.353-.117-.644-.348-.872-.177-.18-.47-.357-.876-.526-.361-.153-.6-.286-.714-.394-.126-.118-.19-.28-.19-.49 0-.196.075-.355.228-.48.15-.122.352-.184.605-.184.406 0 .765.113 1.078.34v-.766a2.25 2.25 0 0 0-1.016-.221c-.492 0-.893.13-1.206.392-.31.262-.468.59-.468.983 0 .352.1.638.301.858.167.186.45.364.848.539.386.169.645.315.778.437.133.12.2.273.2.459 0 .441-.302.663-.903.663-.45 0-.86-.153-1.235-.455v.817c.335.19.727.286 1.177.286.55-.002.985-.14 1.305-.42zM43.8 22.112h.76v-4.105h1.113v-.643H44.56v-.732c0-.67.264-1.007.796-1.007.189 0 .355.042.506.124v-.687c-.133-.056-.317-.082-.55-.082-.406 0-.748.128-1.025.383-.326.295-.486.706-.486 1.227v.77h-.807v.643h.807v4.11H43.8zM19.564 16.157c.14 0 .257-.048.355-.144a.474.474 0 0 0 .146-.352.472.472 0 0 0-.146-.35.483.483 0 0 0-.355-.143.484.484 0 0 0-.35.142.479.479 0 0 0-.145.35c0 .147.05.265.145.358a.49.49 0 0 0 .35.14zM23.372 22.226c.501 0 .936-.111 1.3-.335v-.723a1.908 1.908 0 0 1-1.169.408c-.492 0-.88-.16-1.175-.482-.293-.323-.44-.758-.44-1.313 0-.574.162-1.035.48-1.38.303-.336.698-.505 1.182-.505.401 0 .778.122 1.13.373v-.78a2.423 2.423 0 0 0-1.097-.243c-.754 0-1.355.24-1.801.719-.448.479-.67 1.109-.67 1.885 0 .694.204 1.26.612 1.697.417.455.967.679 1.648.679zM50.402 17.743h.148c.105 0 .2.096.289.29l.175.387h.275l-.21-.428c-.09-.178-.178-.275-.267-.298v-.006a.548.548 0 0 0 .302-.158.394.394 0 0 0 .108-.281.386.386 0 0 0-.133-.304c-.104-.091-.255-.136-.457-.136h-.452v1.61h.226v-.676h-.004zm0-.74h.2c.14 0 .237.024.297.07.05.045.077.11.077.202 0 .184-.11.275-.332.275h-.242v-.548zM46.902 20.817c0 .934.415 1.402 1.244 1.402.295 0 .535-.051.714-.153v-.648a.808.808 0 0 1-.492.153c-.253 0-.433-.066-.543-.204-.11-.133-.162-.36-.162-.678v-2.682h1.195v-.643h-1.195v-1.406c-.267.086-.52.17-.761.246v1.16h-.816v.643h.816v2.81z"/>
+            <path d="M50.668 18.881c.36 0 .66-.122.905-.362.246-.24.368-.538.368-.893 0-.362-.122-.664-.37-.9a1.242 1.242 0 0 0-.894-.349c-.362 0-.663.122-.907.364a1.213 1.213 0 0 0-.364.894c0 .361.12.658.36.893.237.233.538.353.902.353zm-.796-2.045c.213-.213.479-.32.8-.32.31 0 .577.107.792.32.215.213.324.48.324.796 0 .32-.109.588-.326.799a1.075 1.075 0 0 1-.79.324 1.08 1.08 0 0 1-.79-.32 1.084 1.084 0 0 1-.327-.803c0-.317.106-.58.317-.796zM19.176 17.364h1v4.749h-1zM30.949 22.226c.723 0 1.297-.229 1.725-.686.426-.457.639-1.062.639-1.82 0-.77-.2-1.376-.595-1.815-.399-.437-.951-.659-1.659-.659-.72 0-1.293.211-1.723.63-.463.453-.696 1.09-.696 1.917 0 .723.204 1.306.612 1.747.417.457.985.686 1.697.686zM29.87 18.36c.284-.309.663-.464 1.135-.464.48 0 .852.153 1.118.46.273.319.41.782.41 1.39 0 .581-.128 1.03-.386 1.346-.261.324-.645.486-1.144.486-.483 0-.867-.16-1.153-.481-.289-.32-.43-.766-.43-1.333 0-.606.15-1.072.45-1.404zM41.87 21.54c.425-.457.64-1.062.64-1.82 0-.77-.201-1.376-.596-1.815-.397-.437-.95-.659-1.66-.659-.72 0-1.297.211-1.725.63-.461.453-.694 1.09-.694 1.917 0 .723.204 1.306.612 1.747.417.457.983.686 1.697.686.723 0 1.3-.229 1.725-.686zm-3.254-1.776c0-.606.148-1.074.448-1.404.286-.309.665-.464 1.135-.464.48 0 .852.153 1.118.46.275.319.413.782.413 1.39 0 .581-.129 1.03-.386 1.346-.262.324-.643.486-1.145.486-.48 0-.867-.16-1.155-.481-.286-.318-.428-.763-.428-1.333zM30.691 33.799h6.3V31.95h-6.3v-5.8h6.798v-1.847h-8.838v17.435h9.24V39.89h-7.2zM39.81 26.151h5.031V41.74h2.043V26.15h5.021v-1.847H39.81zM6.396 39.272c-.372 0-.687.133-.947.402a1.34 1.34 0 0 0-.388.973c0 .373.129.695.388.965.26.27.577.408.947.408.382 0 .706-.137.974-.408a1.33 1.33 0 0 0 .402-.965c0-.372-.134-.696-.402-.967a1.305 1.305 0 0 0-.974-.408zM27.116 17.54c-.23.188-.404.454-.515.802h-.02v-.98h-.758v4.748h.759v-2.422c0-.55.117-.98.357-1.293.21-.277.47-.417.778-.417.255 0 .446.05.581.153v-.787a1.244 1.244 0 0 0-.454-.064c-.269 0-.513.088-.728.26zM11.467 16.357h.015l2.47 5.755h.382l2.467-5.755h.017l-.064 5.755h.778v-6.649h-.975l-2.402 5.432h-.034l-2.342-5.432H10.75v6.65h.752zM22.758 39.028l-9.36-14.724h-2.652v17.435h2.042V26.783L22.31 41.74h2.48V24.304h-2.032z"/>
+        </g>
+    </g>
+</svg>
diff --git a/docs/assets/images/edition-ce.svg b/docs/assets/images/edition-ce.svg
new file mode 100644
index 0000000..fad4eb5
--- /dev/null
+++ b/docs/assets/images/edition-ce.svg
@@ -0,0 +1,16 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="86" height="74" viewBox="0 0 86 74">
+    <defs>
+        <path id="a" d="M.022.27h85.154v72.867H.022z"/>
+    </defs>
+    <g fill="none" fill-rule="evenodd">
+        <path fill="#DF2226" d="M46.604 43.541a4.004 4.004 0 0 1-4.006 4.004 3.998 3.998 0 0 1-3.995-4.004 3.998 3.998 0 0 1 3.995-4.003 4.004 4.004 0 0 1 4.006 4.003"/>
+        <path fill="#DF2226" d="M42.602 40.9a2.65 2.65 0 0 0-2.648 2.642 2.649 2.649 0 0 0 2.648 2.64 2.648 2.648 0 0 0 2.645-2.64 2.649 2.649 0 0 0-2.645-2.642m0 8.001c-2.957 0-5.365-2.405-5.365-5.359a5.372 5.372 0 0 1 5.365-5.361 5.371 5.371 0 0 1 5.362 5.361 5.369 5.369 0 0 1-5.362 5.359"/>
+        <path fill="#DF2226" d="M41.073 45.073L29.516 31.201a.582.582 0 0 1 .074-.828.583.583 0 0 1 .753 0l13.873 11.56a2.227 2.227 0 0 1 .282 3.14 2.229 2.229 0 0 1-3.425 0M13.345 44.517H1.142a1.142 1.142 0 0 1 0-2.283h12.203a1.142 1.142 0 1 1 0 2.283M83.873 44.517H71.67a1.142 1.142 0 0 1 0-2.283h12.203a1.142 1.142 0 0 1 0 2.283"/>
+        <g transform="translate(0 .696)">
+            <mask id="b" fill="#fff">
+                <use xlink:href="#a"/>
+            </mask>
+            <path fill="#DF2226" d="M13.165 73.137h-.003a1.14 1.14 0 0 1-.806-.338C4.402 64.772.022 54.131.022 42.845.022 19.367 19.126.269 42.602.269c23.477 0 42.575 19.098 42.575 42.576 0 11.286-4.377 21.924-12.33 29.954-.217.217-.487.258-.81.338-.303 0-.593-.121-.807-.332l-8.656-8.632a1.142 1.142 0 0 1 0-1.614 1.133 1.133 0 0 1 1.61 0l7.838 7.809c7.022-7.494 10.875-17.224 10.875-27.523 0-22.218-18.078-40.296-40.296-40.296S2.306 20.627 2.306 42.845c0 10.299 3.852 20.029 10.871 27.523l7.835-7.844a1.138 1.138 0 1 1 1.61 1.61l-8.653 8.668a1.136 1.136 0 0 1-.804.335" mask="url(#b)"/>
+        </g>
+    </g>
+</svg>
diff --git a/docs/assets/images/edition-ee.svg b/docs/assets/images/edition-ee.svg
new file mode 100644
index 0000000..fbc466d
--- /dev/null
+++ b/docs/assets/images/edition-ee.svg
@@ -0,0 +1,25 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="84" height="84" viewBox="0 0 84 84">
+    <defs>
+        <path id="a" d="M.559.511h34.1v39.027H.56z"/>
+        <path id="c" d="M.333.109h34.099v11.383H.333z"/>
+    </defs>
+    <g fill="none" fill-rule="evenodd">
+        <path fill="#713C80" d="M2.164 69.539l14.888 8.59 14.888-8.59V52.352L17.052 43.74 2.164 52.352V69.54zM17.052 80.46c-.19 0-.371-.052-.54-.145L.54 71.096a1.084 1.084 0 0 1-.54-.931V51.726c0-.385.21-.743.54-.932l15.972-9.24a1.116 1.116 0 0 1 1.08 0l15.972 9.24c.334.19.537.547.537.932v18.439c0 .382-.203.736-.537.932l-15.972 9.22a1.114 1.114 0 0 1-.54.144zM51.654 69.539l14.885 8.59 14.888-8.59V52.352L66.54 43.74l-14.885 8.613V69.54zM66.539 80.46a1.1 1.1 0 0 1-.537-.145l-15.972-9.22a1.076 1.076 0 0 1-.54-.931V51.726c0-.385.207-.743.54-.932L66 41.554c.337-.189.75-.186 1.08 0l15.972 9.24c.334.19.54.547.54.932v18.439c0 .385-.206.74-.54.932l-15.972 9.22c-.165.092-.35.144-.54.144z"/>
+        <path fill="#713C80" d="M29.045 73.711l13.643 7.888 12.763-7.375-5.418-3.127a1.066 1.066 0 0 1-.544-.936V51.726c0-.385.207-.736.544-.932l15.965-9.24c.337-.193.75-.186 1.083 0L72.52 44.7V29.937l-12.78-7.372v6.608c0 .386-.206.747-.54.933L43.228 39.32c-.337.2-.746.2-1.076 0L26.18 30.106a1.061 1.061 0 0 1-.54-.933v-6.608l-12.76 7.372v13.719l3.63-2.102a1.123 1.123 0 0 1 1.084 0l15.968 9.24c.334.196.54.547.54.932V70.16c0 .385-.206.746-.54.936l-4.517 2.614zm13.643 10.217c-.185 0-.375-.048-.54-.145l-15.803-9.136a1.07 1.07 0 0 1-.54-.936c0-.378.203-.743.54-.932l5.597-3.24V52.351l-14.889-8.614-4.712 2.728a1.088 1.088 0 0 1-1.084 0 1.073 1.073 0 0 1-.54-.932v-16.22c0-.388.203-.743.54-.939l14.923-8.62c.333-.19.746-.19 1.08 0 .334.192.543.557.543.939v7.853l14.885 8.59 14.889-8.59v-7.853c0-.382.206-.747.54-.94.333-.189.746-.189 1.083 0l14.94 8.621c.337.196.544.55.544.94v17.261c0 .392-.207.743-.544.936a1.082 1.082 0 0 1-1.08 0l-6.522-3.774-14.885 8.614v17.186l6.498 3.75a1.083 1.083 0 0 1 0 1.875l-14.923 8.62c-.165.097-.35.145-.54.145z"/>
+        <g transform="translate(25.08 -.072)">
+            <mask id="b" fill="#fff">
+                <use xlink:href="#a"/>
+            </mask>
+            <path fill="#713C80" d="M2.722 28.62l14.885 8.589 14.889-8.59V11.433l-14.889-8.59-14.885 8.59v17.186zm14.885 10.918c-.185 0-.371-.049-.536-.145L1.099 30.177a1.075 1.075 0 0 1-.54-.928V10.81c0-.389.206-.746.54-.935L17.07.655a1.075 1.075 0 0 1 1.076 0l15.972 9.22c.337.189.54.546.54.935V29.25c0 .381-.203.736-.54.928l-15.972 9.216c-.165.096-.35.145-.54.145z" mask="url(#b)"/>
+        </g>
+        <g transform="translate(49.16 50.528)">
+            <mask id="d" fill="#fff">
+                <use xlink:href="#c"/>
+            </mask>
+            <path fill="#713C80" d="M17.383 11.492c-.19 0-.372-.048-.54-.144L.874 2.128C.358 1.83.18 1.168.48.656c.299-.523.96-.698 1.475-.402l15.429 8.91L32.81.253c.52-.3 1.18-.12 1.476.402.3.513.124 1.173-.392 1.472l-15.972 9.22a1.087 1.087 0 0 1-.54.144" mask="url(#d)"/>
+        </g>
+        <path fill="#713C80" d="M66.543 80.46c-.599 0-1.08-.485-1.08-1.073V60.94c0-.598.481-1.08 1.08-1.08.598 0 1.08.482 1.08 1.08v18.446c0 .588-.482 1.073-1.08 1.073M17.059 62.02c-.19 0-.372-.048-.54-.144L.547 52.656c-.516-.298-.691-.959-.392-1.472.296-.522.96-.698 1.476-.402l15.428 8.91 15.428-8.91c.516-.3 1.177-.12 1.476.402.3.513.12 1.174-.395 1.473l-15.969 9.219a1.087 1.087 0 0 1-.54.144"/>
+        <path fill="#713C80" d="M17.059 80.46c-.599 0-1.08-.485-1.08-1.073V60.94c0-.598.481-1.08 1.08-1.08.599 0 1.08.482 1.08 1.08v18.446c0 .588-.481 1.073-1.08 1.073M42.695 21.033c-.19 0-.372-.048-.54-.145l-15.972-9.222a1.073 1.073 0 0 1-.396-1.47c.3-.522.96-.7 1.476-.402l15.432 8.91 15.428-8.91c.516-.299 1.176-.12 1.476.403.299.512.12 1.17-.396 1.469l-15.972 9.222c-.168.097-.35.145-.536.145"/>
+        <path fill="#713C80" d="M42.695 39.472a1.08 1.08 0 0 1-1.08-1.076V19.954a1.078 1.078 0 1 1 2.156 0v18.442c0 .591-.481 1.076-1.076 1.076"/>
+    </g>
+</svg>
diff --git a/docs/assets/images/edition-ue.svg b/docs/assets/images/edition-ue.svg
new file mode 100644
index 0000000..0d788ed
--- /dev/null
+++ b/docs/assets/images/edition-ue.svg
@@ -0,0 +1,28 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="86" height="87" viewBox="0 0 86 87">
+    <defs>
+        <path id="a" d="M.841.473h15.423v15.365H.841z"/>
+        <path id="c" d="M1.26.484h11.02v10.983H1.26z"/>
+        <path id="e" d="M0 88.393h85.903V2.866H0z"/>
+    </defs>
+    <g fill="none" fill-rule="evenodd" transform="translate(0 -2)">
+        <path fill="#F16623" d="M42.953 31.386c-7.892 0-14.314 6.39-14.314 14.242 0 7.854 6.422 14.244 14.314 14.244s14.31-6.39 14.31-14.244c0-7.852-6.418-14.242-14.31-14.242m0 30.696c-9.113 0-16.524-7.381-16.524-16.454 0-9.07 7.41-16.452 16.524-16.452 9.11 0 16.523 7.381 16.523 16.452 0 9.073-7.413 16.454-16.523 16.454"/>
+        <g transform="translate(34.4 2.393)">
+            <mask id="b" fill="#fff">
+                <use xlink:href="#a"/>
+            </mask>
+            <path fill="#F16623" d="M8.553 2.683c-3.036 0-5.504 2.454-5.504 5.47 0 3.018 2.468 5.472 5.504 5.472 3.035 0 5.5-2.454 5.5-5.472 0-3.016-2.465-5.47-5.5-5.47m0 13.155C4.3 15.838.84 12.392.84 8.153c0-4.235 3.46-7.68 7.712-7.68 4.251 0 7.711 3.445 7.711 7.68 0 4.24-3.46 7.685-7.711 7.685" mask="url(#b)"/>
+        </g>
+        <path fill="#F16623" d="M5.51 13.844a3.292 3.292 0 0 0-3.297 3.283 3.292 3.292 0 0 0 3.297 3.28c1.82 0 3.3-1.474 3.3-3.28 0-1.81-1.48-3.283-3.3-3.283m0 8.772c-3.039 0-5.51-2.462-5.51-5.49 0-3.027 2.471-5.492 5.51-5.492 3.041 0 5.512 2.465 5.512 5.493 0 3.027-2.47 5.49-5.512 5.49M12.12 35.77c-3.037 0-5.505 2.454-5.505 5.472 0 3.019 2.468 5.473 5.504 5.473 3.033 0 5.501-2.454 5.501-5.473 0-3.018-2.468-5.472-5.5-5.472m0 13.155c-4.252 0-7.712-3.446-7.712-7.683 0-4.234 3.46-7.682 7.711-7.682s7.709 3.448 7.709 7.682c0 4.237-3.458 7.683-7.709 7.683M75.986 52.393c-4.245 0-7.705 3.437-7.705 7.666 0 4.225 3.46 7.665 7.705 7.665 4.249 0 7.706-3.44 7.706-7.665 0-4.229-3.457-7.666-7.706-7.666m0 17.541c-5.463 0-9.912-4.432-9.912-9.875 0-5.447 4.449-9.876 9.912-9.876 5.467 0 9.916 4.429 9.916 9.876 0 5.443-4.449 9.875-9.916 9.875M78.19 18.231c-3.036 0-5.501 2.454-5.501 5.473 0 3.015 2.465 5.47 5.5 5.47 3.037 0 5.505-2.455 5.505-5.47 0-3.019-2.468-5.473-5.504-5.473m0 13.155c-4.251 0-7.712-3.445-7.712-7.682s3.46-7.683 7.712-7.683c4.251 0 7.711 3.446 7.711 7.683s-3.46 7.682-7.711 7.682M9.365 78.523a3.295 3.295 0 0 0-3.297 3.286c0 1.803 1.476 3.276 3.297 3.276 1.82 0 3.3-1.473 3.3-3.276a3.297 3.297 0 0 0-3.3-3.286m0 8.775c-3.04 0-5.51-2.468-5.51-5.49 0-3.03 2.47-5.495 5.51-5.495 3.041 0 5.51 2.466 5.51 5.496 0 3.021-2.469 5.49-5.51 5.49M34.143 68.658c-3.036 0-5.504 2.454-5.504 5.473 0 3.018 2.468 5.472 5.504 5.472 3.036 0 5.501-2.454 5.501-5.472 0-3.019-2.465-5.473-5.501-5.473m0 13.155c-4.251 0-7.711-3.445-7.711-7.682 0-4.234 3.46-7.683 7.71-7.683 4.252 0 7.712 3.449 7.712 7.683 0 4.237-3.46 7.682-7.711 7.682"/>
+        <g transform="translate(51.6 76.927)">
+            <mask id="d" fill="#fff">
+                <use xlink:href="#c"/>
+            </mask>
+            <path fill="#F16623" d="M6.77 2.694c-1.821 0-3.3 1.47-3.3 3.285 0 1.803 1.479 3.274 3.3 3.274 1.82 0 3.296-1.47 3.296-3.274a3.291 3.291 0 0 0-3.297-3.285m0 8.775c-3.038 0-5.51-2.468-5.51-5.49C1.26 2.946 3.732.484 6.77.484c3.039 0 5.51 2.462 5.51 5.495 0 3.022-2.471 5.49-5.51 5.49" mask="url(#d)"/>
+        </g>
+        <path fill="#F16623" d="M31.313 35.8L8.695 20.227l1.252-1.817L32.565 33.98z"/>
+        <mask id="f" fill="#fff">
+            <use xlink:href="#e"/>
+        </mask>
+        <path fill="#F16623" d="M41.69 30.728h2.21V17.466h-2.21zM56.71 38.684l-1.087-1.926 17.11-9.632 1.087 1.926zM67.224 57.816l-10.32-6.052 1.121-1.906 10.32 6.051zM55.147 79.531l-7.955-19.049 2.041-.848 7.952 19.046zM36.293 67.897l-2.099-.685 2.449-7.497 2.1.688zM12.95 79.834l-1.665-1.454 19.82-22.74 1.669 1.45zM27.595 44.535l-9.116-.82.2-2.201 9.114.82zM42.12 55.189c.578.06 1.086.06 1.665 0l.37-1.669a.997.997 0 0 1 .705-.745 6.945 6.945 0 0 0 1.493-.62c.324-.18.72-.168 1.032.032l1.436.915c.425-.356.82-.751 1.176-1.178l-.918-1.437a1.01 1.01 0 0 1-.025-1.029c.258-.461.467-.966.616-1.496.1-.353.387-.625.745-.705l1.669-.37a7.687 7.687 0 0 0 0-1.663l-1.669-.37a.996.996 0 0 1-.745-.705 6.984 6.984 0 0 0-.616-1.499c-.181-.321-.17-.717.025-1.026l.918-1.436a9.708 9.708 0 0 0-1.176-1.179l-1.436.915c-.312.2-.708.209-1.029.031a6.877 6.877 0 0 0-1.496-.616c-.356-.1-.625-.39-.706-.748l-.37-1.669a7.703 7.703 0 0 0-1.665 0l-.37 1.669c-.077.358-.35.648-.702.748a6.784 6.784 0 0 0-1.5.616 1 1 0 0 1-1.026-.031l-1.436-.912a9.313 9.313 0 0 0-1.178 1.176l.914 1.436c.198.31.21.705.032 1.026-.26.464-.467.969-.62 1.5a.996.996 0 0 1-.747.704l-1.669.37c-.026.29-.043.565-.043.832 0 .266.017.541.043.83l1.669.37c.36.081.647.353.748.706a7.03 7.03 0 0 0 .619 1.496c.178.324.166.717-.032 1.03l-.914 1.436c.355.427.751.82 1.178 1.178l1.436-.915a.993.993 0 0 1 1.027-.031c.464.26.969.467 1.499.619.353.1.625.384.702.745l.37 1.669zm.833 2.05c-.501 0-1.015-.04-1.568-.121a1.36 1.36 0 0 1-1.124-1.05l-.344-1.553a8.899 8.899 0 0 1-.802-.33l-1.342.855a1.366 1.366 0 0 1-1.545-.06 11.11 11.11 0 0 1-2.199-2.202 1.356 1.356 0 0 1-.06-1.542l.854-1.342a8.622 8.622 0 0 1-.33-.805l-1.556-.344a1.36 1.36 0 0 1-1.05-1.121c-.08-.56-.12-1.07-.12-1.568 0-.496.04-1.01.12-1.566a1.367 1.367 0 0 1 1.053-1.126l1.554-.344c.097-.273.206-.542.33-.803l-.855-1.341a1.356 1.356 0 0 1 .06-1.543 11.11 11.11 0 0 1 2.2-2.201 1.366 1.366 0 0 1 1.544-.06l1.342.854c.26-.12.53-.232.802-.33l.344-1.554a1.36 1.36 0 0 1 1.124-1.049c1.112-.16 2.021-.16 3.133 0 .55.083 1.004.502 1.124 1.05l.344 1.553c.275.098.545.21.806.33l1.336-.855a1.37 1.37 0 0 1 1.548.06c.834.628 1.57 1.368 2.201 2.202.336.447.361 1.067.06 1.543l-.854 1.341c.12.261.232.53.33.803l1.554.344c.544.12.966.576 1.046 1.126.083.557.12 1.07.12 1.566s-.037 1.006-.12 1.568c-.078.55-.502 1-1.046 1.12l-1.554.345a9.48 9.48 0 0 1-.33.805l.854 1.342a1.366 1.366 0 0 1-.06 1.542 11.202 11.202 0 0 1-2.201 2.202c-.45.338-1.07.36-1.546.06l-1.338-.855c-.261.124-.53.233-.806.33l-.344 1.557a1.36 1.36 0 0 1-1.124 1.046c-.556.08-1.066.12-1.565.12z" mask="url(#f)"/>
+    </g>
+</svg>
diff --git a/docs/assets/images/events-nav-arrow.svg b/docs/assets/images/events-nav-arrow.svg
new file mode 100644
index 0000000..217fdaa
--- /dev/null
+++ b/docs/assets/images/events-nav-arrow.svg
@@ -0,0 +1,3 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="14" height="26" viewBox="0 0 14 26">
+    <path fill="none" fill-rule="evenodd" stroke="#F86B27" stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M13 1L1.444 12.556 13 24.11"/>
+</svg>
diff --git a/docs/assets/images/feature-easy-installation.svg b/docs/assets/images/feature-easy-installation.svg
new file mode 100644
index 0000000..0d788ed
--- /dev/null
+++ b/docs/assets/images/feature-easy-installation.svg
@@ -0,0 +1,28 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="86" height="87" viewBox="0 0 86 87">
+    <defs>
+        <path id="a" d="M.841.473h15.423v15.365H.841z"/>
+        <path id="c" d="M1.26.484h11.02v10.983H1.26z"/>
+        <path id="e" d="M0 88.393h85.903V2.866H0z"/>
+    </defs>
+    <g fill="none" fill-rule="evenodd" transform="translate(0 -2)">
+        <path fill="#F16623" d="M42.953 31.386c-7.892 0-14.314 6.39-14.314 14.242 0 7.854 6.422 14.244 14.314 14.244s14.31-6.39 14.31-14.244c0-7.852-6.418-14.242-14.31-14.242m0 30.696c-9.113 0-16.524-7.381-16.524-16.454 0-9.07 7.41-16.452 16.524-16.452 9.11 0 16.523 7.381 16.523 16.452 0 9.073-7.413 16.454-16.523 16.454"/>
+        <g transform="translate(34.4 2.393)">
+            <mask id="b" fill="#fff">
+                <use xlink:href="#a"/>
+            </mask>
+            <path fill="#F16623" d="M8.553 2.683c-3.036 0-5.504 2.454-5.504 5.47 0 3.018 2.468 5.472 5.504 5.472 3.035 0 5.5-2.454 5.5-5.472 0-3.016-2.465-5.47-5.5-5.47m0 13.155C4.3 15.838.84 12.392.84 8.153c0-4.235 3.46-7.68 7.712-7.68 4.251 0 7.711 3.445 7.711 7.68 0 4.24-3.46 7.685-7.711 7.685" mask="url(#b)"/>
+        </g>
+        <path fill="#F16623" d="M5.51 13.844a3.292 3.292 0 0 0-3.297 3.283 3.292 3.292 0 0 0 3.297 3.28c1.82 0 3.3-1.474 3.3-3.28 0-1.81-1.48-3.283-3.3-3.283m0 8.772c-3.039 0-5.51-2.462-5.51-5.49 0-3.027 2.471-5.492 5.51-5.492 3.041 0 5.512 2.465 5.512 5.493 0 3.027-2.47 5.49-5.512 5.49M12.12 35.77c-3.037 0-5.505 2.454-5.505 5.472 0 3.019 2.468 5.473 5.504 5.473 3.033 0 5.501-2.454 5.501-5.473 0-3.018-2.468-5.472-5.5-5.472m0 13.155c-4.252 0-7.712-3.446-7.712-7.683 0-4.234 3.46-7.682 7.711-7.682s7.709 3.448 7.709 7.682c0 4.237-3.458 7.683-7.709 7.683M75.986 52.393c-4.245 0-7.705 3.437-7.705 7.666 0 4.225 3.46 7.665 7.705 7.665 4.249 0 7.706-3.44 7.706-7.665 0-4.229-3.457-7.666-7.706-7.666m0 17.541c-5.463 0-9.912-4.432-9.912-9.875 0-5.447 4.449-9.876 9.912-9.876 5.467 0 9.916 4.429 9.916 9.876 0 5.443-4.449 9.875-9.916 9.875M78.19 18.231c-3.036 0-5.501 2.454-5.501 5.473 0 3.015 2.465 5.47 5.5 5.47 3.037 0 5.505-2.455 5.505-5.47 0-3.019-2.468-5.473-5.504-5.473m0 13.155c-4.251 0-7.712-3.445-7.712-7.682s3.46-7.683 7.712-7.683c4.251 0 7.711 3.446 7.711 7.683s-3.46 7.682-7.711 7.682M9.365 78.523a3.295 3.295 0 0 0-3.297 3.286c0 1.803 1.476 3.276 3.297 3.276 1.82 0 3.3-1.473 3.3-3.276a3.297 3.297 0 0 0-3.3-3.286m0 8.775c-3.04 0-5.51-2.468-5.51-5.49 0-3.03 2.47-5.495 5.51-5.495 3.041 0 5.51 2.466 5.51 5.496 0 3.021-2.469 5.49-5.51 5.49M34.143 68.658c-3.036 0-5.504 2.454-5.504 5.473 0 3.018 2.468 5.472 5.504 5.472 3.036 0 5.501-2.454 5.501-5.472 0-3.019-2.465-5.473-5.501-5.473m0 13.155c-4.251 0-7.711-3.445-7.711-7.682 0-4.234 3.46-7.683 7.71-7.683 4.252 0 7.712 3.449 7.712 7.683 0 4.237-3.46 7.682-7.711 7.682"/>
+        <g transform="translate(51.6 76.927)">
+            <mask id="d" fill="#fff">
+                <use xlink:href="#c"/>
+            </mask>
+            <path fill="#F16623" d="M6.77 2.694c-1.821 0-3.3 1.47-3.3 3.285 0 1.803 1.479 3.274 3.3 3.274 1.82 0 3.296-1.47 3.296-3.274a3.291 3.291 0 0 0-3.297-3.285m0 8.775c-3.038 0-5.51-2.468-5.51-5.49C1.26 2.946 3.732.484 6.77.484c3.039 0 5.51 2.462 5.51 5.495 0 3.022-2.471 5.49-5.51 5.49" mask="url(#d)"/>
+        </g>
+        <path fill="#F16623" d="M31.313 35.8L8.695 20.227l1.252-1.817L32.565 33.98z"/>
+        <mask id="f" fill="#fff">
+            <use xlink:href="#e"/>
+        </mask>
+        <path fill="#F16623" d="M41.69 30.728h2.21V17.466h-2.21zM56.71 38.684l-1.087-1.926 17.11-9.632 1.087 1.926zM67.224 57.816l-10.32-6.052 1.121-1.906 10.32 6.051zM55.147 79.531l-7.955-19.049 2.041-.848 7.952 19.046zM36.293 67.897l-2.099-.685 2.449-7.497 2.1.688zM12.95 79.834l-1.665-1.454 19.82-22.74 1.669 1.45zM27.595 44.535l-9.116-.82.2-2.201 9.114.82zM42.12 55.189c.578.06 1.086.06 1.665 0l.37-1.669a.997.997 0 0 1 .705-.745 6.945 6.945 0 0 0 1.493-.62c.324-.18.72-.168 1.032.032l1.436.915c.425-.356.82-.751 1.176-1.178l-.918-1.437a1.01 1.01 0 0 1-.025-1.029c.258-.461.467-.966.616-1.496.1-.353.387-.625.745-.705l1.669-.37a7.687 7.687 0 0 0 0-1.663l-1.669-.37a.996.996 0 0 1-.745-.705 6.984 6.984 0 0 0-.616-1.499c-.181-.321-.17-.717.025-1.026l.918-1.436a9.708 9.708 0 0 0-1.176-1.179l-1.436.915c-.312.2-.708.209-1.029.031a6.877 6.877 0 0 0-1.496-.616c-.356-.1-.625-.39-.706-.748l-.37-1.669a7.703 7.703 0 0 0-1.665 0l-.37 1.669c-.077.358-.35.648-.702.748a6.784 6.784 0 0 0-1.5.616 1 1 0 0 1-1.026-.031l-1.436-.912a9.313 9.313 0 0 0-1.178 1.176l.914 1.436c.198.31.21.705.032 1.026-.26.464-.467.969-.62 1.5a.996.996 0 0 1-.747.704l-1.669.37c-.026.29-.043.565-.043.832 0 .266.017.541.043.83l1.669.37c.36.081.647.353.748.706a7.03 7.03 0 0 0 .619 1.496c.178.324.166.717-.032 1.03l-.914 1.436c.355.427.751.82 1.178 1.178l1.436-.915a.993.993 0 0 1 1.027-.031c.464.26.969.467 1.499.619.353.1.625.384.702.745l.37 1.669zm.833 2.05c-.501 0-1.015-.04-1.568-.121a1.36 1.36 0 0 1-1.124-1.05l-.344-1.553a8.899 8.899 0 0 1-.802-.33l-1.342.855a1.366 1.366 0 0 1-1.545-.06 11.11 11.11 0 0 1-2.199-2.202 1.356 1.356 0 0 1-.06-1.542l.854-1.342a8.622 8.622 0 0 1-.33-.805l-1.556-.344a1.36 1.36 0 0 1-1.05-1.121c-.08-.56-.12-1.07-.12-1.568 0-.496.04-1.01.12-1.566a1.367 1.367 0 0 1 1.053-1.126l1.554-.344c.097-.273.206-.542.33-.803l-.855-1.341a1.356 1.356 0 0 1 .06-1.543 11.11 11.11 0 0 1 2.2-2.201 1.366 1.366 0 0 1 1.544-.06l1.342.854c.26-.12.53-.232.802-.33l.344-1.554a1.36 1.36 0 0 1 1.124-1.049c1.112-.16 2.021-.16 3.133 0 .55.083 1.004.502 1.124 1.05l.344 1.553c.275.098.545.21.806.33l1.336-.855a1.37 1.37 0 0 1 1.548.06c.834.628 1.57 1.368 2.201 2.202.336.447.361 1.067.06 1.543l-.854 1.341c.12.261.232.53.33.803l1.554.344c.544.12.966.576 1.046 1.126.083.557.12 1.07.12 1.566s-.037 1.006-.12 1.568c-.078.55-.502 1-1.046 1.12l-1.554.345a9.48 9.48 0 0 1-.33.805l.854 1.342a1.366 1.366 0 0 1-.06 1.542 11.202 11.202 0 0 1-2.201 2.202c-.45.338-1.07.36-1.546.06l-1.338-.855c-.261.124-.53.233-.806.33l-.344 1.557a1.36 1.36 0 0 1-1.124 1.046c-.556.08-1.066.12-1.565.12z" mask="url(#f)"/>
+    </g>
+</svg>
diff --git a/docs/assets/images/feature-fast.svg b/docs/assets/images/feature-fast.svg
new file mode 100644
index 0000000..5889878
--- /dev/null
+++ b/docs/assets/images/feature-fast.svg
@@ -0,0 +1,16 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="86" height="74" viewBox="0 0 86 74">
+    <defs>
+        <path id="a" d="M.022 1.27h85.154v72.867H.022z"/>
+    </defs>
+    <g fill="none" fill-rule="evenodd">
+        <path fill="#F16623" d="M46.604 43.541a4.004 4.004 0 0 1-4.006 4.004 3.998 3.998 0 0 1-3.995-4.004 3.998 3.998 0 0 1 3.995-4.003 4.004 4.004 0 0 1 4.006 4.003"/>
+        <path fill="#F16623" d="M42.602 40.9a2.65 2.65 0 0 0-2.648 2.642 2.649 2.649 0 0 0 2.648 2.64 2.648 2.648 0 0 0 2.645-2.64 2.649 2.649 0 0 0-2.645-2.642m0 8.001c-2.957 0-5.365-2.405-5.365-5.359a5.372 5.372 0 0 1 5.365-5.361 5.371 5.371 0 0 1 5.362 5.361 5.369 5.369 0 0 1-5.362 5.359"/>
+        <path fill="#F16623" d="M41.073 45.073L29.516 31.201a.582.582 0 0 1 .074-.828.583.583 0 0 1 .753 0l13.873 11.56a2.227 2.227 0 0 1 .282 3.14 2.229 2.229 0 0 1-3.425 0M13.345 44.517H1.142a1.142 1.142 0 0 1 0-2.283h12.203a1.142 1.142 0 1 1 0 2.283M83.873 44.517H71.67a1.142 1.142 0 0 1 0-2.283h12.203a1.142 1.142 0 0 1 0 2.283"/>
+        <g transform="translate(0 -.304)">
+            <mask id="b" fill="#fff">
+                <use xlink:href="#a"/>
+            </mask>
+            <path fill="#F16623" d="M13.165 74.137h-.003a1.14 1.14 0 0 1-.806-.338C4.402 65.772.022 55.131.022 43.845c0-23.478 19.104-42.576 42.58-42.576 23.477 0 42.575 19.098 42.575 42.576 0 11.286-4.377 21.924-12.33 29.954-.217.217-.487.258-.81.338-.303 0-.593-.121-.807-.332l-8.656-8.632a1.142 1.142 0 0 1 0-1.614 1.133 1.133 0 0 1 1.61 0l7.838 7.809c7.022-7.494 10.875-17.224 10.875-27.523 0-22.218-18.078-40.296-40.296-40.296S2.306 21.627 2.306 43.845c0 10.299 3.852 20.029 10.871 27.523l7.835-7.844a1.138 1.138 0 1 1 1.61 1.61l-8.653 8.668a1.136 1.136 0 0 1-.804.335" mask="url(#b)"/>
+        </g>
+    </g>
+</svg>
diff --git a/docs/assets/images/feature-reliable.svg b/docs/assets/images/feature-reliable.svg
new file mode 100644
index 0000000..d677c70
--- /dev/null
+++ b/docs/assets/images/feature-reliable.svg
@@ -0,0 +1,25 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="84" height="84" viewBox="0 0 84 84">
+    <defs>
+        <path id="a" d="M1.559 2.511h34.1v39.027H1.56z"/>
+        <path id="c" d="M1.333 1.109h34.099v11.383H1.333z"/>
+    </defs>
+    <g fill="none" fill-rule="evenodd">
+        <path fill="#FFF" d="M2.164 69.539l14.888 8.59 14.888-8.59V52.352L17.052 43.74 2.164 52.352V69.54zM17.052 80.46c-.19 0-.371-.052-.54-.145L.54 71.096a1.084 1.084 0 0 1-.54-.931V51.726c0-.385.21-.743.54-.932l15.972-9.24a1.116 1.116 0 0 1 1.08 0l15.972 9.24c.334.19.537.547.537.932v18.439c0 .382-.203.736-.537.932l-15.972 9.22a1.114 1.114 0 0 1-.54.144zM51.654 69.539l14.885 8.59 14.888-8.59V52.352L66.54 43.74l-14.885 8.613V69.54zM66.539 80.46a1.1 1.1 0 0 1-.537-.145l-15.972-9.22a1.076 1.076 0 0 1-.54-.931V51.726c0-.385.207-.743.54-.932L66 41.554c.337-.189.75-.186 1.08 0l15.972 9.24c.334.19.54.547.54.932v18.439c0 .385-.206.74-.54.932l-15.972 9.22c-.165.092-.35.144-.54.144z"/>
+        <path fill="#FFF" d="M29.045 73.711l13.643 7.888 12.763-7.375-5.418-3.127a1.066 1.066 0 0 1-.544-.936V51.726c0-.385.207-.736.544-.932l15.965-9.24c.337-.193.75-.186 1.083 0L72.52 44.7V29.937l-12.78-7.372v6.608c0 .386-.206.747-.54.933L43.228 39.32c-.337.2-.746.2-1.076 0L26.18 30.106a1.061 1.061 0 0 1-.54-.933v-6.608l-12.76 7.372v13.719l3.63-2.102a1.123 1.123 0 0 1 1.084 0l15.968 9.24c.334.196.54.547.54.932V70.16c0 .385-.206.746-.54.936l-4.517 2.614zm13.643 10.217c-.185 0-.375-.048-.54-.145l-15.803-9.136a1.07 1.07 0 0 1-.54-.936c0-.378.203-.743.54-.932l5.597-3.24V52.351l-14.889-8.614-4.712 2.728a1.088 1.088 0 0 1-1.084 0 1.073 1.073 0 0 1-.54-.932v-16.22c0-.388.203-.743.54-.939l14.923-8.62c.333-.19.746-.19 1.08 0 .334.192.543.557.543.939v7.853l14.885 8.59 14.889-8.59v-7.853c0-.382.206-.747.54-.94.333-.189.746-.189 1.083 0l14.94 8.621c.337.196.544.55.544.94v17.261c0 .392-.207.743-.544.936a1.082 1.082 0 0 1-1.08 0l-6.522-3.774-14.885 8.614v17.186l6.498 3.75a1.083 1.083 0 0 1 0 1.875l-14.923 8.62c-.165.097-.35.145-.54.145z"/>
+        <g transform="translate(24.08 -2.072)">
+            <mask id="b" fill="#fff">
+                <use xlink:href="#a"/>
+            </mask>
+            <path fill="#FFF" d="M3.722 30.62l14.885 8.589 14.889-8.59V13.433l-14.889-8.59-14.885 8.59v17.186zm14.885 10.918c-.185 0-.371-.049-.536-.145L2.099 32.177a1.075 1.075 0 0 1-.54-.928V12.81c0-.389.206-.746.54-.935l15.972-9.22a1.075 1.075 0 0 1 1.076 0l15.972 9.22c.337.189.54.546.54.935V31.25c0 .381-.203.736-.54.928l-15.972 9.216c-.165.096-.35.145-.54.145z" mask="url(#b)"/>
+        </g>
+        <g transform="translate(48.16 49.528)">
+            <mask id="d" fill="#fff">
+                <use xlink:href="#c"/>
+            </mask>
+            <path fill="#FFF" d="M18.383 12.492c-.19 0-.372-.048-.54-.144L1.874 3.128c-.516-.299-.695-.96-.395-1.472.299-.523.96-.698 1.475-.402l15.429 8.91 15.428-8.91c.52-.3 1.18-.12 1.476.402.3.513.124 1.173-.392 1.472l-15.972 9.22a1.087 1.087 0 0 1-.54.144" mask="url(#d)"/>
+        </g>
+        <path fill="#FFF" d="M66.543 80.46c-.599 0-1.08-.485-1.08-1.073V60.94c0-.598.481-1.08 1.08-1.08.598 0 1.08.482 1.08 1.08v18.446c0 .588-.482 1.073-1.08 1.073M17.059 62.02c-.19 0-.372-.048-.54-.144L.547 52.656c-.516-.298-.691-.959-.392-1.472.296-.522.96-.698 1.476-.402l15.428 8.91 15.428-8.91c.516-.3 1.177-.12 1.476.402.3.513.12 1.174-.395 1.473l-15.969 9.219a1.087 1.087 0 0 1-.54.144"/>
+        <path fill="#FFF" d="M17.059 80.46c-.599 0-1.08-.485-1.08-1.073V60.94c0-.598.481-1.08 1.08-1.08.599 0 1.08.482 1.08 1.08v18.446c0 .588-.481 1.073-1.08 1.073M42.695 21.033c-.19 0-.372-.048-.54-.145l-15.972-9.222a1.073 1.073 0 0 1-.396-1.47c.3-.522.96-.7 1.476-.402l15.432 8.91 15.428-8.91c.516-.299 1.176-.12 1.476.403.299.512.12 1.17-.396 1.469l-15.972 9.222c-.168.097-.35.145-.536.145"/>
+        <path fill="#FFF" d="M42.695 39.472a1.08 1.08 0 0 1-1.08-1.076V19.954a1.078 1.078 0 1 1 2.156 0v18.442c0 .591-.481 1.076-1.076 1.076"/>
+    </g>
+</svg>
diff --git a/docs/assets/images/github-gray.svg b/docs/assets/images/github-gray.svg
new file mode 100644
index 0000000..664368e
--- /dev/null
+++ b/docs/assets/images/github-gray.svg
@@ -0,0 +1,3 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24">
+    <path fill="#333" fill-rule="nonzero" d="M12 0C5.373 0 0 5.373 0 12c0 5.303 3.438 9.8 8.207 11.386.6.11.819-.26.819-.577 0-.286-.011-1.232-.017-2.234-3.337.725-4.042-1.415-4.042-1.415-.547-1.386-1.333-1.755-1.333-1.755-1.09-.744.083-.73.083-.73 1.205.084 1.84 1.237 1.84 1.237 1.07 1.834 2.809 1.304 3.491.996.11-.773.42-1.304.762-1.602-2.664-.304-5.466-1.333-5.466-5.932 0-1.31.468-2.38 1.234-3.22-.122-.305-.535-1.526.119-3.177 0 0 1.006-.322 3.3 1.23.957-.267 1.983-.399 3.003-.403 1.02.004 2.046.137 3.004.405 2.29-1.554 3.298-1.23 3.298-1.23.656 1.652.243 2.872.12 3.175.769.84 1.233 1.91 1.233 3.22 0 4.61-2.806 5.626-5.48 5.923.432.372.815 1.101.815 2.22 0 1.605-.016 2.898-.016 3.294 0 .319.218.692.826.575C20.565 21.796 24 17.3 24 12c0-6.627-5.373-12-12-12z"/>
+</svg>
diff --git a/docs/assets/images/github-white.svg b/docs/assets/images/github-white.svg
new file mode 100644
index 0000000..5ae1ae6
--- /dev/null
+++ b/docs/assets/images/github-white.svg
@@ -0,0 +1,3 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24">
+    <path fill="#FFF" fill-rule="nonzero" d="M12 0C5.373 0 0 5.373 0 12c0 5.303 3.438 9.8 8.207 11.386.6.11.819-.26.819-.577 0-.286-.011-1.232-.017-2.234-3.337.725-4.042-1.415-4.042-1.415-.547-1.386-1.333-1.755-1.333-1.755-1.09-.744.083-.73.083-.73 1.205.084 1.84 1.237 1.84 1.237 1.07 1.834 2.809 1.304 3.491.996.11-.773.42-1.304.762-1.602-2.664-.304-5.466-1.333-5.466-5.932 0-1.31.468-2.38 1.234-3.22-.122-.305-.535-1.526.119-3.177 0 0 1.006-.322 3.3 1.23.957-.267 1.983-.399 3.003-.403 1.02.004 2.046.137 3.004.405 2.29-1.554 3.298-1.23 3.298-1.23.656 1.652.243 2.872.12 3.175.769.84 1.233 1.91 1.233 3.22 0 4.61-2.806 5.626-5.48 5.923.432.372.815 1.101.815 2.22 0 1.605-.016 2.898-.016 3.294 0 .319.218.692.826.575C20.565 21.796 24 17.3 24 12c0-6.627-5.373-12-12-12z"/>
+</svg>
diff --git a/docs/assets/images/glowing-box.svg b/docs/assets/images/glowing-box.svg
new file mode 100644
index 0000000..c384c4a
--- /dev/null
+++ b/docs/assets/images/glowing-box.svg
@@ -0,0 +1,170 @@
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="347" height="366" viewBox="0 0 347 366">
+    <defs>
+        <path id="a" d="M62.597 186.739c-2.392-.01-4.672-.5-6.38-1.486L2.575 154.282c-1.983-1.144-2.78-2.734-2.494-4.353H0V0h126.242v149.929h-.049c.057 1.722-1.129 3.549-3.593 4.972l-51.503 29.734c-2.387 1.378-5.438 2.091-8.34 2.104h-.16z"/>
+        <linearGradient id="b" x1="50%" x2="50%" y1="100%" y2="18.88%">
+            <stop offset="0%" stop-color="#F86B27"/>
+            <stop offset="100%" stop-color="#F9F9F9"/>
+        </linearGradient>
+        <path id="d" d="M6.886 6.98c-9.181 9.181-9.181 24.068 0 33.249 9.182 9.182 24.068 9.182 33.249 0 9.182-9.181 9.182-24.068 0-33.249A23.433 23.433 0 0 0 23.511.094 23.437 23.437 0 0 0 6.886 6.98"/>
+        <path id="f" d="M.206 10.482c0 5.377 4.36 9.736 9.736 9.736 5.378 0 9.737-4.359 9.737-9.736 0-5.378-4.359-9.736-9.737-9.736C4.566.746.206 5.104.206 10.482"/>
+        <path id="h" d="M2.202 10.533c-4.299 8.655-.77 19.157 7.884 23.457 8.655 4.299 19.156.77 23.456-7.884a17.425 17.425 0 0 0 1.832-7.665v-.216c-.04-6.395-3.597-12.535-9.715-15.575A17.425 17.425 0 0 0 17.886.819c-6.434 0-12.627 3.562-15.684 9.714"/>
+        <path id="j" d="M95.387 4.612L7.023 55.629c-4.13 2.385-6.5 5.581-6.5 8.769 0 2.67 1.658 5.098 4.668 6.835l92.032 53.135c3.029 1.749 7.086 2.606 11.336 2.606 5.112 0 10.499-1.241 14.693-3.663l88.365-51.018c4.131-2.385 6.5-5.581 6.5-8.769 0-2.671-1.657-5.098-4.668-6.836L121.416 3.554c-3.027-1.748-7.085-2.605-11.334-2.605-5.112 0-10.499 1.241-14.695 3.663m2.336 118.891L5.691 70.368c-2.688-1.552-4.168-3.672-4.168-5.97 0-2.821 2.187-5.702 6-7.904L95.887 5.476c4.059-2.342 9.263-3.544 14.194-3.544 4.066 0 7.945.818 10.836 2.487l92.033 53.135c2.687 1.552 4.167 3.672 4.167 5.97 0 2.82-2.187 5.701-6.001 7.904l-88.363 51.017c-4.059 2.343-9.265 3.545-14.195 3.545-4.066-.001-7.945-.818-10.835-2.487"/>
+        <path id="l" d="M74.607 3.655L36.262 25.793H.546V125.89h.01c-.145 2.003.971 3.934 3.433 5.356l72.056 41.601c5.123 2.958 14.071 2.587 19.988-.828l69.183-39.943c3.073-1.774 4.71-4.021 4.867-6.186h.027V25.793h-35.735L94.596 2.826C92.275 1.486 89.168.83 85.914.829c-3.928 0-8.071.958-11.307 2.826"/>
+        <linearGradient id="m" x1="50%" x2="50%" y1="0%" y2="98.762%">
+            <stop offset="0%" stop-color="#DF2226"/>
+            <stop offset="100%" stop-color="#713C80"/>
+        </linearGradient>
+        <path id="o" d="M74.607 3.665L5.424 43.608c-5.916 3.416-6.559 8.583-1.435 11.54L76.045 96.75c5.123 2.957 14.071 2.586 19.988-.828l69.183-39.944c5.916-3.415 6.558-8.582 1.434-11.54L94.596 2.837C92.274 1.497 89.168.84 85.914.84c-3.928 0-8.071.957-11.307 2.825"/>
+        <linearGradient id="p" x1="-.815%" x2="99.309%" y1="50.001%" y2="49.999%">
+            <stop offset="0%" stop-color="#713C80"/>
+            <stop offset="100%" stop-color="#DF2226"/>
+        </linearGradient>
+        <path id="r" d="M.247 1.705l2.79 3.181V28.48c0 5.152 3.132 11.135 6.996 13.366L61.377 71.49c.055.032.107.052.16.082l1.658 1.889 2.608-1.521c.036-.019.071-.042.107-.063l.247-.144-.008-.007c1.366-.914 2.225-2.835 2.225-5.524V37.808L3.037.086.247 1.705z"/>
+        <linearGradient id="s" x1="-84.629%" x2="93.369%" y1="17.044%" y2="61.111%">
+            <stop offset="0%" stop-color="#781416"/>
+            <stop offset="100%" stop-color="#8C413D"/>
+        </linearGradient>
+        <path id="u" d="M.615 1.995l2.79 3.181V28.77c0 5.151 3.132 11.135 6.995 13.367l51.346 29.644c.053.03.106.052.16.081l1.657 1.889 2.607-1.52c.037-.019.072-.043.108-.063l.246-.144-.007-.008c1.365-.915 2.225-2.834 2.225-5.524V38.098L3.405.376.615 1.995z"/>
+        <linearGradient id="v" x1="-84.629%" x2="93.369%" y1="17.044%" y2="61.111%">
+            <stop offset="0%" stop-color="#781416"/>
+            <stop offset="100%" stop-color="#8C413D"/>
+        </linearGradient>
+        <path id="x" d="M.615 29.389c0 5.151 3.132 11.136 6.995 13.366L58.956 72.4c3.863 2.23 6.996-.138 6.996-5.289V38.717L.615.996v28.393z"/>
+        <linearGradient id="y" x1="-4.361%" x2="157.759%" y1="50.276%" y2="49.409%">
+            <stop offset="0%" stop-color="#DF2226"/>
+            <stop offset="100%" stop-color="#F86B27"/>
+        </linearGradient>
+    </defs>
+    <g fill="none" fill-rule="evenodd">
+        <g transform="translate(112)">
+            <mask id="c" fill="#fff">
+                <use xlink:href="#a"/>
+            </mask>
+            <path fill="url(#b)" d="M126.242 0H0v149.929h.081c-.286 1.619.511 3.209 2.494 4.353l53.642 30.971c3.813 2.202 10.476 1.925 14.88-.618l51.503-29.734c2.464-1.423 3.65-3.25 3.593-4.972h.049V0z" mask="url(#c)"/>
+        </g>
+        <g transform="translate(0 262.5)">
+            <mask id="e" fill="#fff">
+                <use xlink:href="#d"/>
+            </mask>
+            <path d="M6.886 6.98c-9.181 9.181-9.181 24.068 0 33.249 9.182 9.182 24.068 9.182 33.249 0 9.182-9.181 9.182-24.068 0-33.249A23.433 23.433 0 0 0 23.511.094 23.437 23.437 0 0 0 6.886 6.98" mask="url(#e)"/>
+        </g>
+        <g transform="translate(44 109.5)">
+            <mask id="g" fill="#fff">
+                <use xlink:href="#f"/>
+            </mask>
+            <path d="M.206 10.482c0 5.377 4.36 9.736 9.736 9.736 5.378 0 9.737-4.359 9.737-9.736 0-5.378-4.359-9.736-9.737-9.736C4.566.746.206 5.104.206 10.482" mask="url(#g)"/>
+        </g>
+        <g transform="translate(315 196.5)">
+            <mask id="i" fill="#fff">
+                <use xlink:href="#h"/>
+            </mask>
+            <path d="M2.202 10.533c-4.299 8.655-.77 19.157 7.884 23.457 8.655 4.299 19.156.77 23.456-7.884a17.425 17.425 0 0 0 1.832-7.665v-.216c-.04-6.395-3.597-12.535-9.715-15.575A17.425 17.425 0 0 0 17.886.819c-6.434 0-12.627 3.562-15.684 9.714" mask="url(#i)"/>
+        </g>
+        <path fill="#757575" d="M337.354 285.698c-7.036 1.365-14.658-.82-17.023-4.884-2.366-4.062 1.421-8.464 8.458-9.829 7.037-1.365 14.659.821 17.024 4.884 2.365 4.062-1.422 8.463-8.459 9.83M29.748 364.935c-9.791 1.899-20.395-1.143-23.686-6.795-3.291-5.652 1.978-11.775 11.769-13.675 9.79-1.9 20.395 1.143 23.685 6.795 3.29 5.653-1.978 11.775-11.768 13.675M56.081 159.338c-3.515.682-7.32-.41-8.503-2.44-1.18-2.03.711-4.227 4.225-4.91 3.516-.68 7.322.412 8.504 2.44 1.18 2.03-.71 4.228-4.226 4.91"/>
+        <path fill="#DF2226" d="M120.757 79.471a2.494 2.494 0 1 1-4.988 0 2.494 2.494 0 0 1 4.988 0M77.548 333.106a3.005 3.005 0 1 1-6.01 0 3.005 3.005 0 0 1 6.01 0"/>
+        <path fill="#F86B27" d="M57.036 319.807a1.547 1.547 0 1 1-3.094 0 1.547 1.547 0 0 1 3.094 0M319.627 262.7a1.6 1.6 0 1 1-3.199-.001 1.6 1.6 0 0 1 3.199 0M23.789 154.769a2.25 2.25 0 1 1-4.501 0 2.25 2.25 0 1 1 4.5 0"/>
+        <path fill="#713C80" d="M191.996 72.7a2.25 2.25 0 1 1-4.501-.001 2.25 2.25 0 0 1 4.5 0"/>
+        <path fill="#F86B27" d="M32.951 149.781a1.247 1.247 0 1 1-2.494 0 1.247 1.247 0 0 1 2.494 0"/>
+        <path fill="#DF2226" d="M247.481 80.738a1.473 1.473 0 1 1-2.946 0 1.473 1.473 0 0 1 2.946 0"/>
+        <path fill="#F86B27" d="M31.24 228.566l-.538-1.926 4.28-1.197-1.197-4.28 1.926-.539 1.736 6.206z"/>
+        <path fill="#F86B27" d="M27.463 235.276l-.539-1.926 4.28-1.197-1.196-4.28 1.926-.539 1.735 6.206z"/>
+        <path fill="#F86B27" d="M31.474 233.116l-1.735-6.206 6.206-1.736.54 1.925-4.282 1.198 1.197 4.28z"/>
+        <path fill="#713C80" d="M179.164 77.977a2.764 2.764 0 0 0-2.761 2.76 2.765 2.765 0 0 0 2.76 2.762 2.764 2.764 0 0 0 2.76-2.762 2.764 2.764 0 0 0-2.76-2.76m0 7.521a4.767 4.767 0 0 1-4.76-4.76 4.766 4.766 0 0 1 4.76-4.76 4.765 4.765 0 0 1 4.76 4.76 4.766 4.766 0 0 1-4.76 4.76"/>
+        <path fill="#F86B27" d="M281.002 318.256l2.573 2.574 2.575-2.574-2.575-2.574-2.573 2.574zm2.573 5.402l-5.401-5.402 5.401-5.402 5.404 5.402-5.404 5.402z"/>
+        <path fill="#DF2226" d="M300.805 133.69l-5.783-5.783 1.414-1.414 5.783 5.783z"/>
+        <path fill="#DF2226" d="M296.436 133.69l-1.414-1.414 5.783-5.783 1.414 1.414z"/>
+        <g transform="translate(64 231.5)">
+            <mask id="k" fill="#fff">
+                <use xlink:href="#j"/>
+            </mask>
+            <path d="M95.387 4.612L7.023 55.629c-4.13 2.385-6.5 5.581-6.5 8.769 0 2.67 1.658 5.098 4.668 6.835l92.032 53.135c3.029 1.749 7.086 2.606 11.336 2.606 5.112 0 10.499-1.241 14.693-3.663l88.365-51.018c4.131-2.385 6.5-5.581 6.5-8.769 0-2.671-1.657-5.098-4.668-6.836L121.416 3.554c-3.027-1.748-7.085-2.605-11.334-2.605-5.112 0-10.499 1.241-14.695 3.663m2.336 118.891L5.691 70.368c-2.688-1.552-4.168-3.672-4.168-5.97 0-2.821 2.187-5.702 6-7.904L95.887 5.476c4.059-2.342 9.263-3.544 14.194-3.544 4.066 0 7.945.818 10.836 2.487l92.033 53.135c2.687 1.552 4.167 3.672 4.167 5.97 0 2.82-2.187 5.701-6.001 7.904l-88.363 51.017c-4.059 2.343-9.265 3.545-14.195 3.545-4.066-.001-7.945-.818-10.835-2.487" mask="url(#k)"/>
+        </g>
+        <path fill="#757575" fill-opacity=".3" d="M164.566 339.775l-68.002-39.26c-4.835-2.792-4.229-7.669 1.354-10.89l65.292-37.698c5.583-3.222 14.028-3.573 18.862-.78l68.003 39.26c4.835 2.792 4.23 7.668-1.354 10.89l-65.29 37.697c-5.584 3.222-14.03 3.573-18.865.781"/>
+        <g transform="translate(88 149.5)">
+            <mask id="n" fill="#fff">
+                <use xlink:href="#l"/>
+            </mask>
+            <path fill="url(#m)" d="M74.607 3.655L36.262 25.793H.546V125.89h.01c-.145 2.003.971 3.934 3.433 5.356l72.056 41.601c5.123 2.958 14.071 2.587 19.988-.828l69.183-39.943c3.073-1.774 4.71-4.021 4.867-6.186h.027V25.793h-35.735L94.596 2.826C92.275 1.486 89.168.83 85.914.829c-3.928 0-8.071.958-11.307 2.826" mask="url(#n)"/>
+        </g>
+        <g>
+            <path fill="#D4088C" d="M97.595 175.293h-9.05v100.098h.012c-.104 1.421.442 2.801 1.64 3.995-.765-1.021-1.112-2.144-1.028-3.293h-.012V175.996h9.145a13.392 13.392 0 0 0-.707-.703"/>
+            <path fill="#F86B27" d="M246.671 175.293c-.064.201-.134.401-.205.602h11.001v100.097h-.026c-.098 1.342-.766 2.715-1.99 3.988 1.627-1.436 2.518-3.033 2.632-4.589h.026V175.293h-11.438z"/>
+            <path fill="#DF2226" d="M113.786 293.328v-116.1H100.81v108.608zM121.754 297.929V185.025h-3.545v110.857zM128.1 301.592V186.388h-3.545v113.157z" style="mix-blend-mode:overlay"/>
+        </g>
+        <g transform="translate(88 125.5)">
+            <mask id="q" fill="#fff">
+                <use xlink:href="#o"/>
+            </mask>
+            <path fill="url(#p)" d="M74.607 3.665L5.424 43.608c-5.916 3.416-6.559 8.583-1.435 11.54L76.045 96.75c5.123 2.957 14.071 2.586 19.988-.828l69.183-39.944c5.916-3.415 6.558-8.582 1.434-11.54L94.596 2.837C92.274 1.497 89.168.84 85.914.84c-3.928 0-8.071.957-11.307 2.825" mask="url(#q)"/>
+        </g>
+        <g>
+            <path fill="#D4088C" d="M164.838 222.25l-72.056-41.602c-5.123-2.957-4.481-8.124 1.435-11.54l69.184-39.942c2.93-1.692 6.604-2.632 10.19-2.795-3.834.05-7.835.977-10.984 2.795l-69.183 39.942c-5.916 3.416-6.558 8.583-1.435 11.54l72.056 41.602c2.585 1.493 6.145 2.132 9.797 1.966-3.368.044-6.608-.582-9.004-1.966"/>
+            <path fill="#F86B27" d="M254.65 169.938l-72.055-41.602c-2.59-1.494-6.156-2.133-9.814-1.964 3.327-.028 6.522.597 8.89 1.964l72.057 41.602c5.123 2.958 4.48 8.125-1.435 11.54l-69.183 39.943c-2.926 1.689-6.594 2.628-10.174 2.793 3.868-.032 7.917-.957 11.097-2.793l69.183-39.943c5.915-3.415 6.558-8.582 1.435-11.54"/>
+            <path fill="#F86B27" d="M166.415 210.252l-53.646-30.972c-3.814-2.202-3.335-6.049 1.069-8.591l51.507-29.738c4.404-2.543 11.067-2.819 14.88-.617l53.646 30.972c3.815 2.203 3.336 6.049-1.068 8.592l-51.507 29.737c-4.404 2.543-11.067 2.82-14.881.617"/>
+            <path fill="#4A2754" d="M113.837 176.475l51.507-29.737c4.404-2.543 11.067-2.819 14.881-.617l53.645 30.972c.5.288.921.606 1.274.944 2.05-2.315 1.691-5.019-1.274-6.73l-53.645-30.973c-3.814-2.202-10.477-1.926-14.88.617l-51.508 29.737c-3.828 2.21-4.686 5.404-2.342 7.648.588-.662 1.362-1.295 2.342-1.86"/>
+            <path fill="#F86B27" d="M186.088 229.354l60.952-35.256c.558-.322 1.01-1.185 1.01-1.929v-1c0-.743-.452-1.085-1.01-.763l-60.952 35.256c-.558.322-1.01 1.185-1.01 1.929v1c0 .743.452 1.085 1.01.763"/>
+            <path fill="#C7531B" d="M245.706 193.336c.558-.322 1.01-1.186 1.01-1.93v-.812l.325-.188c.558-.321 1.01.02 1.01.763v1c0 .744-.452 1.607-1.01 1.93l-60.952 35.255c-.558.322-1.01-.02-1.01-.763v-.187l60.627-35.068z"/>
+            <path fill="#F86B27" d="M186.088 240.081l60.952-35.255c.558-.323 1.01-1.186 1.01-1.93v-1c0-.743-.452-1.085-1.01-.763l-60.952 35.256c-.558.322-1.01 1.185-1.01 1.928v1c0 .744.452 1.086 1.01.764"/>
+            <path fill="#C7531B" d="M245.706 204.063c.558-.322 1.01-1.186 1.01-1.929v-.813l.325-.188c.558-.321 1.01.02 1.01.763v1c0 .744-.452 1.607-1.01 1.93l-60.952 35.255c-.558.322-1.01-.019-1.01-.764v-.186l60.627-35.068z"/>
+            <path fill="#F86B27" d="M186.088 250.808l60.952-35.255c.558-.322 1.01-1.185 1.01-1.93v-1c0-.742-.452-1.084-1.01-.762l-60.952 35.256c-.558.32-1.01 1.185-1.01 1.928v1c0 .743.452 1.085 1.01.763"/>
+            <path fill="#C7531B" d="M245.706 214.79c.558-.322 1.01-1.186 1.01-1.929v-.813l.325-.188c.558-.321 1.01.02 1.01.763v1c0 .744-.452 1.607-1.01 1.93l-60.952 35.255c-.558.322-1.01-.02-1.01-.763v-.187l60.627-35.068z"/>
+            <path fill="#F86B27" d="M186.088 261.535l60.952-35.255c.558-.323 1.01-1.186 1.01-1.93v-1c0-.743-.452-1.085-1.01-.763l-60.952 35.256c-.558.322-1.01 1.185-1.01 1.928v1c0 .744.452 1.086 1.01.764"/>
+            <path fill="#C7531B" d="M245.706 225.517c.558-.322 1.01-1.186 1.01-1.929v-.813l.325-.188c.558-.32 1.01.02 1.01.763v1c0 .744-.452 1.607-1.01 1.93l-60.952 35.255c-.558.322-1.01-.019-1.01-.764v-.186l60.627-35.068z"/>
+            <path fill="#F86B27" d="M186.088 272.262l60.952-35.255c.558-.322 1.01-1.185 1.01-1.93v-1c0-.742-.452-1.084-1.01-.762l-60.952 35.256c-.558.32-1.01 1.185-1.01 1.928v1c0 .743.452 1.085 1.01.763"/>
+            <path fill="#C7531B" d="M245.706 236.244c.558-.322 1.01-1.186 1.01-1.929v-.813l.325-.188c.558-.32 1.01.02 1.01.763v1c0 .744-.452 1.607-1.01 1.93l-60.952 35.255c-.558.322-1.01-.02-1.01-.763v-.187l60.627-35.068z"/>
+            <path fill="#F86B27" d="M186.088 282.99l60.952-35.257c.558-.32 1.01-1.185 1.01-1.929v-1c0-.743-.452-1.085-1.01-.762l-60.952 35.255c-.558.322-1.01 1.185-1.01 1.928v1c0 .744.452 1.086 1.01.764"/>
+            <path fill="#C7531B" d="M245.706 246.972c.558-.322 1.01-1.186 1.01-1.93v-.812l.325-.188c.558-.322 1.01.02 1.01.763v1c0 .743-.452 1.607-1.01 1.929l-60.952 35.256c-.558.322-1.01-.02-1.01-.764v-.187l60.627-35.067z"/>
+            <path fill="#F86B27" d="M186.088 293.716l60.952-35.255c.558-.322 1.01-1.185 1.01-1.93v-1c0-.743-.452-1.085-1.01-.762l-60.952 35.256c-.558.32-1.01 1.185-1.01 1.928v1c0 .743.452 1.085 1.01.763"/>
+            <path fill="#C7531B" d="M245.706 257.698c.558-.322 1.01-1.186 1.01-1.929v-.813l.325-.188c.558-.322 1.01.02 1.01.764v1c0 .743-.452 1.606-1.01 1.93l-60.952 35.254c-.558.322-1.01-.02-1.01-.763v-.187l60.627-35.068z"/>
+            <path fill="#F86B27" d="M186.088 304.443l60.952-35.256c.558-.32 1.01-1.185 1.01-1.929v-1c0-.743-.452-1.085-1.01-.762l-60.952 35.255c-.558.322-1.01 1.185-1.01 1.928v1c0 .744.452 1.086 1.01.764"/>
+            <path fill="#C7531B" d="M245.706 268.426c.558-.322 1.01-1.186 1.01-1.93v-.812l.325-.188c.558-.322 1.01.02 1.01.763v1c0 .743-.452 1.607-1.01 1.929l-60.952 35.256c-.558.322-1.01-.02-1.01-.764v-.187l60.627-35.067z"/>
+            <path fill="#F86B27" d="M186.088 315.17l60.952-35.255c.558-.322 1.01-1.185 1.01-1.93v-1c0-.743-.452-1.085-1.01-.762l-60.952 35.256c-.558.32-1.01 1.185-1.01 1.928v1c0 .743.452 1.085 1.01.763"/>
+            <path fill="#C7531B" d="M245.706 279.152c.558-.322 1.01-1.186 1.01-1.929v-.813l.325-.188c.558-.32 1.01.02 1.01.764v1c0 .743-.452 1.606-1.01 1.93l-60.952 35.254c-.558.322-1.01-.02-1.01-.763v-.187l60.627-35.068z"/>
+            <path fill="#F86B27" d="M174.956 231.099a1.636 1.636 0 1 1-3.272 0 1.636 1.636 0 0 1 3.272 0"/>
+            <path fill="#C7531B" d="M173.32 232.121a1.636 1.636 0 0 1-1.605-1.329 1.636 1.636 0 1 0 3.21 0 1.636 1.636 0 0 1-1.605 1.33"/>
+            <path fill="#F86B27" d="M174.956 241.825a1.636 1.636 0 1 1-3.272.002 1.636 1.636 0 0 1 3.272-.002"/>
+            <path fill="#C7531B" d="M173.32 242.848a1.635 1.635 0 0 1-1.605-1.33 1.636 1.636 0 1 0 3.21 0 1.635 1.635 0 0 1-1.605 1.33"/>
+            <path fill="#F86B27" d="M174.956 252.553a1.636 1.636 0 1 1-3.272 0 1.636 1.636 0 0 1 3.272 0"/>
+            <path fill="#C7531B" d="M173.32 253.575a1.635 1.635 0 0 1-1.605-1.329 1.636 1.636 0 1 0 3.21 0 1.635 1.635 0 0 1-1.605 1.33"/>
+            <path fill="#F86B27" d="M174.956 263.28a1.636 1.636 0 1 1-3.272 0 1.636 1.636 0 0 1 3.272 0"/>
+            <path fill="#C7531B" d="M173.32 264.303a1.636 1.636 0 0 1-1.605-1.33 1.636 1.636 0 1 0 3.21 0 1.636 1.636 0 0 1-1.605 1.33"/>
+            <path fill="#F86B27" d="M174.956 274.007a1.636 1.636 0 1 1-3.272 0 1.636 1.636 0 0 1 3.272 0"/>
+            <path fill="#C7531B" d="M173.32 275.03a1.635 1.635 0 0 1-1.605-1.33 1.636 1.636 0 1 0 3.21 0 1.635 1.635 0 0 1-1.605 1.33"/>
+            <path fill="#F86B27" d="M174.956 284.733a1.636 1.636 0 1 1-3.272.002 1.636 1.636 0 0 1 3.272-.002"/>
+            <path fill="#C7531B" d="M173.32 285.757a1.636 1.636 0 0 1-1.605-1.33 1.636 1.636 0 1 0 3.21 0 1.636 1.636 0 0 1-1.605 1.33"/>
+            <path fill="#F86B27" d="M174.956 295.46a1.636 1.636 0 1 1-3.272 0 1.636 1.636 0 0 1 3.272 0"/>
+            <path fill="#C7531B" d="M173.32 296.483a1.635 1.635 0 0 1-1.605-1.329 1.636 1.636 0 1 0 3.21 0 1.635 1.635 0 0 1-1.605 1.33"/>
+            <path fill="#F86B27" d="M174.956 306.188a1.636 1.636 0 1 1-3.272 0 1.636 1.636 0 0 1 3.272 0"/>
+            <path fill="#C7531B" d="M173.32 307.21a1.636 1.636 0 0 1-1.605-1.33 1.636 1.636 0 1 0 3.21 0 1.636 1.636 0 0 1-1.605 1.33"/>
+            <path fill="#F86B27" d="M174.956 316.915a1.636 1.636 0 1 1-3.272 0 1.636 1.636 0 0 1 3.272 0"/>
+            <path fill="#C7531B" d="M173.32 317.938a1.635 1.635 0 0 1-1.605-1.33 1.636 1.636 0 1 0 3.21 0 1.635 1.635 0 0 1-1.605 1.33"/>
+            <path fill="#4A2754" d="M145.675 227.347a.92.92 0 0 0-.463.119c-.423.244-.665.839-.665 1.63v65.839c0 1.599.97 3.459 2.158 4.145l2.441 1.41c.468.27.91.31 1.245.12.422-.244.665-.84.665-1.631V233.14c0-1.6-.969-3.458-2.158-4.146l-2.44-1.41c-.276-.158-.543-.238-.783-.238m4.254 73.88c-.327 0-.678-.104-1.03-.308l-2.442-1.41c-1.327-.765-2.406-2.817-2.406-4.574v-65.838c0-.975.332-1.726.912-2.061.494-.286 1.113-.243 1.742.12l2.441 1.41c1.327.765 2.406 2.818 2.406 4.575v65.838c0 .974-.333 1.725-.913 2.06a1.402 1.402 0 0 1-.71.188M106.1 275.992l2.44 1.408c.47.272.912.312 1.246.121.422-.244.665-.838.665-1.63v-65.838c0-1.599-.969-3.459-2.159-4.147l-2.44-1.408c-.47-.27-.912-.312-1.245-.12-.422.244-.665.84-.665 1.631v65.837c0 1.6.97 3.46 2.158 4.146m3.225 2.146c-.328 0-.679-.103-1.033-.308l-2.44-1.408c-1.327-.766-2.406-2.819-2.406-4.576V206.01c0-.975.333-1.726.913-2.06.493-.288 1.113-.244 1.741.12l2.44 1.407c1.328.767 2.407 2.82 2.407 4.577v65.837c0 .975-.332 1.726-.913 2.061a1.397 1.397 0 0 1-.709.187M125.373 215.803a.92.92 0 0 0-.463.119c-.423.244-.665.839-.665 1.63v65.839c0 1.599.969 3.459 2.158 4.145l2.44 1.41c.468.27.912.31 1.246.12.423-.244.665-.84.665-1.631v-65.838c0-1.6-.97-3.46-2.158-4.146l-2.441-1.41c-.275-.158-.542-.238-.782-.238m4.254 73.88c-.327 0-.678-.104-1.031-.308l-2.441-1.41c-1.327-.765-2.406-2.817-2.406-4.574v-65.838c0-.975.332-1.726.912-2.061.494-.286 1.113-.244 1.742.12l2.44 1.41c1.328.765 2.407 2.818 2.407 4.575v65.838c0 .974-.332 1.725-.912 2.06a1.405 1.405 0 0 1-.711.188M237.873 160.17l-2.306 12.536 11.966-6.863z"/>
+        </g>
+        <g transform="translate(172 99.5)">
+            <mask id="t" fill="#fff">
+                <use xlink:href="#r"/>
+            </mask>
+            <path fill="url(#s)" d="M.247 1.705l2.79 3.181V28.48c0 5.152 3.132 11.135 6.996 13.366L61.377 71.49c.055.032.107.052.16.082l1.658 1.889 2.608-1.521c.036-.019.071-.042.107-.063l.247-.144-.008-.007c1.366-.914 2.225-2.835 2.225-5.524V37.808L3.037.086.247 1.705z" mask="url(#t)"/>
+        </g>
+        <g>
+            <path fill="#F86B27" d="M172.247 101.205l65.336 37.722v28.394c0 5.151-3.13 7.52-6.996 5.288l-51.344-29.644c-3.864-2.23-6.996-8.215-6.996-13.366v-28.394z"/>
+            <path fill="#4A2754" d="M170.707 211.602l-12.507 7.274-56.137-32.411 11.628-6.68z"/>
+        </g>
+        <g transform="translate(108 137.5)">
+            <mask id="w" fill="#fff">
+                <use xlink:href="#u"/>
+            </mask>
+            <path fill="url(#v)" d="M.615 1.995l2.79 3.181V28.77c0 5.151 3.132 11.135 6.995 13.367l51.346 29.644c.053.03.106.052.16.081l1.657 1.889 2.607-1.52c.037-.019.072-.043.108-.063l.246-.144-.007-.008c1.365-.915 2.225-2.834 2.225-5.524V38.098L3.405.376.615 1.995z" mask="url(#w)"/>
+        </g>
+        <g transform="translate(108 138.5)">
+            <mask id="z" fill="#fff">
+                <use xlink:href="#x"/>
+            </mask>
+            <path fill="url(#y)" d="M.615 29.389c0 5.151 3.132 11.136 6.995 13.366L58.956 72.4c3.863 2.23 6.996-.138 6.996-5.289V38.717L.615.996v28.393z" mask="url(#z)"/>
+        </g>
+        <g>
+            <path fill="#38025B" d="M247.724 165.956l-9.852-5.787 9.852 5.787z"/>
+            <path fill="#DF2226" d="M3.612 196.375a1.473 1.473 0 1 1-2.946 0 1.473 1.473 0 0 1 2.946 0"/>
+        </g>
+    </g>
+</svg>
diff --git a/docs/assets/images/integrations/hibernate.svg b/docs/assets/images/integrations/hibernate.svg
new file mode 100644
index 0000000..38e4f31
--- /dev/null
+++ b/docs/assets/images/integrations/hibernate.svg
@@ -0,0 +1,6 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="32" height="34" viewBox="0 0 32 34">
+    <g fill="none" fill-rule="nonzero">
+        <path fill="#59666C" d="M19.209 22.367l-.117.344 6.143 10.418.347.246L32 22.277l-6.418-11.142-6.373 11.232zM6.284.038L0 11.136l6.53 11.23 6.216-11.23-.016-.454L6.57.264 6.285.038z"/>
+        <path fill="#BCAE79" d="M6.284.038l6.463 11.098h12.835L19.075.037H6.285zm.245 22.329l6.329 11.008h12.724L19.21 22.367H6.529z"/>
+    </g>
+</svg>
diff --git a/docs/assets/images/integrations/kafka.svg b/docs/assets/images/integrations/kafka.svg
new file mode 100644
index 0000000..db145e2
--- /dev/null
+++ b/docs/assets/images/integrations/kafka.svg
@@ -0,0 +1,3 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="33" height="52" viewBox="0 0 33 52">
+    <path fill="#000" fill-rule="nonzero" d="M25.248 28.802c-2.024 0-3.84.897-5.083 2.31l-3.186-2.256c.338-.931.533-1.93.533-2.977a8.697 8.697 0 0 0-.515-2.929l3.179-2.231a6.758 6.758 0 0 0 5.072 2.298 6.786 6.786 0 0 0 6.78-6.78 6.786 6.786 0 0 0-6.78-6.778 6.786 6.786 0 0 0-6.778 6.779c0 .669.1 1.314.282 1.925l-3.18 2.232a8.75 8.75 0 0 0-5.422-3.15v-3.833c3.071-.645 5.385-3.373 5.385-6.633A6.786 6.786 0 0 0 8.755 0a6.786 6.786 0 0 0-6.778 6.779c0 3.216 2.254 5.91 5.263 6.602v3.883C3.133 17.984 0 21.568 0 25.879c0 4.331 3.164 7.928 7.3 8.625v4.1c-3.04.668-5.323 3.38-5.323 6.617A6.786 6.786 0 0 0 8.756 52a6.786 6.786 0 0 0 6.779-6.779c0-3.238-2.284-5.949-5.324-6.617v-4.1a8.753 8.753 0 0 0 5.33-3.1l3.206 2.27a6.746 6.746 0 0 0-.277 1.906 6.786 6.786 0 0 0 6.778 6.779 6.786 6.786 0 0 0 6.78-6.779 6.786 6.786 0 0 0-6.78-6.778zm0-15.85a3.29 3.29 0 0 1 3.287 3.286 3.29 3.29 0 0 1-3.287 3.286 3.29 3.29 0 0 1-3.286-3.286 3.29 3.29 0 0 1 3.286-3.287zM5.47 6.778a3.29 3.29 0 0 1 3.287-3.287 3.29 3.29 0 0 1 3.286 3.287 3.29 3.29 0 0 1-3.286 3.286A3.29 3.29 0 0 1 5.469 6.78zm6.573 38.442a3.29 3.29 0 0 1-3.286 3.287 3.29 3.29 0 0 1-3.287-3.287 3.29 3.29 0 0 1 3.287-3.286 3.29 3.29 0 0 1 3.286 3.286zM8.756 30.462A4.589 4.589 0 0 1 4.17 25.88a4.59 4.59 0 0 1 4.585-4.584 4.59 4.59 0 0 1 4.584 4.584 4.589 4.589 0 0 1-4.584 4.583zm16.492 8.405a3.29 3.29 0 0 1-3.286-3.287 3.29 3.29 0 0 1 3.286-3.286 3.29 3.29 0 0 1 3.287 3.286 3.29 3.29 0 0 1-3.287 3.287z"/>
+</svg>
diff --git a/docs/assets/images/integrations/more.svg b/docs/assets/images/integrations/more.svg
new file mode 100644
index 0000000..42a3af1
--- /dev/null
+++ b/docs/assets/images/integrations/more.svg
@@ -0,0 +1,18 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
+<svg width="33.459" height="8" version="1.1" viewBox="0 0 8.8528 2.1167" xmlns="http://www.w3.org/2000/svg" xmlns:cc="http://creativecommons.org/ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
+ <metadata>
+  <rdf:RDF>
+   <cc:Work rdf:about="">
+    <dc:format>image/svg+xml</dc:format>
+    <dc:type rdf:resource="http://purl.org/dc/dcmitype/StillImage"/>
+    <dc:title/>
+   </cc:Work>
+  </rdf:RDF>
+ </metadata>
+ <g transform="translate(-5.163 -286.42)" fill="#ededed" stroke="#8e8e8e" stroke-width=".13935">
+  <circle cx="9.525" cy="287.47" r=".98866" style="paint-order:markers fill stroke"/>
+  <circle cx="6.2213" cy="287.47" r=".98866" style="paint-order:markers fill stroke"/>
+  <circle cx="12.957" cy="287.47" r=".98866" style="paint-order:markers fill stroke"/>
+ </g>
+</svg>
diff --git a/docs/assets/images/integrations/oracle.svg b/docs/assets/images/integrations/oracle.svg
new file mode 100644
index 0000000..484c750
--- /dev/null
+++ b/docs/assets/images/integrations/oracle.svg
@@ -0,0 +1,3 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="92" height="12" viewBox="0 0 92 12">
+    <path fill="#F80000" fill-rule="nonzero" d="M38.297 7.504h5.853l-3.095-4.98-5.68 9.003H32.79L39.699.713c.3-.437.8-.7 1.356-.7.537 0 1.038.254 1.329.682l6.936 10.832h-2.585l-1.22-2.012H39.59l-1.293-2.011zm26.852 2.011V.121h-2.194v10.313c0 .283.11.556.32.765.208.21.49.328.8.328h10.004l1.292-2.012H65.15zM28.858 7.831a3.854 3.854 0 1 0 0-7.71H19.26v11.406h2.193V2.133h7.258c1.02 0 1.839.828 1.839 1.848s-.82 1.848-1.839 1.848l-6.184-.01 6.548 5.708h3.186L27.856 7.83h1.002zM5.762 11.527A5.7 5.7 0 0 1 .058 5.829 5.708 5.708 0 0 1 5.762.12h6.63a5.707 5.707 0 0 1 5.702 5.708 5.699 5.699 0 0 1-5.703 5.698h-6.63zm6.482-2.012a3.687 3.687 0 0 0 3.692-3.686 3.694 3.694 0 0 0-3.692-3.696H5.909a3.695 3.695 0 0 0-3.692 3.696 3.687 3.687 0 0 0 3.692 3.686h6.335zm41.654 2.012a5.703 5.703 0 0 1-5.707-5.698A5.71 5.71 0 0 1 53.898.12h7.874l-1.284 2.012h-6.444a3.69 3.69 0 1 0 0 7.382h7.91l-1.292 2.012h-6.764zm26.825-2.012a3.686 3.686 0 0 1-3.55-2.685h9.376l1.292-2.012H77.173a3.697 3.697 0 0 1 3.55-2.685h6.436L88.46.121h-7.883a5.71 5.71 0 0 0-5.707 5.707 5.703 5.703 0 0 0 5.707 5.699h6.763l1.292-2.012h-7.91zm8.912-8.183A.998.998 0 0 1 90.636.331a1 1 0 0 1 1.01 1.001c0 .564-.446 1.01-1.01 1.01a1 1 0 0 1-1.001-1.01zm1.001 1.293c.71 0 1.284-.574 1.284-1.284a1.28 1.28 0 0 0-2.558 0c0 .71.573 1.284 1.274 1.284zM90.518.576c.2 0 .282.01.373.046.255.082.282.31.282.4 0 .019 0 .064-.018.119a.369.369 0 0 1-.173.246c-.018.009-.027.018-.064.036l.328.592h-.319l-.291-.546h-.2v.546h-.282V.576h.364zm.1.656c.09-.01.182-.01.236-.091a.192.192 0 0 0 .037-.128.204.204 0 0 0-.11-.172c-.063-.028-.127-.028-.263-.028h-.082v.419h.182z"/>
+</svg>
diff --git a/docs/assets/images/integrations/osgi.svg b/docs/assets/images/integrations/osgi.svg
new file mode 100644
index 0000000..dae8409
--- /dev/null
+++ b/docs/assets/images/integrations/osgi.svg
@@ -0,0 +1,17 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="42" height="39" viewBox="0 0 42 39">
+    <g fill="none" fill-rule="nonzero">
+        <path fill="#FF7300" d="M9.376 12.332S15.346.008 21.069.008c5.722 0 11.483 12.324 11.483 12.324l-.936 1.815S27.87 9.256 21.069 9.256c-6.802 0-10.7 4.89-10.7 4.89l-.993-1.814z"/>
+        <path fill="#FFB780" d="M13.537 12.488a13.108 13.108 0 0 0-2.595 2.374c1.753 2.27 5.022 5.247 10.128 5.247 5.077 0 8.24-2.942 9.927-5.206a13.129 13.129 0 0 0-2.557-2.368c-1.64 1.413-4.071 2.779-7.37 2.779-3.35 0-5.852-1.4-7.533-2.826z"/>
+        <path fill="#F30" d="M31.034 9.46c-.033.06-.642 1.174-1.886 2.42 1.62 1.16 2.468 2.267 2.468 2.267l.936-1.816s-.565-1.21-1.518-2.872zm-20.108.03c-.97 1.645-1.552 2.841-1.552 2.841l.997 1.816s.872-1.096 2.514-2.25c-1.301-1.26-1.948-2.388-1.96-2.408z"/>
+        <g>
+            <path fill="#FF7300" d="M36.238 18.719s7.688 11.333 4.826 16.288c-2.86 4.956-16.414 3.783-16.414 3.783l-1.103-1.718s6.108-.798 9.509-6.688c3.4-5.891 1.114-11.712 1.114-11.712l2.068.047z"/>
+            <path fill="#FFB780" d="M34.022 22.245a13.108 13.108 0 0 0-.759-3.435c-2.841.383-7.054 1.726-9.607 6.148-2.538 4.397-1.573 8.607-.455 11.2 1.148-.194 2.271-.54 3.33-1.03-.405-2.127-.372-4.915 1.278-7.772 1.675-2.901 4.137-4.369 6.213-5.111z"/>
+            <path fill="#F30" d="M27.897 38.912c-.036-.059-.696-1.143-1.154-2.843-1.814.822-3.197 1.003-3.197 1.003l1.105 1.719s1.331.115 3.246.121zm10.028-17.43a44.635 44.635 0 0 0-1.685-2.764l-2.071-.045s.513 1.304.692 3.303c1.742-.497 3.042-.494 3.064-.493z"/>
+        </g>
+        <g>
+            <path fill="#FF7300" d="M17.276 38.79s-13.658.99-16.52-3.966c-2.86-4.955 4.932-16.106 4.932-16.106l2.04-.097s-2.364 5.69 1.037 11.58 9.586 6.82 9.586 6.82l-1.075 1.768z"/>
+            <path fill="#FFB780" d="M15.33 35.107c1.066.503 2.195.86 3.354 1.06 1.09-2.652 2.032-6.972-.52-11.394-2.54-4.396-6.668-5.665-9.473-5.994a13.129 13.129 0 0 0-.771 3.399c2.043.713 4.441 2.136 6.09 4.992 1.676 2.902 1.715 5.768 1.32 7.937z"/>
+            <path fill="#F30" d="M3.958 21.469c.07-.002 1.339-.031 3.04.422.195-1.982.73-3.27.73-3.27l-2.041.097s-.766 1.095-1.729 2.75zm10.08 17.4c1.911.016 3.238-.078 3.238-.078l1.074-1.771s-1.385-.208-3.206-1.052c-.44 1.757-1.094 2.88-1.105 2.9z"/>
+        </g>
+    </g>
+</svg>
diff --git a/docs/assets/images/integrations/spark.svg b/docs/assets/images/integrations/spark.svg
new file mode 100644
index 0000000..c0cb1b8
--- /dev/null
+++ b/docs/assets/images/integrations/spark.svg
@@ -0,0 +1,7 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="72" height="38" viewBox="0 0 72 38">
+    <g fill="none" fill-rule="evenodd">
+        <path fill="#E25A1C" d="M65.712 19.035c-.062-.133-.09-.2-.124-.264-.9-1.71-1.798-3.423-2.706-5.13-.091-.171-.08-.273.045-.42 1.432-1.672 2.856-3.351 4.281-5.028a.451.451 0 0 0 .115-.227c-.417.11-.834.217-1.25.327-1.73.459-3.46.915-5.188 1.38-.162.043-.235-.004-.315-.137-.982-1.641-1.97-3.278-2.956-4.916a.67.67 0 0 0-.215-.24c-.079.438-.16.875-.237 1.312-.275 1.544-.55 3.088-.823 4.632-.03.166-.071.332-.085.5-.014.16-.096.218-.24.264-2.039.64-4.075 1.286-6.111 1.931a.676.676 0 0 0-.273.152l5.024 1.997c-.062.049-.102.086-.147.116-1.042.674-2.085 1.345-3.124 2.022-.124.081-.223.093-.363.03-1.244-.56-2.493-1.11-3.739-1.667-.559-.25-1.062-.581-1.454-1.06-.886-1.084-.711-2.317.47-3.07a5.57 5.57 0 0 1 1.262-.574c1.996-.65 4-1.278 6.003-1.907.168-.053.246-.128.277-.31.267-1.545.544-3.089.822-4.632.15-.825.228-1.671.63-2.427.154-.29.339-.575.559-.82.795-.881 1.905-.916 2.745-.075.284.284.528.618.739.962.926 1.51 1.837 3.03 2.744 4.552.107.18.203.216.402.163 2.233-.599 4.47-1.188 6.705-1.779.461-.121.928-.166 1.4-.078 1.031.193 1.482.978 1.133 1.973-.159.454-.432.837-.74 1.199-1.56 1.834-3.118 3.671-4.683 5.501-.128.15-.13.258-.042.425.934 1.76 1.86 3.524 2.788 5.287.223.422.392.86.397 1.344.011 1.102-.794 2.004-1.89 2.165-.613.09-1.183-.041-1.757-.219-1.4-.433-2.804-.858-4.207-1.28-.13-.04-.18-.09-.203-.23-.162-.988-.338-1.974-.508-2.96-.005-.028.003-.057.007-.117 1.599.44 3.183.878 4.832 1.333"/>
+        <path fill="#3C3A3E" d="M62.981 33.422c-1.263-.002-2.526-.008-3.789-.003-.166 0-.261-.048-.354-.19-1.495-2.278-2.997-4.55-4.498-6.825-.048-.072-.099-.142-.184-.264l-.955 7.265h-3.308c.039-.322.073-.634.113-.944l.972-7.398c.31-2.356.618-4.712.932-7.068.01-.07.056-.16.113-.196 1.139-.743 2.28-1.478 3.423-2.214.016-.011.04-.013.098-.03l-1.03 7.863.04.028 5.414-6c.053.305.099.566.143.828.129.749.252 1.499.39 2.246.027.15-.013.244-.113.349-1.159 1.212-2.313 2.43-3.468 3.645l-.157.17c.037.06.069.116.106.169 1.993 2.797 3.985 5.593 5.98 8.39.036.05.088.09.132.135v.044M23.776 26.151c-.05-.258-.087-.636-.201-.99-.554-1.709-2.308-2.646-4.123-2.223-1.991.463-3.414 2.028-3.62 4.06-.152 1.503.657 2.95 2.162 3.494 1.212.438 2.38.255 3.463-.395 1.436-.862 2.216-2.141 2.319-3.946zm-8.744 6.353a510.144 510.144 0 0 0-.641 4.888c-.014.107-.047.154-.162.154-.909-.004-1.817-.003-2.726-.005-.02 0-.041-.011-.09-.025.055-.435.108-.871.165-1.308.2-1.528.4-3.057.604-4.586.233-1.748.417-3.504.715-5.24.527-3.076 3.136-5.726 6.197-6.386 1.775-.383 3.474-.206 5.023.793 1.545.996 2.432 2.441 2.634 4.26.286 2.57-.66 4.704-2.486 6.469-1.199 1.159-2.633 1.895-4.288 2.144-1.705.257-3.308-.027-4.746-1.032-.052-.037-.109-.07-.2-.126zM13.551 17.891l-2.988 2.224c-.159-.25-.301-.5-.467-.733-.428-.598-.96-1.045-1.733-1.1-.643-.045-1.193.167-1.63.641-.392.424-.443 1.027-.078 1.506a21.55 21.55 0 0 0 1.294 1.515c.746.809 1.526 1.587 2.267 2.4.675.74 1.212 1.57 1.378 2.581.198 1.203-.042 2.348-.61 3.405-1.053 1.955-2.711 3.09-4.9 3.445-.966.157-1.93.126-2.872-.152-1.252-.368-2.123-1.189-2.683-2.347-.198-.41-.35-.841-.529-1.28l3.24-1.734c.037.09.064.165.099.235.184.369.335.76.563 1.1.676 1.005 1.769 1.311 2.877.814.285-.128.56-.308.793-.515.714-.634.848-1.517.32-2.314-.305-.459-.685-.87-1.054-1.28-.884-.982-1.804-1.932-2.668-2.931-.595-.69-1-1.492-1.127-2.415a4.14 4.14 0 0 1 .586-2.806c1.303-2.108 3.196-3.204 5.701-3.113 1.428.052 2.565.72 3.448 1.829.26.328.506.67.773 1.025M37.745 29.207c-.166 1.27-.322 2.477-.485 3.683-.009.06-.056.144-.109.168-2.465 1.142-5.711.982-7.736-1.317-1.088-1.235-1.544-2.701-1.476-4.332.158-3.776 3.288-7.072 7.032-7.533 2.187-.27 4.105.322 5.587 2.02 1.01 1.158 1.476 2.54 1.407 4.066-.045 1.007-.203 2.01-.328 3.014-.178 1.42-.372 2.839-.56 4.258-.006.05-.017.1-.028.165h-2.95c.04-.326.076-.646.117-.964.214-1.64.449-3.277.636-4.92.116-1.022.043-2.037-.425-2.985-.497-1.006-1.332-1.539-2.43-1.655-2.27-.24-4.43 1.337-4.915 3.572-.32 1.476.184 2.89 1.359 3.671 1.145.762 2.372.765 3.62.27.632-.25 1.171-.646 1.684-1.18M50.165 20.163l-.403 3.058c-.624 0-1.235-.004-1.846.001a1.163 1.163 0 0 0-1.097.786c-.058.181-.08.375-.104.565l-.927 7.044-.23 1.786h-3.07c.057-.452.11-.889.168-1.325.2-1.522.4-3.043.602-4.565.174-1.316.33-2.635.536-3.947.275-1.758 2.025-3.32 3.8-3.397.846-.037 1.695-.006 2.57-.006"/>
+        <path fill="#3C3A3E" fill-rule="nonzero" d="M66.6 33.403v-1.296h-.007l-.508 1.296h-.162l-.509-1.296h-.008v1.296h-.256V31.85h.396l.463 1.179.456-1.18h.392v1.553h-.256zm-2.252-1.345v1.345h-.256v-1.345h-.487v-.207h1.229v.207h-.486zM17.664 15.711h.76l-.179-1.152-.581 1.152zm.88.745h-1.266l-.402.788h-.897l1.978-3.696h.864l.674 3.696h-.831l-.12-.788zM23.075 14.293h-.451l-.152.853h.45c.273 0 .49-.18.49-.516 0-.223-.136-.337-.337-.337zm-1.092-.745h1.185c.62 0 1.054.37 1.054 1.005 0 .8-.565 1.337-1.37 1.337h-.51l-.24 1.354h-.771l.652-3.696zM26.843 15.711h.761l-.18-1.152-.58 1.152zm.88.745h-1.266l-.402.788h-.897l1.979-3.696H28l.674 3.696h-.832l-.12-.788zM32.73 17.113c-.26.12-.548.19-.837.19-.978 0-1.592-.733-1.592-1.652 0-1.174.989-2.163 2.163-2.163.293 0 .56.07.777.19l-.109.887c-.162-.18-.424-.305-.75-.305-.673 0-1.271.609-1.271 1.315 0 .538.337.957.87.957.326 0 .64-.125.853-.299l-.104.88M37.446 15.755h-1.608l-.261 1.489h-.772l.652-3.696h.772l-.26 1.462h1.608l.26-1.462h.772l-.651 3.696h-.772l.26-1.49M40.277 17.244l.651-3.696h2.05l-.13.745H41.57l-.13.717h1.173l-.13.745H41.31l-.13.744h1.277l-.13.745h-2.05"/>
+    </g>
+</svg>
diff --git a/docs/assets/images/integrations/spring.svg b/docs/assets/images/integrations/spring.svg
new file mode 100644
index 0000000..0e5f74e
--- /dev/null
+++ b/docs/assets/images/integrations/spring.svg
@@ -0,0 +1,3 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="37" height="37" viewBox="0 0 37 37">
+    <path fill="#77BC1F" fill-rule="nonzero" d="M33.598 1.925a17.06 17.06 0 0 1-1.972 3.502A18.534 18.534 0 0 0 18.524 0C8.362 0 0 8.372 0 18.548c0 5.076 2.079 9.936 5.754 13.438l.684.607a18.525 18.525 0 0 0 11.932 4.364c9.661 0 17.783-7.574 18.471-17.225.505-4.725-.88-10.7-3.243-17.807zM8.386 32.092c-.298.37-.755.587-1.231.587-.871 0-1.583-.717-1.583-1.587 0-.871.717-1.588 1.583-1.588a1.593 1.593 0 0 1 1.232 2.589zm25.135-5.552c-4.57 6.096-14.333 4.042-20.593 4.335 0 0-1.111.067-2.227.25 0 0 .418-.178.962-.385 4.393-1.53 6.471-1.828 9.142-3.2 5.028-2.559 9.998-8.16 11.032-13.986-1.915 5.605-7.717 10.421-13.005 12.38C15.209 27.27 8.665 28.57 8.665 28.57l-.264-.14c-4.456-2.17-4.59-11.826 3.507-14.944 3.546-1.366 6.938-.615 10.768-1.53 4.09-.971 8.82-4.041 10.744-8.044 2.156 6.404 4.75 16.43.101 22.628z"/>
+</svg>
diff --git a/docs/assets/images/java.svg b/docs/assets/images/java.svg
new file mode 100644
index 0000000..962f9d7
--- /dev/null
+++ b/docs/assets/images/java.svg
@@ -0,0 +1,9 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="56" height="76" viewBox="0 0 56 76">
+    <g fill="none" fill-rule="nonzero">
+        <path fill="#5382A1" d="M18.059 58.51s-2.887 1.679 2.054 2.247c5.987.683 9.047.585 15.645-.664 0 0 1.734 1.088 4.157 2.03-14.79 6.339-33.473-.367-21.856-3.613M16.251 50.238s-3.238 2.397 1.708 2.908c6.395.66 11.446.714 20.185-.969 0 0 1.209 1.226 3.11 1.896-17.882 5.229-37.8.412-25.003-3.835"/>
+        <path fill="#E76F00" d="M31.487 36.206c3.645 4.196-.957 7.972-.957 7.972s9.253-4.777 5.004-10.759c-3.97-5.578-7.013-8.35 9.464-17.907 0 0-25.864 6.46-13.51 20.694"/>
+        <path fill="#5382A1" d="M51.048 64.628s2.137 1.76-2.353 3.122c-8.537 2.586-35.532 3.367-43.03.103-2.697-1.172 2.359-2.8 3.949-3.141 1.658-.36 2.606-.293 2.606-.293-2.998-2.112-19.377 4.147-8.32 5.94 30.155 4.89 54.97-2.203 47.148-5.731M19.447 41.667s-13.731 3.262-4.863 4.446c3.745.502 11.21.388 18.163-.194 5.683-.48 11.39-1.499 11.39-1.499s-2.005.858-3.454 1.848C26.739 49.935-.2 48.229 7.556 44.478c6.559-3.17 11.891-2.81 11.891-2.81M44.08 55.436c14.174-7.366 7.62-14.444 3.046-13.49-1.121.233-1.621.435-1.621.435s.416-.652 1.211-.934c9.05-3.182 16.01 9.384-2.921 14.36 0 0 .219-.196.285-.37"/>
+        <path fill="#E76F00" d="M35.534.081s7.85 7.853-7.446 19.929c-12.266 9.686-2.797 15.21-.005 21.52-7.16-6.46-12.414-12.147-8.89-17.44 5.175-7.768 19.508-11.535 16.34-24.009"/>
+        <path fill="#5382A1" d="M20.84 75.396c13.606.87 34.5-.484 34.994-6.922 0 0-.95 2.44-11.244 4.38-11.613 2.185-25.936 1.93-34.431.529 0 0 1.739 1.44 10.68 2.013"/>
+    </g>
+</svg>
diff --git a/docs/assets/images/left-nav-arrow.svg b/docs/assets/images/left-nav-arrow.svg
new file mode 100644
index 0000000..747a83d
--- /dev/null
+++ b/docs/assets/images/left-nav-arrow.svg
@@ -0,0 +1,3 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="7" height="10" viewBox="0 0 7 10">
+    <path fill="#F86D24" fill-rule="nonzero" d="M1.175 0L0 1.175 3.817 5 0 8.825 1.175 10l5-5z"/>
+</svg>
diff --git a/docs/assets/images/lines-bg-1.svg b/docs/assets/images/lines-bg-1.svg
new file mode 100644
index 0000000..c6f714a
--- /dev/null
+++ b/docs/assets/images/lines-bg-1.svg
@@ -0,0 +1,54 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="1440" height="519" viewBox="0 0 1440 519">
+    <g fill="none" fill-rule="evenodd" stroke="#979797" opacity=".104">
+        <path d="M186.834 10.225c177.381-30.6 337.277 14.316 479.687 134.75 213.615 180.65 450.777 155.08 556.27 114.139 105.494-40.94 534.119-126.673 720.637-47.865 124.346 52.54 215.963 97.987 274.852 136.344"/>
+        <path d="M167.138 13.698c177.381-30.6 337.277 14.316 479.687 134.75 213.615 180.65 450.777 155.08 556.27 114.139 105.494-40.941 534.119-126.673 720.637-47.865 124.346 52.54 215.963 97.987 274.852 136.344"/>
+        <path d="M147.442 17.17c177.381-30.6 337.277 14.317 479.687 134.751 213.615 180.65 450.777 155.08 556.27 114.139 105.494-40.941 534.118-126.673 720.637-47.865 124.346 52.54 215.963 97.987 274.852 136.344"/>
+        <path d="M127.746 20.644c177.38-30.6 337.276 14.316 479.686 134.75 213.615 180.65 450.778 155.08 556.271 114.139 105.494-40.941 534.118-126.673 720.637-47.865 124.346 52.54 215.963 97.987 274.851 136.344"/>
+        <path d="M108.05 24.117c177.38-30.6 337.276 14.316 479.686 134.75 213.615 180.65 450.778 155.08 556.271 114.139 105.494-40.941 534.118-126.673 720.637-47.865 124.346 52.54 215.963 97.987 274.851 136.344"/>
+        <path d="M88.353 27.59C265.735-3.01 425.63 41.906 568.04 162.34c213.615 180.65 450.778 155.08 556.271 114.139 105.494-40.941 534.118-126.673 720.637-47.865 124.345 52.54 215.963 97.987 274.851 136.344"/>
+        <path d="M68.657 31.063c177.381-30.6 337.277 14.316 479.687 134.75 213.615 180.65 450.778 155.08 556.271 114.139 105.493-40.941 534.118-126.673 720.636-47.865 124.346 52.54 215.963 97.987 274.852 136.344"/>
+        <path d="M48.961 34.536c177.381-30.6 337.277 14.316 479.687 134.75 213.615 180.65 450.777 155.08 556.27 114.139 105.494-40.941 534.119-126.674 720.637-47.865 124.346 52.54 215.963 97.987 274.852 136.344"/>
+        <path d="M29.265 38.008c177.381-30.6 337.277 14.317 479.687 134.75 213.615 180.652 450.777 155.08 556.27 114.14 105.494-40.941 534.119-126.674 720.637-47.865 124.346 52.54 215.963 97.987 274.852 136.344"/>
+        <path d="M9.569 41.481c177.381-30.6 337.277 14.317 479.687 134.75 213.614 180.652 450.777 155.08 556.27 114.14 105.494-40.941 534.118-126.674 720.637-47.865 124.346 52.54 215.963 97.987 274.851 136.344"/>
+        <path d="M-10.127 44.954c177.38-30.6 337.276 14.317 479.686 134.75 213.615 180.651 450.778 155.08 556.271 114.14 105.494-40.941 534.118-126.674 720.637-47.865 124.346 52.54 215.963 97.987 274.851 136.344"/>
+        <path d="M-29.823 48.427c177.38-30.6 337.276 14.317 479.686 134.75 213.615 180.651 450.778 155.08 556.271 114.14 105.494-40.941 534.118-126.674 720.637-47.865 124.345 52.54 215.963 97.987 274.851 136.344"/>
+        <path d="M-49.52 51.9c177.382-30.6 337.277 14.317 479.687 134.75 213.615 180.651 450.778 155.08 556.271 114.14 105.493-40.941 534.118-126.674 720.637-47.865 124.345 52.54 215.962 97.987 274.851 136.344"/>
+        <path d="M-69.216 55.373c177.381-30.6 337.277 14.317 479.687 134.75 213.615 180.651 450.777 155.08 556.27 114.14 105.494-40.941 534.119-126.674 720.637-47.865 124.346 52.539 215.963 97.987 274.852 136.344"/>
+        <path d="M-88.912 58.846c177.381-30.6 337.277 14.317 479.687 134.75 213.615 180.651 450.777 155.08 556.27 114.14 105.494-40.941 534.119-126.674 720.637-47.865 124.346 52.539 215.963 97.987 274.852 136.344"/>
+        <path d="M-108.608 62.32c177.381-30.6 337.277 14.316 479.687 134.75 213.615 180.65 450.777 155.079 556.27 114.138 105.494-40.94 534.118-126.673 720.637-47.864 124.346 52.539 215.963 97.987 274.852 136.344"/>
+        <path d="M-128.304 65.792c177.38-30.6 337.276 14.317 479.686 134.75 213.615 180.651 450.778 155.08 556.271 114.14 105.494-40.942 534.118-126.674 720.637-47.865 124.346 52.539 215.963 97.987 274.851 136.344"/>
+        <path d="M-148 69.265c177.38-30.6 337.276 14.317 479.686 134.75 213.615 180.651 450.778 155.08 556.271 114.14 105.494-40.942 534.118-126.674 720.637-47.865 124.346 52.539 215.963 97.987 274.851 136.344"/>
+        <path d="M-167.697 72.738C9.685 42.138 169.58 87.055 311.99 207.488c213.615 180.651 450.778 155.08 556.271 114.14 105.493-40.942 534.118-126.674 720.637-47.865 124.345 52.539 215.963 97.987 274.851 136.344"/>
+        <path d="M-187.393 76.211c177.381-30.6 337.277 14.317 479.687 134.75 213.615 180.651 450.778 155.08 556.27 114.14 105.494-40.942 534.119-126.674 720.637-47.865 124.346 52.539 215.963 97.987 274.852 136.344"/>
+        <path d="M-207.089 79.684c177.381-30.6 337.277 14.317 479.687 134.75 213.615 180.651 450.777 155.08 556.27 114.14 105.494-40.942 534.119-126.674 720.637-47.865 124.346 52.539 215.963 97.987 274.852 136.344"/>
+        <path d="M-226.785 83.157c177.381-30.6 337.277 14.317 479.687 134.75 213.615 180.651 450.777 155.08 556.27 114.139 105.494-40.94 534.119-126.673 720.637-47.864 124.346 52.539 215.963 97.987 274.852 136.344"/>
+        <path d="M-246.481 86.63C-69.1 56.03 90.796 100.947 233.206 221.38 446.82 402.031 683.983 376.46 789.476 335.52c105.494-40.94 534.118-126.673 720.637-47.864 124.346 52.539 215.963 97.987 274.851 136.344"/>
+        <path d="M-266.177 90.103c177.38-30.6 337.276 14.317 479.686 134.75 213.615 180.651 450.778 155.08 556.271 114.139 105.494-40.94 534.118-126.673 720.637-47.864 124.346 52.539 215.963 97.987 274.851 136.343"/>
+        <path d="M-285.873 93.576c177.38-30.6 337.276 14.317 479.686 134.75 213.615 180.651 450.778 155.08 556.271 114.139 105.494-40.94 534.118-126.673 720.637-47.864 124.345 52.539 215.963 97.987 274.851 136.343"/>
+        <path d="M-305.57 97.049c177.382-30.6 337.277 14.317 479.687 134.75 213.615 180.65 450.778 155.08 556.271 114.139 105.493-40.94 534.118-126.673 720.637-47.865 124.345 52.54 215.962 97.988 274.851 136.344"/>
+        <path d="M-325.266 100.522c177.381-30.6 337.277 14.316 479.687 134.75 213.615 180.65 450.777 155.08 556.27 114.139 105.494-40.94 534.119-126.673 720.637-47.865 124.346 52.54 215.963 97.988 274.852 136.344"/>
+        <path d="M-344.962 103.995c177.381-30.6 337.277 14.316 479.687 134.75 213.615 180.65 450.777 155.08 556.27 114.139 105.494-40.94 534.119-126.673 720.637-47.865 124.346 52.54 215.963 97.987 274.852 136.344"/>
+        <path d="M-364.658 107.468c177.381-30.6 337.277 14.316 479.687 134.75 213.615 180.65 450.777 155.08 556.27 114.139 105.494-40.941 534.118-126.673 720.637-47.865 124.346 52.54 215.963 97.987 274.852 136.344"/>
+        <path d="M-384.354 110.94c177.38-30.6 337.276 14.317 479.686 134.751 213.615 180.65 450.778 155.08 556.271 114.139 105.494-40.941 534.118-126.673 720.637-47.865 124.346 52.54 215.963 97.987 274.851 136.344"/>
+        <path d="M-404.05 114.414c177.38-30.6 337.276 14.316 479.686 134.75 213.615 180.65 450.778 155.08 556.271 114.139 105.494-40.941 534.118-126.673 720.637-47.865 124.345 52.54 215.963 97.987 274.851 136.344"/>
+        <path d="M-423.747 117.887c177.382-30.6 337.277 14.316 479.687 134.75 213.615 180.65 450.778 155.08 556.271 114.139 105.493-40.941 534.118-126.673 720.637-47.865 124.345 52.54 215.963 97.987 274.851 136.344"/>
+        <path d="M-443.443 121.36c177.381-30.6 337.277 14.316 479.687 134.75 213.615 180.65 450.778 155.08 556.27 114.139 105.494-40.941 534.119-126.673 720.637-47.865 124.346 52.54 215.963 97.987 274.852 136.344"/>
+        <path d="M-463.139 124.833c177.381-30.6 337.277 14.316 479.687 134.75 213.615 180.65 450.777 155.08 556.27 114.139 105.494-40.941 534.119-126.673 720.637-47.865 124.346 52.54 215.963 97.987 274.852 136.344"/>
+        <path d="M-482.835 128.306c177.381-30.6 337.277 14.316 479.687 134.75 213.615 180.65 450.777 155.08 556.27 114.139 105.494-40.941 534.119-126.674 720.637-47.865 124.346 52.54 215.963 97.987 274.852 136.344"/>
+        <path d="M-502.531 131.778c177.381-30.6 337.277 14.317 479.686 134.75 213.615 180.652 450.778 155.08 556.271 114.14 105.494-40.941 534.118-126.674 720.637-47.865 124.346 52.54 215.963 97.987 274.851 136.344"/>
+        <path d="M-522.227 135.251c177.38-30.6 337.276 14.317 479.686 134.75 213.615 180.652 450.778 155.08 556.271 114.14 105.494-40.941 534.118-126.674 720.637-47.865 124.346 52.54 215.963 97.987 274.851 136.344"/>
+        <path d="M-541.924 138.724c177.382-30.6 337.277 14.317 479.687 134.75 213.615 180.651 450.778 155.08 556.271 114.14 105.494-40.941 534.118-126.674 720.637-47.865 124.345 52.54 215.963 97.987 274.851 136.344"/>
+        <path d="M-561.62 142.197c177.381-30.6 337.277 14.317 479.687 134.75 213.615 180.651 450.778 155.08 556.271 114.14 105.493-40.941 534.118-126.674 720.637-47.865 124.345 52.54 215.962 97.987 274.851 136.344"/>
+        <path d="M-581.316 145.67c177.381-30.6 337.277 14.317 479.687 134.75C111.986 461.072 349.148 435.5 454.64 394.56c105.494-40.941 534.119-126.674 720.637-47.865 124.346 52.54 215.963 97.987 274.852 136.344"/>
+        <path d="M-601.012 149.143c177.381-30.6 337.277 14.317 479.687 134.75 213.615 180.651 450.777 155.08 556.27 114.14 105.494-40.941 534.119-126.674 720.637-47.865 124.346 52.54 215.963 97.987 274.852 136.344"/>
+        <path d="M-620.708 152.616c177.381-30.6 337.277 14.317 479.687 134.75 213.615 180.651 450.777 155.08 556.27 114.14 105.494-40.941 534.118-126.674 720.637-47.865 124.346 52.539 215.963 97.987 274.852 136.344"/>
+        <path d="M-640.404 156.09c177.38-30.6 337.276 14.316 479.686 134.75C52.897 471.49 290.06 445.919 395.553 404.978c105.494-40.94 534.118-126.673 720.637-47.864 124.346 52.539 215.963 97.987 274.851 136.344"/>
+        <path d="M-660.1 159.562c177.38-30.6 337.276 14.317 479.686 134.75 213.615 180.651 450.778 155.08 556.271 114.14 105.494-40.941 534.118-126.674 720.637-47.865 124.345 52.539 215.963 97.987 274.851 136.344"/>
+        <path d="M-679.797 163.035c177.382-30.6 337.277 14.317 479.687 134.75 213.615 180.651 450.778 155.08 556.271 114.14 105.493-40.942 534.118-126.674 720.637-47.865 124.345 52.539 215.962 97.987 274.851 136.344"/>
+        <path d="M-699.493 166.508c177.381-30.6 337.277 14.317 479.687 134.75 213.615 180.651 450.778 155.08 556.27 114.14 105.494-40.942 534.119-126.674 720.637-47.865 124.346 52.539 215.963 97.987 274.852 136.344"/>
+        <path d="M-719.189 169.981c177.381-30.6 337.277 14.317 479.687 134.75 213.615 180.651 450.777 155.08 556.27 114.14 105.494-40.942 534.119-126.674 720.637-47.865 124.346 52.539 215.963 97.987 274.852 136.344"/>
+        <path d="M-738.885 173.454c177.381-30.6 337.277 14.317 479.687 134.75 213.615 180.651 450.777 155.08 556.27 114.14 105.494-40.942 534.119-126.674 720.637-47.865 124.346 52.539 215.963 97.987 274.852 136.344"/>
+        <path d="M-758.581 176.927c177.38-30.6 337.277 14.317 479.686 134.75 213.615 180.651 450.778 155.08 556.271 114.139 105.494-40.94 534.118-126.673 720.637-47.864 124.346 52.539 215.963 97.987 274.851 136.344"/>
+        <path d="M-778.277 180.4c177.38-30.6 337.276 14.317 479.686 134.75C-84.976 495.801 152.187 470.23 257.68 429.29c105.494-40.94 534.118-126.673 720.637-47.864 124.346 52.539 215.963 97.987 274.851 136.344"/>
+    </g>
+</svg>
diff --git a/docs/assets/images/lines-bg-2.svg b/docs/assets/images/lines-bg-2.svg
new file mode 100644
index 0000000..42c4afa
--- /dev/null
+++ b/docs/assets/images/lines-bg-2.svg
@@ -0,0 +1,54 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="1440" height="553" viewBox="0 0 1440 553">
+    <g fill="none" fill-rule="evenodd" stroke="#979797" opacity=".1">
+        <path d="M196.463 545.723c178.34 24.391 336.571-26.078 474.691-151.409 207.18-187.996 445.091-170.717 551.95-133.483 106.857 37.235 538.213 107.956 721.867 22.686 122.437-56.847 212.412-105.465 269.926-145.853"/>
+        <path d="M176.657 542.94C354.998 567.33 513.23 516.86 651.35 391.53c207.18-187.995 445.09-170.716 551.949-133.482 106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.846 212.411-105.464 269.926-145.853"/>
+        <path d="M156.852 540.156c178.341 24.391 336.572-26.078 474.692-151.409 207.18-187.995 445.09-170.717 551.948-133.482 106.858 37.234 538.214 107.955 721.868 22.685 122.437-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M137.047 537.373c178.34 24.39 336.571-26.079 474.691-151.41 207.18-187.995 445.091-170.716 551.95-133.482 106.857 37.234 538.212 107.956 721.867 22.685 122.436-56.846 212.412-105.464 269.926-145.853"/>
+        <path d="M117.241 534.59c178.341 24.39 336.572-26.08 474.692-151.41 207.18-187.995 445.09-170.717 551.949-133.482 106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.847 212.411-105.464 269.925-145.853"/>
+        <path d="M97.436 531.806c178.34 24.39 336.572-26.079 474.692-151.41 207.18-187.995 445.09-170.716 551.948-133.482 106.858 37.234 538.214 107.956 721.868 22.686 122.437-56.847 212.412-105.465 269.926-145.854"/>
+        <path d="M77.63 529.022c178.342 24.391 336.572-26.078 474.692-151.409 207.18-187.995 445.091-170.716 551.949-133.482 106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M57.825 526.239c178.341 24.391 336.572-26.079 474.692-151.41 207.18-187.995 445.09-170.716 551.949-133.482 106.858 37.234 538.213 107.956 721.868 22.686 122.436-56.847 212.411-105.465 269.925-145.853"/>
+        <path d="M38.02 523.456c178.34 24.39 336.571-26.08 474.692-151.41 207.18-187.995 445.09-170.716 551.948-133.482 106.858 37.234 538.214 107.955 721.868 22.685 122.437-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M18.214 520.672c178.341 24.391 336.572-26.079 474.692-151.41 207.18-187.995 445.091-170.716 551.949-133.482 106.858 37.234 538.213 107.956 721.868 22.686 122.436-56.847 212.412-105.465 269.926-145.853"/>
+        <path d="M-1.59 517.889c178.34 24.39 336.57-26.08 474.69-151.41 207.181-187.995 445.092-170.716 551.95-133.482 106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.846 212.411-105.464 269.925-145.853"/>
+        <path d="M-21.396 515.105c178.34 24.391 336.571-26.079 474.691-151.409 207.18-187.996 445.091-170.717 551.95-133.483 106.857 37.234 538.213 107.956 721.867 22.686 122.437-56.847 212.412-105.465 269.926-145.853"/>
+        <path d="M-41.202 512.322c178.341 24.39 336.572-26.079 474.692-151.41C640.67 172.918 878.58 190.197 985.44 227.43c106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.846 212.411-105.464 269.926-145.853"/>
+        <path d="M-61.007 509.538C117.334 533.93 275.565 483.46 413.685 358.13c207.18-187.996 445.09-170.717 551.948-133.483 106.858 37.235 538.214 107.956 721.868 22.686 122.437-56.847 212.412-105.465 269.926-145.853"/>
+        <path d="M-80.812 506.755c178.34 24.39 336.571-26.079 474.691-151.41C601.06 167.35 838.97 184.63 945.83 221.864c106.857 37.234 538.212 107.955 721.867 22.685 122.437-56.846 212.412-105.464 269.926-145.853"/>
+        <path d="M-100.618 503.971c178.341 24.391 336.572-26.078 474.692-151.409 207.18-187.995 445.09-170.717 551.949-133.482 106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.847 212.411-105.464 269.925-145.853"/>
+        <path d="M-120.423 501.188c178.341 24.39 336.572-26.079 474.692-151.41 207.18-187.995 445.09-170.716 551.948-133.482 106.858 37.234 538.214 107.956 721.868 22.685 122.437-56.846 212.412-105.464 269.926-145.853"/>
+        <path d="M-140.228 498.404c178.34 24.391 336.571-26.078 474.691-151.409C541.643 159 779.554 176.278 886.412 213.513c106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M-160.034 495.62c178.341 24.392 336.572-26.078 474.692-151.408 207.18-187.996 445.09-170.717 551.949-133.483 106.858 37.234 538.213 107.956 721.868 22.686C1710.91 176.568 1800.886 127.95 1858.4 87.56"/>
+        <path d="M-179.84 492.837c178.342 24.391 336.572-26.078 474.693-151.409 207.18-187.995 445.09-170.716 551.948-133.482C953.66 245.18 1385.015 315.9 1568.67 230.63c122.437-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M-199.645 490.054c178.342 24.391 336.572-26.079 474.692-151.41 207.18-187.995 445.091-170.716 551.949-133.482 106.858 37.234 538.213 107.956 721.868 22.686C1671.3 171 1761.276 122.383 1818.79 81.995"/>
+        <path d="M-219.45 487.27c178.341 24.392 336.572-26.078 474.692-151.409 207.18-187.995 445.09-170.716 551.949-133.482 106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.847 212.411-105.464 269.925-145.853"/>
+        <path d="M-239.255 484.487c178.34 24.391 336.571-26.079 474.692-151.41 207.18-187.995 445.09-170.716 551.948-133.482 106.858 37.234 538.214 107.956 721.868 22.686 122.437-56.847 212.412-105.465 269.926-145.853"/>
+        <path d="M-259.06 481.704c178.34 24.39 336.571-26.08 474.691-151.41C422.811 142.3 660.721 159.578 767.58 196.812c106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.846 212.411-105.464 269.926-145.853"/>
+        <path d="M-278.866 478.92c178.341 24.391 336.572-26.079 474.692-151.409 207.18-187.996 445.09-170.717 551.949-133.483 106.857 37.234 538.213 107.956 721.867 22.686 122.437-56.847 212.412-105.465 269.926-145.853"/>
+        <path d="M-298.671 476.137c178.34 24.39 336.571-26.079 474.691-151.41C383.2 136.733 621.111 154.012 727.97 191.246c106.857 37.234 538.213 107.955 721.867 22.685 122.437-56.846 212.412-105.464 269.926-145.853"/>
+        <path d="M-318.477 473.353c178.341 24.391 336.572-26.078 474.692-151.409 207.18-187.996 445.09-170.717 551.949-133.483 106.858 37.235 538.213 107.956 721.868 22.686 122.436-56.847 212.411-105.465 269.925-145.853"/>
+        <path d="M-338.282 470.57C-159.941 494.96-1.71 444.49 136.41 319.16 343.59 131.166 581.5 148.445 688.358 185.679c106.858 37.234 538.214 107.955 721.868 22.685 122.437-56.846 212.412-105.464 269.926-145.853"/>
+        <path d="M-358.087 467.786c178.34 24.391 336.571-26.078 474.691-151.409 207.18-187.996 445.091-170.717 551.95-133.482 106.857 37.234 538.212 107.955 721.867 22.685 122.436-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M-377.893 465.003c178.341 24.39 336.572-26.079 474.692-151.41 207.18-187.995 445.09-170.716 551.949-133.482 106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.846 212.411-105.464 269.925-145.853"/>
+        <path d="M-397.698 462.22c178.34 24.39 336.571-26.08 474.692-151.41 207.18-187.995 445.09-170.717 551.948-133.482 106.858 37.234 538.214 107.955 721.868 22.685 122.437-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M-417.503 459.436c178.34 24.39 336.571-26.079 474.691-151.41C264.368 120.032 502.28 137.31 609.137 174.545c106.858 37.234 538.213 107.956 721.868 22.686 122.436-56.847 212.412-105.465 269.926-145.854"/>
+        <path d="M-437.309 456.652c178.341 24.391 336.572-26.078 474.692-151.409 207.18-187.995 445.09-170.716 551.949-133.482 106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.847 212.411-105.464 269.925-145.853"/>
+        <path d="M-457.114 453.869c178.34 24.391 336.571-26.079 474.692-151.41 207.18-187.995 445.09-170.716 551.948-133.482 106.858 37.234 538.214 107.956 721.868 22.686 122.437-56.847 212.412-105.465 269.926-145.853"/>
+        <path d="M-476.92 451.086c178.341 24.39 336.572-26.08 474.692-151.41 207.18-187.995 445.09-170.716 551.949-133.482 106.858 37.234 538.213 107.955 721.868 22.685C1394.025 132.032 1484 83.415 1541.515 43.026"/>
+        <path d="M-496.725 448.302c178.341 24.391 336.572-26.079 474.692-151.41 207.18-187.995 445.09-170.716 551.949-133.482 106.857 37.234 538.213 107.956 721.868 22.686 122.436-56.847 212.411-105.465 269.925-145.853"/>
+        <path d="M-516.53 445.519c178.34 24.39 336.571-26.08 474.691-151.41 207.18-187.995 445.091-170.716 551.95-133.482 106.857 37.234 538.213 107.955 721.867 22.685 122.437-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M-536.336 442.735c178.341 24.391 336.572-26.079 474.692-151.409 207.18-187.996 445.09-170.717 551.949-133.483 106.858 37.234 538.213 107.956 721.868 22.686 122.436-56.847 212.411-105.465 269.926-145.853"/>
+        <path d="M-556.141 439.952c178.341 24.39 336.572-26.079 474.692-151.41C125.73 100.548 363.64 117.827 470.499 155.06c106.858 37.234 538.214 107.955 721.868 22.685C1314.804 120.9 1404.78 72.281 1462.293 31.892"/>
+        <path d="M-575.946 437.168c178.34 24.391 336.571-26.079 474.691-151.409 207.18-187.996 445.091-170.717 551.95-133.483 106.857 37.235 538.212 107.956 721.867 22.686 122.436-56.847 212.412-105.465 269.926-145.853"/>
+        <path d="M-595.752 434.385c178.341 24.39 336.572-26.079 474.692-151.41C86.12 94.98 324.03 112.26 430.889 149.494c106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.846 212.411-105.464 269.925-145.853"/>
+        <path d="M-615.557 431.601c178.34 24.391 336.572-26.078 474.692-151.409C66.315 92.196 304.225 109.475 411.083 146.71c106.858 37.234 538.214 107.955 721.868 22.685 122.437-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M-635.362 428.818c178.34 24.39 336.571-26.079 474.691-151.41C46.51 89.414 284.42 106.693 391.278 143.927c106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.846 212.412-105.464 269.926-145.853"/>
+        <path d="M-655.168 426.034c178.341 24.391 336.572-26.078 474.692-151.409C26.704 86.63 264.614 103.908 371.473 141.143c106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.847 212.411-105.464 269.925-145.853"/>
+        <path d="M-674.973 423.25c178.34 24.392 336.571-26.078 474.692-151.408C6.899 83.846 244.809 101.125 351.667 138.359c106.858 37.234 538.214 107.956 721.868 22.686 122.437-56.847 212.412-105.465 269.926-145.854"/>
+        <path d="M-694.779 420.467c178.341 24.391 336.572-26.078 474.692-151.409C-12.907 81.063 225.004 98.342 331.862 135.576 438.72 172.81 870.075 243.53 1053.73 158.26c122.436-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M-714.584 417.684c178.341 24.391 336.572-26.079 474.692-151.41C-32.712 78.28 205.198 95.559 312.057 132.793c106.858 37.234 538.213 107.956 721.868 22.686C1156.36 98.63 1246.336 50.013 1303.85 9.625"/>
+        <path d="M-734.39 414.9c178.342 24.392 336.572-26.078 474.692-151.409C-52.518 75.496 185.393 92.775 292.252 130.01c106.857 37.234 538.213 107.955 721.867 22.685C1136.556 95.847 1226.531 47.23 1284.045 6.841"/>
+        <path d="M-754.195 412.117c178.341 24.391 336.572-26.079 474.692-151.41 207.18-187.995 445.09-170.716 551.949-133.482 106.858 37.234 538.213 107.956 721.868 22.686C1116.75 93.064 1206.725 44.446 1264.24 4.058"/>
+        <path d="M-774 409.334c178.341 24.39 336.572-26.08 474.692-151.41C-92.128 69.93 145.782 87.208 252.64 124.442c106.858 37.234 538.214 107.955 721.868 22.685C1096.945 90.28 1186.92 41.663 1244.434 1.274"/>
+    </g>
+</svg>
diff --git a/docs/assets/images/lines-bg-3.svg b/docs/assets/images/lines-bg-3.svg
new file mode 100644
index 0000000..42c4afa
--- /dev/null
+++ b/docs/assets/images/lines-bg-3.svg
@@ -0,0 +1,54 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="1440" height="553" viewBox="0 0 1440 553">
+    <g fill="none" fill-rule="evenodd" stroke="#979797" opacity=".1">
+        <path d="M196.463 545.723c178.34 24.391 336.571-26.078 474.691-151.409 207.18-187.996 445.091-170.717 551.95-133.483 106.857 37.235 538.213 107.956 721.867 22.686 122.437-56.847 212.412-105.465 269.926-145.853"/>
+        <path d="M176.657 542.94C354.998 567.33 513.23 516.86 651.35 391.53c207.18-187.995 445.09-170.716 551.949-133.482 106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.846 212.411-105.464 269.926-145.853"/>
+        <path d="M156.852 540.156c178.341 24.391 336.572-26.078 474.692-151.409 207.18-187.995 445.09-170.717 551.948-133.482 106.858 37.234 538.214 107.955 721.868 22.685 122.437-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M137.047 537.373c178.34 24.39 336.571-26.079 474.691-151.41 207.18-187.995 445.091-170.716 551.95-133.482 106.857 37.234 538.212 107.956 721.867 22.685 122.436-56.846 212.412-105.464 269.926-145.853"/>
+        <path d="M117.241 534.59c178.341 24.39 336.572-26.08 474.692-151.41 207.18-187.995 445.09-170.717 551.949-133.482 106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.847 212.411-105.464 269.925-145.853"/>
+        <path d="M97.436 531.806c178.34 24.39 336.572-26.079 474.692-151.41 207.18-187.995 445.09-170.716 551.948-133.482 106.858 37.234 538.214 107.956 721.868 22.686 122.437-56.847 212.412-105.465 269.926-145.854"/>
+        <path d="M77.63 529.022c178.342 24.391 336.572-26.078 474.692-151.409 207.18-187.995 445.091-170.716 551.949-133.482 106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M57.825 526.239c178.341 24.391 336.572-26.079 474.692-151.41 207.18-187.995 445.09-170.716 551.949-133.482 106.858 37.234 538.213 107.956 721.868 22.686 122.436-56.847 212.411-105.465 269.925-145.853"/>
+        <path d="M38.02 523.456c178.34 24.39 336.571-26.08 474.692-151.41 207.18-187.995 445.09-170.716 551.948-133.482 106.858 37.234 538.214 107.955 721.868 22.685 122.437-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M18.214 520.672c178.341 24.391 336.572-26.079 474.692-151.41 207.18-187.995 445.091-170.716 551.949-133.482 106.858 37.234 538.213 107.956 721.868 22.686 122.436-56.847 212.412-105.465 269.926-145.853"/>
+        <path d="M-1.59 517.889c178.34 24.39 336.57-26.08 474.69-151.41 207.181-187.995 445.092-170.716 551.95-133.482 106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.846 212.411-105.464 269.925-145.853"/>
+        <path d="M-21.396 515.105c178.34 24.391 336.571-26.079 474.691-151.409 207.18-187.996 445.091-170.717 551.95-133.483 106.857 37.234 538.213 107.956 721.867 22.686 122.437-56.847 212.412-105.465 269.926-145.853"/>
+        <path d="M-41.202 512.322c178.341 24.39 336.572-26.079 474.692-151.41C640.67 172.918 878.58 190.197 985.44 227.43c106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.846 212.411-105.464 269.926-145.853"/>
+        <path d="M-61.007 509.538C117.334 533.93 275.565 483.46 413.685 358.13c207.18-187.996 445.09-170.717 551.948-133.483 106.858 37.235 538.214 107.956 721.868 22.686 122.437-56.847 212.412-105.465 269.926-145.853"/>
+        <path d="M-80.812 506.755c178.34 24.39 336.571-26.079 474.691-151.41C601.06 167.35 838.97 184.63 945.83 221.864c106.857 37.234 538.212 107.955 721.867 22.685 122.437-56.846 212.412-105.464 269.926-145.853"/>
+        <path d="M-100.618 503.971c178.341 24.391 336.572-26.078 474.692-151.409 207.18-187.995 445.09-170.717 551.949-133.482 106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.847 212.411-105.464 269.925-145.853"/>
+        <path d="M-120.423 501.188c178.341 24.39 336.572-26.079 474.692-151.41 207.18-187.995 445.09-170.716 551.948-133.482 106.858 37.234 538.214 107.956 721.868 22.685 122.437-56.846 212.412-105.464 269.926-145.853"/>
+        <path d="M-140.228 498.404c178.34 24.391 336.571-26.078 474.691-151.409C541.643 159 779.554 176.278 886.412 213.513c106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M-160.034 495.62c178.341 24.392 336.572-26.078 474.692-151.408 207.18-187.996 445.09-170.717 551.949-133.483 106.858 37.234 538.213 107.956 721.868 22.686C1710.91 176.568 1800.886 127.95 1858.4 87.56"/>
+        <path d="M-179.84 492.837c178.342 24.391 336.572-26.078 474.693-151.409 207.18-187.995 445.09-170.716 551.948-133.482C953.66 245.18 1385.015 315.9 1568.67 230.63c122.437-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M-199.645 490.054c178.342 24.391 336.572-26.079 474.692-151.41 207.18-187.995 445.091-170.716 551.949-133.482 106.858 37.234 538.213 107.956 721.868 22.686C1671.3 171 1761.276 122.383 1818.79 81.995"/>
+        <path d="M-219.45 487.27c178.341 24.392 336.572-26.078 474.692-151.409 207.18-187.995 445.09-170.716 551.949-133.482 106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.847 212.411-105.464 269.925-145.853"/>
+        <path d="M-239.255 484.487c178.34 24.391 336.571-26.079 474.692-151.41 207.18-187.995 445.09-170.716 551.948-133.482 106.858 37.234 538.214 107.956 721.868 22.686 122.437-56.847 212.412-105.465 269.926-145.853"/>
+        <path d="M-259.06 481.704c178.34 24.39 336.571-26.08 474.691-151.41C422.811 142.3 660.721 159.578 767.58 196.812c106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.846 212.411-105.464 269.926-145.853"/>
+        <path d="M-278.866 478.92c178.341 24.391 336.572-26.079 474.692-151.409 207.18-187.996 445.09-170.717 551.949-133.483 106.857 37.234 538.213 107.956 721.867 22.686 122.437-56.847 212.412-105.465 269.926-145.853"/>
+        <path d="M-298.671 476.137c178.34 24.39 336.571-26.079 474.691-151.41C383.2 136.733 621.111 154.012 727.97 191.246c106.857 37.234 538.213 107.955 721.867 22.685 122.437-56.846 212.412-105.464 269.926-145.853"/>
+        <path d="M-318.477 473.353c178.341 24.391 336.572-26.078 474.692-151.409 207.18-187.996 445.09-170.717 551.949-133.483 106.858 37.235 538.213 107.956 721.868 22.686 122.436-56.847 212.411-105.465 269.925-145.853"/>
+        <path d="M-338.282 470.57C-159.941 494.96-1.71 444.49 136.41 319.16 343.59 131.166 581.5 148.445 688.358 185.679c106.858 37.234 538.214 107.955 721.868 22.685 122.437-56.846 212.412-105.464 269.926-145.853"/>
+        <path d="M-358.087 467.786c178.34 24.391 336.571-26.078 474.691-151.409 207.18-187.996 445.091-170.717 551.95-133.482 106.857 37.234 538.212 107.955 721.867 22.685 122.436-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M-377.893 465.003c178.341 24.39 336.572-26.079 474.692-151.41 207.18-187.995 445.09-170.716 551.949-133.482 106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.846 212.411-105.464 269.925-145.853"/>
+        <path d="M-397.698 462.22c178.34 24.39 336.571-26.08 474.692-151.41 207.18-187.995 445.09-170.717 551.948-133.482 106.858 37.234 538.214 107.955 721.868 22.685 122.437-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M-417.503 459.436c178.34 24.39 336.571-26.079 474.691-151.41C264.368 120.032 502.28 137.31 609.137 174.545c106.858 37.234 538.213 107.956 721.868 22.686 122.436-56.847 212.412-105.465 269.926-145.854"/>
+        <path d="M-437.309 456.652c178.341 24.391 336.572-26.078 474.692-151.409 207.18-187.995 445.09-170.716 551.949-133.482 106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.847 212.411-105.464 269.925-145.853"/>
+        <path d="M-457.114 453.869c178.34 24.391 336.571-26.079 474.692-151.41 207.18-187.995 445.09-170.716 551.948-133.482 106.858 37.234 538.214 107.956 721.868 22.686 122.437-56.847 212.412-105.465 269.926-145.853"/>
+        <path d="M-476.92 451.086c178.341 24.39 336.572-26.08 474.692-151.41 207.18-187.995 445.09-170.716 551.949-133.482 106.858 37.234 538.213 107.955 721.868 22.685C1394.025 132.032 1484 83.415 1541.515 43.026"/>
+        <path d="M-496.725 448.302c178.341 24.391 336.572-26.079 474.692-151.41 207.18-187.995 445.09-170.716 551.949-133.482 106.857 37.234 538.213 107.956 721.868 22.686 122.436-56.847 212.411-105.465 269.925-145.853"/>
+        <path d="M-516.53 445.519c178.34 24.39 336.571-26.08 474.691-151.41 207.18-187.995 445.091-170.716 551.95-133.482 106.857 37.234 538.213 107.955 721.867 22.685 122.437-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M-536.336 442.735c178.341 24.391 336.572-26.079 474.692-151.409 207.18-187.996 445.09-170.717 551.949-133.483 106.858 37.234 538.213 107.956 721.868 22.686 122.436-56.847 212.411-105.465 269.926-145.853"/>
+        <path d="M-556.141 439.952c178.341 24.39 336.572-26.079 474.692-151.41C125.73 100.548 363.64 117.827 470.499 155.06c106.858 37.234 538.214 107.955 721.868 22.685C1314.804 120.9 1404.78 72.281 1462.293 31.892"/>
+        <path d="M-575.946 437.168c178.34 24.391 336.571-26.079 474.691-151.409 207.18-187.996 445.091-170.717 551.95-133.483 106.857 37.235 538.212 107.956 721.867 22.686 122.436-56.847 212.412-105.465 269.926-145.853"/>
+        <path d="M-595.752 434.385c178.341 24.39 336.572-26.079 474.692-151.41C86.12 94.98 324.03 112.26 430.889 149.494c106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.846 212.411-105.464 269.925-145.853"/>
+        <path d="M-615.557 431.601c178.34 24.391 336.572-26.078 474.692-151.409C66.315 92.196 304.225 109.475 411.083 146.71c106.858 37.234 538.214 107.955 721.868 22.685 122.437-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M-635.362 428.818c178.34 24.39 336.571-26.079 474.691-151.41C46.51 89.414 284.42 106.693 391.278 143.927c106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.846 212.412-105.464 269.926-145.853"/>
+        <path d="M-655.168 426.034c178.341 24.391 336.572-26.078 474.692-151.409C26.704 86.63 264.614 103.908 371.473 141.143c106.858 37.234 538.213 107.955 721.868 22.685 122.436-56.847 212.411-105.464 269.925-145.853"/>
+        <path d="M-674.973 423.25c178.34 24.392 336.571-26.078 474.692-151.408C6.899 83.846 244.809 101.125 351.667 138.359c106.858 37.234 538.214 107.956 721.868 22.686 122.437-56.847 212.412-105.465 269.926-145.854"/>
+        <path d="M-694.779 420.467c178.341 24.391 336.572-26.078 474.692-151.409C-12.907 81.063 225.004 98.342 331.862 135.576 438.72 172.81 870.075 243.53 1053.73 158.26c122.436-56.847 212.412-105.464 269.926-145.853"/>
+        <path d="M-714.584 417.684c178.341 24.391 336.572-26.079 474.692-151.41C-32.712 78.28 205.198 95.559 312.057 132.793c106.858 37.234 538.213 107.956 721.868 22.686C1156.36 98.63 1246.336 50.013 1303.85 9.625"/>
+        <path d="M-734.39 414.9c178.342 24.392 336.572-26.078 474.692-151.409C-52.518 75.496 185.393 92.775 292.252 130.01c106.857 37.234 538.213 107.955 721.867 22.685C1136.556 95.847 1226.531 47.23 1284.045 6.841"/>
+        <path d="M-754.195 412.117c178.341 24.391 336.572-26.079 474.692-151.41 207.18-187.995 445.09-170.716 551.949-133.482 106.858 37.234 538.213 107.956 721.868 22.686C1116.75 93.064 1206.725 44.446 1264.24 4.058"/>
+        <path d="M-774 409.334c178.341 24.39 336.572-26.08 474.692-151.41C-92.128 69.93 145.782 87.208 252.64 124.442c106.858 37.234 538.214 107.955 721.868 22.685C1096.945 90.28 1186.92 41.663 1244.434 1.274"/>
+    </g>
+</svg>
diff --git a/docs/assets/images/lines-bg-4.svg b/docs/assets/images/lines-bg-4.svg
new file mode 100644
index 0000000..87b0a9b
--- /dev/null
+++ b/docs/assets/images/lines-bg-4.svg
@@ -0,0 +1,54 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="1440" height="973" viewBox="0 0 1440 973">
+    <g fill="none" fill-rule="evenodd" stroke="#979797" opacity=".06">
+        <path d="M1087.77 102.125C914.927 152.38 790.92 262.896 715.745 433.67 602.985 689.833 378.618 770.826 265.85 780.258c-112.768 9.433-535.598 120.267-668.67 272.941-88.715 101.783-151.124 182.837-187.229 243.16"/>
+        <path d="M1106.995 96.612C934.152 146.868 810.144 257.383 734.97 428.158 622.21 684.32 397.843 765.313 285.075 774.746c-112.769 9.432-535.599 120.266-668.67 272.94-88.716 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1126.22 91.1c-172.843 50.255-296.85 160.77-372.024 331.545C641.436 678.807 417.068 759.8 304.3 769.233c-112.769 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1145.446 85.586C972.603 135.842 848.594 246.358 773.42 417.132 660.66 673.294 436.294 754.288 323.525 763.72c-112.768 9.432-535.598 120.266-668.67 272.941-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1164.671 80.074C991.828 130.33 867.82 240.845 792.646 411.62 679.886 667.782 455.519 748.775 342.75 758.207c-112.768 9.433-535.598 120.267-668.67 272.941-88.715 101.783-151.124 182.837-187.229 243.16"/>
+        <path d="M1183.896 74.56c-172.843 50.257-296.851 160.772-372.025 331.547-112.76 256.162-337.127 337.155-449.895 346.588-112.769 9.432-535.599 120.266-668.67 272.94-88.716 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1203.122 69.048C1030.278 119.304 906.27 229.82 831.096 400.594 718.336 656.756 493.97 737.75 381.201 747.182c-112.769 9.432-535.599 120.266-668.67 272.94-88.715 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1222.347 63.535c-172.843 50.256-296.852 160.772-372.025 331.546-112.76 256.162-337.127 337.156-449.896 346.588-112.769 9.433-535.598 120.266-668.67 272.941-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1241.572 58.023C1068.73 108.279 944.72 218.794 869.547 389.569 756.787 645.73 532.42 726.724 419.65 736.156c-112.768 9.433-535.598 120.267-668.67 272.941-88.715 101.783-151.125 182.837-187.229 243.16"/>
+        <path d="M1260.797 52.51c-172.843 50.256-296.851 160.771-372.025 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.124 182.837-187.229 243.161"/>
+        <path d="M1280.023 46.997C1107.179 97.253 983.17 207.77 907.997 378.543 795.237 634.705 570.87 715.698 458.102 725.131c-112.769 9.432-535.599 120.266-668.67 272.94-88.715 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1299.248 41.484C1126.405 91.74 1002.396 202.256 927.223 373.03c-112.76 256.162-337.128 337.156-449.896 346.588-112.769 9.433-535.598 120.266-668.67 272.941-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1318.473 35.972c-172.843 50.256-296.852 160.771-372.025 331.546C833.688 623.68 609.32 704.673 496.552 714.105c-112.768 9.433-535.598 120.267-668.67 272.941-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1337.698 30.459c-172.843 50.256-296.851 160.771-372.025 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.124 182.837-187.229 243.161"/>
+        <path d="M1356.923 24.946c-172.843 50.256-296.851 160.772-372.025 331.546-112.76 256.162-337.127 337.155-449.895 346.588-112.769 9.432-535.599 120.266-668.67 272.94-88.716 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1376.149 19.433c-172.843 50.256-296.852 160.772-372.025 331.546-112.76 256.162-337.128 337.156-449.896 346.588C441.459 707 18.63 817.833-114.442 970.508c-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1395.374 13.92c-172.843 50.257-296.852 160.772-372.025 331.547-112.76 256.162-337.127 337.155-449.896 346.587-112.768 9.433-535.598 120.267-668.67 272.941-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1414.6 8.408c-172.844 50.256-296.852 160.771-372.026 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.124 182.837-187.229 243.161"/>
+        <path d="M1433.824 2.895c-172.843 50.256-296.851 160.772-372.025 331.546C949.04 590.603 724.672 671.596 611.904 681.03c-112.769 9.432-535.599 120.266-668.67 272.94-88.716 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1453.05-2.618c-172.844 50.256-296.852 160.772-372.025 331.546-112.76 256.162-337.128 337.156-449.896 346.588C518.36 684.95 95.53 795.782-37.541 948.457c-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1472.275-8.13c-172.843 50.256-296.852 160.771-372.025 331.546C987.49 579.578 763.123 660.57 650.354 670.003c-112.768 9.433-535.598 120.267-668.67 272.941-88.715 101.784-151.125 182.837-187.23 243.16"/>
+        <path d="M1491.5-13.643c-172.843 50.256-296.851 160.771-372.025 331.546-112.76 256.162-337.127 337.155-449.896 346.588C556.811 673.923 133.981 784.757.91 937.43c-88.715 101.784-151.125 182.837-187.229 243.161"/>
+        <path d="M1510.725-19.156C1337.882 31.1 1213.874 141.616 1138.7 312.39c-112.76 256.162-337.127 337.155-449.895 346.588-112.769 9.432-535.599 120.266-668.67 272.94-88.716 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1529.95-24.669c-172.843 50.256-296.851 160.772-372.025 331.546-112.76 256.162-337.127 337.156-449.895 346.588C595.26 662.898 172.43 773.731 39.36 926.406c-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1549.176-30.181c-172.843 50.256-296.852 160.771-372.025 331.546-112.76 256.162-337.128 337.155-449.896 346.587-112.769 9.433-535.598 120.267-668.67 272.941-88.715 101.784-151.125 182.837-187.23 243.16"/>
+        <path d="M1568.401-35.694c-172.843 50.256-296.852 160.771-372.025 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.837-187.229 243.161"/>
+        <path d="M1587.626-41.207C1414.783 9.05 1290.775 119.565 1215.601 290.34c-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94C8.32 1011.652-54.089 1092.705-90.194 1153.029"/>
+        <path d="M1606.852-46.72C1434.008 3.536 1310 114.052 1234.826 284.826 1122.066 540.988 897.7 621.982 784.931 631.414c-112.769 9.433-535.599 120.266-668.67 272.941-88.715 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1626.077-52.232c-172.843 50.256-296.852 160.771-372.025 331.546-112.76 256.162-337.128 337.155-449.896 346.587-112.769 9.433-535.598 120.267-668.67 272.941-88.715 101.784-151.125 182.837-187.23 243.16"/>
+        <path d="M1645.302-57.745C1472.459-7.49 1348.45 103.026 1273.277 273.8 1160.517 529.963 936.15 610.956 823.38 620.389c-112.768 9.432-535.598 120.266-668.67 272.94C65.996 995.114 3.586 1076.167-32.52 1136.49"/>
+        <path d="M1664.527-63.258c-172.843 50.256-296.851 160.772-372.025 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94C85.221 989.6 22.812 1070.655-13.293 1130.978"/>
+        <path d="M1683.752-68.77C1510.91-18.516 1386.901 92 1311.727 262.774 1198.967 518.937 974.6 599.931 861.832 609.363c-112.769 9.433-535.599 120.266-668.67 272.941-88.716 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1702.978-74.283c-172.844 50.256-296.852 160.771-372.025 331.546-112.76 256.162-337.128 337.155-449.896 346.587-112.769 9.433-535.598 120.267-668.67 272.941-88.715 101.784-151.125 182.837-187.23 243.16"/>
+        <path d="M1722.203-79.796C1549.36-29.54 1425.35 80.975 1350.178 251.75c-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1741.428-85.309c-172.843 50.256-296.851 160.772-372.025 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.124 182.838-187.229 243.161"/>
+        <path d="M1760.653-90.822C1587.81-40.566 1463.802 69.95 1388.628 240.724c-112.76 256.162-337.127 337.156-449.895 346.588-112.769 9.433-535.599 120.266-668.67 272.941-88.716 101.783-151.125 182.837-187.23 243.16"/>
+        <path d="M1779.879-96.334c-172.844 50.256-296.852 160.771-372.026 331.546-112.76 256.162-337.127 337.155-449.895 346.587-112.769 9.433-535.598 120.267-668.67 272.941-88.715 101.784-151.125 182.837-187.23 243.16"/>
+        <path d="M1799.104-101.847c-172.843 50.256-296.852 160.771-372.025 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.769 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1818.33-107.36c-172.844 50.256-296.852 160.772-372.026 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.838-187.229 243.161"/>
+        <path d="M1837.554-112.873C1664.711-62.616 1540.703 47.9 1465.53 218.674c-112.76 256.161-337.127 337.155-449.896 346.587-112.768 9.433-535.598 120.266-668.67 272.941-88.715 101.783-151.124 182.837-187.229 243.16"/>
+        <path d="M1856.78-118.385C1683.936-68.13 1559.928 42.386 1484.754 213.16c-112.76 256.162-337.127 337.155-449.895 346.587-112.769 9.433-535.599 120.267-668.67 272.941-88.715 101.784-151.125 182.837-187.23 243.16"/>
+        <path d="M1876.005-123.898c-172.843 50.256-296.852 160.771-372.025 331.546-112.76 256.162-337.128 337.155-449.896 346.588-112.769 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.837-187.23 243.161"/>
+        <path d="M1895.23-129.41c-172.843 50.255-296.852 160.77-372.025 331.545-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.838-187.23 243.161"/>
+        <path d="M1914.455-134.924c-172.843 50.257-296.851 160.772-372.025 331.547-112.76 256.161-337.127 337.155-449.896 346.587-112.768 9.433-535.598 120.266-668.67 272.941-88.715 101.783-151.124 182.837-187.229 243.16"/>
+        <path d="M1933.68-140.436C1760.838-90.18 1636.83 20.335 1561.656 191.11c-112.76 256.162-337.127 337.155-449.895 346.587C998.99 547.13 576.16 657.964 443.09 810.638c-88.716 101.784-151.125 182.837-187.23 243.16"/>
+        <path d="M1952.906-145.949c-172.843 50.256-296.852 160.771-372.025 331.546-112.76 256.162-337.128 337.155-449.896 346.588-112.769 9.432-535.598 120.266-668.67 272.94C373.6 906.91 311.19 987.963 275.084 1048.287"/>
+        <path d="M1972.131-151.462c-172.843 50.256-296.852 160.772-372.025 331.546-112.76 256.162-337.127 337.155-449.896 346.588-112.768 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.838-187.23 243.161"/>
+        <path d="M1991.356-156.975c-172.843 50.257-296.851 160.772-372.025 331.547-112.76 256.162-337.127 337.155-449.896 346.587-112.768 9.433-535.598 120.266-668.67 272.941-88.715 101.783-151.124 182.837-187.229 243.16"/>
+        <path d="M2010.581-162.487c-172.843 50.256-296.851 160.771-372.025 331.546-112.76 256.162-337.127 337.155-449.895 346.587-112.769 9.433-535.599 120.267-668.67 272.941-88.716 101.784-151.125 182.837-187.23 243.16"/>
+        <path d="M2029.807-168c-172.844 50.256-296.852 160.771-372.025 331.546-112.76 256.162-337.128 337.155-449.896 346.588-112.769 9.432-535.598 120.266-668.67 272.94-88.715 101.784-151.125 182.837-187.23 243.161"/>
+    </g>
+</svg>
diff --git a/docs/assets/images/menu-icon.svg b/docs/assets/images/menu-icon.svg
new file mode 100644
index 0000000..0bf1e7f
--- /dev/null
+++ b/docs/assets/images/menu-icon.svg
@@ -0,0 +1,3 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="18" height="12" viewBox="0 0 18 12">
+    <path fill="#333" d="M0 12h18v-2H0v2zm0-5h18V5H0v2zm0-7v2h18V0H0z"/>
+</svg>
diff --git a/docs/assets/images/mousepad-blob.svg b/docs/assets/images/mousepad-blob.svg
new file mode 100644
index 0000000..a05769b
--- /dev/null
+++ b/docs/assets/images/mousepad-blob.svg
@@ -0,0 +1,9 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="216" height="382" viewBox="0 0 216 382">
+    <defs>
+        <linearGradient id="a" x1="50%" x2="50%" y1="0%" y2="100%">
+            <stop offset="0%" stop-color="#9634B2"/>
+            <stop offset="100%" stop-color="#713C80"/>
+        </linearGradient>
+    </defs>
+    <path fill="url(#a)" fill-rule="evenodd" d="M1419.757 436.537l160.517 46.028c48.154 13.808 75.997 64.038 62.189 112.192l-46.028 160.517c-13.808 48.154-64.038 75.997-112.192 62.189l-160.517-46.028c-48.154-13.808-75.997-64.038-62.189-112.192l46.028-160.517c13.808-48.154 64.038-75.997 112.192-62.189z" opacity=".7" transform="translate(-1258 -433)"/>
+</svg>
diff --git a/docs/assets/images/piece-of-paper-with-folded-top-right-corner.svg b/docs/assets/images/piece-of-paper-with-folded-top-right-corner.svg
new file mode 100644
index 0000000..aa493a7
--- /dev/null
+++ b/docs/assets/images/piece-of-paper-with-folded-top-right-corner.svg
@@ -0,0 +1,117 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:xlink="http://www.w3.org/1999/xlink"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   width="177.42053"
+   height="229.022"
+   viewBox="0 0 177.42053 229.022"
+   version="1.1"
+   id="svg24"
+   sodipodi:docname="piece-of-paper-with-folded-top-right-corner.svg"
+   inkscape:version="0.92.1 r15371">
+  <metadata
+     id="metadata28">
+    <rdf:RDF>
+      <cc:Work
+         rdf:about="">
+        <dc:format>image/svg+xml</dc:format>
+        <dc:type
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+        <dc:title></dc:title>
+      </cc:Work>
+    </rdf:RDF>
+  </metadata>
+  <sodipodi:namedview
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1"
+     objecttolerance="10"
+     gridtolerance="10"
+     guidetolerance="10"
+     inkscape:pageopacity="0"
+     inkscape:pageshadow="2"
+     inkscape:window-width="2482"
+     inkscape:window-height="1057"
+     id="namedview26"
+     showgrid="false"
+     fit-margin-top="0"
+     fit-margin-left="0"
+     fit-margin-right="0"
+     fit-margin-bottom="0"
+     inkscape:zoom="1.026087"
+     inkscape:cx="89"
+     inkscape:cy="114.946"
+     inkscape:window-x="-8"
+     inkscape:window-y="-8"
+     inkscape:window-maximized="1"
+     inkscape:current-layer="svg24" />
+  <defs
+     id="defs4">
+    <path
+       id="a"
+       d="M 0,6.37 H 177.418 V 235.385 H 0 Z"
+       inkscape:connector-curvature="0" />
+    <path
+       id="c"
+       d="m 1.588,6.374 h 65.06 V 71.43 H 1.588 Z"
+       inkscape:connector-curvature="0" />
+  </defs>
+  <g
+     id="g22"
+     style="fill:none;fill-rule:evenodd"
+     transform="translate(0,-0.92398933)">
+    <g
+       transform="translate(0,-5.446)"
+       id="g11">
+      <mask
+         id="b"
+         fill="#fff">
+        <use
+           xlink:href="#a"
+           id="use6"
+           x="0"
+           y="0"
+           width="100%"
+           height="100%" />
+      </mask>
+      <path
+         d="m 7.027,11.692 c -0.942,0 -1.703,0.762 -1.703,1.703 v 214.962 c 0,0.942 0.761,1.703 1.703,1.703 h 163.37 c 0.935,0 1.704,-0.761 1.704,-1.703 V 69.867 L 113.919,11.692 Z m 163.37,223.699 H 7.028 C 3.15,235.39 0,232.227 0,228.357 V 13.395 C 0,9.532 3.15,6.37 7.027,6.37 h 108 a 2.64,2.64 0 0 1 1.883,0.782 l 59.732,59.732 c 0.492,0.499 0.776,1.184 0.776,1.883 v 159.591 c 0,3.87 -3.15,7.034 -7.02,7.034 z"
+         mask="url(#b)"
+         id="path9"
+         inkscape:connector-curvature="0"
+         style="fill:#ffffff" />
+    </g>
+    <g
+       transform="translate(110.77,-5.446)"
+       id="g18">
+      <mask
+         id="d"
+         fill="#fff">
+        <use
+           xlink:href="#c"
+           id="use13"
+           x="0"
+           y="0"
+           width="100%"
+           height="100%" />
+      </mask>
+      <path
+         d="m 6.919,15.458 v 48.94 c 0,0.94 0.761,1.702 1.703,1.702 h 48.94 L 6.918,15.458 Z M 63.986,71.43 H 8.622 c -3.87,0 -7.034,-3.157 -7.034,-7.034 V 9.033 c 0,-1.08 0.65,-2.049 1.648,-2.457 A 2.68,2.68 0 0 1 6.144,7.15 l 59.732,59.732 c 0.754,0.762 0.983,1.904 0.574,2.915 a 2.69,2.69 0 0 1 -2.464,1.634 z"
+         mask="url(#d)"
+         id="path16"
+         inkscape:connector-curvature="0"
+         style="fill:#ffffff" />
+    </g>
+    <path
+       d="M 88.71,46.81 H 33.978 a 2.671,2.671 0 0 1 -2.665,-2.665 2.671,2.671 0 0 1 2.665,-2.665 h 54.734 a 2.671,2.671 0 0 1 2.665,2.665 2.671,2.671 0 0 1 -2.665,2.666 m 11.702,30.674 h -66.44 a 2.6655,2.6655 0 0 1 0,-5.331 h 66.44 a 2.671,2.671 0 0 1 2.666,2.665 2.671,2.671 0 0 1 -2.666,2.666 m 44.473,30.667 H 33.972 a 2.67,2.67 0 0 1 -2.658,-2.666 2.67,2.67 0 0 1 2.658,-2.665 h 110.915 a 2.667,2.667 0 0 1 0,5.33 m 0,61.335 H 33.972 a 2.67,2.67 0 0 1 -2.658,-2.665 2.67,2.67 0 0 1 2.658,-2.666 h 110.915 a 2.667,2.667 0 0 1 0,5.331 m 0,-30.667 H 33.972 a 2.67,2.67 0 0 1 -2.658,-2.665 2.67,2.67 0 0 1 2.658,-2.666 h 110.915 a 2.667,2.667 0 0 1 0,5.331 m 0,61.334 H 89.433 a 2.671,2.671 0 0 1 -2.665,-2.665 2.671,2.671 0 0 1 2.665,-2.665 h 55.454 a 2.667,2.667 0 0 1 0,5.33"
+       id="path20"
+       inkscape:connector-curvature="0"
+       style="fill:#ffffff" />
+  </g>
+</svg>
diff --git a/docs/assets/images/scala.svg b/docs/assets/images/scala.svg
new file mode 100644
index 0000000..5f27789
--- /dev/null
+++ b/docs/assets/images/scala.svg
@@ -0,0 +1,31 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="43" height="73" viewBox="0 0 43 73">
+    <defs>
+        <linearGradient id="a" x1=".171%" x2="99.819%" y1="49.649%" y2="49.649%">
+            <stop offset="0%" stop-color="#656565"/>
+            <stop offset="100%" stop-color="#010101"/>
+        </linearGradient>
+        <linearGradient id="b" x1=".171%" x2="99.819%" y1="49.82%" y2="49.82%">
+            <stop offset="0%" stop-color="#656565"/>
+            <stop offset="100%" stop-color="#010101"/>
+        </linearGradient>
+        <linearGradient id="c" x1=".171%" x2="99.819%" y1="50.052%" y2="50.052%">
+            <stop offset="0%" stop-color="#9F1C20"/>
+            <stop offset="100%" stop-color="#ED2224"/>
+        </linearGradient>
+        <linearGradient id="d" x1=".171%" x2="99.819%" y1="50.259%" y2="50.259%">
+            <stop offset="0%" stop-color="#9F1C20"/>
+            <stop offset="100%" stop-color="#ED2224"/>
+        </linearGradient>
+        <linearGradient id="e" x1=".171%" x2="99.819%" y1="50.44%" y2="50.44%">
+            <stop offset="0%" stop-color="#9F1C20"/>
+            <stop offset="100%" stop-color="#ED2224"/>
+        </linearGradient>
+    </defs>
+    <g fill="none" fill-rule="nonzero">
+        <path fill="url(#a)" d="M.61 27.854s41.78 4.178 41.78 11.142V22.283s0-6.963-41.78-11.141v16.712z"/>
+        <path fill="url(#b)" d="M.61 50.138s41.78 4.178 41.78 11.141V44.567s0-6.964-41.78-11.142v16.713z"/>
+        <path fill="url(#c)" d="M42.39 0v16.713s0 6.963-41.78 11.141V11.142S42.39 6.964 42.39 0"/>
+        <path fill="url(#d)" d="M.61 33.425s41.78-4.178 41.78-11.142v16.713s0 6.964-41.78 11.142V33.425z"/>
+        <path fill="url(#e)" d="M.61 72.421V55.71s41.78-4.179 41.78-11.142v16.712s0 6.964-41.78 11.142"/>
+    </g>
+</svg>
diff --git a/docs/assets/images/search.svg b/docs/assets/images/search.svg
new file mode 100644
index 0000000..4123a57
--- /dev/null
+++ b/docs/assets/images/search.svg
@@ -0,0 +1,15 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<svg width="18px" height="19px" viewBox="0 0 18 19" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+    <!-- Generator: sketchtool 53.2 (72643) - https://sketchapp.com -->
+    <title>76A07A59-5A01-45B5-86ED-1007B13A770C</title>
+    <desc>Created with sketchtool.</desc>
+    <g id="Page-1" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
+        <g id="Article-320" transform="translate(-248.000000, -22.000000)" fill="#333" fill-rule="nonzero">
+            <g id="screen">
+                <g id="GitHub-icon" transform="translate(248.000000, 19.000000)">
+                    <path d="M6.6146307,16.2292614 C2.96147085,16.2292614 0,13.2677905 0,9.6146307 C0,5.96147085 2.96147085,3 6.6146307,3 C10.2677905,3 13.2292614,5.96147085 13.2292614,9.6146307 C13.2292614,11.4412106 12.4888937,13.0948683 11.291881,14.291881 L17.8421457,20.9358099 C18.0561908,21.149855 18.0509881,21.4930606 17.8360963,21.7079524 C17.6197064,21.9243423 17.2755046,21.9255525 17.0639538,21.7140017 L10.4488725,15.0052562 C9.36728942,15.7759493 8.04389847,16.2292614 6.6146307,16.2292614 Z M6.6146307,15.4510695 C9.83800701,15.4510695 12.4510695,12.838007 12.4510695,9.6146307 C12.4510695,6.39125437 9.83800701,3.77819185 6.6146307,3.77819185 C3.39125439,3.77819185 0.778191847,6.39125437 0.778191847,9.6146307 C0.778191847,12.838007 3.39125439,15.4510695 6.6146307,15.4510695 L6.6146307,15.4510695 Z" id="Search"></path>
+                </g>
+            </g>
+        </g>
+    </g>
+</svg>
\ No newline at end of file
diff --git a/docs/assets/images/violent-blob.svg b/docs/assets/images/violent-blob.svg
new file mode 100644
index 0000000..4bef98b
--- /dev/null
+++ b/docs/assets/images/violent-blob.svg
@@ -0,0 +1,28 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Generator: Adobe Illustrator 22.1.0, SVG Export Plug-In . SVG Version: 6.00 Build 0)  -->
+<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
+	 viewBox="0 0 735 673" style="enable-background:new 0 0 735 673;" xml:space="preserve">
+<style type="text/css">
+	.st0{opacity:0.85;}
+	.st1{opacity:0.85;fill:url(#Rectangle-3_2_);}
+	.st2{opacity:0.85;fill:url(#Rectangle-3_3_);}
+</style>
+<g id="Page-1" class="st0">
+	<g id="Group" transform="translate(-55.973403, -60.591451)">
+		
+			<linearGradient id="Rectangle-3_2_" gradientUnits="userSpaceOnUse" x1="3.5474" y1="693.9244" x2="3.5474" y2="692.9244" gradientTransform="matrix(715.9998 0 0 -661.9999 -2125.9568 459438.9375)">
+			<stop  offset="0" style="stop-color:#C07670"/>
+			<stop  offset="1" style="stop-color:#8C413D"/>
+		</linearGradient>
+		<path id="Rectangle-3" class="st1" d="M213.3,78.8C334-1.9,784,213.5,771.8,294.2S709.6,709.5,588.1,722
+			c-121.5,12.5-408.1-95.2-496.5-203.6S92.7,159.6,213.3,78.8z"/>
+		
+			<linearGradient id="Rectangle-3_3_" gradientUnits="userSpaceOnUse" x1="3.5511" y1="693.9073" x2="3.5511" y2="692.9073" gradientTransform="matrix(745.4993 -131.4516 -121.5537 -689.3654 82073.4297 478888.8438)">
+			<stop  offset="0" style="stop-color:#AA6FB7"/>
+			<stop  offset="1" style="stop-color:#713C80"/>
+		</linearGradient>
+		<path id="Rectangle-3_1_" class="st2" d="M168.1,121.7c110.8-106.2,619,35.5,621,121.7c2,86.2,11.6,443.9-112.7,479.3
+			S233.9,698.4,122,601.9S57.3,227.9,168.1,121.7z"/>
+	</g>
+</g>
+</svg>
diff --git a/docs/assets/images/watermelon-blob.svg b/docs/assets/images/watermelon-blob.svg
new file mode 100644
index 0000000..c2fe445
--- /dev/null
+++ b/docs/assets/images/watermelon-blob.svg
@@ -0,0 +1,9 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="63" height="91" viewBox="0 0 63 91">
+    <defs>
+        <linearGradient id="a" x1="50%" x2="50%" y1="0%" y2="100%">
+            <stop offset="0%" stop-color="#F1474E"/>
+            <stop offset="100%" stop-color="#DF2226"/>
+        </linearGradient>
+    </defs>
+    <path fill="url(#a)" fill-rule="evenodd" d="M67.752 1154.058l24.137 48.02c4.961 9.869.982 21.89-8.887 26.851a20 20 0 0 1-8.982 2.13H26c-11.046 0-20-8.953-20-20a20 20 0 0 1 2.093-8.905l23.882-48.02c4.919-9.89 16.924-13.92 26.814-9.001a20 20 0 0 1 8.963 8.925z" transform="rotate(-7 -9310.93 893.495)"/>
+</svg>
diff --git a/docs/assets/js/anchor.min.js b/docs/assets/js/anchor.min.js
new file mode 100755
index 0000000..e302d89
--- /dev/null
+++ b/docs/assets/js/anchor.min.js
@@ -0,0 +1,9 @@
+// @license magnet:?xt=urn:btih:d3d9a9a6595521f9666a5e94cc830dab83b65699&dn=expat.txt Expat
+//
+// AnchorJS - v4.2.0 - 2019-01-01
+// https://github.com/bryanbraun/anchorjs
+// Copyright (c) 2019 Bryan Braun; Licensed MIT
+//
+// @license magnet:?xt=urn:btih:d3d9a9a6595521f9666a5e94cc830dab83b65699&dn=expat.txt Expat
+!function(A,e){"use strict";"function"==typeof define&&define.amd?define([],e):"object"==typeof module&&module.exports?module.exports=e():(A.AnchorJS=e(),A.anchors=new A.AnchorJS)}(this,function(){"use strict";return function(A){function f(A){A.icon=A.hasOwnProperty("icon")?A.icon:"",A.visible=A.hasOwnProperty("visible")?A.visible:"hover",A.placement=A.hasOwnProperty("placement")?A.placement:"right",A.ariaLabel=A.hasOwnProperty("ariaLabel")?A.ariaLabel:"Anchor",A.class=A.hasOwnProperty("class")?A.class:"",A.base=A.hasOwnProperty("base")?A.base:"",A.truncate=A.hasOwnProperty("truncate")?Math.floor(A.truncate):64,A.titleText=A.hasOwnProperty("titleText")?A.titleText:""}function p(A){var e;if("string"==typeof A||A instanceof String)e=[].slice.call(document.querySelectorAll(A));else{if(!(Array.isArray(A)||A instanceof NodeList))throw new Error("The selector provided to AnchorJS was invalid.");e=[].slice.call(A)}return e}this.options=A||{},this.elements=[],f(this.options),this.isTouchDevice=function(){return!!("ontouchstart"in window||window.DocumentTouch&&document instanceof DocumentTouch)},this.add=function(A){var e,t,i,n,o,s,a,r,c,h,l,u,d=[];if(f(this.options),"touch"===(l=this.options.visible)&&(l=this.isTouchDevice()?"always":"hover"),A||(A="h2, h3, h4, h5, h6"),0===(e=p(A)).length)return this;for(function(){if(null===document.head.querySelector("style.anchorjs")){var A,e=document.createElement("style");e.className="anchorjs",e.appendChild(document.createTextNode("")),void 0===(A=document.head.querySelector('[rel="stylesheet"], style'))?document.head.appendChild(e):document.head.insertBefore(e,A),e.sheet.insertRule(" .anchorjs-link {   opacity: 0;   text-decoration: none;   -webkit-font-smoothing: antialiased;   -moz-osx-font-smoothing: grayscale; }",e.sheet.cssRules.length),e.sheet.insertRule(" *:hover > .anchorjs-link, .anchorjs-link:focus  {   opacity: 1; }",e.sheet.cssRules.length),e.sheet.insertRule(" [data-anchorjs-icon]::after {   content: attr(data-anchorjs-icon); }",e.sheet.cssRules.length),e.sheet.insertRule(' @font-face {   font-family: "anchorjs-icons";   src: url(data:n/a;base64,AAEAAAALAIAAAwAwT1MvMg8yG2cAAAE4AAAAYGNtYXDp3gC3AAABpAAAAExnYXNwAAAAEAAAA9wAAAAIZ2x5ZlQCcfwAAAH4AAABCGhlYWQHFvHyAAAAvAAAADZoaGVhBnACFwAAAPQAAAAkaG10eASAADEAAAGYAAAADGxvY2EACACEAAAB8AAAAAhtYXhwAAYAVwAAARgAAAAgbmFtZQGOH9cAAAMAAAAAunBvc3QAAwAAAAADvAAAACAAAQAAAAEAAHzE2p9fDzz1AAkEAAAAAADRecUWAAAAANQA6R8AAAAAAoACwAAAAAgAAgAAAAAAAAABAAADwP/AAAACgAAA/9MCrQABAAAAAAAAAAAAAAAAAAAAAwABAAAAAwBVAAIAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAMCQAGQAAUAAAKZAswAAACPApkCzAAAAesAMwEJAAAAAAAAAAAAAAAAAAAAARAAAAAAAAAAAAAAAAAAAAAAQAAg//0DwP/AAEADwABAAAAAAQAAAAAAAAAAAAAAIAAAAAAAAAIAAAACgAAxAAAAAwAAAAMAAAAcAAEAAwAAABwAAwABAAAAHAAEADAAAAAIAAgAAgAAACDpy//9//8AAAAg6cv//f///+EWNwADAAEAAAAAAAAAAAAAAAAACACEAAEAAAAAAAAAAAAAAAAxAAACAAQARAKAAsAAKwBUAAABIiYnJjQ3NzY2MzIWFxYUBwcGIicmNDc3NjQnJiYjIgYHBwYUFxYUBwYGIwciJicmNDc3NjIXFhQHBwYUFxYWMzI2Nzc2NCcmNDc2MhcWFAcHBgYjARQGDAUtLXoWOR8fORYtLTgKGwoKCjgaGg0gEhIgDXoaGgkJBQwHdR85Fi0tOAobCgoKOBoaDSASEiANehoaCQkKGwotLXoWOR8BMwUFLYEuehYXFxYugC44CQkKGwo4GkoaDQ0NDXoaShoKGwoFBe8XFi6ALjgJCQobCjgaShoNDQ0NehpKGgobCgoKLYEuehYXAAAADACWAAEAAAAAAAEACAAAAAEAAAAAAAIAAwAIAAEAAAAAAAMACAAAAAEAAAAAAAQACAAAAAEAAAAAAAUAAQALAAEAAAAAAAYACAAAAAMAAQQJAAEAEAAMAAMAAQQJAAIABgAcAAMAAQQJAAMAEAAMAAMAAQQJAAQAEAAMAAMAAQQJAAUAAgAiAAMAAQQJAAYAEAAMYW5jaG9yanM0MDBAAGEAbgBjAGgAbwByAGoAcwA0ADAAMABAAAAAAwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAH//wAP) format("truetype"); 
}',e.sheet.cssRules.length)}}(),t=document.querySelectorAll("[id]"),i=[].map.call(t,function(A){return A.id}),o=0;o<e.length;o++)if(this.hasAnchorJSLink(e[o]))d.push(o);else{if(e[o].hasAttribute("id"))n=e[o].getAttribute("id");else if(e[o].hasAttribute("data-anchor-id"))n=e[o].getAttribute("data-anchor-id");else{for(c=r=this.urlify(e[o].textContent),a=0;void 0!==s&&(c=r+"-"+a),a+=1,-1!==(s=i.indexOf(c)););s=void 0,i.push(c),e[o].setAttribute("id",c),n=c}n.replace(/-/g," "),(h=document.createElement("a")).className="anchorjs-link "+this.options.class,h.setAttribute("aria-label",this.options.ariaLabel),h.setAttribute("data-anchorjs-icon",this.options.icon),this.options.titleText&&(h.title=this.options.titleText),u=document.querySelector("base")?window.location.pathname+window.location.search:"",u=this.options.base||u,h.href=u+"#"+n,"always"===l&&(h.style.opacity="1"),""===this.options.icon&&(h.style.font="1em/1 anchorjs-icons","left"===this.options.placement&&(h.style.lineHeight="inherit")),"left"===this.options.placement?(h.style.position="absolute",h.style.marginLeft="-1em",h.style.paddingRight="0.5em",e[o].insertBefore(h,e[o].firstChild)):(h.style.paddingLeft="0.375em",e[o].appendChild(h))}for(o=0;o<d.length;o++)e.splice(d[o]-o,1);return this.elements=this.elements.concat(e),this},this.remove=function(A){for(var e,t,i=p(A),n=0;n<i.length;n++)(t=i[n].querySelector(".anchorjs-link"))&&(-1!==(e=this.elements.indexOf(i[n]))&&this.elements.splice(e,1),i[n].removeChild(t));return this},this.removeAll=function(){this.remove(this.elements)},this.urlify=function(A){return this.options.truncate||f(this.options),A.trim().replace(/\'/gi,"").replace(/[& +$,:;=?@"#{}|^~[`%!'<>\]\.\/\(\)\*\\\n\t\b\v]/g,"-").replace(/-{2,}/g,"-").substring(0,this.options.truncate).replace(/^-+|-+$/gm,"").toLowerCase()},this.hasAnchorJSLink=function(A){var e=A.firstChild&&-1<(" "+A.firstChild.className+" ").indexOf(" anchorjs-link "),t=A.lastChild&&-1<(" "+A.lastChild.className+" ").indexOf(" anchorjs-link ");return e||t||!1}}});
+// @license-end
\ No newline at end of file
diff --git a/docs/assets/js/code-copy-to-clipboard.js b/docs/assets/js/code-copy-to-clipboard.js
new file mode 100644
index 0000000..3fd42b2
--- /dev/null
+++ b/docs/assets/js/code-copy-to-clipboard.js
@@ -0,0 +1,70 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//       http://www.apache.org/licenses/LICENSE-2.0
+//
+//  Unless required by applicable law or agreed to in writing, software
+//  distributed under the License is distributed on an "AS IS" BASIS,
+//  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//  See the License for the specific language governing permissions and
+//  limitations under the License.
+
+const BUTTON_CLASSNAME = 'copy-to-clipboard-button'
+const BUTTON_CLASSNAME_SUCCESS = 'copy-to-clipboard-button__success'
+
+const TEMPLATE = document.createElement('button')
+TEMPLATE.classList.add(BUTTON_CLASSNAME)
+TEMPLATE.title = 'Copy to clipboard'
+TEMPLATE.type = 'button'
+
+const SECOND = 1000
+const RESULT_DISPLAY_DURATION = 0.5 * SECOND
+
+/**
+ * @param {HTMLElement?} el Element to copy text from
+ * @returns {boolean} Copy success/failure
+ */
+function copyCode(el) {
+	if (!el) return false
+	if (!el.matches('code')) return false
+
+	const range = document.createRange()
+	range.selectNode(el)
+	window.getSelection().addRange(range)
+
+	try {
+		return document.execCommand('copy')
+	} catch (err) {
+		return false
+	} finally {
+		window.getSelection().removeAllRanges()
+	}
+}
+
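+// Adds a "copy to clipboard" button to every code listing and handles the
+// clicks through a single delegated listener on the document.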
+function init() {
+	for (const code of document.querySelectorAll('pre>code')) {
+		try {
+			const container = code.closest('.listingblock .content')
+			if (!container) continue // skip code blocks outside of a listing block
+			const button = TEMPLATE.cloneNode(true)
+			container.appendChild(button)
+		} catch (err) {}
+	}
+	document.addEventListener('click', e => {
+		if (e.target.classList.contains(BUTTON_CLASSNAME)) {
+			const result = copyCode(e.target.parentElement.querySelector('code'))
+			if (result) {
+				e.target.innerText = '✓'
+				e.target.classList.add(BUTTON_CLASSNAME_SUCCESS)
+				setTimeout(() => {
+					e.target.innerText = TEMPLATE.textContent
+					e.target.classList.remove(BUTTON_CLASSNAME_SUCCESS)
+				}, RESULT_DISPLAY_DURATION)
+			}
+		}
+	})
+}
+
+window.addEventListener('load', init)
diff --git a/docs/assets/js/code-tabs.js b/docs/assets/js/code-tabs.js
new file mode 100644
index 0000000..6d8118a
--- /dev/null
+++ b/docs/assets/js/code-tabs.js
@@ -0,0 +1,155 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//       http://www.apache.org/licenses/LICENSE-2.0
+//
+//  Unless required by applicable law or agreed to in writing, software
+//  distributed under the License is distributed on an "AS IS" BASIS,
+//  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//  See the License for the specific language governing permissions and
+//  limitations under the License.
+
+const TAB_BUTTON = document.createRange().createContextualFragment(`
+    <button class='code-tabs__tab'></button>
+`)
+
+const getAllCodeTabs = () => document.querySelectorAll('code-tabs')
+
+/**
+ * @typedef CodeTabsState
+ * @prop {string?} currentTab
+ * @prop {string[]} tabs
+ * @prop {number?} boundingClientRectTop
+ */
+
+/**
+ * @typedef {number} ScrollState
+ */
+
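+/**
+ * Tab switcher for code samples. The expected markup (a sketch inferred from
+ * the selectors and dataset keys used below) is:
+ *
+ *     <code-tabs>
+ *         <code-tab data-tab="Java">...</code-tab>
+ *         <code-tab data-tab="C#" data-unavailable="true">...</code-tab>
+ *     </code-tabs>
+ */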
+class CodeTabs {
+    /** @param {HTMLElement} el */
+    constructor(el) {
+        this.el = el
+        this.el.codeTabs = this
+        /**
+         * @type {CodeTabsState}
+         */
+        this._state = {tabs: []}
+    }
+    get state() {
+        return this._state
+    }
+    /**
+     * @param {CodeTabsState} newState
+     */
+    set state(newState) {
+        const oldState = this._state
+        this._state = newState
+        this._render(oldState, newState)
+    }
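+    // Called manually at the bottom of this file instead of relying on
+    // customElements.define(), which Edge does not support for V1 elements.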
+    connectedCallback() {
+        this._tabElements = this.el.querySelectorAll('code-tab')
+        this.state = {
+            currentTab: this._tabElements[0].dataset.tab,
+            tabs: [...this._tabElements].map(el => el.dataset.tab),
+        }
+    }
+    /**
+     * @private
+     * @param {CodeTabsState} oldState
+     * @param {CodeTabsState} newState
+     */
+    _render(oldState, newState) {
+        if (!oldState.tabs.length && newState.tabs.length) {
+            this.el.prepend(newState.tabs.reduce((nav, tab, i) => {
+                const button = TAB_BUTTON.firstElementChild.cloneNode()
+                button.dataset.tab = tab
+                button.innerText = tab
+                button.onclick = () => {
+                    const scrollState = this._rememberScrollState()
+                    this._openTab(tab)
+                    this._restoreScrollState(scrollState)
+                }
+                if (this._tabElements[i].dataset.unavailable) {
+                    button.classList.add('grey')
+                }
+
+                this._tabElements[i].button = button
+                nav.appendChild(button)
+                return nav
+            }, document.createElement('NAV')))
+            this.el.classList.add('code-tabs__initialized')
+        }
+        if (oldState.currentTab !== newState.currentTab) {
+            for (const tab of this._tabElements) {
+                const hidden = tab.dataset.tab !== newState.currentTab
+                if (hidden) {
+                    tab.setAttribute('hidden', 'hidden')
+                } else {
+                    tab.removeAttribute('hidden')
+                }
+                tab.button.classList.toggle('active', !hidden)
+            }
+        }
+    }
+    /** 
+     * @private
+     * @param {string} tab
+     */
+    _openTab(tab, emitEvent = true) {
+        if (!this.state.tabs.includes(tab)) return
+        this.state = Object.assign({}, this.state, {currentTab: tab})
+        if (emitEvent) this.el.dispatchEvent(new CustomEvent('tabopen', {
+            bubbles: true,
+            detail: {tab}
+        }))
+    }
+    /** 
+     * @param {string} tab
+     */
+    openTab(tab) {
+        this._openTab(tab, false)
+    }
+
+    /**
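+     * Remembers the container's current distance from the top of the viewport.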
+     * @private
+     * @returns {ScrollState}
+     */
+    _rememberScrollState() {
+        return this.el.getBoundingClientRect().top
+    }
+
+    /**
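+     * Scrolls the page so the container keeps the distance captured earlier.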
+     * @private
+     * @param {ScrollState} scrollState
+     * @returns {void}
+     */
+    _restoreScrollState(scrollState) {
+        const currentRectTop = this.el.getBoundingClientRect().top
+        const delta = currentRectTop - scrollState
+        document.scrollingElement.scrollBy(0, delta)
+    }
+}
+
+/**
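+ * Keeps every code-tabs widget on the page switched to the same tab name.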
+ * @param {NodeListOf<Element>} tabs
+ */
+const setupSameLanguageSync = (tabs) => {
+    document.addEventListener('tabopen', (e) => {
+        [...tabs].filter(tab => tab !== e.target).forEach(tab => {
+            tab.codeTabs.openTab(e.detail.tab)
+        })
+    })
+}
+
+// Edge does not support custom elements V1, so the instances are created and
+// wired up manually here instead of via customElements.define().
+for (const el of getAllCodeTabs()) {
+    const instance = new CodeTabs(el)
+    instance.connectedCallback()
+}
+setupSameLanguageSync(getAllCodeTabs())
diff --git a/docs/assets/js/docs-menu.js b/docs/assets/js/docs-menu.js
new file mode 100644
index 0000000..8e598da
--- /dev/null
+++ b/docs/assets/js/docs-menu.js
@@ -0,0 +1,64 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//       http://www.apache.org/licenses/LICENSE-2.0
+//
+//  Unless required by applicable law or agreed to in writing, software
+//  distributed under the License is distributed on an "AS IS" BASIS,
+//  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//  See the License for the specific language governing permissions and
+//  limitations under the License.
+
+const button = document.querySelector('button.menu')
+const overlay = document.querySelector('.left-nav__overlay')
+
+const eventTypes = {
+    show: 'leftNavigationShow',
+    hide: 'leftNavigationHide'
+}
+
+/**
+ * @param {keyof eventTypes} type
+ */
+const emit = type => {
+    if (!window.CustomEvent) return
+    document.dispatchEvent(new CustomEvent(type, {bubbles: true}))
+}
+
+/**
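+ * Shows or hides the left navigation and emits the matching show/hide event.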
+ * @param {boolean} force
+ */
+const toggleMenu = (force) => {
+    const body = document.querySelector('body')
+    const HIDE_CLASS = 'hide-left-nav'
+    body.classList.toggle(HIDE_CLASS, force)
+    emit(eventTypes[body.classList.contains(HIDE_CLASS) ? 'hide' : 'show'])
+}
+
+export const hideLeftNav = () => {
+    toggleMenu(true)
+}
+
+if (button && overlay) {
+    const query = window.matchMedia('(max-width: 990px)')
+
+    button.addEventListener('click', () => toggleMenu())
+    overlay.addEventListener('click', () => toggleMenu())
+    query.addListener((e) => {
+        toggleMenu(e.matches)
+    })
+    toggleMenu(query.matches)
+}
+
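+// Expands or collapses a left-nav subtree when its toggle button is clicked.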
+document.addEventListener('click', e => {
+    if (e.target.matches('.left-nav button')) {
+        e.target.classList.toggle('expanded')
+        e.target.classList.toggle('collapsed')
+        e.target.nextElementSibling.classList.toggle('expanded')
+        e.target.nextElementSibling.classList.toggle('collapsed')
+    }
+})
diff --git a/docs/assets/js/index.js b/docs/assets/js/index.js
new file mode 100644
index 0000000..0fa36f7
--- /dev/null
+++ b/docs/assets/js/index.js
@@ -0,0 +1,51 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//       http://www.apache.org/licenses/LICENSE-2.0
+//
+//  Unless required by applicable law or agreed to in writing, software
+//  distributed under the License is distributed on an "AS IS" BASIS,
+//  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//  See the License for the specific language governing permissions and
+//  limitations under the License.
+
+import './code-tabs.js?1'
+import {hideLeftNav} from './docs-menu.js'
+import {hideTopNav} from './top-navigation.js'
+import './page-nav.js'
+
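+// Keep only one navigation panel (top or left) open at a time.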
+document.addEventListener('topNavigationShow', hideLeftNav)
+document.addEventListener('leftNavigationShow', hideTopNav)
+
+// Search with the Swiftype widget (the widget initialization below is
+// currently commented out)
+jQuery(document).ready(function(){
+
+    var customRenderFunction = function(document_type, item) {
+        var out = '<a href="' + Swiftype.htmlEscape(item['url']) + '" class="st-search-result-link">' + item.highlight['title'] + '</a>';
+        return out.concat('<p class="url">' + String(item['url']).replace("https://www.", '') + '</p><p class="body">' + item.highlight['body'] + '</p>');
+    }
+    
+/*    jQuery("#search-input").swiftype({
+        fetchFields: { 'page': ['url'] },
+        renderFunction: customRenderFunction,
+        highlightFields: {
+            'page': {
+                'title': {'size': 60, 'fallback': true },
+                'body': { 'size': 100, 'fallback':true }
+            }
+        },
+        engineKey: '_t6sDkq6YsFC_12W6UH2'
+    });
+    */
+    
+});
diff --git a/docs/assets/js/page-nav.js b/docs/assets/js/page-nav.js
new file mode 100644
index 0000000..8f48464
--- /dev/null
+++ b/docs/assets/js/page-nav.js
@@ -0,0 +1,37 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//       http://www.apache.org/licenses/LICENSE-2.0
+//
+//  Unless required by applicable law or agreed to in writing, software
+//  distributed under the License is distributed on an "AS IS" BASIS,
+//  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//  See the License for the specific language governing permissions and
+//  limitations under the License.
+
+const rightNav = document.querySelector('.right-nav .sectlevel1')
+
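+// Highlights the table-of-contents link whose section is currently visible,
+// falling back to the most recently seen section while scrolling.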
+if (window.IntersectionObserver && rightNav) {
+    const tocAnchors = [...rightNav.querySelectorAll('a[href]')]
+    let last;
+    tocAnchors.forEach((a, i, all) => {
+        const target = document.querySelector(a.hash)
+        if (!target) return
+        const observer = new IntersectionObserver((entries) => {
+            entries.forEach(entry => {
+                a.classList.toggle('visible', entry.isIntersecting)
+                if (entry.isIntersecting) last = a
+
+                const firstVisible = rightNav.querySelector('.visible')
+                tocAnchors.forEach(a => a.classList.remove('active'))
+                if (firstVisible) firstVisible.classList.add('active')
+                else if (last) last.classList.add('active')
+            })
+        });
+        observer.observe(target)
+    })
+}
diff --git a/docs/assets/js/top-navigation.js b/docs/assets/js/top-navigation.js
new file mode 100644
index 0000000..291ae92
--- /dev/null
+++ b/docs/assets/js/top-navigation.js
@@ -0,0 +1,92 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//       http://www.apache.org/licenses/LICENSE-2.0
+//
+//  Unless required by applicable law or agreed to in writing, software
+//  distributed under the License is distributed on an "AS IS" BASIS,
+//  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//  See the License for the specific language governing permissions and
+//  limitations under the License.
+
+const query = window.matchMedia('(max-width: 450px)')
+const header = document.querySelector('header')
+const search = document.querySelector('header .search input')
+
+let state = {
+    narrowMode: false,
+    showSearch: false,
+    showNav: false
+}
+
+const eventTypes = {
+    navShow: 'topNavigationShow',
+    navHide: 'topNavigationHide'
+}
+
+/**
+ * @param {keyof eventTypes} type
+ */
+const emit = type => {
+    if (!window.CustomEvent) return
+    header.dispatchEvent(new CustomEvent(type, {bubbles: true}))
+}
+
+/**
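+ * Applies the difference between the current and new state to the header DOM.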
+ * @param {typeof state} newState
+ */
+const render = (newState) => {
+    if (state.narrowMode !== newState.narrowMode)
+        header.classList.toggle('narrow-header')
+    if (state.showSearch !== newState.showSearch) {
+        header.classList.toggle('show-search')
+        search.value = ''
+        if (newState.showSearch) search.focus()
+    }
+    if (state.showNav !== newState.showNav) {
+        header.classList.toggle('show-nav')
+        emit(eventTypes[newState.showNav ? 'navShow' : 'navHide'])
+    }
+    state = newState
+}
+
+render(Object.assign({}, state, {narrowMode: query.matches}))
+
+query.addListener((e) => {
+    render(Object.assign({}, state, {
+        narrowMode: e.matches,
+        showSearch: false,
+        showNav: false
+    }))
+})
+
+document.querySelector('.top-nav-toggle').addEventListener('click', () => {
+    render(Object.assign({}, state, {
+        showNav: !state.showNav,
+        showSearch: false
+    }))
+})
+
+document.querySelector('.search-toggle').addEventListener('click', () => {
+    render(Object.assign({}, state, {
+        showSearch: !state.showSearch,
+        showNav: false
+    }))
+})
+
+search.addEventListener('blur', () => {
+    render(Object.assign({}, state, {
+        showSearch: false
+    }))
+})
+
+export const hideTopNav = () => {
+    render(Object.assign({}, state, {
+        showNav: false,
+        showSearch: false
+    }))
+}
diff --git a/docs/favicon.ico b/docs/favicon.ico
new file mode 100644
index 0000000..62d5ea6
--- /dev/null
+++ b/docs/favicon.ico
Binary files differ
diff --git a/docs/run.sh b/docs/run.sh
new file mode 100755
index 0000000..bcbe32c
--- /dev/null
+++ b/docs/run.sh
@@ -0,0 +1,23 @@
+#!/bin/bash
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
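+# Drop the previous build output and caches so the site is rebuilt from scratch.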
+rm -rf _site
+rm -rf .jekyll-cache
+
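+# Serve the docs locally: -l enables live reload, -I enables incremental builds.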
+bundle exec jekyll s -lI --force_polling --trace
diff --git a/examples/README.md b/examples/README.md
index 720c664..2aef28e 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -7,7 +7,7 @@
 How to start examples in the developer's environment, please see [DEVNOTES.txt](DEVNOTES.txt).
 
 ## Running examples on JDK 9/10/11
-Ignite uses proprietary SDK APIs that are not available by default. See also [How to run Ignite on JDK 9,10 and 11](https://apacheignite.readme.io/docs/getting-started#section-running-ignite-with-java-9-10-11)
+Ignite uses proprietary SDK APIs that are not available by default. See also [How to run Ignite on JDK 9,10 and 11](https://ignite.apache.org/docs/latest/setup#running-ignite-with-java-11-or-later)
 
 To set up local IDE to easier access to examples, it is possible to add following options as default for all applications
 
diff --git a/examples/config/servlet/README.txt b/examples/config/servlet/README.txt
index 20d4b90..f84b8fd 100644
--- a/examples/config/servlet/README.txt
+++ b/examples/config/servlet/README.txt
@@ -3,6 +3,3 @@
 
 This folder contains web.xml file that demonstrates how to configure any servlet container
 to start a Apache Ignite node inside a Web application.
-
-For more information on available configuration properties, etc. refer to our documentation:
-http://apacheignite.readme.io/docs/web-session-clustering
diff --git a/examples/redis/redis-example.php b/examples/redis/redis-example.php
index 8664f06..911bcf1 100644
--- a/examples/redis/redis-example.php
+++ b/examples/redis/redis-example.php
@@ -21,8 +21,6 @@
 /**
  * To execute this script, run an Ignite instance with 'redis-ignite-internal-cache-0' cache specified and configured.
  * You will also need to have Predis extension installed. See https://github.com/nrk/predis for Predis details.
- *
- * See https://apacheignite.readme.io/docs/redis for more details on Redis integration.
  */
 
 // Load the library.
diff --git a/examples/redis/redis-example.py b/examples/redis/redis-example.py
index ac1c905..0a2eac5 100644
--- a/examples/redis/redis-example.py
+++ b/examples/redis/redis-example.py
@@ -18,8 +18,6 @@
 To execute this script, run an Ignite instance with 'redis-ignite-internal-cache-0' cache specified and configured.
 You will also need to have 'redis-py' installed.
 See https://github.com/andymccurdy/redis-py for the details on redis-py.
-
-See https://apacheignite.readme.io/docs/redis for more details on Redis integration.
 '''
 
 r = redis.StrictRedis(host='localhost', port=11211, db=0)
diff --git a/modules/platforms/cpp/core/namespaces.dox b/modules/platforms/cpp/core/namespaces.dox
index eccfc82..4b31c17 100644
--- a/modules/platforms/cpp/core/namespaces.dox
+++ b/modules/platforms/cpp/core/namespaces.dox
@@ -18,9 +18,7 @@
 /**
  * \mainpage Apache Ignite C++
  *
- * Apache Ignite In-Memory Database and Caching Platformis a high-performance, integrated and distributed in-memory platform for
- * computing and transacting on large-scale data sets in real-time, orders of magnitude faster than possible with
- * traditional disk-based or flash-based technologies.
+ * C++ API reference for Apache Ignite.
  */
 
  /**
diff --git a/parent/pom.xml b/parent/pom.xml
index 08e773f..d3663bb 100644
--- a/parent/pom.xml
+++ b/parent/pom.xml
@@ -792,7 +792,7 @@
                         <artifactId>apache-rat-plugin</artifactId>
                         <version>0.12</version>
                         <configuration>
-                            <addDefaultLicenseMatchers>false</addDefaultLicenseMatchers>
+                            <addDefaultLicenseMatchers>true</addDefaultLicenseMatchers>
                             <licenses>
                                 <license implementation="org.apache.rat.analysis.license.FullTextMatchingLicense">
                                     <licenseFamilyCategory>IAL20</licenseFamilyCategory>
@@ -975,6 +975,14 @@
                                         <exclude>modules/platforms/python/requirements/**/*.txt</exclude><!--plain text can not be commented-->
                                         <!--Packaging -->
                                         <exclude>packaging/**</exclude>
+                                        <!-- Ignite Documentation -->
+                                        <exclude>docs/_site/**</exclude>
+                                        <exclude>docs/assets/images/**</exclude>
+                                        <exclude>docs/Gemfile.lock</exclude>
+                                        <exclude>docs/.jekyll-cache/**</exclude>
+                                        <exclude>docs/_docs/images/**</exclude>
+                                        <exclude>docs/Gemfile</exclude>
+                                        <exclude>docs/assets/js/anchor.min.js</exclude><!-- Distributed under the MIT license. The original license header is badly formatted. -->
                                     </excludes>
                                 </configuration>
                             </execution>