IGNITE-13884 Merged docs into 2.9.1 from 2.9 branch with updates (#8598)

* IGNITE-7595: new Ignite docs (returning the original changes after fixing licensing issues)

(cherry picked from commit 073488ac97517bbaad9f6b94b781fc404646f191)

* IGNITE-13574: add license headers for some imported files of the Ignite docs (#8361)

* Added a proper license header to some files used by the docs.

* Enabled the defaultLicenseMatcher for the license checker.

(cherry picked from commit d928fb8576b22dffbfce90a5541e67dc6cbfe410)

* ignite docs: updated a couple of contribution instructions

(cherry picked from commit 9e8da702068b1232789f8f9f93680f2c6d69ed16)

* IGNITE-13527: replace some references to the readme.io docs with the references to the new pages. The job will be finished as part of IGNITE-13586

(cherry picked from commit 7399ae64972cc097c48769cb5e2d9622ce7f7234)

* ignite docs: fixed broken links to the SQLLine page

(cherry picked from commit faf4f467e964d478b3d99b94d43d32430a7e88f0)

* IGNITE-13615 Update .NET thin client feature set documentation

* IGNITE-13652 Wrong GitHub link for Apache Ignite With Spring Data/Example (#8420)

* ignite docs: updated the TcpDiscovery.soLinger documentation

* IGNITE-13663 : Document the effect of several node addresses on failure detection, v2. (#8424)

* ignite docs: set the latest spring-data artifact id after receiving user feedback

* IGNITE-12951 Update documents for migrated extensions - Fixes #8488.

Signed-off-by: samaitra <saikat.maitra@gmail.com>
(cherry picked from commit 15a5da500c08948ee081533af97a9f1c2c8330f8)

* ignite docs: fixing a broken documentation link

* ignite docs: updated the index page with quick links to the APIs and examples

* ignite docs: fixed broken links and updated the C++ API header

* ignite docs: fixed case of GitHub

* IGNITE-13876 Updated documentation for 2.9.1 release (#8592)

(cherry picked from commit e74cf6ba8711338ed48dd01d1efe12505977f63f)

Co-authored-by: Denis Magda <dmagda@gridgain.com>
Co-authored-by: Pavel Tupitsyn <ptupitsyn@apache.org>
Co-authored-by: Denis Garus <garus.d.g@gmail.com>
Co-authored-by: Vladsz83 <vladsz83@gmail.com>
Co-authored-by: samaitra <saikat.maitra@gmail.com>
Co-authored-by: Nikita Safonov <73828260+nikita-tech-writer@users.noreply.github.com>
Co-authored-by: ymolochkov <ynmolochkov@sberbank.ru>
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index a22b7c641..5347636 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -36,7 +36,7 @@
 
 ## Contributing Documentation
 Documentation can be contributed to
- - End-User documentation https://apacheignite.readme.io/ . Use Suggest Edits. See also [How To Document](https://cwiki.apache.org/confluence/display/IGNITE/How+to+Document).
+ - End-User documentation https://ignite.apache.org/docs/latest/ . Use Suggest Edits. See also [How To Document](https://cwiki.apache.org/confluence/display/IGNITE/How+to+Document).
  - Developer documentation, design documents, IEPs [Apache Wiki](https://cwiki.apache.org/confluence/display/IGNITE). Ask at [Dev List](https://lists.apache.org/list.html?dev@ignite.apache.org) to be added as editor.
  - Markdown files, visible at GitHub, e.g. README.md; drawings explaining Apache Ignite & product internals.
  - Javadocs for packages (package-info.java), classes, methods, etc.
diff --git a/README.txt b/README.txt
index 4d02f4c..c7a2cdf 100644
--- a/README.txt
+++ b/README.txt
@@ -18,13 +18,7 @@
 
 For information on how to get started with Apache Ignite please visit:
 
-    http://apacheignite.readme.io/docs/getting-started
-
-
-You can find Apache Ignite documentation here:
-
-    http://apacheignite.readme.io/docs
-
+    https://ignite.apache.org/docs/latest/
 
 Crypto Notice
 =============
@@ -49,12 +43,12 @@
 The following provides more details on the included cryptographic software:
 
 * JDK SSL/TLS libraries used to enable secured connectivity between cluster
-nodes (https://apacheignite.readme.io/docs/ssltls).
+nodes (https://ignite.apache.org/docs/latest/security/ssl-tls).
 Oracle/OpenJDK (https://www.oracle.com/technetwork/java/javase/downloads/index.html)
 
 * JDK Java Cryptography Extensions built-in encryption from the Java libraries is used
 for Transparent Data Encryption of data on disk
-(https://apacheignite.readme.io/docs/transparent-data-encryption)
+(https://ignite.apache.org/docs/latest/security/tde)
 and for AWS S3 Client Side Encryption.
 (https://java.sun.com/javase/technologies/security/)
 
@@ -74,4 +68,4 @@
 * Apache Ignite.NET uses .NET Framework crypto APIs from standard class library
 for all security and cryptographic related code.
  .NET Classic, Windows-only (https://dotnet.microsoft.com/download)
- .NET Core  (https://dotnetfoundation.org/projects)
\ No newline at end of file
+ .NET Core  (https://dotnetfoundation.org/projects)
diff --git a/config/visor-cmd/node_startup_by_ssh.sample.ini b/config/visor-cmd/node_startup_by_ssh.sample.ini
index f1d8e01..649e0c7 100644
--- a/config/visor-cmd/node_startup_by_ssh.sample.ini
+++ b/config/visor-cmd/node_startup_by_ssh.sample.ini
@@ -15,7 +15,7 @@
 
 # ==================================================================
 # This is a sample file for Visor CMD to use with "start" command.
-# More info: https://apacheignite-tools.readme.io/docs/start-command
+# More info: https://ignite.apache.org/docs/latest/tools/visor-cmd
 # ==================================================================
 
 # Section with settings for host1:
diff --git a/docs/.gitignore b/docs/.gitignore
new file mode 100644
index 0000000..a01b89a
--- /dev/null
+++ b/docs/.gitignore
@@ -0,0 +1,5 @@
+.jekyll-cache/
+_site/
+Gemfile.lock
+.jekyll-metadata
+
diff --git a/docs/Gemfile b/docs/Gemfile
new file mode 100644
index 0000000..f471d02
--- /dev/null
+++ b/docs/Gemfile
@@ -0,0 +1,14 @@
+source "https://rubygems.org"
+
+# git_source(:github) {|repo_name| "https://github.com/#{repo_name}" }
+
+gem 'asciidoctor'
+gem 'jekyll', group: :jekyll_plugins
+gem 'wdm', '~> 0.1.1' if Gem.win_platform?
+group :jekyll_plugins do
+  gem 'jekyll-asciidoc'
+end
+#gem 'pygments.rb', '~> 1.2.1'
+gem 'thread_safe', '~> 0.3.6'
+gem 'slim', '~> 4.0.1'
+gem 'tilt', '~> 2.0.9'
diff --git a/docs/README.adoc b/docs/README.adoc
new file mode 100644
index 0000000..856b993
--- /dev/null
+++ b/docs/README.adoc
@@ -0,0 +1,212 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Apache Ignite Documentation
+:toc:
+:toc-title:
+
+== Overview
+The Apache Ignite documentation is maintained in the repository with the code base, in the "/docs" subdirectory. The directory contains the source files, HTML templates and css styles.
+
+
+The Apache Ignite documentation is written in link:https://asciidoctor.org/docs/what-is-asciidoc/[asciidoc].
+The Asciidoc files are compiled into HTML pages and published to https://ignite.apache.org/docs.
+
+
+.Content of the “docs” directory
+[cols="1,4",opts="stretch"]
+|===
+| pass:[_]docs  | The directory with .adoc files and code-snippets.
+| pass:[_]config.yml | Jekyll configuration file.
+|===
+
+
+== Building the Docs Locally
+
+To build the docs locally, you can install `jekyll` and other dependencies on your machine, or you can use the Jekyll Docker image.
+
+=== Install Jekyll and Asciidoctor
+
+. Install Jekyll by following these instructions: https://jekyllrb.com/docs/installation/[window=_blank]
+. In the “/docs” directory, run the following command:
++
+[source, shell]
+----
+$ bundle
+----
++
+This should install all dependencies, including `asciidoctor`.
+. Start jekyll:
++
+[source, shell]
+----
+$ bundle exec jekyll s
+----
+The command compiles the Asciidoc files into HTML pages and starts a local webserver.
+
+Open `http://localhost:4000/docs[window=_blank]` in your browser.
+
+=== Run with Docker
+
+The following command starts jekyll in a container and downloads all dependencies. Run the command in the “/docs” directory.
+
+[source, shell]
+----
+$ docker run -v "$PWD:/srv/jekyll" -p 4000:4000 jekyll/jekyll:latest jekyll s
+----
+
+Open `http://localhost:4000/docs[window=_blank]` in your browser.
+
+== How to Contribute
+
+If you want to contribute to the documentation, add or modify the relevant page in the `docs/_docs` directory.
+This directory contains all the .adoc files (which are then rendered into HTML pages and published on the website).
+
+Because we use asciidoc for documentation, consider the following points:
+
+* Get familiar with the asciidoc format: https://asciidoctor.org/docs/user-manual/. You don’t have to read the entire manual. Search through it when you want to learn how to create a numbered list, or insert an image, or use italics.
+* Please read the link:https://asciidoctor.org/docs/asciidoc-recommended-practices[AsciiDoc Recommended Practices] and try to adhere to those practices when editing the .adoc source files.
+
+
+The following sections explain specific asciidoc syntax that we use.
+
+=== Table of Contents
+
+The table of contents is defined in the `_data/toc.yaml` file.
+If you want to add a new page, make sure to update the TOC as well (a sample entry is shown below).
+
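+For reference, each TOC entry has a title and a URL (the page path without the `.adoc` extension). The titles and path below are placeholders, not real pages:
+
+[source, yaml]
+----
+- title: My New Section
+  items:
+    - title: My New Page
+      url: my-new-section/my-new-page
+----
+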
+=== Changing the URL of an existing page
+
+If you rename an already published page or change the page's path in the `/_data/toc.yaml` file,
+you must configure a redirect from the old URL to the new one in the following file of the Ignite website:
+https://github.com/apache/ignite-website/blob/master/.htaccess
+
+Reach out to documentation maintainers if you need any help with this.
+
+=== Links to other sections in the docs
+All .adoc files are located in the "docs/_docs" directory.
+Any link to a file within that directory must be relative to the directory, and the file extension (.adoc) must be omitted.
+
+For example:
+[source, adoc]
+----
+link:persistence/native-persistence[Native Persistence]
+----
+
+This is a link to the Native Persistence page.
+
+=== Links to external resources
+
+When referencing an external resource, make the link to open in a new window by adding the `window=_blank` attribute:
+
+[source, adoc]
+----
+link:https://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html#SunJSSE_Protocols[Supported protocols,window=_blank]
+----
+
+
+=== Tabs
+
+We use custom syntax to insert tabs. Tabs are used to provide code samples for different programming languages.
+
+Tabs are defined by the `tabs` block:
+```
+[tabs]
+--
+individual tabs are defined here
+--
+```
+
+Each tab is defined by the 'tab' directive:
+
+```
+tab:tab_name[]
+```
+
+where `tab_name` is the title of the tab.
+
+The content of a tab is everything between the tab title and the next tab (or the end of the block).
+
+```asciidoc
+[tabs]
+--
+tab:XML[]
+
+The content of the XML tab goes here
+
+tab:Java[]
+
+The content of the Java tab is here
+
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+
+--
+```
+
+=== Callouts
+
+Use the syntax below if you need to bring the reader's attention to some details:
+
+[NOTE]
+====
+[discrete]
+=== Callout Title
+Callout Text
+====
+
+Change the callout type to `CAUTION` if you want to put out a warning:
+
+[CAUTION]
+====
+[discrete]
+=== Callout Title
+Callout Text
+====
+
+=== Code Snippets
+
+Code snippets must be taken from a compilable source code file (e.g. `.java`, `.cs`, `.js`).
+We use the `include` feature of asciidoc.
+Source code files are located in the `docs/_docs/code-snippets/{language}` folders.
+
+
+To add a code snippet to a page, follow these steps:
+
+* Create a file in the code snippets directory, e.g. _docs/code-snippets/java/org/apache/ignite/snippets/JavaThinClient.java
+
+* Enclose the piece of code you want to include within named tags (see https://asciidoctor.org/docs/user-manual/#by-tagged-regions). Give the tag a self-evident name.
+For example:
++
+```
+[source, java]
+----
+// tag::clientConnection[]
+ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
+try (IgniteClient client = Ignition.startClient(cfg)) {
+    ClientCache<Integer, String> cache = client.cache("myCache");
+    // get data from the cache
+}
+// end::clientConnection[]
+----
+```
+
+* Include the tag in the adoc file:
++
+[source, adoc,subs="macros"]
+----
+\include::{javaCodeDir}/JavaThinClient.java[tag=clientConnection,indent=0]
+----
diff --git a/docs/_config.yml b/docs/_config.yml
new file mode 100644
index 0000000..0562d1a
--- /dev/null
+++ b/docs/_config.yml
@@ -0,0 +1,46 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+exclude: [guidelines.md,  "Gemfile", "Gemfile.lock", README.adoc, "_docs/code-snippets", "_docs/includes", '*.sh']
+attrs: &asciidoc_attributes
+  version: 2.9.1
+  base_url: /docs
+  stylesdir: /docs/assets/css
+  imagesdir: /docs
+  source-highlighter: rouge
+  table-stripes: even
+  javadoc_base_url: https://ignite.apache.org/releases/{version}/javadoc
+  javaCodeDir: code-snippets/java/src/main/java/org/apache/ignite/snippets
+  csharpCodeDir: code-snippets/dotnet
+  githubUrl: https://github.com/apache/ignite/tree/master
+  docSourceUrl: https://github.com/apache/ignite/tree/IGNITE-7595/docs
+collections:
+  docs:
+    permalink: /docs/:path:output_ext
+    output: true
+defaults:
+  -
+    scope:
+      path: ''
+    values:
+      layout: 'doc'
+  -
+    scope:
+      path: '_docs'
+    values:
+      toc: ignite 
+asciidoctor:
+  base_dir: _docs/ 
+  attributes: *asciidoc_attributes
+   
diff --git a/docs/_data/toc.yaml b/docs/_data/toc.yaml
new file mode 100644
index 0000000..750c1d5
--- /dev/null
+++ b/docs/_data/toc.yaml
@@ -0,0 +1,559 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+- title: Documentation Overview
+  url: index
+- title: Quick Start Guides
+  items: 
+    - title: Java
+      url: quick-start/java
+    - title: .NET/C#
+      url: quick-start/dotnet
+    - title: C++
+      url: quick-start/cpp
+    - title: Python
+      url: quick-start/python
+    - title: Node.JS
+      url: quick-start/nodejs
+    - title: SQL
+      url: quick-start/sql
+    - title: PHP
+      url: quick-start/php
+    - title: REST API
+      url: quick-start/restapi
+- title: Installation
+  url: installation
+  items:
+  - title: Installing Using ZIP Archive
+    url: installation/installing-using-zip
+  - title: Installing Using Docker
+    url: installation/installing-using-docker
+  - title: Installing DEB or RPM package
+    url: installation/deb-rpm
+  - title: Kubernetes
+    items: 
+      - title: Amazon EKS 
+        url: installation/kubernetes/amazon-eks-deployment
+      - title: Azure Kubernetes Service 
+        url: installation/kubernetes/azure-deployment
+      - title: Google Kubernetes Engine
+        url: installation/kubernetes/gke-deployment
+  - title: VMWare
+    url: installation/vmware-installation
+- title: Setting Up
+  items:
+    - title: Understanding Configuration
+      url: understanding-configuration
+    - title: Setting Up
+      url: setup
+    - title: Configuring Logging
+      url: logging
+    - title: Resources Injection
+      url: resources-injection
+- title: Starting and Stopping Nodes
+  url: starting-nodes
+- title: Clustering
+  items:
+    - title: Overview
+      url: clustering/clustering
+    - title: TCP/IP Discovery
+      url: clustering/tcp-ip-discovery
+    - title: ZooKeeper Discovery
+      url: clustering/zookeeper-discovery
+    - title: Discovery in the Cloud
+      url: clustering/discovery-in-the-cloud
+    - title: Network Configuration
+      url: clustering/network-configuration
+    - title: Connecting Client Nodes 
+      url: clustering/connect-client-nodes
+    - title: Baseline Topology
+      url: clustering/baseline-topology
+    - title: Running Client Nodes Behind NAT
+      url: clustering/running-client-nodes-behind-nat
+- title: Thin Clients
+  items:
+    - title: Thin Clients Overview
+      url: thin-clients/getting-started-with-thin-clients
+    - title: Java Thin Client
+      url: thin-clients/java-thin-client
+    - title: .NET Thin Client
+      url: thin-clients/dotnet-thin-client
+    - title: C++ Thin Client
+      url: thin-clients/cpp-thin-client
+    - title: Python Thin Client
+      url: thin-clients/python-thin-client
+    - title: PHP Thin Client
+      url: thin-clients/php-thin-client
+    - title: Node.js Thin Client
+      url: thin-clients/nodejs-thin-client
+    - title: Binary Client Protocol
+      items:
+        - title: Binary Client Protocol
+          url: binary-client-protocol/binary-client-protocol
+        - title: Data Format
+          url: binary-client-protocol/data-format
+        - title: Key-Value Queries
+          url: binary-client-protocol/key-value-queries
+        - title: SQL and Scan Queries
+          url: binary-client-protocol/sql-and-scan-queries
+        - title: Binary Types Metadata
+          url: binary-client-protocol/binary-type-metadata
+        - title: Cache Configuration
+          url: binary-client-protocol/cache-configuration
+- title: Data Modeling
+  items: 
+    - title: Introduction
+      url: data-modeling/data-modeling
+    - title: Data Partitioning
+      url: data-modeling/data-partitioning
+    - title: Affinity Colocation 
+      url: data-modeling/affinity-collocation
+    - title: Binary Marshaller
+      url: data-modeling/binary-marshaller
+- title: Configuring Memory 
+  items:
+    - title: Memory Architecture
+      url: memory-architecture
+    - title: Configuring Data Regions
+      url: memory-configuration/data-regions
+    - title: Eviction Policies
+      url: memory-configuration/eviction-policies        
+- title: Configuring Persistence
+  items:
+    - title: Ignite Persistence
+      url: persistence/native-persistence
+    - title: External Storage
+      url: persistence/external-storage
+    - title: Swapping
+      url: persistence/swap
+    - title: Implementing Custom Cache Store
+      url: persistence/custom-cache-store
+    - title: Cluster Snapshots
+      url: persistence/snapshots
+    - title: Disk Compression
+      url: persistence/disk-compression
+    - title: Tuning Persistence
+      url: persistence/persistence-tuning
+- title: Configuring Caches
+  items:
+    - title: Cache Configuration 
+      url: configuring-caches/configuration-overview 
+    - title: Configuring Partition Backups
+      url: configuring-caches/configuring-backups
+    - title: Partition Loss Policy
+      url: configuring-caches/partition-loss-policy
+    - title: Atomicity Modes
+      url: configuring-caches/atomicity-modes
+    - title: Expiry Policy
+      url: configuring-caches/expiry-policies
+    - title: On-Heap Caching
+      url: configuring-caches/on-heap-caching
+    - title: Cache Groups 
+      url: configuring-caches/cache-groups
+    - title: Near Caches
+      url: configuring-caches/near-cache
+- title: Data Rebalancing
+  url: data-rebalancing 
+- title: Data Streaming
+  url: data-streaming
+- title: Using Key-Value API
+  items:
+    - title: Basic Cache Operations 
+      url: key-value-api/basic-cache-operations
+    - title: Working with Binary Objects
+      url: key-value-api/binary-objects
+    - title: Using Scan Queries
+      url: key-value-api/using-scan-queries
+    - title: Read Repair
+      url: read-repair
+- title: Performing Transactions
+  url: key-value-api/transactions
+- title: Working with SQL
+  items:
+    - title: Introduction
+      url: SQL/sql-introduction
+    - title: Understanding Schemas
+      url: SQL/schemas
+    - title: Defining Indexes
+      url: SQL/indexes
+    - title: Using SQL API
+      url: SQL/sql-api
+    - title: Distributed Joins
+      url: SQL/distributed-joins
+    - title: SQL Transactions
+      url: SQL/sql-transactions
+    - title: Custom SQL Functions
+      url: SQL/custom-sql-func
+    - title: JDBC Driver
+      url: SQL/JDBC/jdbc-driver
+    - title: JDBC Client Driver
+      url: SQL/JDBC/jdbc-client-driver
+    - title: ODBC Driver
+      items:
+        - title: ODBC Driver
+          url: SQL/ODBC/odbc-driver
+        - title: Connection String and DSN
+          url:  /SQL/ODBC/connection-string-dsn
+        - title: Querying and Modifying Data
+          url: SQL/ODBC/querying-modifying-data
+        - title: Specification
+          url: SQL/ODBC/specification
+        - title: Data Types
+          url: SQL/ODBC/data-types
+        - title: Error Codes
+          url: SQL/ODBC/error-codes
+    - title: Multiversion Concurrency Control
+      url: transactions/mvcc
+- title: SQL Reference
+  url: sql-reference/sql-reference-overview
+  items:
+    - title: SQL Conformance
+      url: sql-reference/sql-conformance
+    - title: Data Definition Language (DDL)
+      url: sql-reference/ddl
+    - title: Data Manipulation Language (DML)
+      url: sql-reference/dml
+    - title: Transactions
+      url: sql-reference/transactions
+    - title: Operational Commands
+      url: sql-reference/operational-commands
+    - title: Aggregate functions
+      url: sql-reference/aggregate-functions
+    - title: Numeric Functions
+      url: sql-reference/numeric-functions
+    - title: String Functions
+      url: sql-reference/string-functions
+    - title: Date and Time Functions
+      url: sql-reference/date-time-functions
+    - title: System Functions
+      url: sql-reference/system-functions
+    - title: Data Types
+      url: sql-reference/data-types
+- title: Distributed Computing
+  items:
+    - title: Distributed Computing API
+      url: distributed-computing/distributed-computing
+    - title: Cluster Groups
+      url: distributed-computing/cluster-groups
+    - title: Executor Service
+      url: distributed-computing/executor-service
+    - title: MapReduce API
+      url: distributed-computing/map-reduce
+    - title: Load Balancing
+      url: distributed-computing/load-balancing
+    - title: Fault Tolerance
+      url: distributed-computing/fault-tolerance
+    - title: Job Scheduling
+      url: distributed-computing/job-scheduling
+    - title: Colocating Computations with Data
+      url: distributed-computing/collocated-computations
+- title: Code Deployment
+  items:
+    - title: Deploying User Code
+      url: code-deployment/deploying-user-code
+    - title: Peer Class Loading
+      url: code-deployment/peer-class-loading
+- title: Machine Learning
+  items:
+    - title: Machine Learning
+      url: machine-learning/machine-learning
+    - title: Partition Based Dataset
+      url: machine-learning/partition-based-dataset
+    - title: Updating Trained Models
+      url: machine-learning/updating-trained-models
+    - title: Binary Classification
+      items:
+        - title: Introduction
+          url: machine-learning/binary-classification/introduction
+        - title: Linear SVM (Support Vector Machine)
+          url: machine-learning/binary-classification/linear-svm
+        - title: Decision Trees
+          url: machine-learning/binary-classification/decision-trees
+        - title: Multilayer Perceptron
+          url: machine-learning/binary-classification/multilayer-perceptron
+        - title: Logistic Regression
+          url: machine-learning/binary-classification/logistic-regression
+        - title: k-NN Classification
+          url: machine-learning/binary-classification/knn-classification
+        - title: ANN (Approximate Nearest Neighbor)
+          url: machine-learning/binary-classification/ann
+        - title: Naive Bayes
+          url: machine-learning/binary-classification/naive-bayes
+    - title: Regression
+      items:
+        - title: Introduction
+          url: machine-learning/regression/introduction
+        - title: Linear Regression
+          url: machine-learning/regression/linear-regression
+        - title: Decision Trees Regression
+          url: machine-learning/regression/decision-trees-regression
+        - title: k-NN Regression
+          url: machine-learning/regression/knn-regression
+    - title: Clustering
+      items:
+        - title: Introduction
+          url: machine-learning/clustering/introduction
+        - title: K-Means Clustering
+          url: machine-learning/clustering/k-means-clustering
+        - title: Gaussian mixture (GMM)
+          url: machine-learning/clustering/gaussian-mixture
+    - title: Preprocessing
+      url: machine-learning/preprocessing
+    - title: Model Selection
+      items:
+        - title: Introduction
+          url: machine-learning/model-selection/introduction
+        - title: Evaluator
+          url: machine-learning/model-selection/evaluator
+        - title: Split the dataset on test and train datasets
+          url: machine-learning/model-selection/split-the-dataset-on-test-and-train-datasets
+        - title: Hyper-parameter tuning
+          url: machine-learning/model-selection/hyper-parameter-tuning
+        - title: Pipeline API
+          url: machine-learning/model-selection/pipeline-api
+    - title: Multiclass Classification
+      url: machine-learning/multiclass-classification
+    - title: Ensemble Methods
+      items:
+        - title: Introduction
+          url: machine-learning/ensemble-methods/introduction
+        - title: Stacking
+          url: machine-learning/ensemble-methods/stacking
+        - title: Bagging
+          url: machine-learning/ensemble-methods/baggin
+        - title: Random Forest
+          url: machine-learning/ensemble-methods/random-forest
+        - title: Gradient Boosting
+          url: machine-learning/ensemble-methods/gradient-boosting
+    - title: Recommendation Systems
+      url: machine-learning/recommendation-systems
+    - title: Importing Model
+      items:
+        - title: Introduction
+          url: machine-learning/importing-model/introduction
+        - title: Import Model from XGBoost
+          url: machine-learning/importing-model/model-import-from-gxboost
+        - title: Import Model from Apache Spark
+          url: machine-learning/importing-model/model-import-from-apache-spark
+- title: Using Continuous Queries
+  url: key-value-api/continuous-queries
+- title: Using Ignite Services
+  url: services/services
+- title: Using Ignite Messaging
+  url: messaging
+- title: Distributed Data Structures
+  items:
+    - title: Queue and Set
+      url: data-structures/queue-and-set
+    - title: Atomic Types 
+      url: data-structures/atomic-types
+    - title: CountDownLatch 
+      url: data-structures/countdownlatch
+    - title: Atomic Sequence 
+      url: data-structures/atomic-sequence
+    - title:  Semaphore 
+      url: data-structures/semaphore
+    - title: ID Generator
+      url: data-structures/id-generator
+- title: Distributed Locks
+  url: distributed-locks
+- title: REST API
+  url: restapi
+- title: .NET Specific
+  items:
+    - title: Configuration Options
+      url: net-specific/net-configuration-options
+    - title: Deployment Options
+      url: net-specific/net-deployment-options
+    - title: Standalone Nodes
+      url: net-specific/net-standalone-nodes
+    - title: Logging
+      url: net-specific/net-logging
+    - title: LINQ
+      url: net-specific/net-linq
+    - title: Java Services Execution
+      url: net-specific/net-java-services-execution
+    - title: .NET Platform Cache
+      url: net-specific/net-platform-cache
+    - title: Plugins
+      url: net-specific/net-plugins
+    - title: Serialization
+      url: net-specific/net-serialization
+    - title: Cross-Platform Support
+      url: net-specific/net-cross-platform-support
+    - title: Platform Interoperability
+      url: net-specific/net-platform-interoperability
+    - title: Remote Assembly Loading
+      url: net-specific/net-remote-assembly-loading
+    - title: Troubleshooting
+      url: net-specific/net-troubleshooting
+    - title: Integrations
+      items:
+        - title: ASP.NET Output Caching
+          url: net-specific/asp-net-output-caching
+        - title: ASP.NET Session State Caching
+          url: net-specific/asp-net-session-state-caching
+        - title: Entity Framework 2nd Level Cache
+          url: net-specific/net-entity-framework-cache
+- title: C++ Specific
+  items:
+    - title: Serialization
+      url: cpp-specific/cpp-serialization
+    - title: Platform Interoperability
+      url: cpp-specific/cpp-platform-interoperability
+    - title: Objects Lifetime
+      url: cpp-specific/cpp-objects-lifetime
+- title: Monitoring
+  items:
+    - title: Introduction
+      url: monitoring-metrics/intro
+    - title: Cluster ID and Tag
+      url: monitoring-metrics/cluster-id
+    - title: Cluster States
+      url: monitoring-metrics/cluster-states
+    - title: Metrics
+      items: 
+        - title: Configuring Metrics
+          url: monitoring-metrics/configuring-metrics
+        - title: JMX Metrics
+          url: monitoring-metrics/metrics
+    - title: New Metrics System 
+      items:
+        - title: Introduction 
+          url: monitoring-metrics/new-metrics-system
+        - title: Metrics
+          url: monitoring-metrics/new-metrics
+    - title: System Views
+      url: monitoring-metrics/system-views
+    - title: Tracing
+      url: monitoring-metrics/tracing
+- title: Working with Events
+  items:
+    - title: Enabling and Listening to Events
+      url: events/listening-to-events
+    - title: Events
+      url: events/events
+- title: Tools
+  items:
+    - title: Control Script
+      url: tools/control-script
+    - title: Visor CMD
+      url: tools/visor-cmd
+    - title: GridGain Control Center
+      url: tools/gg-control-center
+    - title: SQLLine
+      url: tools/sqlline
+    - title: Tableau
+      url: tools/tableau
+    - title: Informatica
+      url: tools/informatica
+    - title: Pentaho
+      url: tools/pentaho
+- title: Security
+  url: security
+  items: 
+    - title: Authentication
+      url: security/authentication
+    - title: SSL/TLS 
+      url: security/ssl-tls
+    - title: Transparent Data Encryption
+      items:
+        - title: Introduction
+          url: security/tde
+        - title: Master key rotation
+          url: security/master-key-rotation
+    - title: Sandbox
+      url: security/sandbox
+- title: Extensions and Integrations
+  items:
+    - title: Spring
+      items:
+        - title: Spring Boot
+          url: extensions-and-integrations/spring/spring-boot
+        - title: Spring Data
+          url: extensions-and-integrations/spring/spring-data
+        - title: Spring Caching
+          url: extensions-and-integrations/spring/spring-caching
+    - title: Ignite for Spark
+      items:
+        - title: Overview
+          url: extensions-and-integrations/ignite-for-spark/overview
+        - title: IgniteContext and IgniteRDD
+          url:  extensions-and-integrations/ignite-for-spark/ignitecontext-and-rdd
+        - title: Ignite DataFrame
+          url: extensions-and-integrations/ignite-for-spark/ignite-dataframe
+        - title: Installation
+          url: extensions-and-integrations/ignite-for-spark/installation
+        - title: Test Ignite with Spark-shell
+          url: extensions-and-integrations/ignite-for-spark/spark-shell
+        - title: Troubleshooting
+          url: extensions-and-integrations/ignite-for-spark/troubleshooting
+    - title: Hibernate L2 Cache
+      url: extensions-and-integrations/hibernate-l2-cache
+    - title: MyBatis L2 Cache
+      url: extensions-and-integrations/mybatis-l2-cache
+    - title: Streaming
+      items:
+        - title: Kafka Streamer
+          url: extensions-and-integrations/streaming/kafka-streamer
+        - title: Camel Streamer
+          url: extensions-and-integrations/streaming/camel-streamer
+        - title: Flink Streamer
+          url: extensions-and-integrations/streaming/flink-streamer
+        - title: Flume Sink
+          url: extensions-and-integrations/streaming/flume-sink
+        - title: JMS Streamer
+          url: extensions-and-integrations/streaming/jms-streamer
+        - title: MQTT Streamer
+          url: extensions-and-integrations/streaming/mqtt-streamer
+        - title: RocketMQ Streamer
+          url: extensions-and-integrations/streaming/rocketmq-streamer
+        - title: Storm Streamer
+          url: extensions-and-integrations/streaming/storm-streamer
+        - title: ZeroMQ Streamer
+          url: extensions-and-integrations/streaming/zeromq-streamer
+        - title: Twitter Streamer
+          url: extensions-and-integrations/streaming/twitter-streamer
+    - title: Cassandra Integration
+      items:
+        - title: Overview
+          url: extensions-and-integrations/cassandra/overview
+        - title: Configuration
+          url: extensions-and-integrations/cassandra/configuration
+        - title: Usage Examples
+          url: extensions-and-integrations/cassandra/usage-examples
+        - title: DDL Generator
+          url: extensions-and-integrations/cassandra/ddl-generator
+    - title: PHP PDO
+      url: extensions-and-integrations/php-pdo
+- title: Plugins
+  url: plugins
+- title: Performance and Troubleshooting
+  items:
+    - title: General Performance Tips
+      url: /perf-and-troubleshooting/general-perf-tips
+    - title: Memory and JVM Tuning
+      url: /perf-and-troubleshooting/memory-tuning
+    - title: Persistence Tuning
+      url: /perf-and-troubleshooting/persistence-tuning
+    - title: SQL Tuning
+      url: /perf-and-troubleshooting/sql-tuning
+    - title: Thread Pools Tuning
+      url: /perf-and-troubleshooting/thread-pools-tuning
+    - title: Troubleshooting and Debugging
+      url: /perf-and-troubleshooting/troubleshooting
+    - title: Handling Exceptions
+      url: /perf-and-troubleshooting/handling-exceptions
+    - title: Benchmarking With Yardstick
+      url: /perf-and-troubleshooting/yardstick-benchmarking
diff --git a/docs/_docs/SQL/JDBC/error-codes.adoc b/docs/_docs/SQL/JDBC/error-codes.adoc
new file mode 100644
index 0000000..f2e1a33
--- /dev/null
+++ b/docs/_docs/SQL/JDBC/error-codes.adoc
@@ -0,0 +1,81 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Error Codes
+
+The Ignite JDBC drivers pass error codes via the `java.sql.SQLException` class to facilitate exception handling on the application side. To get an error code, use the `java.sql.SQLException.getSQLState()` method. It returns a string containing the ANSI SQLSTATE error code:
+
+[source,java]
+----
+include::{javaCodeDir}/JDBCThinDriver.java[tags=error-codes, indent=0]
+----
+
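+For illustration, a minimal, self-contained sketch of the same idea is shown below (the connection URL and the query are placeholders, and the driver is assumed to be on the classpath and registered):
+
+[source,java]
+----
+// Uses java.sql.Connection, DriverManager, Statement, and SQLException.
+try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
+     Statement stmt = conn.createStatement()) {
+    stmt.execute("SELECT * FROM nonexistent_table");
+}
+catch (SQLException e) {
+    // Prints the ANSI SQLSTATE error code from the table below.
+    System.out.println("SQLSTATE: " + e.getSQLState());
+}
+----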
+
+The table below lists all the link:https://en.wikipedia.org/wiki/SQLSTATE[ANSI SQLSTATE] error codes currently supported by Ignite. Note that the list may be extended in the future.
+
+[width="100%",cols="20%,80%"]
+|=======================================================================
+|Code |Description
+
+|0700B|Conversion failure (for example, a string expression cannot be parsed as a number or a date).
+
+|0700E|Invalid transaction isolation level.
+
+|08001|The driver failed to open a connection to the cluster.
+
+|08003|The connection is in the closed state. Happened unexpectedly.
+
+|08004|The connection was rejected by the cluster.
+
+|08006|I/O error during communication.
+
+|22004|Null value not allowed.
+
+|22023|Unsupported parameter type.
+
+|23000|Data integrity constraint violation.
+
+|24000|Invalid result set state.
+
+|0A000|Requested operation is not supported.
+
+|40001|Concurrent update conflict. See link:transactions/mvcc#concurrent-updates[Concurrent Updates].
+
+|42000|Query parsing exception.
+
+|50000|Ignite internal error.
+The code is not defined by ANSI and refers to an Ignite specific error. Refer to the `java.sql.SQLException` error message for more information.
+|=======================================================================
diff --git a/docs/_docs/SQL/JDBC/jdbc-client-driver.adoc b/docs/_docs/SQL/JDBC/jdbc-client-driver.adoc
new file mode 100644
index 0000000..ee2ffeb
--- /dev/null
+++ b/docs/_docs/SQL/JDBC/jdbc-client-driver.adoc
@@ -0,0 +1,297 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= JDBC Client Driver
+:javaFile: {javaCodeDir}/JDBCClientDriver.java
+
+JDBC Client Driver interacts with the cluster by means of a client node.
+
+== JDBC Client Driver
+
+The JDBC Client Driver connects to the cluster by using a client node connection. You must provide a complete Spring XML configuration as part of the JDBC connection string, and copy all the JAR files mentioned below to the classpath of your application or SQL tool:
+
+- All the JARs under `{IGNITE_HOME}\libs` directory.
+- All the JARs under `{IGNITE_HOME}\ignite-indexing` and `{IGNITE_HOME}\ignite-spring` directories.
+
+The driver itself is heavier than the thin driver and might not support the latest SQL features of Ignite. However, because it uses the client node connection underneath, it can execute and distribute queries, and aggregate their results, directly from the application side.
+
+The JDBC connection URL has the following pattern:
+
+[source,shell]
+----
+jdbc:ignite:cfg://[<params>@]<config_url>
+----
+
+Where:
+
+- `<config_url>` is required and must represent a valid URL that points to the configuration file for the client node. This node will be started within the Ignite JDBC Client Driver when it (the JDBC driver) tries to establish a connection with the cluster.
+- `<params>` is optional and has the following format:
+
+[source,text]
+----
+param1=value1:param2=value2:...:paramN=valueN
+----
+
+
+The name of the driver's class is `org.apache.ignite.IgniteJdbcDriver`. For example, here's how to open a JDBC connection to the Ignite cluster:
+
+[source,java]
+----
+include::{javaFile}[tags=register, indent=0]
+----
+
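+For illustration, here is a sketch of the same connection with the parameters written out explicitly; the cache name and the configuration file path are placeholders:
+
+[source,java]
+----
+// Register the JDBC Client Driver.
+Class.forName("org.apache.ignite.IgniteJdbcDriver");
+
+// Parameters (separated by ':') go before the '@' sign; the Spring XML config URL follows it.
+Connection conn = DriverManager.getConnection(
+    "jdbc:ignite:cfg://cache=myCache:distributedJoins=true@file:///etc/config/ignite-jdbc.xml");
+----
+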
+[NOTE]
+====
+[discrete]
+=== Securing Connection
+
+For information on how to secure the JDBC client driver connection, you can refer to the link:security/ssl-tls[Security documentation].
+====
+
+=== Supported Parameters
+
+[width="100%",cols="20%,60%,20%"]
+|=======================================================================
+|Parameter |Description |Default Value
+
+|`cache`
+
+|Cache name. If it is not defined, then the default cache will be used. Note that the cache name is case sensitive.
+| None.
+
+|`nodeId`
+
+|ID of node where query will be executed. Useful for querying through local caches.
+| None.
+
+|`local`
+
+|Query will be executed only on a local node. Use this parameter with the `nodeId` parameter in order to limit the data set to the specified node.
+
+|`false`
+
+|`collocated`
+
+|Flag that is used for optimization purposes. Whenever Ignite executes a distributed query, it sends sub-queries to individual cluster members. If you know in advance that the elements of your query selection are colocated together on the same node, Ignite can make significant performance and network optimizations.
+
+|`false`
+
+|`distributedJoins`
+
+|Allows use of distributed joins for non-colocated data.
+
+|`false`
+
+|`streaming`
+
+|Turns on bulk data load mode via INSERT statements for this connection. Refer to the <<Streaming Mode>> section for more details.
+
+|`false`
+
+|`streamingAllowOverwrite`
+
+|Tells Ignite to overwrite values for existing keys on duplication instead of skipping them. Refer to the <<Streaming Mode>> section for more details.
+
+|`false`
+
+|`streamingFlushFrequency`
+
+|Timeout, in milliseconds, that data streamer should use to flush data. By default, the data is flushed on connection close. Refer to the <<Streaming Mode>> section for more details.
+
+|`0`
+
+|`streamingPerNodeBufferSize`
+
+|Data streamer's per node buffer size. Refer to the <<Streaming Mode>> section for more details.
+
+|`1024`
+
+|`streamingPerNodeParallelOperations`
+
+|Data streamer's per node parallel operations number. Refer to the <<Streaming Mode>> section for more details.
+
+|`16`
+
+|`transactionsAllowed`
+
+|Presently ACID Transactions are supported, but only at the key-value API level. At the SQL level, Ignite supports atomic, but not transactional consistency.
+
+This means that the JDBC driver might throw a `Transactions are not supported` exception if you try to use this functionality.
+
+However, in cases when you need transactional syntax to work (even without transactional semantics), e.g. some BI tools might force the transactional behavior, set this parameter to `true` to prevent exceptions from being thrown.
+
+|`false`
+
+|`multipleStatementsAllowed`
+
+|JDBC driver will be able to process multiple SQL statements at a time, returning multiple `ResultSet` objects. If the parameter is disabled, the query with multiple statements fails.
+
+|`false`
+
+|`lazy`
+
+|Lazy query execution.
+
+By default, Ignite attempts to fetch the whole query result set to memory and send it to the client. For small and medium result sets, this provides optimal performance and minimizes the duration of internal database locks, thus increasing concurrency.
+
+However, if the result set is too big to fit in the available memory, it can lead to excessive GC pauses and even `OutOfMemoryError` errors. Use this flag to tell Ignite to fetch the result set lazily, thus minimizing memory consumption at the cost of a moderate performance hit.
+
+|`false`
+
+|`skipReducerOnUpdate`
+
+|Enables server side update feature.
+
+When Ignite executes a DML operation, it first fetches all the affected intermediate rows to the query initiator (also known as the reducer) for analysis, and then prepares batches of updated values to be sent to remote nodes.
+
+This approach might impact performance and saturate the network if a DML operation has to move many entries over it.
+
+Use this flag as a hint for Ignite to perform all intermediate rows analysis and updates "in-place" on the corresponding remote data nodes.
+
+Defaults to `false`, meaning that intermediate results will be fetched to the query initiator first.
+|`false`
+
+
+|=======================================================================
+
+[NOTE]
+====
+[discrete]
+=== Cross-Cache Queries
+
+The cache to which the driver is connected is treated as the default schema. To query across multiple caches, you can use cross-cache queries, as sketched after this note.
+====
+
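+For illustration, a query can reference another cache by using that cache's name as its schema. A minimal sketch, assuming a second cache named `personCache` that holds a `Person` table, is shown below:
+
+[source,java]
+----
+// The connection is bound to one cache (the default schema); other caches
+// are addressed by quoting their names as schemas. The cache name here is an assumption.
+ResultSet rs = conn.createStatement().executeQuery(
+    "SELECT p.name FROM \"personCache\".Person p WHERE p.age > 30");
+----
+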
+=== Streaming Mode
+
+You can add data to a cluster in streaming mode (bulk mode) using the JDBC driver. In this mode, the driver instantiates `IgniteDataStreamer` internally and feeds data to it. To activate this mode, add the `streaming` parameter set to `true` to the JDBC connection string:
+
+[source,java]
+----
+// Register JDBC driver.
+Class.forName("org.apache.ignite.IgniteJdbcDriver");
+
+// Opening connection in the streaming mode.
+Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://streaming=true@file:///etc/config/ignite-jdbc.xml");
+----
+
+Presently, streaming mode is supported only for INSERT operations. This is useful in cases when you want to achieve fast data preloading into a cache. The JDBC driver defines multiple connection parameters that affect the behavior of the streaming mode. These parameters are listed in the parameters table above.
+
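+For illustration, a streaming connection is typically fed with repeated INSERT statements. A minimal sketch, assuming a connection opened as above and a `Person` table with a `Long` key in the target cache, is shown below:
+
+[source,java]
+----
+// Assumes 'conn' was opened with streaming=true and cache=myCache in the connection string,
+// and that the target cache contains a Person table with a Long key.
+PreparedStatement stmt = conn.prepareStatement("INSERT INTO Person(_key, name) VALUES(?, ?)");
+
+for (long id = 1; id <= 100_000; id++) {
+    stmt.setLong(1, id);
+    stmt.setString(2, "Person " + id);
+    stmt.executeUpdate();
+}
+
+// The streamed data is flushed when the connection is closed
+// (or periodically, if streamingFlushFrequency is set).
+conn.close();
+----
+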
+[WARNING]
+====
+[discrete]
+=== Cache Name
+
+Make sure you specify a target cache for streaming as an argument to the `cache=` parameter in the JDBC connection string. If a cache is not specified or does not match the table used in streaming DML statements, updates will be ignored.
+====
+
+The parameters cover almost all of the settings of a general `IgniteDataStreamer` and allow you to tune the streamer according to your needs. Please refer to the link:data-streaming[Data Streaming] section for more information on how to configure the streamer.
+
+[NOTE]
+====
+[discrete]
+=== Time Based Flushing
+
+By default, the data is flushed when either a connection is closed or `streamingPerNodeBufferSize` is met. If you need to flush the data more frequently, adjust the `streamingFlushFrequency` parameter.
+====
+
+[source,java]
+----
+include::{javaFile}[tags=time-based-flushing, indent=0]
+----
+
+== Example
+
+To start processing the data located in the cluster, you need to create a JDBC `Connection` object using one of the methods below:
+
+[source,java]
+----
+// Register JDBC driver.
+Class.forName("org.apache.ignite.IgniteJdbcDriver");
+
+// Open JDBC connection (cache name is not specified, which means that we use default cache).
+Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://file:///etc/config/ignite-jdbc.xml");
+----
+
+Right after that you can execute your SQL `SELECT` queries:
+
+[source,java]
+----
+// Query names of all people.
+ResultSet rs = conn.createStatement().executeQuery("select name from Person");
+
+while (rs.next()) {
+    String name = rs.getString(1);
+}
+
+----
+
+[source,java]
+----
+// Query people with specific age using prepared statement.
+PreparedStatement stmt = conn.prepareStatement("select name, age from Person where age = ?");
+
+stmt.setInt(1, 30);
+
+ResultSet rs = stmt.executeQuery();
+
+while (rs.next()) {
+    String name = rs.getString("name");
+    int age = rs.getInt("age");
+}
+----
+
+You can use DML statements to modify the data.
+
+=== INSERT
+[source,java]
+----
+// Insert a Person with a Long key.
+PreparedStatement stmt = conn.prepareStatement("INSERT INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");
+
+stmt.setInt(1, 1);
+stmt.setString(2, "John Smith");
+stmt.setInt(3, 25);
+
+stmt.execute();
+----
+
+=== MERGE
+[source,java]
+----
+// Merge a Person with a Long key.
+PreparedStatement stmt = conn.prepareStatement("MERGE INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");
+
+stmt.setInt(1, 1);
+stmt.setString(2, "John Smith");
+stmt.setInt(3, 25);
+
+stmt.executeUpdate();
+----
+
+=== UPDATE
+
+[source,java]
+----
+// Update a Person.
+conn.createStatement().
+  executeUpdate("UPDATE Person SET age = age + 1 WHERE age = 25");
+----
+
+=== DELETE
+
+[source,java]
+----
+conn.createStatement().execute("DELETE FROM Person WHERE age = 25");
+----
diff --git a/docs/_docs/SQL/JDBC/jdbc-driver.adoc b/docs/_docs/SQL/JDBC/jdbc-driver.adoc
new file mode 100644
index 0000000..09438c1
--- /dev/null
+++ b/docs/_docs/SQL/JDBC/jdbc-driver.adoc
@@ -0,0 +1,649 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= JDBC Driver
+:javaFile: {javaCodeDir}/JDBCThinDriver.java
+
+Ignite is shipped with JDBC drivers that allow processing of distributed data using standard SQL statements like `SELECT`, `INSERT`, `UPDATE` or `DELETE` directly from the JDBC side.
+
+Presently, there are two drivers supported by Ignite: the lightweight and easy to use JDBC Thin Driver described in this document and link:SQL/JDBC/jdbc-client-driver[JDBC Client Driver] that interacts with the cluster by means of a client node.
+
+== JDBC Thin Driver
+
+The JDBC Thin driver is a default, lightweight driver provided by Ignite. To start using the driver, just add `ignite-core-{version}.jar` to your application's classpath.
+
+The driver connects to one of the cluster nodes and forwards all the queries to it for final execution. The node handles the query distribution and result aggregation, and then the result is sent back to the client application.
+
+The JDBC connection string may be formatted with one of two patterns: `URL query` or `semicolon`:
+
+
+
+.Connection String Syntax
+[source,text]
+----
+// URL query pattern
+jdbc:ignite:thin://<hostAndPortRange0>[,<hostAndPortRange1>]...[,<hostAndPortRangeN>][/schema][?<params>]
+
+hostAndPortRange := host[:port_from[..port_to]]
+
+params := param1=value1[&param2=value2]...[&paramN=valueN]
+
+// Semicolon pattern
+jdbc:ignite:thin://<hostAndPortRange0>[,<hostAndPortRange1>]...[,<hostAndPortRangeN>][;schema=<schema_name>][;param1=value1]...[;paramN=valueN]
+----
+
+
+- `host` is required and defines the host of the cluster node to connect to.
+- `port_from` is the beginning of the port range to use to open the connection. 10800 is used by default if this parameter is omitted.
+- `port_to` is optional. It is set to the `port_from` value by default if this parameter is omitted.
+- `schema` is the schema name to access. PUBLIC is used by default. This name should correspond to the SQL ANSI-99 standard. Non-quoted identifiers are not case sensitive. Quoted identifiers are case sensitive. When semicolon format is used, the schema may be defined as a parameter with name schema.
+- `<params>` are optional.
+
+The name of the driver's class is `org.apache.ignite.IgniteJdbcThinDriver`. For instance, this is how you can open a JDBC connection to the cluster node listening on IP address 192.168.0.50:
+
+[source,java]
+----
+include::{javaFile}[tags=get-connection, indent=0]
+----
+
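+For illustration, the equivalent code written out explicitly is roughly the following:
+
+[source,java]
+----
+// Register the JDBC thin driver.
+Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
+
+// Connect to the node at 192.168.0.50; the default port 10800 is used.
+Connection conn = DriverManager.getConnection("jdbc:ignite:thin://192.168.0.50");
+----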
+
+[NOTE]
+====
+[discrete]
+=== Put the JDBC URL in quotes when connecting from bash
+
+Make sure to put the connection URL in double quotes (" ") when connecting from a bash environment, for example: `"jdbc:ignite:thin://[address]:[port];user=[username];password=[password]"`
+====
+
+=== Parameters
+The following table lists all the parameters that are supported by the JDBC connection string:
+
+[width="100%",cols="30%,40%,30%"]
+|=======================================================================
+|Parameter |Description |Default Value
+
+|`user`
+|Username for the SQL Connection. This parameter is required if authentication is enabled on the server.
+See the link:security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE user] documentation for more details.
+|ignite
+
+|`password`
+|Password for SQL Connection. Required if authentication is enabled on the server.
+See the link:security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE user] documentation for more details.
+|`ignite`
+
+|`distributedJoins`
+|Whether to execute distributed joins in link:SQL/distributed-joins#non-colocated-joins[non-colocated mode].
+|false
+
+|`enforceJoinOrder`
+
+|Whether to enforce join order of tables in the query. If set to `true`, the query optimizer does not reorder tables in the join.
+
+|`false`
+
+|`collocated`
+
+| Set this parameter to `true` if your SQL statement includes a GROUP BY clause that groups the results by either primary
+  or affinity key. Whenever Ignite executes a distributed query, it sends sub-queries to individual cluster members. If
+  you know in advance that the elements of your query selection are colocated together on the same node and you group by
+  a primary or affinity key, then Ignite makes significant performance and network optimizations by grouping data locally
+   on each node participating in the query.
+|`false`
+
+|`replicatedOnly`
+
+|Whether the query contains only replicated tables. This is a hint for potentially more effective execution.
+
+|`false`
+
+|`autoCloseServerCursor`
+|Whether to close server-side cursors automatically when the last piece of a result set is retrieved. When this property is enabled, calling `ResultSet.close()` does not require a network call, which could improve performance. However, if the server-side cursor is already closed, you may get an exception when trying to call `ResultSet.getMetadata()`. This is why it defaults to `false`.
+|`false`
+
+| `partitionAwareness`
+| Enables xref:partition-awareness[] mode. In this mode, the driver tries to determine the nodes where the data that is being queried is located and send the query to these nodes.
+| `false`
+
+|`partitionAwarenessSQLCacheSize` [[partitionAwarenessSQLCacheSize]]
+| The number of distinct SQL queries that the driver keeps locally for optimization. When a query is executed for the first time, the driver receives the partition distribution for the table that is being queried and saves it for future use locally. When you query this table next time, the driver uses the partition distribution to determine where the data being queried is located in order to send the query to the right nodes. This local cache of SQL queries is invalidated when the cluster topology changes. The optimal value for this parameter should equal the number of distinct SQL queries you are going to perform.
+| 1000
+
+|`partitionAwarenessPartitionDistributionsCacheSize` [[partitionAwarenessPartitionDistributionsCacheSize]]
+| The number of distinct objects that represent partition distribution that the driver keeps locally for optimization. See the description of the previous parameter for details. This local cache of partition distribution objects is invalidated when the cluster topology changes. The optimal value for this parameter should equal the number of distinct tables (link:configuring-caches/cache-groups[cache groups]) you are going to use in your queries.
+| 1000
+
+|`socketSendBuffer`
+|Socket send buffer size. When set to 0, the OS default is used.
+|0
+
+|`socketReceiveBuffer`
+|Socket receive buffer size. When set to 0, the OS default is used.
+|0
+
+|`tcpNoDelay`
+| Whether to use `TCP_NODELAY` option.
+|`true`
+
+|`lazy`
+|Lazy query execution.
+By default, Ignite attempts to get and load the whole query result set into memory and then send it to the client. For small and medium result sets, this provides optimal performance and minimizes the duration of internal database locks, thus increasing concurrency.
+However, if the result set is too big to fit in the available memory, it can lead to excessive GC pauses and even `OutOfMemoryError` exceptions. Use this flag to tell Ignite to fetch the result set lazily, thus minimizing memory consumption at the cost of a moderate performance hit.
+|`false`
+
+|`skipReducerOnUpdate`
+|Enables server-side updates.
+When Ignite executes a DML operation, it fetches all the affected intermediate rows and sends them to the query initiator (also known as reducer) for analysis. Then it prepares batches of updated values to be sent to remote nodes.
+This approach might impact performance and it can saturate the network if a DML operation has to move many entries over it.
+Use this flag to tell Ignite to perform all intermediate row analysis and updates "in-place" on corresponding remote data nodes.
+Defaults to `false`, meaning that the intermediate results are fetched to the query initiator first.
+|`false`
+
+
+|=======================================================================
+
+For the list of security parameters, refer to the <<Using SSL>> section.
+
+=== Connection String Examples
+
+- `jdbc:ignite:thin://myHost` - connect to myHost on the default port 10800 with all defaults.
+- `jdbc:ignite:thin://myHost:11900` - connect to myHost on custom port 11900 with all defaults.
+- `jdbc:ignite:thin://myHost:11900;user=ignite;password=ignite` - connect to myHost on custom port 11900 with user credentials for authentication.
+- `jdbc:ignite:thin://myHost:11900;distributedJoins=true&autoCloseServerCursor=true` - connect to myHost on custom port 11900 with distributed joins and the autoCloseServerCursor optimization enabled.
+- `jdbc:ignite:thin://myHost:11900/myschema;` - connect to myHost on custom port 11900 and access the MYSCHEMA schema.
+- `jdbc:ignite:thin://myHost:11900/"MySchema";lazy=false` - connect to myHost on custom port 11900 with lazy query execution disabled and access the MySchema schema (the schema name is case sensitive).
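+
+For instance, a minimal Java sketch that opens a connection using the first string above (the host name is illustrative; explicit driver registration is usually optional with JDBC 4.0+):
+
+[source,java]
+----
+// Register the JDBC thin driver explicitly (usually optional on modern JVMs).
+Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
+
+// Open a connection on the default port 10800.
+Connection conn = DriverManager.getConnection("jdbc:ignite:thin://myHost");
+----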
+
+=== Multiple Endpoints
+
+You can enable automatic failover in case the current connection fails by setting multiple connection endpoints in the connection string.
+The JDBC Driver randomly picks an address from the list to connect to. If the connection fails, the driver selects another address from the list until the connection is restored.
+The driver stops reconnecting and throws an exception if all the endpoints are unreachable.
+
+The example below shows how to pass three addresses via the connection string:
+
+[source,java]
+----
+include::{javaFile}[tags=multiple-endpoints, indent=0]
+----
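+
+For reference, a minimal sketch of such a connection string (the node addresses are illustrative; assuming the standard `java.sql` imports):
+
+[source,java]
+----
+// Three endpoints are listed; the driver randomly picks one and
+// switches to another address if the current connection breaks.
+Connection conn = DriverManager.getConnection(
+    "jdbc:ignite:thin://192.168.0.50:10800,192.168.0.51:10800,192.168.0.60:10800");
+----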
+
+
+=== Partition Awareness [[partition-awareness]]
+
+[WARNING]
+====
+[discrete]
+Partition awareness is an experimental feature whose API or design architecture might be changed
+before a GA version is released.
+====
+
+Partition awareness is a feature that makes the JDBC driver "aware" of the partition distribution in the cluster.
+It allows the driver to pick the nodes that own the data that is being queried and send the query directly to those nodes
+(if the addresses of the nodes are provided in the driver's configuration). Partition awareness can increase average
+performance of queries that use the affinity key.
+
+Without partition awareness, the JDBC driver connects to a single node, and all queries are executed through that node.
+If the data is hosted on a different node, the query has to be rerouted within the cluster, which adds an additional network hop.
+Partition awareness eliminates that hop by sending the query to the right node.
+
+To make use of the partition awareness feature, provide the addresses of all the server nodes in the connection properties.
+The driver will route requests to the nodes that store the data requested by the query.
+
+[WARNING]
+====
+[discrete]
+Note that presently you need to provide the addresses of all server nodes in the connection properties because the driver does not load them automatically after a connection is opened.
+It also means that if a new server node joins the cluster, you are advised to reconnect the driver and add the node's address to the connection properties.
+Otherwise, the driver will not be able to send direct requests to this node.
+====
+
+To enable partition awareness, add the `partitionAwareness=true` parameter to the connection string and provide the
+endpoints of multiple server nodes:
+
+[source, java]
+----
+include::{javaFile}[tags=partition-awareness, indent=0]
+----
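+
+A minimal sketch of such a connection string (the addresses are illustrative; assuming the standard `java.sql` imports):
+
+[source,java]
+----
+// Partition awareness requires the addresses of the server nodes.
+String url = "jdbc:ignite:thin://192.168.0.50,192.168.0.51,192.168.0.60;partitionAwareness=true";
+
+Connection conn = DriverManager.getConnection(url);
+----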
+
+NOTE: Partition Awareness can be used only with the default affinity function.
+
+Also see the description of the two related parameters: xref:partitionAwarenessSQLCacheSize[partitionAwarenessSQLCacheSize] and xref:partitionAwarenessPartitionDistributionsCacheSize[partitionAwarenessPartitionDistributionsCacheSize].
+
+
+=== Cluster Configuration
+
+In order to accept and process requests from the JDBC Thin Driver, a cluster node binds to a local network interface on port 10800 and listens for incoming requests.
+
+Use an instance of `ClientConnectorConfiguration` to change the connection parameters:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+  <property name="clientConnectorConfiguration">
+    <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration" />
+  </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration()
+    .setClientConnectorConfiguration(new ClientConnectorConfiguration());
+----
+
+tab:C#/.NET[]
+tab:C++[]
+--
+
+The following parameters are supported:
+
+[width="100%",cols="30%,55%,15%"]
+|=======================================================================
+|Parameter |Description |Default Value
+
+|`host`
+
+|Host name or IP address to bind to. When set to `null`, binding is made to `localhost`.
+
+|`null`
+
+|`port`
+
+|TCP port to bind to. If the specified port is already in use, Ignite tries to find another available port using the `portRange` property.
+
+|`10800`
+
+|`portRange`
+
+| Defines the number of ports to try to bind to. E.g. if the port is set to `10800` and `portRange` is `100`, then the server tries to bind consecutively to any port in the `[10800, 10900]` range until it finds a free port.
+
+|`100`
+
+|`maxOpenCursorsPerConnection`
+
+|Maximum number of cursors that can be opened simultaneously for a single connection.
+
+|`128`
+
+|`threadPoolSize`
+
+|Number of request-handling threads in the thread pool.
+
+|`MAX(8, CPU cores)`
+
+|`socketSendBufferSize`
+
+|Size of the TCP socket send buffer. When set to 0, the system default value is used.
+
+|`0`
+
+|`socketReceiveBufferSize`
+
+|Size of the TCP socket receive buffer. When set to 0, the system default value is used.
+
+|`0`
+
+|`tcpNoDelay`
+
+|Whether to use `TCP_NODELAY` option.
+
+|`true`
+
+|`idleTimeout`
+
+|Idle timeout for client connections.
+Clients are disconnected automatically from the server after remaining idle for the configured timeout.
+When this parameter is set to zero or a negative value, the idle timeout is disabled.
+
+|`0`
+
+|`isJdbcEnabled`
+
+|Whether access through JDBC is enabled.
+
+|`true`
+
+|`isThinClientEnabled`
+
+|Whether access through thin client is enabled.
+
+|`true`
+
+
+|`sslEnabled`
+
+|If SSL is enabled, only SSL client connections are allowed. A node accepts only one mode of connection, `SSL` or `plain`, and cannot accept both types of client connections at the same time. However, this option can differ across nodes in the cluster.
+
+|`false`
+
+|`useIgniteSslContextFactory`
+
+|Whether to use SSL context factory from the node's configuration (see `IgniteConfiguration.sslContextFactory`).
+
+|`true`
+
+|`sslClientAuth`
+
+|Whether client authentication is required.
+
+|`false`
+
+|`sslContextFactory`
+
+|The class name that implements `Factory<SSLContext>` to provide node-side SSL. See link:security/ssl-tls[this] for more information.
+
+|`null`
+|=======================================================================
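+
+For example, a minimal sketch of changing some of these parameters programmatically (the values are illustrative):
+
+[source,java]
+----
+ClientConnectorConfiguration clientConnectorCfg = new ClientConnectorConfiguration();
+
+// Bind to a custom port and allow 10 fallback ports.
+clientConnectorCfg.setPort(11800);
+clientConnectorCfg.setPortRange(10);
+
+// Disconnect idle clients after one minute.
+clientConnectorCfg.setIdleTimeout(60_000);
+
+IgniteConfiguration cfg = new IgniteConfiguration();
+cfg.setClientConnectorConfiguration(clientConnectorCfg);
+----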
+
+[WARNING]
+====
+[discrete]
+=== JDBC Thin Driver is not thread safe
+
+The JDBC objects `Connection`, `Statement`, and `ResultSet` are not thread safe.
+Do not use statements and result sets obtained from a single JDBC connection in multiple threads.
+
+The JDBC Thin Driver guards against concurrent access. If concurrent access is detected, an exception
+(`SQLException`) is thrown with the following message:
+
+....
+"Concurrent access to JDBC connection is not allowed
+[ownThread=<guard_owner_thread_name>, curThread=<current_thread_name>]",
+SQLSTATE="08006"
+....
+====
+
+
+=== Using SSL
+
+You can configure the JDBC Thin Driver to use SSL to secure communication with the cluster.
+SSL must be configured both on the cluster side and in the JDBC Driver.
+Refer to the  link:security/ssl-tls#ssl-for-clients[SSL for Thin Clients and JDBC/ODBC] section for the information about cluster configuration.
+
+To enable SSL in the JDBC Driver, pass the `sslMode=require` parameter in the connection string and provide the key store and trust store parameters:
+
+[source, java]
+----
+include::{javaFile}[tags=ssl,indent=0]
+----
+
+The following table lists all parameters that affect SSL/TLS connection:
+
+[width="100%",cols="30%,40%,30%"]
+|====
+|Parameter |Description |Default Value
+|`sslMode`
+a|Enables SSL connection. Available modes:
+
+* `require`: SSL protocol is enabled on the client. Only SSL connection is available.
+* `disable`: SSL protocol is disabled on the client. Only plain connection is supported.
+
+|`disable`
+
+|`sslProtocol`
+|Protocol name for secure transport. Protocol implementations supplied by JSSE: `SSLv3 (SSL)`, `TLSv1 (TLS)`, `TLSv1.1`, `TLSv1.2`
+|`TLS`
+
+|`sslKeyAlgorithm`
+
+|The key manager algorithm to be used to create a key manager. Note that in most cases the default value is sufficient.
+Algorithm implementations supplied by JSSE: `PKIX (X509 or SunPKIX)`, `SunX509`.
+
+| `None`
+
+|`sslClientCertificateKeyStoreUrl`
+
+|URL of the client key store file.
+This is a mandatory parameter since SSL context cannot be initialized without a key manager.
+If `sslMode` is `require` and the key store URL isn't specified in the Ignite properties, the value of the JSSE property `javax.net.ssl.keyStore` is used.
+
+|The value of the
+`javax.net.ssl.keyStore`
+system property.
+
+|`sslClientCertificateKeyStorePassword`
+
+|Client key store password.
+
+If `sslMode` is `require` and the key store password isn't specified in the Ignite properties, the JSSE property `javax.net.ssl.keyStorePassword` is used.
+
+|The value of the `javax.net.ssl.
+keyStorePassword` system property.
+
+|`sslClientCertificateKeyStoreType`
+
+|Client key store type used in context initialization.
+
+If `sslMode` is `require` and the key store type isn't specified in the Ignite properties, the JSSE property `javax.net.ssl.keyStoreType` is used.
+
+|The value of the
+`javax.net.ssl.keyStoreType`
+system property.
+If the system property is not defined, the default value is `JKS`.
+
+|`sslTrustCertificateKeyStoreUrl`
+
+|URL of the trust store file. This is an optional parameter; however, one of these properties must be set: `sslTrustCertificateKeyStoreUrl` or `sslTrustAll`
+
+If `sslMode` is `require` and the trust store URL isn't specified in the Ignite properties, the JSSE property `javax.net.ssl.trustStore` is used.
+
+|The value of the
+`javax.net.ssl.trustStore` system property.
+
+|`sslTrustCertificateKeyStorePassword`
+
+|Trust store password.
+
+If `sslMode` is `require` and the trust store password isn't specified in the Ignite properties, the JSSE property `javax.net.ssl.trustStorePassword` is used.
+
+|The value of the
+`javax.net.ssl.trustStorePassword` system property
+
+|`sslTrustCertificateKeyStoreType`
+
+|Trust store type.
+
+If `sslMode` is `require` and the trust store type isn't specified in the Ignite properties, the JSSE property `javax.net.ssl.trustStoreType` is used.
+
+|The value of the
+`javax.net.ssl.trustStoreType`
+system property. If the system property is not defined, the default value is `JKS`.
+
+|`sslTrustAll`
+
+a|Disables server certificate validation. Set to `true` to trust any server certificate (revoked, expired, or self-signed SSL certificates).
+
+CAUTION: Do not enable this option in production on a network you do not entirely trust, especially one that is accessible from the public internet.
+
+|`false`
+
+|`sslFactory`
+
+|Class name of the custom implementation of the
+`Factory<SSLSocketFactory>`.
+
+If `sslMode` is `require` and a factory is specified, the custom factory is used instead of the JSSE socket factory. In this case, other SSL properties are ignored.
+
+|`null`
+|====
+
+
+//See the `ssl*` parameters of the JDBC driver, and `ssl*` parameters and `useIgniteSslContextFactory` of the `ClientConnectorConfiguration` for more detailed information.
+
+The default implementation is based on JSSE, and works through two Java keystore files:
+
+- `sslClientCertificateKeyStoreUrl` - the client certificate keystore holds the keys and certificate for the client.
+- `sslTrustCertificateKeyStoreUrl` - the trusted certificate keystore contains the certificate information to validate the server's certificate.
+
+The trust store is an optional parameter; however, either `sslTrustCertificateKeyStoreUrl` or `sslTrustAll` must be configured.
+
+[WARNING]
+====
+[discrete]
+=== Using the "sslTrustAll" option
+
+Do not enable this option in production on a network you do not entirely trust, especially anything using the public internet.
+====
+
+If you want to use your own implementation or method to configure the `SSLSocketFactory`, you can use the JDBC Driver's `sslFactory` parameter. It is a string that must contain the name of a class that implements the `Factory<SSLSocketFactory>` interface. The class must be available to the JDBC Driver's class loader.
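+
+For reference, a minimal sketch of an SSL-enabled connection string (the file paths and passwords are illustrative; assuming the standard `java.sql` imports):
+
+[source,java]
+----
+String url = "jdbc:ignite:thin://myHost:10800;sslMode=require"
+    + ";sslClientCertificateKeyStoreUrl=/path/to/client.jks"
+    + ";sslClientCertificateKeyStorePassword=123456"
+    + ";sslTrustCertificateKeyStoreUrl=/path/to/trust.jks"
+    + ";sslTrustCertificateKeyStorePassword=123456";
+
+Connection conn = DriverManager.getConnection(url);
+----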
+
+== Ignite DataSource
+
+The DataSource object is used as a deployed object that can be located by logical name via the JNDI naming service. The JDBC Driver's `org.apache.ignite.IgniteJdbcThinDataSource` class implements the JDBC `DataSource` interface, allowing you to work through a DataSource instead of a connection string.
+
+In addition to generic DataSource properties, `IgniteJdbcThinDataSource` supports all the Ignite-specific properties that can be passed into a JDBC connection string. For instance, the `distributedJoins` property can be (re)set via the `IgniteJdbcThinDataSource#setDistributedJoins()` method.
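+
+A minimal sketch of using the data source (the URL is illustrative, and the `setUrl` setter is assumed to be available alongside the property-specific setters):
+
+[source,java]
+----
+IgniteJdbcThinDataSource ids = new IgniteJdbcThinDataSource();
+ids.setUrl("jdbc:ignite:thin://127.0.0.1");
+ids.setDistributedJoins(true);
+
+Connection conn = ids.getConnection();
+----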
+
+Refer to the link:{javadoc_base_url}/org/apache/ignite/IgniteJdbcThinDataSource.html[JavaDocs] for more details.
+
+== Examples
+
+To start processing the data located in the cluster, you need to create a JDBC Connection object via one of the methods below:
+
+[source, java]
+----
+// Open the JDBC connection via DriverManager.
+Connection conn = DriverManager.getConnection("jdbc:ignite:thin://192.168.0.50");
+----
+
+or
+
+[source,java]
+----
+include::{javaFile}[tags=connection-from-data-source,indent=0]
+----
+
+Then you can execute SQL SELECT queries as follows:
+
+[source,java]
+----
+include::{javaFile}[tags=select,indent=0]
+----
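+
+For reference, a minimal sketch of running a SELECT query (the table and column names are illustrative; assuming the standard `java.sql` imports):
+
+[source,java]
+----
+// Query the names of all persons older than 30.
+try (Statement stmt = conn.createStatement();
+     ResultSet rs = stmt.executeQuery("SELECT name FROM Person WHERE age > 30")) {
+    while (rs.next())
+        System.out.println(rs.getString(1));
+}
+----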
+
+You can also modify the data via DML statements.
+
+=== INSERT
+
+[source,java]
+----
+include::{javaFile}[tags=insert,indent=0]
+----
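+
+A minimal sketch of an INSERT with a prepared statement (the table and column names are illustrative; assuming the standard `java.sql` imports):
+
+[source,java]
+----
+try (PreparedStatement ps =
+         conn.prepareStatement("INSERT INTO Person (id, name, age) VALUES (?, ?, ?)")) {
+    ps.setLong(1, 1L);
+    ps.setString(2, "John Doe");
+    ps.setInt(3, 30);
+    ps.executeUpdate();
+}
+----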
+
+
+=== MERGE
+
+
+[source,java]
+----
+include::{javaFile}[tags=merge,indent=0]
+
+----
+
+
+=== UPDATE
+
+
+[source,java]
+----
+// Update a Person.
+conn.createStatement().
+  executeUpdate("UPDATE Person SET age = age + 1 WHERE age = 25");
+----
+
+
+=== DELETE
+
+
+[source,java]
+----
+conn.createStatement().execute("DELETE FROM Person WHERE age = 25");
+----
+
+
+== Streaming
+
+JDBC Driver allows streaming data in bulk using the `SET` command. See the `SET` command link:sql-reference/operational-commands#set-streaming[documentation] for more information.
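+
+A minimal sketch of bulk loading over an already open connection `conn` (the table is illustrative; assuming the standard `java.sql` imports):
+
+[source,java]
+----
+try (Statement stmt = conn.createStatement()) {
+    // Switch the connection into streaming mode.
+    stmt.execute("SET STREAMING ON");
+
+    try (PreparedStatement ps =
+             conn.prepareStatement("INSERT INTO Person (id, name) VALUES (?, ?)")) {
+        for (long i = 0; i < 100_000; i++) {
+            ps.setLong(1, i);
+            ps.setString(2, "Person " + i);
+            ps.executeUpdate();
+        }
+    }
+
+    // Flush the remaining data and return to regular mode.
+    stmt.execute("SET STREAMING OFF");
+}
+----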
+
+
+
+
+
+
+== Error Codes
+
+The JDBC drivers pass error codes in the `java.sql.SQLException` class to facilitate exception handling on the application side. To get an error code, use the `java.sql.SQLException.getSQLState()` method, which returns a string containing the ANSI SQLSTATE error code:
+
+
+[source,java]
+----
+include::{javaFile}[tags=handle-exception,indent=0]
+----
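+
+For reference, a minimal sketch of checking the SQLSTATE value (the host and query are illustrative; assuming the standard `java.sql` imports):
+
+[source,java]
+----
+try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://myHost")) {
+    conn.createStatement().execute("SELECT * FROM NonExistingTable");
+}
+catch (SQLException e) {
+    if ("42000".equals(e.getSQLState()))
+        System.err.println("Query parsing error: " + e.getMessage());
+    else
+        System.err.println("SQLSTATE " + e.getSQLState() + ": " + e.getMessage());
+}
+----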
+
+
+
+The table below lists all the link:https://en.wikipedia.org/wiki/SQLSTATE[ANSI SQLSTATE] error codes currently supported by Ignite. Note that the list may be extended in the future.
+
+[width="100%",cols="20%,80%"]
+|=======================================================================
+|Code |Description
+
+|0700B|Conversion failure (for example, a string expression cannot be parsed as a number or a date).
+
+|0700E|Invalid transaction isolation level.
+
+|08001|The driver failed to open a connection to the cluster.
+
+|08003|The connection is in the closed state (this happened unexpectedly).
+
+|08004|The connection was rejected by the cluster.
+
+|08006|I/O error during communication.
+
+|22004|Null value not allowed.
+
+|22023|Unsupported parameter type.
+
+|23000|Data integrity constraint violation.
+
+|24000|Invalid result set state.
+
+|0A000|Requested operation is not supported.
+
+|40001|Concurrent update conflict. See link:transactions/mvcc#concurrent-updates[Concurrent Updates].
+
+|42000|Query parsing exception.
+
+|50000| Internal error.
+The code is not defined by ANSI and refers to an Ignite-specific error. Refer to the `java.sql.SQLException` error message for more information.
+|=======================================================================
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/docs/_docs/SQL/ODBC/connection-string-dsn.adoc b/docs/_docs/SQL/ODBC/connection-string-dsn.adoc
new file mode 100644
index 0000000..6c5e1c4
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/connection-string-dsn.adoc
@@ -0,0 +1,255 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Connection String and DSN
+
+== Connection String Format
+
+The ODBC Driver supports standard connection string format. Here is the formal syntax:
+
+[source,text]
+----
+connection-string ::= empty-string[;] | attribute[;] | attribute; connection-string
+empty-string ::=
+attribute ::= attribute-keyword=attribute-value | DRIVER=[{]attribute-value[}]
+attribute-keyword ::= identifier
+attribute-value ::= character-string
+----
+
+
+In simple terms, an ODBC connection URL is a string of parameters of your choice separated by semicolons.
+
+== Supported Arguments
+
+The ODBC driver supports and uses several connection string/DSN arguments. All parameter names are case-insensitive, so `ADDRESS`, `Address`, and `address` are all valid and refer to the same parameter. If an argument is not specified, the default value is used. The exception to this rule is the `ADDRESS` attribute: if it is not specified, the `SERVER` and `PORT` attributes are used instead.
+
+[width="100%",cols="20%,60%,20%"]
+|=======================================================================
+|Attribute keyword |Description |Default Value
+
+|`ADDRESS`
+|Address of the remote node to connect to. The format is: `<host>[:<port>]`. For example: `localhost`, `example.com:12345`, `127.0.0.1`, `192.168.3.80:5893`.
+If this attribute is specified, then `SERVER` and `PORT` arguments are ignored.
+|None.
+
+|`SERVER`
+|Address of the node to connect to.
+This argument value is ignored if ADDRESS argument is specified.
+|None.
+
+|`PORT`
+|Port on which `OdbcProcessor` of the node is listening.
+This argument value is ignored if `ADDRESS` argument is specified.
+|`10800`
+
+|`USER`
+|Username for the SQL connection. This parameter is required if authentication is enabled on the server.
+See the link:security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE USER] documentation for details on how to enable authentication and create users, respectively.
+|Empty string
+
+|`PASSWORD`
+|Password for the SQL connection. This parameter is required if authentication is enabled on the server.
+See the link:security/authentication[Authentication] and link:sql-reference/ddl#create-user[CREATE USER] documentation for details on how to enable authentication and create users, respectively.
+|Empty string
+
+|`SCHEMA`
+|Schema name.
+|`PUBLIC`
+
+|`DSN`
+|DSN name to connect to.
+| None.
+
+|`PAGE_SIZE`
+|Number of rows returned in response to a fetch request to the data source. The default value should be fine in most cases. Setting a low value can result in slow data fetching, while setting a high value can result in additional memory usage by the driver and an additional delay when the next page is retrieved.
+|`1024`
+
+|`DISTRIBUTED_JOINS`
+|Enables the link:SQL/distributed-joins#non-colocated-joins[non-colocated distributed joins] feature for all queries that are executed over the ODBC connection.
+|`false`
+
+|`ENFORCE_JOIN_ORDER`
+|Enforces the join order of tables in SQL queries. If set to `true`, the query optimizer does not reorder tables in the join.
+|`false`
+
+|`PROTOCOL_VERSION`
+|Specifies the ODBC protocol version to use. The following versions are currently available: `2.1.0`, `2.1.5`, `2.3.0`, `2.3.2`, `2.5.0`. You can use earlier versions of the protocol for backward compatibility.
+|`2.3.0`
+
+|`REPLICATED_ONLY`
+|Set this property to `true` if the query is executed only over fully replicated tables. This hint may enable execution optimizations.
+|`false`
+
+|`COLLOCATED`
+| Set this parameter to `true` if your SQL statement includes a GROUP BY clause that groups the results by either primary
+or affinity key. When Ignite executes a distributed query, it sends sub-queries to individual cluster members. If
+you know in advance that the elements of your query selection are colocated together on the same node and you group by
+a primary or affinity key, then Ignite makes significant performance and network optimizations by grouping data locally
+ on each node participating in the query.
+|`false`
+
+|`LAZY`
+|Lazy query execution.
+By default, Ignite attempts to fetch the whole query result set into memory and send it to the client. For small and medium result sets, this provides optimal performance and minimizes the duration of internal database locks, thus increasing concurrency.
+However, if the result set is too big to fit in the available memory, it can lead to excessive GC pauses and even `OutOfMemoryError` errors. Use this flag to tell Ignite to fetch the result set lazily, thus minimizing memory consumption at the cost of a moderate performance hit.
+|`false`
+
+|`SKIP_REDUCER_ON_UPDATE`
+|Enables the server-side update feature.
+When Ignite executes a DML operation, it first fetches all the affected intermediate rows to the query initiator (also known as the reducer) for analysis, and only then prepares batches of updated values to be sent to remote nodes.
+This approach might affect performance and can saturate the network if a DML operation has to move many entries over it.
+Use this flag to tell Ignite to perform all intermediate row analysis and updates "in-place" on the corresponding remote data nodes.
+Defaults to `false`, meaning that intermediate results are fetched to the query initiator first.
+|`false`
+
+|`SSL_MODE`
+|Determines whether the SSL connection should be negotiated with the server. Use `require` or `disable` mode as needed.
+| None.
+
+|`SSL_KEY_FILE`
+|Specifies the name of the file containing the SSL server private key.
+| None.
+
+|`SSL_CERT_FILE`
+|Specifies the name of the file containing the SSL server certificate.
+| None.
+
+|`SSL_CA_FILE`
+|Specifies the name of the file containing the SSL server certificate authority (CA).
+| None.
+|=======================================================================
+
+== Connection String Samples
+You can find samples of the connection string below. These strings can be used with the `SQLDriverConnect` ODBC call to establish a connection to a node.
+
+
+[tabs]
+--
+tab:Authentication[]
+[source,text]
+----
+DRIVER={Apache Ignite};
+ADDRESS=localhost:10800;
+SCHEMA=somecachename;
+USER=yourusername;
+PASSWORD=yourpassword;
+SSL_MODE=[require|disable];
+SSL_KEY_FILE=<path_to_private_key>;
+SSL_CERT_FILE=<path_to_client_certificate>;
+SSL_CA_FILE=<path_to_trusted_certificates>
+----
+
+tab:Specific Cache[]
+[source,text]
+----
+DRIVER={Apache Ignite};ADDRESS=localhost:10800;CACHE=yourCacheName
+----
+
+tab:Default cache[]
+[source,text]
+----
+DRIVER={Apache Ignite};ADDRESS=localhost:10800
+----
+
+tab:DSN[]
+[source,text]
+----
+DSN=MyIgniteDSN
+----
+
+tab:Custom page size[]
+[source,text]
+----
+DRIVER={Apache Ignite};ADDRESS=example.com:12901;CACHE=MyCache;PAGE_SIZE=4096
+----
+--
+
+
+
+== Configuring DSN
+The same arguments apply if you prefer to use link:https://en.wikipedia.org/wiki/Data_source_name[DSN] (Data Source Name) for connection purposes.
+
+To configure a DSN on Windows, use the system tool called `odbcad32` (for 32-bit [x86] systems) or `odbc64` (for 64-bit systems), which is the ODBC Data Source Administrator.
+
+_If you use the pre-built msi file_ to install the driver, make sure you have installed Microsoft Visual C++ 2010 (https://www.microsoft.com/en-ie/download/details.aspx?id=5555[32-bit/x86] or https://www.microsoft.com/en-us/download/details.aspx?id=14632[64-bit/x64]) first.
+
+Launch this tool via `Control Panel->Administrative Tools->Data Sources (ODBC)`. Once the ODBC Data Source Administrator is launched, select `Add...->Apache Ignite` and configure your DSN.
+
+
+image::images/odbc_dsn_configuration.png[Configuring DSN]
+
+
+To do the same on Linux, you have to locate the `odbc.ini` file. The file location varies among Linux distributions and depends on the specific Driver Manager used by the distribution. As an example, if you are using unixODBC, you can run the following command, which prints system-wide ODBC-related details:
+
+
+[source,text]
+----
+odbcinst -j
+----
+
+
+Use the `SYSTEM DATA SOURCES` and `USER DATA SOURCES` properties to locate the `odbc.ini` file.
+
+Once you locate the `odbc.ini` file, open it with the editor of your choice and add the DSN section to it, as shown below:
+
+[source,text]
+----
+[DSN Name]
+description=<Insert your description here>
+driver=Apache Ignite
+<Other arguments here...>
+----
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/docs/_docs/SQL/ODBC/data-types.adoc b/docs/_docs/SQL/ODBC/data-types.adoc
new file mode 100644
index 0000000..ab2d8e1
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/data-types.adoc
@@ -0,0 +1,38 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Data Types
+
+Supported data types listing.
+
+The following SQL data types, listed in this link:https://docs.microsoft.com/en-us/sql/odbc/reference/appendixes/sql-data-types[specification], are supported:
+
+- `SQL_CHAR`
+- `SQL_VARCHAR`
+- `SQL_LONGVARCHAR`
+- `SQL_SMALLINT`
+- `SQL_INTEGER`
+- `SQL_FLOAT`
+- `SQL_DOUBLE`
+- `SQL_BIT`
+- `SQL_TINYINT`
+- `SQL_BIGINT`
+- `SQL_BINARY`
+- `SQL_VARBINARY`
+- `SQL_LONGVARBINARY`
+- `SQL_GUID`
+- `SQL_DECIMAL`
+- `SQL_TYPE_DATE`
+- `SQL_TYPE_TIMESTAMP`
+- `SQL_TYPE_TIME`
diff --git a/docs/_docs/SQL/ODBC/error-codes.adoc b/docs/_docs/SQL/ODBC/error-codes.adoc
new file mode 100644
index 0000000..a1d29ce
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/error-codes.adoc
@@ -0,0 +1,155 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Error Codes
+
+To get an error code, use the `SQLGetDiagRec()` function. It returns a string holding the ANSI SQLSTATE error code. For example:
+
+[source,c++]
+----
+SQLHENV env;
+SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
+
+SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, reinterpret_cast<void*>(SQL_OV_ODBC3), 0);
+
+SQLHDBC dbc;
+SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
+
+SQLCHAR connectStr[] = "DRIVER={Apache Ignite};SERVER=localhost;PORT=10800;SCHEMA=Person;";
+SQLDriverConnect(dbc, NULL, connectStr, SQL_NTS, 0, 0, 0, SQL_DRIVER_COMPLETE);
+
+SQLHSTMT stmt;
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query[] = "SELECT firstName, lastName, resume, salary FROM Person";
+SQLRETURN ret = SQLExecDirect(stmt, query, SQL_NTS);
+
+if (ret != SQL_SUCCESS)
+{
+	SQLCHAR sqlstate[7] = "";
+	SQLINTEGER nativeCode;
+
+	SQLCHAR message[1024];
+	SQLSMALLINT reallen = 0;
+
+	int i = 1;
+	ret = SQLGetDiagRec(SQL_HANDLE_STMT, stmt, i, sqlstate,
+                      &nativeCode, message, sizeof(message), &reallen);
+
+	while (ret != SQL_NO_DATA)
+	{
+		std::cout << sqlstate << ": " << message;
+
+		++i;
+		ret = SQLGetDiagRec(SQL_HANDLE_STMT, stmt, i, sqlstate,
+                        &nativeCode, message, sizeof(message), &reallen);
+	}
+}
+----
+
+The table below lists all the error codes supported by Ignite presently. This list may be extended in the future.
+
+[width="100%",cols="20%,80%"]
+|=======================================================================
+|Code |Description
+
+|01S00
+|Invalid connection string attribute.
+
+|01S02
+|The driver did not support the specified value and substituted a similar value.
+
+|08001
+|The driver failed to open a connection to the cluster.
+
+|08002
+|The connection is already established.
+
+|08003
+|The connection is in the closed state. Happened unexpectedly.
+
+|08004
+|The connection is rejected by the cluster.
+
+|08S01
+|Connection failure.
+
+|22026
+|String length mismatch in data-at-execution dialog.
+
+|23000
+|Integrity constraint violation (e.g. duplicate key, null key and so on).
+
+|24000
+|Invalid cursor state.
+
+|42000
+|Syntax error in request.
+
+|42S01
+|Table already exists.
+
+|42S02
+|Table not found.
+
+|42S11
+|Index already exists.
+
+|42S12
+|Index not found.
+
+|42S21
+|Column already exists.
+
+|42S22
+|Column not found.
+
+|HY000
+|General error. See error message for details.
+
+|HY001
+|Memory allocation error.
+
+|HY003
+|Invalid application buffer type.
+
+|HY004
+|Invalid SQL data type.
+
+|HY009
+|Invalid use of null-pointer.
+
+|HY010
+|Function call sequence error.
+
+|HY090
+|Invalid string or buffer length (e.g. negative or zero length).
+
+|HY092
+|Option type out of range.
+
+|HY097
+|Column type out of range.
+
+|HY105
+|Invalid parameter type.
+
+|HY106
+|Fetch type out of range.
+
+|HYC00
+|Feature is not implemented.
+
+|IM001
+|Function is not supported.
+|=======================================================================
diff --git a/docs/_docs/SQL/ODBC/odbc-driver.adoc b/docs/_docs/SQL/ODBC/odbc-driver.adoc
new file mode 100644
index 0000000..9f4e9b8
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/odbc-driver.adoc
@@ -0,0 +1,343 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= ODBC Driver
+
+== Overview
+Ignite includes an ODBC driver that allows you to both select and modify data stored in a distributed cache, using standard SQL queries and the native ODBC API.
+
+For detailed information on ODBC please refer to link:https://msdn.microsoft.com/en-us/library/ms714177.aspx[ODBC Programmer's Reference].
+
+The ODBC driver implements version 3.0 of the ODBC API.
+
+== Cluster Configuration
+
+The ODBC driver is treated as a dynamic library on Windows and a shared object on Linux. An application does not load it directly. Instead, it uses the Driver Manager API that loads and unloads ODBC drivers whenever required.
+
+Internally, the ODBC driver uses TCP to connect to a cluster. The cluster-side connection parameters can be configured via the `IgniteConfiguration.clientConnectorConfiguration` property.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+    <property name="clientConnectorConfiguration">
+        <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration"/>
+    </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+ClientConnectorConfiguration clientConnectorCfg = new ClientConnectorConfiguration();
+cfg.setClientConnectorConfiguration(clientConnectorCfg);
+
+----
+--
+
+Client connector configuration supports the following properties:
+
+[width="100%",cols="20%,60%,20%"]
+|=======================================================================
+|Parameter |Description |Default Value
+
+|`host`
+|Host name or IP address to bind to. When set to null, binding is made to `localhost`.
+|`null`
+
+|`port`
+|TCP port to bind to. If the specified port is already in use, Ignite will try to find another available port using the `portRange` property.
+|`10800`
+
+|`portRange`
+|Defines the number of ports to try to bind to. E.g. if the port is set to `10800` and `portRange` is `100`, the server sequentially tries to bind to ports in the `[10800, 10900]` range until it finds a free port.
+|`100`
+
+|`maxOpenCursorsPerConnection`
+|Maximum number of cursors that can be opened simultaneously for a single connection.
+|`128`
+
+|`threadPoolSize`
+|Number of request-handling threads in the thread pool.
+|`MAX(8, CPU cores)`
+
+|`socketSendBufferSize`
+|Size of the TCP socket send buffer. When set to 0, the system default value is used.
+|`0`
+
+|`socketReceiveBufferSize`
+|Size of the TCP socket receive buffer. When set to 0, the system default value is used.
+|`0`
+
+|`tcpNoDelay`
+|Whether to use the `TCP_NODELAY` option.
+|`true`
+
+|`idleTimeout`
+|Idle timeout for client connections.
+Clients will automatically be disconnected from the server after being idle for the configured timeout.
+When this parameter is set to zero or a negative value, idle timeout will be disabled.
+|`0`
+
+|`isOdbcEnabled`
+|Whether access through ODBC is enabled.
+|`true`
+
+|`isThinClientEnabled`
+|Whether access through thin client is enabled.
+|`true`
+|=======================================================================
+
+
+You can change these parameters as shown in the example below:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/odbc.xml[tags=ignite-config;!discovery,indent=0]
+----
+
+tab:Java[]
+[source,java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+...
+ClientConnectorConfiguration clientConnectorCfg = new ClientConnectorConfiguration();
+
+clientConnectorCfg.setHost("127.0.0.1");
+clientConnectorCfg.setPort(12345);
+clientConnectorCfg.setPortRange(2);
+clientConnectorCfg.setMaxOpenCursorsPerConnection(512);
+clientConnectorCfg.setSocketSendBufferSize(65536);
+clientConnectorCfg.setSocketReceiveBufferSize(131072);
+clientConnectorCfg.setThreadPoolSize(4);
+
+cfg.setClientConnectorConfiguration(clientConnectorCfg);
+...
+----
+--
+
+A connection that is established from the ODBC driver side to the cluster via `ClientListenerProcessor` is also configurable. Find more details on how to alter connection settings from the driver side link:SQL/ODBC/connection-string-dsn[here].
+
+== Thread-Safety
+
+The current implementation of the Ignite ODBC driver only provides thread safety at the connection level. This means that you should not access the same connection from multiple threads without additional synchronization, although you can create separate connections for each thread and use them simultaneously.
+
+== Prerequisites
+
+Apache Ignite ODBC Driver was officially tested on:
+
+[cols="1,3a"]
+|===
+|OS
+|- Windows (XP and up, both 32-bit and 64-bit versions)
+- Windows Server (2008 and up, both 32-bit and 64-bit versions)
+- Ubuntu (18.04 64-bit)
+
+|C++ compiler
+
+|MS Visual C++ (10.0 and up), g++ (4.4.0 and up)
+
+|Visual Studio
+
+|2010 and above
+|===
+
+== Building ODBC Driver
+
+Ignite is shipped with pre-built installers for both the 32-bit and 64-bit versions of the driver for Windows. So if you just want to install the ODBC driver on Windows, go straight to the <<Installing ODBC Driver>> section for installation instructions.
+
+If you use Linux, you need to build the ODBC driver before you can install it. So if you are using Linux, or if you want to build the driver yourself for Windows, keep reading.
+
+Ignite ODBC Driver source code is shipped as part of the Ignite package and it should be built before usage.
+
+Since the ODBC Driver is written in {cpp}, it is shipped as part of Ignite {cpp} and depends on some of the {cpp} libraries. More specifically, it depends on the `utils` and `binary` Ignite libraries. This means that you will need to build them prior to building the ODBC driver itself.
+
+We assume here that you are using the binary Ignite release. If you are using the source release, instead of `%IGNITE_HOME%\platforms\cpp` path you should use `%IGNITE_HOME%\modules\platforms\cpp` throughout.
+
+=== Building on Windows
+
+You will need MS Visual Studio 2010 or later to be able to build the ODBC driver on Windows. Once you have it, open the Ignite solution `%IGNITE_HOME%\platforms\cpp\project\vs\ignite.sln` (or `ignite_86.sln` if you are running a 32-bit platform), right-click the odbc project in the "Solution Explorer" and choose "Build". Visual Studio will automatically detect and build all the necessary dependencies.
+
+The path to the .sln file may vary depending on whether you're building from source files or binaries. If you don't see your .sln file in `%IGNITE_HOME%\platforms\cpp\project\vs\`, try looking in `%IGNITE_HOME%\modules\platforms\cpp\project\vs\`.
+
+NOTE: If you are using VS 2015 or later (MSVC 14.0 or later), you need to add `legacy_stdio_definitions.lib` as an additional library to odbc project linker's settings in order to be able to build the project. To add this library to the linker input in the IDE, open the context menu for the project node, choose `Properties`, then in the `Project Properties` dialog box, choose `Linker`, and edit the `Linker Input` to add `legacy_stdio_definitions.lib` to the semi-colon-separated list.
+
+Once the build process is complete, you can find `ignite.odbc.dll` in `%IGNITE_HOME%\platforms\cpp\project\vs\x64\Release` for the 64-bit version and in `%IGNITE_HOME%\platforms\cpp\project\vs\Win32\Release` for the 32-bit version.
+
+NOTE: Be sure to use the corresponding driver (32-bit or 64-bit) for your system.
+
+=== Building installers on Windows
+
+Once you have built driver binaries you may want to build installers for easier installation. Ignite uses link:http://wixtoolset.org[WiX Toolset] to generate ODBC installers, so to build them you'll need to download and install WiX. Make sure you have added the `bin` directory of the WiX Toolset to your PATH variable.
+
+Once everything is ready, open a terminal and navigate to the directory `%IGNITE_HOME%\platforms\cpp\odbc\install`. Execute the following commands one by one to build installers:
+
+
+[tabs]
+--
+tab:64-bit driver[]
+[source,shell]
+----
+candle.exe ignite-odbc-amd64.wxs
+light.exe -ext WixUIExtension ignite-odbc-amd64.wixobj
+----
+
+tab:32-bit driver[]
+[source,shell]
+----
+candle.exe ignite-odbc-x86.wxs
+light.exe -ext WixUIExtension ignite-odbc-x86.wixobj
+----
+--
+
+As a result, `ignite-odbc-amd64.msi` and `ignite-odbc-x86.msi` files should appear in the directory. You can use them to install your freshly built drivers.
+
+=== Building on Linux
+
+On a Linux-based operating system, you will need to install an ODBC Driver Manager of your choice to be able to build and use the Ignite ODBC Driver. The ODBC Driver has been tested with link:http://www.unixodbc.org[UnixODBC].
+
+==== Prerequisites
+include::includes/cpp-linux-build-prerequisites.adoc[]
+
+NOTE: The JDK is used only during the build process and not by the ODBC driver itself.
+
+==== Building ODBC driver
+- Create a build directory for cmake. We'll refer to it as `${CPP_BUILD_DIR}`
+- (Optional) Choose installation directory prefix (by default `/usr/local`). We'll refer to it as `${CPP_INSTALL_DIR}`
+- Build and install the driver by executing the following commands:
+
+[tabs]
+--
+tab:Ubuntu[]
+[source,bash,subs="attributes,specialchars"]
+----
+cd ${CPP_BUILD_DIR}
+cmake -DCMAKE_BUILD_TYPE=Release -DWITH_ODBC=ON ${IGNITE_HOME}/platforms/cpp -DCMAKE_INSTALL_PREFIX=${CPP_INSTALL_DIR}
+make
+sudo make install
+----
+
+tab:CentOS/RHEL[]
+[source,shell,subs="attributes,specialchars"]
+----
+cd ${CPP_BUILD_DIR}
+cmake3 -DCMAKE_BUILD_TYPE=Release -DWITH_ODBC=ON  ${IGNITE_HOME}/platforms/cpp -DCMAKE_INSTALL_PREFIX=${CPP_INSTALL_DIR}
+make 
+sudo make install
+----
+
+--
+
+After the build process is over, you can find out where your ODBC driver has been placed by running the following command:
+
+[source,shell]
+----
+whereis libignite-odbc
+----
+
+The path should look something like: `/usr/local/lib/libignite-odbc.so`
+
+== Installing ODBC Driver
+
+In order to use ODBC driver, you need to register it in your system so that your ODBC Driver Manager will be able to locate it.
+
+=== Installing on Windows
+
+For 32-bit Windows, you should use the 32-bit version of the driver. For 64-bit Windows, you can use either the 64-bit or the 32-bit driver.
+You may want to install both drivers on 64-bit Windows so that both 32-bit and 64-bit applications can use them.
+
+==== Installing using installers
+
+NOTE: Microsoft Visual C++ 2010 Redistributable Package for 32-bit or 64-bit should be installed first.
+
+This is the easiest way and should be used by default. Just launch the installer for the version of the driver you need and follow the instructions:
+
+32-bit installer: `%IGNITE_HOME%\platforms\cpp\bin\odbc\ignite-odbc-x86.msi`
+64-bit installer: `%IGNITE_HOME%\platforms\cpp\bin\odbc\ignite-odbc-amd64.msi`
+
+==== Installing manually
+
+To install the ODBC driver on Windows manually, first choose a directory on your
+file system where the driver (or drivers) will be located. Once you have
+chosen the location, put the driver there and make sure that all driver
+dependencies can be resolved as well, i.e., they can be found either in the `%PATH%` or
+in the same directory where the driver DLL resides.
+
+After that, use one of the install scripts from the following directory:
+`%IGNITE_HOME%/platforms/cpp/odbc/install`. Note that you may need OS administrator privileges to execute these scripts.
+
+[tabs]
+--
+tab:x86[]
+[source,shell]
+----
+install_x86 <absolute_path_to_32_bit_driver>
+----
+
+tab:AMD64[]
+[source,shell]
+----
+install_amd64 <absolute_path_to_64_bit_driver> [<absolute_path_to_32_bit_driver>]
+----
+
+--
+
+
+=== Installing on Linux
+
+To build and install the ODBC driver on Linux, you first need to install
+an ODBC Driver Manager. The ODBC driver has been tested with link:http://www.unixodbc.org[UnixODBC].
+
+Once you have built the driver and run the `make install` command, the ODBC driver (`libignite-odbc.so`) is placed in the `/usr/local/lib` folder. To install it as an ODBC driver in your Driver Manager and be able to use it, perform the following steps:
+
+- Ensure that the linker is able to locate all dependencies of the ODBC driver. You can check this by using the `ldd` command, assuming the ODBC driver is located under `/usr/local/lib`:
++
+`ldd /usr/local/lib/libignite-odbc.so`
++
+If there are unresolved links to other libraries, you may want to add the directories containing these libraries to the `LD_LIBRARY_PATH`.
+
+- Edit the `${IGNITE_HOME}/platforms/cpp/odbc/install/ignite-odbc-install.ini` file and make sure that the `Driver` parameter of the `Apache Ignite` section points to the location of `libignite-odbc.so`.
+
+- To install the ODBC driver, use the following command:
+
+[source,shell]
+----
+odbcinst -i -d -f ${IGNITE_HOME}/platforms/cpp/odbc/install/ignite-odbc-install.ini
+----
+You may need root privileges to execute this command.
+
+Now the Apache Ignite ODBC driver is installed and ready for use. You can connect to the cluster and use the driver just like any other ODBC driver.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/docs/_docs/SQL/ODBC/querying-modifying-data.adoc b/docs/_docs/SQL/ODBC/querying-modifying-data.adoc
new file mode 100644
index 0000000..bfe7834
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/querying-modifying-data.adoc
@@ -0,0 +1,491 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Querying and Modifying Data
+
+== Overview
+This page elaborates on how to connect to a cluster and execute a variety of SQL queries using the ODBC driver.
+
+
+At the implementation layer, the ODBC driver uses SQL Fields queries to retrieve data from the cluster.
+This means that from the ODBC side you can access only those fields that are link:SQL/sql-api#configuring-queryable-fields[defined in the cluster configuration].
+
+Moreover, the ODBC driver supports DML (Data Manipulation Language), which means that you can modify your data using an ODBC connection.
+
+NOTE: Refer to the link:{githubUrl}/modules/platforms/cpp/examples/odbc-example[ODBC example] that incorporates complete logic and exemplary queries described below.
+
+== Configuring the Cluster
+As the first step, you need to set up a configuration that will be used by the cluster nodes.
+The configuration should include cache configurations with properly defined `QueryEntities` properties.
+`QueryEntities` are essential when your application (or the ODBC driver in our scenario) is going to query and modify the data using SQL statements.
+Alternatively, you can create tables using DDL.
+
+[tabs]
+--
+tab:DDL[]
+[source,cpp]
+----
+SQLHENV env;
+
+// Allocate an environment handle
+SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
+
+// Use ODBC ver 3
+SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, reinterpret_cast<void*>(SQL_OV_ODBC3), 0);
+
+SQLHDBC dbc;
+
+// Allocate a connection handle
+SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
+
+// Prepare the connection string
+SQLCHAR connectStr[] = "DSN=My Ignite DSN";
+
+// Connecting to the Cluster.
+SQLDriverConnect(dbc, NULL, connectStr, SQL_NTS, NULL, 0, NULL, SQL_DRIVER_COMPLETE);
+
+SQLHSTMT stmt;
+
+// Allocate a statement handle
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query1[] = "CREATE TABLE Person ( "
+    "id LONG PRIMARY KEY, "
+    "firstName VARCHAR, "
+    "lastName VARCHAR, "
+    "salary FLOAT) "
+    "WITH \"template=partitioned\"";
+
+SQLExecDirect(stmt, query1, SQL_NTS);
+
+SQLCHAR query2[] = "CREATE TABLE Organization ( "
+    "id LONG PRIMARY KEY, "
+    "name VARCHAR) "
+    "WITH \"template=partitioned\"";
+
+SQLExecDirect(stmt, query2, SQL_NTS);
+
+SQLCHAR query3[] = "CREATE INDEX idx_organization_name ON Organization (name)";
+
+SQLExecDirect(stmt, query3, SQL_NTS);
+----
+
+tab:Spring XML[]
+[source,xml]
+----
+include::code-snippets/xml/odbc-cache-config.xml[tags=ignite-config;!discovery, indent=0]
+----
+--
+
+As you can see, we defined two caches that will contain the data of `Person` and `Organization` types.
+For both types, we listed specific fields and indexes that will be read or updated using SQL.
+
+
+== Connecting to the Cluster
+
+After the cluster is configured and started, we can connect to it from the ODBC driver side. To do this, you need to prepare a valid connection string and pass it as a parameter to the ODBC driver at the connection time. Refer to the link:SQL/ODBC/connection-string-dsn[Connection String] page for more details.
+
+Alternatively, you can also use a link:SQL/ODBC/connection-string-dsn#configuring-dsn[pre-configured DSN] for connection purposes as shown in the example below.
+
+
+[source,c++]
+----
+SQLHENV env;
+
+// Allocate an environment handle
+SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
+
+// Use ODBC ver 3
+SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, reinterpret_cast<void*>(SQL_OV_ODBC3), 0);
+
+SQLHDBC dbc;
+
+// Allocate a connection handle
+SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
+
+// Prepare the connection string
+SQLCHAR connectStr[] = "DSN=My Ignite DSN";
+
+// Connecting to Ignite Cluster.
+SQLRETURN ret = SQLDriverConnect(dbc, NULL, connectStr, SQL_NTS, NULL, 0, NULL, SQL_DRIVER_COMPLETE);
+
+if (!SQL_SUCCEEDED(ret))
+{
+  SQLCHAR sqlstate[7] = { 0 };
+  SQLINTEGER nativeCode;
+
+  SQLCHAR errMsg[BUFFER_SIZE] = { 0 };
+  SQLSMALLINT errMsgLen = static_cast<SQLSMALLINT>(sizeof(errMsg));
+
+  SQLGetDiagRec(SQL_HANDLE_DBC, dbc, 1, sqlstate, &nativeCode, errMsg, errMsgLen, &errMsgLen);
+
+  std::cerr << "Failed to connect to Ignite: "
+            << reinterpret_cast<char*>(sqlstate) << ": "
+            << reinterpret_cast<char*>(errMsg) << ", "
+            << "Native error code: " << nativeCode
+            << std::endl;
+
+  // Releasing allocated handles.
+  SQLFreeHandle(SQL_HANDLE_DBC, dbc);
+  SQLFreeHandle(SQL_HANDLE_ENV, env);
+
+  return;
+}
+----
+
+
+== Querying Data
+
+After everything is up and running, we're ready to execute `SQL SELECT` queries using the `ODBC API`.
+
+[source,c++]
+----
+SQLHSTMT stmt;
+
+// Allocate a statement handle
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query[] = "SELECT firstName, lastName, salary, Organization.name FROM Person "
+  "INNER JOIN \"Organization\".Organization ON Person.orgId = Organization.id";
+SQLSMALLINT queryLen = static_cast<SQLSMALLINT>(sizeof(query));
+
+SQLRETURN ret = SQLExecDirect(stmt, query, queryLen);
+
+if (!SQL_SUCCEEDED(ret))
+{
+  SQLCHAR sqlstate[7] = { 0 };
+  SQLINTEGER nativeCode;
+
+  SQLCHAR errMsg[BUFFER_SIZE] = { 0 };
+  SQLSMALLINT errMsgLen = static_cast<SQLSMALLINT>(sizeof(errMsg));
+
+  SQLGetDiagRec(SQL_HANDLE_DBC, dbc, 1, sqlstate, &nativeCode, errMsg, errMsgLen, &errMsgLen);
+
+  std::cerr << "Failed to perform SQL query: "
+            << reinterpret_cast<char*>(sqlstate) << ": "
+            << reinterpret_cast<char*>(errMsg) << ", "
+            << "Native error code: " << nativeCode
+            << std::endl;
+}
+else
+{
+  // Printing the result set.
+  struct OdbcStringBuffer
+  {
+    SQLCHAR buffer[BUFFER_SIZE];
+    SQLLEN resLen;
+  };
+
+  // Getting a number of columns in the result set.
+  SQLSMALLINT columnsCnt = 0;
+  SQLNumResultCols(stmt, &columnsCnt);
+
+  // Allocating buffers for columns.
+  std::vector<OdbcStringBuffer> columns(columnsCnt);
+
+  // Binding columns. For simplicity, we are going to use only
+  // string buffers here.
+  for (SQLSMALLINT i = 0; i < columnsCnt; ++i)
+    SQLBindCol(stmt, i + 1, SQL_C_CHAR, columns[i].buffer, BUFFER_SIZE, &columns[i].resLen);
+
+  // Fetching and printing data in a loop.
+  ret = SQLFetch(stmt);
+  while (SQL_SUCCEEDED(ret))
+  {
+    for (size_t i = 0; i < columns.size(); ++i)
+      std::cout << std::setw(16) << std::left << columns[i].buffer << " ";
+
+    std::cout << std::endl;
+
+    ret = SQLFetch(stmt);
+  }
+}
+
+// Releasing statement handle.
+SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+----
+
+
+[NOTE]
+====
+[discrete]
+=== Columns binding
+
+In the example above, we bind all columns to SQL_C_CHAR buffers. This means that all values are converted to strings upon fetching. This is done for the sake of simplicity. Value conversion upon fetching can be pretty slow, so your default decision should be to fetch values in the same format in which they are stored.
+====
+
+== Inserting Data
+
+To insert new data into the cluster, `SQL INSERT` statements can be used from the ODBC side.
+
+
+[source,c++]
+----
+SQLHSTMT stmt;
+
+// Allocate a statement handle
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query[] =
+	"INSERT INTO Person (id, orgId, firstName, lastName, resume, salary) "
+	"VALUES (?, ?, ?, ?, ?, ?)";
+
+SQLPrepare(stmt, query, static_cast<SQLSMALLINT>(sizeof(query)));
+
+// Binding columns.
+int64_t key = 0;
+int64_t orgId = 0;
+char name[1024] = { 0 };
+SQLLEN nameLen = SQL_NTS;
+double salary = 0.0;
+
+SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT, 0, 0, &key, 0, 0);
+SQLBindParameter(stmt, 2, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT, 0, 0, &orgId, 0, 0);
+SQLBindParameter(stmt, 3, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,	sizeof(name), sizeof(name), name, 0, &nameLen);
+SQLBindParameter(stmt, 4, SQL_PARAM_INPUT, SQL_C_DOUBLE, SQL_DOUBLE, 0, 0, &salary, 0, 0);
+
+// Filling cache.
+key = 1;
+orgId = 1;
+strncpy(name, "John", sizeof(name));
+salary = 2200.0;
+
+SQLExecute(stmt);
+SQLMoreResults(stmt);
+
+++key;
+orgId = 1;
+strncpy(name, "Jane", sizeof(name));
+salary = 1300.0;
+
+SQLExecute(stmt);
+SQLMoreResults(stmt);
+
+++key;
+orgId = 2;
+strncpy(name, "Richard", sizeof(name));
+salary = 900.0;
+
+SQLExecute(stmt);
+SQLMoreResults(stmt);
+
+++key;
+orgId = 2;
+strncpy(name, "Mary", sizeof(name));
+salary = 2400.0;
+
+SQLExecute(stmt);
+
+// Releasing statement handle.
+SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+----
+
+
+Next, we are going to insert additional organizations without using prepared statements.
+
+
+[source,c++]
+----
+SQLHSTMT stmt;
+
+// Allocate a statement handle
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query1[] = "INSERT INTO \"Organization\".Organization (id, name) VALUES (1L, 'Some company')";
+
+SQLExecDirect(stmt, query1, static_cast<SQLSMALLINT>(sizeof(query1)));
+
+SQLFreeStmt(stmt, SQL_CLOSE);
+
+SQLCHAR query2[] = "INSERT INTO \"Organization\".Organization (id, name) VALUES (2L, 'Some other company')";
+
+SQLExecDirect(stmt, query2, static_cast<SQLSMALLINT>(sizeof(query2)));
+
+// Releasing statement handle.
+SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+----
+
+
+[WARNING]
+====
+[discrete]
+=== Error Checking
+
+For simplicity, the example code above does not check return codes for errors. Production code should always check the return code of every ODBC call.
+====
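+
+As a rough sketch of what such error checking might look like, the helper below retrieves and prints the first diagnostic record for a failed call. The function name and the message format are illustrative only; the snippet assumes the standard ODBC headers and `<iostream>` are included, as in the examples above:
+
+[source,c++]
+----
+// Prints the first diagnostic record associated with a handle if the call failed.
+void CheckError(SQLRETURN ret, SQLSMALLINT handleType, SQLHANDLE handle, const char* context)
+{
+  if (SQL_SUCCEEDED(ret))
+    return;
+
+  SQLCHAR sqlstate[7] = { 0 };
+  SQLCHAR message[1024] = { 0 };
+  SQLINTEGER nativeCode = 0;
+  SQLSMALLINT messageLen = 0;
+
+  SQLGetDiagRec(handleType, handle, 1, sqlstate, &nativeCode,
+      message, static_cast<SQLSMALLINT>(sizeof(message)), &messageLen);
+
+  std::cerr << context << " failed: "
+            << reinterpret_cast<char*>(sqlstate) << ": "
+            << reinterpret_cast<char*>(message) << ", "
+            << "Native error code: " << nativeCode << std::endl;
+}
+
+// Usage example:
+// CheckError(SQLExecDirect(stmt, query1, SQL_NTS), SQL_HANDLE_STMT, stmt, "INSERT");
+----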
+
+== Updating Data
+
+Let's now update the salary for some of the persons stored in the cluster using the SQL `UPDATE` statement.
+
+
+[source,c++]
+----
+void AdjustSalary(SQLHDBC dbc, int64_t key, double salary)
+{
+  SQLHSTMT stmt;
+
+  // Allocate a statement handle
+  SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+  SQLCHAR query[] = "UPDATE Person SET salary=? WHERE id=?";
+
+  SQLBindParameter(stmt, 1, SQL_PARAM_INPUT,
+      SQL_C_DOUBLE, SQL_DOUBLE, 0, 0, &salary, 0, 0);
+
+  SQLBindParameter(stmt, 2, SQL_PARAM_INPUT, SQL_C_SLONG,
+      SQL_BIGINT, 0, 0, &key, 0, 0);
+
+  SQLExecDirect(stmt, query, static_cast<SQLSMALLINT>(sizeof(query)));
+
+  // Releasing statement handle.
+  SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+}
+
+...
+AdjustSalary(dbc, 3, 1200.0);
+AdjustSalary(dbc, 1, 2500.0);
+----
+
+== Deleting Data
+
+Finally, let's remove a few records with the help of the SQL `DELETE` statement.
+
+[source,c++]
+----
+void DeletePerson(SQLHDBC dbc, int64_t key)
+{
+  SQLHSTMT stmt;
+
+  // Allocate a statement handle
+  SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+  SQLCHAR query[] = "DELETE FROM Person WHERE id=?";
+
+  SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT,
+      0, 0, &key, 0, 0);
+
+  SQLExecDirect(stmt, query, static_cast<SQLSMALLINT>(sizeof(query)));
+
+  // Releasing statement handle.
+  SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+}
+
+...
+DeletePerson(dbc, 1);
+DeletePerson(dbc, 4);
+----
+
+== Batching With Arrays of Parameters
+
+The ODBC driver supports batching with link:https://docs.microsoft.com/en-us/sql/odbc/reference/develop-app/using-arrays-of-parameters[arrays of parameters] for DML statements.
+
+Let's try to insert the same records we did in the example above but now with a single `SQLExecute` call:
+
+[source,c++]
+----
+SQLHSTMT stmt;
+
+// Allocating a statement handle.
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+SQLCHAR query[] =
+  "INSERT INTO Person (id, orgId, firstName, salary) VALUES (?, ?, ?, ?)";
+
+SQLPrepare(stmt, query, static_cast<SQLSMALLINT>(sizeof(query)));
+
+// Binding parameter arrays.
+int64_t key[4] = {0};
+int64_t orgId[4] = {0};
+char name[1024 * 4] = {0};
+SQLLEN nameLen[4] = {0};
+double salary[4] = {0};
+
+SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT, 0, 0, key, 0, 0);
+SQLBindParameter(stmt, 2, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_BIGINT, 0, 0, orgId, 0, 0);
+SQLBindParameter(stmt, 3, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR, 1024, 0, name, 1024, nameLen);
+SQLBindParameter(stmt, 4, SQL_PARAM_INPUT, SQL_C_DOUBLE, SQL_DOUBLE, 0, 0, salary, 0, 0);
+
+// Filling the parameter arrays.
+key[0] = 1;
+orgId[0] = 1;
+strncpy(name, "John", 1023);
+salary[0] = 2200.0;
+nameLen[0] = SQL_NTS;
+
+key[1] = 2;
+orgId[1] = 1;
+strncpy(name + 1024, "Jane", 1023);
+salary[1] = 1300.0;
+nameLen[1] = SQL_NTS;
+
+key[2] = 3;
+orgId[2] = 2;
+strncpy(name + 1024 * 2, "Richard", 1023);
+salary[2] = 900.0;
+nameLen[2] = SQL_NTS;
+
+key[3] = 4;
+orgId[3] = 2;
+strncpy(name + 1024 * 3, "Mary", 1023);
+salary[3] = 2400.0;
+nameLen[3] = SQL_NTS;
+
+// Asking the driver to store the total number of processed argument sets
+// in the following variable.
+SQLULEN setsProcessed = 0;
+SQLSetStmtAttr(stmt, SQL_ATTR_PARAMS_PROCESSED_PTR, &setsProcessed, SQL_IS_POINTER);
+
+// Setting the size of the arguments array. This is 4 in our case.
+SQLSetStmtAttr(stmt, SQL_ATTR_PARAMSET_SIZE, reinterpret_cast<SQLPOINTER>(4), 0);
+
+// Executing the statement.
+SQLExecute(stmt);
+
+// Releasing the statement handle.
+SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+----
+
+NOTE: This type of batching is currently supported for `INSERT`, `UPDATE`, `DELETE`, and `MERGE` statements and does not work for `SELECT` statements. The data-at-execution capability is not supported with arrays-of-parameters batching either.
+
+== Streaming
+
+The ODBC driver allows streaming data in bulk using the `SET` command. See the `SET` link:sql-reference/operational-commands#set-streaming[command documentation] for more information.
+
+NOTE: In streaming mode, the array of parameters and data-at-execution parameters are not supported.
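+
+Below is a minimal sketch of switching streaming on and off from the ODBC side, assuming an open connection handle `dbc` as in the previous examples:
+
+[source,c++]
+----
+SQLHSTMT stmt;
+
+// Allocate a statement handle.
+SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
+
+// Enable streaming for this connection.
+SQLCHAR streamingOn[] = "SET STREAMING ON";
+SQLExecDirect(stmt, streamingOn, static_cast<SQLSMALLINT>(sizeof(streamingOn)));
+
+// ... execute a large number of INSERT statements here; they are buffered
+// and sent to the cluster in the background ...
+
+// Disabling streaming flushes any data that has not been sent yet.
+SQLCHAR streamingOff[] = "SET STREAMING OFF";
+SQLExecDirect(stmt, streamingOff, static_cast<SQLSMALLINT>(sizeof(streamingOff)));
+
+// Releasing statement handle.
+SQLFreeHandle(SQL_HANDLE_STMT, stmt);
+----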
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/docs/_docs/SQL/ODBC/specification.adoc b/docs/_docs/SQL/ODBC/specification.adoc
new file mode 100644
index 0000000..68e671b
--- /dev/null
+++ b/docs/_docs/SQL/ODBC/specification.adoc
@@ -0,0 +1,1090 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Specification
+
+== Overview
+
+ODBC defines several interface conformance levels. In this section, you can find out which features are supported by the Apache Ignite ODBC driver.
+
+== Core Interface Conformance
+
+[width="100%",cols="60%,10%,30%"]
+|=======================================================================
+|Feature |Supported|Comments
+
+|Allocate and free all types of handles, by calling `SQLAllocHandle` and `SQLFreeHandle`.
+|YES
+|
+
+|Use all forms of the `SQLFreeStmt` function.
+|YES
+|
+
+|Bind result set columns, by calling `SQLBindCol`.
+|YES
+|
+
+|Handle dynamic parameters, including arrays of parameters, in the input direction only, by calling `SQLBindParameter` and `SQLNumParams`.
+|YES
+|
+
+|Specify a bind offset.
+|YES
+|
+
+|Use the data-at-execution dialog, involving calls to `SQLParamData` and `SQLPutData`
+|YES
+|
+
+|Manage cursors and cursor names, by calling `SQLCloseCursor`, `SQLGetCursorName`, and `SQLSetCursorName`.
+|PARTIALLY
+|`SQLCloseCursor` is implemented. Named cursors are not supported by Ignite SQL.
+
+|Gain access to the description (metadata) of result sets, by calling `SQLColAttribute`, `SQLDescribeCol`, `SQLNumResultCols`, and `SQLRowCount`.
+|YES
+|
+
+|Query the data dictionary, by calling the catalog functions `SQLColumns`, `SQLGetTypeInfo`, `SQLStatistics`, and `SQLTables`.
+|PARTIALLY
+|`SQLStatistics` is not supported.
+
+|Manage data sources and connections, by calling `SQLConnect`, `SQLDataSources`, `SQLDisconnect`, and `SQLDriverConnect`. Obtain information on drivers, no matter which ODBC level they support, by calling `SQLDrivers`.
+|YES
+|
+
+|Prepare and execute SQL statements, by calling `SQLExecDirect`, `SQLExecute`, and `SQLPrepare`.
+|YES
+|
+
+|Fetch one row of a result set or multiple rows, in the forward direction only, by calling `SQLFetch` or by calling `SQLFetchScroll` with the `FetchOrientation` argument set to `SQL_FETCH_NEXT`
+|YES
+|
+
+|Obtain an unbound column in parts, by calling `SQLGetData`.
+|YES
+|
+
+|Obtain current values of all attributes, by calling `SQLGetConnectAttr`, `SQLGetEnvAttr`, and `SQLGetStmtAttr`, and set all attributes to their default values and set certain attributes to non-default values by calling `SQLSetConnectAttr`, `SQLSetEnvAttr`, and `SQLSetStmtAttr`.
+|PARTIALLY
+|Not all attributes are currently supported. See the tables below for details.
+
+|Manipulate certain fields of descriptors, by calling `SQLCopyDesc`, `SQLGetDescField`, `SQLGetDescRec`, `SQLSetDescField`, and `SQLSetDescRec`.
+|NO
+|
+
+|Obtain diagnostic information, by calling `SQLGetDiagField` and `SQLGetDiagRec`.
+|YES
+|
+
+|Detect driver capabilities, by calling `SQLGetFunctions` and `SQLGetInfo`. Also, detect the result of any text substitutions made to an SQL statement before it is sent to the data source, by calling `SQLNativeSql`.
+|YES
+|
+
+|Use the syntax of `SQLEndTran` to commit a transaction. A Core-level driver need not support true transactions; therefore, the application cannot specify `SQL_ROLLBACK` nor `SQL_AUTOCOMMIT_OFF` for the `SQL_ATTR_AUTOCOMMIT` connection attribute.
+|YES
+|
+
+|Call `SQLCancel` to cancel the data-at-execution dialog and, in multi-thread environments, to cancel an ODBC function executing in another thread. Core-level interface conformance does not mandate support for asynchronous execution of functions, nor the use of `SQLCancel` to cancel an ODBC function executing asynchronously. Neither the platform nor the ODBC driver need be multi-thread for the driver to conduct independent activities at the same time. However, in multi-thread environments, the ODBC driver must be thread-safe. Serialization of requests from the application is a conformant way to implement this specification, even though it might create serious performance problems.
+|NO
+|The current implementation does not support asynchronous execution. Cancelling the data-at-execution dialog is not supported either.
+
+|Obtain the `SQL_BEST_ROWID` row-identifying column of tables, by calling `SQLSpecialColumns`.
+|PARTIALLY
+|The current implementation always returns an empty row set.
+
+|=======================================================================
+
+
+== Level 1 Interface Conformance
+[width="100%",cols="60%,10%,30%"]
+|=======================================================================
+|Feature |Supported|Comments
+
+|Specify the schema of database tables and views (using two-part naming).
+|YES
+|
+
+|Invoke true asynchronous execution of ODBC functions, where applicable ODBC functions are all synchronous or all asynchronous on a given connection.
+|NO
+|
+
+|Use scrollable cursors, and thereby achieve access to a result set in methods other than forward-only, by calling `SQLFetchScroll` with the `FetchOrientation` argument other than `SQL_FETCH_NEXT`.
+|NO
+|
+
+|Obtain primary keys of tables, by calling `SQLPrimaryKeys`.
+|PARTIALLY
+|Currently returns an empty result set.
+
+|Use stored procedures, through the ODBC escape sequence for procedure calls, and query the data dictionary regarding stored procedures, by calling `SQLProcedureColumns` and `SQLProcedures`.
+|NO
+|
+
+|Connect to a data source by interactively browsing the available servers, by calling `SQLBrowseConnect`.
+|NO
+|
+
+|Use ODBC functions instead of SQL statements to perform certain database operations: `SQLSetPos` with `SQL_POSITION` and `SQL_REFRESH`.
+|NO
+|
+
+|Gain access to the contents of multiple result sets generated by batches and stored procedures, by calling `SQLMoreResults`.
+|YES
+|
+
+|Delimit transactions spanning several ODBC functions, with true atomicity and the ability to specify `SQL_ROLLBACK` in `SQLEndTran`.
+|NO
+|Ignite SQL does not support transactions.
+|=======================================================================
+
+== Level 2 Interface Conformance
+[width="100%",cols="60%,10%,30%"]
+|=======================================================================
+|Feature|Supported|Comments
+
+|Use three-part names of database tables and views.
+|NO
+|Ignite SQL does not support catalogs.
+
+|Describe dynamic parameters, by calling `SQLDescribeParam`.
+|YES
+|
+
+|Use not only input parameters but also output and input/output parameters, and result values of stored procedures.
+|NO
+|Ignite SQL does not support output parameters.
+
+|Use bookmarks, including retrieving bookmarks, by calling `SQLDescribeCol` and `SQLColAttribute` on column number 0; fetching based on a bookmark, by calling `SQLFetchScroll` with the `FetchOrientation` argument set to `SQL_FETCH_BOOKMARK`; and update, delete, and fetch by bookmark operations, by calling `SQLBulkOperations` with the Operation argument set to `SQL_UPDATE_BY_BOOKMARK`, `SQL_DELETE_BY_BOOKMARK`, or `SQL_FETCH_BY_BOOKMARK`.
+|NO
+|Ignite SQL does not support bookmarks.
+
+|Retrieve advanced information about the data dictionary, by calling `SQLColumnPrivileges`, `SQLForeignKeys`, and `SQLTablePrivileges`.
+|PARTIALLY
+|`SQLForeignKeys` is implemented but returns an empty result set.
+
+|Use ODBC functions instead of SQL statements to perform additional database operations, by calling `SQLBulkOperations` with `SQL_ADD`, or `SQLSetPos` with `SQL_DELETE` or `SQL_UPDATE`.
+|NO
+|
+
+|Enable asynchronous execution of ODBC functions for specified individual statements.
+|NO
+|
+
+|Obtain the `SQL_ROWVER` row-identifying column of tables, by calling `SQLSpecialColumns`.
+|PARTIALLY
+|Implemented by returning an empty row set.
+
+|Set the `SQL_ATTR_CONCURRENCY` statement attribute to at least one value other than `SQL_CONCUR_READ_ONLY`.
+|NO
+|
+
+|The ability to time out login request and SQL queries (`SQL_ATTR_LOGIN_TIMEOUT` and `SQL_ATTR_QUERY_TIMEOUT`).
+|PARTIALLY
+|`SQL_ATTR_QUERY_TIMEOUT` is supported.
+`SQL_ATTR_LOGIN_TIMEOUT` is not implemented yet.
+
+|The ability to change the default isolation level; the ability to execute transactions with the "serializable" level of isolation.
+|NO
+|Ignite does not support SQL transactions.
+|=======================================================================
+
+== Function Support
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Function|Supported|Conformance level
+
+|`SQLAllocHandle`
+|YES
+|Core
+
+|`SQLBindCol`
+|YES
+|Core
+
+|`SQLBindParameter`
+|YES
+|Core
+
+|`SQLBrowseConnect`
+|NO
+|Level 1
+
+|`SQLBulkOperations`
+|NO
+|Level 1
+
+|`SQLCancel`
+|NO
+|Core
+
+|`SQLCloseCursor`
+|YES
+|Core
+
+|`SQLColAttribute`
+|YES
+|Core
+
+|`SQLColumnPrivileges`
+|NO
+|Level 2
+
+|`SQLColumns`
+|YES
+|Core
+
+|`SQLConnect`
+|YES
+|Core
+
+|`SQLCopyDesc`
+|NO
+|Core
+
+|`SQLDataSources`
+|N/A
+|Core
+
+|`SQLDescribeCol`
+|YES
+|Core
+
+|`SQLDescribeParam`
+|YES
+|Level 2
+
+|`SQLDisconnect`
+|YES
+|Core
+
+|`SQLDriverConnect`
+|YES
+|Core
+
+|`SQLDrivers`
+|N/A
+|Core
+
+|`SQLEndTran`
+|PARTIALLY
+|Core
+
+|`SQLExecDirect`
+|YES
+|Core
+
+|`SQLExecute`
+|YES
+|Core
+
+|`SQLFetch`
+|YES
+|Core
+
+|`SQLFetchScroll`
+|YES
+|Core
+
+|`SQLForeignKeys`
+|PARTIALLY
+|Level 2
+
+|`SQLFreeHandle`
+|YES
+|Core
+
+|`SQLFreeStmt`
+|YES
+|Core
+
+|`SQLGetConnectAttr`
+|PARTIALLY
+|Core
+
+|`SQLGetCursorName`
+|NO
+|Core
+
+|`SQLGetData`
+|YES
+|Core
+
+|`SQLGetDescField`
+|NO
+|Core
+
+|`SQLGetDescRec`
+|NO
+|Core
+
+|`SQLGetDiagField`
+|YES
+|Core
+
+|`SQLGetDiagRec`
+|YES
+|Core
+
+|`SQLGetEnvAttr`
+|PARTIALLY
+|Core
+
+|`SQLGetFunctions`
+|NO
+|Core
+
+|`SQLGetInfo`
+|YES
+|Core
+
+|`SQLGetStmtAttr`
+|PARTIALLY
+|Core
+
+|`SQLGetTypeInfo`
+|YES
+|Core
+
+|`SQLMoreResults`
+|YES
+|Level 1
+
+|`SQLNativeSql`
+|YES
+|Core
+
+|`SQLNumParams`
+|YES
+|Core
+
+|`SQLNumResultCols`
+|YES
+|Core
+
+|`SQLParamData`
+|YES
+|Core
+
+|`SQLPrepare`
+|YES
+|Core
+
+|`SQLPrimaryKeys`
+|PARTIALLY
+|Level 1
+
+|`SQLProcedureColumns`
+|NO
+|Level 1
+
+|`SQLProcedures`
+|NO
+|Level 1
+
+|`SQLPutData`
+|YES
+|Core
+
+|`SQLRowCount`
+|YES
+|Core
+
+|`SQLSetConnectAttr`
+|PARTIALLY
+|Core
+
+|`SQLSetCursorName`
+|NO
+|Core
+
+|`SQLSetDescField`
+|NO
+|Core
+
+|`SQLSetDescRec`
+|NO
+|Core
+
+|`SQLSetEnvAttr`
+|PARTIALLY
+|Core
+
+|`SQLSetPos`
+|NO
+|Level 1
+
+|`SQLSetStmtAttr`
+|PARTIALLY
+|Core
+
+|`SQLSpecialColumns`
+|PARTIALLY
+|Core
+
+|`SQLStatistics`
+|NO
+|Core
+
+|`SQLTablePrivileges`
+|NO
+|Level 2
+
+|`SQLTables`
+|YES
+|Core
+|=======================================================================
+
+== Environment Attribute Conformance
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Feature|Supported|Conformance Level
+
+|`SQL_ATTR_CONNECTION_POOLING`
+|NO
+|Optional
+
+|`SQL_ATTR_CP_MATCH`
+|NO
+|Optional
+
+|`SQL_ATTR_ODBC_VER`
+|YES
+|Core
+
+|`SQL_ATTR_OUTPUT_NTS`
+|YES
+|Optional
+|=======================================================================
+
+== Connection Attribute Conformance
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Feature|Supported|Conformance Level
+
+|`SQL_ATTR_ACCESS_MODE`
+|NO
+|Core
+
+|`SQL_ATTR_ASYNC_ENABLE`
+|NO
+|Level 1 / Level 2
+
+|`SQL_ATTR_AUTO_IPD`
+|NO
+|Level 2
+
+|`SQL_ATTR_AUTOCOMMIT`
+|NO
+|Level 1
+
+|`SQL_ATTR_CONNECTION_DEAD`
+|YES
+|Level 1
+
+|`SQL_ATTR_CONNECTION_TIMEOUT`
+|YES
+|Level 2
+
+|`SQL_ATTR_CURRENT_CATALOG`
+|NO
+|Level 2
+
+|`SQL_ATTR_LOGIN_TIMEOUT`
+|NO
+|Level 2
+
+|`SQL_ATTR_ODBC_CURSORS`
+|NO
+|Core
+
+|`SQL_ATTR_PACKET_SIZE`
+|NO
+|Level 2
+
+|`SQL_ATTR_QUIET_MODE`
+|NO
+|Core
+
+|`SQL_ATTR_TRACE`
+|NO
+|Core
+
+|`SQL_ATTR_TRACEFILE`
+|NO
+|Core
+
+|`SQL_ATTR_TRANSLATE_LIB`
+|NO
+|Core
+
+|`SQL_ATTR_TRANSLATE_OPTION`
+|NO
+|Core
+
+|`SQL_ATTR_TXN_ISOLATION`
+|NO
+|Level 1 / Level 2
+|=======================================================================
+
+== Statement Attribute Conformance
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Feature|Supported|Conformance Level
+
+|`SQL_ATTR_APP_PARAM_DESC`
+|PARTIALLY
+|Core
+
+|`SQL_ATTR_APP_ROW_DESC`
+|PARTIALLY
+|Core
+
+|`SQL_ATTR_ASYNC_ENABLE`
+|NO
+|Level 1 / Level 2
+
+|`SQL_ATTR_CONCURRENCY`
+|NO
+|Level 1 / Level 2
+
+|`SQL_ATTR_CURSOR_SCROLLABLE`
+|NO
+|Level 1
+
+|`SQL_ATTR_CURSOR_SENSITIVITY`
+|NO
+|Level 2
+
+|`SQL_ATTR_CURSOR_TYPE`
+|NO
+|Level 1 / Level 2
+
+|`SQL_ATTR_ENABLE_AUTO_IPD`
+|NO
+|Level 2
+
+|`SQL_ATTR_FETCH_BOOKMARK_PTR`
+|NO
+|Level 2
+
+|`SQL_ATTR_IMP_PARAM_DESC`
+|PARTIALLY
+|Core
+
+|`SQL_ATTR_IMP_ROW_DESC`
+|PARTIALLY
+|Core
+
+|`SQL_ATTR_KEYSET_SIZE`
+|NO
+|Level 2
+
+|`SQL_ATTR_MAX_LENGTH`
+|NO
+|Level 1
+
+|`SQL_ATTR_MAX_ROWS`
+|NO
+|Level 1
+
+|`SQL_ATTR_METADATA_ID`
+|NO
+|Core
+
+|`SQL_ATTR_NOSCAN`
+|NO
+|Core
+
+|`SQL_ATTR_PARAM_BIND_OFFSET_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_PARAM_BIND_TYPE`
+|NO
+|Core
+
+|`SQL_ATTR_PARAM_OPERATION_PTR`
+|NO
+|Core
+
+|`SQL_ATTR_PARAM_STATUS_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_PARAMS_PROCESSED_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_PARAMSET_SIZE`
+|YES
+|Core
+
+|`SQL_ATTR_QUERY_TIMEOUT`
+|YES
+|Level 2
+
+|`SQL_ATTR_RETRIEVE_DATA`
+|NO
+|Level 1
+
+|`SQL_ATTR_ROW_ARRAY_SIZE`
+|YES
+|Core
+
+|`SQL_ATTR_ROW_BIND_OFFSET_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_ROW_BIND_TYPE`
+|YES
+|Core
+
+|`SQL_ATTR_ROW_NUMBER`
+|NO
+|Level 1
+
+|`SQL_ATTR_ROW_OPERATION_PTR`
+|NO
+|Level 1
+
+|`SQL_ATTR_ROW_STATUS_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_ROWS_FETCHED_PTR`
+|YES
+|Core
+
+|`SQL_ATTR_SIMULATE_CURSOR`
+|NO
+|Level 2
+
+|`SQL_ATTR_USE_BOOKMARKS`
+|NO
+|Level 2
+|=======================================================================
+
+== Descriptor Header Fields Conformance
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Feature|Supported|Conformance Level
+
+|`SQL_DESC_ALLOC_TYPE`
+|NO
+|Core
+
+|`SQL_DESC_ARRAY_SIZE`
+|NO
+|Core
+
+|`SQL_DESC_ARRAY_STATUS_PTR`
+|NO
+|Core / Level 1
+
+|`SQL_DESC_BIND_OFFSET_PTR`
+|NO
+|Core
+
+|`SQL_DESC_BIND_TYPE`
+|NO
+|Core
+
+|`SQL_DESC_COUNT`
+|NO
+|Core
+
+|`SQL_DESC_ROWS_PROCESSED_PTR`
+|NO
+|Core
+|=======================================================================
+
+== Descriptor Record Fields Conformance
+[width="100%",cols="70%,15%,15%"]
+|=======================================================================
+|Feature|Supported|Conformance Level
+
+|`SQL_DESC_AUTO_UNIQUE_VALUE`
+|NO
+|Level 2
+
+|`SQL_DESC_BASE_COLUMN_NAME`
+|NO
+|Core
+
+|`SQL_DESC_BASE_TABLE_NAME`
+|NO
+|Level 1
+
+|`SQL_DESC_CASE_SENSITIVE`
+|NO
+|Core
+
+|`SQL_DESC_CATALOG_NAME`
+|NO
+|Level 2
+
+|`SQL_DESC_CONCISE_TYPE`
+|NO
+|Core
+
+|`SQL_DESC_DATA_PTR`
+|NO
+|Core
+
+|`SQL_DESC_DATETIME_INTERVAL_CODE`
+|NO
+|Core
+
+|`SQL_DESC_DATETIME_INTERVAL_PRECISION`
+|NO
+|Core
+
+|`SQL_DESC_DISPLAY_SIZE`
+|NO
+|Core
+
+|`SQL_DESC_FIXED_PREC_SCALE`
+|NO
+|Core
+
+|`SQL_DESC_INDICATOR_PTR`
+|NO
+|Core
+
+|`SQL_DESC_LABEL`
+|NO
+|Level 2
+
+|`SQL_DESC_LENGTH`
+|NO
+|Core
+
+|`SQL_DESC_LITERAL_PREFIX`
+|NO
+|Core
+
+|`SQL_DESC_LITERAL_SUFFIX`
+|NO
+|Core
+
+|`SQL_DESC_LOCAL_TYPE_NAME`
+|NO
+|Core
+
+|`SQL_DESC_NAME`
+|NO
+|Core
+
+|`SQL_DESC_NULLABLE`
+|NO
+|Core
+
+|`SQL_DESC_OCTET_LENGTH`
+|NO
+|Core
+
+|`SQL_DESC_OCTET_LENGTH_PTR`
+|NO
+|Core
+
+|`SQL_DESC_PARAMETER_TYPE`
+|NO
+|Core / Level 2
+
+|`SQL_DESC_PRECISION`
+|NO
+|Core
+
+|`SQL_DESC_ROWVER`
+|NO
+|Level 1
+
+|`SQL_DESC_SCALE`
+|NO
+|Core
+
+|`SQL_DESC_SCHEMA_NAME`
+|NO
+|Level 1
+
+|`SQL_DESC_SEARCHABLE`
+|NO
+|Core
+
+|`SQL_DESC_TABLE_NAME`
+|NO
+|Level 1
+
+|`SQL_DESC_TYPE`
+|NO
+|Core
+
+|`SQL_DESC_TYPE_NAME`
+|NO
+|Core
+
+|`SQL_DESC_UNNAMED`
+|NO
+|Core
+
+|`SQL_DESC_UNSIGNED`
+|NO
+|Core
+
+|`SQL_DESC_UPDATABLE`
+|NO
+|Core
+
+|=======================================================================
+
+== SQL Data Types
+
+The following SQL data types listed in the link:https://docs.microsoft.com/en-us/sql/odbc/reference/appendixes/sql-data-types[specification] are supported:
+
+[width="100%",cols="80%,20%"]
+|=======================================================================
+|Data Type |Supported
+
+|`SQL_CHAR`
+|YES
+
+|`SQL_VARCHAR`
+|YES
+
+|`SQL_LONGVARCHAR`
+|YES
+
+|`SQL_WCHAR`
+|NO
+
+|`SQL_WVARCHAR`
+|NO
+
+|`SQL_WLONGVARCHAR`
+|NO
+
+|`SQL_DECIMAL`
+|YES
+
+|`SQL_NUMERIC`
+|NO
+
+|`SQL_SMALLINT`
+|YES
+
+|`SQL_INTEGER`
+|YES
+
+|`SQL_REAL`
+|NO
+
+|`SQL_FLOAT`
+|YES
+
+|`SQL_DOUBLE`
+|YES
+
+|`SQL_BIT`
+|YES
+
+|`SQL_TINYINT`
+|YES
+
+|`SQL_BIGINT`
+|YES
+
+|`SQL_BINARY`
+|YES
+
+|`SQL_VARBINARY`
+|YES
+
+|`SQL_LONGVARBINARY`
+|YES
+
+|`SQL_TYPE_DATE`
+|YES
+
+|`SQL_TYPE_TIME`
+|YES
+
+|`SQL_TYPE_TIMESTAMP`
+|YES
+
+|`SQL_TYPE_UTCDATETIME`
+|NO
+
+|`SQL_TYPE_UTCTIME`
+|NO
+
+|`SQL_INTERVAL_MONTH`
+|NO
+
+|`SQL_INTERVAL_YEAR`
+|NO
+
+|`SQL_INTERVAL_YEAR_TO_MONTH`
+|NO
+
+|`SQL_INTERVAL_DAY`
+|NO
+
+|`SQL_INTERVAL_HOUR`
+|NO
+
+|`SQL_INTERVAL_MINUTE`
+|NO
+
+|`SQL_INTERVAL_SECOND`
+|NO
+
+|`SQL_INTERVAL_DAY_TO_HOUR`
+|NO
+
+|`SQL_INTERVAL_DAY_TO_MINUTE`
+|NO
+
+|`SQL_INTERVAL_DAY_TO_SECOND`
+|NO
+
+|`SQL_INTERVAL_HOUR_TO_MINUTE`
+|NO
+
+|`SQL_INTERVAL_HOUR_TO_SECOND`
+|NO
+
+|`SQL_INTERVAL_MINUTE_TO_SECOND`
+|NO
+
+|`SQL_GUID`
+|YES
+|=======================================================================
+
+
+== C Data Types
+
+The following C data types listed in the link:https://docs.microsoft.com/en-us/sql/odbc/reference/appendixes/c-data-types[specification] are supported:
+
+[width="100%",cols="80%,20%"]
+|=======================================================================
+|Data Type |Supported
+
+|`SQL_C_CHAR`
+|YES
+
+|`SQL_C_WCHAR`
+|YES
+
+|`SQL_C_SHORT`
+|YES
+
+|`SQL_C_SSHORT`
+|YES
+
+|`SQL_C_USHORT`
+|YES
+
+|`SQL_C_LONG`
+|YES
+
+|`SQL_C_SLONG`
+|YES
+
+|`SQL_C_ULONG`
+|YES
+
+|`SQL_C_FLOAT`
+|YES
+
+|`SQL_C_DOUBLE`
+|YES
+
+|`SQL_C_BIT`
+|YES
+
+|`SQL_C_TINYINT`
+|YES
+
+|`SQL_C_STINYINT`
+|YES
+
+|`SQL_C_UTINYINT`
+|YES
+
+|`SQL_C_BIGINT`
+|YES
+
+|`SQL_C_SBIGINT`
+|YES
+
+|`SQL_C_UBIGINT`
+|YES
+
+|`SQL_C_BINARY`
+|YES
+
+|`SQL_C_BOOKMARK`
+|NO
+
+|`SQL_C_VARBOOKMARK`
+|NO
+
+|`SQL_C_INTERVAL`* (all interval types)
+|NO
+
+|`SQL_C_TYPE_DATE`
+|YES
+
+|`SQL_C_TYPE_TIME`
+|YES
+
+|`SQL_C_TYPE_TIMESTAMP`
+|YES
+
+|`SQL_C_NUMERIC`
+|YES
+
+|`SQL_C_GUID`
+|YES
+|=======================================================================
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/docs/_docs/SQL/custom-sql-func.adoc b/docs/_docs/SQL/custom-sql-func.adoc
new file mode 100644
index 0000000..c531fc6
--- /dev/null
+++ b/docs/_docs/SQL/custom-sql-func.adoc
@@ -0,0 +1,49 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Custom SQL Functions
+
+:javaFile: {javaCodeDir}/SqlAPI.java
+
+The SQL engine can extend the set of SQL functions defined by the ANSI-99 specification with custom SQL functions written in Java.
+
+A custom SQL function is just a public static method marked by the `@QuerySqlFunction` annotation.
+
+////
+TODO looks like it's unsupported in C#
+////
+
+
+[source,java]
+----
+include::{javaFile}[tags=sql-function-example, indent=0]
+----
+
+
+The class that owns the custom SQL function has to be registered in the `CacheConfiguration`.
+To do that, use the `setSqlFunctionClasses(...)` method.
+
+[source,java]
+----
+include::{javaFile}[tags=sql-function-config, indent=0]
+----
+
+Once you have deployed a cache with the above configuration, you can call the custom function from within SQL queries:
+
+[source,java]
+----
+include::{javaFile}[tags=sql-function-query, indent=0]
+----
+
+NOTE: Classes registered with `CacheConfiguration.setSqlFunctionClasses(...)` must be added to the classpath of all the nodes where the defined custom functions might be executed. Otherwise, you will get a `ClassNotFoundException` error when trying to execute the custom function.
diff --git a/docs/_docs/SQL/distributed-joins.adoc b/docs/_docs/SQL/distributed-joins.adoc
new file mode 100644
index 0000000..5394c3a
--- /dev/null
+++ b/docs/_docs/SQL/distributed-joins.adoc
@@ -0,0 +1,110 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Distributed Joins
+
+A distributed join is a SQL statement with a join clause that combines two or more partitioned tables.
+If the tables are joined on the partitioning column (affinity key), the join is called a _colocated join_. Otherwise, it is called a _non-colocated join_.
+
+Colocated joins are more efficient because they can be effectively distributed between the cluster nodes.
+
+By default, Ignite treats each join query as if it is a colocated join and executes it accordingly (see the corresponding section below).
+
+WARNING: If your query is non-colocated, you have to enable the non-colocated mode of query execution by setting `SqlFieldsQuery.setDistributedJoins(true)`; otherwise, the results of the query execution may be incorrect.
+
+[CAUTION]
+====
+If you often join tables, we recommend that you partition your tables on the same column (on which you join the tables).
+
+Non-colocated joins should be reserved for cases when it's impossible to use colocated joins.
+====
+
+== Colocated Joins
+
+The following image illustrates the procedure of executing a colocated join. A colocated join (`Q`) is sent to all the nodes that store the data matching the query condition. Then the query is executed over the local data set on each node (`E(Q)`). The results (`R`) are aggregated on the node that initiated the query (the client node).
+
+image::images/collocated_joins.png[]
+
+
+== Non-colocated Joins
+
+If you execute a query in the non-colocated mode, the SQL engine executes the query locally on all the nodes that store the data matching the query condition. But because the data is not colocated, each node requests the missing data (that is not present locally) from other nodes by sending either broadcast or unicast requests. This process is depicted in the image below.
+
+image::images/non_collocated_joins.png[]
+
+If the join is done on the primary or affinity key, the nodes send unicast requests because in this case the nodes know the location of the missing data. Otherwise, nodes send broadcast requests. For performance reasons, both broadcast and unicast requests are aggregated into batches.
+
+Enable the non-colocated mode of query execution by setting a JDBC/ODBC parameter or, if you use SQL API, by calling `SqlFieldsQuery.setDistributedJoins(true)`.
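+
+For example, when connecting through the ODBC driver, a minimal sketch of enabling the non-colocated mode through the connection string could look as follows (assuming the `DISTRIBUTED_JOINS` connection string attribute described on the ODBC connection string page and an allocated connection handle `dbc`):
+
+[source,c++]
+----
+// Connection string with non-colocated (distributed) joins enabled.
+SQLCHAR connectStr[] = "DRIVER={Apache Ignite};ADDRESS=127.0.0.1:10800;SCHEMA=PUBLIC;DISTRIBUTED_JOINS=true";
+
+SQLCHAR outStr[1024];
+SQLSMALLINT outStrLen;
+
+SQLDriverConnect(dbc, NULL, connectStr, SQL_NTS, outStr,
+    static_cast<SQLSMALLINT>(sizeof(outStr)), &outStrLen, SQL_DRIVER_COMPLETE);
+----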
+
+WARNING: If you use a non-colocated join on a column from a link:data-modeling/data-partitioning#replicated[replicated table], the column must have an index.
+Otherwise, you will get an exception.
+
+
+
+== Hash Joins
+
+//tag::hash-join[]
+To boost performance of join queries, Ignite supports the https://en.wikipedia.org/wiki/Hash_join[hash join
+algorithm].
+Hash joins can be more efficient than nested loop joins for many scenarios, except when the probe side of the join is very small.
+However, hash joins can only be used for equi-joins, i.e., joins with an equality comparison in the join predicate.
+
+//end::hash-join[]
+
+To enforce the use of hash joins:
+
+. Use the `enforceJoinOrder` option:
++
+[tabs]
+--
+tab:Java API[]
+[source,java]
+----
+include::{javaCodeDir}/SqlAPI.java[tags=enforceJoinOrder,indent=0]
+----
+
+tab:JDBC[]
+[source,java]
+----
+Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
+
+// Open the JDBC connection.
+Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1?enforceJoinOrder=true");
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/SqlJoinOrder.cs[tag=sqlJoinOrder,indent=0]
+----
+
+tab:C++[]
+[source,c++]
+----
+include::code-snippets/cpp/src/sql_join_order.cpp[tag=sql-join-order,indent=0]
+----
+--
+
+. Specify `USE INDEX(HASH_JOIN_IDX)` on the table for which you want to create the hash-join index:
++
+--
+
+[source, sql]
+----
+SELECT * FROM TABLE_A, TABLE_B USE INDEX(HASH_JOIN_IDX) WHERE TABLE_A.column1 = TABLE_B.column2
+----
+--
+
+
+
+
diff --git a/docs/_docs/SQL/indexes.adoc b/docs/_docs/SQL/indexes.adoc
new file mode 100644
index 0000000..4f6a36f
--- /dev/null
+++ b/docs/_docs/SQL/indexes.adoc
@@ -0,0 +1,357 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Defining Indexes
+
+:javaFile: {javaCodeDir}/Indexes.java
+:csharpFile: {csharpCodeDir}/DefiningIndexes.cs
+
+In addition to common DDL commands, such as CREATE/DROP INDEX, developers can use Ignite's link:SQL/sql-api[SQL APIs] to define indexes.
+
+[NOTE]
+====
+Indexing capabilities are provided by the 'ignite-indexing' module. If you start Ignite from Java code, link:setup#enabling-modules[add this module to your classpath].
+====
+
+Ignite automatically creates indexes for each primary key and affinity key field.
+When you define an index on a field in the value object, Ignite creates a composite index consisting of the indexed field and the cache's primary key.
+In SQL terms, it means that the index will be composed of two columns: the column you want to index and the primary key column.
+
+== Creating Indexes With SQL
+
+Refer to the link:sql-reference/ddl#create-index[CREATE INDEX] section.
+
+== Configuring Indexes Using Annotations
+
+Indexes, as well as queryable fields, can be configured from code via the `@QuerySqlField` annotation. In the example below, the Ignite SQL engine will create indexes for the `id` and `salary` fields.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=configuring-with-annotation,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{csharpFile}[tag=idxAnnotationCfg,indent=0]
+----
+tab:C++[unsupported]
+--
+
+The type name is used as the table name in SQL queries. In this case, our table name will be `Person` (schema name usage and definition are explained in the link:SQL/schemas[Schemas] section).
+
+Both `id` and `salary` are indexed fields. `id` will be sorted in ascending order (default) and `salary` in descending order.
+
+If you do not want to index a field but still need to use it in SQL queries, annotate the field with `@QuerySqlField` without the `index = true` parameter.
+Such a field is called a _queryable field_.
+In the example above, `name` is defined as a link:SQL/sql-api#configuring-queryable-fields[queryable field].
+
+The `age` field is neither queryable nor indexed, and thus it will not be accessible from SQL queries.
+
+When you define the indexed fields, you need to <<Registering Indexed Types,register indexed types>>.
+
+////
+Now you can execute the SQL query as follows:
+
+[source,java]
+----
+SqlFieldsQuery qry = new SqlFieldsQuery("SELECT id, name FROM Person" +
+		"WHERE id > 1500 LIMIT 10");
+----
+////
+
+
+[NOTE]
+====
+[discrete]
+=== Updating Indexes and Queryable Fields at Runtime
+
+Use the link:sql-reference/ddl#create-index[CREATE/DROP INDEX] commands if you need to manage indexes or make an object's new fields visible to the SQL engine at runtime.
+====
+
+=== Indexing Nested Objects
+Fields of nested objects can also be indexed and queried using annotations. For example, consider a `Person` object that has an `Address` object as a field:
+
+[source,java]
+----
+public class Person {
+    /** Indexed field. Will be visible for SQL engine. */
+    @QuerySqlField(index = true)
+    private long id;
+
+    /** Queryable field. Will be visible for SQL engine. */
+    @QuerySqlField
+    private String name;
+
+    /** Will NOT be visible for SQL engine. */
+    private int age;
+
+    /** Indexed field. Will be visible for SQL engine. */
+    @QuerySqlField(index = true)
+    private Address address;
+}
+----
+
+Where the structure of the `Address` class might look like:
+
+[source,java]
+----
+public class Address {
+    /** Indexed field. Will be visible for SQL engine. */
+    @QuerySqlField (index = true)
+    private String street;
+
+    /** Indexed field. Will be visible for SQL engine. */
+    @QuerySqlField(index = true)
+    private int zip;
+}
+----
+
+In the above example, the `@QuerySqlField(index = true)` annotation is specified on all the fields of the `Address` class, as well as the `Address` object in the `Person` class.
+
+This makes it possible to execute SQL queries like the following:
+
+[source,java]
+----
+QueryCursor<List<?>> cursor = personCache.query(new SqlFieldsQuery( "select * from Person where street = 'street1'"));
+----
+
+Note that you do not need to specify `address.street` in the WHERE clause of the SQL query. This is because the fields of the `Address` class are flattened within the `Person` table, which allows us to access the `Address` fields in queries directly.
+
+WARNING: If you create indexes for nested objects, you won't be able to run UPDATE or INSERT statements on the table.
+
+=== Registering Indexed Types
+After indexed and queryable fields are defined, they have to be registered in the SQL engine along with the object types they belong to.
+
+To specify which types should be indexed, pass the corresponding key-value pairs in the `CacheConfiguration.setIndexedTypes()` method as shown in the example below.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=register-indexed-types,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{csharpFile}[tag=register-indexed-types,indent=0]
+----
+tab:C++[unsupported]
+--
+
+This method accepts only pairs of types: one for the key class and another for the value class. Primitives are passed as boxed types.
+
+[NOTE]
+====
+[discrete]
+=== Predefined Fields
+In addition to all the fields marked with a `@QuerySqlField` annotation, each table will have two special predefined fields: `pass:[_]key` and `pass:[_]val`, which represent links to whole key and value objects. This is useful, for instance, when one of them is of a primitive type and you want to filter by its value. To do this, run a query like: `SELECT * FROM Person WHERE pass:[_]key = 100`.
+====
+
+NOTE: Since Ignite supports link:key-value-api/binary-objects[Binary Objects], there is no need to add classes of indexed types to the classpath of cluster nodes. The SQL query engine can detect values of indexed and queryable fields, avoiding object deserialization.
+
+=== Group Indexes
+
+To set up a multi-field index that can accelerate queries with complex conditions, you can use a `@QuerySqlField.Group` annotation. You can add multiple `@QuerySqlField.Group` annotations in `orderedGroups` if you want a field to be a part of more than one group.
+
+For instance, in the `Person` class below, the field `age` belongs to an indexed group named `age_salary_idx` with a group order of "0" and descending sort order. In the same group, the field `salary` has a group order of "3" and ascending sort order. Furthermore, the field `salary` itself is a single-column index (the `index = true` parameter is specified in addition to the `orderedGroups` declaration). The group `order` does not have to be a particular number; it is used only to order the fields within a particular group.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/Indexes_groups.java[tag=group-indexes,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DefiningIndexes.cs[tag=groupIdx,indent=0]
+----
+tab:C++[unsupported]
+--
+
+NOTE: Annotating a field with `@QuerySqlField.Group` outside of `@QuerySqlField(orderedGroups={...})` will have no effect.
+
+== Configuring Indexes Using Query Entities
+
+Indexes and queryable fields can also be configured via the `org.apache.ignite.cache.QueryEntity` class which is convenient for Spring XML based configuration.
+
+All concepts that are discussed as part of the annotation based configuration above are also valid for the `QueryEntity` based approach. Furthermore, the types whose fields are configured with the `@QuerySqlField` annotation and are registered with the `CacheConfiguration.setIndexedTypes()` method are internally converted into query entities.
+
+The example below shows how to define a single field index, group indexes, and queryable fields.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/query-entities.xml[tags=ignite-config,indent=0]
+----
+
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tag=index-using-queryentity,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/DefiningIndexes.cs[tag=queryEntity,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+The short name of the `valueType` is used as the table name in SQL queries. In this case, our table name will be `Person` (schema name usage and definition are explained on the link:SQL/schemas[Schemas] page).
+
+Once the `QueryEntity` is defined, you can execute the SQL query as follows:
+
+[source,java]
+----
+include::{javaFile}[tag=query,indent=0]
+----
+
+[NOTE]
+====
+[discrete]
+=== Updating Indexes and Queryable Fields at Runtime
+
+Use the link:sql-reference/ddl#create-index[CREATE/DROP INDEX] command if you need to manage indexes or make new fields of the object visible to the SQL engine at runtime.
+====
+
+== Configuring Index Inline Size
+
+Proper index inline size can help speed up queries on indexed fields.
+//For primitive types and BinaryObjects, Ignite uses a predefined inline index size
+Refer to the dedicated section in the link:SQL/sql-tuning#increasing-index-inline-size[SQL Tuning guide] for the information on how to choose a proper inline size.
+
+In most cases, you will only need to set the inline size for indexes on variable-length fields, such as strings or arrays.
+The default value is 10.
+
+You can change the default value by setting either
+
+* inline size for each index individually, or
+* `CacheConfiguration.sqlIndexMaxInlineSize` property for all indexes within a given cache, or
+* `IGNITE_MAX_INDEX_PAYLOAD_SIZE` system property for all indexes in the cluster
+
+The settings are applied in the order listed above.
+
+//Ignite automatically creates indexes on the primary key and on the affinity key.
+//The inline size for these indexes can be configured via the `CacheConfiguration.sqlIndexMaxInlineSize` property.
+
+You can also configure the inline size for each index individually, which overrides the default value.
+To set the index inline size for a user-defined index, use one of the following methods. In all cases, the value is set in bytes.
+
+* When using annotations:
++
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=annotation-with-inline-size,indent=0]
+----
+tab:C#/.NET[]
+[source,java]
+----
+include::{csharpFile}[tag=annotation-with-inline-size,indent=0]
+----
+tab:C++[unsupported]
+--
+
+* When using `QueryEntity`:
++
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=query-entity-with-inline-size,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{csharpFile}[tag=query-entity-with-inline-size,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+* If you create indexes using the `CREATE INDEX` command, you can use the `INLINE_SIZE` option to set the inline size. See examples in the link:sql-reference/ddl[corresponding section].
++
+[source, sql]
+----
+create index country_idx on Person (country) INLINE_SIZE 13;
+----
+
+
+== Custom Keys
+If you use only predefined SQL data types for primary keys, then you do not need to perform additional manipulation with the SQL schema configuration. Those data types are defined by the `GridQueryProcessor.SQL_TYPES` constant, as listed below.
+
+Predefined SQL data types include:
+
+- all the primitives and their wrappers except `char` and `Character`
+- `String`
+- `BigDecimal`
+- `byte[]`
+- `java.util.Date`, `java.sql.Date`, `java.sql.Timestamp`
+- `java.util.UUID`
+
+However, once you decide to introduce a custom complex key and refer to its fields from DML statements, you need to:
+
+- Define those fields in the `QueryEntity` the same way as you set fields for the value object.
+- Use the new configuration parameter `QueryEntity.setKeyFields(..)` to distinguish key fields from value fields.
+
+The example below shows how to do this.
+
+[tabs]
+--
+
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/custom-keys.xml[tags=ignite-config;!discovery, indent=0]
+
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=custom-key,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::{csharpFile}[tag=custom-key,indent=0]
+----
+tab:C++[unsupported]
+--
+
+
+[NOTE]
+====
+[discrete]
+=== Automatic Hash Code Calculation and Equals Implementation
+
+If a custom key can be serialized into a binary form, Ignite calculates its hash code and implements the `equals()` method automatically.
+
+However, if the key's type is `Externalizable`, and if it cannot be serialized into the binary form, then you are required to implement the `hashCode` and `equals` methods manually. See the link:key-value-api/binary-objects[Binary Objects] page for more details.
+====
+
+
diff --git a/docs/_docs/SQL/schemas.adoc b/docs/_docs/SQL/schemas.adoc
new file mode 100644
index 0000000..613fc46
--- /dev/null
+++ b/docs/_docs/SQL/schemas.adoc
@@ -0,0 +1,94 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Understanding Schemas
+
+== Overview
+
+Ignite has a number of default schemas and supports creating custom schemas.
+
+There are two schemas that are available by default:
+
+- The SYS schema, which contains a number of system views with information about cluster nodes. You can't create tables in this schema. Refer to the link:monitoring-metrics/system-views[System Views] page for further information.
+- The <<PUBLIC Schema,PUBLIC schema>>, which is used by default whenever a schema is not specified.
+
+Custom schemas are created in the following cases:
+
+- You can specify custom schemas in the cluster configuration. See <<Custom Schemas>>.
+- Ignite creates a schema for each cache created via one of the programming interfaces or XML configuration. See <<Cache and Schema Names>>.
+
+
+== PUBLIC Schema
+
+The PUBLIC schema is used by default whenever a schema is required and is not specified. For example, when you connect to the cluster via JDBC without setting the schema explicitly, you will connect to the PUBLIC schema.
+
+
+== Custom Schemas
+Custom schemas can be set via the `sqlSchemas` property of `IgniteConfiguration`. You can specify a list of schemas in the configuration before starting your cluster and then create objects in these schemas at runtime.
+
+Below is a configuration example with two custom schemas.
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/schemas.xml[tags=ignite-config;!discovery, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/Schemas.java[tags=custom-schemas, indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UnderstandingSchemas.cs[tag=schemas,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+To connect to a specific schema via, for example, a JDBC driver, provide the schema name in the connection string:
+
+[source,text]
+----
+jdbc:ignite:thin://127.0.0.1/MY_SCHEMA
+----
+
+== Cache and Schema Names
+When you create a cache with link:SQL/sql-api#configuring-queryable-fields[queryable fields], you can manipulate the cached data using the link:SQL/sql-api[SQL API]. In SQL terms, each such cache corresponds to a separate schema whose name equals the name of the cache.
+
+Similarly, when you create a table via a DDL statement, you can access it as a key-value cache via Ignite's supported programming interfaces. The name of the corresponding cache can be specified by providing the `CACHE_NAME` parameter in the `WITH` part of the `CREATE TABLE` statement.
+
+[source,sql]
+----
+CREATE TABLE City (
+  ID INT(11),
+  Name CHAR(35),
+  CountryCode CHAR(3),
+  District CHAR(20),
+  Population INT(11),
+  PRIMARY KEY (ID, CountryCode)
+) WITH "backups=1, CACHE_NAME=City";
+----
+
+See the link:sql-reference/ddl#create-table[CREATE TABLE] page for more details.
+
+If you do not use this parameter, the cache name is defined in the following format (in capital letters):
+
+....
+SQL_<SCHEMA_NAME>_<TABLE_NAME>
+....
+
+For example, if the `City` table above were created without the `CACHE_NAME` parameter, the corresponding cache would be named `SQL_PUBLIC_CITY`.
diff --git a/docs/_docs/SQL/sql-api.adoc b/docs/_docs/SQL/sql-api.adoc
new file mode 100644
index 0000000..c372c5a
--- /dev/null
+++ b/docs/_docs/SQL/sql-api.adoc
@@ -0,0 +1,352 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= SQL API
+:javaSourceFile: {javaCodeDir}/SqlAPI.java
+
+In addition to using the JDBC driver, Java developers can use Ignite's SQL APIs to query and modify data stored in Ignite.
+
+The `SqlFieldsQuery` class provides the interface for executing SQL statements and navigating through the results. `SqlFieldsQuery` is executed through the `IgniteCache.query(SqlFieldsQuery)` method, which returns a query cursor.
+
+== Configuring Queryable Fields
+
+If you want to query a cache using SQL statements, you need to define which fields of the value objects are queryable. Queryable fields are the fields of your data model that the SQL engine can "see" and query.
+
+NOTE: If you create tables using JDBC or SQL tools, you do not need to define queryable fields.
+
+[NOTE]
+====
+Indexing capabilities are provided by the 'ignite-indexing' module. If you start Ignite from Java code, link:setup#enabling-modules[add this module to the classpath of your application].
+====
+
+In Java, queryable fields can be configured in two ways:
+
+* using annotations
+* by defining query entities
+
+
+=== @QuerySqlField Annotation
+
+To make specific fields queryable, annotate the fields in the value class definition with the `@QuerySqlField` annotation and call `CacheConfiguration.setIndexedTypes(...)`.
+////
+TODO : CacheConfiguration.setIndexedTypes is presented only in java, C# got different API, rewrite sentence above
+////
+
+
+[tabs]
+--
+tab:Java[]
+
+[source,java]
+----
+include::{javaCodeDir}/QueryEntitiesExampleWithAnnotation.java[tags=query-entity-annotation, indent=0]
+----
+
+Make sure to call `CacheConfiguration.setIndexedTypes(...)` to let the SQL engine know about the annotated fields.
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=sqlQueryFields,indent=0]
+----
+tab:C++[unsupported]
+--
+
+=== Query Entities
+
+You can define queryable fields using the `QueryEntity` class. Query entities can be configured via XML configuration.
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/query-entities.xml[tags=ignite-config,indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/QueryEntityExample.java[tags=query-entity,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=queryEntities,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Querying
+
+To execute a select query on a cache, simply create an object of `SqlFieldsQuery` providing the query string to the constructor and run `cache.query(...)`.
+Note that in the following example, the Person cache must be configured to be <<Configuring Queryable Fields,visible to the SQL engine>>.
+
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=simple-query,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=querying,indent=0]
+----
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/sql.cpp[tag=sql-fields-query,indent=0]
+----
+--
+
+`SqlFieldsQuery` returns a cursor that iterates through the results that match the SQL query.
+
+=== Local Execution
+
+To force local execution of a query, use `SqlFieldsQuery.setLocal(true)`. In this case, the query is executed against the data stored on the node where the query is run. It means that the results of the query are almost always incomplete. Use the local mode only if you are confident you understand this limitation.
+
+=== Subqueries in WHERE Clause
+
+`SELECT` queries used in `INSERT` and `MERGE` statements as well as `SELECT` queries generated by `UPDATE` and `DELETE` operations are distributed and executed in either link:SQL/distributed-joins[colocated or non-colocated distributed modes].
+
+However, if there is a subquery that is executed as part of a `WHERE` clause, then it can be executed in the colocated mode only.
+
+For instance, let's consider the following query:
+
+[source,sql]
+----
+DELETE FROM Person WHERE id IN
+    (SELECT personId FROM Salary s WHERE s.amount > 2000);
+----
+The SQL engine generates the `SELECT` query in order to get a list of entries to be deleted. The query is distributed and executed across the cluster and looks like the one below:
+[source,sql]
+----
+SELECT _key, _val FROM Person WHERE id IN
+    (SELECT personId FROM Salary s WHERE s.amount > 2000);
+----
+However, the subquery from the `IN` clause (`SELECT personId FROM Salary ...`) is not distributed further and is executed over the local data set available on the node.
+
+== Inserting, Updating, Deleting, and Merging
+
+With `SqlFieldsQuery` you can execute the other DML commands in order to modify the data:
+
+
+[tabs]
+--
+tab:INSERT[]
+[source,java]
+----
+include::{javaSourceFile}[tag=insert,indent=0]
+----
+
+tab:UPDATE[]
+[source,java]
+----
+include::{javaSourceFile}[tag=update,indent=0]
+----
+
+tab:DELETE[]
+[source,java]
+----
+include::{javaSourceFile}[tag=delete,indent=0]
+----
+
+tab:MERGE[]
+[source,java]
+----
+include::{javaSourceFile}[tag=merge,indent=0]
+----
+--
+
+When using `SqlFieldsQuery` to execute DDL statements, you must call `getAll()` on the cursor returned from the `query(...)` method.
+
+== Specifying the Schema
+
+By default, any SELECT statement executed via `SqlFieldsQuery` is resolved against the PUBLIC schema. However, if the table you want to query is in a different schema, you can specify the schema by calling `SqlFieldsQuery.setSchema(...)`. In this case, the statement is executed in the given schema.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=set-schema,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=schema,indent=0]
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/sql.cpp[tag=sql-fields-query-scheme,indent=0]
+----
+--
+
+Alternatively, you can define the schema in the statement:
+
+[source,java]
+----
+SqlFieldsQuery sql = new SqlFieldsQuery("select name from Person.City");
+----
+
+== Creating Tables
+
+You can pass any supported DDL statement to `SqlFieldsQuery` and execute it on a cache as shown below.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=create-table,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=creatingTables,indent=0]
+----
+
+tab:C++[]
+[source,cpp]
+----
+include::code-snippets/cpp/src/sql_create.cpp[tag=sql-create,indent=0]
+----
+--
+
+
+In terms of SQL schema, the following tables are created as a result of executing the code:
+
+* Table "Person" in the "Person" schema (if it hasn't been created before).
+* Table "City" in the "Person" schema.
+
+To query the "City" table, use statements like `select * from Person.City` or `new SqlFieldsQuery("select * from City").setSchema("PERSON")` (note the uppercase).
+
+
+////////////////////////////////////////////////////////////////////////////////
+== Joining Tables
+
+
+== Cross-Table Queries
+
+
+`SqlQuery.setSchema("PUBLIC")`
+
+++++
+<code-tabs>
+<code-tab data-tab="Java">
+++++
+[source,java]
+----
+IgniteCache cache = ignite.getOrCreateCache(
+    new CacheConfiguration<>()
+        .setName("Person")
+        .setIndexedTypes(Long.class, Person.class));
+
+// Creating City table.
+cache.query(new SqlFieldsQuery("CREATE TABLE City " +
+    "(id int primary key, name varchar, region varchar)").setSchema("PUBLIC")).getAll();
+
+// Creating Organization table.
+cache.query(new SqlFieldsQuery("CREATE TABLE Organization " +
+    "(id int primary key, name varchar, cityName varchar)").setSchema("PUBLIC")).getAll();
+
+// Joining data between City, Organizaion and Person tables. The latter
+// was created with either annotations or QueryEntity approach.
+SqlFieldsQuery qry = new SqlFieldsQuery("SELECT o.name from Organization o " +
+    "inner join \"Person\".Person p on o.id = p.orgId " +
+    "inner join City c on c.name = o.cityName " +
+    "where p.age > 25 and c.region <> 'Texas'");
+
+// Set the query's default schema to PUBLIC.
+// Table names from the query without the schema set will be
+// resolved against PUBLIC schema.
+// Person table belongs to "Person" schema (person cache) and this is why
+// that schema name is set explicitly.
+qry.setSchema("PUBLIC");
+
+// Executing the query.
+cache.query(qry).getAll();
+----
+++++
+</code-tab>
+<code-tab data-tab="C#/.NET">
+++++
+[source,csharp]
+----
+
+----
+++++
+</code-tab>
+<code-tab data-tab="C++">
+++++
+[source,cpp]
+----
+TODO
+----
+++++
+</code-tab>
+</code-tabs>
+++++
+
+
+////////////////////////////////////////////////////////////////////////////////
+
+== Cancelling Queries
+There are two ways to cancel long-running queries.
+
+The first approach is to prevent runaway queries by setting a query execution timeout.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=set-timeout,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=qryTimeout,indent=0]
+----
+tab:C++[unsupported]
+--
+
+The second approach is to halt the query by using `QueryCursor.close()`.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=cancel-by-closing,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/UsingSqlApi.cs[tag=cursorDispose,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Example
+
+The Apache Ignite distribution package includes a ready-to-run `SqlDmlExample` as a part of its link:{githubUrl}/examples/src/main/java/org/apache/ignite/examples/sql/SqlDmlExample.java[source code]. This example demonstrates the usage of all the above-mentioned DML operations.
diff --git a/docs/_docs/SQL/sql-introduction.adoc b/docs/_docs/SQL/sql-introduction.adoc
new file mode 100644
index 0000000..bfe6d11
--- /dev/null
+++ b/docs/_docs/SQL/sql-introduction.adoc
@@ -0,0 +1,53 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Working with SQL
+
+Ignite comes with an ANSI-99 compliant, horizontally scalable, and fault-tolerant distributed SQL database. The distribution is provided either by partitioning the data across cluster nodes or by full replication, depending on the use case.
+
+As a SQL database, Ignite supports all DML commands, including SELECT, UPDATE, INSERT, and DELETE queries, and also implements a subset of DDL commands relevant for distributed systems.
+
+You can interact with Ignite as you would with any other SQL-enabled storage by connecting with link:SQL/JDBC/jdbc-driver/[JDBC] or link:SQL/ODBC/odbc-driver[ODBC] drivers from both external tools and applications. Java, .NET and C++ developers can leverage native link:SQL/sql-api[SQL APIs].
+
+Internally, SQL tables have the same data structure as link:data-modeling/data-modeling#key-value-cache-vs-sql-table[key-value caches]. It means that you can change partition distribution of your data and leverage link:data-modeling/affinity-collocation[affinity colocation techniques] for better performance.
+
+Ignite's SQL engine uses H2 Database to parse and optimize queries and generate execution plans.
+
+== Distributed Queries
+
+Queries against link:data-modeling/data-partitioning#partitioned[partitioned] tables are executed in a distributed manner:
+
+- The query is parsed and split into multiple “map” queries and a single “reduce” query.
+- All the map queries are executed on all the nodes where the required data resides.
+- Each node provides its local result set to the query initiator (the reducer), which, in turn, merges the provided result sets into the final result.
+
+You can force a query to be processed locally, i.e. on the subset of data that is stored on the node where the query is executed.
+
+== Local Queries
+
+If a query is executed over a link:data-modeling/data-partitioning#replicated[replicated] table, it will be run against the local data.
+
+Queries over partitioned tables are executed in a distributed manner.
+However, you can force local execution of a query over a partitioned table.
+See link:SQL/sql-api#local-execution[Local Execution] for details.
+
+
+////
+== Known Limitations
+TODO
+
+https://apacheignite-sql.readme.io/docs/how-ignite-sql-works#section-known-limitations
+
+https://issues.apache.org/jira/browse/IGNITE-7822 - describe this if not fixed
+////
diff --git a/docs/_docs/SQL/sql-transactions.adoc b/docs/_docs/SQL/sql-transactions.adoc
new file mode 100644
index 0000000..6824746
--- /dev/null
+++ b/docs/_docs/SQL/sql-transactions.adoc
@@ -0,0 +1,87 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= SQL Transactions
+:javaSourceFile: {javaCodeDir}/SqlTransactions.java
+
+IMPORTANT: Support for SQL transactions is currently in the beta stage. For production use, consider key-value transactions.
+
+== Overview
+SQL Transactions are supported for caches that use the `TRANSACTIONAL_SNAPSHOT` atomicity mode. The `TRANSACTIONAL_SNAPSHOT` mode is the implementation of multiversion concurrency control (MVCC) for Ignite caches. For more information about MVCC and current limitations, visit the link:transactions/mvcc[Multiversion Concurrency Control] page.
+
+See the link:sql-reference/transactions[Transactions] page for the transaction syntax supported by Ignite.
+
+== Enabling MVCC
+To enable MVCC for a cache, use the `TRANSACTIONAL_SNAPSHOT` atomicity mode in the cache configuration. If you create a table with the `CREATE TABLE` command, specify the atomicity mode as a parameter in the `WITH` part of the command:
+
+[tabs]
+--
+tab:SQL[]
+[source,sql]
+----
+CREATE TABLE Person WITH "ATOMICITY=TRANSACTIONAL_SNAPSHOT"
+----
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+    <property name="cacheConfiguration">
+        <bean class="org.apache.ignite.configuration.CacheConfiguration">
+
+            <property name="name" value="myCache"/>
+
+            <property name="atomicityMode" value="TRANSACTIONAL_SNAPSHOT"/>
+
+        </bean>
+    </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaSourceFile}[tag=enable,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/SqlTransactions.cs[tag=mvcc,indent=0]
+----
+tab:C++[unsupported]
+--
+
+
+
+== Limitations
+
+=== Cross-Cache Transactions
+
+The `TRANSACTIONAL_SNAPSHOT` mode is enabled per cache and does not permit caches with different atomicity modes within one transaction. Thus, if you want to cover multiple tables in one SQL transaction, all tables must be created with the `TRANSACTIONAL_SNAPSHOT` mode.
+
+=== Nested Transactions
+
+Ignite supports three modes of handling nested SQL transactions that can be enabled via a JDBC/ODBC connection parameter.
+
+[source,sql]
+----
+jdbc:ignite:thin://127.0.0.1/?nestedTransactionsMode=COMMIT
+----
+
+
+When a nested transaction occurs within another transaction, the system behavior depends on the `nestedTransactionsMode` parameter:
+
+- `ERROR` — When the nested transaction is encountered, an error is thrown and the enclosing transaction is rolled back. This is the default behavior.
+- `COMMIT` — The enclosing transaction is committed; the nested transaction starts and is committed when its COMMIT statement is encountered. The rest of the statements in the enclosing transaction are executed as implicit transactions.
+- `IGNORE` — DO NOT USE THIS MODE. The beginning of the nested transaction is ignored, statements within the nested transaction will be executed as part of the enclosing transaction, and all changes will be committed with the commit of the nested transaction. The subsequent statements of the enclosing transaction will be executed as implicit transactions.
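+
+The sketch below illustrates the `COMMIT` mode over a thin JDBC connection; it assumes the thin JDBC driver is on the classpath and that a `Person` table exists in a cache with the `TRANSACTIONAL_SNAPSHOT` atomicity mode:
+
+[source,java]
+----
+try (Connection conn = DriverManager.getConnection(
+         "jdbc:ignite:thin://127.0.0.1/?nestedTransactionsMode=COMMIT");
+     Statement stmt = conn.createStatement()) {
+
+    stmt.execute("BEGIN");
+    stmt.execute("INSERT INTO Person (id, name) VALUES (1, 'John Doe')");
+
+    // With COMMIT mode, this nested BEGIN commits the enclosing transaction
+    // and starts a new one; with ERROR (the default) it would throw an error.
+    stmt.execute("BEGIN");
+    stmt.execute("INSERT INTO Person (id, name) VALUES (2, 'Jane Roe')");
+    stmt.execute("COMMIT");
+}
+----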
diff --git a/docs/_docs/SQL/sql-tuning.adoc b/docs/_docs/SQL/sql-tuning.adoc
new file mode 100644
index 0000000..35872e8
--- /dev/null
+++ b/docs/_docs/SQL/sql-tuning.adoc
@@ -0,0 +1,471 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= SQL Performance Tuning
+
+This article outlines basic and advanced optimization techniques for Ignite SQL queries. Some of the sections are also useful for debugging and troubleshooting.
+
+
+== Using the EXPLAIN Statement
+
+Ignite supports the `EXPLAIN` statement, which could be used to read the execution plan of a query.
+Use this command to analyse your queries for possible optimization.
+Note that the plan will contain multiple rows: the last one will contain a query for the reducing side (usually your application), others are for map nodes (usually server nodes).
+Read the link:SQL/sql-introduction#distributed-queries[Distributed Queries] section to learn how queries are executed in Ignite.
+
+[source,sql]
+----
+EXPLAIN SELECT name FROM Person WHERE age = 26;
+----
+
+The execution plan is generated by H2 as described link:http://www.h2database.com/html/performance.html#explain_plan[here, window=_blank].
+
+== OR Operator and Selectivity
+
+//*TODO*: is this still valid?
+
+If a query contains an `OR` operator, then indexes may not be used as expected depending on the complexity of the query.
+For example, for the query `select name from Person where gender='M' and (age = 20 or age = 30)`, an index on the `gender` field will be used instead of an index on the `age` field, although the latter is a more selective index.
+As a workaround for this issue, you can rewrite the query with `UNION ALL` (notice that `UNION` without `ALL` will return `DISTINCT` rows, which will change the query semantics and will further penalize your query performance):
+
+[source,sql]
+----
+SELECT name FROM Person WHERE gender='M' and age = 20
+UNION ALL
+SELECT name FROM Person WHERE gender='M' and age = 30
+----
+
+== Avoid Having Too Many Columns
+
+Avoid having too many columns in the result set of a `SELECT` query. Due to limitations of the H2 query parser, queries with 100+ columns may perform worse than expected.
+
+== Lazy Loading
+
+By default, Ignite attempts to load the whole result set to memory and send it back to the query initiator (which is usually your application).
+This approach provides optimal performance for queries of small or medium result sets.
+However, if the result set is too big to fit in the available memory, it can lead to prolonged GC pauses and even `OutOfMemoryError` exceptions.
+
+To minimize memory consumption, at the cost of a moderate performance hit, you can load and process the result sets lazily by passing the `lazy` parameter to the JDBC and ODBC connection strings or use a similar method available for Java, .NET, and C++ APIs:
+
+[tabs]
+--
+
+tab:Java[]
+[source,java]
+----
+SqlFieldsQuery query = new SqlFieldsQuery("SELECT * FROM Person WHERE id > 10");
+
+// Result set will be loaded lazily.
+query.setLazy(true);
+----
+tab:JDBC[]
+[source,sql]
+----
+jdbc:ignite:thin://192.168.0.15?lazy=true
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+var query = new SqlFieldsQuery("SELECT * FROM Person WHERE id > 10")
+{
+    // Result set will be loaded lazily.
+    Lazy = true
+};
+----
+tab:C++[]
+--
+
+////
+*TODO* Add tabs for ODBC and other programming languages - C# and C++
+////
+
+== Querying Colocated Data
+
+When Ignite executes a distributed query, it sends sub-queries to individual cluster nodes to fetch the data and groups the results on the reducer node (usually your application).
+If you know in advance that the data you are querying is link:data-modeling/affinity-collocation[colocated] by the `GROUP BY` condition, you can use `SqlFieldsQuery.collocated = true` to tell the SQL engine to do the grouping on the remote nodes.
+This will reduce network traffic between the nodes and query execution time.
+When this flag is set to `true`, the query is executed on individual nodes first and the results are sent to the reducer node for final calculation.
+
+Consider the following example, in which we assume that the data is colocated by `department_id` (in other words, the `department_id` field is configured as the affinity key).
+
+[source,sql]
+----
+SELECT SUM(salary) FROM Employee GROUP BY department_id
+----
+
+Because of the nature of the SUM operation, Ignite sums up the salaries across the elements stored on individual nodes, and then sends these sums to the reducer node where the final result is calculated.
+This operation is already distributed, and enabling the `collocated` flag only slightly improves performance.
+
+Let's take a slightly different example:
+
+[source,sql]
+----
+SELECT AVG(salary) FROM Employee GROUP BY department_id
+----
+
+In this example, Ignite has to fetch all (`salary`, `department_id`) pairs to the reducer node and calculate the results there.
+However, if employees are colocated by the `department_id` field, i.e. employee data for the same department is stored on the same node, setting `SqlFieldsQuery.collocated = true` reduces query execution time because Ignite calculates the averages for each department on the individual nodes and sends the results to the reducer node for final calculation.
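+
+A minimal sketch of setting the flag on the query (assuming an `IgniteCache` instance named `cache`):
+
+[source,java]
+----
+SqlFieldsQuery qry = new SqlFieldsQuery(
+    "SELECT AVG(salary) FROM Employee GROUP BY department_id");
+
+// The data is colocated by the GROUP BY key, so grouping can be done on the remote nodes.
+qry.setCollocated(true);
+
+cache.query(qry).getAll();
+----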
+
+
+== Enforcing Join Order
+
+When the enforce join order flag is set, the query optimizer does not reorder tables in joins.
+In other words, the order in which joins are applied during query execution will be the same as specified in the query.
+Without this flag, the query optimizer can reorder joins to improve performance.
+However, sometimes it might make an incorrect decision.
+This flag helps to control and explicitly specify the order of joins instead of relying on the optimizer.
+
+Consider the following example:
+
+[source, sql]
+----
+SELECT * FROM Person p
+JOIN Company c ON p.company = c.name where p.name = 'John Doe'
+AND p.age > 20
+AND p.id > 5000
+AND p.id < 100000
+AND c.name NOT LIKE 'O%';
+----
+
+This query contains a join between two tables: `Person` and `Company`.
+To get the best performance, we should understand which join will return the smallest result set.
+The table with the smaller result set size should be given first in the join pair.
+To get the size of each result set, let's test each part.
+
+.Q1:
+[source, sql]
+----
+SELECT count(*)
+FROM Person p
+where
+p.name = 'John Doe'
+AND p.age > 20
+AND p.id > 5000
+AND p.id < 100000;
+----
+
+.Q2:
+[source, sql]
+----
+SELECT count(*)
+FROM Company c
+where
+c.name NOT LIKE 'O%';
+----
+
+After running Q1 and Q2, we can get two different outcomes:
+
+Case 1:
+[cols="1,1",opts="stretch,autowidth",stripes=none]
+|===
+|Q1 | 30000
+|Q2 |100000
+|===
+
+Q2 returns more entries than Q1.
+In this case, we don't need to modify the original query, because the smaller result set is already on the left side of the join.
+
+Case 2:
+[cols="1,1",opts="stretch,autowidth",stripes=none]
+|===
+|Q1 | 50000
+|Q2 |10000
+|===
+
+Q1 returns more entries than Q2, so we need to change the initial query as follows:
+
+[source, sql]
+----
+SELECT *
+FROM Company c
+JOIN Person p
+ON p.company = c.name
+where
+p.name = 'John Doe'
+AND p.age > 20
+AND p.id > 5000
+AND p.id < 100000
+AND c.name NOT LIKE 'O%';
+----
+
+The force join order hint can be specified as follows:
+
+* link:SQL/JDBC/jdbc-driver#parameters[JDBC driver connection parameter]
+* link:SQL/ODBC/connection-string-dsn#supported-arguments[ODBC driver connection attribute]
+* If you use link:SQL/sql-api[SqlFieldsQuery] to execute SQL queries, you can set the enforce join order hint by calling the `SqlFieldsQuery.setEnforceJoinOrder(true)` method.
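+
+For example, a minimal sketch of the programmatic approach (assuming an `IgniteCache` instance named `cache`):
+
+[source,java]
+----
+SqlFieldsQuery qry = new SqlFieldsQuery(
+    "SELECT * FROM Company c JOIN Person p ON p.company = c.name " +
+    "WHERE p.name = 'John Doe' AND c.name NOT LIKE 'O%'");
+
+// Keep the join order exactly as written in the query.
+qry.setEnforceJoinOrder(true);
+
+cache.query(qry).getAll();
+----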
+
+
+== Increasing Index Inline Size
+
+Every entry in the index has a constant size, which is calculated during index creation. This size is called the _index inline size_.
+Ideally, this size should be enough to store the full indexed entry in serialized form.
+When values are not fully included in the index, Ignite may need to perform additional data page reads during index lookup, which can impair performance if persistence is enabled.
+
+
+Here is how values are stored in the index:
+
+// the source code block below uses css-styles from the pygments library. If you change the highlighting library, you should change the syles as well.
+[source,java,subs="quotes"]
+----
+[tok-kt]#int#
+0     1       5
+| tag | value |
+[tok-k]#Total: 5 bytes#
+
+[tok-kt]#long#
+0     1       9
+| tag | value |
+[tok-k]#Total: 9 bytes#
+
+[tok-kt]#String#
+0     1      3             N
+| tag | size | UTF-8 value |
+[tok-k]#Total: 3 + string length#
+
+[tok-kt]#POJO (BinaryObject)#
+0     1         5
+| tag | BO hash |
+[tok-k]#Total: 5#
+----
+
+For primitive data types (bool, byte, short, int, etc.), Ignite automatically calculates the index inline size so that the values are included in full.
+For example, for `int` fields, the inline size is 5 (1 byte for the tag and 4 bytes for the value itself). For `long` fields, the inline size is 9 (1 byte for the tag + 8 bytes for the value).
+
+For binary objects, the index includes the hash of each object, which is enough to avoid collisions. The inline size is 5.
+
+For variable-length data, indexes include only the first several bytes of the value.
+Therefore, when indexing fields with variable-length data, we recommend that you estimate the length of your field values and set the inline size to a value that includes most (about 95%) or all values.
+For example, if you have a `String` field with 95% of the values containing 10 characters or fewer, you can set the inline size for the index on that field to 13.
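+
+As a sketch, the inline size can be specified when the index is created, assuming the `INLINE_SIZE` clause of `CREATE INDEX` (the index and column names are illustrative):
+
+[source,java]
+----
+// 1 (tag) + 2 (size) + 10 (UTF-8 value) = 13 bytes for ~95% of the values.
+cache.query(new SqlFieldsQuery(
+    "CREATE INDEX IF NOT EXISTS person_name_idx ON Person (name) INLINE_SIZE 13")).getAll();
+----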
+
+
+The inline sizes explained above apply to single-field indexes.
+However, when you define an index on a field in the value object or on a non-primary key column, Ignite creates a _composite index_ by appending the primary key to the indexed value.
+Therefore, when calculating the inline size for a composite index, add the inline size of the primary key to that of the indexed field.
+
+
+Below is an example of index inline size calculation for a cache where both key and value are complex objects.
+
+[source, java]
+----
+public class Key {
+    @QuerySqlField
+    private long id;
+
+    @QuerySqlField
+    @AffinityKeyMapped
+    private long affinityKey;
+}
+
+public class Value {
+    @QuerySqlField(index = true)
+    private long longField;
+
+    @QuerySqlField(index = true)
+    private int intField;
+
+    @QuerySqlField(index = true)
+    private String stringField; // we suppose that 95% of the values are 10 symbols
+}
+----
+
+The following table summarizes the inline index sizes for the indexes defined in the example above.
+
+[cols="1,1,1,2",opts="stretch,header"]
+|===
+|Index | Kind | Recommended Inline Size | Comment
+
+| (_key)
+|Primary key index
+| 5
+|Inlined hash of a binary object (5)
+
+|(affinityKey, _key)
+|Affinity key index
+|14
+|Inlined long (9) + binary object's hash (5)
+
+|(longField, _key)
+|Secondary index
+|14
+|Inlined long (9) + binary object's hash (5)
+
+|(intField, _key)
+|Secondary index
+|10
+|Inlined int (5) + binary object's hash (5)
+
+|(stringField, _key)
+|Secondary index
+|18
+|Inlined string (13) + binary object's hash (5) (assuming that the string is {tilde}10 symbols)
+
+|===
+//_
+
+//The inline size for the first two indexes is set via `CacheConfiguration.sqlIndexMaxInlineSize = 29` (because a single property is responsible for two indexes, we set it to the largest value).
+//The inline size for the rest of the indexes is set when you define a corresponding index.
+Note that you will only have to set the inline size for the index on `stringField`. For other indexes, Ignite calculates the inline size automatically.
+
+Refer to the link:SQL/indexes#configuring-index-inline-size[Configuring Index Inline Size] section for the information on how to change the inline size.
+
+You can check the inline size of an existing index in the link:monitoring-metrics/system-views#indexes[INDEXES] system view.
+
+[WARNING]
+====
+Note that since Ignite encodes strings to `UTF-8`, some characters use more than 1 byte.
+====
+
+== Query Parallelism
+
+By default, a SQL query is executed in a single thread on each participating node. This approach is optimal for queries returning small result sets involving index search. For example:
+
+[source,sql]
+----
+SELECT * FROM Person p WHERE p.id = ?;
+----
+
+Certain queries might benefit from being executed in multiple threads.
+This relates to queries with table scans and aggregations, which is often the case for HTAP and OLAP workloads.
+For example:
+
+[source,sql]
+----
+SELECT SUM(salary) FROM Person;
+----
+
+The number of threads created on a single node for query execution is configured per cache and by default equals 1.
+You can change the value by setting the `CacheConfiguration.queryParallelism` parameter.
+If you create SQL tables using the CREATE TABLE command, you can use a link:configuring-caches/configuration-overview#cache-templates[cache template] to set this parameter.
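+
+For example, a minimal sketch of setting the parameter through `CacheConfiguration` (the cache name and `Person` value class are assumptions):
+
+[source,java]
+----
+CacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<>("Person");
+
+// Execute SQL queries against this cache in 4 threads on each node.
+cacheCfg.setQueryParallelism(4);
+
+IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cacheCfg);
+----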
+
+If a query contains `JOINs`, then all the participating caches must have the same degree of parallelism.
+
+== Index Hints
+
+Index hints are useful in scenarios when you know that one index is more suitable for certain queries than another.
+You can use them to instruct the query optimizer to choose a more efficient execution plan.
+To do this, use the `USE INDEX(indexA,...,indexN)` clause, as shown in the following example.
+
+
+[source,sql]
+----
+SELECT * FROM Person USE INDEX(index_age)
+WHERE salary > 150000 AND age < 35;
+----
+
+
+== Partition Pruning
+
+Partition pruning is a technique that optimizes queries that use affinity keys in the `WHERE` condition.
+When executing such a query, Ignite scans only those partitions where the requested data is stored.
+This reduces query time because the query is sent only to the nodes that store the requested partitions.
+
+In the following example, the employee objects are colocated by the `id` field (if an affinity key is not set
+explicitly then the primary key is used as the affinity key):
+
+
+[source,sql]
+----
+CREATE TABLE employee (id BIGINT PRIMARY KEY, department_id INT, name VARCHAR)
+
+/* This query is sent to the node where the requested key is stored */
+SELECT * FROM employee WHERE id=10;
+
+/* This query is sent to all nodes */
+SELECT * FROM employee WHERE department_id=10;
+----
+
+In the next example, the affinity key is set explicitly and, therefore, will be used to colocate data and direct
+queries to the nodes that keep primary copies of the data:
+
+
+[source,sql]
+----
+CREATE TABLE employee (id BIGINT PRIMARY KEY, department_id INT, name VARCHAR) WITH "AFFINITY_KEY=department_id"
+
+/* This query is sent to all nodes */
+SELECT * FROM employee WHERE id=10;
+
+/* This query is sent to the node where the requested key is stored */
+SELECT * FROM employee WHERE department_id=10;
+----
+
+
+[NOTE]
+====
+Refer to the link:data-modeling/affinity-collocation[affinity colocation] page for more details
+on how data is colocated and how it helps boost performance in distributed storage systems like Ignite.
+====
+
+== Skip Reducer on Update
+
+When Ignite executes a DML operation, it first fetches all the affected intermediate rows for analysis to the reducer node (usually your application), and only then prepares batches of updated values that will be sent to remote nodes.
+
+This approach might affect performance and saturate the network if a DML operation has to move many entries.
+
+Use the `skipReducerOnUpdate` flag as a hint for the SQL engine to perform all intermediate row analysis and updates “in-place” on the server nodes. The hint is supported for JDBC and ODBC connections.
+
+
+[tabs]
+--
+tab:JDBC Connection String[]
+[source,text]
+----
+//jdbc connection string
+jdbc:ignite:thin://192.168.0.15?skipReducerOnUpdate=true
+----
+--
+
+== SQL On-heap Row Cache
+
+Ignite stores data and indexes in its own memory space outside of Java heap. This means that with every data
+access, a part of the data will be copied from the off-heap space to Java heap, potentially deserialized, and kept in
+the heap as long as your application or server node references it.
+
+The SQL on-heap row cache is intended to store hot rows (key-value objects) in Java heap, minimizing resources
+spent for data copying and deserialization. Each cached row refers to an entry in the off-heap region and can be
+invalidated when one of the following happens:
+
+* The master entry stored in the off-heap region is updated or removed.
+* The data page that stores the master entry is evicted from RAM.
+
+The on-heap row cache can be enabled for a specific cache/table (if you use `CREATE TABLE` to create SQL tables and caches, then the parameter can be passed via a link:configuring-caches/configuration-overview#cache-templates[cache template]):
+
+
+[source,xml]
+----
+include::code-snippets/xml/sql-on-heap-cache.xml[tags=ignite-config;!discovery,indent=0]
+----
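+
+A Java sketch of the same setting, assuming the `setSqlOnheapCacheEnabled` setter available on `CacheConfiguration` in recent Ignite versions (the cache name and `Person` value class are illustrative):
+
+[source,java]
+----
+CacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<>("Person");
+
+// Keep hot SQL rows deserialized on the Java heap.
+cacheCfg.setSqlOnheapCacheEnabled(true);
+----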
+
+////
+*TODO* Add tabs for ODBC/JDBC and other programming languages - Java C# and C++
+////
+
+If the row cache is enabled, you might be able to trade RAM for performance. You might get up to a 2x performance increase for some SQL queries and use cases by allocating more RAM for row caching purposes.
+
+[WARNING]
+====
+[discrete]
+=== SQL On-Heap Row Cache Size
+
+Presently, the cache is unlimited and can occupy as much RAM as allocated to your memory data regions. Make sure to:
+
+* Set the JVM max heap size equal to the total size of all the data regions that store caches for which this on-heap row cache is enabled.
+
+* link:perf-troubleshooting-guide/memory-tuning#java-heap-and-gc-tuning[Tune] JVM garbage collection accordingly.
+====
+
+== Using TIMESTAMP instead of DATE
+
+//TODO: is this still valid?
+Use the `TIMESTAMP` type instead of `DATE` whenever possible. Presently, the `DATE` type is serialized/deserialized very inefficiently, resulting in performance degradation.
diff --git a/docs/_docs/binary-client-protocol/binary-client-protocol.adoc b/docs/_docs/binary-client-protocol/binary-client-protocol.adoc
new file mode 100644
index 0000000..9caf373
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/binary-client-protocol.adoc
@@ -0,0 +1,286 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Binary Client Protocol
+
+== Overview
+
+The Ignite binary client protocol enables user applications to communicate with an existing Ignite cluster without starting a full-fledged Ignite node. An application can connect to the cluster through a raw TCP socket. Once the connection is established, the application can communicate with the Ignite cluster and perform cache operations using the established format.
+
+To communicate with the Ignite cluster, a client must obey the data format and communication details explained below.
+
+== Data Format
+
+=== Byte Ordering
+
+The Ignite binary client protocol uses little-endian byte ordering.
+
+=== Data Objects
+
+User data, such as cache keys and values, are represented in the Ignite link:key-value-api/binary-objects[Binary Object] format. A data object can be a standard (predefined) type or a complex object. For the complete list of data types supported, see the link:binary-client-protocol/data-format[Data Format] section.
+
+== Message Format
+
+All messages, both requests and responses, including the handshake, start with an `int` message length (excluding these first 4 bytes), followed by the payload (message body).
+
+=== Handshake
+
+The binary client protocol requires a connection handshake to ensure that the client and server versions are compatible. The following tables show the structure of the handshake request and response messages. Refer to the <<Example>> section to see how to send and receive a handshake request and response.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|   Description
+|int| Length of handshake payload
+|byte|    Handshake code, always 1.
+|short|   Version major.
+|short|   Version minor.
+|short|   Version patch.
+|byte|    Client code, always 2.
+|String|  Username
+|String|  Password
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+| Response Type (success) |   Description
+|int| Success message length, 1.
+|byte|    Success flag, 1.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type (failure)  |  Description
+|int| Error message length.
+|byte|    Success flag, 0.
+|short|   Server version major.
+|short|   Server version minor.
+|short|   Server version patch.
+|String|  Error message.
+|===
+
+
+=== Standard Message Header
+
+Client operation messages are composed of a header and operation-specific data. Each operation has its own <<Client Operations,data request and response format>>, with a common header.
+
+The following tables and examples show the request and response structure of a client operation message header:
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |   Description
+|int| Length of payload.
+|short|   Operation code
+|long|    Request id, generated by client and returned as-is in response
+|===
+
+
+.Request header
+[source, java]
+----
+private static void writeRequestHeader(int reqLength, short opCode, long reqId, DataOutputStream out) throws IOException {
+  // Message length
+  writeIntLittleEndian(10 + reqLength, out);
+
+  // Op code
+  writeShortLittleEndian(opCode, out);
+
+  // Request id
+  writeLongLittleEndian(reqId, out);
+}
+----
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type | Description
+|int| Length of response message.
+|long|    Request id (see above)
+|int| Status code (0 for success, otherwise error code)
+|String|  Error message (present only when status is not 0)
+|===
+
+
+
+.Response header
+[source, java]
+----
+private static void readResponseHeader(DataInputStream in) throws IOException {
+  // Response length
+  final int len = readIntLittleEndian(in);
+
+  // Request id
+  long resReqId = readLongLittleEndian(in);
+
+  // Success code
+  int statusCode = readIntLittleEndian(in);
+}
+----
+
+
+== Connectivity
+
+=== TCP Socket
+
+Client applications should connect to server nodes with a TCP socket. By default, the connector is enabled on port 10800. You can configure the port number and other server-side connection parameters in the `clientConnectorConfiguration` property of `IgniteConfiguration` of your cluster, as shown below:
+
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+    <!-- Thin client connection configuration. -->
+    <property name="clientConnectorConfiguration">
+        <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
+            <property name="host" value="127.0.0.1"/>
+            <property name="port" value="10900"/>
+            <property name="portRange" value="30"/>
+        </bean>
+    </property>
+
+    <!-- Other Ignite Configurations. -->
+
+</bean>
+
+----
+
+
+tab:Java[]
+
+[source, java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+ClientConnectorConfiguration ccfg = new ClientConnectorConfiguration();
+ccfg.setHost("127.0.0.1");
+ccfg.setPort(10900);
+ccfg.setPortRange(30);
+
+// Set client connection configuration in IgniteConfiguration
+cfg.setClientConnectorConfiguration(ccfg);
+
+// Start Ignite node
+Ignition.start(cfg);
+----
+
+--
+
+=== Connection Handshake
+
+Besides the socket connection, the thin client protocol requires a connection handshake to ensure that the client and server versions are compatible. Note that the handshake must be the first message sent after the connection is established.
+
+For the handshake message request and response structure, see the <<Handshake>> section above.
+
+
+=== Example
+
+
+.Socket and Handshake Connection
+[source, java]
+----
+Socket socket = new Socket();
+socket.connect(new InetSocketAddress("127.0.0.1", 10800));
+
+String username = "yourUsername";
+
+String password = "yourPassword";
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Message length
+writeIntLittleEndian(18 + username.length() + password.length(), out);
+
+// Handshake operation
+writeByteLittleEndian(1, out);
+
+// Protocol version 1.0.0
+writeShortLittleEndian(1, out);
+writeShortLittleEndian(1, out);
+writeShortLittleEndian(0, out);
+
+// Client code: thin client
+writeByteLittleEndian(2, out);
+
+// username
+writeString(username, out);
+
+// password
+writeString(password, out);
+
+// send request
+out.flush();
+
+// Receive handshake response
+DataInputStream in = new DataInputStream(socket.getInputStream());
+int length = readIntLittleEndian(in);
+int successFlag = readByteLittleEndian(in);
+
+// Since Ignite binary protocol uses little-endian byte order,
+// we need to implement big-endian to little-endian
+// conversion methods for write and read.
+
+// Write int in little-endian byte order
+private static void writeIntLittleEndian(int v, DataOutputStream out) throws IOException {
+  out.write((v >>> 0) & 0xFF);
+  out.write((v >>> 8) & 0xFF);
+  out.write((v >>> 16) & 0xFF);
+  out.write((v >>> 24) & 0xFF);
+}
+
+// Write short in little-endian byte order
+private static final void writeShortLittleEndian(int v, DataOutputStream out) throws IOException {
+  out.write((v >>> 0) & 0xFF);
+  out.write((v >>> 8) & 0xFF);
+}
+
+// Write byte in little-endian byte order
+private static void writeByteLittleEndian(int v, DataOutputStream out) throws IOException {
+  out.writeByte(v);
+}
+
+// Read int in little-endian byte order
+private static int readIntLittleEndian(DataInputStream in) throws IOException {
+  int ch1 = in.read();
+  int ch2 = in.read();
+  int ch3 = in.read();
+  int ch4 = in.read();
+  if ((ch1 | ch2 | ch3 | ch4) < 0)
+    throw new EOFException();
+  return ((ch4 << 24) + (ch3 << 16) + (ch2 << 8) + (ch1 << 0));
+}
+
+
+// Read byte in little-endian byte order
+private static byte readByteLittleEndian(DataInputStream in) throws IOException {
+  return in.readByte();
+}
+
+// Other write and read methods
+
+----
+
+
+== Client Operations
+
+Upon successful handshake, a client can start performing various cache operations:
+
+* link:binary-client-protocol/key-value-queries[Key-Value Queries]
+* link:binary-client-protocol/sql-and-scan-queries[SQL and Scan Queries]
+* link:binary-client-protocol/binary-type-metadata[Binary-Type Operations]
+* link:binary-client-protocol/cache-configuration[Cache Configuration Operations]
diff --git a/docs/_docs/binary-client-protocol/binary-type-metadata.adoc b/docs/_docs/binary-client-protocol/binary-type-metadata.adoc
new file mode 100644
index 0000000..320a83c
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/binary-type-metadata.adoc
@@ -0,0 +1,421 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Binary Type Metadata
+
+== Operation Codes
+
+Upon a successful handshake with an Ignite server node, a client can start performing binary-type related operations by sending a request (see request/response structure below) with a specific operation code:
+
+
+
+[cols="2,1",opts="header"]
+|===
+|Operation  | OP_CODE
+|OP_GET_BINARY_TYPE_NAME| 3000
+|OP_REGISTER_BINARY_TYPE_NAME|    3001
+|OP_GET_BINARY_TYPE | 3002
+|OP_PUT_BINARY_TYPE|  3003
+|OP_RESOURCE_CLOSE|   0
+|===
+
+
+Note that the above-mentioned op_codes are part of the request header, as explained link:binary-client-protocol/binary-client-protocol#standard-message-header[here].
+
+[NOTE]
+====
+[discrete]
+=== Custom Methods Used in the Sample Code Snippets
+
+Some of the code snippets below use `readDataObject(...)` introduced in link:binary-client-protocol/binary-client-protocol#data-objects[this section] and little-endian versions of methods for reading and writing multiple-byte values that are covered in link:binary-client-protocol/binary-client-protocol#data-objects[this example].
+====
+
+
+== OP_GET_BINARY_TYPE_NAME
+
+Gets the platform-specific full binary type name by id. For example, .NET and Java can map to the same type Foo, but classes will be Apache.Ignite.Foo in .NET and org.apache.ignite.Foo in Java.
+
+Names are registered with OP_REGISTER_BINARY_TYPE_NAME.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type   | Description
+|Header |  Request header.
+|byte |    Platform id:
+JAVA = 0
+DOTNET = 1
+|int| Type id; Java-style hash code of the type name.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type  |Description
+|Header |  Response header.
+|String |  Binary type name.
+|===
+
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String type = "ignite.myexamples.model.Person";
+int typeLen = type.getBytes("UTF-8").length;
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5, OP_GET_BINARY_TYPE_NAME, 1, out);
+
+// Platform id
+writeByteLittleEndian(0, out);
+
+// Type id
+writeIntLittleEndian(type.hashCode(), out);
+----
+
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting String
+int typeCode = readByteLittleEndian(in); // type code
+int strLen = readIntLittleEndian(in); // length
+
+byte[] buf = new byte[strLen];
+
+readFully(in, buf, 0, strLen);
+
+String s = new String(buf);
+
+System.out.println(s);
+----
+
+
+--
+
+== OP_GET_BINARY_TYPE
+
+Gets the binary type information by id.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type   | Description
+|Header |  Request header.
+|int | Type id; Java-style hash code of the type name.
+|===
+
+
+
+[cols="1,2",opts="header"]
+|===
+| Response Type | Description
+|Header|  Response header.
+|bool|    False: binary type does not exist, response end.
+True: binary type exists, response as follows.
+|int| Type id; Java-style hash code of the type name.
+|String|  Type name.
+|String|  Affinity key field name.
+|int| BinaryField count.
+|BinaryField * count| Structure of BinaryField:
+
+`String`  Field name
+
+`int` Type id; Java-style hash code of the type name.
+
+`int` Field id; Java-style hash code of the field name.
+
+|bool|    Is Enum or not.
+
+If set to true, then you have to pass the following 2 parameters. Otherwise, skip them.
+|int| _Pass only if 'is enum' parameter is 'true'_.
+
+Enum field count.
+|String + int|    _Pass only if 'is enum' parameter is 'true'_.
+
+Enum values. An enum value is a pair of a literal value (String) and numerical value (int).
+
+Repeat for as many times as the Enum field count that is obtained in the previous parameter.
+
+|int| Schema count.
+|BinarySchema|    Structure of BinarySchema:
+
+`int` Unique schema id.
+
+`int` Number of fields in the schema.
+
+`int` Field Id; Java-style hash code of the field name. Repeat for as many times as the total number of fields in the schema.
+
+Repeat for as many times as the BinarySchema count that is obtained in the previous parameter.
+|===
+
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String type = "ignite.myexamples.model.Person";
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(4, OP_GET_BINARY_TYPE, 1, out);
+
+// Type id
+writeIntLittleEndian(type.hashCode(), out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+
+boolean typeExist = readBooleanLittleEndian(in);
+
+int typeId = readIntLittleEndian(in);
+
+String typeName = readString(in);
+
+String affinityFieldName = readString(in);
+
+int fieldCount = readIntLittleEndian(in);
+
+for (int i = 0; i < fieldCount; i++)
+    readBinaryTypeField(in);
+
+boolean isEnum = readBooleanLittleEndian(in);
+
+int schemaCount = readIntLittleEndian(in);
+
+// Read binary schemas
+for (int i = 0; i < schemaCount; i++) {
+  int schemaId = readIntLittleEndian(in); // Schema Id
+
+  int schemaFieldCount = readIntLittleEndian(in); // field count
+
+  for (int j = 0; j < schemaFieldCount; j++) {
+    System.out.println(readIntLittleEndian(in)); // field id
+  }
+}
+
+private static void readBinaryTypeField (DataInputStream in) throws IOException{
+  String fieldName = readString(in);
+  int fieldTypeId = readIntLittleEndian(in);
+  int fieldId = readIntLittleEndian(in);
+  System.out.println(fieldName);
+}
+----
+--
+
+
+== OP_REGISTER_BINARY_TYPE_NAME
+
+Registers the platform-specific full binary type name by id. For example, .NET and Java can map to the same type Foo, but classes will be Apache.Ignite.Foo in .NET and org.apache.ignite.Foo in Java.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type  | Description
+|Header |  Request header.
+|byte|    Platform id:
+JAVA = 0
+DOTNET = 1
+|int| Type id; Java-style hash code of the type name.
+|String|  Type name.
+|===
+
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type  |Description
+|Header | Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String type = "ignite.myexamples.model.Person";
+int typeLen = type.getBytes("UTF-8").length;
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(20 + typeLen, OP_REGISTER_BINARY_TYPE_NAME, 1, out);
+
+//Platform id
+writeByteLittleEndian(0, out);
+
+//Type id
+writeIntLittleEndian(type.hashCode(), out);
+
+// Type name
+writeString(type, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+----
+
+--
+
+== OP_PUT_BINARY_TYPE
+
+Registers binary type information in cluster.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |  Description
+|Header|  Response header.
+|int| Type id; Java-style hash code of the type name.
+|String|  Type name.
+|String|  Affinity key field name.
+|int| BinaryField count.
+|BinaryField| Structure of BinaryField:
+
+`String`  Field name
+
+`int` Type id; Java-style hash code of the type name.
+
+`int` Field id; Java-style hash code of the field name.
+
+Repeat for as many times as the BinaryField count that is passed in the previous parameter.
+|bool|    Is Enum or not.
+
+If set to true, then you have to pass the following 2 parameters. Otherwise, skip them.
+|int| Pass only if 'is enum' parameter is 'true'.
+
+Enum field count.
+|String + int|    Pass only if 'is enum' parameter is 'true'.
+
+Enum values. An enum value is a pair of a literal value (String) and numerical value (int).
+
+Repeat for as many times as the Enum field count that is passed in the previous parameter.
+|int| BinarySchema count.
+|BinarySchema|    Structure of BinarySchema:
+
+`int` Unique schema id.
+
+`int` Number of fields in the schema.
+
+`int` Field id; Java-style hash code of the field name. Repeat for as many times as the total number of fields in the schema.
+
+Repeat for as many times as the BinarySchema count that is passed in the previous parameter.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+| Response Type | Description
+|Header |  Response header.
+|===
+
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String type = "ignite.myexamples.model.Person";
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(120, OP_PUT_BINARY_TYPE, 1, out);
+
+// Type id
+writeIntLittleEndian(type.hashCode(), out);
+
+// Type name
+writeString(type, out);
+
+// Affinity key field name (101 is the NULL type code: no affinity key field is set)
+writeByteLittleEndian(101, out);
+
+// Field count
+writeIntLittleEndian(3, out);
+
+// Field 1
+String field1 = "id";
+writeBinaryTypeField(field1, "long", out);
+
+// Field 2
+String field2 = "name";
+writeBinaryTypeField(field2, "String", out);
+
+// Field 3
+String field3 = "salary";
+writeBinaryTypeField(field3, "int", out);
+
+// isEnum
+out.writeBoolean(false);
+
+// Schema count
+writeIntLittleEndian(1, out);
+
+// Schema
+writeIntLittleEndian(657, out);  // Schema id; can be any custom value
+writeIntLittleEndian(3, out);  // field count
+writeIntLittleEndian(field1.hashCode(), out);
+writeIntLittleEndian(field2.hashCode(), out);
+writeIntLittleEndian(field3.hashCode(), out);
+
+private static void writeBinaryTypeField (String field, String fieldType, DataOutputStream out) throws IOException{
+  writeString(field, out);
+  writeIntLittleEndian(fieldType.hashCode(), out);
+  writeIntLittleEndian(field.hashCode(), out);
+}
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+----
+
+--
+
diff --git a/docs/_docs/binary-client-protocol/cache-configuration.adoc b/docs/_docs/binary-client-protocol/cache-configuration.adoc
new file mode 100644
index 0000000..9c2a9b1
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/cache-configuration.adoc
@@ -0,0 +1,714 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Cache Configuration
+
+== Operation Codes
+
+Upon a successful handshake with an Ignite server node, a client can start performing various cache configuration operations by sending a request (see request/response structure below) with a specific operation code:
+
+
+[cols="2,1",opts="header"]
+|===
+| Operation | OP_CODE
+|OP_CACHE_GET_NAMES|  1050
+|OP_CACHE_CREATE_WITH_NAME|   1051
+|OP_CACHE_GET_OR_CREATE_WITH_NAME|    1052
+|OP_CACHE_CREATE_WITH_CONFIGURATION|  1053
+|OP_CACHE_GET_OR_CREATE_WITH_CONFIGURATION|   1054
+|OP_CACHE_GET_CONFIGURATION|  1055
+|OP_CACHE_DESTROY|    1056
+|OP_QUERY_SCAN|   2000
+|OP_QUERY_SCAN_CURSOR_GET_PAGE|   2001
+|OP_QUERY_SQL|    2002
+|OP_QUERY_SQL_CURSOR_GET_PAGE|    2003
+|OP_QUERY_SQL_FIELDS| 2004
+|OP_QUERY_SQL_FIELDS_CURSOR_GET_PAGE| 2005
+|OP_BINARY_TYPE_NAME_GET| 3000
+|OP_BINARY_TYPE_NAME_PUT| 3001
+|OP_BINARY_TYPE_GET|  3002
+|OP_BINARY_TYPE_PUT|  3003
+|===
+
+Note that the above-mentioned op_codes are part of the request header, as explained link:binary-client-protocol/binary-client-protocol#standard-message-header[here].
+
+[NOTE]
+====
+[discrete]
+=== Custom Methods Used in the Sample Code Snippets
+
+Some of the code snippets below use `readDataObject(...)` introduced in link:binary-client-protocol/binary-client-protocol#data-objects[this section] and little-endian versions of methods for reading and writing multiple-byte values that are covered in link:binary-client-protocol/binary-client-protocol#data-objects[this example].
+====
+
+
+== OP_CACHE_CREATE_WITH_NAME
+
+Creates a cache with the given name. A cache template is applied if the cache name contains the '{asterisk}' character. Throws an exception if a cache with the specified name already exists.
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type  |  Description
+|Header|  Request header.
+|String|  Cache name.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String cacheName = "myNewCache";
+
+int nameLength = cacheName.getBytes("UTF-8").length;
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5 + nameLength, OP_CACHE_CREATE_WITH_NAME, 1, out);
+
+// Cache name
+writeString(cacheName, out);
+
+// Send request
+out.flush();
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+
+----
+--
+
+
+
+== OP_CACHE_GET_OR_CREATE_WITH_NAME
+
+Creates a cache with the given name. A cache template is applied if the cache name contains the '{asterisk}' character. Does nothing if a cache with that name already exists.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|String|  Cache name.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String cacheName = "myNewCache";
+
+int nameLength = cacheName.getBytes("UTF-8").length;
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5 + nameLength, OP_CACHE_GET_OR_CREATE_WITH_NAME, 1, out);
+
+// Cache name
+writeString(cacheName, out);
+
+// Send request
+out.flush();
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_GET_NAMES
+
+Gets existing cache names.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|int| Cache count.
+|String|  Cache name.
+
+Repeat for as many times as the cache count that is obtained in the previous parameter.
+|===
+
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5, OP_CACHE_GET_NAMES, 1, out);
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+
+// Cache count
+int cacheCount = readIntLittleEndian(in);
+
+// Cache names
+for (int i = 0; i < cacheCount; i++) {
+  int type = readByteLittleEndian(in); // type code
+
+  int strLen = readIntLittleEndian(in); // length
+
+  byte[] buf = new byte[strLen];
+
+  readFully(in, buf, 0, strLen);
+
+  String s = new String(buf); // cache name
+
+  System.out.println(s);
+}
+
+----
+--
+
+
+== OP_CACHE_GET_CONFIGURATION
+
+Gets configuration for the given cache.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Flag.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|int| Length of the configuration in bytes (all the configuration parameters).
+|CacheConfiguration|  Structure of Cache configuration (See below).
+|===
+
+
+Cache Configuration
+
+[cols="1,2",opts="header"]
+|===
+|Type |    Description
+|int| Number of backups.
+|int| CacheMode:
+
+LOCAL = 0
+
+REPLICATED = 1
+
+PARTITIONED = 2
+
+|bool|    CopyOnRead
+|String|  DataRegionName
+|bool|    EagerTTL
+|bool|    StatisticsEnabled
+|String|  GroupName
+|bool|    Invalidate
+|long|    DefaultLockTimeout (milliseconds)
+|int| MaxQueryIterators
+|String|  Name
+|bool|    IsOnheapCacheEnabled
+|int| PartitionLossPolicy:
+
+READ_ONLY_SAFE = 0
+
+READ_ONLY_ALL = 1
+
+READ_WRITE_SAFE = 2
+
+READ_WRITE_ALL = 3
+
+IGNORE = 4
+
+|int| QueryDetailMetricsSize
+|int| QueryParallelism
+|bool|    ReadFromBackup
+|int| RebalanceBatchSize
+|long|    RebalanceBatchesPrefetchCount
+|long|    RebalanceDelay (milliseconds)
+|int| RebalanceMode:
+
+SYNC = 0
+
+ASYNC = 1
+
+NONE = 2
+
+|int| RebalanceOrder
+|long|    RebalanceThrottle (milliseconds)
+|long|    RebalanceTimeout (milliseconds)
+|bool|    SqlEscapeAll
+|int| SqlIndexInlineMaxSize
+|String|  SqlSchema
+|int| WriteSynchronizationMode:
+
+FULL_SYNC = 0
+
+FULL_ASYNC = 1
+
+PRIMARY_SYNC = 2
+
+|int| CacheKeyConfiguration count.
+|CacheKeyConfiguration|   Structure of CacheKeyConfiguration:
+
+`String` Type name
+
+`String` Affinity key field name
+
+Repeat for as many times as the CacheKeyConfiguration count that is obtained in the previous parameter.
+|int| QueryEntity count.
+|QueryEntity * count| Structure of QueryEntity (see below).
+|===
+
+
+QueryEntity
+
+[cols="1,2",opts="header"]
+|===
+|Type |    Description
+|String|  Key type name.
+|String|  Value type name.
+|String|  Table name.
+|String|  Key field name.
+|String|  Value field name.
+|int| QueryField count
+|QueryField * count|  Structure of QueryField:
+
+`String` Name
+
+`String` Type name
+
+`bool` Is key field
+
+`bool` Is notNull constraint field
+
+Repeat for as many times as the QueryField count that is obtained in the previous parameter.
+|int| Alias count
+|(String + String) * count|   Field name aliases.
+|int| QueryIndex count
+|QueryIndex * count | Structure of QueryIndex:
+
+`String`  Index name
+
+`byte`    Index type:
+
+SORTED = 0
+
+FULLTEXT = 1
+
+GEOSPATIAL = 2
+
+`int` Inline size
+
+`int` Field count
+
+`(string + bool) * count`  Fields (name + IsDescending)
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String cacheName = "myCache";
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5, OP_CACHE_GET_CONFIGURATION, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+
+// Config length
+int configLen = readIntLittleEndian(in);
+
+// CacheAtomicityMode
+int cacheAtomicityMode = readIntLittleEndian(in);
+
+// Backups
+int backups = readIntLittleEndian(in);
+
+// CacheMode
+int cacheMode = readIntLittleEndian(in);
+
+// CopyOnRead
+boolean copyOnRead = readBooleanLittleEndian(in);
+
+// Other configurations
+
+----
+--
+
+
+== OP_CACHE_CREATE_WITH_CONFIGURATION
+
+Creates a cache with the provided configuration. An exception is thrown if the name is already in use.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|int| Length of the configuration in bytes (all the used configuration parameters).
+|short|   Number of configuration parameters.
+|short + property type |   Configuration Property data.
+
+Repeat for as many times as the number of configuration parameters.
+|===
+
+
+Any number of configuration parameters can be provided. Note that `Name` is required.
+
+Cache configuration data is specified in key-value form, where key is the `short` property id and value is property-specific data. The table below describes all available parameters.
+
+
+[cols="1,1,3",opts="header"]
+|===
+|Property Code |   Property Type|   Description
+|2|   int| CacheAtomicityMode:
+
+TRANSACTIONAL = 0,
+
+ATOMIC = 1
+|3|   int| Backups
+|1|   int| CacheMode:
+LOCAL = 0, REPLICATED = 1, PARTITIONED = 2
+|5|   boolean| CopyOnRead
+|100| String|  DataRegionName
+|405| boolean| EagerTtl
+|406| boolean| StatisticsEnabled
+|400| String|  GroupName
+|402| long|    DefaultLockTimeout (milliseconds)
+|403| int| MaxConcurrentAsyncOperations
+|206| int| MaxQueryIterators
+|0|   String|  Name
+|101| bool|    IsOnheapCacheEnabled
+|404| int| PartitionLossPolicy:
+
+READ_ONLY_SAFE = 0,
+
+ READ_ONLY_ALL = 1,
+
+ READ_WRITE_SAFE = 2,
+
+ READ_WRITE_ALL = 3,
+
+ IGNORE = 4
+|202| int| QueryDetailMetricsSize
+|201| int| QueryParallelism
+|6|   bool|    ReadFromBackup
+|303| int| RebalanceBatchSize
+|304| long|    RebalanceBatchesPrefetchCount
+|301| long|    RebalanceDelay (milliseconds)
+|300| int| RebalanceMode: SYNC = 0, ASYNC = 1, NONE = 2
+|305| int| RebalanceOrder
+|306| long|    RebalanceThrottle (milliseconds)
+|302| long|    RebalanceTimeout (milliseconds)
+|205| bool|    SqlEscapeAll
+|204| int| SqlIndexInlineMaxSize
+|203| String|  SqlSchema
+|4|   int| WriteSynchronizationMode:
+
+FULL_SYNC = 0,
+
+ FULL_ASYNC = 1,
+
+PRIMARY_SYNC = 2
+|401| int + CacheKeyConfiguration * count| CacheKeyConfiguration count + CacheKeyConfiguration
+
+Structure of CacheKeyConfiguration:
+
+`String` Type name
+
+`String` Affinity key field name
+|200 | int + QueryEntity * count |  QueryEntity count + QueryEntity
+
+Structure of QueryEntity: (see below)
+|===
+
+
+
+QueryEntity
+
+[cols="1,2",opts="header"]
+|===
+|Type |    Description
+|String|  Key type name.
+|String|  Value type name.
+|String|  Table name.
+|String|  Key field name.
+|String|  Value field name.
+|int| QueryField count
+|QueryField|  Structure of QueryField:
+
+`String` Name
+
+`String` Type name
+
+`bool` Is key field
+
+`bool` Is notNull constraint field
+
+Repeat for as many times as the QueryField count.
+|int| Alias count
+|String + String| Field name alias.
+
+Repeat for as many times as the alias count.
+|int| QueryIndex count
+|QueryIndex|  Structure of QueryIndex:
+
+`String`  Index name
+
+`byte`    Index type:
+
+SORTED = 0
+
+FULLTEXT = 1
+
+GEOSPATIAL = 2
+
+`int` Inline size
+
+`int` Field count
+
+`string + bool` Fields (name + IsDescending)
+
+Repeat for as many times as the field count that is passed in the previous parameter.
+
+Repeat for as many times as the QueryIndex count.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(30, OP_CACHE_CREATE_WITH_CONFIGURATION, 1, out);
+
+// Config length in bytes
+writeIntLittleEndian(16, out);
+
+// Number of properties
+writeShortLittleEndian(2, out);
+
+// Backups opcode
+writeShortLittleEndian(3, out);
+// Backups: 2
+writeIntLittleEndian(2, out);
+
+// Name opcode
+writeShortLittleEndian(0, out);
+// Name
+writeString("myNewCache", out);
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_GET_OR_CREATE_WITH_CONFIGURATION
+
+Creates a cache with the provided configuration. Does nothing if the name is already in use.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|CacheConfiguration|  Cache configuration (see format above).
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+writeRequestHeader(30, OP_CACHE_GET_OR_CREATE_WITH_CONFIGURATION, 1, out);
+
+// Config length in bytes
+writeIntLittleEndian(16, out);
+
+// Number of properties
+writeShortLittleEndian(2, out);
+
+// Backups opcode
+writeShortLittleEndian(3, out);
+
+// Backups: 2
+writeIntLittleEndian(2, out);
+
+// Name opcode
+writeShortLittleEndian(0, out);
+
+// Name
+writeString("myNewCache", out);
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_DESTROY
+
+Destroys the cache with a given name.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String cacheName = "myCache";
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(4, OP_CACHE_DESTROY, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Send request
+out.flush();
+----
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+readResponseHeader(in);
+----
+--
+
diff --git a/docs/_docs/binary-client-protocol/data-format.adoc b/docs/_docs/binary-client-protocol/data-format.adoc
new file mode 100644
index 0000000..b56b8c0
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/data-format.adoc
@@ -0,0 +1,1072 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Data Format
+
+Standard data types are represented as a combination of type code and value.
+
+:table_opts: cols="1,1,4",opts="header"
+
+[{table_opts}]
+|===
+|Field |  Size in bytes |  Description
+|`type_code` |  1 |   Signed one-byte integer code that indicates the type of the value.
+|`value` |  Variable|    Value itself. Its format and size depends on the type_code
+|===
+
+
+Below you can find descriptions of the supported types and their formats.
+
+
+== Primitives
+
+Primitives are the very basic types, such as numbers.
+
+
+=== Byte
+
+Type code: 1;
+
+Single byte value.
+
+Structure:
+
+[{table_opts}]
+|===
+| Field  | Size in bytes  | Description
+|`value`  | 1  | Single byte value.
+
+|===
+
+=== Short
+
+Type code: 2;
+
+2-byte long signed integer number. Little-endian.
+
+Structure:
+
+
+[{table_opts}]
+|===
+| Field |   Size in bytes | Description
+| `Value`  |  2|   The value.
+|===
+
+
+=== Int
+
+Type code: 3;
+
+4-byte long signed integer number. Little-endian.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|`value`|   4|   The value.
+|===
+
+=== Long
+
+Type code: 4;
+
+8-byte long signed integer number. Little-endian.
+
+Structure:
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes |  Description
+|`value` |   8  | The value.
+|===
+
+
+=== Float
+
+Type code: 5;
+
+4-byte long IEEE 754 floating-point number. Little-endian.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+| value|   4|   The value.
+|===
+
+=== Double
+Type code: 6;
+
+8-byte long IEEE 754 floating-point number. Little-endian.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|value  | 8|   The value.
+
+|===
+
+=== Char
+Type code: 7;
+
+Single UTF-16 code unit. Little-endian.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|value |   2 |   The UTF-16 code unit in little-endian.
+|===
+
+
+=== Bool
+
+Type code: 8;
+
+Boolean value. Zero for false and non-zero for true.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes |   Description
+
+|value |  1 |  The value. Zero for false and non-zero for true.
+
+|===
+
+=== NULL
+
+Type code: 101;
+
+This is not exactly a type. It's just a null value, which can be assigned to an object of any type.
+It has no payload and consists only of the type code.
+
+== Standard objects
+
+=== String
+
+Type code: 9;
+
+String in UTF-8 encoding. Should always be a valid UTF-8 string.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes |   Description
+|length|  4|   Signed integer number in little-endian. Length of the string in UTF-8 code units, i.e. in bytes.
+| data |    length |  String data in UTF-8 encoding. Without BOM.
+
+|===
+
+=== UUID (Guid)
+
+
+Type code: 10;
+
+A universally unique identifier (UUID) is a 128-bit number used to identify information in computer systems.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|most_significant_bits|   8|   64-bit number in little endian, representing 64 most significant bits of UUID.
+|least_significant_bits|  8|   64-bit number in little endian, representing 64 least significant bits of UUID.
+
+|===
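+
+A minimal sketch of reading a full UUID value (type code plus payload) is shown below. It assumes `readByteLittleEndian` and `readLongLittleEndian` helpers analogous to the other little-endian read methods used in the examples on this page.
+
+[source, java]
+----
+// 'in' is a DataInputStream positioned at a UUID value
+int typeCode = readByteLittleEndian(in); // 10 for UUID
+
+long mostSignificantBits = readLongLittleEndian(in);
+long leastSignificantBits = readLongLittleEndian(in);
+
+java.util.UUID uuid = new java.util.UUID(mostSignificantBits, leastSignificantBits);
+----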
+
+=== Timestamp
+
+Type code: 33;
+
+More precise than the Date data type. In addition to the milliseconds since epoch, it contains a nanosecond fraction of the last millisecond, whose value can be in the range from 0 to 999999. This means that the full timestamp in nanoseconds can be obtained with the following expression: `msecs_since_epoch \* 1000000 + msec_fraction_in_nsecs`.
+
+NOTE: The nanosecond timestamp expression is provided for clarification purposes only. Do not use it as-is in production code, because in some languages it may result in integer overflow.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes  | Description
+|`msecs_since_epoch`|   8|   Signed integer number in little-endian. Number of milliseconds elapsed since 00:00:00 1 Jan 1970 UTC. This format is widely known as Unix or POSIX time.
+|`msec_fraction_in_nsecs`|  4|   Signed integer number in little-endian. Nanosecond fraction of a millisecond.
+
+|===
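+
+A minimal sketch of reading a timestamp payload is shown below, under the same helper-method assumptions. `java.time.Instant` is used to combine the two fields, which avoids the overflow issue mentioned in the note above.
+
+[source, java]
+----
+// 'in' is a DataInputStream positioned right after the timestamp type code (33)
+long msecsSinceEpoch = readLongLittleEndian(in);
+int msecFractionInNsecs = readIntLittleEndian(in);
+
+java.time.Instant instant = java.time.Instant.ofEpochMilli(msecsSinceEpoch)
+    .plusNanos(msecFractionInNsecs);
+----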
+
+=== Date
+
+Type code: 11;
+
+Date, represented as a number of milliseconds elapsed since 00:00:00 1 Jan 1970 UTC. This format is widely known as Unix or POSIX time.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|`msecs_since_epoch`|   8|   The value. Signed integer number in little-endian.
+|===
+
+=== Time
+
+Type code: 36;
+
+Time, represented as a number of milliseconds elapsed since midnight, i.e. 00:00:00 UTC.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|value|   8|   Signed integer number in little-endian. Number of milliseconds elapsed since 00:00:00 UTC.
+
+|===
+
+=== Decimal
+
+Type code: 30;
+
+Numeric value of any desired precision and scale.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+|scale|   4|   Signed integer number in little-endian. Effectively, a power of ten by which the unscaled value should be divided. For example, 42 with scale 3 is 0.042, 42 with scale -3 is 42000, and 42 with scale 0 is 42.
+|length|  4|   Signed integer number in little-endian. Length of the number in bytes.
+|data|    length|  First bit is the flag of negativity. If it's set to 1, then value is negative. Other bits form signed integer number of variable length in big-endian format.
+
+|===
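+
+A sketch of decoding the decimal payload into a `java.math.BigDecimal`, following the field layout above (the negativity flag is the highest bit of the first data byte):
+
+[source, java]
+----
+// 'in' is a DataInputStream positioned right after the decimal type code (30)
+int scale = readIntLittleEndian(in);
+int length = readIntLittleEndian(in);
+
+byte[] data = new byte[length];
+readFully(in, data, 0, length);
+
+// The highest bit of the first byte is the sign flag
+boolean negative = (data[0] & 0x80) != 0;
+data[0] &= 0x7F;
+
+java.math.BigInteger unscaled = new java.math.BigInteger(1, data); // big-endian magnitude
+
+if (negative)
+    unscaled = unscaled.negate();
+
+java.math.BigDecimal value = new java.math.BigDecimal(unscaled, scale);
+----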
+
+=== Enum
+
+Type code: 28;
+
+Value of an enumerable type. For such types, only a finite number of named values is defined.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|type_id| 4|   Signed integer number in little-endian. See <<Type ID>> for details.
+|ordinal| 4|   Signed integer number stored in little-endian. Enumeration value ordinal, i.e. its position in its enum declaration, where the initial constant is assigned an ordinal of zero.
+
+|===
+
+== Arrays of primitives
+
+Arrays of this kind contain only the payloads of values as elements; they do not contain per-element type codes. They all have a similar format, described in the table below; a short reading sketch follows the table.
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|`length`|  4|   Signed integer number. Number of elements in the array.
+|`element_0_payload`|   Depends on the type.|    Payload of the value 0.
+|`element_1_payload`|   Depends on the type.|    Payload of the value 1.
+|... |... |...
+|`element_N_payload`|   Depends on the type. |   Payload of the value N.
+
+|===
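+
+For example, a sketch of reading the payload of an int array (type code 14), after its type code byte has already been consumed:
+
+[source, java]
+----
+// Number of elements, then the raw 4-byte payloads (no per-element type codes)
+int length = readIntLittleEndian(in);
+
+int[] values = new int[length];
+
+for (int i = 0; i < length; i++)
+  values[i] = readIntLittleEndian(in);
+----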
+
+=== Byte array
+
+Type code: 12;
+
+Array of bytes. May be either a piece of raw data, or array of small signed integer numbers.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    length|  Elements sequence. Every element is a payload of type "byte".
+
+|===
+
+=== Short array
+
+Type code: 13;
+
+Array of short signed integer numbers.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    `length * 2`|  Elements sequence. Every element is a payload of type "short".
+
+|===
+
+=== Int array
+
+Type code: 14;
+
+Array of signed integer numbers.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    `length * 4`|  Elements sequence. Every element is a payload of type "int".
+
+|===
+
+=== Long array
+
+Type code: 15;
+
+Array of long signed integer numbers.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    `length * 8`|  Elements sequence. Every element is a payload of type "long".
+
+|===
+
+=== Float array
+
+Type code: 16;
+
+Array of floating point numbers.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    `length * 4` | Elements sequence. Every element is a payload of type "float".
+
+|===
+
+=== Double array
+
+Type code: 17;
+
+Array of floating point numbers with double precision.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes |  Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    `length * 8`|  Elements sequence. Every element is a payload of type "double".
+
+|===
+
+=== Char array
+
+Type code: 18;
+
+Array of UTF-16 code units. Unlike a string, this type does not necessarily contain valid UTF-16 text.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+|length | 4|   Signed integer number. Number of elements in the array.
+|elements|    length * 2|  Elements sequence. Every element is a payload of type "char".
+
+|===
+
+=== Bool array
+
+Type code: 19;
+
+Array of boolean values.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes |  Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    length|  Elements sequence. Every element is a payload of type "bool".
+
+|===
+
+== Arrays of standard objects
+
+Arrays of this kind contain full values as elements. This means that their elements contain a type code as well as a payload. This format allows elements of such collections to be NULL values, which is why they are called "objects". They all have a similar format, described in the table below; a short reading sketch follows the table.
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|`length` | 4|   Signed integer number.  Number of elements in the array.
+|`element_0_full_value`|    Depends on value type.|  Full value of the element 0. Consists of a type code and payload. Can also be NULL.
+|`element_1_full_value`|    Depends on value type.|  Full value of the element 1 or NULL.
+|... |...| ...
+|`element_N_full_value`|    Depends on value type.|  Full value of the element N or NULL.
+
+|===
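+
+For example, a sketch of reading a string array payload (type code 20), where every element is a full value and may be NULL; it reuses the element layout of the String type described above:
+
+[source, java]
+----
+int length = readIntLittleEndian(in);
+
+String[] values = new String[length];
+
+for (int i = 0; i < length; i++) {
+  int typeCode = readByteLittleEndian(in);
+
+  if (typeCode == 101) { // NULL element
+    values[i] = null;
+    continue;
+  }
+
+  // typeCode == 9: String
+  int strLen = readIntLittleEndian(in);
+
+  byte[] buf = new byte[strLen];
+  readFully(in, buf, 0, strLen);
+
+  values[i] = new String(buf, java.nio.charset.StandardCharsets.UTF_8);
+}
+----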
+
+=== String array
+
+Type code: 20;
+
+Array of UTF-8 string values.
+
+Structure:
+
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Depends on every string length. Every element size is either `5 + value_length` for string, or 1 for `NULL`.|  Elements sequence. Every element is a full value of type "string", including type code, or `NULL`.
+
+|===
+
+=== UUID (Guid) array
+
+Type code: 21;
+
+Array of UUIDs (Guids).
+
+Structure:
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Every element size is either 17 for UUID, or 1 for NULL.|  Elements sequence. Every element is a full value of type "UUID", including type code, or NULL.
+
+|===
+
+=== Timestamp array
+
+Type code: 34;
+
+Array of timestamp values.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes |  Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Every element size is either 13 for Timestamp, or 1 for NULL.| Elements sequence. Every element is a full value of type "timestamp", including type code, or NULL.
+
+|===
+
+=== Date array
+
+Type code: 22;
+
+Array of dates.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Every element size is either 9 for Date, or 1 for NULL.|   Elements sequence. Every element is a full value of type "date", including type code, or NULL.
+
+|===
+
+=== Time array
+
+Type code: 37;
+
+Array of time values.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements   | Variable. Every element size is either 9 for Time, or 1 for NULL.|   Elements sequence. Every element is a full value of type "time", including type code, or NULL.
+
+|===
+
+=== Decimal array
+
+Type code: 31;
+
+Array of decimal values.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Every element size is either `9 + value_length` for Decimal, or 1 for NULL.| Elements sequence. Every element is a full value of type "decimal", including type code, or NULL.
+
+|===
+
+== Object collections
+
+=== Object array
+
+Type code: 23;
+
+Array of objects of any type. This includes standard objects of any type, as well as complex objects of various types, NULL values, and any combination of them. This also means that collections may contain other collections.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|type_id |4|   Type identifier of the contained objects. For example, in Java this type is used to deserialize to a Type[]. All values in the array should have Type as a parent. In Java this can always be java.lang.Object; the Type ID for such a "root" object type is -1. See <<Type ID>> for details.
+|length|  4|   Signed integer number. Number of elements in the array.
+|elements|    Variable. Depends on sizes of the objects.|  Elements sequence. Every element is a full value of any type or NULL.
+
+|===
+
+=== Collection
+
+Type code: 24;
+
+General collection type. Just like an object array, it contains objects, but unlike an array, it carries a hint for deserialization to a platform-specific collection of a certain type, not just an array. The following collection types are defined:
+
+
+*  `USER_SET` = -1. This is a general set type, which can not be mapped to more specific set type. Still, it is known, that it is set. It makes sense to deserialize such a collection to the basic and most widely used set-like type on your platform, e.g. hash set.
+*    `USER_COL` = 0. This is a general collection type, which can not be mapped to any more specific collection type. It makes sense to deserialize such a collection to the basic and most widely used collection type on your platform, e.g. resizeable array.
+*    `ARR_LIST` = 1. This is in fact a resizeable array type.
+*    `LINKED_LIST` = 2. This is a linked list type.
+*    `HASH_SET` = 3. This is a basic hash set type.
+*    `LINKED_HASH_SET` = 4. This is a hash set type, which maintains element order.
+*    `SINGLETON_LIST` = 5. This is a collection that only contains a single element, but behaves as a collection. Could be used by platforms for optimization purposes. If not applicable, any collection type could be used.
+
+[NOTE]
+====
+The collection type byte is used as a hint by a platform to deserialize a collection to the most suitable type. For example, in Java HASH_SET is deserialized to java.util.HashSet, while LINKED_HASH_SET is deserialized to java.util.LinkedHashSet. It is recommended for a thin client implementation to use the most suitable collection type on serialization and deserialization. Still, it is only a hint, which the user can ignore if it is not relevant or not applicable for the platform.
+====
+
+Structure:
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the collection.
+|type|    1|   Type of the collection. See description for details.
+|elements|    Variable. Depends on sizes of the objects.|  Elements sequence. Every element is a full value of any type or NULL.
+
+|===
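+
+A sketch of reading a collection value's payload, assuming a `readDataObject(...)` helper like the one shown in the Serialization and Deserialization examples section below:
+
+[source, java]
+----
+// Number of elements, then the collection type hint, then full element values
+int length = readIntLittleEndian(in);
+byte collectionType = (byte) readByteLittleEndian(in);
+
+java.util.List<Object> elements = new java.util.ArrayList<>(length);
+
+for (int i = 0; i < length; i++)
+  elements.add(readDataObject(in));
+
+// 'collectionType' is only a hint, e.g. ARR_LIST = 1 suggests a resizeable array
+----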
+
+=== Map
+
+Type code: 25;
+
+Map-like collection type. Contains pairs of key and value objects. Both key and value objects can be of various types, including standard objects as well as complex objects, and any combination of them. It carries a hint for deserialization to a map of a certain type. The following map types are defined:
+
+*   `HASH_MAP` = 1. This is a basic hash map.
+*   `LINKED_HASH_MAP` = 2. This is a hash map, which maintains element order.
+
+[NOTE]
+====
+The map type byte is used as a hint by a platform to deserialize the collection to the most suitable type. It is recommended for a thin client implementation to use the most suitable map type on serialization and deserialization. Still, it is only a hint, which the user can ignore if it is not relevant or not applicable for the platform.
+====
+
+Structure:
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|length|  4|   Signed integer number. Number of elements in the collection.
+|type|    1|   Type of the collection. See description for details.
+|elements|    Variable. Depends on sizes of the objects.|  Elements sequence. Elements here are keys and values, followed one by one in pairs. Every element is a full value of any type or NULL.
+
+|===
+
+=== Enum array
+
+Type code: 29;
+
+Array of enumerable type values. An element can be either an enumerable value or null, so each element occupies either 9 bytes or 1 byte.
+
+Structure:
+
+
+[{table_opts}]
+|===
+|Field|   Size in bytes|   Description
+|type_id| 4|   Type identifier of the contained objects. For example, in Java this type is used to deserialize to an EnumType[]. All values in the array should have EnumType as a parent, which is the parent type of any enumerable object type. See <<Type ID>> for details.
+|length|  4|   Signed integer number. Number of elements in the collection.
+|elements|    Variable. Depends on sizes of the objects. | Elements sequence. Every element is a full value of enum type or NULL.
+
+|===
+
+== Complex object
+
+Type code: 103;
+
+A complex object consists of a 24-byte header, a set of fields (data objects), and a schema (field IDs and positions). Depending on an operation and your data model, a data object can be of a primitive type or complex type (set of fields).
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size in bytes|   Optionality
+|`version`| 1|   Mandatory
+|`flags`|   2|   Mandatory
+|`type_id`| 4|   Mandatory
+|`hash_code`|   4|   Mandatory
+|`length`|  4|   Mandatory
+|`schema_id`|   4|   Mandatory
+|`object_fields`|   Variable length.|    Optional
+|`schema`|  Variable length.|    Optional
+|`raw_data_offset`| 4|   Optional
+
+|===
+
+
+== Version
+
+This field indicates the complex object layout version. It is needed for backward compatibility. Clients should check this field and report an error to the user if the object layout version is unknown to them, to prevent data corruption and unpredictable deserialization results.
+
+== Flags
+
+This field is a 16-bit little-endian bitmask. It contains object flags that indicate how the object instance should be handled by a reader; a sketch of checking them follows the list. The following flags are defined:
+
+*    `USER_TYPE = 0x0001` - Indicates that type is a user type. Should be always set for any client type. Can be ignored on a de-serialization.
+*    `HAS_SCHEMA = 0x0002` - Indicates that object layout contains schema in the footer. See <<Schema>> for details.
+*    `HAS_RAW_DATA = 0x0004` - Indicating that object has raw data. See <<Raw data offset>> for details.
+*    `OFFSET_ONE_BYTE = 0x0008` - Indicating that schema field offset is one byte long. See <<Schema>> for details.
+*    `OFFSET_TWO_BYTES = 0x0010` - Indicating that schema field offset is two byte long. See <<Schema>> for details.
+*    `COMPACT_FOOTER = 0x0020` - Indicating that footer does not contain field IDs, only offsets. See <<Schema>> for details.
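+
+A minimal sketch of checking these flags after reading the 2-byte little-endian `flags` field of a complex object header:
+
+[source, java]
+----
+short flags = readShortLittleEndian(in);
+
+boolean hasSchema     = (flags & 0x0002) != 0; // HAS_SCHEMA
+boolean hasRawData    = (flags & 0x0004) != 0; // HAS_RAW_DATA
+boolean compactFooter = (flags & 0x0020) != 0; // COMPACT_FOOTER
+----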
+
+== Type ID
+
+This field contains a unique type identifier. It is 4 bytes long and stored in little-endian. By default, Type ID is obtained as a Java-style hash code of the type name. Type ID evaluation algorithm should be the same across all platforms in the cluster for all platforms to be able to operate with objects of this type. Default type ID calculation algorithm, which is recommended for use by all thin clients, can be found below.
+
+[tabs]
+--
+
+tab:Java[]
+[source, java]
+----
+static int hashCode(String str) {
+  int len = str.length();
+
+  int h = 0;
+
+  for (int i = 0; i < len; i++) {
+    int c = str.charAt(i);
+
+    c = Character.toLowerCase(c);
+
+    h = 31 * h + c;
+  }
+
+  return h;
+}
+----
+
+tab:C[]
+
+[source, c]
+----
+int32_t HashCode(const char* val, size_t size)
+{
+  if (!val || size == 0)
+    return 0;
+
+  int32_t hash = 0;
+
+  for (size_t i = 0; i < size; ++i)
+  {
+    char c = val[i];
+
+    if ('A' <= c && c <= 'Z')
+      c |= 0x20;
+
+    hash = 31 * hash + c;
+  }
+
+  return hash;
+}
+----
+
+--
+
+
+
+
+
+== Hash code
+
+Hash code of the value. It is stored as a 4-byte long little-endian value and calculated as a Java-style hash of the contents without the header. It is used by the Ignite engine for comparisons, for example, to compare keys. The hash calculation algorithm can be found below.
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+static int dataHashCode(byte[] data) {
+  int len = data.length;
+
+  int h = 0;
+
+  for (int i = 0; i < len; i++)
+    h = 31 * h + data[i];
+
+  return h;
+}
+----
+tab:C[]
+
+[source, c]
+----
+int32_t GetDataHashCode(const void* data, size_t size)
+{
+  if (!data)
+    return 0;
+
+  int32_t hash = 1;
+  const int8_t* bytes = static_cast<const int8_t*>(data);
+
+  for (int i = 0; i < size; ++i)
+    hash = 31 * hash + bytes[i];
+
+  return hash;
+}
+----
+
+--
+
+
+
+
+== Length
+
+This field contains full length of the object including header. It is stored as a 4-byte long little-endian integer number. Using this field you can easily skip the whole object by simply increasing current data stream position by the value of this field.
+
+== Schema ID
+
+Object schema identifier. It is stored as a 4-byte long little-endian value and calculated as a hash of all object field IDs. It is used for complex object size optimization: Ignite uses the schema ID to avoid writing the whole schema to the end of every complex object value. Instead, it stores all schemas in the binary metadata store and only writes field offsets to the object. This optimization significantly reduces the size of complex objects containing many short fields (such as ints).
+
+If the schema is missing (e.g. the whole object is written in raw mode, or has no fields at all), the schema ID field is 0.
+
+See <<Schema>> for details on schema structure.
+
+[NOTE]
+====
+The schema ID cannot be determined using the Type ID, because objects of the same type (and thus having the same Type ID) can have multiple schemas, i.e. field sequences.
+====
+
+Schema ID calculation algorithm can be found below:
+
+[tabs]
+--
+
+tab:Java[]
+
+[source, java]
+----
+/** FNV1 hash offset basis. */
+private static final int FNV1_OFFSET_BASIS = 0x811C9DC5;
+
+/** FNV1 hash prime. */
+private static final int FNV1_PRIME = 0x01000193;
+
+static int calculateSchemaId(int fieldIds[])
+{
+  if (fieldIds == null || fieldIds.length == 0)
+    return 0;
+
+  int len = fieldIds.length;
+
+  int schemaId = FNV1_OFFSET_BASIS;
+
+  for (int i = 0; i < len; ++i)
+  {
+    int fieldId = fieldIds[i];
+
+    schemaId = schemaId ^ (fieldId & 0xFF);
+    schemaId = schemaId * FNV1_PRIME;
+    schemaId = schemaId ^ ((fieldId >> 8) & 0xFF);
+    schemaId = schemaId * FNV1_PRIME;
+    schemaId = schemaId ^ ((fieldId >> 16) & 0xFF);
+    schemaId = schemaId * FNV1_PRIME;
+    schemaId = schemaId ^ ((fieldId >> 24) & 0xFF);
+    schemaId = schemaId * FNV1_PRIME;
+  }
+
+  return schemaId;
+}
+----
+
+
+tab:C[]
+
+[source, c]
+----
+/** FNV1 hash offset basis. */
+enum { FNV1_OFFSET_BASIS = 0x811C9DC5 };
+
+/** FNV1 hash prime. */
+enum { FNV1_PRIME = 0x01000193 };
+
+int32_t CalculateSchemaId(const int32_t* fieldIds, size_t num)
+{
+  if (!fieldIds || num == 0)
+    return 0;
+
+  int32_t schemaId = FNV1_OFFSET_BASIS;
+
+  for (size_t i = 0; i < num; ++i)
+  {
+    int32_t fieldId = fieldIds[i];
+
+    schemaId ^= fieldId & 0xFF;
+    schemaId *= FNV1_PRIME;
+    schemaId ^= (fieldId >> 8) & 0xFF;
+    schemaId *= FNV1_PRIME;
+    schemaId ^= (fieldId >> 16) & 0xFF;
+    schemaId *= FNV1_PRIME;
+    schemaId ^= (fieldId >> 24) & 0xFF;
+    schemaId *= FNV1_PRIME;
+  }
+
+  return schemaId;
+}
+----
+
+
+--
+
+
+
+== Object Fields
+
+Object fields. Every field is a binary object and can be of either a complex or a standard type. Note that a complex object that has no fields at all is a valid object and may be encountered. Every field may or may not have a name. For named fields, an offset is written in the object schema, by which they can be located in the object without deserializing the whole object. Fields without a name are always stored after the named fields and are written in so-called "raw mode".
+
+Thus, fields that have been written in raw mode can only be accessed by sequential reads in the same order as they were written, while named fields can be read in random order.
+
+== Schema
+
+Object schema. Any complex object may or may not have a schema, so this field is optional. The schema is not present if the object has no named fields, which also includes the case when the object has no fields at all. You should check the HAS_SCHEMA object flag to determine whether the object has a schema.
+
+The main purpose of a schema is to allow fast lookup of object fields. For this purpose, the schema contains a sequence of offsets of the object fields in the object payload. The field offsets themselves can be of different sizes. The size of these offsets is determined on write by the maximum offset value: if it is in the range of [24..255] bytes, 1-byte offsets are used; if it is in the range of [256..65535] bytes, 2-byte offsets are used; in all other cases 4-byte offsets are used. To determine the size of the offsets on read, clients should check the `OFFSET_ONE_BYTE` and `OFFSET_TWO_BYTES` flags: if the `OFFSET_ONE_BYTE` flag is set, offsets are 1 byte long; otherwise, if the `OFFSET_TWO_BYTES` flag is set, offsets are 2 bytes long; otherwise offsets are 4 bytes long (see the sketch below).
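+
+A sketch of how a reader might pick the field offset width from these flags when parsing the footer (flag values as listed in the Flags section above):
+
+[source, java]
+----
+// Determine how many bytes each field offset occupies in the schema footer
+int offsetSize;
+
+if ((flags & 0x0008) != 0)      // OFFSET_ONE_BYTE
+  offsetSize = 1;
+else if ((flags & 0x0010) != 0) // OFFSET_TWO_BYTES
+  offsetSize = 2;
+else
+  offsetSize = 4;
+----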
+
+There are two formats of schema supported:
+
+* Full schema approach - simpler to implement but uses more resources.
+*  Compact footer approach - harder to implement, but provides better performance and reduces memory consumption; thus it is recommended for new clients to implement this approach.
+
+You can find more details on both formats below.
+
+Note that the flag COMPACT_FOOTER should be checked by clients to determine which approach is used in every specific object.
+
+=== Full schema approach
+
+When this approach is used, the COMPACT_FOOTER flag is not set and the whole object schema is written to the footer of the object. In this case, only the complex object itself is needed for deserialization; the schema_id field is ignored and no additional data is required. The structure of the schema field of the complex object in this case can be found below:
+
+[cols="1,1,2",opts="header"]
+|===
+|Field |  Size in bytes |  Description
+|`field_id_0`|  4|   ID of the field with the index 0. 4-byte long hash stored in little-endian. The field ID is calculated from the field name in the same way as for a <<Type ID>>.
+|`field_offset_0`|  Variable, depending on the size of the object: 1, 2 or 4. |  Unsigned integer number stored in little-endian. Offset of the field in object, starting from the very first byte of the full object value (i.e. type_code position).
+|`field_id_1`|  4|   4-byte long hash stored in little-endian. ID of the field with the index 1.
+|`field_offset_1` | Variable, depending on the size of the object: 1, 2 or 4.|   Unsigned integer number stored in little-endian. Offset of the field in object.
+|...| ...| ...
+|`field_id_N`|  4|   4-byte long hash stored in little-endian. ID of the field with the index N.
+|`field_offset_N`|  Variable, depending on the size of the object: 1, 2 or 4. |   Unsigned integer number stored in little-endian. Offset of the field in object.
+
+|===
+
+=== Compact footer approach
+
+In this approach, the COMPACT_FOOTER flag is set and only the field offset sequence is written to the object footer. In this case, the client uses the schema_id field to look up the object's schema in a previously stored meta store to find out the field order and associate each field with its offset.
+
+If this approach is used, the client needs to keep schemas in a special meta store and send them to / retrieve them from Ignite servers. See link:check[Binary Types] for details.
+
+The structure of the schema in this case can be found below:
+
+[cols="1,1,2",opts="header"]
+|===
+|Field |  Size in bytes |  Description
+|`field_offset_0` | Variable, depending on the size of the object: 1, 2 or 4. |  Unsigned integer number stored in little-endian. Offset of the field 0 in the object, starting from the very first byte of the full object value (i.e. type_code position).
+|`field_offset_1`|  Variable, depending on the size of the object: 1, 2 or 4. |  Unsigned integer number stored in little-endian. Offset of the 1-st field in object.
+|...| ...| ...
+|`field_offset_N`|  Variable, depending on the size of the object: 1, 2 or 4.  | Unsigned integer number stored in little-endian. Offset of the N-th field in object.
+
+|===
+
+== Raw data offset
+
+Optional field. It is only present if there are fields that have been written in raw mode. In this case, the HAS_RAW_DATA flag is set and the raw data offset field is present, stored as a 4-byte long little-endian value that points to the offset of the raw data within the complex object, starting from the very first byte of the header (i.e. this field is always greater than the header length).
+
+This field is used to position stream for user to start reading in a raw mode.
+
+== Special types
+
+=== Wrapped Data
+
+Type code: 27;
+
+One or more binary objects can be wrapped in an array. This allows reading, storing, passing and writing objects efficiently without understanding their contents, by performing a simple byte copy.
+All cache operations return complex objects inside a wrapper (but not primitives).
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |   Size |    Description
+|length|  4|   Signed integer number stored in little-endian. Size of the wrapped data in bytes.
+|payload| length|  Payload.
+|offset|  4|   Signed integer number stored in little-endian. Offset of the object within an array. Array can contain an object graph, this offset points to the root object.
+
+|===
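+
+A sketch of unwrapping such a value, assuming the `readDataObject(...)` helper from the Serialization and Deserialization examples below:
+
+[source, java]
+----
+// Wrapped data: length-prefixed payload followed by the offset of the root object
+int length = readIntLittleEndian(in);
+
+byte[] payload = new byte[length];
+readFully(in, payload, 0, length);
+
+int offset = readIntLittleEndian(in);
+
+// Parse the root object starting at 'offset' within the payload
+DataInputStream wrapped = new DataInputStream(new java.io.ByteArrayInputStream(payload));
+wrapped.skipBytes(offset);
+
+Object root = readDataObject(wrapped);
+----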
+
+=== Binary enum
+
+Type code: 38
+
+Wrapped enumerable type. This type can be returned by the engine in place of the ordinary enum type. Enums should be written in this form when the Binary API is used.
+
+Structure:
+
+[{table_opts}]
+|===
+|Field |  Size  |  Description
+|type_id| 4|   Signed integer number in little-endian. See <<Type ID>> for details.
+|ordinal| 4|   Signed integer number stored in little-endian. Enumeration value ordinal, i.e. its position in its enum declaration, where the initial constant is assigned an ordinal of zero.
+
+|===
+
+== Serialization and Deserialization examples
+
+=== Reading objects
+
+A code template below shows how to read data of various types from an input byte stream:
+
+
+[source, java]
+----
+private static Object readDataObject(DataInputStream in) throws IOException {
+  byte code = in.readByte();
+
+  switch (code) {
+    case 1:
+      return in.readByte();
+    case 2:
+      return readShortLittleEndian(in);
+    case 3:
+      return readIntLittleEndian(in);
+    case 4:
+      return readLongLittleEndian(in);
+    case 27: {
+      int len = readIntLittleEndian(in);
+      // Assume 0 offset for simplicity
+      Object res = readDataObject(in);
+      int offset = readIntLittleEndian(in);
+      return res;
+    }
+    case 103:
+      byte ver = in.readByte();
+      assert ver == 1; // version
+      short flags = readShortLittleEndian(in);
+      int typeId = readIntLittleEndian(in);
+      int hash = readIntLittleEndian(in);
+      int len = readIntLittleEndian(in);
+      int schemaId = readIntLittleEndian(in);
+      int schemaOffset = readIntLittleEndian(in);
+      byte[] data = new byte[len - 24];
+      in.readFully(data);
+      return "Binary Object: " + typeId;
+    default:
+      throw new Error("Unsupported type: " + code);
+  }
+}
+----
+
+=== Int
+
+The following code snippet shows how to write and read a data object of type int, using a socket-based output/input stream.
+
+
+[source, java]
+----
+// Write int data object
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+int val = 11;
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(val, out);
+
+// Read int data object
+DataInputStream in = new DataInputStream(socket.getInputStream());
+int typeCode = readByteLittleEndian(in);
+int readVal = readIntLittleEndian(in);
+----
+
+Refer to the link:example[example section] for the implementation of the `write...()` and `read...()` methods shown above.
+
+As another example, for String type, the structure would be:
+
+
+
+[cols="1,2",opts="header"]
+|===
+|Type |    Description
+| byte |    String type code, 9.
+|int | String length in UTF-8 bytes.
+|bytes |   Actual string.
+|===
+
+=== String
+
+The code snippet below shows how to write and read a String value following this format:
+
+
+[source, java]
+----
+private static void writeString (String str, DataOutputStream out) throws IOException {
+  writeByteLittleEndian(9, out); // type code for String
+
+  byte[] strBytes = str.getBytes("UTF-8"); // encode the string as UTF-8 bytes
+  writeIntLittleEndian(strBytes.length, out); // length of the string in bytes
+
+  out.write(strBytes);
+}
+
+private static String readString(DataInputStream in) throws IOException {
+  int type = readByteLittleEndian(in); // type code
+
+  int strLen = readIntLittleEndian(in); // length of the string
+
+  byte[] buf = new byte[strLen];
+
+  readFully(in, buf, 0, strLen);
+
+  return new String(buf, "UTF-8");
+}
+----
+
+
+
+
+
diff --git a/docs/_docs/binary-client-protocol/key-value-queries.adoc b/docs/_docs/binary-client-protocol/key-value-queries.adoc
new file mode 100644
index 0000000..1acabc5
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/key-value-queries.adoc
@@ -0,0 +1,1416 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Key-Value Queries
+
+This page describes the key-value operations that you can perform with a cache. The key-value operations are equivalent to Ignite's native cache operations. Each operation has a link:binary-client-protocol/binary-client-protocol#standard-message-header[header] and operation-specific data.
+
+Refer to the Data Format page for a list of available data types and data format specification.
+
+== Operation Codes
+
+Upon successful handshake with an Ignite server node, a client can start performing various key-value operations by sending a request (see request/response structure below) with a specific operation code:
+
+
+[cols="2,1",opts="header"]
+|===
+
+
+|Operation|   OP_CODE
+|OP_CACHE_GET|    1000
+|OP_CACHE_PUT|    1001
+|OP_CACHE_PUT_IF_ABSENT|  1002
+|OP_CACHE_GET_ALL|    1003
+|OP_CACHE_PUT_ALL|    1004
+|OP_CACHE_GET_AND_PUT|    1005
+|OP_CACHE_GET_AND_REPLACE|    1006
+|OP_CACHE_GET_AND_REMOVE| 1007
+|OP_CACHE_GET_AND_PUT_IF_ABSENT|  1008
+|OP_CACHE_REPLACE|    1009
+|OP_CACHE_REPLACE_IF_EQUALS|  1010
+|OP_CACHE_CONTAINS_KEY|   1011
+|OP_CACHE_CONTAINS_KEYS|  1012
+|OP_CACHE_CLEAR|  1013
+|OP_CACHE_CLEAR_KEY|  1014
+|OP_CACHE_CLEAR_KEYS| 1015
+|OP_CACHE_REMOVE_KEY| 1016
+|OP_CACHE_REMOVE_IF_EQUALS|   1017
+|OP_CACHE_REMOVE_KEYS|    1018
+|OP_CACHE_REMOVE_ALL| 1019
+|OP_CACHE_GET_SIZE|   1020
+
+|===
+
+
+Note that the above mentioned op_codes are part of the request header, as explained link:binary-client-protocol/binary-client-protocol#standard-message-header[here].
+
+[NOTE]
+====
+[discrete]
+=== Custom Methods Used in the Sample Code Snippets
+
+Some of the code snippets below use `readDataObject(...)` introduced in link:binary-client-protocol/binary-client-protocol#data-objects[this section] and little-endian versions of methods for reading and writing multiple-byte values that are covered in link:binary-client-protocol/binary-client-protocol#data-objects[this example].
+====
+
+== OP_CACHE_GET
+
+Retrieves a value from a cache by key. If the cache does not contain the key, null is returned.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key of the cache entry to be returned.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|Data Object| The value that corresponds to the given key. null if the cache does not contain the key.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+
+// Request header
+writeRequestHeader(10, OP_CACHE_GET, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting cache value (Data Object)
+int resTypeCode = readByteLittleEndian(in);
+int value = readIntLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_GET_ALL
+
+Retrieves multiple key-value pairs from a cache.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|int| Key count.
+|Data Object| Key for the cache entry.
+
+Repeat for as many times as the key count that is passed in the previous parameter.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type  | Description
+|Header|  Response header.
+|int| Result count.
+|Key Data Object + Value Data Object| Resulting key-value pairs. Keys that are not present in the cache are not included.
+
+Repeat for as many times as the result count that is obtained in the previous parameter.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(19, OP_CACHE_GET_ALL, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Key count
+writeIntLittleEndian(2, out);
+
+// Data object 1
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key1, out);   // Cache key
+
+// Data object 2
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key2, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Result count
+int resCount = readIntLittleEndian(in);
+
+for (int i = 0; i < resCount; i++) {
+  // Resulting data object
+  int resKeyTypeCode = readByteLittleEndian(in); // Integer type code
+  int resKey = readIntLittleEndian(in); // Cache key
+
+  // Resulting data object
+  int resValTypeCode = readByteLittleEndian(in); // Integer type code
+  int resValue = readIntLittleEndian(in); // Cache value
+}
+
+----
+--
+
+
+== OP_CACHE_PUT
+
+Puts a value with a given key to a cache (overwriting existing value if any).
+
+[cols="1,2",opts="header"]
+|===
+|Request Type  |  Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| Key for the cache entry.
+|Data Object| Value for the key.
+
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response Header
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_PUT, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_PUT_ALL
+
+Puts multiple key-value pairs to a cache (overwriting existing associations if any).
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|int| Key-value pair count
+|Key Data Object + Value Data Object| Key-value pairs.
+
+Repeat for as many times as the key-value pair count that is passed in the previous parameter.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(29, OP_CACHE_PUT_ALL, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Entry Count
+writeIntLittleEndian(2, out);
+
+// Cache key data object 1
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key1, out);   // Cache key
+
+// Cache value data object 1
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value1, out);   // Cache value
+
+// Cache key data object 2
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key2, out);   // Cache key
+
+// Cache value data object 2
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value2, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+----
+--
+
+
+== OP_CACHE_CONTAINS_KEY
+
+Returns a value indicating whether the given key is present in the cache.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| Key for the cache entry.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header | Response header.
+|bool  |  True when key is present, false otherwise.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(10, OP_CACHE_CONTAINS_KEY, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Result
+boolean res = readBooleanLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_CONTAINS_KEYS
+
+Returns a value indicating whether all the given keys are present in the cache.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|int| Key count.
+|Data Object |Key whose presence in the cache should be checked.
+
+Repeat for as many times as the key count that is passed in the previous parameter.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|bool|    True when keys are present, false otherwise.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(19, OP_CACHE_CONTAINS_KEYS, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+//Count
+writeIntLittleEndian(2, out);
+
+// Cache key data object 1
+int key1 = 11;
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key1, out);   // Cache key
+
+// Cache key data object 2
+int key2 = 22;
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key2, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting boolean value
+boolean res = readBooleanLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_GET_AND_PUT
+
+Puts a key and an associated value into a cache and returns the previous value for that key. If the cache does not contain the key, a new entry is created and null is returned.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key to be updated.
+|Data Object| The new value for the specified key.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |  Description
+|Header|  Response header.
+|Data Object| The existing value associated with the specified key, or null.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_GET_AND_PUT, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting cache value (Data Object)
+int resTypeCode = readByteLittleEndian(in);
+int value = readIntLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_GET_AND_REPLACE
+
+
+Replaces the value associated with the given key in the specified cache and returns the previous value. If the cache does not contain the key, the operation returns null without changing the cache.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type  |  Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key whose value is to be replaced.
+|Data Object| The new value to be associated with the specified key.
+
+|===
+
+[cols="1,2",opts="header"]
+|===
+| Response Type |  Description
+|Header|  Response header.
+|Data Object| The previous value associated with the given key, or null if the key does not exist.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_GET_AND_REPLACE, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting cache value (Data Object)
+int resTypeCode = readByteLittleEndian(in);
+int value = readIntLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_GET_AND_REMOVE
+
+Removes a specific entry from a cache and returns the entry's value. If the key does not exist, null is returned.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key to be removed.
+
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type  | Description
+|Header|  Response header.
+|Data Object| The existing value associated with the specified key or null, if the key does not exist.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(10, OP_CACHE_GET_AND_REMOVE, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting cache value (Data Object)
+int resTypeCode = readByteLittleEndian(in);
+int value = readIntLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_PUT_IF_ABSENT
+
+Puts an entry to a cache if that entry does not exist.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key of the entry to be added.
+|Data Object| The value of the key to be added.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|bool|    true if the new entry is created, false if the entry already exists.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_PUT_IF_ABSENT, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache Value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting boolean value
+boolean res = readBooleanLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_GET_AND_PUT_IF_ABSENT
+
+Puts an entry to a cache if it does not exist; otherwise, returns the existing value.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key of the entry to be added.
+|Data Object| The value of the entry to be added.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|Data Object| null if the cache does not contain the entry (in this case a new entry is created) or the existing value associated with the given key.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_GET_AND_PUT_IF_ABSENT, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting cache value (Data Object)
+int resTypeCode = readByteLittleEndian(in);
+int value = readIntLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_REPLACE
+
+Puts a value with a given key into a cache only if the cache already contains an entry with that key.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| Key for the cache entry.
+|Data Object| Value for the key.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|bool|    Value indicating whether replace happened.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_REPLACE, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+boolean res = readBooleanLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_REPLACE_IF_EQUALS
+
+Puts a value with a given key into a cache only if the key already exists and the current value equals the provided value.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| Key for the cache entry.
+|Data Object| Value to be compared with the existing value in the cache for the given key.
+|Data Object| New value for the key.
+|===
+
+[cols="1,2",opts="header"]
+|===
+| Response Type |   Description
+|Header|  Response header.
+|bool|    Value indicating whether replace happened.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(20, OP_CACHE_REPLACE_IF_EQUALS, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value to compare
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(newValue, out);   // New cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+boolean res = readBooleanLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_CLEAR
+
+Clears the cache without notifying listeners or cache writers. See the javadoc for the corresponding cache method.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type  | Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5, OP_CACHE_CLEAR, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_CLEAR_KEY
+
+Clears the entry with the given key from the cache without notifying listeners or cache writers. See the javadoc for the corresponding cache method.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| Key for the cache entry.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(10, OP_CACHE_CLEAR_KEY, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_CLEAR_KEYS
+
+Clears the entries with the given keys from the cache without notifying listeners or cache writers. See the javadoc for the corresponding cache method.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|int| Key count.
+|Data Object * count| Keys
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(19, OP_CACHE_CLEAR_KEYS, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// key count
+writeIntLittleEndian(2, out);
+
+// Cache key data object 1
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key1, out);   // Cache key
+
+// Cache key data object 2
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key2, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_REMOVE_KEY
+
+Removes an entry with a given key, notifying listeners and cache writers. See the javadoc for the corresponding cache method.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| Key for the cache entry.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|bool|    Value indicating whether remove happened.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(10, OP_CACHE_REMOVE_KEY, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key1, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting boolean value
+boolean res = readBooleanLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_REMOVE_IF_EQUALS
+
+Removes an entry with a given key if the specified value is equal to the current value, notifying listeners and cache writers.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type  |  Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|Data Object| The key of the entry to be removed.
+|Data Object| The value to be compared with the current value.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|bool|    Value indicating whether remove happened
+|===
+
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_CACHE_REMOVE_IF_EQUALS, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Cache key data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key, out);   // Cache key
+
+// Cache value data object
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(value, out);   // Cache value
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Resulting boolean value
+boolean res = readBooleanLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_GET_SIZE
+
+Gets the number of entries in a cache. This method is equivalent to `IgniteCache.size(CachePeekMode... peekModes)`.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|int| The number of peek modes you are going to request. When set to 0, CachePeekMode.ALL is used. When set to a positive value, you need to specify in the following fields the type of entries that should be counted: all, backup, primary, or near cache entries.
+|byte|    Indicates which type of entries should be counted: 0 = all, 1 = near cache entries, 2 = primary entries, 3 = backup entries.
+
+This field must be provided as many times as specified in the previous field.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|long|    Cache size.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(10, OP_CACHE_GET_SIZE, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// Peek mode count; '0' means All
+writeIntLittleEndian(0, out);
+
+// Peek mode
+writeByteLittleEndian(0, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Number of entries in cache
+long cacheSize = readLongLittleEndian(in);
+
+----
+--
+
+
+== OP_CACHE_REMOVE_KEYS
+
+Removes entries with given keys, notifying listeners and cache writers. See the javadoc for the corresponding cache method.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type  |  Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|int| Number of keys to remove.
+|Data Object| The key to be removed. If the cache does not contain the key, it is ignored.
+
+Repeat for as many times as the number of keys passed in the previous parameter.
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(19, OP_CACHE_REMOVE_KEYS, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+
+// key count
+writeIntLittleEndian(2, out);
+
+// Cache key data object 1
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key1, out);   // Cache key
+
+// Cache key data object 2
+writeByteLittleEndian(3, out);  // Integer type code
+writeIntLittleEndian(key2, out);   // Cache key
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+--
+
+
+== OP_CACHE_REMOVE_ALL
+
+Removes all entries from cache, notifying listeners and cache writers. See the javadoc for the corresponding cache method.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request Header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+
+|===
+
+[cols="1,2",opts="header"]
+|===
+|Response Type|   Description
+|Header|  Response header.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(5, OP_CACHE_REMOVE_ALL, 1, out);
+
+// Cache id
+writeIntLittleEndian(cacheName.hashCode(), out);
+
+// Flags = none
+writeByteLittleEndian(0, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response length
+final int len = readIntLittleEndian(in);
+
+// Request id
+long resReqId = readLongLittleEndian(in);
+
+// Status code (0 means success)
+int statusCode = readIntLittleEndian(in);
+
+----
+--
+
diff --git a/docs/_docs/binary-client-protocol/sql-and-scan-queries.adoc b/docs/_docs/binary-client-protocol/sql-and-scan-queries.adoc
new file mode 100644
index 0000000..168b5aa
--- /dev/null
+++ b/docs/_docs/binary-client-protocol/sql-and-scan-queries.adoc
@@ -0,0 +1,634 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= SQL and Scan Queries
+
+== Operation codes
+
+Upon a successful handshake with an Ignite server node, a client can start performing various SQL and scan queries by sending a request (see request/response structure below) with a specific operation code:
+
+
+[cols="2,1",opts="header"]
+|===
+|Operation |   OP_CODE
+|OP_QUERY_SQL|    2002
+|OP_QUERY_SQL_CURSOR_GET_PAGE|    2003
+|OP_QUERY_SQL_FIELDS| 2004
+|OP_QUERY_SQL_FIELDS_CURSOR_GET_PAGE| 2005
+|OP_QUERY_SCAN|   2000
+|OP_QUERY_SCAN_CURSOR_GET_PAGE|   2001
+|OP_RESOURCE_CLOSE|   0
+|===
+
+
+Note that the above-mentioned op_codes are part of the request header, as explained link:binary-client-protocol/binary-client-protocol#standard-message-header[here].
+
+[NOTE]
+====
+[discrete]
+=== Custom Methods Used in the Sample Code Snippets
+
+Some of the code snippets below use `readDataObject(...)` introduced in link:binary-client-protocol/binary-client-protocol#data-objects[this section] and little-endian versions of methods for reading and writing multiple-byte values that are covered in link:binary-client-protocol/binary-client-protocol#data-objects[this example].
+====
+
+
+== OP_QUERY_SQL
+
+Executes an SQL query over data stored in the cluster. The query returns the whole record (key and value).
+
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|String|  Name of a type or SQL table.
+|String|  SQL query string.
+|int| Query argument count.
+|Data Object| Query argument.
+
+Repeat for as many times as the query argument count that is passed in the previous parameter.
+|bool|    Distributed joins.
+|bool|    Local query.
+|bool|    Replicated only - Whether query contains only replicated tables or not.
+|int| Cursor page size.
+|long|    Timeout (milliseconds).
+
+Timeout value should be non-negative. Zero value disables timeout.
+|===
+
+
+Response includes the first page of the result.
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|long|    Cursor id. Can be closed with OP_RESOURCE_CLOSE.
+|int| Row count for the first page.
+|Key Data Object + Value Data Object| Records in the form of key-value pairs.
+
+Repeat for as many times as the row count obtained in the previous parameter.
+|bool|    Indicates whether more results are available to be fetched with OP_QUERY_SQL_CURSOR_GET_PAGE.
+When false, the query cursor is closed automatically.
+|===
+
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String entityName = "Person";
+int entityNameLength = getStrLen(entityName); // UTF-8 bytes
+
+String sql = "Select * from Person";
+int sqlLength = getStrLen(sql);
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(34 + entityNameLength + sqlLength, OP_QUERY_SQL, 1, out);
+
+// Cache id
+String queryCacheName = "personCache";
+writeIntLittleEndian(queryCacheName.hashCode(), out);
+
+// Flag = none
+writeByteLittleEndian(0, out);
+
+// Query Entity
+writeString(entityName, out);
+
+// SQL query
+writeString(sql, out);
+
+// Argument count
+writeIntLittleEndian(0, out);
+
+// Joins
+out.writeBoolean(false);
+
+// Local query
+out.writeBoolean(false);
+
+// Replicated
+out.writeBoolean(false);
+
+// cursor page size
+writeIntLittleEndian(1, out);
+
+// Timeout
+writeLongLittleEndian(5000, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+long cursorId = readLongLittleEndian(in);
+
+int rowCount = readIntLittleEndian(in);
+
+// Read entries (as user objects)
+for (int i = 0; i < rowCount; i++) {
+  Object key = readDataObject(in);
+  Object val = readDataObject(in);
+
+  System.out.println("CacheEntry: " + key + ", " + val);
+}
+
+boolean moreResults = readBooleanLittleEndian(in);
+
+----
+
+--
+
+
+
+== OP_QUERY_SQL_CURSOR_GET_PAGE
+
+Retrieves the next SQL query cursor page by cursor id from OP_QUERY_SQL.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|long|    Cursor id.
+|===
+
+
+Response format looks as follows:
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|long|    Cursor id.
+|int| Row count.
+|Key Data Object + Value Data Object| Records in the form of key-value pairs.
+
+Repeat for as many times as the row count obtained in the previous parameter.
+|bool|    Indicates whether more results are available to be fetched with OP_QUERY_SQL_CURSOR_GET_PAGE.
+When false, the query cursor is closed automatically.
+
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(8, OP_QUERY_SQL_CURSOR_GET_PAGE, 1, out);
+
+// Cursor Id (received from Sql query operation)
+writeLongLittleEndian(cursorId, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+int rowCount = readIntLittleEndian(in);
+
+// Read entries (as user objects)
+for (int i = 0; i < rowCount; i++){
+  Object key = readDataObject(in);
+  Object val = readDataObject(in);
+
+  System.out.println("CacheEntry: " + key + ", " + val);
+}
+
+boolean moreResults = readBooleanLittleEndian(in);
+
+----
+
+--
+
+
+== OP_QUERY_SQL_FIELDS
+
+Performs an SQL fields query.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Use 0. This field is deprecated and will be removed in the future.
+|String|  Schema for the query; can be null, in which case default PUBLIC schema will be used.
+|int| Query cursor page size.
+|int| Max rows.
+|String|  SQL
+|int| Argument count.
+|Data Object| Query argument.
+
+Repeat for as many times as the query argument count that is passed in the previous parameter.
+
+|byte|    Statement type.
+
+ANY = 0
+
+SELECT = 1
+
+UPDATE = 2
+
+|bool|    Distributed joins
+|bool|    Local query.
+|bool|    Replicated only - Whether query contains only replicated tables or not.
+|bool|    Enforce join order.
+|bool|    Collocated - Whether your data is co-located or not.
+|bool|    Lazy query execution.
+|long|    Timeout (milliseconds).
+|bool|    Include field names.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|long|    Cursor id. Can be closed with OP_RESOURCE_CLOSE.
+|int| Field (column) count.
+|String (optional)|   Column name. Included only when the "Include field names" flag is true in the request.
+
+Repeat for as many times as the field count that is retrieved in the previous parameter.
+
+|int| First page row count.
+|Data Object| Column (field) value. Repeat for as many times as the field count.
+
+Repeat for as many times as the row count that is retrieved in the previous parameter.
+|bool|    Indicates whether more results are available to be retrieved with OP_QUERY_SQL_FIELDS_CURSOR_GET_PAGE.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+String sql = "Select id, salary from Person";
+int sqlLength = sql.getBytes("UTF-8").length;
+
+String sqlSchema = "PUBLIC";
+int sqlSchemaLength = sqlSchema.getBytes("UTF-8").length;
+
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(43 + sqlLength + sqlSchemaLength, OP_QUERY_SQL_FIELDS, 1, out);
+
+// Cache id
+String queryCacheName = "personCache";
+int cacheId = queryCacheName.hashCode();
+writeIntLittleEndian(cacheId, out);
+
+// Flag = none
+writeByteLittleEndian(0, out);
+
+// Schema
+writeByteLittleEndian(9, out);
+writeIntLittleEndian(sqlSchemaLength, out);
+out.writeBytes(sqlSchema); //sqlSchemaLength
+
+// cursor page size
+writeIntLittleEndian(2, out);
+
+// Max Rows
+writeIntLittleEndian(5, out);
+
+// SQL query
+writeByteLittleEndian(9, out);
+writeIntLittleEndian(sqlLength, out);
+out.writeBytes(sql);//sqlLength
+
+// Argument count
+writeIntLittleEndian(0, out);
+
+// Statement type
+writeByteLittleEndian(1, out);
+
+// Joins
+out.writeBoolean(false);
+
+// Local query
+out.writeBoolean(false);
+
+// Replicated
+out.writeBoolean(false);
+
+// Enforce join order
+out.writeBoolean(false);
+
+// collocated
+out.writeBoolean(false);
+
+// Lazy
+out.writeBoolean(false);
+
+// Timeout
+writeLongLittleEndian(5000, out);
+
+// Include field names
+out.writeBoolean(false);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+long cursorId = readLongLittleEndian(in);
+
+int colCount = readIntLittleEndian(in);
+
+int rowCount = readIntLittleEndian(in);
+
+// Read entries
+for (int i = 0; i < rowCount; i++) {
+  long id = (long) readDataObject(in);
+  int salary = (int) readDataObject(in);
+
+  System.out.println("Person id: " + id + "; Person Salary: " + salary);
+}
+
+boolean moreResults = readBooleanLittleEndian(in);
+
+----
+
+--
+
+
+== OP_QUERY_SQL_FIELDS_CURSOR_GET_PAGE
+
+Retrieves the next query result page by cursor id from OP_QUERY_SQL_FIELDS.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type|    Description
+|Header|  Request header.
+|long|    Cursor id received from OP_QUERY_SQL_FIELDS
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|int| Row count.
+|Data Object| Column (field) value. Repeat for as many times as the field count.
+
+Repeat for as many times as the row count that is retrieved in the previous parameter.
+|bool|    Indicates whether more results are available to be retrieved with OP_QUERY_SQL_FIELDS_CURSOR_GET_PAGE
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(8, OP_QUERY_SQL_FIELDS_CURSOR_GET_PAGE, 1, out);
+
+// Cursor Id
+writeLongLittleEndian(1, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+int rowCount = readIntLittleEndian(in);
+
+// Read entries (as user objects)
+for (int i = 0; i < rowCount; i++){
+   // read data objects * column count.
+}
+
+boolean moreResults = readBooleanLittleEndian(in);
+
+----
+
+--
+
+
+== OP_QUERY_SCAN
+
+Performs a scan query.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|int| Cache ID: Java-style hash code of the cache name.
+|byte|    Flag. Pass 0 for default, or 1 to keep the value in binary form.
+|Data Object| Filter object. Can be null if you are not going to filter data on the cluster. The filter class has to be added to the classpath of the server nodes.
+|byte|    Filter platform:
+
+JAVA = 1
+
+DOTNET = 2
+
+CPP = 3
+
+Pass this parameter only if filter object is not null.
+|int| Cursor page size.
+|int| Number of partitions to query (negative to query entire cache).
+|bool|    Local flag - whether this query should be executed on local node only.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|long|    Cursor id.
+|int| Row count.
+|Key Data Object + Value Data Object| Records in the form of key-value pairs.
+
+Repeat for as many times as the row count obtained in the previous parameter.
+|bool|    Indicates whether more results are available to be fetched with OP_QUERY_SCAN_CURSOR_GET_PAGE.
+When false, the query cursor is closed automatically.
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(15, OP_QUERY_SCAN, 1, out);
+
+// Cache id
+String queryCacheName = "personCache";
+writeIntLittleEndian(queryCacheName.hashCode(), out);
+
+// flags
+writeByteLittleEndian(0, out);
+
+// Filter Object
+writeByteLittleEndian(101, out); // null
+
+// Cursor page size
+writeIntLittleEndian(1, out);
+
+// Partition to query
+writeIntLittleEndian(-1, out);
+
+// local flag
+out.writeBoolean(false);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+//Response header
+readResponseHeader(in);
+
+// Cursor id
+long cursorId = readLongLittleEndian(in);
+
+int rowCount = readIntLittleEndian(in);
+
+// Read entries (as user objects)
+for (int i = 0; i < rowCount; i++) {
+  Object key = readDataObject(in);
+  Object val = readDataObject(in);
+
+  System.out.println("CacheEntry: " + key + ", " + val);
+}
+
+boolean moreResults = readBooleanLittleEndian(in);
+
+----
+
+--
+
+
+== OP_QUERY_SCAN_CURSOR_GET_PAGE
+
+
+Fetches the next scan query cursor page by cursor id that is obtained from OP_QUERY_SCAN.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|long|    Cursor id.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header.
+|long|    Cursor id.
+|long|    Row count.
+|Key Data Object + Value Data Object | Records in the form of key-value pairs.
+
+Repeat for as many times as the row count obtained in the previous parameter.
+|bool|    Indicates whether more results are available to be fetched with OP_QUERY_SCAN_CURSOR_GET_PAGE.
+When false, the query cursor is closed automatically.
+|===
+
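+The following is a minimal request/response sketch in the same style as the other operations, writing and reading the fields exactly as listed in the tables above and reusing the helper methods introduced earlier; treat it as illustrative rather than authoritative.
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(8, OP_QUERY_SCAN_CURSOR_GET_PAGE, 1, out);
+
+// Cursor id (received from the OP_QUERY_SCAN response)
+writeLongLittleEndian(cursorId, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+// Cursor id
+long cursorId = readLongLittleEndian(in);
+
+// Row count
+long rowCount = readLongLittleEndian(in);
+
+// Read entries (as user objects)
+for (long i = 0; i < rowCount; i++) {
+  Object key = readDataObject(in);
+  Object val = readDataObject(in);
+
+  System.out.println("CacheEntry: " + key + ", " + val);
+}
+
+boolean moreResults = readBooleanLittleEndian(in);
+----
+--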
+
+== OP_RESOURCE_CLOSE
+
+Closes a resource, such as a query cursor.
+
+[cols="1,2",opts="header"]
+|===
+|Request Type |    Description
+|Header|  Request header.
+|long|    Resource id.
+|===
+
+
+[cols="1,2",opts="header"]
+|===
+|Response Type |   Description
+|Header|  Response header
+|===
+
+[tabs]
+--
+tab:Request[]
+
+[source, java]
+----
+DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+
+// Request header
+writeRequestHeader(8, OP_RESOURCE_CLOSE, 1, out);
+
+// Resource id
+long cursorId = 1;
+writeLongLittleEndian(cursorId, out);
+----
+
+tab:Response[]
+
+[source, java]
+----
+// Read result
+DataInputStream in = new DataInputStream(socket.getInputStream());
+
+// Response header
+readResponseHeader(in);
+
+----
+
+--
+
diff --git a/docs/_docs/clustering/baseline-topology.adoc b/docs/_docs/clustering/baseline-topology.adoc
new file mode 100644
index 0000000..4245dc7
--- /dev/null
+++ b/docs/_docs/clustering/baseline-topology.adoc
@@ -0,0 +1,159 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Baseline Topology
+
+:javaFile: {javaCodeDir}/ClusterAPI.java
+:csharpFile: {csharpCodeDir}/BaselineTopology.cs
+
+The _baseline topology_ is a set of nodes meant to hold data.
+The concept of baseline topology was introduced to give you the ability to control when you want to
+link:data-modeling/data-partitioning#rebalancing[rebalance the data in the cluster]. For example, if
+you have a cluster of 3 nodes where the data is distributed between the nodes, and you add 2 more nodes, the rebalancing
+process re-distributes the data between all 5 nodes. The rebalancing process happens when the
+baseline topology changes, which can either happen automatically or be triggered manually.
+
+The baseline topology only includes server nodes; client nodes are never included because they do not store data.
+
+The purpose of the baseline topology is to:
+
+* Avoid unnecessary data transfer when a server node leaves the cluster for a short period of time, for example, due to
+occasional network failures or scheduled server maintenance.
+* Give you the ability to control when you want to rebalance the data.
+
+Baseline topology changes automatically when <<Baseline Topology Autoadjustment>> is enabled. This is the default
+behavior for pure in-memory clusters. For persistent clusters, the baseline topology autoadjustment feature must be enabled
+manually. By default, it is disabled and you have to change the baseline topology manually. You can change the baseline
+topology using the link:control-script#activation-deactivation-and-topology-management[control script].
+
+[CAUTION]
+====
+Any attempt to create a cache while the baseline topology is being changed results in an exception.
+For more details, see link:key-value-api/basic-cache-operations#creating-caches-dynamically[Creating Caches Dynamically].
+====
+
+== Baseline Topology in Pure In-Memory Clusters
+In pure in-memory clusters, the default behavior is to adjust the baseline topology to the set of all server nodes
+automatically when you add or remove server nodes from the cluster. The data is rebalanced automatically, too.
+You can disable the baseline autoadjustment feature and manage baseline topology manually.
+
+NOTE: In previous releases, baseline topology was relevant only to clusters with persistence.
+However, since version 2.8.0, it applies to in-memory clusters as well.
+If you have a pure in-memory cluster, the transition should be transparent for you because, by default, the baseline topology changes automatically when a server node leaves or joins the cluster.
+
+== Baseline Topology in Persistent Clusters
+
+If your cluster has at least one data region in which persistence is enabled, the cluster is inactive when you start it for the first time.
+In the inactive state, all operations are prohibited.
+The cluster must be activated before you can create caches and upload data.
+Cluster activation sets the current set of server nodes as the baseline topology.
+When you restart the cluster, it is activated automatically as soon as all nodes that are registered in the baseline topology join in.
+However, if some nodes do not join after a restart, you must activate the cluster manually.
+
+You can activate the cluster using one of the following tools:
+
+* link:control-script#activating-cluster[Control script]
+* link:restapi#change-cluster-state[REST API command]
+* Programmatically:
++
+[tabs]
+--
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=activate,indent=0]
+----
+
+tab:C#/.NET[]
+[source, csharp]
+----
+include::{csharpFile}[tags=activate,indent=0]
+----
+tab:C++[]
+--
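+
+If you prefer a self-contained illustration of the programmatic option, the sketch below uses the public `ClusterState`/`IgniteCluster#state` API available since Ignite 2.9; the snippet referenced above may differ in details.
+
+[source, java]
+----
+// Minimal sketch: activate the cluster programmatically (assumes the Ignite 2.9+ public API).
+Ignite ignite = Ignition.start();
+
+// Move the cluster from the INACTIVE to the ACTIVE state.
+ignite.cluster().state(ClusterState.ACTIVE);
+----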
+
+== Baseline Topology Autoadjustment
+
+Instead of changing the baseline topology manually, you can let the cluster do it automatically. This feature is called
+Baseline Topology Autoadjustment. When it is enabled, the cluster monitors the state of its server nodes and sets the
+baseline on the current topology automatically when the cluster topology is stable for a configurable period of time.
+
+Here is what happens when the set of nodes in the cluster changes:
+
+* The cluster waits for a configurable amount of time (5 min by default).
+* If there are no other topology changes during this period, Ignite sets the baseline topology to the current set of nodes.
+* If the set of nodes changes during this period, the timeout is updated.
+
+Each change in the set of nodes resets the timeout for autoadjustment.
+When the timeout expires and the current set of nodes is different from the baseline topology (for example, new nodes
+are present or some old nodes left), Ignite changes the baseline topology to the current set of nodes.
+This also triggers data rebalancing.
+
+The autoadjustment timeout allows you to avoid data rebalancing when a node disconnects for a short period due to a
+temporary network problem or when you want to quickly restart the node.
+You can set the timeout to a higher value if you expect temporary changes in the set of nodes and don't want to change
+the baseline topology.
+
+Baseline topology is autoadjusted only if the cluster is in the active state.
+
+To enable automatic baseline adjustment, you can use the
+link:control-script#enabling-baseline-topology-autoadjustment[control script] or the
+programmatic API methods shown below:
+
+[tabs]
+--
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=enable-autoadjustment,indent=0]
+----
+
+tab:C#/.NET[]
+[source, csharp]
+----
+include::{csharpFile}[tags=enable-autoadjustment,indent=0]
+----
+tab:C++[]
+--
+
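+As a self-contained sketch of what these calls look like, the example below uses the `IgniteCluster#baselineAutoAdjustEnabled` and `IgniteCluster#baselineAutoAdjustTimeout` methods; the snippets referenced above may differ in details.
+
+[source, java]
+----
+Ignite ignite = Ignition.start();
+
+// Let the baseline follow topology changes automatically.
+ignite.cluster().baselineAutoAdjustEnabled(true);
+
+// Wait for 30 seconds of topology stability before adjusting the baseline.
+ignite.cluster().baselineAutoAdjustTimeout(30_000);
+----
+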
+
+To disable automatic baseline adjustment, use the same method with `false` passed in:
+
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=disable-autoadjustment,indent=0]
+----
+
+tab:C#/.NET[]
+[source, csharp]
+----
+include::{csharpFile}[tags=disable-autoadjustment,indent=0]
+----
+tab:C++[]
+--
+
+
+== Monitoring Baseline Topology
+
+You can use the following tools to monitor and/or manage the baseline topology:
+
+* link:control-script[Control Script]
+* link:monitoring-metrics/metrics#monitoring-topology[JMX Beans]
+
diff --git a/docs/_docs/clustering/clustering.adoc b/docs/_docs/clustering/clustering.adoc
new file mode 100644
index 0000000..8496a3c
--- /dev/null
+++ b/docs/_docs/clustering/clustering.adoc
@@ -0,0 +1,51 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Clustering
+
+== Overview
+
+In this chapter, we discuss different ways nodes can discover each other to form a cluster.
+
+On start-up, a node is assigned one of two roles: _server node_ or _client node_.
+Server nodes are the workhorses of the cluster; they cache data, execute compute tasks, etc.
+Client nodes join the topology as regular nodes but they do not store data. Client nodes are used to stream data into the cluster and execute user queries.
+
+To form a cluster, each node must be able to connect to all other nodes. To ensure that, a proper <<Discovery Mechanisms,discovery mechanism>> must be configured.
+
+
+NOTE: In addition to client nodes, you can use Thin Clients to define and manipulate data in the cluster.
+Learn more about the thin clients in the link:thin-clients/getting-started-with-thin-clients[Thin Clients] section.
+
+
+image::images/ignite_clustering.png[Ignite Cluster]
+
+
+
+== Discovery Mechanisms
+
+Nodes can automatically discover each other and form a cluster.
+This allows you to scale out when needed without having to restart the whole cluster.
+Developers can also leverage Ignite's hybrid cloud support, which allows establishing connections between private and public clouds such as Amazon Web Services, providing them with the best of both worlds.
+
+Ignite provides two implementations of the discovery mechanism intended for different usage scenarios:
+
+* link:clustering/tcp-ip-discovery[TCP/IP Discovery] is designed and optimized for 100s of nodes.
+* link:clustering/zookeeper-discovery[ZooKeeper Discovery] is designed to scale Ignite clusters to 100s and 1000s of nodes while preserving linear scalability and performance.
+
+
+
+
+
+
diff --git a/docs/_docs/clustering/connect-client-nodes.adoc b/docs/_docs/clustering/connect-client-nodes.adoc
new file mode 100644
index 0000000..7373ed7
--- /dev/null
+++ b/docs/_docs/clustering/connect-client-nodes.adoc
@@ -0,0 +1,106 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Connecting Client Nodes
+:javaFile: {javaCodeDir}/ClientNodes.java
+
+
+== Reconnecting a Client Node
+
+A client node can get disconnected from the cluster in several cases:
+
+* The client node cannot re-establish the connection with the server node due to network issues.
+* Connection with the server node was broken for some time; the client node is able to re-establish the connection with the cluster, but the server already dropped the client node since the server did not receive client heartbeats.
+* Slow clients can be kicked out by the cluster.
+
+
+When a client determines that it is disconnected from the cluster, it assigns a new node ID to itself and tries to reconnect to the cluster.
+Note that this has a side effect: the ID property of the local `ClusterNode` changes in the case of a client reconnection.
+This means that any application logic that relied on the ID may be affected.
+
+You can disable client reconnection in the node configuration:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::code-snippets/xml/client-node.xml[tags=ignite-config, indent=0]
+----
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=disable-reconnection, indent=0]
+----
+tab:C#/.NET[]
+tab:C++[unsupported]
+--
+
+
+While a client is in a disconnected state and an attempt to reconnect is in progress, the Ignite API throws an `IgniteClientDisconnectedException`.
+The exception contains a `future` that represents a re-connection operation.
+You can use the `future` to wait until the operation is complete.
+//This future can also be obtained using the `IgniteCluster.clientReconnectFuture()` method.
+
+[tabs]
+--
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=reconnect, indent=0]
+----
+tab:C#/.NET[]
+tab:C++[]
+--
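+
+As a self-contained illustration (the included snippet above may differ in details), a typical pattern is to catch the wrapping `CacheException`, unwrap the `IgniteClientDisconnectedException`, and block on its reconnect future:
+
+[source, java]
+----
+try {
+    cache.put(1, "value");
+}
+catch (CacheException e) {
+    if (e.getCause() instanceof IgniteClientDisconnectedException) {
+        IgniteClientDisconnectedException cause = (IgniteClientDisconnectedException) e.getCause();
+
+        // Wait until the client node reconnects to the cluster, then retry the operation.
+        cause.reconnectFuture().get();
+
+        cache.put(1, "value");
+    }
+}
+----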
+
+//When the client node reconnects to the cluster,
+//This future can also be obtained using the `IgniteCluster.clientReconnectFuture()` method.
+
+
+== Client Disconnected/Reconnected Events
+
+There are two discovery events that are triggered on the client node when it is disconnected from or reconnected to the cluster:
+
+* `EVT_CLIENT_NODE_DISCONNECTED`
+* `EVT_CLIENT_NODE_RECONNECTED`
+
+You can listen to these events and execute custom actions in response.
+Refer to the link:events/listening-to-events[Listening to events] section for a code example.
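+
+As a quick sketch (the linked section covers the details, including that event types generally must be enabled via `IgniteConfiguration#setIncludeEventTypes` before they are fired), a local listener for both events might look like this:
+
+[source, java]
+----
+Ignite ignite = Ignition.start();
+
+// React to the client node being disconnected from or reconnected to the cluster.
+ignite.events().localListen(evt -> {
+    System.out.println("Discovery event: " + evt.name());
+
+    return true; // Keep listening for further events.
+}, EventType.EVT_CLIENT_NODE_DISCONNECTED, EventType.EVT_CLIENT_NODE_RECONNECTED);
+----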
+
+== Managing Slow Client Nodes
+
+In many deployments, client nodes are launched on slower machines with lower network throughput.
+In these scenarios, it is possible that the servers will generate load (such as continuous query notifications) that the clients cannot handle.
+This can result in a growing queue of outbound messages on the servers, which may eventually cause either an out-of-memory situation on the server or block the whole cluster.
+
+To handle these situations, you can configure the maximum number of outgoing messages for client nodes.
+If the size of the outbound queue exceeds this value, the client node is disconnected from the cluster.
+
+The examples below show how to configure a slow client queue limit.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::code-snippets/xml/client-node.xml[tags=!*;ignite-config;slow-client, indent=0]
+----
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=slow-clients, indent=0]
+----
+tab:C#/.NET[]
+tab:C++[unsupported]
+--
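+
+Under the hood, these snippets configure the communication SPI. A minimal, self-contained sketch, assuming the `TcpCommunicationSpi#setSlowClientQueueLimit` property is the one being set (the included snippets may differ in details):
+
+[source, java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
+
+// Disconnect a client node if its outbound message queue grows beyond 1000 messages.
+commSpi.setSlowClientQueueLimit(1000);
+
+cfg.setCommunicationSpi(commSpi);
+
+Ignition.start(cfg);
+----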
diff --git a/docs/_docs/clustering/discovery-in-the-cloud.adoc b/docs/_docs/clustering/discovery-in-the-cloud.adoc
new file mode 100644
index 0000000..6372015
--- /dev/null
+++ b/docs/_docs/clustering/discovery-in-the-cloud.adoc
@@ -0,0 +1,270 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Discovery in the Cloud
+
+:javaFile: {javaCodeDir}/DiscoveryInTheCloud.java
+
+Node discovery on a cloud platform is usually more challenging because
+most virtual environments are subject to the following limitations:
+
+* Multicast is disabled;
+* TCP addresses change every time a new image is started.
+
+Although you can use TCP-based discovery in the absence of multicast,
+you still have to deal with constantly changing IP addresses.
+This is a serious inconvenience and makes configurations based on
+static IPs virtually unusable in such environments.
+
+To mitigate the constantly changing IP addresses problem, Ignite supports a number of IP finders designed to work in the cloud:
+
+* Apache jclouds IP Finder
+* Amazon S3 IP Finder
+* Amazon ELB IP Finder
+* Google Cloud Storage IP Finder
+
+
+TIP: Cloud-based IP Finders allow you to create your configuration once and reuse it for all instances.
+
+== Apache jclouds IP Finder
+
+To mitigate the constantly changing IP addresses problem, Ignite supports automatic node discovery by utilizing Apache jclouds multi-cloud toolkit via `TcpDiscoveryCloudIpFinder`.
+For information about Apache jclouds please refer to https://jclouds.apache.org[jclouds.apache.org].
+
+The IP finder forms node addresses by getting the private and public IP addresses of all virtual machines running on the cloud and adding a port number to them.
+The port is the one that is set with either `TcpDiscoverySpi.setLocalPort(int)` or `TcpDiscoverySpi.DFLT_PORT`.
+This way all the nodes can try to connect to any formed IP address and initiate automatic grid node discovery.
+
+Refer to https://jclouds.apache.org/reference/providers/#compute[Apache jclouds providers section] to get the list of supported cloud platforms.
+
+CAUTION: All virtual machines must start Ignite instances on the same port, otherwise they will not be able to discover each other using this IP finder.
+
+Here is an example of how to configure Apache jclouds based IP finder:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+  <property name="discoverySpi">
+    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+      <property name="ipFinder">
+        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.cloud.TcpDiscoveryCloudIpFinder">
+            <!-- Configuration for Google Compute Engine. -->
+            <property name="provider" value="google-compute-engine"/>
+            <property name="identity" value="YOUR_SERVICE_ACCOUNT_EMAIL"/>
+            <property name="credentialPath" value="PATH_YOUR_PEM_FILE"/>
+            <property name="zones">
+            <list>
+                <value>us-central1-a</value>
+                <value>asia-east1-a</value>
+            </list>
+            </property>
+        </bean>
+      </property>
+    </bean>
+  </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=jclouds,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+
+== Amazon S3 IP Finder
+
+Amazon S3-based discovery allows Ignite nodes to register their IP addresses on start-up in an Amazon S3 store.
+This way other nodes can try to connect to any of the IP addresses stored in S3 and initiate automatic node discovery.
+To use S3-based automatic node discovery, you need to configure the `TcpDiscoveryS3IpFinder` type of `ipFinder`.
+
+CAUTION: You must link:setup#enabling-modules[enable the 'ignite-aws' module].
+
+Here is an example of how to configure Amazon S3 based IP finder:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+  <property name="discoverySpi">
+    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+      <property name="ipFinder">
+        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
+          <property name="awsCredentials" ref="aws.creds"/>
+          <property name="bucketName" value="YOUR_BUCKET_NAME"/>
+        </bean>
+      </property>
+    </bean>
+  </property>
+</bean>
+
+<!-- AWS credentials. Provide your access key ID and secret access key. -->
+<bean id="aws.creds" class="com.amazonaws.auth.BasicAWSCredentials">
+  <constructor-arg value="YOUR_ACCESS_KEY_ID" />
+  <constructor-arg value="YOUR_SECRET_ACCESS_KEY" />
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=aws1,indent=0]
+----
+
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+You can also use *Instance Profile* for AWS credentials provider.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+  <property name="discoverySpi">
+    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+      <property name="ipFinder">
+        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
+          <property name="awsCredentialsProvider" ref="aws.creds"/>
+          <property name="bucketName" value="YOUR_BUCKET_NAME"/>
+        </bean>
+      </property>
+    </bean>
+  </property>
+</bean>
+
+<!-- Instance Profile based credentials -->
+<bean id="aws.creds" class="com.amazonaws.auth.InstanceProfileCredentialsProvider">
+  <constructor-arg value="false" />
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=aws2,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+
+== Amazon ELB Based Discovery
+
+AWS ELB-based IP finder does not require nodes to register their IP
+addresses. The IP finder automatically fetches addresses of all the
+nodes connected under an ELB and uses them to connect to the cluster. To
+use ELB based automatic node discovery, you need to configure the
+`TcpDiscoveryElbIpFinder` type of `ipFinder`.
+
+Here is an example of how to configure Amazon ELB based IP finder:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+  <property name="discoverySpi">
+    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+      <property name="ipFinder">
+        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.elb.TcpDiscoveryElbIpFinder">
+          <property name="credentialsProvider">
+              <bean class="com.amazonaws.auth.AWSStaticCredentialsProvider">
+                  <constructor-arg ref="aws.creds"/>
+              </bean>
+          </property>
+          <property name="region" value="YOUR_ELB_REGION_NAME"/>
+          <property name="loadBalancerName" value="YOUR_AWS_ELB_NAME"/>
+        </bean>
+      </property>
+    </bean>
+  </property>
+</bean>
+
+<!-- AWS credentials. Provide your access key ID and secret access key. -->
+<bean id="aws.creds" class="com.amazonaws.auth.BasicAWSCredentials">
+  <constructor-arg value="YOUR_ACCESS_KEY_ID" />
+  <constructor-arg value="YOUR_SECRET_ACCESS_KEY" />
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=awsElb,indent=0]
+----
+
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+
+== Google Compute Discovery
+
+Ignite supports automatic node discovery by utilizing Google Cloud Storage store.
+This mechanism is implemented in `TcpDiscoveryGoogleStorageIpFinder`.
+On start-up, each node registers its IP address in the storage and discovers other nodes by reading the storage.
+
+IMPORTANT: To use `TcpDiscoveryGoogleStorageIpFinder`, enable the `ignite-gce` link:setup#enabling-modules[module] in your application.
+
+Here is an example of how to configure Google Cloud Storage based IP finder:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+  <property name="discoverySpi">
+    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+      <property name="ipFinder">
+        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.gce.TcpDiscoveryGoogleStorageIpFinder">
+          <property name="projectName" value="YOUR_GOOGLE_PLATFORM_PROJECT_NAME"/>
+          <property name="bucketName" value="YOUR_BUCKET_NAME"/>
+          <property name="serviceAccountId" value="YOUR_SERVICE_ACCOUNT_ID"/>
+          <property name="serviceAccountP12FilePath" value="PATH_TO_YOUR_PKCS12_KEY"/>
+        </bean>
+      </property>
+    </bean>
+  </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=google,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
diff --git a/docs/_docs/clustering/network-configuration.adoc b/docs/_docs/clustering/network-configuration.adoc
new file mode 100644
index 0000000..8d47b60
--- /dev/null
+++ b/docs/_docs/clustering/network-configuration.adoc
@@ -0,0 +1,198 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Network Configuration
+:javaFile: {javaCodeDir}/NetworkConfiguration.java
+:xmlFile: code-snippets/xml/network-configuration.xml
+
+== IPv4 vs IPv6
+
+Ignite tries to support both IPv4 and IPv6, which can sometimes lead to issues where the cluster becomes detached. A possible solution, unless you require IPv6, is to restrict Ignite to IPv4 by setting the `-Djava.net.preferIPv4Stack=true` JVM parameter.
+
+
+== Discovery
+This section describes the network parameters of the default discovery mechanism, which uses the TCP/IP protocol to exchange discovery messages and is implemented in the `TcpDiscoverySpi` class.
+
+You can change the properties of the discovery mechanism as follows:
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;discovery, indent=0]
+----
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=discovery, indent=0]
+
+----
+
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+
+--
+
+The following table describes some of the most important properties of `TcpDiscoverySpi`.
+You can find the complete list of properties in the javadoc:org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi[] javadoc.
+
+[CAUTION]
+====
+You should initialize the `IgniteConfiguration.localHost` or `TcpDiscoverySpi.localAddress` parameter with the network
+interface that will be used for inter-node communication. By default, a node binds to and listens on all available IP
+addresses of the environment it is running on. This can prolong the detection of the node's failure if some of its
+addresses are not reachable from other cluster nodes.
+====
+
+[cols="1,2,1",opts="header"]
+|===
+|Property | Description| Default Value
+| `localAddress`| Local host IP address used for discovery. If set, overrides the `IgniteConfiguration.localHost` setting. | By default, a node binds to all available network addresses. If there is a non-loopback address available, then `java.net.InetAddress.getLocalHost()` is used.
+| `localPort`  | The port that the node binds to. If set to a non-default value, other cluster nodes must know this port to be able to discover the node. | `47500`
+| `localPortRange`| If the `localPort` is busy, the node attempts to bind to the next port (incremented by 1) and continues this process until it finds a free port. The `localPortRange` property defines the number of ports the node will try (starting from `localPort`).
+   | `100`
+| `soLinger`| Specifies a linger-on-close timeout of TCP sockets used by Discovery SPI. See Java `Socket.setSoLinger` API
+for details on how to adjust this setting. In Ignite, the timeout defaults to a non-negative value to prevent
+link:https://bugs.openjdk.java.net/browse/JDK-8219658[potential deadlocks with SSL connections, window=_blank] but,
+as a side effect, this can prolong the detection of cluster node failures. Alternatively, update your JRE to a version
+that has the SSL issue fixed and adjust this setting accordingly. | `0`
+| `reconnectCount` | The number of times the node tries to (re)establish connection to another node. |`10`
+| `networkTimeout` |  The maximum network timeout in milliseconds for network operations. |`5000`
+| `socketTimeout` |  The socket operations timeout. This timeout is used to limit connection time and write-to-socket time. |`5000`
+| `ackTimeout`| The acknowledgement timeout for discovery messages.
+If an acknowledgement is not received within this timeout, the discovery SPI tries to resend the message.  |  `5000`
+| `joinTimeout` |  The join timeout defines how much time the node waits to join a cluster. If a non-shared IP finder is used and the node fails to connect to any address from the IP finder, the node keeps trying to join within this timeout. If all addresses are unresponsive, an exception is thrown and the node terminates.
+`0` means waiting indefinitely.  | `0`
+| `statisticsPrintFrequency` | Defines how often the node prints discovery statistics to the log.
+`0` indicates no printing. If the value is greater than 0 and quiet mode is disabled, statistics are printed at INFO level at the specified frequency. | `0`
+
+|===
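+
+For quick reference, here is a minimal Java sketch that sets several of the properties listed above through the public `TcpDiscoverySpi` setters. The class name and the values are illustrative, not recommendations:
+
+[source,java]
+----
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
+
+public class DiscoveryTuning {
+    public static void main(String[] args) {
+        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
+
+        // Values below are illustrative; see the table above for semantics and defaults.
+        discoverySpi.setLocalPort(47500);
+        discoverySpi.setLocalPortRange(100);
+        discoverySpi.setNetworkTimeout(5000);
+        discoverySpi.setJoinTimeout(0);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        cfg.setDiscoverySpi(discoverySpi);
+
+        Ignition.start(cfg);
+    }
+}
+----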
+
+
+
+== Communication
+
+After the nodes discover each other and the cluster is formed, the nodes exchange messages via the communication SPI.
+The messages represent distributed cluster operations, such as task execution, data modification operations, queries, etc.
+The default implementation of the communication SPI uses the TCP/IP protocol to exchange messages (`TcpCommunicationSpi`).
+This section describes the properties of `TcpCommunicationSpi`.
+
+Each node opens a local communication port and address to which other nodes connect and send messages.
+At startup, the node tries to bind to the specified communication port (default is 47100).
+If the port is already used, the node increments the port number until it finds a free port.
+The number of attempts is defined by the `localPortRange` property (defaults to 100).
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::{xmlFile}[tags=!*;ignite-config;communication-spi, indent=0]
+----
+
+tab:Java[]
+[source, java]
+----
+include::{javaCodeDir}/ClusteringOverview.java[tag=commSpi,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ClusteringOverview.cs[tag=CommunicationSPI,indent=0]
+----
+tab:C++[unsupported]
+--
+
+Below is a list of some important properties of `TcpCommunicationSpi`.
+You can find the list of all properties in the javadoc:org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi[] javadoc.
+
+[cols="1,2,1",opts="header"]
+|===
+|Property | Description| Default Value
+| `localAddress` | The local address for the communication SPI to bind to. If set, overrides the `IgniteConfiguration.localHost` setting. |
+
+| `localPort` | The local port that the node uses for communication.  | `47100`
+
+| `localPortRange` | The range of ports the node tries to bind to sequentially until it finds a free one. |  `100`
+
+|`tcpNoDelay` | Sets the value for the `TCP_NODELAY` socket option. Each socket accepted or created will use the provided value.
+
+The option should be set to `true` (default) to reduce request/response time during communication over TCP. In most cases we do not recommend changing this option.| `true`
+
+|`idleConnectionTimeout` | The maximum idle connection timeout (in milliseconds) after which the connection is closed. |  `600000`
+
+|`usePairedConnections` | Whether dual socket connection between the nodes should be enforced. If set to `true`, two separate connections will be established between the communicating nodes: one for outgoing messages, and one for incoming messages. When set to `false`, a single TCP connection will be used for both directions.
+This flag is useful on some operating systems when messages take too long to be delivered.   | `false`
+
+| `directBuffer` | A boolean flag that indicates whether to allocate NIO direct buffer instead of NIO heap allocation buffer. Although direct buffers perform better, in some cases (especially on Windows) they may cause JVM crashes. If that happens in your environment, set this property to `false`.   | `true`
+
+|`directSendBuffer` | Whether to use NIO direct buffer instead of NIO heap allocation buffer when sending messages.   | `false`
+
+|`socketReceiveBuffer`| Receive buffer size for sockets created or accepted by the communication SPI. If set to `0`,   the operating system's default value is used. | `0`
+
+|`socketSendBuffer` | Send buffer size for sockets created or accepted by the communication SPI. If set to `0` the  operating system's default value is used. | `0`
+
+|===
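+
+The same properties can be set programmatically. Below is a minimal Java sketch (class name and values are illustrative) that configures `TcpCommunicationSpi` via its public setters:
+
+[source,java]
+----
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
+
+public class CommunicationTuning {
+    public static void main(String[] args) {
+        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
+
+        // Values below are illustrative; see the table above for semantics and defaults.
+        commSpi.setLocalPort(47100);
+        commSpi.setLocalPortRange(100);
+        commSpi.setIdleConnectionTimeout(600_000);
+        commSpi.setSocketReceiveBuffer(0);
+        commSpi.setSocketSendBuffer(0);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        cfg.setCommunicationSpi(commSpi);
+
+        Ignition.start(cfg);
+    }
+}
+----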
+
+
+== Connection Timeouts
+
+////
+//Connection timeout is a period of time a cluster node waits before a connection to another node is considered "failed".
+
+Every node in a cluster is connected to every other node.
+When node A sends a message to node B, and node B does not reply in `failureDetectionTimeout` (in milliseconds), then node B will be removed from the cluster.
+////
+
+There are several properties that define connection timeouts:
+
+[cols="",opts="header"]
+|===
+|Property | Description | Default Value
+| `IgniteConfiguration.failureDetectionTimeout` | A timeout for basic network operations for server nodes. | `10000`
+
+| `IgniteConfiguration.clientFailureDetectionTimeout` | A timeout for basic network operations for client nodes.  | `30000`
+
+|===
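+
+As a quick illustration, both properties can be set directly on `IgniteConfiguration`; the sketch below uses the default values shown in the table (the class name is illustrative):
+
+[source,java]
+----
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+
+public class FailureDetectionTimeouts {
+    public static void main(String[] args) {
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // The values below are the defaults; lower values suit stable, low-latency networks.
+        cfg.setFailureDetectionTimeout(10_000);
+        cfg.setClientFailureDetectionTimeout(30_000);
+
+        Ignition.start(cfg);
+    }
+}
+----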
+
+//CAUTION: The timeout automatically controls configuration parameters of `TcpDiscoverySpi`, such as socket timeout, message acknowledgment timeout and others. If any of these parameters is set explicitly, then the failure timeout setting will be ignored.
+
+:ths: &#8239;
+
+You can set the failure detection timeout in the node configuration as shown in the example below.
+//The default value is 10{ths}000 ms for server nodes and 30{ths}000 ms for client nodes.
+The default values allow the discovery SPI to work reliably on most on-premises and containerized deployments.
+However, in stable low-latency networks, you can set the parameter to {tilde}200 milliseconds to detect and react to failures more quickly.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/network-configuration.xml[tags=!*;ignite-config;failure-detection-timeout, indent=0]
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=failure-detection-timeout, indent=0]
+----
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+
+--
diff --git a/docs/_docs/clustering/running-client-nodes-behind-nat.adoc b/docs/_docs/clustering/running-client-nodes-behind-nat.adoc
new file mode 100644
index 0000000..f60285a
--- /dev/null
+++ b/docs/_docs/clustering/running-client-nodes-behind-nat.adoc
@@ -0,0 +1,47 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Running Client Nodes Behind NAT
+
+If your client nodes are deployed behind a NAT, the server nodes won't be able to establish a connection with the clients because of the limitations of the communication protocol.
+This includes deployments where client nodes run in virtualized environments (such as Kubernetes) and the server nodes are deployed elsewhere.
+
+For cases like this, you need to enable a special mode of communication:
+
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+include::code-snippets/xml/client-behind-nat.xml[tags=ignite-config;!discovery,indent=0]
+----
+tab:Java[]
+[source, java]
+----
+include::{javaCodeDir}/Discovery.java[tags=client-behind-nat,indent=0]
+----
+tab:C#/.NET[]
+
+tab:C++[unsupported]
+--
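+
+For illustration, here is a minimal Java sketch of such a client configuration. The class name is hypothetical, and the sketch assumes the `forceClientToServerConnections` flag referenced in the limitations below:
+
+[source,java]
+----
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
+
+public class NatClientStartup {
+    public static void main(String[] args) {
+        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
+
+        // Force the client to open connections towards the servers,
+        // because servers cannot reach a client that sits behind NAT.
+        commSpi.setForceClientToServerConnections(true);
+
+        IgniteConfiguration cfg = new IgniteConfiguration()
+            .setClientMode(true)
+            .setCommunicationSpi(commSpi);
+
+        Ignition.start(cfg);
+    }
+}
+----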
+
+== Limitations
+
+* This mode cannot be used when `TcpCommunicationSpi.usePairedConnections = true` on both server and client nodes.
+
+* Peer class loading for link:key-value-api/continuous-queries[continuous queries (transformers and filters)] does not work when a continuous query is started from a client node with `forceClientToServerConnections = true`.
+You need to add the corresponding classes to the classpath of every server node.
+
+* This property can only be used on client nodes. This limitation will be addressed in future releases.
diff --git a/docs/_docs/clustering/tcp-ip-discovery.adoc b/docs/_docs/clustering/tcp-ip-discovery.adoc
new file mode 100644
index 0000000..44fdd53
--- /dev/null
+++ b/docs/_docs/clustering/tcp-ip-discovery.adoc
@@ -0,0 +1,426 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= TCP/IP Discovery
+
+:javaFile: {javaCodeDir}/TcpIpDiscovery.java
+
+In an Ignite cluster, nodes can discover each other by using `DiscoverySpi`.
+Ignite provides `TcpDiscoverySpi` as the default implementation of `DiscoverySpi`, which uses TCP/IP for node discovery.
+Discovery SPI can be configured for Multicast and Static IP based node
+discovery.
+
+== Multicast IP Finder
+
+`TcpDiscoveryMulticastIpFinder` uses Multicast to discover other nodes
+and is the default IP finder. Here is an example of how to configure
+this finder via a Spring XML file or programmatically:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/discovery-multicast.xml[tags=ignite-config, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=multicast,indent=0]
+----
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ClusteringTcpIpDiscovery.cs[tag=multicast,indent=0]
+----
+tab:C++[unsupported]
+--
+
+== Static IP Finder
+
+Static IP Finder, implemented in `TcpDiscoveryVmIpFinder`, allows you to specify a set of IP addresses and ports that will be checked for node discovery.
+
+You are only required to provide the IP address of at least one remote
+node, but it is usually advisable to provide 2 or 3 addresses of
+nodes that you plan to start in the future. Once a
+connection to any of the provided IP addresses is established, Ignite automatically discovers all other cluster nodes.
+
+[TIP]
+====
+Instead of specifying addresses in the configuration, you can specify them in
+the `IGNITE_TCP_DISCOVERY_ADDRESSES` environment variable or in the system property
+with the same name. Addresses should be comma separated and may optionally contain
+a port range.
+====
+
+[TIP]
+====
+By default, the `TcpDiscoveryVmIpFinder` is used in the 'non-shared' mode.
+If you plan to start a server node, then in this mode the list of IP addresses should contain the address of the local node as well. In this case, the node will not wait until other nodes join the cluster; instead, it will become the first cluster node and start to operate normally.
+====
+
+You can configure the static IP finder via XML configuration or programmatically:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/discovery-static.xml[tags=ignite-config, indent=0]
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=static,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ClusteringTcpIpDiscovery.cs[tag=static,indent=0]
+----
+
+tab:Shell[]
+[source,shell]
+----
+# The configuration should use TcpDiscoveryVmIpFinder without addresses specified:
+
+IGNITE_TCP_DISCOVERY_ADDRESSES=1.2.3.4,1.2.3.5:47500..47509 bin/ignite.sh -v config/default-config.xml
+----
+--
+
+[WARNING]
+====
+[discrete]
+Provide multiple node addresses only if you are sure they are reachable. Unreachable addresses increase the
+time it takes nodes to join the cluster. For example, suppose you list five IP addresses and nothing listens for incoming
+connections on two of them. If Ignite starts connecting to the cluster via those two unreachable addresses,
+it will slow down the node's startup.
+====
+
+
+== Multicast and Static IP Finder
+
+You can use both Multicast and Static IP based discovery together. In
+this case, in addition to any addresses received via multicast,
+`TcpDiscoveryMulticastIpFinder` can also work with a pre-configured list
+of static IP addresses, just like Static IP-Based Discovery described
+above. Here is an example of how to configure Multicast IP finder with
+static IP addresses:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/discovery-static-and-multicast.xml[tags=ignite-config, indent=0]
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=multicastAndStatic,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ClusteringTcpIpDiscovery.cs[tag=multicastAndStatic,indent=0]
+----
+
+tab:C++[unsupported]
+
+--
+
+
+== Isolated Clusters on Same Set of Machines
+
+Ignite allows you to start two isolated clusters on the same set of
+machines. This can be done if nodes from different clusters use non-intersecting local port ranges for `TcpDiscoverySpi` and `TcpCommunicationSpi`.
+
+Let’s say you need to start two isolated clusters on a single machine
+for testing purposes. For the nodes from the first cluster, you
+should use the following `TcpDiscoverySpi` and `TcpCommunicationSpi`
+configurations:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+    <!--
+    Explicitly configure TCP discovery SPI to provide list of
+    initial nodes from the first cluster.
+    -->
+    <property name="discoverySpi">
+        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+            <!-- Initial local port to listen to. -->
+            <property name="localPort" value="48500"/>
+
+            <!-- Changing local port range. This is an optional action. -->
+            <property name="localPortRange" value="20"/>
+
+            <!-- Setting up IP finder for this cluster -->
+            <property name="ipFinder">
+                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                    <property name="addresses">
+                        <list>
+                            <!--
+                            Addresses and port range of nodes from
+                            the first cluster.
+                            127.0.0.1 can be replaced with actual IP addresses
+                            or host names. Port range is optional.
+                            -->
+                            <value>127.0.0.1:48500..48520</value>
+                        </list>
+                    </property>
+                </bean>
+            </property>
+        </bean>
+    </property>
+
+    <!--
+    Explicitly configure TCP communication SPI changing local
+    port number for the nodes from the first cluster.
+    -->
+    <property name="communicationSpi">
+        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
+            <property name="localPort" value="48100"/>
+        </bean>
+    </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=isolated1,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ClusteringTcpIpDiscovery.cs[tag=isolated1,indent=0]
+----
+
+tab:C++[unsupported]
+
+--
+
+
+For the nodes from the second cluster, the configuration might look like
+this:
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
+    <!--
+    Explicitly configure TCP discovery SPI to provide list of initial
+    nodes from the second cluster.
+    -->
+    <property name="discoverySpi">
+        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+            <!-- Initial local port to listen to. -->
+            <property name="localPort" value="49500"/>
+
+            <!-- Changing local port range. This is an optional action. -->
+            <property name="localPortRange" value="20"/>
+
+            <!-- Setting up IP finder for this cluster -->
+            <property name="ipFinder">
+                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
+                    <property name="addresses">
+                        <list>
+                            <!--
+                            Addresses and port range of the nodes from the second cluster.
+                            127.0.0.1 can be replaced with actual IP addresses or host names. Port range is optional.
+                            -->
+                            <value>127.0.0.1:49500..49520</value>
+                        </list>
+                    </property>
+                </bean>
+            </property>
+        </bean>
+    </property>
+
+    <!--
+    Explicitly configure TCP communication SPI changing local port number
+    for the nodes from the second cluster.
+    -->
+    <property name="communicationSpi">
+        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
+            <property name="localPort" value="49100"/>
+        </bean>
+    </property>
+</bean>
+
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=isolated2,indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/ClusteringTcpIpDiscovery.cs[tag=isolated2,indent=0]
+----
+
+tab:C++[unsupported]
+
+--
+
+As you can see from the configurations, the difference between them is minor: only the port numbers for the SPIs and the IP finder vary.
+
+[TIP]
+====
+If you want the nodes from different clusters to be able to look for
+each other using the multicast protocol, replace
+`TcpDiscoveryVmIpFinder` with `TcpDiscoveryMulticastIpFinder` and set
+unique `TcpDiscoveryMulticastIpFinder.multicastGroups` in each
+configuration above.
+====
+
+[CAUTION]
+====
+[discrete]
+=== Persistence Files Location
+
+If the isolated clusters use Native Persistence, then every
+cluster has to store its persistence files under different paths in the
+file system. Refer to the link:persistence/native-persistence[Native Persistence documentation] to learn how you can change persistence related directories.
+====
+
+
+== JDBC-Based IP Finder
+NOTE: Not supported in .NET/C#/{cpp}.
+
+You can use your database as a common shared storage of initial IP addresses. With this IP finder, nodes write their IP addresses to the database on startup. This behavior is implemented by `TcpDiscoveryJdbcIpFinder`.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+  <property name="discoverySpi">
+    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+      <property name="ipFinder">
+        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.jdbc.TcpDiscoveryJdbcIpFinder">
+          <property name="dataSource" ref="ds"/>
+        </bean>
+      </property>
+    </bean>
+  </property>
+</bean>
+
+<!-- Configured data source instance. -->
+<bean id="ds" class="some.Datasource">
+
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=jdbc,indent=0]
+----
+
+tab:C#/.NET[unsupported]
+
+tab:C++[unsupported]
+
+--
+
+
+== Shared File System IP Finder
+
+NOTE: Not supported in .NET/C#/{cpp}.
+
+A shared file system can be used as a storage for nodes' IP addresses. The nodes will write their IP addresses to the file system on startup. This behavior is supported by `TcpDiscoverySharedFsIpFinder`.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+    <property name="discoverySpi">
+        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+            <property name="ipFinder">
+                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.sharedfs.TcpDiscoverySharedFsIpFinder">
+                  <property name="path" value="/var/ignite/addresses"/>
+                </bean>
+            </property>
+        </bean>
+    </property>
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=sharedFS,indent=0]
+----
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+--
+
+== ZooKeeper IP Finder
+
+NOTE: Not supported in .NET/C#.
+
+To set up the ZooKeeper IP finder, use `TcpDiscoveryZookeeperIpFinder` (note that the `ignite-zookeeper` module has to be enabled).
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+    <property name="discoverySpi">
+        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
+            <property name="ipFinder">
+                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.zk.TcpDiscoveryZookeeperIpFinder">
+                    <property name="zkConnectionString" value="127.0.0.1:2181"/>
+                </bean>
+            </property>
+        </bean>
+    </property>
+</bean>
+----
+
+tab:Java[]
+[source,java]
+----
+include::{javaFile}[tag=zk,indent=0]
+----
+
+tab:C#/.NET[unsupported]
+tab:C++[unsupported]
+
+--
+
+
+
+
diff --git a/docs/_docs/clustering/zookeeper-discovery.adoc b/docs/_docs/clustering/zookeeper-discovery.adoc
new file mode 100644
index 0000000..3a0ddd9
--- /dev/null
+++ b/docs/_docs/clustering/zookeeper-discovery.adoc
@@ -0,0 +1,193 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= ZooKeeper Discovery
+
+Ignite's default TCP/IP Discovery organizes cluster nodes into a ring topology, which has both advantages and
+disadvantages. For instance, on topologies with hundreds of cluster
+nodes, it can take many seconds for a system message to traverse
+all the nodes. As a result, the basic processing of events such as
+the joining of new nodes or the detection of failed ones can take a while,
+affecting the overall cluster responsiveness and performance.
+
+ZooKeeper Discovery is designed for massive deployments that
+need to preserve ease of scalability and linear performance.
+However, using both Ignite and ZooKeeper requires configuring and managing two
+distributed systems, which can be challenging.
+Therefore, we recommend that you use ZooKeeper Discovery only if you plan to scale to hundreds or thousands of nodes.
+Otherwise, it is best to use link:clustering/tcp-ip-discovery[TCP/IP Discovery].
+
+ZooKeeper Discovery uses ZooKeeper as a single point of synchronization
+and to organize the cluster into a star-shaped topology where a
+ZooKeeper cluster sits in the center and the Ignite nodes exchange
+discovery events through it.
+
+image::images/zookeeper.png[Zookeeper]
+
+It is worth mentioning that ZooKeeper Discovery is an alternative implementation of the Discovery SPI and doesn’t affect the Communication SPI.
+Once the nodes discover each other via ZooKeeper Discovery, they use Communication SPI for peer-to-peer communication.
+////////////////////////////////////////////////////////////////////////////////
+TODO: explain what it means
+////////////////////////////////////////////////////////////////////////////////
+
+== Configuration
+
+To enable ZooKeeper Discovery, you need to configure `ZookeeperDiscoverySpi` in a way similar to this:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean class="org.apache.ignite.configuration.IgniteConfiguration">
+
+  <property name="discoverySpi">
+    <bean class="org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi">
+      <property name="zkConnectionString" value="127.0.0.1:34076,127.0.0.1:43310,127.0.0.1:36745"/>
+      <property name="sessionTimeout" value="30000"/>
+      <property name="zkRootPath" value="/apacheIgnite"/>
+      <property name="joinTimeout" value="10000"/>
+    </bean>
+  </property>
+</bean>
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/ZookeeperDiscovery.java[tag=cfg,indent=0]
+----
+tab:.NET[unsupported]
+tab:C++[unsupported]
+--
+
+The following parameters are required (other parameters are optional):
+
+* `zkConnectionString` - keeps the list of addresses of ZooKeeper
+servers.
+* `sessionTimeout` - specifies the time after which an Ignite node is considered disconnected if it doesn’t react to events exchanged via Discovery SPI.​
+
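+For reference, here is a minimal Java sketch mirroring the XML example above (the class name is illustrative, and `ZookeeperDiscoverySpi` is assumed to come from the enabled `ignite-zookeeper` module):
+
+[source,java]
+----
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi;
+
+public class ZkDiscoveryStartup {
+    public static void main(String[] args) {
+        ZookeeperDiscoverySpi zkSpi = new ZookeeperDiscoverySpi();
+
+        // Required: addresses of the ZooKeeper servers.
+        zkSpi.setZkConnectionString("127.0.0.1:34076,127.0.0.1:43310,127.0.0.1:36745");
+        // Required: how long a node may stay unresponsive before it is considered disconnected.
+        zkSpi.setSessionTimeout(30_000);
+
+        // Optional settings, mirroring the XML example above.
+        zkSpi.setZkRootPath("/apacheIgnite");
+        zkSpi.setJoinTimeout(10_000);
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        cfg.setDiscoverySpi(zkSpi);
+
+        Ignition.start(cfg);
+    }
+}
+----
+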
+== Failures and Split Brain Handling
+
+In case of network partitioning, some of the nodes cannot communicate with each other because they are located in separate network segments, which may lead to failures when processing user requests or to inconsistent data modification.
+
+ZooKeeper Discovery approaches network partitioning (also known as split-brain)
+and communication failures between individual nodes in the following
+way:
+
+[CAUTION]
+====
+It is assumed that the ZooKeeper cluster is always visible to all the
+nodes in the cluster. In fact, if a node disconnects from ZooKeeper, it
+shuts down and other nodes treat it as failed or disconnected.
+====
+
+Whenever a node discovers that it cannot connect to some of the other
+nodes in the cluster, it initiates a communication failure resolution
+process by publishing special requests to the ZooKeeper cluster. When
+the process is started, all nodes try to connect to each other and send
+the results of the connection attempts to the node that coordinates the
+process (_the coordinator node_). Based on this information, the
+coordinator node creates a connectivity graph that represents the
+network situation in the cluster. Further actions depend on the type of
+network segmentation. The following sections discuss possible scenarios.
+
+=== Cluster is split into several disjoint components
+
+If the cluster is split into several independent components, each
+component (being a cluster itself) may consider itself the master cluster and
+continue to process user requests, resulting in data inconsistency. To
+avoid this, only the component with the largest number of nodes is kept
+alive, and the nodes from the other components are brought down.
+
+image::images/network_segmentation.png[Network Segmentation]
+
+The image above shows a case where the cluster network is split into 2 segments.
+The nodes from the smaller cluster (right-hand segment) are terminated.
+
+image::images/segmentation_resolved.png[Segmentation Resolved]
+
+If several components tie for the largest number of nodes, the one that has the largest
+number of clients is kept alive, and the others are shut down.
+
+=== Several links between nodes are missing
+
+In this scenario, some nodes cannot connect to some other nodes: they are
+not completely disconnected from the cluster, but they can't exchange data
+with some of the nodes and, therefore, cannot remain part of the cluster. In
+the image below, one node cannot connect to two other nodes.
+
+image::images/split_brain.png[Split-brain]
+
+In this case, the task is to find the largest component in which every
+node can connect to every other node, which, in the general case, is a
+difficult problem and cannot be solved in an acceptable amount of time. The
+coordinator node uses a heuristic algorithm to find the best approximate
+solution. The nodes that are left out of the solution are shut down.
+
+image::images/split_brain_resolved.png[Split-brain Resolved]
+
+=== ZooKeeper cluster segmentation
+
+In large-scale deployments where the ZooKeeper cluster can span multiple data centers and geographically diverse locations, it can split into multiple segments due to network segmentation.
+If this occurs, ZooKeeper checks if there is a segment that contains more than half of all ZooKeeper nodes (ZooKeeper requires this many nodes to continue its operation), and, if found, this segment takes over managing the Ignite cluster, while other segments are shut down.
+If there is no such segment, ZooKeeper shuts down all its nodes.
+
+In case of ZooKeeper cluster segmentation, the Ignite cluster may or may not be split.
+In any case, when the ZooKeeper nodes are shut down, the corresponding Ignite nodes try to connect to available ZooKeeper nodes and shut down if unable to do so.
+
+The following image is an example of network segmentation that splits both the Ignite cluster and ZooKeeper cluster into two segments.
+This may happen if your clusters are deployed in two data centers.
+In this case, the ZooKeeper node located in Data Center B shuts itself down.
+The Ignite nodes located in Data Center B are not able to connect to the remaining ZooKeeper nodes and shut themselves down as well.
+
+image::images/zookeeper_split.png[Zookeeper Split]
+
+== Custom Discovery Events
+
+Changing a ring-shaped topology to a star-shaped one affects the way
+custom discovery events are handled by the Discovery SPI component. Because
+the ring topology is linear, each discovery message is
+processed by the nodes sequentially.
+
+With ZooKeeper Discovery, the coordinator sends discovery messages to
+all nodes simultaneously, so the messages are processed in
+parallel. As a result, ZooKeeper Discovery prohibits custom discovery events from being changed. For instance, the nodes are not allowed to add any payload to discovery messages.
+
+== Ignite and ZooKeeper Configuration Considerations
+
+When using ZooKeeper Discovery, you need to make sure that the configuration parameters of the ZooKeeper cluster and Ignite cluster match each other.
+
+Consider a sample ZooKeeper configuration, as follows:
+
+[source,shell]
+----
+# The number of milliseconds of each tick
+tickTime=2000
+
+# The number of ticks that can pass between sending a request and getting an acknowledgement
+syncLimit=5
+----
+
+Configured this way, a ZooKeeper server detects that it is segmented from the rest of the ZooKeeper cluster only after `tickTime * syncLimit` elapses (10,000 ms in this example: 2000 ms * 5).
+Until this event is detected at the ZooKeeper level, all Ignite nodes connected to the segmented ZooKeeper server do not try to reconnect to the other ZooKeeper servers.
+
+On the other hand, there is a `sessionTimeout` parameter on the Ignite
+side that defines how soon ZooKeeper closes an Ignite node’s session if
+the node gets disconnected from the ZooKeeper cluster.
+If `sessionTimeout` is smaller than `tickTime * syncLimit`, then the
+Ignite node is notified by the segmented ZooKeeper server too
+late: its session expires before it tries to reconnect to other ZooKeeper servers.
+
+To avoid this situation, `sessionTimeout` should be bigger than `tickTime * syncLimit`.
diff --git a/docs/_docs/code-deployment/deploying-user-code.adoc b/docs/_docs/code-deployment/deploying-user-code.adoc
new file mode 100644
index 0000000..3916278
--- /dev/null
+++ b/docs/_docs/code-deployment/deploying-user-code.adoc
@@ -0,0 +1,96 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Deploying User Code
+:javaFile: {javaCodeDir}/UserCodeDeployment.java
+
+In addition to link:code-deployment/peer-class-loading[peer class loading], you can deploy user code by configuring `UriDeploymentSpi`. With this approach, you specify the location of your libraries in the node configuration.
+Ignite scans the location periodically and redeploys the classes if they change.
+The location may be a file system directory or an HTTP(S) location.
+When Ignite detects that the libraries have been removed from the location, the classes are undeployed from the cluster.
+
+You can specify multiple locations (of different types) by providing both directory paths and http(s) URLs.
+
+//TODO NOTE: peer class loading vs. URL deployment
+
+
+== Deploying from a Local Directory
+
+To deploy libraries from a file system directory, add the directory path to the list of URIs in the `UriDeploymentSpi` configuration.
+The directory must exist on the nodes where it is specified and contain jar files with the classes you want to deploy.
+Note that the path must be specified using the "file://" scheme.
+You can specify different directories on different nodes.
+
+[tabs]
+--
+tab:XML[]
+[source, xml]
+----
+include::code-snippets/xml/deployment.xml[tags=!*;ignite-config;from-local-dir, indent=0]
+----
+tab:Java[]
+[source, java]
+----
+include::{javaFile}[tags=from-local-dir, indent=0]
+----
+tab:C#/.NET[]
+
+tab:C++[]
+--
+
+You can pass the following parameter in the URL:
+
+[cols="1,2,1",opts="header"]
+|===
+|Parameter | Description | Default Value
+| `freq` |  Scanning frequency in milliseconds. | `5000`
+|===
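+
+As an illustration, here is a minimal Java sketch that points `UriDeploymentSpi` at a local directory. The class name and path are placeholders, and the `UriDeploymentSpi` implementation is assumed to be available on the classpath (it ships in the `ignite-urideploy` module):
+
+[source,java]
+----
+import java.util.Collections;
+
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.spi.deployment.uri.UriDeploymentSpi;
+
+public class LocalDirDeployment {
+    public static void main(String[] args) {
+        UriDeploymentSpi deploymentSpi = new UriDeploymentSpi();
+
+        // Placeholder directory: it must exist on this node and contain the JAR files to deploy.
+        deploymentSpi.setUriList(Collections.singletonList("file:///opt/ignite/user-libs"));
+
+        IgniteConfiguration cfg = new IgniteConfiguration();
+        cfg.setDeploymentSpi(deploymentSpi);
+
+        Ignition.start(cfg);
+    }
+}
+----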
+
+
+== Deploying from a URL
+
+To deploy libraries from an http(s) location, add the URL to the list of URIs in the `UriDeploymentSpi` configuration.
+
+Ignite parses the HTML file to find the `href` attributes of all `<a>` tags on the page.
+The references must point to the JAR files you want to deploy.
+//It's important that only HTTP scanner uses the URLConnection.getLastModified() method to check if there were any changes since last iteration for each GAR-file before redeploying.
+
+[tabs]
+--
+tab:XML[]
+
+[source, xml]
+----
+include::code-snippets/xml/deployment.xml[tags=!*;ignite-config;from-url, indent=0]
+----
+
+tab:Java[]
+
+[source, java]
+----
+include::{javaFile}[tags=from-url, indent=0]
+----
+
+tab:C#/.NET[]
+tab:C++[]
+--
+
+You can pass the following parameter in the URL:
+
+[cols="1,2,1",opts="header"]
+|===
+|Parameter | Description | Default Value
+| `freq` |  Scanning frequency in milliseconds. | `300000`
+|===
+
diff --git a/docs/_docs/code-deployment/peer-class-loading.adoc b/docs/_docs/code-deployment/peer-class-loading.adoc
new file mode 100644
index 0000000..0dd7d18
--- /dev/null
+++ b/docs/_docs/code-deployment/peer-class-loading.adoc
@@ -0,0 +1,166 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+= Peer Class Loading
+
+== Overview
+
+Peer class loading refers to loading classes from the local node, where they are defined, to remote nodes where they are used.
+With peer class loading enabled, you don't have to manually deploy your Java code on each node in the cluster and re-deploy it each time it changes.
+Ignite automatically loads the classes from the node where they are defined to the nodes where they are used.
+
+[CAUTION]
+====
+[discrete]
+=== Automatic Assemblies Loading in .NET
+If you develop C# and .NET applications, then refer to the link:net-specific/net-remote-assembly-loading[Remote Assembly Loading]
+page for details on how to set up and use the peer-class-loading feature with that type of applications.
+====
+
+For example, when link:key-value-api/using-scan-queries[querying data] with a custom transformer, you only need to define your tasks on the client node that initiates the computation, and Ignite loads the classes to the server nodes.
+
+When enabled, peer class loading is used to deploy the following classes:
+
+* Tasks and jobs submitted via the link:distributed-computing/distributed-computing[compute interface].
+* Transformers and filters used with link:key-value-api/using-scan-queries[scan queries] and link:key-value-api/continuous-queries[continuous queries].
+* Stream transformers, receivers and visitors used with link:data-streaming#data-streamers[data streamers].
+* link:distributed-computing/collocated-computations#entry-processor[Entry processors].
+
+When defining the classes listed above, we recommend that you create each class as either a top-level class or a static inner class, not as a lambda or an anonymous inner class. Non-static inner classes are serialized together with their enclosing class. If some fields of the enclosing class cannot be serialized, you will get serialization exceptions.
+
+[IMPORTANT]
+====
+The peer class loading functionality does not deploy the key and object classes of the entries stored in caches.
+====
+
+[WARNING]
+====
+The peer class loading functionality allows any client to deploy custom code to the cluster. If you want to use it in production environments, make sure only authorized clients have access to the cluster.
+====
+
+
+This is what happens when a class is required on remote nodes:
+
+* Ignite checks whether the class is available on the local classpath, i.e. whether it was loaded during system initialization. If it was, it is used, and no class loading from a peer node takes place.
+* If the class is not available locally, a request for the class definition is sent to the originating node. The originating node sends the class byte code, and the class is loaded on the worker node. This happens once per class: once a class definition is loaded on a node, it does not have to be loaded again.
+
+[NOTE]
+====
+[discrete]
+=== Deploying 3rd Party Libraries
+When using peer class loading, you should be aware of which libraries are loaded from peer nodes and which are already available locally on the classpath.
+We suggest you include all 3rd party libraries on the classpath of every node.
+This can be achieved by copying your JAR files into the `{IGNITE_HOME}/libs` folder.
+This way you do not transfer megabytes of 3rd party classes to remote nodes every time you change a line of code.
+====
+
+
+== Enabling Peer Class Loading
+
+Here is how you can configure peer class loading:
+
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+include::code-snippets/xml/peer-class-loading.xml[tags=ignite-config;!discovery, indent=0]
+----
+tab:Java[]
+[source,java]
+----
+include::{javaCodeDir}/PeerClassLoading.java[tags=configure, indent=0]
+----
+
+tab:C#/.NET[]
+[source,csharp]
+----
+include::code-snippets/dotnet/PeerClassLoading.cs[tag=enable,indent=0]
+----
+
+tab:C++[unsupported]
+--
+
+
+The following table describes parameters related to peer class loading.
+
+[cols="30%,60%,10%",opts="header,width=100%"]
+|===
+|Parameter| Description | Default value
+
+|`peerClassLoadingEnabled`| Enables/disables peer class loading. | `false`
+|`deploymentMode` | The peer class loading mode. | `SHARED`
+
+| `peerClassLoadingExecutorService` | Configures a thread pool to be used for peer class loading. If not configured, a default pool is used.  | `null`
+| `peerClassLoadingExecutorServiceShutdown` |Peer class loading executor service shutdown flag. If the flag is set to `true`, the peer class loading thread pool is forcibly shut down when the node stops. | `true`
+|`peerClassLoadingLocalClassPathExclude` |List of packages in the system class path that should be P2P loaded even if they exist locally. | `null`
+
+|`peerClassLoadingMissedResourcesCacheSize`| Size of missed resources cache. Set to 0 to avoid caching of missing resources. | 100
+
+|===
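+
+For quick reference, here is a minimal Java sketch that sets several of these parameters on `IgniteConfiguration`. The class name and the values are illustrative:
+
+[source,java]
+----
+import org.apache.ignite.Ignition;
+import org.apache.ignite.configuration.DeploymentMode;
+import org.apache.ignite.configuration.IgniteConfiguration;
+
+public class PeerClassLoadingSetup {
+    public static void main(String[] args) {
+        IgniteConfiguration cfg = new IgniteConfiguration();
+
+        // Enable peer class loading.
+        cfg.setPeerClassLoadingEnabled(true);
+
+        // Illustrative values for some of the optional parameters described above.
+        cfg.setDeploymentMode(DeploymentMode.CONTINUOUS);
+        cfg.setPeerClassLoadingMissedResourcesCacheSize(100);
+
+        Ignition.start(cfg);
+    }
+}
+----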
+
+
+
+== Peer Class Loading Modes
+
+=== PRIVATE and ISOLATED
+Classes deployed within the same class loader on the master node still share the same class loader remotely on worker nodes.
+However, the tasks deployed from different master nodes do not share the same class loader on worker nodes.
+This is useful in development environments where different developers may be working on different versions of the same classes.
+There is no difference between `PRIVATE` and `ISOLATED` deployment modes since the `@UserResource` annotation has been removed.
+Both constants were kept for backward-compatibility reasons and one of them is likely to be removed in a future major release.
+
+In this mode, classes get un-deployed when the master node leaves the cluster.
+
+=== SHARED
+
+This is the default deployment mode.
+In this mode, classes from different master nodes with the same user version share the same class loader on worker nodes.
+Classes are un-deployed when all master nodes leave the cluster or the user version changes.
+This mode allows classes coming from different master nodes to share the same instances of user resources on remote nodes (see below).
+This mode is especially useful in production because, in comparison to `ISOLATED` mode, which is scoped to a single class loader on a single master node, `SHARED` mode broadens the deployment scope to all master nodes.
+
+In this mode, classes get un-deployed when all the master nodes leave the cluster.
+
+=== CONTINUOUS
+In `CONTINUOUS` mode, the classes do not get un-deployed when master nodes leave the cluster.
+Un-deployment only happens when a class user version changes.
+The advantage of this approach is that it allows tasks coming from different master nodes to share the same instances of user resources on worker nodes.
+This allows the tasks executing on worker nodes to reuse, for example, the same instances of connection pools or caches.
+When using this mode, you can start up multiple stand-alone worker nodes, define user resources on the master nodes, and have them initialized once on worker nodes regardless of which master node they came from.
+In comparison to the `ISOLATED` deployment mode, which is scoped to a single class loader on a single master node, `CONTINUOUS` mode broadens the deployment scope to all master nodes, which is especially useful in production.
+
+In this mode, classes do not get un-deployed even if all the master nodes leave the cluster.
+
+== Un-Deployment and User Versions
+
+The classes deployed with peer class loading have their own lifecycle. On certain events (when the master node leaves or the user version changes, depending on deployment mode), the class information is un-deployed from the cluster: the class definition is erased from all nodes and the user resources linked with that class definition are also optionally erased (again, depending on deployment mode).
+
+User version comes into play whenever you want to redeploy classes deployed in `SHARED` or `CONTINUOUS` modes.
+By default, Ignite automatically detects if the class loader has changed or a node is restarted.
+However, if you would like to change and redeploy the code on a subset of nodes, or, in the case of `CONTINUOUS` mode, undeploy all currently live deployments, you should change the user version.
+User version is specified in the `META-INF/ignite.xml` file of your class path as follows:
+
+[source, xml]
+-------------------------------------------------------------------------------
+<!-- User version. -->
+<bean id="userVersion" class="java.lang.String">
+    <constructor-arg value="0"/>
+</bean>
+-------------------------------------------------------------------------------
+
+By default, all Ignite startup scripts (ignite.sh or ignite.bat) pick up the user version from the `IGNITE_HOME/config/userversion` folder.
+Usually, you just need to update the user version under that folder.
+However, in case of GAR or JAR deployment, you should remember to provide the `META-INF/ignite.xml` file with the desired user version in it.
diff --git a/docs/_docs/code-snippets/cpp/src/affinity_run.cpp b/docs/_docs/code-snippets/cpp/src/affinity_run.cpp
new file mode 100644
index 0000000..94ee1ee
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/affinity_run.cpp
@@ -0,0 +1,148 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+#include <sstream>
+
+#include <ignite/ignition.h>
+#include <ignite/compute/compute.h>
+
+using namespace ignite;
+using namespace cache;
+
+//tag::affinity-run[]
+/*
+ * Function class.
+ */
+struct FuncAffinityRun : compute::ComputeFunc<void>
+{
+    /*
+    * Default constructor.
+    */
+    FuncAffinityRun()
+    {
+        // No-op.
+    }
+
+    /*
+    * Parameterized constructor.
+    */
+    FuncAffinityRun(std::string cacheName, int32_t key) :
+        cacheName(cacheName), key(key)
+    {
+        // No-op.
+    }
+
+    /**
+     * Callback.
+     */
+    virtual void Call()
+    {
+        Ignite& node = GetIgnite();
+
+        Cache<int32_t, std::string> cache = node.GetCache<int32_t, std::string>(cacheName.c_str());
+
+        // Peek is a local memory lookup.
+        std::cout << "Co-located [key= " << key << ", value= " << cache.LocalPeek(key, CachePeekMode::ALL) << "]" << std::endl;
+    }
+
+    std::string cacheName;
+    int32_t key;
+};
+
+/**
+ * Binary type structure. Defines a set of functions required for type to be serialized and deserialized.
+ */
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType<FuncAffinityRun>
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("FuncAffinityRun");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "FuncAffinityRun";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static int32_t GetHashCode(const FuncAffinityRun& obj)
+            {
+                return 0;
+            }
+
+            static bool IsNull(const FuncAffinityRun& obj)
+            {
+                return false;
+            }
+
+            static void GetNull(FuncAffinityRun& dst)
+            {
+                dst = FuncAffinityRun();
+            }
+
+            static void Write(BinaryWriter& writer, const FuncAffinityRun& obj)
+            {
+                writer.WriteString("cacheName", obj.cacheName);
+                writer.WriteInt32("key", obj.key);
+            }
+
+            static void Read(BinaryReader& reader, FuncAffinityRun& dst)
+            {
+                dst.cacheName = reader.ReadString("cacheName");
+                dst.key = reader.ReadInt32("key");
+            }
+        };
+    }
+}
+
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Get cache instance.
+    Cache<int32_t, std::string> cache = ignite.GetOrCreateCache<int32_t, std::string>("myCache");
+
+    // Get binding instance.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering our class as a compute function.
+    binding.RegisterComputeFunc<FuncAffinityRun>();
+
+    // Get compute instance.
+    compute::Compute compute = ignite.GetCompute();
+
+    int key = 1;
+
+    // This closure will execute on the remote node where
+    // data for the given 'key' is located.
+    compute.AffinityRun(cache.GetName(), key, FuncAffinityRun(cache.GetName(), key));
+}
+//end::affinity-run[]
diff --git a/docs/_docs/code-snippets/cpp/src/cache_asynchronous_execution.cpp b/docs/_docs/code-snippets/cpp/src/cache_asynchronous_execution.cpp
new file mode 100644
index 0000000..85c87a0
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/cache_asynchronous_execution.cpp
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+#include <sstream>
+
+#include <ignite/ignition.h>
+#include <ignite/compute/compute.h>
+
+using namespace ignite;
+using namespace cache;
+
+//tag::cache-asynchronous-execution[]
+/*
+ * Function class.
+ */
+class HelloWorld : public compute::ComputeFunc<void>
+{
+    friend struct ignite::binary::BinaryType<HelloWorld>;
+public:
+    /*
+     * Default constructor.
+     */
+    HelloWorld()
+    {
+        // No-op.
+    }
+
+    /**
+     * Callback.
+     */
+    virtual void Call()
+    {
+        std::cout << "Job Result: Hello World" << std::endl;
+    }
+
+};
+
+/**
+ * Binary type structure. Defines a set of functions required for type to be serialized and deserialized.
+ */
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType<HelloWorld>
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("HelloWorld");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "HelloWorld";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static int32_t GetHashCode(const HelloWorld& obj)
+            {
+                return 0;
+            }
+
+            static bool IsNull(const HelloWorld& obj)
+            {
+                return false;
+            }
+
+            static void GetNull(HelloWorld& dst)
+            {
+                dst = HelloWorld();
+            }
+
+            static void Write(BinaryWriter& writer, const HelloWorld& obj)
+            {
+                // No-op.
+            }
+
+            static void Read(BinaryReader& reader, HelloWorld& dst)
+            {
+                // No-op.
+            }
+        };
+    }
+}
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Get binding instance.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering our class as a compute function.
+    binding.RegisterComputeFunc<HelloWorld>();
+
+    // Get compute instance.
+    compute::Compute compute = ignite.GetCompute();
+
+    // Declaring function instance.
+    HelloWorld helloWorld;
+
+    // Making asynchronous call.
+    compute.RunAsync(helloWorld);
+}
+//end::cache-asynchronous-execution[]
diff --git a/docs/_docs/code-snippets/cpp/src/cache_atomic_operations.cpp b/docs/_docs/code-snippets/cpp/src/cache_atomic_operations.cpp
new file mode 100644
index 0000000..a505ac2
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/cache_atomic_operations.cpp
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+#include <string>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+
+int main()
+{
+    //tag::cache-atomic-operations[]
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    Cache<std::string, int32_t> cache = ignite.GetOrCreateCache<std::string, int32_t>("myNewCache");
+
+    // Put-if-absent which returns previous value.
+    int32_t oldVal = cache.GetAndPutIfAbsent("Hello", 11);
+
+    // Put-if-absent which returns boolean success flag.
+    bool success = cache.PutIfAbsent("World", 22);
+
+    // Replace-if-exists operation (opposite of getAndPutIfAbsent), returns previous value.
+    oldVal = cache.GetAndReplace("Hello", 11);
+
+    // Replace-if-exists operation (opposite of putIfAbsent), returns boolean success flag.
+    success = cache.Replace("World", 22);
+
+    // Replace-if-matches operation.
+    success = cache.Replace("World", 2, 22);
+
+    // Remove-if-matches operation.
+    success = cache.Remove("Hello", 1);
+    //end::cache-atomic-operations[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/cache_creating_dynamically.cpp b/docs/_docs/code-snippets/cpp/src/cache_creating_dynamically.cpp
new file mode 100644
index 0000000..3fb11a9
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/cache_creating_dynamically.cpp
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+#include <string>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+
+int main()
+{
+    //tag::cache-creating-dynamically[]
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Create a cache with the given name, if it does not exist.
+    Cache<int32_t, std::string> cache = ignite.GetOrCreateCache<int32_t, std::string>("myNewCache");
+    //end::cache-creating-dynamically[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/cache_get_put.cpp b/docs/_docs/code-snippets/cpp/src/cache_get_put.cpp
new file mode 100644
index 0000000..a2d7291
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/cache_get_put.cpp
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+#include <string>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+
+/** Cache name. */
+const char* CACHE_NAME = "cacheName";
+
+int main()
+{
+    //tag::cache-get-put[]
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    try
+    {
+        Ignite ignite = Ignition::Start(cfg);
+
+        Cache<int32_t, std::string> cache = ignite.GetOrCreateCache<int32_t, std::string>(CACHE_NAME);
+
+        // Store keys in the cache (the values will end up on different cache nodes).
+        for (int32_t i = 0; i < 10; i++)
+        {
+            cache.Put(i, std::to_string(i));
+        }
+
+        for (int i = 0; i < 10; i++)
+        {
+            std::cout << "Got [key=" << i << ", val=" << cache.Get(i) << "]" << std::endl;
+        }
+    }
+    catch (IgniteError& err)
+    {
+        std::cout << "An error occurred: " << err.GetText() << std::endl;
+        return err.GetCode();
+    }
+    //end::cache-get-put[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/cache_getting_instance.cpp b/docs/_docs/code-snippets/cpp/src/cache_getting_instance.cpp
new file mode 100644
index 0000000..c2d0665
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/cache_getting_instance.cpp
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+#include <string>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+
+int main()
+{
+    //tag::cache-getting-instance[]
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Obtain instance of cache named "myCache".
+    // Note that different caches may have different generics.
+    Cache<int32_t, std::string> cache = ignite.GetCache<int32_t, std::string>("myCache");
+    //end::cache-getting-instance[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/city.h b/docs/_docs/code-snippets/cpp/src/city.h
new file mode 100644
index 0000000..8e25ee6
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/city.h
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+namespace ignite
+{
+    struct City
+    {
+        City() : population(0)
+        {
+            // No-op.
+        }
+
+        City(const int32_t population) :
+            population(population)
+        {
+            // No-op.
+        }
+
+        std::string ToString() const
+        {
+            std::ostringstream oss;
+            oss << "City [population=" << population << ']';
+            return oss.str();
+        }
+
+        int32_t population;
+    };
+}
+
+namespace ignite
+{
+    namespace binary
+    {
+        IGNITE_BINARY_TYPE_START(ignite::City)
+
+            typedef ignite::City City;
+
+            IGNITE_BINARY_GET_TYPE_ID_AS_HASH(City)
+            IGNITE_BINARY_GET_TYPE_NAME_AS_IS(City)
+            IGNITE_BINARY_GET_FIELD_ID_AS_HASH
+            IGNITE_BINARY_IS_NULL_FALSE(City)
+            IGNITE_BINARY_GET_NULL_DEFAULT_CTOR(City)
+
+            static void Write(BinaryWriter& writer, const ignite::City& obj)
+            {
+                writer.WriteInt32("population", obj.population);
+            }
+
+            static void Read(BinaryReader& reader, ignite::City& dst)
+            {
+                dst.population = reader.ReadInt32("population");
+            }
+
+        IGNITE_BINARY_TYPE_END
+    }
+}
diff --git a/docs/_docs/code-snippets/cpp/src/city_key.h b/docs/_docs/code-snippets/cpp/src/city_key.h
new file mode 100644
index 0000000..b673601
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/city_key.h
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+namespace ignite
+{
+    struct CityKey
+    {
+        CityKey() : id(0)
+        {
+            // No-op.
+        }
+
+        CityKey(int32_t id, const std::string& name) :
+            id(id),
+            name(name)
+        {
+            // No-op.
+        }
+
+        std::string ToString() const
+        {
+            std::ostringstream oss;
+
+            oss << "CityKey [id=" << id
+                << ", name=" << name << ']';
+
+            return oss.str();
+        }
+
+        int32_t id;
+        std::string name;
+    };
+}
+
+namespace ignite
+{
+    namespace binary
+    {
+        IGNITE_BINARY_TYPE_START(ignite::CityKey)
+
+            typedef ignite::CityKey CityKey;
+
+            IGNITE_BINARY_GET_TYPE_ID_AS_HASH(CityKey)
+            IGNITE_BINARY_GET_TYPE_NAME_AS_IS(CityKey)
+            IGNITE_BINARY_GET_FIELD_ID_AS_HASH
+            IGNITE_BINARY_IS_NULL_FALSE(CityKey)
+            IGNITE_BINARY_GET_NULL_DEFAULT_CTOR(CityKey)
+
+            static void Write(BinaryWriter& writer, const ignite::CityKey& obj)
+            {
+                // The id field is declared as int32_t, so write it with WriteInt32 to match ReadInt32 below.
+                writer.WriteInt32("id", obj.id);
+                writer.WriteString("name", obj.name);
+            }
+
+            static void Read(BinaryReader& reader, ignite::CityKey& dst)
+            {
+                dst.id = reader.ReadInt32("id");
+                dst.name = reader.ReadString("name");
+            }
+
+        IGNITE_BINARY_TYPE_END
+    }
+}
diff --git a/docs/_docs/code-snippets/cpp/src/compute_acessing_data.cpp b/docs/_docs/code-snippets/cpp/src/compute_acessing_data.cpp
new file mode 100644
index 0000000..a6da98d
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/compute_acessing_data.cpp
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+#include <sstream>
+
+#include <ignite/ignition.h>
+#include <ignite/compute/compute.h>
+#include "person.h"
+
+using namespace ignite;
+using namespace cache;
+
+//tag::compute-acessing-data[]
+/*
+ * Function class.
+ */
+class GetValue : public compute::ComputeFunc<void>
+{
+    friend struct ignite::binary::BinaryType<GetValue>;
+public:
+    /*
+     * Default constructor.
+     */
+    GetValue()
+    {
+        // No-op.
+    }
+
+    /**
+     * Callback.
+     */
+    virtual void Call()
+    {
+        Ignite& node = GetIgnite();
+
+        // Get the data you need.
+        Cache<int64_t, Person> cache = node.GetCache<int64_t, Person>("person");
+
+        // Process the data as needed.
+        Person person = cache.Get(1);
+    }
+};
+//end::compute-acessing-data[]
+
+/**
+ * Binary type structure. Defines a set of functions required for type to be serialized and deserialized.
+ */
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType<GetValue>
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("GetValue");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "GetValue";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static int32_t GetHashCode(const GetValue& obj)
+            {
+                return 0;
+            }
+
+            static bool IsNull(const GetValue& obj)
+            {
+                return false;
+            }
+
+            static void GetNull(GetValue& dst)
+            {
+                dst = GetValue();
+            }
+
+            static void Write(BinaryWriter& writer, const GetValue& obj)
+            {
+                // No-op.
+            }
+
+            static void Read(BinaryReader& reader, GetValue& dst)
+            {
+                // No-op.
+            }
+        };
+    }
+}
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    Cache<int64_t, Person> cache = ignite.GetOrCreateCache<int64_t, Person>("person");
+    cache.Put(1, Person(1, "first", "last", "resume", 100.00));
+
+    // Get binding instance.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering our class as a compute function.
+    binding.RegisterComputeFunc<GetValue>();
+
+    // Get compute instance.
+    compute::Compute compute = ignite.GetCompute();
+
+    // Run compute task.
+    compute.Run(GetValue());
+}
diff --git a/docs/_docs/code-snippets/cpp/src/compute_broadcast.cpp b/docs/_docs/code-snippets/cpp/src/compute_broadcast.cpp
new file mode 100644
index 0000000..136c00d
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/compute_broadcast.cpp
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+#include <sstream>
+
+#include <ignite/ignition.h>
+#include <ignite/compute/compute.h>
+
+using namespace ignite;
+
+//tag::compute-broadcast[]
+/*
+ * Function class.
+ */
+class Hello : public compute::ComputeFunc<void>
+{
+    friend struct ignite::binary::BinaryType<Hello>;
+public:
+    /*
+     * Default constructor.
+     */
+    Hello()
+    {
+        // No-op.
+    }
+
+    /**
+     * Callback.
+     */
+    virtual void Call()
+    {
+        std::cout << "Hello" << std::endl;
+    }
+
+};
+
+/**
+ * Binary type structure. Defines a set of functions required for type to be serialized and deserialized.
+ */
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType<Hello>
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("Hello");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "Hello";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static int32_t GetHashCode(const Hello& obj)
+            {
+                return 0;
+            }
+
+            static bool IsNull(const Hello& obj)
+            {
+                return false;
+            }
+
+            static void GetNull(Hello& dst)
+            {
+                dst = Hello();
+            }
+
+            static void Write(BinaryWriter& writer, const Hello& obj)
+            {
+                // No-op.
+            }
+
+            static void Read(BinaryReader& reader, Hello& dst)
+            {
+                // No-op.
+            }
+        };
+    }
+}
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Get binding instance.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering our class as a compute function.
+    binding.RegisterComputeFunc<Hello>();
+
+    // Get compute instance.
+    compute::Compute compute = ignite.GetCompute();
+
+    // Declaring function instance.
+    Hello hello;
+
+    // Print out hello message on nodes in the cluster group.
+    compute.Broadcast(hello);
+}
+//end::compute-broadcast[]
diff --git a/docs/_docs/code-snippets/cpp/src/compute_call.cpp b/docs/_docs/code-snippets/cpp/src/compute_call.cpp
new file mode 100644
index 0000000..bf1d8d5
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/compute_call.cpp
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+#include <sstream>
+#include <iterator>
+#include <vector>
+
+#include <ignite/ignition.h>
+#include <ignite/compute/compute.h>
+
+using namespace ignite;
+
+//tag::compute-call[]
+/*
+ * Function class.
+ */
+class CountLength : public compute::ComputeFunc<int32_t>
+{
+    friend struct ignite::binary::BinaryType<CountLength>;
+public:
+    /*
+     * Default constructor.
+     */
+    CountLength()
+    {
+        // No-op.
+    }
+
+    /*
+     * Constructor.
+     *
+     * @param text Text.
+     */
+    CountLength(const std::string& word) :
+        word(word)
+    {
+        // No-op.
+    }
+
+    /**
+     * Callback.
+     * Counts number of characters in provided word.
+     *
+     * @return Word's length.
+     */
+    virtual int32_t Call()
+    {
+        return word.length();
+    }
+
+    /** Word to print. */
+    std::string word;
+
+};
+
+/**
+ * Binary type structure. Defines a set of functions required for type to be serialized and deserialized.
+ */
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType<CountLength>
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("CountLength");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "CountLength";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static int32_t GetHashCode(const CountLength& obj)
+            {
+                return 0;
+            }
+
+            static bool IsNull(const CountLength& obj)
+            {
+                return false;
+            }
+
+            static void GetNull(CountLength& dst)
+            {
+                dst = CountLength("");
+            }
+
+            static void Write(BinaryWriter& writer, const CountLength& obj)
+            {
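+                // Raw mode writes values without field names; the reader must read them back in the same order.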
+                writer.RawWriter().WriteString(obj.word);
+            }
+
+            static void Read(BinaryReader& reader, CountLength& dst)
+            {
+                dst.word = reader.RawReader().ReadString();
+            }
+        };
+    }
+}
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Get binding instance.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering our class as a compute function.
+    binding.RegisterComputeFunc<CountLength>();
+
+    // Get compute instance.
+    compute::Compute compute = ignite.GetCompute();
+
+    std::istringstream iss("How many characters");
+    std::vector<std::string> words((std::istream_iterator<std::string>(iss)),
+        std::istream_iterator<std::string>());
+
+    int32_t total = 0;
+
+    // Iterate through all words in the sentence, create and call jobs.
+    for (std::string word : words)
+    {
+        // Add word length received from cluster node.
+        total += compute.Call<int32_t>(CountLength(word));
+    }
+}
+//end::compute-call[]
diff --git a/docs/_docs/code-snippets/cpp/src/compute_call_async.cpp b/docs/_docs/code-snippets/cpp/src/compute_call_async.cpp
new file mode 100644
index 0000000..bd72bdf
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/compute_call_async.cpp
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+#include <sstream>
+#include <iterator>
+#include <vector>
+
+#include <ignite/ignition.h>
+#include <ignite/compute/compute.h>
+
+using namespace ignite;
+
+//tag::compute-call-async[]
+/*
+ * Function class.
+ */
+class CountLength : public compute::ComputeFunc<int32_t>
+{
+    friend struct ignite::binary::BinaryType<CountLength>;
+public:
+    /*
+     * Default constructor.
+     */
+    CountLength()
+    {
+        // No-op.
+    }
+
+    /*
+     * Constructor.
+     *
+     * @param text Text.
+     */
+    CountLength(const std::string& word) :
+        word(word)
+    {
+        // No-op.
+    }
+
+    /**
+     * Callback.
+     * Counts number of characters in provided word.
+     *
+     * @return Word's length.
+     */
+    virtual int32_t Call()
+    {
+        return word.length();
+    }
+
+    /** Word to print. */
+    std::string word;
+
+};
+
+/**
+ * Binary type structure. Defines a set of functions required for type to be serialized and deserialized.
+ */
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType<CountLength>
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("CountLength");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "CountLength";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static int32_t GetHashCode(const CountLength& obj)
+            {
+                return 0;
+            }
+
+            static bool IsNull(const CountLength& obj)
+            {
+                return false;
+            }
+
+            static void GetNull(CountLength& dst)
+            {
+                dst = CountLength("");
+            }
+
+            static void Write(BinaryWriter& writer, const CountLength& obj)
+            {
+                writer.RawWriter().WriteString(obj.word);
+            }
+
+            static void Read(BinaryReader& reader, CountLength& dst)
+            {
+                dst.word = reader.RawReader().ReadString();
+            }
+        };
+    }
+}
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Get binding instance.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering our class as a compute function.
+    binding.RegisterComputeFunc<CountLength>();
+
+    // Get compute instance.
+    compute::Compute asyncCompute = ignite.GetCompute();
+
+    std::istringstream iss("Count characters using callable");
+    std::vector<std::string> words((std::istream_iterator<std::string>(iss)),
+        std::istream_iterator<std::string>());
+
+    std::vector<Future<int32_t>> futures;
+
+    // Iterate through all words in the sentence, create and call jobs.
+    for (std::string word : words)
+    {
+        // Counting number of characters remotely.
+        futures.push_back(asyncCompute.CallAsync<int32_t>(CountLength(word)));
+    }
+
+    int32_t total = 0;
+
+    // Counting total number of characters.
+    for (Future<int32_t> future : futures)
+    {
+        // Waiting for results.
+        future.Wait();
+
+        total += future.GetValue();
+    }
+
+    // Printing result.
+    std::cout << "Total number of characters: " << total << std::endl;
+}
+//end::compute-call-async[]
diff --git a/docs/_docs/code-snippets/cpp/src/compute_get.cpp b/docs/_docs/code-snippets/cpp/src/compute_get.cpp
new file mode 100644
index 0000000..4071d04
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/compute_get.cpp
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+const char* CONFIG_DEFAULT = "/path/to/configuration.xml";
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = CONFIG_DEFAULT;
+
+    //tag::compute-get[]
+    Ignite ignite = Ignition::Start(cfg);
+
+    compute::Compute compute = ignite.GetCompute();
+    //end::compute-get[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/compute_run.cpp b/docs/_docs/code-snippets/cpp/src/compute_run.cpp
new file mode 100644
index 0000000..40896a1
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/compute_run.cpp
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+#include <sstream>
+#include <iterator>
+#include <vector>
+
+#include <ignite/ignition.h>
+#include <ignite/compute/compute.h>
+
+using namespace ignite;
+
+//tag::compute-run[]
+/*
+ * Function class.
+ */
+class PrintWord : public compute::ComputeFunc<void>
+{
+    friend struct ignite::binary::BinaryType<PrintWord>;
+public:
+    /*
+     * Default constructor.
+     */
+    PrintWord()
+    {
+        // No-op.
+    }
+
+    /*
+     * Constructor.
+     *
+     * @param text Text.
+     */
+    PrintWord(const std::string& word) :
+        word(word)
+    {
+        // No-op.
+    }
+
+    /**
+     * Callback.
+     */
+    virtual void Call()
+    {
+        std::cout << word << std::endl;
+    }
+
+    /** Word to print. */
+    std::string word;
+
+};
+
+/**
+ * Binary type structure. Defines a set of functions required for type to be serialized and deserialized.
+ */
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType<PrintWord>
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("PrintWord");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "PrintWord";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static int32_t GetHashCode(const PrintWord& obj)
+            {
+                return 0;
+            }
+
+            static bool IsNull(const PrintWord& obj)
+            {
+                return false;
+            }
+
+            static void GetNull(PrintWord& dst)
+            {
+                dst = PrintWord("");
+            }
+
+            static void Write(BinaryWriter& writer, const PrintWord& obj)
+            {
+                writer.RawWriter().WriteString(obj.word);
+            }
+
+            static void Read(BinaryReader& reader, PrintWord& dst)
+            {
+                dst.word = reader.RawReader().ReadString();
+            }
+        };
+    }
+}
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Get binding instance.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering our class as a compute function.
+    binding.RegisterComputeFunc<PrintWord>();
+
+    // Get compute instance.
+    compute::Compute compute = ignite.GetCompute();
+
+    std::istringstream iss("Print words on different cluster nodes");
+    std::vector<std::string> words((std::istream_iterator<std::string>(iss)),
+        std::istream_iterator<std::string>());
+
+    // Iterate through all words and print
+    // each word on a different cluster node.
+    for (std::string word : words)
+    {
+        // Run compute task.
+        compute.Run(PrintWord(word));
+    }
+}
+//end::compute-run[]
diff --git a/docs/_docs/code-snippets/cpp/src/concurrent_updates.cpp b/docs/_docs/code-snippets/cpp/src/concurrent_updates.cpp
new file mode 100644
index 0000000..85d715a
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/concurrent_updates.cpp
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace transactions;
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignition::Start(cfg);
+
+    Ignite ignite = Ignition::Get();
+
+    Cache<std::int32_t, std::string> cache = ignite.GetOrCreateCache<std::int32_t, std::string>("myCache");
+
+    //tag::concurrent-updates[]
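+    // Try the update several times; a transaction rolled back due to a concurrent update is marked "rollback only" and retried.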
+    for (int i = 1; i <= 5; i++)
+    {
+        Transaction tx = ignite.GetTransactions().TxStart();
+        std::cout << "attempt #" << i << ", value: " << cache.Get(1) << std::endl;
+        try {
+            cache.Put(1, "new value");
+            tx.Commit();
+            std::cout << "attempt #" << i << " succeeded" << std::endl;
+            break;
+        }
+        catch (IgniteError& e)
+        {
+            if (!tx.IsRollbackOnly())
+            {
+                // Transaction was not marked as "rollback only",
+                // so it's not a concurrent update issue.
+                // Process the exception here.
+                break;
+            }
+        }
+    }
+    //end::concurrent-updates[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/continuous_query.cpp b/docs/_docs/code-snippets/cpp/src/continuous_query.cpp
new file mode 100644
index 0000000..0d2cfb3
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/continuous_query.cpp
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+/**
+ * Listener class.
+ */
+template<typename K, typename V>
+class Listener : public event::CacheEntryEventListener<K, V>
+{
+public:
+    /**
+     * Default constructor.
+     */
+    Listener()
+    {
+        // No-op.
+    }
+
+    /**
+     * Event callback.
+     *
+     * @param evts Events.
+     * @param num Events number.
+     */
+    virtual void OnEvent(const CacheEntryEvent<K, V>* evts, uint32_t num)
+    {
+        for (uint32_t i = 0; i < num; ++i)
+        {
+            std::cout << "Queried entry [key=" << (evts[i].HasValue() ? evts[i].GetKey() : K())
+                << ", val=" << (evts[i].HasValue() ? evts[i].GetValue() : V()) << ']'
+                << std::endl;
+        }
+    }
+};
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    //tag::continuous-query[]
+    Cache<int32_t, std::string> cache = ignite.GetOrCreateCache<int32_t, std::string>("myCache");
+
+    // Custom listener
+    Listener<int32_t, std::string> listener;
+
+    // Declaring continuous query.
+    continuous::ContinuousQuery<int32_t, std::string> query(MakeReference(listener));
+
+    // Declaring optional initial query
+    ScanQuery initialQuery = ScanQuery();
+
+    continuous::ContinuousQueryHandle<int32_t, std::string> handle = cache.QueryContinuous(query, initialQuery);
+
+    // Iterating over existing data stored in the cache.
+    QueryCursor<int32_t, std::string> cursor = handle.GetInitialQueryCursor();
+
+    while (cursor.HasNext())
+    {
+        std::cout << cursor.GetNext().GetKey() << std::endl;
+    }
+    //end::continuous-query[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/continuous_query_filter.cpp b/docs/_docs/code-snippets/cpp/src/continuous_query_filter.cpp
new file mode 100644
index 0000000..2663f7e
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/continuous_query_filter.cpp
@@ -0,0 +1,167 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+
+#include <ignite/ignition.h>
+#include <ignite/cache/query/continuous/continuous_query.h>
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+/**
+ * Listener class.
+ */
+template<typename K, typename V>
+class Listener : public event::CacheEntryEventListener<K, V>
+{
+public:
+    /**
+     * Default constructor.
+     */
+    Listener()
+    {
+        // No-op.
+    }
+
+    /**
+     * Event callback.
+     *
+     * @param evts Events.
+     * @param num Events number.
+     */
+    virtual void OnEvent(const CacheEntryEvent<K, V>* evts, uint32_t num)
+    {
+        for (uint32_t i = 0; i < num; ++i)
+        {
+            std::cout << "Queried entry [key=" << (evts[i].HasValue() ? evts[i].GetKey() : K())
+                << ", val=" << (evts[i].HasValue() ? evts[i].GetValue() : V()) << ']'
+                << std::endl;
+        }
+    }
+};
+
+//tag::continuous-query-filter[]
+template<typename K, typename V>
+struct RemoteFilter : event::CacheEntryEventFilter<K, V>
+{
+    /**
+     * Default constructor.
+     */
+    RemoteFilter()
+    {
+        // No-op.
+    }
+
+    /**
+     * Destructor.
+     */
+    virtual ~RemoteFilter()
+    {
+        // No-op.
+    }
+
+    /**
+     * Event callback.
+     *
+     * @param event Event.
+     * @return True if the event passes filter.
+     */
+    virtual bool Process(const CacheEntryEvent<K, V>& event)
+    {
+        std::cout << "The value for key " << event.GetKey() <<
+            " was updated from " << event.GetOldValue() << " to " << event.GetValue() << std::endl;
+        return true;
+    }
+};
+
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType< RemoteFilter<int32_t, std::string> >
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("RemoteFilter<int32_t,std::string>");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "RemoteFilter<int32_t,std::string>";
+
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static bool IsNull(const RemoteFilter<int32_t, std::string>&)
+            {
+                return false;
+            }
+
+            static void GetNull(RemoteFilter<int32_t, std::string>& dst)
+            {
+                dst = RemoteFilter<int32_t, std::string>();
+            }
+
+            static void Write(BinaryWriter& writer, const RemoteFilter<int32_t, std::string>& obj)
+            {
+                // No-op.
+            }
+
+            static void Read(BinaryReader& reader, RemoteFilter<int32_t, std::string>& dst)
+            {
+                // No-op.
+            }
+        };
+    }
+}
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    // Start a node.
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Get binding.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering remote filter.
+    binding.RegisterCacheEntryEventFilter<RemoteFilter<int32_t, std::string>>();
+
+    // Get cache instance.
+    Cache<int32_t, std::string> cache = ignite.GetOrCreateCache<int32_t, std::string>("myCache");
+
+    // Declaring custom listener.
+    Listener<int32_t, std::string> listener;
+
+    // Declaring filter.
+    RemoteFilter<int32_t, std::string> filter;
+
+    // Declaring continuous query.
+    continuous::ContinuousQuery<int32_t, std::string> qry(MakeReference(listener), MakeReference(filter));
+}
+//end::continuous-query-filter[]
diff --git a/docs/_docs/code-snippets/cpp/src/continuous_query_listener.cpp b/docs/_docs/code-snippets/cpp/src/continuous_query_listener.cpp
new file mode 100644
index 0000000..947b01e
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/continuous_query_listener.cpp
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+//tag::continuous-query-listener[]
+/**
+ * Listener class.
+ */
+template<typename K, typename V>
+class Listener : public event::CacheEntryEventListener<K, V>
+{
+public:
+    /**
+     * Default constructor.
+     */
+    Listener()
+    {
+        // No-op.
+    }
+
+    /**
+     * Event callback.
+     *
+     * @param evts Events.
+     * @param num Events number.
+     */
+    virtual void OnEvent(const CacheEntryEvent<K, V>* evts, uint32_t num)
+    {
+        for (uint32_t i = 0; i < num; ++i)
+        {
+            std::cout << "Queried entry [key=" << (evts[i].HasValue() ? evts[i].GetKey() : K())
+                << ", val=" << (evts[i].HasValue() ? evts[i].GetValue() : V()) << ']'
+                << std::endl;
+        }
+    }
+};
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    Cache<int32_t, std::string> cache = ignite.GetOrCreateCache<int32_t, std::string>("myCache");
+
+    // Declaring custom listener.
+    Listener<int32_t, std::string> listener;
+
+    // Declaring continuous query.
+    continuous::ContinuousQuery<int32_t, std::string> query(MakeReference(listener));
+
+    continuous::ContinuousQueryHandle<int32_t, std::string> handle = cache.QueryContinuous(query);
+}
+//end::continuous-query-listener[]
diff --git a/docs/_docs/code-snippets/cpp/src/country.h b/docs/_docs/code-snippets/cpp/src/country.h
new file mode 100644
index 0000000..487c24f
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/country.h
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+namespace ignite
+{
+    struct Country
+    {
+        Country() : population(0)
+        {
+            // No-op.
+        }
+
+        Country(const int32_t population, const std::string& name) :
+            population(population),
+            name(name)
+        {
+            // No-op.
+        }
+
+        std::string ToString() const
+        {
+            std::ostringstream oss;
+            oss << "Country [population=" << population
+                << ", name=" << name << ']';
+            return oss.str();
+        }
+
+        int32_t population;
+        std::string name;
+    };
+}
+
+namespace ignite
+{
+    namespace binary
+    {
+        IGNITE_BINARY_TYPE_START(ignite::Country)
+
+            typedef ignite::Country Country;
+
+            IGNITE_BINARY_GET_TYPE_ID_AS_HASH(Country)
+            IGNITE_BINARY_GET_TYPE_NAME_AS_IS(Country)
+            IGNITE_BINARY_GET_FIELD_ID_AS_HASH
+            IGNITE_BINARY_IS_NULL_FALSE(Country)
+            IGNITE_BINARY_GET_NULL_DEFAULT_CTOR(Country)
+
+            static void Write(BinaryWriter& writer, const ignite::Country& obj)
+            {
+                writer.WriteInt32("population", obj.population);
+                writer.WriteString("name", obj.name);
+            }
+
+            static void Read(BinaryReader& reader, ignite::Country& dst)
+            {
+                dst.population = reader.ReadInt32("population");
+                dst.name = reader.ReadString("name");
+            }
+
+        IGNITE_BINARY_TYPE_END
+    }
+}
diff --git a/docs/_docs/code-snippets/cpp/src/invoke.cpp b/docs/_docs/code-snippets/cpp/src/invoke.cpp
new file mode 100644
index 0000000..1d2895b
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/invoke.cpp
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <stdint.h>
+#include <iostream>
+#include <sstream>
+
+#include <ignite/ignition.h>
+#include <ignite/compute/compute.h>
+#include "ignite/cache/cache_entry_processor.h"
+
+using namespace ignite;
+using namespace cache;
+
+//tag::invoke[]
+/**
+ * Processor for invoke method.
+ */
+class IncrementProcessor : public cache::CacheEntryProcessor<std::string, int32_t, int32_t, int32_t>
+{
+public:
+    /**
+     * Constructor.
+     */
+    IncrementProcessor()
+    {
+        // No-op.
+    }
+
+    /**
+     * Copy constructor.
+     *
+     * @param other Other instance.
+     */
+    IncrementProcessor(const IncrementProcessor& other)
+    {
+        // No-op.
+    }
+
+    /**
+     * Assignment operator.
+     *
+     * @param other Other instance.
+     * @return This instance.
+     */
+    IncrementProcessor& operator=(const IncrementProcessor& other)
+    {
+        return *this;
+    }
+
+    /**
+     * Call instance.
+     */
+    virtual int32_t Process(MutableCacheEntry<std::string, int32_t>& entry, const int& arg)
+    {
+        // Increment the value for a specific key by 1.
+        // The operation will be performed on the node where the key is stored.
+        // Note that if the cache does not contain an entry for the given key, it will
+        // be created.
+        if (!entry.IsExists())
+            entry.SetValue(1);
+        else
+            entry.SetValue(entry.GetValue() + 1);
+
+        return entry.GetValue();
+    }
+};
+
+/**
+ * Binary type structure. Defines a set of functions required for type to be serialized and deserialized.
+ */
+namespace ignite
+{
+    namespace binary
+    {
+        template<>
+        struct BinaryType<IncrementProcessor>
+        {
+            static int32_t GetTypeId()
+            {
+                return GetBinaryStringHashCode("IncrementProcessor");
+            }
+
+            static void GetTypeName(std::string& dst)
+            {
+                dst = "IncrementProcessor";
+            }
+
+            static int32_t GetFieldId(const char* name)
+            {
+                return GetBinaryStringHashCode(name);
+            }
+
+            static int32_t GetHashCode(const IncrementProcessor& obj)
+            {
+                return 0;
+            }
+
+            static bool IsNull(const IncrementProcessor& obj)
+            {
+                return false;
+            }
+
+            static void GetNull(IncrementProcessor& dst)
+            {
+                dst = IncrementProcessor();
+            }
+
+            static void Write(BinaryWriter& writer, const IncrementProcessor& obj)
+            {
+                // No-op.
+            }
+
+            static void Read(BinaryReader& reader, IncrementProcessor& dst)
+            {
+                // No-op.
+            }
+        };
+    }
+}
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "platforms/cpp/examples/put-get-example/config/example-cache.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    // Get cache instance.
+    Cache<std::string, int32_t> cache = ignite.GetOrCreateCache<std::string, int32_t>("myCache");
+
+    // Get binding instance.
+    IgniteBinding binding = ignite.GetBinding();
+
+    // Registering our class as a cache entry processor.
+    binding.RegisterCacheEntryProcessor<IncrementProcessor>();
+
+    std::string key("mykey");
+    IncrementProcessor inc;
+
+    cache.Invoke<int32_t>(key, inc, 0);
+}
+//end::invoke[]
diff --git a/docs/_docs/code-snippets/cpp/src/key_value_execute_sql.cpp b/docs/_docs/code-snippets/cpp/src/key_value_execute_sql.cpp
new file mode 100644
index 0000000..8903173
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/key_value_execute_sql.cpp
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+#include "country.h";
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+const char* CITY_CACHE_NAME = "City";
+const char* COUNTRY_CACHE_NAME = "Country";
+const char* COUNTRY_LANGUAGE_CACHE_NAME = "CountryLanguage";
+
+int main()
+{
+    //tag::key-value-execute-sql[]
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "config/sql.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    Cache<int64_t, std::string> cityCache = ignite.GetOrCreateCache<int64_t, std::string>(CITY_CACHE_NAME);
+    Cache<int64_t, Country> countryCache = ignite.GetOrCreateCache<int64_t, Country>(COUNTRY_CACHE_NAME);
+    Cache<int64_t, std::string> languageCache = ignite.GetOrCreateCache<int64_t, std::string>(COUNTRY_LANGUAGE_CACHE_NAME);
+
+    // Note: SQL fields queries can only use fields that are listed in the "QueryEntity" part of the cache configuration.
+    SqlFieldsQuery query = SqlFieldsQuery("SELECT name, population FROM country ORDER BY population DESC LIMIT 10");
+
+    QueryFieldsCursor cursor = countryCache.Query(query);
+    while (cursor.HasNext())
+    {
+        QueryFieldsRow row = cursor.GetNext();
+        std::string name = row.GetNext<std::string>();
+        std::string population = row.GetNext<std::string>();
+        std::cout << "    >>> " << population << " people live in " << name << std::endl;
+    }
+    //end::key-value-execute-sql[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/key_value_object_key.cpp b/docs/_docs/code-snippets/cpp/src/key_value_object_key.cpp
new file mode 100644
index 0000000..d1d0338
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/key_value_object_key.cpp
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+#include "city.h"
+#include "city_key.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+int main()
+{
+    //tag::key-value-object-key[]
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    Cache<CityKey, City> cityCache = ignite.GetOrCreateCache<CityKey, City>("City");
+
+    CityKey key = CityKey(5, "NLD");
+
+    cityCache.Put(key, City(100000)); // City is assumed to expose a population-only constructor (see city.h).
+
+    // Getting the city by ID and country code.
+    City city = cityCache.Get(key);
+
+    std::cout << ">> Updating Amsterdam record:" << std::endl;
+    city.population = city.population - 10000;
+
+    cityCache.Put(key, city);
+
+    std::cout << cityCache.Get(key).ToString() << std::endl;
+    //end::key-value-object-key[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/person.h b/docs/_docs/code-snippets/cpp/src/person.h
new file mode 100644
index 0000000..492f5c5
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/person.h
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "ignite/binary/binary.h"
+
+namespace ignite
+{
+    struct Person
+    {
+        Person() : orgId(0), salary(.0)
+        {
+            // No-op.
+        }
+
+        Person(int64_t orgId, const std::string& firstName,
+            const std::string& lastName, const std::string& resume, double salary) :
+            orgId(orgId),
+            firstName(firstName),
+            lastName(lastName),
+            resume(resume),
+            salary(salary)
+        {
+            // No-op.
+        }
+
+        std::string ToString() const
+        {
+            std::ostringstream oss;
+
+            oss << "Person [orgId=" << orgId
+                << ", lastName=" << lastName
+                << ", firstName=" << firstName
+                << ", salary=" << salary
+                << ", resume=" << resume << ']';
+
+            return oss.str();
+        }
+
+        int64_t orgId;
+        std::string firstName;
+        std::string lastName;
+        std::string resume;
+        double salary;
+    };
+}
+
+namespace ignite
+{
+    namespace binary
+    {
+        IGNITE_BINARY_TYPE_START(ignite::Person)
+
+            typedef ignite::Person Person;
+
+            IGNITE_BINARY_GET_TYPE_ID_AS_HASH(Person)
+            IGNITE_BINARY_GET_TYPE_NAME_AS_IS(Person)
+            IGNITE_BINARY_GET_FIELD_ID_AS_HASH
+            IGNITE_BINARY_IS_NULL_FALSE(Person)
+            IGNITE_BINARY_GET_NULL_DEFAULT_CTOR(Person)
+
+            static void Write(BinaryWriter& writer, const ignite::Person& obj)
+            {
+                writer.WriteInt64("orgId", obj.orgId);
+                writer.WriteString("firstName", obj.firstName);
+                writer.WriteString("lastName", obj.lastName);
+                writer.WriteString("resume", obj.resume);
+                writer.WriteDouble("salary", obj.salary);
+            }
+
+            static void Read(BinaryReader& reader, ignite::Person& dst)
+            {
+                dst.orgId = reader.ReadInt64("orgId");
+                dst.firstName = reader.ReadString("firstName");
+                dst.lastName = reader.ReadString("lastName");
+                dst.resume = reader.ReadString("resume");
+                dst.salary = reader.ReadDouble("salary");
+            }
+
+        IGNITE_BINARY_TYPE_END
+    }
+}
diff --git a/docs/_docs/code-snippets/cpp/src/scan_query.cpp b/docs/_docs/code-snippets/cpp/src/scan_query.cpp
new file mode 100644
index 0000000..56b35c3
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/scan_query.cpp
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+#include "person.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    //tag::query-cursor[]
+    Cache<int64_t, Person> cache = ignite.GetOrCreateCache<int64_t, ignite::Person>("personCache");
+
+    QueryCursor<int64_t, Person> cursor = cache.Query(ScanQuery());
+    //end::query-cursor[]
+
+    // Iterate over results.
+    while (cursor.HasNext())
+    {
+        std::cout << cursor.GetNext().GetKey() << std::endl;
+    }
+
+    //tag::set-local[]
+    ScanQuery sq;
+    sq.SetLocal(true);
+
+    QueryCursor<int64_t, Person> localCursor = cache.Query(sq);
+    //end::set-local[]
+
+}
diff --git a/docs/_docs/code-snippets/cpp/src/setting_work_directory.cpp b/docs/_docs/code-snippets/cpp/src/setting_work_directory.cpp
new file mode 100644
index 0000000..50ea929
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/setting_work_directory.cpp
@@ -0,0 +1,32 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+
+int main()
+{
+    //tag::setting-work-directory[]
+    IgniteConfiguration cfg;
+
+    cfg.igniteHome = "/path/to/work/directory";
+    //end::setting-work-directory[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/sql.cpp b/docs/_docs/code-snippets/cpp/src/sql.cpp
new file mode 100644
index 0000000..fb80f01
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/sql.cpp
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+#include "person.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "config/sql.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    //tag::sql-fields-query[]
+    Cache<int64_t, Person> cache = ignite.GetOrCreateCache<int64_t, Person>("Person");
+
+    // Iterate over the result set.
+    // Note: SQL fields queries can only use fields that are listed in the "QueryEntity" part of the cache configuration.
+    QueryFieldsCursor cursor = cache.Query(SqlFieldsQuery("select concat(firstName, ' ', lastName) from Person"));
+    while (cursor.HasNext())
+    {
+        std::cout << "personName=" << cursor.GetNext().GetNext<std::string>() << std::endl;
+    }
+    //end::sql-fields-query[]
+
+    //tag::sql-fields-query-scheme[]
+    // Note: SQL fields queries can only use fields that are listed in the "QueryEntity" part of the cache configuration.
+    SqlFieldsQuery sql = SqlFieldsQuery("select name from City");
+    sql.SetSchema("PERSON");
+    //end::sql-fields-query-scheme[]
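+
+    // For illustration (not part of the original snippet): the schema-qualified
+    // query above is executed like any other fields query, e.g.
+    // QueryFieldsCursor cityCursor = cache.Query(sql);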
+
+    //tag::sql-fields-query-scheme-inline[]
+    // Note: SQL fields queries can only use fields that are listed in the "QueryEntity" part of the cache configuration.
+    sql = SqlFieldsQuery("select name from Person.City");
+    //end::sql-fields-query-scheme-inline[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/sql_create.cpp b/docs/_docs/code-snippets/cpp/src/sql_create.cpp
new file mode 100644
index 0000000..ceae081
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/sql_create.cpp
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+#include "person.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+
+    //tag::sql-create[]
+    Cache<int64_t, Person> cache = ignite.GetOrCreateCache<int64_t, Person>("Person");
+
+    // Creating City table.
+    cache.Query(SqlFieldsQuery("CREATE TABLE City (id int primary key, name varchar, region varchar)"));
+    //end::sql-create[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/sql_join_order.cpp b/docs/_docs/code-snippets/cpp/src/sql_join_order.cpp
new file mode 100644
index 0000000..3a1dc94
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/sql_join_order.cpp
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+#include "person.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace query;
+
+int main()
+{
+	//tag::sql-join-order[]
+	SqlFieldsQuery query = SqlFieldsQuery("SELECT * FROM TABLE_A, TABLE_B USE INDEX(HASH_JOIN_IDX) WHERE TABLE_A.column1 = TABLE_B.column2");
+	query.SetEnforceJoinOrder(true);
+	//end::sql-join-order[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/start_stop_nodes.cpp b/docs/_docs/code-snippets/cpp/src/start_stop_nodes.cpp
new file mode 100644
index 0000000..b68c35a
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/start_stop_nodes.cpp
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+
+int main()
+{
+    //tag::start-all-nodes[]
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+    //end::start-all-nodes[]
+
+    //tag::activate-cluster[]
+    ignite.SetActive(true);
+    //end::activate-cluster[]
+
+    //tag::deactivate-cluster[]
+    ignite.SetActive(false);
+    //end::deactivate-cluster[]
+
+    //tag::stop-node[]
+    Ignition::Stop(ignite.GetName(), false);
+    //end::stop-node[]
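+
+    // For illustration (assuming the standard Ignition API, not part of the
+    // original snippet): every node started in this process can be stopped at
+    // once with Ignition::StopAll(false);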
+}
diff --git a/docs/_docs/code-snippets/cpp/src/thin_authentication.cpp b/docs/_docs/code-snippets/cpp/src/thin_authentication.cpp
new file mode 100644
index 0000000..34b36dd
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/thin_authentication.cpp
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::thin-authentication[]
+#include <ignite/thin/ignite_client.h>
+#include <ignite/thin/ignite_client_configuration.h>
+
+using namespace ignite::thin;
+
+void TestClientWithAuth()
+{
+    IgniteClientConfiguration cfg;
+    cfg.SetEndPoints("127.0.0.1:10800");
+
+    // Use your own credentials here.
+    cfg.SetUser("ignite");
+    cfg.SetPassword("ignite");
+
+    IgniteClient client = IgniteClient::Start(cfg);
+
+    cache::CacheClient<int32_t, std::string> cacheClient =
+        client.GetOrCreateCache<int32_t, std::string>("TestCache");
+
+    cacheClient.Put(42, "Hello Ignite Thin Client with auth!");
+}
+//end::thin-authentication[]
+
+int main()
+{
+    TestClientWithAuth();
+}
diff --git a/docs/_docs/code-snippets/cpp/src/thin_client_cache.cpp b/docs/_docs/code-snippets/cpp/src/thin_client_cache.cpp
new file mode 100644
index 0000000..d8fe477
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/thin_client_cache.cpp
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <map>
+#include <string>
+
+#include <ignite/thin/ignite_client.h>
+#include <ignite/thin/ignite_client_configuration.h>
+
+using namespace ignite::thin;
+
+int main()
+{
+    IgniteClientConfiguration cfg;
+    cfg.SetEndPoints("127.0.0.1:10800");
+
+    IgniteClient client = IgniteClient::Start(cfg);
+
+    //tag::thin-getting-cache-instance[]
+    cache::CacheClient<int32_t, std::string> cache =
+        client.GetOrCreateCache<int32_t, std::string>("TestCache");
+    //end::thin-getting-cache-instance[]
+
+    //tag::basic-cache-operations[]
+    std::map<int32_t, std::string> vals;
+    for (int32_t i = 1; i < 100; i++)
+    {
+        vals[i] = std::to_string(i);
+    }
+
+    cache.PutAll(vals);
+    cache.Replace(1, "2");
+    cache.Put(101, "101");
+    cache.RemoveAll();
+    //end::basic-cache-operations[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/thin_client_ssl.cpp b/docs/_docs/code-snippets/cpp/src/thin_client_ssl.cpp
new file mode 100644
index 0000000..b11dcfd
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/thin_client_ssl.cpp
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <ignite/thin/ignite_client.h>
+#include <ignite/thin/ignite_client_configuration.h>
+
+using namespace ignite::thin;
+
+int main()
+{
+    //tag::thin-client-ssl[]
+    IgniteClientConfiguration cfg;
+
+    // Sets SSL mode.
+    cfg.SetSslMode(SslMode::Type::REQUIRE);
+
+    // Sets file path to SSL certificate authority to authenticate server certificate during connection establishment.
+    cfg.SetSslCaFile("path/to/SSL/certificate/authority");
+
+    // Sets file path to SSL certificate to use during connection establishment.
+    cfg.SetSslCertFile("path/to/SSL/certificate");
+
+    // Sets file path to SSL private key to use during connection establishment.
+    cfg.SetSslKeyFile("path/to/SSL/private/key");
+    //end::thin-client-ssl[]
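+
+    // For illustration (not part of the original snippet): once the endpoints are
+    // also set, e.g. cfg.SetEndPoints("127.0.0.1:10800"), the secured client is
+    // started the usual way with IgniteClient::Start(cfg).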
+}
diff --git a/docs/_docs/code-snippets/cpp/src/thin_creating_client_instance.cpp b/docs/_docs/code-snippets/cpp/src/thin_creating_client_instance.cpp
new file mode 100644
index 0000000..a2c4230
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/thin_creating_client_instance.cpp
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::thin-creating-client-instance[]
+#include <ignite/thin/ignite_client.h>
+#include <ignite/thin/ignite_client_configuration.h>
+
+using namespace ignite::thin;
+
+void TestClient()
+{
+    IgniteClientConfiguration cfg;
+
+    // Endpoints list format is "<host>[:port[..range]][,...]"
+    cfg.SetEndPoints("127.0.0.1:11110,example.com:1234..1240");
+
+    IgniteClient client = IgniteClient::Start(cfg);
+
+    cache::CacheClient<int32_t, std::string> cacheClient =
+        client.GetOrCreateCache<int32_t, std::string>("TestCache");
+
+    cacheClient.Put(42, "Hello Ignite Thin Client!");
+}
+//end::thin-creating-client-instance[]
+
+int main()
+{
+    TestClient();
+}
diff --git a/docs/_docs/code-snippets/cpp/src/thin_partition_awareness.cpp b/docs/_docs/code-snippets/cpp/src/thin_partition_awareness.cpp
new file mode 100644
index 0000000..c965d9b
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/thin_partition_awareness.cpp
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+//tag::thin-partition-awareness[]
+#include <ignite/thin/ignite_client.h>
+#include <ignite/thin/ignite_client_configuration.h>
+
+using namespace ignite::thin;
+
+void TestClientPartitionAwareness()
+{
+    IgniteClientConfiguration cfg;
+    cfg.SetEndPoints("127.0.0.1:10800,217.29.2.1:10800,200.10.33.1:10800");
+    cfg.SetPartitionAwareness(true);
+
+    IgniteClient client = IgniteClient::Start(cfg);
+
+    cache::CacheClient<int32_t, std::string> cacheClient =
+        client.GetOrCreateCache<int32_t, std::string>("TestCache");
+
+    cacheClient.Put(42, "Hello Ignite Partition Awareness!");
+
+    cacheClient.RefreshAffinityMapping();
+
+    // Getting a value
+    std::string val = cacheClient.Get(42);
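+
+    // Note (not part of the original snippet): with partition awareness enabled,
+    // the Put and Get calls above are sent directly to the node that owns key 42
+    // instead of being routed through a single proxy connection.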
+}
+//end::thin-partition-awareness[]
+
+int main()
+{
+    TestClientPartitionAwareness();
+}
diff --git a/docs/_docs/code-snippets/cpp/src/transactions.cpp b/docs/_docs/code-snippets/cpp/src/transactions.cpp
new file mode 100644
index 0000000..6e79ee6
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/transactions.cpp
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace transactions;
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignition::Start(cfg);
+
+    Ignite ignite = Ignition::Get();
+
+    Cache<std::string, int32_t> cache = ignite.GetOrCreateCache<std::string, int32_t>("myCache");
+
+    //tag::transactions-execution[]
+    Transactions transactions = ignite.GetTransactions();
+
+    Transaction tx = transactions.TxStart();
+    int hello = cache.Get("Hello");
+
+    if (hello == 1)
+        cache.Put("Hello", 11);
+
+    cache.Put("World", 22);
+
+    tx.Commit();
+    //end::transactions-execution[]
+
+    //tag::transactions-optimistic[]
+    // Re-try the transaction a limited number of times.
+    int const retryCount = 10;
+    int retries = 0;
+    
+    // Start a transaction in the optimistic mode with the serializable isolation level.
+    while (retries < retryCount)
+    {
+        retries++;
+    
+        try
+        {
+            Transaction tx = ignite.GetTransactions().TxStart(
+                    TransactionConcurrency::OPTIMISTIC, TransactionIsolation::SERIALIZABLE);
+
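+            // Illustrative only (not part of the original snippet): a real
+            // transaction would read and update cache entries here, e.g.
+            // cache.Put("Hello", cache.Get("Hello") + 1);
+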
+            // commit the transaction
+            tx.Commit();
+
+            // the transaction succeeded. Leave the while loop.
+            break;
+        }
+        catch (IgniteError&)
+        {
+            // Transaction has failed. Retry.
+        }
+    }
+    //end::transactions-optimistic[]
+}
diff --git a/docs/_docs/code-snippets/cpp/src/transactions_pessimistic.cpp b/docs/_docs/code-snippets/cpp/src/transactions_pessimistic.cpp
new file mode 100644
index 0000000..ea28876
--- /dev/null
+++ b/docs/_docs/code-snippets/cpp/src/transactions_pessimistic.cpp
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <iostream>
+
+#include "ignite/ignite.h"
+#include "ignite/ignition.h"
+
+using namespace ignite;
+using namespace cache;
+using namespace transactions;
+
+int main()
+{
+    IgniteConfiguration cfg;
+    cfg.springCfgPath = "/path/to/configuration.xml";
+
+    Ignite ignite = Ignition::Start(cfg);
+    
+    Cache<int32_t, int32_t> cache = ignite.GetOrCreateCache<int32_t, int32_t>("myCache");
+
+    //tag::transactions-pessimistic[]
+    try
+    {
+        Transaction tx = ignite.GetTransactions().TxStart(
+            TransactionConcurrency::PESSIMISTIC, TransactionIsolation::READ_COMMITTED, 300, 0);
+        cache.Put(1, 1);
+    
+        cache.Put(2, 1);
+    
+        tx.Commit();
+    }
+    catch (IgniteError& err)
+    {
+        std::cout << "An error occurred: " << err.GetText() << std::endl;
+        std::cin.get();
+        return err.GetCode();
+    }
+    //end::transactions-pessimistic[]
+}
diff --git a/docs/_docs/code-snippets/dotnet/AffinityCollocation.cs b/docs/_docs/code-snippets/dotnet/AffinityCollocation.cs
new file mode 100644
index 0000000..433e113
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/AffinityCollocation.cs
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using System;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Cache;
+using Apache.Ignite.Core.Cache.Affinity;
+using Apache.Ignite.Core.Cache.Configuration;
+
+namespace dotnet_helloworld
+{
+    // tag::affinityCollocation[]
+    class Person
+    {
+        public int Id { get; set; }
+        public string Name { get; set; }
+        public int CityId { get; set; }
+        public string CompanyId { get; set; }
+    }
+
+    class PersonKey
+    {
+        public int Id { get; set; }
+
+        [AffinityKeyMapped] public string CompanyId { get; set; }
+    }
+
+    class Company
+    {
+        public string Name { get; set; }
+    }
+
+    class AffinityCollocation
+    {
+        public static void Example()
+        {
+            var personCfg = new CacheConfiguration
+            {
+                Name = "persons",
+                Backups = 1,
+                CacheMode = CacheMode.Partitioned
+            };
+
+            var companyCfg = new CacheConfiguration
+            {
+                Name = "companies",
+                Backups = 1,
+                CacheMode = CacheMode.Partitioned
+            };
+
+            using (var ignite = Ignition.Start())
+            {
+                var personCache = ignite.GetOrCreateCache<PersonKey, Person>(personCfg);
+                var companyCache = ignite.GetOrCreateCache<string, Company>(companyCfg);
+
+                var person = new Person {Name = "Vasya"};
+
+                var company = new Company {Name = "Company1"};
+
+                personCache.Put(new PersonKey {Id = 1, CompanyId = "company1_key"}, person);
+                companyCache.Put("company1_key", company);
+            }
+        }
+    }
+    // end::affinityCollocation[]
+
+    static class CacheKeyConfigurationExamples
+    {
+        public static void ConfigureAffinityKeyWithCacheKeyConfiguration() {
+            // tag::config-with-key-configuration[]
+            var personCfg = new CacheConfiguration("persons")
+            {
+                KeyConfiguration = new[]
+                {
+                    new CacheKeyConfiguration
+                    {
+                        TypeName = nameof(PersonKey),
+                        AffinityKeyFieldName = nameof(PersonKey.CompanyId)
+                    } 
+                }
+            };
+
+            var companyCfg = new CacheConfiguration("companies");
+
+            IIgnite ignite = Ignition.Start();
+
+            ICache<PersonKey, Person> personCache = ignite.GetOrCreateCache<PersonKey, Person>(personCfg);
+            ICache<string, Company> companyCache = ignite.GetOrCreateCache<string, Company>(companyCfg);
+
+            var companyId = "company_1";
+            Company c1 = new Company {Name = "My company"};
+            Person p1 = new Person {Id = 1, Name = "John", CompanyId = companyId};
+
+            // Both the p1 and c1 objects will be cached on the same node
+            personCache.Put(new PersonKey {Id = 1, CompanyId = companyId}, p1);
+            companyCache.Put(companyId, c1);
+
+            // Get the person object
+            p1 = personCache.Get(new PersonKey {Id = 1, CompanyId = companyId});
+            // end::config-with-key-configuration[]
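+
+            // For illustration (assuming the standard affinity API, not part of the
+            // original snippet): the node that stores both entries can be looked up with
+            // ignite.GetAffinity("persons").MapKeyToNode(new PersonKey {Id = 1, CompanyId = companyId});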
+        }
+
+        public static void AffinityKeyClass()
+        {
+            // tag::affinity-key-class[]
+            var personCfg = new CacheConfiguration("persons");
+            var companyCfg = new CacheConfiguration("companies");
+
+            IIgnite ignite = Ignition.Start();
+
+            ICache<AffinityKey, Person> personCache = ignite.GetOrCreateCache<AffinityKey, Person>(personCfg);
+            ICache<string, Company> companyCache = ignite.GetOrCreateCache<string, Company>(companyCfg);
+
+            var companyId = "company_1";
+            Company c1 = new Company {Name = "My company"};
+            Person p1 = new Person {Id = 1, Name = "John", CompanyId = companyId};
+
+            // Both the p1 and c1 objects will be cached on the same node
+            personCache.Put(new AffinityKey(1, companyId), p1);
+            companyCache.Put(companyId, c1);
+
+            // Get the person object
+            p1 = personCache.Get(new AffinityKey(1, companyId));
+            // end::affinity-key-class[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/BaselineTopology.cs b/docs/_docs/code-snippets/dotnet/BaselineTopology.cs
new file mode 100644
index 0000000..65710ea
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/BaselineTopology.cs
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+using System;
+using Apache.Ignite.Core;
+
+namespace dotnet_helloworld
+{
+    public static class BaselineTopology
+    {
+        public static void Activate()
+        {
+            // tag::activate[]
+            IIgnite ignite = Ignition.Start();
+            ignite.GetCluster().SetActive(true);
+            // end::activate[]
+        }
+
+        public static void EnableAutoAdjust()
+        {
+            // tag::enable-autoadjustment[]
+            IIgnite ignite = Ignition.Start();
+            ignite.GetCluster().SetBaselineAutoAdjustEnabledFlag(true);
+            ignite.GetCluster().SetBaselineAutoAdjustTimeout(30000);
+            // end::enable-autoadjustment[]
+        }
+
+        public static void DisableAutoAdjust()
+        {
+            IIgnite ignite = Ignition.Start();
+            // tag::disable-autoadjustment[]
+            ignite.GetCluster().SetBaselineAutoAdjustEnabledFlag(false);
+            // end::disable-autoadjustment[]
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/BasicCacheOperations.cs b/docs/_docs/code-snippets/dotnet/BasicCacheOperations.cs
new file mode 100644
index 0000000..11980c4
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/BasicCacheOperations.cs
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using System;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Compute;
+
+namespace dotnet_helloworld
+{
+    public class BasicCacheOperations
+    {
+        public static void AtomicOperations()
+        {
+            // tag::atomicOperations1[]
+            using (var ignite = Ignition.Start("examples/config/example-cache.xml"))
+            {
+                var cache = ignite.GetCache<int, string>("cache_name");
+
+                for (var i = 0; i < 10; i++)
+                {
+                    cache.Put(i, i.ToString());
+                }
+
+                for (var i = 0; i < 10; i++)
+                {
+                    Console.Write("Got [key=" + i + ", val=" + cache.Get(i) + ']');
+                }
+            }
+            // end::atomicOperations1[]
+
+            // tag::atomicOperations2[]
+            using (var ignite = Ignition.Start("examples/config/example-cache.xml"))
+            {
+                var cache = ignite.GetCache<string, int>("cache_name");
+
+                // Put-if-absent which returns previous value.
+                var oldVal = cache.GetAndPutIfAbsent("Hello", 11);
+
+                // Put-if-absent which returns boolean success flag.
+                var success = cache.PutIfAbsent("World", 22);
+
+                // Replace-if-exists operation (opposite of getAndPutIfAbsent), returns previous value.
+                oldVal = cache.GetAndReplace("Hello", 11);
+
+                // Replace-if-exists operation (opposite of putIfAbsent), returns boolean success flag.
+                success = cache.Replace("World", 22);
+
+                // Replace-if-matches operation.
+                success = cache.Replace("World", 2, 22);
+
+                // Remove-if-matches operation.
+                success = cache.Remove("Hello", 1);
+            }
+            // end::atomicOperations2[]
+        }
+
+        // tag::asyncExec[]
+        class HelloworldFunc : IComputeFunc<string>
+        {
+            public string Invoke()
+            {
+                return "Hello World";
+            }
+        }
+        
+        public static void AsynchronousExecution()
+        {
+            var ignite = Ignition.Start();
+            var compute = ignite.GetCompute();
+            
+            //Execute a closure asynchronously
+            var fut = compute.CallAsync(new HelloworldFunc());
+            
+            // Listen for completion and print out the result
+            fut.ContinueWith(t => Console.Write(t.Result));
+        }
+        // end::asyncExec[]
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/ClusterGroups.cs b/docs/_docs/code-snippets/dotnet/ClusterGroups.cs
new file mode 100644
index 0000000..8948b7d
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/ClusterGroups.cs
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+using System;
+using Apache.Ignite.Core;
+using Apache.Ignite.Core.Compute;
+using Apache.Ignite.Core.Discovery.Tcp;
+using Apache.Ignite.Core.Discovery.Tcp.Static;
+
+namespace dotnet_helloworld
+{
+    public class ClusterGroups
+    {
+        // tag::broadcastAction[]
+        class PrintNodeIdAction : IComputeAction
+        {
+            public void Invoke()
+            {
+                Console.WriteLine("Hello node: " +
+                                  Ignition.GetIgnite().GetCluster().GetLocalNode().Id);
+            }
+        }
+
+        public static void RemotesBroadcastDemo()
+        {
+            var ignite = Ignition.Start();
+
+            var cluster = ignite.GetCluster();
+
+            // Get compute instance which will only execute
+            // over remote nodes, i.e. all the nodes except for this one.
+            var compute = cluster.ForRemotes().GetCompute();
+
+            // Broadcast to all remote nodes and print the ID of the node
+            // on which this closure is executing.
+            compute.Broadcast(new PrintNodeIdAction());
+        }
+        // end::broadcastAction[]
+
+        public static void ClusterGroupsDemo()
+        {
+            var ignite = Ignition.Start(
+                new IgniteConfiguration
+                {
+                    DiscoverySpi = new TcpDiscoverySpi
+                    {
+                        LocalPort = 48500,
+                        LocalPortRange = 20,
+                        IpFinder = new TcpDiscoveryStaticIpFinder
+                        {
+                            Endpoints = new[]
+                            {
+                                "127.0.0.1:48500..48520"
+                            }
+                        }
+                    }
+                }
+            );
+    
+            // tag::clusterGroups[]
+            var cluster = ignite.GetCluster();
+            
+            // All nodes on which cache with name "myCache" is deployed,
+            // either in client or server mode.
+            var cacheGroup = cluster.ForCacheNodes("myCache");
+
+            // All data nodes responsible for caching data for "myCache".
+            var dataGroup = cluster.ForDataNodes("myCache");
+
+            // All client nodes that access "myCache".
+            var clientGroup = cluster.ForClientNodes("myCache");
+            // end::clusterGroups[]
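+
+            // For illustration (not part of the original snippet): a compute task
+            // could then be broadcast over one of these groups, e.g.
+            // dataGroup.GetCompute().Broadcast(new PrintNodeIdAction());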
+        }
+    }
+}
diff --git a/docs/_docs/code-snippets/dotnet/ClusteringOverview.cs b/docs/_docs/code-snippets/dotnet/ClusteringOverview.cs
new file mode 100644
index 0000000..2b6ea30
--- /dev/null
+++ b/docs/_docs/code-snippets/dotnet/ClusteringOverview.cs
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0