# sprint-2 - fixed devnotes file.
diff --git a/wiki/documentation/basic-concepts/async-support.md b/wiki/documentation/basic-concepts/async-support.md
deleted file mode 100644
index e434f70..0000000
--- a/wiki/documentation/basic-concepts/async-support.md
+++ /dev/null
@@ -1,92 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-All distributed methods on all Ignite APIs can be executed either synchronously or asynchronously. However, instead of having a duplicate asynchronous method for every synchronous one (like `get()` and `getAsync()`, or `put()` and `putAsync()`, etc.), Ignite chose a more elegant approach, where methods don't have to be duplicated.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteAsyncSupport"
-}
-[/block]
-The `IgniteAsyncSupport` interface adds an asynchronous mode to many Ignite APIs. For example, `IgniteCompute`, `IgniteServices`, `IgniteCache`, and `IgniteTransactions` all extend the `IgniteAsyncSupport` interface.
-
-To enable asynchronous mode, call the `withAsync()` method.
-
-## Compute Grid Example
-The example below illustrates the difference between synchronous and asynchronous computations.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute = ignite.compute();\n\n// Execute a job and wait for the result.\nString res = compute.call(() -> {\n  // Print hello world on some cluster node.\n\tSystem.out.println(\"Hello World\");\n  \n  return \"Hello World\";\n});",
-      "language": "java",
-      "name": "Synchronous"
-    }
-  ]
-}
-[/block]
-Here is how you would make the above invocation asynchronous:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Enable asynchronous mode.\nIgniteCompute asyncCompute = ignite.compute().withAsync();\n\n// Asynchronously execute a job.\nasyncCompute.call(() -> {\n  // Print hello world on some cluster node and wait for completion.\n\tSystem.out.println(\"Hello World\");\n  \n  return \"Hello World\";\n});\n\n// Get the future for the above invocation.\nIgniteFuture<String> fut = asyncCompute.future();\n\n// Asynchronously listen for completion and print out the result.\nfut.listenAsync(f -> System.out.println(\"Job result: \" + f.get()));",
-      "language": "java",
-      "name": "Asynchronous"
-    }
-  ]
-}
-[/block]
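For readers new to the pattern itself, the same sync-versus-async contrast exists in the plain JDK, independent of any Ignite API. A minimal sketch (class and method names here are ours, not Ignite's):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncPatternSketch {
    // Synchronous style: submit the job and block until its result arrives.
    static String callSync(ExecutorService exec) throws Exception {
        return exec.submit(() -> "Hello World").get();
    }

    // Asynchronous style: return the future immediately; the caller
    // decides when (or whether) to wait for the result.
    static Future<String> callAsync(ExecutorService exec) {
        return exec.submit(() -> "Hello World");
    }

    public static void main(String[] args) throws Exception {
        ExecutorService exec = Executors.newSingleThreadExecutor();

        System.out.println("sync:  " + callSync(exec));

        Future<String> fut = callAsync(exec);
        // Other work could happen here before the result is needed.
        System.out.println("async: " + fut.get());

        exec.shutdown();
    }
}
```

As with `withAsync()`, the asynchronous variant returns immediately, and the caller chooses when to consume the future.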
-## Data Grid Example
-Here is the data grid example for synchronous and asynchronous invocations.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<String, Integer> cache = ignite.jcache(\"mycache\");\n\n// Synchronously store value in cache and get previous value.\nInteger val = cache.getAndPut(\"1\", 1);",
-      "language": "java",
-      "name": "Synchronous"
-    }
-  ]
-}
-[/block]
-Here is how you would make the above invocation asynchronous.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Enable asynchronous mode.\nIgniteCache<String, Integer> asyncCache = ignite.jcache(\"mycache\").withAsync();\n\n// Asynhronously store value in cache.\nasyncCache.getAndPut(\"1\", 1);\n\n// Get future for the above invocation.\nIgniteFuture<Integer> fut = asyncCache.future();\n\n// Asynchronously listen for the operation to complete.\nfut.listenAsync(f -> System.out.println(\"Previous cache value: \" + f.get()));",
-      "language": "java",
-      "name": "Asynchronous"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "@IgniteAsyncSupported"
-}
-[/block]
-Not every method on the Ignite APIs is distributed, and non-distributed methods do not really require an asynchronous mode. To avoid confusion about which methods are distributed, i.e. can be asynchronous, and which are not, all distributed methods in Ignite are annotated with the `@IgniteAsyncSupported` annotation.
-[block:callout]
-{
-  "type": "info",
-  "body": "Note that, although not really needed, in async mode you can still get the future for non-distributed operations as well.  However, this future will always be completed."
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/basic-concepts/getting-started.md b/wiki/documentation/basic-concepts/getting-started.md
deleted file mode 100644
index 705c7a9..0000000
--- a/wiki/documentation/basic-concepts/getting-started.md
+++ /dev/null
@@ -1,235 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Prerequisites"
-}
-[/block]
-Apache Ignite was officially tested on:
-[block:parameters]
-{
-  "data": {
-    "h-0": "Name",
-    "h-1": "Value",
-    "0-0": "JDK",
-    "0-1": "Oracle JDK 7 and above",
-    "1-0": "OS",
-    "2-0": "Network",
-    "1-1": "Linux (any flavor),\nMac OSX (10.6 and up)\nWindows (XP and up), \nWindows Server (2008 and up)",
-    "2-1": "No restrictions (10G recommended)",
-    "3-0": "Hardware",
-    "3-1": "No restrictions"
-  },
-  "cols": 2,
-  "rows": 3
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Installation"
-}
-[/block]
-Here is a quick summary of installing Apache Ignite:
-  * Download the Apache Ignite ZIP archive from https://ignite.incubator.apache.org/
-  * Unzip the ZIP archive into an installation folder on your system
-  * Optionally, set the `IGNITE_HOME` environment variable to point to the installation folder, making sure there is no trailing `/` in the path
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Start From Command Line"
-}
-[/block]
-An Ignite node can be started from the command line either with the default configuration or by passing a configuration file. You can start as many nodes as you like, and they will all automatically discover each other.
-
-##With Default Configuration
-To start a grid node with default configuration, open the command shell and, assuming you are in `IGNITE_HOME` (Ignite installation folder), just type this:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "$ bin/ignite.sh",
-      "language": "shell"
-    }
-  ]
-}
-[/block]
-and you will see output similar to this:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "[02:49:12] Ignite node started OK (id=ab5d18a6)\n[02:49:12] Topology snapshot [ver=1, nodes=1, CPUs=8, heap=1.0GB]",
-      "language": "text"
-    }
-  ]
-}
-[/block]
-By default, `ignite.sh` starts an Ignite node with the configuration file `config/default-config.xml`.
-
-##Passing Configuration File 
-To pass a configuration file explicitly from the command line, type `ignite.sh <path to configuration file>` from within your Ignite installation folder. For example:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "$ bin/ignite.sh examples/config/example-cache.xml",
-      "language": "shell"
-    }
-  ]
-}
-[/block]
-The path to the configuration file can be absolute, or relative to either `IGNITE_HOME` (the Ignite installation folder) or the `META-INF` folder in your classpath.
-[block:callout]
-{
-  "type": "success",
-  "title": "Interactive Mode",
-  "body": "To pick a configuration file in interactive mode just pass `-i` flag, like so: `ignite.sh -i`."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Get It With Maven"
-}
-[/block]
-Another easy way to get started with Apache Ignite in your project is to use Maven 2 dependency management.
-
-Ignite requires only one mandatory dependency, `ignite-core`. Usually you will also need to add `ignite-spring` for Spring-based XML configuration and `ignite-indexing` for SQL querying.
-
-Replace `${ignite.version}` with the actual Ignite version.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-core</artifactId>\n    <version>${ignite.version}</version>\n</dependency>\n<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-spring</artifactId>\n    <version>${ignite.version}</version>\n</dependency>\n<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-indexing</artifactId>\n    <version>${ignite.version}</version>\n</dependency>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "success",
-  "title": "Maven Setup",
-  "body": "See [Maven Setup](/docs/maven-setup) for more information on how to include individual Ignite maven artifacts."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "First Ignite Compute Application"
-}
-[/block]
-Let's write our first grid application, which will count the number of non-whitespace characters in a sentence. We will take a sentence, split it into multiple words, and have a compute job count the characters in each individual word. At the end, we simply add up the results received from the individual jobs to get the total count.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "try (Ignite ignite = Ignition.start()) {\n  Collection<IgniteCallable<Integer>> calls = new ArrayList<>();\n\n  // Iterate through all the words in the sentence and create Callable jobs.\n  for (final String word : \"Count characters using callable\".split(\" \"))\n    calls.add(word::length);\n\n  // Execute collection of Callables on the grid.\n  Collection<Integer> res = ignite.compute().call(calls);\n\n  int sum = res.stream().mapToInt(Integer::intValue).sum();\n \n\tSystem.out.println(\"Total number of characters is '\" + sum + \"'.\");\n}",
-      "language": "java",
-      "name": "compute"
-    },
-    {
-      "code": "try (Ignite ignite = Ignition.start()) {\n    Collection<IgniteCallable<Integer>> calls = new ArrayList<>();\n \n    // Iterate through all the words in the sentence and create Callable jobs.\n    for (final String word : \"Count characters using callable\".split(\" \")) {\n        calls.add(new IgniteCallable<Integer>() {\n            @Override public Integer call() throws Exception {\n                return word.length();\n            }\n        });\n    }\n \n    // Execute collection of Callables on the grid.\n    Collection<Integer> res = ignite.compute().call(calls);\n \n    int sum = 0;\n \n    // Add up individual word lengths received from remote nodes.\n    for (int len : res)\n        sum += len;\n \n    System.out.println(\">>> Total number of characters in the phrase is '\" + sum + \"'.\");\n}",
-      "language": "java",
-      "name": "java7 compute"
-    }
-  ]
-}
-[/block]
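Stripped of the cluster, the jobs in the example above are just per-word length computations that get summed. The same split-and-sum logic can be run locally as a plain-Java sketch (the class and method names here are ours):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.concurrent.Callable;

public class CharCountSketch {
    // Split the sentence into words and sum each word's length,
    // mirroring the per-word callable jobs in the grid example.
    static int countChars(String sentence) throws Exception {
        Collection<Callable<Integer>> calls = new ArrayList<>();

        for (final String word : sentence.split(" "))
            calls.add(word::length);

        int sum = 0;

        // Locally we just invoke each "job" in turn; on the grid these
        // callables would run on remote nodes and only the ints come back.
        for (Callable<Integer> c : calls)
            sum += c.call();

        return sum;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(countChars("Count characters using callable"));
    }
}
```

On the grid, only the invocation loop changes: `ignite.compute().call(calls)` ships the callables to remote nodes, and the resulting integers travel back for the same summation.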
-
-[block:callout]
-{
-  "type": "success",
-  "body": "Note that because of  [Zero Deployment](doc:zero-deployment) feature, when running the above application from your IDE, remote nodes will execute received jobs without explicit deployment.",
-  "title": "Zero Deployment"
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "First Ignite Data Grid Application"
-}
-[/block]
-Now let's write a simple set of mini-examples which put and get values to/from a distributed cache, and perform basic transactions.
-
-Since we are using a cache in this example, we should make sure that it is configured. Let's use the example configuration shipped with Ignite that already has several caches configured:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "$ bin/ignite.sh examples/config/example-cache.xml",
-      "language": "shell"
-    }
-  ]
-}
-[/block]
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "try (Ignite ignite = Ignition.start(\"examples/config/example-cache.xml\")) {\n    IgniteCache<Integer, String> cache = ignite.jcache(CACHE_NAME);\n \n    // Store keys in cache (values will end up on different cache nodes).\n    for (int i = 0; i < 10; i++)\n        cache.put(i, Integer.toString(i));\n \n    for (int i = 0; i < 10; i++)\n        System.out.println(\"Got [key=\" + i + \", val=\" + cache.get(i) + ']');\n}",
-      "language": "java",
-      "name": "Put and Get"
-    },
-    {
-      "code": "// Put-if-absent which returns previous value.\nInteger oldVal = cache.getAndPutIfAbsent(\"Hello\", 11);\n  \n// Put-if-absent which returns boolean success flag.\nboolean success = cache.putIfAbsent(\"World\", 22);\n  \n// Replace-if-exists operation (opposite of getAndPutIfAbsent), returns previous value.\noldVal = cache.getAndReplace(\"Hello\", 11);\n \n// Replace-if-exists operation (opposite of putIfAbsent), returns boolean success flag.\nsuccess = cache.replace(\"World\", 22);\n  \n// Replace-if-matches operation.\nsuccess = cache.replace(\"World\", 2, 22);\n  \n// Remove-if-matches operation.\nsuccess = cache.remove(\"Hello\", 1);",
-      "language": "java",
-      "name": "Atomic Operations"
-    },
-    {
-      "code": "try (Transaction tx = ignite.transactions().txStart()) {\n    Integer hello = cache.get(\"Hello\");\n  \n    if (hello == 1)\n        cache.put(\"Hello\", 11);\n  \n    cache.put(\"World\", 22);\n  \n    tx.commit();\n}",
-      "language": "java",
-      "name": "Transactions"
-    },
-    {
-      "code": "// Lock cache key \"Hello\".\nLock lock = cache.lock(\"Hello\");\n \nlock.lock();\n \ntry {\n    cache.put(\"Hello\", 11);\n    cache.put(\"World\", 22);\n}\nfinally {\n    lock.unlock();\n} ",
-      "language": "java",
-      "name": "Distributed Locks"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Ignite Visor Admin Console"
-}
-[/block]
-The easiest way to examine the contents of the data grid, as well as perform a long list of other management and monitoring operations, is to use the Ignite Visor Command Line Utility.
-
-To start Visor, simply run:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "$ bin/ignitevisorcmd.sh",
-      "language": "shell"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/basic-concepts/ignite-life-cycel.md b/wiki/documentation/basic-concepts/ignite-life-cycel.md
deleted file mode 100644
index d7db8e5..0000000
--- a/wiki/documentation/basic-concepts/ignite-life-cycel.md
+++ /dev/null
@@ -1,122 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite is JVM-based. A single JVM represents one or more logical Ignite nodes (most of the time, however, a single JVM runs just one Ignite node). Throughout the Ignite documentation we use the terms Ignite runtime and Ignite node almost interchangeably. For example, when we say that you can "run 5 nodes on this host", in most cases it technically means that you can start 5 JVMs on this host, each running a single Ignite node. Ignite also supports multiple Ignite nodes in a single JVM; in fact, that is exactly how most of the internal tests for Ignite itself run.
-[block:callout]
-{
-  "type": "success",
-  "body": "Ignite runtime == JVM process == Ignite node (in most cases)"
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Ignition Class"
-}
-[/block]
-The `Ignition` class starts individual Ignite nodes in the network topology. Note that a physical server (like a computer on the network) can have multiple Ignite nodes running on it.
-
-Here is how you can start a grid node locally with all defaults:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.start();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-or by passing a configuration file:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.start(\"examples/config/example-cache.xml\");",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-The path to the configuration file can be absolute, or relative to either `IGNITE_HOME` (the Ignite installation folder) or the `META-INF` folder in your classpath.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "LifecycleBean"
-}
-[/block]
-Sometimes you need to perform certain actions before or after the Ignite node starts or stops. This can be done by implementing the `LifecycleBean` interface and specifying the implementation bean in the `lifecycleBeans` property of `IgniteConfiguration` in the Spring XML file:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.IgniteConfiguration\">\n    ...\n    <property name=\"lifecycleBeans\">\n        <list>\n            <bean class=\"com.mycompany.MyGridLifecycleBean\"/>\n        </list>\n    </property>\n    ...\n</bean>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
-A `LifecycleBean` can also be configured programmatically in the following way:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Create new configuration.\nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Provide lifecycle bean to configuration.\ncfg.setLifecycleBeans(new MyGridLifecycleBean());\n \n// Start Ignite node with given configuration.\nIgnite ignite = GridGain.start(cfg)",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-An implementation of `LifecycleBean` may look like the following:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class MyLifecycleBean implements LifecycleBean {\n    @Override public void onLifecycleEvent(LifecycleEventType evt) {\n        if (evt == LifecycleEventType.BEFORE_GRID_START) {\n            // Do something.\n            ...\n        }\n    }\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-You can inject Ignite instance and other useful resources into a `LifecycleBean` implementation. Please refer to [Resource Injection](/docs/resource-injection) section for more information.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Lifecycle Event Types"
-}
-[/block]
-The following lifecycle event types are supported:
-[block:parameters]
-{
-  "data": {
-    "h-0": "Event Type",
-    "h-1": "Description",
-    "0-0": "BEFORE_NODE_START",
-    "0-1": "Invoked before Ignite node startup routine is initiated.",
-    "1-0": "AFTER_NODE_START",
-    "1-1": "Invoked right after Ignite node has started.",
-    "2-0": "BEFORE_NODE_STOP",
-    "2-1": "Invoked right before Ignite stop routine is initiated.",
-    "3-0": "AFTER_NODE_STOP",
-    "3-1": "Invoked right after Ignite node has stopped."
-  },
-  "cols": 2,
-  "rows": 4
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/basic-concepts/maven-setup.md b/wiki/documentation/basic-concepts/maven-setup.md
deleted file mode 100644
index 48641d3..0000000
--- a/wiki/documentation/basic-concepts/maven-setup.md
+++ /dev/null
@@ -1,85 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-If you are using Maven to manage dependencies of your project, you can import individual Ignite modules a la carte.
-[block:callout]
-{
-  "type": "info",
-  "body": "In the examples below, please replace `${ignite.version}` with actual Apache Ignite version you are interested in."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Common Dependencies"
-}
-[/block]
-Ignite data fabric comes with one mandatory dependency on `ignite-core.jar`. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-core</artifactId>\n    <version>${ignite.version}</version>\n</dependency>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
-However, in many cases you may wish to have more dependencies, for example, if you want to use Spring configuration or SQL queries.
-
-Here are the most commonly used optional modules:
-  * ignite-indexing (optional, add if you need SQL indexing)
-  * ignite-spring (optional, add if you plan to use Spring configuration) 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-core</artifactId>\n    <version>${ignite.version}</version>\n</dependency>\n<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-spring</artifactId>\n    <version>${ignite.version}</version>\n</dependency>\n<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-indexing</artifactId>\n    <version>${ignite.version}</version>\n</dependency>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Importing Individual Modules A La Carte"
-}
-[/block]
-You can import Ignite modules a la carte, one by one. The only required module is `ignite-core`, all others are optional. All optional modules can be imported just like the core module, but with different artifact IDs.
-
-The following modules are available:
-  * `ignite-spring` (for Spring-based configuration support)
-  * `ignite-indexing` (for SQL querying and indexing)
-  * `ignite-geospatial` (for geospatial indexing)
-  * `ignite-hibernate` (for Hibernate integration)
-  * `ignite-web` (for Web Sessions Clustering)
-  * `ignite-schedule` (for Cron-based task scheduling)
-  * `ignite-log4j` (for Log4j logging)
-  * `ignite-jcl` (for Apache Commons logging)
-  * `ignite-jta` (for XA integration)
-  * `ignite-hadoop2-integration` (Integration with HDFS 2.0)
-  * `ignite-rest-http` (for HTTP REST messages)
-  * `ignite-scalar` (for Ignite Scala API)
-  * `ignite-slf4j` (for SLF4J logging)
-  * `ignite-ssh` (for starting grid nodes on remote machines)
-  * `ignite-urideploy` (for URI-based deployment)
-  * `ignite-aws` (for seamless cluster discovery on AWS S3)
-  * `ignite-aop` (for AOP-based grid-enabling)
-  * `ignite-visor-console`  (open source command line management and monitoring tool)
\ No newline at end of file
diff --git a/wiki/documentation/basic-concepts/what-is-ignite.md b/wiki/documentation/basic-concepts/what-is-ignite.md
deleted file mode 100644
index 985ddf4..0000000
--- a/wiki/documentation/basic-concepts/what-is-ignite.md
+++ /dev/null
@@ -1,48 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Apache Ignite In-Memory Data Fabric is a high-performance, integrated and distributed in-memory platform for computing and transacting on large-scale data sets in real-time, orders of magnitude faster than possible with traditional disk-based or flash technologies.
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/lydEeGB6Rs9hwbpcQxiw",
-        "apache-ignite.png",
-        "1024",
-        "310",
-        "#ec945e",
-        ""
-      ],
-      "caption": ""
-    }
-  ]
-}
-[/block]
-##Features
-You can view Ignite as a collection of independent, well-integrated, in-memory components geared to improve the performance and scalability of your application. Some of these components include:
-
-  * [Advanced Clustering](doc:cluster)
-  * [Compute Grid](doc:compute-grid) 
-  * [Data Grid (JCache)](doc:data-grid) 
-  * [Service Grid](doc:service-grid)
-  * [Ignite File System](doc:igfs)
-  * [Distributed Data Structures](doc:queue-and-set) 
-  * [Distributed Messaging](doc:messaging) 
-  * [Distributed Events](doc:events) 
-  * Streaming & CEP
-  * In-Memory Hadoop Accelerator
\ No newline at end of file
diff --git a/wiki/documentation/basic-concepts/zero-deployment.md b/wiki/documentation/basic-concepts/zero-deployment.md
deleted file mode 100644
index 3d43134..0000000
--- a/wiki/documentation/basic-concepts/zero-deployment.md
+++ /dev/null
@@ -1,73 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-The closures and tasks that you use for your computations may be of any custom class, including anonymous classes. In Ignite, the remote nodes will automatically become aware of those classes, and you won't need to explicitly deploy or move any .jar files to any remote nodes. 
-
-Such behavior is possible due to peer class loading (P2P class loading), a special **distributed ClassLoader** in Ignite for inter-node byte-code exchange. With peer class loading enabled, you don't have to manually deploy your Java or Scala code on each node in the grid and re-deploy it each time it changes.
-
-The code example below would run on all remote nodes due to peer class loading, without any explicit deployment step.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Compute instance over remote nodes.\nIgniteCompute compute = ignite.compute(cluster.forRemotes());\n\n// Print hello message on all remote nodes.\ncompute.broadcast(() -> System.out.println(\"Hello node: \" + cluster.localNode().id());",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-Here is how peer class loading can be configured:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...   \n    <!-- Explicitly enable peer class loading. -->\n    <property name=\"peerClassLoadingEnabled\" value=\"true\"/>\n    ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "IgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setPeerClassLoadingEnabled(true);\n\n// Start Ignite node.\nIgnite ignite = Ignition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-Peer class loading sequence works as follows:
-1. Ignite will check if the class is available on the local classpath (i.e. if it was loaded at system startup), and if it was, it will be returned. No class loading from a peer node takes place in this case.
-2. If the class is not locally available, a request is sent to the originating node to provide the class definition. The originating node sends the class byte-code definition, and the class is loaded on the worker node. This happens only once per class: once a class definition is loaded on a node, it never has to be loaded again.
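The two-step resolution order can be sketched with plain maps standing in for the local classpath and the originating node (all names below are illustrative, not Ignite internals):

```java
import java.util.HashMap;
import java.util.Map;

public class PeerLoadSketch {
    // Class definitions already on the local classpath.
    static final Map<String, byte[]> local = new HashMap<>();

    // Stand-in for asking the originating node for byte code.
    static final Map<String, byte[]> peer = new HashMap<>();

    // Cache of definitions fetched from peers: each class is
    // transferred at most once.
    static final Map<String, byte[]> fetched = new HashMap<>();

    static byte[] resolve(String name) {
        // 1. The local classpath wins; no peer loading happens.
        byte[] def = local.get(name);
        if (def != null)
            return def;

        // 2. Otherwise fetch the definition from the originating
        //    node, caching it so it is requested only once.
        return fetched.computeIfAbsent(name, peer::get);
    }
}
```

The key property this models is step 2's caching: once a definition has crossed the network, later lookups are served locally.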
-[block:callout]
-{
-  "type": "warning",
-  "title": "Development vs Production",
-  "body": "It is recommended that peer-class-loading is disabled in production. Generally you want to have a controlled production environment without any magic."
-}
-[/block]
-
-[block:callout]
-{
-  "type": "warning",
-  "title": "Auto-Clearing Caches for Hot Redeployment",
-  "body": "Whenever you change class definitions for the data stored in cache, Ignite will automatically clear the caches for previous class definitions before peer-deploying the new data to avoid class-loading conflicts."
-}
-[/block]
-
-[block:callout]
-{
-  "type": "info",
-  "title": "3rd Party Libraries",
-  "body": "When utilizing peer class loading, you should be aware of the libraries that get loaded from peer nodes vs. libraries that are already available locally in the class path. Our suggestion is to include all 3rd party libraries into class path of every node. This way you will not transfer megabytes of 3rd party classes to remote nodes every time you change a line of code."
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/clustering/aws-config.md b/wiki/documentation/clustering/aws-config.md
deleted file mode 100644
index 154d80d..0000000
--- a/wiki/documentation/clustering/aws-config.md
+++ /dev/null
@@ -1,59 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Node discovery on the AWS cloud usually proves to be more challenging than in other environments. Amazon EC2, just like most other virtual environments, has the following limitations:
-* Multicast is disabled.
-* TCP addresses change every time a new image is started.
-
-Although you can use TCP-based discovery in the absence of multicast, you still have to deal with constantly changing IP addresses and the need to constantly update the configuration. This is a major inconvenience and makes configurations based on static IPs virtually unusable in such environments.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Amazon S3 Based Discovery"
-}
-[/block]
-To mitigate the problem of constantly changing IP addresses, Ignite supports automatic node discovery backed by an Amazon S3 store via `TcpDiscoveryS3IpFinder`. On startup, nodes register their IP addresses with the S3 store. Other nodes can then connect to any of the IP addresses stored in S3 and initiate automatic grid node discovery.
-[block:callout]
-{
-  "type": "success",
-  "body": "Such approach allows to create your configuration once and reuse it for all EC2 instances."
-}
-[/block]
-
-
-Here is an example of how to configure Amazon S3 IP finder:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  ...\n  <property name=\"discoverySpi\">\n    <bean class=\"org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi\">\n      <property name=\"ipFinder\">\n        <bean class=\"org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder\">\n          <property name=\"awsCredentials\" ref=\"aws.creds\"/>\n          <property name=\"bucketName\" value=\"YOUR_BUCKET_NAME\"/>\n        </bean>\n      </property>\n    </bean>\n  </property>\n</bean>\n\n<!-- AWS credentials. Provide your access key ID and secret access key. -->\n<bean id=\"aws.creds\" class=\"com.amazonaws.auth.BasicAWSCredentials\">\n  <constructor-arg value=\"YOUR_ACCESS_KEY_ID\" />\n  <constructor-arg value=\"YOUR_SECRET_ACCESS_KEY\" />\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "TcpDiscoverySpi spi = new TcpDiscoverySpi();\n\nBasicAWSCredentials creds = new BasicAWSCredentials(\"yourAccessKey\", \"yourSecreteKey\");\n\nTcpDiscoveryS3IpFinder ipFinder = new TcpDiscoveryS3IpFinder();\n\nipFinder.setAwsCredentials(creds);\n\nspi.setIpFinder(ipFinder);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default discovery SPI.\ncfg.setDiscoverySpi(spi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "success",
-  "body": "Refer to [Cluster Configuration](doc:cluster-config) for more information on various cluster configuration properties."
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/clustering/cluster-config.md b/wiki/documentation/clustering/cluster-config.md
deleted file mode 100644
index aef113f..0000000
--- a/wiki/documentation/clustering/cluster-config.md
+++ /dev/null
@@ -1,193 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-In Ignite, nodes can discover each other by using `DiscoverySpi`. Ignite provides `TcpDiscoverySpi` as the default implementation of `DiscoverySpi`, which uses TCP/IP for node discovery. The discovery SPI can be configured for multicast-based or static-IP-based node discovery.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Multicast Based Discovery"
-}
-[/block]
-`TcpDiscoveryMulticastIpFinder` uses Multicast to discover other nodes in the grid and is the default IP finder. You should not have to specify it unless you plan to override default settings. Here is an example of how to configure this finder via Spring XML file or programmatically from Java:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  ...\n  <property name=\"discoverySpi\">\n    <bean class=\"org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi\">\n      <property name=\"ipFinder\">\n        <bean class=\"org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder\">\n          <property name=\"multicastGroup\" value=\"228.10.10.157\"/>\n        </bean>\n      </property>\n    </bean>\n  </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "TcpDiscoverySpi spi = new TcpDiscoverySpi();\n \nTcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();\n \nipFinder.setMulticastGroup(\"228.10.10.157\");\n \nspi.setIpFinder(ipFinder);\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default discovery SPI.\ncfg.setDiscoverySpi(spi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Static IP Based Discovery"
-}
-[/block]
-For cases when multicast is disabled, `TcpDiscoveryVmIpFinder` should be used with a pre-configured list of IP addresses. You are only required to provide at least one IP address, but for redundancy it is usually advisable to provide 2 or 3 addresses of the grid nodes that you plan to start first. Once a connection to any of the provided IP addresses is established, Ignite automatically discovers all other grid nodes.
-[block:callout]
-{
-  "type": "success",
-  "body": "You do not need to specify IP addresses for all Ignite nodes, only for a couple of nodes you plan to start first."
-}
-[/block]
-
-Here is an example of how to configure this finder via Spring XML file or programmatically from Java:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  ...\n  <property name=\"discoverySpi\">\n    <bean class=\"org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi\">\n      <property name=\"ipFinder\">\n        <bean class=\"org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder\">\n          <property name=\"addresses\">\n            <list>\n              <value>1.2.3.4</value>\n              \n              <!-- \n                  IP Address and optional port range.\n                  You can also optionally specify an individual port.\n              -->\n              <value>1.2.3.5:47500..47509</value>\n            </list>\n          </property>\n        </bean>\n      </property>\n    </bean>\n  </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "TcpDiscoverySpi spi = new TcpDiscoverySpi();\n \nTcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();\n \n// Set initial IP addresses.\n// Note that you can optionally specify a port or a port range.\nipFinder.setAddresses(Arrays.asList(\"1.2.3.4\", \"1.2.3.5:47500..47509\"));\n \nspi.setIpFinder(ipFinder);\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default discovery SPI.\ncfg.setDiscoverySpi(spi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Multicast and Static IP Based Discovery"
-}
-[/block]
-You can use both multicast and static IP based discovery together. In this case, in addition to any addresses received via multicast, `TcpDiscoveryMulticastIpFinder` can also work with a pre-configured list of static IP addresses, just like the static IP based discovery described above. Here is an example of how to configure the multicast IP finder with static IP addresses:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  ...\n  <property name=\"discoverySpi\">\n    <bean class=\"org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi\">\n      <property name=\"ipFinder\">\n        <bean class=\"org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder\">\n          <property name=\"multicastGroup\" value=\"228.10.10.157\"/>\n           \n          <!-- list of static IP addresses-->\n          <property name=\"addresses\">\n            <list>\n              <value>1.2.3.4</value>\n              \n              <!-- \n                  IP Address and optional port range.\n                  You can also optionally specify an individual port.\n              -->\n              <value>1.2.3.5:47500..47509</value>\n            </list>\n          </property>\n        </bean>\n      </property>\n    </bean>\n  </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "TcpDiscoverySpi spi = new TcpDiscoverySpi();\n \nTcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();\n \n// Set Multicast group.\nipFinder.setMulticastGroup(\"228.10.10.157\");\n\n// Set initial IP addresses.\n// Note that you can optionally specify a port or a port range.\nipFinder.setAddresses(Arrays.asList(\"1.2.3.4\", \"1.2.3.5:47500..47509\"));\n \nspi.setIpFinder(ipFinder);\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default discovery SPI.\ncfg.setDiscoverySpi(spi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Amazon S3 Based Discovery"
-}
-[/block]
-Refer to the [AWS Configuration](doc:aws-config) documentation.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "JDBC Based Discovery"
-}
-[/block]
-You can have your database serve as a common shared storage of initial IP addresses. In this case, nodes will write their IP addresses to the database on startup. This is done via `TcpDiscoveryJdbcIpFinder`.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  ...\n  <property name=\"discoverySpi\">\n    <bean class=\"org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi\">\n      <property name=\"ipFinder\">\n        <bean class=\"org.apache.ignite.spi.discovery.tcp.ipfinder.jdbc.TcpDiscoveryJdbcIpFinder\">\n          <property name=\"dataSource\" ref=\"ds\"/>\n        </bean>\n      </property>\n    </bean>\n  </property>\n</bean>\n\n<!-- Configured data source instance. -->\n<bean id=\"ds\" class=\"some.Datasource\">\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "TcpDiscoverySpi spi = new TcpDiscoverySpi();\n\n// Configure your DataSource.\nDataSource someDs = MySampleDataSource(...);\n\nTcpDiscoveryJdbcIpFinder ipFinder = new TcpDiscoveryJdbcIpFinder();\n\nipFinder.setDataSource(someDs);\n\nspi.setIpFinder(ipFinder);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default discovery SPI.\ncfg.setDiscoverySpi(spi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Configuration"
-}
-[/block]
-The following configuration parameters can optionally be set on `TcpDiscoverySpi`.
-[block:parameters]
-{
-  "data": {
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": "`setIpFinder(TcpDiscoveryIpFinder)`",
-    "0-1": "IP finder that is used to share info about node IP addresses.",
-    "0-2": "`TcpDiscoveryMulticastIpFinder`\n\nOther provided implementations:\n`TcpDiscoverySharedFsIpFinder`\n`TcpDiscoveryS3IpFinder`\n`TcpDiscoveryJdbcIpFinder`\n`TcpDiscoveryVmIpFinder`",
-    "1-0": "`setLocalAddress(String)`",
-    "1-1": "Sets the local host IP address that the discovery SPI uses.",
-    "1-2": "If not provided, the first non-loopback address found is used. If no non-loopback address is available, `java.net.InetAddress.getLocalHost()` is used.",
-    "2-0": "`setLocalPort(int)`",
-    "2-1": "Port the SPI listens on.",
-    "2-2": "47500",
-    "3-0": "`setLocalPortRange(int)`",
-    "3-1": "Local port range. \nThe local node will try to bind to the first available port, starting from the local port up to local port + local port range.",
-    "3-2": "100",
-    "4-0": "`setHeartbeatFrequency(long)`",
-    "4-1": "Delay in milliseconds between issuing of heartbeat messages. \nThe SPI sends messages at this interval to other nodes to notify them about its state.",
-    "4-2": "2000",
-    "5-0": "`setMaxMissedHeartbeats(int)`",
-    "5-1": "Number of heartbeat requests that can be missed before the local node initiates a status check.",
-    "5-2": "1",
-    "6-0": "`setReconnectCount(int)`",
-    "6-1": "Number of times a node tries to (re)establish a connection to another node.",
-    "6-2": "2",
-    "7-0": "`setNetworkTimeout(long)`",
-    "7-1": "Sets the maximum network timeout in milliseconds to use for network operations.",
-    "7-2": "5000",
-    "8-0": "`setSocketTimeout(long)`",
-    "8-1": "Sets the socket operations timeout. This timeout is used to limit connection time and write-to-socket time.",
-    "8-2": "2000",
-    "9-0": "`setAckTimeout(long)`",
-    "9-1": "Sets the timeout for receiving an acknowledgement for a sent message. \nIf an acknowledgement is not received within this timeout, sending is considered failed and the SPI tries to resend the message.",
-    "9-2": "2000",
-    "10-0": "`setJoinTimeout(long)`",
-    "10-1": "Sets the join timeout. If a non-shared IP finder is used and the node fails to connect to any address from the IP finder, the node keeps trying to join within this timeout. If all addresses are still unresponsive, an exception is thrown and node startup fails. \n0 means wait forever.",
-    "10-2": "0",
-    "11-0": "`setThreadPriority(int)`",
-    "11-1": "Thread priority for threads started by the SPI.",
-    "11-2": "0",
-    "12-0": "`setStatisticsPrintFrequency(int)`",
-    "12-1": "Statistics print frequency in milliseconds. \n0 indicates that no printing is required. If the value is greater than 0 and the log is not in quiet mode, stats are printed out at INFO level once per period. This may be very helpful for tracing topology problems.",
-    "12-2": "0"
-  },
-  "cols": 3,
-  "rows": 13
-}
-[/block]
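The optional parameters above can also be combined declaratively. The following Spring XML fragment is an illustrative sketch of tuning several of them together (the values shown are arbitrary examples, not recommendations):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  ...
  <property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
      <!-- Listen on a non-default port. -->
      <property name="localPort" value="48500"/>
      <!-- Try ports 48500..48509. -->
      <property name="localPortRange" value="10"/>
      <!-- Fail startup if the node cannot join within 10 seconds. -->
      <property name="joinTimeout" value="10000"/>
      <!-- Timeout for network operations. -->
      <property name="networkTimeout" value="5000"/>
    </bean>
  </property>
</bean>
```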
\ No newline at end of file
diff --git a/wiki/documentation/clustering/cluster-groups.md b/wiki/documentation/clustering/cluster-groups.md
deleted file mode 100644
index 7899807..0000000
--- a/wiki/documentation/clustering/cluster-groups.md
+++ /dev/null
@@ -1,227 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-`ClusterGroup` represents a logical grouping of cluster nodes. 
-
-In Ignite all nodes are equal by design, so you don't have to start nodes in any specific order or assign any specific roles to them. However, Ignite allows users to logically group cluster nodes for any application-specific purpose. For example, you may wish to deploy a service only on remote nodes, or assign the "worker" role to some nodes for job execution.
-[block:callout]
-{
-  "type": "success",
-  "body": "Note that `IgniteCluster` interface is also a cluster group which includes all nodes in the cluster."
-}
-[/block]
-You can limit job execution, service deployment, messaging, events, and other tasks to run only within some cluster group. For example, here is how to broadcast a job only to remote nodes (excluding the local node).
-[block:code]
-{
-  "codes": [
-    {
-      "code": "final Ignite ignite = Ignition.ignite();\n\nIgniteCluster cluster = ignite.cluster();\n\n// Get compute instance which will only execute\n// over remote nodes, i.e. not this node.\nIgniteCompute compute = ignite.compute(cluster.forRemotes());\n\n// Broadcast to all remote nodes and print the ID of the node \n// on which this closure is executing.\ncompute.broadcast(() -> System.out.println(\"Hello Node: \" + ignite.cluster().localNode().id());\n",
-      "language": "java",
-      "name": "broadcast"
-    },
-    {
-      "code": "final Ignite ignite = Ignition.ignite();\n\nIgniteCluster cluster = ignite.cluster();\n\n// Get compute instance which will only execute\n// over remote nodes, i.e. not this node.\nIgniteCompute compute = ignite.compute(cluster.forRemotes());\n\n// Broadcast closure only to remote nodes.\ncompute.broadcast(new IgniteRunnable() {\n    @Override public void run() {\n        // Print ID of the node on which this runnable is executing.\n        System.out.println(\">>> Hello Node: \" + ignite.cluster().localNode().id());\n    }\n}",
-      "language": "java",
-      "name": "java7 broadcast"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Predefined Cluster Groups"
-}
-[/block]
-You can create cluster groups based on any predicate. For convenience, Ignite comes with some predefined cluster groups.
-
-Here are examples of some cluster groups available on `ClusterGroup` interface.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Cluster group with remote nodes, i.e. other than this node.\nClusterGroup remoteGroup = cluster.forRemotes();",
-      "language": "java",
-      "name": "Remote Nodes"
-    },
-    {
-      "code": "IgniteCluster cluster = ignite.cluster();\n\n// All nodes on wich cache with name \"myCache\" is deployed.\nClusterGroup cacheGroup = cluster.forCache(\"myCache\");",
-      "language": "java",
-      "name": "Cache Nodes"
-    },
-    {
-      "code": "IgniteCluster cluster = ignite.cluster();\n\n// All nodes with attribute \"ROLE\" equal to \"worker\".\nClusterGroup attrGroup = cluster.forAttribute(\"ROLE\", \"worker\");",
-      "language": "java",
-      "name": "Nodes With Attributes"
-    },
-    {
-      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Cluster group containing one random node.\nClusterGroup randomGroup = cluster.forRandom();\n\n// First (and only) node in the random group.\nClusterNode randomNode = randomGroup.node();",
-      "language": "java",
-      "name": "Random Node"
-    },
-    {
-      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Pick random node.\nClusterGroup randomNode = cluster.forRandeom();\n\n// All nodes on the same physical host as the random node.\nClusterGroup cacheNodes = cluster.forHost(randomNode);",
-      "language": "java",
-      "name": "Host Nodes"
-    },
-    {
-      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Dynamic cluster group representing the oldest cluster node.\n// Will automatically shift to the next oldest, if the oldest\n// node crashes.\nClusterGroup oldestNode = cluster.forOldest();",
-      "language": "java",
-      "name": "Oldest Node"
-    },
-    {
-      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Cluster group with only this (local) node in it.\nClusterGroup localGroup = cluster.forLocal();\n\n// Local node.\nClusterNode localNode = localGroup.node();",
-      "language": "java",
-      "name": "Local Node"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Cluster Groups with Node Attributes"
-}
-[/block]
-The unique characteristic of Ignite is that all grid nodes are equal. There are no master or server nodes, and there are no worker or client nodes either. All nodes are equal from Ignite’s point of view - however, users can configure nodes to be masters and workers, or clients and data nodes. 
-
-All cluster nodes on startup automatically register all environment and system properties as node attributes. However, users can choose to assign their own node attributes through configuration:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.IgniteConfiguration\">\n    ...\n    <property name=\"userAttributes\">\n        <map>\n            <entry key=\"ROLE\" value=\"worker\"/>\n        </map>\n    </property>\n    ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "IgniteConfiguration cfg = new IgniteConfiguration();\n\nMap<String, String> attrs = Collections.singletonMap(\"ROLE\", \"worker\");\n\ncfg.setUserAttributes(attrs);\n\n// Start Ignite node.\nIgnite ignite = Ignition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "success",
-  "body": "All environment variables and system properties are automatically registered as node attributes on startup."
-}
-[/block]
-
-[block:callout]
-{
-  "type": "success",
-  "body": "Node attributes are available via `ClusterNode.attribute(\"propertyName\")` method."
-}
-[/block]
-The following example shows how to get the nodes whose "ROLE" attribute is set to "worker".
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCluster cluster = ignite.cluster();\n\nClusterGroup workerGroup = cluster.forAttribute(\"ROLE\", \"worker\");\n\nCollection<GridNode> workerNodes = workerGroup.nodes();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Custom Cluster Groups"
-}
-[/block]
-You can define dynamic cluster groups based on some predicate. Such cluster groups will always only include the nodes that pass the predicate.
-
-Here is an example of a cluster group over nodes that have less than 50% CPU utilization. Note that the nodes in this group will change over time based on their CPU load.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Nodes with less than 50% CPU load.\nClusterGroup readyNodes = cluster.forPredicate((node) -> node.metrics().getCurrentCpuLoad() < 0.5);",
-      "language": "java",
-      "name": "custom group"
-    },
-    {
-      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Nodes with less than 50% CPU load.\nClusterGroup readyNodes = cluster.forPredicate(\n    new IgnitePredicate<ClusterNode>() {\n        @Override public boolean apply(ClusterNode node) {\n            return node.metrics().getCurrentCpuLoad() < 0.5;\n        }\n    }\n));",
-      "language": "java",
-      "name": "java7 custom group"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Combining Cluster Groups"
-}
-[/block]
-You can combine cluster groups by nesting them within each other. For example, the following code snippet shows how to get the oldest node among the remote nodes by combining the remote group with the oldest group.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Group containing oldest node out of remote nodes.\nClusterGroup oldestGroup = cluster.forRemotes().forOldest();\n\nClusterNode oldestNode = oldestGroup.node();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
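Conceptually, nesting cluster groups composes their filters over the node set. Here is a small standalone model of that chaining (plain Java, not the Ignite API; the `Node` class and field names are hypothetical, introduced only for illustration):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

// Illustrative model only: "oldest among remote nodes" behaves like
// filtering out the local node, then taking the node with the
// smallest topology order (oldest).
public class GroupCompositionModel {
    static final class Node {
        final String id; final long order; final boolean local;
        Node(String id, long order, boolean local) {
            this.id = id; this.order = order; this.local = local;
        }
    }

    static Optional<Node> oldestRemote(List<Node> nodes) {
        Predicate<Node> remote = n -> !n.local;                    // like forRemotes()
        return nodes.stream().filter(remote)
                    .min(Comparator.comparingLong(n -> n.order));  // like forOldest()
    }

    public static void main(String[] args) {
        List<Node> topology = Arrays.asList(
            new Node("a", 1, true),   // local node, oldest overall
            new Node("b", 2, false),
            new Node("c", 3, false));
        // The local node is excluded, so "b" is the oldest remote node.
        System.out.println(oldestRemote(topology).get().id);
    }
}
```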
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Getting Nodes from Cluster Groups"
-}
-[/block]
-You can access the nodes in various cluster groups as follows:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "ClusterGroup remoteGroup = cluster.forRemotes();\n\n// All cluster nodes in the group.\nCollection<ClusterNode> grpNodes = remoteGroup.nodes();\n\n// First node in the group (useful for groups with one node).\nClusterNode node = remoteGroup.node();\n\n// And if you know a node ID, get node by ID.\nUUID myID = ...;\n\nnode = remoteGroup.node(myId);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Cluster Group Metrics"
-}
-[/block]
-Ignite automatically collects metrics about all the cluster nodes. A convenient property of cluster groups is that they automatically aggregate metrics across all the nodes in the group, providing proper averages, minimums, and maximums within the group.
-
-Group metrics are available via `ClusterMetrics` interface which contains over 50 various metrics (note that the same metrics are available for individual cluster nodes as well).
-
-Here is an example of getting some metrics, including average CPU load and used heap, across all remote nodes:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Cluster group with remote nodes, i.e. other than this node.\nClusterGroup remoteGroup = ignite.cluster().forRemotes();\n\n// Cluster group metrics.\nClusterMetrics metrics = remoteGroup.metrics();\n\n// Get some metric values.\ndouble cpuLoad = metrics.getCurrentCpuLoad();\nlong usedHeap = metrics.getHeapMemoryUsed();\nint numberOfCores = metrics.getTotalCpus();\nint activeJobs = metrics.getCurrentActiveJobs();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
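To make the aggregation concrete, here is a small standalone model (plain Java, not the Ignite API) of how group-level values such as the average CPU load or the maximum used heap can be derived from per-node values:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative model only: shows the kind of aggregation a group
// metric performs. The real work is done internally by Ignite's
// ClusterMetrics implementation.
public class GroupMetricsModel {
    // Average CPU load across the group (cf. getCurrentCpuLoad()).
    static double avgCpuLoad(List<Double> perNodeLoads) {
        return perNodeLoads.stream()
            .mapToDouble(Double::doubleValue).average().orElse(0.0);
    }

    // Maximum used heap across the group, in arbitrary units.
    static long maxHeapUsed(List<Long> perNodeHeap) {
        return perNodeHeap.stream()
            .mapToLong(Long::longValue).max().orElse(0L);
    }

    public static void main(String[] args) {
        List<Double> loads = Arrays.asList(0.2, 0.4, 0.6);
        List<Long> heaps = Arrays.asList(512L, 1024L, 768L);
        System.out.println(avgCpuLoad(loads));
        System.out.println(maxHeapUsed(heaps)); // prints 1024
    }
}
```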
\ No newline at end of file
diff --git a/wiki/documentation/clustering/cluster.md b/wiki/documentation/clustering/cluster.md
deleted file mode 100644
index ce19719..0000000
--- a/wiki/documentation/clustering/cluster.md
+++ /dev/null
@@ -1,145 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite nodes can automatically discover each other. This helps to scale the cluster when needed, without having to restart the whole cluster. Developers can also leverage Ignite's hybrid cloud support, which allows establishing connections between a private cloud and public clouds such as Amazon Web Services, providing them with the best of both worlds.
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/KBkahg31S4qWXEBjfoya",
-        "ignite_cluster.png",
-        "500",
-        "350",
-        "#f48745",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-##Features
-  * Pluggable Design via `IgniteDiscoverySpi`
-  * Dynamic topology management
-  * Automatic discovery on LAN, WAN, and AWS
-  * On-demand and direct deployment
-  * Support for virtual clusters and node groupings
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteCluster"
-}
-[/block]
-Cluster functionality is provided via `IgniteCluster` interface. You can get an instance of `IgniteCluster` from `Ignite` as follows:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteCluster cluster = ignite.cluster();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-Through the `IgniteCluster` interface you can:
- * Start and stop remote cluster nodes
- * Get a list of all cluster members
- * Create logical [Cluster Groups](doc:cluster-groups)
-[block:api-header]
-{
-  "type": "basic",
-  "title": "ClusterNode"
-}
-[/block]
-The `ClusterNode` interface has a very concise API and deals only with the node as a logical network endpoint in the topology: its globally unique ID, its node metrics, its static attributes set by the user, and a few other parameters.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Cluster Node Attributes"
-}
-[/block]
-All cluster nodes on startup automatically register all environment and system properties as node attributes. However, users can choose to assign their own node attributes through configuration:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.IgniteConfiguration\">\n    ...\n    <property name=\"userAttributes\">\n        <map>\n            <entry key=\"ROLE\" value=\"worker\"/>\n        </map>\n    </property>\n    ...\n</bean>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
-The following example shows how to get the nodes whose "ROLE" attribute is set to "worker".
-[block:code]
-{
-  "codes": [
-    {
-      "code": "ClusterGroup workers = ignite.cluster().forAttribute(\"ROLE\", \"worker\");\n\nCollection<GridNode> nodes = workers.nodes();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "success",
-  "body": "All node attributes are available via `ClusterNode.attribute(\"propertyName\")` method."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Cluster Node Metrics"
-}
-[/block]
-Ignite automatically collects metrics for all cluster nodes. Metrics are collected in the background and are updated with every heartbeat message exchanged between cluster nodes.
-
-Node metrics are available via `ClusterMetrics` interface which contains over 50 various metrics (note that the same metrics are available for [Cluster Groups](doc:cluster-groups)  as well).
-
-Here is an example of getting some metrics, including average CPU load and used heap, for the local node:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Local Ignite node.\nClusterNode localNode = cluster.localNode();\n\n// Node metrics.\nClusterMetrics metrics = localNode.metrics();\n\n// Get some metric values.\ndouble cpuLoad = metrics.getCurrentCpuLoad();\nlong usedHeap = metrics.getHeapMemoryUsed();\nint numberOfCores = metrics.getTotalCpus();\nint activeJobs = metrics.getCurrentActiveJobs();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Local Cluster Node"
-}
-[/block]
-The local grid node is an instance of `ClusterNode` that represents *this* Ignite node. 
-
-Here is an example of how to get a local node:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "ClusterNode localNode = ignite.cluster().localNode();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/clustering/leader-election.md b/wiki/documentation/clustering/leader-election.md
deleted file mode 100644
index 1adf45f..0000000
--- a/wiki/documentation/clustering/leader-election.md
+++ /dev/null
@@ -1,76 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-When working in distributed environments, sometimes you need a guarantee that you will always pick the same node, regardless of cluster topology changes. Such nodes are usually called **leaders**. 
-
-In many systems, electing cluster leaders usually has to do with data consistency and is generally handled by collecting votes from cluster members. Since in Ignite data consistency is handled by the data grid affinity function (e.g. [Rendezvous Hashing](http://en.wikipedia.org/wiki/Rendezvous_hashing)), picking leaders in the traditional sense for data consistency outside of the data grid is not really needed.
-
-However, you may still wish to have a *coordinator* node for certain tasks. For this purpose, Ignite lets you automatically pick either the oldest or the youngest node in the cluster.
-[block:callout]
-{
-  "type": "warning",
-  "title": "Use Service Grid",
-  "body": "Note that for most *leader* or *singleton-like* use cases, it is recommended to use the **Service Grid** functionality, as it allows you to automatically deploy various [Cluster Singleton Services](doc:cluster-singletons) and is usually easier to use."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Oldest Node"
-}
-[/block]
-The oldest node has the property that it remains constant as new nodes are added. The only time the oldest node in the cluster changes is when it leaves the cluster or crashes.
-
-Here is an example of how to select a [Cluster Group](doc:cluster-groups) with only the oldest node in it.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Dynamic cluster group representing the oldest cluster node.\n// Will automatically shift to the next oldest, if the oldest\n// node crashes.\nClusterGroup oldestNode = cluster.forOldest();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
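The stability property described above can be sketched outside of Ignite. Below is a minimal plain-Java illustration (not Ignite API) where each node is represented by its join order: the minimum join order stays the same when nodes join and only changes when the current oldest node leaves.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class OldestNode {
    // Pick the node with the smallest join order. Adding nodes never
    // changes the result; only removal of the current oldest does.
    static Optional<Integer> oldest(List<Integer> joinOrders) {
        return joinOrders.stream().min(Comparator.naturalOrder());
    }

    public static void main(String[] args) {
        System.out.println(oldest(Arrays.asList(3, 1, 2)).get());    // 1
        System.out.println(oldest(Arrays.asList(3, 1, 2, 4)).get()); // 1 (a join changes nothing)
        System.out.println(oldest(Arrays.asList(3, 2, 4)).get());    // 2 (the oldest left)
    }
}
```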
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Youngest Node"
-}
-[/block]
-The youngest node, unlike the oldest, changes every time a new node joins the cluster. Still, it can come in handy, for example when you need to execute a task only on the newly joined node.
-
-Here is an example of how to select a [Cluster Group](doc:cluster-groups) with only the youngest node in it.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Dynamic cluster group representing the youngest cluster node.\n// Will automatically shift to the next youngest, whenever a\n// newer node joins.\nClusterGroup youngestNode = cluster.forYoungest();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "success",
-  "body": "Once the cluster group is obtained, you can use it for executing tasks, deploying services, sending messages, and more."
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/clustering/network-config.md b/wiki/documentation/clustering/network-config.md
deleted file mode 100644
index c33c919..0000000
--- a/wiki/documentation/clustering/network-config.md
+++ /dev/null
@@ -1,118 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-`CommunicationSpi` provides the basic plumbing to send and receive grid messages and is utilized for all distributed grid operations, such as task execution, monitoring data exchange, distributed event querying, and others. Ignite provides `TcpCommunicationSpi` as the default implementation of `CommunicationSpi`, which uses TCP/IP to communicate with other nodes. 
-
-To enable communication with other nodes, `TcpCommunicationSpi` adds `TcpCommunicationSpi.ATTR_ADDRS` and `TcpCommunicationSpi.ATTR_PORT` local node attributes. At startup, this SPI tries to start listening on the local port specified by the `TcpCommunicationSpi.setLocalPort(int)` method. If the local port is occupied, the SPI will automatically increment the port number until it can successfully bind for listening. The `TcpCommunicationSpi.setLocalPortRange(int)` configuration parameter controls the maximum number of ports the SPI will try before it fails. 
-[block:callout]
-{
-  "type": "info",
-  "body": "The port range comes in very handy when starting multiple grid nodes on the same machine, or even in the same JVM. In this case, all nodes can be brought up without a single configuration change.",
-  "title": "Local Port Range"
-}
-[/block]
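The port-increment behavior described above can be sketched in plain Java (this is an illustrative stand-in, not the actual `TcpCommunicationSpi` implementation): attempt to bind the configured port, and on failure keep incrementing within the allowed range.

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortRangeBind {
    // Try to bind 'localPort'; if it is occupied, increment the port
    // number up to 'range' times, mirroring the SPI's fallback behavior.
    static ServerSocket bindWithRange(int localPort, int range) throws IOException {
        IOException last = null;

        for (int port = localPort; port <= localPort + range; port++) {
            try {
                return new ServerSocket(port);
            }
            catch (IOException e) {
                last = e; // Port is occupied; try the next one.
            }
        }

        throw last; // All ports in the range were occupied.
    }

    public static void main(String[] args) throws IOException {
        // Two "nodes" on the same machine: the second automatically
        // binds the next free port after the first.
        ServerSocket first = bindWithRange(50100, 10);
        ServerSocket second = bindWithRange(50100, 10);

        System.out.println(first.getLocalPort() + " " + second.getLocalPort());

        first.close();
        second.close();
    }
}
```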
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Configuration"
-}
-[/block]
-The following configuration parameters can optionally be set on `TcpCommunicationSpi`:
-[block:parameters]
-{
-  "data": {
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": "`setLocalAddress(String)`",
-    "0-1": "Sets local host address for socket binding.",
-    "0-2": "Any available local IP address.",
-    "1-0": "`setLocalPort(int)`",
-    "2-0": "`setLocalPortRange(int)`",
-    "3-0": "`setTcpNoDelay(boolean)`",
-    "4-0": "`setConnectTimeout(long)`",
-    "5-0": "`setIdleConnectionTimeout(long)`",
-    "6-0": "`setBufferSizeRatio(double)`",
-    "7-0": "`setMinimumBufferedMessageCount(int)`",
-    "8-0": "`setDualSocketConnection(boolean)`",
-    "9-0": "`setSpiPortResolver(GridSpiPortResolver)`",
-    "10-0": "`setConnectionBufferSize(int)`",
-    "11-0": "`setSelectorsCount(int)`",
-    "12-0": "`setConnectionBufferFlushFrequency(long)`",
-    "13-0": "`setDirectBuffer(boolean)`",
-    "14-0": "`setDirectSendBuffer(boolean)`",
-    "15-0": "`setAsyncSend(boolean)`",
-    "16-0": "`setSharedMemoryPort(int)`",
-    "17-0": "`setSocketReceiveBuffer(int)`",
-    "18-0": "`setSocketSendBuffer(int)`",
-    "1-1": "Sets local port for socket binding.",
-    "1-2": "47100",
-    "2-1": "Controls maximum number of local ports tried if all previously tried ports are occupied.",
-    "2-2": "100",
-    "3-1": "Sets the value for the `TCP_NODELAY` socket option. Each accepted or created socket will use the provided value.\nThis should be set to true (default) to reduce request/response time during communication over the TCP protocol. In most cases we do not recommend changing this option.",
-    "3-2": "true",
-    "4-1": "Sets connect timeout used when establishing connection with remote nodes.",
-    "4-2": "1000",
-    "5-1": "Sets maximum idle connection timeout upon which a connection to client will be closed.",
-    "5-2": "30000",
-    "6-1": "Sets the buffer size ratio for this SPI. As messages are sent, the buffer size is adjusted using this ratio.",
-    "6-2": "0.8 or `IGNITE_COMMUNICATION_BUF_RESIZE_RATIO` system property value, if set.",
-    "7-1": "Sets the minimum number of messages for this SPI, that are buffered prior to sending.",
-    "7-2": "512 or `IGNITE_MIN_BUFFERED_COMMUNICATION_MSG_CNT` system property value, if set.",
-    "8-1": "Sets flag indicating whether dual-socket connection between nodes should be enforced. If set to true, two separate connections will be established between communicating nodes: one for outgoing messages, and one for incoming. When set to false, single TCP connection will be used for both directions.\nThis flag is useful on some operating systems, when TCP_NODELAY flag is disabled and messages take too long to get delivered.",
-    "8-2": "false",
-    "9-1": "Sets port resolver for internal-to-external port mapping. In some cases network routers are configured to perform port mapping between external and internal networks and the same mapping must be available to SPIs in GridGain that perform communication over IP protocols.",
-    "9-2": "null",
-    "10-1": "This parameter is used only when `setAsyncSend(boolean)` is set to false. \n\nSets connection buffer size for synchronous connections. Increase buffer size if using synchronous send and sending large amount of small sized messages. However, most of the time this should be set to 0 (default).",
-    "10-2": "0",
-    "11-1": "Sets the count of selectors to be used in TCP server.",
-    "11-2": "`Math.min(4, Runtime.getRuntime().availableProcessors())`",
-    "12-1": "This parameter is used only when `setAsyncSend(boolean)` is set to false. \n\nSets connection buffer flush frequency in milliseconds. This parameter makes sense only for synchronous send when connection buffer size is not 0. The buffer will be flushed once within the specified period if there are not enough messages to flush it automatically.",
-    "12-2": "100",
-    "13-1": "Switches between using NIO direct and NIO heap allocation buffers. Although direct buffers perform better, in some cases (especially on Windows) they may cause JVM crashes. If that happens in your environment, set this property to false.",
-    "13-2": "true",
-    "14-1": "Switches between using NIO direct and NIO heap allocation buffers usage for message sending in asynchronous mode.",
-    "14-2": "false",
-    "15-1": "Switches between synchronous and asynchronous message sending.\nThis should be set to true (default) if grid nodes send large amounts of data over the network from multiple threads; however, this may be environment- and application-specific, and we recommend benchmarking the application in both modes.",
-    "15-2": "true",
-    "16-1": "Sets port which will be used by `IpcSharedMemoryServerEndpoint`. \nNodes started on the same host will communicate over IPC shared memory (only for Linux and MacOS hosts). Set this to -1 to disable IPC shared memory communication.",
-    "16-2": "48100",
-    "17-1": "Sets receive buffer size for sockets created or accepted by this SPI. If not provided, default is 0 which leaves buffer unchanged after socket creation (i.e. uses Operating System default value).",
-    "17-2": "0",
-    "18-1": "Sets send buffer size for sockets created or accepted by this SPI. If not provided, default is 0 which leaves the buffer unchanged after socket creation (i.e. uses Operating System default value).",
-    "18-2": "0"
-  },
-  "cols": 3,
-  "rows": 19
-}
-[/block]
-##Example 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  ...\n  <property name=\"communicationSpi\">\n    <bean class=\"org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi\">\n      <!-- Override local port. -->\n      <property name=\"localPort\" value=\"4321\"/>\n    </bean>\n  </property>\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "TcpCommunicationSpi commSpi = new TcpCommunicationSpi();\n \n// Override local port.\ncommSpi.setLocalPort(4321);\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default communication SPI.\ncfg.setCommunicationSpi(commSpi);\n \n// Start grid.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/clustering/node-local-map.md b/wiki/documentation/clustering/node-local-map.md
deleted file mode 100644
index 23f77de..0000000
--- a/wiki/documentation/clustering/node-local-map.md
+++ /dev/null
@@ -1,52 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Often it is useful to share state between different compute jobs or different deployed services. For this purpose, Ignite provides a shared concurrent **node-local-map** available on each node.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCluster cluster = ignite.cluster();\n\nConcurrentMap<String, Integer> nodeLocalMap = cluster.nodeLocalMap();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-Node-local values are similar to thread locals in that they are not distributed and are kept only on the local node. Node-local data can be used by compute jobs to share state between executions. It can also be used by deployed services. 
-
-As an example, let's create a job which increments a node-local counter every time it executes on some node. This way, the node-local counter on each node will tell us how many times the job has executed on that cluster node. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "private IgniteCallable<Long> job = new IgniteCallable<Long>() {\n  @IgniteInstanceResource\n  private Ignite ignite;\n  \n  @Override \n  public Long call() {\n    // Get a reference to node local.\n    ConcurrentMap<String, AtomicLong> nodeLocalMap = ignite.cluster().nodeLocalMap();\n\n    AtomicLong cntr = nodeLocalMap.get(\"counter\");\n\n    if (cntr == null) {\n      AtomicLong old = nodeLocalMap.putIfAbsent(\"counter\", cntr = new AtomicLong());\n      \n      if (old != null)\n        cntr = old;\n    }\n    \n    return cntr.incrementAndGet();\n  }\n};",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-Now let's execute this job twice on the same node and make sure that the value of the counter is 2.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "ClusterGroup random = ignite.cluster().forRandom();\n\nIgniteCompute compute = ignite.compute(random);\n\n// The first time the counter on the picked node will be initialized to 1.\nLong res = compute.call(job);\n\nassert res == 1;\n\n// Now the counter will be incremented and will have value 2.\nres = compute.call(job);\n\nassert res == 2;",
-      "language": "java"
-    }
-  ]
-}
-[/block]
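The get-then-`putIfAbsent` idiom used by the job above is just standard `ConcurrentMap` usage, so it can be exercised in plain Java without an Ignite cluster. Below is a minimal, self-contained sketch of the same race-free lazy counter initialization against a `ConcurrentHashMap` standing in for the node-local map.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

public class NodeLocalCounter {
    // Race-free lazy initialization of a per-key counter: if another
    // thread installs the counter first, keep the winner's instance.
    static long increment(ConcurrentMap<String, AtomicLong> nodeLocal, String key) {
        AtomicLong cntr = nodeLocal.get(key);

        if (cntr == null) {
            AtomicLong old = nodeLocal.putIfAbsent(key, cntr = new AtomicLong());

            if (old != null)
                cntr = old;
        }

        return cntr.incrementAndGet();
    }

    public static void main(String[] args) {
        ConcurrentMap<String, AtomicLong> map = new ConcurrentHashMap<>();

        System.out.println(increment(map, "counter")); // 1
        System.out.println(increment(map, "counter")); // 2
    }
}
```

On a real cluster the map would come from `ignite.cluster().nodeLocalMap()`, but the initialization pattern is identical.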
\ No newline at end of file
diff --git a/wiki/documentation/compute-grid/checkpointing.md b/wiki/documentation/compute-grid/checkpointing.md
deleted file mode 100644
index b7ca7ac..0000000
--- a/wiki/documentation/compute-grid/checkpointing.md
+++ /dev/null
@@ -1,255 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Checkpointing provides the ability to save an intermediate job state. It is useful when long-running jobs need to store intermediate state to protect against node failures. Then, upon restart of a failed node, a job loads the saved checkpoint and continues from where it left off. The only requirement for job checkpoint state is that it implement the `java.io.Serializable` interface.
-
-Checkpoints are available through the following methods on the `ComputeTaskSession` interface:
-* `ComputeTaskSession.loadCheckpoint(String)`
-* `ComputeTaskSession.removeCheckpoint(String)`
-* `ComputeTaskSession.saveCheckpoint(String, Object)`
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Master Node Failure Protection"
-}
-[/block]
-One important use case for checkpoints, which is not readily apparent, is guarding against failure of the "master" node - the node that started the original execution. When the master node fails, Ignite has nowhere to send the results of job execution, and thus they will be discarded.
-
-To handle this scenario, you can store the final result of a job execution as a checkpoint and have your logic re-run the entire task in case of a "master" node failure. In that case, the task re-run will be much faster since all jobs can start from their saved checkpoints.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Setting Checkpoints"
-}
-[/block]
-Every compute job can periodically *checkpoint* itself by calling `ComputeTaskSession.saveCheckpoint(...)` method.
-
-If a job saved a checkpoint, then at the beginning of its execution it should check whether the checkpoint is available and resume from the last saved one.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute = ignite.compute();\n\ncompute.apply(new IgniteClosure<Object, Object>() {\n  // Task session (injected on closure instantiation).\n  @TaskSessionResource\n  private ComputeTaskSession ses;\n\n  @Override \n  public Object apply(Object arg) {\n    // Try to retrieve step1 result.\n    Object res1 = ses.loadCheckpoint(\"STEP1\");\n\n    if (res1 == null) {\n      res1 = computeStep1(arg); // Do some computation.\n\n      // Save step1 result.\n      ses.saveCheckpoint(\"STEP1\", res1);\n    }\n\n    // Try to retrieve step2 result.\n    Object res2 = ses.loadCheckpoint(\"STEP2\");\n\n    if (res2 == null) {\n      res2 = computeStep2(res1); // Do some computation.\n\n      // Save step2 result.\n      ses.saveCheckpoint(\"STEP2\", res2);\n    }\n\n    ...\n  }\n}, jobArg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
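The load-or-compute pattern above can be demonstrated without a cluster. In the plain-Java sketch below, a `Map` stands in for the task session's checkpoint storage (`CheckpointSketch`, `runWithCheckpoint`, and `computeStep1` are hypothetical names for illustration): a repeated run skips the already-checkpointed step.

```java
import java.util.HashMap;
import java.util.Map;

public class CheckpointSketch {
    // Stand-in for ComputeTaskSession's checkpoint storage.
    static final Map<String, Object> checkpoints = new HashMap<>();

    // Counts how many times the expensive step actually ran.
    static int step1Runs = 0;

    static int computeStep1(int arg) {
        step1Runs++;
        return arg * 2; // Pretend this is an expensive computation.
    }

    // Run step 1 only if no saved checkpoint exists, mirroring the
    // load-or-compute pattern from the job above.
    static int runWithCheckpoint(int arg) {
        Integer res1 = (Integer) checkpoints.get("STEP1");

        if (res1 == null) {
            res1 = computeStep1(arg);
            checkpoints.put("STEP1", res1);
        }

        return res1;
    }

    public static void main(String[] args) {
        runWithCheckpoint(21);           // Computes and saves the checkpoint.
        int res = runWithCheckpoint(21); // A "restarted" job resumes from it.

        System.out.println(res + " after " + step1Runs + " run(s)"); // 42 after 1 run(s)
    }
}
```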
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "CheckpointSpi"
-}
-[/block]
-In Ignite, checkpointing functionality is provided by `CheckpointSpi` which has the following out-of-the-box implementations:
-[block:parameters]
-{
-  "data": {
-    "h-0": "Class",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": "[SharedFsCheckpointSpi](#file-system-checkpoint-configuration)\n(default)",
-    "0-1": "This implementation uses a shared file system to store checkpoints.",
-    "0-2": "Yes",
-    "1-0": "[CacheCheckpointSpi](#cache-checkpoint-configuration)",
-    "1-1": "This implementation uses a cache to store checkpoints.",
-    "2-0": "[JdbcCheckpointSpi](#database-checkpoint-configuration)",
-    "2-1": "This implementation uses a database to store checkpoints.",
-    "3-1": "This implementation uses Amazon S3 to store checkpoints.",
-    "3-0": "[S3CheckpointSpi](#amazon-s3-checkpoint-configuration)"
-  },
-  "cols": 2,
-  "rows": 4
-}
-[/block]
-`CheckpointSpi` is provided in `IgniteConfiguration` and passed to the `Ignition` class at startup. 
-[block:api-header]
-{
-  "type": "basic",
-  "title": "File System Checkpoint Configuration"
-}
-[/block]
-The following configuration parameters can be used to configure `SharedFsCheckpointSpi`:
-[block:parameters]
-{
-  "data": {
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": "`setDirectoryPaths(Collection)`",
-    "0-1": "Sets directory paths to the shared folders where checkpoints are stored. The path can either be absolute or relative to the path specified in the `IGNITE_HOME` environment or system variable.",
-    "0-2": "`IGNITE_HOME/work/cp/sharedfs`"
-  },
-  "cols": 3,
-  "rows": 1
-}
-[/block]
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.IgniteConfiguration\" singleton=\"true\">\n  ...\n  <property name=\"checkpointSpi\">\n    <bean class=\"org.apache.ignite.spi.checkpoint.sharedfs.SharedFsCheckpointSpi\">\n    <!-- Change to shared directory path in your environment. -->\n      <property name=\"directoryPaths\">\n        <list>\n          <value>/my/directory/path</value>\n          <value>/other/directory/path</value>\n        </list>\n      </property>\n    </bean>\n  </property>\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "IgniteConfiguration cfg = new IgniteConfiguration();\n \nSharedFsCheckpointSpi checkpointSpi = new SharedFsCheckpointSpi();\n \n// List of checkpoint directories where all files are stored.\nCollection<String> dirPaths = new ArrayList<String>();\n \ndirPaths.add(\"/my/directory/path\");\ndirPaths.add(\"/other/directory/path\");\n \n// Override default directory path.\ncheckpointSpi.setDirectoryPaths(dirPaths);\n \n// Override default checkpoint SPI.\ncfg.setCheckpointSpi(checkpointSpi);\n \n// Starts Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Cache Checkpoint Configuration"
-}
-[/block]
-`CacheCheckpointSpi` is a cache-based implementation for checkpoint SPI. Checkpoint data will be stored in the Ignite data grid in a pre-configured cache. 
-
-The following configuration parameters can be used to configure `CacheCheckpointSpi`:
-[block:parameters]
-{
-  "data": {
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": "`setCacheName(String)`",
-    "0-1": "Sets cache name to use for storing checkpoints.",
-    "0-2": "`checkpoints`"
-  },
-  "cols": 3,
-  "rows": 1
-}
-[/block]
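Unlike the other SPI sections on this page, no configuration snippet is given here. A minimal Spring sketch in the same style might look like the following (the `org.apache.ignite.spi.checkpoint.cache.CacheCheckpointSpi` class path and the `myCheckpointCache` cache name are assumptions, not taken from this page):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  ...
  <property name="checkpointSpi">
    <bean class="org.apache.ignite.spi.checkpoint.cache.CacheCheckpointSpi">
      <!-- Store checkpoints in a dedicated cache. -->
      <property name="cacheName" value="myCheckpointCache"/>
    </bean>
  </property>
  ...
</bean>
```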
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Database Checkpoint Configuration"
-}
-[/block]
-`JdbcCheckpointSpi` uses a database to store checkpoints. All checkpoints are stored in a database table and are available from all nodes in the grid. Note that every node must have access to the database. A job state can be saved on one node and loaded on another (e.g., if a job is restarted on a different node after a node failure).
-
-The following configuration parameters can be used to configure `JdbcCheckpointSpi` (all are optional):
-[block:parameters]
-{
-  "data": {
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": "`setDataSource(DataSource)`",
-    "0-1": "Sets DataSource to use for database access.",
-    "0-2": "No value",
-    "1-0": "`setCheckpointTableName(String)`",
-    "1-1": "Sets checkpoint table name.",
-    "1-2": "`CHECKPOINTS`",
-    "2-0": "`setKeyFieldName(String)`",
-    "2-1": "Sets checkpoint key field name.",
-    "2-2": "`NAME`",
-    "3-0": "`setKeyFieldType(String)`",
-    "3-1": "Sets checkpoint key field type. The field should have corresponding SQL string type (`VARCHAR` , for example).",
-    "3-2": "`VARCHAR(256)`",
-    "4-0": "`setValueFieldName(String)`",
-    "4-1": "Sets checkpoint value field name.",
-    "4-2": "`VALUE`",
-    "5-0": "`setValueFieldType(String)`",
-    "5-1": "Sets checkpoint value field type. Note that the field should have a corresponding SQL BLOB type. The default value of `BLOB` won't work for all databases. For example, when using HSQLDB, the type should be `longvarbinary`.",
-    "5-2": "`BLOB`",
-    "6-0": "`setExpireDateFieldName(String)`",
-    "6-1": "Sets checkpoint expiration date field name.",
-    "6-2": "`EXPIRE_DATE`",
-    "7-0": "`setExpireDateFieldType(String)`",
-    "7-1": "Sets checkpoint expiration date field type. The field should have corresponding SQL `DATETIME` type.",
-    "7-2": "`DATETIME`",
-    "8-0": "`setNumberOfRetries(int)`",
-    "8-1": "Sets number of retries in case of any database errors.",
-    "8-2": "2",
-    "9-0": "`setUser(String)`",
-    "9-1": "Sets checkpoint database user name. Note that authentication will be performed only if both user and password are set.",
-    "9-2": "No value",
-    "10-0": "`setPassword(String)`",
-    "10-1": "Sets checkpoint database password.",
-    "10-2": "No value"
-  },
-  "cols": 3,
-  "rows": 11
-}
-[/block]
-##Apache DBCP
-The [Apache DBCP](http://commons.apache.org/proper/commons-dbcp/) project provides various wrappers for data sources and connection pools. You can use these wrappers as Spring beans to configure this SPI from a Spring configuration file or code. Refer to the [Apache DBCP](http://commons.apache.org/proper/commons-dbcp/) project for more information.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\" singleton=\"true\">\n  ...\n  <property name=\"checkpointSpi\">\n    <bean class=\"org.apache.ignite.spi.checkpoint.database.JdbcCheckpointSpi\">\n      <property name=\"dataSource\">\n        <ref bean=\"anyPoolledDataSourceBean\"/>\n      </property>\n      <property name=\"checkpointTableName\" value=\"CHECKPOINTS\"/>\n      <property name=\"user\" value=\"test\"/>\n      <property name=\"password\" value=\"test\"/>\n    </bean>\n  </property>\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "JdbcCheckpointSpi checkpointSpi = new JdbcCheckpointSpi();\n \njavax.sql.DataSource ds = ... // Set datasource.\n \n// Set database checkpoint SPI parameters.\ncheckpointSpi.setDataSource(ds);\ncheckpointSpi.setUser(\"test\");\ncheckpointSpi.setPassword(\"test\");\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default checkpoint SPI.\ncfg.setCheckpointSpi(checkpointSpi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Amazon S3 Checkpoint Configuration"
-}
-[/block]
-`S3CheckpointSpi` uses Amazon S3 storage to store checkpoints. For information about Amazon S3 visit [http://aws.amazon.com/](http://aws.amazon.com/).
-
-The following configuration parameters can be used to configure `S3CheckpointSpi`:
-[block:parameters]
-{
-  "data": {
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": "`setAwsCredentials(AWSCredentials)`",
-    "0-1": "Sets AWS credentials to use for storing checkpoints.",
-    "0-2": "No value (must be provided)",
-    "1-0": "`setClientConfiguration(Client)`",
-    "1-1": "Sets AWS client configuration.",
-    "1-2": "No value",
-    "2-0": "`setBucketNameSuffix(String)`",
-    "2-1": "Sets bucket name suffix.",
-    "2-2": "default-bucket"
-  },
-  "cols": 3,
-  "rows": 3
-}
-[/block]
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\" singleton=\"true\">\n  ...\n  <property name=\"checkpointSpi\">\n    <bean class=\"org.apache.ignite.spi.checkpoint.s3.S3CheckpointSpi\">\n      <property name=\"awsCredentials\">\n        <bean class=\"com.amazonaws.auth.BasicAWSCredentials\">\n          <constructor-arg value=\"YOUR_ACCESS_KEY_ID\" />\n          <constructor-arg value=\"YOUR_SECRET_ACCESS_KEY\" />\n        </bean>\n      </property>\n    </bean>\n  </property>\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "IgniteConfiguration cfg = new IgniteConfiguration();\n \nS3CheckpointSpi spi = new S3CheckpointSpi();\n \nAWSCredentials cred = new BasicAWSCredentials(YOUR_ACCESS_KEY_ID, YOUR_SECRET_ACCESS_KEY);\n \nspi.setAwsCredentials(cred);\n \nspi.setBucketNameSuffix(\"checkpoints\");\n \n// Override default checkpoint SPI.\ncfg.setCheckpointSpi(spi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/compute-grid/collocate-compute-and-data.md b/wiki/documentation/compute-grid/collocate-compute-and-data.md
deleted file mode 100644
index e4d064e..0000000
--- a/wiki/documentation/compute-grid/collocate-compute-and-data.md
+++ /dev/null
@@ -1,46 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Collocating computations with data minimizes data serialization over the network and can significantly improve the performance and scalability of your application. Whenever possible, you should make your best effort to collocate your computations with the cluster nodes that cache the data to be processed.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Affinity Call and Run Methods"
-}
-[/block]
-The `affinityCall(...)` and `affinityRun(...)` methods collocate jobs with the nodes on which data is cached. In other words, given a cache name and an affinity key, these methods try to locate the node on which the key resides in the specified Ignite cache, and then execute the job there. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Integer, String> cache = ignite.cache(CACHE_NAME);\n\nIgniteCompute compute = ignite.compute();\n\nfor (int key = 0; key < KEY_CNT; key++) {\n    // This closure will execute on the remote node where\n    // data with the 'key' is located.\n    compute.affinityRun(CACHE_NAME, key, () -> { \n        // Peek is a local memory lookup.\n        System.out.println(\"Co-located [key= \" + key + \", value= \" + cache.peek(key) +']');\n    });\n}",
-      "language": "java",
-      "name": "affinityRun"
-    },
-    {
-      "code": "IgniteCache<Integer, String> cache = ignite.cache(CACHE_NAME);\n\nIgniteCompute asyncCompute = ignite.compute().withAsync();\n\nList<IgniteFuture<?>> futs = new ArrayList<>();\n\nfor (int key = 0; key < KEY_CNT; key++) {\n    // This closure will execute on the remote node where\n    // data with the 'key' is located.\n    asyncCompute.affinityRun(CACHE_NAME, key, () -> { \n        // Peek is a local memory lookup.\n        System.out.println(\"Co-located [key= \" + key + \", value= \" + cache.peek(key) +']');\n    });\n  \n    futs.add(asyncCompute.future());\n}\n\n// Wait for all futures to complete.\nfuts.stream().forEach(IgniteFuture::get);",
-      "language": "java",
-      "name": "async affinityRun"
-    },
-    {
-      "code": "final IgniteCache<Integer, String> cache = ignite.cache(CACHE_NAME);\n\nIgniteCompute compute = ignite.compute();\n\nfor (int i = 0; i < KEY_CNT; i++) {\n    final int key = i;\n \n    // This closure will execute on the remote node where\n    // data with the 'key' is located.\n    compute.affinityRun(CACHE_NAME, key, new IgniteRunnable() {\n        @Override public void run() {\n            // Peek is a local memory lookup.\n            System.out.println(\"Co-located [key= \" + key + \", value= \" + cache.peek(key) +']');\n        }\n    });\n}",
-      "language": "java",
-      "name": "java7 affinityRun"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/compute-grid/compute-grid.md b/wiki/documentation/compute-grid/compute-grid.md
deleted file mode 100644
index a8863f8..0000000
--- a/wiki/documentation/compute-grid/compute-grid.md
+++ /dev/null
@@ -1,73 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Distributed computations are performed in a parallel fashion to gain **high performance**, **low latency**, and **linear scalability**. Ignite compute grid provides a set of simple APIs that allow users to distribute computations and data processing across multiple computers in the cluster. Distributed parallel processing is based on the ability to take any computation, execute it on any set of cluster nodes, and return the results back.
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/zrJB0GshRdS3hLn0QGlI",
-        "in_memory_compute.png",
-        "400",
-        "301",
-        "#da4204",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-##Features
-  * [Distributed Closure Execution](doc:distributed-closures)
-  * [MapReduce & ForkJoin Processing](doc:compute-tasks)
-  * [Clustered Executor Service](doc:executor-service)
-  * [Collocation of Compute and Data](doc:collocate-compute-and-data) 
-  * [Load Balancing](doc:load-balancing) 
-  * [Fault Tolerance](doc:fault-tolerance)
-  * [Job State Checkpointing](doc:checkpointing) 
-  * [Job Scheduling](doc:job-scheduling) 
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteCompute"
-}
-[/block]
-The `IgniteCompute` interface provides methods for running many types of computations over nodes in a cluster or a cluster group. These methods can be used to execute tasks or closures in a distributed fashion.
-
-All jobs and closures are [guaranteed to be executed](doc:fault-tolerance) as long as there is at least one node standing. If a job execution is rejected due to lack of resources, a failover mechanism is provided. In case of failover, the load balancer picks the next available node to execute the job. Here is how you can get an `IgniteCompute` instance:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\n// Get compute instance over all nodes in the cluster.\nIgniteCompute compute = ignite.compute();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-You can also limit the scope of computations to a [Cluster Group](doc:cluster-groups). In this case, computation will only execute on the nodes within the cluster group.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\nClusterGroup remoteGroup = ignite.cluster().forRemotes();\n\n// Limit computations only to remote nodes (exclude local node).\nIgniteCompute compute = ignite.compute(remoteGroup);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/compute-grid/compute-tasks.md b/wiki/documentation/compute-grid/compute-tasks.md
deleted file mode 100644
index ef15f15..0000000
--- a/wiki/documentation/compute-grid/compute-tasks.md
+++ /dev/null
@@ -1,122 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-`ComputeTask` is the Ignite abstraction for simplified in-memory MapReduce, which is also very close to the ForkJoin paradigm. Pure MapReduce was never built for performance and works well only for off-line, batch-oriented processing (e.g. Hadoop MapReduce). However, when computing on data that resides in-memory, real-time low latencies and high throughput usually take the highest priority, and simplicity of the API becomes very important as well. With that in mind, Ignite introduced the `ComputeTask` API, which is a light-weight MapReduce (or ForkJoin) implementation.
-[block:callout]
-{
-  "type": "info",
-  "body": "Use `ComputeTask` only when you need fine-grained control over the job-to-node mapping, or custom fail-over logic. For all other cases you should use simple closure executions on the cluster documented in [Distributed Computations](doc:compute) section."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "ComputeTask"
-}
-[/block]
-`ComputeTask` defines jobs to execute on the cluster, and the mappings of those jobs to nodes. It also defines how to process (reduce) the job results. All `IgniteCompute.execute(...)` methods execute the given task on the grid. User applications should implement the `map(...)` and `reduce(...)` methods of the `ComputeTask` interface.
-
-Tasks are defined by implementing 2 or 3 methods of the `ComputeTask` interface:
-
-##Map Method
-Method `map(...)` instantiates the jobs and maps them to worker nodes. The method receives the collection of cluster nodes on which the task is run and the task argument. The method should return a map with jobs as keys and mapped worker nodes as values. The jobs are then sent to the mapped nodes and executed there.
-[block:callout]
-{
-  "type": "info",
-  "body": "Refer to [ComputeTaskSplitAdapter](#computetasksplitadapter) for simplified implementation of the `map(...)` method."
-}
-[/block]
-##Result Method
-Method `result(...)` is called each time a job completes on some cluster node. It receives the result returned by the completed job, as well as the list of all the job results received so far. The method should return a `ComputeJobResultPolicy` instance, indicating what to do next:
-  * `WAIT` - wait for all remaining jobs to complete (if any)
-  * `REDUCE` - immediately move to the reduce step, discarding all remaining jobs and not-yet-received results
-  * `FAILOVER` - failover the job to another node (see Fault Tolerance)
-All the received job results will be available in the `reduce(...)` method as well.
-
-##Reduce Method
-Method `reduce(...)` is called on reduce step, when all the jobs have completed (or REDUCE result policy was returned from the `result(...)` method). The method receives a list with all the completed results and should return a final result of the computation. 
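Put together, the map → result → reduce flow can be simulated in plain Java without Ignite (the class, enum, and method names below are illustrative, not part of the Ignite API):

```java
import java.util.ArrayList;
import java.util.List;

public class TaskLifecycleSketch {
    enum Policy { WAIT, REDUCE, FAILOVER }

    /**
     * Mimics the result(...) / reduce(...) interplay: results are collected
     * one by one until some job's result triggers an early REDUCE policy,
     * then only the received results are summed in the reduce step.
     */
    static int runToReduce(List<Integer> jobResults, int reduceAfter) {
        List<Integer> received = new ArrayList<>();

        for (int res : jobResults) {
            received.add(res); // result(...) is invoked per completed job

            // Decide what to do next, as result(...) would.
            Policy p = received.size() >= reduceAfter ? Policy.REDUCE : Policy.WAIT;

            if (p == Policy.REDUCE)
                break; // remaining jobs' results are discarded
        }

        // reduce(...) sees only the received results.
        int sum = 0;
        for (int r : received)
            sum += r;
        return sum;
    }
}
```

Here the early `REDUCE` branch mirrors returning `ComputeJobResultPolicy.REDUCE`: once it is returned, the not-yet-received results never reach `reduce(...)`.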
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Compute Task Adapters"
-}
-[/block]
-It is not necessary to implement all 3 methods of the `ComputeTask` API each time you need to define a computation. There are a number of helper classes that let you describe only a particular piece of your logic, leaving the rest for Ignite to handle automatically. 
-
-##ComputeTaskAdapter
-`ComputeTaskAdapter` defines a default implementation of the `result(...)` method which returns `FAILOVER` policy if a job threw an exception and `WAIT` policy otherwise, thus waiting for all jobs to finish with a result.
-
-##ComputeTaskSplitAdapter
-`ComputeTaskSplitAdapter` extends `ComputeTaskAdapter` and adds the capability to automatically assign jobs to nodes. It hides the `map(...)` method and adds a new `split(...)` method in which the user only needs to provide a collection of the jobs to be executed (the mapping of those jobs to nodes is handled automatically by the adapter in a load-balanced fashion). 
-
-This adapter is especially useful in homogeneous environments where all nodes are equally suitable for executing jobs and the mapping step can be done implicitly.
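As an Ignite-free illustration of what `ComputeTaskSplitAdapter` automates, the sketch below splits a phrase into per-word jobs, runs them on a local thread pool standing in for cluster nodes, and reduces the results (class and method names are hypothetical, not Ignite API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SplitReduceSketch {
    /** "Split" a phrase into one job per word, execute, then "reduce" by summing lengths. */
    static int sumWordLengths(String phrase) {
        // A local thread pool stands in for the cluster nodes.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            // Split step: one callable job per word, as split(...) would create.
            List<Callable<Integer>> jobs = new ArrayList<>();
            for (String word : phrase.split(" "))
                jobs.add(word::length);

            // Execute step: invokeAll distributes jobs across pool threads.
            int sum = 0;
            for (Future<Integer> f : pool.invokeAll(jobs))
                sum += f.get(); // Reduce step: aggregate per-job results.
            return sum;
        }
        catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
        finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        // "Hello"(5) + "Grid"(4) + "Enabled"(7) + "World!"(6) = 22 non-space characters.
        System.out.println(sumWordLengths("Hello Grid Enabled World!"));
    }
}
```

With Ignite, the thread pool is replaced by cluster nodes and the job-to-node mapping is load-balanced by the adapter, but the split/execute/reduce shape is the same.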
-[block:api-header]
-{
-  "type": "basic",
-  "title": "ComputeJob"
-}
-[/block]
-All jobs that are spawned by a task are implementations of the `ComputeJob` interface. The `execute()` method of this interface defines the job logic and should return a job result. The `cancel()` method defines the logic to run if the job is discarded (for example, when the task decides to reduce immediately or to cancel).
-
-##ComputeJobAdapter
-Convenience adapter which provides a no-op implementation of the `cancel()` method.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Example"
-}
-[/block]
-Here is an example of `ComputeTask` and `ComputeJob` implementations.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute = ignite.compute();\n\n// Execute task on the clustr and wait for its completion.\nint cnt = grid.compute().execute(CharacterCountTask.class, \"Hello Grid Enabled World!\");\n \nSystem.out.println(\">>> Total number of characters in the phrase is '\" + cnt + \"'.\");\n \n/**\n * Task to count non-white-space characters in a phrase.\n */\nprivate static class CharacterCountTask extends ComputeTaskSplitAdapter<String, Integer> {\n  // 1. Splits the received string into to words\n  // 2. Creates a child job for each word\n  // 3. Sends created jobs to other nodes for processing. \n  @Override \n  public List<ClusterNode> split(List<ClusterNode> subgrid, String arg) {\n    String[] words = arg.split(\" \");\n\n    List<ComputeJob> jobs = new ArrayList<>(words.length);\n\n    for (final String word : arg.split(\" \")) {\n      jobs.add(new ComputeJobAdapter() {\n        @Override public Object execute() {\n          System.out.println(\">>> Printing '\" + word + \"' on from compute job.\");\n\n          // Return number of letters in the word.\n          return word.length();\n        }\n      });\n    }\n\n    return jobs;\n  }\n\n  @Override \n  public Integer reduce(List<ComputeJobResult> results) {\n    int sum = 0;\n\n    for (ComputeJobResult res : results)\n      sum += res.<Integer>getData();\n\n    return sum;\n  }\n}",
-      "language": "java",
-      "name": "ComputeTaskSplitAdapter"
-    },
-    {
-      "code": "IgniteCompute compute = ignite.compute();\n\n// Execute the task on the cluster and wait for its completion.\nint cnt = compute.execute(CharacterCountTask.class, \"Hello Grid Enabled World!\");\n \nSystem.out.println(\">>> Total number of characters in the phrase is '\" + cnt + \"'.\");\n \n/**\n * Task to count non-white-space characters in a phrase.\n */\nprivate static class CharacterCountTask extends ComputeTaskAdapter<String, Integer> {\n    // 1. Splits the received string into words.\n    // 2. Creates a child job for each word.\n    // 3. Sends created jobs to other nodes for processing. \n    @Override \n    public Map<? extends ComputeJob, ClusterNode> map(List<ClusterNode> subgrid, String arg) {\n        String[] words = arg.split(\" \");\n      \n        Map<ComputeJob, ClusterNode> map = new HashMap<>(words.length);\n        \n        Iterator<ClusterNode> it = subgrid.iterator();\n         \n        for (final String word : words) {\n            // If we used all nodes, restart the iterator.\n            if (!it.hasNext())\n                it = subgrid.iterator();\n             \n            ClusterNode node = it.next();\n                \n            map.put(new ComputeJobAdapter() {\n                @Override public Object execute() {\n                    System.out.println(\">>> Printing '\" + word + \"' on this node from grid job.\");\n                  \n                    // Return number of letters in the word.\n                    return word.length();\n                }\n             }, node);\n        }\n      \n        return map;\n    }\n \n    @Override \n    public Integer reduce(List<ComputeJobResult> results) {\n        int sum = 0;\n      \n        for (ComputeJobResult res : results)\n            sum += res.<Integer>getData();\n      \n        return sum;\n    }\n}",
-      "language": "java",
-      "name": "ComputeTaskAdapter"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Distributed Task Session"
-}
-[/block]
-A distributed task session is created for every task execution. It is defined by the `ComputeTaskSession` interface. The task session is visible to the task and all the jobs spawned by it, so attributes set on a task or on a job can be accessed by other jobs. The task session also allows you to receive notifications when attributes are set, or to wait for an attribute to be set.
-
-The sequence in which session attributes are set is consistent across the task and all job siblings within it. There will never be a case when one job sees attribute A before attribute B, and another job sees attribute B before A.
-
-In the example below, we have all jobs synchronize on STEP1 before moving on to STEP2. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute = ignite.compute();\n\ncompute.execute(new ComputeTaskSplitAdapter<Object, Object>() {\n  @Override \n  protected Collection<? extends ComputeJob> split(int gridSize, Object arg)  {\n    Collection<ComputeJob> jobs = new LinkedList<>();\n\n    // Generate jobs by number of nodes in the grid.\n    for (int i = 0; i < gridSize; i++) {\n      jobs.add(new ComputeJobAdapter(arg) {\n        // Auto-injected task session.\n        @TaskSessionResource\n        private ComputeTaskSession ses;\n        \n        // Auto-injected job context.\n        @JobContextResource\n        private ComputeJobContext jobCtx;\n\n        @Override \n        public Object execute() {\n          // Perform STEP1.\n          ...\n          \n          // Tell other jobs that STEP1 is complete.\n          ses.setAttribute(jobCtx.getJobId(), \"STEP1\");\n          \n          // Wait for other jobs to complete STEP1.\n          for (ComputeJobSibling sibling : ses.getJobSiblings())\n            ses.waitForAttribute(sibling.getJobId(), \"STEP1\", 0);\n          \n          // Move on to STEP2.\n          ...\n        }\n      });\n    }\n\n    return jobs;\n  }\n               \n  @Override \n  public Object reduce(List<ComputeJobResult> results) {\n    // No-op.\n    return null;\n  }\n}, null);\n",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/compute-grid/distributed-closures.md b/wiki/documentation/compute-grid/distributed-closures.md
deleted file mode 100644
index 035d361..0000000
--- a/wiki/documentation/compute-grid/distributed-closures.md
+++ /dev/null
@@ -1,124 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite compute grid allows you to broadcast and load-balance any closure within the cluster or a cluster group, including plain Java `Runnable`s and `Callable`s.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Broadcast Methods"
-}
-[/block]
-All `broadcast(...)` methods broadcast a given job to all nodes in the cluster or cluster group. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "final Ignite ignite = Ignition.ignite();\n\n// Limit broadcast to remote nodes only.\nIgniteCompute compute = ignite.compute(ignite.cluster().forRemotes());\n\n// Print out hello message on remote nodes in the cluster group.\ncompute.broadcast(() -> System.out.println(\"Hello Node: \" + ignite.cluster().localNode().id()));",
-      "language": "java",
-      "name": "broadcast"
-    },
-    {
-      "code": "final Ignite ignite = Ignition.ignite();\n\n// Limit broadcast to remote nodes only and \n// enable asynchronous mode.\nIgniteCompute compute = ignite.compute(ignite.cluster().forRemotes()).withAsync();\n\n// Print out hello message on remote nodes in the cluster group.\ncompute.broadcast(() -> System.out.println(\"Hello Node: \" + ignite.cluster().localNode().id()));\n\nComputeTaskFuture<?> fut = compute.future();\n\nfut.listenAsync(f -> System.out.println(\"Finished sending broadcast job.\"));",
-      "language": "java",
-      "name": "async broadcast"
-    },
-    {
-      "code": "final Ignite ignite = Ignition.ignite();\n\n// Limit broadcast to remote nodes only.\nIgniteCompute compute = ignite.compute(ignite.cluster().forRemotes());\n\n// Print out hello message on remote nodes in projection.\ncompute.broadcast(\n    new IgniteRunnable() {\n        @Override public void run() {\n            // Print ID of remote node on remote node.\n            System.out.println(\">>> Hello Node: \" + ignite.cluster().localNode().id());\n        }\n    }\n);",
-      "language": "java",
-      "name": "java7 broadcast"
-    },
-    {
-      "code": "final Ignite ignite = Ignition.ignite();\n\n// Limit broadcast to remote nodes only and \n// enable asynchronous mode.\nIgniteCompute compute = ignite.compute(ignite.cluster().forRemotes()).withAsync();\n\n// Print out hello message on remote nodes in the cluster group.\ncompute.broadcast(\n    new IgniteRunnable() {\n        @Override public void run() {\n            // Print ID of remote node on remote node.\n            System.out.println(\">>> Hello Node: \" + ignite.cluster().localNode().id());\n        }\n    }\n);\n\nComputeTaskFuture<?> fut = compute.future();\n\nfut.listenAsync(new IgniteInClosure<ComputeTaskFuture<?>>() {\n    @Override public void apply(ComputeTaskFuture<?> fut) {\n        System.out.println(\"Finished sending broadcast job to cluster.\");\n    }\n});",
-      "language": "java",
-      "name": "java7 async broadcast"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Call and Run Methods"
-}
-[/block]
-All `call(...)` and `run(...)` methods execute either individual jobs or collections of jobs on the cluster or a cluster group.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Collection<IgniteCallable<Integer>> calls = new ArrayList<>();\n \n// Iterate through all words in the sentence and create callable jobs.\nfor (String word : \"Count characters using callable\".split(\" \"))\n    calls.add(word::length);\n\n// Execute collection of callables on the cluster.\nCollection<Integer> res = ignite.compute().call(calls);\n\n// Add all the word lengths received from cluster nodes.\nint total = res.stream().mapToInt(Integer::intValue).sum(); ",
-      "language": "java",
-      "name": "call"
-    },
-    {
-      "code": "IgniteCompute compute = ignite.compute();\n\n// Iterate through all words and print \n// each word on a different cluster node.\nfor (String word : \"Print words on different cluster nodes\".split(\" \"))\n    // Run on some cluster node.\n    compute.run(() -> System.out.println(word));",
-      "language": "java",
-      "name": "run"
-    },
-    {
-      "code": "Collection<IgniteCallable<Integer>> calls = new ArrayList<>();\n \n// Iterate through all words in the sentence and create callable jobs.\nfor (String word : \"Count characters using callable\".split(\" \"))\n    calls.add(word::length);\n\n// Enable asynchronous mode.\nIgniteCompute asyncCompute = ignite.compute().withAsync();\n\n// Asynchronously execute collection of callables on the cluster.\nasyncCompute.call(calls);\n\nasyncCompute.future().listenAsync(fut -> {\n    // Total number of characters.\n    int total = fut.get().stream().mapToInt(Integer::intValue).sum(); \n  \n    System.out.println(\"Total number of characters: \" + total);\n});",
-      "language": "java",
-      "name": "async call"
-    },
-    {
-      "code": "IgniteCompute asyncCompute = ignite.compute().withAsync();\n\nCollection<ComputeTaskFuture<?>> futs = new ArrayList<>();\n\n// Iterate through all words and print \n// each word on a different cluster node.\nfor (String word : \"Print words on different cluster nodes\".split(\" \")) {\n    // Asynchronously run on some cluster node.\n    asyncCompute.run(() -> System.out.println(word));\n\n    futs.add(asyncCompute.future());\n}\n\n// Wait for completion of all futures.\nfuts.stream().forEach(ComputeTaskFuture::get);",
-      "language": "java",
-      "name": "async run"
-    },
-    {
-      "code": "Collection<IgniteCallable<Integer>> calls = new ArrayList<>();\n \n// Iterate through all words in the sentence and create callable jobs.\nfor (final String word : \"Count characters using callable\".split(\" \")) {\n    calls.add(new IgniteCallable<Integer>() {\n        @Override public Integer call() throws Exception {\n            return word.length(); // Return word length.\n        }\n    });\n}\n \n// Execute collection of callables on the cluster.\nCollection<Integer> res = ignite.compute().call(calls);\n\nint total = 0;\n\n// Total number of characters.\n// Looks much better in Java 8.\nfor (Integer i : res)\n  total += i;",
-      "language": "java",
-      "name": "java7 call"
-    },
-    {
-      "code": "IgniteCompute asyncCompute = ignite.compute().withAsync();\n\nCollection<ComputeTaskFuture<?>> futs = new ArrayList<>();\n\n// Iterate through all words and print\n// each word on a different cluster node.\nfor (String word : \"Print words on different cluster nodes\".split(\" \")) {\n    // Asynchronously run on some cluster node.\n    asyncCompute.run(new IgniteRunnable() {\n        @Override public void run() {\n            System.out.println(word);\n        }\n    });\n\n    futs.add(asyncCompute.future());\n}\n\n// Wait for completion of all futures.\nfor (ComputeTaskFuture<?> f : futs)\n  f.get();",
-      "language": "java",
-      "name": "java7 async run"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Apply Methods"
-}
-[/block]
-A closure is a block of code that encloses its body and any outside variables used inside of it as a function object. You can then pass such a function object anywhere you can pass a variable and execute it. All `apply(...)` methods execute closures on the cluster. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute = ignite.compute();\n\n// Execute the closure for every word on the cluster.\nCollection<Integer> res = compute.apply(\n    String::length,\n    Arrays.asList(\"Count characters using closure\".split(\" \"))\n);\n     \n// Add all the word lengths received from cluster nodes.\nint total = res.stream().mapToInt(Integer::intValue).sum(); ",
-      "language": "java",
-      "name": "apply"
-    },
-    {
-      "code": "// Enable asynchronous mode.\nIgniteCompute asyncCompute = ignite.compute().withAsync();\n\n// Execute closure on all cluster nodes.\n// If the number of closures is less than the number of \n// parameters, then Ignite will create as many closures \n// as there are parameters.\nasyncCompute.apply(\n    String::length,\n    Arrays.asList(\"Count characters using closure\".split(\" \"))\n);\n     \nasyncCompute.future().listenAsync(fut -> {\n    // Total number of characters.\n    int total = fut.get().stream().mapToInt(Integer::intValue).sum(); \n  \n    System.out.println(\"Total number of characters: \" + total);\n});",
-      "language": "java",
-      "name": "async apply"
-    },
-    {
-      "code": "// Execute closure on all cluster nodes.\n// If the number of closures is less than the number of \n// parameters, then Ignite will create as many closures \n// as there are parameters.\nCollection<Integer> res = ignite.compute().apply(\n    new IgniteClosure<String, Integer>() {\n        @Override public Integer apply(String word) {\n            // Return number of letters in the word.\n            return word.length();\n        }\n    },\n    Arrays.asList(\"Count characters using closure\".split(\" \"))\n);\n     \nint sum = 0;\n \n// Add up individual word lengths received from remote nodes.\nfor (int len : res)\n    sum += len;",
-      "language": "java",
-      "name": "java7 apply"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/compute-grid/executor-service.md b/wiki/documentation/compute-grid/executor-service.md
deleted file mode 100644
index 3ea86fc..0000000
--- a/wiki/documentation/compute-grid/executor-service.md
+++ /dev/null
@@ -1,40 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-[IgniteCompute](doc:compute) provides a convenient API for executing computations on the cluster. However, you can also work directly with the standard `ExecutorService` interface from the JDK. Ignite provides a cluster-enabled implementation of `ExecutorService` and automatically executes all the computations in a load-balanced fashion within the cluster. Your computations also become fault-tolerant and are guaranteed to execute as long as there is at least one node left. You can think of it as a distributed, cluster-enabled thread pool. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Get cluster-enabled executor service.\nExecutorService exec = ignite.executorService();\n \n// Iterate through all words in the sentence and create jobs.\nfor (final String word : \"Print words using runnable\".split(\" \")) {\n  // Execute runnable on some node.\n  exec.submit(new IgniteRunnable() {\n    @Override public void run() {\n      System.out.println(\">>> Printing '\" + word + \"' on this node from grid job.\");\n    }\n  });\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
- 
-You can also limit job execution to a subset of the nodes in your grid:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Cluster group for nodes where the attribute 'ROLE' equals 'worker'.\nClusterGroup workerGrp = ignite.cluster().forAttribute(\"ROLE\", \"worker\");\n\n// Get cluster-enabled executor service for the above cluster group.\nExecutorService exec = ignite.executorService(workerGrp);\n",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/compute-grid/fault-tolerance.md b/wiki/documentation/compute-grid/fault-tolerance.md
deleted file mode 100644
index 1eed62d..0000000
--- a/wiki/documentation/compute-grid/fault-tolerance.md
+++ /dev/null
@@ -1,96 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite supports automatic job failover. In case of a node crash, jobs are automatically transferred to other available nodes for re-execution. However, in Ignite you can also treat any job result as a failure. The worker node may still be alive, but running low on CPU, I/O, disk space, etc. There are many conditions that may result in a failure within your application, and you can trigger a failover for any of them. Moreover, you can choose the node to which a job should be failed over, as it could be different for different applications or different computations within the same application.
-
-The `FailoverSpi` is responsible for handling the selection of a new node for the execution of a failed job. `FailoverSpi` inspects the failed job and the list of all available grid nodes on which the job execution can be retried. It ensures that the job is not re-mapped to the same node it had failed on. Failover is triggered when the method `ComputeTask.result(...)` returns the `ComputeJobResultPolicy.FAILOVER` policy. Ignite comes with a number of built-in customizable Failover SPI implementations.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "At Least Once Guarantee"
-}
-[/block]
-As long as there is at least one node standing, no job will ever be lost.
-
-By default, Ignite will failover all jobs from stopped or crashed nodes automatically. For custom failover behavior, you should implement the `ComputeTask.result(...)` method. The example below triggers failover whenever a job throws any `IgniteException` (or one of its subclasses):
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class MyComputeTask extends ComputeTaskSplitAdapter<String, String> {\n    ...\n      \n    @Override \n    public ComputeJobResultPolicy result(ComputeJobResult res, List<ComputeJobResult> rcvd) {\n        IgniteException err = res.getException();\n     \n        if (err != null)\n            return ComputeJobResultPolicy.FAILOVER;\n    \n        // If there is no exception, wait for all job results.\n        return ComputeJobResultPolicy.WAIT;\n    }\n  \n    ...\n}\n",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Closure Failover"
-}
-[/block]
-Closure failover is by default governed by `ComputeTaskAdapter`, which is triggered if a remote node either crashes or rejects closure execution. This default behavior may be overridden by using the `IgniteCompute.withNoFailover()` method, which creates an instance of `IgniteCompute` with a **no-failover flag** set on it. Here is an example:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute = ignite.compute().withNoFailover();\n\ncompute.apply(() -> {\n    // Do something\n    ...\n}, \"Some argument\");\n",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "AlwaysFailoverSpi"
-}
-[/block]
-`AlwaysFailoverSpi` always reroutes a failed job to another node. Note that, at first, an attempt will be made to reroute the failed job to a node that the task has not been executed on. If no such nodes are available, an attempt will be made to reroute the failed job to nodes that may be running other jobs from the same task. If none of the above attempts succeed, the job is not failed over and `null` is returned.
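The selection order described above can be sketched as a plain-Java method (an illustrative approximation with hypothetical names, not the actual SPI implementation):

```java
import java.util.List;

public class FailoverSelectionSketch {
    /**
     * Picks the next node for a failed job, mimicking AlwaysFailoverSpi's order:
     * 1. a node the job has not been tried on yet;
     * 2. otherwise, any node other than the one that just failed;
     * 3. otherwise, null (the job is not failed over).
     */
    static String pickNode(List<String> available, List<String> attempted, int maxAttempts) {
        // Stop once the maximum number of failover attempts is exhausted.
        if (attempted.size() > maxAttempts)
            return null;

        // No previous attempts recorded: any available node will do.
        if (attempted.isEmpty())
            return available.isEmpty() ? null : available.get(0);

        // 1. Prefer a node the job has not been tried on.
        for (String node : available)
            if (!attempted.contains(node))
                return node;

        // 2. Fall back to any node except the one that just failed.
        String lastFailed = attempted.get(attempted.size() - 1);
        for (String node : available)
            if (!node.equals(lastFailed))
                return node;

        // 3. Nowhere left to reroute the job.
        return null;
    }
}
```

In the real SPI the same idea operates on `ClusterNode` instances and the attempt count is governed by `setMaximumFailoverAttempts(int)` described below.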
-
-The following configuration parameters can be used to configure `AlwaysFailoverSpi`.
-[block:parameters]
-{
-  "data": {
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": "`setMaximumFailoverAttempts(int)`",
-    "0-1": "Sets the maximum number of attempts to fail-over a failed job to other nodes.",
-    "0-2": "5"
-  },
-  "cols": 3,
-  "rows": 1
-}
-[/block]
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean id=\"grid.custom.cfg\" class=\"org.apache.ignite.IgniteConfiguration\" singleton=\"true\">\n  ...\n  <bean class=\"org.apache.ignite.spi.failover.always.AlwaysFailoverSpi\">\n    <property name=\"maximumFailoverAttempts\" value=\"5\"/>\n  </bean>\n  ...\n</bean>\n",
-      "language": "xml"
-    },
-    {
-      "code": "AlwaysFailoverSpi failSpi = new AlwaysFailoverSpi();\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override maximum failover attempts.\nfailSpi.setMaximumFailoverAttempts(5);\n \n// Override the default failover SPI.\ncfg.setFailoverSpi(failSpi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/compute-grid/job-scheduling.md b/wiki/documentation/compute-grid/job-scheduling.md
deleted file mode 100644
index 568cbc7..0000000
--- a/wiki/documentation/compute-grid/job-scheduling.md
+++ /dev/null
@@ -1,86 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-In Ignite, jobs are mapped to cluster nodes during the initial task split or closure execution on the client side. However, once jobs arrive at the designated nodes, they need to be ordered for execution. By default, jobs are submitted to a thread pool and are executed in random order. However, if you need fine-grained control over job ordering, you can enable `CollisionSpi`.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "FIFO Ordering"
-}
-[/block]
-`FifoQueueCollisionSpi` allows a certain number of jobs to proceed in first-in, first-out order without interruption. All other jobs are put on a waiting list until their turn comes.
-
-The number of parallel jobs is controlled by the `parallelJobsNumber` configuration parameter. The default is the number of cores times 2.
-
-##One at a Time
-Note that by setting `parallelJobsNumber` to 1, you can guarantee that all jobs will be executed one-at-a-time, and no two jobs will be executed concurrently.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.IgniteConfiguration\" singleton=\"true\">\n  ...\n  <property name=\"collisionSpi\">\n    <bean class=\"org.apache.ignite.spi.collision.fifoqueue.FifoQueueCollisionSpi\">\n      <!-- Execute one job at a time. -->\n      <property name=\"parallelJobsNumber\" value=\"1\"/>\n    </bean>\n  </property>\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "FifoQueueCollisionSpi colSpi = new FifoQueueCollisionSpi();\n \n// Execute jobs sequentially, one at a time, \n// by setting parallel job number to 1.\ncolSpi.setParallelJobsNumber(1);\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default collision SPI.\ncfg.setCollisionSpi(colSpi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java",
-      "name": null
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Priority Ordering"
-}
-[/block]
-`PriorityQueueCollisionSpi` allows you to assign priorities to individual jobs, so jobs with higher priority will be executed ahead of lower-priority jobs.
-
-##Task Priorities
-Task priorities are set in the [task session](/docs/compute-tasks#distributed-task-session) via the `grid.task.priority` attribute. If no priority has been assigned to a task, then the default priority of 0 is used.
-
-Below is an example showing how task priority can be set. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class MyUrgentTask extends ComputeTaskSplitAdapter<Object, Object> {\n  // Auto-injected task session.\n  @TaskSessionResource\n  private GridTaskSession taskSes = null;\n \n  @Override\n  protected Collection<ComputeJob> split(int gridSize, Object arg) {\n    ...\n    // Set high task priority.\n    taskSes.setAttribute(\"grid.task.priority\", 10);\n \n    List<ComputeJob> jobs = new ArrayList<>(gridSize);\n    \n    for (int i = 1; i <= gridSize; i++) {\n      jobs.add(new GridJobAdapter() {\n        ...\n      });\n    }\n    ...\n      \n    // These jobs will be executed with higher priority.\n    return jobs;\n  }\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-Just like with [FIFO Ordering](#fifo-ordering), the number of parallel jobs is controlled by the `parallelJobsNumber` configuration parameter.
-
-##Configuration
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.IgniteConfiguration\" singleton=\"true\">\n\t...\n\t<property name=\"collisionSpi\">\n\t\t<bean class=\"org.apache.ignite.spi.collision.priorityqueue.PriorityQueueCollisionSpi\">\n      <!-- \n        Change the parallel job number if needed.\n        Default is number of cores times 2.\n      -->\n\t\t\t<property name=\"parallelJobsNumber\" value=\"5\"/>\n\t\t</bean>\n\t</property>\n\t...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "PriorityQueueCollisionSpi colSpi = new PriorityQueueCollisionSpi();\n\n// Change the parallel job number if needed.\n// Default is number of cores times 2.\ncolSpi.setParallelJobsNumber(5);\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default collision SPI.\ncfg.setCollisionSpi(colSpi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java",
-      "name": ""
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/compute-grid/load-balancing.md b/wiki/documentation/compute-grid/load-balancing.md
deleted file mode 100644
index e249cf2..0000000
--- a/wiki/documentation/compute-grid/load-balancing.md
+++ /dev/null
@@ -1,76 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-The load balancing component balances job distribution among cluster nodes. In Ignite, load balancing is achieved via `LoadBalancingSpi`, which controls the load on all nodes and makes sure that every node in the cluster is equally loaded. In homogeneous environments with homogeneous tasks, load balancing is achieved by random or round-robin policies. However, in many other use cases, especially under uneven load, more complex adaptive load-balancing policies may be needed.
-[block:callout]
-{
-  "type": "info",
-  "body": "Note that load balancing is triggered whenever your jobs are not collocated with data or have no real preference on which node to execute. If [Collocation Of Compute and Data](doc:collocate-compute-and-data) is used, then data affinity takes priority over load balancing.",
-  "title": "Data Affinity"
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Round-Robin Load Balancing"
-}
-[/block]
-`RoundRobinLoadBalancingSpi` iterates through nodes in round-robin fashion and picks the next sequential node. Two modes of operation are supported: per-task and global.
-
-##Per-Task Mode
-When configured in per-task mode, the implementation will pick a random node at the beginning of every task execution and then sequentially iterate through all nodes in the topology, starting from the picked node. This is the default configuration. For cases where the split size is equal to the number of nodes, this mode guarantees that all nodes will participate in the split.
-
-##Global Mode
-When configured in global mode, a single sequential queue of nodes is maintained for all tasks, and the next node in the queue is picked every time. In this mode (unlike in per-task mode), even if the split size equals the number of nodes, some jobs within the same task may be assigned to the same node whenever multiple tasks are executing concurrently.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean id=\"grid.custom.cfg\" class=\"org.apache.ignite.IgniteConfiguration\" singleton=\"true\">\n  ...\n  <property name=\"loadBalancingSpi\">\n    <bean class=\"org.apache.ignite.spi.loadbalancing.roundrobin.RoundRobinLoadBalancingSpi\">\n      <!-- Set to per-task round-robin mode (this is default behavior). -->\n      <property name=\"perTask\" value=\"true\"/>\n    </bean>\n  </property>\n  ...\n</bean>",
-      "language": "xml",
-      "name": null
-    },
-    {
-      "code": "RoundRobinLoadBalancingSpi = new RoundRobinLoadBalancingSpi();\n \n// Configure SPI to use per-task mode (this is default behavior).\nspi.setPerTask(true);\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default load balancing SPI.\ncfg.setLoadBalancingSpi(spi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Random and Weighted Load Balancing"
-}
-[/block]
-`WeightedRandomLoadBalancingSpi` picks a random node for job execution by default. You can also optionally assign weights to nodes, so that nodes with larger weights end up getting proportionally more jobs routed to them. By default, all nodes get an equal weight of 10.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean id=\"grid.custom.cfg\" class=\"org.apache.ignite.IgniteConfiguration\" singleton=\"true\">\n  ...\n  <property name=\"loadBalancingSpi\">\n    <bean class=\"org.apache.ignite.spi.loadbalancing.weightedrandom.WeightedRandomLoadBalancingSpi\">\n      <property name=\"useWeights\" value=\"true\"/>\n      <property name=\"nodeWeight\" value=\"10\"/>\n    </bean>\n  </property>\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "WeightedRandomLoadBalancingSpi = new WeightedRandomLoadBalancingSpi();\n \n// Configure SPI to used weighted random load balancing.\nspi.setUseWeights(true);\n \n// Set weight for the local node.\nspi.setWeight(10);\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default load balancing SPI.\ncfg.setLoadBalancingSpi(spi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/data-grid/affinity-collocation.md b/wiki/documentation/data-grid/affinity-collocation.md
deleted file mode 100644
index 6bbfcc5..0000000
--- a/wiki/documentation/data-grid/affinity-collocation.md
+++ /dev/null
@@ -1,95 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Given that the most common way to cache data is in `PARTITIONED` caches, collocating compute with data, or data with data, can significantly improve the performance and scalability of your application.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Collocate Data with Data"
-}
-[/block]
-In many cases it is beneficial to collocate different cache keys together if they will be accessed together. Quite often your business logic will require access to more than one cache key. By collocating them you can ensure that all keys with the same `affinityKey` will be cached on the same processing node, hence avoiding costly network trips to fetch data from remote nodes.
-
-For example, let's say you have `Person` and `Company` objects and you want to collocate `Person` objects with the `Company` objects for which these persons work. To achieve that, the cache key used to cache `Person` objects should have a field or method annotated with the `@CacheAffinityKeyMapped` annotation, which will provide the value of the company key for collocation. For convenience, you can also optionally use the `CacheAffinityKey` class.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class PersonKey {\n    // Person ID used to identify a person.\n    private String personId;\n \n    // Company ID which will be used for affinity.\n    @GridCacheAffinityKeyMapped\n    private String companyId;\n    ...\n}\n\n// Instantiate person keys with the same company ID which is used as affinity key.\nObject personKey1 = new PersonKey(\"myPersonId1\", \"myCompanyId\");\nObject personKey2 = new PersonKey(\"myPersonId2\", \"myCompanyId\");\n \nPerson p1 = new Person(personKey1, ...);\nPerson p2 = new Person(personKey2, ...);\n \n// Both, the company and the person objects will be cached on the same node.\ncache.put(\"myCompanyId\", new Company(..));\ncache.put(personKey1, p1);\ncache.put(personKey2, p2);",
-      "language": "java",
-      "name": "using PersonKey"
-    },
-    {
-      "code": "Object personKey1 = new CacheAffinityKey(\"myPersonId1\", \"myCompanyId\");\nObject personKey2 = new CacheAffinityKey(\"myPersonId2\", \"myCompanyId\");\n \nPerson p1 = new Person(personKey1, ...);\nPerson p2 = new Person(personKey2, ...);\n \n// Both, the company and the person objects will be cached on the same node.\ncache.put(\"myCompanyId\", new Company(..));\ncache.put(personKey1, p1);\ncache.put(personKey2, p2);",
-      "language": "java",
-      "name": "using CacheAffinityKey"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "info",
-  "title": "SQL Joins",
-  "body": "When performing [SQL distributed joins](/docs/cache-queries#sql-queries) over data residing in partitioned caches, you must make sure that the join-keys are collocated."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Collocating Compute with Data"
-}
-[/block]
-It is also possible to route computations to the nodes where the data is cached. This concept is known as collocation of computations and data. It allows routing whole units of work to a certain node.
-
-To collocate compute with data you should use `IgniteCompute.affinityRun(...)` and `IgniteCompute.affinityCall(...)` methods.
-
-Here is how you can collocate your computation with the same cluster node on which company and persons from the example above are cached.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "String companyId = \"myCompanyId\";\n \n// Execute Runnable on the node where the key is cached.\nignite.compute().affinityRun(\"myCache\", companyId, () -> {\n  Company company = cache.get(companyId);\n\n  // Since we collocated persons with the company in the above example,\n  // access to the persons objects is local.\n  Person person1 = cache.get(personKey1);\n  Person person2 = cache.get(personKey2);\n  ...  \n});",
-      "language": "java",
-      "name": "affinityRun"
-    },
-    {
-      "code": "final String companyId = \"myCompanyId\";\n \n// Execute Runnable on the node where the key is cached.\nignite.compute().affinityRun(\"myCache\", companyId, new IgniteRunnable() {\n  @Override public void run() {\n    Company company = cache.get(companyId);\n    \n    Person person1 = cache.get(personKey1);\n    Person person2 = cache.get(personKey2);\n    ...\n  }\n};",
-      "language": "java",
-      "name": "java7 affinityRun"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteCompute vs EntryProcessor"
-}
-[/block]
-Both `IgniteCompute.affinityRun(...)` and `IgniteCache.invoke(...)` methods offer the ability to collocate compute and data. The main difference is that the `invoke(...)` method is atomic and executes while holding a lock on the key. You should not access other keys from within the `EntryProcessor` logic, as it may cause a deadlock.
-
-`affinityRun(...)` and `affinityCall(...)`, on the other hand, do not hold any locks. For example, it is absolutely legal to start multiple transactions or execute cache queries from these methods without worrying about deadlocks. In this case Ignite will automatically detect that the processing is collocated and will employ a lightweight one-phase-commit optimization for transactions (instead of two-phase-commit).
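The difference can be sketched as follows. This is a minimal sketch, not a complete program: the cache name, keys, and `Company` type are hypothetical placeholders carried over from the examples above.

```java
// No entry lock is held inside affinityRun: touching several keys,
// starting transactions, or running queries here is legal.
ignite.compute().affinityRun("myCache", "myCompanyId", () -> {
    Company company = cache.get("myCompanyId");
    Person person1 = cache.get(personKey1);
    // ... further collocated processing ...
});

// invoke(...) holds the lock on "myCompanyId" while the EntryProcessor
// runs, so only this one entry should be touched inside it.
cache.invoke("myCompanyId", (entry, args) -> {
    Company company = entry.getValue();
    // Mutate the locked entry only; do not access other keys here.
    entry.setValue(company);
    return null;
});
```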
-[block:callout]
-{
-  "type": "info",
-  "body": "See [JCache EntryProcessor](/docs/jcache#entryprocessor) documentation for more information about `IgniteCache.invoke(...)` method."
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/data-grid/automatic-db-integration.md b/wiki/documentation/data-grid/automatic-db-integration.md
deleted file mode 100644
index 27ee9bd..0000000
--- a/wiki/documentation/data-grid/automatic-db-integration.md
+++ /dev/null
@@ -1,119 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite supports out-of-the-box integration with databases via the `org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStore` class, which implements the `org.apache.ignite.cache.store.CacheStore` interface.
-
-Ignite provides a utility that reads database metadata and generates POJO classes and an XML configuration.
-
-The utility can be started with the following script:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "$ bin/ignite-schema-load.sh",
-      "language": "shell"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Connect to database"
-}
-[/block]
-JDBC drivers **are not supplied** with the utility. You should download (and install if needed) the appropriate JDBC driver for your database.
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/jKCMIgmTi2uqSqgkgiTQ",
-        "ignite-schema-load-01.png",
-        "650",
-        "650",
-        "#d6363a",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Generate XML configuration and POJO classes"
-}
-[/block]
-Select the tables you want to map to POJO classes and click the 'Generate' button.
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/13YM8mBRXaTB8yXWJkWI",
-        "ignite-schema-load-02.png",
-        "650",
-        "650",
-        "",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Produced output"
-}
-[/block]
-The utility will generate POJO classes, an XML configuration, and a Java code snippet for configuring the cache programmatically.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "/**\n * PersonKey definition.\n *\n * Code generated by Apache Ignite Schema Load utility: 03/03/2015.\n */\npublic class PersonKey implements Serializable {\n    /** */\n    private static final long serialVersionUID = 0L;\n\n    /** Value for id. */\n    private int id;\n\n    /**\n     * Gets id.\n     *\n     * @return Value for id.\n     */\n    public int getId() {\n        return id;\n    }\n\n    /**\n     * Sets id.\n     *\n     * @param id New value for id.\n     */\n    public void setId(int id) {\n        this.id = id;\n    }\n\n    /** {@inheritDoc} */\n    @Override public boolean equals(Object o) {\n        if (this == o)\n            return true;\n\n        if (!(o instanceof PersonKey))\n            return false;\n\n        PersonKey that = (PersonKey)o;\n\n        if (id != that.id)\n            return false;\n\n        return true;\n    }\n\n    /** {@inheritDoc} */\n    @Override public int hashCode() {\n        int res = id;\n\n        return res;\n    }\n\n    /** {@inheritDoc} */\n    @Override public String toString() {\n        return \"PersonKey [id=\" + id +\n            \"]\";\n    }\n}",
-      "language": "java",
-      "name": "POJO Key class"
-    },
-    {
-      "code": "/**\n * Person definition.\n *\n * Code generated by Apache Ignite Schema Load utility: 03/03/2015.\n */\npublic class Person implements Serializable {\n    /** */\n    private static final long serialVersionUID = 0L;\n\n    /** Value for id. */\n    private int id;\n\n    /** Value for orgId. */\n    private Integer orgId;\n\n    /** Value for name. */\n    private String name;\n\n    /**\n     * Gets id.\n     *\n     * @return Value for id.\n     */\n    public int getId() {\n        return id;\n    }\n\n    /**\n     * Sets id.\n     *\n     * @param id New value for id.\n     */\n    public void setId(int id) {\n        this.id = id;\n    }\n\n    /**\n     * Gets orgId.\n     *\n     * @return Value for orgId.\n     */\n    public Integer getOrgId() {\n        return orgId;\n    }\n\n    /**\n     * Sets orgId.\n     *\n     * @param orgId New value for orgId.\n     */\n    public void setOrgId(Integer orgId) {\n        this.orgId = orgId;\n    }\n\n    /**\n     * Gets name.\n     *\n     * @return Value for name.\n     */\n    public String getName() {\n        return name;\n    }\n\n    /**\n     * Sets name.\n     *\n     * @param name New value for name.\n     */\n    public void setName(String name) {\n        this.name = name;\n    }\n\n    /** {@inheritDoc} */\n    @Override public boolean equals(Object o) {\n        if (this == o)\n            return true;\n\n        if (!(o instanceof Person))\n            return false;\n\n        Person that = (Person)o;\n\n        if (id != that.id)\n            return false;\n\n        if (orgId != null ? !orgId.equals(that.orgId) : that.orgId != null)\n            return false;\n\n        if (name != null ? !name.equals(that.name) : that.name != null)\n            return false;\n\n        return true;\n    }\n\n    /** {@inheritDoc} */\n    @Override public int hashCode() {\n        int res = id;\n\n        res = 31 * res + (orgId != null ? 
orgId.hashCode() : 0);\n\n        res = 31 * res + (name != null ? name.hashCode() : 0);\n\n        return res;\n    }\n\n    /** {@inheritDoc} */\n    @Override public String toString() {\n        return \"Person [id=\" + id +\n            \", orgId=\" + orgId +\n            \", name=\" + name +\n            \"]\";\n    }\n}",
-      "language": "java",
-      "name": "POJO Value class"
-    },
-    {
-      "code": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<beans xmlns=\"http://www.springframework.org/schema/beans\"\n       xmlns:util=\"http://www.springframework.org/schema/util\"\n       xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n       xsi:schemaLocation=\"http://www.springframework.org/schema/beans\n                           http://www.springframework.org/schema/beans/spring-beans.xsd\n                           http://www.springframework.org/schema/util\n                           http://www.springframework.org/schema/util/spring-util.xsd\">\n    <bean class=\"org.apache.ignite.cache.CacheTypeMetadata\">\n        <property name=\"databaseSchema\" value=\"PUBLIC\"/>\n        <property name=\"databaseTable\" value=\"PERSON\"/>\n        <property name=\"keyType\" value=\"org.apache.ignite.examples.datagrid.store.model.PersonKey\"/>\n        <property name=\"valueType\" value=\"org.apache.ignite.examples.datagrid.store.model.Person\"/>\n        <property name=\"keyFields\">\n            <list>\n                <bean class=\"org.apache.ignite.cache.CacheTypeFieldMetadata\">\n                    <property name=\"databaseName\" value=\"ID\"/>\n                    <property name=\"databaseType\">\n                        <util:constant static-field=\"java.sql.Types.INTEGER\"/>\n                    </property>\n                    <property name=\"javaName\" value=\"id\"/>\n                    <property name=\"javaType\" value=\"int\"/>\n                </bean>\n            </list>\n        </property>\n        <property name=\"valueFields\">\n            <list>\n                <bean class=\"org.apache.ignite.cache.CacheTypeFieldMetadata\">\n                    <property name=\"databaseName\" value=\"ID\"/>\n                    <property name=\"databaseType\">\n                        <util:constant static-field=\"java.sql.Types.INTEGER\"/>\n                    </property>\n                    <property name=\"javaName\" value=\"id\"/>\n           
         <property name=\"javaType\" value=\"int\"/>\n                </bean>\n                <bean class=\"org.apache.ignite.cache.CacheTypeFieldMetadata\">\n                    <property name=\"databaseName\" value=\"ORG_ID\"/>\n                    <property name=\"databaseType\">\n                        <util:constant static-field=\"java.sql.Types.INTEGER\"/>\n                    </property>\n                    <property name=\"javaName\" value=\"orgId\"/>\n                    <property name=\"javaType\" value=\"java.lang.Integer\"/>\n                </bean>\n                <bean class=\"org.apache.ignite.cache.CacheTypeFieldMetadata\">\n                    <property name=\"databaseName\" value=\"NAME\"/>\n                    <property name=\"databaseType\">\n                        <util:constant static-field=\"java.sql.Types.VARCHAR\"/>\n                    </property>\n                    <property name=\"javaName\" value=\"name\"/>\n                    <property name=\"javaType\" value=\"java.lang.String\"/>\n                </bean>\n            </list>\n        </property>\n    </bean>\n</beans>",
-      "language": "xml",
-      "name": "XML Configuration"
-    },
-    {
-      "code": "IgniteConfiguration cfg = new IgniteConfiguration();\n...\nCacheConfiguration ccfg = new CacheConfiguration<>();\n\nDataSource dataSource = null; // TODO: Create data source for your database.\n\n// Create store. \nCacheJdbcPojoStore store = new CacheJdbcPojoStore();\nstore.setDataSource(dataSource);\n\n// Create store factory. \nccfg.setCacheStoreFactory(new FactoryBuilder.SingletonFactory<>(store));\n\n// Configure cache to use store. \nccfg.setReadThrough(true);\nccfg.setWriteThrough(true);\n\ncfg.setCacheConfiguration(ccfg);\n\n// Configure cache types. \nCollection<CacheTypeMetadata> meta = new ArrayList<>();\n\n// PERSON.\nCacheTypeMetadata type = new CacheTypeMetadata();\ntype.setDatabaseSchema(\"PUBLIC\");\ntype.setDatabaseTable(\"PERSON\");\ntype.setKeyType(\"org.apache.ignite.examples.datagrid.store.model.PersonKey\");\ntype.setValueType(\"org.apache.ignite.examples.datagrid.store.model.Person\");\n\n// Key fields for PERSON.\nCollection<CacheTypeFieldMetadata> keys = new ArrayList<>();\nkeys.add(new CacheTypeFieldMetadata(\"ID\", java.sql.Types.INTEGER,\"id\", int.class));\ntype.setKeyFields(keys);\n\n// Value fields for PERSON.\nCollection<CacheTypeFieldMetadata> vals = new ArrayList<>();\nvals.add(new CacheTypeFieldMetadata(\"ID\", java.sql.Types.INTEGER,\"id\", int.class));\nvals.add(new CacheTypeFieldMetadata(\"ORG_ID\", java.sql.Types.INTEGER,\"orgId\", Integer.class));\nvals.add(new CacheTypeFieldMetadata(\"NAME\", java.sql.Types.VARCHAR,\"name\", String.class));\ntype.setValueFields(vals);\n...\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java",
-      "name": "Java snippet"
-    }
-  ]
-}
-[/block]
-Copy the generated POJO Java classes to your project source folder.
-
-Copy the `CacheTypeMetadata` declaration from the generated XML file and paste it into your project XML configuration file under the appropriate `CacheConfiguration` root.
-
-Alternatively, paste the snippet with the cache configuration into the appropriate Java class in your project.
\ No newline at end of file
diff --git a/wiki/documentation/data-grid/cache-modes.md b/wiki/documentation/data-grid/cache-modes.md
deleted file mode 100644
index 4bcb3c8..0000000
--- a/wiki/documentation/data-grid/cache-modes.md
+++ /dev/null
@@ -1,254 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite provides three different modes of cache operation: `LOCAL`, `REPLICATED`, and `PARTITIONED`. A cache mode is configured for each cache. Cache modes are defined in `CacheMode` enumeration. 
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Local Mode"
-}
-[/block]
-`LOCAL` mode is the most lightweight mode of cache operation, as no data is distributed to other cache nodes. It is ideal for scenarios where data is either read-only or can be periodically refreshed at some expiration frequency. It also works very well with read-through behavior, where data is loaded from persistent storage on misses. Other than distribution, local caches still have all the features of a distributed cache, such as automatic data eviction, expiration, disk swapping, data querying, and transactions.
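A minimal configuration sketch for a `LOCAL` cache follows; the cache name and value types are hypothetical, and an `ignite` instance is assumed to be already started.

```java
// Cache mode is set per cache on its CacheConfiguration.
CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("localCache");

// No data is distributed; the cache exists only on this node.
cacheCfg.setCacheMode(CacheMode.LOCAL);

IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cacheCfg);
```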
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Replicated Mode"
-}
-[/block]
-In `REPLICATED` mode all data is replicated to every node in the cluster. This cache mode provides the utmost availability of data as it is available on every node. However, in this mode every data update must be propagated to all other nodes which can have an impact on performance and scalability. 
-
-As the same data is stored on all cluster nodes, the size of a replicated cache is limited by the amount of memory available on the node with the smallest amount of RAM. This mode is ideal for scenarios where cache reads are a lot more frequent than cache writes, and data sets are small. If your system does cache lookups over 80% of the time, then you should consider using `REPLICATED` cache mode.
-[block:callout]
-{
-  "type": "success",
-  "body": "Replicated caches should be used when data sets are small and updates are infrequent."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Partitioned Mode"
-}
-[/block]
-`PARTITIONED` mode is the most scalable distributed cache mode. In this mode the overall data set is divided equally into partitions and all partitions are split equally between participating nodes, essentially creating one huge distributed in-memory store for caching data. This approach allows you to store as much data as can fit in the total memory available across all nodes, thus allowing for multiple terabytes of data in cache memory across all cluster nodes. Essentially, the more nodes you have, the more data you can cache.
-
-Unlike `REPLICATED` mode, where updates are expensive because every node in the cluster needs to be updated, with `PARTITIONED` mode updates become cheap because only one primary node (and optionally one or more backup nodes) needs to be updated for every key. However, reads become somewhat more expensive because only certain nodes have the data cached. 
-
-In order to avoid extra data movement, it is important to always access the data exactly on the node that has that data cached. This approach is called *affinity colocation* and is strongly recommended when working with partitioned caches.
-[block:callout]
-{
-  "type": "success",
-  "body": "Partitioned caches are ideal when working with large data sets and updates are frequent.",
-  "title": ""
-}
-[/block]
-The picture below illustrates a simple view of a partitioned cache. Essentially we have key K1 assigned to Node1, K2 assigned to Node2, and K3 assigned to Node3. 
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/7pGSgxCVR3OZSHqYLdJv",
-        "in_memory_data_grid.png",
-        "500",
-        "338",
-        "#d64304",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
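The deterministic key-to-node assignment pictured above can be sketched in a few lines. This is an illustrative sketch only — Ignite's actual affinity function is pluggable and considerably more sophisticated — and the class and method names here are hypothetical:

```java
import java.util.Arrays;
import java.util.List;

public class PartitionSketch {
    // Hypothetical sketch: map a key to a partition, then a partition to a node.
    // The point is that the mapping is deterministic, so every node can compute
    // the owner of any key without extra coordination.
    static int partition(Object key, int partitions) {
        // Mask the sign bit so the result is always non-negative.
        return (key.hashCode() & Integer.MAX_VALUE) % partitions;
    }

    static String primaryNode(Object key, int partitions, List<String> nodes) {
        // Partitions are split evenly across the current topology.
        return nodes.get(partition(key, partitions) % nodes.size());
    }

    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("Node1", "Node2", "Node3");

        for (String key : new String[] {"K1", "K2", "K3"})
            System.out.println(key + " -> " + primaryNode(key, 1024, nodes));
    }
}
```

When the topology changes, only the partition-to-node mapping is recomputed; the key-to-partition mapping stays stable, which limits how much data has to move.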
-See the [configuration](#configuration) section below for an example of how to configure cache mode.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Cache Distribution Mode"
-}
-[/block]
-A node can operate in four different cache distribution modes when `PARTITIONED` mode is used. The cache distribution mode is defined by the `CacheDistributionMode` enumeration and can be configured via the `distributionMode` property of `CacheConfiguration`.
-[block:parameters]
-{
-  "data": {
-    "h-0": "Distribution Mode",
-    "h-1": "Description",
-    "0-0": "`PARTITIONED_ONLY`",
-    "0-1": "Local node may store primary and/or backup keys, but does not cache recently accessed keys that are neither primaries nor backups in near cache.",
-    "1-0": "`CLIENT_ONLY`",
-    "1-1": "Local node does not cache any data and communicates with other cache nodes via remote calls.",
-    "2-0": "`NEAR_ONLY`",
-    "2-1": "Local node will not be primary or backup node for any key, but will cache recently accessed keys in a smaller near cache. Amount of recently accessed keys to cache is controlled by near eviction policy.",
-    "3-0": "`NEAR_PARTITIONED`",
-    "3-1": "Local node may store primary and/or backup keys, and also will cache recently accessed keys in near cache. Amount of recently accessed keys to cache is controlled by near eviction policy."
-  },
-  "cols": 2,
-  "rows": 4
-}
-[/block]
-By default, the `PARTITIONED_ONLY` cache distribution mode is enabled. It can be changed by setting the `distributionMode` configuration property in `CacheConfiguration`. For example:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  \t...\n    <property name=\"cacheConfiguration\">\n        <bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n          \t<!-- Set a cache name. -->\n           \t<property name=\"name\" value=\"cacheName\"/>\n            \n          \t<!-- Cache distribution mode. -->\n    \t\t\t\t<property name=\"distributionMode\" value=\"NEAR_ONLY\"/>\n    \t\t\t\t... \n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setDistributionMode(CacheDistributionMode.NEAR_ONLY);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Atomic Write Order Mode"
-}
-[/block]
-When using partitioned cache in `CacheAtomicityMode.ATOMIC` mode, you can configure the atomic cache write order mode. The atomic write order mode determines which node (sender or primary) assigns the write version and is defined by the `CacheAtomicWriteOrderMode` enumeration. There are two modes, `CLOCK` and `PRIMARY`. 
-
-In `CLOCK` write order mode, write versions are assigned on the sender node. `CLOCK` mode is automatically turned on only when `CacheWriteSynchronizationMode.FULL_SYNC` is used, as it generally leads to better performance since write requests to primary and backup nodes are sent at the same time. 
-
-In `PRIMARY` write order mode, the write version is assigned only on the primary node. In this mode the sender only sends write requests to primary nodes, which in turn assign the write version and forward the requests to backups.
-
-Atomic write order mode can be configured via `atomicWriteOrderMode` property of `CacheConfiguration`. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  \t...\n    <property name=\"cacheConfiguration\">\n        <bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n           \t<!-- Set a cache name. -->\n           \t<property name=\"name\" value=\"cacheName\"/>\n          \t\n          \t<!-- Atomic write order mode. -->\n    \t\t\t\t<property name=\"atomicWriteOrderMode\" value=\"PRIMARY\"/>\n    \t\t\t\t... \n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setAtomicWriteOrderMode(CacheAtomicWriteOrderMode.CLOCK);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "info",
-  "body": "For more information on `ATOMIC` mode, refer to [Transactions](/docs/transactions) section."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Primary and Backup Nodes"
-}
-[/block]
-In `PARTITIONED` mode, the nodes to which the keys are assigned are called primary nodes for those keys. You can also optionally configure any number of backup nodes for cached data. If the number of backups is greater than 0, then Ignite will automatically assign backup nodes for each individual key. For example, if the number of backups is 1, then every key cached in the data grid will have 2 copies, 1 primary and 1 backup.
-[block:callout]
-{
-  "type": "info",
-  "body": "By default, backups are turned off for better performance."
-}
-[/block]
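The primary/backup assignment can be illustrated with a small sketch, where each key ends up with `backups + 1` owner nodes. The names below are hypothetical and this is not Ignite's actual affinity API; Ignite assigns owners per partition rather than per key:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BackupSketch {
    // Hypothetical sketch: pick a primary plus N backup nodes for a key by
    // walking the node list from the key's hash position.
    static List<String> owners(Object key, List<String> nodes, int backups) {
        int start = (key.hashCode() & Integer.MAX_VALUE) % nodes.size();

        List<String> res = new ArrayList<>();

        // First owner is the primary, the rest are backups.
        for (int i = 0; i <= backups && i < nodes.size(); i++)
            res.add(nodes.get((start + i) % nodes.size()));

        return res;
    }

    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("A", "B", "C");

        // With backups = 1, every key has 2 copies: 1 primary + 1 backup.
        System.out.println(owners("someKey", nodes, 1));
    }
}
```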
-Backups can be configured by setting the `backups` property of `CacheConfiguration`, like so:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  \t...\n    <property name=\"cacheConfiguration\">\n        <bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n           \t<!-- Set a cache name. -->\n           \t<property name=\"name\" value=\"cacheName\"/>\n          \n          \t<!-- Set cache mode. -->\n    \t\t\t\t<property name=\"cacheMode\" value=\"PARTITIONED\"/>\n          \t\n          \t<!-- Number of backup nodes. -->\n    \t\t\t\t<property name=\"backups\" value=\"1\"/>\n    \t\t\t\t... \n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setCacheMode(CacheMode.PARTITIONED);\n\ncacheCfg.setBackups(1);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Near Caches"
-}
-[/block]
-A partitioned cache can also be fronted by a `Near` cache, which is a smaller local cache that stores most recently or most frequently accessed data. Just like with a partitioned cache, the user can control the size of the near cache and its eviction policies. 
-
-In the vast majority of use cases, when utilizing Ignite with affinity colocation, near caches should not be used. If computations are collocated with the proper partitioned cache nodes, then the near cache is simply not needed because all the data is available locally in the partitioned cache.
-
-However, there are cases when it is simply impossible to send computations to remote nodes. In such cases near caches can significantly improve scalability and the overall performance of the application.
-
-The following configuration parameters apply to near caches. They are relevant for `PARTITIONED` caches only.
-[block:parameters]
-{
-  "data": {
-    "0-0": "`setNearEvictionPolicy(CacheEvictionPolicy)`",
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-1": "Eviction policy for near cache.",
-    "0-2": "`CacheLruEvictionPolicy` with max size of 10,000.",
-    "1-0": "`setEvictNearSynchronized(boolean)`",
-    "1-1": "Flag indicating whether eviction is synchronized with near caches on remote nodes.",
-    "1-2": "true",
-    "2-0": "`setNearStartSize(int)`",
-    "3-0": "",
-    "2-2": "256",
-    "2-1": "Start size for near cache."
-  },
-  "cols": 3,
-  "rows": 3
-}
-[/block]
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  \t...\n    <property name=\"cacheConfiguration\">\n        <bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n          \t<!-- Set a cache name. -->\n           \t<property name=\"name\" value=\"cacheName\"/>\n          \n           \t<!-- Start size for near cache. -->\n    \t\t\t\t<property name=\"nearStartSize\" value=\"512\"/>\n \n            <!-- Configure LRU eviction policy for near cache. -->\n            <property name=\"nearEvictionPolicy\">\n                <bean class=\"org.apache.ignite.cache.eviction.lru.CacheLruEvictionPolicy\">\n                    <!-- Set max size to 1000. -->\n                    <property name=\"maxSize\" value=\"1000\"/>\n                </bean>\n            </property>\n    \t\t\t\t... \n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setNearStartSize(512);\n\nCacheLruEvictionPolicy evctPolicy = new CacheLruEvictionPolicy();\nevctPolicy.setMaxSize(1000);\n\ncacheCfg.setNearEvictionPolicy(evctPolicy);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
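The near-cache behavior described above — a small local front that keeps only the most recently accessed keys and evicts the rest — can be sketched with a plain access-ordered LRU map. This is a conceptual sketch only; in Ignite you configure `CacheLruEvictionPolicy` as shown above rather than implementing eviction yourself:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NearCacheSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public NearCacheSketch(int maxSize) {
        // accessOrder = true: iteration order is least-recently-accessed first.
        super(16, 0.75f, true);
        this.maxSize = maxSize;
    }

    @Override protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently accessed entry once the cap is exceeded.
        return size() > maxSize;
    }

    public static void main(String[] args) {
        NearCacheSketch<Integer, String> near = new NearCacheSketch<>(2);

        near.put(1, "a");
        near.put(2, "b");
        near.get(1);       // touch key 1, so key 2 becomes least recently used
        near.put(3, "c");  // evicts key 2

        System.out.println(near.keySet()); // prints [1, 3]
    }
}
```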
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Configuration"
-}
-[/block]
-
-Cache modes are configured for each cache by setting the `cacheMode` property of `CacheConfiguration` like so:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  \t...\n    <property name=\"cacheConfiguration\">\n        <bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n           \t<!-- Set a cache name. -->\n           \t<property name=\"name\" value=\"cacheName\"/>\n            \n          \t<!-- Set cache mode. -->\n    \t\t\t\t<property name=\"cacheMode\" value=\"PARTITIONED\"/>\n    \t\t\t\t... \n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setCacheMode(CacheMode.PARTITIONED);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/data-grid/cache-queries.md b/wiki/documentation/data-grid/cache-queries.md
deleted file mode 100644
index 9df0c9e..0000000
--- a/wiki/documentation/data-grid/cache-queries.md
+++ /dev/null
@@ -1,181 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite supports a very elegant query API with support for:
-
-  * [Predicate-based Scan Queries](#scan-queries)
-  * [SQL Queries](#sql-queries)
-  * [Text Queries](#text-queries)
-  * [Continuous Queries](#continuous-queries)
-  
-For SQL queries Ignite supports in-memory indexing, so all data lookups are extremely fast. If you are caching your data in [off-heap memory](doc:off-heap-memory), then query indexes will also be cached in off-heap memory.
-
-Ignite also provides support for custom indexing via `IndexingSpi` and `SpiQuery` class.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Main Abstractions"
-}
-[/block]
-`IgniteCache` has several query methods, all of which receive some subclass of the `Query` class and return a `QueryCursor`.
-##Query
-The `Query` abstract class represents an abstract paginated query to be executed on the distributed cache. You can set the page size for the returned cursor via the `Query.setPageSize(...)` method (the default is `1024`).
-
-##QueryCursor
-`QueryCursor` represents the query result set and allows for transparent page-by-page iteration. Whenever the user starts iterating over the last page, the cursor automatically requests the next page in the background. For cases when pagination is not needed, you can use the `QueryCursor.getAll()` method, which fetches the whole query result and stores it in a collection.
-[block:callout]
-{
-  "type": "info",
-  "title": "Closing Cursors",
-  "body": "Cursors will close automatically if you iterate to the end of the result set. If you need to stop iteration sooner, you must call `close()` on the cursor explicitly or use the try-with-resources (`AutoCloseable`) syntax."
-}
-[/block]
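The page-by-page behavior can be sketched as an iterator that pulls the next page only once the current one is exhausted. This is a hypothetical sketch of the idea, not Ignite's `QueryCursor` implementation; the `loadPage` callback stands in for a network fetch:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.IntFunction;

public class PagedCursor<T> implements Iterable<T> {
    private final IntFunction<List<T>> loadPage; // page index -> page; empty page = end
    int pagesLoaded; // exposed only to illustrate laziness

    public PagedCursor(IntFunction<List<T>> loadPage) {
        this.loadPage = loadPage;
    }

    @Override public Iterator<T> iterator() {
        return new Iterator<T>() {
            private int pageIdx;
            private Iterator<T> page = Collections.emptyIterator();
            private boolean done;

            @Override public boolean hasNext() {
                // Fetch the next page only once the current one is exhausted.
                while (!done && !page.hasNext()) {
                    List<T> next = loadPage.apply(pageIdx++);

                    if (next.isEmpty())
                        done = true;
                    else {
                        pagesLoaded++;
                        page = next.iterator();
                    }
                }

                return page.hasNext();
            }

            @Override public T next() {
                if (!hasNext())
                    throw new NoSuchElementException();

                return page.next();
            }
        };
    }

    public static void main(String[] args) {
        List<Integer> data = Arrays.asList(0, 1, 2, 3, 4, 5, 6, 7, 8, 9);
        int pageSize = 4;

        PagedCursor<Integer> cursor = new PagedCursor<>(i ->
            data.subList(Math.min(i * pageSize, data.size()),
                         Math.min((i + 1) * pageSize, data.size())));

        int sum = 0;
        for (int v : cursor)
            sum += v;

        System.out.println("sum=" + sum + ", pages=" + cursor.pagesLoaded); // sum=45, pages=3
    }
}
```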
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Scan Queries"
-}
-[/block]
-Scan queries allow for querying a cache in distributed form based on a user-defined predicate. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Long, Person> cache = ignite.jcache(\"mycache\");\n\n// Find only persons earning more than 1,000.\ntry (QueryCursor<Cache.Entry<Long, Person>> cursor =\n       cache.query(new ScanQuery<Long, Person>((k, p) -> p.getSalary() > 1000))) {\n  for (Cache.Entry<Long, Person> e : cursor)\n    System.out.println(e.getValue().toString());\n}",
-      "language": "java",
-      "name": "scan"
-    },
-    {
-      "code": "IgniteCache<Long, Person> cache = ignite.jcache(\"mycache\");\n\n// Find only persons earning more than 1,000.\nIgniteBiPredicate<Long, Person> filter = new IgniteBiPredicate<Long, Person>() {\n  @Override public boolean apply(Long key, Person p) {\n  \treturn p.getSalary() > 1000;\n\t}\n};\n\ntry (QueryCursor<Cache.Entry<Long, Person>> cursor = cache.query(new ScanQuery<>(filter))) {\n  for (Cache.Entry<Long, Person> e : cursor)\n    System.out.println(e.getValue().toString());\n}",
-      "language": "java",
-      "name": "java7 scan"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "SQL Queries"
-}
-[/block]
-Ignite supports free-form SQL queries virtually without any limitations. SQL syntax is ANSI-99 compliant. You can use any SQL function, any aggregation, any grouping and Ignite will figure out where to fetch the results from.
-
-##SQL Joins
-Ignite supports distributed SQL joins. Moreover, if data resides in different caches, Ignite allows for cross-cache joins as well. 
-
-Joins between `PARTITIONED` and `REPLICATED` caches always work without any limitations. However, if you join two `PARTITIONED` data sets, then you must make sure that the keys you are joining on are **collocated**. 
-
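Collocation of join keys can be illustrated with the same deterministic key-to-partition idea: a join between two `PARTITIONED` data sets is cheap only if the rows being joined live in the same partition. The sketch below is hypothetical; in Ignite collocation is achieved through its affinity function and affinity keys, not code like this:

```java
public class CollocationSketch {
    // Sketch: if Person entries are partitioned by orgId (the join field) rather
    // than by personId, every Person row lands in the same partition as its
    // Organization row, so the join needs no cross-node data movement.
    static int partition(Object affinityField, int parts) {
        return (affinityField.hashCode() & Integer.MAX_VALUE) % parts;
    }

    public static void main(String[] args) {
        int parts = 1024;
        long orgId = 42L;

        int orgPart = partition(orgId, parts);
        // Partition the Person entry by orgId instead of by its own id:
        int personPart = partition(orgId, parts);

        System.out.println("Same partition: " + (orgPart == personPart));
    }
}
```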
-##Field Queries
-Instead of selecting whole objects, you can select only specific fields in order to minimize network and serialization overhead. For this purpose Ignite has the concept of `fields queries`.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Long, Person> cache = ignite.jcache(\"mycache\");\n\nSqlQuery sql = new SqlQuery(Person.class, \"salary > ?\");\n\n// Find only persons earning more than 1,000.\ntry (QueryCursor<Entry<Long, Person>> cursor = cache.query(sql.setArgs(1000))) {\n  for (Entry<Long, Person> e : cursor)\n    System.out.println(e.getValue().toString());\n}",
-      "language": "java",
-      "name": "sql"
-    },
-    {
-      "code": "IgniteCache<Long, Person> cache = ignite.jcache(\"mycache\");\n\n// SQL join on Person and Organization.\nSqlQuery sql = new SqlQuery(Person.class,\n  \"from Person, Organization \"\n  + \"where Person.orgId = Organization.id \"\n  + \"and lower(Organization.name) = lower(?)\");\n\n// Find all persons working for Ignite organization.\ntry (QueryCursor<Entry<Long, Person>> cursor = cache.query(sql.setArgs(\"Ignite\"))) {\n  for (Entry<Long, Person> e : cursor)\n    System.out.println(e.getValue().toString());\n}",
-      "language": "java",
-      "name": "sql join"
-    },
-    {
-      "code": "IgniteCache<Long, Person> cache = ignite.jcache(\"mycache\");\n\nSqlFieldsQuery sql = new SqlFieldsQuery(\"select concat(firstName, ' ', lastName) from Person\");\n\n// Select concatenated first and last name for all persons.\ntry (QueryCursor<List<?>> cursor = cache.query(sql)) {\n  for (List<?> row : cursor)\n    System.out.println(\"Full name: \" + row.get(0));\n}",
-      "language": "java",
-      "name": "sql fields"
-    },
-    {
-      "code": "IgniteCache<Long, Person> cache = ignite.jcache(\"mycache\");\n\n// Select with join between Person and Organization.\nSqlFieldsQuery sql = new SqlFieldsQuery(\n  \"select concat(firstName, ' ', lastName), Organization.name \"\n  + \"from Person, Organization where \"\n  + \"Person.orgId = Organization.id and \"\n  + \"Person.salary > ?\");\n\n// Only find persons with salary > 1000.\ntry (QueryCursor<List<?>> cursor = cache.query(sql.setArgs(1000))) {\n  for (List<?> row : cursor)\n    System.out.println(\"personName=\" + row.get(0) + \", orgName=\" + row.get(1));\n}",
-      "language": "java",
-      "name": "sql fields & join"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Text Queries"
-}
-[/block]
-Ignite also supports text-based queries based on Lucene indexing.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Long, Person> cache = ignite.jcache(\"mycache\");\n\n// Query for all people with \"Master Degree\" in their resumes.\nTextQuery txt = new TextQuery(Person.class, \"Master Degree\");\n\ntry (QueryCursor<Entry<Long, Person>> cursor = cache.query(txt)) {\n  for (Entry<Long, Person> e : cursor)\n    System.out.println(e.getValue().toString());\n}",
-      "language": "java",
-      "name": "text query"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Continuous Queries"
-}
-[/block]
-Continuous queries are good for cases when you want to execute a query and then continue to receive notifications about data changes that fall into your query filter.
-
-Continuous queries are supported via `ContinuousQuery` class, which supports the following:
-## Initial Query
-Whenever executing a continuous query, you have the option to execute an initial query before starting to listen to updates. The initial query can be set via the `ContinuousQuery.setInitialQuery(Query)` method and can be of any query type: [Scan](#scan-queries), [SQL](#sql-queries), or [Text](#text-queries). This parameter is optional, and if not set, will not be used.
-## Remote Filter
-This filter is executed on the primary node for a given key and evaluates whether the event should be propagated to the listener. If the filter returns `true`, then the listener will be notified, otherwise the event will be skipped. Filtering events on the node on which they occur minimizes unnecessary network traffic for listener notifications. The remote filter can be set via the `ContinuousQuery.setRemoteFilter(CacheEntryEventFilter<K, V>)` method.
-## Local Listener
-Whenever events pass the remote filter, they will be sent to the client to notify the local listener there. The local listener is set via the `ContinuousQuery.setLocalListener(CacheEntryUpdatedListener<K, V>)` method.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Integer, String> cache = ignite.jcache(\"mycache\");\n\n// Create new continuous query.\nContinuousQuery<Integer, String> qry = new ContinuousQuery<>();\n\n// Optional initial query to select all keys greater than 10.\nqry.setInitialQuery(new ScanQuery<Integer, String>((k, v) -> k > 10));\n\n// Callback that is called locally when update notifications are received.\nqry.setLocalListener((evts) -> \n\tevts.stream().forEach(e -> System.out.println(\"key=\" + e.getKey() + \", val=\" + e.getValue())));\n\n// This filter will be evaluated remotely on all nodes.\n// Entries that pass this filter will be sent to the caller.\nqry.setRemoteFilter(e -> e.getKey() > 10);\n\n// Execute query.\ntry (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {\n  // Iterate through existing data stored in cache.\n  for (Cache.Entry<Integer, String> e : cur)\n    System.out.println(\"key=\" + e.getKey() + \", val=\" + e.getValue());\n\n  // Add a few more keys and watch a few more query notifications.\n  for (int i = 5; i < 15; i++)\n    cache.put(i, Integer.toString(i));\n}\n",
-      "language": "java",
-      "name": "continuous query"
-    },
-    {
-      "code": "IgniteCache<Integer, String> cache = ignite.jcache(\"mycache\");\n\n// Create new continuous query.\nContinuousQuery<Integer, String> qry = new ContinuousQuery<>();\n\nqry.setInitialQuery(new ScanQuery<Integer, String>(new IgniteBiPredicate<Integer, String>() {\n  @Override public boolean apply(Integer key, String val) {\n    return key > 10;\n  }\n}));\n\n// Callback that is called locally when update notifications are received.\nqry.setLocalListener(new CacheEntryUpdatedListener<Integer, String>() {\n  @Override public void onUpdated(Iterable<CacheEntryEvent<? extends Integer, ? extends String>> evts) {\n    for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)\n      System.out.println(\"key=\" + e.getKey() + \", val=\" + e.getValue());\n  }\n});\n\n// This filter will be evaluated remotely on all nodes.\n// Entries that pass this filter will be sent to the caller.\nqry.setRemoteFilter(new CacheEntryEventFilter<Integer, String>() {\n  @Override public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> e) {\n    return e.getKey() > 10;\n  }\n});\n\n// Execute query.\ntry (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {\n  // Iterate through existing data.\n  for (Cache.Entry<Integer, String> e : cur)\n    System.out.println(\"key=\" + e.getKey() + \", val=\" + e.getValue());\n\n  // Add a few more keys and watch more query notifications.\n  for (int i = 5; i < 15; i++)\n    cache.put(i, Integer.toString(i));\n}",
-      "language": "java",
-      "name": "java7 continuous query"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Query Configuration"
-}
-[/block]
-Queries can be configured from code using the `@QuerySqlField` and `@QueryTextField` annotations.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class Person implements Serializable {\n  /** Person ID (indexed). */\n  @QuerySqlField(index = true)\n  private long id;\n\n  /** Organization ID (indexed). */\n  @QuerySqlField(index = true)\n  private long orgId;\n\n  /** First name (not-indexed). */\n  @QuerySqlField\n  private String firstName;\n\n  /** Last name (not indexed). */\n  @QuerySqlField\n  private String lastName;\n\n  /** Resume text (create LUCENE-based TEXT index for this field). */\n  @QueryTextField\n  private String resume;\n\n  /** Salary (indexed). */\n  @QuerySqlField(index = true)\n  private double salary;\n  \n  ...\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/data-grid/data-grid.md b/wiki/documentation/data-grid/data-grid.md
deleted file mode 100644
index eed906c..0000000
--- a/wiki/documentation/data-grid/data-grid.md
+++ /dev/null
@@ -1,85 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite in-memory data grid has been built from the ground up with a notion of horizontal scale and the ability to add nodes on demand in real time. It has been designed to linearly scale to hundreds of nodes, with strong semantics for data locality and affinity-based data routing to reduce redundant data movement.
-
-Ignite data grid supports local, replicated, and partitioned data sets and allows you to freely query across these data sets using standard SQL syntax. Ignite supports standard SQL for querying in-memory data, including support for distributed SQL joins. 
-
-Ignite data grid is lightning fast and is one of the fastest implementations of transactional or atomic data caching in a cluster today.
-[block:callout]
-{
-  "type": "success",
-  "title": "Data Consistency",
-  "body": "As long as your cluster is alive, Ignite will guarantee that the data between different cluster nodes will always remain consistent regardless of crashes or topology changes."
-}
-[/block]
-
-[block:callout]
-{
-  "type": "success",
-  "title": "JCache (JSR 107)",
-  "body": "Ignite Data Grid implements [JCache](doc:jcache) (JSR 107) specification (currently undergoing JSR 107 TCK testing)"
-}
-[/block]
-
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/ZBWQwPXbQmyq6RRUyWfm",
-        "in-memory-data-grid-1.jpg",
-        "500",
-        "338",
-        "#e8893c",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-##Features
-  * Distributed In-Memory Caching
-  * Lightning Fast Performance
-  * Elastic Scalability
-  * Distributed In-Memory Transactions
-  * Web Session Clustering
-  * Hibernate L2 Cache Integration
-  * Tiered Off-Heap Storage
-  * Distributed SQL Queries with support for Joins
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteCache"
-}
-[/block]
-The `IgniteCache` interface is a gateway into the Ignite cache implementation and provides methods for storing and retrieving data, executing queries (including SQL), iterating and scanning, etc.
-
-##JCache
-The `IgniteCache` interface extends the `javax.cache.Cache` interface from the JCache specification and adds additional functionality to it, mainly having to do with local vs. distributed operations, queries, metrics, etc.
-
-You can obtain an instance of `IgniteCache` as follows:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\n// Obtain instance of cache named \"myCache\".\n// Note that different caches may have different generics.\nIgniteCache<Integer, String> cache = ignite.jcache(\"myCache\");",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/data-grid/data-loading.md b/wiki/documentation/data-grid/data-loading.md
deleted file mode 100644
index 04bf661..0000000
--- a/wiki/documentation/data-grid/data-loading.md
+++ /dev/null
@@ -1,94 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Data loading usually has to do with initializing cache data on startup. Using standard cache `put(...)` or `putAll(...)` operations is generally inefficient for loading large amounts of data. 
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteDataLoader"
-}
-[/block]
-For fast loading of large amounts of data, Ignite provides a utility interface, `IgniteDataLoader`, which internally batches keys together and collocates those batches with the nodes on which the data will be cached. 
-
-The high loading speed is achieved with the following techniques:
-  * Entries that are mapped to the same cluster member will be batched together in a buffer.
-  * Multiple buffers can coexist at the same time.
-  * To avoid running out of memory, data loader has a maximum number of buffers it can process concurrently.
-
-To add data to the data loader, you should call `IgniteDataLoader.addData(...)` method.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Get the data loader reference and load data.\ntry (IgniteDataLoader<Integer, String> ldr = ignite.dataLoader(\"myCache\")) {    \n    // Load entries.\n    for (int i = 0; i < 100000; i++)\n        ldr.addData(i, Integer.toString(i));\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
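The per-node batching technique listed above can be sketched as follows: entries are grouped by their target node and sent in bulk once a buffer fills. This is an illustrative sketch with hypothetical names; `IgniteDataLoader` handles the batching, plus buffer limits and backpressure, internally:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BatchingSketch {
    static final int BUFFER_SIZE = 3;
    static final int NODES = 2;

    final Map<Integer, List<Integer>> buffers = new HashMap<>();
    int flushes; // how many bulk sends happened

    int targetNode(Integer key) {
        // Same deterministic key-to-node idea as cache affinity.
        return (key.hashCode() & Integer.MAX_VALUE) % NODES;
    }

    void addData(Integer key) {
        List<Integer> buf = buffers.computeIfAbsent(targetNode(key), n -> new ArrayList<>());
        buf.add(key);

        // Send a full buffer as one batch instead of one request per entry.
        if (buf.size() >= BUFFER_SIZE)
            flush(buf);
    }

    void flush(List<Integer> buf) {
        // In the real loader this would be one network call carrying the batch.
        flushes++;
        buf.clear();
    }

    void close() {
        // Flush any partially filled buffers on close.
        buffers.values().forEach(b -> {
            if (!b.isEmpty())
                flush(b);
        });
    }

    public static void main(String[] args) {
        BatchingSketch ldr = new BatchingSketch();

        for (int i = 0; i < 10; i++)
            ldr.addData(i);

        ldr.close();

        // 10 entries across 2 nodes in batches of 3 -> far fewer sends than entries.
        System.out.println("flushes: " + ldr.flushes);
    }
}
```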
-## Allow Overwrite
-By default, the data loader supports only initial data loading: if it encounters an entry that is already in the cache, it will skip it. This is the most efficient and performant mode, as the data loader does not have to worry about data versioning in the background.
-
-If you anticipate that the data may already be in the cache and you need to overwrite it, set the `IgniteDataLoader.allowOverwrite(true)` parameter.
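The difference between the default initial-load mode and `allowOverwrite(true)` is analogous to `Map.putIfAbsent(...)` versus `Map.put(...)` — a minimal, self-contained sketch using a plain `HashMap`, not an Ignite API:

```java
import java.util.HashMap;
import java.util.Map;

class OverwriteDemo {
    /** Loads "loaded" under key 1 into a map that already holds "existing". */
    static String load(Map<Integer, String> cache, boolean allowOverwrite) {
        cache.put(1, "existing");

        if (allowOverwrite)
            cache.put(1, "loaded");         // Overwrite mode: replaces the existing entry.
        else
            cache.putIfAbsent(1, "loaded"); // Initial-load mode: the existing entry is skipped.

        return cache.get(1);
    }
}
```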
-
-## Using Updater
-For cases when you need to execute some custom logic instead of just adding new data, you can take advantage of `IgniteDataLoader.Updater` API. 
-
-In the example below, we generate random numbers and store them as keys, while the number of times each number was generated is stored as the value. The `Updater` increments the value by 1 each time the same key is loaded into the cache.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Get the data loader reference and load data.\ntry (GridDataLoader<Integer, Long> ldr = grid.dataLoader(\"myCache\")) {\n    // Configure the updater that increments the current value by 1.\n    ldr.updater((cache, entries) -> {\n      for (Map.Entry<Integer, Long> e : entries)\n        cache.invoke(e.getKey(), (entry, args) -> {\n          Long val = entry.getValue();\n\n          entry.setValue(val == null ? 1L : val + 1);\n\n          return null;\n        });\n    });\n\n    for (int i = 0; i < CNT; i++)\n        ldr.addData(RAND.nextInt(100), 1L);\n}",
-      "language": "java",
-      "name": "updater"
-    },
-    {
-      "code": "// Closure that increments the passed-in value.\nfinal GridClosure<Long, Long> INC = new GridClosure<Long, Long>() {\n    @Override public Long apply(Long e) {\n        return e == null ? 1L : e + 1;\n    }\n};\n\n// Get the data loader reference and load data.\ntry (GridDataLoader<Integer, Long> ldr = grid.dataLoader(\"myCache\")) {\n    // Configure updater.\n    ldr.updater(new GridDataLoadCacheUpdater<Integer, Long>() {\n        @Override public void update(GridCache<Integer, Long> cache,\n            Collection<Map.Entry<Integer, Long>> entries) throws GridException {\n                for (Map.Entry<Integer, Long> e : entries)\n                    cache.transform(e.getKey(), INC);\n        }\n    });\n\n    for (int i = 0; i < CNT; i++)\n        ldr.addData(RAND.nextInt(100), 1L);\n}",
-      "language": "java",
-      "name": "java7 updater"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteCache.loadCache()"
-}
-[/block]
-Another way to load large amounts of data into cache is through the [CacheStore.loadCache()](docs/persistent-store#loadcache-) method, which allows cache data loading even without passing all the keys that need to be loaded. 
-
-The `IgniteCache.loadCache()` method delegates to the `CacheStore.loadCache()` method on every cluster member that is running the cache. To invoke loading only on the local cluster node, use the `IgniteCache.localLoadCache()` method.
-[block:callout]
-{
-  "type": "info",
-  "body": "In the case of partitioned caches, keys that are not mapped to the node, either as primary or backup, will be automatically discarded by the cache."
-}
-[/block]
-Here is an example of a `CacheStore.loadCache()` implementation. For a complete example of how a `CacheStore` can be implemented, refer to [Persistent Store](doc:persistent-store).
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class CacheJdbcPersonStore extends CacheStoreAdapter<Long, Person> {\n\t...\n  // This method is called whenever \"IgniteCache.loadCache()\" or\n  // \"IgniteCache.localLoadCache()\" methods are called.\n  @Override public void loadCache(IgniteBiInClosure<Long, Person> clo, Object... args) {\n    if (args == null || args.length == 0 || args[0] == null)\n      throw new CacheLoaderException(\"Expected entry count parameter is not provided.\");\n\n    final int entryCnt = (Integer)args[0];\n\n    try (Connection conn = connection()) {\n      try (PreparedStatement st = conn.prepareStatement(\"select * from PERSONS\")) {\n        try (ResultSet rs = st.executeQuery()) {\n          int cnt = 0;\n\n          while (cnt < entryCnt && rs.next()) {\n            Person person = new Person(rs.getLong(1), rs.getString(2), rs.getString(3));\n\n            clo.apply(person.getId(), person);\n\n            cnt++;\n          }\n        }\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheLoaderException(\"Failed to load values from cache store.\", e);\n    }\n  }\n  ...\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/data-grid/evictions.md b/wiki/documentation/data-grid/evictions.md
deleted file mode 100644
index bbf28ae..0000000
--- a/wiki/documentation/data-grid/evictions.md
+++ /dev/null
@@ -1,103 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Eviction policies control the maximum number of elements that can be stored in a cache's on-heap memory. Whenever the maximum on-heap cache size is reached, entries are evicted into [off-heap space](doc:off-heap-memory), if one is enabled. 
-
-In Ignite, eviction policies are pluggable and are controlled via the `CacheEvictionPolicy` interface. An implementation of an eviction policy is notified of every cache change and defines the algorithm for choosing the entries to evict from the cache. 
-[block:callout]
-{
-  "type": "info",
-  "body": "If your data set fits in memory, then an eviction policy will not provide any benefit and should be disabled, which is the default behavior."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Least Recently Used (LRU)"
-}
-[/block]
-LRU eviction policy is based on [Least Recently Used (LRU)](http://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used) algorithm, which ensures that the least recently used entry (i.e. the entry that has not been touched the longest) gets evicted first. 
-[block:callout]
-{
-  "type": "success",
-  "body": "LRU eviction policy nicely fits most of the use cases for caching. Use it whenever in doubt."
-}
-[/block]
-This eviction policy is implemented by `CacheLruEvictionPolicy` and can be configured via `CacheConfiguration`.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.cache.CacheConfiguration\">\n  <property name=\"name\" value=\"myCache\"/>\n    ...\n    <property name=\"evictionPolicy\">\n        <!-- LRU eviction policy. -->\n        <bean class=\"org.apache.ignite.cache.eviction.lru.CacheLruEvictionPolicy\">\n            <!-- Set the maximum cache size to 1 million (default is 100,000). -->\n            <property name=\"maxSize\" value=\"1000000\"/>\n        </bean>\n    </property>\n    ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"myCache\");\n\n// Set the maximum cache size to 1 million (default is 100,000).\ncacheCfg.setEvictionPolicy(new CacheLruEvictionPolicy(1000000));\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "First In First Out (FIFO)"
-}
-[/block]
-FIFO eviction policy is based on the [First-In-First-Out (FIFO)](https://en.wikipedia.org/wiki/FIFO) algorithm, which ensures that the entry that has been in the cache the longest is evicted first. It differs from `CacheLruEvictionPolicy` in that it ignores the access order of entries. 
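The difference in ordering can be illustrated with `java.util.LinkedHashMap`, whose `accessOrder` constructor flag switches between the two behaviors — a self-contained sketch, not Ignite code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class EvictionOrderDemo {
    /** A map holding at most 2 entries: accessOrder=true evicts the least
        recently *used* entry (LRU); false evicts the oldest *inserted* (FIFO). */
    static Map<String, Integer> boundedMap(boolean lru) {
        return new LinkedHashMap<String, Integer>(16, 0.75f, lru) {
            @Override protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                return size() > 2; // Evict once a third entry arrives.
            }
        };
    }
}
```

After inserting A and B, reading A, then inserting C: LRU evicts B (A was touched recently), while FIFO evicts A (it was inserted first).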
-
-This eviction policy is implemented by `CacheFifoEvictionPolicy` and can be configured via `CacheConfiguration`.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.cache.CacheConfiguration\">\n  <property name=\"name\" value=\"myCache\"/>\n    ...\n    <property name=\"evictionPolicy\">\n        <!-- FIFO eviction policy. -->\n        <bean class=\"org.apache.ignite.cache.eviction.fifo.CacheFifoEvictionPolicy\">\n            <!-- Set the maximum cache size to 1 million (default is 100,000). -->\n            <property name=\"maxSize\" value=\"1000000\"/>\n        </bean>\n    </property>\n    ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"myCache\");\n\n// Set the maximum cache size to 1 million (default is 100,000).\ncacheCfg.setEvictionPolicy(new CacheFifoEvictionPolicy(1000000));\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Random"
-}
-[/block]
-Random eviction policy randomly chooses entries to evict. It is mainly used for debugging and benchmarking purposes.
-
-This eviction policy is implemented by `CacheRandomEvictionPolicy` and can be configured via `CacheConfiguration`.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.cache.CacheConfiguration\">\n  <property name=\"name\" value=\"myCache\"/>\n    ...\n    <property name=\"evictionPolicy\">\n        <!-- Random eviction policy. -->\n        <bean class=\"org.apache.ignite.cache.eviction.random.CacheRandomEvictionPolicy\">\n            <!-- Set the maximum cache size to 1 million (default is 100,000). -->\n            <property name=\"maxSize\" value=\"1000000\"/>\n        </bean>\n    </property>\n    ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"myCache\");\n\n// Set the maximum cache size to 1 million (default is 100,000).\ncacheCfg.setEvictionPolicy(new CacheRandomEvictionPolicy(1000000));\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/data-grid/hibernate-l2-cache.md b/wiki/documentation/data-grid/hibernate-l2-cache.md
deleted file mode 100644
index 5567845..0000000
--- a/wiki/documentation/data-grid/hibernate-l2-cache.md
+++ /dev/null
@@ -1,190 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite In-Memory Data Fabric can be used as a [Hibernate](http://hibernate.org) Second-Level cache (L2 cache), which can significantly speed up the persistence layer of your application.
-
-[Hibernate](http://hibernate.org) is a well-known and widely used framework for Object-Relational Mapping (ORM). While interacting closely with an SQL database, it performs caching of retrieved data to minimize expensive database requests.
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/D35hL3OuQ46YA4v3BLwJ",
-        "hibernate-L2-cache.png",
-        "600",
-        "478",
-        "#b7917a",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-All work with Hibernate database-mapped objects is done within a session, usually bound to a worker thread or a Web session. By default, Hibernate uses only the per-session (L1) cache, so objects cached in one session are not seen in another. However, a global second-level (L2) cache may be used, in which cached objects are visible to all sessions that use the same L2 cache configuration. This usually gives a significantly greater performance gain, because each newly created session can take full advantage of the data already present in the L2 cache (which outlives any session-local L1 cache).
-
-While the L1 cache is always enabled and fully implemented by Hibernate internally, the L2 cache is optional and can have multiple pluggable implementations. Ignite can be easily plugged in as an L2 cache implementation and can be used in all access modes (`READ_ONLY`, `READ_WRITE`, `NONSTRICT_READ_WRITE`, and `TRANSACTIONAL`), supporting a wide range of related features:
-  * caching to memory and disk, as well as off-heap memory;
-  * cache transactions, which make the `TRANSACTIONAL` mode possible;
-  * clustering, with two replication modes: `REPLICATED` and `PARTITIONED`.
-
-To start using Ignite as a Hibernate L2 cache, you need to perform three simple steps:
-  * Add Ignite libraries to your application's classpath.
-  * Enable L2 cache and specify Ignite implementation class in L2 cache configuration.
-  * Configure Ignite caches for L2 cache regions and start the embedded Ignite node (and, optionally, external Ignite nodes). 
- 
-In the section below we cover these steps in more detail.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "L2 Cache Configuration"
-}
-[/block]
-To configure Ignite In-Memory Data Fabric as a Hibernate L2 cache, without any changes required to the existing Hibernate code, you need to:
-  * Configure Hibernate itself to use Ignite as L2 cache.
-  * Configure Ignite cache appropriately. 
-
-##Hibernate Configuration Example
-A typical Hibernate configuration for L2 cache with Ignite would look like the one below:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<hibernate-configuration>\n    <session-factory>\n        ...\n        <!-- Enable L2 cache. -->\n        <property name=\"cache.use_second_level_cache\">true</property>\n        \n        <!-- Generate L2 cache statistics. -->\n        <property name=\"generate_statistics\">true</property>\n        \n        <!-- Specify GridGain as L2 cache provider. -->\n        <property name=\"cache.region.factory_class\">org.gridgain.grid.cache.hibernate.GridHibernateRegionFactory</property>\n        \n        <!-- Specify the name of the grid, that will be used for second level caching. -->\n        <property name=\"org.gridgain.hibernate.grid_name\">hibernate-grid</property>\n        \n        <!-- Set default L2 cache access type. -->\n        <property name=\"org.gridgain.hibernate.default_access_type\">READ_ONLY</property>\n        \n        <!-- Specify the entity classes for mapping. -->\n        <mapping class=\"com.mycompany.MyEntity1\"/>\n        <mapping class=\"com.mycompany.MyEntity2\"/>\n        \n        <!-- Per-class L2 cache settings. -->\n        <class-cache class=\"com.mycompany.MyEntity1\" usage=\"read-only\"/>\n        <class-cache class=\"com.mycompany.MyEntity2\" usage=\"read-only\"/>\n        <collection-cache collection=\"com.mycompany.MyEntity1.children\" usage=\"read-only\"/>\n        ...\n    </session-factory>\n</hibernate-configuration>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
-Here, we do the following:
-  * Enable L2 cache (and, optionally, the L2 cache statistics generation).
-  * Specify Ignite as L2 cache implementation.
-  * Specify the name of the caching grid (should correspond to the one in Ignite configuration).
-  * Specify the entity classes and configure caching for each class (a corresponding cache region should be configured in Ignite). 
-
-##Ignite Configuration Example
-A typical Ignite configuration for Hibernate L2 caching looks like this:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<!-- Basic configuration for atomic cache. -->\n<bean id=\"atomic-cache\" class=\"org.apache.ignite.configuration.CacheConfiguration\" abstract=\"true\">\n    <property name=\"cacheMode\" value=\"PARTITIONED\"/>\n    <property name=\"atomicityMode\" value=\"ATOMIC\"/>\n    <property name=\"writeSynchronizationMode\" value=\"FULL_SYNC\"/>\n</bean>\n \n<!-- Basic configuration for transactional cache. -->\n<bean id=\"transactional-cache\" class=\"org.apache.ignite.configuration.CacheConfiguration\" abstract=\"true\">\n    <property name=\"cacheMode\" value=\"PARTITIONED\"/>\n    <property name=\"atomicityMode\" value=\"TRANSACTIONAL\"/>\n    <property name=\"writeSynchronizationMode\" value=\"FULL_SYNC\"/>\n</bean>\n \n<bean id=\"ignite.cfg\" class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    <!-- \n        Specify the name of the caching grid (should correspond to the \n        one in Hibernate configuration).\n    -->\n    <property name=\"gridName\" value=\"hibernate-grid\"/>\n    ...\n    <!-- \n        Specify cache configuration for each L2 cache region (which corresponds \n        to a full class name or a full association name).\n    -->\n    <property name=\"cacheConfiguration\">\n        <list>\n            <!--\n                Configurations for entity caches.\n            -->\n            <bean parent=\"transactional-cache\">\n                <property name=\"name\" value=\"com.mycompany.MyEntity1\"/>\n            </bean>\n            <bean parent=\"transactional-cache\">\n                <property name=\"name\" value=\"com.mycompany.MyEntity2\"/>\n            </bean>\n            <bean parent=\"transactional-cache\">\n                <property name=\"name\" value=\"com.mycompany.MyEntity1.children\"/>\n            </bean>\n \n            <!-- Configuration for update timestamps cache. -->\n            <bean parent=\"atomic-cache\">\n                <property name=\"name\" value=\"org.hibernate.cache.spi.UpdateTimestampsCache\"/>\n            </bean>\n \n            <!-- Configuration for query result cache. -->\n            <bean parent=\"atomic-cache\">\n                <property name=\"name\" value=\"org.hibernate.cache.internal.StandardQueryCache\"/>\n            </bean>\n        </list>\n    </property>\n    ...\n</bean>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
-Here, we specify the cache configuration for each L2 cache region:
-  * We use `PARTITIONED` cache to split the data between caching nodes. Another possible strategy is to enable `REPLICATED` mode, thus replicating a full dataset between all caching nodes. See Cache Distribution Models for more information.
-  * We specify the cache name that corresponds to an L2 cache region name (either a full class name or a full association name).
-  * We use `TRANSACTIONAL` atomicity mode to take advantage of cache transactions.
-  * We enable `FULL_SYNC` to be always fully synchronized with backup nodes.
-
-Additionally, we specify a cache for update timestamps, which may be `ATOMIC`, for better performance.
-
-Having configured Ignite caching node, we can start it from within our code the following way:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignition.start(\"my-config-folder/my-ignite-configuration.xml\");",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-After the above line is executed, the internal Ignite node is started and is ready to cache the data. We can also start additional standalone nodes by running the following command from console:
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "$IGNITE_HOME/bin/ignite.sh my-config-folder/my-ignite-configuration.xml",
-      "language": "text"
-    }
-  ]
-}
-[/block]
-For Windows, use the `.bat` script in the same folder.
-[block:callout]
-{
-  "type": "success",
-  "body": "The nodes may be started on other hosts as well, forming a distributed caching cluster. Be sure to specify the right network settings in the Ignite configuration file for that."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Query Cache"
-}
-[/block]
-In addition to L2 cache, Hibernate offers a query cache. This cache stores the results of queries (either HQL or Criteria) with a given set of parameters, so, when you repeat the query with the same parameter set, it hits the cache without going to the database. 
-
-The query cache may be useful if you have queries that are repeated with the same parameter values. As with the L2 cache, Hibernate relies on a third-party cache implementation, and Ignite In-Memory Data Fabric can be used as one.
-[block:callout]
-{
-  "type": "success",
-  "body": "Consider using support for [SQL-based In-Memory Queries](/docs/cache-queries) in Ignite which should perform faster than going through Hibernate."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Query Cache Configuration"
-}
-[/block]
-The [configuration](#l2-cache-configuration) information above fully applies to the query cache, but some additional configuration and a small code change are required.
-
-##Hibernate Configuration
-To enable query cache in Hibernate, you only need one additional line in configuration file:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<!-- Enable query cache. -->\n<property name=\"cache.use_query_cache\">true</property>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
-Yet, a code modification is required: for each query that you want to cache, you should enable the `cacheable` flag by calling `setCacheable(true)`:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Session ses = ...;\n \n// Create Criteria query.\nCriteria criteria = ses.createCriteria(cls);\n \n// Enable cacheable flag.\ncriteria.setCacheable(true);\n \n...",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-After this is done, your query results will be cached.
-
-##Ignite Configuration
-To enable Hibernate query caching in Ignite, you need to specify an additional cache configuration:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "\n<property name=\"cacheConfiguration\">\n    <list>\n        ...\n        <!-- Query cache (refers to atomic cache defined in above example). -->\n        <bean parent=\"atomic-cache\">\n            <property name=\"name\" value=\"org.hibernate.cache.internal.StandardQueryCache\"/>\n        </bean>\n    </list>\n</property>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
-Notice that the cache is made `ATOMIC` for better performance.
\ No newline at end of file
diff --git a/wiki/documentation/data-grid/jcache.md b/wiki/documentation/data-grid/jcache.md
deleted file mode 100644
index c2ba448..0000000
--- a/wiki/documentation/data-grid/jcache.md
+++ /dev/null
@@ -1,116 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Apache Ignite data grid is an implementation of the JCache (JSR 107) specification (currently undergoing JSR 107 TCK testing). JCache provides a simple-to-use yet powerful API for data access. However, the specification purposely omits any details about data distribution and consistency to allow vendors enough freedom in their own implementations. 
-
-In addition to JCache, Ignite provides ACID transactions, data querying capabilities (including SQL), various memory models, and more.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteCache"
-}
-[/block]
-`IgniteCache` is based on **JCache (JSR 107)**, so at the very basic level the APIs can be reduced to the `javax.cache.Cache` interface. However, the `IgniteCache` API also provides functionality that facilitates features outside of the JCache spec, like data loading, querying, asynchronous mode, etc.
-
-You can get an instance of `IgniteCache` directly from `Ignite`:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteCache cache = ignite.jcache(\"mycache\");",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Basic Operations"
-}
-[/block]
-Here are some basic JCache atomic operation examples.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "try (Ignite ignite = Ignition.start(\"examples/config/example-cache.xml\")) {\n    IgniteCache<Integer, String> cache = ignite.cache(CACHE_NAME);\n \n    // Store keys in cache (values will end up on different cache nodes).\n    for (int i = 0; i < 10; i++)\n        cache.put(i, Integer.toString(i));\n \n    for (int i = 0; i < 10; i++)\n        System.out.println(\"Got [key=\" + i + \", val=\" + cache.get(i) + ']');\n}",
-      "language": "java",
-      "name": "Put & Get"
-    },
-    {
-      "code": "// Put-if-absent which returns previous value.\nInteger oldVal = cache.getAndPutIfAbsent(\"Hello\", 11);\n  \n// Put-if-absent which returns boolean success flag.\nboolean success = cache.putIfAbsent(\"World\", 22);\n  \n// Replace-if-exists operation (opposite of getAndPutIfAbsent), returns previous value.\noldVal = cache.getAndReplace(\"Hello\", 11);\n \n// Replace-if-exists operation (opposite of putIfAbsent), returns boolean success flag.\nsuccess = cache.replace(\"World\", 22);\n  \n// Replace-if-matches operation.\nsuccess = cache.replace(\"World\", 2, 22);\n  \n// Remove-if-matches operation.\nsuccess = cache.remove(\"Hello\", 1);",
-      "language": "java",
-      "name": "Atomic"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "EntryProcessor"
-}
-[/block]
-When doing `puts` and `updates` in cache, you usually send the full object state across the network. `EntryProcessor` allows for processing data directly on primary nodes, often transferring only the deltas instead of the full state. 
-
-Moreover, you can embed your own logic into an `EntryProcessor`, for example, taking the previously cached value and incrementing it by 1.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<String, Integer> cache = ignite.jcache(\"mycache\");\n\n// Increment cache value 10 times.\nfor (int i = 0; i < 10; i++)\n  cache.invoke(\"mykey\", (entry, args) -> {\n    Integer val = entry.getValue();\n\n    entry.setValue(val == null ? 1 : val + 1);\n\n    return null;\n  });",
-      "language": "java",
-      "name": "invoke"
-    },
-    {
-      "code": "IgniteCache<String, Integer> cache = ignite.jcache(\"mycache\");\n\n// Increment cache value 10 times.\nfor (int i = 0; i < 10; i++)\n  cache.invoke(\"mykey\", new EntryProcessor<String, Integer, Void>() {\n    @Override \n    public Void process(MutableEntry<String, Integer> entry, Object... args) {\n      Integer val = entry.getValue();\n\n      entry.setValue(val == null ? 1 : val + 1);\n\n      return null;\n    }\n  });",
-      "language": "java",
-      "name": "java7 invoke"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "info",
-  "title": "Atomicity",
-  "body": "`EntryProcessors` are executed atomically within a lock on the given cache key."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Asynchronous Support"
-}
-[/block]
-Just like all distributed APIs in Ignite, `IgniteCache` extends the [IgniteAsyncSupport](doc:async-support) interface and can be used in asynchronous mode.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Enable asynchronous mode.\nIgniteCache<String, Integer> asyncCache = ignite.jcache(\"mycache\").withAsync();\n\n// Asynchronously store value in cache.\nasyncCache.getAndPut(\"1\", 1);\n\n// Get future for the above invocation.\nIgniteFuture<Integer> fut = asyncCache.future();\n\n// Asynchronously listen for the operation to complete.\nfut.listenAsync(f -> System.out.println(\"Previous cache value: \" + f.get()));",
-      "language": "java",
-      "name": "Async"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/data-grid/off-heap-memory.md b/wiki/documentation/data-grid/off-heap-memory.md
deleted file mode 100644
index d184dd8..0000000
--- a/wiki/documentation/data-grid/off-heap-memory.md
+++ /dev/null
@@ -1,197 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Off-Heap memory allows your cache to overcome lengthy JVM Garbage Collection (GC) pauses when working with large heap sizes by caching data outside of the main Java heap space, but still in RAM.
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/iXCEc4RsQ4a1SM1Vfjnl",
-        "off-heap-memory.png",
-        "450",
-        "354",
-        "#6c521f",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "info",
-  "title": "Off-Heap Indexes",
-  "body": "Note that when off-heap memory is configured, Ignite also stores query indexes off-heap. This means that indexes will not take any portion of on-heap memory."
-}
-[/block]
-
-[block:callout]
-{
-  "type": "success",
-  "body": "You can also manage GC pauses by starting multiple processes with smaller heaps on the same physical server. However, such an approach is wasteful when using REPLICATED caches, as we would end up caching identical *replicated* data in every started JVM process.",
-  "title": "Off-Heap Memory vs. Multiple Processes"
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Tiered Off-Heap Storage"
-}
-[/block]
-Ignite provides a tiered storage model, where data can be stored and moved between **on-heap**, **off-heap**, and **swap space**. Each higher tier provides more storage capacity, with a gradual increase in latency. 
-
-Ignite provides three memory modes, defined in `CacheMemoryMode`, for storing cache entries in support of the tiered storage model:
-[block:parameters]
-{
-  "data": {
-    "h-0": "Memory Mode",
-    "h-1": "Description",
-    "0-0": "`ONHEAP_TIERED`",
-    "0-1": "Store entries on-heap and evict to off-heap and optionally to swap.",
-    "1-0": "`OFFHEAP_TIERED`",
-    "2-0": "`OFFHEAP_VALUES`",
-    "1-1": "Store entries off-heap, bypassing on-heap and optionally evicting to swap.",
-    "2-1": "Store keys on-heap and values off-heap."
-  },
-  "cols": 2,
-  "rows": 3
-}
-[/block]
-Cache can be configured to use any of the three modes by setting the `memoryMode` configuration property of `CacheConfiguration`, as described below.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "ONHEAP_TIERED"
-}
-[/block]
-In Ignite, `ONHEAP_TIERED` is the default memory mode, where all cache entries are stored on-heap. Entries can be moved from on-heap to off-heap storage, and later to swap space, if one is configured.
-
-To configure `ONHEAP_TIERED` memory mode, you need to:
-
-1. Set the `memoryMode` property of `CacheConfiguration` to `ONHEAP_TIERED`.
-2. Optionally enable off-heap memory.
-3. Configure an *eviction policy* for on-heap memory.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n  ...\n  <!-- Store cache entries on-heap. -->\n  <property name=\"memoryMode\" value=\"ONHEAP_TIERED\"/> \n\n  <!-- Enable Off-Heap memory with max size of 10 Gigabytes (0 for unlimited). -->\n  <property name=\"offHeapMaxMemory\" value=\"#{10 * 1024L * 1024L * 1024L}\"/>\n\n  <!-- Configure eviction policy. -->\n  <property name=\"evictionPolicy\">\n    <bean class=\"org.apache.ignite.cache.eviction.fifo.CacheFifoEvictionPolicy\">\n      <!-- Evict to off-heap after cache size reaches maxSize. -->\n      <property name=\"maxSize\" value=\"100000\"/>\n    </bean>\n  </property>\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setMemoryMode(CacheMemoryMode.ONHEAP_TIERED);\n\n// Set off-heap memory to 10GB (0 for unlimited)\ncacheCfg.setOffHeapMaxMemory(10 * 1024L * 1024L * 1024L);\n\nCacheFifoEvictionPolicy evctPolicy = new CacheFifoEvictionPolicy();\n\n// Store only 100,000 entries on-heap.\nevctPolicy.setMaxSize(100000);\n\ncacheCfg.setEvictionPolicy(evctPolicy);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "warning",
-  "body": "Note that if you do not configure an eviction policy in ONHEAP_TIERED mode, data will never be moved from on-heap to off-heap memory.",
-  "title": "Eviction Policy"
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "OFFHEAP_TIERED"
-}
-[/block]
-This memory mode allows you to configure your cache to store entries directly in off-heap storage, bypassing on-heap memory. Since all entries are stored off-heap, there is no need to explicitly configure an eviction policy. If the configured off-heap storage size is exceeded (0 means unlimited), an LRU eviction policy is used to evict entries from the off-heap store, optionally moving them to swap space, if one is configured.
-
-To configure `OFFHEAP_TIERED` memory mode, you need to:
-
-1. Set the `memoryMode` property of `CacheConfiguration` to `OFFHEAP_TIERED`.
-2. Optionally enable off-heap memory.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n  ...\n  <!-- Always store cache entries in off-heap memory. -->\n  <property name=\"memoryMode\" value=\"OFFHEAP_TIERED\"/>\n\n  <!-- Enable Off-Heap memory with max size of 10 Gigabytes (0 for unlimited). -->\n  <property name=\"offHeapMaxMemory\" value=\"#{10 * 1024L * 1024L * 1024L}\"/>\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);\n\n// Set off-heap memory to 10GB (0 for unlimited)\ncacheCfg.setOffHeapMaxMemory(10 * 1024L * 1024L * 1024L);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "OFFHEAP_VALUES"
-}
-[/block]
-Setting this memory mode allows you to store keys on-heap and values off-heap. This memory mode is useful when keys are small and values are large.
-
-To configure `OFFHEAP_VALUES` memory mode, you need to:
-
-1. Set the `memoryMode` property of `CacheConfiguration` to `OFFHEAP_VALUES`.
-2. Enable off-heap memory.
-3. Optionally configure an *eviction policy* for on-heap memory.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n  ...\n  <!-- Store keys on-heap and values off-heap. -->\n  <property name=\"memoryMode\" value=\"OFFHEAP_VALUES\"/>\n\n  <!-- Enable Off-Heap memory with max size of 10 Gigabytes (0 for unlimited). -->\n  <property name=\"offHeapMaxMemory\" value=\"#{10 * 1024L * 1024L * 1024L}\"/>\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setMemoryMode(CacheMemoryMode.OFFHEAP_VALUES);\n\n// Set off-heap memory to 10GB (0 for unlimited)\ncacheCfg.setOffHeapMaxMemory(10 * 1024L * 1024L * 1024L);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Swap Space"
-}
-[/block]
-Whenever your data set exceeds the limits of on-heap and off-heap memory, you can configure swap space, in which case Ignite will evict entries to disk instead of discarding them.
-[block:callout]
-{
-  "type": "warning",
-  "title": "Swap Space Performance",
-  "body": "Since swap space is on-disk, it is significantly slower than on-heap or off-heap memory."
-}
-[/block]
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n  ...\n  <!-- Enable swap. -->\n  <property name=\"swapEnabled\" value=\"true\"/> \n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setSwapEnabled(true);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/data-grid/persistent-store.md b/wiki/documentation/data-grid/persistent-store.md
deleted file mode 100644
index ce913a4..0000000
--- a/wiki/documentation/data-grid/persistent-store.md
+++ /dev/null
@@ -1,128 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-The JCache specification comes with APIs for [javax.cache.integration.CacheLoader](https://ignite.incubator.apache.org/jcache/1.0.0/javadoc/javax/cache/integration/CacheLoader.html) and [javax.cache.integration.CacheWriter](https://ignite.incubator.apache.org/jcache/1.0.0/javadoc/javax/cache/integration/CacheWriter.html), which are used for **read-through** and **write-through** from and to an underlying persistent storage, respectively (e.g. an RDBMS like Oracle or MySQL, or a NoSQL database like MongoDB or Couchbase).
-
-While Ignite allows you to configure the `CacheLoader` and `CacheWriter` separately, it is very awkward to implement a transactional store within two separate classes, as multiple `load` and `put` operations have to share the same connection within the same transaction. To mitigate that, Ignite provides the `org.apache.ignite.cache.store.CacheStore` interface, which extends both `CacheLoader` and `CacheWriter`.
-[block:callout]
-{
-  "type": "info",
-  "title": "Transactions",
-  "body": "`CacheStore` is fully transactional and automatically merges into the ongoing cache transaction."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "CacheStore"
-}
-[/block]
-The `CacheStore` interface in Ignite is used to write and load data to and from the underlying data store. In addition to the standard JCache loading and storing methods, it also introduces end-of-transaction demarcation and the ability to bulk-load a cache from the underlying data store.
-
-## loadCache()
-The `CacheStore.loadCache()` method allows cache loading even without passing all the keys that need to be loaded. It is generally used for hot-loading the cache on startup, but can also be called at any point after the cache has been started.
-
-The `IgniteCache.loadCache()` method will delegate to the `CacheStore.loadCache()` method on every cluster member that is running the cache. To invoke loading only on the local cluster node, use the `IgniteCache.localLoadCache()` method.
-[block:callout]
-{
-  "type": "info",
-  "body": "In case of partitioned caches, keys that are not mapped to this node, either as primary or backups, will be automatically discarded by the cache."
-}
-[/block]
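As an illustration, hot-loading could be triggered as sketched below. The cache name `personCache`, the `Person` value class, and the trailing entry-count argument are assumptions for this sketch (the trailing arguments are simply passed through to the store's `loadCache()` implementation, which in the example later in this page interprets the first argument as an entry count):

```java
// Assumes an already-started Ignite instance and a cache named "personCache"
// backed by a CacheStore implementation.
IgniteCache<Long, Person> cache = ignite.cache("personCache");

// Delegates to CacheStore.loadCache() on every cluster member running the cache.
// The 'null' predicate means "accept all loaded entries"; 100_000 is passed
// through to the store as-is.
cache.loadCache(null, 100_000);

// Alternatively, invoke loading only on the local node:
cache.localLoadCache(null, 100_000);
```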
-## load(), write(), delete()
-The `load()`, `write()`, and `delete()` methods of the `CacheStore` are called whenever the `get()`, `put()`, and `remove()` methods, respectively, are called on the `IgniteCache` interface. These methods are used to enable **read-through** and **write-through** behavior when working with individual cache entries.
-
-## loadAll(), writeAll(), deleteAll()
-The `loadAll()`, `writeAll()`, and `deleteAll()` methods of the `CacheStore` are called whenever the `getAll()`, `putAll()`, and `removeAll()` methods, respectively, are called on the `IgniteCache` interface. These methods are used to enable **read-through** and **write-through** behavior when working with multiple cache entries and should generally be implemented using batch operations to provide better performance.
-[block:callout]
-{
-  "type": "info",
-  "title": "",
-  "body": "`CacheStoreAdapter` provides default implementations for the `loadAll()`, `writeAll()`, and `deleteAll()` methods, which simply iterate through all keys one by one."
-}
-[/block]
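The mappings described in the two sections above can be sketched from the caller's side as follows. The `personCache` name and the `Person` class are assumptions, and `readThrough`/`writeThrough` must be enabled in the cache configuration for the delegation to take place:

```java
// Assumes an already-started Ignite instance and a store-backed cache
// with readThrough and writeThrough enabled.
IgniteCache<Long, Person> cache = ignite.cache("personCache");

// Single-entry operations delegate to the single-entry store methods.
cache.put(1L, new Person(1L, "John", "Smith")); // -> CacheStore.write()
Person p = cache.get(1L);                       // -> CacheStore.load() on a cache miss
cache.remove(1L);                               // -> CacheStore.delete()

// Bulk operations delegate to the batch store methods.
Map<Long, Person> batch = new HashMap<>();
batch.put(2L, new Person(2L, "Jane", "Roe"));
batch.put(3L, new Person(3L, "Joe", "Bloggs"));

cache.putAll(batch);                  // -> CacheStore.writeAll()
cache.getAll(batch.keySet());         // -> CacheStore.loadAll() for missing keys
cache.removeAll(batch.keySet());      // -> CacheStore.deleteAll()
```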
-## sessionEnd()
-Ignite has a concept of store session which may span more than one cache store operation. Sessions are especially useful when working with transactions.
-
-In case of `ATOMIC` caches, the `sessionEnd()` method is called after the completion of each `CacheStore` method. In case of `TRANSACTIONAL` caches, `sessionEnd()` is called at the end of each transaction, which makes it possible to either commit or roll back multiple operations on the underlying persistent store.
-[block:callout]
-{
-  "type": "info",
-  "body": "`CacheStoreAdapter` provides a default empty implementation of the `sessionEnd()` method."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "CacheStoreSession"
-}
-[/block]
-The main purpose of a cache store session is to hold the context between multiple store invocations whenever `CacheStore` is used in a cache transaction. For example, if using JDBC, you can store the ongoing database connection via the `CacheStoreSession.attach()` method. You can then commit this connection in the `CacheStore#sessionEnd(boolean)` method.
-
-`CacheStoreSession` can be injected into your cache store implementation via the `@CacheStoreSessionResource` annotation.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "CacheStore Example"
-}
-[/block]
-Below are several possible cache store implementations. Note that the transactional implementation works both with and without transactions.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class CacheJdbcPersonStore extends CacheStoreAdapter<Long, Person> {\n  // This method is called whenever \"get(...)\" methods are called on IgniteCache.\n  @Override public Person load(Long key) {\n    try (Connection conn = connection()) {\n      try (PreparedStatement st = conn.prepareStatement(\"select * from PERSONS where id=?\")) {\n        st.setLong(1, key);\n\n        ResultSet rs = st.executeQuery();\n\n        return rs.next() ? new Person(rs.getLong(1), rs.getString(2), rs.getString(3)) : null;\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheLoaderException(\"Failed to load: \" + key, e);\n    }\n  }\n\n  // This method is called whenever \"put(...)\" methods are called on IgniteCache.\n  @Override public void write(Cache.Entry<Long, Person> entry) {\n    try (Connection conn = connection()) {\n      // Syntax of MERGE statement is database specific and should be adapted for your database.\n      // If your database does not support MERGE statement then use update and insert statements sequentially.\n      try (PreparedStatement st = conn.prepareStatement(\n        \"merge into PERSONS (id, firstName, lastName) key (id) VALUES (?, ?, ?)\")) {\n        Person val = entry.getValue();\n\n        st.setLong(1, entry.getKey());\n        st.setString(2, val.getFirstName());\n        st.setString(3, val.getLastName());\n\n        st.executeUpdate();\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheWriterException(\"Failed to write [key=\" + entry.getKey() + \", val=\" + entry.getValue() + ']', e);\n    }\n  }\n\n  // This method is called whenever \"remove(...)\" methods are called on IgniteCache.\n  @Override public void delete(Object key) {\n    try (Connection conn = connection()) {\n      try (PreparedStatement st = conn.prepareStatement(\"delete from PERSONS where id=?\")) {\n        st.setLong(1, (Long)key);\n\n        st.executeUpdate();\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheWriterException(\"Failed to delete: \" + key, e);\n    }\n  }\n\n  // This method is called whenever \"loadCache()\" and \"localLoadCache()\"\n  // methods are called on IgniteCache. It is used for bulk-loading the cache.\n  // If you don't need to bulk-load the cache, skip this method.\n  @Override public void loadCache(IgniteBiInClosure<Long, Person> clo, Object... args) {\n    if (args == null || args.length == 0 || args[0] == null)\n      throw new CacheLoaderException(\"Expected entry count parameter is not provided.\");\n\n    final int entryCnt = (Integer)args[0];\n\n    try (Connection conn = connection()) {\n      try (PreparedStatement st = conn.prepareStatement(\"select * from PERSONS\")) {\n        try (ResultSet rs = st.executeQuery()) {\n          int cnt = 0;\n\n          while (cnt < entryCnt && rs.next()) {\n            Person person = new Person(rs.getLong(1), rs.getString(2), rs.getString(3));\n\n            clo.apply(person.getId(), person);\n\n            cnt++;\n          }\n        }\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheLoaderException(\"Failed to load values from cache store.\", e);\n    }\n  }\n\n  // Open JDBC connection.\n  private Connection connection() throws SQLException {\n    // Open connection to your RDBMS systems (Oracle, MySQL, Postgres, DB2, Microsoft SQL, etc.)\n    // In this example we use H2 Database for simplification.\n    Connection conn = DriverManager.getConnection(\"jdbc:h2:mem:example;DB_CLOSE_DELAY=-1\");\n\n    conn.setAutoCommit(true);\n\n    return conn;\n  }\n}",
-      "language": "java",
-      "name": "jdbc non-transactional"
-    },
-    {
-      "code": "public class CacheJdbcPersonStore extends CacheStoreAdapter<Long, Person> {\n  /** Auto-injected store session. */\n  @CacheStoreSessionResource\n  private CacheStoreSession ses;\n\n  // Complete transaction or simply close connection if there is no transaction.\n  @Override public void sessionEnd(boolean commit) {\n    try (Connection conn = ses.getAttached()) {\n      if (conn != null && ses.isWithinTransaction()) {\n        if (commit)\n          conn.commit();\n        else\n          conn.rollback();\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheWriterException(\"Failed to end store session.\", e);\n    }\n  }\n\n  // This method is called whenever \"get(...)\" methods are called on IgniteCache.\n  @Override public Person load(Long key) {\n    try (Connection conn = connection()) {\n      try (PreparedStatement st = conn.prepareStatement(\"select * from PERSONS where id=?\")) {\n        st.setLong(1, key);\n\n        ResultSet rs = st.executeQuery();\n\n        return rs.next() ? new Person(rs.getLong(1), rs.getString(2), rs.getString(3)) : null;\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheLoaderException(\"Failed to load: \" + key, e);\n    }\n  }\n\n  // This method is called whenever \"put(...)\" methods are called on IgniteCache.\n  @Override public void write(Cache.Entry<Long, Person> entry) {\n    try (Connection conn = connection()) {\n      // Syntax of MERGE statement is database specific and should be adapted for your database.\n      // If your database does not support MERGE statement then use update and insert statements sequentially.\n      try (PreparedStatement st = conn.prepareStatement(\n        \"merge into PERSONS (id, firstName, lastName) key (id) VALUES (?, ?, ?)\")) {\n        Person val = entry.getValue();\n\n        st.setLong(1, entry.getKey());\n        st.setString(2, val.getFirstName());\n        st.setString(3, val.getLastName());\n\n        st.executeUpdate();\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheWriterException(\"Failed to write [key=\" + entry.getKey() + \", val=\" + entry.getValue() + ']', e);\n    }\n  }\n\n  // This method is called whenever \"remove(...)\" methods are called on IgniteCache.\n  @Override public void delete(Object key) {\n    try (Connection conn = connection()) {\n      try (PreparedStatement st = conn.prepareStatement(\"delete from PERSONS where id=?\")) {\n        st.setLong(1, (Long)key);\n\n        st.executeUpdate();\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheWriterException(\"Failed to delete: \" + key, e);\n    }\n  }\n\n  // This method is called whenever \"loadCache()\" and \"localLoadCache()\"\n  // methods are called on IgniteCache. It is used for bulk-loading the cache.\n  // If you don't need to bulk-load the cache, skip this method.\n  @Override public void loadCache(IgniteBiInClosure<Long, Person> clo, Object... args) {\n    if (args == null || args.length == 0 || args[0] == null)\n      throw new CacheLoaderException(\"Expected entry count parameter is not provided.\");\n\n    final int entryCnt = (Integer)args[0];\n\n    try (Connection conn = connection()) {\n      try (PreparedStatement st = conn.prepareStatement(\"select * from PERSONS\")) {\n        try (ResultSet rs = st.executeQuery()) {\n          int cnt = 0;\n\n          while (cnt < entryCnt && rs.next()) {\n            Person person = new Person(rs.getLong(1), rs.getString(2), rs.getString(3));\n\n            clo.apply(person.getId(), person);\n\n            cnt++;\n          }\n        }\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheLoaderException(\"Failed to load values from cache store.\", e);\n    }\n  }\n\n  // Opens JDBC connection and attaches it to the ongoing\n  // session if within a transaction.\n  private Connection connection() throws SQLException {\n    if (ses.isWithinTransaction()) {\n      Connection conn = ses.getAttached();\n\n      if (conn == null) {\n        conn = openConnection(false);\n\n        // Store connection in the session, so it can be accessed\n        // for other operations within the same transaction.\n        ses.attach(conn);\n      }\n\n      return conn;\n    }\n    // Transaction can be null in case of simple load or put operation.\n    else\n      return openConnection(true);\n  }\n\n  // Opens JDBC connection.\n  private Connection openConnection(boolean autocommit) throws SQLException {\n    // Open connection to your RDBMS systems (Oracle, MySQL, Postgres, DB2, Microsoft SQL, etc.)\n    // In this example we use H2 Database for simplification.\n    Connection conn = DriverManager.getConnection(\"jdbc:h2:mem:example;DB_CLOSE_DELAY=-1\");\n\n    conn.setAutoCommit(autocommit);\n\n    return conn;\n  }\n}",
-      "language": "java",
-      "name": "jdbc transactional"
-    },
-    {
-      "code": "public class CacheJdbcPersonStore implements CacheStore<Long, Person> {\n  // Skip single operations and open connection methods.\n  // You can copy them from jdbc non-transactional or jdbc transactional examples.\n  ...\n\n  // This method is called whenever \"getAll(...)\" methods are called on IgniteCache.\n  @Override public Map<Long, Person> loadAll(Iterable<Long> keys) {\n    try (Connection conn = connection()) {\n      try (PreparedStatement st = conn.prepareStatement(\n        \"select firstName, lastName from PERSONS where id=?\")) {\n        Map<Long, Person> loaded = new HashMap<>();\n\n        for (Long key : keys) {\n          st.setLong(1, key);\n\n          try (ResultSet rs = st.executeQuery()) {\n            if (rs.next())\n              loaded.put(key, new Person(key, rs.getString(1), rs.getString(2)));\n          }\n        }\n\n        return loaded;\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheLoaderException(\"Failed to loadAll: \" + keys, e);\n    }\n  }\n\n  // This method is called whenever \"putAll(...)\" methods are called on IgniteCache.\n  @Override public void writeAll(Collection<Cache.Entry<Long, Person>> entries) {\n    try (Connection conn = connection()) {\n      // Syntax of MERGE statement is database specific and should be adapted for your database.\n      // If your database does not support MERGE statement then use update and insert statements sequentially.\n      try (PreparedStatement st = conn.prepareStatement(\n        \"merge into PERSONS (id, firstName, lastName) key (id) VALUES (?, ?, ?)\")) {\n        for (Cache.Entry<Long, Person> entry : entries) {\n          Person val = entry.getValue();\n\n          st.setLong(1, entry.getKey());\n          st.setString(2, val.getFirstName());\n          st.setString(3, val.getLastName());\n\n          st.addBatch();\n        }\n\n        st.executeBatch();\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheWriterException(\"Failed to writeAll: \" + entries, e);\n    }\n  }\n\n  // This method is called whenever \"removeAll(...)\" methods are called on IgniteCache.\n  @Override public void deleteAll(Collection<Long> keys) {\n    try (Connection conn = connection()) {\n      try (PreparedStatement st = conn.prepareStatement(\"delete from PERSONS where id=?\")) {\n        for (Long key : keys) {\n          st.setLong(1, key);\n\n          st.addBatch();\n        }\n\n        st.executeBatch();\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheWriterException(\"Failed to deleteAll: \" + keys, e);\n    }\n  }\n}",
-      "language": "java",
-      "name": "jdbc bulk operations"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Configuration"
-}
-[/block]
-The `CacheStore` implementation can be set on `CacheConfiguration` via a `Factory`, in much the same way as `CacheLoader` and `CacheWriter` are set.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  ...\n  <property name=\"cacheConfiguration\">\n    <list>\n      <bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n        ...\n        <property name=\"cacheStoreFactory\">\n          <bean class=\"javax.cache.configuration.FactoryBuilder$SingletonFactory\">\n            <constructor-arg>\n              <bean class=\"foo.bar.MyPersonStore\">\n                ...\n              </bean>\n            </constructor-arg>\n          </bean>\n        </property>\n        ...\n      </bean>\n    </list>\n  </property>\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "IgniteConfiguration cfg = new IgniteConfiguration();\n\nCacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<>();\n\nCacheStore<Long, Person> store;\n\nstore = new MyPersonStore();\n\ncacheCfg.setCacheStoreFactory(new FactoryBuilder.SingletonFactory<>(store));\ncacheCfg.setReadThrough(true);\ncacheCfg.setWriteThrough(true);\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/data-grid/rebalancing.md b/wiki/documentation/data-grid/rebalancing.md
deleted file mode 100644
index 5ffe6a6..0000000
--- a/wiki/documentation/data-grid/rebalancing.md
+++ /dev/null
@@ -1,122 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-When a new node joins the topology, existing nodes relinquish primary or backup ownership of some keys to the new node, so that keys remain evenly balanced across the grid at all times.
-
-If the new node becomes the primary or a backup for some partition, it will fetch the data for that partition from the previous primary node or from one of the backup nodes. Once a partition is fully loaded onto the new node, it is marked obsolete on the old node and is eventually evicted after all current transactions on that node have finished. Hence, for a short period after a topology change, a cache may have more backup copies of a key than configured. However, once rebalancing completes, the extra backup copies are removed from node caches.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Preload Modes"
-}
-[/block]
-The following preload modes are defined in the `CachePreloadMode` enum.
-[block:parameters]
-{
-  "data": {
-    "0-0": "`SYNC`",
-    "h-0": "CachePreloadMode",
-    "h-1": "Description",
-    "0-1": "Synchronous rebalancing mode. Distributed caches will not start until all necessary data is loaded from other available grid nodes. This means that any call to cache public API will be blocked until rebalancing is finished.",
-    "1-1": "Asynchronous rebalancing mode. Distributed caches will start immediately and will load all necessary data from other available grid nodes in the background.",
-    "1-0": "`ASYNC`",
-    "2-1": "In this mode no rebalancing will take place, which means that caches will either be loaded on demand from the persistent store whenever data is accessed, or will be populated explicitly.",
-    "2-0": "`NONE`"
-  },
-  "cols": 2,
-  "rows": 3
-}
-[/block]
-By default, the `ASYNC` preload mode is used. To use another mode, set the `preloadMode` property of `CacheConfiguration`, like so:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    <property name=\"cacheConfiguration\">\n        <bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n            <!-- Set synchronous preloading. -->\n            <property name=\"preloadMode\" value=\"SYNC\"/>\n            ...\n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setPreloadMode(CachePreloadMode.SYNC);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Rebalance Message Throttling"
-}
-[/block]
-When the rebalancer transfers data from one node to another, it splits the whole data set into batches and sends each batch in a separate message. If your data sets are large and there are many messages to send, the CPU or network can become overloaded. In this case it may be reasonable to wait between rebalance messages so that the negative performance impact of the preloading process is minimized. This time interval is controlled by the `preloadThrottle` configuration property of `CacheConfiguration`. Its default value is 0, which means there are no pauses between messages. Note that the size of a single message can also be customized via the `preloadBatchSize` configuration property (the default size is 512K).
-
-For example, if you want the preloader to send 2MB of data per message with a 100 ms throttle interval, you should provide the following configuration:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    <property name=\"cacheConfiguration\">\n        <bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n            <!-- Set batch size. -->\n            <property name=\"preloadBatchSize\" value=\"#{2 * 1024 * 1024}\"/>\n\n            <!-- Set throttle interval. -->\n            <property name=\"preloadThrottle\" value=\"100\"/>\n            ...\n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setPreloadBatchSize(2 * 1024 * 1024);\n            \ncacheCfg.setPreloadThrottle(100);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Configuration"
-}
-[/block]
-Cache preloading behavior can be customized by optionally setting the following configuration properties:
-[block:parameters]
-{
-  "data": {
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": "`setPreloadMode`",
-    "0-1": "Preload mode for distributed cache. See Preload Modes section for details.",
-    "1-0": "`setPreloadPartitionedDelay`",
-    "1-1": "Preloading delay in milliseconds. See Delayed And Manual Preloading section for details.",
-    "2-0": "`setPreloadBatchSize`",
-    "2-1": "Size (in bytes) to be loaded within a single preload message. Preloading algorithm will split total data set on every node into multiple batches prior to sending data.",
-    "3-0": "`setPreloadThreadPoolSize`",
-    "3-1": "Size of the preloading thread pool. Note that this size serves as a hint, and the implementation may create more threads for preloading than specified here (but never fewer).",
-    "4-0": "`setPreloadThrottle`",
-    "4-1": "Time in milliseconds to wait between preload messages to avoid overloading the CPU or network. When preloading large data sets, the CPU or network can become overloaded with preloading messages, which consequently may slow down application performance. This parameter helps tune the amount of time to wait between preload messages to make sure that the preloading process does not have a negative performance impact. Note that the application will continue to work properly while preloading is still in progress.",
-    "5-0": "`setPreloadOrder`",
-    "6-0": "`setPreloadTimeout`",
-    "5-1": "Order in which preloading should be done. Preload order can be set to non-zero value for caches with SYNC or ASYNC preload modes only. Preloading for caches with smaller preload order will be completed first. By default, preloading is not ordered.",
-    "6-1": "Preload timeout (ms).",
-    "0-2": "`ASYNC`",
-    "1-2": "0 (no delay)",
-    "2-2": "512K",
-    "3-2": "2",
-    "4-2": "0 (throttling disabled)",
-    "5-2": "0",
-    "6-2": "10000"
-  },
-  "cols": 3,
-  "rows": 7
-}
-[/block]
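The interplay of `preloadBatchSize` and `preloadThrottle` can be sketched in plain Java. This is a hypothetical sender loop, not Ignite's actual implementation: the data set is cut into batches of at most the configured size, and the sender pauses between messages when throttling is enabled.

```java
import java.util.ArrayList;
import java.util.List;

public class PreloadSketch {
    // Split a data set of totalBytes into batches of at most batchSize bytes each.
    static List<Integer> splitIntoBatches(int totalBytes, int batchSize) {
        List<Integer> batches = new ArrayList<>();
        int remaining = totalBytes;
        while (remaining > 0) {
            int batch = Math.min(batchSize, remaining);
            batches.add(batch);
            remaining -= batch;
        }
        return batches;
    }

    // Send all batches, sleeping throttleMs between messages (0 = throttling disabled).
    static void sendAll(List<Integer> batches, long throttleMs) throws InterruptedException {
        for (int batch : batches) {
            System.out.println("Sending batch of " + batch + " bytes");
            if (throttleMs > 0)
                Thread.sleep(throttleMs);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // 5 MB data set, 2 MB batches -> three messages (2 MB, 2 MB, 1 MB).
        List<Integer> batches = splitIntoBatches(5 * 1024 * 1024, 2 * 1024 * 1024);
        sendAll(batches, 0);
    }
}
```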
\ No newline at end of file
diff --git a/wiki/documentation/data-grid/transactions.md b/wiki/documentation/data-grid/transactions.md
deleted file mode 100644
index 88a47a0..0000000
--- a/wiki/documentation/data-grid/transactions.md
+++ /dev/null
@@ -1,144 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite supports two modes for cache operation, *transactional* and *atomic*. In `transactional` mode you can group multiple cache operations into a single transaction, while `atomic` mode executes atomic operations one at a time. `Atomic` mode is more lightweight and generally offers better performance than `transactional` caches.
-
-However, regardless of which mode you use, as long as your cluster is alive, the data between different cluster nodes must remain consistent. This means that no matter which node is used to retrieve data, it will never return data that has been partially committed or that is inconsistent with other data.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteTransactions"
-}
-[/block]
-The `IgniteTransactions` interface provides functionality for starting and completing transactions, as well as for subscribing listeners and getting metrics.
-[block:callout]
-{
-  "type": "info",
-  "title": "Cross-Cache Transactions",
-  "body": "You can combine multiple operations from different caches into one transaction. Note that this allows you to update caches of different types, like `REPLICATED` and `PARTITIONED` caches, in one transaction."
-}
-[/block]
-You can obtain an instance of `IgniteTransactions` as follows:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteTransactions transactions = ignite.transactions();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-Here is an example of how transactions can be performed in Ignite:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "try (Transaction tx = transactions.txStart()) {\n    Integer hello = cache.get(\"Hello\");\n  \n    if (hello == 1)\n        cache.put(\"Hello\", 11);\n  \n    cache.put(\"World\", 22);\n  \n    tx.commit();\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Two-Phase-Commit (2PC)"
-}
-[/block]
-Ignite utilizes 2PC protocol for its transactions with many one-phase-commit optimizations whenever applicable. Whenever data is updated within a transaction, Ignite will keep transactional state in a local transaction map until `commit()` is called, at which point, if needed, the data is transferred to participating remote nodes.
-
-For more information on how Ignite 2PC works, you can check out these blogs:
-  * [Two-Phase-Commit for Distributed In-Memory Caches](http://gridgain.blogspot.com/2014/09/two-phase-commit-for-distributed-in.html)
-  *  [Two-Phase-Commit for In-Memory Caches - Part II](http://gridgain.blogspot.com/2014/09/two-phase-commit-for-in-memory-caches.html) 
-  * [One-Phase-Commit - Fast Transactions For In-Memory Caches](http://gridgain.blogspot.com/2014/09/one-phase-commit-fast-transactions-for.html) 
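The buffer-then-commit flow described above can be sketched in plain Java. This is a simplified single-JVM model of a 2PC coordinator, not Ignite's actual protocol: updates accumulate in a local transaction map until `commit()`, at which point every participant first votes in a prepare phase and then applies the changes.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TwoPhaseCommitSketch {
    interface Participant {
        boolean prepare(Map<String, Integer> changes); // Phase 1: vote yes/no.
        void commit();                                 // Phase 2: apply the changes.
        void rollback();
    }

    static class Node implements Participant {
        final Map<String, Integer> data = new HashMap<>();
        private Map<String, Integer> pending;

        public boolean prepare(Map<String, Integer> changes) { pending = changes; return true; }
        public void commit() { data.putAll(pending); pending = null; }
        public void rollback() { pending = null; }
    }

    // Transactional state is kept in a local map until commit() is called.
    static final Map<String, Integer> txMap = new HashMap<>();

    static boolean commit(List<Participant> participants) {
        for (Participant p : participants)
            if (!p.prepare(new HashMap<>(txMap))) {      // Phase 1: all must vote yes.
                for (Participant q : participants) q.rollback();
                return false;
            }
        for (Participant p : participants) p.commit();   // Phase 2: apply everywhere.
        return true;
    }

    public static void main(String[] args) {
        Node n1 = new Node(), n2 = new Node();
        txMap.put("Hello", 11);
        txMap.put("World", 22);
        boolean ok = commit(List.of(n1, n2));
        System.out.println("Committed: " + ok + ", n1=" + n1.data + ", n2=" + n2.data);
    }
}
```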
-[block:callout]
-{
-  "type": "success",
-  "body": "Ignite provides fully ACID (**A**tomicity, **C**onsistency, **I**solation, **D**urability) compliant transactions that ensure guaranteed consistency.",
-  "title": "ACID Compliance"
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Optimistic and Pessimistic"
-}
-[/block]
-Whenever `TRANSACTIONAL` atomicity mode is configured, Ignite supports `OPTIMISTIC` and `PESSIMISTIC` concurrency modes for transactions. The main difference is that in `PESSIMISTIC` mode locks are acquired at the time of access, while in `OPTIMISTIC` mode locks are acquired during the `commit` phase.
-
-Ignite also supports the following isolation levels:
-  * `READ_COMMITTED` - data is always fetched from the primary node, even if it has already been accessed within the transaction.
-  * `REPEATABLE_READ` - data is fetched from the primary node only once on first access and stored in the local transactional map. All subsequent access to the same data is local.
-  * `SERIALIZABLE` - when combined with `OPTIMISTIC` concurrency, transactions may throw `TransactionOptimisticException` in case of concurrent updates.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteTransactions txs = ignite.transactions();\n\n// Start transaction in optimistic mode with repeatable read isolation level.\nTransaction tx = txs.txStart(TransactionConcurrency.OPTIMISTIC, TransactionIsolation.REPEATABLE_READ);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
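With `OPTIMISTIC` `SERIALIZABLE` transactions, the usual pattern is to retry the whole transaction when `TransactionOptimisticException` is thrown. The retry loop below is a plain-Java sketch of that pattern, using a compare-and-set in place of Ignite's transaction API (class and method names are illustrative, not part of Ignite):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.IntUnaryOperator;

public class OptimisticRetrySketch {
    // The shared state: commit succeeds only if nobody changed it since we read it.
    static final AtomicReference<Integer> cell = new AtomicReference<>(0);

    // Retry the read-compute-commit cycle until it wins the race.
    static int updateWithRetry(IntUnaryOperator op) {
        while (true) {
            Integer seen = cell.get();               // "Read" inside the transaction.
            int computed = op.applyAsInt(seen);      // Compute the new state.
            if (cell.compareAndSet(seen, computed))  // "Commit": a failure here models
                return computed;                     // TransactionOptimisticException.
            // On conflict, loop and retry the whole transaction.
        }
    }

    public static void main(String[] args) {
        int result = updateWithRetry(v -> v + 1);
        System.out.println("Committed value: " + result);
    }
}
```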
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Atomicity Mode"
-}
-[/block]
-Ignite supports 2 atomicity modes defined in `CacheAtomicityMode` enum:
-  * `TRANSACTIONAL`
-  * `ATOMIC`
-
-`TRANSACTIONAL` mode enables fully ACID-compliant transactions; however, when only atomic semantics are needed, it is recommended to use `ATOMIC` mode for better performance.
-
-`ATOMIC` mode provides better performance by avoiding transactional locks, while still providing data atomicity and consistency. Another difference in `ATOMIC` mode is that bulk writes, such as the `putAll(...)` and `removeAll(...)` methods, are no longer executed in one transaction and can partially fail. In case of a partial failure, a `CachePartialUpdateException` is thrown, containing the list of keys for which the update failed.
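The partial-failure semantics of bulk writes in `ATOMIC` mode can be sketched in plain Java. This is an illustrative single-JVM model, not Ignite's implementation: each entry is applied independently, and keys whose update failed are collected instead of rolling back the whole batch.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class PartialUpdateSketch {
    static final Map<String, Integer> cache = new HashMap<>();

    // Apply each entry on its own; there is no transaction, so some entries may
    // succeed while others fail. Failed keys are collected for the caller,
    // modeling what CachePartialUpdateException reports.
    static List<String> putAll(Map<String, Integer> entries, Predicate<String> primaryAlive) {
        List<String> failedKeys = new ArrayList<>();
        for (Map.Entry<String, Integer> e : entries.entrySet()) {
            if (primaryAlive.test(e.getKey()))
                cache.put(e.getKey(), e.getValue());
            else
                failedKeys.add(e.getKey());
        }
        return failedKeys;
    }

    public static void main(String[] args) {
        Map<String, Integer> batch = Map.of("a", 1, "b", 2, "c", 3);
        // Simulate the primary node for key "b" being unavailable.
        List<String> failed = putAll(new HashMap<>(batch), k -> !k.equals("b"));
        System.out.println("Failed keys to retry: " + failed);
    }
}
```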
-[block:callout]
-{
-  "type": "info",
-  "body": "Note that transactions are disabled whenever `ATOMIC` mode is used, which allows you to achieve much higher performance and throughput in cases where transactions are not needed.",
-  "title": "Performance"
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Configuration"
-}
-[/block]
-Atomicity mode is defined in `CacheAtomicityMode` enum and can be configured via `atomicityMode` property of `CacheConfiguration`. 
-
-Default atomicity mode is `ATOMIC`.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    <property name=\"cacheConfiguration\">\n        <bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n            <!-- Set a cache name. -->\n            <property name=\"name\" value=\"myCache\"/>\n\n            <!-- Set atomicity mode, can be ATOMIC or TRANSACTIONAL. -->\n            <property name=\"atomicityMode\" value=\"TRANSACTIONAL\"/>\n            ...\n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/data-grid/web-session-clustering.md b/wiki/documentation/data-grid/web-session-clustering.md
deleted file mode 100644
index 9968ed1..0000000
--- a/wiki/documentation/data-grid/web-session-clustering.md
+++ /dev/null
@@ -1,253 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite In-Memory Data Fabric is capable of caching web sessions of all Java Servlet containers that follow the Java Servlet 3.0 Specification, including Apache Tomcat, Eclipse Jetty, Oracle WebLogic, and others.
-
-Web session caching is useful when running a cluster of application servers. When running a web application in a servlet container, you may face performance and scalability problems. A single application server is usually not able to handle large volumes of traffic by itself. A common solution is to scale your web application across multiple clustered instances:
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/AlvqqQhZRym15ji5iztA",
-        "web_sessions_1.png",
-        "561",
-        "502",
-        "#7f9eaa",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-In the architecture shown above, the High Availability Proxy (Load Balancer) distributes requests between multiple Application Server instances (App Server 1, App Server 2, ...), reducing the load on each instance and providing service availability if any of the instances fails. The problem here is web session availability. A web session keeps an intermediate logical state between requests by using cookies, and is normally bound to a particular application instance. Generally this is handled using sticky connections, ensuring that requests from the same user are handled by the same app server instance. However, if that instance fails, the session is lost, and the user will have to create it anew, losing all the current unsaved state:
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/KtAyyVzrQ5CwhxODgEVV",
-        "web_sessions_2.png",
-        "561",
-        "502",
-        "#fb7661",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-A solution here is to use the Ignite In-Memory Data Fabric web session cache - a distributed cache that maintains a copy of each created session, sharing them between all instances. If any of your application instances fails, Ignite will automatically restore the sessions owned by the failed instance from the distributed cache, regardless of which app server the next request is forwarded to. Moreover, with web session caching, sticky connections become less important, as the session is available on any app server the web request may be routed to.
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/8WyBbutWSm4PRYDNWRr7",
-        "web_sessions_3.png",
-        "561",
-        "502",
-        "#f73239",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-In this chapter we give a brief architecture overview of Ignite's web session caching functionality and instructions on how to configure your web application to enable web session caching.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Architecture"
-}
-[/block]
-To set up a distributed web session cache with Ignite, you normally configure your web application to start an Ignite node (embedded mode). When multiple application server instances are started, all Ignite nodes connect with each other, forming a distributed cache.
-[block:callout]
-{
-  "type": "info",
-  "body": "Note that not every Ignite caching node has to run inside an application server. You can also start additional, standalone Ignite nodes and add them to the topology as well."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Replication Strategies"
-}
-[/block]
-There are several replication strategies you can use when storing sessions in the Ignite In-Memory Data Fabric. The replication strategy is defined by the backing cache settings. In this section we briefly cover the most common configurations.
-
-##Fully Replicated Cache
-This strategy stores copies of all sessions on each Ignite node, providing maximum availability. However, with this approach you can only cache as many web sessions as can fit in memory on a single server. Additionally, performance may suffer, as every change of web session state must now be replicated to all other cluster nodes.
-
-To enable the fully replicated strategy, set the `cacheMode` of your backing cache to `REPLICATED`:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n    <!-- Cache mode. -->\n    <property name=\"cacheMode\" value=\"REPLICATED\"/>\n    ...\n</bean>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
-##Partitioned Cache with Backups
-In partitioned mode, web sessions are split into partitions and every node is responsible for caching only partitions assigned to that node. With this approach, the more nodes you have, the more data can be cached. New nodes can always be added on the fly to add more memory.
-[block:callout]
-{
-  "type": "info",
-  "body": "With `PARTITIONED` mode, redundancy is addressed by configuring the number of backups for every web session being cached."
-}
-[/block]
-To enable the partitioned strategy, set the `cacheMode` of your backing cache to `PARTITIONED`, and set the number of backups with the `backups` property of `CacheConfiguration`:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n    <!-- Cache mode. -->\n    <property name=\"cacheMode\" value=\"PARTITIONED\"/>\n    <property name=\"backups\" value=\"1\"/>\n</bean>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
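How a partitioned session cache spreads data can be sketched in plain Java. This is a simplified key → partition → nodes mapping, not Ignite's actual affinity function: each session id hashes to a partition, and each partition is assigned one primary node plus the configured number of backups.

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionSketch {
    static final int PARTITIONS = 1024;

    // Map a session id to its partition.
    static int partition(String sessionId) {
        return Math.abs(sessionId.hashCode() % PARTITIONS);
    }

    // Pick a primary node and `backups` backup nodes for a partition
    // (simple round-robin sketch over the node ring).
    static List<Integer> owners(int part, int nodeCount, int backups) {
        List<Integer> owners = new ArrayList<>();
        for (int i = 0; i <= backups && i < nodeCount; i++)
            owners.add((part + i) % nodeCount);
        return owners;
    }

    public static void main(String[] args) {
        int part = partition("session-42");
        // 4 nodes, 1 backup -> the session lives on 2 nodes;
        // losing either one still leaves a full copy.
        System.out.println("Partition " + part + " owned by nodes " + owners(part, 4, 1));
    }
}
```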
-
-[block:callout]
-{
-  "type": "info",
-  "body": "See [Cache Distribution Models](doc:cache-distribution-models) for more information on different replication strategies available in Ignite."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Expiration and Eviction"
-}
-[/block]
-Stale sessions are cleaned up from the cache automatically when they expire. However, if many long-living sessions are created, you may want to save memory by evicting dispensable sessions when the cache reaches a certain limit. This can be done by setting up a cache eviction policy and specifying the maximum number of sessions to be stored in the cache. For example, to enable automatic eviction with the LRU algorithm and a limit of 10000 sessions, use the following cache configuration:
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n    <!-- Cache name. -->\n    <property name=\"name\" value=\"session-cache\"/>\n \n    <!-- Set up LRU eviction policy with 10000 sessions limit. -->\n    <property name=\"evictionPolicy\">\n        <bean class=\"org.apache.ignite.cache.eviction.lru.CacheLruEvictionPolicy\">\n            <property name=\"maxSize\" value=\"10000\"/>\n        </bean>\n    </property>\n    ...\n</bean>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
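The effect of an LRU policy with a session limit can be sketched with a plain `LinkedHashMap` in access order. This is illustrative only; Ignite's `CacheLruEvictionPolicy` operates on cache entries, not on a local map.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruSessionCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LruSessionCache(int maxSize) {
        super(16, 0.75f, true); // accessOrder = true gives LRU ordering.
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used session once the limit is exceeded.
        return size() > maxSize;
    }

    public static void main(String[] args) {
        LruSessionCache<String, String> sessions = new LruSessionCache<>(2);
        sessions.put("s1", "alice");
        sessions.put("s2", "bob");
        sessions.get("s1");          // Touch s1, so s2 becomes the eviction candidate.
        sessions.put("s3", "carol"); // Exceeds the limit: s2 is evicted.
        System.out.println(sessions.keySet());
    }
}
```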
-
-[block:callout]
-{
-  "type": "info",
-  "body": "For more information about various eviction policies, see Eviction Policies section."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Configuration"
-}
-[/block]
-To enable web session caching in your application with Ignite, you need to:
-
-1\. **Add Ignite JARs** - Download Ignite and add the following JARs to your application's classpath (`WEB-INF/lib` folder):
-  * `ignite.jar`
-  * `ignite-web.jar`
-  * `ignite-log4j.jar`
-  * `ignite-spring.jar`
-
-Or, if you have a Maven-based project, add the following to your application's `pom.xml`:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-fabric</artifactId>\n    <version>${ignite.version}</version>\n    <type>pom</type>\n</dependency>\n\n<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-web</artifactId>\n    <version>${ignite.version}</version>\n</dependency>\n\n<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-log4j</artifactId>\n    <version>${ignite.version}</version>\n</dependency>\n\n<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-spring</artifactId>\n    <version>${ignite.version}</version>\n</dependency>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
-Make sure to replace `${ignite.version}` with the actual Ignite version.
-
-2\. **Configure Cache Mode** - Configure Ignite cache in either `PARTITIONED` or `REPLICATED` mode (See [examples](#replication-strategies) above).
-
-3\. **Update `web.xml`** - Declare a context listener and web session filter in `web.xml`:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "...\n\n<listener>\n   <listener-class>org.apache.ignite.startup.servlet.IgniteServletContextListenerStartup</listener-class>\n</listener>\n\n<filter>\n   <filter-name>IgniteWebSessionsFilter</filter-name>\n   <filter-class>org.apache.ignite.cache.websession.IgniteWebSessionFilter</filter-class>\n</filter>\n\n<!-- You can also specify a custom URL pattern. -->\n<filter-mapping>\n   <filter-name>IgniteWebSessionsFilter</filter-name>\n   <url-pattern>/*</url-pattern>\n</filter-mapping>\n\n<!-- Specify Ignite configuration (relative to META-INF folder or IGNITE_HOME). -->\n<context-param>\n   <param-name>IgniteConfigurationFilePath</param-name>\n   <param-value>config/default-config.xml</param-value>\n</context-param>\n\n<!-- Specify the name of Ignite cache for web sessions. -->\n<context-param>\n   <param-name>IgniteWebSessionsCacheName</param-name>\n   <param-value>partitioned</param-value>\n</context-param>\n\n...",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
-On application start, the listener will start an Ignite node within your application, which will connect to other nodes in the network, forming a distributed cache.
-
-4\. **Set Eviction Policy (Optional)** - Set eviction policy for stale web sessions data lying in cache (See [example](#expiration-and-eviction) above).
-
-##Configuration Parameters
-`IgniteServletContextListenerStartup` has the following configuration parameters:
-[block:parameters]
-{
-  "data": {
-    "0-0": "`IgniteConfigurationFilePath`",
-    "0-1": "Path to Ignite configuration file (relative to `META_INF` folder or `IGNITE_HOME`).",
-    "0-2": "`/config/default-config.xml`",
-    "h-2": "Default",
-    "h-1": "Description",
-    "h-0": "Parameter Name"
-  },
-  "cols": 3,
-  "rows": 1
-}
-[/block]
-`IgniteWebSessionFilter` has the following configuration parameters:
-[block:parameters]
-{
-  "data": {
-    "h-0": "Parameter Name",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": "`IgniteWebSessionsGridName`",
-    "0-1": "Grid name for a started Ignite node. Should refer to grid in configuration file (if a grid name is specified in configuration).",
-    "0-2": "null",
-    "1-0": "`IgniteWebSessionsCacheName`",
-    "2-0": "`IgniteWebSessionsMaximumRetriesOnFail`",
-    "1-1": "Name of Ignite cache to use for web sessions caching.",
-    "1-2": "null",
-    "2-1": "Valid only for `ATOMIC` caches. Specifies number of retries in case of primary node failures.",
-    "2-2": "3"
-  },
-  "cols": 3,
-  "rows": 3
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Supported Containers"
-}
-[/block]
-Ignite has been officially tested with the following servlet containers:
-  * Apache Tomcat 6
-  * Apache Tomcat 7
-  * Eclipse Jetty 9
-  * Oracle WebLogic >= 10.3.4
\ No newline at end of file
diff --git a/wiki/documentation/distributed-data-structures/atomic-types.md b/wiki/documentation/distributed-data-structures/atomic-types.md
deleted file mode 100644
index 25446f5..0000000
--- a/wiki/documentation/distributed-data-structures/atomic-types.md
+++ /dev/null
@@ -1,114 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite supports distributed ***atomic long*** and ***atomic reference***, similar to `java.util.concurrent.atomic.AtomicLong` and `java.util.concurrent.atomic.AtomicReference` respectively.
-
-Atomics in Ignite are distributed across the cluster, enabling atomic operations (such as increment-and-get or compare-and-set) on the same globally-visible value. For example, you could update the value of an atomic long on one node and read it from another node.
-
-##Features
-  * Retrieve current value.
-  * Atomically modify current value.
-  * Atomically increment or decrement current value.
-  * Atomically compare-and-set the current value to new value.
-
-Distributed atomic long and atomic reference can be obtained via `IgniteAtomicLong` and `IgniteAtomicReference` interfaces respectively, as shown below:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteAtomicLong atomicLong = ignite.atomicLong(\n    \"atomicName\", // Atomic long name.\n    0,            // Initial value.\n    false         // Create if it does not exist.\n);",
-      "language": "java",
-      "name": "AtomicLong"
-    },
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\n// Create an AtomicReference.\nIgniteAtomicReference<Boolean> ref = ignite.atomicReference(\n    \"refName\",  // Reference name.\n    \"someVal\",  // Initial value for atomic reference.\n    true        // Create if it does not exist.\n);",
-      "language": "java",
-      "name": "AtomicReference"
-    }
-  ]
-}
-[/block]
-
-Below is a usage example of `IgniteAtomicLong` and `IgniteAtomicReference`:
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\n// Initialize atomic long.\nfinal IgniteAtomicLong atomicLong = ignite.atomicLong(\"atomicName\", 0, true);\n\n// Increment atomic long on local node.\nSystem.out.println(\"Incremented value: \" + atomicLong.incrementAndGet());\n",
-      "language": "java",
-      "name": "AtomicLong"
-    },
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\n// Initialize atomic reference.\nIgniteAtomicReference<String> ref = ignite.atomicReference(\"refName\", \"someVal\", true);\n\n// Compare the current value to the expected value and, only if they are equal,\n// set the reference to the new value.\nref.compareAndSet(\"WRONG EXPECTED VALUE\", \"someNewVal\"); // Won't change.",
-      "language": "java",
-      "name": "AtomicReference"
-    }
-  ]
-}
-[/block]
-All atomic operations provided by `IgniteAtomicLong` and `IgniteAtomicReference` are synchronous. The time an atomic operation will take depends on the number of nodes performing concurrent operations with the same instance of atomic long, the intensity of these operations, and network latency.
-[block:callout]
-{
-  "type": "info",
-  "title": "",
-  "body": "`IgniteCache` interface has `putIfAbsent()` and `replace()` methods, which provide the same CAS functionality as atomic types."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Atomic Configuration"
-}
-[/block]
-Atomics in Ignite can be configured via the `atomicConfiguration` property of `IgniteConfiguration`. The following configuration parameters can be used:
-[block:parameters]
-{
-  "data": {
-    "0-0": "`setBackups(int)`",
-    "1-0": "`setCacheMode(CacheMode)`",
-    "2-0": "`setAtomicSequenceReserveSize(int)`",
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-1": "Set number of backups.",
-    "0-2": "0",
-    "1-1": "Set cache mode for all atomic types.",
-    "1-2": "`PARTITIONED`",
-    "2-1": "Sets the number of sequence values reserved for `IgniteAtomicSequence` instances.",
-    "2-2": "1000"
-  },
-  "cols": 3,
-  "rows": 3
-}
-[/block]
-##Example 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    <property name=\"atomicConfiguration\">\n        <bean class=\"org.apache.ignite.configuration.AtomicConfiguration\">\n            <!-- Set number of backups. -->\n            <property name=\"backups\" value=\"1\"/>\n          \t\n          \t<!-- Set number of sequence values to be reserved. -->\n          \t<property name=\"atomicSequenceReserveSize\" value=\"5000\"/>\n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "AtomicConfiguration atomicCfg = new AtomicConfiguration();\n \n// Set number of backups.\natomicCfg.setBackups(1);\n\n// Set number of sequence values to be reserved. \natomicCfg.setAtomicSequenceReserveSize(5000);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n  \n// Use atomic configuration in Ignite configuration.\ncfg.setAtomicConfiguration(atomicCfg);\n  \n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/distributed-data-structures/countdownlatch.md b/wiki/documentation/distributed-data-structures/countdownlatch.md
deleted file mode 100644
index f244e97..0000000
--- a/wiki/documentation/distributed-data-structures/countdownlatch.md
+++ /dev/null
@@ -1,41 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-If you are familiar with `java.util.concurrent.CountDownLatch` for synchronization between threads within a single JVM, Ignite provides `IgniteCountDownLatch` to allow similar behavior across cluster nodes. 
-
-A distributed CountDownLatch in Ignite can be created as follows:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteCountDownLatch latch = ignite.countDownLatch(\n    \"latchName\", // Latch name.\n    10,          // Initial count.\n    false,       // Auto remove, when counter has reached zero.\n    true         // Create if it does not exist.\n);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-After the above code is executed, all nodes in the cluster will be able to synchronize on the latch named `latchName`. Below is an example of such synchronization:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\nfinal IgniteCountDownLatch latch = ignite.countDownLatch(\"latchName\", 10, false, true);\n\n// Execute jobs.\nfor (int i = 0; i < 10; i++)\n    // Execute a job on some remote cluster node.\n    ignite.compute().run(() -> {\n        int newCnt = latch.countDown();\n\n        System.out.println(\"Counted down: newCnt=\" + newCnt);\n    });\n\n// Wait for all jobs to complete.\nlatch.await();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/distributed-data-structures/id-generator.md b/wiki/documentation/distributed-data-structures/id-generator.md
deleted file mode 100644
index 1793dfb..0000000
--- a/wiki/documentation/distributed-data-structures/id-generator.md
+++ /dev/null
@@ -1,57 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Distributed atomic sequence, provided by the `IgniteAtomicSequence` interface, is similar to distributed atomic long, but its value can only go up. It also supports reserving a range of values to avoid costly network trips or cache updates every time a sequence must provide a next value. That is, when you perform `incrementAndGet()` (or any other atomic operation) on an atomic sequence, the data structure reserves a range of values ahead of time, which are guaranteed to be unique across the cluster for this sequence instance.
-
-Here is an example of how atomic sequence can be created:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n \nIgniteAtomicSequence seq = ignite.atomicSequence(\n    \"seqName\", // Sequence name.\n    0,       // Initial value for sequence.\n    true     // Create if it does not exist.\n);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-Below is a simple usage example:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\n// Initialize atomic sequence.\nfinal IgniteAtomicSequence seq = ignite.atomicSequence(\"seqName\", 0, true);\n\n// Increment atomic sequence.\nfor (int i = 0; i < 20; i++) {\n  long currentValue = seq.get();\n  long newValue = seq.incrementAndGet();\n  \n  ...\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Sequence Reserve Size"
-}
-[/block]
-The key parameter of `IgniteAtomicSequence` is `atomicSequenceReserveSize`, which is the number of sequence values reserved per node. When a node tries to obtain an instance of `IgniteAtomicSequence`, a number of sequence values is reserved for that node, and subsequent increments of the sequence happen locally without communication with other nodes, until the next reservation has to be made.
-
-The default value for `atomicSequenceReserveSize` is `1000`. This default setting can be changed by modifying the `atomicSequenceReserveSize` property of `AtomicConfiguration`. 
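As a minimal sketch (assuming the standard `AtomicConfiguration` and `IgniteConfiguration` setters; the value `5000` is only an illustrative choice), the reserve size could be changed like so:

```java
// Illustrative sketch: raise the per-node reservation from the default 1000 to 5000.
AtomicConfiguration atomicCfg = new AtomicConfiguration();

atomicCfg.setAtomicSequenceReserveSize(5000);

IgniteConfiguration cfg = new IgniteConfiguration();

// Apply the atomic configuration when starting the node.
cfg.setAtomicConfiguration(atomicCfg);

Ignition.start(cfg);
```

A larger reserve size trades a bigger range of values potentially "lost" on node failure for fewer network round trips.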
-[block:callout]
-{
-  "type": "info",
-  "body": "Refer to [Atomic Configuration](/docs/atomic-types#atomic-configuration) for more information on various atomic configuration properties, and examples on how to configure them."
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/distributed-data-structures/queue-and-set.md b/wiki/documentation/distributed-data-structures/queue-and-set.md
deleted file mode 100644
index 4bfe3a8..0000000
--- a/wiki/documentation/distributed-data-structures/queue-and-set.md
+++ /dev/null
@@ -1,133 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite In-Memory Data Fabric, in addition to providing standard key-value map-like storage, also provides an implementation of a fast Distributed Blocking Queue and Distributed Set.
-
-`IgniteQueue` and `IgniteSet`, implementations of the `java.util.concurrent.BlockingQueue` and `java.util.Set` interfaces respectively, also support all operations from the `java.util.Collection` interface. Both queue and set can be created in either collocated or non-collocated mode.
-
-Below is an example of how to create a distributed queue and set.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteQueue<String> queue = ignite.queue(\n    \"queueName\", // Queue name.\n    0,          // Queue capacity. 0 for unbounded queue.\n    null         // Collection configuration.\n);",
-      "language": "java",
-      "name": "Queue"
-    },
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteSet<String> set = ignite.set(\n    \"setName\", // Set name.\n    null       // Collection configuration.\n);",
-      "language": "java",
-      "name": "Set"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Collocated vs. Non-Collocated Mode"
-}
-[/block]
-If you plan to create just a few queues or sets containing lots of data, then you would create them in non-collocated mode. This ensures that an approximately equal portion of each queue or set is stored on each cluster node. On the other hand, if you plan to have many queues or sets that are relatively small in size (compared to the whole cache), then you would most likely create them in collocated mode. In this mode all elements of a queue or set are stored on the same cluster node, but an approximately equal number of queues/sets is assigned to every node.
-A collocated queue or set can be created by setting the `collocated` property of `CollectionConfiguration`, like so:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\nCollectionConfiguration colCfg = new CollectionConfiguration();\n\ncolCfg.setCollocated(true);\n\n// Create collocated queue.\nIgniteQueue<String> queue = ignite.queue(\"queueName\", 0, colCfg);",
-      "language": "java",
-      "name": "Queue"
-    },
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\nCollectionConfiguration colCfg = new CollectionConfiguration();\n\ncolCfg.setCollocated(true); \n\n// Create collocated set.\nIgniteSet<String> set = ignite.set(\"setName\", colCfg);",
-      "language": "java",
-      "name": "Set"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "info",
-  "title": "",
-  "body": "Non-collocated mode only makes sense for and is only supported for `PARTITIONED` caches."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Bounded Queues"
-}
-[/block]
-Bounded queues allow users to have many queues with a maximum size, which gives better control over the overall cache capacity. They can be either *collocated* or *non-collocated*. When bounded queues are relatively small and used in collocated mode, all queue operations become extremely fast. Moreover, when used in combination with the compute grid, users can collocate their compute jobs with the cluster nodes on which queues are located, to make sure that all operations are local and there is no (or minimal) data movement.
-
-Here is an example of how a job could be sent directly to the node on which a queue resides:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\nCollectionConfiguration colCfg = new CollectionConfiguration();\n\ncolCfg.setCollocated(true); \n\ncolCfg.setCacheName(\"cacheName\");\n\nfinal IgniteQueue<String> queue = ignite.queue(\"queueName\", 20, colCfg);\n \n// Add queue elements (queue is cached on some node).\nfor (int i = 0; i < 20; i++)\n    queue.add(\"Value \" + Integer.toString(i));\n \nIgniteRunnable queuePoller = new IgniteRunnable() {\n    @Override public void run() throws IgniteException {\n        // Poll is local operation due to collocation.\n        for (int i = 0; i < 20; i++)\n            System.out.println(\"Polled element: \" + queue.poll());\n    }\n};\n\n// Drain queue on the node where the queue is cached.\nignite.compute().affinityRun(\"cacheName\", \"queueName\", queuePoller);",
-      "language": "java",
-      "name": "Queue"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "success",
-  "body": "Refer to [Collocate Compute and Data](doc:collocate-compute-and-data) section for more information on collocating computations with data."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Cache Queues and Load Balancing"
-}
-[/block]
-Given that elements will remain in the queue until someone takes them, and that no two nodes should ever receive the same element from the queue, cache queues can be used as an alternate work distribution and load balancing approach within Ignite. 
-
-For example, you could simply add computations, such as instances of `IgniteRunnable`, to a queue, and have threads on remote nodes call the `IgniteQueue.take()` method, which blocks if the queue is empty. Once `take()` returns a job, a thread processes it and calls `take()` again to get the next job. With this approach, threads on remote nodes only start working on the next job when they have completed the previous one, hence creating an ideally balanced system where every node takes only the number of jobs it can process, and no more.
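The pattern can be sketched locally with a plain `java.util.concurrent.BlockingQueue` standing in for the distributed `IgniteQueue`; the class below is illustrative, not part of the Ignite API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class QueueLoadBalancer {
    // Runs 'jobCount' jobs through a shared blocking queue drained by
    // 'workerCount' worker threads. Each worker blocks on take(), runs the
    // job, then takes the next one, so a worker only receives new work
    // after finishing the previous job.
    public static int processAll(int jobCount, int workerCount) throws InterruptedException {
        BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
        AtomicInteger processed = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(jobCount);

        for (int w = 0; w < workerCount; w++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        Runnable job = queue.take(); // Blocks while the queue is empty.
                        job.run();
                        done.countDown();
                    }
                }
                catch (InterruptedException ignored) {
                    // Worker shut down.
                }
            });
            worker.setDaemon(true);
            worker.start();
        }

        // "Submit" jobs by adding them to the queue.
        for (int i = 0; i < jobCount; i++)
            queue.add(processed::incrementAndGet);

        done.await(); // Wait until every job has been taken and executed.

        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Processed: " + processAll(100, 4));
    }
}
```

In the distributed case, the worker threads would run on remote nodes and the queue would be an `IgniteQueue`, but the balancing behavior is the same: a slow worker simply takes fewer jobs.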
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Collection Configuration"
-}
-[/block]
-Ignite collections can be configured via the `CollectionConfiguration` API (see the examples above). The following configuration parameters can be used:
-[block:parameters]
-{
-  "data": {
-    "h-0": "Setter Method",
-    "0-0": "`setCollocated(boolean)`",
-    "1-0": "`setCacheName(String)`",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-2": "false",
-    "0-1": "Sets collocation mode.",
-    "1-1": "Sets the name of the cache that stores this collection.",
-    "1-2": "null"
-  },
-  "cols": 3,
-  "rows": 2
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/distributed-events/automatic-batching.md b/wiki/documentation/distributed-events/automatic-batching.md
deleted file mode 100644
index f89ffe0..0000000
--- a/wiki/documentation/distributed-events/automatic-batching.md
+++ /dev/null
@@ -1,33 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite can group, or batch, event notifications that are generated as a result of cache events occurring within the cluster.
-
-Each activity in the cache can result in an event notification being generated and sent. For systems where cache activity is high, getting notified for every event could be network intensive, possibly leading to decreased performance of cache operations in the grid.
-
-In Ignite, event notifications can be grouped together and sent in batches or at timed intervals. Here is an example of how this can be done:
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n \n// Get an instance of named cache.\nfinal IgniteCache<Integer, String> cache = ignite.jcache(\"cacheName\");\n \n// Sample remote filter which only accepts events for keys\n// that are greater than or equal to 10.\nIgnitePredicate<CacheEvent> rmtLsnr = new IgnitePredicate<CacheEvent>() {\n    @Override public boolean apply(CacheEvent evt) {\n        System.out.println(\"Cache event: \" + evt);\n \n        int key = evt.key();\n \n        return key >= 10;\n    }\n};\n \n// Subscribe to cache events occurring on all nodes \n// that have the specified cache running. \n// Send notifications in batches of 10.\nignite.events(ignite.cluster().forCacheNodes(\"cacheName\")).remoteListen(\n\t\t10 /*batch size*/, 0 /*time intervals*/, false, null, rmtLsnr, EVTS_CACHE);\n \n// Generate cache events.\nfor (int i = 0; i < 20; i++)\n    cache.put(i, Integer.toString(i));",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/distributed-events/events.md b/wiki/documentation/distributed-events/events.md
deleted file mode 100644
index 2c7f43a..0000000
--- a/wiki/documentation/distributed-events/events.md
+++ /dev/null
@@ -1,118 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite distributed events functionality allows applications to receive notifications when a variety of events occur in the distributed grid environment. You can automatically get notified of task executions, and of read, write, or query operations occurring on local or remote nodes within the cluster.
-
-Distributed events functionality is provided via `IgniteEvents` interface. You can get an instance of `IgniteEvents` from Ignite as follows:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteEvents evts = ignite.events();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Subscribe for Events"
-}
-[/block]
-The listen methods can be used to receive notifications of specified events happening in the cluster. These methods register a listener on local or remote nodes for the specified events. Whenever the event occurs on a node, the listener is notified.
-##Local Events
-The `localListen(...)` method registers event listeners for the specified events on the local node only.
-##Remote Events
-The `remoteListen(...)` method registers event listeners for the specified events on all nodes within the cluster or a cluster group. Following is an example of each method:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\n// Local listener that listens to local events.\nIgnitePredicate<CacheEvent> locLsnr = evt -> {\n  System.out.println(\"Received event [evt=\" + evt.name() + \", key=\" + evt.key() + \n    \", oldVal=\" + evt.oldValue() + \", newVal=\" + evt.newValue());\n\n  return true; // Continue listening.\n};\n\n// Subscribe to specified cache events occurring on the local node.\nignite.events().localListen(locLsnr,\n  EventType.EVT_CACHE_OBJECT_PUT,\n  EventType.EVT_CACHE_OBJECT_READ,\n  EventType.EVT_CACHE_OBJECT_REMOVED);\n\n// Get an instance of named cache.\nfinal IgniteCache<Integer, String> cache = ignite.jcache(\"cacheName\");\n\n// Generate cache events.\nfor (int i = 0; i < 20; i++)\n  cache.put(i, Integer.toString(i));\n",
-      "language": "java",
-      "name": "local listen"
-    },
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\n// Get an instance of named cache.\nfinal IgniteCache<Integer, String> cache = ignite.jcache(\"cacheName\");\n\n// Sample remote filter which only accepts events for keys\n// that are greater than or equal to 10.\nIgnitePredicate<CacheEvent> rmtLsnr = evt -> evt.<Integer>key() >= 10;\n\n// Subscribe to specified cache events on all nodes that have cache running.\nignite.events(ignite.cluster().forCacheNodes(\"cacheName\")).remoteListen(null, rmtLsnr,\n  EventType.EVT_CACHE_OBJECT_PUT,\n  EventType.EVT_CACHE_OBJECT_READ,\n  EventType.EVT_CACHE_OBJECT_REMOVED);\n\n// Generate cache events.\nfor (int i = 0; i < 20; i++)\n  cache.put(i, Integer.toString(i));\n",
-      "language": "java",
-      "name": "remote listen"
-    },
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n \n// Get an instance of named cache.\nfinal IgniteCache<Integer, String> cache = ignite.jcache(\"cacheName\");\n \n// Sample remote filter which only accepts events for keys\n// that are greater than or equal to 10.\nIgnitePredicate<CacheEvent> rmtLsnr = new IgnitePredicate<CacheEvent>() {\n    @Override public boolean apply(CacheEvent evt) {\n        System.out.println(\"Cache event: \" + evt);\n \n        int key = evt.key();\n \n        return key >= 10;\n    }\n};\n \n// Subscribe to specified cache events occurring on \n// all nodes that have the specified cache running.\nignite.events(ignite.cluster().forCacheNodes(\"cacheName\")).remoteListen(null, rmtLsnr,\n    EVT_CACHE_OBJECT_PUT,\n    EVT_CACHE_OBJECT_READ,\n    EVT_CACHE_OBJECT_REMOVED);\n \n// Generate cache events.\nfor (int i = 0; i < 20; i++)\n    cache.put(i, Integer.toString(i));",
-      "language": "java",
-      "name": "java7 listen"
-    }
-  ]
-}
-[/block]
-In the above example `EVT_CACHE_OBJECT_PUT`, `EVT_CACHE_OBJECT_READ`, and `EVT_CACHE_OBJECT_REMOVED` are pre-defined event type constants defined in the `EventType` interface.
-[block:callout]
-{
-  "type": "info",
-  "body": "`EventType` interface defines various event type constants that can be used with listen methods. Refer to [javadoc](https://ignite.incubator.apache.org/releases/1.0.0/javadoc/) for complete list of these event types."
-}
-[/block]
-
-[block:callout]
-{
-  "type": "warning",
-  "body": "Event types passed in as parameter in  `localListen(...)` and `remoteListen(...)` methods must also be configured in `IgniteConfiguration`. See [configuration](#configuration) example below."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Query for Events"
-}
-[/block]
-All events generated in the system are kept locally on the node where they occur. The `IgniteEvents` API provides methods to query for these events.
-##Local Events
-The `localQuery(...)` method queries for events on the local node using the passed-in predicate filter, and returns a collection of local events that satisfy the filter.
-##Remote Events
-The `remoteQuery(...)` method asynchronously queries for events on remote nodes in this cluster group using the passed-in predicate filter. This operation is distributed and hence can fail on the communication layer and can generally take much longer than a local event query. Note that this method does not block and returns immediately with a future.
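As a minimal sketch of the query methods described above (method signatures per the `IgniteEvents` javadoc; the event types and the 10-second timeout are illustrative choices):

```java
Ignite ignite = Ignition.ignite();

// Query the local node for cache-put events recorded so far.
Collection<CacheEvent> locEvts = ignite.events().localQuery(
    evt -> true, EventType.EVT_CACHE_OBJECT_PUT);

// Query remote nodes for the same events; this is a distributed
// operation and can take much longer than the local query.
Collection<CacheEvent> rmtEvts = ignite.events().remoteQuery(
    evt -> true, 10000, EventType.EVT_CACHE_OBJECT_PUT);
```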
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Configuration"
-}
-[/block]
-To get notified of any task or cache events occurring within the cluster, the `includeEventTypes` property of `IgniteConfiguration` must be set.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n \t\t... \n    <!-- Enable cache events. -->\n    <property name=\"includeEventTypes\">\n        <util:constant static-field=\"org.apache.ignite.events.EventType.EVTS_CACHE\"/>\n    </property>\n  \t...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "IgniteConfiguration cfg = new IgniteConfiguration();\n\n// Enable cache events.\ncfg.setIncludeEventTypes(EVTS_CACHE);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-By default, event notifications are turned off for performance reasons.
-[block:callout]
-{
-  "type": "success",
-  "body": "Since thousands of events can be generated per second, they create additional load on the system, which can lead to significant performance degradation. Therefore, it is highly recommended to enable only those events that your application logic requires."
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/distributed-file-system/igfs.md b/wiki/documentation/distributed-file-system/igfs.md
deleted file mode 100644
index ca11fd8..0000000
--- a/wiki/documentation/distributed-file-system/igfs.md
+++ /dev/null
@@ -1,18 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-We are currently adding documentation for the Ignite File System. In the meantime, you can refer to the [javadoc](https://ignite.incubator.apache.org/releases/1.0.0/javadoc/org/apache/ignite/IgniteFs.html).
\ No newline at end of file
diff --git a/wiki/documentation/distributed-messaging/messaging.md b/wiki/documentation/distributed-messaging/messaging.md
deleted file mode 100644
index 92871b9..0000000
--- a/wiki/documentation/distributed-messaging/messaging.md
+++ /dev/null
@@ -1,90 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite distributed messaging allows for topic-based, cluster-wide communication between all nodes. Messages with a specified message topic can be distributed to all nodes, or to a sub-group of nodes, that have subscribed to that topic.
-
-Ignite messaging is based on the publish-subscribe paradigm, where publishers and subscribers are connected together by a common topic. When one of the nodes sends a message A for topic T, it is published on all nodes that have subscribed to T.
-[block:callout]
-{
-  "type": "info",
-  "body": "Any new node joining the cluster automatically gets subscribed to all the topics that other nodes in the cluster (or [cluster group](/docs/cluster-groups)) are subscribed to."
-}
-[/block]
-Distributed Messaging functionality in Ignite is provided via `IgniteMessaging` interface. You can get an instance of `IgniteMessaging`, like so:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\n// Messaging instance over this cluster.\nIgniteMessaging msg = ignite.message();\n\n// Messaging instance over given cluster group (in this case, remote nodes).\nIgniteMessaging rmtMsg = ignite.message(ignite.cluster().forRemotes());",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Publish Messages"
-}
-[/block]
-The send methods publish messages with a specified message topic to all nodes. Messages can be sent in an *ordered* or *unordered* manner.
-##Ordered Messages
-The `sendOrdered(...)` method can be used if you want to receive messages in the order they were sent. A timeout parameter is passed to specify how long a message will stay in the queue waiting for messages that were supposed to be sent before it. If the timeout expires, all the messages that have not yet arrived for a given topic on that node will be ignored.
-##Unordered Messages
-The `send(...)` methods do not guarantee message ordering. This means that when you sequentially send message A and message B, you are not guaranteed that the target node first receives A and then B.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Subscribe for Messages"
-}
-[/block]
-The listen methods subscribe to messages. When these methods are called, a listener for the specified message topic is registered on all (or a sub-group of) nodes to listen for new messages. A predicate is passed to the listen methods; it returns a boolean value that tells the listener whether to continue or stop listening for new messages.
-##Local Listen
-The `localListen(...)` method registers a message listener for the specified topic only on the local node and listens for messages from any node in *this* cluster group.
-##Remote Listen
-The `remoteListen(...)` method registers message listeners for the specified topic on all nodes in *this* cluster group and listens for messages from any node in *this* cluster group.
-
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Example"
-}
-[/block]
-The following example shows message exchange between remote nodes.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n \nIgniteMessaging rmtMsg = ignite.message(ignite.cluster().forRemotes());\n \n// Add listener for ordered messages on all remote nodes.\nrmtMsg.remoteListen(\"MyOrderedTopic\", (nodeId, msg) -> {\n    System.out.println(\"Received ordered message [msg=\" + msg + \", from=\" + nodeId + ']');\n \n    return true; // Return true to continue listening.\n});\n \n// Send ordered messages to remote nodes.\nfor (int i = 0; i < 10; i++)\n    rmtMsg.sendOrdered(\"MyOrderedTopic\", Integer.toString(i), 0);",
-      "language": "java",
-      "name": "Ordered Messaging"
-    },
-    {
-      "code": " Ignite ignite = Ignition.ignite();\n \nIgniteMessaging rmtMsg = ignite.message(ignite.cluster().forRemotes());\n \n// Add listener for unordered messages on all remote nodes.\nrmtMsg.remoteListen(\"MyUnOrderedTopic\", (nodeId, msg) -> {\n    System.out.println(\"Received unordered message [msg=\" + msg + \", from=\" + nodeId + ']');\n \n    return true; // Return true to continue listening.\n});\n \n// Send unordered messages to remote nodes.\nfor (int i = 0; i < 10; i++)\n    rmtMsg.send(\"MyUnOrderedTopic\", Integer.toString(i));",
-      "language": "java",
-      "name": "Unordered Messaging"
-    },
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\n// Get cluster group of remote nodes.\nClusterGroup rmtPrj = ignite.cluster().forRemotes();\n\n// Get messaging instance over remote nodes.\nIgniteMessaging msg = ignite.message(rmtPrj);\n\n// Add message listener for specified topic on all remote nodes.\nmsg.remoteListen(\"myOrderedTopic\", new IgniteBiPredicate<UUID, String>() {\n    @Override public boolean apply(UUID nodeId, String msg) {\n        System.out.println(\"Received ordered message [msg=\" + msg + \", from=\" + nodeId + ']');\n\n        return true; // Return true to continue listening.\n    }\n});\n\n// Send ordered messages to all remote nodes.\nfor (int i = 0; i < 10; i++)\n    msg.sendOrdered(\"myOrderedTopic\", Integer.toString(i), 0);",
-      "language": "java",
-      "name": "java7 ordered"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/http/configuration.md b/wiki/documentation/http/configuration.md
deleted file mode 100644
index 4696661..0000000
--- a/wiki/documentation/http/configuration.md
+++ /dev/null
@@ -1,67 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "General Configuration"
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "Parameter name",
-    "h-3": "Default value",
-    "h-2": "Optional",
-    "h-1": "Description",
-    "0-0": "**setSecretKey(String)**",
-    "0-1": "Defines secret key used for client authentication. When provided, client request must contain HTTP header **X-Signature** with Base64 encoded SHA1 hash of the string \"[1];[2]\", where [1] is timestamp in milliseconds and [2] is the secret key.",
-    "0-3": "**null**",
-    "0-2": "Yes",
-    "1-0": "**setPortRange(int)**",
-    "1-1": "Port range for Jetty server. In case port provided in Jetty configuration or **IGNITE_JETTY_PORT** system property is already in use, Ignite will iteratively increment port by 1 and try binding once again until provided port range is exceeded.",
-    "1-3": "**100**",
-    "1-2": "Yes",
-    "2-0": "**setJettyPath(String)**",
-    "2-1": "Path to Jetty configuration file. Should be either absolute or relative to **IGNITE_HOME**. If not provided then Ignite will start Jetty server with a simple HTTP connector. This connector will use **IGNITE_JETTY_HOST** and **IGNITE_JETTY_PORT** system properties as host and port respectively. In case **IGNITE_JETTY_HOST** is not provided, localhost will be used as default. In case **IGNITE_JETTY_PORT** is not provided, port 8080 will be used as default.",
-    "2-3": "**null**",
-    "2-2": "Yes"
-  },
-  "cols": 4,
-  "rows": 3
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Sample Jetty XML configuration"
-}
-[/block]
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<?xml version=\"1.0\"?>\n<!DOCTYPE Configure PUBLIC \"-//Jetty//Configure//EN\" \"http://www.eclipse.org/jetty/configure.dtd\">\n<Configure id=\"Server\" class=\"org.eclipse.jetty.server.Server\">\n    <Arg name=\"threadPool\">\n        <!-- Default queued blocking thread pool -->\n        <New class=\"org.eclipse.jetty.util.thread.QueuedThreadPool\">\n            <Set name=\"minThreads\">20</Set>\n            <Set name=\"maxThreads\">200</Set>\n        </New>\n    </Arg>\n    <New id=\"httpCfg\" class=\"org.eclipse.jetty.server.HttpConfiguration\">\n        <Set name=\"secureScheme\">https</Set>\n        <Set name=\"securePort\">8443</Set>\n        <Set name=\"sendServerVersion\">true</Set>\n        <Set name=\"sendDateHeader\">true</Set>\n    </New>\n    <Call name=\"addConnector\">\n        <Arg>\n            <New class=\"org.eclipse.jetty.server.ServerConnector\">\n                <Arg name=\"server\"><Ref refid=\"Server\"/></Arg>\n                <Arg name=\"factories\">\n                    <Array type=\"org.eclipse.jetty.server.ConnectionFactory\">\n                        <Item>\n                            <New class=\"org.eclipse.jetty.server.HttpConnectionFactory\">\n                                <Ref refid=\"httpCfg\"/>\n                            </New>\n                        </Item>\n                    </Array>\n                </Arg>\n                <Set name=\"host\">\n                  <SystemProperty name=\"IGNITE_JETTY_HOST\" default=\"localhost\"/>\n              \t</Set>\n                <Set name=\"port\">\n                  <SystemProperty name=\"IGNITE_JETTY_PORT\" default=\"8080\"/>\n              \t</Set>\n                <Set name=\"idleTimeout\">30000</Set>\n                <Set name=\"reuseAddress\">true</Set>\n            </New>\n        </Arg>\n    </Call>\n    <Set name=\"handler\">\n        <New id=\"Handlers\" class=\"org.eclipse.jetty.server.handler.HandlerCollection\">\n            <Set name=\"handlers\">\n 
               <Array type=\"org.eclipse.jetty.server.Handler\">\n                    <Item>\n                        <New id=\"Contexts\" class=\"org.eclipse.jetty.server.handler.ContextHandlerCollection\"/>\n                    </Item>\n                </Array>\n            </Set>\n        </New>\n    </Set>\n    <Set name=\"stopAtShutdown\">false</Set>\n</Configure>",
-      "language": "xml",
-      "name": ""
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/http/rest-api.md b/wiki/documentation/http/rest-api.md
deleted file mode 100644
index 26f3240..0000000
--- a/wiki/documentation/http/rest-api.md
+++ /dev/null
@@ -1,1663 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite provides an HTTP REST client that lets you communicate with the grid over the HTTP and HTTPS protocols using a REST approach. The REST APIs can be used to perform various operations, such as reading from and writing to a cache, executing tasks, getting cache and node metrics, and more.
-
-* [Returned value](#returned-value)
-* [Log](#log)
-* [Version](#version)
-* [Decrement](#decrement)
-* [Increment](#increment)
-* [Cache metrics](#cache-metrics)
-* [Compare-And-Swap](#compare-and-swap)
-* [Prepend](#prepend)
-* [Append](#append)
-* [Replace](#replace)
-* [Remove all](#remove-all)
-* [Remove](#remove) 
-* [Add](#add)
-* [Put all](#put-all)
-* [Put](#put) 
-* [Get all](#get-all)
-* [Get](#get)
-* [Node](#node)
-* [Topology](#topology)
-* [Execute](#execute)
-* [Result](#result)
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Returned value"
-}
-[/block]
-Every HTTP REST request returns a JSON object with a similar structure for each command. This object has the following fields:
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "affinityNodeId",
-    "0-1": "string",
-    "0-2": "Affinity node ID.",
-    "0-3": "2bd7b049-3fa0-4c44-9a6d-b5c7a597ce37",
-    "1-0": "error",
-    "1-1": "string",
-    "1-2": "Contains a description of the error if the server could not handle the request.",
-    "1-3": "specifically for each command",
-    "2-0": "response",
-    "2-1": "jsonObject",
-    "2-2": "The field contains result of command.",
-    "2-3": "specifically for each command",
-    "3-0": "successStatus",
-    "3-1": "integer",
-    "3-2": "Exit status code. It might have the following values:\n  * success = 0\n  * failed = 1 \n  * authorization failed = 2\n  * security check failed = 3",
-    "3-3": "0"
-  },
-  "cols": 4,
-  "rows": 4
-}
-[/block]
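The envelope described above can be checked mechanically before looking at the command-specific payload. A minimal Python sketch, assuming only the field names and status codes listed in the table (the helper name `check_envelope` is illustrative, not part of the API):

```python
import json

# successStatus codes, taken from the table above.
STATUS = {0: "success", 1: "failed", 2: "authorization failed", 3: "security check failed"}

def check_envelope(raw):
    """Parse a REST response envelope and raise if the command did not succeed."""
    body = json.loads(raw)
    status = body.get("successStatus")
    if status != 0:
        raise RuntimeError(f"command {STATUS.get(status, 'unknown')}: {body.get('error')}")
    return body.get("response")

# Sample envelope, copied from the version command's response example below.
sample = '{"error": "", "response": "1.0.0", "successStatus": 0}'
print(check_envelope(sample))  # prints 1.0.0
```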
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Log"
-}
-[/block]
-**Log** command shows server logs.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=log&from=10&to=100&path=/var/log/ignite.log",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-3": "Should be **log** lowercase.",
-    "0-4": "",
-    "0-1": "string",
-    "0-2": "No",
-    "1-0": "from",
-    "1-1": "integer",
-    "1-2": "Yes",
-    "1-3": "Line number to start from. This parameter is mandatory if **to** is passed.",
-    "1-4": "0",
-    "2-0": "path",
-    "2-1": "string",
-    "2-2": "Yes",
-    "2-3": "The path to the log file. If not provided, the default **work/log/ignite.log** is used.",
-    "2-4": "log/cache_server.log",
-    "3-0": "to",
-    "3-1": "integer",
-    "3-2": "Yes",
-    "3-3": "Line number to finish on. This parameter is mandatory if **from** is passed.",
-    "3-4": "1000"
-  },
-  "cols": 5,
-  "rows": 4
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"error\": \"\",\n  \"response\": [\"[14:01:56,626][INFO ][test-runner][GridDiscoveryManager] Topology snapshot [ver=1, nodes=1, CPUs=8, heap=1.8GB]\"],\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-3": "[\"[14:01:56,626][INFO ][test-runner][GridDiscoveryManager] Topology snapshot [ver=1, nodes=1, CPUs=8, heap=1.8GB]\"]",
-    "0-1": "string",
-    "0-2": "logs"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Version"
-}
-[/block]
-**Version** command shows the current Ignite version.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=version",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "",
-    "0-1": "string",
-    "0-3": "Should be **version** lowercase."
-  },
-  "cols": 5,
-  "rows": 1
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"error\": \"\",\n  \"response\": \"1.0.0\",\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-3": "1.0.0",
-    "0-1": "string",
-    "0-2": "Ignite version"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Decrement"
-}
-[/block]
-**Decrement** command subtracts **delta** from the given atomic long and returns the resulting value.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=decr&cacheName=partionedCache&key=decrKey&init=15&delta=10",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "",
-    "0-1": "string",
-    "0-3": "Should be **decr** lowercase.",
-    "1-0": "cacheName",
-    "1-1": "string",
-    "1-2": "Yes",
-    "1-3": "Cache name. If not provided, default cache will be used.",
-    "1-4": "partionedCache",
-    "2-0": "key",
-    "2-1": "string",
-    "2-3": "The name of atomic long.",
-    "2-4": "counter",
-    "3-0": "init",
-    "3-1": "long",
-    "3-2": "Yes",
-    "3-3": "Initial value.",
-    "3-4": "15",
-    "4-4": "42",
-    "4-3": "Number to be subtracted.",
-    "4-0": "delta",
-    "4-1": "long"
-  },
-  "cols": 5,
-  "rows": 5
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"affinityNodeId\": \"e05839d5-6648-43e7-a23b-78d7db9390d5\",\n  \"error\": \"\",\n  \"response\": -42,\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "long",
-    "0-2": "Value after operation.",
-    "0-3": "-42"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Increment"
-}
-[/block]
-**Increment** command adds **delta** to the given atomic long and returns the resulting value.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=incr&cacheName=partionedCache&key=incrKey&init=15&delta=10",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "",
-    "0-1": "string",
-    "0-3": "Should be **incr** lowercase.",
-    "1-0": "cacheName",
-    "1-1": "string",
-    "1-2": "Yes",
-    "1-3": "Cache name. If not provided, default cache will be used.",
-    "1-4": "partionedCache",
-    "2-0": "key",
-    "2-1": "string",
-    "2-3": "The name of atomic long.",
-    "2-4": "counter",
-    "3-0": "init",
-    "3-1": "long",
-    "3-2": "Yes",
-    "3-3": "Initial value.",
-    "3-4": "15",
-    "4-4": "42",
-    "4-3": "Number to be added.",
-    "4-0": "delta",
-    "4-1": "long"
-  },
-  "cols": 5,
-  "rows": 5
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"affinityNodeId\": \"e05839d5-6648-43e7-a23b-78d7db9390d5\",\n  \"error\": \"\",\n  \"response\": 42,\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "long",
-    "0-2": "Value after operation.",
-    "0-3": "42"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
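Requests like the one above are plain GET URLs, so they can be assembled with standard URL encoding. A sketch under the assumption that host and port are placeholders, as in the examples (the helper name `command_url` is illustrative):

```python
from urllib.parse import urlencode

def command_url(host, port, cmd, **params):
    """Build an Ignite REST request URL for the given command and query parameters."""
    query = urlencode({"cmd": cmd, **params})
    return f"http://{host}:{port}/ignite?{query}"

# Increment the atomic long named 'counter' by 10, initializing it at 15 if absent.
url = command_url("localhost", 8080, "incr", cacheName="partionedCache",
                  key="counter", init=15, delta=10)
print(url)  # http://localhost:8080/ignite?cmd=incr&cacheName=partionedCache&key=counter&init=15&delta=10
```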
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Cache metrics"
-}
-[/block]
-**Cache metrics** command shows metrics for an Ignite cache.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=cache&cacheName=partionedCache&destId=8daab5ea-af83-4d91-99b6-77ed2ca06647",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "",
-    "0-1": "string",
-    "0-3": "Should be **cache** lowercase.",
-    "1-0": "cacheName",
-    "1-1": "string",
-    "1-2": "Yes",
-    "1-3": "Cache name. If not provided, default cache will be used.",
-    "1-4": "partionedCache",
-    "2-0": "destId",
-    "2-1": "string",
-    "2-3": "Node ID for which the metrics are to be returned.",
-    "2-4": "8daab5ea-af83-4d91-99b6-77ed2ca06647",
-    "2-2": "Yes"
-  },
-  "cols": 5,
-  "rows": 3
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"affinityNodeId\": \"\",\n  \"error\": \"\",\n  \"response\": {\n    \"createTime\": 1415179251551,\n    \"hits\": 0,\n    \"misses\": 0,\n    \"readTime\": 1415179251551,\n    \"reads\": 0,\n    \"writeTime\": 1415179252198,\n    \"writes\": 2\n  },\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "jsonObject",
-    "0-2": "JSON object containing cache metrics such as creation time, read count, etc.",
-    "0-3": "{\n\"createTime\": 1415179251551, \"hits\": 0, \"misses\": 0, \"readTime\":1415179251551, \"reads\": 0,\"writeTime\": 1415179252198, \"writes\": 2\n}"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Compare-And-Swap"
-}
-[/block]
-**CAS** command stores given key-value pair in cache only if the previous value is equal to the expected value passed in.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=cas&key=casKey&val2=casOldVal&val1=casNewVal&cacheName=partionedCache&destId=8daab5ea-af83-4d91-99b6-77ed2ca06647",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "",
-    "0-1": "string",
-    "0-3": "Should be **cas** lowercase.",
-    "1-0": "cacheName",
-    "1-1": "string",
-    "1-2": "Yes",
-    "1-3": "Cache name. If not provided, default cache will be used.",
-    "1-4": "partionedCache",
-    "5-0": "destId",
-    "5-1": "string",
-    "5-3": "Node ID to which the command will be routed.",
-    "5-4": "8daab5ea-af83-4d91-99b6-77ed2ca06647",
-    "5-2": "Yes",
-    "2-0": "key",
-    "3-0": "val1",
-    "4-0": "val2",
-    "2-1": "string",
-    "3-1": "string",
-    "4-1": "string",
-    "2-3": "Key to store in cache.",
-    "3-3": "New value to be associated with the given key.",
-    "4-3": "Expected (previous) value.",
-    "2-4": "name",
-    "3-4": "Jack",
-    "4-4": "Bob"
-  },
-  "cols": 5,
-  "rows": 6
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"affinityNodeId\": \"1bcbac4b-3517-43ee-98d0-874b103ecf30\",\n  \"error\": \"\",\n  \"response\": true,\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "boolean",
-    "0-2": "True if replace happened, false otherwise.",
-    "0-3": "true"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
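The boolean response reflects whether the server-side compare succeeded. The semantics can be illustrated locally; this sketch simulates the check with a plain dict instead of calling the REST endpoint:

```python
def cas(cache, key, new_val, expected_val):
    """Store new_val under key only if the current value equals expected_val.
    Returns True if the swap happened, mirroring the boolean response above."""
    if cache.get(key) == expected_val:
        cache[key] = new_val
        return True
    return False

cache = {"casKey": "casOldVal"}
print(cas(cache, "casKey", "casNewVal", "casOldVal"))  # True: old value matched
print(cas(cache, "casKey", "other", "casOldVal"))      # False: value is now casNewVal
```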
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Prepend"
-}
-[/block]
-**Prepend** command prepends the given value to the current value for the given key.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=prepend&key=prependKey&val=prefix_&cacheName=partionedCache&destId=8daab5ea-af83-4d91-99b6-77ed2ca06647",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "",
-    "0-1": "string",
-    "0-3": "Should be **prepend** lowercase.",
-    "1-0": "cacheName",
-    "1-1": "string",
-    "1-2": "Yes",
-    "1-3": "Cache name. If not provided, default cache will be used.",
-    "1-4": "partionedCache",
-    "4-0": "destId",
-    "4-1": "string",
-    "4-3": "Node ID to which the command will be routed.",
-    "4-4": "8daab5ea-af83-4d91-99b6-77ed2ca06647",
-    "4-2": "Yes",
-    "2-0": "key",
-    "3-0": "val",
-    "2-1": "string",
-    "3-1": "string",
-    "2-3": "Key to store in cache.",
-    "3-3": "Value to be prepended to the current value.",
-    "2-4": "name",
-    "3-4": "Name_"
-  },
-  "cols": 5,
-  "rows": 5
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"affinityNodeId\": \"1bcbac4b-3517-43ee-98d0-874b103ecf30\",\n  \"error\": \"\",\n  \"response\": true,\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "boolean",
-    "0-2": "True if the value was prepended, false otherwise.",
-    "0-3": "true"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Append"
-}
-[/block]
-**Append** command appends the given value to the current value for the given key.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=append&key=appendKey&val=_suffix&cacheName=partionedCache&destId=8daab5ea-af83-4d91-99b6-77ed2ca06647",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "",
-    "0-1": "string",
-    "0-3": "Should be **append** lowercase.",
-    "1-0": "cacheName",
-    "1-1": "string",
-    "1-2": "Yes",
-    "1-3": "Cache name. If not provided, default cache will be used.",
-    "1-4": "partionedCache",
-    "4-0": "destId",
-    "4-1": "string",
-    "4-3": "Node ID to which the command will be routed.",
-    "4-4": "8daab5ea-af83-4d91-99b6-77ed2ca06647",
-    "4-2": "Yes",
-    "2-0": "key",
-    "3-0": "val",
-    "2-1": "string",
-    "3-1": "string",
-    "2-3": "Key to store in cache.",
-    "3-3": "Value to be appended to the current value.",
-    "2-4": "name",
-    "3-4": "Jack"
-  },
-  "cols": 5,
-  "rows": 5
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"affinityNodeId\": \"1bcbac4b-3517-43ee-98d0-874b103ecf30\",\n  \"error\": \"\",\n  \"response\": true,\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "boolean",
-    "0-2": "True if the value was appended, false otherwise.",
-    "0-3": "true"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Replace"
-}
-[/block]
-**Replace** command stores a given key-value pair in cache only if there is a previous mapping for it.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=rep&key=repKey&val=newValue&cacheName=partionedCache&destId=8daab5ea-af83-4d91-99b6-77ed2ca06647",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "",
-    "0-1": "string",
-    "0-3": "Should be **rep** lowercase.",
-    "1-0": "cacheName",
-    "1-1": "string",
-    "1-2": "Yes",
-    "1-3": "Cache name. If not provided, default cache will be used.",
-    "1-4": "partionedCache",
-    "4-0": "destId",
-    "4-1": "string",
-    "4-3": "Node ID to which the command will be routed.",
-    "4-4": "8daab5ea-af83-4d91-99b6-77ed2ca06647",
-    "4-2": "Yes",
-    "2-0": "key",
-    "3-0": "val",
-    "2-1": "string",
-    "3-1": "string",
-    "2-3": "Key to store in cache.",
-    "3-3": "Value associated with the given key.",
-    "2-4": "name",
-    "3-4": "Jack"
-  },
-  "cols": 5,
-  "rows": 5
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"affinityNodeId\": \"1bcbac4b-3517-43ee-98d0-874b103ecf30\",\n  \"error\": \"\",\n  \"response\": true,\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "boolean",
-    "0-2": "True if replace happened, false otherwise.",
-    "0-3": "true"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Remove all"
-}
-[/block]
-**Remove all** command removes given key mappings from cache.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=rmvall&k1=rmKey1&k2=rmKey2&k3=rmKey3&cacheName=partionedCache&destId=8daab5ea-af83-4d91-99b6-77ed2ca06647",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "",
-    "0-1": "string",
-    "0-3": "Should be **rmvall** lowercase.",
-    "1-0": "cacheName",
-    "1-1": "string",
-    "1-2": "Yes",
-    "1-3": "Cache name. If not provided, default cache will be used.",
-    "1-4": "partionedCache",
-    "3-0": "destId",
-    "3-1": "string",
-    "3-3": "Node ID to which the command will be routed.",
-    "3-4": "8daab5ea-af83-4d91-99b6-77ed2ca06647",
-    "3-2": "Yes",
-    "2-0": "k1...kN",
-    "2-1": "string",
-    "2-3": "Keys whose mappings are to be removed from cache.",
-    "2-4": "name"
-  },
-  "cols": 5,
-  "rows": 4
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"affinityNodeId\": \"1bcbac4b-3517-43ee-98d0-874b103ecf30\",\n  \"error\": \"\",\n  \"response\": true,\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "boolean",
-    "0-2": "True if the removal succeeded, false otherwise.",
-    "0-3": "true"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Remove"
-}
-[/block]
-**Remove** command removes the given key mapping from cache.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=rmv&key=rmvKey&cacheName=partionedCache&destId=8daab5ea-af83-4d91-99b6-77ed2ca06647",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "",
-    "0-1": "string",
-    "0-3": "Should be **rmv** lowercase.",
-    "1-0": "cacheName",
-    "1-1": "string",
-    "1-2": "Yes",
-    "1-3": "Cache name. If not provided, default cache will be used.",
-    "1-4": "partionedCache",
-    "3-0": "destId",
-    "3-1": "string",
-    "3-3": "Node ID to which the command will be routed.",
-    "3-4": "8daab5ea-af83-4d91-99b6-77ed2ca06647",
-    "3-2": "Yes",
-    "2-0": "key",
-    "2-1": "string",
-    "2-3": "Key whose mapping is to be removed from cache.",
-    "2-4": "name"
-  },
-  "cols": 5,
-  "rows": 4
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"affinityNodeId\": \"1bcbac4b-3517-43ee-98d0-874b103ecf30\",\n  \"error\": \"\",\n  \"response\": true,\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "boolean",
-    "0-2": "True if the removal succeeded, false otherwise.",
-    "0-3": "true"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Add"
-}
-[/block]
-**Add** command stores a given key-value pair in cache only if there isn't a previous mapping for it.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=add&key=newKey&val=newValue&cacheName=partionedCache&destId=8daab5ea-af83-4d91-99b6-77ed2ca06647",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "",
-    "0-1": "string",
-    "0-3": "Should be **add** lowercase.",
-    "1-0": "cacheName",
-    "1-1": "string",
-    "1-2": "Yes",
-    "1-3": "Cache name. If not provided, default cache will be used.",
-    "1-4": "partionedCache",
-    "4-0": "destId",
-    "4-1": "string",
-    "4-3": "Node ID to which the command will be routed.",
-    "4-4": "8daab5ea-af83-4d91-99b6-77ed2ca06647",
-    "4-2": "Yes",
-    "2-0": "key",
-    "2-1": "string",
-    "2-3": "Key to be associated with the value.",
-    "2-4": "name",
-    "3-0": "val",
-    "3-1": "string",
-    "3-3": "Value to be associated with the key.",
-    "3-4": "Jack"
-  },
-  "cols": 5,
-  "rows": 5
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"affinityNodeId\": \"1bcbac4b-3517-43ee-98d0-874b103ecf30\",\n  \"error\": \"\",\n  \"response\": true,\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "boolean",
-    "0-2": "True if value was stored in cache, false otherwise.",
-    "0-3": "true"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Put all"
-}
-[/block]
-**Put all** command stores the given key-value pairs in cache.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=putall&k1=putKey1&k2=putKey2&k3=putKey3&v1=value1&v2=value2&v3=value3&cacheName=partionedCache&destId=8daab5ea-af83-4d91-99b6-77ed2ca06647",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "",
-    "0-1": "string",
-    "0-3": "Should be **putall** lowercase.",
-    "1-0": "cacheName",
-    "1-1": "string",
-    "1-2": "Yes",
-    "1-3": "Cache name. If not provided, default cache will be used.",
-    "1-4": "partionedCache",
-    "4-0": "destId",
-    "4-1": "string",
-    "4-3": "Node ID to which the command will be routed.",
-    "4-4": "8daab5ea-af83-4d91-99b6-77ed2ca06647",
-    "4-2": "Yes",
-    "2-0": "k1...kN",
-    "2-1": "string",
-    "2-3": "Keys to be associated with values.",
-    "2-4": "name",
-    "3-0": "v1...vN",
-    "3-1": "string",
-    "3-3": "Values to be associated with keys.",
-    "3-4": "Jack"
-  },
-  "cols": 5,
-  "rows": 5
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"affinityNodeId\": \"1bcbac4b-3517-43ee-98d0-874b103ecf30\",\n  \"error\": \"\",\n  \"response\": true,\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "boolean",
-    "0-2": "True if values were stored in cache, false otherwise.",
-    "0-3": "true"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
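The k1...kN / v1...vN parameters above can be generated from an ordinary dictionary. A sketch, assuming the parameter layout in the table (the helper name `putall_params` is illustrative):

```python
from urllib.parse import urlencode

def putall_params(pairs, cache_name=None):
    """Flatten a dict into the k1...kN / v1...vN query string used by putall."""
    params = {"cmd": "putall"}
    if cache_name:
        params["cacheName"] = cache_name
    for i, (k, v) in enumerate(pairs.items(), start=1):
        params[f"k{i}"] = k
        params[f"v{i}"] = v
    return urlencode(params)

print(putall_params({"putKey1": "value1", "putKey2": "value2"}, "partionedCache"))
# cmd=putall&cacheName=partionedCache&k1=putKey1&v1=value1&k2=putKey2&v2=value2
```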
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Put"
-}
-[/block]
-**Put** command stores the given key-value pair in cache.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=put&key=newKey&val=newValue&cacheName=partionedCache&destId=8daab5ea-af83-4d91-99b6-77ed2ca06647",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "",
-    "0-1": "string",
-    "0-3": "Should be **put** lowercase.",
-    "1-0": "cacheName",
-    "1-1": "string",
-    "1-2": "Yes",
-    "1-3": "Cache name. If not provided, default cache will be used.",
-    "1-4": "partionedCache",
-    "4-0": "destId",
-    "4-1": "string",
-    "4-3": "Node ID to which the command will be routed.",
-    "4-4": "8daab5ea-af83-4d91-99b6-77ed2ca06647",
-    "4-2": "Yes",
-    "2-0": "key",
-    "2-1": "string",
-    "2-3": "Key to be associated with the value.",
-    "2-4": "name",
-    "3-0": "val",
-    "3-1": "string",
-    "3-3": "Value to be associated with the key.",
-    "3-4": "Jack"
-  },
-  "cols": 5,
-  "rows": 5
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"affinityNodeId\": \"1bcbac4b-3517-43ee-98d0-874b103ecf30\",\n  \"error\": \"\",\n  \"response\": true,\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "boolean",
-    "0-2": "True if value was stored in cache, false otherwise.",
-    "0-3": "true"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Get all"
-}
-[/block]
-**Get all** command retrieves values mapped to the specified keys from cache.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=getall&k1=getKey1&k2=getKey2&k3=getKey3&cacheName=partionedCache&destId=8daab5ea-af83-4d91-99b6-77ed2ca06647",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "",
-    "0-1": "string",
-    "0-3": "Should be **getall** lowercase.",
-    "1-0": "cacheName",
-    "1-1": "string",
-    "1-2": "Yes",
-    "1-3": "Cache name. If not provided, default cache will be used.",
-    "1-4": "partionedCache",
-    "3-0": "destId",
-    "3-1": "string",
-    "3-3": "Node ID to which the command will be routed.",
-    "3-4": "8daab5ea-af83-4d91-99b6-77ed2ca06647",
-    "3-2": "Yes",
-    "2-0": "k1...kN",
-    "2-1": "string",
-    "2-3": "Keys whose associated values are to be returned.",
-    "2-4": "key1, key2, ..., keyN"
-  },
-  "cols": 5,
-  "rows": 4
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"affinityNodeId\": \"\",\n  \"error\": \"\",\n  \"response\": {\n    \"key1\": \"value1\",\n    \"key2\": \"value2\"\n  },\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "jsonObject",
-    "0-2": "The map of key-value pairs.",
-    "0-3": "{\"key1\": \"value1\",\"key2\": \"value2\"}"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
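Since the getall response field is a JSON map, values come back keyed by the requested names, and keys absent from the cache are simply missing from the map. A small sketch of unpacking the sample response above:

```python
import json

# Sample getall response, copied from the response example above.
raw = '{"affinityNodeId": "", "error": "", "response": {"key1": "value1", "key2": "value2"}, "successStatus": 0}'
mapping = json.loads(raw)["response"]

# Keys that were not found in the cache are simply absent from the map.
for key in ("key1", "key2", "key3"):
    print(key, "->", mapping.get(key))  # key3 -> None
```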
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Get"
-}
-[/block]
-**Get** command retrieves value mapped to the specified key from cache.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=get&key=getKey&cacheName=partionedCache&destId=8daab5ea-af83-4d91-99b6-77ed2ca06647",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "",
-    "0-1": "string",
-    "0-3": "Should be **get** lowercase.",
-    "1-0": "cacheName",
-    "1-1": "string",
-    "1-2": "Yes",
-    "1-3": "Cache name. If not provided, default cache will be used.",
-    "1-4": "partionedCache",
-    "3-0": "destId",
-    "3-1": "string",
-    "3-3": "Node ID to which the command will be routed.",
-    "3-4": "8daab5ea-af83-4d91-99b6-77ed2ca06647",
-    "3-2": "Yes",
-    "2-0": "key",
-    "2-1": "string",
-    "2-3": "Key whose associated value is to be returned.",
-    "2-4": "testKey"
-  },
-  "cols": 5,
-  "rows": 4
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"affinityNodeId\": \"2bd7b049-3fa0-4c44-9a6d-b5c7a597ce37\",\n  \"error\": \"\",\n  \"response\": \"value\",\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "jsonObject",
-    "0-2": "Value for the given key.",
-    "0-3": "{\"name\": \"bob\"}"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Node"
-}
-[/block]
-**Node** command gets information about a node.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=node&attr=true&mtr=true&id=c981d2a1-878b-4c67-96f6-70f93a4cd241",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "",
-    "0-1": "string",
-    "0-3": "Should be **node** lowercase.",
-    "1-0": "mtr",
-    "1-1": "boolean",
-    "1-2": "Yes",
-    "1-3": "The response will include metrics if this parameter is **true**.",
-    "1-4": "true",
-    "4-0": "id",
-    "4-1": "string",
-    "4-3": "Optional if the **ip** parameter is passed. The response is returned for the node with this node ID.",
-    "4-4": "8daab5ea-af83-4d91-99b6-77ed2ca06647",
-    "4-2": "",
-    "2-0": "attr",
-    "2-1": "boolean",
-    "2-3": "The response will include attributes if this parameter is **true**.",
-    "2-4": "true",
-    "3-0": "ip",
-    "3-1": "string",
-    "3-3": "Optional if the **id** parameter is passed. The response is returned for the node with this IP address.",
-    "3-4": "192.168.0.1",
-    "2-2": "Yes"
-  },
-  "cols": 5,
-  "rows": 5
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"error\": \"\",\n  \"response\": {\n    \"attributes\": null,\n    \"caches\": {},\n    \"consistentId\": \"127.0.0.1:47500\",\n    \"defaultCacheMode\": \"REPLICATED\",\n    \"metrics\": null,\n    \"nodeId\": \"2d0d6510-6fed-4fa3-b813-20f83ac4a1a9\",\n    \"replicaCount\": 128,\n    \"tcpAddresses\": [\"127.0.0.1\"],\n    \"tcpHostNames\": [\"\"],\n    \"tcpPort\": 11211\n  },\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "jsonObject",
-    "0-2": "Information about only one node.",
-    "0-3": "{\n\"attributes\": null,\n\"caches\": {},\n\"consistentId\": \"127.0.0.1:47500\",\n\"defaultCacheMode\": \"REPLICATED\",\n\"metrics\": null,\n\"nodeId\": \"2d0d6510-6fed-4fa3-b813-20f83ac4a1a9\",\n\"replicaCount\": 128,\n\"tcpAddresses\": [\"127.0.0.1\"],\n\"tcpHostNames\": [\"\"],\n\"tcpPort\": 11211\n}"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Topology"
-}
-[/block]
-The **Topology** command gets information about the grid topology.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=top&attr=true&mtr=true&id=c981d2a1-878b-4c67-96f6-70f93a4cd241",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "top",
-    "0-1": "string",
-    "0-3": "Should be **top** lowercase.",
-    "1-0": "mtr",
-    "1-1": "boolean",
-    "1-2": "Yes",
-    "1-3": "If **true**, the response will include node metrics.",
-    "1-4": "true",
-    "4-0": "id",
-    "4-1": "string",
-    "4-3": "Optional if the **ip** parameter is passed. The response will be returned for the node with this node ID.",
-    "4-4": "8daab5ea-af83-4d91-99b6-77ed2ca06647",
-    "4-2": "Yes",
-    "2-0": "attr",
-    "2-1": "boolean",
-    "2-3": "If **true**, the response will include node attributes.",
-    "2-4": "true",
-    "3-0": "ip",
-    "3-1": "string",
-    "3-3": "Optional if the **id** parameter is passed. The response will be returned for the node with this IP address.",
-    "3-4": "192.168.0.1",
-    "2-2": "Yes",
-    "3-2": "Yes"
-  },
-  "cols": 5,
-  "rows": 5
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"error\": \"\",\n  \"response\": [\n    {\n      \"attributes\": {\n        ...\n      },\n      \"caches\": {},\n      \"consistentId\": \"127.0.0.1:47500\",\n      \"defaultCacheMode\": \"REPLICATED\",\n      \"metrics\": {\n        ...\n      },\n      \"nodeId\": \"96baebd6-dedc-4a68-84fd-f804ee1ed995\",\n      \"replicaCount\": 128,\n      \"tcpAddresses\": [\"127.0.0.1\"],\n      \"tcpHostNames\": [\"\"],\n      \"tcpPort\": 11211\n   },\n   {\n     \"attributes\": {\n       ...\n     },\n     \"caches\": {},\n      \"consistentId\": \"127.0.0.1:47501\",\n      \"defaultCacheMode\": \"REPLICATED\",\n      \"metrics\": {\n        ...\n      },\n      \"nodeId\": \"2bd7b049-3fa0-4c44-9a6d-b5c7a597ce37\",\n      \"replicaCount\": 128,\n      \"tcpAddresses\": [\"127.0.0.1\"],\n      \"tcpHostNames\": [\"\"],\n      \"tcpPort\": 11212\n   }\n  ],\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "jsonObject",
-    "0-2": "Information about grid topology.",
-    "0-3": "[\n{\n\"attributes\": {\n...\n},\n\"caches\": {},\n\"consistentId\": \"127.0.0.1:47500\",\n\"defaultCacheMode\": \"REPLICATED\",\n\"metrics\": {\n...\n},\n\"nodeId\": \"96baebd6-dedc-4a68-84fd-f804ee1ed995\",\n...\n\"tcpPort\": 11211\n},\n{\n\"attributes\": {\n...\n},\n\"caches\": {},\n\"consistentId\": \"127.0.0.1:47501\",\n\"defaultCacheMode\": \"REPLICATED\",\n\"metrics\": {\n...\n},\n\"nodeId\": \"2bd7b049-3fa0-4c44-9a6d-b5c7a597ce37\",\n...\n\"tcpPort\": 11212\n}\n]"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Execute"
-}
-[/block]
-The **Execute** command executes the given task on the grid.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=exe&name=taskName&p1=param1&p2=param2&async=true",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "exe",
-    "0-1": "string",
-    "0-3": "Should be **exe** lowercase.",
-    "1-0": "name",
-    "1-1": "string",
-    "1-2": "",
-    "1-3": "Name of the task to execute.",
-    "1-4": "summ",
-    "2-0": "p1...pN",
-    "2-1": "string",
-    "2-3": "Arguments for the task execution.",
-    "2-4": "arg1...argN",
-    "3-0": "async",
-    "3-1": "boolean",
-    "3-3": "Determines whether the task is executed asynchronously.",
-    "3-4": "true",
-    "2-2": "Yes",
-    "3-2": "Yes"
-  },
-  "cols": 5,
-  "rows": 4
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"error\": \"\",\n  \"response\": {\n    \"error\": \"\",\n    \"finished\": true,\n    \"id\": \"~ee2d1688-2605-4613-8a57-6615a8cbcd1b\",\n    \"result\": 4\n  },\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "jsonObject",
-    "0-2": "JSON object containing the error message, the unique task identifier, the computation result, and the computation status.",
-    "0-3": "{\n\"error\": \"\",\n\"finished\": true,\n\"id\": \"~ee2d1688-2605-4613-8a57-6615a8cbcd1b\",\n\"result\": 4\n}"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Result"
-}
-[/block]
-The **Result** command returns the computation result for the given task.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "http://host:port/ignite?cmd=res&id=8daab5ea-af83-4d91-99b6-77ed2ca06647",
-      "language": "curl"
-    }
-  ]
-}
-[/block]
-##Request parameters
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "optional",
-    "h-3": "description",
-    "h-4": "example",
-    "0-0": "cmd",
-    "0-4": "res",
-    "0-1": "string",
-    "0-3": "Should be **res** lowercase.",
-    "1-0": "id",
-    "1-1": "string",
-    "1-2": "",
-    "1-3": "ID of the task whose result is to be returned.",
-    "1-4": "69ad0c48941-4689aae0-6b0e-4d52-8758-ce8fe26f497d~4689aae0-6b0e-4d52-8758-ce8fe26f497d"
-  },
-  "cols": 5,
-  "rows": 2
-}
-[/block]
-##Response example
-[block:code]
-{
-  "codes": [
-    {
-      "code": "{\n  \"error\": \"\",\n  \"response\": {\n    \"error\": \"\",\n    \"finished\": true,\n    \"id\": \"69ad0c48941-4689aae0-6b0e-4d52-8758-ce8fe26f497d~4689aae0-6b0e-4d52-8758-ce8fe26f497d\",\n    \"result\": 4\n  },\n  \"successStatus\": 0\n}",
-      "language": "json"
-    }
-  ]
-}
-[/block]
-
-[block:parameters]
-{
-  "data": {
-    "h-0": "name",
-    "h-1": "type",
-    "h-2": "description",
-    "h-3": "example",
-    "0-0": "response",
-    "0-1": "jsonObject",
-    "0-2": "JSON object containing the error message, the task ID, the computation result, and the computation status.",
-    "0-3": "{\n    \"error\": \"\",\n    \"finished\": true,\n    \"id\": \"~ee2d1688-2605-4613-8a57-6615a8cbcd1b\",\n    \"result\": 4\n}"
-  },
-  "cols": 4,
-  "rows": 1
-}
-[/block]
\ No newline at end of file
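The **Execute** and **Result** commands combine into a simple polling flow: submit a task with `cmd=exe&async=true`, then poll `cmd=res` with the returned task ID until `finished` is true. As a minimal sketch of the first step, the helper below only builds the command URLs; the host, port, and parameter names follow the examples above, while the class name `RestUrlBuilder` is a hypothetical illustration, not part of Ignite.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class RestUrlBuilder {
    /**
     * Builds an Ignite REST command URL of the form shown in the examples
     * above: http://host:port/ignite?cmd=...&key=value&... Parameter values
     * are URL-encoded; host and port are whatever the REST server listens on.
     */
    static String buildUrl(String host, int port, String cmd, Map<String, String> params) {
        StringBuilder sb = new StringBuilder("http://").append(host).append(':').append(port)
            .append("/ignite?cmd=").append(cmd);

        for (Map.Entry<String, String> e : params.entrySet())
            sb.append('&').append(e.getKey()).append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));

        return sb.toString();
    }

    public static void main(String[] args) {
        // Build a 'res' command URL for a previously submitted task ID.
        Map<String, String> params = new LinkedHashMap<>();
        params.put("id", "8daab5ea-af83-4d91-99b6-77ed2ca06647");

        System.out.println(buildUrl("localhost", 8080, "res", params));
    }
}
```

Fetching the URL and checking the `finished` flag in the JSON response is left to whatever HTTP and JSON libraries the client already uses.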
diff --git a/wiki/documentation/release-notes/release-notes.md b/wiki/documentation/release-notes/release-notes.md
deleted file mode 100644
index af4b352..0000000
--- a/wiki/documentation/release-notes/release-notes.md
+++ /dev/null
@@ -1,30 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-##Apache Ignite In-Memory Data Fabric 1.0
-This is the first release of the Apache Ignite project. The source code is based on the 7-year-old GridGain In-Memory Data Fabric, open source edition, v. 6.6.2, which was donated to the Apache Software Foundation in September 2014.
-
-The main feature set of Ignite In-Memory Data Fabric includes:
-* Advanced Clustering
-* Compute Grid
-* Data Grid
-* Service Grid
-* IGFS - Ignite File System
-* Distributed Data Structures
-* Distributed Messaging
-* Distributed Events
-* Streaming & CEP
\ No newline at end of file
diff --git a/wiki/documentation/service-grid/cluster-singletons.md b/wiki/documentation/service-grid/cluster-singletons.md
deleted file mode 100644
index 0b7beec..0000000
--- a/wiki/documentation/service-grid/cluster-singletons.md
+++ /dev/null
@@ -1,111 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-The `IgniteServices` facade allows you to deploy any number of services on any of the grid nodes. The most commonly used feature, however, is deploying singleton services on the cluster. Ignite will manage the singleton contract regardless of topology changes and node crashes.
-[block:callout]
-{
-  "type": "info",
-  "body": "Note that during topology changes there may be a short period when a singleton service instance is active on more than one node, e.g. due to network delays or a crash detection delay."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Cluster Singleton"
-}
-[/block]
-You can deploy a cluster-wide singleton service. Ignite guarantees that there is always one instance of the service in the cluster. If the cluster node on which the service is deployed crashes or stops, Ignite automatically redeploys it on another node. However, as long as the node on which the service is deployed remains in the topology, the service stays deployed on that node only, regardless of topology changes.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteServices svcs = ignite.services();\n\nsvcs.deployClusterSingleton(\"myClusterSingleton\", new MyService());",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-The above method is analogous to calling 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "svcs.deployMultiple(\"myClusterSingleton\", new MyService(), 1, 1)",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Node Singleton"
-}
-[/block]
-You can deploy a per-node singleton service. Ignite will guarantee that there is always one instance of the service running on each node. Whenever new nodes are started within the cluster group, Ignite will automatically deploy one instance of the service on every new node.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteServices svcs = ignite.services();\n\nsvcs.deployNodeSingleton(\"myNodeSingleton\", new MyService());",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-The above method is analogous to calling 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "svcs.deployMultiple(\"myNodeSingleton\", new MyService(), 0, 1);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Cache Key Affinity Singleton"
-}
-[/block]
-You can deploy one instance of this service on the primary node for a given affinity key. Whenever topology changes and primary key node assignment changes, Ignite will always make sure that the service is undeployed on the previous primary node and is deployed on the new primary node. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteServices svcs = ignite.services();\n\nsvcs.deployKeyAffinitySingleton(\"myKeySingleton\", new MyService(), \"myCache\", new MyCacheKey());",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-The above method is analogous to calling
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteServices svcs = ignite.services();\n\nServiceConfiguration cfg = new ServiceConfiguration();\n \ncfg.setName(\"myKeySingleton\");\ncfg.setService(new MyService());\ncfg.setCacheName(\"myCache\");\ncfg.setAffinityKey(new MyCacheKey());\ncfg.setTotalCount(1);\ncfg.setMaxPerNodeCount(1);\n \nsvcs.deploy(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/service-grid/service-configuration.md b/wiki/documentation/service-grid/service-configuration.md
deleted file mode 100644
index 61343da..0000000
--- a/wiki/documentation/service-grid/service-configuration.md
+++ /dev/null
@@ -1,50 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-In addition to deploying managed services by calling any of the provided `IgniteServices.deploy(...)` methods, you can also automatically deploy services on startup by setting the `serviceConfiguration` property of `IgniteConfiguration`:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.IgniteConfiguration\">\n    ...  \n    <!-- Distributed Service configuration. -->\n    <property name=\"serviceConfiguration\">\n        <list>\n            <bean class=\"org.apache.ignite.services.ServiceConfiguration\">\n                <property name=\"name\" value=\"MyClusterSingletonSvc\"/>\n                <property name=\"maxPerNodeCount\" value=\"1\"/>\n                <property name=\"totalCount\" value=\"1\"/>\n                <property name=\"service\">\n                  <ref bean=\"myServiceImpl\"/>\n                </property>\n            </bean>\n        </list>\n    </property>\n</bean>\n \n<bean id=\"myServiceImpl\" class=\"foo.bar.MyServiceImpl\">\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "ServiceConfiguration svcCfg1 = new ServiceConfiguration();\n \n// Cluster-wide singleton configuration.\nsvcCfg1.setName(\"MyClusterSingletonSvc\");\nsvcCfg1.setMaxPerNodeCount(1);\nsvcCfg1.setTotalCount(1);\nsvcCfg1.setService(new MyClusterSingletonImpl());\n \nServiceConfiguration svcCfg2 = new ServiceConfiguration();\n \n// Per-node singleton configuration.\nsvcCfg2.setName(\"MyNodeSingletonSvc\");\nsvcCfg2.setMaxPerNodeCount(1);\nsvcCfg2.setService(new MyNodeSingletonImpl());\n\nIgniteConfiguration igniteCfg = new IgniteConfiguration();\n \nigniteCfg.setServiceConfiguration(svcCfg1, svcCfg2);\n...\n\n// Start Ignite node.\nIgnition.start(igniteCfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Deploying After Startup"
-}
-[/block]
-You can configure and deploy services after the startup of Ignite nodes. Besides multiple convenience methods that allow deployment of various [cluster singletons](doc:cluster-singletons), you can also create and deploy a service with a custom configuration.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "ServiceConfiguration cfg = new ServiceConfiguration();\n \ncfg.setName(\"myService\");\ncfg.setService(new MyService());\n\n// Maximum of 4 service instances within cluster.\ncfg.setTotalCount(4);\n\n// Maximum of 2 service instances per each Ignite node.\ncfg.setMaxPerNodeCount(2);\n \nignite.services().deploy(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/service-grid/service-example.md b/wiki/documentation/service-grid/service-example.md
deleted file mode 100644
index e316af5..0000000
--- a/wiki/documentation/service-grid/service-example.md
+++ /dev/null
@@ -1,111 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Define Your Service Interface"
-}
-[/block]
-As an example, let's define a simple counter service as a `MyCounterService` interface. Note that this is a simple Java interface without any special annotations or methods.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public interface MyCounterService {\n    /**\n     * Increment counter value and return the new value.\n     */\n    int increment() throws CacheException;\n     \n    /**\n     * Get current counter value.\n     */\n    int get() throws CacheException;\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Service Implementation"
-}
-[/block]
-An implementation of a distributed service has to implement both the `Service` and `MyCounterService` interfaces.
-
-We implement our counter service by storing the counter value in cache. The key for this counter value is the name of the service. This allows us to reuse the same cache for multiple instances of the counter service.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class MyCounterServiceImpl implements Service, MyCounterService {\n  /** Auto-injected instance of Ignite. */\n  @IgniteInstanceResource\n  private Ignite ignite;\n\n  /** Distributed cache used to store counters. */\n  private IgniteCache<String, Integer> cache;\n\n  /** Service name. */\n  private String svcName;\n\n  /**\n   * Service initialization.\n   */\n  @Override public void init(ServiceContext ctx) {\n    // Pre-configured cache to store counters.\n    cache = ignite.cache(\"myCounterCache\");\n\n    svcName = ctx.name();\n\n    System.out.println(\"Service was initialized: \" + svcName);\n  }\n\n  /**\n   * Cancel this service.\n   */\n  @Override public void cancel(ServiceContext ctx) {\n    // Remove counter from cache.\n    cache.remove(svcName);\n    \n    System.out.println(\"Service was cancelled: \" + svcName);\n  }\n\n  /**\n   * Start service execution.\n   */\n  @Override public void execute(ServiceContext ctx) {\n    // Since our service is simply represented by a counter\n    // value stored in cache, there is nothing we need\n    // to do in order to start it up.\n    System.out.println(\"Executing distributed service: \" + svcName);\n  }\n\n  @Override public int get() throws CacheException {\n    Integer i = cache.get(svcName);\n\n    return i == null ? 0 : i;\n  }\n\n  @Override public int increment() throws CacheException {\n    return cache.invoke(svcName, new CounterEntryProcessor());\n  }\n\n  /**\n   * Entry processor which atomically increments value currently stored in cache.\n   */\n  private static class CounterEntryProcessor implements EntryProcessor<String, Integer, Integer> {\n    @Override public Integer process(MutableEntry<String, Integer> e, Object... args) {\n      int newVal = e.exists() ? e.getValue() + 1 : 1;\n      \n      // Update cache.\n      e.setValue(newVal);\n\n      return newVal;\n    }      \n  }\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Service Deployment"
-}
-[/block]
-We deploy our counter service as a per-node singleton within the cluster group of nodes on which our cache "myCounterCache" is deployed.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Cluster group which includes all caching nodes.\nClusterGroup cacheGrp = ignite.cluster().forCache(\"myCounterCache\");\n\n// Get an instance of IgniteServices for the cluster group.\nIgniteServices svcs = ignite.services(cacheGrp);\n \n// Deploy per-node singleton. An instance of the service\n// will be deployed on every node within the cluster group.\nsvcs.deployNodeSingleton(\"myCounterService\", new MyCounterServiceImpl());",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Service Proxy"
-}
-[/block]
-You can access an instance of the deployed service from any node within the cluster. If the service is deployed on that node, the locally deployed instance will be returned. Otherwise, if the service is not locally available, a remote proxy for the service will be created automatically.
-
-# Sticky vs Not-Sticky Proxies
-Proxies can be either *sticky* or *not-sticky*. If a proxy is sticky, Ignite will always go back to the same cluster node to contact a remotely deployed service. If a proxy is not-sticky, Ignite will load balance remote service proxy invocations among all cluster nodes on which the service is deployed.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Get service proxy for the deployed service.\nMyCounterService cntrSvc = ignite.services().\n  serviceProxy(\"myCounterService\", MyCounterService.class, /*not-sticky*/false);\n\n// Invoke a method on 'MyCounterService' interface.\ncntrSvc.increment();\n\n// Print latest counter value from our counter service.\nSystem.out.println(\"Incremented value : \" + cntrSvc.get());",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Access Service from Computations"
-}
-[/block]
-For convenience, you can inject an instance of a service proxy into your computation using the `@ServiceResource` annotation.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute = ignite.compute();\n\ncompute.run(new IgniteRunnable() {\n  @ServiceResource(serviceName = \"myCounterService\")\n  private MyCounterService counterSvc;\n  \n  public void run() {\n    // Invoke a method on 'MyCounterService' interface.\n    int newValue = counterSvc.increment();\n\n    // Print latest counter value from our counter service.\n    System.out.println(\"Incremented value : \" + newValue);\n  }\n});",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
diff --git a/wiki/documentation/service-grid/service-grid.md b/wiki/documentation/service-grid/service-grid.md
deleted file mode 100644
index 1620e93..0000000
--- a/wiki/documentation/service-grid/service-grid.md
+++ /dev/null
@@ -1,79 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Service Grid allows for deployments of arbitrary user-defined services on the cluster. You can implement and deploy any service, such as custom counters, ID generators, hierarchical maps, etc.
-
-Ignite allows you to control how many instances of your service should be deployed on each cluster node, and will automatically ensure proper deployment and fault tolerance of all the services.
-
-##Features
-  * **Continuous availability** of deployed services regardless of topology changes or crashes.
-  * Automatically deploy any number of distributed service instances in the cluster.
-  * Automatically deploy [singletons](doc:cluster-singletons), including cluster-singleton, node-singleton, or key-affinity-singleton.
-  * Automatically deploy distributed services on node start-up by specifying them in the configuration.
-  * Undeploy any of the deployed services.
-  * Get information about service deployment topology within the cluster.
-  * Create service proxy for accessing remotely deployed distributed services.
-[block:callout]
-{
-  "type": "success",
-  "body": "Please refer to [Service Example](doc:service-example) for information on service deployment and accessing service API."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteServices"
-}
-[/block]
-All service grid functionality is available via the `IgniteServices` interface.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\n// Get services instance spanning all nodes in the cluster.\nIgniteServices svcs = ignite.services();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-You can also limit the scope of service deployment to a Cluster Group. In this case, services will only span the nodes within the cluster group.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\nClusterGroup remoteGroup = ignite.cluster().forRemotes();\n\n// Limit service deployment only to remote nodes (exclude the local node).\nIgniteServices svcs = ignite.services(remoteGroup);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Load Balancing"
-}
-[/block]
-In all cases other than singleton deployment, Ignite automatically makes sure that an approximately equal number of service instances is deployed on each node within the cluster. Whenever the cluster topology changes, Ignite re-evaluates service deployments and may redeploy an already deployed service on another node for better load balancing.
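To make the balancing behavior concrete, the sketch below simulates the round-robin style spread that keeps per-node counts within one of each other when N non-singleton instances are deployed over a set of nodes. The class and method names are illustrative only, not Ignite API; Ignite performs this assignment internally.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ServiceSpread {
    /**
     * Spreads totalCnt service instances over the given nodes round-robin,
     * so per-node counts differ by at most one -- the "about equal" balance
     * described above.
     */
    static Map<String, Integer> assign(List<String> nodes, int totalCnt) {
        Map<String, Integer> perNode = new LinkedHashMap<>();

        for (String node : nodes)
            perNode.put(node, 0);

        // Assign instance i to node i mod N.
        for (int i = 0; i < totalCnt; i++)
            perNode.merge(nodes.get(i % nodes.size()), 1, Integer::sum);

        return perNode;
    }

    public static void main(String[] args) {
        // 7 instances over 3 nodes: counts are 3, 2, 2.
        System.out.println(assign(Arrays.asList("A", "B", "C"), 7));
    }
}
```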
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Fault Tolerance"
-}
-[/block]
-Ignite guarantees that services are continuously available and deployed according to the specified configuration, regardless of topology changes or node crashes.
\ No newline at end of file
diff --git a/wiki/licence-prepender.sh b/wiki/licence-prepender.sh
deleted file mode 100755
index 20f1a62..0000000
--- a/wiki/licence-prepender.sh
+++ /dev/null
@@ -1,51 +0,0 @@
-#!/bin/bash
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-osname=`uname`
-
-license="<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the \"License\"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an \"AS IS\" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->"
-
-case $osname in
-    Darwin*)
-        for file in `find . -name "*.md"`;
-        do
-            cat "$file" | pbcopy && echo "$license" > "$file" && pbpaste >> "$file"
-        done
-    ;;
-
-    *)
-        for file in `find . -name "*.md"`;
-        do
-            cat "$file" | xclip -i && echo "$license" > "$file" && xclip -o >> "$file"
-        done
-esac
\ No newline at end of file