Publishing from a4aae5212d8c89dc97568a7e00c1892456bda719
diff --git a/content/blog/2019/10/ycsb.html b/content/blog/2019/10/ycsb.html
index 10c554d..1f8efdc 100644
--- a/content/blog/2019/10/ycsb.html
+++ b/content/blog/2019/10/ycsb.html
@@ -230,7 +230,7 @@
 When measuring the latency performance of a system, what you actually want to see is how the latency is affected as the system gets increasingly loaded. The YCSB benchmark is based on a synchronous database interface for updates and reads, which means that creating high system load essentially requires a large number of threads and, most likely, a large number of machines. Crail, on the other hand, has an asynchronous interface, so it is relatively straightforward for a single client to keep multiple outstanding operations in flight. 
 </p>
 <p>
-We used Crail's asynchronous API to benchmark Crail's key-value performance under load. In a first set of experiments, we increase the number of clients from 1 to 64 but each client always only has one outstanding PUT/GET operation in flight. The two figures below show the latency (shown on the y-axis) of Crail's DRAM, Optane and Flash tiers under increasing load measured in terms of operations per second (shown on the x-axis). As can be seen, Crail delivers stable latencies up to a reasonably high throughput. For DRAM, the get latencies stay at 12-15μs up to 4M IOPS, at which point the metadata server became the bottleneck (note: Crail's metadata plane can be scaled out by adding more metadata servers if needed). For the Optane NVM configuration, latencies stay at 20μs up until almost 1M IOPS, which is very close to the device limit (we have two Intel Optane SSDs in single machine). The Flash latencies are higher but the Samsung drives combined (we have 16 Samsung drives in 4 machines) also have a higher throughput limit. In fact, 64 clients with queue depth 1 could not saturate the Samsung devices.
+We used Crail's asynchronous API to benchmark Crail's key-value performance under load. In a first set of experiments, we increase the number of clients from 1 to 64, but each client has only one outstanding PUT/GET operation in flight at any time. The two figures below show the latency (y-axis) of Crail's DRAM, Optane and Flash tiers under increasing load, measured in operations per second (x-axis). As can be seen, Crail delivers stable latencies up to a reasonably high throughput. For DRAM, the GET latencies stay at 12-15μs up to 4M IOPS, at which point the metadata server becomes the bottleneck (note: Crail's metadata plane can be scaled out by adding more metadata servers if needed). For the Optane NVM configuration, latencies stay at 20μs up to almost 1M IOPS, which is very close to the device limit (we have two Intel Optane SSDs in a single machine). The Flash latencies are higher, but the Samsung drives combined (we have 16 Samsung drives in 4 machines) also have a higher throughput limit. In fact, 64 clients with queue depth 1 could not saturate the Samsung devices.
 </p>
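+<p>
+The load-generation pattern used here is easy to express in code. The sketch below is a minimal illustration, in plain Java, of keeping a fixed number of asynchronous GET operations in flight per client; the AsyncKV interface is a hypothetical stand-in for Crail's asynchronous key-value API, not the actual Crail interface.
+</p>
+<pre><code>
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.atomic.LongAdder;
+
+// Hypothetical stand-in for Crail's asynchronous key-value API.
+interface AsyncKV {
+    CompletableFuture&lt;byte[]&gt; get(byte[] key);
+}
+
+public class QueueDepthLoadGen {
+    static final int QUEUE_DEPTH = 1;   // one outstanding GET per client, as in the experiment
+    static final int TOTAL_OPS = 100_000;
+
+    static void run(AsyncKV kv, byte[] key) throws InterruptedException {
+        Semaphore inFlight = new Semaphore(QUEUE_DEPTH); // caps outstanding operations
+        LongAdder latencyNs = new LongAdder();
+        for (int i = 0; i &lt; TOTAL_OPS; i++) {
+            inFlight.acquire();                 // wait for a free slot
+            long start = System.nanoTime();
+            kv.get(key).whenComplete((value, err) -&gt; {
+                latencyNs.add(System.nanoTime() - start);
+                inFlight.release();             // completion frees the slot
+            });
+        }
+        inFlight.acquire(QUEUE_DEPTH);          // drain all outstanding operations
+        System.out.printf("mean GET latency: %.1f us%n",
+                latencyNs.sum() / (double) TOTAL_OPS / 1_000.0);
+    }
+}
+</code></pre>
+<p>
+Raising QUEUE_DEPTH above 1 is how a single client can drive the higher loads needed to saturate the Flash tier, without adding threads or machines.
+</p>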
 </div>
 
diff --git a/content/feed.xml b/content/feed.xml
index 115ebe8..b00704c 100644
--- a/content/feed.xml
+++ b/content/feed.xml
@@ -1,4 +1,4 @@
-<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.8.5">Jekyll</generator><link href="http://crail.incubator.apache.org//feed.xml" rel="self" type="application/atom+xml" /><link href="http://crail.incubator.apache.org//" rel="alternate" type="text/html" /><updated>2019-10-10T11:01:13+02:00</updated><id>http://crail.incubator.apache.org//feed.xml</id><title type="html">The Apache Crail (Incubating) Project</title><entry><title type="html">YCSB Benchmark with Crail on DRAM, Flash and Optane over RDMA and NVMe-over-Fabrics</title><link href="http://crail.incubator.apache.org//blog/2019/10/ycsb.html" rel="alternate" type="text/html" title="YCSB Benchmark with Crail on DRAM, Flash and Optane over RDMA and NVMe-over-Fabrics" /><published>2019-10-09T00:00:00+02:00</published><updated>2019-10-09T00:00:00+02:00</updated><id>http://crail.incubator.apache.org//blog/2019/10/ycsb</id><content type="html" xml:base="http://crail.incubator.apache.org//blog/2019/10/ycsb.html">&lt;div style=&quot;text-align: justify&quot;&gt; 
+<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.8.5">Jekyll</generator><link href="http://crail.incubator.apache.org//feed.xml" rel="self" type="application/atom+xml" /><link href="http://crail.incubator.apache.org//" rel="alternate" type="text/html" /><updated>2019-10-10T11:04:15+02:00</updated><id>http://crail.incubator.apache.org//feed.xml</id><title type="html">The Apache Crail (Incubating) Project</title><entry><title type="html">YCSB Benchmark with Crail on DRAM, Flash and Optane over RDMA and NVMe-over-Fabrics</title><link href="http://crail.incubator.apache.org//blog/2019/10/ycsb.html" rel="alternate" type="text/html" title="YCSB Benchmark with Crail on DRAM, Flash and Optane over RDMA and NVMe-over-Fabrics" /><published>2019-10-09T00:00:00+02:00</published><updated>2019-10-09T00:00:00+02:00</updated><id>http://crail.incubator.apache.org//blog/2019/10/ycsb</id><content type="html" xml:base="http://crail.incubator.apache.org//blog/2019/10/ycsb.html">&lt;div style=&quot;text-align: justify&quot;&gt; 
 &lt;p&gt;
 Recently, support for Crail has been added to the &lt;a href=&quot;https://github.com/brianfrankcooper/YCSB&quot;&gt;YCSB&lt;/a&gt; benchmark suite. In this blog we describe how to run the benchmark and briefly show some performance comparisons between Crail and other key-value stores running on DRAM, Flash and Optane, such as &lt;a href=&quot;https://www.aerospike.com&quot;&gt;Aerospike&lt;/a&gt; or &lt;a href=&quot;https://ramcloud.atlassian.net/wiki/spaces/RAM/overview&quot;&gt;RAMCloud&lt;/a&gt;. 
 &lt;/p&gt;
@@ -151,7 +151,7 @@
 When measuring the latency performance of a system, what you actually want to see is how the latency is affected as the system gets increasingly loaded. The YCSB benchmark is based on a synchronous database interface for updates and reads, which means that creating high system load essentially requires a large number of threads and, most likely, a large number of machines. Crail, on the other hand, has an asynchronous interface, so it is relatively straightforward for a single client to keep multiple outstanding operations in flight. 
 &lt;/p&gt;
 &lt;p&gt;
-We used Crail's asynchronous API to benchmark Crail's key-value performance under load. In a first set of experiments, we increase the number of clients from 1 to 64 but each client always only has one outstanding PUT/GET operation in flight. The two figures below show the latency (shown on the y-axis) of Crail's DRAM, Optane and Flash tiers under increasing load measured in terms of operations per second (shown on the x-axis). As can be seen, Crail delivers stable latencies up to a reasonably high throughput. For DRAM, the get latencies stay at 12-15μs up to 4M IOPS, at which point the metadata server became the bottleneck (note: Crail's metadata plane can be scaled out by adding more metadata servers if needed). For the Optane NVM configuration, latencies stay at 20μs up until almost 1M IOPS, which is very close to the device limit (we have two Intel Optane SSDs in single machine). The Flash latencies are higher but the Samsung drives combined (we have 16 Samsung drives in 4 machines) also have a higher throughput limit. In fact, 64 clients with queue depth 1 could not saturate the Samsung devices.
+We used Crail's asynchronous API to benchmark Crail's key-value performance under load. In a first set of experiments, we increase the number of clients from 1 to 64, but each client has only one outstanding PUT/GET operation in flight at any time. The two figures below show the latency (y-axis) of Crail's DRAM, Optane and Flash tiers under increasing load, measured in operations per second (x-axis). As can be seen, Crail delivers stable latencies up to a reasonably high throughput. For DRAM, the GET latencies stay at 12-15μs up to 4M IOPS, at which point the metadata server becomes the bottleneck (note: Crail's metadata plane can be scaled out by adding more metadata servers if needed). For the Optane NVM configuration, latencies stay at 20μs up to almost 1M IOPS, which is very close to the device limit (we have two Intel Optane SSDs in a single machine). The Flash latencies are higher, but the Samsung drives combined (we have 16 Samsung drives in 4 machines) also have a higher throughput limit. In fact, 64 clients with queue depth 1 could not saturate the Samsung devices.
 &lt;/p&gt;
 &lt;/div&gt;