Details about the Optane SSDs and Flash drives
diff --git a/site/_posts/blog/2019-10-09-ycsb.md b/site/_posts/blog/2019-10-09-ycsb.md
index 0649e46..ce9a588 100644
--- a/site/_posts/blog/2019-10-09-ycsb.md
+++ b/site/_posts/blog/2019-10-09-ycsb.md
@@ -167,7 +167,7 @@
 When measuring the latency performance of a system, what you actually want to see is how the latency is affected as the system gets increasingly loaded. The YCSB benchmark is based on a synchronous database interface for updates and reads which means that in order to create high system load one essentially needs a large number of threads, and, most likely a large number of machines. Crail, on the other hand, does have an asynchronous interface and it is relatively straightforward to manage multiple simultaneous outstanding operations per client. 
 </p>
 <p>
-We used Crail's asynchronous API to benchmark Crail's key-value performance under load. In a first set of experiments, we increase the number of clients from 1 to 64 but each client always only has one outstanding PUT/GET operation in flight. The two figures below show the latency (shown on the y-axis) of Crail's DRAM, Optane and Flash tiers under increasing load measured in terms of operations per second (shown on the x-axis). As can be seen, Crail delivers stable latencies up to a reasonably high throughput. For DRAM, the get latencies stay at 12-15μs up to 4M IOPS, at which point the metadata server became the bottleneck (note: Crail's metadata plane can be scaled out by adding more metadata servers if needed). For the Optane NVM configuration, latencies stay at 20μs up until almost 1M IOPS, which is very close to the device limit. The Flash latencies are higher but the Samsung drives combined also have a higher throughput limit. In fact, 64 clients with queue depth 1 could not saturate the Samsung devices.
+We used Crail's asynchronous API to benchmark Crail's key-value performance under load. In a first set of experiments, we increase the number of clients from 1 to 64, but each client always has only one outstanding PUT/GET operation in flight. The two figures below show the latency (shown on the y-axis) of Crail's DRAM, Optane and Flash tiers under increasing load measured in terms of operations per second (shown on the x-axis). As can be seen, Crail delivers stable latencies up to a reasonably high throughput. For DRAM, the get latencies stay at 12-15μs up to 4M IOPS, at which point the metadata server became the bottleneck (note: Crail's metadata plane can be scaled out by adding more metadata servers if needed). For the Optane NVM configuration, latencies stay at 20μs up to almost 1M IOPS, which is very close to the device limit (we have two Intel Optane SSDs in a single machine). The Flash latencies are higher, but the Samsung drives combined (we have 16 Samsung drives across 4 machines) also have a higher throughput limit. In fact, 64 clients with queue depth 1 could not saturate the Samsung devices.
 </p>
 </div>
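The load-generation pattern the added paragraph describes — each client keeping a fixed number of asynchronous operations in flight rather than spawning one thread per request — can be sketched roughly as below. This is a hypothetical illustration using a stand-in `asyncGet`, not Crail's actual API; `KvLoadSketch`, `runWithQueueDepth`, and the thread-pool size are made up for the example.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a client that keeps a bounded number of
// outstanding async GETs in flight (the "queue depth" in the post).
public class KvLoadSketch {

    // Stand-in for an asynchronous key-value GET; Crail's real API differs.
    static CompletableFuture<String> asyncGet(ExecutorService pool, int key) {
        return CompletableFuture.supplyAsync(() -> "value-" + key, pool);
    }

    // Issue totalOps GETs while never exceeding queueDepth operations in flight.
    public static int runWithQueueDepth(int totalOps, int queueDepth) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Semaphore inFlight = new Semaphore(queueDepth);   // bounds outstanding ops
        AtomicInteger completed = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(totalOps);
        for (int i = 0; i < totalOps; i++) {
            inFlight.acquire();                // wait until a slot frees up
            asyncGet(pool, i).whenComplete((v, t) -> {
                completed.incrementAndGet();
                inFlight.release();            // free the slot for the next op
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return completed.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runWithQueueDepth(1000, 8)); // prints 1000
    }
}
```

With a synchronous interface (as in stock YCSB), the `inFlight.acquire()` slot could only be filled by a dedicated thread, which is why high load there requires many threads and machines.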