---
layout: post
title: "Crail Storage Performance -- Part II: NVMf"
author: Jonas Pfefferle
category: blog
comments: true
---
### Hardware Configuration
The specific cluster configuration used for the experiments in this blog:
- Cluster
  - Node configuration
    - CPU: 2x OpenPOWER Power8 10-core @ 2.9GHz
    - DRAM: 512GB DDR4
    - Storage: 4x 512GB Samsung 960Pro NVMe SSDs (512byte sector size, no metadata)
    - Network: 1x 100Gbit/s Mellanox ConnectX-4 IB
- Software
  - RedHat 7.3 with Linux kernel version 3.10
  - Crail 1.0, internal version 2843
  - SPDK git commit 5109f56ea5e85b99207556c4ff1d48aa638e7ceb with patches for POWER support
  - DPDK git commit bb7927fd2179d7482de58d87352ecc50c69da427
### The Crail NVMf Storage Tier
### Performance comparison to native SPDK NVMf
#### Sequential Throughput
#### Random Read Latency
### Tiering DRAM - NVMf
To summarize, in this blog we have shown that the NVMf storage backend for Crail -- thanks to its efficient user-level implementation -- offers latencies and throughput very close to the speed of the hardware. The Crail NVMf storage tier can conveniently be combined with the Crail DRAM tier, either to reduce cost or to handle situations where the available DRAM is not large enough to hold the working set of a data processing workload.
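As a rough illustration of how such a DRAM/NVMf combination is set up, the storage tiers are declared in Crail's `crail-site.conf`. The following is a minimal sketch only; the class names and NVMf properties shown (`com.ibm.crail.storage.rdma.RdmaStorageTier`, `com.ibm.crail.storage.nvmf.NvmfStorageTier`, `crail.storage.nvmf.*`) are assumptions based on the Crail 1.0 generation of the code and should be verified against the documentation of your Crail build:

```
# Sketch of a crail-site.conf enabling a DRAM (RDMA) tier and an NVMf tier.
# Tiers listed first have higher priority, so data spills from DRAM to NVMf
# once the DRAM tier is full. Class and property names are assumptions.
crail.storage.types      com.ibm.crail.storage.rdma.RdmaStorageTier,com.ibm.crail.storage.nvmf.NvmfStorageTier

# Hypothetical NVMf target endpoint (address/port/NQN are placeholders)
crail.storage.nvmf.ip    10.10.0.1
crail.storage.nvmf.port  4420
```

With a configuration along these lines, applications keep using the plain Crail file system API; the tiering between DRAM and flash is handled transparently underneath.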