Sunday, December 7, 2025

Why you should invest in a really fast all-flash or NVMe-based networked storage system!


Before 2001, the computing world knew only single-core processors. In 2001, IBM introduced the world’s first multicore processor, the POWER4, with two 64-bit microprocessor cores on the same die. Today, a single chip die is more popularly called a processor socket. The first dual-core Athlon processor was launched by AMD on April 21, 2005, and the dual-core Xeon processor was launched by Intel on October 10, 2005.

The frequency of a processor core determines how fast a compute instruction is executed, so single-core processors were essentially doing sequential computing. However, large computers and servers with multiple processor sockets could do parallel computing, supporting symmetric multi-processing (SMP). To take advantage of SMP, both the operating system and the application software had to be multi-threaded: the software had to distribute its compute requirements into multiple threads, which the operating system could then schedule across the available processors.
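The idea of splitting work into independent threads of execution that the operating system can schedule across processors can be sketched in a few lines. This is a minimal illustration using Python's multiprocessing module, not code from the article:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker process computes its share independently, so the
    # operating system can schedule the workers on different cores (SMP).
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the work into 4 interleaved pieces, one per worker
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same result as sum(data), computed in parallel
```

The design point is exactly the one made above: the software, not the hardware, must divide the problem into parallel pieces before SMP can help.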

But why was there a need for higher core frequency in processors?

Before the era of multicore processors, the world’s major software was licensed on a per-server, per-physical-processor, or per-socket basis. Currently, most database software, which accounts for some of the most expensive licenses used by enterprises across the globe, is licensed per core. Essentially, to derive more performance from your databases and make your applications run faster, you need to allocate more processor cores to the databases.

So how do we reduce the cost of an expensive database, or of any other software licensed on a per-core basis? No processor core works in isolation. It must interact with Random Access Memory (RAM) and with I/O such as USB and Ethernet, and, most important of all, with the disk, which is the persistent data storage system in a compute architecture. These interfaces may not be as fast as the processor itself, which means an interface may not be able to respond within a CPU instruction cycle. This is where the processor wait state comes into the picture: a wait state is a delay experienced by a processor when accessing external memory or another device that is slow to respond.

If we compare the interfaces of RAM, I/O, and the storage disk, the storage disk has traditionally been the slowest component in the entire stack to respond to the processor. RAM and I/O do induce wait states in the processor cores, but far fewer; the storage disk induces by far the most.
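To put rough numbers on this, take typical order-of-magnitude access latencies (illustrative figures I am assuming here, not measurements from any specific system) and count how many cycles a 3 GHz core could have executed while waiting:

```python
CPU_GHZ = 3.0
cycle_ns = 1 / CPU_GHZ  # roughly 0.33 ns per cycle at 3 GHz

# Illustrative order-of-magnitude latencies (assumed, not measured)
latency_ns = {
    "RAM": 100,                    # ~100 ns
    "NVMe flash disk": 20_000,     # ~20 microseconds
    "SATA flash disk": 100_000,    # ~100 microseconds
    "Mechanical disk": 5_000_000,  # ~5 milliseconds
}

for device, ns in latency_ns.items():
    wasted_cycles = int(ns / cycle_ns)
    print(f"{device}: ~{wasted_cycles:,} CPU cycles spent waiting")
```

Even with these rough assumptions, the gap is stark: a single mechanical-disk access can cost millions of cycles, while an NVMe access costs tens of thousands and a RAM access only hundreds.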

My point here is: if we are able to increase the speed at which the disk responds to the processor, wait states drop, more processor-core cycles go to actual computation, and our application software responds faster.

Now coming to the disk storage part: mechanical disks are typically the slowest of the lot, flash disks are much faster, and the latest NVMe flash disks are the fastest. Cost also increases in direct proportion to performance. Another factor that induces latency in the disk subsystem is the disk interface or storage network. The most popular direct-attached disk interfaces are SAS and SATA, which I do not want to discuss here for obvious reasons; for networked storage systems, the most popular interfaces are Fibre Channel and iSCSI over Ethernet. With high-speed, low-latency Ethernet such as 40G and 100G, iSCSI is becoming very popular. The advantages of iSCSI are reduced complexity of deployment and management, and the flexibility of a TCP/IP network. But Fibre Channel had traditionally been the fastest, with the lowest latencies, until NVMe over Fabrics (NVMe-oF) was introduced.
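As a concrete illustration of how a Linux host attaches to such networked storage, here is a sketch using the standard nvme-cli and open-iscsi tools. The IP addresses, port numbers, and NQN below are placeholders, and exact flags can vary by distribution:

```shell
# Discover NVMe-oF subsystems exported by a storage array over TCP
# (10.0.0.5 and the NQN below are placeholder values)
nvme discover -t tcp -a 10.0.0.5 -s 4420

# Connect to a discovered subsystem; its namespaces then appear
# to the host as local block devices such as /dev/nvmeXnY
nvme connect -t tcp -a 10.0.0.5 -s 4420 \
    -n nqn.2019-08.org.example:storage-subsystem

# For comparison, attaching an iSCSI LUN with open-iscsi
iscsiadm -m discovery -t sendtargets -p 10.0.0.6:3260
iscsiadm -m node -p 10.0.0.6:3260 --login
```

In both cases the application sees an ordinary block device; the difference is in how many protocol layers, and how much latency, sit between the processor and the flash.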

If you invest in a really fast storage system, such as one based on NVMe disks, the disk subsystem reduces processor wait states and dramatically cuts processor-cycle usage. That means you can run the same application with the same or better performance on far fewer processor cores, and end up saving a lot in licensing costs.
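The licensing argument can be made concrete with some back-of-the-envelope arithmetic. The per-core price and core counts below are hypothetical, purely for illustration:

```python
# Hypothetical figures for illustration only
price_per_core = 10_000        # assumed annual license cost per core
cores_slow_storage = 32        # cores needed when they stall on disk waits
cores_nvme_storage = 20        # cores needed once wait states shrink

cost_slow = cores_slow_storage * price_per_core
cost_nvme = cores_nvme_storage * price_per_core
savings = cost_slow - cost_nvme

print(f"Annual license cost on slow storage: {cost_slow:,}")
print(f"Annual license cost on NVMe storage: {cost_nvme:,}")
print(f"Annual savings: {savings:,}")
```

With these assumed numbers the savings come to 120,000 per year; the real figures depend entirely on your vendor's price list and how disk-bound your workload actually is.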

When I architected the setup of ZeaCloud, my clear choice was NVMe storage with NVMe-oF, which lets us run more workloads on the same processors and makes for happier customers because of better application performance.

Author: Mr. Santosh Agrawal, CEO of Esconet Technologies and IT & Cloud Infrastructure Architect
