This post is part of the blog series related to Storage Field Day 13 and the Pure Storage Accelerate conferences. Find all of the SFD13 content, presentations, articles, presenting companies and delegates here.
Redefining X-IO Axellio
I wrote about X-IO and Axellio before flying to Storage Field Day 13. At the time, despite writing as comprehensively as my knowledge allowed, I wasn't fully aware of Axellio's purpose, and you can even see me wondering whether it is "Tier-0 hyperconverged" or a new kind of storage appliance. I stand corrected on both counts, as either qualification would presuppose a certain level of scalability.
Is that bad? I don't think so, though from a pure nerd perspective it is a bit disappointing that it is not intended for general consumption "as-is". Axellio is an OEM solution, meant to be consumed by vendors developing business-critical (or rather performance-critical) applications and solutions. The Axellio platform is designed as a self-contained 2U appliance on top of which applications run. Tuning it to fit into a larger storage system would require substantial changes to the existing architecture and would likely introduce additional I/O hops and, with them, added latency, however minimal.
The Edge Explained
The concept of Edge Computing as put forth by X-IO answers the problem of data locality in use cases where massive amounts of data must be ingested simultaneously and consistently at very low latency.
This data needs not only to be processed (often in real time) but also to be stored. The more data is ingested, the more its mass increases: the heavier it becomes (the larger the dataset), the longer it would take to eventually transport it from point A (the edge, where data is processed) to point B (the core, your data center), if it even makes sense to transport it at all.
In the case of Axellio, the expectation is data locality: a self-contained OEM appliance that handles data ingestion, processing and output locally, because the imperatives of low latency and real-time processing make any sort of data movement inefficient and pointless. The outputs generated by applications that will hypothetically run on the Axellio platform may eventually need to be moved from the edge back to the core, but that is another story, and it doesn't invalidate the previous arguments.
Discovering the keystone of X-IO Axellio
I partially covered the architecture of Axellio in my previous post, but didn't cover in depth what makes this platform unique. Of course, 72 dual-port NVMe SSD drives is nice. Indeed, up to four Xeon E5-2699 v4 processors is sexy. But the keystone of the Axellio architecture remains the use of the system's PCI Express lanes as the communication layer, in lieu of more traditional network interfaces. Why the PCIe bus? It's integrated into all motherboards and CPUs, and available "for free". Each Intel Xeon E5 v4 processor, even the cheapest, provides 40 PCIe 3.0 lanes. One PCIe 3.0 lane delivers roughly 1 GB/s of bandwidth in each direction (8 GT/s signaling with 128b/130b encoding), or about 2 GB/s full-duplex. Using all 40 lanes of a single Xeon E5 v4 processor in full-duplex, we can leverage roughly 80 GB/s of bandwidth per processor, at very low latency, and without having to traverse a network and the other layers of the communication stack.
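To make the bandwidth math above concrete, here is a quick back-of-the-envelope sketch. The figures (8 GT/s, 128b/130b encoding, 40 lanes) are standard PCIe 3.0 and Xeon E5 v4 characteristics, not numbers supplied by X-IO:

```python
# Back-of-the-envelope PCIe 3.0 bandwidth math (public spec figures).
GT_PER_S = 8e9          # PCIe 3.0 raw signaling rate: 8 GT/s per lane, per direction
ENCODING = 128 / 130    # 128b/130b line encoding overhead
LANES = 40              # PCIe lanes per Intel Xeon E5 v4 processor

per_lane = GT_PER_S * ENCODING / 8        # usable bytes/s per lane, one direction
per_cpu_half = per_lane * LANES           # all lanes, one direction
per_cpu_full = per_cpu_half * 2           # full-duplex

print(f"per lane:          {per_lane / 1e9:.2f} GB/s")      # ~0.98 GB/s
print(f"per CPU (one way): {per_cpu_half / 1e9:.1f} GB/s")  # ~39.4 GB/s
print(f"per CPU (duplex):  {per_cpu_full / 1e9:.1f} GB/s")  # ~78.8 GB/s
```

The ~78.8 GB/s result is the theoretical ceiling for a single processor; protocol overhead and real workloads will land below it.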
Used in conjunction with the NVMe protocol, which was designed for modern storage and offers not only deep queues (64K commands per queue vs. 32 commands per queue for SATA) but also massive parallelism (64K queues vs. a single queue for SATA), the potential in terms of speed, low latency and data ingestion capability is nothing short of phenomenal, especially considering that this architecture completely eliminates network interconnects between the two compute nodes bound within the appliance.
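The parallelism gap is easier to appreciate when you multiply it out. The sketch below uses the spec-level maximums mentioned above (these are protocol ceilings, not what any single drive necessarily implements):

```python
# Illustrative comparison of maximum outstanding commands (spec ceilings).
# NVMe: up to 65,535 I/O queues, each up to 65,536 commands deep.
# AHCI/SATA: a single queue with 32 outstanding commands (NCQ).
nvme_queues, nvme_depth = 65_535, 65_536
sata_queues, sata_depth = 1, 32

nvme_outstanding = nvme_queues * nvme_depth
sata_outstanding = sata_queues * sata_depth

print(f"NVMe max outstanding commands: {nvme_outstanding:,}")  # 4,294,901,760
print(f"SATA max outstanding commands: {sata_outstanding:,}")  # 32
```

Real devices expose far fewer queues than the ceiling, but even a modest NVMe drive with a queue per CPU core dwarfs SATA's single 32-deep queue.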
The high storage density is achieved by X-IO's proprietary FabricXpress interconnect, which leverages and extends the traditional PCI Express architecture while allowing up to 72 dual-port NVMe SSD drives to be connected, along with extra PCIe modules such as Xeon Phi coprocessors, NVIDIA Tesla cards and other compute-specific modules.
And What About Performance?
X-IO provided us with an overview of their platform's performance. Using 48 NVMe SSD drives and 4 KB blocks, they tested 100% random writes, then 100% random reads. The write profile reached 7.8 million IOPS at 30.5 GB/s of bandwidth and 90 µs latency with a queue depth of 8. The read profile reached 8.6 million IOPS at 33.7 GB/s of bandwidth and 164 µs latency with a queue depth of 16. Definitely not your Mom & Pop's 2-node commodity server.
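As a quick sanity check, bandwidth should roughly equal IOPS times block size. The sketch below assumes 4 KiB (4,096-byte) blocks; the small gap versus the reported numbers is plausibly down to IOPS rounding and GB-vs-GiB ambiguity in the original figures:

```python
# Sanity-check the quoted figures: bandwidth ~ IOPS * block size.
BLOCK = 4096  # assumed 4 KiB block size

profiles = {
    "100% random write": (7.8e6, 30.5),  # (reported IOPS, reported GB/s)
    "100% random read":  (8.6e6, 33.7),
}

for name, (iops, reported) in profiles.items():
    computed = iops * BLOCK / 1e9  # decimal GB/s
    print(f"{name}: {computed:.1f} GB/s computed vs {reported} GB/s reported")
```

The computed values land within a few percent of the reported ones, which is close enough to confirm the figures are internally consistent.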
Interestingly, because the PCIe bus is full-duplex, it's also possible to perform read and write operations in parallel. In fact, X-IO were able to measure up to 60 GB/s of bandwidth while doing reads and writes simultaneously.
Today's world generates massive, apocalyptic, abyssal amounts of data, and solutions like Axellio are a great way to tackle the problem of massive ingest and processing. X-IO did not just introduce us to the Edge Computing use case and term. They also convinced us (or me, at least), through very solid argumentation, that Edge Computing is not the next big buzzword but an entirely new kind of architecture, one that has emerged thanks to the combined (and seemingly limitless) possibilities given to us by the PCI Express bus and the NVMe protocol.
In "The Brave New World of NVMe" episode of Gestalt IT's On-Premise IT Roundtable podcast, which (predictably) was dedicated to NVMe, my friend Ray Lucchesi accurately prophesied that we are just at the beginning of the NVMe revolution: it's going to change your world! I cannot even imagine what will arise from the NVMe revolution, but X-IO's Axellio is a pretty damn exciting preview of what's in store for the coming years.
This disclosure is written specifically for the Storage Field Day 13 and Pure Storage Accelerate events. I was invited to the Pure Storage Accelerate and Storage Field Day 13 events by Gestalt IT & Pure Storage. Gestalt IT & Pure Storage covered travel expenses to the event; accommodation and food were also covered for the entire event duration. Transportation from home to PRG airport and back, transportation from SFO airport to the hotel, as well as food and accommodation costs (on 10-Jun-17), were covered by me.
I did not receive any compensation for participating in this event, for which I took unpaid time off in order to attend (as is the case with any event I participate in). I am not obliged to blog or produce any kind of content. Any tweets, blog articles or any other form of content I have produced or may produce in the future related to this event are the exclusive product of my interest in technology and my desire to share information with my peers.
In line with the principles of freedom of thought and critical thinking, I commit to sharing only my own point of view and analysis of any products, technologies, strategies and concepts I was introduced to.