A few weeks ago I had the opportunity to be briefed by the folks at Liqid, a Colorado-based startup. Their website claims that “The future is on-demand composable infrastructure”. Several companies now claim to operate in this space, so we set out to understand what Liqid’s vision is about.
The Vision
Liqid’s vision is that for composable infrastructure to make sense, it needs to be fully abstracted, in the sense that the underlying hardware simply becomes a consumable pool of resources regardless of type (CPU, GPU, RAM, storage, etc.). Furthermore, resources should not only be pooled, but pooled without any of the constraints of current server architectures, which bind all of these major components within a single box.
The concept behind true composable infrastructure is the disaggregation of resources into independent pools: just as you can buy trays (racks) of NVMe JBOF storage, you should also be able to buy trays of CPUs, GPUs, or RAM. And obviously, you should be able not only to scale resources independently, but also to slice them dynamically as needed, without going through complex re-engineering of the underlying hardware platform.
Implementation
“How?!” is the question I asked myself during the briefing call. We are so used to the standard x86 architecture model of one server with everything inside that it boggles the mind to imagine a box filled with RAM or CPUs, dependent on other components, not self-sufficient, and without a motherboard (a traditional one, at least).
What Liqid sells is, to put it in layman’s terms, a top-of-rack PCIe switch (the Liqid Grid PCIe fabric switch) fitted with their patented software (Liqid Command Center), which orchestrates and manages the entire environment. Customers then bring in readily available off-the-shelf resource trays (CPU, GPU, RAM, storage, etc.), connect them to the PCIe ToR switch, and are more or less ready to go.
Liqid’s composable infrastructure implementation is bare-metal based and does not aim to target virtualization use cases. The idea is to create abstracted bare-metal servers that take up slices of the pooled resources, instead of sharing resources as in a virtualized environment. Such a server has its allocated resources guaranteed, as it is, in fact, a physical server, even if abstracted from the tangible, geometrical constructs we know.
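To make the idea concrete, here is a minimal sketch of what composing a bare-metal server from disaggregated pools could look like. All names (`ResourcePool`, `compose_server`, and so on) are my own illustrative inventions, not Liqid’s actual API; the point is simply that resources are carved out of shared pools and dedicated to the composed server, rather than time-shared by a hypervisor.

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """Tracks free units of one resource type (CPUs, GPUs, GB of RAM...).
    Hypothetical model, not Liqid's real software."""
    kind: str
    free: int

    def allocate(self, amount: int) -> int:
        # Composed servers get dedicated units; an over-ask simply fails.
        if amount > self.free:
            raise RuntimeError(f"not enough free {self.kind}")
        self.free -= amount
        return amount

@dataclass
class ComposedServer:
    """A bare-metal server carved out of the pools; its resources are
    guaranteed because they are dedicated, not shared."""
    name: str
    resources: dict = field(default_factory=dict)

def compose_server(name: str, pools: dict, request: dict) -> ComposedServer:
    server = ComposedServer(name)
    for kind, amount in request.items():
        server.resources[kind] = pools[kind].allocate(amount)
    return server

pools = {
    "cpu": ResourcePool("cpu", 64),
    "gpu": ResourcePool("gpu", 8),
    "ram_gb": ResourcePool("ram_gb", 1024),
}
node = compose_server("ml-node-01", pools, {"cpu": 16, "gpu": 4, "ram_gb": 256})
print(node.resources)          # the slice dedicated to this server
print(pools["gpu"].free)       # 4 GPUs remain free in the pool
```

Releasing the server would simply return its units to the pools, which is what makes the whole rack behave like one elastic resource.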
Use cases: a solution built for the future
Liqid is an architecture implementation built with the most demanding workloads of today in mind, whether research, artificial intelligence, virtual/augmented reality, or video processing.
This architecture enables customers who have massive resource usage requirements but competing projects and limited resources to allocate resources dynamically between projects based on demand. It helps change the paradigm from “project based” hardware consumption to a true pooling of resources, which is beneficial to organisations. Instead of each project buying its own expensive GPUs, the organisation can now purchase a larger pool of GPUs and reallocate them as needed.
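The rebalancing idea above can be sketched in a few lines. Again, this is a hypothetical illustration of the pooling model, not Liqid’s actual interface: a shared GPU pool is assigned to projects on demand and rebalanced when priorities change.

```python
# Illustrative sketch (names are hypothetical): one shared GPU pool,
# rebalanced between projects instead of being bought per project.
class GpuPool:
    def __init__(self, total: int):
        self.total = total
        self.assigned = {}  # project name -> GPU count

    def free(self) -> int:
        return self.total - sum(self.assigned.values())

    def assign(self, project: str, count: int) -> None:
        if count > self.free():
            raise RuntimeError("pool exhausted")
        self.assigned[project] = self.assigned.get(project, 0) + count

    def release(self, project: str, count: int) -> None:
        held = self.assigned.get(project, 0)
        self.assigned[project] = held - min(count, held)

pool = GpuPool(16)
pool.assign("training", 12)
pool.assign("rendering", 4)
# The training run finishes overnight; hand its GPUs to rendering.
pool.release("training", 12)
pool.assign("rendering", 12)
print(pool.assigned)  # {'training': 0, 'rendering': 16}
```

The economics follow directly: the organisation sizes the pool for peak aggregate demand rather than for the sum of every project’s individual peak.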
As I write this I am in Silicon Valley and have left my briefing notes at home in Europe; however, I recall the Liqid team telling us in the briefing that they are able to leverage PCIe networking to share the computational power of GPU modules without having to cope with CPU bottlenecks. It’s a bit of black magic talk to me, but it certainly sounds exciting.
Impact on existing architectures
How does this impact traditional workloads? I don’t think we are going to see the end of XY-converged solutions yet (converged, hyper-converged, open-converged), because many of these are still built on the premise of supporting virtualised environments, and the use cases for Liqid’s composable infrastructure are clearly different.
The shift towards composable infrastructures will, however, happen slowly. The elastic scalability benefits are evident; the question remains when PCIe-based architectures will become cheaper, as there is currently no price incentive for organisations running traditional workloads to move to a high-speed PCIe architecture.
As I write this section, I feel the topic deserves much more development, but unfortunately I am constrained by time. However, our friend Scott D. Lowe will be presenting on HCI, Open Convergence and Composable Infrastructures at our TECHunplugged New York 2018 Conference on 22-Mar-18. Perhaps you should check it out; we will have a video recording of it as well.
Max’s Opinion
While Liqid’s vision of composable infrastructure doesn’t fundamentally transform the standard Von Neumann architecture model, it is a novel approach to its physical implementation, where resources are physically separated and truly independently scalable, with granularity at the silicon component level. Moreover, we are finally starting to see the benefits of PCIe as a communication bus beyond storage implementations.
In one sense it might seem a return to the past, when central processing units, memory units and so on were separate physical components linked together. However, the sheer speed and performance we see today make this approach compelling: the vertiginous capabilities of PCIe, not only in terms of speed but also of parallelisation, make it possible and eliminate the bottlenecks of motherboard-based designs.
Liqid will GA their solution around the end of March 2018. I strongly recommend following what they are doing in this space, for they are very probably going to lead the next revolution in x86 architecture implementation and consumption.