This post is part of the blog series related to Storage Field Day 10. Find all the SFD10 content (presentations, articles, presenting companies and delegates) here.
In an age where the major storage pundits and influencers claim that « The Storage Industry is Doomed », where software-defined is the new cloud-like buzzword (I tend to abuse this word lately too), and where most vendors either capitalize on older R&D investments by rebranding old gear with a new name and a new GUI, or simply purchase commodity hardware built overseas, it may seem strange, incongruous or even wasteful for a storage vendor to propose its own custom-tailored solution. That is not necessarily the case, and I'll try to explain why the FlashBlade system by Pure Storage got my interest (beyond its sleek design), and why Pure's approach has its merits.
A Storage System for our Decade?
With the FlashBlade system, Pure decided to build an AFA (All-Flash Array) that would tackle multiple challenges, such as the technical limitations of SSD drives, the connectivity problems encountered when operating at scale, and power and cooling. We will get to these technical points later, but first: what does the FlashBlade do, and which industries/segments/uses can benefit from it?
According to Pure Storage, the FlashBlade was built to address the most demanding workloads of the present (and the future) that current architectures cannot serve in an efficient fashion. This storage system currently provides either NFS or S3 with a promise to include other protocols after GA.
The FlashBlade isn’t GA yet; however, Par Botes gave us an overview of current beta installs, which are seeing the following types of workloads per application area:
- Rich Media
- SFX, Studios, Movies
- User Image, Diagnostic Imaging, Financial Images
- Data Analytics
- Log/Sensor Analysis
- Reservoir Simulation
- Genomic Analysis
- Financial Modeling, Risk Models
- Technical Computing
- Software dev, CI&CD
- EDA (chip design testing), Verification
- Machine Learning, Deep Learning
- Simulation, Compute Farms
It’s worth noting that the FlashBlade uses a blade form factor in a 4U chassis with the following specifications:
- 15 blade slots
- built-in network backplane based on Broadcom Trident II ASICs with software defined networking (more below)
- 8x 40 Gbps QSFP+ ports per chassis
- Blade mix and match possible (currently 8 TB or 52 TB blades available)
A full list of specs (before GA) is available here.
A look at the Blade Architecture
So, why a blade form factor in an era where the rackmount form factor leads among most newcomers? Pure Storage probably looked at its installed base of FlashArray //m series and at data center challenges in general, and found that, when operating at scale, three problems are generally encountered: connectivity, power consumption and heat/cooling requirements. In the earlier development stages, they also decided against a large 4U rackmount chassis because of the failure domain it would create. Instead, the idea of a blade chassis emerged, allowing for a denser setup with an integrated network backplane and a reduced footprint, not only in terms of rack space but also in heat and power consumption.
OK, so we have a 4U blade chassis with 15 blade slots. It’s an AFA, so where are the SSD drives?! And bang, here you go, that’s the novelty of the FlashBlade. One could say that each blade is a mini-server in its own right: each one comes with a low-power Xeon System-on-a-Chip, Flash chips, DRAM, and NVRAM to back write operations in case of power loss, plus FPGA and ARM chips that run the core of the FlashBlade, the “Elasticity” software.
Blades over SSDs – a carefully thought decision
While at Pure HQ, one of the most interesting parts was Brian Gold (Engineering Director) explaining SSD internals, the reasons that led them to develop their own blades instead of using SSD drives, and the burden of legacy protocols that hampers raw Flash efficiency: the bottleneck is no longer device latency (as with hard disks) but bus throughput.
A large part of the discussion covered the various levels of data organization in an SSD, and how the Flash Translation Layer translates logical read or write requests and maps them to the complex geometry of SSD drives. I was probably too busy typing, so I don’t have a photo, but Brian explained the various layers as follows (from smallest to largest). Apologies in advance for the terse definitions; you may have to ask Google for help:
- Code (encode/decode): 1–2 KB
- Page (read/write): 16 KB
- Block (erase): multiple pages
- LUN (concurrent access)
- Chip (multiple LUNs)
- Channel (concurrent transfer)
- SSD (multiple channels)
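To make the Flash Translation Layer’s job a bit more concrete, here is a toy Python sketch (my own illustration, not Pure’s code, with made-up sizes): because flash pages cannot be overwritten in place, every logical write lands on a fresh physical page, the old copy turns stale, and whole blocks must later be erased by garbage collection. The logical-to-physical map is exactly the metadata that grows with drive size.

```python
# Toy model of a Flash Translation Layer (illustrative only, not Pure's code).
# Flash pages cannot be overwritten in place: each logical write is redirected
# to a fresh physical page, and whole blocks must be erased before reuse.

PAGES_PER_BLOCK = 4  # made-up geometry for the example

class ToyFTL:
    def __init__(self, num_blocks):
        self.l2p = {}                       # logical page -> (block, page)
        self.stale = set()                  # physical pages waiting for erase
        self.free = [(b, p) for b in range(num_blocks)
                            for p in range(PAGES_PER_BLOCK)]

    def write(self, logical_page):
        if logical_page in self.l2p:        # the old copy becomes garbage
            self.stale.add(self.l2p[logical_page])
        phys = self.free.pop(0)             # program the next free page
        self.l2p[logical_page] = phys
        return phys

ftl = ToyFTL(num_blocks=2)
ftl.write(0)
ftl.write(0)                                # rewrite: old page turns stale
print(len(ftl.stale))                       # prints 1: one page awaiting GC
```

Even this toy shows why FTL metadata balloons: a real drive tracks billions of these mappings, which is the DRAM and compute cost Pure wanted to pull out of the device and into the array software.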
One problem is that the metadata required to address these specific layers on an SSD drive keeps growing, and with it the DRAM usage and processing power needed. Also, as SSD drives grow larger, the compute required to run low-level processes such as free-space search and garbage collection grows as well. This led Pure Storage to look at ways to eliminate the Flash Translation Layer and build their own Flash modules, moving the flash translation functions into the array-level software (Elasticity). Pictured below is one of their early prototypes.
I could go on endlessly about these interesting aspects, but for the sake of this post’s length I encourage you to watch the related video instead and listen to Brian Gold’s own precise explanations:
FlashBlade: A Software-Defined Array?
What is also interesting in the FlashBlade is the software architecture. The software engineers had three challenges in mind: distributing connections, distributing and protecting data, and distributing the controller logic. Rob Lee went to great lengths to explain the unique architecture of the FlashBlade system, so my short notes below may seem pretty bland, as I was more occupied listening and watching than writing.
Also, don’t miss the great post written by Tom Hollingsworth on the (fantastic) networking aspects of the FlashBlade, because without networking it would (probably) just be an expensive heavy brick. And a bonus picture!
Distribute and protect data
- Federated namespace, portions mapped to storage nodes. No coordination needed for data access.
- Stripe wide for encoding efficiency
- At GA, the scaling limit will be a single chassis; however, multi-chassis support is planned after GA
- Smallest configuration starts at 7 blades (out of 15)
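The “stripe wide for encoding efficiency” point can be sketched with a toy example (my own illustration; FlashBlade’s actual erasure coding is more sophisticated than a single XOR parity): a write is split into segments spread across blades, plus parity that lets any one lost segment be rebuilt from the survivors.

```python
# Minimal sketch of wide striping with one XOR parity segment (illustrative
# only; the real FlashBlade erasure coding scheme is not a plain XOR parity).

from functools import reduce

def xor_all(segments):
    """XOR a list of equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), segments)

def stripe(data: bytes, num_data_blades: int):
    """Split data into equal segments across blades and append parity."""
    seg_len = -(-len(data) // num_data_blades)   # ceiling division
    segments = [data[i * seg_len:(i + 1) * seg_len].ljust(seg_len, b"\0")
                for i in range(num_data_blades)]
    return segments + [xor_all(segments)]        # last segment is parity

def rebuild(segments, lost_index):
    """Recover one lost segment by XOR-ing all the surviving ones."""
    return xor_all([s for i, s in enumerate(segments) if i != lost_index])

segs = stripe(b"hello flashblade", num_data_blades=4)
assert rebuild(segs, lost_index=2) == segs[2]    # survives one blade loss
```

The wider the stripe, the smaller the parity overhead relative to the data, which is the “encoding efficiency” argument for spreading every write across many blades.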
Distribute controller logic
- Partial distribution: shared metadata / distributed locking
- FlashBlade: Global distribution: to achieve parallelism and scale
- Metadata Efficiency: query and update performance / Storage caching efficiency
- Shared-Nothing Partitioning: no SPOFs/bottlenecks; create independence and parallelism in control processing
- Instant Performance: migrate processing immediately – no data movements required, predictable performance when it counts
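The “federated namespace, no coordination needed” idea can be sketched with deterministic hashing (my own illustration, not Pure’s implementation): if every blade hashes an object name the same way, any blade can compute which node owns a portion of the namespace without consulting a lookup service, which removes a single point of coordination.

```python
# Sketch of coordination-free namespace partitioning (illustrative only):
# every node applies the same hash, so any blade can compute the owner of
# an object locally, without asking a central directory service.

import hashlib

BLADES = [f"blade-{i:02d}" for i in range(15)]   # one full 15-slot chassis

def owner(object_name: str) -> str:
    """Map an object name deterministically onto one blade."""
    digest = hashlib.sha256(object_name.encode()).digest()
    return BLADES[int.from_bytes(digest[:8], "big") % len(BLADES)]

# Deterministic: the same name always resolves to the same blade,
# no matter which node performs the computation.
assert owner("/fs/home/alice/data.bin") == owner("/fs/home/alice/data.bin")
```

A production system would use consistent hashing or a similar scheme so that adding or losing a blade remaps only a fraction of the namespace; the plain modulo above is kept deliberately simple.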
I had high expectations about the FlashBlade system and I must say I was impressed, to say the least. The sessions we had with Pure Storage were awesome, and not just in their form and very professional execution.
Pure and their engineering team have taken a radical, no shortcut approach to deliver uncompromising flash-based performance, even if that goes ‘against the grain’ of the trend we see today with SSD-based All-Flash Arrays.
With well-thought and nicely coupled hardware and software engineering, they managed to build a very solid and unique solution that is not only likely to deliver outstanding performance but was built, like Par Botes said, with the storage needs of this decade in mind.
High-performance flash has a price, but looking at the beta installs and use cases, it appears there is a market demanding uncompromising performance. While it may seem normal to folks operating in the largest countries, I’d like to remind you that in my market (the Czech Republic), the average “data center” consists of 3 hosts with a vSphere Essentials Plus Kit and the cheapest array you can find around.
I’m sometimes a dreamer who believes that well-engineered products built without shortcuts should be rewarded by the market, so let’s wish Pure Storage all the best and a lot of success for the launch of their FlashBlade. Because, honestly, they deserve it.
You can read this post by Dan Frith about the FlashBlade – he does a great job at explaining the product, the challenges and the value without going through the often obscure intricacies & the convolutions of my thought process.
As mentioned above, Tom Hollingsworth’s post on the (fantastic) networking aspects of the FlashBlade is also well worth a read, especially if you like those strange devices with a lot of blinking LEDs.
A bit earlier in May 2016 (before SFD10), Chris Evans wrote a preview article about Pure Storage.
P.S.: « Against the grain » is also the name of a brilliant novel published in 1884 by Joris-Karl Huysmans. A bedside book I can confidently recommend to anyone interested in the « decadent » literature style.
SFD10 Disclosure: this post is a part of my SFD10 post series. I am invited to the Storage Field Day 10 event by Gestalt IT. Gestalt IT will cover travel, accommodation and food during the SFD10 event duration. I will not receive any compensation for participation in this event, and I am also not obliged to blog or produce any kind of content. Any tweets, blog articles or any other form of content I may produce are the exclusive product of my interest in technology and my will to share information with my peers. I will commit to share only my own point of view and analysis of the products and technologies I will be seeing/listening about during this event.
Pure Storage Disclosure: I received a Tile, a box of mints, a pen, an 8 GB stick, a notepad and a « Certificate of Pure Genius ». These gifts did not influence this post.