This post is part of the blog series related to Storage Field Day 14. You can find all the SFD14 content, presentations, articles, presenting companies and delegates here.
As I wrote in my announcement of joining Storage Field Day 14 and Commvault GO, Infinidat is a company I had been looking at since VMworld Barcelona 2016, where I even had a short chat with some of the team at their booth. Unfortunately I didn’t get to write about them then, but as you can guess, this is about to change. Let’s go through my usual Storage Field Day « Primer » article structure: we’ll cover the company history, the solution architecture, the market orientation and finally some personal thoughts. It appears that because of current funding-related activities (Series C, see below), Infinidat had to move their appearance to a later date. I will nevertheless go ahead with this post, as it was almost complete when I received the news.
Company History
Infinidat was founded in 2011 and exited stealth mode in April 2015. The mind behind Infinidat is Moshe Yanai, who developed the Symmetrix platform during his tenure at EMC, then left and developed XIV, which was later sold to IBM. With such a track record, some folks would be enjoying a very comfortable retirement, but it seems that a creative mind is often restless and ready to draft the next big thing. That is, at least, how the birth of Infinidat was explained to me by one of their employees at VMworld 2016.
Infinidat has received a total funding of 325M USD according to Crunchbase, and the latest round (Series C) is very fresh: it closed on 3-Oct-17 with no less than 95M USD flowing in from TPG Growth and Goldman Sachs. Considering that Infinidat leverages a sizeable portion of spinning drives while the industry is massively shifting to flash and especially NVMe, this latest 95M USD round means either that Infinidat has a crazy good solution, or that the investors have gone completely mad. Go ahead and read the entire article to make up your mind. My own take tends towards « crazy good ».
In terms of valuation and growth, we have to cross-reference sources of information, but Infinidat’s own website states that the company is currently valued at 1.6B USD, based on an article from the Boston Business Journal (3 articles can be read for free, then paywall). That same article mentions that revenue grew 250% between Q2-16 and Q2-17 and that the company became profitable in 2016. These claims are difficult to verify because Infinidat is not yet a public company, but if they are true, it’s promising indeed. The exit strategy seems to be going public, i.e. an IPO once the customer base and product line offerings have been expanded (same article source).
The company is based out of Herzliya, Israel, but appears to have other offices across the globe. As a matter of fact, we’ll be meeting them in the Bay Area during Storage Field Day 14, and you’d expect them to have a major hub in Silicon Valley, like many other technology companies.
InfiniBox: System Architecture
The main product sold by Infinidat is a series of appliances called InfiniBox, although they also seem to have a backup appliance. This post is a primer after all, so I will focus on the InfiniBox platform. A lot of what follows comes from documentation available on Infinidat’s website.
InfiniBox is a software-defined storage solution delivered in the form of a hybrid storage array, leveraging flash and hard disks bundled into commodity x86 hardware. Infinidat claims multi-petabyte capacity, 7-nines availability (99.99999%, or about 3.15 seconds of unavailability per year, according to this nice table on Wikipedia) and 1 million IOPS in a 42U rack. I’d like to get more details on how this is achieved, but meanwhile let’s continue with the architecture.
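Just to put those seven nines into perspective, here is a quick back-of-the-envelope check; nothing Infinidat-specific, just the arithmetic behind the Wikipedia table:

```python
# Back-of-the-envelope check: allowed downtime per year for a given number of nines.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 seconds (non-leap year)

def downtime_per_year(nines: int) -> float:
    """Return the maximum unavailability per year (in seconds) for N nines."""
    availability = 1 - 10 ** (-nines)
    return SECONDS_PER_YEAR * (1 - availability)

for n in (3, 5, 7):
    print(f"{n} nines -> {downtime_per_year(n):.2f} seconds of downtime per year")
# 7 nines -> 3.15 seconds per year, which matches the figure quoted above.
```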
Looking at the architecture, InfiniBox systems are built as a mesh of nodes and drives, where every node is connected to every drive. Each InfiniBox system consists of three nodes per rack; a node is an x86 server with RAM and flash cache. The nodes are connected to eight SAS drive enclosures that can hold as many as 480 drives in total, with capacities of up to 8TB per drive. The InfiniBox architecture can sport up to 3TB of RAM and 200TB of flash cache (highest values for the F6000 model).
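Doing some rough maths on the figures above (480 drives at 8TB each) lands us comfortably in multi-petabyte territory before RAID overhead and spare space are taken into account; the toy snippet below just illustrates the arithmetic:

```python
# Rough raw-capacity arithmetic based on the figures quoted above
# (8 enclosures, up to 480 drives total, up to 8 TB per drive).
drives = 480
drive_capacity_tb = 8

raw_capacity_tb = drives * drive_capacity_tb
print(f"Raw capacity: {raw_capacity_tb} TB (~{raw_capacity_tb / 1000:.2f} PB)")
# -> 3840 TB of raw capacity before RAID overhead and spare space,
#    which lines up with the multi-petabyte claim.
```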

An InfiniBox in its glorious black and green livery
From a connectivity perspective, three connectivity layers are available within an InfiniBox system:
- Fibre Channel (24x 8 Gbps) and Ethernet (12x 10 GbE) front-end ports, dedicated to connecting client systems, managing the environment and handling replication (to another InfiniBox, presumably?)
- InfiniBand for high-speed, low-latency communication between nodes
- Back-end SAS connectivity for connecting each node to all of the drive enclosures
The cache space is global to all nodes and ensures that no block is cached more than once. Because the system relies on a lot of HDDs, reads would be slow if the cache were too small; a cache of up to 200TB per system ensures that most, if not all, of the currently active data set is cached and ready to be served. An advantage of the global cache is that if a block is requested on node 1 and is already cached on node 2, the system will serve it from node 2’s cache instead of having node 1 read it from the HDD layer. Writes are aggregated and buffered in a protected memory area, then destaged to the HDDs once the data has « cooled down », i.e. (I’m assuming) once the blocks in question are no longer being updated. At that point, the data is written to all the disks simultaneously, providing very high throughput.
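To make that behaviour a bit more concrete, here is a conceptual sketch in Python of how I picture such a global cache. The class and method names are entirely mine, and this is of course a massive simplification of whatever Infinidat actually implemented:

```python
# Conceptual sketch (my own simplification, not Infinidat code) of a global cache:
# a block is cached at most once across all nodes, and a read is served from
# whichever node already holds the block before falling back to the HDD layer.

class GlobalCache:
    def __init__(self, node_ids):
        # One cache dictionary per node: {block_id: data}
        self.node_caches = {node_id: {} for node_id in node_ids}

    def read(self, requesting_node, block_id, read_from_hdd):
        # 1. Check every node's cache; the block is never cached twice,
        #    so the first hit is the only copy.
        for node_id, cache in self.node_caches.items():
            if block_id in cache:
                return cache[block_id]  # served from another node's DRAM/flash
        # 2. Cache miss: read from the HDD layer and cache it on the requesting node.
        data = read_from_hdd(block_id)
        self.node_caches[requesting_node][block_id] = data
        return data

# Usage example: the block is already cached on node 2, so no HDD read is needed.
cache = GlobalCache(node_ids=[1, 2, 3])
cache.node_caches[2]["blk-42"] = b"hot data"
print(cache.read(1, "blk-42", lambda block_id: b"read from HDD"))
```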
With claims of 7-nines availability, the system needs to be built very carefully to guarantee such a high level of availability. From a connectivity perspective, all front-end connections are redundant, and an abstraction layer provides software-based FC targets and floating IP addresses that can fail over across physical ports and nodes. On the InfiniBand layer, messages can be routed through other nodes in case of failure, and in case of SAS-level failures, I/O can be rerouted over InfiniBand via another node.
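Here is a small, purely illustrative sketch of that back-end rerouting idea as I understand it; again, the names and structure are my own and not taken from any Infinidat code:

```python
# Conceptual sketch (my own wording, not Infinidat code) of the back-end failover
# described above: if a node's own SAS path to an enclosure fails, the I/O is
# forwarded over the InfiniBand mesh to a peer node with a healthy SAS path.

class Node:
    def __init__(self, name, failed_sas_paths=()):
        self.name = name
        self.failed_sas_paths = set(failed_sas_paths)

    def sas_path_ok(self, enclosure):
        return enclosure not in self.failed_sas_paths

def route_backend_read(local_node, peers, enclosure):
    if local_node.sas_path_ok(enclosure):
        return f"{local_node.name}: direct SAS read from {enclosure}"
    for peer in peers:
        if peer.sas_path_ok(enclosure):
            return f"{local_node.name}: read via InfiniBand through {peer.name}"
    raise IOError(f"No healthy path to {enclosure}")

# Usage: node1 lost its SAS link to enclosure 'E3', node2 still has a good path.
node1 = Node("node1", failed_sas_paths={"E3"})
node2, node3 = Node("node2"), Node("node3")
print(route_backend_read(node1, [node2, node3], "E3"))
```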
Data integrity is ensured by Infinidat’s InfiniRaid, a RAID implementation developed in-house. It uses a 14+2 scheme with a twist: instead of using full disks, each disk is sliced into smaller sections called « data sections », with two additional « parity sections » used for parity. On top of the parity, each data section carries a data integrity field and a lost-write protection field to protect against logical corruption and disk-level errors.
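To visualise the stripe structure, here is a rough sketch of a 14+2 stripe as I understand the description; the field names and the simple XOR parity are my own placeholders, since the actual parity scheme isn’t documented at this level of detail:

```python
# A rough sketch of a 14+2 stripe (my interpretation, not Infinidat's format):
# 14 data sections and 2 parity sections, with each data section carrying its
# own integrity and lost-write metadata.
from dataclasses import dataclass
from functools import reduce
import zlib

DATA_SECTIONS = 14
PARITY_SECTIONS = 2

@dataclass
class DataSection:
    payload: bytes
    write_generation: int = 0   # lost-write protection field (placeholder name)
    checksum: int = 0           # data integrity field, filled in below

    def __post_init__(self):
        self.checksum = zlib.crc32(self.payload)

def build_stripe(chunks):
    """Assemble 14 data sections plus 2 parity sections. A plain XOR stands in
    for the real dual-parity scheme, which Infinidat does not detail publicly."""
    assert len(chunks) == DATA_SECTIONS
    sections = [DataSection(c) for c in chunks]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)
    return sections, [parity] * PARITY_SECTIONS  # placeholder dual parity

sections, parity = build_stripe([bytes([i]) * 64 for i in range(DATA_SECTIONS)])
print(len(sections), "data sections,", len(parity), "parity sections")
```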

A view of an InfiniRaid stripe, according to Infinidat
When data lands in the system, it is aggregated to follow the model described above (thus forming a RAID stripe), then sent to 16 disks, with each RAID stripe landing on a different set of disks to minimise failure risk while keeping the data evenly distributed across the system (avoiding hot spots where more data is concentrated). This layout is supposed to speed up double-failure RAID rebuilds and allows the entire system to operate without any hot spare disks.
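Below is my own toy illustration (definitely not Infinidat’s placement algorithm) of the idea: each 16-section stripe lands on a different, pseudo-randomly chosen set of drives, and over many stripes the load stays evenly spread across the pool:

```python
# My own illustration of the placement idea described above: each 16-section
# stripe lands on a different set of drives, chosen so that sections spread
# evenly across the whole pool with no hot spots.
import random

DRIVES = 480            # drive pool of a fully populated system
STRIPE_WIDTH = 16       # 14 data + 2 parity sections

def place_stripe(stripe_id, drives=DRIVES, width=STRIPE_WIDTH):
    """Deterministically pick `width` distinct drives for a given stripe."""
    rng = random.Random(stripe_id)   # stable, per-stripe pseudo-random choice
    return rng.sample(range(drives), width)

# Distribute 100,000 stripes and check how evenly sections land on drives.
load = [0] * DRIVES
for stripe_id in range(100_000):
    for drive in place_stripe(stripe_id):
        load[drive] += 1
print(f"sections per drive: min={min(load)}, max={max(load)}")
# The min/max spread stays narrow, i.e. no drive becomes a hot spot.
```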

Placement of an InfiniRaid stripe across multiple drives according to Infinidat.
Infinidat writes that in case of a double drive failure, InfiniRaid can immediately calculate which data stripes are affected and initiate rebuild activities (targeting the most at-risk stripes first) that take considerably less time than usual, because chunking all drives into data sections means the rebuild reads from all drives at the same time.
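Here is a toy sketch of that prioritisation idea, with my own interpretation of « most risky » as the stripes that lost two sections and therefore have no parity margin left:

```python
# A sketch of the rebuild prioritisation idea (my interpretation, not Infinidat
# code): after a double drive failure, find every stripe with sections on the
# failed drives and rebuild the most exposed ones (two lost sections) first.

def plan_rebuild(stripe_placement, failed_drives):
    """stripe_placement: {stripe_id: [drive ids holding its sections]}"""
    affected = {
        stripe_id: len(set(drives) & failed_drives)
        for stripe_id, drives in stripe_placement.items()
        if set(drives) & failed_drives
    }
    # Stripes that lost 2 sections have no parity margin left: rebuild them first.
    return sorted(affected, key=affected.get, reverse=True)

# Toy stripes with 4 sections each, just to keep the example short.
placement = {
    "s1": [1, 2, 3, 4],
    "s2": [2, 7, 8, 9],
    "s3": [10, 11, 12, 13],
}
print(plan_rebuild(placement, failed_drives={1, 2}))
# -> ['s1', 's2']: 's1' lost two sections and is rebuilt first; 's3' is unaffected.
```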
InfiniBox systems come in four models: the entry model (F1000) starts at 14U, and the high-end model, the F6000, takes an entire 42U rack. Thankfully, the folks at Infinidat did a really good job of building a beautiful, sci-fi-like rack that would make a gamer or two completely jealous. My passion for nice design is entirely fulfilled, though I don’t know whether you can order just the nodes and stuff them into your own racks, or whether you need to make space to welcome the beauty and the beast in its 42U form factor. Considering that cloud providers and MSPs make up a good chunk of their customers (see « Market Orientation » below), you can probably expect some flexibility on this side.
From a capacity perspective, the F1000 starts at 115TB of usable capacity, while the F6000 offers 1035 to 2765TB, again in usable capacity. The vendor’s papers put the emphasis on « effective capacity », based on « average savings of 2:1, based on common data types & the use of in-line compression »; in effective terms, the capacities above are doubled.
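The « effective capacity » maths is simple enough to check:

```python
# Quick check of the "effective capacity" maths based on the 2:1 average
# savings quoted by the vendor.
usable_tb = {"F1000": 115, "F6000 (max)": 2765}
data_reduction_ratio = 2.0   # vendor's "average savings of 2:1"

for model, usable in usable_tb.items():
    print(f"{model}: {usable} TB usable -> {usable * data_reduction_ratio:.0f} TB effective")
# F6000 (max): 2765 TB usable -> 5530 TB effective, i.e. the doubled figures
# mentioned above.
```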
From a services and protocols perspective, InfiniBox supports FC and iSCSI connectivity as well as NFSv3. FICON (IBM’s proprietary FC connectivity protocol) is not widely known but very important, as it enables InfiniBox systems to deliver storage capabilities to mainframes such as the IBM z Series. SMB support was in the making and has supposedly been released not long ago, and I recall seeing that S3 is also supported, but I found no trace of it in Infinidat’s documents.
Market Orientation
Regarding the potential customers of Infinidat, it’s clear that at this scale you are probably looking at any company that needs a bit more than a little 12U cabinet.
The solution seems to target customers who:
- are looking for cost-efficient alternatives to AFAs (All-Flash Arrays) with a similar performance footprint
- seek a solution that can satisfy very large capacity requirements without compromising on performance
- look for a unified storage platform with very high availability (7-nines) to sustain a variety of critical workloads
The Boston Business Journal article referenced in our « Company History » section above states that 30% of Infinidat’s customers are cloud providers, a smaller user base but one with a usually very large appetite for capacity and performance (at affordable prices). So while cloud providers may account for 30% of the customer count, they likely account for a much larger share of sales volume.
Regarding use cases for InfiniBox systems, all the usual SAN and NAS use cases are covered; it is a hybrid primary storage system after all. Thanks to its FICON support, InfiniBox can also deliver very cost-effective, high-capacity storage to the expensive world of mainframes, making it virtually a no-brainer for CIOs who previously had their hands tied by vendor lock-in from mainframe system providers.
Max’s Opinion
Today’s competition in the storage market happens at two edges: on one side, primary storage with fast but expensive NVMe-based AFAs; on the other, secondary storage with very large capacities of cheap disk-based storage, but at the cost of speed. One can of course debate the speed of secondary storage systems, but they are optimised for capacity and low cost, target a different use case than primary storage, and are often backed by flash-enabled caching mechanisms. We could also talk about HCI, but that would limit the discussion to certain kinds of workloads.
Looking at the primary storage market and NVMe-based AFA systems, customers can get tremendously fast systems with high throughput and low latency, but at a high price tag and often at the expense of capacity. Infinidat is interesting in that it proposes a petabyte-scale solution offering not only high capacity, but also a seemingly very efficient caching mechanism backed by an innovative RAID implementation, all of this with very high availability.
The only limitation I can see for now concerns operators with extremely large active data sets that cannot be fully contained in the flash cache pool (up to 200TB on the F6000). One aspect that hasn’t been clarified yet is the scalability of the Infinidat platform: does each InfiniBox system form a single domain, or is it possible to scale an InfiniBox across multiple racks? It might be possible in theory, as nodes are connected via InfiniBand, but I don’t have many details about what the InfiniBand backend looks like.
Infinidat has certainly done a lot of work to determine the proper flash cache pool size for their systems, but there may still be room to grow that pool, at the cost of more expensive flash, and depending on whether flash modules of higher capacity (and ideally lower price) become available. This, coupled with the availability of larger HDDs and the possible future adoption of new HDD technologies such as MAMR (articles here and here), leaves Infinidat very well placed in the race to deliver outstanding performance at the lowest $/GB.
Disclosure
This post is a part of my SFD14 post series. I am invited to the Storage Field Day 14 event and Commvault GO by Gestalt IT. Gestalt IT will cover expenses related to the events (travel, accommodation and food) for the duration of Commvault GO and SFD14. I will cover my own accommodation costs outside of the days when the events take place. I will not receive any compensation for participating in these events, and I am not obliged to blog or produce any kind of content. Any tweets, blog articles or any other form of content I may produce are the exclusive product of my interest in technology and my willingness to share information with my peers. I commit to sharing only my own point of view and analysis of the products and technologies I will see and hear about during these events.