Let me start by saying Chris Mellor of The Register has been doing a great job of keeping up with the product announcements from the big vendors of server-based flash memory products. I'm not talking simply about Solid State Disks (SSDs) with flash memory modules and Serial ATA (SATA) controllers. The new enterprise-level products that supersede SSDs are much higher-speed (faster than SATA) caches that plug into the PCIe slots of rack-based servers. The fashion followed by many data center storage farms was to host large arrays of hot (online) or warm (nearline) spinning disks. Over time de-duplication was added to prevent unnecessary copies and backups being made on this valuable and scarce resource. Offline backups to tape could be made throughout the day as a third tier of storage, with the disk arrays acting as the second tier. What was the first tier? That would be the disks on the individual servers themselves, or the vast RAM that the online transactional databases were running in. So RAM, disk, tape: the three-tier fashion came into being. But as data grows and grows, more people want some of the stuff that was warehoused out to tape so they can do regression analysis on historical data. Everyone wants to build a model for trends they might spot in the old data. So what to do?
So as new data comes in and old data gets analyzed, it would seem there's a need to hold everything in memory all the time, right? Why can't we just always have it available? Arguing against this in a corporate environment is useless. Similarly, explaining why you can't speed up the analysis of historical data is also futile. Thank god there's a technological solution, and that is higher throughput. Spinning disks impose a hard limit on input/output (I/O): you can only copy so many gigabits per second over the SATA interface of a spinning hard drive. Even if you fake it by striping data across adjacent drives using RAID techniques, you're still limited. So flash-based SSDs have helped considerably as a tier of storage between the old disk arrays and the demands made by the corporate overseers who want to see all their data all the time. The big three disk storage array makers, IBM/Hitachi, EMC, and NetApp, are all making hybrid flash SSD and spinning disk arrays and optimizing the throughput through the software running the whole mess. Speeds have improved considerably. More companies are doing online analysis of data that previously would have been loaded from tape for offline analysis.
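To put rough numbers on that limit, here's a quick back-of-envelope sketch. The per-drive figures are my own round assumptions for the era, not vendor specs: striping drives in RAID 0 scales sequential throughput nicely, but random IOPS per spindle stay tiny because every seek still costs a head movement.

```python
# Hedged, round figures (circa 2010) purely to illustrate the ceiling:
# a 7200 RPM SATA drive sustains ~100 MB/s sequentially, but each
# spindle manages only ~150 random seeks per second.

def array_throughput_mb_s(drives: int, per_drive_mb_s: float = 100.0) -> float:
    """Best-case sequential throughput of a RAID 0 stripe set."""
    return drives * per_drive_mb_s

def array_iops(drives: int, per_drive_iops: int = 150) -> int:
    """Random-access IOPS: scales with spindle count, but from a tiny base."""
    return drives * per_drive_iops

# Eight striped disks: ~800 MB/s sequential, yet only ~1,200 random IOPS.
# A single SLC flash SSD of the era could do tens of thousands of IOPS.
print(array_throughput_mb_s(8))  # 800.0
print(array_iops(8))             # 1200
```

The sequential number looks respectable; the random-access number is why the analysts waiting on warehoused data were stuck.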
And the interconnects to the storage arrays have improved considerably too. Fibre Channel was a godsend in the storage farm, as it allowed much higher speeds (first 1 Gbit/s, then doubling with each new generation). The proliferation of Fibre Channel alone made up for a number of failings in the speed of spinning disks and acted as a way of abstracting, or virtualizing, the physical and logical disks of the storage array. Over Fibre Channel, the storage control software offers up a 'virtual' disk but can manage it on the storage array itself any way it sees fit. Flexibility and speed reign supreme. But there's still an upper limit between the Fibre Channel interface and the motherboard of the server itself: the PCIe interface. And even with PCIe 2.0 there's an upper limit to how much throughput you can get off the machine and back onto the machine. Enter the PCIe disk cache.
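A little interface arithmetic shows why plugging flash straight into the PCIe slot is so attractive. The link rates below are the published nominal figures; the 20% encoding-overhead factor is my rough assumption for 8b/10b signaling:

```python
# Approximate usable bandwidth of the two interfaces in question.
# 8b/10b encoding eats ~20% of the raw line rate on both.

GBIT = 1_000_000_000

def fc_mb_s(line_rate_gbit: int) -> float:
    """Approx usable MB/s for one Fibre Channel link direction."""
    return line_rate_gbit * GBIT * 0.8 / 8 / 1_000_000

def pcie2_mb_s(lanes: int) -> float:
    """PCIe 2.0: ~500 MB/s usable per lane, per direction."""
    return lanes * 500.0

print(fc_mb_s(8))     # 800.0  -> even 8 Gbit FC tops out near 800 MB/s
print(pcie2_mb_s(8))  # 4000.0 -> a x8 PCIe 2.0 slot carries ~4 GB/s
```

So a single x8 slot has roughly five times the headroom of the fastest Fibre Channel link of the day, which is the opening these cache cards exploit.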
In this article I review the survey of PCIe-based SSD and flash memory disk caches since they entered the market, as it was written in The Register. It's not really a mainstream technology. It's prohibitively expensive and is going to be purchased by those who can afford it in order to gain the extra speed. But even in the short time since STEC was marketing its SSDs to the big three storage makers, a lot of engineering and design has created a brand new product category, and the performance within that category has made steady progress.
LSI's entry into the market is still very early, and shipping product isn't being widely touted. The Register is the only website actively covering this product segment right now. But the speeds and the density of the chips on these products just keep getting bigger, better, and faster, which provides a nice parallel to Moore's Law in a storage device context. Prior to the PCIe flash cache market opening, SATA and Serial Attached SCSI (SAS) were the upper limit of what could be accomplished even with flash memory chips. Soldering those chips directly onto an add-on board connected to the CPU through an x8 PCIe channel is nothing short of miraculous in the speeds it has gained. Now the competition between current vendors is to build one-off, customized setups to bench test the theoretical top limit of what can be done with these new products. And this recent article from Chris Mellor shines a light on the newest product on the market, the LSI SSS6200. In it Chris concludes:
None of these million IOPS demos can be regarded as benchmarks and so are not directly comparable. But they do show how the amount of flash kit you need to get a million IOPS has been shrinking.
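The shrinking "flash kit" Chris describes is really just division. Using ballpark per-device IOPS figures of my own choosing (hedged guesses, not anyone's benchmark numbers), the device count needed to reach a million IOPS falls off a cliff as you move from spindles to PCIe flash:

```python
# How many devices does a million random IOPS take?
# Per-device figures below are illustrative assumptions, not measurements.

TARGET_IOPS = 1_000_000

def devices_needed(per_device_iops: int) -> int:
    """Ceiling division: devices required to reach the target."""
    return -(-TARGET_IOPS // per_device_iops)

print(devices_needed(150))      # 6667 -> thousands of spinning disks
print(devices_needed(30_000))   # 34   -> dozens of SATA-era flash SSDs
print(devices_needed(250_000))  # 4    -> a handful of PCIe flash cards
```

Which is exactly why the demo rigs keep getting smaller even as the headline number stays at a million.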
Moore's Law now holds true for flash caches, which are becoming the high-speed storage option for many datacenters that absolutely must have the highest disk I/O throughput available. And as the chips continue to shrink and storage volumes increase, who knows what the upper limit might be? But news travels swiftly, and Chris Mellor got a whitepaper press release from Samsung and began drawing some conclusions.
Interestingly, the owner of the Korean Samsung 20nm process foundry has just taken a stake in Fusion-io, a supplier of PCIe-connected flash solid-state drives. This should mean an increase in Fusion-io product capacities, once Samsung makes parts for Fusion using the new process.
The flash memory makers are now in an arms race with the product manufacturers. Apple and Fusion-io get first dibs on shipping product as each new generation of flash chips enters the market: Apple has Toshiba, and Fusion-io gets Samsung. In spite of LSI's benchmark of 1 million IOPS in its test system, I give the advantage to Fusion-io in the very near future. Another recent announcement from Fusion-io is a small round of venture capital funding that will hopefully cement its future as a going concern. Let's hope its next-generation caches top out at capacities competitive with its rivals' and at speeds equal to or faster than currently shipping product.
Outside the datacenter, however, things are more boring. I'm not seeing anyone try to peer into the future of the desktop or laptop and create a flash cache that performs at this level. Fusion-io does have a desktop product currently shipping, mostly targeted at the PC gaming market. I have not seen Tom's Hardware try it out or attempt to integrate it into a desktop system. The premium price is enough to make it very limited in its appeal (the MSRP is $799, I think). But let's step back and imagine what the future might be like. Given that Intel has incorporated the RAM memory controller into its Core i7 CPUs, and given that its design rules have shrunk so far that adding the memory controller was not a big sacrifice, is it possible the PCIe interface electronics could be migrated onto the CPU, away from the northbridge chipset? I'm not saying there should be no chipset at all. A bridge chip is absolutely necessary for really slow I/O devices like the USB interface. But maybe there could be at least one x16 PCIe link directly into the CPU, or possibly even an x8 link. If this product existed, a Fusion-io cache could have almost 1 TB of flash storage directly connected to the CPU and act as the highest-speed storage yet available on the desktop.
Other routes to higher-speed storage could even include another tier of memory slots, with an accompanying JEDEC standard for 'storage' memory. RAM would go in one set of slots, flash in the other, and you could mix, match, and add on as much flash memory as you liked. This could potentially be addressed through the same memory controllers already built into Intel's currently shipping CPUs. Why does this even matter, and why do I think about it at all? Because I am awaiting the next big speed increase in desktop computing. Ever since the Megahertz Wars died out, much of the increase in performance has been so micro-incremental that there's not a dime's worth of difference between currently shipping PCs. Disk storage has become painfully obvious as the last link in the I/O chain that has stayed pretty static. The migration from Parallel ATA to Serial ATA improved things, but nothing like the march of improvements that occurred with each new generation of Intel chips. So I vote for dumping disks once and for all. Move to 2 TB of flash memory storage and let's run it through the fastest channel we can onto and off the CPU. There's no telling what new things we might be able to accomplish with the speed boost. Not just games, not just watching movies, and not just scientific calculations. It seems to me everything, OS and apps alike, would receive a big benefit by dumping the disk.
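To make the two-slot idea a bit more concrete, here's a minimal sketch of how software might treat such a machine: a small, fast RAM tier in front of a bigger flash tier, with least-recently-used eviction from the fast tier. The class, its names, and its sizes are entirely hypothetical, just to illustrate the tiering, not any shipping product or JEDEC standard:

```python
from collections import OrderedDict

class TwoTierStore:
    """Toy model: everything lives in the big 'flash' tier; hot items
    are also kept in a small 'ram' tier with LRU eviction."""

    def __init__(self, ram_slots: int):
        self.ram = OrderedDict()   # fast tier, limited capacity
        self.flash = {}            # big tier, holds everything
        self.ram_slots = ram_slots

    def write(self, key, value):
        self.flash[key] = value
        self._promote(key, value)

    def read(self, key):
        if key in self.ram:                  # RAM hit: cheap
            self.ram.move_to_end(key)
            return self.ram[key]
        value = self.flash[key]              # RAM miss: fetch from flash
        self._promote(key, value)
        return value

    def _promote(self, key, value):
        self.ram[key] = value
        self.ram.move_to_end(key)
        if len(self.ram) > self.ram_slots:   # evict least-recently-used
            self.ram.popitem(last=False)

store = TwoTierStore(ram_slots=2)
store.write("a", 1); store.write("b", 2); store.write("c", 3)
print("a" in store.ram)  # False: "a" was evicted from the fast tier
print(store.read("a"))   # 1: still served from the flash tier
```

The point of the sketch is that the second tier only pays off if its misses are cheap, which is exactly what flash-in-a-memory-slot would buy over a trip out to a spinning disk.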