Fusion-io demos billion IOPS server config • The Register

Fusion-io has achieved a billion IOPS from eight servers in a demonstration at the DEMO Enterprise event in San Francisco.

The cracking performance needed just eight HP DL370 G6 servers, each running Linux 2.6.35.6-45 on two 6-core Intel processors with 96GB of RAM. Each server was fitted with eight 2.4TB ioDrive2 Duo PCIe flash drives; that's 19.2TB of flash per server and 153.6TB of flash in total.

via Fusion-io demos billion IOPS server config • The Register.

This is, in a word, no mean feat. Just two years ago, 1 million IOPS was the target to beat for anyone attempting to buy or build their own flash-based storage from the top enterprise manufacturers. So the bar has risen three orders of magnitude above that previous top end. Add to that the magic sauce of bypassing the host OS storage stack and using the flash as nothing more than an enhanced, very large memory.
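For scale, here is the back-of-the-envelope arithmetic on that demo configuration. The per-server and per-drive splits below are my own division of the headline number, not figures Fusion-io published:

```python
# Rough arithmetic for the demo configuration quoted above.
# Per-server and per-drive splits are back-of-the-envelope only.

servers = 8
drives_per_server = 8
drive_capacity_tb = 2.4
total_iops = 1_000_000_000

flash_per_server_tb = drives_per_server * drive_capacity_tb   # 19.2 TB
total_flash_tb = servers * flash_per_server_tb                # 153.6 TB
iops_per_server = total_iops / servers                        # 125 million
iops_per_drive = total_iops / (servers * drives_per_server)   # ~15.6 million

print(f"{flash_per_server_tb} TB per server, {total_flash_tb} TB total")
print(f"{iops_per_server:,.0f} IOPS per server, {iops_per_drive:,.0f} per drive")
```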

This makes me wonder: how exactly does the Flash memory get used alongside the RAM memory pool?

How do the Applications use the Flash memory, and how does the OS use it?

Those are the details that I think no one other than Fusion-io can provide as a value-add beyond the PCIe-based flash memory modules themselves. Instead of hardware being the main differentiator (drive controllers, Single Level Cells, etc.), Fusion-io is using a different path through the OS to the Flash memory. The file I/O stack traditionally tied to hard disk storage, or more generically to ‘storage’ of some kind, is being sacrificed. But I understand the logic, design and engineering of bypassing the overhead of the ‘storage’ route and redefining the Flash memory as another form of system memory.
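To make the "flash as memory" idea concrete, here is a minimal sketch of the general technique of memory-mapping a flash block device so an application touches it with loads and stores instead of file I/O calls. This is only an illustration of the concept, not Fusion-io's actual driver design, and the device path is a placeholder:

```python
# Illustrative only: memory-map a flash block device so the application
# treats it like a big byte array instead of issuing read()/write() calls.
# This is NOT Fusion-io's proprietary approach; the device path below is
# a placeholder for whatever block device your system exposes.

import mmap
import os

DEVICE = "/dev/flashcard0"   # hypothetical device node
MAP_SIZE = 1 << 30           # map 1 GiB of the device

fd = os.open(DEVICE, os.O_RDWR)
try:
    flash = mmap.mmap(fd, MAP_SIZE)   # kernel pages flash in and out for us
    flash[0:4] = b"DATA"              # 'store' into flash like ordinary memory
    first_word = flash[0:4]           # 'load' it back the same way
    flash.flush()                     # ask the kernel to write dirty pages back
    flash.close()
finally:
    os.close(fd)
```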

Maybe the old-style von Neumann and Harvard architectures are too old-school for this new paradigm of a larger tiered memory pool, with DRAM and flash memory modules making up the most important parts of the computer. Maybe disk storage could be relegated to a mere backup of the data held in the flash memory? Hard to say, and I think Fusion-io is right to hold this info close, as they might be able to turn it into a more general solution to the I/O problems facing some customers (not just Wall Street-type high-frequency traders).


EMC’s all-flash benediction: Turbulence ahead • The Register

All the Fear, Uncertainty and Doubt (FUD) spread by the big legacy manufacturers of hard drive storage for the data center is a way to stem or delay the burgeoning tidal wave of flash-based storage. Yes, the economics of flash-based storage are not quite there yet, but for the high-performance, high-throughput folks the future is now.


A flash array controller needs: “An architecture built from the ground up around SSD technology that sizes cache, bandwidth, and processing power to match the IOPS that SSDs provide while extending their endurance. It requires an architecture designed to take advantage of SSDs unique properties in a way that makes a scalable all-SSD storage solution cost-effective today.”

via EMC’s all-flash benediction: Turbulence ahead • The Register.

I think that storage controllers are the point of differentiation now for the SSDs coming on the market today. Similarly, the device that ties those SSDs into the computer and its OS is equally, nay more, important. I'm thinking specifically about a product like the SandForce 2000 series SSD controllers. They more or less provide a SATA or SAS interface into a small array of flash memory chips that are made to look and act like a spinning hard drive. However, the time is coming soon when all those transitional conventions can just go away and a clean-slate design can go forward. That's why I'm such a big fan of the PCIe-based flash storage products. I would love to see SandForce create a controller with one interface that speaks PCIe 2.0/3.0 and another that is open to whatever technology flash memory manufacturers are using today. Ideally the host bus would always be a high-speed PCI Express interface, licensed or designed from the ground up to speed I/O in and out of the flash memory array. On the memory-facing side it could be almost like an FPGA, made to order according to the features and idiosyncrasies of whatever flash memory architecture is shipping at the time of manufacture. The same would apply for any type of error correction and over-provisioning for failed memory cells as the SSD ages through multiple read/write cycles.
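Here is a toy model of that split, purely to illustrate the idea of a fixed host-facing front end with a swappable flash-facing back end. The class and method names are my own invention, not anything SandForce ships:

```python
# Toy model of the controller split described above: a fixed host-facing
# front end and a swappable flash-facing back end that absorbs the quirks
# of whichever NAND generation is shipping. All names are invented for
# illustration; this is not any vendor's actual design.

from abc import ABC, abstractmethod


class FlashBackend(ABC):
    """Flash-facing side: re-spun (FPGA-style) for each NAND generation."""

    @abstractmethod
    def read_page(self, page: int) -> bytes: ...

    @abstractmethod
    def write_page(self, page: int, data: bytes) -> None: ...


class CurrentGenNandBackend(FlashBackend):
    """Stand-in for whatever NAND process is shipping this year."""

    def __init__(self, page_size: int = 4096):
        self.page_size = page_size
        self._pages = {}

    def read_page(self, page: int) -> bytes:
        return self._pages.get(page, b"\xff" * self.page_size)  # erased NAND reads 0xFF

    def write_page(self, page: int, data: bytes) -> None:
        self._pages[page] = data


class PcieFrontEnd:
    """Host-facing side: always presents the same block interface over PCIe."""

    def __init__(self, backend: FlashBackend):
        self.backend = backend

    def read(self, lba: int) -> bytes:
        return self.backend.read_page(lba)       # ECC and remapping would hook in here

    def write(self, lba: int, data: bytes) -> None:
        self.backend.write_page(lba, data)


controller = PcieFrontEnd(CurrentGenNandBackend())
controller.write(0, b"hello flash")
print(controller.read(0)[:11])                   # b'hello flash'
```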

In the article I quoted at the top from The Register, the big storage array vendors are attempting to market new products by adding flash memory to either one component of the whole array product or, in the case of EMC, the whole product uses flash-based SSDs throughout. That more aggressive approach has seemed overly cost-prohibitive given the low manufacturing cost of large-capacity commodity hard drives. But the thing is, in the market where these vendors compete, everyone pays an enormous price premium for the hard drives, storage controllers, cabling and software that makes it all work. Though the hard drive might be cheaper to manufacture, the storage array is not, and that margin is what makes storage a very profitable business to be in. As stated last week in the benchmark comparisons of high-throughput storage arrays, flash-based arrays are 'faster' per dollar than a well-designed, well-engineered, top-of-the-line hard drive based storage array from IBM. So for the segment of the industry that needs the throughput more than the total space, EMC will likely win out. But Texas Memory Systems (TMS) is out there too, attempting to sign OEM contracts with folks trying to sell into the storage array market. The Register does a very good job surveying the current field of vendors and manufacturers, looking at which companies might buy a smaller company like TMS. But the more important trend spotted throughout the survey is the decidedly strong move towards native flash memory in the storage arrays being sold into the enterprise market. EMC has a lead that most will be following real soon now.

TMS flash array blows Big Blue away • The Register

While flash memory chips cost more per gigabyte of stored data than a comparable spinning disk drive, that's not what this is about at all. This is about blinding speed of Input/Output operations, reading and writing data. And for that, flash memory chips are king of the hill, and they now have the benchmarks to prove it once and for all.


Texas Memory Systems has absolutely creamed the SPC-1 storage benchmark with a system that comfortably exceeds the current record-holding IBM system at a cost per transaction of 95 per cent less.

via TMS flash array blows Big Blue away • The Register.

One might ask a simple question: how is this even possible given the cost of the storage media involved? How did a flash-based RamSan storage array beat a huge pile of IBM hard drives all networked and bound together in a massive storage system? And how did it do it for less? Woe be to those unschooled in the ways of the Per-feshunal Data Center purchasing dept. You cannot enter the halls of the big players unless you've got million-dollar budgets for big iron servers and big iron storage. Fibre Channel and InfiniBand rule the day when it comes to big data throughput. All those spinning drives are accessed simultaneously as if each one held one slice of the data you were asking for, each one delivering up its 1/10 of 1% of the total file you were trying to retrieve. The resulting speed makes it look like one hard drive that is 100X faster than your desktop computer's hard drive, all through the smoke and mirrors of the storage controllers and the software that makes them go. But what if, just what if, we decided to take flash memory chips and knit them together with a storage controller that made them appear to be just like a big iron storage system? Well, since flash obviously costs something more than $1 per gigabyte and disk drives cost somewhere less than 10 cents per gigabyte, the flash storage loses, right?

In terms of total storage capacity, flash will lose for quite some time if you are talking about holding everything on disk all at the same time. But that is not what's being benchmarked here at all. No, what is being benchmarked is the rate at which Input (writing of data) and Output (reading of data) is done through the storage controllers. IOPS measure the total number of completed reads/writes done in a given amount of time. Prior to this latest result from the RamSan-630, IBM was king of the mountain with its huge striped Fibre Channel arrays all linked up through its own storage array controllers. The RamSan came in at 400,503.2 IOPS, compared to IBM's top-of-the-line SAN Volume Controller at 380,489.3. That's not very much difference, you say, especially considering how much less data a RamSan can hold… And that would be a valid argument, but consider again: that's not what we're benchmarking. It is the IOPS.

Total cost for the IBM benchmarked system was $18.83 per IOPS. The RamSan (which bested IBM in total IOPS) came in at a measly $1.05 per IOPS. That cost is literally 95% less than IBM's. Why? Consider that the price (even if it was steeply discounted, as most tech writers will note as a caveat) of IBM's benchmarked system was $7.17 million. Remember, I said you need million-dollar budgets to play in the data center space. Now consider that the RamSan-630 costs $419,000. If you want speed, dump your spinning hard drives; flash is here to stay, and you cannot argue with the speed versus the price at this level of performance. No doubt this is going to threaten the livelihood of a few big iron storage manufacturers. But through disruption, progress is made.
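The arithmetic behind those figures, re-derived from the numbers quoted above. The quoted prices are rounded, so the result lands a hair off the published $18.83 and 95 per cent:

```python
# Re-deriving the cost-per-IOPS figures from the published SPC-1 numbers.
# Prices as reported above; rounding of the quoted prices shifts the
# results slightly from the published figures.

ibm_price, ibm_iops = 7_170_000, 380_489.3      # IBM SAN Volume Controller config
tms_price, tms_iops = 419_000, 400_503.2        # TMS RamSan-630

ibm_cost_per_iops = ibm_price / ibm_iops        # ~ $18.84
tms_cost_per_iops = tms_price / tms_iops        # ~ $1.05
savings = 1 - tms_cost_per_iops / ibm_cost_per_iops   # ~ 94-95%

print(f"IBM: ${ibm_cost_per_iops:.2f}/IOPS, TMS: ${tms_cost_per_iops:.2f}/IOPS")
print(f"TMS is {savings:.0%} cheaper per IOPS")
```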

Viking Modular plugs flash chips into memory sockets • The Register

I remember the old days when computers 're-used' the system-level RAM for video. The performance disadvantage of doing that became readily apparent once dedicated video bus technologies arrived (VESA Local Bus, AGP, PCIe). So I'm a little surprised a company has developed flash memory modules for the RAM slots on a PC motherboard. What do they hope to gain by building this hybrid disk/memory module? How is it seen by the OS? Is it a disk or is it RAM? Many questions.


What a brilliant idea: put flash chips into memory sockets. That's what Viking Modular is doing with its SATADIMM product.

via Viking Modular plugs flash chips into memory sockets • The Register.

This sounds like an interesting evolution of the SSD type of storage. But I don't know if there is a big advantage in forcing a RAM memory controller to be the bridge to a flash memory controller. In terms of bandwidth, the speed seems comparable to a 4x PCIe interface. I'm thinking now of how it might compare to a PCIe-based SSD from OCZ or Fusion-io. It seems like the advantage is still held by PCIe in terms of total bandwidth and capacity (above 500MB/sec and 2 terabytes of total storage). The SATADIMM may be slightly lower cost, but the use of Single Level Cell flash chips raises the cost considerably for any given amount of storage, and this product from Viking uses Single Level Cell flash. I think if this product ships, it will not compete very well against products like consumer-level SSDs, PCIe SSDs, etc. However, if they continue to develop and evolve the product, there might be a niche where it can be performance- or price-competitive.
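For scale, here is the raw interface arithmetic I have in mind when comparing these options. It is a rough sketch; real-world throughput is lower once protocol overhead and the flash itself are factored in:

```python
# Rough interface bandwidth comparison. SATA and PCIe 2.0 both use 8b/10b
# encoding, so usable bytes/s is roughly line rate * 0.8 / 8.

def usable_mb_per_s(line_rate_gbps: float) -> float:
    return line_rate_gbps * 0.8 / 8 * 1000   # MB/s

sata2 = usable_mb_per_s(3.0)                 # ~300 MB/s
sata3 = usable_mb_per_s(6.0)                 # ~600 MB/s
pcie2_x4 = 4 * usable_mb_per_s(5.0)          # ~2000 MB/s (4 lanes at 5 GT/s)

print(f"SATA 3Gb/s ~{sata2:.0f} MB/s, SATA 6Gb/s ~{sata3:.0f} MB/s, "
      f"PCIe 2.0 x4 ~{pcie2_x4:.0f} MB/s")
```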

Toshiba unwraps 24nm flash memory in possible iPhone 5 clue | Electronista

Everyone wants to know when the next iPhone is coming out. And manufacturers of the components that typically go into an iPhone continue to do research and development to make those components more attractive to the high-end device makers. Apple is very demanding, and very rewarding, when it comes to flash memory production: it commands more product volume than anyone else out there. But in spite of all this activity, what's been happening with each new revision of the flash memory production lines?


The schedules may help back mounting beliefs that the iPhone 5 will get a 64GB option; a 64GB iPhone 4 prototype appeared last month that hinted Apple was exploring the idea as early as last year. Just on Tuesday, a possible if disputed iPod touch with 128GB of storage also appeared and hinted at an upgrade for the MP3 player as well. Both the iPhone and the iPod have been stuck at 32GB and 64GB of storage respectively since 2009 and are increasingly overdue for additional space.

via Toshiba unwraps 24nm flash memory in possible iPhone 5 clue | Electronista.

Toshiba has revised its flash memory production lines again to keep pace with the likes of Intel, Micron and Samsung. Higher densities and smaller form factors seem to indicate they are gearing up for a big production run of the highest-capacity memory modules they can make. It's looking like a new iPhone might be the candidate to receive the newer multi-layer, single-chip 64GB flash memory modules this year.

A note of caution in this arms race of ever-smaller feature sizes on flash memory modules: the smaller you go, the fewer read/write cycles you get. I'm becoming aware that each new generation of flash memory production has lost some robustness. This problem has been camouflaged, maybe even handled outright, by the increase in over-provisioning of chips on a given size of Solid State Disk (sometimes as low as 17% more chips than what is typically used when the drive is full). Through careful statistical modeling and use of algorithms, an ideal shuffling of the deck of available flash memory chips allows the load to be spread out. No single chip fails, as its workload is shifted continuously to ensure it never gets anywhere near the maximum number of reliable read/write cycles. Similarly, attempts to 'recover' data from failing memory cells within a chip module are also making up for these problems. Last but not least, outright error-correcting hardware has been implemented on chip to ensure everything just works from the beginning of the life of the Solid State Disk (SSD) to the final days of its useful life.
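A toy version of that "shuffling of the deck", to make the wear-leveling idea concrete. Real SSD firmware is far more involved (garbage collection, ECC, bad-block maps); this is just the core allocation idea:

```python
# Toy wear-leveling allocator: every write goes to the block with the
# fewest erases so no single block approaches its cycle limit ahead of
# the others. Illustrative only, not any vendor's firmware.

import heapq

class WearLeveler:
    def __init__(self, num_blocks: int, cycle_limit: int = 3000):
        self.cycle_limit = cycle_limit
        # min-heap of (erase_count, block_id): least-worn block on top
        self.heap = [(0, blk) for blk in range(num_blocks)]
        heapq.heapify(self.heap)

    def allocate_block(self) -> int:
        erases, blk = heapq.heappop(self.heap)
        if erases >= self.cycle_limit:
            raise RuntimeError("flash worn out: all blocks at cycle limit")
        heapq.heappush(self.heap, (erases + 1, blk))
        return blk

wl = WearLeveler(num_blocks=8)
writes = [wl.allocate_block() for _ in range(16)]
print(writes)   # writes cycle evenly across all 8 blocks
```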

We may not see the SSD eclipse the venerable king of high-density storage, the Hard Disk Drive (HDD). Given the point of diminishing returns in Moore's Law (scaling down increases density, increases speed, lowers costs), flash may never reach the density we enjoy in a typical consumer-brand HDD (2TB). We may have to settle for other schemes that get us to that target through other means. Which brings me to my favorite product of the moment, the PCIe-based SSD: nothing more than a big circuit board with a bunch of SSDs tied together in a disk array with a big, fat memory controller/error-correction controller sitting on it. In terms of speed over the PCI Express bus, there are current products that beat single SATA 6Gb/s SSDs by a factor of two. And given the form-factor allowances of a PCIe card, any given module could be several times bigger, and two generations older, and still reach the desired 2-terabyte capacity of a typical SATA hard drive of today. Which to me sounds like a great deal if we could also see drops in price and increases in reliability by using older, previous-generation products and technology.

But the mobile market is hard to please, and it drives most decisions about what kind of flash memory modules get ordered en masse. No doubt Apple, Samsung and everyone else in consumer electronics will advise manufacturers to keep shrinking their chips to increase density and keep prices up on the final shipping product. I don't know how efficiently an iPhone or iPad uses the available memory on, say, a 64GB iPod touch; most of it goes into storing the music, TV shows, and apps people want readily available while passing time. The beauty of that design is that it rewards consumption by providing more capacity and raising marginal profit at the same time. This engine of consumer electronics design doesn't look likely to stall in spite of the physical limitations of shrinking flash memory chips. But there will be a day of reckoning soon, not unlike when Intel hit the wall at 4GHz serial processors and had to go multi-core to keep its marginal revenue flowing. Progress in processor performance has been very lateral since then. It is more than likely flash memory chips cannot get much smaller without becoming really unreliable and defective, thereby sliding into the same lateral incrementalism Intel has adopted. Get ready for the plateau.

Disk I/O: PCI Based SSDs (via makeitfaster)

If you want an expert's view of the currently shipping crop of PCIe flash cards, here is a great survey from the blog makeitfaster.

Great article with lots of hardcore, important details like drivers and throughput. It's early days yet for PCIe-based SSDs, so there are going to be lots of changes in architecture until a great design or a cheap design begins to dominate the market. And while some PCIe cards may not be ready for the enterprise data center, there may be a market in the high-end gamer fanboy product segment. Stay tuned!

Disk I/O: PCI Based SSDs The next step up from a regular sata based Solid State Disk is the PCIe based solid state disk. They bypass the SATA bottleneck and go straight through the PCI-Express bus, and are able to achieve better throughput. The access time is similar to a normal SSD, as that limit is imposed by the NAND chips themselves, and not the controller. So how is this different than taking a high end raid controller in a PCIe slot and slapping 8 or 12 good SSDs o …

via makeitfaster

PCIe based Flash caches

A chain of press releases from Flash memory product manufacturers has led me to an interesting conclusion. We already have Flash caches in the datacenter. How soon will they be on the desktop? Intel’s SpeedBoost cache was a joke compared to Fusion-io’s PCI cards. What might happen if every computer had no disk drive, but used a really high speed Flash memory cache instead?

Let me start by saying Chris Mellor of The Register has been doing a great job of keeping up with the product announcements from the big vendors of server-based flash memory products. I'm not talking simply about Solid State Disks (SSDs) with flash memory modules and Serial ATA (SATA) controllers. The new enterprise-level product that supersedes the SSD is a much higher-speed (faster than SATA) cache that plugs into the PCIe slots of rack-based servers. The fashion followed by many data center storage farms was to host large arrays of hot online, or warm nearly-online, spinning disks. Over time, de-duplication was added to prevent unnecessary copies and backups being made on this valuable and scarce resource. Offline backups to tape could be made throughout the day as a third tier of storage, with the disks acting as the second tier. What was the first tier? It would be the disks on the individual servers themselves, or the vast RAM that the online transactional databases were running in. So RAM, disk, tape: the three-tier fashion came into being. But as data grows and grows, more people want some of the stuff that was being warehoused out to tape, to do regression analysis on historical data. Everyone wants to create a model for trends they might spot in the old data. So what to do?

So as new data comes in and old data gets analyzed, it would seem there's a need to hold everything in memory all the time, right? Why can't we just always have it available? Arguing against this in a corporate environment is useless. Similarly, explaining why you can't speed up the analysis of historical data is also futile. Thank god there's a technological solution, and that is higher throughput. Spinning disks are a hard limit in terms of Input/Output (I/O). You can only copy so many gigabits per second over the SATA interface of a spinning disk hard drive. Even if you fake it by striping alternate bits across adjacent hard drives using RAID techniques, you're still limited. So flash-based SSDs have helped considerably as a tier of storage between the old disk arrays and the demands made by the corporate overseers who want to see all their data all the time. The big three disk storage array makers, IBM/Hitachi, EMC, and NetApp, are all making hybrid flash SSD and spinning disk arrays and optimizing the throughput through the software running the whole mess. Speeds have improved considerably. More companies are doing online analysis of data that previously would have been loaded from tape for offline analysis.
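A minimal sketch of that tiering idea, assuming a simple read path that checks RAM first, then a flash cache, then spinning disk, promoting hot data upward. The tier names and sizes are illustrative only, not any vendor's design:

```python
# Minimal tiered read path: RAM first, flash cache second, disk last,
# with hot data promoted upward. Purely illustrative.

from collections import OrderedDict

class TieredStore:
    def __init__(self, ram_slots: int = 4, flash_slots: int = 16):
        self.ram = OrderedDict()      # tier 1: smallest, fastest
        self.flash = OrderedDict()    # tier 2: PCIe/SSD cache
        self.disk = {}                # tier 3: effectively unbounded
        self.ram_slots, self.flash_slots = ram_slots, flash_slots

    def write(self, key, value):
        self.disk[key] = value        # everything lands on disk eventually

    def read(self, key):
        for tier in (self.ram, self.flash):
            if key in tier:
                tier.move_to_end(key) # keep hot items hot
                return tier[key]
        value = self.disk[key]        # slowest path
        self._promote(self.flash, self.flash_slots, key, value)
        self._promote(self.ram, self.ram_slots, key, value)
        return value

    @staticmethod
    def _promote(tier, limit, key, value):
        tier[key] = value
        tier.move_to_end(key)
        if len(tier) > limit:
            tier.popitem(last=False)  # evict the coldest entry

store = TieredStore()
store.write("row42", b"historical record")
print(store.read("row42"))            # first read hits disk, later reads hit RAM
```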

And the interconnects to the storage arrays have improved considerably too. Fibre Channel was a godsend in the storage farm as it allowed much higher speeds (first 2Gbit/sec, then doubling with each new generation). The proliferation of Fibre Channel alone made up for a number of failings in the speed of spinning disks, and it acted as a way of abstracting or virtualizing the physical and logical disks of the storage array. Over Fibre Channel, the storage control software offers up a 'virtual' disk but can manage it on the storage array itself any way it sees fit. Flexibility and speed reign supreme. But there's still an upper limit between the Fibre Channel interface and the motherboard of the server itself: the PCIe interface. And even with PCIe 2.0 there's an upper limit to how much throughput you can get off the machine and back onto the machine. Enter the PCIe disk cache.
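Rough numbers for that ceiling, as a sketch rather than vendor specs. Both links use 8b/10b encoding, so usable rates are approximate:

```python
# The ceiling the paragraph above points at: Fibre Channel doubles each
# generation, but the host's PCIe slot is the real upper bound.

fc_generations_gbit = [2, 4, 8]                         # FC line rates, Gbit/s
fc_usable_mb = [g * 100 for g in fc_generations_gbit]   # ~100 MB/s per Gbit (8b/10b)

pcie2_lane_mb = 500                                     # PCIe 2.0: ~500 MB/s per lane
pcie2_x8_mb = 8 * pcie2_lane_mb                         # ~4000 MB/s for an x8 slot

print("FC usable MB/s per generation:", fc_usable_mb)
print("PCIe 2.0 x8 slot ceiling:", pcie2_x8_mb, "MB/s")
```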

In this article I review the survey of PCIe-based SSDs and flash memory disk caches since they entered the market, as written up in The Register. It's not really a mainstream technology: it's prohibitively expensive, and it's going to be purchased by those who can afford it in order to gain the extra speed. But even in the short time since STEC was marketing its SSDs to the big three storage makers, a lot of engineering and design has created a brand new product category, and the performance within that category has made steady progress.

LSI's entry into the market is still very early, and shipping product isn't being widely touted. The Register is the only website actively covering this product segment right now. But the speeds and the density of the chips on these products just keep getting bigger, better and faster, which provides a nice parallel to Moore's Law in a storage device context. Prior to the PCIe flash cache market opening up, SATA and Serial Attached SCSI (SAS) were the upper limit of what could be accomplished even with flash memory chips. Soldering those chips directly onto an add-on board connected to the CPU through an 8-lane PCIe channel is nothing short of miraculous in the speeds it has gained. Now the competition between current vendors is to build one-off, customized setups to bench-test the theoretical top limit of what can be done with these new products. And this recent article from Chris Mellor shines a light on the newest product on the market, the LSI SSS6200. In the article Chris concludes:

None of these million IOPS demos can be regarded as benchmarks and so are not directly comparable. But they do show how the amount of flash kit you need to get a million IOPS has been shrinking

Moore's Law now holds true for the flash caches that are becoming the high-speed storage option for the many datacenters that absolutely have to have the highest I/O throughput available. And as the chips continue to shrink and the storage volume increases, who knows what the upper limit might be? News travels swiftly: Chris Mellor got a whitepaper press release from Samsung and began drawing some conclusions.

Interestingly, the owner of the Korean Samsung 20nm process foundry has just taken a stake in Fusion-io, a supplier of PCIe-connected flash solid-state drives. This should mean an increase in Fusion-io product capacities, once Samsung makes parts for Fusion using the new process

The flash memory makers are now in an arms race with the product manufacturers. Apple and Fusion-io get first dibs on shipping product as the new generation of flash chips enters the market: Apple has Toshiba, and Fusion-io gets Samsung. In spite of LSI's benchmark of 1 million IOPS in their test system, I give the advantage to Fusion-io in the very near future. Another recent announcement from Fusion-io is a small round of venture capital funding that will hopefully cement its future as a going concern. Let's hope its next-generation caches top out at a capacity that is competitive with its rivals, and at a speed equal to or faster than currently shipping product.

Outside the datacenter, however, things are more boring. I'm not seeing anyone try to peer into the future of the desktop or laptop and create a flash cache that performs at this level. Fusion-io does have a desktop product currently shipping, targeted mostly at the PC gaming market. I have not seen Tom's Hardware try it out or attempt to integrate it into a desktop system. The premium price is enough to make it very limited in its appeal (it lists at an MSRP of $799, I think). But let's step back and imagine what the future might be like. Given that Intel has incorporated the RAM memory controller into its i7 CPUs, and given that its design rules have shrunk so far that adding the memory controller was not a big sacrifice, is it possible the PCIe interface electronics could be migrated onto the CPU, away from the northbridge chipset? I'm not saying there should be no chipset at all; a bridge chip is absolutely necessary for really slow I/O devices like the USB interface. But maybe there could be at least one 16x PCIe link directly into the CPU, or possibly an 8x link. If such a product existed, a Fusion-io cache could have almost 1TB of flash directly connected to the CPU and act as the highest-speed storage yet available on the desktop.

Other routes to higher-speed storage could even include another tier of memory slots with an accompanying JEDEC standard for 'storage' memory. RAM would go in one set of slots, flash in the other, and you could mix, match and add as much flash memory as you liked. This could potentially be addressed through the same memory controllers already built into Intel's currently shipping CPUs. Why does this even matter, and why do I think about it at all? Because I am awaiting the next big speed increase in desktop computing. Ever since the megahertz wars died out, much of the increase in performance has been so incremental that there's not a dime's worth of difference between any two currently shipping PCs. Disk storage has reigned supreme, and it has become painfully obvious that it is the last link in the I/O chain that has stayed pretty much static. The migration from Parallel ATA to Serial ATA improved things, but nothing like the march of improvements that came with each new generation of Intel chips. So I vote for dumping disks once and for all. Move to 2TB flash memory storage and run it through the fastest channel we can get onto and off of the CPU. There's no telling what new things we might be able to accomplish with the speed boost. Not just games, not just watching movies and not just scientific calculations. It seems to me that everything, OS and apps both, would receive a big benefit from dumping the disk.