Categories: cloud, computers, data center, flash memory, SSD, technology

EMC’s all-flash benediction: Turbulence ahead • The Register

msystems (Image via Wikipedia)

A flash array controller needs: “An architecture built from the ground up around SSD technology that sizes cache, bandwidth, and processing power to match the IOPS that SSDs provide while extending their endurance. It requires an architecture designed to take advantage of SSDs unique properties in a way that makes a scalable all-SSD storage solution cost-effective today.”

via EMC’s all-flash benediction: Turbulence ahead • The Register.

I think that storage controllers are the point of differentiation for the SSDs coming onto the market today. Similarly, the device that ties those SSDs into the computer and its OS is equally, nay more, important. I’m thinking specifically of a product like the SandForce 2000 series SSD controllers. They more or less provide a SATA or SAS interface into a small array of Flash memory chips that are made to look and act like a spinning hard drive. However, the time is coming soon when all those transitional conventions can just go away and a clean-slate design can go forward. That’s why I’m such a big fan of the PCIe-based Flash storage products. I would love to see SandForce create a disk controller where one interface speaks PCIe 2.0/3.0 and the other is open to whatever technology Flash memory manufacturers are using today. Ideally the host bus would always be a high-speed PCI Express interface, licensed or designed from the ground up to speed I/O in and out of the Flash memory array. The memory-facing side could be almost like an FPGA, made to order according to the features and idiosyncrasies of whatever Flash memory architecture is shipping at the time of manufacture. The same would apply to error correction and over-provisioning for failed memory cells as the SSD ages through multiple read/write cycles.
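To make that two-sided idea a little more concrete, here is a rough Python sketch of what such a controller’s configuration might look like. The class names and fields are mine, purely illustrative, and not anything SandForce has announced:

```python
from dataclasses import dataclass

@dataclass
class HostInterface:
    """Host-facing side: a fixed, high-speed PCI Express link."""
    pcie_generation: int   # e.g. 2 or 3
    lanes: int             # e.g. x4 or x8

@dataclass
class FlashInterface:
    """Flash-facing side: reworked per Flash generation, almost like an FPGA."""
    page_size_bytes: int        # native page size of the Flash being used
    pages_per_block: int
    ecc_bits_per_page: int      # error-correction strength
    overprovision_pct: float    # spare capacity held back for wear and failed cells

@dataclass
class FlashController:
    """Hypothetical controller pairing a stable host bus with a swappable Flash side."""
    host: HostInterface
    flash: FlashInterface

# Example: a PCIe 3.0 x8 host link in front of whatever Flash is current at build time.
controller = FlashController(
    host=HostInterface(pcie_generation=3, lanes=8),
    flash=FlashInterface(page_size_bytes=8192, pages_per_block=256,
                         ecc_bits_per_page=40, overprovision_pct=12.5),
)
```

The point of the split is that the host-facing half never has to change, while the Flash-facing half gets re-tuned for each new memory generation.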

In the article I quoted at the top from The Register, the big storage array vendors are attempting to market new products by adding Flash memory to either one component of the whole array or, in the case of EMC, using Flash-based SSDs throughout the whole product. That more aggressive approach had seemed cost prohibitive given how cheaply large-capacity commodity hard drives can be manufactured. But the problem is, in the market where these vendors compete, everyone pays an enormous price premium for the hard drives, storage controllers, cabling, and software that makes it all work. Though the hard drive might be cheaper to manufacture, the storage array is not, and that margin is what makes storage a very profitable business to be in. As I noted last week in the benchmark comparison of high-throughput storage arrays, Flash-based arrays are ‘faster’ per dollar than a well-designed, engineered, top-of-the-line hard-drive-based storage array from IBM. So for the segment of the industry that needs the throughput more than the total space, EMC will likely win out. But Texas Memory Systems (TMS) is out there too, attempting to sign OEM contracts with companies selling into the storage array market. The Register does a very good job surveying the current field of vendors and manufacturers, looking at which companies might buy a smaller player like TMS. But the more important trend spotted throughout the survey is the decidedly strong move toward native Flash memory in the storage arrays being sold into the enterprise market. EMC has a lead that most will be following real soon now.

Categories: computers, data center, flash memory, SSD, technology

TMS flash array blows Big Blue away • The Register

Memory collection (Image by teclasorg via Flickr)

Texas Memory Systems has absolutely creamed the SPC-1 storage benchmark with a system that comfortably exceeds the current record-holding IBM system at a cost per transaction of 95 per cent less.

via TMS flash array blows Big Blue away • The Register.

One might ask a simple question: how is this even possible, given the cost of the storage media involved? How did a Flash-based RamSan array from Texas Memory Systems beat a huge pile of IBM hard drives, all networked and bound together into a massive storage system? And how did it do it for less? Woe be to those unschooled in the ways of the Per-feshunal Data Center purchasing department. You cannot enter the halls of the big players unless you’ve got million-dollar budgets for big iron servers and big iron storage. Fibre Channel and InfiniBand rule the day when it comes to big data throughput. All those spinning drives are accessed simultaneously, as if each one held one slice of the data you were asking for, each one delivering up its 1/10th of 1% of the total file you were trying to retrieve. The resulting speed makes it look like one hard drive that is 10x10 faster than your desktop computer’s hard drive, all through the smoke and mirrors of the storage controllers and the software that makes them go. But what if, just what if, we decided to take Flash memory chips and knit them together with a storage controller that made them appear just like a big iron storage system? Well, since Flash obviously costs something more than $1 per gigabyte and disk drives cost somewhere less than 10 cents per gigabyte, the Flash storage loses, right?
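To picture what those controllers are doing behind the smoke and mirrors, here is a minimal Python sketch of round-robin striping: data chopped into chunks and spread across several drives so they can all be hit at once. The chunk size and drive count are made-up illustrative values, and the “drives” are just in-memory lists:

```python
# Minimal striping sketch: spread data round-robin across N "drives", then read it back.
CHUNK = 4096      # bytes per stripe unit (illustrative)
N_DRIVES = 10     # number of spindles the data is spread across (illustrative)

def stripe_write(data: bytes, n_drives: int = N_DRIVES, chunk: int = CHUNK):
    """Place each chunk on the next drive in turn, so reads can hit all drives at once."""
    drives = [[] for _ in range(n_drives)]
    for i in range(0, len(data), chunk):
        drives[(i // chunk) % n_drives].append(data[i:i + chunk])
    return drives

def stripe_read(drives) -> bytes:
    """Walk the drives in the same round-robin order to reassemble the original data."""
    out, position = [], 0
    while True:
        drive = drives[position % len(drives)]
        index = position // len(drives)
        if index >= len(drive):
            break
        out.append(drive[index])
        position += 1
    return b"".join(out)

data = bytes(range(256)) * 1000            # ~256 KB of sample data
assert stripe_read(stripe_write(data)) == data
print(f"{len(data)} bytes striped across {N_DRIVES} drives in {CHUNK}-byte chunks")
```

A real array does this in the controller hardware and firmware, of course, but the principle is the same: every drive serves up its slice in parallel.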

In terms of total storage capacity, Flash will lose for quite some time if you are talking about holding everything on disk all at the same time. But that is not what’s being benchmarked here at all. No, in fact what is being benchmarked is the rate at which input (writing of data) and output (reading of data) are done through the storage controllers. IOPS measures the total number of completed reads/writes in a given amount of time. Prior to this latest result from the RamSan-630, IBM was king of the mountain with its huge striped Fibre Channel arrays, all linked up through its own storage array controllers. The RamSan came in at 400,503.2 IOPS, compared to 380,489.3 for IBM’s top-of-the-line SAN Volume Controller. That’s not very much difference, you say, especially considering how much less data a RamSan can hold… And that would be a valid argument, but consider again: that’s not what we’re benchmarking. It is the IOPS.
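A quick back-of-the-envelope check on those two published numbers, using only the figures quoted above:

```python
# Compare the two SPC-1 results quoted above on raw IOPS alone.
ramsan_iops = 400_503.2   # TMS RamSan-630
ibm_iops = 380_489.3      # IBM SAN Volume Controller

lead_pct = (ramsan_iops - ibm_iops) / ibm_iops * 100
print(f"RamSan-630 leads the IBM SVC by {lead_pct:.1f}% on raw IOPS")  # ~5.3%
```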

Total cost per IOP for the IBM benchmarked system was $18.83. The RamSan (which beat IBM in total IOPS) came in at a measly $1.05 per IOP. That cost is literally 95% less than IBM’s. Why? Consider the price (even if it was steeply discounted, as most tech writers will note as a caveat): IBM’s benchmarked system costs $7.17 million. Remember, I said you need million-dollar budgets to play in the data center space. Now consider that the RamSan-630 costs $419,000. If you want speed, dump your spinning hard drives; Flash is here to stay, and you cannot argue with the speed versus the price at this level of performance. No doubt this is going to threaten the livelihood of a few big iron storage manufacturers. But through disruption, progress is made.
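For the record, here is that cost arithmetic run from the quoted prices and IOPS figures; it lands a hair under the 95 per cent figure The Register cites, depending on rounding:

```python
# Cost per IOP, computed from the quoted system prices and SPC-1 IOPS results.
ibm_price, ibm_iops = 7_170_000, 380_489.3        # ~$7.17M benchmarked IBM config
ramsan_price, ramsan_iops = 419_000, 400_503.2    # RamSan-630

ibm_per_iop = ibm_price / ibm_iops                # ~$18.84
ramsan_per_iop = ramsan_price / ramsan_iops       # ~$1.05

savings_pct = (1 - ramsan_per_iop / ibm_per_iop) * 100
print(f"IBM:    ${ibm_per_iop:.2f} per IOP")
print(f"RamSan: ${ramsan_per_iop:.2f} per IOP ({savings_pct:.0f}% less)")
```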