The big question is endurance; however, this time around we won't see a reduction in write cycles. IMFT's 20nm client-grade compute NAND used in consumer SSDs is rated for 3K–5K write cycles, identical to its 25nm process.
If true, this will help considerably in driving down the cost of Flash memory chips while maintaining the current level of wear and performance drop seen over the lifetime of a chip. Stories I had read previously indicated that Flash memory might not continue to evolve using the current generation of silicon chip manufacturing technology. Performance drops occur as memory cells wear out, and memory cells were wearing out faster and faster as the wires and transistors on the Flash memory chip got smaller and narrower.
The reason for this is that memory cells have to be erased in order to free them up, and writing and erasing take a toll on the memory cell each time one of these operations is performed. Single-level cells (SLC) are the most robust and can go through many tens of thousands of write and erase cycles before they wear out. However, the cost per megabyte of single-level cells makes them an Enterprise-level product at a premium price, generally aimed at corporate customers. Two-bit multi-level cells (MLC) are much more cost effective, but the structure of the cells makes them less durable than single-level cells. And as the wires connecting them get thinner and narrower, the number of write and erase cycles they can endure without failing drops significantly. Enterprise customers in the past would not purchase products specifically because of this limitation of the multi-level cell.
As companies like Intel and Samsung tried to make Flash memory chips smaller and less expensive to manufacture, the durability of the chips declined. The question everyone asked: is there a point of diminishing returns where smaller design rules and thinner wires make chips too fragile? The solution for most manufacturers is to add spare memory cells, “over-provisioning” so that when a cell fails, you can unlock a spare and continue using the whole chip. This not-so-secret over-provisioning trick has been the way most Solid State Disks (SSDs) have handled the write/erase problem for multi-level cells. But even then, the question is how much do you over-provision? Another technique is wear-levelling, where a memory controller distributes writes and erases over ALL the chips available to it. A statistical scheme ensures each chip suffers equally and receives the same amount of wear and tear. It's a difficult balancing act for the manufacturers of Flash memory, and for the storage product manufacturers who consume those chips, to make products that perform adequately, do not fail unexpectedly, and do not cost too much for laptop and desktop manufacturers to offer to their customers.
If Intel and Micron can successfully address the fragility of Flash chips as the wiring and design rules get smaller and smaller, we will start to see larger memories included in more mobile devices. I predict you will see iPhones and Samsung Android smartphones with upwards of 128GBytes of Flash memory storage. Similarly, tablets and ultra-mobile laptops will start to offer larger and larger SSDs. Costs should stay about where they are now relative to currently shipping products; we'll just have more products to choose from, say 1TByte SSDs instead of the more typical high-end 512GByte SSDs we see today. Prices might also come down, but that's bound to take a little longer, until all the other Flash memory manufacturers catch up.
Image via Wikipedia: Wiring of a Flash Memory Cell
Fusion-io has crammed eight ioDrive flash modules on one PCIe card to give servers 10TB of app-accelerating flash.
This follows on from its second generation ioDrives: PCIe-connected flash cards using single level cell and multi-level cell flash to provide from 400GB to 2.4TB of flash memory, which can be used by applications to get stored data many times faster than from disk. By putting eight 1.28TB multi-level cell ioDrive 2 modules on a single wide ioDrive Octal PCIe card Fusion reaches a 10TB capacity level.
This is some big news in the fight to be king of the PCIe SSD market. I declare: Advantage, Fusion-io. They now have the lead not just in speed but also in overall capacity at their targeted price point. As densities increase and prices more or less stay flat, the value-add is that more data can stay resident on the PCIe card instead of being swapped out to Fibre Channel array storage on the Storage Area Network (SAN). Performance is likely to be wicked fast, and early adopters will no doubt reap big benefits in transaction processing and online analytic processing as well.
Through the first quarter of 2012, Intel will be releasing new SSDs: the Intel SSD 520 “Cherryville” Series, replacing the Intel SSD 510 Series; the Intel SSD 710 “Lyndonville” Series, an Enterprise HET-MLC replacement for the X25-E series; and the Intel SSD 720 “Ramsdale” Series, a PCIe-based SSD. In addition, you will see two additional mSATA SSDs code-named “Hawley Creek” by the end of the fourth quarter of 2011.
That's right, folks: Intel is jumping on the high-performance PCIe SSD bandwagon with the Intel SSD 720 in the first quarter of 2012. I don't know what price they will charge, but given quotes and pre-released specs, it's going to compete against products from competitors like RamSan, Fusion-io and the top-level OCZ PCIe product, the R4. My best guess, based on pricing for those products, is that it will land in the roughly $10,000+ category, with an x8 PCIe interface and a full complement of Flash memory (usually over 1TB on this class of PCIe card).
Knowing that Intel’s got some big engineering resources behind their SSD designs, I’m curious to see how close they can come to the performance statistics quoted in this table here:
2200 MBytes/sec of read throughput and 1100 MBytes/sec of write throughput. Those are some pretty hefty numbers compared to currently shipping products in the upper prosumer and lower Enterprise-class price category. Hopefully Anandtech will get a shipping or even pre-release version before the end of the year and give it a good torture test. Following Anand Lal Shimpi on his Twitter feed, I'm seeing all kinds of tweets about how many pre-release products from manufacturers of SSDs and PCIe SSDs fail during benchmarks. That doesn't bode well for the quality-control departments at the manufacturers assembling and testing these products. Especially considering the price premium of these items, it would be much more reassuring if the testing were more rigorous and conservative.
In the enterprise segment, where 1U and 2U servers are common, PCI Express SSDs are very attractive. You may not always have a ton of 2.5″ drive bays, but there's usually at least one high-bandwidth PCIe slot unused. The RevoDrive family of PCIe SSDs was targeted at the high-end desktop or workstation market, but for an enterprise-specific solution OCZ has its Z-Drive line.
Anandtech is breaking new ground covering some Enterprise-level segments of the Solid State Disk industry. While I doubt they'll be reviewing Violin and Texas Memory Systems gear any time soon, OCZ's low-end Enterprise PCIe cards are beginning to approach that target. We're talking $10,000 USD and up for anyone who wants to participate, which puts it in the middle to high end of Fusion-io's range and barely touching the lower end of Violin and TMS, not to mention Virident. Given that, it is still wild to see what kind of architecture and performance optimization one gets for the money. SandForce rules the day at OCZ for anything requiring top write speeds. It's also interesting to learn that the SandForce 25xx series uses super-capacitors to hold enough reserve power to flush the write caches on a power outage. It's expensive, but it moves the product up a few notches on the Enterprise reliability scale.
If you want more speed, then you will have to look to PCI Express for the answer. Austria-based Angelbird has opened its online storefront with its Wings add-in card and SSDs.
More than a year after it was announced, Angelbird has designed and manufactured a new PCIe flash card, the design of which is fully expandable over time depending on your budget. Fusion-io has a few ‘expandable’ cards in its inventory too, but Fusion-io's price class is much higher than the consumer-level Angelbird product. So if you cannot afford to build out a 1TB flash-based PCIe card, do not worry: buy what you can and outfit it later as your budget allows. Now that's something any gamer fanboy or desktop enthusiast can get behind.
Angelbird does warn in advance that the power demands of typical 2.5″ SATA flash modules are higher than what the PCIe bus can typically provide. They recommend using Angelbird's own memory modules to populate the base-level PCIe card. Until I read those recommendations, I had forgotten some of the limitations and workarounds graphics card manufacturers typically use. These have become so routine that typical desktop machines now provide 2-3 extra power taps to accommodate the extra power required by today's display adapters. It makes me wonder if Angelbird could do a revision of the base-level PCIe card with a little 4-pin power input or something similar. It doesn't need another 150 watts; it's going to be closer to 20 watts for this type of device, I think. I wish Angelbird well, and I hope sales start strong so they can sell out their first production run.
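The slot-power concern above amounts to simple arithmetic. Here's a quick back-of-the-envelope check; the 25 W budget is the usual ballpark for a standard x4/x8 PCIe slot, and the per-module and controller wattages are illustrative guesses, not Angelbird's actual figures:

```python
def slot_power_check(slot_budget_watts, module_watts, n_modules, controller_watts=5.0):
    """Return (total draw, fits-in-budget) for a card populated with
    n_modules flash modules plus a bridge controller. All wattage
    figures are rough illustrative ballparks, not vendor numbers."""
    total = controller_watts + n_modules * module_watts
    return total, total <= slot_budget_watts

# Four frugal ~4 W modules plus a controller fit inside a 25 W slot...
print(slot_power_check(25.0, 4.0, 4))    # (21.0, True)
# ...but four hungrier ~7 W drives blow the budget, hence the vendor's
# advice to use its own low-power modules.
print(slot_power_check(25.0, 7.0, 4))    # (33.0, False)
```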
By bypassing the SATA bottleneck, OCZ's RevoDrive Hybrid promises transfer speeds up to 910 MB/s and up to 120,000 IOPS of 4K random writes. The SSD portion reportedly uses a SandForce SF-2281 controller, and the hard drive platters spin at 5,400rpm. On the whole, the hybrid drive makes good use of the company's proprietary Virtualized Controller Architecture.
Good news on the consumer electronics front: OCZ continues to innovate in the desktop aftermarket, introducing a new PCIe Flash product that marries a nice 1TByte hard drive to a 100GB flash-based SSD. The best of both worlds, all in one neat little package. Previously you might buy these two devices separately: one average-sized flash drive and one spacious hard drive. You would then configure the flash drive as your system boot drive and, using some kind of alias/shortcut trick, put your user folder on the hard drive to hold videos, pictures, etc. This has caused some very conservative types to sit out and wait for even bigger flash drives, hoping to store everything on one logical volume. But what they really want is a hybrid of big storage and fast speed, and that, according to the press release, is what the OCZ hybrid drive delivers. With a SandForce drive controller and two drives, the whole architecture is hidden away, along with the caching algorithm that moves files between the flash and hard drive storage areas. End users see just one big hard drive (albeit installed in one of their PCI card slots), but experience faster boot times and faster application loading. I'm seriously considering adding one of these devices to a home computer we have, migrating the boot drive and user home directories over to it, and using the current hard drives as the Windows backup device. I think that would be a pretty robust setup and could accommodate a lot of future growth and expansion.
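OCZ's actual caching algorithm is proprietary, but the general pattern behind any flash/disk hybrid can be sketched with a small hot-data cache. The sketch below is my own guess at the idea, using simple least-recently-used promotion; none of it reflects OCZ's real implementation:

```python
from collections import OrderedDict

class HybridDrive:
    """Toy model of a flash-cached hard drive: recently touched blocks
    live in a small flash tier, everything else stays on the disk tier.
    Illustrative sketch only, not OCZ's proprietary algorithm."""

    def __init__(self, flash_blocks):
        self.flash_capacity = flash_blocks
        self.flash = OrderedDict()   # block_id -> data, kept in LRU order
        self.disk = {}               # backing store: block_id -> data

    def write(self, block_id, data):
        self.disk[block_id] = data   # disk always holds the full copy
        self._promote(block_id, data)

    def read(self, block_id):
        if block_id in self.flash:            # fast path: flash hit
            self.flash.move_to_end(block_id)
            return self.flash[block_id], "flash"
        data = self.disk[block_id]            # slow path: spinning disk
        self._promote(block_id, data)         # cache it for next time
        return data, "disk"

    def _promote(self, block_id, data):
        self.flash[block_id] = data
        self.flash.move_to_end(block_id)
        if len(self.flash) > self.flash_capacity:
            self.flash.popitem(last=False)    # evict least-recently-used block
```

To the user there is only one address space; the split between fast and slow media is invisible, which is exactly the appeal of the hybrid approach.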
OCZ says it is available for evaluation now by OEMs, and, we presume, OCZ will be using it in its own flash products. We're looking at 1TB SSDs using TLC flash that ship sequential data out at 500MB/sec, boot quickly, and could be combined to provide multi-TB flash data stores. Parallelising data access would provide multi-GB/sec I/O. The flash future looks bright.
Who knew pairing an ARM core with the drive electronics of a Flash-based SSD could be so successful? Not only are ARM chips driving the CPUs in our handheld devices, they are now becoming SSD drive controllers too! If OCZ can produce these drive controllers with good yields (say 70% on the first run), then they will hopefully give themselves a pricing advantage and a higher profit margin per device sold. This assumes they don't have to pay royalties for a SandForce drive controller on every device they ship.
I would be surprised if OCZ had drawn up its own drive controller from scratch. However, since they have acquired Indilinx, it seems they are making good on the promise held by Indilinx's current crop of drive controllers. Let's just hope they are able to match SandForce's performance at the same price points as well. Otherwise it's nothing more than a patent machine that will allow OCZ to wage lawsuits against competitors using the Intellectual Property acquired with Indilinx. And we have seen too much of that recently, with Apple's bid for Nortel's patent pool and Google's acquisition of Motorola.
SeaMicro has been peddling its SM10000-64 micro server, based on Intel's dual-core, 64-bit Atom N570 processor, cramming 256 of these chips into a 10U chassis. . .
. . . The SM10000-64 is not so much a micro server as a complete data center in a box, designed for low power consumption and loosely coupled parallel processing, such as Hadoop or Memcached, or small monolithic workloads, like Web servers.
While it is not always easy to illustrate the cost/benefit and return on investment of a lower-power box like the SeaMicro, running it head to head against a bunch of off-the-shelf Xeon boxes on a similar workload really shows the difference. The calculation of the benefit is critical too. What do you measure? Is it speed? Speed per transaction? Total volume pushed through? Or is it cost per transaction within a set number of transactions? You're getting closer with that last one. The test setup used a set number of transactions that needed to be completed in a set period of time. The benchmark then measured the total power dissipated to accomplish that number of transactions in the allotted time. SeaMicro came away the winner in unit cost per transaction in power terms. While the Xeon-based servers had huge excess speed and capacity, their power dissipation put them pretty far into the higher cost-per-transaction category.
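The metric in that benchmark boils down to one line of arithmetic. Here's a hedged sketch; the workload size and both wattage figures are made-up illustrations to show the shape of the calculation, not measured SeaMicro or Xeon numbers:

```python
def watt_hours_per_txn(transactions, avg_watts, seconds):
    """Energy consumed per transaction for a fixed workload completed
    within a fixed time window. All inputs here are illustrative."""
    return (avg_watts * seconds / 3600.0) / transactions

# Hypothetical head-to-head: both setups finish 1,000,000 transactions
# in one hour, but at very different power draws (assumed figures).
xeon_rack = watt_hours_per_txn(1_000_000, 14_000, 3600)  # assumed 14 kW Xeon rack
seamicro  = watt_hours_per_txn(1_000_000, 3_500, 3600)   # assumed 3.5 kW micro server
# xeon_rack -> 0.014 Wh/txn, seamicro -> 0.0035 Wh/txn: the lower-power
# box wins on energy cost per transaction despite the Xeons' speed headroom.
```

This is why fixing the transaction count and the time window matters: the Xeon rack's excess speed is worthless in this framing, while its power draw counts fully against it.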
However, it is very difficult to communicate this advantage that SeaMicro has over Intel. Future tests and benchmarks need to be constructed with clearly stated goals and criteria, specifically framed as a case history of a particular problem that could be solved by either a SeaMicro server or a bunch of Intel boxes running Xeon CPUs with big caches. Once that case history is well described, the two architectures can be put to work with the end goal stated in clear terms (cost per transaction). Then, and only then, will SeaMicro communicate effectively how it does things differently and how that can save money. Otherwise it's too different to measure effectively against an Intel Xeon-based rack of servers.
This is the shortest and most pragmatic presentation I've seen about what SSDs can do for you. He recommends buying Intel 320s and getting your feet wet, like trading a bicycle for a Ferrari. Later on, if you need to go with a PCIe SSD, do it, but that's like the difference between a Ferrari and a Formula 1 race car. Personally, in spite of the minor difference Artur is trying to illustrate, I still like the idea of buying once and getting more than you need. And if this doesn't start you down the road of seriously buying SSDs of some sort, check out this interview with Violin Memory CEO Don Basile:
Basile said: “Larry is telling people to use flash … That’s the fundamental shift in the industry. … Customers know their competitors will adopt the technology. Will they be first, second or last in their industry to do so? … It will happen and happen relatively quickly. It’s not just speed; it’s the lowest cost of database transaction in history. [Flash] is faster and cheaper on the exact same software. It’s a no-brainer.”
Violin Memory is the current market leader in data center SSD installations for transactional and analytical processing. The boost folks are getting from putting their databases on Violin Memory boxes is automatic, requires very little tuning, and the results are just flat-out astounding. The ‘Larry’ quoted above is Larry Ellison of Oracle, the giant database maker. With that kind of praise, I'm going to say the tipping point is near, but please read the article. Chris Mellor lays out a pretty detailed future of evolution in SSD sales and new product development. 3-bit multi-level cells in NAND flash are what Mellor thinks will be the tipping point, as price is still the biggest sticking point for anyone responsible for bidding on new storage system installs. That price objection matters most for batch-oriented, off-line data warehouse analysis; for online streaming analysis, SSD is already cheaper per byte per second of throughput. So depending on the style of database work you do and the performance you need, SSDs are putting the big-iron spinning hard disk vendors to shame. The inertia of big capital outlays and cozy vendor relationships will make it harder for some shops to adopt the new technology (“But IBM is giving us such a big discount!”, “WE are an EMC shop”, etc.). However, the competitors of the folks who own those datacenters will soon eat all the low-hanging fruit a simple cutover to SSDs affords, and the competitive advantage will swing to the early adopters.
*Late Note: Chris Mellor followed up Monday night (June 27th) with an editorial further laying out the challenge that data center Flash array vendors present to disk storage. Check it out:
What should the disk drive array vendors do if this scenario plays out? They should buy in or develop their own all-flash array technology. Having a tier of SSD storage in a disk drive array is a good start, but customers will want the simpler choice of an all-flash array, and anyway, those are here now. Guys like Violin and Whiptail and TMS are knocking on the storage array vendors' customers' doors right now.
Tuesday at Computex, OCZ claimed that it set a new benchmark of 1 million 4K write IOPS and 1.5 million read IOPS with a single Z-Drive R4 88-equipped 3U Colfax International Server.
Between the RevoDrive and the Z-Drive, OCZ is tearing up the charts with product releases announced at the Computex 2011 trade show in Taipei, Taiwan. This particular one-off demonstration used a number of OCZ's announced but as-yet-unreleased Z-Drive R4 88 cards packed into a 3U Colfax International enclosure. In other words, it's an idealized demonstration of the performance you might achieve in a best-case scenario. The speeds are in excess of 3GBytes/sec for writing and reading, which for web serving or database hosting is going to make a big difference for people who need the I/O. Previously you would have needed a very expensive, large-scale Fibre Channel hard drive array that split and RAID'd the data across so many spinning spindles that you might come partially close to matching these speeds. But the SIZE! Ohmigosh. You could never fit that amount of hardware into a 3U enclosure. So space-constrained data centers will benefit enormously from swapping some of their drive array infrastructure for these more compact I/O monsters (some from other manufacturers too, like Violin, RamSan and Fusion-io). Again, as I have said before, when Anandtech and Tom's Hardware can get sample hardware to benchmark, I will be happy to see what else these PCIe SSDs can do.