May the SandForce be with you • The Register

Just a quick link to a press release from the makers of one of the most popular high-performance disk controllers for solid state drives. Higher speeds and more throughput make SSDs even more attractive to enterprise data centers, and they mean faster laptops and desktops for us all.





SandForce has now announced the SF-2000 controller, which doubles the I/O performance of the SF-1500. The new product sustains 60,000 read and write IOPS and does 500MB/sec when handling read or write data. It uses a 6Gbit/s SATA interface, and SandForce says it can make use of single-level cell flash, MLC, or the enterprise MLC put out by Micron.

via May the SandForce be with you • The Register.

SandForce is continuing to make great strides in its SSD disk controller architecture. There’s no stopping the train now. But as always, read the fine print on any SSD product you buy and find out who manufactures the drive controller and what version it is. Benchmarks are always a good thing to consult before you buy, too.

Micron intros SSD speed king • The Register

The market for SSDs is expanding, and a few notable players are starting to leverage their consumer products by re-engineering proven designs as enterprise-level hardware. Micron is evolving the consumer RealSSD C300 into the RealSSD P300 and hoping to reap a big, high-margin enterprise windfall.

The RealSSD P300 comes in a 2.5-inch form factor and in 50GB, 100GB and 200GB capacity points, and is targeted at servers, high-end workstations and storage arrays. The product is being sampled with customers now and mass production should start in October.

via Micron intros SSD speed king • The Register.


For the first time since SSDs hit the market, I am looking closely at the drive performance of each new product being offered. What I’ve begun to realize is that the speeds are starting to fall into a familiar range. For instance, I can safely say that a drive in the 120GB range with multi-level cell (MLC) flash will give you a minimum of 200MB/sec read/write speeds (reading is usually somewhat faster than writing on most drives). This is a vague estimate of course, but it’s becoming more and more common. Smaller drives have slower speeds and suffer on benchmarks due in part to their smaller number of parallel data channels. Bigger capacity drives have more channels and therefore can move more data per second.

A good capacity for a boot/data drive is going to be in the 120-128GB category. And while it won’t be the best for archiving all your photos and videos, that’s fine. Use a big old 2-3TB SATA drive for those heavy-lifting duties. I think that will be a more common architecture in the future, not the premium choice it is now: SSD for boot/data and a typical HDD for big archive and backup.
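To put the channel argument in concrete terms, here’s a rough back-of-the-envelope sketch in Python. The ~25 MB/s per-channel figure is my own illustrative assumption, not a measured spec, but it shows why drives with more parallel NAND channels post bigger sequential numbers:

```python
# Back-of-the-envelope model: sequential throughput scales with the
# number of parallel NAND channels the controller can drive at once.
# PER_CHANNEL_MBS is an assumed figure for illustration, not a spec.
PER_CHANNEL_MBS = 25  # assumed sustained MB/s per NAND channel


def estimated_throughput(channels: int) -> int:
    """Estimate sequential throughput in MB/s for a given channel count."""
    return channels * PER_CHANNEL_MBS


# A small-capacity drive might only populate 4 channels; a 128GB-class
# drive typically populates 8 or more.
for channels in (4, 8, 10):
    print(f"{channels} channels -> ~{estimated_throughput(channels)} MB/s")
```

Under that assumption an 8-channel drive lands right around the 200MB/sec figure quoted above, which is why capacity and speed tend to climb together.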

On the enterprise front things are a little different: speed and throughput are important, but the drive interface is as well. With SATA being the most widely used interface for consumer hardware, big drive arrays for the data center are wedded to Serial Attached SCSI (SAS) or Fibre Channel (FC). So manufacturers and designers like Micron need to engineer niche products for the high-margin markets that require SAS or FC versions of the SSD. As was the case with the transition from Parallel ATA to Serial ATA, the first products are going to use SATA-to-X bridge adapters and on-board electronics to make them compatible. This will likely be the standard procedure for quite a while, as a ‘native’ Fibre or SAS interface will require a fair bit of engineering, plus cost increases to accommodate the enterprise interfaces. Speeds, however, will likely always be tuned for the higher-volume consumer market, and the SATA version of each drive will likely be the highest-throughput version in each drive category. I’m thinking the data center folks should adapt and adjust and go with consumer-level SATA SSDs, now that drives are no longer mechanically spinning disks. Similarly, as more and more manufacturers do their own error correction and wear leveling on the memory chips in their SSDs, reliability will equal or exceed that of FC or SAS spinning disks.

And speaking of spinning disks, the highest throughput I’ve ever seen quoted for a SATA disk was 150MB/sec. Hands down, that was theoretically the best it could ever do. More likely you would only see 80MB/sec (which takes me back to the old days of Fast/Wide SCSI and the Barracuda). Given the limits of moving media, with spinning platters and read/write heads tracking across their surface, Flash throughput is just stunning. We are now in an era in which Flash SSDs, while slower than RAM, are awfully fast, and fast enough to notice when booting a computer. I think the only real speed enhancement beyond the drive interface is to put Flash directly on the motherboard and build the SATA drive controller directly into the CPU to handle read/write requests. I doubt it would be cost-effective for the amount of improvement, but it would eliminate some of the motherboard electronics and smooth the flow a bit. Something to look for in netbook or slate-style computers in the future.
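To get a feel for what those numbers mean in practice, here’s a quick bit of arithmetic using the speeds quoted above (80 MB/s for a realistic spinning SATA disk, 200 MB/s for a mid-range SSD, 500 MB/s for an SF-2000 class drive). The 10 GB payload is just an example figure of my choosing:

```python
# How long does reading a given payload take at typical sustained
# speeds? The speeds are the ones quoted in the text; the 10 GB
# payload is an arbitrary example (roughly an OS install's worth).
def transfer_seconds(size_gb: float, speed_mb_per_s: float) -> float:
    """Time in seconds to move size_gb gigabytes at the given MB/s."""
    return size_gb * 1024 / speed_mb_per_s


drives = [
    ("Spinning SATA disk @ 80 MB/s", 80),
    ("Mid-range SSD @ 200 MB/s", 200),
    ("SF-2000 class SSD @ 500 MB/s", 500),
]
for name, speed in drives:
    print(f"{name}: ~{transfer_seconds(10, speed):.0f} s for 10 GB")
```

Roughly two minutes versus twenty seconds for the same 10 GB; that is the kind of difference you notice at boot time.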

OCZ’s RevoDrive Preview: An Affordable PCIe SSD – AnandTech

Previously I’ve posted a lot about Enterprise-level PCIe SSD products. Most of them don’t have a Manufacturer’s Suggested Retail Price (MSRP) listed anywhere on their websites. The reason being: if you have to ask how much it costs, you cannot afford it. That is the Enterprise market; that’s how they roll. But for the rest of us, me in particular, having an option other than simply dropping an SSD into a desktop machine is attractive, especially if it is a higher-performance option. So what would a consumer-level PCIe SSD cost?

We have seen a turnaround however. At last year’s IDF Intel showed off a proof of concept PCIe SSD that could push 1 million IOPS. And with the consumer SSD market dominated by a few companies, the smaller players turned to building their own PCIe SSDs to go after the higher margin enterprise market. Enterprise customers had the budget and the desire to push even more bandwidth. Throw a handful of Indilinx controllers on a PCB, give it a good warranty and you had something you could sell to customers for over a thousand dollars.

via OCZ’s RevoDrive Preview: An Affordable PCIe SSD – AnandTech :: Your Source for Hardware Analysis and News.

AnandTech does a review of the OCZ RevoDrive, a PCIe SSD for the consumer market. It’s not as fast as a Fusion-io, but then it isn’t nearly as expensive either. How fast is it, say, compared to a typical SATA SSD? Based on the benchmarks in this review, the RevoDrive is a little faster than most SATA SSDs, but it also costs about $20 more than a really good 120GB SSD. Be warned that this is the Suggested Retail Price and no shipping product yet exists; prices may vary once this PCIe card finally hits the market. But I agree 100% with this quote from the end of the review:

“If OCZ is able to deliver a single 120GB RevoDrive at $369.99 this is going to be a very tempting value.”

Indeed, much more reasonable than a low end Fusion-io priced closer to $700+, but not as fast either. You picks your products, you pays yer money.

Seagate, Toshiba to Make SSD + HDD Hybrid?

We saw a quick and unceremonious demise of Microsoft’s ReadyBoost and ReadyDrive technology, released at the dawn of the Vista era. Flash caches on the motherboard, or worse yet, reusing a Flash memory stick as a disk cache, never caught on. But now Seagate’s revisiting the idea of a hybrid hard drive by marrying an SSD and HDD into one logical disk. Is it as fast as an SSD? Is it cheaper than an SSD? Let’s take a look.

Seagate, Toshiba to Make SSD + HDD Hybrid?.

Some people may remember the poorly marketed and badly implemented Microsoft ReadyBoost technology hyped prior to the launch of Windows Vista. Microsoft’s intention was to speed up throughput on machines without sufficient RAM to cache large parts of the Windows OS and shared libraries. By using a small Flash memory module on the motherboard (Intel’s Turbo Memory) or a USB-connected Flash memory stick, one could create a Flash memory cache that would offset the effect of having 512MB or less RAM installed. In early testing done by folks like AnandTech and Tom’s Hardware, however, system performance suffered terribly: on computers with more than the 512MB of RAM targeted by Microsoft, these techniques actually made Vista run slower. I had great hopes for ReadyBoost at the time; the flash-cache method of speeding up throughput on a desktop PC seemed to herald a new era of desktop PC performance. In the end it was all a myth created by the Microsoft marketing department.

Intel Turbo memory module for PCIe
Intel Turbo Memory to be used as ReadyDrive storage cache

Some time has passed since Vista was released. RAM prices have slowly gone down, and even low-end machines now have more than adequate RAM installed to run Vista or Windows 7 (no more machines with 512MB of RAM), so working around those RAM limits is unnecessary. However, total system-level I/O has seen real gains through somewhat expensive Flash-based solid state disks (SSDs). Really, this is what we have all been waiting for all along. It’s flash memory modules like the ones Intel tried using for its ReadyDrive-capable Turbo Memory technology, but wired into a PCIe controller and optimized for fast I/O, faster than a real spinning hard disk. The advantage over ReadyBoost was the speed of the PCIe interface connected to the Flash memory chips. Enterprise data centers have begun using Flash SSDs as caches, with some very high-end products using all Flash SSDs in their storage arrays. The entry-level price, though, can be daunting to say the least. 500GB SSDs are top-of-the-line, premium-priced products and not likely to be sold in large quantities until prices come down.

Seagate is now offering a product that has a hybrid Flash cache and spinning disk all tied into one SATA disk controller.

Seagate hybrid hard drive
Seagate Momentus XT

The beauty of this design is that the OS doesn’t enter into the fray, so it’s OS-agnostic. Similarly, the OS doesn’t try to be a disk controller. Seagate manages all the details on its side of the SATA controller, and the OS just sees what it thinks is a hard disk to which it sends read/write commands. In theory this sounds like a step up from simple spinning disks and maybe a step below a full flash-based SSD. What is the performance of a hybrid drive like this?

As it turns out, The Register did publish a follow-up with a quick benchmark (performed by Seagate) of the Seagate Momentus XT compared to middle and top-of-the-line spinning hard drives. The Seagate hybrid drive performs almost as well as the Western Digital SSD included in the benchmark. The flash memory caches the stuff that needs quick access, and is able to refine what it stores over time based on what is accessed most often by the OS. Your boot times speed up and file read/write times speed up, all as a result of the internal controller on the hybrid drive. The availability, if you check Amazon’s website, is 1-2 months, which means you and I cannot yet purchase this item. But it’s encouraging, and I would like to see more innovation in this product category. No doubt lots of optimizations and algorithms can be tried out to balance the Flash memory and the spinning disk. I say this because of the 32MB RAM cache that’s also built into the Momentus XT. Deciding when data goes in and out, which cache it uses (RAM or Flash), and when it finally gets written to disk is one of those difficult Computer Science optimization problems, and there are likely as many answers as there are Computer Scientists to compute them. There will be lots of room to innovate if this product segment takes hold.
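As a toy illustration of the kind of optimization problem I mean, here’s a sketch of frequency-based cache promotion in Python. Seagate’s actual algorithm is proprietary, and every name, block address, and capacity below is invented purely for illustration:

```python
# Toy model of frequency-based cache promotion, loosely in the spirit
# of a hybrid drive's firmware: count how often each block is read and
# keep the hottest blocks in the (small) flash tier. All names and
# numbers are invented for illustration; the real firmware is far
# more sophisticated.
from collections import Counter


class HybridCache:
    def __init__(self, flash_blocks: int):
        self.flash_blocks = flash_blocks  # how many blocks fit in flash
        self.access_counts = Counter()    # per-block read frequency
        self.flash = set()                # blocks currently held in flash

    def read(self, block: int) -> str:
        """Record a read and report which tier served it."""
        self.access_counts[block] += 1
        self._repromote()
        return "flash" if block in self.flash else "disk"

    def _repromote(self):
        # Promote the most frequently read blocks, demote the rest.
        hottest = self.access_counts.most_common(self.flash_blocks)
        self.flash = {block for block, _count in hottest}


cache = HybridCache(flash_blocks=2)
for block in [1, 1, 1, 2, 2, 3]:  # simulated access pattern
    cache.read(block)
print(cache.read(1))  # the hot block is now served from "flash"
```

Even this crude version shows the trade-off: every promotion policy you pick favors one workload over another, which is exactly where the room to innovate lies.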

Disk I/O: PCI Based SSDs (via makeitfaster)

If you want an expert’s view of the currently shipping crop of PCIe Flash cards, here is a great survey from the blog makeitfaster.

Great article with lots of hardcore, important details like drivers and throughput. It’s early days yet for PCI-based SSDs, so there are going to be lots of changes in architecture until a great design or a cheap design begins to dominate the market. And while some PCIe cards may not be ready for the Enterprise data center, there may be a market in the high-end gamer fanboy product segment. Stay tuned!

Disk I/O: PCI Based SSDs The next step up from a regular sata based Solid State Disk is the PCIe based solid state disk. They bypass the SATA bottleneck and go straight through the PCI-Express bus, and are able to achieve better throughput. The access time is similar to a normal SSD, as that limit is imposed by the NAND chips themselves, and not the controller. So how is this different than taking a high end raid controller in a PCIe slot and slapping 8 or 12 good SSDs o … Read More

via makeitfaster

PCIe based Flash caches

A chain of press releases from Flash memory product manufacturers has led me to an interesting conclusion: we already have Flash caches in the datacenter. How soon will they be on the desktop? Intel’s Turbo Memory cache was a joke compared to Fusion-io’s PCIe cards. What might happen if every computer had no disk drive, but used a really high-speed Flash memory cache instead?

Let me start by saying Chris Mellor of The Register has been doing a great job of keeping up with the product announcements from the big vendors of server-based Flash memory products. I’m not talking about simple Solid State Disks (SSDs) with flash memory modules and Serial ATA (SATA) controllers. The new Enterprise-level product that supersedes the SSD is a much higher-speed (faster than SATA) cache that plugs into the PCIe slots of rack-based servers. The fashion followed by many data center storage farms was to host large arrays of hot (online) or warm (nearly online) spinning disks. Over time, de-duplication was added to prevent unnecessary copies and backups being made on this valuable and scarce resource. Offline storage to tape backup could be made throughout the day as a third tier of storage, with the disk arrays acting as the second tier. What was the first tier? That would be the disks on the individual servers themselves, or the vast RAM that the online transactional databases were running in. So RAM, disk, tape: the three-tier fashion came into being. But as data grows and grows, more people want some of the stuff that was being warehoused out to tape, to do regression analysis on historical data. Everyone wants to create a model for trends they might spot in the old data. So what to do?

As new data comes in and old data gets analyzed, it would seem there’s a need to hold everything in memory all the time, right? Why can’t we just always have it available? Arguing against this in a corporate environment is useless; similarly, explaining why you can’t speed up the analysis of historical data is futile. Thank god there’s a technological solution, and that is higher throughput. Spinning disks are a hard limit in terms of Input/Output (I/O): you can only copy so many gigabits per second over the SATA interface of a spinning disk hard drive. Even if you fake it by striping alternate bits across adjacent hard drives using RAID techniques, you’re still limited. So Flash-based SSDs have helped considerably as a tier of storage between the old disk arrays and the demands made by the corporate overseers who want to see all their data all the time. The big three disk storage array makers (IBM/Hitachi, EMC, and NetApp) are all making hybrid Flash SSD and spinning disk arrays and optimizing throughput through the software running the whole mess. Speeds have improved considerably, and more companies are doing online analysis of data that previously would have been loaded from tape for offline analysis.

And the interconnects to the storage arrays have improved considerably too. Fibre Channel was a godsend in the storage farm, as it allowed much higher speeds (first 2Gbit/s, then doubling with each new generation). The proliferation of Fibre Channel alone made up for a number of failings in the speed of spinning disks, and acted as a way of abstracting or virtualizing the physical and logical disks of the storage array. In Fibre Channel terms, the storage control software offers up a ‘virtual’ disk but can manage it on the storage array itself any way it sees fit. Flexibility and speed reign supreme. But there’s still an upper limit between the Fibre Channel interface and the motherboard of the server itself: the PCIe interface. And even with PCIe 2.0 there’s an upper limit to how much throughput you can get off the machine and back onto it. Enter the PCIe disk cache.
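To make those ceilings concrete, here’s a quick sketch comparing the rough usable bandwidth of the interfaces in question. These are approximate circa-2010 figures after encoding overhead (PCIe 2.0 and the serial storage links all use 8b/10b encoding, so a 5 GT/s PCIe lane carries about 500 MB/s of data):

```python
# Approximate usable bandwidth per interface, circa 2010, after
# 8b/10b encoding overhead. Figures are rough illustrations of the
# relative ceilings, not precise measurements.
def pcie2_bandwidth_mbs(lanes: int) -> int:
    """Usable MB/s for a PCIe 2.0 link: ~500 MB/s per lane per direction."""
    return lanes * 500


interfaces = {
    "SATA 3Gb/s":  300,                      # ~300 MB/s usable
    "SATA 6Gb/s":  600,                      # ~600 MB/s usable
    "FC 8Gb/s":    800,                      # ~800 MB/s usable
    "PCIe 2.0 x8": pcie2_bandwidth_mbs(8),   # ~4000 MB/s usable
}
for name, mbs in interfaces.items():
    print(f"{name}: ~{mbs} MB/s")
```

An 8-lane PCIe 2.0 slot has several times the headroom of even an 8Gb/s Fibre Channel link, which is exactly why soldering the flash onto a PCIe card pays off.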

In this article I review the survey of PCIe-based SSD and Flash memory disk caches since they entered the market, as covered in The Register. It’s not really a mainstream technology: it’s prohibitively expensive and is going to be purchased by those who can afford it in order to gain the extra speed. But even in the short time since STEC was marketing its SSDs to the big three storage makers, a lot of engineering and design has created a brand new product category, and the performance within that category has made steady progress.

LSI’s entry into the market is still very early, and shipping product isn’t being widely touted. The Register is the only website actively covering this product segment right now. But the speeds and the density of the chips on these products just keep getting bigger, better, and faster, which provides a nice parallel to Moore’s Law in a storage device context. Prior to the PCIe flash cache market opening up, SATA and Serial Attached SCSI (SAS) were the upper limit of what could be accomplished even with flash memory chips. Soldering those chips directly onto an add-on board connected to the CPU through an 8-lane PCIe channel is nothing short of miraculous in the speeds it has gained. Now the competition between current vendors is to build one-off, customized setups to bench-test the theoretical top limit of what can be done with these new products. And a recent article from Chris Mellor shines a light on the newest product on the market, the LSI SSS6200. In it Chris concludes:

None of these million IOPS demos can be regarded as benchmarks and so are not directly comparable. But they do show how the amount of flash kit you need to get a million IOPS has been shrinking

Moore’s Law now holds true for the Flash caches that are becoming the high-speed storage option for the many datacenters that absolutely must have the highest disk I/O throughput available. And as the chips continue to shrink while storage volume increases, who knows what the upper limit might be? But news travels swiftly, and Chris Mellor got a whitepaper press release from Samsung and began drawing some conclusions.

Interestingly, the owner of the Korean Samsung 20nm process foundry has just taken a stake in Fusion-io, a supplier of PCIe-connected flash solid-state drives. This should mean an increase in Fusion-io product capacities, once Samsung makes parts for Fusion using the new process

The Flash memory makers are now in an arms race with the product manufacturers. Apple and Fusion-io get first dibs on shipping product as the new generation of Flash chips enters the market: Apple has Toshiba, and Fusion-io gets Samsung. In spite of LSI’s benchmark of 1 million IOPS in their test system, I give the advantage to Fusion-io in the very near future. Another recent announcement from Fusion-io is a small round of venture capital funding that will hopefully cement its future as a going concern. Let’s hope its next-generation caches top out at a capacity competitive with its rivals, and at a speed equal to or faster than currently shipping product.

Outside the datacenter, however, things are more boring. I’m not seeing anyone try to peer into the future of the desktop or laptop and create a flash cache that performs at this level. Fusion-io does have a desktop product currently shipping, mostly targeted at the PC gaming market, but I have not seen Tom’s Hardware try it out or attempt to integrate it into a desktop system. The premium price is enough to make it very limited in its appeal (it lists at MSRP $799, I think). But let’s step back and imagine what the future might be like. Given that Intel has incorporated the RAM memory controller into its Core i7 CPUs, and given that its design rules have shrunk so far that adding the memory controller was not a big sacrifice, is it possible the PCIe interface electronics could be migrated onto the CPU, away from the Northbridge chipset? I’m not saying there should be no chipset at all; a bridge chip is absolutely necessary for really slow I/O devices like the USB interface. But maybe there could be at least one 16x, or possibly 8x, PCIe lane directly into the CPU. If that product existed, a Fusion-io cache could have almost 1TB of flash storage directly connected to the CPU, acting as the highest-speed storage yet available on the desktop.

Other routes to higher-speed storage could even include another tier of memory slots, with an accompanying JEDEC standard for ‘storage’ memory. So RAM would go in one set of slots, Flash in the other, and you could mix, match, and add as much Flash memory as you liked. This potentially could be addressed through the same memory controllers already built into Intel’s currently shipping CPUs. Why does this even matter, and why do I think about it at all? I am awaiting the next big speed increase in desktop computing, that’s why. Ever since the Megahertz Wars died out, much of the increase in performance has been so micro-incremental that there’s not a dime’s worth of difference between any currently shipping PCs. Disk storage has reigned supreme, and it has become painfully obvious that it is the last link in the I/O chain that has stayed pretty static. The Parallel ATA migration to Serial ATA improved things, but nothing like the march of improvements that came with each new generation of Intel chips. So I vote for dumping disks once and for all. Move to 2TB Flash memory storage and let’s run it through the fastest channel we can, onto and off the CPU. There’s no telling what new things we might accomplish with the speed boost. Not just games, not just watching movies, and not just scientific calculations. It seems to me both the OS and applications would receive a big benefit by dumping the disk.