The RealSSD P300 comes in a 2.5-inch form factor and in 50GB, 100GB and 200GB capacity points, and is targeted at servers, high-end workstations and storage arrays. The product is being sampled with customers now and mass production should start in October.
I am now, for the first time since SSDs hit the market, looking at the drive performance of each new product being offered. What I’ve begun to realize is that the speeds of each product are starting to fall into a familiar range. For instance, I can safely say that for a drive in the 120GB range with Multi-Level Cell (MLC) flash you’re going to see a minimum of 200MB/sec read/write speeds (reading is usually faster than writing by some amount on every drive). This is a rough estimate of course, but it’s becoming more and more common. Smaller drives have slower speeds and suffer on benchmarks due in part to the smaller number of parallel data channels. Bigger-capacity drives have more channels and therefore can move more data per second. A good capacity for a boot/data drive is going to be in the 120-128GB category. And while it won’t be the best for archiving all your photos and videos, that’s fine. Use a big old 2-3TB SATA drive for those heavy-lifting duties. I think that will be a more common architecture in the future and not a premium choice as it is now: SSD for boot/data and a typical HDD for big archive and backup.
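The capacity-to-speed relationship above comes down to simple multiplication: throughput scales with how many NAND channels the controller can drive in parallel. Here's a back-of-the-envelope sketch; the per-channel rate and channel counts are illustrative assumptions, not specs from any particular drive.

```python
# Rough sketch: aggregate SSD throughput scales with the number of
# parallel NAND channels the controller populates. The per-channel
# figure is an assumed value for illustration, not a real drive spec.
PER_CHANNEL_MB_S = 25  # assumed effective throughput of one NAND channel

def aggregate_throughput(channels: int) -> int:
    """Ideal aggregate throughput (MB/s) if all channels run in parallel."""
    return channels * PER_CHANNEL_MB_S

# A small-capacity drive might only populate a few channels, while a
# 120-128GB drive has enough flash packages to fill 8 or more.
for channels in (4, 8, 10):
    print(f"{channels} channels -> ~{aggregate_throughput(channels)} MB/s")
```

With these assumed numbers, the 8-channel case lands right at the ~200MB/sec figure quoted for 120GB-class drives, which is why smaller drives with fewer populated channels benchmark noticeably slower.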
On the enterprise front things are a little different: speed and throughput are important, but the drive interface is as well. While SATA is the most widely used interface for consumer hardware, big drive arrays for the data center are wedded to Serial Attached SCSI (SAS) or Fibre Channel (FC). So now manufacturers and designers like SanDisk need to engineer niche products for the high-margin markets that require SAS or FC versions of the SSD. As was the case with the transition from Parallel ATA to Serial ATA, the first products are going to use SATA-to-SAS or SATA-to-FC bridge adapters and on-board electronics to make them compatible. This will likely be the standard procedure for quite a while, since a ‘native’ Fibre Channel or SAS interface will require a bit of engineering and cost increases to accommodate the enterprise interfaces. Speeds, however, will likely always be tuned for the higher-volume consumer market, and the SATA version of each drive will likely be the highest-throughput version in each drive category. I’m thinking that the data center folks should adapt and go with consumer-level gear, adopting SATA SSDs now that the drives are no longer mechanically spinning disks. Similarly, as more and more manufacturers do their own error correction and wear leveling on the memory chips in SSDs, reliability will equal or exceed that of FC or SAS spinning disks.
And speaking of spinning disks, the highest throughput I’ve ever seen quoted for a SATA hard disk was 150MB/sec, and that was the theoretical best it could ever do. More likely you would only see 80MB/sec sustained (which takes me back to the old days of Fast/Wide SCSI and the Barracuda). Given the limits of moving media like spinning platters and read/write heads tracking across their surface, Flash throughput is just stunning. We are now in an era where Flash SSDs, while slower than RAM, are awfully fast, and fast enough to notice when booting a computer. I think the only real speed enhancement beyond the drive interface is to put Flash on the motherboard directly and build a SATA drive controller directly into the CPU to handle read/write requests. I doubt it would be cost effective for the amount of improvement, but it would eliminate some of the motherboard electronics and smooth the flow a bit. Something to look for, certainly, in netbook or slate-style computers in the future.
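The "fast enough to notice when booting" claim is easy to sanity-check with arithmetic. This sketch compares the time to read a fixed amount of boot data at the sustained rates mentioned above; the 1GB boot-read figure is an assumption for illustration, not a measurement.

```python
# Time to sequentially read an assumed 1GB of boot data at the
# sustained rates discussed in the text: ~80MB/sec for a typical
# spinning SATA disk vs ~200MB/sec for a 120GB-class MLC SSD.
BOOT_READ_MB = 1024  # assumed amount of data read during boot

def read_seconds(mb_per_s: float) -> float:
    """Seconds to read BOOT_READ_MB at a given sustained rate."""
    return BOOT_READ_MB / mb_per_s

hdd = read_seconds(80)    # typical sustained HDD throughput
ssd = read_seconds(200)   # the ~200MB/sec MLC SSD figure
print(f"HDD: {hdd:.1f}s  SSD: {ssd:.1f}s")
```

Under these assumptions the SSD finishes in well under half the time, and real-world boots favor the SSD even more, since random-access seeks punish a mechanical head far harder than a sequential read does.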