Tag: ssd

  • Next-Gen SandForce Controller Seen on OCZ SSD


    Last week during CES 2011, The Tech Report spotted OCZ’s Vertex 3 Pro SSD–running in a demo system–using a next-generation SandForce SF-2582 controller and a 6Gbps Serial ATA interface. OCZ demonstrated its read and write speeds by running the ATTO Disk Benchmark which clearly showed the disk hitting sustained read speeds of 550 MB/s and sustained write speeds of 525 MB/s.

    via Next-Gen SandForce Controller Seen on OCZ SSD.

    Big news: test samples of the SandForce SF-2000 series flash memory controllers are being shown in products demoed at the Consumer Electronics Show. And SSDs with SATA interfaces are testing through the roof. The numbers quoted for a 6Gb/sec. SATA SSD are in the 500+MB/sec. range. Previously you would need to choose a PCIe based SSD from OCZ or Fusion-io to get anywhere near that kind of sustained speed. Combine this with the future possibility of the SF-2000 being installed on PCIe based SSDs and there’s no telling how far the throughput will scale. If four of the Vertex drives were bound together as a RAID 0 set with SF-2000 controllers managing them, is it possible to see a linear scaling of throughput? Could we see 2,000 MB/sec. on PCIe 8x SSD cards? And what would the price be on such a card fully configured with 1.2 TB of SSD storage? Hard to say what things may come, but just the thought of being able to buy retail versions of these makes me think a paradigm shift is in the works that neither Intel nor Microsoft is really thinking about right now.
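
    For what it’s worth, here is the back-of-the-envelope arithmetic behind that RAID 0 guess as a small Python sketch. It assumes ideal linear striping with no controller, bus, or protocol overhead, which real arrays never quite achieve; the per-drive numbers are the ones quoted above.

    ```python
    # Idealized RAID 0 scaling: every member drive contributes its full speed.
    # Real-world arrays lose some of this to controller and bus overhead.

    VERTEX3_PRO_READ_MBS = 550    # sustained read quoted for the Vertex 3 Pro demo
    VERTEX3_PRO_WRITE_MBS = 525   # sustained write quoted for the same demo

    def raid0_throughput(per_drive_mbs: float, drive_count: int) -> float:
        """Best-case RAID 0 throughput with perfectly linear striping."""
        return per_drive_mbs * drive_count

    for n in (1, 2, 4):
        print(f"{n} drive(s): ~{raid0_throughput(VERTEX3_PRO_READ_MBS, n):.0f} MB/s read, "
              f"~{raid0_throughput(VERTEX3_PRO_WRITE_MBS, n):.0f} MB/s write")
    # Four drives land around 2,200 MB/s read -- in the neighborhood of the
    # 2,000 MB/s figure wondered about above, if scaling really were linear.
    ```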

    One comment on this article as posted on the original website, Tom’s Hardware, included the observation that the speeds quoted for this SATA 6Gbps drive are approaching the memory bandwidth of several-generations-old PC-133 DRAM. And as I have said previously, I still have an old first generation Titanium PowerBook from Apple that uses that same PC-133 memory standard. So given that SSDs are fast approaching the speed of somewhat older main memory chips, I can only say we are fast approaching a paradigm shift in desktop and enterprise computing. I dub thee the All Solid State (ASS) era, where no magnetic or rotating mechanical media enter into the equation. We run on silicon semiconductors from top to bottom, no Giant Magneto-Resistive technology necessary. Even our removable media are flash memory based USB drives we put in our pockets and walk around with on key chains.
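
    The arithmetic behind that PC-133 comparison is easy to sketch. Assuming a standard 64-bit memory bus and the 133 MHz SDR clock, PC-133’s theoretical peak is roughly 1,064 MB/s, so a 550 MB/s SSD is sitting at about half of it:

    ```python
    # Theoretical peak bandwidth of PC-133 SDRAM vs. the SSD read speed quoted above.

    PC133_CLOCK_MHZ = 133      # SDR SDRAM: one transfer per clock
    BUS_WIDTH_BYTES = 8        # standard 64-bit memory bus

    pc133_peak_mbs = PC133_CLOCK_MHZ * BUS_WIDTH_BYTES   # ~1,064 MB/s
    ssd_read_mbs = 550                                    # Vertex 3 Pro sustained read

    print(f"PC-133 peak bandwidth : ~{pc133_peak_mbs} MB/s")
    print(f"SF-2582 SSD sustained : ~{ssd_read_mbs} MB/s "
          f"({ssd_read_mbs / pc133_peak_mbs:.0%} of the DRAM's peak)")
    ```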

  • CES 2011: Corsair Performance Series 3 SSD Benchmarks – AnandTech :: Your Source for Hardware Analysis and News


    The next wave of high end consumer SSDs will begin shipping this month, and I believe Corsair may be the first out the gate. Micron will follow shortly with its C400 and then we’ll likely see a third generation offering from Intel before eventually getting final hardware based on SandForce’s SF-2000 controllers in May.

    via CES 2011: Corsair Performance Series 3 SSD Benchmarks – AnandTech :: Your Source for Hardware Analysis and News.

    This just in from the Consumer Electronics Show in Las Vegas, via AnandTech: the SandForce SF-2000 is scheduled to drop in May of this year. Get ready, as you will see a huge upsurge in releases of new SSD products attempting to best one another in the sustained read/write category. And I’m not talking just SSDs but PCIe based cards with SSD RAIDs embedded on them, communicating through an 8-lane (8X) PCI Express interface. I’m going to take a wild guess and say you will see products fitting this description easily hitting 700 to 900 MB/s sustained read and write. Prices will be on the top end of the scale, as even the current shipping products all fall into the $1200 to $1500 range. Expect the top end to be LSI based products for $15,000, or third party OEM manufacturers who might be willing to sell a fully configured 1TByte card for maybe ~$2,000. After the SF-2000 is released, I don’t know how long it will take for designers to prototype and release to manufacturing any new designs incorporating this top of the line SSD flash memory controller. It’s possible that as the top end continues to increase in performance, current shipping product might start to fall in price to clear out the older, lower performance designs.

  • Micron’s ClearNAND: 25nm + ECC

    Image via CrunchBase: Intel, a partner with Micron

    Micron’s ClearNAND: 25nm + ECC, Combats Increasing Error Rates – AnandTech

    This is a really good technical article on the attempts made by Micron and Intel to fix read/write errors in their solid state memory based on Flash memory chips. Each revision of their design and manufacturing materials helps decrease the size of the individual memory cells on the flash memory chip; however, as the design rules (the distance between the wires) shrink, random errors increase. And the materials themselves suffer from fatigue with each read and write cycle. The fatigue is due in no small part (pun intended) to the size, specifically the thickness, of some layers in the sandwich that makes up a flash memory cell. Thinner materials just wear out quicker. Typically this wearing out was addressed by adding extra unused memory cells that could act as spares whenever one of the others finally gave up the ghost and stopped working altogether. Another technique is to spread reads/writes over an area much greater (sometimes 23% bigger) than the size of the storage advertised on the outside of the packaging. This is called wear leveling, and it’s like rotating your tires to ensure they don’t get bare patches too quickly.
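
    To make the tire-rotation analogy concrete, here is a toy wear-leveling sketch in Python. The ToyFTL class and the numbers are invented purely for illustration; a real flash translation layer also handles garbage collection, static wear leveling, and bad-block mapping.

    ```python
    # Toy wear leveling: always write to the erase block with the fewest
    # program/erase cycles so no block wears out long before the others.

    from dataclasses import dataclass, field

    @dataclass
    class Block:
        block_id: int
        erase_count: int = 0

    @dataclass
    class ToyFTL:
        blocks: list = field(default_factory=list)

        def pick_block_for_write(self) -> Block:
            victim = min(self.blocks, key=lambda b: b.erase_count)  # least-worn block
            victim.erase_count += 1
            return victim

    ftl = ToyFTL(blocks=[Block(i) for i in range(8)])
    for _ in range(80):                          # 80 writes spread across 8 blocks
        ftl.pick_block_for_write()
    print([b.erase_count for b in ftl.blocks])   # -> [10, 10, 10, 10, 10, 10, 10, 10]
    ```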

    All these techniques will only go so far as the sizes and thicknesses continue to shrink. So, taking a chapter out of the bad old days of computing, we are back to Error Correcting Codes, or ECC. When memory errors were common and you needed to guarantee your electronic logic was not creating spontaneous errors, bits of data called parity bits would be woven into all the operations to ensure something didn’t accidentally flip from being a 1 to a 0. ECC memory is still widely used in data center computers that need to guarantee bits don’t get spontaneously flipped by, say, a stray cosmic ray raining down upon us. Now, however, ECC is becoming the next tool after spare memory cells and wear leveling to ensure flash memory can continue to grow smaller and still be reliable.
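
    As a concrete (if miniature) example of the ECC idea, here is a Hamming(7,4) code in Python: four data bits protected by three parity bits, enough to locate and correct any single flipped bit. The NAND vendors actually use much stronger codes (BCH and the like) over whole pages, but the principle of weaving parity into the data is the same.

    ```python
    def hamming74_encode(d):                     # d = [d1, d2, d3, d4]
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4                        # covers codeword positions 1,3,5,7
        p2 = d1 ^ d3 ^ d4                        # covers positions 2,3,6,7
        p3 = d2 ^ d3 ^ d4                        # covers positions 4,5,6,7
        return [p1, p2, d1, p3, d2, d3, d4]      # 7-bit codeword, positions 1..7

    def hamming74_correct(c):                    # c = codeword with at most one flip
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        error_pos = s1 + 2 * s2 + 4 * s3         # 0 = clean, else 1-indexed bad position
        if error_pos:
            c[error_pos - 1] ^= 1                # flip the bad bit back
        return [c[2], c[4], c[5], c[6]]          # recover d1..d4

    data = [1, 0, 1, 1]
    codeword = hamming74_encode(data)
    codeword[5] ^= 1                             # simulate a stray flipped bit
    assert hamming74_correct(codeword) == data   # the original data comes back intact
    ```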

    There are two methods in operation today. The first is to build the ECC logic into the Flash memory modules themselves. This raises the cost of the chip, but lowers the cost to the manufacturer of a solid state disk or MP3 player: they don’t have to add the error correction after the fact or buy another part and integrate it into their design. The other, more ‘state of the art’ method is to build the error correction into the Flash memory controller (as opposed to into the memory chips themselves), providing much more leeway in how it can be implemented and updated over time. As it turns out, the premier designer of Flash memory controllers, SandForce, already does this with the current shipping version of its SF-1200 controller. SandForce still has two more advanced controllers yet to hit the market, so it is only going to get stronger given that ECC is already in its current shipping product.

    Which way the market chooses to go will depend on how low the target price is for the final shipping product. Low margin, high volume goods will most likely go with no error correction and take their chances. Higher end goods may adopt the embedded ECC from Micron and Intel. Top of the line data center purchasers will not stray far from the cream of the crop, the high margin SandForce controllers, as they still provide great performance and value even in their early-generation products.

  • Hitachi GST ends STEC’s monopoly • The Register

    Hitachi GST flash drives are hitting the streets and, at last, ending STEC’s monopoly in the supply of Fibre Channel interface SSDs.

    EMC startled the enterprise storage array world by embracing STEC SSDs (solid state drives) in its arrays last year as a way of dramatically lowering the latency for access to the most important data in the arrays. It has subsequently delivered FAST automated data movement across different tiers of storage in its arrays, ensuring that sysadmins don’t have to be involved in managing data movement at a tedious and time-consuming level.

    via Hitachi GST ends STEC’s monopoly • The Register.

    In the computer world the data center is often the measure of all things in terms of speed and performance. Time was, the disk drive interface of choice was the SCSI drive and then its higher speed evolutions, Fast/Wide and UltraSCSI. But then a new interface hit that used fibre optic cables to move storage out of the computer box into a separate box that managed all the hard drives in one spot, and this was called a storage array. The new connector/cable combo was named Fibre Channel and it was fast, fast, fast. It became the must-have brand name for all vendors trying to sell more and more hard drives into the data center. Newer evolved versions of Fibre Channel came to market, each one slightly faster than the last. And eventually Fibre Channel was built right into the hard drives themselves, so that you could be assured the speed was native, multi-gigabit Fibre Channel from one end to the other. But Fibre Channel has always been prohibitively expensive, even though a lot of it has been sold over the years. Volume has not brought down the price of Fibre Channel one bit in the time that it has been the most widely deployed enterprise disk drive interface. A few competitors have cropped up: the old Parallel ATA and newer Serial ATA drives from the desktop market have attempted to compete, and a newer SCSI drive interface called Serial Attached SCSI is now seeing some wider acceptance. However, the old guard who are mentally and emotionally attached to their favorite Fibre Channel drive interface are not about to give up, even as spinning disk speeds have been trumped by the almighty Flash memory based solid state drive (SSD). And a company named STEC knew it could sell a lot of SSDs if only someone could put a Fibre Channel interface on the circuit board, allaying any fears of the Fibre Channel adherents that they needed to evolve and change.

    Yes, it’s true STEC was the only game in town for what I consider the legacy Fibre Channel interface favored by the old-line storage array manufacturers. It has sold tons of its drives to third parties who package them up into turnkey ‘Enterprise’ solutions for drive arrays and cache controllers (all of which just speed things up). And being the firstest with the mostest is a good business strategy until the second source for your product comes online. So it’s always a race to sell as much as you can until the deadline hits and everyone rushes to the second source. Here now is Hitachi’s announcement that it is manufacturing an SSD with a Fibre Channel interface onboard for Enterprise data center customers.

  • LSI Launches $11,500 SSD, Crushes Other SSDs

    Tuesday LSI Corp announced the WarpDrive SLP-300 PCIe-based acceleration card, offering 300 GB of SLC solid state storage and performance up to 240,000 sustained IOPS. It also delivers I/O performance equal to hundreds of mechanical hard drives while consuming less than 25W of power–all for a meaty $11,500 USD.

    via LSI Launches $11,500 SSD, Crushes Other SSDs.

    This is the cost of entry for anyone working on an Enterprise level project. You cannot participate unless you can cross the threshold of a PCIe card costing $11,500 USD. This is the first time I have seen an actual price quote on one of these cards that swims in the data center consulting and provisioning market. Fusion-io cannot be too far off this price when its card is not sold as part of a larger project RFP. I am somewhat stunned at the price premium, but LSI is a top engineering firm and they can definitely design their own custom silicon to get the top speed out of just about any commercial off the shelf Flash memory chips. I am impressed they went with the 8-lane (8X) PCI Express interface. I’m guessing that’s a requirement for server buyers, whereas 4X is more of a desktop-market fit. Still, I don’t see any 16X interfaces as of yet (that’s the interface most desktops use for their graphics cards from AMD and nVidia). Another part of what makes this a premium offering is the choice of Single Level Cell Flash memory chips, for the ultimate in speed and reliability, along with the Serial Attached SCSI (SAS) interface onboard the PCIe card itself. Desktop models opt for SATA-to-PCI-X-to-PCIe bridge chips, forcing the data to be translated and re-ordered multiple times. I have a feeling SAS bridges to PCIe at the full 8X interface speed, and that is the key to getting past 1,000 MB/sec. for reads and writes. This part is quoted as getting in the range of ~1,400 MB/sec., and other than some very expensive turnkey boxes from manufacturers like Violin, this is a great user-installable part for getting the benefit of a really fast SSD array on a PCIe card.
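
    A quick sanity check on LSI’s two headline numbers, assuming the 240,000 IOPS figure was measured with the usual 4 KiB random transfers (LSI’s exact test conditions aren’t stated in the article, so that block size is an assumption):

    ```python
    # IOPS measures small random operations; MB/s measures bulk sequential transfer.

    iops = 240_000
    assumed_block_kib = 4                          # typical benchmark transfer size
    random_mbs = iops * assumed_block_kib / 1024   # ~937 MB/s of 4 KiB random traffic

    sequential_mbs = 1_400                         # the quoted large-block figure
    print(f"240K IOPS at 4 KiB ~= {random_mbs:.0f} MB/s of random I/O")
    print(f"Quoted sequential  ~= {sequential_mbs} MB/s with large transfers")
    # Small random I/O always lands well below the big sequential number, which is
    # why both figures get quoted: they answer different questions.
    ```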

  • OCZ Reveals New Bootable PCIe SSD (quick comparison to Angelbird PCIe)

    PCI Express slots (from top to bottom: x4, x16, …). Image via Wikipedia

    Box packaging for the RevoDrive
    First version of the RevoDrive PCIe

    Building upon the original 1st-generation RevoDrive, the new version boasts speeds up to 740 MB/s and up to 120,000 IOPS, almost three times the throughput of other high-end SATA-based solutions.

    via OCZ Reveals New Bootable PCIe SSD.

    One cannot make this stuff up: two weeks ago Angelbird announced its bootable PCI Express SSD. Late yesterday OCZ, one of the biggest third-party aftermarket makers of SSDs, announced a new PCI Express SSD which is also bootable. The big difference between the Angelbird product and OCZ’s RevoDrive is the throughput at the top end. This means if you purchase the most expensive, fully equipped card from either manufacturer you will get 900+MBytes/sec. on the Angelbird versus 700+MBytes/sec. on the RevoDrive from OCZ. Other differences include the ‘native’ support of the OCZ on the host OS. I think this means that OCZ isn’t using an embedded ‘virtual OS’ on the card to boot so much as having the PCIe drive electronics make everything appear to be a real native boot drive. Angelbird uses an embedded OS to virtualize and abstract the hardware so that you get to boot any OS you want and run it off the flash memory onboard.

    The other difference I can see from reading the announcements is that only the largest configured size of the Angelbird gets you the fastest throughput. As drives are added, the RAID array is striped over more of the available flash modules. The OCZ product also uses a RAID array to increase speed; however, it hits maximum throughput at an intermediate size (the ~250GByte configuration) as well as at the maximum size. So if you want ‘normal’ to ‘average’ sized storage but better throughput, you don’t have to buy the maxed-out, most expensive version of the OCZ RevoDrive to get there. Which means this could be a more manageable price for the gaming market or for the PC fanboys who want faster boot times. Don’t get me wrong though, I’m not recommending buying an expensive 250GByte RevoDrive if a similarly sized SATA SSD costs a good deal less. No, far from it; the speed difference may not be worth the price you pay. But the RevoDrive could be upgraded over time and keep your speeds at the max 700+MBytes/sec. you get with its high-throughput intermediate configuration. Right now I don’t have any prices to compare for either the Angelbird or OCZ RevoDrive products. I can tell you however that the Fusion-io low-end desktop product is in the $700-$800 range and doesn’t come with upgradeable storage; you get a few sizes to choose from, and that’s it. If either of the two products ships at a price significantly less than the Fusion-io product, everyone will flock to them I’m sure.

    Another significant feature touted in both product announcements is the SandForce SF-1200 flash controller. Right now that controller is the de facto standard high-throughput part everyone is using for SATA SSD products. There’s also a higher-end part on the market called the SF-1500 (SandForce’s top-end offering, aimed more at enterprise drives). So it’s de rigueur to include the SandForce SF-1200 in any product you hope to sell to a wide audience (especially hardware fanboys). However, let me caution you that in the flurry of product announcements, and always keeping an eye on preventing buyer’s remorse, SandForce very recently announced a new drive controller it has labelled the SF-2000 series. This part may or may not be targeted at the consumer desktop market, but depending on how well it performs once it starts shipping, you may want to wait and see whether a revision of this crop of newly announced PCIe cards adopts the new SandForce controller to gain the extra throughput it is touting. The new controller is rated at 740MBytes/sec. all by itself; with four SSDs built around it on a PCIe card, theoretically four times 740 equals 2,960 MB/sec., and that is a substantially large quantity of data coming through the PCI Express data bus. Luckily for most of us, today’s cards don’t come close to filling a 4X (four lane) PCI Express slot, though a hypothetical array like that would start to press against its limits. The question is how long it will take to overwhelm a four-lane PCI Express connector. I hope to see the day this happens.
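
    Here’s the rough PCIe headroom math for that scenario, assuming PCIe 2.0 with about 500 MB/s usable per lane per direction after 8b/10b encoding (real cards lose a bit more to protocol overhead):

    ```python
    PER_LANE_MBS = 500        # PCIe 2.0, per lane, per direction (post-encoding)
    SF2000_MBS = 740          # rated throughput of a single SF-2000 controller

    for lanes in (4, 8, 16):
        slot_mbs = lanes * PER_LANE_MBS
        controllers = slot_mbs / SF2000_MBS
        print(f"x{lanes:<2} slot: ~{slot_mbs} MB/s, saturated by ~{controllers:.1f} SF-2000s")
    # An x4 slot (~2,000 MB/s) is filled by roughly three SF-2000 class controllers;
    # four of them (~2,960 MB/s) would already overflow it, while an x8 slot
    # (~4,000 MB/s) still has headroom.
    ```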

  • Intel forms flash gang of five • The Register

    Intel, Dell, EMC, Fujitsu and IBM are forming a working group to standardise PCIe-based solid state drives (SSDs), and have a webcast coming out today to discuss it.

    via Intel forms flash gang of five • The Register.

    Now this is interesting: just two weeks after Angelbird pre-announced its own PCIe flash based SSD product, Intel is forming a consortium. Things are heating up; this is now a hot new category, and I want to draw your attention to a sentence in this Register article:

    By connecting to a server’s PCIe bus, SSDs can pour out their contents faster to the server than by using Fibre Channel or SAS connectivity. The flash is used as a tier of memory below DRAM and cuts out drive array latency when reading and writing data.

    This is without a doubt the first instance I have read where there is a belief, even if just in the mind of the article’s author, that Fibre Channel and Serial Attached SCSI aren’t fast enough. Who knew PCI Express would be preferable to an old storage interface when it comes to enterprise computing? Look out world, there’s a new sheriff in town and his name is PCIe SSD. This product category will not be for the consumer end of the market, at least not for this consortium. It is targeting the high-margin, high-end data center market, where interoperability keeps vendor lock-in from occurring. By choosing interoperability, everyone has to gain an advantage not necessarily through engineering but most likely through firmware. If that’s the differentiator, then whoever has the best embedded programming team will have the best throughput and the highest rated product. Let’s hope this all eventually finds a market saturation point driving the technology down into the consumer desktop, thus enabling the next big burst in desktop computer performance. I hope PCIe SSDs become the next storage of choice and that motherboards can be rid of all SATA disk I/O ports and firmware in the near future. We don’t need SATA SSDs, we need PCIe SSDs.

  • Angelbird to Bring PCIe SSD on the Cheap and Iomega has a USB 3 external SSD

     


     

    From Tom’s Hardware:

    Extreme SSD performance over PCI-Express on the cheap? There’s hope!

    A company called Angelbird is working on bringing high-performance SSD solutions to the masses, specifically a user-upgradeable PCI-Express SSD solution.

    via Angelbird to Bring PCIe SSD on the Cheap.

    This is one of a pair of SSD announcements that came in on Tuesday. SSDs are all around us now and the product announcements are coming in faster and harder. The first one is from an Austrian company named Angelbird. Looking at the website announcing the specs of their product, it is on paper a very fast PCIe based SSD drive, right up there with Fusion-io in terms of what you get for the dollars spent. I’m a little concerned, however, about the reliance on an OS hosted in the firmware of the PCIe card. I would prefer something a little more peripheral-like that the host OS supports natively, rather than having the card become the OS. But this is all speculative until actual production or test samples hit the review websites and we see some kind of benchmarks from the likes of Tom’s Hardware or AnandTech.

    From MacNN|Electronista:

    Iomega threw itself into external solid-state drives today through the External SSD Flash Drive. The storage uses a 1.8-inch SSD that lets it occupy a very small footprint but still outperform a rotating hard drive:

    Read more: http://www.electronista.com/articles/10/10/15/iomega.outs.external.usb.30.ssd/

    The second story covers a new product from Iomega, where for the first time we have an external SSD from a mainstream manufacturer. The price is at a premium compared to the performance, but if you like the looks you’ll be willing to pay. The read and write speeds aren’t bad, but they’re not the best for the amount of money you’re paying. And why do they still use a 2.5″ external case if it’s internally a 1.8″ drive? Couldn’t they shrink it down to the old Firefly HDD size from back in the day? It could be smaller.

  • Micron intros SSD speed king • The Register

    The RealSSD P300 comes in a 2.5-inch form factor and in 50GB, 100GB and 200GB capacity points, and is targeted at servers, high-end workstations and storage arrays. The product is being sampled with customers now and mass production should start in October.

    via Micron intros SSD speed king • The Register.

    The Crucial (Micron) RealSSD C300 as it appears on AnandTech.com

    I am now, for the first time since SSDs hit the market, looking at the drive performance of each new product being offered. What I’ve begun to realize is that the speeds of each product are starting to fall into a familiar range. For instance, I can safely say that for a drive in the 120GB range with Multi-Level Cell flash you’re going to see a minimum of 200MB/sec read/write speeds (reads are usually somewhat faster than writes on most drives). This is a vague estimate of course, but it’s becoming more and more common. Smaller drives have slower speeds and suffer on benchmarks due in part to the smaller number of parallel data channels. Bigger capacity drives have more channels and therefore can write more data per second. A good capacity for a boot/data drive is going to be in the 120-128GB category. And while it won’t be the best for archiving all your photos and videos, that’s fine; use a big old 2-3TB SATA drive for those heavy-lifting duties. I think that will be a more common architecture in the future and not a premium choice as it is now: SSD for boot/data and a typical HDD for big archive and backup.
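
    A crude model of why capacity correlates with speed, since more NAND packages means more controller channels working in parallel. The per-channel and per-package figures below are illustrative assumptions, not anyone’s spec sheet:

    ```python
    ASSUMED_MBS_PER_CHANNEL = 25    # sustained throughput per populated channel
    ASSUMED_GB_PER_PACKAGE = 16     # capacity of one NAND package
    MAX_CHANNELS = 10               # typical consumer controller of this era

    def modeled_throughput_mbs(capacity_gb: int) -> int:
        populated = min(capacity_gb // ASSUMED_GB_PER_PACKAGE, MAX_CHANNELS)
        return populated * ASSUMED_MBS_PER_CHANNEL

    for size_gb in (40, 64, 128, 256):
        print(f"{size_gb:>3} GB drive: ~{modeled_throughput_mbs(size_gb)} MB/s (model)")
    # 40 GB -> ~50, 64 GB -> ~100, 128 GB -> ~200, 256 GB -> ~250 MB/s --
    # the same rough pattern of bigger drives benchmarking faster.
    ```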

    On the enterprise front things are a little different: speed and throughput are important, but the drive interface is as well. With SATA being the most widely used interface for consumer hardware, big drive arrays for the data center are wedded to a form of Serial Attached SCSI (SAS) or Fibre Channel (FC). So now manufacturers and designers like Micron need to engineer niche products for the high-margin markets that require SAS or FC versions of the SSD. As was the case with the transition from Parallel ATA to Serial ATA, the first products are going to use SATA-to-X interface adapters and electronics on board to make them compatible. Likely this will be the standard procedure for quite a while, as a ‘native’ Fibre or SAS interface will require a bit of engineering and cost increases to accommodate the enterprise interfaces. Speeds, however, will likely always be tuned for the higher-volume consumer market, and the SATA version of each drive will likely be the highest-throughput version in each drive category. I’m thinking the data center folks should adapt and adjust and go with the consumer level gear, adopting SATA SSDs now that the drives are not mechanically spinning disks. Similarly, as more and more manufacturers do their own error correction and wear leveling on the memory chips in SSDs, the reliability will equal or exceed that of FC or SAS spinning disks.

    And speaking of spinning disks, the highest throughput I’ve ever seen quoted for a SATA disk was always 150MB/sec. Hands down, that was theoretically the best it could ever do. More likely you would only see 80MB/sec (which takes me back to the old days of Fast/Wide SCSI and the Barracuda). Given the limits of moving media like spinning disks and the read/write heads tracking across their surfaces, Flash throughput is just stunning. We are now in an era where, although Flash SSDs are slower than RAM, they are awfully fast, and fast enough to notice when booting a computer. I think the only real speed enhancement beyond the drive interface is to put Flash SSDs on the motherboard directly and build a SATA drive controller directly into the CPU to make read/write requests. I doubt it would be cost effective for the amount of improvement, but it would eliminate some of the motherboard electronics and smooth the flow a bit. Something to look for certainly in netbook or slate style computers in the future.

  • Drive suppliers hit capacity increase difficulties • The Register

    Hard disk drive suppliers are looking to add platters to increase capacity because of the expensive and difficult transition to next-generation recording technology.

    via Drive suppliers hit capacity increase difficulties • The Register.

    This is a good survey of upcoming HDD platter technologies. HAMR (Heat Assisted Magnetic Recording) and BPM (Bit Patterned Media) are the next generation, for when the current Perpendicular Magnetic Recording slowly hits the top end of its ability to squash together the 1’s and 0’s on a spinning hard drive platter. HAMR is reminiscent of the old magneto-optical drives from the halls of Steve Jobs’s old NeXT Computer company. It uses a laser to heat the surface of the drive platter before the read/write head starts recording data to the drive. This ‘change’ in the state of the surface of the drive (the heat) helps align the magnetism of the bits written, so that the tracks of the drive and the bits recorded inside them can be more tightly spaced. In the world of HAMR, heat + magnetism = bigger hard drives on the same old 3.5″ and 2.5″ platters we have now. With BPM, the whole drive is manufactured to hold a set number of bits and tracks in advance. Each bit is created directly on the platter as a ‘well’ with a ring of insulating material surrounding it. The wells are sufficiently small and dense to allow much tighter spacing than PMR. But as is often the case, the new technologies aren’t ready for manufacturing. A few test samples of possible devices are out in limited or custom-made engineering prototypes to test the waters.
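
    The payoff from tighter spacing is simple multiplication. As a toy example with made-up numbers (not anyone’s roadmap), shrinking both the bit pitch and the track pitch by 30% roughly doubles what fits on the same platter:

    ```python
    CURRENT_PLATTER_GB = 500     # ballpark for a 3.5" PMR platter of this era
    BIT_PITCH_SCALE = 0.7        # suppose HAMR/BPM lets bits sit 30% closer together
    TRACK_PITCH_SCALE = 0.7      # and tracks 30% closer as well

    new_platter_gb = CURRENT_PLATTER_GB / (BIT_PITCH_SCALE * TRACK_PITCH_SCALE)
    print(f"~{new_platter_gb:.0f} GB per platter")   # ~1,020 GB: same disc, double the data
    ```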

    Given the slowdown in silicon CMOS chip speeds from the likes of Intel and AMD, along with the wall PMR is hitting, it would appear the frontier days of desktop computing are coming to a close. Gone are the days of the Megahertz wars, and now the Gigabyte wars waged in the labs of review sites and test labs across the Interwebs are winding down too. The torrid pace of change in hardware we all experienced from the release of Windows 95 to the release this year of Windows 7 has slowed to a radical incrementalism. Intel releases so many chips with ‘slight’ variations in clock speed and cache that one cannot keep up with them all. Hard drive manufacturers try to increment their disks by about .5 TBytes every 6 months, but now that will stop. Flash-based SSDs will be the biggest change for most of us and will help break through the inherent speed barriers imposed by SATA and spinning-disk technologies. I hope a hybrid approach is used, mixing SSDs and HDDs for speed and size in desktop computers: fast things that need to be fast can use the SSD, slow things that are huge in size or quantity go to the HDD. As for next-gen disk-based technologies, I’m sure there will be a change to the next higher-density technology. But it will no doubt be a long time in coming.