Category: data center

  • LSI Launches $11,500 SSD, Crushes Other SSDs

    Tuesday LSI Corp announced the WarpDrive SLP-300 PCIe-based acceleration card, offering 300 GB of SLC solid state storage and performance up to 240,000 sustained IOPS. It also delivers I/O performance equal to hundreds of mechanical hard drives while consuming less than 25W of power – all for a meaty $11,500 USD.

    via LSI Launches $11,500 SSD, Crushes Other SSDs.

    This is the cost of entry for anyone working on an enterprise-level project: you cannot participate unless you can cross the threshold of a PCIe card costing $11,500 USD. This is the first time I have seen an actual price quoted for one of these cards aimed at the data center consulting and provisioning market. Fusion-io cannot be too far off this price when its cards are sold standalone rather than as part of a larger project RFP. I am somewhat stunned at the price premium, but LSI is a top engineering firm, and they can certainly design their own custom silicon to get top speed out of just about any commercial off-the-shelf Flash memory chip.

    I am impressed they went with an eight-lane (x8) PCI Express interface. I'm guessing that is aimed at server owners, whereas x4 cards target the desktop market; I still haven't seen any x16 SSD cards (x16 being the slot most desktops use for their AMD and nVidia graphics cards). One more part that makes this a premium offering is the choice of Single Level Cell (SLC) Flash memory chips, for the ultimate in speed and reliability, along with the Serial Attached SCSI (SAS) interface onboard the PCIe card itself. Desktop models opt for SATA-to-PCI-X-to-PCIe bridge chips, forcing your data to be translated and re-ordered multiple times. I have a feeling the SAS controller bridges to PCIe at full x8 speed, and that is the key to getting read and write speeds faster than 1,000 MB/sec. This part is quoted as reaching roughly ~1,400 MB/sec., and other than some very expensive turnkey boxes from manufacturers like Violin, this is a great user-installable part for getting the benefit of a really fast SSD drive array on a PCIe card.
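    A quick back-of-envelope sketch of why an x8 PCIe link comfortably clears the ~1,400 MB/sec. mark while a single SATA link cannot. The line rates and the 8b/10b encoding overhead below are era-appropriate assumptions on my part, not figures from LSI's spec sheet:

```python
# Back-of-envelope interface bandwidth comparison (circa-2010 hardware).
# Assumes 8b/10b line coding (10 bits on the wire per data byte), which
# both SATA and PCIe 1.x/2.x use; real-world throughput is lower still.

def usable_mb_per_sec(line_rate_gbps, lanes=1):
    """Convert a raw line rate (Gbit/s) to usable MB/s after 8b/10b."""
    return line_rate_gbps * 1e9 / 10 / 1e6 * lanes

sata_3g  = usable_mb_per_sec(3.0)           # 3Gbps SATA link
sata_6g  = usable_mb_per_sec(6.0)           # "SATA 6" link
pcie2_x8 = usable_mb_per_sec(5.0, lanes=8)  # PCIe 2.0, eight lanes

print(f"SATA 3Gbps : {sata_3g:6.0f} MB/s")   # 300 MB/s
print(f"SATA 6Gbps : {sata_6g:6.0f} MB/s")   # 600 MB/s
print(f"PCIe 2.0 x8: {pcie2_x8:6.0f} MB/s")  # 4000 MB/s
```

    Even a single PCIe 2.0 lane roughly matches a 3Gbps SATA link, so eight lanes leave plenty of headroom above 1,400 MB/sec.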

  • TidBITS Opinion: A Eulogy for the Xserve: May It Rack in Peace

     


     

    Apple’s Xserve was born in the spring of 2002 and is scheduled to die in the winter of 2011, and I now step up before its mourners to speak the eulogy for Apple’s maligned and misunderstood server product.

    via TidBITS Opinion: A Eulogy for the Xserve: May It Rack in Peace.

    Chuck Goolsbee’s eulogy is spot on, and every point rings true even in my limited experience. I’ve purchased two different Xserves since they were introduced: one is a second-generation G4 model, the other a 2006 Intel model (thankfully I skipped the G5 altogether). Other than one odd bug in the Intel-based Xserve (a weird blue video screen), there have been no bumps or quirks to report. I agree that the form factor of the housing is way too long; even in the rack I used (a discarded Sun Microsystems unit), the thing was really inelegant. The drive bays are another sore point for me. I dearly wanted to re-arrange, reconfigure, and upgrade the drive bays on both the old and newer Xserve, but the expense of acquiring new units was prohibitive at best, and they went out of manufacture very quickly after being introduced. If you neglected to buy your Xserve fully configured with the maximum storage available when it shipped, you were more or less left to fend for yourself. You could troll eBay and bulletin boards to score a bona fide Apple drive bay, but the supply was so limited that prices shot up and a black market formed. The Xserve RAID didn’t help things either, as drive bays were not consistently swappable between the Xserve and the Xserve RAID box. Given the limited time most sysadmins have to research purchases like this for upgrading an existing machine, it was a total disaster: a big fail, and an unsurprising one.

    I will continue to run my Xserve units until the drives or power supplies fail. That could happen any day, at any time, and hopefully I will have sufficient warning to get a new Mac mini server in as a replacement. Until then I, too, along with Chuck Goolsbee and the rest of the Xserve sysadmins, will wonder what could have been.

  • A Quick Look at OCZ’s RevoDrive x2 – AnandTech

     


     

    What OCZ (and other companies) ultimately need to do is introduce a SSD controller with a native PCI Express interface (or something else other than SATA). SandForce’s recent SF-2000 announcement showed us that SATA is an interface that simply can’t keep up with SSD controller evolution. At peak read/write speed of 500MB/s, even 6Gbps SATA is barely enough. It took us years to get to 6Gbps SATA, yet in about one year SandForce will have gone from maxing out 3Gbps SATA on sequential reads to nearing the limits of 6Gbps SATA.

    via A Quick Look at OCZ’s RevoDrive x2: IBIS Performance without HSDL – AnandTech :: Your Source for Hardware Analysis and News.

    It doesn’t appear the RevoDrive x2 is all that much better than four equivalently sized SSDs in a RAID 0 array. But hope springs eternal, and the author sums up where manufacturers should go with their future product announcements. I think everyone agrees SATA is the last thing we need if we want full speed out of Flash-based SSDs; what we need are SandForce controllers with native PCIe interfaces, and then maybe we will get our full money’s worth out of the SSDs we buy in the near future. As an enterprise data center architect, I would be seriously following these product announcements and architecture requirements. Shrewdly choosing your data center storage architecture (what mix of spinning disks and SSD do you really need?) will be a competitive advantage for data mining, Online Transaction Processing, and cloud-based software applications.
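    The RAID 0 comparison above boils down to a simple bottleneck model: striping drives multiplies throughput only up to the ceiling of the host interface. The per-drive and interface numbers here are illustrative round figures I chose for the sketch, not benchmarks of the RevoDrive x2:

```python
# Rough model of why striped SSDs behind a narrow link can't beat a
# native PCIe card: aggregate speed is capped by the narrowest link.

def raid0_throughput(per_drive_mb_s, drives, interface_cap_mb_s):
    """Ideal RAID 0 sequential speed, limited by the host interface."""
    return min(per_drive_mb_s * drives, interface_cap_mb_s)

# Four ~250 MB/s SSDs striped behind one 6Gbps SATA link (~600 MB/s usable)
print(raid0_throughput(250, 4, 600))   # 600 -- the link is the bottleneck
# The same four drives behind a PCIe 2.0 x4 link (~2000 MB/s usable)
print(raid0_throughput(250, 4, 2000))  # 1000 -- the drives are the limit
```

    This is why a native PCIe interface matters: it moves the ceiling high enough that the Flash itself becomes the limiting factor.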

    Until this article came out yesterday, I was unaware that OCZ had an SSD product with a SAS-style (Serial Attached SCSI) connector. That drive is called the IBIS, and OCZ describes the connector as HSDL (High Speed Data Link, an OCZ-created term). Benchmarks of that device have shown it to be faster than its RevoDrive counterpart, which uses an old-style native hard drive interface (SATA). AnandTech is lobbying to dump SATA altogether, even now that the most recent SATA version supports higher throughput (so-called SATA 6). The legacy support built into the SATA interface is simply unnecessary given the speed of today’s Flash memory chips and the SSDs they are designed into. SandForce has further complicated the issue by showing that its drive controllers can vastly outpace even SATA 6 drive interfaces. So, as I have concluded in previous blog entries, PCIe is the next logical, highest-speed option once you have looked at all the spinning-hard-drive interfaces currently on the market.

    The next thing that needs to be addressed is the cost of designing and building these PCIe-based SSDs in the coming year. $1,200 seems to be the going price for anything in the 512GB range with roughly 700MB/second data throughput. Once the price drops below the $1,000 mark, I think the number of buyers will go up (albeit still niche consumers like PC gamers). In the end we can only benefit from manufacturers dumping SATA for the PCIe interface, and the AnandTech quote at the top of this post really reinforces what I’ve been observing so far this year.