Tuesday LSI Corp announced the WarpDrive SLP-300 PCIe-based acceleration card, offering 300 GB of SLC solid state storage and performance up to 240,000 sustained IOPS. It also delivers I/O performance equal to hundreds of mechanical hard drives while consuming less than 25W of power–all for a meaty $11,500 USD.
This is the cost of entry for anyone working on an enterprise-level project: you cannot participate unless you can cross the threshold of a PCIe card costing $11,500 USD. This is the first time I have seen an actual price quote on one of these cards that swims in the data center consulting and provisioning market. Fusion-io cannot be too far off this price when its product isn't sold as a full package as part of a larger project RFP. I am somewhat stunned at the price premium, but LSI is a top engineering firm, and they can certainly design their own custom silicon to get top speed out of just about any commercial off-the-shelf Flash memory chip.

I am impressed they went with the 8-lane (8X) PCI Express interface. I'm guessing that's a requirement for server owners, whereas 4X serves the desktop market. Still, I don't see any 16X interfaces as of yet (that's the interface most desktops use for their AMD and nVidia graphics cards). One more thing that makes this a premium offering is the choice of Single Level Cell (SLC) Flash memory chips, for the ultimate in speed and reliability, along with the Serial Attached SCSI (SAS) interface onboard the PCIe card itself. Desktop models opt for SATA to PCI-X to PCIe bridge chips, forcing your data to be translated and re-ordered multiple times. I have a feeling SAS bridges to PCIe at full 8X interface speeds, and that is the key to getting past 1,000 MB/sec. for reads and writes. This part is quoted as reaching ~1,400 MB/sec., and other than some very expensive turnkey boxes from manufacturers like Violin, this is a great user-installable part for getting the benefit of a really fast SSD array on a PCIe card.
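To see why the 8-lane link matters, here is a quick back-of-the-envelope sketch. It assumes PCIe 1.x signaling at 2.5 GT/s per lane and 8b/10b encoding (the common numbers at the time); the figures are approximations, not vendor specs:

```python
# Back-of-the-envelope PCIe bandwidth math, PCIe 1.x era numbers:
# each lane signals at 2.5 GT/s, and 8b/10b encoding leaves 80% of
# the raw rate as usable payload (~250 MB/s per lane, per direction).

RAW_GT_PER_LANE = 2.5e9      # transfers/sec per PCIe 1.x lane
ENCODING = 8 / 10            # 8b/10b line-code efficiency

def usable_mb_per_sec(lanes: int) -> float:
    """Approximate usable one-direction bandwidth of a PCIe 1.x link."""
    return RAW_GT_PER_LANE * ENCODING * lanes / 8 / 1e6

for lanes in (1, 4, 8, 16):
    print(f"x{lanes:<2}: ~{usable_mb_per_sec(lanes):,.0f} MB/s")

# x4 tops out near 1,000 MB/s, while x8 gives ~2,000 MB/s:
# comfortable headroom for a card quoted at ~1,400 MB/s.
```

By this math a 4-lane slot caps out right at the 1,000 MB/sec. wall, which is consistent with LSI choosing 8X for a ~1,400 MB/sec. part.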
EC: Here are the things I would say in support of that. One of them, which I think is really important—and this is true especially of the elementary schools—is that training in drawing is teaching people to observe. PH: Which is what you want in scientists, right?
EC: That's right. Or doctors or lawyers. You want people who are observant. I think most people were not trained under artists, so they have an incorrect image of what an artist actually does. There's a complete disconnect with what they do. But there are places where this understanding comes across, such as in that famous book by Betty Edwards [Drawing on the Right Side of the Brain].
This interview is with a computer scientist named Ed Catmull. In the time since Ed Catmull entered the field, we've gone from computers crunching numbers like desktop calculators to computers producing full 3D animated films. Ed Catmull's single most important goal was to create an animated film using a computer. He eventually accomplished that and more once he helped found Pixar. All of his research and academic work was focused on that one goal.
I'm always surprised to see what references or influences people quote in interviews. In fact, I am really encouraged. It was about 1988 or so when I took a copy of Betty Edwards's book that my mom had and started reading it and doing some of the exercises. Stranger still, I went back to college and majored in art (not drawing, but photography). So I think I understand exactly what Ed Catmull means when he talks about being observant. In every job I've had, computer-related or otherwise, that ability to be observant just doesn't exist in a large number of people. Eventually people begin to ask me how I know all this stuff and when I learned it. Most times, the things they are most impressed by are things like noticing something and trying a different strategy when attempting to fix a problem. The proof is that I can do this with things I am unfamiliar with and usually make some headway toward fixing them. Whether the thing is mechanical or computer-related doesn't matter. I make good guesses, and it's not because I'm an expert in anything; I merely notice things. That's all it is.
So maybe everyone should read and work through Betty Edwards's book Drawing on the Right Side of the Brain. If nothing else it might make you feel a little dislocated and uncomfortable. It might shake you up and make you question some preconceived notions about yourself, like the feeling that you can't draw or that you're not good at art. I think with practice anyone can draw, and with practice anyone can become observant.
Apple’s Xserve was born in the spring of 2002 and is scheduled to die in the winter of 2011, and I now step up before its mourners to speak the eulogy for Apple’s maligned and misunderstood server product.
Chuck Goolsbee's eulogy is spot on, and every point rings true even in my limited experience. I've purchased two different Xserves since they were introduced. One is a 2nd-generation G4 model; the other is a 2006 Intel model (thankfully I skipped the G5 altogether). Other than a weird bug in the Intel-based Xserve (a weird blue video screen), there have been no bumps or quirks to report. I agree that the form factor of the housing is way too long. Even in the rack I used (a discarded Sun Microsystems unit), the thing was really inelegant. The drive bays are a sore point for me too. I have dearly wanted to re-arrange, reconfigure, and upgrade the drive bays on both the old and newer Xserve, but the expense of acquiring new units was prohibitive at best, and they went out of manufacture very quickly after being introduced. If you neglected to buy your Xserve fully configured with the maximum storage available when it shipped, you were more or less left to fend for yourself. You could troll eBay and bulletin boards to score a bona fide Apple drive bay, but the supply was so limited that it drove up prices and became a black market. The Xserve RAID didn't help things either, as drive bays were not consistently swappable between the Xserve and the Xserve RAID box. Given the limited time most sysadmins have for researching purchases like this to upgrade an existing machine, it was a total disaster, a big fail, and in hindsight unsurprising.
I will continue to run my Xserve units until the drives or power supplies fail. That could happen any day, at any time, and hopefully I will have sufficient warning to get a new Mac mini server in as a replacement. Until then I too, along with Chuck Goolsbee and the rest of the Xserve sysadmins, will kind of wonder what could have been.
What OCZ (and other companies) ultimately need to do is introduce a SSD controller with a native PCI Express interface (or something else other than SATA). SandForce’s recent SF-2000 announcement showed us that SATA is an interface that simply can’t keep up with SSD controller evolution. At peak read/write speed of 500MB/s, even 6Gbps SATA is barely enough. It took us years to get to 6Gbps SATA, yet in about one year SandForce will have gone from maxing out 3Gbps SATA on sequential reads to nearing the limits of 6Gbps SATA.
It doesn't appear the RevoDrive X2 is all that much better than four equivalent-sized SSDs in a four-drive RAID 0 array. But hope springs eternal, and the author sums up where manufacturers should go with their future product announcements. I think everyone agrees SATA is the last thing we need if we want full speed out of Flash-based SSDs; we need SandForce controllers with native PCIe interfaces, and then maybe we will get our full money's worth out of the SSDs we buy in the near future. As an enterprise data center architect, I would seriously be following these product announcements and architecture requirements. Shrewdly choosing your data center storage architecture (what mix of spinning disks and SSDs do you really need?) will be a competitive advantage for data mining, Online Transaction Processing, and Cloud-based software applications.
Until this article came out yesterday, I was unaware that OCZ had an SSD product with a SAS (Serial Attached SCSI) style interface. That drive is called the IBIS, and OCZ describes the connector as HSDL (High Speed Data Link, an OCZ-created term). Benchmarks of that device have shown it to be faster than its RevoDrive counterpart, which uses an old-style native hard drive interface (SATA). Anandtech is lobbying to dump SATA altogether, even now that the most recent SATA version supports higher throughput (so-called SATA 6Gbps). The legacy support built into the SATA interface is absolutely unnecessary given the speed of today's flash memory chips and the SSDs they are designed into. SandForce has further complicated the issue by showing that its drive controllers can vastly outpace even SATA 6Gbps drive interfaces.

So, as I have concluded in previous blog entries, PCIe is the next logical and highest-speed option once you look at all the spinning hard drive interfaces currently on the market. The next thing that needs to be addressed is the cost of designing and building these PCIe-based SSDs in the coming year. $1,200 seems to be the going price for anything in the 512GB range with roughly 700MB/second data throughput. Once the price goes below the $1,000 mark, I think the number of buyers will go up (albeit still niche consumers like PC gamers). In the end we can only benefit from manufacturers dumping SATA for the PCIe interface, and the Anandtech quote at the top of this post really reinforces what I've been observing so far this year.
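For a sense of why even SATA 6Gbps feels cramped, here is a minimal sketch of the link-level arithmetic, assuming the 8b/10b encoding SATA uses (so usable payload is 80% of the raw line rate):

```python
# Approximate usable ceilings for each SATA generation, after the
# 8b/10b encoding overhead is taken out of the raw line rate.

def sata_usable_mb_per_sec(raw_gbps: float) -> float:
    """Approximate usable MB/s for a SATA link at the given raw rate."""
    return raw_gbps * 1e9 * (8 / 10) / 8 / 1e6

for name, raw in (("SATA 1.5Gbps", 1.5), ("SATA 3Gbps", 3.0), ("SATA 6Gbps", 6.0)):
    print(f"{name}: ~{sata_usable_mb_per_sec(raw):,.0f} MB/s usable")

# SATA 6Gbps tops out around 600 MB/s usable, so a controller already
# pushing 500+ MB/s leaves almost no headroom. A PCIe x4 or x8 link
# offers several times that.
```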
Intel and Achronix: 2 great tastes that taste great together
According to Greg Martin, a spokesman for the FPGA maker, Achronix can compete with Xilinx and Altera because it has, at 1.5GHz in its current Speedster1 line, the fastest such chips on the market. And by moving to Intel’s 22nm technology, the company could have ramped up the clock speed to 3GHz.
That kind of says it all in one sentence, or two sentences in this case. The fastest FPGA on the market is quite an accomplishment unto itself. Putting that FPGA on the world's most advanced production line and silicon wafer technology is what Andy Grove would call the 10X Effect. FPGAs are reconfigurable processors that can have their circuits re-routed and optimized for different tasks over and over again. This is really beneficial for very small batches of processors where you need a custom design. Some of the things they can speed up are heavy math and lookups in very large database searches. In the past I was always curious whether they could be used as a general-purpose computer that could switch gears and optimize itself for different tasks. I didn't know whether it would work or be worthwhile, but it really seemed like there was a vast untapped reservoir of power in the FPGA.
Some supercomputer manufacturers have started using FPGAs as special-purpose co-processors and have found immense speed-ups as a result. Oil prospecting companies have also used them to speed up analysis of seismic data and place good bets on dropping a well bore in the right spot. But price has always been a big barrier to entry, as quoted in this article: $1,000 per chip. That limits the appeal to buyers for whom price is no object and speed and time matter more. The two big competitors in the field of FPGA manufacturing are Altera and Xilinx, both of which design the chips but have them manufactured in other countries. This has left FPGAs second-class citizens, using older-generation chip technologies on old manufacturing lines. They always had to take what they could get, and performance in terms of clock speed was always less too.
During the Megahertz and Gigahertz wars it was not unusual to see chip speeds increasing every month. FPGAs sped up too, but not nearly as fast. I remember seeing 200MHz and 400MHz touted for Xilinx's and Altera's top-of-the-line products. With Achronix running at 1.5GHz, things have changed quite a bit. That's a general-purpose CPU speed in a completely customizable FPGA, which makes the FPGA even more useful. However, instead of going faster, this article points out that people would rather buy the same speed while using less electricity and generating less heat. There's no better way to do this than to shrink the size of the circuits on the FPGA, and that is the core philosophy at Intel. The two companies have just teamed up to put the Achronix FPGA on the smallest-feature-size production line, run by the most optimized, cost-conscious manufacturer of silicon chips bar none.
Another point made in the article is that the market for FPGAs at this level of performance tends to be defense-contract oriented. As a result, to maintain the level of security necessary to sell chips to this industry, the chips need to be made in the good ol' USA, and Intel doesn't outsource anything when it comes to its top-of-the-line production facilities. Everything is in Oregon, Arizona, or New Mexico and is guaranteed not to have any secret backdoors built in to funnel data to foreign governments.
I would love to see some university research projects start looking at FPGAs again to see whether, as speeds go up and power consumption comes down, there's a happy medium: a mix of general-purpose CPUs and FPGAs that might help the average Joe working on his desktop, laptop, or iPad. All I know is that Intel entering a market will make it more competitive, and hopefully it will lower the barrier to entry for anyone who would really like to get their hands on a useful processor they can customize to their needs.
Building upon the original 1st-generation RevoDrive, the new version boasts speeds up to 740 MB/s and up to 120,000 IOPS, almost three times the throughput of other high-end SATA-based solutions.
One cannot make this stuff up: two weeks ago Angelbird announced its bootable PCI Express SSD, and late yesterday OCZ, one of the biggest third-party aftermarket makers of SSDs, announced a new PCI Express SSD which is also bootable. The big difference between the Angelbird product and OCZ's RevoDrive is throughput at the top end. If you purchase the most expensive, fully equipped card from each manufacturer, you will get 900+ MBytes/sec. on the Angelbird versus 700+ MBytes/sec. on the RevoDrive from OCZ. Other differences include the RevoDrive's 'native' support on the host OS. I take this to mean that OCZ isn't using a 'virtual OS' on embedded chips to boot so much as having the PCIe drive electronics make everything appear to be a real native boot drive. Angelbird uses an embedded OS to virtualize and abstract the hardware so that you get to boot any OS you want and run it off the flash memory onboard.
The other difference I can see from reading the announcements is that only the largest configured size of the Angelbird gets you the fastest throughput. As drives are added, the RAID array is striped over more of the available flash drives. The OCZ product also uses a RAID array to increase speed; however, it hits maximum throughput at an intermediate size (the ~250GByte configuration) as well as at the maximum size. So if you want 'normal' to 'average' sized storage with better throughput, you don't have to buy the maxed-out, most expensive version of the OCZ RevoDrive to get there. That could mean a more manageable price for the gaming market or for the PC fanboys who want faster boot times. Don't get me wrong, though: I'm not recommending buying an expensive 250GByte RevoDrive if a similarly sized SATA SSD costs a good deal less. No, far from it; the speed difference may not be worth the price you pay. But the RevoDrive could be upgraded over time while keeping your speeds at the maximum 700+ MBytes/sec. you get with its high-throughput intermediate configuration. Right now I don't have any prices to compare for either the Angelbird or the OCZ RevoDrive. I can tell you, however, that the Fusion-io low-end desktop product is in the $700-$800 range and doesn't come with upgradeable storage; you get a few sizes to choose from, and that's it. If either of the two products ships at a price significantly less than the Fusion-io product, everyone will flock to them, I'm sure.
Another significant feature touted in both product announcements is the SandForce SF-1200 flash controller. Right now that controller is the de facto standard high-throughput part everyone is using in SATA SSD products. There's also a higher-end part on the market called the SF-1500 (SandForce's top-end offering). So it's de rigueur to include the SandForce SF-1200 in any product you hope to sell to a wide audience (especially hardware fanboys). However, let me caution you: in the flurry of product announcements, and always with an eye toward preventing buyer's remorse, SandForce very recently announced a new drive controller labelled the SF-2000 series. This part may or may not be targeted at the consumer desktop market, but depending on how well it performs once it starts shipping, you may want to wait and see whether the next revision of this crop of newly announced PCIe cards adopts the new SandForce controller to gain the extra throughput it is touting. The new controller is rated at 740MBytes/sec. all by itself; with 4 SSDs attached to it on a PCIe card, theoretically four times 740 equals 2,960 MBytes/sec., and that is a substantially large quantity of data coming through the PCI Express data bus. Luckily for most of us, the PCI Express interface on a 4X (four-lane) data bus has a while to go before it gets saturated by the disk throughput of today's shipping cards. The question is how long it will take to overwhelm a four-lane PCI Express connector. I hope to see the day this happens.
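As a rough sanity check on that saturation question, here is a minimal sketch. My assumptions: ~250 MB/s usable per lane for PCIe 1.x and ~500 MB/s for PCIe 2.0 (both after 8b/10b encoding), and the four-controller aggregate is hypothetical, not a shipping product:

```python
# When would a striped set of SSD controllers overwhelm a four-lane
# PCIe link? Usable per-lane rates after 8b/10b encoding overhead:
PCIE_LANE_MB_S = {"PCIe 1.x": 250, "PCIe 2.0": 500}

def headroom(link_mb_s: int, aggregate_mb_s: int) -> int:
    """Positive = spare bandwidth, negative = saturated link."""
    return link_mb_s - aggregate_mb_s

today = 740             # a shipping card's top-end rating, MB/s
hypothetical = 4 * 740  # four SF-2000-class controllers striped RAID 0 style

for gen, per_lane in PCIE_LANE_MB_S.items():
    link = per_lane * 4  # four lanes
    print(f"{gen} x4 (~{link:,} MB/s): "
          f"today {headroom(link, today):+,} MB/s, "
          f"quad stripe {headroom(link, hypothetical):+,} MB/s")
```

By this math today's ~740 MBytes/sec. cards leave a four-lane slot plenty of headroom, but a hypothetical quad SF-2000 stripe would already outrun even a PCIe 2.0 x4 link, so that day may come sooner than it looks.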
Intel, Dell, EMC, Fujitsu and IBM are forming a working group to standardise PCIe-based solid state drives (SSDs), and have a webcast coming out today to discuss it.
Now this is interesting: just two weeks after Angelbird pre-announced its own PCIe flash-based SSD product, Intel is forming a consortium. Things are heating up; this is now a hot new category, and I want to draw your attention to a sentence in this Register article:
By connecting to a server’s PCIe bus, SSDs can pour out their contents faster to the server than by using Fibre Channel or SAS connectivity. The flash is used as a tier of memory below DRAM and cuts out drive array latency when reading and writing data.
This is without a doubt the first instance I have read of a belief, even if just in the mind of this article's author, that Fibre Channel and Serial Attached SCSI aren't fast enough. Who knew PCI Express would be preferable to an old storage interface when it comes to enterprise computing? Look out world, there's a new sheriff in town, and his name is PCIe SSD. This product category, though, will not be for the consumer end of the market, at least not from this consortium. It is targeting the high-margin, high-end data center market, where interoperability keeps vendor lock-in from occurring. With interoperability as the baseline, vendors have to gain an advantage not through engineering necessarily but most likely through firmware. If that's the differentiator, then whoever has the best embedded programming team will have the best throughput and the highest-rated product. Let's hope this all eventually finds a market saturation point, driving the technology down into the consumer desktop and enabling the next big burst in desktop computer performance. I hope PCIe SSDs become the storage of choice and that motherboards can be rid of all SATA disk I/O ports and firmware in the near future. We don't need SATA SSDs; we need PCIe SSDs.
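The "tier of memory below DRAM" idea from the quote is easy to sketch in miniature. This toy read path is purely illustrative (the class and names are mine, not any vendor's API): reads try DRAM first, fall back to the PCIe flash tier, and only pay the drive array's latency on a miss:

```python
# Toy model of storage tiering: DRAM cache over a PCIe flash tier
# over a (slow) drive array. Illustrative only.

class TieredStore:
    def __init__(self, drive_array):
        self.dram = {}                  # hottest working set
        self.flash = {}                 # PCIe SSD tier: larger, slower than DRAM
        self.drive_array = drive_array  # backing store: slowest, largest

    def read(self, key):
        if key in self.dram:            # fastest path
            return self.dram[key]
        if key in self.flash:           # flash tier hit
            value = self.flash[key]
            self.dram[key] = value      # promote hot data upward
            return value
        value = self.drive_array[key]   # pay the array latency once
        self.flash[key] = value         # stage in flash for next time
        return value

store = TieredStore(drive_array={"row42": b"payload"})
print(store.read("row42"))  # miss: goes all the way to the array
print(store.read("row42"))  # served from the flash tier, promoted to DRAM
```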
Extreme SSD performance over PCI-Express on the cheap? There’s hope!
A company called Angelbird is working on bringing high-performance SSD solutions to the masses, specifically, user upgradeable PCI-Express SSD solution.
This is one of a pair of SSD announcements that came in on Tuesday. SSDs are all around us now, and the product announcements are coming in faster and harder. The first is from a company named Angelbird. Looking at the website announcing their product's specs, it is on paper a very fast PCIe-based SSD, right up there with Fusion-io in terms of what you get for the dollars spent. I'm a little concerned, however, about the reliance on an OS hosted in the firmware of the PCIe card. I would prefer something a little more peripheral that the OS supports natively, rather than having the card become the OS. But this is all speculative until actual production or test samples hit the review websites and we see some kind of benchmarks from the likes of Tom's Hardware or Anandtech.
From MacNN|Electronista:
Iomega threw itself into external solid-state drives today through the External SSD Flash Drive. The storage uses a 1.8-inch SSD that lets it occupy a very small footprint but still outperform a rotating hard drive:
The second story covers a new product from Iomega: for the first time we have an external SSD from a mainstream manufacturer. The price is at a premium compared to the performance, but if you like the looks you'll be willing to pay. The read and write speeds are not bad, but they're not the best for the amount of money you're paying. And why do they still use a 2.5″ external case if it's internally a 1.8″ drive? Couldn't they shrink it down to the old Firefly HDD size from back in the day? It should be smaller.
Tuesday Samsung announced that it had begun mass-producing the industry’s first 3-bit-per-cell, 64 Gb (8 GB) MLC NAND flash chip using 20-nm-class processing. The news follows Samsung’s introduction of 32 Gb (4 GB) 3-bit NAND flash using 30-nm-class processing last November, and the company’s 32 Gb MLC NAND using 20-nm-class processing unleashed in April.
Samsung's product development keeps arriving faster and harder with each revision of the product cycle, and the competition is not slowing down. There are at least two other big flash memory manufacturers moving into the ~20nm class of flash memory too. So three big manufacturers are all making roughly the same 'feature size', with Apple sucking up all the supply. If it's possible for an oversupply to occur, it won't be until next year, I am sure, and then hopefully prices will start to fall somewhat in the SSD market. Add to this the Apple-style packaging of multiple 64Gbit chips sandwiched one on top of the other, keeping everything tidy in one small footprint, and you have ultra-dense chips going into products now. In the iPhone and iPad they can layer up to 8 or 16 of those chips into one physical package to save room. This means we could see iPhones hitting 64GBytes of storage and the iPad reaching 128GBytes. It will truly be a new day once both of these devices hit those levels of storage. Consider my Mac mini from 2008: it has a spinning hard drive that is only 80GBytes total. That, my friends, is a revolution in the making.
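The die math behind those capacity guesses is simple enough to sketch. The 8- and 16-die stack counts come from the commentary above; this just treats one 64 Gb (gigabit) die as exactly 8 GB (gigabytes):

```python
# Capacity of a stacked NAND package: dies per package times
# gigabits per die, converted to gigabytes (8 bits per byte).

def package_gbytes(dies: int, die_gbit: int = 64) -> int:
    """Capacity in GBytes of a multi-die NAND package."""
    return dies * die_gbit // 8

print(package_gbytes(8))   # 64 GB:  a plausible iPhone ceiling
print(package_gbytes(16))  # 128 GB: a plausible iPad ceiling
```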
SandForce has now announced an SF-2000 controller that doubles up the I/O performance of the SF-1500. The new product runs at 60,000 sustained read and write IOPS and does 500MB/sec when handling read or write data. It uses a 6Gbit/s SATA interface and SandForce says it can make use of single-level cell flash, MLC or the enterprise MLC put out by Micron.
SandForce is continuing to make great strides in its SSD controller architecture. There's no stopping the train now. But as always, read the fine print on any SSD product you buy and find out who manufactures the drive controller and what version it is. Benchmarks are always a good thing to consult before you buy, too.