Apple’s Xserve was born in the spring of 2002 and is scheduled to die in the winter of 2011, and I now step up before its mourners to speak the eulogy for Apple’s maligned and misunderstood server product.
Chuck Goolsbee's eulogy is spot on, and every point rings true even in my limited experience. I've purchased two different Xserves since they were introduced. One is a 2nd generation G4 model, the other a 2006 Intel model (thankfully I skipped the G5 altogether). Other than a weird bug in the Intel-based Xserve (a strange blue video screen), there have been no bumps or quirks to report. I agree that the form factor of the housing is way too long. Even in the rack I used (a discarded Sun Microsystems unit), the thing was really inelegant. The drive bays are a sore point for me too. I dearly wanted to rearrange, reconfigure and upgrade the drive bays on both the old and newer Xserve, but the expense of acquiring new units was prohibitive at best, and they went out of manufacture very quickly after being introduced. If you neglected to buy your Xserve fully configured with the maximum storage available when it shipped, you were more or less left to fend for yourself. You could troll eBay and bulletin boards to score a bona fide Apple drive bay, but the supply was so limited it drove up prices and created a black market. The Xserve RAID didn't help things either, as drive bays were not consistently swappable between the Xserve and the Xserve RAID box. Given the limited time most sysadmins have to research purchases like this when upgrading an existing machine, it was a total disaster: a big, and unsurprising, failure.
I will continue to run my Xserve units until the drives or power supplies fail. That could happen any day, at any time, and hopefully I will have sufficient warning to get a new Mac mini server in place to replace them. Until then I too, along with Chuck Goolsbee and the rest of the Xserve sysadmins, will wonder what could have been.
What OCZ (and other companies) ultimately need to do is introduce a SSD controller with a native PCI Express interface (or something else other than SATA). SandForce’s recent SF-2000 announcement showed us that SATA is an interface that simply can’t keep up with SSD controller evolution. At peak read/write speed of 500MB/s, even 6Gbps SATA is barely enough. It took us years to get to 6Gbps SATA, yet in about one year SandForce will have gone from maxing out 3Gbps SATA on sequential reads to nearing the limits of 6Gbps SATA.
It doesn't appear the RevoDrive X2 is all that much better than four equivalently sized SSDs in a four-drive RAID 0 array. But hope springs eternal, and the author sums up where manufacturers should go with their future product announcements. I think everyone agrees SATA is the last thing we need if we want full speed out of flash-based SSDs; we need SandForce controllers with native PCIe interfaces, and then maybe we will get our full money's worth out of the SSDs we buy in the near future. As an enterprise data center architect, I would be following these product announcements and architecture requirements closely. Shrewdly choosing your data center storage architecture (what mix of spinning disks and SSDs you really need) will be a competitive advantage for data mining, online transaction processing, and cloud-based software applications.
Until this article came out yesterday I was unaware that OCZ had an SSD product with what looks like a SAS (Serial Attached SCSI) interface. That drive is called the IBIS, and OCZ describes the connector as HSDL (High Speed Data Link, an OCZ-coined term). Benchmarks of that device have shown it to be faster than its RevoDrive counterpart, which internally still relies on an old-style hard drive interface (SATA). Anandtech is lobbying to dump SATA altogether, even now that the most recent SATA revision supports higher throughput (so-called 6Gbps SATA). The legacy support built into the SATA interface is absolutely unnecessary given the speed of today's flash memory chips and the SSDs they are designed into. SandForce has further complicated the issue by showing that their drive controllers can vastly outpace even 6Gbps SATA interfaces. So, as I have concluded in previous blog entries, PCIe is the next logical and highest-speed option once you look at all the spinning hard drive interfaces currently on the market. The next thing that needs to be addressed is the cost of designing and building these PCIe-based SSDs in the coming year. $1,200 seems to be the going price for anything in the 512GB range with roughly 700MB/second of throughput. Once the price goes below the $1,000 mark, I think the number of buyers will go up (albeit still niche consumers like PC gamers). In the end we can only benefit from manufacturers dumping SATA for the PCIe interface, and the Anandtech quote at the top of this post really reinforces what I've been observing so far this year.
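For the record, here's the back-of-the-envelope arithmetic behind why SATA looks like a dead end (a sketch of my own, not figures from the article): SATA's 8b/10b line encoding means every data byte takes ten bits on the wire, so even a 6Gbps link tops out around 600MB/s of payload.

```python
# Back-of-the-envelope: why 6Gbps SATA is "barely enough" for modern SSD controllers.
# Assumption (mine): 8b/10b encoding puts 10 bits on the wire per data byte;
# other protocol overhead is ignored for simplicity.

def sata_usable_mb_per_s(raw_gbps):
    """Approximate usable payload bandwidth of a SATA link in MB/s."""
    bits_per_byte_on_wire = 10                       # 8b/10b encoding
    return raw_gbps * 1000 / bits_per_byte_on_wire   # Gbps -> MB/s of payload

for raw in (1.5, 3.0, 6.0):
    print(f"SATA {raw}Gbps ~= {sata_usable_mb_per_s(raw):.0f} MB/s usable")

print(f"500 MB/s is {500 / sata_usable_mb_per_s(6.0):.0%} of a 6Gbps link")
```

On those numbers, a controller doing 500MB/s sequential reads is already using roughly 83% of what the newest SATA revision can carry, which is exactly why PCIe looks like the way forward.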
Intel and Achronix: 2 great tastes that taste great together
According to Greg Martin, a spokesman for the FPGA maker, Achronix can compete with Xilinx and Altera because it has, at 1.5GHz in its current Speedster1 line, the fastest such chips on the market. And by moving to Intel’s 22nm technology, the company could have ramped up the clock speed to 3GHz.
That kind of says it all in one sentence, or two sentences in this case. The fastest FPGA on the market is quite an accomplishment unto itself. Putting that FPGA on the world's most advanced production line and silicon wafer technology is what Andy Grove would have called the 10X effect. FPGAs are reconfigurable processors whose circuits can be re-routed and optimized for different tasks over and over again. This is really beneficial for very small batches of processors where you need a custom design. Among the things they can speed up are math-heavy workloads and lookups across a very large database search. In the past I was always curious whether they could be used as a general-purpose computer that could switch gears and optimize itself for different tasks. I didn't know whether it would work or be worthwhile, but it really seemed like there was a vast untapped reservoir of power in the FPGA.
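To make the 'reconfigurable' idea a little more concrete, here is a toy sketch (purely illustrative, and not how Achronix's actual silicon is organized): the basic building block of an FPGA is a small lookup table (LUT), and changing the bits loaded into it changes which logic function that same piece of hardware computes.

```python
# Toy illustration of FPGA reconfigurability: a 2-input lookup table (LUT)
# whose behaviour is defined entirely by the truth table loaded into it.
# Real FPGAs wire thousands of such LUTs together; this is only a sketch.

class LUT2:
    def __init__(self, truth_table):
        # truth_table maps (a, b) input pairs to a single output bit
        self.table = dict(truth_table)

    def __call__(self, a, b):
        return self.table[(a, b)]

# "Configure" the LUT as an AND gate...
lut = LUT2({(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})
print([lut(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 0, 0, 1]

# ...then "reconfigure" the same hardware as an XOR gate.
lut = LUT2({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0})
print([lut(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 0]
```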
Some supercomputer manufacturers have started using FPGAs as special-purpose co-processors and have found immense speed-ups as a result. Oil prospecting companies have also used them to speed up analysis of seismic data and place good bets on dropping a well bore in the right spot. But price has always been a big barrier to entry, as quoted in this article: $1,000 per chip. That limits the appeal to buyers for whom price is no object and speed and time matter more. The two big competitors in the field of FPGA manufacturing are Altera and Xilinx, both of which design the chips but have them manufactured in other countries. This has led to FPGAs being second-class citizens built with older-generation chip technologies on old manufacturing lines. They always had to deal with what they could get, and performance in terms of clock speed was always lower too.
During the megahertz and gigahertz wars it was not unusual to see chip speeds increasing every month. FPGAs sped up too, but not nearly as fast; I remember seeing 200MHz and 400MHz touted for Xilinx and Altera top-of-the-line products. With Achronix running at 1.5GHz, things have changed quite a bit. That's general-purpose CPU speed in a completely customizable FPGA, which makes the FPGA even more useful. However, instead of going faster, this article points out people would rather buy the same speed but use less electricity and generate less heat. There's no better way to do that than to shrink the size of the circuits on the FPGA, and that is the core philosophy of Intel. The two companies have just teamed up to put the Achronix FPGA on the smallest-feature-size production line run by the most optimized, cost-conscious manufacturer of silicon chips bar none.
Another point made in the article is that the market for FPGAs at this level of performance also tends to be more defense-contract oriented. As a result, to maintain the level of security necessary to sell chips to this industry, the chips need to be made in the good ol' USA, and Intel doesn't outsource anything when it comes to its top-of-the-line production facilities. Everything is in Oregon, Arizona or Washington State and is guaranteed not to have any secret backdoors built in to funnel data to foreign governments.
I would love to see some university research projects start looking at FPGAs again, to see whether, as speeds go up and power draw goes down, there's a happy medium or mix of general-purpose CPUs and FPGAs that might help the average Joe working on his desktop, laptop or iPad. All I know is that Intel entering a market makes it more competitive, and hopefully lowers the barrier to entry for anyone who would really like to get their hands on a useful processor they can customize to their needs.
Building upon the original 1st-generation RevoDrive, the new version boasts speeds up to 740 MB/s and up to 120,000 IOPS, almost three times the throughput of other high-end SATA-based solutions.
One cannot make this stuff up: two weeks ago Angelbird announced its bootable PCI Express SSD, and late yesterday OCZ, one of the biggest third-party aftermarket makers of SSDs, announced a new PCI Express SSD which is also bootable. The big difference between the Angelbird product and OCZ's RevoDrive is throughput at the top end. If you purchase the most expensive, fully equipped card from each manufacturer, you get 900+MBytes/sec. on the Angelbird versus 700+MBytes/sec. on the RevoDrive from OCZ. Other differences include the 'native' support of the OCZ card on the host OS. I think this means they aren't using a 'virtual OS' on embedded chips to boot, but rather having the PCIe drive electronics make everything appear to be a real native boot drive. Angelbird uses an embedded OS to virtualize and abstract the hardware so that you can boot any OS you want and run it off the flash memory onboard.
The other difference I can see from reading the announcements is that only the largest configured size of the Angelbird gets you the fastest throughput: as drives are added, the RAID array is striped over more of the available flash. The OCZ product also uses a RAID array to increase speed; however, it hits its maximum throughput at an intermediate size (a ~250GByte configuration) as well as at the maximum size. So if you want 'normal' to 'average' size storage with better throughput, you don't have to buy the maxed-out, most expensive version of the OCZ RevoDrive to get there. That could mean a more manageable price for the gaming market or for PC fanboys who want faster boot times. Don't get me wrong though, I'm not recommending buying an expensive 250GByte RevoDrive if a similarly sized SATA SSD costs a good deal less; far from it, the speed difference may not be worth the price you pay. But the RevoDrive could be upgraded over time while keeping your speeds at the maximum 700+MBytes/sec. you get with its high-throughput intermediate configuration. Right now I don't have any prices to compare for either the Angelbird or the OCZ RevoDrive. I can tell you, however, that the Fusion-io low-end desktop product is in the $700-$800 range and doesn't come with upgradeable storage: you get a few sizes to choose from, and that's it. If either of the two new products ships at a price significantly less than the Fusion-io product, I'm sure everyone will flock to them.
One other significant feature touted in both product announcements is the SandForce SF-1200 flash controller. Right now that controller is the de facto standard high-throughput part everyone is using for SATA SSD products. There's also a higher-end part on the market, the SF-1500 (their top-end offering). So it's de rigueur to include the SandForce SF-1200 in any product you hope to sell to a wide audience (especially hardware fanboys). However, let me caution you, in this flurry of product announcements and with an eye toward preventing buyer's remorse, that SandForce very recently announced a new drive controller labelled the SF-2000 series. This part may or may not be targeted at the consumer desktop market, but depending on how well it performs once it starts shipping, you may want to wait and see whether the next revision of this crop of newly announced PCIe cards adopts the new SandForce controller to gain the extra throughput it is touting. The new controller is rated at 740MBytes/sec. all by itself; with four SSDs attached to it on a PCIe card, theoretically four times 740 equals 2,960MBytes/sec., and that is a substantially large quantity of data coming through the PCI Express bus. Luckily for most of us, the PCI Express interface, given enough lanes, has a while to go before it gets saturated by all this disk throughput. The question is how long it will take to overwhelm a four-lane PCI Express connector. I hope to see the day this happens.
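Here's the rough arithmetic behind that question, with the per-lane PCIe figures being my own assumptions (approximate usable payload, ignoring protocol overhead) rather than numbers from either announcement.

```python
# Rough check on how much a PCIe slot can carry versus a card holding several
# fast SSD controllers. Assumed per-lane payload rates: ~250MB/s for PCIe 1.1,
# ~500MB/s for PCIe 2.0. The 740MB/s per-controller figure is the one quoted above.

PER_LANE_MB_S = {"PCIe 1.1": 250, "PCIe 2.0": 500}

controller_mb_s = 740
controllers = 4
aggregate = controller_mb_s * controllers
print(f"Aggregate from {controllers} controllers: {aggregate} MB/s")   # 2960 MB/s

for lanes in (4, 8):
    for gen, per_lane in PER_LANE_MB_S.items():
        slot = per_lane * lanes
        verdict = "saturated" if aggregate > slot else "headroom left"
        print(f"{gen} x{lanes}: {slot} MB/s -> {verdict}")
```

On these assumed numbers a four-lane gen-2 slot would already be past its limit for four such controllers, while an eight-lane gen-2 slot (or a future PCIe generation) still has room, so how quickly the four-lane connector gets overwhelmed really depends on the slot the card is plugged into.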
Intel, Dell, EMC, Fujitsu and IBM are forming a working group to standardise PCIe-based solid state drives (SSDs), and have a webcast coming out today to discuss it.
Now this is interesting: just two weeks after Angelbird pre-announced its own PCIe flash-based SSD product, Intel is forming a consortium. Things are heating up, this is now a hot new category, and I want to draw your attention to a sentence in this Register article:
By connecting to a server’s PCIe bus, SSDs can pour out their contents faster to the server than by using Fibre Channel or SAS connectivity. The flash is used as a tier of memory below DRAM and cuts out drive array latency when reading and writing data.
This is without a doubt the first instance I have read of a belief, even if just in the mind of this article's author, that Fibre Channel and Serial Attached SCSI aren't fast enough. Who knew PCI Express would be preferable to an old storage interface when it comes to enterprise computing? Look out world, there's a new sheriff in town and his name is PCIe SSD. This product category will not be for the consumer end of the market, at least not for this consortium; it is targeting the high-margin, high-end data center market, where interoperability keeps vendor lock-in from occurring. By choosing interoperability, everyone has to gain an advantage not necessarily through engineering but most likely through firmware. If that's the differentiator, then whoever has the best embedded programming team will have the best throughput and the highest-rated product. Let's hope this all eventually finds a market saturation point that drives the technology down into the consumer desktop, enabling the next big burst in desktop computer performance. I hope PCIe SSDs become the storage of choice and that motherboards can be rid of all SATA disk I/O ports and firmware in the near future. We don't need SATA SSDs, we need PCIe SSDs.
Extreme SSD performance over PCI-Express on the cheap? There’s hope!
A company called Angelbird is working on bringing high-performance SSD solutions to the masses, specifically a user-upgradeable PCI-Express SSD solution.
This is one of a pair of SSD announcements that came in on Tuesday. SSDs are all around us now and the product announcements are coming faster and harder. The first is from a company named Angelbird. Looking at the website announcing the specs of their product, it is, on paper, a very fast PCIe-based SSD, right up there with Fusion-io in terms of what you get for the dollars spent. I'm a little concerned, however, about the reliance on an OS hosted in the firmware of the PCIe card. I would prefer something a little more peripheral-like that the OS supports natively, rather than having the card become the OS. But this is all speculative until actual production or test samples hit the review websites and we see some kind of benchmarks from the likes of Tom's Hardware or Anandtech.
From MacNN|Electronista:
Iomega threw itself into external solid-state drives today through the External SSD Flash Drive. The storage uses a 1.8-inch SSD that lets it occupy a very small footprint but still outperform a rotating hard drive:
The second story covers a new product from Iomega: for the first time we have an external SSD from a mainstream manufacturer. The price is at a premium compared to the performance, but if you like the looks you'll be willing to pay. The read and write speeds aren't bad, but they're not the best for the amount of money you're paying. And why do they still use a 2.5″ external case if it's internally a 1.8″ drive? Couldn't they shrink it down to the old Firefly HDD size from back in the day? It should be smaller.
Tuesday Samsung announced that it had begun mass-producing the industry’s first 3-bit-per-cell, 64 Gb (8 GB) MLC NAND flash chip using 20-nm-class processing. The news follows Samsung’s introduction of 32 Gb (4 GB) 3-bit NAND flash using 30-nm-class processing last November, and the company’s 32 Gb MLC NAND using 20-nm-class processing unleashed in April.
Samsung's product development keeps arriving faster and harder with each revision of the product cycle, and the competition is not slowing down. There are at least two other big flash memory manufacturers moving into the ~20nm class of flash memory too. So we have three big manufacturers all producing roughly the same feature size, and Apple sucking up all the supply. If it's possible for an oversupply to occur, it won't be until next year, I am sure, and then hopefully prices will start to fall somewhat in the SSD market. Add to this the Apple-style packaging of multiple 64Gbit chips sandwiched one on top of the other to keep everything tidy in one small footprint, and you have ultra-dense chips going into products now. In the iPhone and iPad they can layer up to 8 or 16 of those chips into one physical package to save room. This means we could see iPhones hitting 64GBytes of storage and the iPad reaching 128GBytes. It will truly be a new day once both of these devices hit those levels of storage. Consider my Mac mini from 2008: it has a spinning hard drive that is only 80GBytes total. That, my friends, is a revolution in the making.
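The arithmetic behind those capacity guesses is simple (my own back-of-the-envelope, using the figures from the Samsung announcement; the 8- and 16-die stack heights are my assumption):

```python
# Capacity of a stacked NAND package built from the 64Gbit dies described in
# the Samsung announcement. Stack heights of 8 and 16 dies are assumed.

die_gbit = 64
die_gbyte = die_gbit / 8            # 64 Gbit = 8 GB per die

for dies_per_package in (8, 16):
    package_gb = die_gbyte * dies_per_package
    print(f"{dies_per_package} stacked dies -> {package_gb:.0f} GB per package")
# 8 dies  -> 64 GB   (a plausible iPhone ceiling)
# 16 dies -> 128 GB  (a plausible iPad ceiling)
```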
SandForce has now announced an SF-2000 controller that doubles up the I/O performance of the SF-1500. The new product runs at 60,000 sustained read and write IOPS and does 500MB/sec when handling read or write data. It uses a 6Gbit/s SATA interface and SandForce says it can make use of single-level cell flash, MLC or the enterprise MLC put out by Micron.
SandForce is continuing to make great strides in its SSD controller architecture. There's no stopping the train now. But as always, read the fine print on any SSD product you buy and find out who manufactures the drive controller and what version it is. Benchmarks are always a good thing to consult before you buy, too.
“We believe the issue is resolved as we have expanded the database threshold to more than 1 trillion records. In the meantime, we are working with Microsoft to develop a warning system on database thresholds so we can anticipate these issues in the future.”
This is the key phrase regarding the recent event in which BI stopped sending out alerts for the criminals it was tracking on behalf of police departments around the country. A company like this should do everything it can to design its tracking systems so an eventuality like this doesn't happen. How long before they bump up against the 1 trillion record limit, I ask you? Let's go back to the original article as it was posted on the BBC Online:
Thousands of US sex offenders, prisoners on parole and other convicts were left unmonitored after an electronic tagging system shut down because of data overload.
BI Incorporated, which runs the system, reached its data threshold – more than two billion records – on Tuesday.
This left authorities across 49 states unaware of offenders’ movement for about 12 hours.
BI increased its data storage capacity to avoid a repeat of the problem.
Prisons and other corrections agencies were blocked from getting notifications on about 16,000 people, BI Incorporated spokesman Jock Waldo said on Wednesday.
So I have a question: how do 16,000 people result in 2 billion records in the database? Is that really all they are doing? How much old junk data are they keeping for legal purposes, or just because they can keep it for potential future use? And how is it that a company depends on Microsoft to bail them out of such a critical situation? This seems like a very amateurish mistake, and one that could have been avoided by anyone with the title of Database Administrator who monitors the server on a regular basis. They should have known this thing was hitting an upper limit months ago and started rolling out a new database and moving records into it. This also shows the fundamental flaw in using SQL-based record keeping for so-called real-time data. Facebook gave up on it long ago, as did Google. Rows and tables with real-time updates don't scale well. And if you cannot employ a Database Administrator to tell you when you are hitting a critical limit, but instead dump it off on the vendor, well, good luck with that one, guys.
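Some quick arithmetic on my own question, using only the numbers in the BBC piece plus one assumption of mine (how often each tag reports in):

```python
# How 16,000 monitored people could plausibly turn into 2 billion rows.
# Figures from the article: 16,000 people, a roughly 2 billion record threshold.
# Assumption (mine): each tag logs about one location record per minute.

people = 16_000
threshold_records = 2_000_000_000

records_per_person = threshold_records / people
print(f"{records_per_person:,.0f} records per monitored person")      # 125,000

records_per_day = 60 * 24                    # one record per minute
days_to_threshold = records_per_person / records_per_day
print(f"~{days_to_threshold:.0f} days of minute-by-minute history hits the threshold")
```

In other words, under that assumption a few months of minute-level location history is enough to hit the cap, so this was a growth curve any DBA watching the tables should have seen coming.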
Microsoft has only now been granted the patent despite it having been first filed in September 2004, but it may face challenges to the claims from companies that began using GPU video encoding independently after the patent application was filed but before it was published.
Given that it took nVidia quite a while to get any developers to work on shipping products that took advantage of their programmable GPUs (the CUDA architecture), it's a surprise to me that Microsoft even filed a patent on this. Previously I have re-posted some press releases surrounding the products known as Avivo (from ATI/AMD) and Badaboom, which were designed to speed up this very thing. You rip a DVD and you want to save it at a smaller file size, or in a format compatible with a portable video player, but it takes forever on your computer, so what's a person to do? Well, thanks to nVidia and product X, you just add a little software and speed up that transcoding to .mp4 format. It's like discovering your car can do something you didn't know was even possible, like turning into a Corvette on straight, flat roadways. Now be advised not all roads are straight or flat, but when they are: boom! You can go as fast as you want. That's what hardware-accelerated video encoding is like: it's specialized, but when you use it, it really works and it really speeds things up. I think part of why Microsoft wants to enforce this is the hope of collecting licensing fees, but part of it is also maintaining its bullying prowess on the desktop computer. They own the OS, right? So why not remind everyone that were it not for their generosity and research labs we would all be using pocket calculators to do our taxes. This is a premier example of how patents stifle innovation, and I would love to see this patent never enforced, or struck down entirely.
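For a sense of what GPU-offloaded transcoding looks like in practice, here is a minimal sketch with heavy caveats: it uses generic ffmpeg/NVENC tooling rather than the Avivo or Badaboom products discussed above, it assumes an NVIDIA GPU and an ffmpeg build that includes NVENC support, and the file names are hypothetical.

```python
# Hand the H.264 encode off to the GPU instead of the CPU.
# Assumes: an NVIDIA GPU and an ffmpeg build with the h264_nvenc encoder.
# This is a stand-in for what Badaboom/Avivo-style tools did, not those tools.

import subprocess

def gpu_transcode(src: str, dst: str) -> None:
    """Transcode src to an .mp4, letting the GPU's hardware encoder do the video work."""
    subprocess.run(
        ["ffmpeg", "-i", src,
         "-c:v", "h264_nvenc",   # hand the video encode to the NVIDIA GPU
         "-c:a", "aac",          # audio is still encoded on the CPU
         dst],
        check=True,
    )

gpu_transcode("ripped_dvd.vob", "portable_player.mp4")
```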