Posts Tagged ‘pcie’
Like the native API libraries, directFS is implemented directly on ioMemory, significantly reducing latency by entirely bypassing operating system buffer caches, file system and kernel block I/O layers. Fusion-io directFS will be released as a practical working example of an application running natively on flash to help developers explore the use of Fusion-io APIs.
via (Chris Mellor) Fusion-io shoves OS aside, lets apps drill straight into flash • The Register.
Another interesting announcement from the folks at Fusion-io regarding their brand of PCIe SSD cards. There was a proof-of-concept project, covered previously by Chris Mellor, in which Fusion-io attempted to top out at 1 billion IOPS using a novel architecture where the PCIe SSD drives were not treated as storage at all. In effect, the Fusion-io card was turned into a memory tier, bypassing most of the OS’s own buffers and queues for handling a traditional filesystem. Doing this reaped many benefits, eliminating much of the latency inherent in a filesystem that has to communicate through the OS kernel to the memory subsystem and back again.
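Fusion-io’s actual APIs aren’t shown in this post, so here is a rough analogy (not Fusion-io code) using standard OS facilities: a minimal Python sketch contrasting the normal buffered read path, which traverses the VFS, filesystem and page cache, with memory-mapping a file, which lets the application address the data like RAM once the mapping is set up. That CPU-addressable model is the general idea directFS is reaching for at the device level.

```python
import os
import mmap
import tempfile

# Create a one-page scratch file standing in for flash-backed storage.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"hello flash" + b"\x00" * 4085)  # pad to one 4 KiB page

# Buffered path: read() goes through the kernel block I/O layers.
with open(path, "rb") as f:
    buffered = f.read(11)

# Memory-mapped path: after mmap(), loads are plain memory accesses.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        mapped = bytes(m[:11])

assert buffered == mapped == b"hello flash"
os.unlink(path)
```

The mmap here still uses the page cache, of course; the point is only the programming model: data addressed directly rather than copied through read()/write() calls.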
Considering also the work done within the last four years or more on so-called “in-memory” databases and big data projects in general, a product like directFS might pair nicely with them. The limit with in-memory databases is always the amount of RAM available and the total number of CPU nodes managing those memory subsystems. Tack on the storage needed to load and snapshot the database over time and you have a very traditional-looking database server. However, if you supplement that traditional-looking architecture with a tier of storage like directFS, the SAN becomes a third tier of storage, almost like a tape backup device. Sounds interesting the more I daydream about it.
- Three questions Fusion-io’s rivals face after flash API bombshell (go.theregister.com)
- Fusion-io SDK gives developers native memory access, keys to the NAND realm (engadget.com)
- Fusion-io demos billion IOPS server config – The Register (carpetbomberz.com)
Finally there’s talk about looking at other interfaces in addition to SATA. It’s possible that we may see a PCIe version of SandForce’s 3rd generation controller.
Some interesting notes about future directions SandForce might take, especially now that SandForce has been bought out by LSI. They are hard at work optimizing other parts of their current memory controller technology (speeding up small random reads and writes). There might be another 2X performance gain to be had on the SSD front, but more important is the PCI Express market. Fusion-io has been the team to beat when it comes to integrating components and moving data across the PCIe interface. Now SandForce is looking to come out with a bona fide PCIe SSD controller, which up until now has been a roll-your-own affair. The engineering and design expertise of companies like Fusion-io was absolutely necessary to get a PCIe SSD card to market. Now that playing field too will be leveled somewhat, and competitors may enter the market with equally good performance numbers.
But even more interesting than this wrinkle in the parts design for PCIe SSDs is the announcement earlier this month of Fusion-io’s new software interface for getting around the limits of file I/O on modern-day OSes. Auto Commit Memory: “ACM is a software layer which allows developers to send and receive data stored on Fusion-io’s ioDrive cards directly to and from the CPU, rather than relying upon the operating system” (link to The Verge article listed in my Fusion-io article). SandForce is up against a moving target if it hopes to compete more directly with Fusion-io, which is now investing in hardware AND software engineering at the same time. 1 billion IOPS is nothing to sneeze at given the pace of change since SATA SSDs and PCIe SSDs hit the market in quantity.
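The quote describes ACM as moving data between the CPU and the card without going through the operating system. The programming model that points at can be sketched roughly like this (names, record layout and the use of mmap.flush() as the “commit” step are my invention for illustration, not Fusion-io’s API): store a record with ordinary memory writes into a persistent region, then commit it, instead of issuing write() calls.

```python
import os
import mmap
import struct
import tempfile

# One page of scratch file standing in for a persistent ACM-style region.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)

with mmap.mmap(fd, 4096) as region:
    # Hypothetical 12-byte record: a 64-bit transaction id + 32-bit payload.
    record = struct.pack("<QI", 42, 7)
    region[0:len(record)] = record   # plain memory store, no write() syscall
    region.flush()                   # analogous to ACM's commit step
    txn_id, payload = struct.unpack_from("<QI", region, 0)

os.close(fd)
os.unlink(path)
```

Again, this is only an analogy on top of a regular file; the interesting part of ACM is that the commit goes to flash with the OS storage stack out of the way.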
The card will use the Marvell 88SE9455 RAID controller that will interface with the SandForce 2200-based daughter cards that can be added to the main controller on demand. This will allow for user-configurable drive sizes from between 60GB and 2TB in size, allowing you to expand your storage as your need for it increases.
I’m a big fan of Other World Computing (OWC) and have always marveled at their ability to create new products under their own brand. In the article they talk about a new Mac-compatible PCIe SSD. It sounds like an uncanny doppelganger of the Angelbird board announced about two years ago, which started shipping in Fall 2011. The add-on sockets especially remind me of the upgradable Angelbird board. There are not many PCIe SSD cards that have sockets for flash memory modules; Other World Computing’s is only the second one I have seen since I started commenting on these devices when they hit the consumer market. Putting sockets on the board makes it easier to come into the market at a lower price point for users where price is most important. At the high end, however, capacity is king for some purchasers of PCIe SSD drives. So the oddball upgradeable PCIe SSD fills a niche, that’s for sure.
Performance projections for this card are really good and typical of most competing PCIe SSD cards. So depending on your needs you might find this perfect. Price, however, is always harder to pin down. Angelbird sold a bare PCIe card, with no SSDs attached, for around $249; it came with 32GB onboard at that price. What was really nice was that the card used SATA sockets set far enough apart to hold full-sized SSDs without crowding each other. This brought to the consumer market the possibility of gradually upgrading to higher-speed or larger-capacity drives over time.
But what’s cooler still is that Angelbird’s card could run under ANY OS, even Mac OS, as it was engineered to be a freestanding computer with a large flash memory attached to it. That allowed it to pre-boot into an embedded OS before handing over control to the host OS, whatever flavor it might be. I don’t know if the OWC card works similarly, but it does NOT use SATA sockets or provide enough room to plug in SSD drives. The plug-in modules for this device use mSATA-style sockets of the kind found in tablets and netbook-style computers. So the modules will most likely need to be purchased directly from OWC to perform capacity upgrades over the life of the PCIe card itself. Prices have not yet been set according to this article.
- Marvell brews ARM-based native PCIe SSD Controller IC: 88NV9145 handles direct PCIe to NAND Flash I/O for high-performance, low-overhead SSD designs (denalimemoryreport.wordpress.com)
- OWC gives Mac Pro users the first PCI Express SSD option (9to5mac.com)
- Angelbird’s Wings PCIe-based SSD preview and benchmarks (engadget.com)
Fusion-io has crammed eight ioDrive flash modules on one PCIe card to give servers 10TB of app-accelerating flash.
This follows on from its second generation ioDrives: PCIe-connected flash cards using single level cell and multi-level cell flash to provide from 400GB to 2.4TB of flash memory, which can be used by applications to get stored data many times faster than from disk. By putting eight 1.28TB multi-level cell ioDrive 2 modules on a single wide ioDrive Octal PCIe card Fusion reaches a 10TB capacity level.
This is some big news in the fight to be king of the PCIe SSD market. I declare: advantage Fusion-io. They now have the lead not just in speed but also in overall capacity at their target price point. As densities increase and prices more or less stay flat, the value-add is that more data can stay resident on the PCIe card rather than being swapped out to Fibre Channel array storage on the Storage Area Network (SAN). Performance is likely to be wicked cool, and early adopters will no doubt reap big benefits in transaction processing and online analytic processing as well.
- Fusion-io Delivers 10 Terabyte ioDrive Octal (datacenterknowledge.com)
- Fusion-io doubles flash card’s speed, capacity; halves price (networkworld.com)
If you want more speed, then you will have to look to PCI-Express for the answer. Austrian-based Angelbird has opened its online storefront with its Wings add-in card and SSDs.
More than a year after it was announced, Angelbird has designed and manufactured a new PCIe flash card, the design of which is fully expandable over time depending on your budget. Fusion-io has a few ‘expandable’ cards in its inventory too, but Fusion-io’s price class is much higher than the consumer-level Angelbird product. So if you cannot afford to build out a 1TB flash-based PCIe card, do not worry. Buy what you can and outfit it later as your budget allows. Now that’s something any gamer fanboy or desktop enthusiast can get behind.
Angelbird does warn in advance that the power demands of typical 2.5″ SATA flash modules are higher than what the PCIe bus can typically provide. They recommend using their own memory modules to add onto their base-level PCIe card. Until I read those recommendations I had forgotten some of the limitations and workarounds graphics card manufacturers typically use. These have become so routine that there are now 2-3 extra power taps provided even by typical desktop manufacturers for their desktop machines, all to accommodate the extra power required by today’s display adapters. It makes me wonder if Angelbird could do a rev of the base-level PCIe card with a little 4-pin power input or something similar. It doesn’t need another 150 watts; it’s going to be closer to 20 watts for this type of device, I think. I wish Angelbird well and I hope sales start strong so they can sell out their first production run.
- What To Look For In PCIe SSD (informationweek.com)
There’s a new PCIe SSD in town: the RevoDrive 3. Armed with two SF-2281 controllers and anywhere from 128 to 256GB of NAND (120/240GB capacities), the RevoDrive 3 is similar to its predecessors in that the two controllers are RAIDed on card. Here’s where things start to change though.
OCZ is back with a revision of its consumer-grade PCIe SSD, the RevoDrive. This time out the SandForce SF-2281 makes an appearance, to great I/O effect. The bus interface is a true PCIe bridge chip, as opposed to the last version’s PCI-X to PCIe bridge. This device can also be managed completely through the OS’s own drive utilities, with TRIM support. All combined, this is the most native and best-supported PCIe SSD to hit the market. No benchmarks yet from a commercially shipping product, but my fingers are crossed that this thing will be faster than OCZ’s Vertex 3 and Vertex 3 Pro (I hope) while possibly holding more flash memory chips than those SATA 6Gbps-based SSDs.
One other upshot of this revised product is full OS boot support. So not only will TRIM work, but your motherboard and the PCIe card’s electronics will let you boot directly off the card. This is by far the most evolved and versatile PCIe-based SSD drive to date. Pricing is the next big question on my mind after reading the specifications. Hopefully it will not be Enterprise grade (greater than $1,200). I’ve found most of the prosumer and gamer upgrade manufacturers are comfortable setting prices at the $1,200 price point for these PCIe SSDs, and that trend has been pretty reliable going back to the original RevoDrive.
- OCZ RevoDrive 3 X2 and RevoDrive Hybrid hands-on (video) (engadget.com)
- OCZ opens wide and swallows Indilinx (go.theregister.com)
A flash array controller needs: “An architecture built from the ground up around SSD technology that sizes cache, bandwidth, and processing power to match the IOPS that SSDs provide while extending their endurance. It requires an architecture designed to take advantage of SSDs’ unique properties in a way that makes a scalable all-SSD storage solution cost-effective today.”
I think that storage controllers are the point of differentiation now for the SSDs coming on the market today. Similarly, the device that ties those SSDs into the computer and its OS is equally, nay more, important. I’m thinking specifically of a product like the SandForce 2000 series SSD controllers. They more or less provide a SATA or SAS interface into a small array of flash memory chips that are made to look and act like a spinning hard drive. However, the time is coming when all those transitional conventions can just go away and a clean-slate design can go forward. That’s why I’m such a big fan of PCIe-based flash storage products. I would love to see SandForce create a disk controller with one interface that speaks PCIe 2.0/3.0 while the other is open to whatever technology flash memory manufacturers are using today. Ideally the host bus would always be a high-speed PCI Express interface, licensed or designed from the ground up to speed I/O in and out of the flash memory array. On the memory-facing side it could be almost like an FPGA, made to order according to the features and idiosyncrasies of whatever flash memory architecture is shipping at the time of manufacture. The same would apply to any type of error correction and over-provisioning for failed memory cells as the SSD ages through multiple read/write cycles.
In the article I quoted at the top from The Register, the big storage array vendors are attempting to market new products by adding flash memory to either one component of the whole array or, in the case of EMC, using flash-based SSDs throughout the entire product. That more aggressive approach has seemed overly cost-prohibitive given the low manufacturing cost of large-capacity commodity hard drives. But the problem is, in the market where these vendors compete, everyone pays an enormous premium for the hard drives, storage controllers, cabling and software that makes it all work. Though the hard drive might be cheaper to manufacture, the storage array is not, and that margin is what makes storage a very profitable business to be in. As stated last week in the benchmark comparisons of high-throughput storage arrays, flash-based arrays are ‘faster’ per dollar than a well-designed, top-of-the-line hard drive-based storage array from IBM. So for the segment of the industry that needs throughput more than total space, EMC will likely win out. But Texas Memory Systems (TMS) is out there too, attempting to sign OEM contracts with companies selling into the storage array market. The Register does a very good job surveying the current field of vendors and manufacturers, looking at which companies might buy a smaller player like TMS. But the more important trend spotted throughout the survey is the decidedly strong move toward native flash memory in the storage arrays being sold into the Enterprise market. EMC has a lead that most will be following real soon now.
Tuesday LSI Corp announced the WarpDrive SLP-300 PCIe-based acceleration card, offering 300 GB of SLC solid state storage and performance up to 240,000 sustained IOPS. It also delivers I/O performance equal to hundreds of mechanical hard drives while consuming less than 25W of power–all for a meaty $11,500 USD.
This is the cost of entry for anyone working on an Enterprise-level project. You cannot participate unless you can cross the threshold of a PCIe card costing $11,500 USD. This is the first time I have seen an actual price quote on one of these cards that swims in the data center consulting and provisioning market. Fusion-io cannot be too far off this price when its cards are not sold as a full package as part of a larger project RFP. I am somewhat stunned at the price premium, but LSI is a top engineering firm and can definitely design its own custom silicon to get top speed out of just about any commercial off-the-shelf flash memory chips. I am impressed they went with the 8-lane (8X) PCI Express interface. I’m guessing that’s a requirement for server owners, whereas 4X is aimed at the desktop market. I still don’t see any 16X interfaces as yet (that’s the interface most desktops use for their graphics cards from AMD and nVidia). One more part that makes this a premium offering is the choice of Single Level Cell (SLC) flash memory chips, for the ultimate in speed and reliability, along with the Serial Attached SCSI (SAS) interface onboard the PCIe card itself. Desktop models opt for SATA to PCI-X to PCIe bridge chips, forcing your data to be translated and re-ordered multiple times. I have a feeling the SAS bridge connects to PCIe at the full 8X interface speed, and that is the key to getting faster than 1,000 MB/sec. for writes and reads. This part is quoted as reaching roughly 1,400 MB/sec., and other than some very expensive turnkey boxes from manufacturers like Violin, this is a great user-installable part to get the benefit of a really fast SSD array on a PCIe card.
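The x8 claim is easy to sanity-check with back-of-envelope arithmetic (my numbers from the PCIe 1.x/2.0 specs, not LSI’s data sheet): after 8b/10b encoding, each lane carries roughly 250 MB/s (Gen 1) or 500 MB/s (Gen 2) per direction, so an x8 link clears the ~1,400 MB/s quoted here with room to spare, while a Gen 1 x4 link would be marginal.

```python
# Usable per-lane bandwidth per direction, in MB/s, after 8b/10b encoding.
PER_LANE_MB_S = {"PCIe 1.x": 250, "PCIe 2.0": 500}

def link_bandwidth(gen: str, lanes: int) -> int:
    """Aggregate one-direction bandwidth in MB/s for a PCIe link."""
    return PER_LANE_MB_S[gen] * lanes

x8_gen1 = link_bandwidth("PCIe 1.x", 8)  # 2000 MB/s: clears ~1,400 MB/s
x8_gen2 = link_bandwidth("PCIe 2.0", 8)  # 4000 MB/s
x4_gen1 = link_bandwidth("PCIe 1.x", 4)  # 1000 MB/s: why x4 would throttle
```

Even a first-generation x8 slot leaves headroom above the card’s quoted throughput, which fits my guess that the lane count, not the flash, is sized to be the non-bottleneck.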
Intel, Dell, EMC, Fujitsu and IBM are forming a working group to standardise PCIe-based solid state drives (SSDs), and have a webcast coming out today to discuss it.
Now this is interesting in that just two weeks after Angelbird pre-announced its own PCIe flash-based SSD product, Intel is forming a consortium. Things are heating up; this is now a hot new category, and I want to draw your attention to a sentence in this Register article:
By connecting to a server’s PCIe bus, SSDs can pour out their contents faster to the server than by using Fibre Channel or SAS connectivity. The flash is used as a tier of memory below DRAM and cuts out drive array latency when reading and writing data.
This is without a doubt the first instance I have read of a belief, even if only in the mind of this article’s author, that Fibre Channel and Serial Attached SCSI aren’t fast enough. Who knew PCI Express would be preferable to an old storage interface when it comes to enterprise computing? Look out world, there’s a new sheriff in town and his name is PCIe SSD. This product category will not be for the consumer end of the market, at least not for this consortium. It is targeting the high-margin, high-end data center market, where interoperability keeps vendor lock-in from occurring. By choosing interoperability, vendors will have to gain an advantage not so much through hardware engineering as through firmware. If that’s the differentiator, then whoever has the best embedded programming team will have the best throughput and the highest-rated product. Let’s hope this all eventually finds a market saturation point, driving the technology down into the consumer desktop and enabling the next big burst in desktop computer performance. I hope PCIe SSDs become the storage of choice and that motherboards can be rid of all SATA disk I/O ports and firmware in the near future. We don’t need SATA SSDs; we do need PCIe SSDs.
From Tom’s Hardware:
Extreme SSD performance over PCI-Express on the cheap? There’s hope!
A company called Angelbird is working on bringing high-performance SSD solutions to the masses, specifically, user upgradeable PCI-Express SSD solution.
This is one of a pair of SSD announcements that came in on Tuesday. SSDs are all around us now and the product announcements are coming faster and harder. The first is from an Austrian company named Angelbird. Looking at the website announcing the specs of their product, it is on paper a very fast PCIe-based SSD drive, right up there with Fusion-io in terms of what you get for the dollars spent. I’m a little concerned, however, about its reliance on an OS hosted in the firmware of the PCIe card. I would prefer something a little more peripheral-like that the OS supports natively, rather than having the card become the OS. But this is all speculative until actual production or test samples hit the review websites and we see some kind of benchmarks from the likes of Tom’s Hardware or AnandTech.
Iomega threw itself into external solid-state drives today through the External SSD Flash Drive. The storage uses a 1.8-inch SSD that lets it occupy a very small footprint but still outperform a rotating hard drive:
Read more: http://www.electronista.com/articles/10/10/15/iomega.outs.external.usb.30.ssd/
The second story covers a new product from Iomega: for the first time we have an external SSD from a mainstream manufacturer. The price is at a premium compared to the performance, but if you like the looks you’ll be willing to pay. The read and write speeds are not bad, but they’re not the best for the amount of money you’re paying. And why do they still use a 2.5″ external case if it’s internally a 1.8″ drive? Couldn’t they shrink it down to the old Firefly HDD size from back in the day? It should be smaller.