Posts Tagged ‘pcie’
Although Intel’s SSD DC P3700 is clearly targeted at the enterprise, the drive will be priced quite aggressively at $3/GB. Furthermore, Intel will be using the same controller and firmware architecture in two other, lower cost derivatives (P3500/P3600). In light of Intel’s positioning of the P3xxx family, a number of you asked for us to run the drive through our standard client SSD workload. We didn’t have the time to do that before Computex, but it was the first thing I did upon my return. If you aren’t familiar with the P3700 I’d recommend reading the initial review, but otherwise let’s look at how it performs as a client drive.
This is Part 2 of the full review Anandtech did on the Intel P3700 PCIe/NVMe card. It’s reassuring to know that, as Anandtech reports, Intel has more than just the top-end P3700 coming to market; other price points will be competing for non-enterprise workloads too. At $3/GB the P3700 sits at the top of desktop peripheral pricing, even for a fanboy gamer. But for data center workloads, and the prices that crowd pays, this is going to be an easy choice. As Anandtech concludes, Intel’s P3700 is built not just for speed (peak I/O) but for consistency at all queue depths, file sizes and block sizes. If you’re budgeting a capital improvement in your data center and want to quote the gains you’ll see, these benchmarks are proof enough that you’ll get back every penny you spend. No need to throw an evaluation unit into your test rig or lab and benchmark it yourself.
As for the lower end models, you might be able to dip your toe in at the $600 price point, though not at the same performance level. That buys an average-to-smallish 400GB PCIe card, the Intel SSD DC P3500. Still, the overall design and engineering is derived in part from the move from a straight PCIe interface to one that harnesses more data lanes on the PCIe bus and speaks to the host via the NVMe (NVMHCI) drive interface. That’s what you’re getting for that price. If you’re very sensitive to price, do not purchase this product line; Samsung has you more than adequately covered under the old regime of SATA SSD technology, and even then the performance is nothing to sneeze at. But do know things are in flux with the new higher performance drive interfaces manufacturers will soon be marketing and selling to you. Roughly, this is the order in which things improve in I/O:
NVMe/NVMHCI > PCIe SSD > M.2 > SATA Express (SATAe) > SATA SSD
And the incremental differences in the middle are small enough that you will only really see benefits if the price is cheaper for the slightly faster interface (say SATA SSD vs. SATA Express: choose based on price when performance is nearly equal, not on performance alone). Knowing what all these interfaces are, what the acronyms mean, and how that translates into your computer’s I/O performance will help you choose wisely over the next year or two.
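To put rough numbers on that ordering, here’s a small sketch in Python. The throughput figures are ballpark assumptions for drives of this era (the NVMe and XP941 numbers are loosely based on the reviews discussed here), not measured values; exact speeds vary widely by drive and lane count.

```python
# Ballpark peak sequential throughput per interface, in MB/s.
# These figures are illustrative assumptions, not measured values.
peak_mb_s = {
    "NVMe (PCIe 3.0 x4, e.g. Intel P3700)": 2800,
    "PCIe SSD (PCIe 2.0 x4, e.g. Samsung XP941)": 1200,
    "M.2 (PCIe 2.0 x2)": 1000,
    "SATA Express (SATAe)": 780,
    "SATA SSD (6Gbps)": 550,
}

# Rank interfaces from fastest to slowest.
ranked = sorted(peak_mb_s, key=peak_mb_s.get, reverse=True)
for name in ranked:
    print(f"{name}: ~{peak_mb_s[name]} MB/s")
```

Sorting by those numbers reproduces the ordering above, which is really the point: the middle tiers are close enough that price should break the tie.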
We don’t see infrequent blips of CPU architecture releases from Intel; we get a regular, two-year tick-tock cadence. It’s time for Intel’s NSG to be given the resources necessary to do the same. I long for the day when these SSD releases aren’t limited to the enterprise and corporate client segments, but spread across all markets: from mobile to consumer PC client and of course up to the enterprise as well.
Big news in the SSD/Flash memory world at Computex in Taipei, Taiwan. Intel has entered the fray with Samsung and SandForce, issuing a fully NVMe-compliant set of drives running on PCIe cards. Throughputs are amazing, and the prices are surprisingly competitive: you can enter the market for as low as $600 for a 400GB PCIe card running as an NVMe-compliant drive. Windows Server 2012 R2 and Windows 8.1 already provide native support for NVMe drives. This is going to get really interesting, especially considering all the markets and levels of consumers within them. On the budget side is the SATA Express interface, an attempt to factor out some of the slowness inherent in SSDs attached to the SATA bus. Then there’s M.2, the smaller form factor PCIe-based drive interface being adopted by manufacturers of light and small form factor tablets and laptops. That is a big jump past SATA altogether, with a speed bump to match, as it communicates directly with the PCIe bus. Last and most impressive of all are the NVMe devices announced by Intel, with a further speed bump from addressing multiple data lanes on PCI Express. Some concern trolls in the gaming community are quick to point out those data lanes are being lost to I/O when they are already maxing them out with their 3D graphics boards.
The route forward, it seems, would be Intel motherboard designs with a PCIe 3.0 interface carrying enough data lanes for two full-speed 16x graphics cards, but devoting that extra 16x worth of lanes to I/O instead; or maybe a 1.5X arrangement with one full 16x slot and two more 8x slots to handle regular I/O plus a dedicated 8x NVMe interface. It’s going to require some re-engineering and BIOS updating, no doubt, to get all the speed out of all the devices simultaneously. That’s why I would also like to remind readers of the Flash-DIMM phenomenon sitting out on the edges in the high-speed, high-frequency trading houses of the NYC metro area. We haven’t seen or heard much since the original product announcement from IBM for the X6-series servers and the Flash-DIMM options on that product line. SMART Storage Systems (the prime designer/manufacturer of Flash-DIMMs) has since been bought out by SanDisk, and again, no word on that product line. The same is true for the Lenovo takeover of IBM’s Intel server product line (of which the X6-series is the jewel in the crown). Mergers and acquisitions have veiled and blunted some of these revolutionary product announcements, but I hope Flash-DIMMs eventually see the light of day, gain full BIOS support, and make it into the desktop computer market. As good as NVMe is going forward, I think we also need a mix of Flash-DIMM to see the full speed of the multi-core x86 Intel chips.
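The lane arithmetic behind those hypothetical slot arrangements is easy to sketch in Python. The slot names and lane budgets below are purely illustrative assumptions, not any shipping Intel design:

```python
# Tally the PCIe lanes consumed by a hypothetical slot arrangement.
def lanes_used(slots):
    return sum(slots.values())

# Two full-speed x16 GPUs plus a third x16 devoted to storage I/O.
dual_gpu_plus_io = {"gpu0": 16, "gpu1": 16, "io_x16": 16}

# The "1.5X" idea: one full x16 GPU slot, plus an x8 for regular I/O
# and a dedicated x8 NVMe interface.
one_and_a_half = {"gpu0": 16, "io_x8": 8, "nvme_x8": 8}

print(lanes_used(dual_gpu_plus_io))  # 48 lanes
print(lanes_used(one_and_a_half))    # 32 lanes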
Even though SATA Express isn’t meant to be a long-term architectural improvement over plain SATA or PCIe drive interfaces, it is a step to help get more speed out of existing flash-based SSDs. Lower cost is what the drive manufacturers are addressing, hoping to attract chronic upgraders to the new bus-bridging technology. It doesn’t have the same design or longer-term speed gains as the newer M.2 spec, so be forewarned: SATA Express may fall by the wayside. This is especially true if people come to understand the tech specs better and see through the marketing language aimed at the home-brew computer DIY types. Those folks are very cost sensitive and unwilling to purchase based on specs alone, much less the long-term viability of a particular drive interface technology.
So if you are somewhat baffled as to why SSDs seem to max out at ~500MB/s read and write speeds, rest assured SATA Express might help. It’s cheap enough, but be extra careful that both your drive and motherboard fully support the new interface at the BIOS level; that way you’ll see the speed gains you expected by trading up. Longer term you might want to take a closer look at the most recent versions of the NVMe interface (PCIe 3.0 with up to 8 PCIe lanes used simultaneously). NVMe is likely to be the longer-term winner among SSD interfaces, but the costs to date put it in the data center purchasing class, not the desktop class. However, Intel is releasing product with NVMe interfaces later this year at prices possibly as low as ~$700, though the premium per gigabyte of storage is still pretty high. Always use your best judgement and consider how long you plan on using the computer as-is. If you know you’re going to be swapping at regular intervals, then fine, choose SATA Express today; you’ll likely jump at the right time for the next best I/O drive interface as it hits the market. But if you’re thinking you’re going to stand pat and hold onto the current desktop as long as possible, take a long hard look at M.2 and the NVMe products coming out shortly.
I don’t think there is any other way to say this other than to state that the XP941 is without a doubt the fastest consumer SSD in the market. It set records in almost all of our benchmarks and beat SATA 6Gbps drives by a substantial margin. It’s not only faster than the SATA 6Gbps drives but it surpasses all other PCIe drives we have tested in the past, including OCZ’s Z-Drive R4 with eight controllers in RAID 0. Given that we are dealing with a single PCIe 2.0 x4 controller, that is just awesome.
Listen well as you pine away for your very own SATA SSD. One day you will get that new thing. But what you really, really want is the new, NEW thing, and that my friends is quite simply the PCIe SSD. True, enterprise-level purchasers have had a host of manufacturers and models to choose from in this form factor, but the desktop market cannot afford Fusion-io products at ~$15K per fully configured card. That’s a whole different market. OCZ’s RevoDrive line has had a wider range of products, reaching from the heights of Fusion-io down to the top-end gamer market with the RevoDrive R-series PCIe drives. But those have always been SATA drives piggy-backed onto a multi-lane PCIe card (4x or 8x, depending on how many controllers were installed on board). Now the evolutionary step of dumping SATA in favor of a native PCIe-to-NAND memory controller is slowly taking place. Apple has adopted it for the top-end Mac Pro revision (though price and limited availability have made it hard to publicize this architectural choice). It has also been adopted in the laptops Apple has produced since Summer 2013 (and I have the MacBook Air to prove it). Speedy, yes it is. But how do I get this on my home computer?
Anandtech was able to score that very Samsung PCIe drive aftermarket through a 3rd party in Australia, along with a PCIe adapter card. So where there is a will, there is a way. From that purchase of both drive and adapter, this review of the Samsung PCIe drive has come about. And all one can say looking through the benchmarks is: we have not seen anything yet. Drive speeds, which have been the bottleneck in desktop and mobile computing since the dawn of the personal computer, are slowly lifting, and not by a little but by a lot. This is going to herald a new age in personal computers, as close as we’ve come to former Intel Chairman Andy Grove’s 10X effect. Samsung’s native PCIe SSD is the kind of disruptive, perspective-altering product that will put all manufacturers on notice and force a sea change in design and manufacture.
As end users of the technology, we’ve already felt the big impact SSDs with SATA interfaces have had on our laptops and desktops. But what I’ve been writing about, and trying to find signs of ever since the first introduction of SSDs, is the logical path through the legacy interfaces. Whether it’s ATA/BIOS or the bridge chips that glue the motherboard to the CPU, a number of “old” architecture items are still hanging around on today’s computers. Intel’s adoption of UEFI has been a big step forward in shedding the legacy bottleneck components. Beyond that, native on-CPU controllers for PCIe are a good step forward as well. Lastly, the sockets and bridging chips on the motherboard are the neighborhood improvements that again help speed things up. The last mile, however, is dumping the “disk” interface, the ATA/SATA spec, designed as it was for reading data off a spinning magnetic hard drive. We need to improve that last mile to the NAND memory chips, and then we’re going to see the full benefit of products like the Samsung PCIe drive. That day is nearly upon us with the most recent motherboard/chipset revision from Intel. We may need another revision to get exactly what we want, but the roadmap is there, and all the manufacturers had better get on it. Samsung is driving this revolution, NOW.
Even a PCIe 2.0 x2 link offers about a 40% increase in maximum throughput over SATA 6Gbps. Like most interfaces, PCIe 2.0 isn’t 100% efficient and based on our internal tests the bandwidth efficiency is around 78-79%, so in the real world you should expect to get ~780MB/s out of a PCIe 2.0 x2 link, but remember that SATA 6Gbps isn’t 100% either (around 515MB/s is the typical maximum we see). The currently available PCIe SSD controller designs are all 2.0 based but we should start to see some PCIe 3.0 drives next year. We don’t have efficiency numbers for 3.0 yet but I would expect to see nearly twice the bandwidth of 2.0, making +1GB/s a norm.
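The arithmetic behind those numbers can be reproduced in a few lines of Python. The per-lane rates follow from the PCIe line encodings (8b/10b for 2.0, 128b/130b for 3.0), and the ~78% efficiency factor is the one quoted above:

```python
# Usable bandwidth per PCIe lane after line encoding, in MB/s:
#   PCIe 2.0: 5 GT/s with 8b/10b encoding    -> 500 MB/s per lane
#   PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~985 MB/s per lane
PER_LANE_MB_S = {2: 500.0, 3: 985.0}

def pcie_throughput(gen, lanes, efficiency=1.0):
    """Theoretical PCIe link throughput, scaled by a real-world
    protocol-efficiency factor (~0.78 per the measurements above)."""
    return PER_LANE_MB_S[gen] * lanes * efficiency

print(pcie_throughput(2, 2, 0.78))  # ~780 MB/s: the PCIe 2.0 x2 figure above
print(pcie_throughput(3, 2, 0.78))  # ~1537 MB/s: why 3.0 should make 1GB/s+ the norm
```

Assuming 3.0 efficiency lands near 2.0’s 78%, a 3.0 x2 link clears 1.5GB/s, consistent with the “nearly twice the bandwidth” expectation in the excerpt.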
As I’ve watched the SSD market slowly grow and bloom, it does seem the rate at which big changes occur has slowed. The SATA controllers on the drives themselves were kicked up a notch as the transition from SATA 3Gbps to SATA 6Gbps gave us consistent ~500MB/sec read/write speeds, and there things have stayed due to the inherent limit of SATA 6Gbps. I had been watching developments in PCIe-based SSDs very closely, but the prices were always artificially high because the market for these devices was data centers. Proof positive: Fusion-io catered mostly to two big purchasers of its product, Facebook and Apple. Consequently its prices always sat at the enterprise level, around $15K for one PCIe slot device (at any size/density of storage).
Apple has come to the rescue in every sense of the word by adopting PCIe SSDs as the base-level SSD for its portable computers. Starting in Summer 2013, Apple released MacBook Pro laptops with PCIe SSDs and eventually designed them into the MacBook Air as well. The last step was to fully adopt them in the desktop Mac Pro (which has been slow to hit the market). The performance of the PCIe SSD in the Mac Pro is the highest of any shipping consumer-level product. As the Mac gains market share among all computers shipped, Mac buyers are gaining more speed from their SSDs as well.
So what further plans are in the works for the REST of the industry? Well, SATA Express seems to be the way forward for the 90% of the market still buying Windows PCs, and it’s a new standard being put forth by the SATA-IO standards committee. With any luck the enthusiast-market motherboard manufacturers will adopt it as fast as it passes the committees, and we’ll see an Anandtech or Tom’s Hardware review doing a real benchmark and analysis of how well it matches up against previous-generation hardware.
Like the native API libraries, directFS is implemented directly on ioMemory, significantly reducing latency by entirely bypassing operating system buffer caches, file system and kernel block I/O layers. Fusion-io directFS will be released as a practical working example of an application running natively on flash to help developers explore the use of Fusion-io APIs.
via (Chris Mellor) Fusion-io shoves OS aside, lets apps drill straight into flash • The Register.
Another interesting announcement from the folks at Fusion-io regarding their brand of PCIe SSD cards. There was a proof-of-concept project, covered previously by Chris Mellor, in which Fusion-io attempted to top 1 billion IOPS using a novel architecture in which PCIe SSD drives were not treated as storage at all. Instead, the Fusion-io card was turned into a memory tier, bypassing most of the OS’s own buffers and queues for handling a traditional filesystem. Doing this reaped many benefits, eliminating much of the latency inherent in a filesystem that has to communicate through the OS kernel to the memory subsystem and back again.
Considering also the work done over the last 4 years or more on so-called “in-memory” databases and big data projects in general, a product like directFS might pair nicely with them. The limit with in-memory databases is always the amount of RAM available and the total number of CPU nodes managing those memory subsystems. Tack on the storage needed to load and snapshot the database over time and you have a very traditional-looking database server. However, if you supplement that traditional-looking architecture with a tier of storage like directFS, the SAN network becomes a 3rd tier of storage, almost like a tape backup device. Sounds interesting the more I daydream about it.
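The spirit of the idea, flash addressed as memory rather than through read()/write() file calls, can be illustrated with ordinary memory mapping. This Python sketch uses the standard mmap module as a stand-in; it is a conceptual analogy only, not Fusion-io’s actual directFS or ACM API:

```python
import mmap
import os
import tempfile

# Create a small file to stand in for a flash-backed region.
fd, path = tempfile.mkstemp()
os.close(fd)
size = 4096
with open(path, "wb") as f:
    f.truncate(size)

# Map the file into the address space: byte-addressable loads and
# stores instead of read()/write() calls through the kernel's
# filesystem and block I/O layers.
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), size)
    mem[0:5] = b"hello"   # a plain store, not a write() syscall
    mem.flush()           # explicit persistence point
    mem.close()

# Confirm the bytes landed in the backing file.
with open(path, "rb") as f:
    data = f.read(5)
os.remove(path)
print(data)  # b'hello'
```

Real flash-as-memory tiers go much further (persistence guarantees, wear management, no page cache in the path), but the programming model shift is the same: the storage looks like an address range, not a file descriptor.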
- Three questions Fusion-io’s rivals face after flash API bombshell (go.theregister.com)
- Fusion-io SDK gives developers native memory access, keys to the NAND realm (engadget.com)
- Fusion-io demos billion IOPS server config – The Register (carpetbomberz.com)
Finally there’s talk about looking at other interfaces in addition to SATA. It’s possible that we may see a PCIe version of SandForce’s 3rd generation controller.
Some interesting notes about future directions SandForce might take, especially now that SandForce has been bought out by LSI. They are hard at work optimizing other parts of their current memory controller technology (speeding up small random reads and writes). There might be another 2X performance gain to be had on the SSD front, but more important is the PCI Express market. Fusion-io has been the team to beat when it comes to integrating components and moving data across the PCIe interface. Now SandForce is looking to come out with a bona fide PCIe SSD controller, which up until now has been a roll-your-own affair; the engineering and design expertise of companies like Fusion-io were absolutely necessary to get a PCIe SSD card to market. That playing field too will now be leveled somewhat, and competitors may enter the market with equally good performance numbers.
But even more interesting than this wrinkle in the parts design for PCIe SSDs is the announcement earlier this month of Fusion-io’s new software interface for getting around the limits of file I/O on modern-day OSes, Auto Commit Memory: “ACM is a software layer which allows developers to send and receive data stored on Fusion-io’s ioDrive cards directly to and from the CPU, rather than relying upon the operating system” (link to The Verge article listed in my Fusion-io article). SandForce is up against a moving target if it hopes to compete more directly with Fusion-io, which is now investing in hardware AND software engineering at the same time. 1 billion IOPS is nothing to sneeze at, given the pace of change since SATA and PCIe SSDs hit the market in quantity.