It’s time: stop, what’s that sound? The cost of SSDs is going down,…
But not really. What’s going down is the engineering, trading performance for price. The old adage of “get an SSD and it feels like a new computer” is fast going away. The reason is that demand has increased to such an extent that the older, higher-performing designs just cost too much compared to what people are willing to pay. It’s a race to the bottom for larger single-disk sizes at lower cost per gigabyte. And the speeds and throughputs keep going down.
I remember seeing speeds start around 200 MB/sec and peak at 500 MB/sec right before the Samsung 840 Pro series took the awards for best SATA SSD. Things got real cloudy after that, though. NVMe seemed to be a way forward, but even those devices are no guarantee of better performance (again, due to the cost-cutting measures of designers at the flash memory fabrication plants). The TL;DR really is at the top of the article here: Intel’s newest product (Optane) is likely a next-gen fix, at least as a secondary-level storage cache between a slower spinning disk and the CPU. Hopefully sizes will increase (I remember having to eke by on a 32GB SSD back in 2009!) and be useful to a wider range of applications and users.
Netlist owns the patents on key parts of SanDisk’s UltraDIMM technology (licensed originally from Diablo Technologies, I believe). While Netlist has lawsuits going back and forth regarding its intellectual property, it has continued to develop products. Here now is the EXPRESSvault™ EV3 announcement. It’s a PCIe RAM disk of sorts that backs up the RAM with an ultracapacitor/battery combo. If power is lost, an automated process copies the RAM to onboard flash memory for safekeeping until power is restored. This design is intended to get around the disadvantage of using flash memory as a disk: the wear and tear that occurs when flash is written to frequently. Less expensive flash memory degrades more the more you write to it; eventually memory cells fail altogether. By using the backing flash as a failsafe, you write to it only in an emergency, keeping the flash out of the grindstone of high levels of I/O. Note this is a very specific niche application of the technology, but it is very much the market for which Netlist has produced products in the past. This is their target market.
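The power-loss flow described above can be sketched in a few lines. This is a toy model of the general RAM-plus-backing-flash design, not Netlist’s actual firmware; the class and method names are invented for illustration:

```python
# Toy sketch of the EXPRESSvault-style flow: normal I/O hits
# battery-backed RAM only; the onboard flash is written just once, on a
# power-loss event, and read back when power returns.

class RamDisk:
    def __init__(self):
        self.ram = {}    # fast, volatile working store
        self.flash = {}  # slow, non-volatile emergency copy

    def write(self, lba, data):
        # Every normal write lands in RAM -- the flash sees no wear here.
        self.ram[lba] = data

    def read(self, lba):
        return self.ram.get(lba)

    def on_power_loss(self):
        # The ultracapacitor/battery keeps the card alive just long
        # enough to dump RAM contents to flash in one pass.
        self.flash = dict(self.ram)

    def on_power_restore(self):
        # Rehydrate RAM from the emergency copy.
        self.ram = dict(self.flash)
```

The point of the design shows up in `write`: flash never appears on the hot path, so its limited program/erase budget is spent only during actual outages.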
The Future Is Lower Latencies: Enter UltraDIMM
Turn now to a recent announcement by Lenovo and its X6 server line, announcing further adoption of the UltraDIMM technology. Lenovo, at least, is carrying on trying to sell this technology of flash-based memory interspersed with DRAMs. The idea of having “tiers” of storage, with SSDs, UltraDIMMs, and DRAM all acting in concert, is the high-speed future for the data center architect. Lucky for the people purchasing these things, Netlist and Diablo’s legal wrangling began to sort itself out in spring 2015: http://www.storagereview.com/diablo_technologies_gains_ground_against_netlist_in_ulltradimm_lawsuit
With a final decision being made fairly recently: http://www.diablo-technologies.com/federal-court-completely-dissolves-injunction/
Now Diablo, SanDisk, and UltraDIMM can compete in the marketplace once more, and provide a competitive advantage to the people willing to spend the money on the UltraDIMM product. By itself, UltraDIMM makes for some very interesting future uses. More broadly, the adoption of an UltraDIMM-like technology in laptops, desktops, and tablets could bring speed improvements across the board. Whether that happens is based more on the economics of BIOS and motherboard manufacturers than on the merit of UltraDIMM’s design engineering. More specifically, Lenovo (and IBM before it) had to do a lot of work on the X6 servers to support the new memory technology. Which points to another article from the person I trust to collect all the news and information on storage worldwide, The Register’s Chris Mellor. I’ve followed his writing since about 2005 and really enjoyed his take on the burgeoning SSD market as new products were announced with faster I/O every month in the heady days of 2007 and beyond. Things have slowed down a bit now, and PCIe SSDs are still the reference standard by which I/O benchmarks are measured. Fusion-io is now owned by SanDisk, and everyone’s still enjoying the speed increases they get when buying these high-end PCIe products. But it’s important to note that for further increases to occur, just as with SanDisk’s use of UltraDIMM, you have to keep pushing the boundaries. And that’s where Chris’s most recent article comes in.
Chris discusses how the Non-Volatile Memory Host Controller Interface (NVMHCI) came about as a result of legacy carry-over from spinning hard drives in the AHCI (Advanced Host Controller Interface) standard developed by Intel. AHCI and SATA (Serial ATA, the follow-on to ATA) both assumed spinning magnetic hard drives (and the speeds at which they push I/O) would be the technology a CPU uses to interact with its largest data store, the hard drive. Once that data store became flash memory, a new standard was needed to drive faster I/O and lower latencies. Enter the NVMe (Non-Volatile Memory Express) interface, now being marketed and sold by some manufacturers. A native data channel from the PCIe bus to your SSD, however it may be designed, is the next big thing in hardware for SSDs. With the promise of better speeds, it is worth migrating once the manufacturers get on board. But Chris’s article goes further, looking beyond the immediate concerns of migrating from SATA to NVMe: even flash memory may eventually be usurped by a different, as-yet-unheard-of technology. Given that’s the case, NVMe abstracts enough of the “media” of the non-volatile memory that it should allow future adoption of any number of technologies that could usurp the crown of NAND memory chips. And that is potentially a greater benefit than just squeezing out a few more megabytes per second of read and write speed. Even more tantalizing in Chris’s view is the mixing of DRAM and flash memories in a “mesh,” let’s say, of higher- and lower-speed memories, as Fusion-io’s software does, to make the sharp distinction between DRAM and flash less visible. In a sense, the speed would just come with the purchase of the technology; how it actually works would be the proverbial magic to the sysadmins and residents of Userland.
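One concrete way to see what NVMe buys over the AHCI legacy is the command-queue math from the two published specs: AHCI offers a single queue of 32 commands per port, while NVMe allows up to 65,535 I/O queues of 65,536 commands each:

```python
# Maximum outstanding commands under each interface, per the AHCI 1.3
# and NVMe base specifications.

ahci_outstanding = 1 * 32                # one queue, 32 command slots
nvme_outstanding = 65_535 * 65_536       # up to 64K queues x 64K entries

print(f"AHCI max outstanding commands: {ahci_outstanding}")
print(f"NVMe max outstanding commands: {nvme_outstanding:,}")
```

That eight-orders-of-magnitude gap in parallelism, plus a much shorter command path, is why NVMe was worth inventing rather than bolting flash onto AHCI.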
The ever-increasing density of virtual infrastructures, and the need to scale databases larger than ever, is creating an ongoing need for faster storage. And while flash has become the “go to” performance option, there are environments that still need more. Nonvolatile DRAM is the heir apparent, but it often requires customized motherboards to implement, for which widespread availability could be years away. Netlist, pioneer of NVRAM, has introduced a product that is viable for most data centers right now: the EXPRESSvault™ EV3.
The Flash Problem
While flash has solved many performance problems, it also creates a few. First, there is a legitimate concern over flash wear, especially if the environment is write-heavy. There is also a concern about performance: while flash is fast compared to hard disk drives, it’s slow compared to RAM; especially, again, on writes.
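The wear concern is easy to put numbers on with the usual back-of-the-envelope endurance estimate. The P/E cycle count, write load, and write-amplification factor below are illustrative assumptions, not any vendor’s spec:

```python
# Rough drive-lifetime estimate from program/erase (P/E) endurance.
# Total bytes the NAND can absorb = capacity x rated P/E cycles,
# discounted by write amplification (host writes get multiplied
# internally by garbage collection).

def lifetime_years(capacity_gb, pe_cycles, daily_writes_gb, write_amp=2.0):
    """Years until the rated P/E budget is exhausted."""
    total_writable_gb = capacity_gb * pe_cycles / write_amp
    return total_writable_gb / (daily_writes_gb * 365)

# Example: a 256 GB consumer MLC drive rated ~3,000 P/E cycles,
# absorbing 50 GB of host writes per day.
print(f"{lifetime_years(256, 3000, 50):.1f} years")
```

The same formula shows why write-heavy environments are the worry: double the daily writes (or the write amplification) and the lifetime halves.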
But flash does have two compelling advantages over DRAM. First it is…
Interesting to finally see this form factor hit the market. These cards now hold as much as, or more than, the typical laptop hard drive. That’s a big deal: any computer fortified with an SDXC card slot can have a flash-based backup store. I keep my Outlook mail archives on a drive like this, and occasionally I use it to transfer files the way I would with a reliable USB flash drive. And this is on a laptop that already has an SSD, so I’ve got two tiers of this kind of storage. We’re reaching a kind of singularity in flash-based storage where the chips and packaging allow for such small form factors that hard drives become moot. If I can stuff something this small into a slot roughly the size of a U.S. postage stamp, then why do I need a SATA or even an M.2-sized interface? Is it just for the sake of throughput and performance? That may be the only real argument.
For a very long time I’ve been keenly following the IOPS ratings of newly announced flash memory devices, from the SATA SSD generation and the most recent PCIe generation to the UltraDIMMs. Now, however, this Phase Change Memory announcement has pushed all those other technologies aside. While the IOPS are far above a lot of competing technologies, that figure is for reads, not writes: write speed/latency is about 55 times slower than reads. So if you want top speed reading the data rather than writing it, PCM is your best choice. But 55 times slower is not bad; it puts the write speed at approximately the same level as the multi-level cell (MLC) flash memory currently used in consumer-grade SSDs.
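To make the 55x asymmetry concrete, here is the arithmetic with an assumed read latency; the article gives only the ratio, so the 1-microsecond figure is a round number for illustration, not a measured spec:

```python
# PCM read/write asymmetry, worked through.

read_latency_us = 1.0   # assumed illustrative read latency (not from the article)
write_penalty = 55      # writes are ~55x slower than reads, per the article

write_latency_us = read_latency_us * write_penalty
print(f"write latency: {write_latency_us:.0f} us")
```

Whatever the absolute read latency turns out to be, the write side lands 55x behind it, which is what puts PCM writes in roughly MLC-flash territory.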
Chris Mellor’s emphasis is that PCM is likely better suited as a competitor to UltraDIMM as motherboard memory than as a faster PCIe SSD. And a lot depends on the chips, glue logic, and Application-Specific Integrated Circuits (ASICs) on the PCIe board. HGST went to great lengths to juice the whole project by creating a bypass around the typical PCIe interfaces, allowing much greater throughput overall. Without that engineering trick, it’s likely the 3M IOPS level wouldn’t have been so easily achieved. So bear in mind, this is nowhere near being a shipping product. To reach that level of development, it’s going to take more time to make the thing work using a commodity PCIe chipset on a commodity-designed and -built motherboard. But still, 3M IOPS is pretty impressive.
Tour said: “Our technology is the only one that satisfies every market requirement, both from a production and a performance standpoint, for nonvolatile memory. It can be manufactured at room temperature, has an extremely low forming voltage, high on-off ratio, low power consumption, nine-bit capacity per cell, exceptional switching speeds and excellent cycling endurance.”
Rice University is continuing research on its ReRAM (resistive RAM) and has come up with some new ways to manufacture it. That’s the key to adopting any discovery first made in a lab: you have to keep tweaking it to find the best way to manufacture it at scale and at reduced cost. So in the four years since the original announcement, it has become possible to manufacture the Rice U ReRAM. And at the end of the article there’s a note that some people are already buying up licenses for the technology. Hopefully that’s not just insurance against patent trolls. Instead, I’m hoping some small fabless chip design house takes this up, tries out some batches, and qualifies it for production at a large-scale contract manufacturer of silicon chips. When that happens, we’ll have the kind of momentum required to make ReRAM a real shipping product. And with any luck Rice U. will continue work on improving the basic science behind the product so that more companies will find it attractive and lucrative. Keep your eye on ReRAM.
If Hewlett-Packard (HPQ) founders Bill Hewlett and Dave Packard are spinning in their graves, they may be due for a break. Their namesake company is cooking up some awfully ambitious industrial-strength computing technology that, if and when it’s released, could replace a data center’s worth of equipment with a single refrigerator-size machine.
Memristor makes an appearance again as a potential memory technology for future computers. To date, flash memory has shown it can scale for a while yet. What benefit could there possibly be in adopting Memristor? For starters, you might be able to put a good deal of it on the same die as the CPU. That means that, similar to Intel’s most recent i-Series CPUs with embedded graphics DRAM on the CPU, you could instead fit an even larger amount of Memristor memory. Memristor is denser than DRAM and stays resident even after power is taken away from the circuit. Intel’s eDRAM scales up to 128MB on die; imagine how much Memristor memory might fit in the same space. The article states Memristor is 64-128 times denser than DRAM. I wonder if that also holds true against Intel’s embedded DRAM. Even if it’s only 10x denser than eDRAM, you could still fit 10x 128MB of Memristor memory embedded within a 4-core CPU socket. With that much on-die capacity, memory access speed would be determined solely by the on-chip bus speeds. No PCI or DRAM memory-controller bus needed. Keep it all on die as much as possible and your speeds would scream along.
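The on-die capacity arithmetic above is simple enough to spell out; the 10x figure is the deliberately conservative assumption, while the 64-128x range is the article’s claim versus commodity DRAM:

```python
# On-die Memristor capacity in an eDRAM-sized footprint.

edram_mb = 128                   # Intel's current on-die eDRAM ceiling
conservative_density = 10        # deliberately low-ball assumption vs. eDRAM
claimed_low, claimed_high = 64, 128  # article's density claim vs. DRAM

print(f"conservative: {edram_mb * conservative_density} MB on die")
print(f"article's range: {edram_mb * claimed_low}-{edram_mb * claimed_high} MB on die")
```

Even the low-ball case puts 1.28 GB of non-volatile memory inside the CPU package, which is what makes the flattened-hierarchy argument interesting.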
There are big downsides to adopting Memristor, however. One drawback is how a CPU resets memory on power-down when all the memory is non-volatile: the CPU now has to explicitly erase things on reset/shutdown before it reboots. That will take some architecture changes on both the hardware and software sides. The article further states that even how programming languages use memory would be affected. Long term, the promise of Memristor is great, but the heavy lifting needed to accommodate the new technology hasn’t been done yet. In an effort to help speed the plow on this evolution in hardware and software, HP is enlisting the Open Source community. The hope is that standards and best practices can slowly be hashed out for how Memristor is accessed, written to, and flushed by the OS, schedulers, and apps. One possible early adopter and potential big win would be the large data center owners and Cloud operators.
In-memory caches and databases are the bread and butter of the big hitters in Cloud Computing. Memristor might be adapted to this end as a virtual disk made up of memory cells onto which a transaction log is written. Or it could be presented by the OS as a raw disk of sorts, only much faster. By the time the Cloud providers’ architects really optimized their infrastructure for Memristor, there’s no telling how flat the memory hierarchy could become. Today it’s a huge chain of higher- and higher-speed caches attached to spinning drives at the base of the pyramid. Given higher density like Memristor’s and a physical location closer to the CPU core, one might eliminate a storage tier altogether for online analytical systems. Spinning drives might be relegated to the role of tape replacements for less-accessed, less-hot data. HP’s hope is to deliver a computer optimized for Memristor (called “The Machine” in this article) by 2019, where cache, memory, and storage are no longer so tightly defined and compartmentalized. With any luck this will be a shipping product and will perform at the level they are predicting.
Although Intel’s SSD DC P3700 is clearly targeted at the enterprise, the drive will be priced quite aggressively at $3/GB. Furthermore, Intel will be using the same controller and firmware architecture in two other, lower cost derivatives (P3500/P3600). In light of Intel’s positioning of the P3xxx family, a number of you asked for us to run the drive through our standard client SSD workload. We didn’t have the time to do that before Computex, but it was the first thing I did upon my return. If you aren’t familiar with the P3700 I’d recommend reading the initial review, but otherwise let’s look at how it performs as a client drive.
This is Part #2 of the full review Anandtech did on the Intel P3700 PCIe/NVMe card. It’s reassuring to know that, as Anandtech reports, Intel’s got more than just the top-end P3700 coming to market. Other price points will be competing too, for the non-enterprise workload types. $3/GB puts it at the top of the desktop peripheral price range, even for a fanboy gamer. But for data center workloads and the prices that crowd pays, this is going to be an easy choice. Intel’s P3700, as Anandtech concludes, is built not just for speed (peak I/O) but for consistency across all queue depths, file sizes, and block sizes. If you’re attempting to budget a capital improvement in your data center and want to quote the increases you’ll see, these benchmarks will be proof enough that you’ll get back every penny you spent. No need to throw an evaluation unit into your test rig or testing lab and benchmark it yourself.
As for the lower-end models, you might be able to dip your toe in, though not at the same performance level, at the $600 price point. That buys an average-to-smallish 400GB PCIe card, the Intel SSD DC P3500. Still, the overall design and engineering derive in part from the move from a straight PCIe interface to one that harnesses more data lanes on the PCIe bus and connects via the NVMe (NVMHCI) drive interface. That’s what you’re getting for that price. If you’re very sensitive to price, do not purchase this product line; Samsung has you more than adequately covered under the old-regime SATA SSD drive technology, and even then the performance is nothing to sneeze at. But do know things are in flux with the new, higher-performance drive interfaces manufacturers will be marketing and selling to you soon. Remember, roughly, this is the order in which things are improving toward higher I/O: SATA SSD, then SATA Express, then M.2, then PCIe with NVMe.
And the incremental differences in the middle are small enough that you will only really see benefits if the price is cheaper for the slightly faster interface (say, SATA SSD vs. SATA Express: with performance that close, choose on price, not on performance alone). Knowing what all these things do, or even just what they mean and how that equates to your computer’s I/O performance, will help you choose wisely over the next year or two.
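A cost-per-gigabyte comparison like the one suggested above takes only a few lines; the drives and prices below are illustrative examples for the sketch, not actual quotes:

```python
# Rank candidate drives by cost per gigabyte: when two interface
# generations perform similarly, let price break the tie.

def price_per_gb(price_usd, capacity_gb):
    return price_usd / capacity_gb

candidates = {
    "SATA SSD, 1 TB":          price_per_gb(400, 1000),
    "SATA Express SSD, 1 TB":  price_per_gb(420, 1000),
    "PCIe NVMe P3500, 400 GB": price_per_gb(600, 400),
}

for name, ppg in sorted(candidates.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${ppg:.2f}/GB")
```

On these example numbers the NVMe card costs nearly four times as much per gigabyte, which is exactly the kind of gap that should make a price-sensitive buyer stay a generation behind.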