Archive for the ‘flash memory’ Category
Tour said: “Our technology is the only one that satisfies every market requirement, both from a production and a performance standpoint, for nonvolatile memory. It can be manufactured at room temperature, has an extremely low forming voltage, high on-off ratio, low power consumption, nine-bit capacity per cell, exceptional switching speeds and excellent cycling endurance.”
Rice University is continuing research on its ReRAM (resistive RAM) and has come up with some new ways to manufacture it. That's the key to adopting any discovery first made in a lab: you have to keep tweaking it to find the best way to manufacture it at scale and at a reduced cost. So in the four years since the original announcement, it has become possible to manufacture the Rice U ReRAM. And at the end of the article there's a note that some people are already buying up licenses for the technology. Hopefully that's not just insurance against patent trolls. Instead, I'm hoping some small fabless chip design house takes this up, tries out some batches, and qualifies it for manufacture at a large-scale contract silicon fab. When that happens, we'll have the kind of momentum required to make ReRAM a real shipping product. And with any luck Rice U. will continue work on improving the basic science behind the product so that more companies will find it attractive and lucrative. Keep your eye on ReRAM.
If Hewlett-Packard (HPQ) founders Bill Hewlett and Dave Packard are spinning in their graves, they may be due for a break. Their namesake company is cooking up some awfully ambitious industrial-strength computing technology that, if and when it’s released, could replace a data center’s worth of equipment with a single refrigerator-size machine.
Memristor makes an appearance again as a potential memory technology for future computers. To date, flash memory has shown it can keep scaling for a while yet. What benefit could there possibly be in adopting memristor? For starters, you might be able to put a good deal of it on the same die as the CPU. Similar to Intel's most recent i-Series CPUs with embedded DRAM for graphics on the package, you could instead put an even larger amount of memristor memory there. Memristor is denser than DRAM and stays resident even after power is removed from the circuit. Intel's eDRAM scales up to 128MB on die; imagine how much memristor memory might fit in the same space. The article states memristor is 64-128 times denser than DRAM. I wonder if that also holds true against Intel's embedded DRAM. Even if it's only 10x denser than eDRAM, you could still fit 10x 128MB (1.28GB) of memristor memory embedded within a 4-core CPU socket. With that much on-package capacity, the speed of memory access would be determined solely by the on-chip bus speeds. No PCI or DRAM memory controller bus needed. Keep it all on die as much as possible and your speeds would scream along.
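The back-of-the-envelope capacity math is easy to sketch. The 64-128x density multipliers are the article's claims, and the conservative 10x figure is my own speculation against Intel's 128MB eDRAM part:

```python
# Rough on-die capacity estimates if memristor cells replaced Intel's
# 128 MB eDRAM in the same die area. The 64x/128x multipliers are the
# article's DRAM comparison; 10x is a conservative guess vs. eDRAM.
EDRAM_MB = 128  # Intel's largest on-package eDRAM at the time

def memristor_capacity_mb(edram_mb, density_multiplier):
    """Capacity if the same area held memristor cells instead of eDRAM."""
    return edram_mb * density_multiplier

for mult in (10, 64, 128):
    cap = memristor_capacity_mb(EDRAM_MB, mult)
    print(f"{mult:3d}x density -> {cap} MB ({cap / 1024:.2f} GB)")
```

Even the pessimistic 10x case lands over a gigabyte of nonvolatile memory inside the CPU package.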
There are big downsides to adopting memristor, however. One drawback is how a CPU resets memory on power-down when all of that memory is non-volatile. The CPU now has to explicitly erase things on reset/shutdown before it reboots. That will take architecture changes on both the hardware and software side. The article further states that even how programming languages use memory would be affected. Long term the promise of memristor is great, but the heavy lifting needed to accommodate the new technology hasn't been done yet. In an effort to speed the plow on this evolution in hardware and software, HP is enlisting the Open Source community. The hope is that standards and best practices can slowly be hashed out for how memristor is accessed, written to and flushed by the OS, schedulers and apps. One possible early adopter and potential big win would be the large data center owners and Cloud operators.
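One way to picture the software change: with nonvolatile main memory, contents survive a power cycle, so the firmware or OS must scrub memory explicitly at reset rather than getting the wipe for free. A toy sketch, using a memory-mapped file to stand in for a persistent memory region (the file name and size are made up for illustration):

```python
import mmap
import os

REGION = "nvram.bin"   # stands in for a persistent (memristor) region
SIZE = 4096

# Create the "nonvolatile" region if it doesn't exist yet.
if not os.path.exists(REGION):
    with open(REGION, "wb") as f:
        f.write(b"\xff" * SIZE)

def scrub_on_reset():
    """With NVM, old contents survive power-off; reset must erase them."""
    with open(REGION, "r+b") as f:
        mm = mmap.mmap(f.fileno(), SIZE)
        mm[:] = b"\x00" * SIZE   # what a DRAM power cycle used to do for free
        mm.flush()
        mm.close()

# A shutdown/boot path on an NVM machine would schedule this explicitly.
scrub_on_reset()
```

The point of the sketch is only that the erase becomes an explicit, scheduled step in the reset path instead of a side effect of losing power.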
In-memory caches and databases are the bread and butter of the big hitters in Cloud Computing. Memristor might be adapted to this end as a virtual disk made up of memory cells onto which a transaction log is written. Or it could be pointed to by the OS and treated as a raw disk of sorts, only much faster. By the time a Cloud provider's architects really optimized their infrastructure for memristor, there's no telling how flat the memory hierarchy could become. Today it's a huge chain of higher and higher speed caches, with spinning drives at the base of the pyramid. Given memristor's density and a physical location closer to the CPU core, one might eliminate a storage tier altogether for online analytical systems. Spinning drives might be relegated to the task of being tape replacements for less-accessed, less hot data. HP's hope is to deliver a computer optimized for memristor (called "The Machine" in this article) by 2019, in which Cache, Memory and Storage are no longer so tightly defined and compartmentalized. With any luck this will be a shipping product and will perform at the level they are predicting.
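The transaction-log idea can be sketched in miniature: treat a memory-mapped persistent region as an append-only log, which is roughly how in-memory databases journal today. The file name and record format below are invented for illustration, with `flush()` standing in for a persistence barrier on real persistent memory:

```python
import mmap
import os
import struct

LOG = "pmem_log.bin"          # stands in for a memristor-backed region
LOG_SIZE = 64 * 1024

if not os.path.exists(LOG):
    with open(LOG, "wb") as f:
        f.write(b"\x00" * LOG_SIZE)

f = open(LOG, "r+b")
mm = mmap.mmap(f.fileno(), LOG_SIZE)
tail = 0  # next free byte; a real log would recover this on restart

def append_record(payload: bytes):
    """Append a length-prefixed record and force it to 'persistence'."""
    global tail
    rec = struct.pack("<I", len(payload)) + payload
    mm[tail:tail + len(rec)] = rec
    mm.flush()                # persistence barrier stand-in
    tail += len(rec)

append_record(b"BEGIN txn 1")
append_record(b"UPDATE balance=42")
append_record(b"COMMIT txn 1")
print("log bytes used:", tail)
```

Point the OS at such a region as a raw block device instead, and you get the "very fast disk" framing; journal into it directly, and a whole caching tier starts to look optional.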
Although Intel’s SSD DC P3700 is clearly targeted at the enterprise, the drive will be priced quite aggressively at $3/GB. Furthermore, Intel will be using the same controller and firmware architecture in two other, lower cost derivatives (P3500/P3600). In light of Intel’s positioning of the P3xxx family, a number of you asked for us to run the drive through our standard client SSD workload. We didn’t have the time to do that before Computex, but it was the first thing I did upon my return. If you aren’t familiar with the P3700 I’d recommend reading the initial review, but otherwise let’s look at how it performs as a client drive.
This is Part 2 of the full review Anandtech did on the Intel P3700 PCIe/NVMe card. It's reassuring that Anandtech reports Intel has more than just the top-end P3700 coming to market; other price points will compete for the non-enterprise workloads too. $3/GB puts it at the top of the desktop peripheral price range, even for a fanboy gamer. But for data center workloads, and the prices that crowd pays, this is going to be an easy choice. Intel's P3700, as Anandtech concludes, is built not just for speed (peak I/O) but for consistency across all queue depths, file sizes and block sizes. If you're budgeting a capital improvement in your data center and want to quote the gains you'll see, these benchmarks should be proof enough that you'll get back every penny you spent. No need to throw an evaluation unit into your test lab and benchmark it yourself.
As for the lower-end models, you might be able to dip your toe in at the $600 price point, though not at the same performance level. That buys a modest 400GB PCIe card, the Intel SSD DC P3500. Still, the overall design and engineering derive in part from the move from a straight PCIe interface to one that harnesses more data lanes on the PCIe bus and talks to the system through the NVMHCI drive interface. That's what you're getting for the price. If you're very price sensitive, don't buy this product line; Samsung has you more than adequately covered under the old-regime SATA SSD technology, and even then the performance is nothing to sneeze at. But do know things are in flux with the new, higher-performance drive interfaces manufacturers will soon be marketing and selling to you. Roughly, this is the order of interfaces from highest I/O to lowest:
NVMe/NVMHCI > PCIe SSD > M.2 > SATA Express (SATAe) > SATA SSD
The incremental differences in the middle are small enough that you'll only really see a benefit if the price is cheaper for a slightly faster interface (say, SATA SSD vs. SATA Express: choose on price when the performance is nearly dead equal, not on performance alone). Knowing what all these interfaces do, or even just what the names mean and how they relate to your computer's I/O performance, will help you choose wisely over the next year or two.
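That rule of thumb can be written down explicitly. The throughput figures below are rough, commonly cited sequential maxima, and the prices are pure placeholders for illustration:

```python
# Pick a drive interface: prefer the fastest affordable one, but when two
# are within ~15% of each other, let price break the tie.
# Throughputs (MB/s) are rough sequential maxima; prices are placeholders.
options = {
    "SATA SSD":       {"mb_s": 550,  "price": 200},
    "SATA Express":   {"mb_s": 780,  "price": 260},
    "M.2 (PCIe x2)":  {"mb_s": 780,  "price": 280},
    "PCIe SSD":       {"mb_s": 1400, "price": 450},
    "NVMe (PCIe x4)": {"mb_s": 2000, "price": 600},
}

def choose(budget):
    affordable = {k: v for k, v in options.items() if v["price"] <= budget}
    if not affordable:
        return None
    best_speed = max(v["mb_s"] for v in affordable.values())
    # Anything within 15% of the fastest affordable option is a contender;
    # among contenders, take the cheapest.
    contenders = [k for k, v in affordable.items()
                  if v["mb_s"] >= 0.85 * best_speed]
    return min(contenders, key=lambda k: options[k]["price"])

print(choose(300))  # SATA Express edges out M.2 here purely on price
```

With a $300 budget the two middle interfaces tie on speed, so the cheaper one wins, which is exactly the "price dead equal, not performance alone" advice above.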
We don’t see infrequent blips of CPU architecture releases from Intel, we get a regular, 2-year tick-tock cadence. It’s time for Intel’s NSG to be given the resources necessary to do the same. I long for the day when we don’t just see these SSD releases limited to the enterprise and corporate client segments, but spread across all markets – from mobile to consumer PC client and of course up to the enterprise as well.
Big news in the SSD/flash memory world at Computex in Taipei, Taiwan. Intel has entered the fray with Samsung and SandForce, issuing a fully NVMe-compliant set of drives running on PCIe cards. Throughputs are amazing, and the prices are surprisingly competitive. You can enter the market for as low as $600 for a 400GB PCIe card running as an NVMe-compliant drive. On Windows Server 2012 R2 and Windows 8.1 you get native support for NVMe drives. This is going to get really interesting, especially considering all the markets and tiers of consumers within them. On the budget side is the SATA Express interface, an attempt to factor out some of the slowness inherent in SSDs attached to SATA bus interfaces. Then there's M.2, the smaller form-factor PCIe-based drive interface being adopted by manufacturers of light, small form-factor tablets and laptops. That is a big jump past SATA altogether, with a speed bump to go with it, since it communicates directly with the PCIe bus. Last and most impressive of all are the NVMe devices announced by Intel, with yet a further speed bump from addressing multiple data lanes on PCI Express. Some concern trolls in the gaming community are quick to point out that those data lanes are being lost to I/O when they are already maxed out by 3D graphics boards.
The route forward, it seems, would be Intel motherboard designs with a PCIe 3.0 interface carrying the equivalent data lanes of two full-speed x16 graphics cards, but devoting that extra x16 lane to I/O instead; or maybe a 1.5x arrangement with one full x16 lane plus two more x8 lanes to handle regular I/O and a dedicated x8 NVMe interface. It's going to require some re-engineering and BIOS updating, no doubt, to get full speed out of all the devices simultaneously. That's why I'd also remind readers of the Flash-DIMM phenomenon sitting out on the edges in the high-speed, high-frequency trading houses of the NYC metro area. We haven't seen nor heard much since the original product announcement from IBM for the X6-series servers and their Flash-DIMM options. Smart Memory Technology (the prime designer/manufacturer of Flash-DIMMs for SanDisk) has since been bought out by SanDisk; again, no word on that product line now. Same is true for the Lenovo takeover of IBM's Intel server product line (of which the X6-series is the jewel in the crown). Mergers and acquisitions have veiled and blunted some of these revolutionary product announcements, but I hope Flash-DIMMs eventually see the light of day, gain full BIOS support, and make it into the desktop computer market. As good as NVMe is going forward, I think we also need a mix of Flash-DIMM to see the full speed of the multi-core x86 Intel chips.
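The lane-budget speculation above can be put in numbers. PCIe 3.0 runs at 8 GT/s with 128b/130b encoding, which works out to roughly 0.985 GB/s per lane per direction; the lane allocations themselves are the hypothetical ones from the paragraph:

```python
# Per-lane, per-direction bandwidth for PCIe 3.0:
# 8 GT/s * (128/130 encoding) / 8 bits-per-byte ~= 0.985 GB/s per lane.
PCIE3_LANE_GBS = 8.0 * (128 / 130) / 8

def link_bandwidth_gbs(lanes):
    """Aggregate one-direction bandwidth of a PCIe 3.0 link."""
    return lanes * PCIE3_LANE_GBS

# Hypothetical lane splits from the paragraph above:
configs = {
    "x16 GPU + x16 devoted to storage I/O": [16, 16],
    "x16 GPU + x8 general I/O + x8 NVMe":   [16, 8, 8],
}
for name, lanes in configs.items():
    parts = ", ".join(f"x{n}={link_bandwidth_gbs(n):.1f} GB/s" for n in lanes)
    print(f"{name}: {parts}")
```

Even the x8 slices leave nearly 8 GB/s per direction for storage, which shows why carving lanes away from graphics is tempting despite the gamers' objections.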
Even though SATA Express isn't meant as a long-term architectural improvement over plain SATA or PCIe drive interfaces, it is a step toward getting more speed out of existing flash-based SSDs. Lower cost is what drive manufacturers are aiming at, hoping to attract chronic upgraders to the new bus-bridging technology. It doesn't have quite the same design or long-term speed gains as the newer M.2 spec, so be forewarned: SATA Express may fall by the wayside. This is especially true if people come to understand the tech specs and see through the marketing language aimed at home-brew computer DIY types. Those folks are very cost-sensitive and unwilling to purchase on specs alone, much less on the long-term viability of a particular drive interface technology.
So if you are somewhat baffled why SSDs seem to max out at ~500MB/s read and write speeds, rest assured SATA Express might help. It's cheap enough, but be extra careful that both your drive and motherboard fully support the new interface at the BIOS level. That way you'll see the speed gains you expected when trading up. Longer term you might want to take a closer look at the most recent versions of the NVMe interface (PCIe 3.0 with up to 8 lanes used simultaneously). NVMe is likely to be the long-term winner among SSD interfaces, but its costs to date put it in the data center purchasing class, not the desktop class. However, Intel is releasing NVMe products later this year with prices possibly as low as ~$700, though the premium per gigabyte of storage is still pretty high. Always use your best judgment and consider how long you plan on using the computer as-is. If you know you'll be swapping at regular intervals, fine, choose SATA Express today; you'll likely jump at the right time to the next best I/O drive interface as it hits the market. But if you're thinking of holding onto your current desktop as long as possible, take a long hard look at M.2 and the NVMe products coming out shortly.
I don’t think there is any other way to say this other than to state that the XP941 is without a doubt the fastest consumer SSD in the market. It set records in almost all of our benchmarks and beat SATA 6Gbps drives by a substantial margin. It’s not only faster than the SATA 6Gbps drives but it surpasses all other PCIe drives we have tested in the past, including OCZ’s Z-Drive R4 with eight controllers in RAID 0. Given that we are dealing with a single PCIe 2.0 x4 controller, that is just awesome.
Listen well as you pine away for your very own SATA SSD. One day you will get that new thing. But what you really, really want is the new, NEW thing, and that, my friends, is quite simply the PCIe SSD. True, enterprise-level purchasers have had a host of manufacturers and models to choose from in this form factor. But the desktop market cannot afford Fusion-io products at ~$15K per fully configured card; that's a whole different market. OCZ's RevoDrive line has had a wider range of products, from Fusion-io heights down to the top-end gamer market with the RevoDrive R-series PCIe drives. But those have always been SATA drives piggy-backed onto a multi-lane PCIe card (x4 or x8, depending on how many controllers were installed on board). Now the evolutionary step of dumping SATA in favor of a native PCIe-to-NAND memory controller is slowly taking place. Apple has adopted it for the top-end Mac Pro revision (price and limited availability have made this architectural choice hard to publicize). It has also been adopted in the laptops Apple has produced since Summer 2013 (and I have the MacBook Air to prove it). Speedy, yes it is. But how do I get this on my home computer?
Anandtech was able to score that very Samsung PCIe drive aftermarket through a 3rd party in Australia, along with a PCIe adapter card. So where there's a will, there's a way. From that purchase of drive and adapter, this review of the Samsung PCIe drive came about. And all one can say, looking through the benchmarks, is that we haven't seen anything yet. Drive speeds, the bottleneck in desktop and mobile computing since the dawn of the personal computer, are slowly lifting: not by a little but by a lot. This is going to herald a new age in personal computers, something close to former Intel Chairman Andy Grove's "10X" effect. Samsung's native PCIe SSD is that kind of disruptive, perspective-altering product, one that will put all manufacturers on notice and force a sea change in design and manufacture.
As end users of the technology, we've already felt the big impact SSDs with SATA interfaces have had on our laptops and desktops. But what I've been writing about, and trying to find signs of ever since the first introduction of SSDs, is the logical path through the legacy interfaces. Whether it's ATA/BIOS or the bridge chips that glue the motherboard to the CPU, a number of "old" architecture items still hang around on today's computers. Intel's adoption of UEFI has been a big step forward in shedding the legacy bottleneck components. Beyond that, native on-CPU PCIe controllers are a good step forward as well. Lastly, the sockets and bridging chips on the motherboard are the neighborhood improvements that again help speed things up. The last mile, however, is dumping the "disk" interface: the ATA/SATA spec designed around reading data off a spinning magnetic hard drive. Improve that last mile to the NAND memory chips and we'll see the full benefit of products like the Samsung PCIe drive. That day is nearly upon us with the most recent motherboard/chipset revision from Intel. We may need another revision to get exactly what we want, but the roadmap is there, and all the manufacturers had better get on it. Samsung's driving this revolution NOW.
Even a PCIe 2.0 x2 link offers about a 40% increase in maximum throughput over SATA 6Gbps. Like most interfaces, PCIe 2.0 isn’t 100% efficient and based on our internal tests the bandwidth efficiency is around 78-79%, so in the real world you should expect to get ~780MB/s out of a PCIe 2.0 x2 link, but remember that SATA 6Gbps isn’t 100% either (around 515MB/s is the typical maximum we see). The currently available PCIe SSD controller designs are all 2.0 based but we should start to see some PCIe 3.0 drives next year. We don’t have efficiency numbers for 3.0 yet but I would expect to see nearly twice the bandwidth of 2.0, making +1GB/s a norm.
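The quoted figures are easy to verify. PCIe 2.0 is 5 GT/s with 8b/10b encoding, i.e. 500 MB/s raw per lane per direction, so an x2 link at the quoted ~78% efficiency lands right on Anandtech's ~780MB/s:

```python
# Check the quoted PCIe 2.0 arithmetic:
# 5 GT/s per lane * (8/10 encoding) / 8 bits-per-byte = 500 MB/s per lane.
PCIE2_LANE_MBS = 5000 * (8 / 10) / 8

raw_x2 = 2 * PCIE2_LANE_MBS        # 1000 MB/s raw for an x2 link
effective_x2 = raw_x2 * 0.78       # ~78% efficiency, per the quoted tests

print(f"PCIe 2.0 x2 raw: {raw_x2:.0f} MB/s, effective: ~{effective_x2:.0f} MB/s")
```

The same encoding math is why SATA 6Gbps tops out at 600 MB/s raw, which real-world overhead trims to the ~515MB/s the quote mentions.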
As I've watched the SSD market slowly grow and bloom, it seems the rate of big changes has slowed. The SATA controllers on the drives themselves were kicked up a notch as the transition to 6Gbps SATA gave us consistent ~500MB/s read/write speeds, and there things have stayed, due to the inherent limit of the SATA interface. I had been watching developments in PCIe-based SSDs very closely, but the prices were always artificially high because the market for those devices was data centers. Proof positive: Fusion-io catered mostly to two big purchasers of its product, Facebook and Apple. Accordingly, its prices always sat at the enterprise level, around $15K for one PCIe-slot device (at any size/density of storage).
Apple has come to the rescue in every sense of the word by adopting PCIe SSDs as the base-level SSD for its portable computers. Starting in Summer 2013, Apple released MacBook Pro laptops with PCIe SSDs and eventually designed them into the MacBook Air as well. The last step was to fully adopt them in the desktop Mac Pro (which has been slow to hit the market). The performance of the PCIe SSD in the Mac Pro, compared to any other shipping computer, is the highest for a consumer-level product. As the Mac gains market share among all computers shipped, Mac buyers are gaining more speed from their SSDs as well.
So what further plans are in the works for the REST of the industry? Well, SATA Express seems to be the way forward for the 90% of the market still buying Windows PCs, and it's a new standard being put forth by the SATA-IO standards committee. With any luck the enthusiast motherboard manufacturers will adopt it as fast as it clears the committees, and we'll see an Anandtech or Tom's Hardware review doing a real benchmark and analysis of how well it stacks up against the previous generation hardware.