With ‘The Machine,’ HP May Have Invented a New Kind of Computer – Businessweek

An image of a circuit with 17 memristors captured by an atomic force microscope. Each memristor is composed of two layers of titanium dioxide connected by wire. As electrical current is applied to one layer, the small signal resistance of the other layer is changed, which may in turn be used as a method to register data. HP makes memory from a once-theoretical circuit (Photo credit: Wikipedia)

If Hewlett-Packard (HPQ) founders Bill Hewlett and Dave Packard are spinning in their graves, they may be due for a break. Their namesake company is cooking up some awfully ambitious industrial-strength computing technology that, if and when it’s released, could replace a data center’s worth of equipment with a single refrigerator-size machine.

via With ‘The Machine,’ HP May Have Invented a New Kind of Computer – Businessweek.

Memristor makes an appearance again as a potential memory technology for future computers. To date, flash memory has shown it can keep scaling for a while yet, so what benefit could there possibly be in adopting memristor? For starters, you might be able to put a good deal of it on the same die as the CPU. Similar to Intel's most recent i-Series CPUs with embedded DRAM for graphics, you could instead put an even larger amount of Memristor memory on the package. Memristor is denser than DRAM and stays resident even after power is removed from the circuit. Intel's eDRAM scales up to 128MB on die; imagine how much Memristor memory might fit in the same space. The article states Memristor is 64-128 times denser than DRAM. I wonder if that also holds true for Intel's embedded DRAM. Even if it's only 10x denser than eDRAM, you could still fit 10x 128MB (1.25GB) of Memristor memory embedded within a 4-core CPU socket. With that much memory available on die, access speed would be determined solely by the on-chip bus speeds. No PCI or DRAM memory controller bus needed. Keep it all on die as much as possible and your speeds would scream along.
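
As a quick sanity check on those numbers, here is a minimal back-of-the-envelope sketch in C; the density multipliers (10x, 64x, 128x over eDRAM) combine the article's claim with my own assumption, and are not published specifications:

```c
#include <stdio.h>

/* Back-of-the-envelope: how much memristor memory might fit in the die
 * area Intel devotes to 128 MB of eDRAM today. The density multipliers
 * are illustrative assumptions, not published figures. */
int main(void) {
    const double edram_mb = 128.0;
    const double density_vs_edram[] = { 10.0, 64.0, 128.0 };

    for (int i = 0; i < 3; i++) {
        double memristor_mb = edram_mb * density_vs_edram[i];
        printf("at %.0fx density: ~%.2f GB on die\n",
               density_vs_edram[i], memristor_mb / 1024.0);
    }
    return 0;
}
```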

There are big downsides to adopting Memristor, however. One drawback is how a CPU resets memory on power down when all of that memory is non-volatile: the CPU now has to explicitly erase things on reset/shutdown before it reboots. That will take some architecture changes on both the hardware and the software side. The article further states that even how programming languages use memory would be affected. Long term, the promise of memristor is great, but the heavy lifting needed to accommodate the new technology hasn't been done yet. In an effort to help speed the plow on this evolution in hardware and software, HP is enlisting the Open Source community. The hope is that standards and best practices can slowly be hashed out for how Memristor is accessed, written to and flushed by the OS, schedulers and apps. One possible set of early adopters, and a potential big win, would be the large data center owners and Cloud operators.
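
To make the scrub-on-shutdown problem concrete, here is a minimal sketch using a memory-mapped file as a stand-in for a persistent-memory region; the path, size, and the idea of scrubbing in user space are all assumptions for illustration, not HP's design:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch: with non-volatile main memory, "reset" is no longer free.
 * The OS or firmware must explicitly scrub regions that used to be
 * cleared simply by losing power. A memory-mapped file stands in for
 * the persistent region; the path and size are hypothetical. */
#define NVM_SIZE (4 * 1024 * 1024)

int main(void) {
    int fd = open("/tmp/fake_nvm_region", O_RDWR | O_CREAT, 0600);
    if (fd < 0 || ftruncate(fd, NVM_SIZE) < 0) return 1;

    unsigned char *nvm = mmap(NULL, NVM_SIZE, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
    if (nvm == MAP_FAILED) return 1;

    /* ... normal operation writes secrets and state into nvm ... */

    /* On shutdown: explicitly zero the region, then flush so the
     * cleared state is actually persisted before power-off. */
    memset(nvm, 0, NVM_SIZE);
    msync(nvm, NVM_SIZE, MS_SYNC);

    munmap(nvm, NVM_SIZE);
    close(fd);
    puts("persistent region scrubbed");
    return 0;
}
```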

In-memory caches and databases are the bread and butter of the big hitters in Cloud Computing. Memristor might be adapted to this end as a virtual disk made up of memory cells onto which a transaction log is written. Or it could be presented by the OS as a raw disk of sorts, only much faster. By the time the Cloud providers' architects have really optimized their infrastructure for Memristor, there's no telling how flat the memory hierarchy could become. Today it's a huge chain of higher and higher speed caches sitting atop spinning drives at the base of the pyramid. Given Memristor's higher density and physical proximity to the CPU core, one might eliminate a storage tier altogether for online analytical systems. Spinning drives might be relegated to the role of tape replacements for colder, less frequently accessed data. HP's hope is to deliver a computer optimized for Memristor (called "The Machine" in this article) by 2019, one where Cache, Memory and Storage are no longer so tightly defined and compartmentalized. With any luck this will be a shipping product and will perform at the level they are predicting.
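
As a rough illustration of the transaction-log idea, here is a minimal append-only log sketch over a memory-mapped file standing in for the Memristor "virtual disk"; the file path, layout, and flush ordering are all assumptions for illustration:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch: an append-only transaction log living directly in a
 * persistent-memory region (a memory-mapped file stands in here).
 * The first 8 bytes hold the current tail offset so the log survives
 * restarts; each record is persisted before the tail is advanced. */
#define LOG_SIZE (1024 * 1024)

static unsigned char *log_base;

static void log_append(const char *rec, size_t len) {
    size_t *tail = (size_t *)log_base;       /* persisted tail offset */
    memcpy(log_base + sizeof(size_t) + *tail, rec, len);
    msync(log_base, LOG_SIZE, MS_SYNC);      /* persist record first */
    *tail += len;                            /* then advance the tail */
    msync(log_base, sizeof(size_t), MS_SYNC);
}

int main(void) {
    int fd = open("/tmp/fake_nvm_log", O_RDWR | O_CREAT, 0600);
    if (fd < 0 || ftruncate(fd, LOG_SIZE) < 0) return 1;
    log_base = mmap(NULL, LOG_SIZE, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
    if (log_base == MAP_FAILED) return 1;

    log_append("BEGIN;UPDATE accounts;COMMIT\n", 29);
    printf("log tail now at %zu bytes\n", *(size_t *)log_base);
    return 0;
}
```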


Follow-Up – EETimes on SanDisk UltraDIMMs

Image representing IBM as depicted in CrunchBase
Image via CrunchBase

http://www.eetimes.com/document.asp?doc_id=1320775

“The eXFlash DIMM is an option for IBM‘s System x3850 and x3950 X6 servers providing up to 12.8 TB of flash capacity. (Although just as this story was being written, IBM announced it was selling its x86 server business to Lenovo for $2.3 billion).”

Sadly, it seems the party is over before it even got started for sales of UltraDIMM-equipped IBM x86 servers. If Lenovo snatches up this product line, I'm sure existing customers will still be perfectly happy, but I worry that the level of innovation and product testing that led to the introduction of the UltraDIMM may slow.

I'm not criticizing Lenovo here; they have done a fine job taking over the laptop and desktop brands from IBM. But the motivation to keep creating new, early samples of very risky and untried technologies seems to stem from IBM's interest in maintaining its technological lead in the data center, and I don't know how Lenovo figures into that equation. How much will Lenovo sell in the way of rackmount servers like the X6 line? And just recently there have been rumblings that IBM wants to sell off its long-running semiconductor manufacturing business as well.

It's almost too much to think IBM would give up R&D in semiconductors. Outside of Bell Labs, IBM's fundamental work in this field brought silicon-on-insulator, copper interconnects and a myriad of other firsts to ever smaller, finer design rules. While Intel followed its own process R&D agenda, IBM went its own way too, always trying to find advantage in its inventions. That blistering pace of patent filings means they will likely never see all the benefits of that research and development. At best, IBM can only hope to enforce its patents in a Nathan Myhrvold-like way, filing lawsuits against all infringers to protect its intellectual property. That will be a sad day for all of us who marveled at what they demoed, prototyped and manufactured. So long IBM, hello IBM Global Services.


Flash DOOMED to drive itself off a cliff – boffins • The Register

A flash memory cell.
Image via Wikipedia

Microsoft and University of California San Diego researchers have said flash has a bleak future because smaller and more densely packed circuits on the chips' silicon will make it too slow and unreliable. Enterprise flash cost/bit will stagnate and the cutting edge that is flash will become a blunted blade.

via Flash DOOMED to drive itself off a cliff – boffins • The Register. As reported by Chris Mellor for The Register (http://www.theregister.co.uk/)

More information regarding semiconductor manufacturers' rumors and speculation about a wall being hit in the shrinking of Flash memory chips (see this link to the previous Carpetbomber article from Dec. 15). This report has a more definitive ring to it, as actual data has been collected and projections made from models of that data. The trend, according to these researchers, is lower performance due to increasingly bad error rates and signaling on the chip itself. Higher density chips = lower performance per memory cell.

To hedge against this dark future for NAND flash, memory companies are attempting to develop novel and in some cases exotic technologies. IBM has "racetrack memory", Hewlett-Packard and Hynix have the memristor, and the list goes on. Nobody in the industry knows what comes next, so bets are being placed all over the map. My advice to anyone reading this article: do not choose a winner until it has won. I say this as someone who has watched a number of technologies fight for supremacy in the market: Sony Betamax versus JVC VHS, HD-DVD versus Blu-ray, LCD versus Plasma Display Panel, etc. I will admit these battles can be waged over long periods, which makes it harder to tell who has won, though the time spans seem to be getting shorter as more recent battles have played out. And who is to say Blu-ray has been adopted widely enough to be the be-all and end-all, when DVDs and CDs are both still widely used as recordable media? Just know that to go any further in improving the cost-versus-performance ratio, NAND will need to be forsaken to reach the next technological benchmark in high-speed, random-access, long-term, durable storage media.

Things to look out for as the NAND bandwagon slows down are triple-level memory cells (TLC), or worse yet quadruple-level cells (QLC). These are not going to be the big saviors the average consumer hopes they will be. Flash memory that packs more bits into each cell has higher error rates at the beginning of its life and higher still over time. The number of cells set aside as 'over-provisioned' spares will be so high as to negate the cost benefit of choosing the higher-density memory cells. Also touted as a way to stave off the end of the road are error-correcting circuits and digital signal processors onboard the chips and controllers. As the age of a chip begins to affect its reliability, more statistical quality-control techniques are applied to offset the loss of signal quality in the chip. This is a technique used today by at least one manufacturer (Intel), but how widely and how successfully it can be adopted is another question altogether. It would seem each memory manufacturer has its own culture and, as a result, its own technique for fixing the problem. Whoever has the best marketing and sales campaigns will, as past history has shown, be the winner.
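
To see how over-provisioning can eat the density win, here is a small sketch; the raw capacity and all the spare-area percentages are illustrative assumptions, not vendor figures:

```c
#include <stdio.h>

/* Sketch: how over-provisioning eats into the density advantage of
 * multi-bit cells. Percentages are illustrative assumptions only. */
int main(void) {
    const double raw_gb = 512.0;
    const struct { const char *kind; double op_frac; } cfg[] = {
        { "MLC, modest over-provisioning",    0.07 },
        { "TLC, heavier over-provisioning",   0.20 },
        { "QLC, aggressive over-provisioning", 0.30 },
    };

    for (int i = 0; i < 3; i++) {
        double usable = raw_gb * (1.0 - cfg[i].op_frac);
        printf("%-36s raw %.0f GB -> usable ~%.0f GB\n",
               cfg[i].kind, raw_gb, usable);
    }
    return 0;
}
```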


Could MRAM Ultimately Replace DRAM? < PC World.in

Everspin on Wednesday said its MRAM (magnetoresistive random access memory) is trickling into products that require reliable, fast non-volatile memory that can preserve data in the event of a power failure.

via Could MRAM Ultimately Replace DRAM? < PC World India News < PC World.in.

This is a simplified MRAM cell structure.
Image via Wikipedia

Magneto-Resistive RAM in the news

I haven't heard any MRAM product announcements in a while, but it appears Everspin is keeping the faith and shipping real products to real manufacturers. I couldn't be happier that it's now on the market and competing head to head with RAM and Flash memory for some product designs. But in this instance it's really competing against a whole other mainstream product: static RAM.

So-called SRAM was long used as a high-speed, read-mostly cache that kept a good-sized buffer close to the CPU. A static RAM cache was the easiest (though maybe not the most cost-effective) way to bump the speed of any Motorola or Intel CPU during their co-domination of the desktop market (the Intel 386 and Motorola 68000): stick an SRAM between the CPU and the motherboard and, voila, a 10-15% performance increase versus a straight-through connection. And with a small battery backup, static RAM, much like Flash memory, could hold data resident for many days powered down, though its cost versus Flash makes it far less competitive for that job. MRAM, however, can be used wherever you might have used static RAM in the past. Current manufacturers are using it in place of static RAM in hard drive Host Bus Adaptors. This is not just a cost savings but a materials savings, as these days it is common to back any mission-critical drive electronics with a super-capacitor.
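
For a rough feel of where a 10-15% system-level gain might come from, here is an Amdahl-style sketch; the hit rate, hit speedup, and memory-time fraction are all assumed for illustration, not measurements of any particular 386- or 68000-era system:

```c
#include <stdio.h>

/* Sketch: sanity-checking the "10-15%" figure with Amdahl-style
 * arithmetic. All inputs are illustrative assumptions. */
int main(void) {
    const double mem_fraction = 0.25; /* assumed share of run time spent waiting on memory */
    const double hit_rate     = 0.90; /* assumed SRAM cache hit rate */
    const double speedup_hit  = 3.0;  /* assumed: SRAM hit ~3x faster than DRAM access */

    /* Memory time shrinks on hits, is unchanged on misses. */
    double mem_scale = hit_rate / speedup_hit + (1.0 - hit_rate);
    double new_time  = (1.0 - mem_fraction) + mem_fraction * mem_scale;
    printf("estimated overall speedup: %.1f%%\n", 100.0 * (1.0 - new_time));
    return 0;
}
```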

With magnetic RAM you can skip the super-capacitor and let the persistence built into MRAM do the rest (no need for refreshes or background write/re-writes). It makes me wonder: if you went with both a super-capacitor backing everything locally and an MRAM module, how big a mess would that be to manage? And from a risk-management standpoint, how much extra risk, or how much less, would you incur using MRAM plus super-capacitors in your disk controller? I'm sure the manufacturing cost might not warrant the extra effort, but it would still be cool to see a statistical analysis comparing this 'belt and suspenders' extravagant setup versus just MRAM or just super-capacitors.
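
In the spirit of that wished-for analysis, here is a toy comparison; the per-event failure probabilities are purely hypothetical, and the two mechanisms are assumed to fail independently:

```c
#include <stdio.h>

/* Sketch: the "belt and suspenders" comparison. Treat the MRAM path
 * and the super-capacitor path as independent ways to survive a power
 * loss; data is lost only if both fail. Probabilities per power-loss
 * event are purely hypothetical. */
int main(void) {
    const double p_supercap_fail = 1e-4; /* assumed */
    const double p_mram_fail     = 1e-5; /* assumed */

    printf("super-cap only:  P(data loss) = %.1e\n", p_supercap_fail);
    printf("MRAM only:       P(data loss) = %.1e\n", p_mram_fail);
    printf("both (independent failures): P(data loss) = %.1e\n",
           p_supercap_fail * p_mram_fail);
    return 0;
}
```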

AnandTech – Intel and Micron IMFT Announce World's First 128Gb 20nm MLC NAND

There's a point of diminishing returns for Flash memory where further shrinking the chips makes them less and less durable over time, which has led me to believe there's a size/durability 'plateau' that will soon be reached by most Flash memory manufacturers. However, Intel's deep reserve of silicon semiconductor research is helping lead the charge to the next generation of denser, smaller Flash memory, and Micron is partnering with them to help.

English: NAND Flash memory circuit
Image via Wikipedia

The big question is endurance, however we won't see a reduction in write cycles this time around. IMFT's 20nm client-grade compute NAND used in consumer SSDs is designed for 3K – 5K write cycles, identical to its 25nm process.

via AnandTech – Intel and Micron IMFT Announce World's First 128Gb 20nm MLC NAND.

If true, this will help considerably in driving down the cost of Flash memory chips while maintaining the current level of wear and performance drop seen over the lifetime of a chip. Stories I have read previously indicated that Flash memory might not continue to evolve using the current generation of silicon chip manufacturing technology: performance drops as memory cells wear out, and memory cells were wearing out faster and faster as the wires and transistors on the Flash memory chip got smaller and narrower.

The reason for this is that memory cells have to be erased in order to free them up, and writing and erasing take a toll on a memory cell each time one of these operations is performed. Single-level cells (SLC) are the most robust and can go through many thousands, even millions, of write and erase cycles before they wear out. However, the cost per megabyte of single-level cells makes them a premium product, generally speaking, for Enterprise and corporate customers. Multi-level cells (MLC), which store two bits per cell, are much more cost-effective, but their structure makes them less durable than single-level cells. And as the wires connecting them get thinner and narrower, the number of write and erase cycles they can endure without failing drops significantly. Enterprise customers in the past would not purchase products specifically because of this limitation of the multi-level cell.
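
A common back-of-the-envelope lifetime estimate shows why those cycle counts matter in practice; the workload, write amplification, and cycle ratings below are illustrative assumptions, not vendor specifications:

```c
#include <stdio.h>

/* Sketch: why rated write/erase cycles matter. A standard
 * back-of-the-envelope endurance estimate:
 *   lifetime = capacity * cycles / (daily writes * write amplification)
 * All inputs are illustrative assumptions. */
int main(void) {
    const double capacity_gb    = 256.0;
    const double daily_write_gb = 500.0; /* assumed heavy enterprise workload */
    const double write_amp      = 3.0;   /* assumed write amplification */
    const struct { const char *cell; double cycles; } cells[] = {
        { "SLC", 100000.0 },
        { "MLC", 3000.0 },
    };

    for (int i = 0; i < 2; i++) {
        double days = capacity_gb * cells[i].cycles
                      / (daily_write_gb * write_amp);
        printf("%s at %6.0f cycles: ~%.1f years of service\n",
               cells[i].cell, cells[i].cycles, days / 365.0);
    }
    return 0;
}
```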

As companies like Intel and Samsung tried to make Flash memory chips smaller and less expensive to manufacture, the durability of the chips kept dropping. The question everyone asked: is there a point of diminishing returns where smaller design rules and thinner wires make the chips simply too fragile? The solution for most manufacturers is to add spare memory cells, 'over-provisioning', so that when a cell fails you can unlock a spare and continue using the whole chip. The not-so-secret over-provisioning trick has been the way most Solid State Disks (SSDs) have handled the write/erase problem for multi-level cells. But even then, the question is how much do you over-provision? Another technique is wear-leveling, where a memory controller distributes writes and erases over ALL the chips available to it; a statistical scheme ensures each chip suffers equally and gets the same amount of wear and tear applied to it, as the sketch below illustrates. It's a difficult balancing act for the manufacturers of Flash memory, and for the storage product makers who consume those chips, to build products that perform adequately, do not fail unexpectedly and do not cost too much for laptop and desktop manufacturers to offer their customers.
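
Here is a minimal sketch of the wear-leveling idea: every write is steered to the least-worn block, so wear accumulates evenly instead of burning out a hot spot. The block count and write pattern are arbitrary, and real controllers use far more sophisticated policies:

```c
#include <stdio.h>

/* Sketch: the core idea of wear-leveling. Each write is steered to
 * the block with the lowest erase count, spreading wear evenly. */
#define NBLOCKS 8

int main(void) {
    unsigned erase_count[NBLOCKS] = {0};

    for (int write = 0; write < 1000; write++) {
        int victim = 0;                 /* pick the least-worn block */
        for (int b = 1; b < NBLOCKS; b++)
            if (erase_count[b] < erase_count[victim]) victim = b;
        erase_count[victim]++;          /* erase + rewrite that block */
    }
    for (int b = 0; b < NBLOCKS; b++)
        printf("block %d: %u erases\n", b, erase_count[b]);
    return 0;
}
```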

If Intel and Micron can successfully address the fragility of Flash chips as the wiring and design rules get smaller and smaller, we will start to see larger memories included in more mobile devices. I predict you will see iPhones and Samsung Android smartphones with upwards of 128GBytes of Flash memory storage. Similarly, tablets and ultra-mobile laptops will start to offer larger and larger SSDs. Costs should stay about where they are now in comparison to current shipping products; we'll just have more products to choose from, say 1TByte SSDs instead of the more typical high-end 512GByte SSDs we see today. Prices might also come down, but that's bound to take a little longer, until all the other Flash memory manufacturers catch up.

A flash memory cell.
Image via Wikipedia: Wiring of a Flash Memory Cell

Birck Nanotechnology Center – Ferroelectric RAM

Schematic drawing of original designs of DRAM ...
Image via Wikipedia

The FeTRAMs are similar to state-of-the-art ferroelectric random access memories, FeRAMs, which are in commercial use but represent a relatively small part of the overall semiconductor market. Both use ferroelectric material to store information in a nonvolatile fashion, but unlike FeRAMs, the new technology allows for nondestructive readout, meaning information can be read without losing it.

via Discovery Park – Birck Nanotechnology Center – News.

I'm always pleasantly surprised to read that work is still being done on alternate materials for Random Access Memory (RAM). I was closely following developments in ferroelectric RAM by the likes of Samsung and HP. Very few of those efforts promised enough return on investment to be developed into products, and some notable attempts by big manufacturers were abandoned altogether.

If this research can be licensed to a big chip manufacturer, and not turned into a form of patent-trolling ammunition, I would feel the effort was not wasted. Too often these days, patented technologies are not used as a means of advancing the art of computer technology; instead they become a portfolio for a litigator seeking rent on the patented technology.

Given the frequency of abandoned projects in the alternative-DRAM category, I'm hoping the compatibility of this chip's manufacturing process with existing chip-making technology will be a big step forward. A paradigm-shifting technology like this might just push us to the next big mountain top of power conservation, performance and capability, the kind the CPU enjoyed from 1969 to roughly 2005, when chip speeds began to plateau.

OCZ samples twin-core ARM SSD controller • The Register

OCZ is swiftly moving up the charts of manufacturers attempting to differentiate product at the consumer level. Between the PCIe-based RevoDrives and this new announcement of its own Flash memory controller, it appears they are out front on both current and future performance. Here's to any manufacturer who decides not just to license SandForce controllers but also to design and produce their own.

OCZ Technology
Image via Wikipedia

OCZ says it is available for evaluation now by OEMs and, we presume, OCZ will be using it in its own flash products. We're looking at 1TB SSDs using TLC flash, shipping sequential data out at 500MB/sec, which boot quickly, and could be combined to provide multi-TB flash data stores. Parallelising data access would provide multi-GB/sec I/O. The flash future looks bright.

via OCZ samples twin-core ARM SSD controller • The Register.

Who knew pairing an ARM core with the drive electronics of a Flash-based SSD could be so successful? Not only are ARM chips driving the CPUs in our handheld devices, they are now becoming SSD drive controllers too. If OCZ can fabricate these drive controllers with good yields (say 70% on the first run), they will hopefully give themselves a pricing advantage and a higher profit margin per device sold, assuming they don't have to pay royalties for a SandForce drive controller on every device they ship.
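
As a rough sketch of why that yield figure matters to margins, cost per good die is approximately wafer cost divided by (dies per wafer x yield); the wafer cost and die count below are assumptions for illustration, not OCZ's numbers:

```c
#include <stdio.h>

/* Sketch: cost per good die as a function of yield.
 * Wafer cost and die count are illustrative assumptions. */
int main(void) {
    const double wafer_cost     = 3000.0; /* assumed, USD */
    const double dies_per_wafer = 600.0;  /* assumed for a small controller die */
    const double yields[] = { 0.50, 0.70, 0.90 };

    for (int i = 0; i < 3; i++)
        printf("yield %.0f%%: ~$%.2f per good controller\n",
               100.0 * yields[i],
               wafer_cost / (dies_per_wafer * yields[i]));
    return 0;
}
```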

If OCZ had drawn up its own drive controller from scratch, I would be surprised. But since they have acquired Indilinx, it seems they are making good on the promise held by Indilinx's current crop of drive controllers. Let's just hope they are able to match SandForce's performance at the same price points as well. Otherwise the acquisition is little more than a patent machine that will let OCZ wage lawsuits against competitors using the intellectual property it acquired with Indilinx. And we have seen too much of that recently, with Apple's bid for Nortel's patent pool and Google's acquisition of Motorola.