Categories
data center flash memory technology

With ‘The Machine,’ HP May Have Invented a New Kind of Computer – Businessweek

An image of a circuit with 17 memristors captured by an atomic force microscope. Each memristor is composed of two layers of titanium dioxide connected by wire. As electrical current is applied to one layer, the small signal resistance of the other layer is changed, which may in turn be used as a method to register data. HP makes memory from a once-theoretical circuit (Photo credit: Wikipedia)

If Hewlett-Packard (HPQ) founders Bill Hewlett and Dave Packard are spinning in their graves, they may be due for a break. Their namesake company is cooking up some awfully ambitious industrial-strength computing technology that, if and when it’s released, could replace a data center’s worth of equipment with a single refrigerator-size machine.

via With ‘The Machine,’ HP May Have Invented a New Kind of Computer – Businessweek.

Memristor makes an appearance again as a potential memory technology for future computers. To date, flash memory has shown it can keep scaling for quite a while yet. So what benefit could there possibly be in adopting memristor? For starters, you might be able to put a good deal of it on the same die as the CPU. Much as Intel's most recent i-Series CPUs embed graphics DRAM on the CPU package, you could instead embed an even larger amount of memristor memory. Memristor is denser than DRAM and retains its contents even after power is removed from the circuit. Intel's eDRAM scales up to 128MB on die; imagine how much memristor memory might fit in the same space. The article states memristor is 64-128 times denser than DRAM. I wonder if that also holds true for Intel's embedded DRAM. Even if it's only 10x denser than eDRAM, you could still fit 10x 128MB of memristor memory embedded within a 4-core CPU socket. With that much memory on the package, memory access speed would be determined solely by the on-chip bus speeds. No PCI or DRAM memory controller bus needed. Keep it all on die as much as possible and your speeds would scream along.
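Here is a back-of-envelope sketch of that arithmetic. The density ratios are the assumptions from the paragraph above (and the article's 64-128x figure), not anything published by HP or Intel about an actual product:

```python
# Hypothetical capacity estimate: how much memristor memory might fit in the
# die area Intel currently spends on 128 MB of eDRAM, for a few assumed
# density advantages. Numbers are illustrative only.

EDRAM_ON_DIE_MB = 128  # Intel's current on-package eDRAM capacity

for density_advantage in (10, 64, 128):  # assumed memristor-vs-eDRAM ratios
    memristor_mb = EDRAM_ON_DIE_MB * density_advantage
    print(f"{density_advantage:>3}x denser -> ~{memristor_mb / 1024:.2f} GB on die")
```

Even at the conservative 10x figure that works out to roughly 1.25 GB of non-volatile memory sitting right next to the cores.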

There are big downsides to adopting memristor, however. One drawback is how a CPU clears memory on power down when all of that memory is non-volatile. The CPU now has to explicitly erase things on reset/shutdown before it reboots. That will take some architecture changes on both the hardware and the software side. The article further states that even the way programming languages use memory would be affected. Long term, the promise of memristor is great, but the heavy lifting needed to accommodate the new technology hasn't been done yet. In an effort to help speed the plow on this evolution in hardware and software, HP is enlisting the Open Source community. The hope is that standards and best practices can slowly be hashed out for how memristor is accessed, written to, and flushed by the OS, schedulers, and apps. One possible early adopter, and a potential big win, would be the large data center owners and Cloud operators.
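A minimal sketch of that "explicit erase" problem, using a memory-mapped file as a stand-in for a non-volatile memory region (the NonVolatileRegion class and its scrub step are hypothetical, purely to illustrate the extra work a shutdown path would have to do, not any real HP or OS interface):

```python
# With volatile DRAM, a power cycle clears everything for free; with a
# non-volatile main memory, the shutdown/reset path has to scrub sensitive
# regions itself. A file-backed mmap stands in for memristor cells here.

import mmap

class NonVolatileRegion:
    def __init__(self, path: str, size: int):
        # Back the "persistent memory" with a file so its contents survive
        # the process, the way memristor contents survive power-off.
        self._file = open(path, "a+b")
        self._file.truncate(size)
        self.mem = mmap.mmap(self._file.fileno(), size)

    def scrub(self):
        # The step DRAM never needed: overwrite the region before "power-off".
        self.mem[:] = b"\x00" * len(self.mem)
        self.mem.flush()

region = NonVolatileRegion("nvm.img", 4096)
region.mem[:5] = b"hello"   # data written here would persist across reboots...
region.scrub()              # ...unless the shutdown path explicitly erases it
```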

In-memory caches and databases are the bread and butter of the big hitters in Cloud Computing. Memristor might be adapted to this end as a virtual disk made up of memory cells onto which a transaction log is written. Or it could be treated by the OS as a raw disk of sorts, only much faster. By the time a Cloud provider's architects really optimized their infrastructure for memristor, there's no telling how flat the memory hierarchy could become. Today it's a huge chain of higher and higher speed caches, with spinning drives at the base of the pyramid. Given a higher-density technology like memristor sitting physically closer to the CPU core, one might eliminate a storage tier altogether for online analytical systems. Spinning drives might be relegated to the role of tape replacements for colder, less frequently accessed data. HP's hope is to deliver a computer optimized for memristor (called "The Machine" in this article) by 2019, one where Cache, Memory, and Storage are no longer so tightly defined and compartmentalized. With any luck this will be a shipping product and will perform at the level they are predicting.
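As a rough illustration of the "transaction log on persistent memory" idea, here is a sketch that appends log records straight into a memory-mapped region rather than issuing writes through a block device. The file-backed mmap is only a stand-in for a memristor address range, and the record format is made up for illustration:

```python
# Append [length][crc32][payload] records to a region that stands in for
# persistent memory. On real non-volatile memory the flush() would be a
# CPU cache flush/fence rather than a file-system sync.

import mmap, struct, zlib

LOG_SIZE = 1 << 20  # 1 MiB region standing in for persistent memory

with open("txlog.img", "a+b") as f:
    f.truncate(LOG_SIZE)
    log = mmap.mmap(f.fileno(), LOG_SIZE)

    offset = 0
    def append(payload: bytes):
        """Write one length-prefixed, checksummed record at the log tail."""
        global offset
        header = struct.pack("<II", len(payload), zlib.crc32(payload))
        log[offset:offset + len(header) + len(payload)] = header + payload
        offset += len(header) + len(payload)
        log.flush()

    append(b"BEGIN txn 1")
    append(b"UPDATE accounts SET balance = balance - 100 WHERE id = 7")
    append(b"COMMIT txn 1")
```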

Categories
blogtools cloud computers data center technology

Intel Responds to Calxeda/HP ARM Server News (Wired.com)

Now, you’re probably thinking, isn’t Xeon the exact opposite of the kind of extreme low-power computing envisioned by HP with Project Moonshot? Surely this is just crazy talk from Intel? Maybe, but Walcyzk raised some valid points that are worth airing.

via Cloudline | Blog | Intel Responds to Calxeda/HP ARM Server News: Xeon Still Wins for Big Data.

Structure of the TILE64 mesh network processor from Tilera (image via Wikipedia)

So Intel gets an interview with a Condé Nast writer for a sub-blog of Wired.com. I doubt many purchasers or data center architects consult Cloudline@Wired.com, but all the same I saw through the many thinly veiled bits of handwaving and old saws from Intel saying, in effect, “Yes, this exists, but we’re already addressing it with our existing product lines. . .” So I wrote a comment on this very article, especially regarding a throwaway line about the ‘future’ of the data center and the direction the Data Center and Cloud Computing market was headed. The moderator, however, never published the comment. In effect, I raised the question: Whither Tilera? And the Quanta SM-2 server based on the Tilera chip?

Aren’t they exactly what the author John Stokes describes as a network of cores on a chip? And given the scale of Tilera’s product plans going forward, and the fact that they are not just concentrating on network gear but on actual Compute Clouds too, I’d say both Stokes and Walcyzk are asking the wrong questions and directing our attention in the wrong direction. This is not a PR battle but a flat-out technology battle. You cannot win it with words and white papers; it requires benchmarks, deployments, and case histories. Technical merit and superior technology will differentiate the players in the Cloud-in-a-Box race. That hasn’t been the case in the past, as Intel battled AMD in the desktop consumer market. In the data center, Fear, Uncertainty, and Doubt is the only weapon Intel has.

And I’ll quote directly from John Stokes’s article here describing EXACTLY the kind of product that Tilera has been shipping already:

“Instead of Xeon with virtualization, I could easily see a many-core Atom or ARM cluster-on-a-chip emerging as the best way to tackle batch-oriented Big Data workloads. Until then, though, it’s clear that Intel isn’t going to roll over and let ARM just take over one of the hottest emerging markets for compute power.”

The key phrase here is cluster on a chip, which is in essence exactly what Tilera has striven to achieve with its TILE64-based architecture. To review from previous blog entries on this website following the announcements and timelines published by Tilera: