I used to follow all the achievements and exploits of the research departments working on advances in non-volatile memories (anything not DRAM). Magnetoresistive materials looked very promising, and both Magnetic RAM (MRAM) and Ferroelectric RAM (FeRAM) had been worked on for years. But each new project seemed to spur others on to look for different techniques and materials. So the New Scientist has surveyed the landscape and is reporting back its findings. What technology will ultimately win the race?
So where is the technology that can store our high-definition home cinema collection on a single chip? Or every book we would ever want to read or refer to? Flash can’t do that. In labs across the world, though, an impressive array of technologies is lining up that could make such dreams achievable.
I used to follow news stories on new computer memory technology on the IEEE.com website. I didn’t always understand all the terms and technologies, but I did want to know what might be coming on the market in a couple of years. Magnetic RAM seemed like a game changer, as did Ferroelectric RAM. Both of them, like flash, could hold their contents without the computer being turned on. And in some ways they were superior to flash in that the read/write cycle didn’t destroy the memory over time. Flash is known to have a fixed useful lifespan before it wears out. According to the postscript in this article at New Scientist, flash memory can sustain between 10,000 and 100,000 write cycles before it fails. Despite this, flash memory doesn’t seem to be going away anytime soon, which begs the question: where are my MRAM and FeRAM chips?
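Those endurance figures lend themselves to a quick back-of-the-envelope lifetime estimate. Here is a minimal Python sketch, assuming perfectly even wear leveling; the device capacity and daily write volume are my own illustrative numbers, not from the article:

```python
# Hypothetical back-of-the-envelope sketch: years until a flash device
# exhausts its erase-cycle budget, assuming ideal wear leveling that
# spreads writes evenly across the whole device.

def flash_lifetime_years(capacity_gb: float, endurance_cycles: int,
                         writes_gb_per_day: float) -> float:
    """Years of service before the endurance budget runs out."""
    total_writable_gb = capacity_gb * endurance_cycles
    return total_writable_gb / writes_gb_per_day / 365

# Illustrative example: a 64 GB device rated at 10,000 cycles,
# written 20 GB per day.
print(round(flash_lifetime_years(64, 10_000, 20), 1))  # → 87.7
```

Even at the low end of the 10,000–100,000 range, wear leveling stretches the lifetime well beyond a typical device's service life, which is a big part of why flash isn't going away.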
Maybe my faith in MRAM, or Magnetic RAM, was misplaced. I had great hopes for it exactly because so much time had been spent working on it. It looks like they couldn’t break the 32MB barrier in terms of the effective density of the MRAM chips themselves. And FeRAM is also stuck at 128MB, effectively, for similar reasons. It’s very difficult to confine the area over which the magnetism acts on the bits as the cells shrink and the wires on the chip crowd together. It comes down to too much crosstalk on the wires.
This article also mentions something called Racetrack Memory. It reminds me a lot of what I read about the old Sperry Univac computers that used mercury delay lines to store 512 bits at a time. Only now, instead of acoustic waves, it’s storing magnetic domains shifted along a nanowire and reading them in series as needed. Cool stuff, and if I had to vote for which one is going to win, Phase Change and Racetrack look like the best prospects right now. I hope both of them see the light of day real soon now.
Outsourcing datacenters is very popular and can be lucrative, depending on your bottom-line requirements. Nobody wants to employ people with that level of skill and salary, especially when it comes to benefits, so keeping your skilled workers to a minimum is a way of saving money. But when your datacenter goes down for 12 hours, you lose money.
HP managers are reaping the harvest of their deep cost-cutting at EDS, in the form of a massive mainframe failure that crippled some very large clients, including the taxpayer-owned bank RBS.
The Royal Bank of Scotland is a national bank and a big player in the European banking market. In datacenter speak, “five nines” of availability is a guarantee the computer will stay up and running 99.999% of the time. That works out to about 5.26 minutes of allowed downtime PER YEAR. This Royal Bank of Scotland computer was down 12 hours, which translates to roughly 99.86% availability. I think HP and EDS owe some people money for breaking the terms of their contract. It just proves outsourcing is not a cure-all for cost savings. You as the customer don’t know when they are going to start dropping head count to inflate the value of their stock on Wall Street. And when the economy soured, they dropped head count like you wouldn’t believe. What does that mean for outstanding contracts to provide datacenter services? It means all bets are off; you get whatever they are willing to give you. If you are employed to make and manage contracts like this for your company, be forewarned: your outsourcing company can fire everyone at the drop of a hat.
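The availability arithmetic is simple enough to sketch in a couple of Python functions; this is just unit conversion over a 365-day year, nothing specific to the RBS contract:

```python
# "Nines" arithmetic: convert an availability target into allowed
# downtime per year, and an actual outage back into an availability
# percentage, over a 365-day (8760-hour) year.
HOURS_PER_YEAR = 365 * 24  # 8760

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime permitted per year at a given availability."""
    return HOURS_PER_YEAR * 60 * (1 - availability_pct / 100)

def availability_after_outage(outage_hours: float) -> float:
    """Availability (%) over one year containing a single outage."""
    return 100 * (HOURS_PER_YEAR - outage_hours) / HOURS_PER_YEAR

print(round(allowed_downtime_minutes(99.999), 2))  # five nines → 5.26 minutes/year
print(round(availability_after_outage(12), 2))     # 12-hour outage → 99.86%
```

A 12-hour outage burns through more than a century's worth of five-nines downtime allowance in one go.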
Intel’s executives were quite brash when talking about Larrabee even though most of its public appearances were made on PowerPoint slides. They said that Larrabee would roar onto the scene and outperform competing products.
And so now, finally, the NY Times nails the coffin shut on Intel’s Larrabee saga. To refresh your memory, this is the second attempt by Intel to create a graphics processor. The first failed attempt was some years ago, in the late 1990s, when 3dfx (later bought by nVidia) was tearing up the charts with their Voodoo 1 and Voodoo 2 PCI-based 3D accelerator cards. The age of Quake and Quake 2 was upon us, and everyone wanted smoother frame rates. Intel wanted to show its prowess in the design of a low-cost graphics card running on the brand-new AGP slot, which Intel had just invented (remember AGP?). What resulted was a similar set of delays and poor performance as engineering samples came out of the development labs. Given the torrid pace of products released by nVidia and eventually ATI, Intel couldn’t keep up. Its benchmarks were surpassed by the time the graphics card saw the light of day, and they couldn’t give them away. (See Wikipedia: Intel i740.)
The Intel740, or i740, is a graphics processing unit using an AGP interface released by Intel in 1998. Intel was hoping to use the i740 to popularize the AGP port, while most graphics vendors were still using PCI. Released with enormous fanfare, the i740 proved to have disappointing real-world performance, and sank from view after only a few months on the market.
Enter Larrabee, a whole new ball game at Intel, right?! The trend toward larger numbers of parallel processors on GPUs from nVidia and ATI/AMD led Intel to believe it might leverage some of its production lines to make a graphics card again. But this time it was different: nVidia had moved from single-purpose GPUs to general-purpose GPUs in order to create a secondary market for its cards as compute-intensive co-processors. They called it CUDA and provided a few development tools in the early stages. Intel latched onto this idea of the general-purpose GPU and decided it could do better. What’s more general purpose than an Intel x86 processor, right? And what if you could provide the libraries and a Hardware Abstraction Layer that could turn a large number of processor cores into something that looked and smelled like a GPU?
For Intel it seemed like a win/win/win: everybody wins. The manufacturing lines using older 45nm design rules could be utilized for production, making the graphics card pure profit. They could put 32 processor cores on a card and program them to do multiple duties for the OS (graphics for games, co-processing for transcoding videos to MP4). But each time they presented a white paper or demo at a trade show, it became obvious the timeline and schedule were slipping. They had benchmarks to show, great claims to make, future projections of performance to declare. Roadmaps were the order of the day. But just last week the rumors started to set in.
Similar to its graphics card foray of the past, Intel couldn’t beat its time-to-market demons. The Larrabee project was going to be very late and was still using 45nm manufacturing design rules. Given that Intel’s top-of-the-line production lines moved to 32nm this year, and nVidia and AMD are doing process shrinks on their current products, Intel was at a disadvantage. Rather than scrap the thing and lose face again, they decided to recover somewhat and put Larrabee out there as a free software/hardware development kit to see if that was enough to get people to bite. I don’t know what benefit, if any, development on this platform would bring. It would rank right up there with the Itanium and i740 as hugely promoted dead-end products with zero to negative market share. Big Fail – Do Not Want.
And for you armchair Monday-morning technology quarterbacks, here are some links to enjoy leading up to the NYTimes article today:
Now that the Droid has hit the market and the mobile Google OS is strutting its stuff, when are we going to see the benefits of an App Store-like universe? Early wins are going to be critical, so maybe turn-by-turn navigation is an early win?!
Brady Forest writes: Google has announced a free turn-by-turn navigation system for Android 2.0 phones such as the Droid.
And with that we get a killer app for the cell phone market and the end of the market for single-purpose personal navigation devices. Everyone is desperate to get a sample of the Motorola Droid to see how well the mix of features works on the phone. Consumer Reports has tried out a number of iPhone navigation apps to see how they measure up to purpose-built navigators. For people who don’t need specific features or generally aren’t connoisseurs of turn-by-turn directions, they are passable. But for anyone who bought early and often from Magellan, Garmin, and TomTom, the repurposed iPhone apps will come up short.
The Motorola Droid, however, is trying to redefine the market by keeping most of the data in the cloud at Google’s datacenters and doing the necessary lookups as needed over the cell phone data network. This is the exact opposite of most personal navigation devices, where all the mapping and point-of-interest data are kept on the device and manually updated through huge, slow downloads of new data purchased online on an annual basis (at least for me). Depending on the results Consumer Reports gets, I’ll reserve judgment.

This is not likely to shift the current paradigm of personal navigation, except that the devices are going to have to be even more multipurpose than Garmin has made them. And unwillingly made them, at that. The Garmin Nuviphone was supposed to be a big deal, but it’s a poor substitute for a much cheaper phone plus a more feature-filled navigation device. I think the inclusion of Google Maps and Google Street View is the next big thing in navigation, just as lane assistance differentiated TomTom from Garmin about a year and a half ago. So radical incrementalism is still the order of the day in personal GPS devices. But with an open platform for developing navigation services, who knows what the future may hold. I’m hoping the current duopoly of Garmin and TomTom starts to crumble and someone starts to eat away at the low end, or even the high end, of the market. Something has got to give.
What is it that Google Wave might accomplish that other web apps like Facebook, Twitter, and (god forbid) SharePoint don’t already do? It all depends on your workflow. Google Wave might fit, or it might not; it just depends on how quickly you adjust to the interface.
Wave challenges us to reevaluate how communication is done, stored, and shared between two or more people.
Point taken. Since I watched the video of the demo done last spring, I too have been smitten with the potential uses of Google Wave. First and foremost, it is a communication medium. Second, unlike email, there are no local, unsynced copies of the text/multimedia threads. Instead everything is central, like an old-style bulletin board, newsgroup, or collaborative wiki. And like a wiki, revisions are kept and can be “played back” to see how things have evolved over time. For people recognizing the limits of emailing attachments to accomplish this goal of group editing, the benefits far outweigh the barriers to entry. I was hoping to get an invitation to Google Wave, but haven’t yet received one. Of course, if I do get invited, the fax machine problem will crop up: the tool is only useful if the people you want to reach are on it too. I will need to find someone else who I know well enough to collaborate with in order to try it out. And hopefully there will be a ready and willing audience when I do finally get an invite.
As far as how much better Wave is than email, it depends very much on how you manage your communications already. Are you a telephone person, an email person, or a face-to-face person? All these things affect how you will perceive the benefits of a persistent central store of all the blips and waves you participate in. I think Google could help explain things even to us mid-level, technologically capable folks who are still kind of bewildered by what went on in the demos at Google Developer Day. But this PDF Educause has compiled will help considerably. The analogy I’m using now is the bulletin board/wiki/collaborative document example. Sometimes it’s just easier to understand something in comparison to something you already know, use, and understand.
PS: I finally got an invite to Google Wave about two weeks ago and went hog wild inviting people to join in. If you want to include me in a Wave, add me to your list as: firstname.lastname@example.org. Early returns from sending invites and participating in some experimental Waves have shown the wild popularity dying down quite a bit. At one point we had 8 participants in one single Wave. Trying out some of the add-on tools was interesting too, but the universe of add-ons is pretty small at this point. Hopefully Google will get that third-party development effort going in high gear. As for the utility of Google Wave, it is way too much like a super-charged, glorified bulletin board. It doesn’t have any easy hooks in or out to other social media infrastructure. Someone has to make it seamless with Facebook/Twitter/Gmail, either through RSS hooks or by making the whole framework/interface embeddable or linkable in other websites. As always, we’ll see how this goes. They need to keep up a torrid pace of development, like Facebook achieved from 2005–2007, improving the product and adding membership to the Google Wave universe.