Categories: science & technology, technology, wired culture

Move Over GPS, Here Comes the Smartphone – NYTimes.com

Smartphone & GPS
Maybe these devices will converge into one

I thoroughly enjoyed the iPhone 3GS presentation, when TomTom presented the software and hardware add-ons that will let you use the iPhone as a fully functional navigation system. The question is how long companies like Garmin can sit on the market providing little more than timid incrementalism in their new product offerings. About a year ago there were four competitors in the personal navigation market: Garmin, TomTom, and Navigon, with Magellan kind of in the background. Navigon has since ended production of its devices but will sell its software to anyone willing to license it. Magellan is still creeping around, but Garmin surpassed it long ago. So TomTom and Garmin beat each other's heads in on a quarterly basis. TomTom really did innovate on the software end of things, providing all kinds of aids, like telling you which highway lane to take or helping at difficult intersections. As TomTom rolled these out, Garmin would just sit back and eventually respond with a similar feature, slowly trying to bleed away TomTom's advantage by attrition.

Worse yet, Garmin entered into a project to design a brand new cell phone with all the software and GPS components integrated into it. THAT, folks, is the Garmin strategy: they will own the production of the device and the software, or nothing at all. TomTom has taken a rather different approach, taking a cue from Navigon: they ported their software to the Apple iPhone application development environment. Now the GPS chip of the iPhone can be fully accessed and used to turn the iPhone into a TomTom Go!

Oh how I wish Garmin had seen this coming. Worse yet, they will not adapt their strategy; it's full steam ahead on the cell phone and they are sticking to it. Ericsson is helping them design it, and it won't be out for another year, which shows the perilous position they are in. With the blistering pace of product introductions in the navigation market, shouldn't Garmin have learned that a two-year design cycle on a cell phone is going to KILL the product once it's released? And worse, as tastes change, who is going to give up their iPhone just for the privilege of owning a Garmin-branded cell phone? I swear that product is dead on arrival, and Garmin needs to pay off its contract with Ericsson and bury all the prototypes built so far. End it, end it now.

“It’s more like a desperate move. Now that you have the iPhone and the Pre, it’s just too late,” Mr. Blin said. Smartphones equipped with GPS “are the model moving forward that is going to be successful.”

via Move Over GPS, Here Comes the Smartphone – NYTimes.com.

Categories: science & technology, technology

Toshiba 3D flash chip

Toshiba currently bonds several traditional flash chips into a multi-chip stacked package; the Apple iPhone 3GS is an example of one manufacturer using this seemingly cutting-edge technology. In a single package Toshiba has achieved 32 GB of storage. But size is always a consideration for portable devices like cell phones, so how do you keep increasing the storage without making the chip too thick?

Enter the nirvana of 3D CMOS manufacturing. SanDisk and Toshiba have both acquired companies that dabbled in the 3D chip area. And I'm not talking about multi-chip modules stacked one on top of another in a really thin profile. These layers would be laid down one metallic layer at a time during the manufacturing process, achieving the thinnest profile theoretically possible. So if you are like me and amazed that 32 GB of flash can fit in one package, just wait: the densities are going to improve even more. But it's going to be a few years out; three years of research and development will be needed to make the 3D flash chip a manufacturable product.
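To put the appeal in numbers, here is a rough sketch (my arithmetic, not Toshiba's roadmap): if each added layer contributes the same cell array, capacity scales with the layer count while the die footprint stays put.

```latex
% n stacked cell layers in a single die of area A:
\[ C_n \approx n \cdot C_1, \qquad \text{area per bit} \approx \frac{A}{n \cdot C_1} \]
% So a four-layer part holds roughly 4x the bits of a single-layer die
% of the same footprint, provided the extra process steps cost less
% than fabricating (and stacking) four separate dies.
```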

The basic idea is to stack layers of flash memory atop one another to build a higher capacity chip more cheaply than by integrating the same number of cells into a single layer chip. The stacked chip would also occupy a smaller area than a single layer chip with the same capacity.

via Toshiba hopes for 3D flash chip within three years • The Register.

Categories: science & technology, technology

Waterproof Lithium-Air Batteries

You may remember high school chemistry class, when the topic of reactive metals came up. My teacher had a big slab of pure sodium he kept in a jar under kerosene, to prevent any water, even humidity in the air, from reacting with the pure metallic sodium. He would slice pieces off the slab so the surfaces were completely free of tarnish, then pull the pieces out with forceps. And in a display of pyrotechnics, sound, and fury, he would drop the metal into a flask of water, where it would fizz violently, racing around on the surface. It was reacting with the water, creating lye (NaOH, sodium hydroxide) and hydrogen gas (H2). He would then light the gas to show it really was combustible hydrogen.
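For the record, the reaction in that demo balances out like this:

```latex
% Sodium metal reacting with water: the fizzing, the lye, and the
% hydrogen gas my teacher lit at the end.
\[
2\,\mathrm{Na(s)} + 2\,\mathrm{H_2O(l)} \longrightarrow 2\,\mathrm{NaOH(aq)} + \mathrm{H_2(g)}\uparrow
\]
```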

Well, lithium is a very reactive metal too, which means it has lots of energy stored up in it that can be tapped to do useful things, like serving as a battery electrode. Lithium-ion batteries exploit this trait to give us the highest energy density of any batteries on the market, save for some exotic specialty chemistries like zinc-air. Lithium-ion cells use all kinds of tricks to keep water and moisture out of the mix inside the battery, but those tricks take away from the total energy density. So now the race is on to use pure metallic lithium in a battery without needing any tricks to protect it from water.
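To put some chemistry behind that, these are the simplified discharge reactions usually cited for lithium-air cells. I'm sketching from general references here, not from PolyPlus's actual design; the exact chemistry depends on whether the electrolyte is aqueous.

```latex
% Anode: metallic lithium gives up an electron.
\[ \mathrm{Li} \longrightarrow \mathrm{Li^+} + e^- \]
% Overall cell reaction, non-aqueous variant (forms lithium peroxide):
\[ 2\,\mathrm{Li} + \mathrm{O_2} \longrightarrow \mathrm{Li_2O_2} \]
% Overall cell reaction, aqueous variant (forms lithium hydroxide):
\[ 4\,\mathrm{Li} + \mathrm{O_2} + 2\,\mathrm{H_2O} \longrightarrow 4\,\mathrm{LiOH} \]
```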

A company based in Berkeley, CA, is developing lightweight, high-energy batteries that can use the surrounding air as a cathode. PolyPlus is partnering with a manufacturing firm to develop single-use lithium metal-air batteries for the government, and it expects these batteries to be on the market within a few years. The company also has rechargeable lithium metal-air batteries in the early stages of development that could eventually power electric vehicles that can go for longer in between charges.

via Technology Review: Waterproof Lithium-Air Batteries.

Categories: computers, science & technology, technology

Intel to double SSD capacity • The Register

Things are really beginning to heat up now that Toshiba and Samsung are making moves to market new SSD products. Intel is also revising its product line, trying to move its SSDs to high-end process technology at the 32nm design rule. Moving from 50nm to 32nm will increase densities, but costs will most likely stay high, as usual for Intel-based product offerings. Nobody wants SSDs to suddenly become a commodity product. Not yet.
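A quick back-of-the-envelope on why the shrink matters (this is idealized scaling; real layout rules and spare area will eat into it):

```latex
% Ideal areal density gain from a 50nm -> 32nm linear shrink:
\[ \left(\frac{50\,\mathrm{nm}}{32\,\mathrm{nm}}\right)^{2} \approx 2.44\times \]
% On top of that, 2-bit MLC stores twice the bits per cell of SLC,
% which is part of why the MLC drives quoted below carry higher
% capacities than the SLC X25-E.
```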

Intel is expected to bring forward the projected doubling of its SSD capacities to as early as next month.

The current X18-M and X25-M solid state drives (SSDs) use a 50nm process and have 80GB and 160GB capacities with 2-bit multi-level cell (MLC) technology. A single level cell (SLC) X25-E has faster I/O rates and comes in 32GB and 64GB capacities.

via Intel to double SSD capacity • The Register.

Categories: computers, science & technology, technology

Moore’s Law to take a breather • The Register

Back when Byte magazine was still being published, there was a lot of talk and speculation about new technology for creating smaller microchips. Some manufacturers were touting Extreme UV (EUV); some thought X-rays would be necessary. In the years since then, a small modification of existing manufacturing methods was added instead.

"Immersion" lithography, exposing the wafer through water rather than air, was widely adopted to shrink things down. Optically clear water has a higher refractive index than air or vacuum, which lets the lens capture and focus light at steeper angles than it otherwise could. So immersion became widespread, adding years to the old technology. Now even the old-style UV processes are hitting the end of their useful lifetimes, and Intel is at last touting Extreme UV as the next big thing.
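The optics behind that trick fit in one textbook formula, the Rayleigh resolution criterion. The numbers below are illustrative, not Intel's exact process parameters:

```latex
% Minimum printable feature: R = k1 * lambda / NA, where NA = n*sin(theta).
% Water (n ~ 1.44 at the 193nm ArF wavelength) lets NA exceed 1.0,
% which is impossible through air (n = 1).
\[ R = k_1 \frac{\lambda}{\mathrm{NA}}, \qquad \mathrm{NA} = n \sin\theta \]
% Example: k1 = 0.25, lambda = 193nm, NA = 1.35 (immersion scanner):
\[ R \approx \frac{0.25 \times 193\,\mathrm{nm}}{1.35} \approx 36\,\mathrm{nm} \]
```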

Note this article from April 22, 2008: Intel was not at all confident in how cost-effective Extreme UV would be for making chips on its production lines. The belief is that EUV will let chips shrink from 32 nanometers down to the next process design rule. According to the article that would be the 22nm level, and it would require all kinds of tricks to achieve: double-patterning, phase-shifting, and pixelated exposure masks, in addition to immersion litho. They might be able to tweak the lens material for the exposure source; they might be able to tweak the refractive index of the immersion liquid. However, the cost of the production lines and masks to make the chips is going to skyrocket. Brand new chip fab plants still cost on the order of $1 billion or more to construct, and the number of years over which that cost can be spread out (amortization) is not going to be long enough. So it looks like the commoditization of microchips will finally settle in: we will buy chips for less and less per 1,000, until they are like lightbulbs. It is very nearly the end of an era, as Moore's Law finally hits the wall of physics.
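To see why the amortization math gets ugly, here is a purely illustrative example. All the numbers are mine, not iSuppli's:

```latex
% A hypothetical $1.5B fab written off over 4 years at a steady
% 30,000 wafer starts per month:
\[ \frac{\$1.5\times 10^{9}}{4\ \text{yr} \times 30{,}000\ \tfrac{\text{wafers}}{\text{mo}} \times 12\ \tfrac{\text{mo}}{\text{yr}}} \approx \$1{,}040\ \text{per wafer} \]
% At commodity chip prices, that fixed cost per wafer is brutal; a
% shorter useful life for the line only pushes the figure higher.
```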

Diminishing Returns of process shrinks

iSuppli is not talking about these problems, at least not today. But what the analysts at the chip watcher are pondering is the cost of each successive chip-making technology and the desire of chip makers not to go broke just to prove Moore’s Law right.

via iSuppli: Moore’s Law to take a breather • The Register.

Categories: blogtools, science & technology, technology, wired culture

Google Wave – The Shape of Things to Come

The Google I/O conference in Australia (photo by Niall Kennedy)

via: Official Google Blog: Went Walkabout. Brought back Google Wave

Did anyone watch the demo video from Google Australia? A number of key members from Google Maps set out to address the task of communication and collaboration. Lars and Jens Rasmussen decided that, now that Google Maps is a killer, mash-up-enabled web app, it was time to design the Next Big Thing. Enter Google Wave: the be-all, end-all, paradigm-shifting cloud application of all time. It combines all the breathless handwaving and fits of pique that Web 2.0 encompassed five years ago. I consider Web 2.0 to have really started in the summer of 2004, with blogging and podcasting efforts getting underway and slow but widespread adoption of RSS publishing and subscribing. So first I'll give you the big link to the video of the demo by Lars Rasmussen and company:

It is 90 minutes long. It is full of every little UI tweak and webapp nicety, along with rip-roaring examples of collaboration functionality and "possible uses" for Google Wave. If you cannot or will not watch a 90-minute video, just let me say that pictures speak louder than words; I would have to write a 1,000-page manual to describe everything that's included in Google Wave. First, let's start with the list of what Google Wave is "like."

It's like email: you can send and receive messages with a desktop software client. It's like chat: you can chat live with anyone who is also on Google Wave. It's like webmail, in that you can also run it without a client and see the same data store. It's like social bookmarking: you find something, you copy it, you keep it, you annotate it, you share it. It's like picture-sharing websites: you take a picture, you upload it, you annotate it, you tag it, you share it. It's like video-sharing websites: same as before, upload, annotate, tag, share. It's like WebEx, where you give a presentation, everyone sees the desktop as you present, and they comment through a chat back-channel. It's like SharePoint, where you can check documents in and out, revise them, see the revisions, and share them with others. It's like a word processor: it has live spell checking as you type, and it can even translate into other languages on the fly. It's like all those Web 2.0 mash-ups, where you take parts from one webapp and combine them with another, so you can have Twitter embedded within your Google Waves. There are no documents as such, only text streams associated with authors, editors, recipients, and so on. You create waves, you share waves, you store waves, you edit waves, you embed waves, you mash up waves. One really compelling example given towards the end uses waves as something like a content management system, where multiple authors work on, comment on, and revise a single text document (a wave), then collapse it down into a single new revision that gets shared out, until a fully edited document is the final product. Whether that is a software spec, a user manual, or a website article doesn't matter; the collaboration mechanism is the same.
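Since there are no documents as such, only text streams tied to authors and recipients, here is a loose sketch in C of how I picture the underlying data model from the demo. Every type and field name here is my own invention, not Google's actual API:

```c
#include <stddef.h>

/* My mental model of a wave, reconstructed from the demo video.
 * None of these names come from Google's actual code. */
typedef struct Blip Blip;

struct Blip {
    const char *author;      /* who typed this piece of the stream        */
    char       *text;        /* live text, editable by any participant    */
    Blip      **replies;     /* nested replies form a tree, not a thread  */
    size_t      reply_count;
};

typedef struct {
    const char  *id;                /* server-assigned wave identifier    */
    const char **participants;      /* everyone who can view or edit      */
    size_t       participant_count;
    Blip        *root;              /* top of the conversation tree       */
} Wave;
```

Playback of revisions, which the demo showed off at length, would then just be a replayable log of edits against these text streams.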

So that's the gratuitous list of what I think Google Wave is. There is some question as to whether Gmail and Google Docs & Spreadsheets will go away in favor of this new protocol and architecture. Management at Google has indicated that is not the case, but that the current Google suite would adopt Google Wave-like functionality. I think the collaboration capability would pump up the volume on the cloud-based software suite. Microsoft will have to address something like this being made freely available, or even leasable for private business the way Gmail is today. And thinking even farther ahead, for universities using Course Management Systems today: there is a lot of functionality in Google Wave that duplicates 90% of the paid-for, fully licensed Course Management Systems. Any university already using Gmail for student email and wanting to dip its toes into Course Management Systems should consider Google Wave as a possibility. Better yet, any company that repackages and leverages Google Wave in a new Course Management System would likely compete very heavily with the likes of Microsoft/Blackboard.

Categories: science & technology

Nvidia pitches OpenCL as ‘market builder’ • The Register

I used to participate pretty heavily in the old Byte Magazine online forums. One thread I was actively involved in covered reconfigurable computing. The premise we followed was that Field Programmable Gate Arrays (FPGAs) were becoming so powerful they could be used as the CPU of a desktop computer. Most people felt this was doable but inefficient; an FPGA as a reconfigurable co-processor was more likely the better fit. Enter OpenCL, a way of parceling out tasks to the right tool. In some ways I see a strong parallel to the old reconfigurable CPU discussion, where you used the best tool for the job. In the FPGA world, you would reconfigure cores to match a particular workload on demand: if you were playing a game, you might turn the CPU into a GPU until you were done; if you were recording audio, you would reconfigure the FPGA into a DSP; and so on.

OpenCL seems much more lightweight and less risky on the implementation side because it just takes advantage of what's already there, nothing like the earth-shaking architectural changes we had in mind (using an FPGA instead of a CPU). Reading about what OpenCL might allow in a diverse multi-processor desktop computer makes me want to revive the argument, at least for co-processors. In an OpenCL world you could easily have an FPGA available as a co-processor and a nice robust nVidia GPU chugging away, without discriminating architecturally against either; OpenCL would help parcel out the tasks. Mix in some OS-level support for the FPGA as a reconfigurable processor and, voila, you get a DSP or whatever else you might want at any point in the clock cycle.
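To make that concrete, here is a minimal sketch of the "right tool for the job" idea using the standard OpenCL host API: enumerate whatever devices are present and route work by device type. Error handling is trimmed for brevity, and the routing policy in the comments is my own illustration, not anything OpenCL mandates:

```c
#include <stdio.h>
#include <CL/cl.h>   /* assumes a standard OpenCL SDK is installed */

int main(void) {
    cl_platform_id platform;
    cl_device_id devices[8];
    cl_uint num_devices;

    /* Ask the runtime what hardware is actually available. */
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);

    for (cl_uint i = 0; i < num_devices; i++) {
        char name[128];
        cl_device_type type;
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_TYPE, sizeof(type), &type, NULL);

        /* Route work to whatever is best suited: GPU for data-parallel
         * number crunching, CPU for branchy serial work, and the
         * accelerator slot where a future FPGA co-processor could
         * report in as a DSP or anything else. */
        if (type & CL_DEVICE_TYPE_GPU)
            printf("GPU found: %s -> data-parallel kernels\n", name);
        else if (type & CL_DEVICE_TYPE_ACCELERATOR)
            printf("Accelerator found: %s -> DSP/FPGA-style work\n", name);
        else
            printf("CPU found: %s -> serial, branchy work\n", name);
    }
    return 0;
}
```

The nice part, and the reason it maps so well onto the old reconfigurable computing discussion, is that the same kernel source can target any of those device types; the host just picks where to enqueue it.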

Given Intel's drive toward multi-core CPUs, nVidia's parallel processors, and the somewhat less impressive gains on the FPGA front, you could have an AI right on your desktop. Now someone had better get started on those OpenCL drivers and kernel hooks! I sometimes wish I could be that person, but it's too far beyond my abilities to make it happen.