Carpet Bomberz Inc.

Focusing on desktop, data center news and analysis

Archive for the ‘mobile’ Category

Battery vendors push ultracapacitor wrappers to give Li-ions more bite • The Register

Lithium ion battery by Varta (Museum Autovision Altlußheim, Germany) (Photo credit: Wikipedia)

A pair of battery vendors are hoping that a new design which incorporates the use of an ultracapacitor material will help to improve and extend the life of lithium-ion battery packs.

via Battery vendors push ultracapacitor wrappers to give Li-ions more bite • The Register.

First, a little background on what an ultracapacitor is: https://en.wikipedia.org/wiki/Ultracapacitor#History

In short, a capacitor is a device that stores electrical charge and can release or absorb it very quickly, which makes it ideal for smoothing out the “load” on an electrical circuit. It helps prevent spikes and dips in the electricity as it flows through a device. Recent work on ultracapacitors has pushed them closer to acting like a full-fledged battery, one that loses very little of its charge over time. When an ultracapacitor is paired with an actual battery, you can do some pretty interesting things to help the two work together, allowing longer battery life and higher total charge capacity. Many good things can flow from combining ultracapacitors with a really high-end lithium-ion battery.
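
Here’s a minimal sketch of that peak-shaving idea (my own illustration, not anything from the vendors in the article): the ultracapacitor absorbs the brief bursts of current so the lithium-ion cell only ever sees a gentle, averaged draw. All the numbers are made up.

```python
# Illustrative peak-shaving model: the ultracap covers current spikes so
# the lithium-ion cell never has to supply more than a steady, modest draw.
def smooth_load(demand_amps, battery_limit_amps=2.0):
    """Split each time step's demand between the battery and the ultracap."""
    battery, ultracap = [], []
    for amps in demand_amps:
        from_battery = min(amps, battery_limit_amps)
        battery.append(from_battery)
        ultracap.append(amps - from_battery)  # the spike, if any
    return battery, ultracap

# A bursty load: idle draw punctuated by radio/camera spikes (amps).
demand = [0.5, 0.5, 4.0, 0.5, 3.5, 0.5]
batt, cap = smooth_load(demand)
print("battery supplies:", batt)   # never exceeds 2.0 A
print("ultracap supplies:", cap)   # absorbs the spikes
```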

Any technology, tweak or improvement that promises at minimum a 10% improvement over current lithium-ion battery designs is worth a look. They’re claiming a full 15% in this story from The Reg. And because of the redesign, it would seem the new packs will need to meet regulatory/safety approval as well. Having seen Japan Airlines suffer battery issues on the Boeing 787, I couldn’t agree more that the safety review matters.

There will be some heavy lifting to be done between now and when a product like this hits the market. Testing and failure analysis will ultimately decide whether or not this ultracapacitor/lithium-ion hybrid is safe enough to use in consumer electronics. I’m also hoping Apple and other manufacturer/design outfits are putting some eyes, ears and phone calls on this to learn more. Samsung might be interested too, but it seemingly relies more on battery designs from outside the company. That’s where Apple has the upper hand long term: it will design every part itself if needed in order to keep ahead of the competition.

Written by Eric Likness

April 3, 2014 at 3:00 pm

DDR4 Heir-Apparent Makes Progress | EE Times

The first DDR4 memory module was manufactured by Samsung and announced in January 2011. (Photo credit: Wikipedia)

The current paradigm has become increasingly complex, said Black, and HMC is a significant shift. It uses a vertical conduit called through-silicon via (TSV) that electrically connects a stack of individual chips to combine high-performance logic with DRAM die. Essentially, the memory modules are structured like a cube instead of being placed flat on a motherboard. This allows the technology to deliver 15 times the performance of DDR3 at only 30% of the power consumption.

via DDR4 Heir-Apparent Makes Progress | EE Times.

Even though DDR4 memory modules have only been available in quantity for a short time, people are resistant to change. And the need for speed, whether it’s SSDs stymied by SATA-2 throughput or systems married to DDR4 RAM modules, is still pretty constant. But many manufacturers and analysts wonder aloud, “isn’t this speed good enough?” There’s something to that: the current OSes and chipset/motherboard manufacturers are perfectly happy cranking out product supporting the current state of the art. But no one wants to be the first to keep pushing the ball of compute speed down the field. At least this industry group is attempting to get a plan in place for the next generation of memory modules. With any luck this spec will continue to evolve and sampled products will be sent ’round for everyone to review.

Given the changes and advances in storage and CPUs (PCIe SSDs and 15-core Xeons), eventually a wall will be hit in compute per watt or raw I/O. Desktops will eventually benefit from any speed increases, but it will take time. We won’t see 10% better with each generation of hardware. Prices will need to come down before any of the mainstream consumer goods manufacturers adopt these technologies. But as previous articles have stated, the “time to idle” measurement (which laptops and mobile devices strive to minimize) might be reason enough for tablet or laptop manufacturers to push the state of the art and adopt these technologies faster than desktops.
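
For a sense of scale, here is a quick back-of-the-envelope that simply restates the HMC claim quoted above (claims, not measurements): 15x the performance at 30% of the power works out to roughly a 50x improvement in performance per watt.

```python
# Restating the EE Times figures quoted above: HMC is claimed to deliver
# 15x DDR3 performance at 30% of DDR3's power.
ddr3_perf, ddr3_power = 1.0, 1.0     # normalize DDR3 to 1.0
hmc_perf, hmc_power = 15.0, 0.30     # claimed HMC figures

gain = (hmc_perf / hmc_power) / (ddr3_perf / ddr3_power)
print(f"Claimed perf-per-watt improvement: {gain:.0f}x")  # -> 50x
```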

Written by Eric Likness

March 27, 2014 at 3:00 pm

The technical aspects of privacy – O’Reilly Radar

Image representing Edward Snowden (Image via CrunchBase)

The first of three public workshops kicked off a conversation with the federal government on data privacy in the US.

by Andy Oram | @praxagora

via The technical aspects of privacy – O’Reilly Radar.

Interesting topic covering a wide range of issues. I’m so happy MIT sees fit to host a set of workshops on this and keep the pressure up. But as Andy Oram writes, the whole discussion at MIT was circumscribed by the notion that privacy as such doesn’t exist (an old axiom from ex-CEO of Sun Microsystems, Scott McNealy).

No one at that MIT meeting tried to advocate for users managing their own privacy. Andy Oram mentions the Vendor Relationship Management (VRM) movement (thanks to Doc Searls, co-author of The Cluetrain Manifesto) as one mechanism for individuals to pick and choose what info is shared out and to what degree. People remain willfully clueless or ignorant of VRM as an option when it comes to privacy. The shades and granularity of VRM are far more nuanced than the bifurcated, binary debate of Privacy versus Security, and it’s sad this held true for the MIT meet-up as well.

John Podesta’s call-in to the conference mentioned an existing set of rules for electronic data privacy known as the Fair Information Practices, dating back to the early 1970s and the fear that mainframe computers “knew too much” about private citizens: http://epic.org/privacy/consumer/code_fair_info.html (thanks to the Electronic Privacy Information Center for hosting this page). These issues are not new; they have simply appeared in different forms at earlier times. But each time there’s a debate, we start all over as though the question had never been raised or addressed. If the Fair Information Practices rules carry the force of law, then all the case history and precedents set by those cases STILL apply to the NSA and government surveillance.

I did learn one new term from reading about the conference at MIT: differential privacy. Apparently it’s very timely and active research is being done in this area. Mostly it applies to datasets and other big data that need to be analyzed without uniquely identifying any individual in the dataset. You want to find out the efficacy of a drug without spilling the beans that a particular person has a “prior condition.” That’s the net effect of implementing differential privacy: you get the answer to a query out of the dataset, but you never learn all the fields of the individual people behind that answer. That sounds like a step in the right direction and should honestly apply to phone and Internet company records as well. Just because you collect the data doesn’t mean you should be able to free-wheel through it and do whatever you want. If you’re mining, you should only get the net result of the query rather than snoop through all the fields for each individual. That to me is the true meaning of differential privacy.
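
To make the idea concrete, here is a minimal sketch of the standard Laplace-noise approach (my own illustration, not anything presented at the MIT workshop): you answer a count query with a small amount of calibrated random noise, so the result is useful in aggregate but no single person’s record can be pinned down.

```python
import random

def noisy_count(records, predicate, epsilon=0.5):
    """Answer a count query with Laplace noise of scale 1/epsilon.

    A count has sensitivity 1 (adding or removing one person changes it
    by at most 1), so Laplace(1/epsilon) noise gives epsilon-differential
    privacy for this single query.
    """
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # Laplace noise drawn as the difference of two exponential samples.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

patients = [
    {"on_drug": True,  "improved": True},
    {"on_drug": True,  "improved": False},
    {"on_drug": False, "improved": False},
]
print(noisy_count(patients, lambda r: r["on_drug"] and r["improved"]))
```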

Written by Eric Likness

March 17, 2014 at 3:00 pm

SanDisk Crams 128GB on microSD Card: A World First

A 512 MB Kingston microSD card next to a Patriot SD adapter (left) and miniSD adapter (middle). (Photo credit: Wikipedia)

This week during Mobile World Congress 2014, SanDisk introduced the world’s highest capacity microSDXC memory card, weighing a hefty 128 GB. That’s a huge leap in storage compared to the 128 MB microSD card launched 10 years ago.

via SanDisk Crams 128GB on microSD Card: A World First.

Amazing to think how small the form factor and how large the storage capacity have gotten with microSD-format memory cards. I remember the introduction of SDXC cards and the jump from 32GB to 64GB full-size SD cards. It didn’t take long after that before the SDXC format shrunk down to the microSD form factor. Given the size and the options to expand the memory on certain devices (notably Apple is absent from this group), a card this big is going to allow a lot longer timeline for the storage of pictures, music and video on our handheld devices. Prior to this, you would have needed a much larger mSATA or M.2 storage card to achieve this level of capacity, and a tablet or a netbook to plug those larger cards into.
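
As a rough growth check on the figures in the quoted piece (my arithmetic, not SanDisk’s): going from 128 MB to 128 GB in ten years is roughly a thousandfold increase, which works out to about one capacity doubling per year.

```python
import math

growth = (128 * 1024) / 128    # 128 GB vs. 128 MB, expressed in MB -> 1024x
years = 10
doublings = math.log2(growth)  # ~10 doublings
print(f"{growth:.0f}x in {years} years ≈ one doubling every "
      f"{years / doublings:.1f} years")
```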

Now you can have 128GB at your disposal just by dropping $200 at Amazon. Once you’ve installed it in your Samsung Galaxy, you’ve got what amounts to a complete upgrade to a much more expensive phone (especially if that phone was an iPhone). I also think an SDXC microSD card would lend itself to moving a large amount of data in a device like one of these hollowed-out nickels: http://www.amazon.com/2gb-MicroSD-Bundle-Mint-Nickel/dp/B0036VLT28

My interest in this would be taking a cell phone overseas and going through U.S. Customs and Immigration, where it’s been shown in the past that agents will hold onto devices for further screening. If I knew I could keep 128GB of storage hidden in a metal coin that passed through the baggage X-ray without issue, I would feel a greater sense of security. A card this capacity is practically as big as the hard drives in my home computer and work laptops. It’s really a fundamental change in the portability of a large quantity of personal data outside the series of tubes called the Interwebs. Knowing that stash could be kept away from prying eyes or the casual security of hosting providers would certainly give me more peace of mind.

Written by Eric Likness

March 10, 2014 at 3:00 pm

Posted in computers, flash memory, mobile, SSD

AnandTech | The Pixel Density Race and its Technical Merits

Description of a pixel (original caption in Italian) (Photo credit: Wikipedia)

If there is any single number that people point to for resolution, it is the 1 arcminute value that Apple uses to indicate a “Retina Display”.

via AnandTech | The Pixel Density Race and its Technical Merits.

Earlier on in my current job, I had to try to recommend the resolution people needed to get a good picture from a scanner or a digital camera. As we know, the resolution arms race knows no bounds: first in scanners, then in digital cameras. The same is true now for displays. How fine is fine enough? Is it noticeable? Is it beneficial? The technical limits that enforce lower resolution are usually tied to costs. A consumer-level product has to fit into a narrow price range, and the perceived benefit of “higher quality” or sharpness is rarely enough to get someone to spend more. But as phones can be upgraded for free and printers and scanners are now commodity items, you just keep slowly migrating up to the next model at little to no entry cost. And everything is just “better”: all higher-res, and therefore by association higher quality, sharper, etc.

I used to quote, or try to pin down, a rule of thumb I found once regarding the acuity of the human eye. Some of this was just gained by noticing things when I started out using Photoshop and trying to print to imagesetters and laser printers. At some point in the past someone decided 300 dpi is what a laser printer needed in order to reproduce text on letter-size paper. As for displays, I bumped into a quote from an IBM study on visual acuity indicating the human eye can discern display pixels up to around 225 ppi. I tried many times to find the actual publication where that appears so I could cite it, but no luck; I only found it as a footnote on a web page from another manufacturer. Now in this article we get far more extensive stats on human vision than that vague footnote from all those years ago.
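
For reference, here is the arithmetic behind the 1-arcminute “Retina” rule of thumb the AnandTech piece starts from (my own back-of-the-envelope, not their numbers): the pixel density at which one pixel subtends one arcminute depends entirely on viewing distance.

```python
import math

def retina_ppi(viewing_distance_inches):
    """Pixel density at which one pixel subtends 1 arcminute of the eye's view."""
    one_arcminute = math.radians(1 / 60)
    pixel_size = viewing_distance_inches * math.tan(one_arcminute)
    return 1 / pixel_size

for d in (10, 12, 18, 24):  # phone, phone, tablet, desktop distances
    print(f"{d} in -> ~{retina_ppi(d):.0f} ppi")
# 10 in -> ~344 ppi, 12 in -> ~286 ppi, 18 in -> ~191 ppi, 24 in -> ~143 ppi
```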

What can one conclude from all the data in this article? Just the same thing: resolution arms races are still being waged by manufacturers. This time, however, it’s in mobile phones, not printers, not scanners, not digital cameras. Those battles were fought and now there’s damned little product differentiation. Mobile phones will fall into that pattern and people will be less and less Apple fanbois or Samsung fanbois. We’ll all just upgrade to a newer version of whatever phone is cheap and expect to always get the increased spec hardware, higher resolution, better quality, all that jazz. It is one more case where everything old is new again. My suspicion is we’ll see this happen again when true VR goggles hit the market, with real competitors attempting to gain advantage through technical superiority or more research and development. Bring on the VR Wars, I say.

Written by Eric Likness

February 17, 2014 at 3:00 pm

Posted in art, gpu, mobile

Apple, Google Just Killed Portable GPS Devices | Autopia | Wired.com

Note: this is a draft of an article I wrote back in June when Apple announced it was going to favor its own Maps app over Google Maps and take Google Maps out of the App Store altogether. This blog went on hiatus just two weeks after that. A whirlwind of staff changes occurred at Apple as a result of the iOS Maps debacle. Top people were let go, not least Scott Forstall, in some people’s view the heir apparent to Steve Jobs. He was not popular, very much a jerk, and when asked by Tim Cook to co-sign the mea culpa Apple put out over the poor performance and quality of iOS Maps, he wouldn’t sign it. So goodbye Scott, hello Google Maps. Somehow Google and Apple are now in a period of detente over Maps, and Google Maps has returned to the App Store. Who knew so much could happen in six months, right?

Garmin told Wired in a statement: “We think that there is a market for smartphone navigation apps, PNDs [Personal Navigation Devices] and in-dash navigation systems as each of these solutions has their own advantages and use case limitations and ultimately it’s up to the consumer to decide what they prefer.”

via Apple, Google Just Killed Portable GPS Devices | Autopia | Wired.com.

That’s right: mapping and navigation are now just one more app in a universe of software you can run on your latest-generation iPod Touch or iPhone. I suspect that Maps will only be available on the iPhone, as that was a requirement previously placed on the first-gen Maps app on iOS. It would be nice if there were a lower-threshold entry point for participation in the Apple Maps universe.

But I do hear one or two criticisms regarding Apple’s attempt to go its own way, starting with Google’s technology and data-set lead (you know, all those cars driving around photographing streets?). Apple has to buy that data from others; it isn’t going to start from scratch and attempt to re-create Google’s Street View data set. Which means Street View-style imagery won’t be a Maps feature, probably for quite some time. Android’s own Google Maps app includes turn-by-turn navigation AND Street View built right in. It’s just there. How cool is that? You get the same experience on the mobile device as the one you get working in a web browser on a desktop computer.

In this battle between Google and Apple, the pure-play personal navigation device (PND) manufacturers are losing share. I glibly suggested in a tweet yesterday that Garmin needs to partner up with Apple and help out with its POI and map datasets so that both could potentially benefit. It would be cool if a partnership could be struck that gave Apple features without necessarily stealing market share from the PNDs, one that somehow raised all boats equally. Maybe a partnership to create a Street View-like add-on for everyone’s mapping datasets would be a good start. That would help level the playing field between Google and the rest of the world.

Written by Eric Likness

December 15, 2012 at 12:22 pm

Posted in google, gpu, mobile, navigation, technology

Doc Searls Weblog · Won and done

Doc Searls (Photo credit: Wikipedia)

This tells me my job with foursquare is to be “driven” like a calf into a local business. Of course, this has been the assumption from the start. But I had hoped that somewhere along the way foursquare could also evolve into a true QS app, yielding lat-lon and other helpful information for those (like me) who care about that kind of thing. (And, to be fair, maybe that kind of thing actually is available, through the foursquare API. I saw a Singly app once that suggested as much.) Hey, I would pay for an app that kept track of where I’ve been and what I’ve done, and made  that data available to me in ways I can use.

via Doc Searls Weblog · Won and done.

foursquare as a kind of LifeBits, I think, is what Doc Searls is describing: a form of self-tracking a la Stephen Wolfram or Gordon Bell. Instead, foursquare is the carrot being dangled to lure you into giving your business to a particular retailer. After that you accumulate points for numbers of visits and possibly unlock rewards for your loyalty. But foursquare no doubt accumulates a lot of other data along the way that could be used for the very purpose Doc Searls was hoping for.

Gordon Bell’s work at Microsoft Research bootstrapping the MyLifeBits project is a form of memory enhancement, but also a log of personal data that can be analyzed later. The collection, or “instrumentation,” of one’s environment is what Stephen Wolfram has accomplished by counting things over time. Not to say it’s simpler than MyLifeBits, but it is in some ways lighter-weight data (instead of videos and pictures: mouse clicks, tallies of email activity, times of day, etc.). There is no doubt that foursquare could build a for-profit service for paying users, collecting this location data, serving it up to subscribers and letting them analyze it after the fact.

I firmly believe a form of MyLifeBits could be aggregated across a wide range of free and paid services, along with personal instrumentation and data collecting of the kind Stephen Wolfram does. If there’s one thing I’ve learned reading stories about inventions like these from MIT’s Media Lab, it’s that it’s never an either/or proposition. You don’t have to adopt only Gordon Bell’s technology or Stephen Wolfram’s techniques or even foursquare’s own data. You can use all of them, or just pick and choose the ones that suit your personal data collection needs. Then you get to slice, dice and analyze to your heart’s content. What you do with it after that is completely up to you and should be considered as personal as any legal documents or health records you already have.

Which takes me back to an article I wrote some time ago in reference to Jon Udell calling for a federated LifeBits-type service. It wouldn’t be constrained to one kind of data; potentially all the LifeBits would be aggregated, with new repositories for the stuff that must be locked down and private. So add Doc Searls to the list of bloggers and long-time technology writers who see an opportunity. Advocacy (in the case of Doc’s experience with foursquare) for sharing unfiltered data with the users on whom the data is collected is one step in that direction. I feel Jon Udell is also an advocate for users gaining access to all that collected and aggregated data. But as Jon Udell asks, who is going to be the first to offer this up as a pay-for service in the cloud, where for a fee you can access your lifebits aggregated into one spot (foursquare, twitter, facebook, gmail, flickr, photostream, mint, eRecords, etc.) so that you don’t spend your life logging on and off from service to service to service? Aggregation could be a beautiful thing.
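
As a purely hypothetical sketch of what that aggregation might look like under the hood (the record shape and the example feeds here are invented for illustration, not any real export format or API), the core of a personal LifeBits store is just a merge of per-service event feeds into one timeline you control:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, List, Optional

@dataclass
class LifeBit:
    timestamp: datetime
    source: str                   # e.g. "foursquare", "twitter", "flickr"
    summary: str                  # human-readable description of the event
    lat: Optional[float] = None   # location, when the source provides one
    lon: Optional[float] = None

def aggregate(*feeds: Iterable[LifeBit]) -> List[LifeBit]:
    """Merge any number of per-service feeds into one ordered timeline."""
    timeline = [bit for feed in feeds for bit in feed]
    return sorted(timeline, key=lambda b: b.timestamp)

# Example with made-up records standing in for real service exports:
checkins = [LifeBit(datetime(2012, 6, 1, 9, 30), "foursquare",
                    "Coffee shop check-in", 42.03, -93.62)]
tweets = [LifeBit(datetime(2012, 6, 1, 10, 15), "twitter",
                  "Posted about ARM chip designs")]
for bit in aggregate(checkins, tweets):
    print(bit.timestamp, bit.source, "-", bit.summary)
```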

Image representing Foursquare (Image via CrunchBase)

Written by Eric Likness

June 7, 2012 at 3:00 pm

Intel looks to build ultra-efficient mobile chips Apple can't ignore

Paul Otellini, CEO of Intel (Photo credit: Wikipedia)

During Intel's annual investor day on Thursday, CEO Paul Otellini outlined the company's plan to leverage its multi-billion-dollar chip fabrication plants, thousands of developers and industry sway to catch up in the lucrative mobile device sector, reports Forbes.

via Intel looks to build ultra-efficient mobile chips Apple can't ignore (Apple Insider)

But what you are seeing here is a form of Fear, Uncertainty and Doubt (FUD) being spread about to sow the seeds of mobile Intel processor sales. The doubt is not as obvious as questioning the performance of ARM chips, or the ability of manufacturers like Samsung to meet their volume targets and reject rates for each new mobile chip. No, it’s more subtle than that, and only noticeable to people who know details like which design rule Intel is currently using versus the ones used by Samsung or TSMC (Taiwan Semiconductor Manufacturing Co.). Intel is just now releasing its next-gen 22nm chips while companies like Samsung are still trying to recoup their investment in 45nm and 32nm production lines. Apple is just beginning to sample some 32nm chips from Samsung in iPad 2 and Apple TV products; its current flagship iPad and iPhone both use a 45nm chip produced by Samsung. Intel is trying to say that the older-generation technology, while good, doesn’t have the weight of Intel’s massive investment in next-generation chip technology behind it. The new chips will be smaller, more energy efficient and less expensive, all the things needed to make a higher profit on the consumer devices using them. However, Intel doesn’t do ARM chips; it has Atom, and that is the one thing that has hampered any big design wins in cellphones or tablets to date. At any given design rule, ARM chips almost always use less power than a comparably sized Atom chip from Intel. So whether it’s really an attempt to spread FUD can be debated one way or another. But the message is clear: Intel is trying to fight back against ARM. Why? Let’s turn back the clock to March of this year and a previous article also appearing in Apple Insider:

Apple could be top mobile processor maker by end of 2012 (Apple Insider, March 20, 2012)

This article is referenced in the original article quoted at the top of the page, and it points out why Intel is trying to get Apple to take notice of Intel’s own mobile chip commitments. Apple designs its own chips and contracts the manufacturing out to a foundry. To date Samsung has been the sole source of the A-series processors used in iPhone/iPod/iPad devices, while Apple tries to get TSMC up to speed as a second source. Meanwhile, sales of Apple devices continue to grow handsomely in spite of these supply limits. More important to Intel is that blistering growth in spite of being on older foundry technology and design rules. Intel has a technological and investment advantage over Samsung now. It does not, however, have a chip that is BETTER than Apple’s in-house-designed ARM chip. That’s why the underlying message for Intel is that it has to make its Atom chip so much better than an A4, A5 or A5X at ANY design rule that Apple cannot ignore Intel’s superior design and manufacturing capability. Apple will still use Intel chips, but not in its flagship products until Intel achieves that much greater level of technical capability and sophistication in its mobile microprocessors.

Twin-track development plan for Intel’s expansion into smartphones (The Register, May 11, 2012)

Intel is planning a two-pronged attack on the smartphone and tablet markets, with dual Atom lines going down to 14 nanometers and Android providing the special sauce to spur sales. 

Lastly, Iain Thomson from The Register weighs in on what the underlying message from Intel really is. It’s all about the future of microprocessors for the consumer market. The emphasis in this article, however, is that Android OS devices, whether they be phones, tablets or netbooks, will be Intel’s way to compete AGAINST Apple. But again, it’s not Apple as such; it’s the microprocessor Apple is using in its best-selling devices that scares Intel the most. Intel has, since its inception, been geared towards the “mainstream” market, selling into enterprises and the consumer space for years. It has milked the desktop PC revolution it helped create, more or less starting with its forays into integrated microprocessors and chipsets. It reminds me a little of the old steel plants in the U.S. during the 1970s, as Japan was building NEW steel plants that used a much more energy-efficient design and a steel-making technology that produced a higher-quality product. Less expensive, higher-quality steel was only possible by building brand-new steel plants, and the old-line U.S. plants couldn’t justify the expense, so they just wrapped up and shut down operations all over the place. Intel, while it is able to make that type of investment in newer technology, is still not able to create an energy-saving mobile processor that will outperform an ARM-core CPU.

Written by Eric Likness

May 24, 2012 at 3:00 pm

AnandTech – The iPad 2,4 Review: 32nm Brings Better Battery Life

New A5 chip from Apple

This is a 32nm A5 CPU from a new model of Apple TV; the same CPU is being installed in a small number of iPad 2 units.

I would like to applaud Apple's 32nm migration plan. By starting with lower volume products and even then, only on a portion of the iPad 2s available on the market, Apple maintains a low profile and gets great experience with Samsung's 32nm HK+MG process.

via AnandTech – The iPad 2,4 Review: 32nm Brings Better Battery Life.

Anand Lal Shimpi at Anandtech.com does a great turn explaining some of the electrical engineering minutiae behind Apple’s unpublicized switch to a smaller design rule for some of its second-generation iPads. Specifically, this iPad’s firmware identifies it as the iPad 2,4 version, indicating a 32nm version of the Apple A5 chip. And boy howdy, is there a difference between the 45nm A5 and the 32nm A5 in the iPad 2.

Anand first explains the process technology involved in making the new chip (metal gate electrodes and high-dielectric-constant, or high-k, gate oxides). Both choices are aimed at keeping electricity from leaking through the transistor “switches” that populate the circuits on the processor. Because the high-k material insulates more effectively for a given capacitance, the gate oxide can be made physically thicker without weakening the transistor’s switching behavior, which sharply cuts gate leakage; the metal gate electrode replaces the usual polysilicon gate, which does not pair well with the high-k oxide. A great explanation, I think, of those two on-die changes in Samsung’s new 32nm design rules. Both changes help keep current from leaking all over the processor.
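
The relationship the high-k change exploits can be written down directly (a textbook parallel-plate relation, not something derived in the AnandTech piece): gate capacitance depends on the dielectric constant divided by the oxide thickness, so a higher-k material buys you a thicker, less leaky oxide at the same capacitance.

```latex
% Parallel-plate approximation for gate capacitance:
%   \kappa: dielectric constant, \varepsilon_0: vacuum permittivity,
%   A: gate area, t_{ox}: gate oxide thickness.
C_{ox} = \frac{\kappa \, \varepsilon_0 \, A}{t_{ox}}
% Holding C_{ox} fixed while raising \kappa from ~3.9 (SiO2) to roughly 25
% (a hafnium-based high-k film) lets t_{ox} grow by about the same factor,
% and gate tunneling leakage falls off steeply as the oxide gets thicker.
```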

What does this change mean? Well, the follow-up to that question is the set of benchmarks Anand runs in the rest of the article, checking battery life at each step of the way. Informally, it appears the iPad 2,4 gets roughly one extra hour of battery life compared to the original iPad 2,1 with the larger 45nm A5 chip. Graphics and CPU performance are exactly the SAME as the first-generation A5. So, as the article title indicates, this change was just a straightforward die shrink from 45nm to 32nm, and it no doubt helps validate the A5 architecture on the new production line’s process technology. That validation will absolutely be required to wedge the very large current-generation A5X CPU from the iPad 3 into a new iPhone in the fall of 2012.

But consider this: even as Apple and Samsung both refine and innovate on the ARM architecture for mobile devices, Intel is still the process technology leader, bar none. Intel has 22nm production lines up and running and is releasing Ivy Bridge CPUs on that design rule this summer (2012). While Intel doesn’t really compete in the mobile chip industry (there have been attempts in the past), it can at least tout having the densest, most power-efficient chips in the categories it dominates. I cannot help but wonder what kind of gains could be made if an innovator like Apple had access to an ARM chip foundry with all of Intel’s process engineering and optimization. What would an A5X look like at the 22nm design rule with all that power-efficiency and silicon process technology applied to it? How large would the die be? What kind of battery life would you see if you die-shrunk an A5X all the way down to 22nm? That to me is the Andy Grove 10X improvement I would like to see. Could we get 11-12 continuous hours of battery life on a cell phone? Could we see a cell phone with more CPU/graphics capability than the current generation of Xbox and PlayStation consoles? Hard to tell, I know, but it is just so darned much fun to think about.
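
For a rough sense of what such a shrink could buy, here is my own back-of-the-envelope (not Anandtech’s numbers): to first order a straight die shrink scales area with the square of the feature size, though real shrinks never quite hit that ideal because I/O, analog blocks and design rules don’t scale as cleanly.

```python
def area_ratio(from_nm, to_nm):
    """Ideal area scaling for a straight die shrink between process nodes."""
    return (to_nm / from_nm) ** 2

for target in (32, 22):
    print(f"45nm -> {target}nm: ~{area_ratio(45, target):.0%} of the original die area")
# 45nm -> 32nm: ~51% of the original die area
# 45nm -> 22nm: ~24% of the original die area
```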

Design rules at 45nm (left) and 32nm (right) indicate the scale being discussed in the Anandtech article.

Written by Eric Likness

May 17, 2012 at 3:00 pm

ARM creators Sophie Wilson and Steve Furber • reghardware

BBC Micro (Photo credit: Wikipedia)

Unsung Heroes of Tech: Back in the late 1970s you wouldn't have guessed that this shy young Cambridge maths student named Wilson would be the seed for what has now become the hottest-selling microprocessor in the world.

via Chris Bidmead: ARM creators Sophie Wilson and Steve Furber • reghardware.

This is an amazing story of how a small computer company in Britain was able to jump into the chip design business and accidentally create a new paradigm in low-power chips. It’s astounding what seemingly small groups can come up with: complete product categories unto themselves. The BBC Micro was the single most important project that kept the company going; it was produced as a learning aid for the BBC television show The Computer Programme, part of the BBC Computer Literacy Project. From that humble beginning of making the BBC Micro, Furber and Wilson’s ability to engineer a complete computer was well demonstrated.

But whereas the BBC Micro used an off-the-shelf MOS 6502 CPU, a later computer used a custom (bespoke) chip designed in-house by Wilson and Furber. This is the vaunted Acorn RISC Machine (ARM) used in the Archimedes desktop computer. And that one chip helped launch a revolution unto itself: the very first time they powered up a sample chip, the multimeter hooked up to it registered no power draw. At first one would think this was a flaw and ask, “What the heck is happening here?” But further inspection showed the multimeter was correct; the engineers discovered that the whole CPU was running off power leaking in from the surrounding logic circuits. Yes, that first sample ARM CPU in 1985 ran on a tenth of a watt of electricity. And that “bug” went on to become a feature in later generations of the ARM architecture.

Today we know the ARM CPU cores as licensed intellectual property that any chip maker can acquire and implement in their mobile processor designs. It has come to underpin many different designs from manufacturers as diverse as Qualcomm and Apple Inc. But none of it would ever have happened were it not for that somewhat surprising discovery of how power-efficient that first sample chip really was when it was plugged into a development board. So thank you, Sophie Wilson and Steve Furber: the designers and engineers of today stand on your shoulders the way you once stood on the shoulders of the people who designed the MOS 6502.

MOS 6502 microprocessor in a dual in-line package, an extremely popular 8-bit design (Photo credit: Wikipedia)

Written by Eric Likness

May 14, 2012 at 3:00 pm
