Posts Tagged ‘ARM’
On Tuesday, the company unveiled its new ARM Cortex-M0+ processor, a low-power chip designed to connect non-PC electronics and smart sensors across the home and office.
Previous iterations of the Cortex family of chips had the same goal, but with the new chip, ARM claims much greater power savings. According to the company, the 32-bit chip consumes just nine microamps per megahertz, an impressively low amount even for an 8- or 16-bit chip.
Lower power means a very conservative power budget, especially for devices connected to the network. And 32 bits is nothing to sneeze at, considering most manufacturers would pick a 16- or 8-bit chip to bring down the cost and the power budget too. According to this article the power savings are so great that in sleep mode the chip consumes almost no power at all. For this market Moore’s Law is paying big dividends, especially given the bonus of a 32-bit core. So not only will you get a very small, low-power cpu, you’ll have a much more diverse range of software that could run on it and take advantage of a larger memory address space as well. I think non-PC electronics could include things as simple as web cams or cellphone cameras. Can you imagine a CMOS camera chip with a whole 32-bit cpu built in? Makes you wonder not just what it could do, but what ELSE it could do, right?
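To put that nine-microamps-per-megahertz figure in perspective, here’s a back-of-envelope sketch in Python. Only the 9 µA/MHz number comes from the article; the clock speed, duty cycle, sleep current and battery capacity are my own illustrative assumptions:

```python
# Back-of-envelope battery life for a Cortex-M0+-class sensor node.
# Only the 9 uA/MHz figure is quoted; the rest are assumptions.
CURRENT_PER_MHZ_UA = 9.0    # active draw, microamps per MHz (quoted)
CLOCK_MHZ = 48.0            # assumed clock speed
DUTY_CYCLE = 0.01           # assume the cpu is awake 1% of the time
SLEEP_CURRENT_UA = 0.5      # assumed near-zero sleep-mode draw
BATTERY_UAH = 220_000.0     # typical CR2032 coin cell, in microamp-hours

active_ua = CURRENT_PER_MHZ_UA * CLOCK_MHZ                      # 432 uA while running
avg_ua = active_ua * DUTY_CYCLE + SLEEP_CURRENT_UA * (1 - DUTY_CYCLE)
days = BATTERY_UAH / avg_ua / 24

print(f"average draw {avg_ua:.2f} uA -> about {days:.0f} days on a coin cell")
```

At those assumed numbers a node runs for years on a coin cell, which is exactly why a 9 µA/MHz part matters for the “smart sensors everywhere” market.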
The term ‘Internet of Things’ is bandied about quite a bit as people dream about cpus and networks connecting ALL the things. What would be the outcome if your umbrella was connected to the Internet? What if ALL the umbrellas were connected? You could log all kinds of data: whether it was opened or closed, what the ambient temperature was. It would be like a portable weather station for anyone aggregating all the logged data. And the list goes on and on. Instead of just tire pressure monitors, why not also capture video of the tire as it is being used commuting to work? It could help measure tire wear and set up an appointment when you need a wheel alignment. It could determine how many times you hit potholes and suggest smoother alternate routes. That’s the kind of blue-sky, wide-open conjecture that is enabled by a 32-bit low/no-power cpu.
- ARM Upgrades Cortex-M0 Processor for Low-power Applications (pcworld.com)
- ARM Cortex-M0+ targets low power tech (slashgear.com)
As reported by Andrew Cunningham for AnandTech: We’ve known that Microsoft has been planning an ARM-compatible version of Windows since well before we knew anything else about Windows 8, but the particulars have often been obscured both by unclear signals from Microsoft itself and subsequent coverage of those unclear signals by journalists. Steven Sinofsky has taken to the Building Windows blog today to clear up some of this ambiguity, and in doing so has drawn a clearer line between the version of Windows that will run on ARM, and the version of Windows that will run on x86 processors.
That’s right, ARM cpus are in the news again, this time with info on the planned version of Windows 8 for mobile CPUs. And it is a separate version of the Windows OS, not unlike Windows CE, Windows Mobile or Windows Embedded. They are all called Windows, but are very different operating systems. The product will be called Windows on ARM (WOA) and is only just now being tested internally at Microsoft, with a substantial development period and a release to developers still to be announced.
One upshot of this briefing from Sinofsky was that the mobile-centric Metro interface will not be the only desktop available on WOA devices. You will also be able to use the traditional-looking Windows desktop without incurring a big battery performance hit. That makes it a little more palatable to the wider range of users who might consider buying a phone, tablet or Ultrabook running an ARM cpu and the new Windows 8 OS. Along the same lines, there will be a version of the Office apps that will also run on WOA devices, including the big three: Word, Excel and PowerPoint. These versions will be optimized for mobile devices with touch interfaces, which means you should buy the right version of Office for your device (if it doesn’t come pre-installed).
Lastly, the optimization for and linking to specially built Windows on ARM devices means you won’t be able to install the OS on just ‘any’ hardware you like. Similar to Windows Mobile, you will need to purchase a device designed for the OS, most likely with a version pre-installed from the factory. This isn’t like a desktop OS built to run on many combos of hardware with random devices installed; it’s going to be much more specific and refined than that. Microsoft wants to constrain and coordinate the look and feel of the OS across many mobile devices so that an average person can expect it to work and look similar no matter who manufactures the device. One engineering choice that will assist with this goal is addressing the variations in devices with so-called “Class Drivers” to support the chipsets and interfaces in a WOA device. This is a less device-specific way of supporting, say, a display panel or keyboard without having to know every detail. A WOA device will have to be designed and built to a spec provided by Microsoft, which will then provide a generic ‘class driver’ for that keyboard, display panel, USB 3.0 port, etc. So unlike Apple it won’t necessarily be a limited set of hardware components, but they will have to meet the specs to be supported by the Windows on ARM OS. This no doubt will make it much easier for Microsoft to keep its OS up to date, as compared to, say, the Google Android universe, where the device manufacturers have to provide the OS updates (which in fact doesn’t happen often, as they prefer people to upgrade their device to get new OS releases).
- Building Windows for the ARM processor architecture (Steven Sinofsky @ MSDN)
- Microsoft Unveils Its Next-Generation OS, Windows 8 (bits.blogs.nytimes.com)
- Microsoft plans Windows 8 ARM presentation at Mobile World Congress (slashgear.com)
- Live from the Windows 8 on ARM Preview event! (armdevices.net)
. . . 20 nm may represent an inflection point in which it will be necessary to transition from the metal-oxide-semiconductor field-effect transistor (MOSFET) to Fin-Shaped Field Effect Transistors (FinFET), or 3D transistors, which Intel refers to as tri-gate designs and which are set to debut with the company’s 22 nm Ivy Bridge product generation.
Three-dimensional transistors are in the news again. Previously Intel announced they were adopting the new design for their next-generation, next-smaller design rule for the Ivy Bridge generation of Intel CPUs. Now ARM is also doing work to integrate similar technology into their ARM cpu cores. No doubt the need to lower thermal design power (TDP) while maintaining clock speed is driving this move to refine and narrow the design rules for the ARM architecture. Knowing Intel is still the top research and development outfit for silicon semiconductors would give pause to anyone directly competing with them, but ARM is king of the low-power semiconductor, and keeping pace with Intel’s design rules is an absolute necessity.
I don’t know how quickly ARM is going to be able to get a licensee to jump onboard and adopt the new design. Hopefully a large operation like Samsung can take this on and get the chip into its design, development and production lines at a chip fabrication facility as soon as possible. Likewise, contract manufacturers like Taiwan Semiconductor Manufacturing Company (TSMC) should try to get this chip into their facilities quickly too. That way the cell-phone and tablet markets can benefit as well, since they use a lot of ARM-licensed cpu cores and similar intellectual property in their shipping products. My interest is not so much in the competition between Intel and ARM for low-power computing as in the overall performance of any single ARM design once it’s been in production for a while and optimized the way Apple optimizes its custom CPUs built on ARM-licensed cpu cores. The single most outstanding achievement of Apple in the design and production of the iPad is the battery charge duration of 10 hours. To date, that achievement has not been beaten, even by other manufacturers and products that also license ARM intellectual property. So if the ARM design is good and can be validated and prototyped with useful yields quickly, Apple will no doubt be the first to benefit, and by way of Apple so will the consumer (hopefully).
- Intel has 14-nm process running in the lab (nextbigfuture.com)
- The top EDA360 Insider blogs of 2011. A Baker’s Dozen in case you missed them (eda360insider.wordpress.com)
- 3-d Transistors: Redefining the Transistor (mgitecetech.wordpress.com)
You can bet that if ARM servers suddenly look like they will be taking off that Red Hat and Canonical will kick in some help and move these Xen and KVM projects along. Server maker HP, which has launched the “Redstone” experimental server line using Calxeda’s new quad-core EnergyCore ARM chips, might also help out. Dell has been playing around with ARM servers, too, and might help with the hypervisor efforts as well.
This is an interesting note: some open source hypervisor projects are popping up now that the ARM Cortex A15 has been announced and some manufacturers are doling out development boards. What it means longer term is hard to say, other than it will potentially be a boon to manufacturers using the A15 in massively parallel boxes like Calxeda’s, or to those trying to ‘roll their own’ ARM-based server farms who want the flexibility of virtual machines running under a hypervisor. However, the argument remains: why use virtual servers on massively parallel cpu architectures when a 1:1 cpu-core-to-app ratio is more often preferred?
However, I would say old habits of application and hardware consolidation die hard, and virtualization is going to be expected because that’s what ‘everyone’ does in their data centers these days. So knowing that a hypervisor is available will help foster more hardware sales of what will most likely be niche products for very specific workloads (i.e. Calxeda, Quanta SQ-2, SeaMicro). And who knows, maybe this will encourage more manufacturers, or even giant data center owners (like Apple, Facebook and Google), to attempt experiments rolling their own Cortex A15 environments, knowing there’s a ready-made hypervisor out there that they can compile on the new ARM chip.
However, I think all eyes are really still going to be on the next generation, ARM version 8, with its full 64-bit memory addressing and instruction set. Toolsets nowadays are developed in house by a lot of the datacenters, and the dominant instruction set is Intel’s x86-64 (x64), which means the migration to 64 bits has already happened. Going back to 32 bits just to gain the advantage of the lower-power ARM architecture is far too costly for most. Porting from x64 to the 64-bit ARM architecture, on the other hand, is something more datacenters might be willing to do if the cost/benefit ratio is high enough to justify cross-compiling and debugging. So legacy management software toolsets are really going to drive a lot of the testing and adoption decisions by data centers looking at their workloads and seeing if ARM cpus fit their longer-term goal of saving money by using less power.
- HP and Calxeda’s Moonshot ARM servers will bring all the boys to the yard (video) (engadget.com)
- ARM V8 Architecture (perspectives.mvdirona.com)
Samsung also previewed a 2 GHz dual-core ARM Cortex-A15 application processor, the Exynos 5250, also designed on its 32-nm process. The company said that the processor is twice as fast as a 1.5 GHz A9 design without having to jump to a quad-core layout.
More news on the release dates and the details of Samsung’s version of the ARM Cortex A15 cpu for mobile devices. Samsung is helping ramp up performance by shrinking the design rule down to 32 nm, and in this A15 cpu it is dropping two of the four possible cores. That choice makes room for the integrated graphics processor. It’s a deluxe system-on-a-chip that will no doubt give any A9-equipped tablet a run for its money. Indications at this point from Samsung are that the A15 will be a tablet-only cpu, not adapted to smartphone use.
Early in the fall there were some indications that the memory addressing of the Cortex A15 would be enhanced to allow larger memories (greater than 4 GBytes) to be added to devices. And indeed the Cortex A15 includes the Large Physical Address Extension (LPAE), which widens physical addressing to 40 bits. However, the instructions are still the same 32-bit instruction set longtime users of the ARM architecture are familiar with, and as always are backward compatible with previous-generation software. It would appear that the biggest advantages of moving to Cortex A15 are the potential for higher clock rates, decent power management and room to grow on the die for embedded graphics.
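The addressing arithmetic is easy to check. A quick sketch in Python, assuming nothing beyond the bit widths mentioned above:

```python
# 32-bit ARM with LPAE: virtual addresses stay 32 bits, but physical
# addresses widen to 40 bits, so the OS can manage far more RAM even
# though any single process still sees at most 4 GiB.
VIRT_BITS = 32
PHYS_BITS = 40   # with the Large Physical Address Extension

virt_space = 2 ** VIRT_BITS   # bytes visible to one process
phys_space = 2 ** PHYS_BITS   # bytes of RAM the OS can address

print(f"per-process virtual space: {virt_space // 2**30} GiB")
print(f"physical space with LPAE:  {phys_space // 2**40} TiB "
      f"({phys_space // virt_space}x larger)")
```

So LPAE buys the operating system a 256-fold bigger pool of physical memory without touching the 32-bit instruction set, which is why it’s a stopgap rather than a true 64-bit transition.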
Apple, in its designs using the Cortex processors, has stayed one generation behind the rest of the manufacturers and used all possible knowledge and brute force to eke out a little more power savings. Witness the iPad’s battery life, which still tops most other devices on the market. By creating a fully customized Cortex A8 design, Apple has absolutely set the bar for power management on the die, and on the motherboard as well. If Samsung decides to go the route of pure power and clock, but sacrifices two cores to get the power level down, I just hope they can justify that effort with equally amazing advancements in the software that runs on this new chip. Whether it’s a game or, better yet, a snazzy user interface, they need to differentiate themselves and show off their new cpu.
- How fast can an ARM Cortex-A15 run? 2GHz in Samsung’s 32nm process technology. That’s fast! (eda360insider.wordpress.com)
Qualcomm CEO Paul Jacobs, speaking during the San Diego semiconductor company’s annual analyst day in New York, said Qualcomm is currently working with Microsoft to ensure that the upcoming Windows 8 operating system will run on its ARM-based Snapdragon SoCs.
Windows 8 is a-comin’ down the street. And I bet you’ll see it sooner rather than later, maybe as early as June on some products. The reason, of course, is that the tablet market is sucking all the air out of the room and Microsoft needs a win to keep mindshare favorable to its view of the consumer computer market. Part of that drive is fostering a new level of cooperation with system-on-chip manufacturers who until now have been devoted to the mobile phone and smartphone markets. Now everyone wants a great big Microsoft hope to conquer the Apple iPad in the tablet market, and this may be Microsoft’s only chance to accomplish that in the coming year.
Forrester Research, however, just 2 days ago predicted the Windows 8 tablet will be dead on arrival:
IDG News Service - Interest in tablets with Microsoft’s Windows 8 is plummeting, Forrester Research said in a study released on Tuesday.
Key to making a mark in the tablet computing market is content, content, content. Performance and specs alone will not create a Windows 8 tablet market in what is an Apple-dominated tablet marketplace, as the article says. It also appears previous players in the failed PC Tablet market will make a valiant second attempt, this time using Windows 8 (I’m thinking Fujitsu, HP and Dell, according to this article).
- Expect the First Windows 8 Snapdragon PC Late 2012 (tomshardware.com)
- Forrester Says Windows 8 Tablets Are Dead-on-Arrival (tomshardware.com)
Now, you’re probably thinking, isn’t Xeon the exact opposite of the kind of extreme low-power computing envisioned by HP with Project Moonshot? Surely this is just crazy talk from Intel? Maybe, but Walcyzk raised some valid points that are worth airing.
via Cloudline | Blog | Intel Responds to Calxeda/HP ARM Server News: Xeon Still Wins for Big Data.
So Intel gets an interview with a Condé Nast writer for a sub-blog of Wired.com. I doubt too many purchasers or data center architects consult Cloudline@Wired.com. But all the same, I saw through the many thinly veiled bits of handwaving and old saws from Intel saying, in effect, “Yes, this exists, but we’re already addressing it with our existing product lines . . .” So I wrote a comment on this very article, especially regarding a throwaway line mentioning the ‘future’ of the data center and the direction the data center and cloud computing market was headed. However, the moderator never published the comment. In effect, I raised the question: whither Tilera? And the Quanta SQ-2 server based on the Tilera chip?
Aren’t they exactly what the author John Stokes describes as a network of cores on a chip? And given the scale of Tilera’s own product plans going into the future, and the fact that they are not just concentrating on network gear but on actual compute clouds too, I’d say both Stokes and Walcyzk are asking the wrong questions and directing our attention in the wrong direction. This is not a PR battle but a flat-out technology battle. You cannot win it with words and white papers; it requires benchmarks, deployments and case histories. Technical merit and superior technology will differentiate the players in the cloud-in-a-box race. That hasn’t been the case in the past as Intel battled AMD in the desktop consumer market. In the data center, Fear, Uncertainty and Doubt is the only weapon Intel has.
And I’ll quote directly from John Stokes’s article here describing EXACTLY the kind of product that Tilera has been shipping already:
“Instead of Xeon with virtualization, I could easily see a many-core Atom or ARM cluster-on-a-chip emerging as the best way to tackle batch-oriented Big Data workloads. Until then, though, it’s clear that Intel isn’t going to roll over and let ARM just take over one of the hottest emerging markets for compute power.”
The key phrase here is cluster on a chip: in essence, exactly what Tilera has strived to achieve with its TILE64-based architecture. To review, here are previous blog entries from this website following the announcements and timelines published by Tilera:
- Tilera throws gauntlet at Intel’s feet (go.theregister.com)
- Tilera routs Intel, AMD in Facebook bakeoff (go.theregister.com)
- The ARM v. Intel fight just got good (gigaom.com)
- ARM daddy simulates human brain with million-chip super – The Register (carpetbomberz.com)
- Diving into Big Data (blogs.cisco.com)
- Jason Gerard DeRose: Calxeda is more disruptive than you might think (jderose.blogspot.com)
Calxeda is producing 4-core, 32-bit, ARM-based system-on-chip (SoC) designs, developed from ARM’s Cortex A9. It says it can deliver a server node with a thermal envelope of less than 5 watts. In the summer it was designing an interconnect to link thousands of these things together. A 2U rack enclosure could hold 120 server nodes: that’s 480 cores.
HP signing on as an OEM for Calxeda-designed equipment is going to push ARM-based massively parallel server designs into a lot more data centers. Add to this the announcement of the new ARM Cortex A15 cpu and its timeline for extended memory addressing, and you have a battle royale shaping up against Intel. Currently the Intel Xeon is the preferred choice for applications requiring large amounts of DRAM to hold whole databases and memcached webpages for lightning-quick fetches. On the other end of the scale are the 4-core ARM chips dissipating a mere 5 watts per node. Intel is trying to drive down the thermal design power of its chips, even resorting to 64-bit Atom chips to keep the memory-addressing advantage. But the timeline for decreasing that TDP doesn’t quite match up to the ARM 64-bit timeline. So I suspect ARM, and Calxeda, will have the advantage for quite some time to come.
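Run the numbers from the Calxeda quote above and the density case makes itself. A quick sketch using the quoted figures; the 42U rack is my assumption:

```python
# Rack-level density math for Calxeda-style ARM nodes, using the figures
# quoted above: 4 cores per node, 120 nodes per 2U shelf, < 5 W per node.
CORES_PER_NODE = 4
NODES_PER_2U = 120
WATTS_PER_NODE = 5      # upper bound on the quoted thermal envelope
RACK_UNITS = 42         # assumed standard rack

shelves = RACK_UNITS // 2
nodes = shelves * NODES_PER_2U
cores = nodes * CORES_PER_NODE
kilowatts = nodes * WATTS_PER_NODE / 1000

print(f"{shelves} shelves -> {nodes} nodes, {cores} cores, <= {kilowatts:.1f} kW")
```

That’s on the order of ten thousand cores per rack inside a power envelope a handful of big Xeon boxes would burn through on their own, which is the whole pitch.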
While I had hoped the recent Cortex A15 announcement was also going to usher in a fully 64-bit capable cpu, the chip will at least be able to fake larger memory access. The datapath I remember being quoted was 40 bits wide, and that can be further extended using software. It doesn’t seem to have discouraged HP at all, who are testing the Calxeda-designed prototype EnergyCore evaluation board. This is all new territory for both Calxeda and HP, so a fully engineered and designed prototype is absolutely necessary to get the project off the ground. My hope is HP can do a large-scale test and figure out some of the software configuration optimization that needs to occur to gain an advantage in power savings, density and speed over an Intel Atom server (like SeaMicro’s).
- SeaMicro pushes Atom smasher to 768 cores in 10U box – The Register (carpetbomberz.com)
- The opposite of virtualization: Calxeda’s new quad-core ARM part for cloud servers (arstechnica.com)
- ARM server hero Calxeda lines up software super friends – The Register (carpetbomberz.com)
Qualcomm remains the only active player in the smartphone/tablet space that uses its architecture license to put out custom designs. The benefit to a custom design is typically better power and performance characteristics compared to the more easily synthesizable designs you get directly from ARM. The downside is development time and costs go up tremendously.
I’m very curious to see how the different ARM-based processors fare against one another in each successive generation, especially with the move to the 64-bit ARM v8, none of which will see a quick implementation in a handheld mobile device. ARM v8 is a long way off yet, but in spite of that next big thing in ARM-designed cores, there’s a ton of incremental improvement and evolutionary progress being made on current-generation ARM cores. The Cortex A8 and A9 have a lot of life in them for the foreseeable future, including die shrinks that allow either faster clock speeds, or constant clock speeds with lower power drain and a lower thermal design power (TDP).
Apple is also moving steadily toward a die shrink in order to cement the gains made in its A5 chip design. Taiwan Semiconductor Manufacturing Company (TSMC) is the biggest partner in this direction and is attempting to run the next iteration of Apple mobile processors on its state-of-the-art 22 nanometer design rule process.
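The reason a die shrink can be spent on either clock speed or battery life comes down to the first-order dynamic power relation P ≈ C·V²·f. A sketch with purely illustrative numbers; none of these are Samsung’s or Apple’s actual figures:

```python
def dynamic_power(cap_f, volts, freq_hz):
    """First-order dynamic switching power of a CMOS chip: P = C * V^2 * f."""
    return cap_f * volts ** 2 * freq_hz

# Illustrative values only: a shrink lowers the switched capacitance and
# lets the supply voltage drop a notch.
old_node = dynamic_power(1.0e-9, 1.2, 1.0e9)           # baseline process
shrink_same_clock = dynamic_power(0.7e-9, 1.0, 1.0e9)  # spend it on battery life
shrink_faster = dynamic_power(0.7e-9, 1.0, 2.0e9)      # spend it on clock speed

print(f"old node: {old_node:.2f} W")
print(f"shrink, same clock: {shrink_same_clock:.2f} W")
print(f"shrink, double clock: {shrink_faster:.2f} W")
```

Because voltage enters squared, even a modest supply drop at the new node roughly halves the power at constant clock, or lets you double the clock and land back near the old power budget. That’s the lever Apple and Samsung are both pulling.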
- Have you considered the Android factor in multi-core SoC processor management? (eda360insider.wordpress.com)
- Qualcomm reveals more Snapdragon 4 SoC details in a White Paper. Want to know what’s inside? (eda360insider.wordpress.com)
While everyone in the IT racket is trying to figure out how many Intel Xeon and Atom chips can be replaced by ARM processors, Steve Furber, the main designer of the 32-bit ARM RISC processor at Acorn in the 1980s and now the ICL professor of engineering at the University of Manchester, is asking a different question, and that is: how many neurons can an ARM chip simulate?
The phrase reminds me a bit of an old TV commercial that would air during the Saturday cartoons. Tootsie Roll brand lollipops had a center made out of Tootsie Roll. The challenge was to determine how many licks does it take to get to the center of a Tootsie Roll Pop? The answer was, “The World May Never Know”. And so it goes for the simulations large scale and otherwise of the human brain.
I also remember reading Stewart Brand’s 1987 book about the MIT Media Lab, and the Lab’s installation of a brand-new multi-processor supercomputer called The Connection Machine. Danny Hillis was the designer and author of the original concept of stringing together a series of small one-bit computer cores to act like ‘neurons’ in a larger array of cpus. The scale was designed to top out at around 65,536 (2^16) cores. At the time the MIT Media Lab only had the machine filled up 1/4 of the way, but was attempting to do useful work with it at that size. Hillis spun out of MIT to create a startup company called Thinking Machines (to reflect the neuron-style architecture he had pursued as a grad student). In fact, all of Hillis’s ideas stemmed from the research that led up to the original Connection Machine CM-1.
Spring forward to today and the sudden appearance of massively parallel, low-power servers: Calxeda using ARM chips, the Quanta SQ-2 using Tilera chips (Tilera also being an MIT spin-out), and similarly the SeaMicro SM10000-64, which uses Intel Atom chips in large quantity. And SeaMicro is making sales TODAY. It almost seems like a stereotypical case of an idea being way ahead of its time. So recognize the opportunity, because now the person directly responsible for designing the ARM chip is attacking the same problem Danny Hillis was all those years ago.
Personally, I would like to see Hillis join this program in some way, not as Principal Investigator but maybe as a background consultant. Nothing wrong with a few more eyes on the preliminary designs, especially with Hillis’s background in programming those old mega-scale computers. That is the true black art of trying to do a brain simulator at this scale. Steve Furber might just be able to make lightning strike twice (once for Acorn/ARM cpus and once more for simulating the brain in silicon).
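For a feel of what “simulating a neuron” on a small cpu actually involves, here is a minimal leaky integrate-and-fire model in Python. It is purely illustrative; the model, parameters and spike counts are not taken from the SpiNNaker project:

```python
def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Step a leaky integrate-and-fire neuron; return time steps where it fires."""
    potential, spikes = 0.0, []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # decay, then add input current
        if potential >= threshold:              # cross threshold: spike and reset
            spikes.append(t)
            potential = 0.0
    return spikes

# A constant drive of 0.4 per step fires this neuron every third step.
print(simulate_lif([0.4] * 10))  # [2, 5, 8]
```

Each update is just a multiply and an add, which is why even a modest ARM core can step thousands of these per millisecond; the hard part, as with the Connection Machine, is the wiring and programming of a million such cores at once.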
- Simulating the human brain’s networks (theswarm.wordpress.com)
- SeaMicro Crams 768 Atom Cores in New Cloud Server (pcworld.com)
- ARM says its SpiNNaker chip simulates 1,000 brain neurons (electronista.com)