Blog

  • Samsung: 2 GHz Cortex-A15 Exynos 5250 Chip

    Samsung also previewed a 2 GHz dual-core ARM Cortex-A15 application processor, the Exynos 5250, also designed on its 32-nm process. The company said that the processor is twice as fast as a 1.5 GHz A9 design without having to jump to a quad-core layout.

    via Samsung Reveals 2 GHz Cortex-A15 Exynos 5250 Chip.

    Official logo of the ARM processor architecture
    Image via Wikipedia

    More news on the release dates and the details of Samsung's version of the ARM Cortex A15 CPU for mobile devices. Samsung is ramping up performance by shrinking the design rule down to 32nm and, in this A15 part, dropping two of the four possible cores to make room for the integrated graphics processor. It's a deluxe system on a chip that will no doubt give any A9-equipped tablet a run for its money. Samsung's indications at this point are that this A15 will be a tablet-only CPU and will not be adapted for smartphone use.

    Early in the Fall there were indications that the memory addressing of the Cortex A15 would be enhanced to allow larger memories (greater than 4GBytes) to be added to devices. As it stands, memory addressing isn't a big issue: the Cortex A15 supports the Large Physical Address Extension (LPAE), which widens physical addressing to 40 bits. The instructions are still the same 32-bit instruction set longtime users of the ARM architecture are familiar with, and as always are backward compatible with previous-generation software. It would appear that the biggest advantages of moving to Cortex A15 are the potential for higher clock rates, decent power management, and room on the die for embedded graphics.
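
    To put those address widths in perspective, here is a minimal back-of-the-envelope sketch in Python (my own illustration, using only the bit widths mentioned above): LPAE widens the physical address to 40 bits, but a single 32-bit process still sees at most a 4 GB virtual address space.

    ```python
    # Back-of-the-envelope: 32-bit virtual addressing vs. 40-bit LPAE physical addressing.
    # The bit widths come from the post above; the rest is just arithmetic.

    GIB = 2**30

    virtual_bits = 32   # per-process virtual address space on a 32-bit ARM core
    lpae_bits = 40      # physical address width with the Large Physical Address Extension

    print(f"32-bit virtual address space: {2**virtual_bits // GIB:,} GiB per process")
    print(f"40-bit LPAE physical space  : {2**lpae_bits // GIB:,} GiB of addressable RAM")
    # -> 32-bit virtual address space: 4 GiB per process
    # -> 40-bit LPAE physical space  : 1,024 GiB of addressable RAM
    ```

    That distinction is why true 64-bit addressing still matters for some workloads, LPAE or not.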

    Apple, in its designs using the Cortex processors, has stayed one generation behind the rest of the manufacturers and used all possible knowledge and brute force to eke out a little more power savings. Witness the iPad, whose battery life still tops most other devices on the market. By creating a fully customized Cortex A8 design, Apple has absolutely set the bar for power management on the die and on the motherboard as well. If Samsung goes the route of pure power and clock, sacrificing two cores to keep the power level down, I just hope they can justify that effort with equally amazing advancements in the software that runs on this new chip. Whether it be a game or, better yet, a snazzy user interface, they need to differentiate themselves and show off their new CPU.

  • Super Tiny Computer Puts Android on Your TV, Laptop (with a side of Raspberry Pi)

    Early this year we got to see, through ARM-powered devices such as the Motorola Atrix, that it doesn't take even a netbook to run basic computing functions. At a live demonstration in New York City, FXI Technologies showed off the next evolution of that idea: an ARM-based computer on a USB stick without any of that extra smartphone or tablet baggage.

    via Super Tiny Computer Puts Android on Your TV, Laptop.

    An example of a newly minted Raspberry Pi motherboard

    As time marches onward, the term 'computer' becomes more and more diffuse. Consider the cell phone: it is no longer just a phone but a computer connected to a network that can act like a phone. Your TV is a computer that is also connected to a network, on which you can watch broadcasts or streamed videos, or attach a game console. Now what if you could turn any bit of electronics with a USB connector and a video display into a bona fide computer? Size is no limit when using a mobile CPU like an ARM chip. As Android evolves, I hope that efforts like Raspberry Pi show what can be done in a wholly Open Source context. FXI Technologies is showing us the way, but so are other efforts like the Raspberry Pi computer.

    I attended a workshop this past Summer sponsored by RedHat covering a wide range of topics, including Open Source communities. The main technical person leading the workshop also volunteers some of his time to Mozilla, specifically Mozilla targeted at ARM CPUs like the Raspberry Pi computer. He told us a little bit about how astoundingly cheap that device is, as it was originally intended as the main board for the Sling Box time-shifting TV controller. The first-generation design was meant to be as low cost as possible, but it didn't quite make it to market. Succeeding generations of the original design did make it to market, as did the custom ARM CPU that Broadcom created for the original design. That CPU has now given birth to the Raspberry Pi project using the Broadcom BCM2835 System on a Chip (SoC). This is an ARM11-based core, which puts it a generation behind Apple's A4 and A5 iPhone/iPad CPUs, which use Cortex-A8 and now Cortex-A9 cores for their central processing units. Cost is of course cheap compared to anything else calling itself a computer or a tablet, and that is the reasoning behind making the board layout open source along with targeting a Linux distribution specifically for this computer.

    USB flash drive
    Image via Wikipedia
  • AnandTech – Applied Micro's X-Gene: The First ARMv8 SoC

    APM expects that even with a late 2012 launch it will have a 1 – 2 year lead on the competition. If it can get the X-Gene out on time, hitting power and clock targets (both very difficult goals), the headstart will be tangible. Note that by the end of 2012 we'll only just begin to see the first Cortex A15 implementations. ARMv8 based competitors will likely be a full year out, at least.

    via AnandTech – Applied Micro's X-Gene: The First ARMv8 SoC.

    Chip Diagram for the ARM version 8 as implemented by APM

    It's nice to get confirmation of the production timelines for the Cortex A15 and the next-generation ARM version 8 architecture. Don't expect to see shipping chips, much less finished products using those chips, until well into 2013 or even later. As for the 4-core ARM Cortex A15, finished product will not appear until well into 2012. This means that if Intel is able to scramble, it has time to further refine its Atom chips to match the power levels and Thermal Design Point (TDP) of the competing ARM version 8 architecture. The goal seems to be to jam more cores into each CPU socket than is currently done on the Intel architecture (up to almost 32 in one of the graphics presented with the article).

    The target we are talking about is 2W per core @ 3 GHz, and it is going to be a hard, hard target to hit for any chip designer or manufacturer. One can only hope that TSMC can help APM get a finished chip out the door on its finest-ruling production lines (although an update to the article indicates it will ship on 40nm to get it out the door quicker). The finer the ruling of signal lines on the chip, the lower the TDP and the higher they can run the clock rate. If an ARM version 8 design can accomplish the goal of 2W per CPU core @ 3 GHz, I think everyone will be astounded. And if this same chip can be sampled at the earliest prototype stages by a current ARM server manufacturer, say Calxeda or even SeaMicro, then hopefully we can get benchmarks to show what kind of performance can be expected from the ARM v.8 architecture and instruction set. These will be interesting times.
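
    To get a feel for what that target implies at the socket level, here is a rough sketch; the 32-core count is my own assumption based on the graphic mentioned above, and uncore, memory controllers and I/O are ignored, so real figures would land higher.

    ```python
    # Rough socket-level estimate from the stated 2 W-per-core @ 3 GHz target.
    # The 32-core count is an assumption taken from the article's graphic; uncore,
    # memory controllers and I/O are left out, so a real part would draw more.

    watts_per_core = 2.0
    cores_per_socket = 32          # hypothetical, per the diagram in the article

    core_power = watts_per_core * cores_per_socket
    print(f"{cores_per_socket} cores x {watts_per_core:.0f} W/core = {core_power:.0f} W of core power per socket")
    # -> 32 cores x 2 W/core = 64 W of core power per socket
    ```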

    Intel Atom CPU Z520, 1.33 GHz
    Image via Wikipedia
  • Expect the First Windows 8 Snapdragon PC Late 2012

    Image via CrunchBase: Microsoft

    Qualcomm CEO Paul Jacobs, speaking during the San Diego semiconductor company's annual analyst day in New York, said Qualcomm is currently working with Microsoft to ensure that the upcoming Windows 8 operating system will run on its ARM-based Snapdragon SoCs.

    via Expect the First Windows 8 Snapdragon PC Late 2012.

    Image via CrunchBase: Qualcomm

    Windows 8 is a'comin' down the street, and I bet you'll see it sooner rather than later, maybe as early as June on some products. The reason, of course, is that the tablet market is sucking all the air out of the room and Microsoft needs a win to keep mindshare favorable to its view of the consumer computer market. Part of that drive is fostering a new level of cooperation with system-on-chip manufacturers who until now have been devoted to the mobile phone and smartphone market. Everyone now wants a great big Microsoft contender to take on the Apple iPad in the tablet market, and this may be Microsoft's only hope of accomplishing that in the coming year.

    Just two days ago, however, Forrester Research predicted the Windows 8 tablet would be dead on arrival:

    Image via CrunchBase: Forrester Research

    IDG News Service – Interest in tablets with Microsoft’s Windows 8 is plummeting, Forrester Research said in a study released on Tuesday.

    http://www.computerworld.com/s/article/9222238/Interest_waning_on_Windows_8_tablet_Forrester_says

    Key to making a mark in the tablet computing market is content, content, content. Performance and specs alone will not create a Windows 8 tablet market in what is an Apple-dominated tablet marketplace, as the article says. It also appears previous players in the failed PC Tablet market will make a valiant second attempt, this time using Windows 8 (I'm thinking Fujitsu, HP and Dell, according to this article).

  • Fusion plays its card: The Ten of Terabytes • The Register

    Fusion-io has crammed eight ioDrive flash modules on one PCIe card to give servers 10TB of app-accelerating flash.

    This follows on from its second generation ioDrives: PCIe-connected flash cards using single level cell and multi-level cell flash to provide from 400GB to 2.4TB of flash memory, which can be used by applications to get stored data many times faster than from disk. By putting eight 1.28TB multi-level cell ioDrive 2 modules on a single wide ioDrive Octal PCIe card Fusion reaches a 10TB capacity level.

    via Fusion plays its card: The Ten of Terabytes • The Register.

    Image via CrunchBase: Fusion-io

    This is some big news in the fight to be king of the PCIe SSD market. I declare: advantage Fusion-io. They now have the lead in terms of not just speed but also overall capacity at the price point they have targeted. As densities increase and prices more or less stay flat, the value-add is that more data can stay resident on the PCIe card and not be swapped out to Fibre Channel array storage on the Storage Area Network (SAN). Performance is likely to be wicked cool, and early adopters will no doubt reap big benefits in transaction processing and online analytical processing as well.
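
    The headline capacity is straightforward arithmetic on the module count and size quoted in the excerpt above; a quick sketch:

    ```python
    # The marketed 10 TB figure from eight 1.28 TB ioDrive 2 modules on one Octal card.
    modules_per_card = 8
    tb_per_module = 1.28

    total_tb = modules_per_card * tb_per_module
    print(f"{modules_per_card} x {tb_per_module} TB = {total_tb:.2f} TB per ioDrive Octal card")
    # -> 8 x 1.28 TB = 10.24 TB, rounded down to the 10 TB headline figure
    ```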

  • CMOS sensor inventor Eric Fossum discusses digital image sensors: Digital Photography Review

    CMOS sensor inventor Eric Fossum discusses digital image sensors: Digital Photography Review.

    Check out the video of the lecture. Dr. Fossum attempts to address the societal and privacy implications of his invention, the CMOS sensor. You don't find many scientists willing to engage in this type of presentation. And he brings up the thorny issues early in the presentation, rather than sticking them at the end, so that he doesn't run out of time to cover them.

    Also interesting in this video is Dr. Fossum's story about how he was assigned the task of improving the reliability of CCDs (charge-coupled devices) that were being sent into space. Defects could occur when a highly energetic particle entered the sensor and damaged it, ruining the ability to read out data accurately from the chip. The CCD works by collecting a sample and then moving it one step at a time out to the edge of the chip, where it gets amplified, read and recorded. So if a defect occurs, the charge buckets moving a particular row or column of pixels will hit the defect and have their readings altered, or stop being read out altogether.

    Dr. Fossum was able to get around this by building an amplifier into each pixel. This was achieved thanks to the scaling down of micro-electronics available in silicon semiconductors and Moore's Law. A double benefit of using CMOS semiconductors for the sensor is that you can add all kinds of OTHER electronic circuits on the same chip as the sensor, so things get really interesting because you can integrate them on the silicon (bringing up performance, bringing down costs). As Dr. Fossum says, "basically we can integrate so many things, we can create a full camera on a chip. All you do is add power, and out comes an image,…"
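
    As a toy illustration of the contrast Dr. Fossum describes (my own sketch, not a model of any real device): in a CCD the charge is shifted bucket-brigade style through every downstream site of a column, so one radiation-induced defect corrupts everything that has to pass through it, while in a CMOS active-pixel sensor each pixel is amplified and addressed individually, so the same defect costs a single pixel.

    ```python
    # Toy model contrasting CCD-style shift readout with CMOS per-pixel readout.
    # Purely illustrative of the failure mode described above, not of real devices.

    import random

    ROWS, COLS = 8, 8
    DEFECT_ROW, DEFECT_COL = 4, 3      # a radiation-damaged site in the array

    # A made-up "scene": each pixel holds some collected charge (never zero here).
    scene = [[random.randint(50, 200) for _ in range(COLS)] for _ in range(ROWS)]

    def ccd_readout(image):
        """Shift each column toward row 0; charge passing through the defect is lost."""
        out = [row[:] for row in image]
        for r in range(ROWS):
            # The defect site itself, and every pixel that must shift through it,
            # comes out corrupted.
            if r >= DEFECT_ROW:
                out[r][DEFECT_COL] = 0
        return out

    def cmos_readout(image):
        """Each pixel is amplified and read at its own address; only the defect is bad."""
        out = [row[:] for row in image]
        out[DEFECT_ROW][DEFECT_COL] = 0
        return out

    bad_ccd = sum(v == 0 for row in ccd_readout(scene) for v in row)
    bad_cmos = sum(v == 0 for row in cmos_readout(scene) for v in row)
    print(f"CCD-style readout : {bad_ccd} corrupted pixels (the defect plus everything behind it)")
    print(f"CMOS-style readout: {bad_cmos} corrupted pixel")
    ```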

    Also liked this quote, “The force of marketing is greater than the force of engineering…”

    Lastly, he covers his research on the Quanta Image Sensor (QIS), which sounds pretty interesting too.

    Image via Wikipedia: CMOS image sensor with photodiodes (PD)
  • Intel Responds to Calxeda/HP ARM Server News (Wired.com)

    Now, you're probably thinking, isn't Xeon the exact opposite of the kind of extreme low-power computing envisioned by HP with Project Moonshot? Surely this is just crazy talk from Intel? Maybe, but Walcyzk raised some valid points that are worth airing.

    via Cloudline | Blog | Intel Responds to Calxeda/HP ARM Server News: Xeon Still Wins for Big Data.

    Structure of the TILE64 Processor from Tilera
    Image via Wikipedia: Tile64 mesh network processor from Tilera
    Image via CrunchBase: Tilera

    So Intel gets an interview with a Condé Nast writer for a sub-blog of Wired.com. I doubt many purchasers or data center architects consult Cloudline@Wired.com. All the same, I saw through the many thinly veiled bits of handwaving and old saws from Intel saying, in effect, "Yes, this exists, but we're already addressing it with our existing product lines…" So I wrote a comment on this very article, especially regarding a throwaway line about the 'future' of the data center and the direction the Data Center and Cloud Computing market is headed. However, the moderator never published the comment. In effect, I raised the question: whither Tilera? And the Quanta SM-2 server based on the Tilera chip?

    Aren't they exactly what the author, John Stokes, describes as a network of cores on a chip? Given the scale of Tilera's own product plans going into the future and the fact that they are not just concentrating on network gear but on actual compute clouds too, I'd say both Stokes and Walcyzk are asking the wrong questions and directing our attention in the wrong direction. This is not a PR battle but a flat-out technology battle. You cannot win it with words and white papers; it requires benchmarks, deployments and case histories. Technical merit and superior technology will differentiate the players in the Cloud-in-a-Box race. That hasn't been the case in the past as Intel battled AMD in the desktop consumer market, but in the data center, Fear, Uncertainty and Doubt is the only weapon Intel has.

    And I’ll quote directly from John Stokes’s article here describing EXACTLY the kind of product that Tilera has been shipping already:

    “Instead of Xeon with virtualization, I could easily see a many-core Atom or ARM cluster-on-a-chip emerging as the best way to tackle batch-oriented Big Data workloads. Until then, though, it’s clear that Intel isn’t going to roll over and let ARM just take over one of the hottest emerging markets for compute power.”

    The key phrase here is cluster on a chip, in essence exactly what Tilera has strived to achieve with its TILE64-based architecture. To review, see previous blog entries on this website following the announcements and timelines published by Tilera.

  • ARM specs out first 64-bit RISC chips • The Register

    IMG_1267
    Image by krunkwerke via Flickr

    The ARM RISC processor is getting true 64-bit processing and memory addressing – removing the last practical barrier to seeing an army of ARM chips take a run at the desktops and servers that give Intel and AMD their moolah.

    via ARM specs out first 64-bit RISC chips • The Register.

    The downside to this announcement is the timeline ARM lays out for the first-generation chips to use the new version 8 architecture. Due to limited demand, as ARM defines it, chips will not be shipping until 2013 or as late as 2014. However, according to this Register article, the existing IT data center infrastructure will not adopt ANY ARM-based chips until they are designed as a 64-bit-clean architecture. Sounds like the potential for a chicken-and-egg scenario, except ARM will get that egg out the door on schedule with TSMC as its test chip partner. Some other details from the article: the top-end Cortex A15 chip just announced already addresses more than 32 bits of memory, through a workaround that allows enterprising programmers to address as many as 40 bits of memory if they need it. The best argument for a real market need for 64-bit memory addressing is programmers currently on other chip architectures who might want to port their apps to ARM. THEY are the real target market for the version 8 architecture, and they will have a much easier time porting over to another chip architecture that has the same level of memory addressing capability (64 bits all around).

    As for companies like Calxeda, which are adopting the Cortex A15 architecture alongside the current Cortex A9 chips (both of which fall under the previous-generation version 7 architecture), 32 bits of memory addressing (4GBytes in total) is enough to get by, depending on the application being run. Highly parallel apps or simple things like single-threaded webservers will perform well under these circumstances, according to The Register. And I am inclined to believe this based on the current practices of data center giants like Facebook and Google (virtualization is sacrificed for massively parallel architectures). Also, given the plans folks like Calxeda have for hardware interconnects, the ability of all those low-power 32-bit chips to communicate with one another holds a lot of promise too. I'm still curious to see if Calxeda can come up with a unique product utilizing the 64-bit ARM version 8 architecture when the chip is finally taped out and test chips are shipped by TSMC.

  • HP hooks up with Calxeda to form server ARMy • The Register

    Calxeda is producing 4-core, 32-bit, ARM-based system-on-chip (SoC) designs, developed from ARM's Cortex A9. It says it can deliver a server node with a thermal envelope of less than 5 watts. In the summer it was designing an interconnect to link thousands of these things together. A 2U rack enclosure could hold 120 server nodes: that's 480 cores.

    via HP hooks up with Calxeda to form server ARMy • The Register.

    EnergyCore prototype card
    The first attempt at making an OEM compute node from Calxeda

    HP signing on as an OEM for Calxeda-designed equipment is going to push ARM-based massively parallel server designs into a lot more data centers. Add to this the announcement of the new Cortex A15 CPU and the timeline toward 64-bit memory addressing, and you have a battle royale shaping up against Intel. Currently the Intel Xeon is the preferred choice for applications requiring large amounts of DRAM to hold whole databases and memcached web pages for lightning-quick fetches. On the other end of the scale are the low-power 4-core ARM chips, each server node dissipating a mere 5 watts. Intel is trying to drive down the Thermal Design Point of its chips, even resorting to 64-bit Atom chips to keep the memory addressing advantage. But the timeline for decreasing the Thermal Design Point doesn't quite match up to the 64-bit ARM timeline. So I suspect ARM, and Calxeda, will have the advantage for quite some time to come.
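
    Running the numbers from the Register figures quoted above (120 four-core nodes per 2U at under 5 watts per node) shows why the density argument is compelling; the 42U rack extrapolation below is my own back-of-the-envelope and ignores interconnect, fans, power supplies and storage.

    ```python
    # Density math from the figures quoted above: 120 server nodes per 2U enclosure,
    # 4 cores and under 5 W per node. The 42U-rack extrapolation is my assumption and
    # ignores interconnect, fans, power supplies and storage overhead.

    nodes_per_2u = 120
    cores_per_node = 4
    watts_per_node = 5              # upper bound quoted for a Calxeda server node

    cores_per_2u = nodes_per_2u * cores_per_node
    watts_per_2u = nodes_per_2u * watts_per_node
    print(f"Per 2U enclosure: {cores_per_2u} cores at roughly {watts_per_2u} W")

    enclosures_per_rack = 42 // 2   # hypothetical rack filled with nothing but enclosures
    print(f"Per 42U rack    : {enclosures_per_rack * cores_per_2u} cores "
          f"at roughly {enclosures_per_rack * watts_per_2u / 1000:.1f} kW")
    # -> Per 2U enclosure: 480 cores at roughly 600 W
    # -> Per 42U rack    : 10080 cores at roughly 12.6 kW
    ```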

    While I had hoped the recent Cortex A15 announcement would also usher in a fully 64-bit capable CPU, the chip will at least be able to fake larger memory access. The datapath I remember being quoted was 40 bits wide, and that can be further extended using software. It doesn't seem to have discouraged HP at all, which is testing the Calxeda-designed prototype EnergyCore evaluation board. This is all new territory for both Calxeda and HP, so a fully engineered and designed prototype is absolutely necessary to get this project off the ground. My hope is that HP can do a large-scale test and figure out some of the software configuration optimization that needs to occur to gain an advantage in power savings, density and speed over an Intel Atom-based server (like SeaMicro's).