Qualcomm CEO Paul Jacobs, speaking during the San Diego semiconductor company's annual analyst day in New York, said Qualcomm is currently working with Microsoft to ensure that the upcoming Windows 8 operating system will run on its ARM-based Snapdragon SoCs.
Windows 8 is a’comin’ down the street, and I bet you’ll see it sooner rather than later, maybe as early as June on some products. The reason, of course, is that the tablet market is sucking all the air out of the room and Microsoft needs a win to keep mindshare favorable to its view of the consumer computer market. Part of that drive is fostering a new level of cooperation with system-on-chip manufacturers who until now have been devoted to the mobile phone and smartphone market. Now everyone wants a great big Microsoft hope to conquer the Apple iPad in the tablet market, and this may be Microsoft’s only hope of accomplishing that in the coming year.
Just two days ago, however, Forrester Research predicted the Windows 8 tablet would be dead on arrival:
IDG News Service – Interest in tablets with Microsoft’s Windows 8 is plummeting, Forrester Research said in a study released on Tuesday.
Key to making a mark in the tablet computing market is content, content, content. As the article says, performance and specs alone will not create a Windows 8 tablet market in what is an Apple-dominated tablet marketplace. It also appears that previous players in the failed PC tablet market will make a valiant second attempt, this time using Windows 8 (I’m thinking Fujitsu, HP and Dell, according to this article).
Fusion-io has crammed eight ioDrive flash modules on one PCIe card to give servers 10TB of app-accelerating flash.
This follows on from its second generation ioDrives: PCIe-connected flash cards using single level cell and multi-level cell flash to provide from 400GB to 2.4TB of flash memory, which can be used by applications to get stored data many times faster than from disk. By putting eight 1.28TB multi-level cell ioDrive 2 modules on a single wide ioDrive Octal PCIe card Fusion reaches a 10TB capacity level.
This is some big news in the fight to be king of the PCIe SSD market. I declare: advantage Fusion-io. They now have the lead not just in speed but in overall capacity at the price point they have targeted. As densities increase and prices more or less stay flat, the value-add is that more data can stay resident on the PCIe card rather than being swapped out to Fibre Channel array storage on the Storage Area Network (SAN). Performance is likely to be wicked cool, and early adopters will no doubt reap big benefits for transaction processing and online analytic processing as well.
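As a quick back-of-the-envelope check on the quoted configuration (just arithmetic, using the module count and per-module capacity from the excerpt above):

```python
# Sanity-check the quoted ioDrive Octal capacity figure.
modules = 8            # ioDrive 2 modules per Octal card (from the excerpt)
tb_per_module = 1.28   # TB of MLC flash per module (from the excerpt)

total_tb = modules * tb_per_module
print(f"{modules} x {tb_per_module} TB = {total_tb:.2f} TB per PCIe card")
# 8 x 1.28 TB = 10.24 TB, which is the "10TB" headline figure
```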
Check out the video of the lecture. Dr. Fossum attempts to address the societal and privacy implications of his invention, the CMOS image sensor. You don’t find many scientists willing to engage in this type of presentation. And he raises the thorny issues early in the presentation, rather than sticking them at the end where he would risk running out of time to cover them.
Also interesting in this video is Dr. Fossum’s story about how he was assigned the task of improving the reliability of the CCDs (charge-coupled devices) being sent into space. A defect could occur when a highly energetic particle entered the sensor and damaged the chip itself, ruining the ability to read data out of it accurately. A CCD works by collecting a charge sample and then moving it one step at a time out to the edge of the chip, where it gets amplified, read and recorded. So if a defect occurs, the buckets moving a particular row or column of pixels will hit it, altering the readings or stopping them from being read out altogether.
Dr. Fossum was able to get around this by building an amplifier into each pixel. This was achieved thanks to the scaling down of microelectronics in silicon semiconductors under Moore’s Law. A double benefit of using CMOS for the sensor is that you can add all kinds of OTHER electronic circuits on the same chip as the sensor, so things get really interesting because you can integrate them on the silicon (bringing up performance, bringing down costs). As Dr. Fossum says, “basically we can integrate so many things, we can create a full camera on a chip. All you do is add power, and out comes an image,…”
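To make the readout difference concrete, here’s a toy sketch (my own illustration, not anything from Dr. Fossum’s designs): a CCD shifts every charge packet bucket-brigade style toward a single edge amplifier, so one defect corrupts every pixel that must pass through it, while a CMOS active-pixel sensor reads each pixel through its own amplifier, so the damage stays local.

```python
# Toy illustration of CCD bucket-brigade readout vs. CMOS per-pixel readout.
# Purely illustrative -- real sensors are far more complex.

def ccd_readout(column, defect_index=None):
    """Charge packets are shifted toward a single edge amplifier.
    Every packet that must pass through a defective site is corrupted."""
    readings = []
    for i, charge in enumerate(column):
        passes_defect = defect_index is not None and i >= defect_index
        readings.append(None if passes_defect else charge)  # None = unreadable
    return readings

def cmos_readout(column, defect_index=None):
    """Each pixel has its own amplifier, so only the defective pixel is lost."""
    return [None if i == defect_index else charge
            for i, charge in enumerate(column)]

column = [10, 20, 30, 40, 50]                 # charge collected in one column
print(ccd_readout(column, defect_index=2))    # [10, 20, None, None, None]
print(cmos_readout(column, defect_index=2))   # [10, 20, None, 40, 50]
```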
Also liked this quote, “The force of marketing is greater than the force of engineering…”
Lastly, he covers his research on the Quanta Image Sensor (QIS), which sounds pretty interesting too.
Image via Wikipedia: Tile64 mesh network processor from Tilera
So Intel gets an interview with a Condé Nast writer for a sub-blog of Wired.com. I doubt many purchasers or data center architects consult Cloudline@Wired.com, but all the same, I saw through many thinly veiled bits of handwaving and old saws from Intel along the lines of, “Yes, this exists, but we’re already addressing it with our existing product lines…” So I wrote a comment on that very article, especially regarding a throwaway line about the ‘future’ of the data center and the direction the data center and cloud computing market is headed. The moderator never published the comment. In effect, I raised the question: whither Tilera? And the Quanta SM-2 server based on the Tilera chip?
Aren’t they exactly what the author John Stokes describes as a network of cores on a chip? And given the scale of Tilera’s own product plans and the fact that they are not just concentrating on network gear but on actual compute clouds too, I’d say both Stokes and Walcyzk are asking the wrong questions and directing our attention in the wrong direction. This is not a PR battle but a flat-out technology battle. You cannot win it with words and white papers; it takes benchmarks, deployments and case histories. Technical merit and superior technology will differentiate the players in the cloud-in-a-box race. That hasn’t been the case in the past, as Intel battled AMD in the desktop consumer market, but in the data center, fear, uncertainty and doubt is the only weapon Intel has.
And I’ll quote directly from John Stokes’s article here describing EXACTLY the kind of product that Tilera has been shipping already:
“Instead of Xeon with virtualization, I could easily see a many-core Atom or ARM cluster-on-a-chip emerging as the best way to tackle batch-oriented Big Data workloads. Until then, though, it’s clear that Intel isn’t going to roll over and let ARM just take over one of the hottest emerging markets for compute power.”
The key phrase here is “cluster on a chip”, which is in essence exactly what Tilera has strived to achieve with its Tile64-based architecture. To review from previous blog entries on this website following the announcements and timelines published by Tilera:
The ARM RISC processor is getting true 64-bit processing and memory addressing – removing the last practical barrier to seeing an army of ARM chips take a run at the desktops and servers that give Intel and AMD their moolah.
The downside to this announcement is the timeline ARM lays out for the first generation of chips to use the new version 8 (ARMv8) architecture. Due to limited demand, as ARM defines it, chips will not be shipping until 2013, or as late as 2014. However, according to this Register article, the existing IT data center infrastructure will not adopt ANY ARM-based chips until they are designed as a 64-bit-clean architecture. That sounds like a potential chicken-and-egg scenario, except that ARM will get that egg out the door on schedule with TSMC as its test-chip partner. Other details from the article: the just-announced top-end Cortex-A15 already addresses more than 32 bits of memory through a workaround that lets enterprising programmers address as many as 40 bits if they need to. The best argument made for a real market need for 64-bit memory addressing is programmers currently on other chip architectures who might want to port their apps to ARM. They are the real target market for the version 8 architecture, and they will have a much easier time porting to another chip architecture that offers the same level of memory-addressing capability (64 bits all around).
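For a sense of scale on those addressing limits, the jump from 32 to 40 to 64 bits is easy to quantify (the 40-bit figure being the Cortex-A15 workaround, ARM’s Large Physical Address Extension):

```python
# Addressable memory for the address widths discussed above.
for bits in (32, 40, 64):
    gib = 2 ** bits / 2 ** 30
    print(f"{bits}-bit addressing: {gib:,.0f} GiB")

# 32-bit:              4 GiB  (the 4GB ceiling of today's 32-bit parts)
# 40-bit:          1,024 GiB  (1 TiB via the Cortex-A15 workaround)
# 64-bit: 17,179,869,184 GiB  (16 EiB in principle for ARMv8)
```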
As for companies like Calxeda, which are adopting the Cortex-A15 architecture along with the current Cortex-A9 chips (both of which fall under the previous-generation version 7 architecture), 32 bits of memory addressing (4GB in total) is enough to get by, depending on the application being run. Highly parallel apps, or simple things like single-threaded web servers, will perform well under these circumstances, according to The Register. And I am inclined to believe this based on the current practices of data center giants like Facebook and Google (virtualization is sacrificed for massively parallel architectures). Also, given the plans folks like Calxeda have for hardware interconnects, the ability of all those low-power 32-bit chips to communicate with one another holds a lot of promise too. I’m still curious to see whether Calxeda can come up with a unique product using the 64-bit ARM version 8 architecture once the chip is finally taped out and test chips are shipped by TSMC.
Calxeda is producing 4-core, 32-bit, ARM-based system-on-chip (SoC) designs, developed from ARM’s Cortex A9. It says it can deliver a server node with a thermal envelope of less than 5 watts. In the summer it was designing an interconnect to link thousands of these things together. A 2U rack enclosure could hold 120 server nodes: that’s 480 cores.
The first attempt at making an OEM compute node from Calxeda
HP signing on as an OEM for Calxeda-designed equipment is going to push ARM-based massively parallel server designs into a lot more data centers. Add to this the announcement of the new Cortex-A15 CPU and its timeline for 64-bit memory addressing, and you have a battle royale shaping up against Intel. Currently the Intel Xeon is the preferred choice for applications requiring large amounts of DRAM to hold whole databases and memcached web pages for lightning-quick fetches. On the other end of the scale are the low-power 4-core ARM chips dissipating a mere 5 watts. Intel is trying to drive down the thermal design power of its chips, even resorting to 64-bit Atom chips to keep its memory-addressing advantage. But Intel’s timeline for lowering thermal design power doesn’t quite match up with the ARM 64-bit timeline, so I suspect ARM, and Calxeda along with it, will have the advantage for quite some time to come.
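Plugging in the figures from the Register excerpt above (4-core nodes, roughly 5 watts each, 120 nodes per 2U enclosure) gives a rough sketch of the density Intel has to answer:

```python
# Rough density/power math for a 2U Calxeda-style enclosure,
# using the node count and ~5 W thermal envelope quoted above.
nodes_per_2u = 120
cores_per_node = 4
watts_per_node = 5

total_cores = nodes_per_2u * cores_per_node   # 480 cores
total_watts = nodes_per_2u * watts_per_node   # ~600 W for the enclosure
print(f"{total_cores} cores in 2U at roughly {total_watts} W "
      f"(~{total_watts / total_cores:.2f} W per core)")
```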
While I had hoped the recent Cortex-A15 announcement would also usher in a fully 64-bit-capable CPU, the chip will at least be able to fake larger memory access: the datapath I remember being quoted was 40 bits wide, and that can be extended further using software. It doesn’t seem to have discouraged HP at all, which is testing the Calxeda-designed prototype EnergyCore evaluation board. This is all new territory for both Calxeda and HP, so a fully engineered and designed prototype is absolutely necessary to get this project off the ground. My hope is that HP can do a large-scale test and work out the software configuration and optimization needed to gain an advantage in power savings, density and speed over an Intel Atom server (like SeaMicro).
Always nice to get an update on the elmcity project from Jon Udell. It is the ‘calendar of calendars’, and a great project showing how one can leverage open data while confronting some technology challenges along the way.
As I review and improve the elmcity hubs in selected cities, I am again reminded of William Gibson’s wonderful aphorism: “The future is already here, it’s just not evenly distributed.” Yesterday we saw that the future of community calendars hasn’t yet arrived at the University of Michigan. But today I was delighted to see that it has arrived, in a big way, for the Ann Arbor public schools. Almost all of them, it turns out, are making good use of Google Calendar to publish machine-readable calendar information. This morning I rounded up thirty of those calendars and added them to Ann Arbor’s elmcity hub, bringing the total number of feeds from 194 to 224.
Here’s the breakdown of the 309 events from the grade schools:
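A tally like that can be reproduced from the schools’ published iCalendar feeds by counting the VEVENT entries in each one. A minimal sketch, assuming the third-party requests and icalendar packages; the feed URL below is a placeholder, not one of the actual Ann Arbor feeds:

```python
# Count events in a published Google Calendar iCalendar (.ics) feed.
# Requires the third-party "requests" and "icalendar" packages.
import requests
from icalendar import Calendar

# Placeholder URL -- substitute a real public .ics address.
FEED_URL = "https://example.org/school-calendar/basic.ics"

def count_events(url):
    """Fetch an .ics feed and return the number of VEVENT components."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    cal = Calendar.from_ical(resp.content)
    return sum(1 for component in cal.walk("VEVENT"))

print(count_events(FEED_URL))
```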
The number of U.S. government requests for data on Google users for use in criminal investigations rose 29 percent in the last six months, according to data released by the search giant Monday.
Not good news, imho. The reason is the mission creep and abuse that come with absolute power in the form of a National Security Letter. The other part of the equation is that Google’s business model runs opposite to the idea of protecting people’s information. If you disagree, I ask that you read this blog post from Christopher Soghoian, where he details just what it is Google does when it keeps all your data unencrypted in its data centers. In order to sell AdWords and serve advertisements to you, Google needs to keep everything open and unencrypted. They aren’t exactly careless stewards of your data, but they do respond to law enforcement requests for customer data. To quote Soghoian at the end of his blog entry:
“The end result is that law enforcement agencies can, and regularly do request user data from the company — requests that would lead to nothing if the company put user security and privacy first.”
And that indeed is the moral of the story. Which leaves everyone asking: what’s the alternative? Earlier in the same story the blame is placed squarely on end users for not protecting themselves. Encryption tools for email and personal documents have been around for a long time, and there are often commercial products available to provide some level of privacy even for so-called cloud-hosted data. But the friction points will always be familiarity, ease of use and cost, and a product has to clear those hurdles before it can be as widely adopted as webmail has been since it supplanted desktop email clients like Eudora.
So if you really have concerns, take action; don’t wait for Google to act to defend your rights. Encrypt your email and your documents, and make Google one bit less culpable for any law enforcement request that may or may not include your personal data.
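For anyone who wants to act on that advice for documents, one minimal approach is to encrypt files client-side before they ever reach a hosted service. This sketch uses symmetric Fernet encryption from the third-party cryptography package (my choice of tool, not one endorsed in the articles above), with a placeholder filename:

```python
# Encrypt a document locally before uploading it anywhere.
# Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

# Generate a key once and keep it somewhere safe (NOT alongside the ciphertext).
key = Fernet.generate_key()
fernet = Fernet(key)

with open("notes.txt", "rb") as f:           # placeholder filename
    ciphertext = fernet.encrypt(f.read())

with open("notes.txt.enc", "wb") as f:
    f.write(ciphertext)

# Later, with the same key:
# plaintext = Fernet(key).decrypt(ciphertext)
```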
This past June, fellow High Tech History writer Gil Press wrote an entry in recognition of International Business Machines’ centennial. In the interim, I came across a documentary created by noted filmmaker Errol Morris for IBM that draws on the experiences of, among others, the corporation’s former technicians and executives to tell a thirty-minute story of some of IBM’s more notable achievements in computing over the last one hundred years.
In this instance, Morris’ collaboration with noted composer Philip Glass resulted in an expertly produced, sentimental (occasionally overly so), and informative oral history. Morris and Glass previously worked together on the 2003 Oscar-winning documentary, The Fog of War: Eleven Lessons from the Life of Robert S. McNamara. And this was not the first time that Morris had been commissioned to work for IBM. In 1999 he filmed a short documentary intended to screen at an in-house conference for IBM employees. The conference never took place…