The ARM RISC processor is getting true 64-bit processing and memory addressing – removing the last practical barrier to seeing an army of ARM chips take a run at the desktops and servers that give Intel and AMD their moolah.
The downside to this announcement is the timeline ARM lays out for the first-generation chips to use the new version 8 (ARMv8) architecture. Due to limited demand, as ARM defines it, chips will not be shipping until 2013 or as late as 2014. However, according to this Register article, the existing IT data center infrastructure will not adopt ANY ARM-based chips until they are designed as a 64-bit clean architecture. Sounds like a potential chicken-and-egg scenario, except ARM will get that egg out the door on schedule with TSMC as its test-chip partner. Some other details from the article: the top-end ARM-15 (Cortex-A15) chip just announced already addresses more than 32 bits of memory, through a workaround that allows enterprising programmers to address as many as 40 bits of memory if they need it. The best argument made for the real market need for 64-bit memory addressing is programmers currently on other chip architectures who might want to port their apps to ARM. THEY are the real target market for the v8 architecture, and will have a much easier time porting over to another chip architecture that has the same level of memory-addressing capability (64 bits all around).
As for companies like Calxeda, who are adopting the ARM-15 architecture and the current ARM Cortex-A9 chips (both of which fall under the previous-generation v7 architecture), 32 bits of memory addressing (4GB in total) is enough to get by, depending on the application being run. Highly parallel apps or simple things like single-threaded webservers will perform well under these circumstances, according to The Register. And I am inclined to believe this based on the current practices of data center giants like Facebook and Google (virtualization is sacrificed for massively parallel architectures). Also, given the plans folks like Calxeda have for hardware interconnects, the prospect of all those low-power 32-bit chips communicating with one another holds a lot of promise too. I'm still curious to see whether Calxeda can come up with a unique product using the 64-bit ARM v8 architecture when the chip is finally taped out and test chips are shipped by TSMC.
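As a quick back-of-the-envelope aside (my own arithmetic, not from the article), here is what those address widths actually buy you:

```python
# Back-of-the-envelope: how much memory each address width can reach.
# 32-bit is the ceiling the current Cortex chips live under, 40-bit is the
# ARM-15 workaround mentioned above, and 64-bit is what ARMv8 promises.
for bits in (32, 40, 64):
    addressable = 2 ** bits  # bytes
    print(f"{bits}-bit: {addressable / 2**30:,.0f} GiB "
          f"(= {addressable / 2**40:,.2f} TiB)")

# 32-bit:              4 GiB (= 0.00 TiB)
# 40-bit:          1,024 GiB (= 1.00 TiB)
# 64-bit: 17,179,869,184 GiB (= 16,777,216.00 TiB)
```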
Calxeda is producing 4-core, 32-bit, ARM-based system-on-chip (SoC) designs, developed from ARM's Cortex A9. It says it can deliver a server node with a thermal envelope of less than 5 watts. In the summer it was designing an interconnect to link thousands of these things together. A 2U rack enclosure could hold 120 server nodes: that's 480 cores.
The first attempt at making an OEM compute node from Calxeda
HP signing on as an OEM for Calxeda-designed equipment is going to push ARM-based massively parallel server designs into a lot more data centers. Add to this the announcement of the new ARM-15 CPU and its timeline for 64-bit memory addressing, and you have a battle royale shaping up against Intel. Currently the Intel Xeon is the preferred choice for applications requiring large amounts of DRAM to hold whole databases and memcached web pages for lightning-quick fetches. On the other end of the scale are the low-power 4-core ARM chips dissipating a mere 5 watts per node. Intel is trying to drive down the Thermal Design Power (TDP) of its chips, even resorting to 64-bit Atom chips to keep the memory-addressing advantage. But the timeline for decreasing TDP doesn't quite match up to ARM's 64-bit timeline. So I suspect ARM, and Calxeda with it, will have the advantage for quite some time to come.
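To put some very rough numbers on that trade-off, here is a sketch using the Calxeda figures quoted above; the Xeon configuration is an illustrative assumption of mine, not anything from the article:

```python
# Rough density/power comparison for one 2U enclosure.
# Calxeda figures are the ones quoted from The Register above; the Xeon
# configuration below is an illustrative assumption, not a vendor spec.

# Calxeda: 120 four-core nodes, each under ~5 W.
calxeda_cores = 120 * 4                    # 480 cores per 2U
calxeda_watts = 120 * 5                    # ~600 W per 2U

# Hypothetical dual-socket 2U Xeon box: 2 sockets x 8 cores, ~95 W TDP each.
xeon_cores = 2 * 8                         # 16 cores per 2U
xeon_watts = 2 * 95                        # ~190 W, CPUs only

print(f"Calxeda 2U: {calxeda_cores} cores, ~{calxeda_watts} W "
      f"({calxeda_watts / calxeda_cores:.2f} W/core)")
print(f"Xeon 2U:    {xeon_cores} cores, ~{xeon_watts} W "
      f"({xeon_watts / xeon_cores:.2f} W/core)")
```

The win, if it materializes, is in watts per core and per-rack density, not in single-threaded performance per core.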
While I had hoped the recent ARM-15 announcement was also going to usher in a fully 64-bit-capable CPU, it will at least be able to fake larger memory addressing. The address path I remember being quoted was 40 bits wide, and that can be further extended in software. And it doesn't seem to have discouraged HP at all, which is testing the Calxeda-designed prototype EnergyCore evaluation board. This is all new territory for both Calxeda and HP, so a fully engineered and designed prototype is absolutely necessary to get this project off the ground. My hope is HP can do a large-scale test and figure out some of the software configuration optimization that needs to occur to gain an advantage in power savings, density and speed over an Intel Atom server (like SeaMicro).
Always nice to get an update on the elmcity project from Jon Udell. It is the 'calendar of calendars' and a great project showing how one can leverage open data while confronting some technological challenges along the way.
As I review and improve the elmcity hubs in selected cities, I am again reminded of William Gibson’s wonderful aphorism: “The future is already here, it’s just not evenly distributed.” Yesterday we saw that the future of community calendars hasn’t yet arrived at the University of Michigan. But today I was delighted to see that it has arrived, in a big way, for the Ann Arbor public schools. Almost all of them, it turns out, are making good use of Google Calendar to publish machine-readable calendar information. This morning I rounded up thirty of those calendars and added them to Ann Arbor’s elmcity hub, bringing the total number of feeds from 194 to 224.
Here’s the breakdown of the 309 events from the grade schools:
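The event breakdown itself is in Udell's post, but as a minimal sketch (my own, not elmcity's actual code) of what consuming one of those machine-readable Google Calendar feeds involves, assuming the third-party requests and icalendar packages and a placeholder feed URL:

```python
# Minimal sketch: fetch a published Google Calendar feed (iCalendar/.ics)
# and count its events. The URL is a placeholder, not a real school feed.
import requests
from icalendar import Calendar

FEED_URL = "https://calendar.google.com/calendar/ical/EXAMPLE/public/basic.ics"

resp = requests.get(FEED_URL, timeout=30)
resp.raise_for_status()

cal = Calendar.from_ical(resp.content)
events = [c for c in cal.walk() if c.name == "VEVENT"]

print(f"Feed contains {len(events)} events")
for event in events[:5]:
    print(event.get("SUMMARY"), event.get("DTSTART").dt)
```

That machine readability, multiplied across thirty school calendars, is exactly what lets a hub like elmcity aggregate hundreds of events with very little manual effort.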
The number of U.S. government requests for data on Google users for use in criminal investigations rose 29 percent in the last six months, according to data released by the search giant Monday.
Not good news, in my humble opinion. The reason being the mission creep and abuses that come with the absolute power of a National Security Letter. The other part of the equation is that Google's business model runs counter to the idea of protecting people's information. If you disagree, I ask that you read this blog post from Christopher Soghoian, where he details just what it is Google does when it keeps all your data unencrypted in its data centers. In order to sell AdWords and serve advertisements to you, Google needs to keep everything open and unencrypted. At the same time, Google isn't careless in its stewardship of your data, but it does respond to law enforcement requests for customer data. To quote Soghoian at the end of his blog entry:
“The end result is that law enforcement agencies can, and regularly do request user data from the company — requests that would lead to nothing if the company put user security and privacy first.”
And that indeed is the moral of the story. Which leaves everyone asking: what's the alternative? Earlier in the same story the blame is placed squarely on the end user for not protecting themselves. Encryption tools for email and personal documents have been around for a long time. And often there are commercial products available to help accomplish some level of privacy, even for so-called Cloud-hosted data. But the friction points are always going to be familiarity, ease of use and cost before any such product is as widely used and adopted as webmail has been since it supplanted desktop email clients like Eudora.
So if you really have concerns, take action, don’t wait for Google to act to defend your rights. Encrypt your email, your documents and make Google one bit less culpable for any law enforcement requests that may or may not include your personal data.
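As one concrete, if simplified, illustration of encrypting your documents yourself before they ever touch someone else's servers, here is a sketch using the third-party Python cryptography package; this is a symmetric-key example of my own, and real email protection would more likely use PGP/GPG with public keys:

```python
# Sketch: encrypt a document locally so only ciphertext ever reaches the cloud.
# Uses symmetric Fernet encryption from the third-party 'cryptography' package;
# email is more commonly protected with PGP/GPG and public keys.
from cryptography.fernet import Fernet

# Keep this key somewhere the cloud provider never sees it.
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"Meeting notes: nothing here for anyone else to read."
ciphertext = fernet.encrypt(plaintext)

# Upload the ciphertext, not the plaintext.
print(ciphertext[:40], b"...")

# Later, with the key, only you can recover the original.
assert fernet.decrypt(ciphertext) == plaintext
```

The point isn't this particular library; it's that data encrypted before it leaves your machine is data a provider cannot simply hand over in readable form.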
This past June, fellow High Tech History writer Gil Press wrote an entry in recognition of International Business Machines’ centennial. In the interim, I came across a documentary created by noted filmmaker Errol Morris for IBM that draws on the experiences of, among others, the corporation’s former technicians and executives to tell a thirty-minute story of some of IBM’s more notable achievements in computing over the last one hundred years.
In this instance, Morris’ collaboration with noted composer Philip Glass resulted in an expertly produced, sentimental (occasionally overly so), and informative oral history. Morris and Glass previously worked together on the 2003 Oscar-winning documentary, The Fog of War: Eleven Lessons from the Life of Robert S. McNamara. And this was not the first time that Morris had been commissioned to work for IBM. In 1999 he filmed a short documentary intended to screen at an in-house conference for IBM employees. The conference never took place…
Duval, being a computer scientist, strongly believes in the power of data and the revelations it holds.
Actually, I am not sure what would be the alternative to ‘believing in data’ – not believing in data? Isn’t confronting theories with data one of the core activities of any science?
For me, there lies one of the most important promises of learning analytics: as a research domain, technology enhanced learning is too much a field of opinions – maybe learning analytics can help to turn the field into more of a collection of experimentally validated theories? Into more of a science?
I’m not sure I understand the problem that Wolfgang seems to have with data. Of course, a real issue is selecting what kind of data…
Pioneering Campus CIOs Say Necessity Drives Shift to Cloud, by David Raths, 10/25/11: A recent survey of campus IT leaders suggested that most colleges and universities are still gun shy about cloud computing. Yet if attendance at conference meetings is any gauge, there is widespread curiosity about the experience of early adopters.
The number of U.S. government requests for data on Google users for use in criminal investigations rose 29 percent in the last six months, according to data released by the search giant Monday. By Ryan Singel, October 25, 2011, 11:07 am
I think the old adage about Greeks bearing gifts applies here, or should I say geeks bearing gifts?! Cloud computing as it applies to desktop productivity apps is a double-edged sword as it is commonly practiced in Higher Ed. What was once seen as a major outsourcing/cost-savings move, dumping student email off to willing companies like Google, is now seen as the 'wave of the future' where desktops give way to mobile devices and Web apps slowly evolve into just apps, independent of the websites where they actually run. However, beware dear reader: as those contracts you sign and the myriad terms of your Service Level Agreement are hammered out, rules can change. And by that I mean the rules regarding simple things like National Security Letters sent by the FBI to Google Inc., demanding the email of an individual whose account you outsourced along with the rest of your student email services. I ask anyone who is reading: does Google, under the terms of its contracts with a Higher Ed I.T. unit, have to notify anyone that it is sending six months or more of Gmail messages to the FBI to read at their leisure?
I'm reminded somewhat of the bad old days under Napster, when Higher Ed eventually started to receive mass quantities of DMCA (Digital Millennium Copyright Act) notices about infringing music files being shared over their data networks. In cases like that, each school could set its own policy. Some were very neutral, asking for proof of the infringement, and in some cases ignoring the request because the point of origin was not the copyright holder but a third-party clearance group that spammed every university on behalf of the recording industry (RIAA). The beauty of the law around DMCA requests was that each institution could decide how best to pursue the matter, and do so at its own discretion. Not so with government requests for electronic data, oh no. For a National Security Letter, the university doesn't even have to notify the individual that their data is being shared with the FBI. Nor can they tell anyone; it's a complete, airtight gag order placed on the service provider, whoever they may be (Library, Student Records, or University I.T.). Whither outsourced University I.T., then?
In the haste to save money, the SLAs universities across the U.S. have signed with big-name providers like Google have made everyone subject to the rules governing that provider. Google is not an institution of Higher Ed. They are for-profit, a U.S. corporation subject to all the laws governing any company chartered in the U.S. And they, unlike Higher Ed, do not have the interest, much less the luxury, of responding to National Security Letters in their own way or at their own discretion. In fact, they don't have to tell anyone what their actual policy is; that's private, for their top-level officers and their Legal Dept. alone to know. So in the mad rush to create the omnipresent future of 'Cloud Computing,' we must ask ourselves: are we really just making government surveillance easier? Do we really understand what we give up when we decide to adopt applications and data hosted in the Cloud? Sure, yes, privacy, as then-CEO of Sun Microsystems Scott McNealy once boasted: 'You have zero privacy anyway, get over it.' But do we really understand the full implications of what this means?
I dare say it's the folks focusing on the bottom line who are signing away our rights without us ever getting a say. And while I understand that even non-profit Higher Ed is run like a business, they are the last folks who should be participating in the construction of the Data Cloud Surveillance State we find ourselves in now. If we cannot choose for ourselves, what then do we have left for ourselves? Our thoughts? Our feelings? Sorry, no, those too are now housed in the cloud by the likes of Facebook Timeline lifestreaming. Now everyone knows everything, and you have surrendered it all just for the sake of catching up with some old college and high school friends. That's too high a price to pay, I think. So, to the degree possible, I am unwilling to just let this 'freedom' ebb away through the process of adopting this new app or that new platform. The new New Thing may be cool, but there are a whole lotta strings attached.
The test chip will be fabbed at TSMC on its next-generation 20nm process, a full node reduction offering ~50% transistor scaling over its 28nm process. With the first 28nm ARM-based products due out from TSMC in 2012, this 20nm tape-out announcement is an important milestone, but we're still around two years away from productization.
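A quick sanity check on that "~50% transistor scaling" figure (my own back-of-the-envelope arithmetic, not TSMC's): area scales with the square of the linear feature size.

```python
# Why a 28nm -> 20nm shrink works out to roughly 50% transistor scaling:
# area per transistor scales with the square of the linear feature size.
old_node, new_node = 28.0, 20.0
area_ratio = (new_node / old_node) ** 2
print(f"Relative area per transistor: {area_ratio:.2f}x "
      f"(~{(1 - area_ratio) * 100:.0f}% reduction)")
# Relative area per transistor: 0.51x (~49% reduction)
```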
Image by Route79 via Flickr (Now that's scary, isn't it? Boo!)
Happy Halloween! And like most years there are some tricks up ARM’s sleeve announced this past week along with some partnerships that should make things trickier for the Engineers trying to equip ever more energy efficient and dense Data Centers the world over.
It's been announced: the ARM-15 is coming to market sometime in the future. Albeit a ways off yet. And it's going to be built on a really narrow design rule to ensure it's as low-power as it possibly can be. I know manufacturers of the massively parallel compute-cloud-in-a-box will be seeking out this chip as soon as samples can arrive. The 64-bit version of the ARM-15 is the real potential jewel in the crown for Calxeda, which is attempting to balance low power and 64-bit performance in the same design.
I can't wait to see the first benchmarks of these chips, apart from the benchmarks of the first shipping product Calxeda can get out with the 64-bit ARM-15. Also note that just this week Hewlett-Packard signed on to sell designs by Calxeda in forthcoming servers targeted at energy-efficient data center build-outs. So there's more news to come regarding that partnership, and you can read it right here @ Carpetbomberz.com
This method shows, Yang says, that “bits can be patterned more densely together by reducing the number of processing steps”. The HDD industry will be fascinated to understand how BPM drives can be made at a perhaps lower-than-anticipated cost.
Moore's Law applies to semiconductors built on silicon wafers. And to a lesser extent it has had some application to hard disk drive storage as well. When IBM created its GMR (Giant Magneto-Resistive) read/write head technology and was able to develop it into a shipping product, a real storage arms race began. Densities increased, prices dropped, and before you knew it hard drives went from 1GB to 10GB practically overnight. Soon a 30GB drive was the default boot-and-data drive for every shipping PC, when just a few years before a 700MB drive was the norm. This was a greater-than-10X improvement with the adoption of a new technology.
I remember a lot of those touted technologies were added on and tacked on at the same time. PRML (Partial Response Maximum Likelihood) and Perpendicular Magnetic Recording (PMR) both helped keep the ball rolling in terms of storage density. IBM even did some pretty advanced work sandwiching magnetic layers between thin spacer layers of ruthenium to help create even more robust magnetic recording media for the newer higher-density drives.
However, each new incremental advance has now run its course and the advances in storage technology are slowing down again. But there's still one shining hope: Bit-Patterned Media (BPM). And in all the speculation about which technology is going to keep the storage density ball rolling, this new announcement is sure to play its part. A competing technique using lasers to heat the disk surface before writing data (heat-assisted magnetic recording) is also being researched and discussed, but it is likely to force a lot of storage vendors to agree to transition to that technology simultaneously. BPM, on the other hand, isn't so different and revolutionary that it must be rolled out en masse by every drive vendor at once to ensure everyone stays compatible. Better yet, BPM may be a much lower-cost and more immediate way to increase storage densities without incurring big equipment and manufacturing upgrade costs.
So I’m thinking we’ll be seeing BPM much more quickly and we’ll continue to enjoy the advances in drive density for a little while longer.
Through first quarter of 2012, Intel will be releasing new SSDs: Intel SSD 520 “Cherryville” Series replacement for the Intel SSD 510 Series, Intel SSD 710 “Lyndonville” Series Enterprise HET-MLC SSD replacement for X25-E series, and Intel SSD 720 “Ramsdale” Series PCIe based SSD. In addition, you will be seeing two additional mSATA SSDs codenamed “Hawley Creek” by the end of the fourth quarter 2011.
That's right, folks: Intel is jumping on the high-performance PCIe SSD bandwagon with the Intel SSD 720 in the first quarter of 2012. Don't know what price they will charge, but given quotes and pre-release specs it's going to compete against products from competitors like RamSan, Fusion-io and the top-level OCZ PCIe product, the R4. My best guess, based on pricing for those products, is that it will land in the roughly $10,000+ category with an x8 PCIe interface and a full complement of flash memory (usually over 1TB on this class of PCIe card).
Knowing that Intel’s got some big engineering resources behind their SSD designs, I’m curious to see how close they can come to the performance statistics quoted in this table here:
2200MB/sec of read throughput and 1100MB/sec of write throughput. Those are some pretty hefty numbers compared to currently shipping products in the upper prosumer and lower enterprise-class price categories. Hopefully AnandTech will get a shipping or even pre-release version before the end of the year and give it a good torture test. Following Anand Lal Shimpi on his Twitter feed, I'm seeing all kinds of tweets about how a lot of pre-release SSDs and PCIe SSDs from manufacturers fail during benchmarks. That doesn't bode well for the quality control departments at the manufacturers assembling and testing these products. Especially considering the price premium of these items, it would be much more reassuring if the testing were more rigorous and conservative.
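For context, here is a rough sanity check on whether an x8 link can even feed those numbers; the math is mine, and I'm assuming PCIe 2.0 lanes since the generation isn't stated:

```python
# Rough check: can an x8 PCIe link carry 2200 MB/s of reads?
# Assuming PCIe 2.0: ~500 MB/s of usable bandwidth per lane, per direction,
# after 8b/10b encoding overhead.
lanes = 8
per_lane_mb_s = 500
link_mb_s = lanes * per_lane_mb_s          # ~4000 MB/s each direction

quoted_read, quoted_write = 2200, 1100     # MB/s, from the quoted specs
print(f"x8 PCIe 2.0 link: ~{link_mb_s} MB/s per direction")
print(f"Read headroom:  ~{link_mb_s - quoted_read} MB/s")
print(f"Write headroom: ~{link_mb_s - quoted_write} MB/s")
# For comparison, SATA 6Gb/s tops out near 550-600 MB/s, which is why
# this class of drive moves to PCIe in the first place.
```

So the quoted throughput fits comfortably inside an x8 link; the interesting question is whether the flash controllers and firmware can sustain it under a real torture test.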