Author: carpetbomberz
-
AppleInsider | Apple seen merging iOS, Mac OS X with custom A6 chip in 2012
Rumors of an ARM-based MacBook Air are not new. In May, one report claimed that Apple had built a test notebook featuring the same low-power A5 processor found in the iPad 2. The report, which came from Japan, suggested that Apple officials were impressed by the results of the experiment.
via AppleInsider | Apple seen merging iOS, Mac OS X with custom A6 chip in 2012.
Following up on an article they ran back on May 27th, and one prior to that on May 6th, AppleInsider does a bit of prediction and prognostication about the eventual fusion of iOS and Mac OS X. What they see triggering this is an ARM chip able to execute 64-bit binaries across all of the product lines (the fabled Apple A6). How long would such a consolidation and interweaving take? How many combined updaters, security patches, and Pro App updaters would it take to get OS X 10.7 to be ‘more’ like iOS than it is today? The software development is going to take a while, and it’s not just a matter of cross-compiling to an ARM chip from software built for Intel chips.
Given that 64-bit Intel Atom chips are already running in the new SeaMicro SM10000-64 (x64), it won’t be long now, I’m sure, before the roughly equivalent ARM Cortex-A15 hits full stride. ARM has been aiming for a four-core design, which the Cortex-A15 release should deliver real soon now (RSN). The next step, after that chip is licensed, piloted, tested, and put into production, will be a 64-bit clean design. I’m curious to see whether 64-bit will be applied across ALL the different product lines within Apple. Especially when power usage and thermal design power (TDP) are considered, will 64-bit ARM chips be as battery friendly? I wonder. True, Intel jumped the 64-bit divide on the desktop with the Core 2 Duo line some time ago and made those chips somewhat battery friendly. But they cannot compare at all to the 10+ hours one gets today from a 32-bit ARM chip in the iPad.
Lastly, app developers will also need to keep their Xcode environments up to date and merge in new changes constantly, right up to the big cutover to ARM x64. No telling what that’s going to be like, apart from the two problems I have already raised here. Apple, in the run-up to 10.7 Lion, was very late in providing the support and tools to let developers get their apps ready. I will say, though, that in the history of hardware/software migrations, Apple has done more of them, more successfully, than any other company. So I think they will be able to pull it off, no doubt, but there will be much wailing and gnashing of teeth. And hopefully we, the end users of the technology, will see something better out of it, something better than a much bigger profit margin for Apple (though that seems to be the prime mover in most recent cases, as Steve Jobs has done the long slow fade into obscurity).
If ARM x64 is inevitable, and iOS on everything too, then I’m hoping things don’t change so much that I can’t keep working the way I do now on the desktop. Currently on OS X 10.7 I am completely ignoring:
- Gestures
- Mission Control
- Launchpad
- App Store (well, not entirely, because I had to download Lion from it)
Let’s hope this roster doesn’t get even longer over time as iOS becomes the de facto OS on all Apple products, because I was sure hoping the future would be brighter than this. And as AppleInsider quotes from May 6th:
“In addition to laptops, the report said that Apple would ‘presumably’ be looking to move its desktop Macs to ARM architecture as well. It characterized the transition to Apple-made chips for its line of computers as a ‘done deal’.”
Related articles
- Wired.com’s story about Jefferies & Co. analyst Peter Misek (www.wired.com)
- Read MacNN’s coverage of the same story (www.ipodnn.com)
- Slashdot’s blurb about this prediction and commentary (www.slashdot.com)
- Apple to incrementally deliver iCloud, new features for Mac and iOS users – Apple Insider (news.google.com)
-
First Sungard goes private and now Blackboard
The buyers include Bain Capital, the Blackstone Group, Goldman Sachs Capital Partners, Kohlberg Kravis Roberts, Providence Equity Partners and Texas Pacific Group. The group is led by Silver Lake Partners. The deal is a leveraged buyout – Sungard will be taken private and its shares removed from Wall Street.
via Sungard goes private • The Register. Posted in CIO, 29th March 2005 10:37 GMT
RTTNews – Private equity firm Providence Equity Partners, Inc. agreed Friday to take educational software and systems provider Blackboard, Inc. (BBBB) private for $45 per share in an all-cash deal of $1.64 billion.
http://www.rttnews.com/Content/TopStories.aspx?Id=1658133 7/1/2011 8:53 AM ET
It would appear now that Providence Equity Partners owns two giants in the higher-ed outsourcing industry: Sungard and Blackboard. What does this mean? Will there be consolidation where there is overlap between the two companies? Will there be attempts to steal customers or to upsell each other’s products?
Related articles
- Special Notes on Recent Changes for Blackboard (hollymccracken.wordpress.com)
- Providence Equity Is Said to Be in Lead to Buy Blackboard (businessweek.com)
-
SeaMicro pushes Atom smasher to 768 cores in 10U box • The Register
An original SM10000 server with 512 cores and 1TB of main memory cost $139,000. The bump up to the 64-bit Atom N570 for 512 cores and the same 1TB of memory boosted the price to $165,000. A 768-core, 1.5TB machine using the new 64HD cards will run you $237,000. That’s 50 per cent more oomph and memory for 43.6 per cent more money.
via SeaMicro pushes Atom smasher to 768 cores in 10U box • The Register.
SeaMicro continues to pump out the jams, releasing another updated chassis in less than a year. There is now a grand total of 768 processor cores jammed into that 10U-high box, which led me to believe they had just eclipsed the compute per rack unit of the Tilera and Calxeda massively parallel cloud-servers-in-a-box. But that would be wrong, because Calxeda is making a 2U rack unit hold 120 four-core ARM CPUs. That gives you a grand total of 480 cores in just 2 rack units; multiply that by five and you get 2,400 cores in 10U of rack space. So advantage Calxeda in total core count; however, let’s also consider software. The Atom, the CPU SeaMicro has chosen all along, is an Intel-architecture chip, and an x64 part at that. It is the best of both worlds for anyone who already has a big investment in Intel binary-compatible OSes and applications. It is most often the software, and its legacy pieces, that drives the choice of which processor goes into your data cloud.
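To make the density math above explicit, here is a quick back-of-the-envelope sketch in Python. The core counts and list prices are the figures quoted in this post and in The Register excerpt; none of it comes from a vendor spec sheet I have independently verified.

```python
# Rough core-density and price-per-core comparison using the figures quoted
# in the post. Illustrative back-of-the-envelope math, not vendor benchmarks.

def cores_per_rack_unit(total_cores: int, rack_units: int) -> float:
    """Processor cores per 1U of rack space."""
    return total_cores / rack_units

# SeaMicro SM10000 with the new 64HD cards: 768 Atom cores in a 10U chassis
seamicro_density = cores_per_rack_unit(768, 10)        # 76.8 cores/U

# Calxeda, as described above: 120 four-core ARM SoCs in a 2U box
calxeda_density = cores_per_rack_unit(120 * 4, 2)      # 240.0 cores/U
calxeda_in_10u = int(calxeda_density * 10)             # 2400 cores in 10U

# Price per core for the SM10000 configurations quoted by The Register
sm10000_price_per_core = {
    "512-core Atom N570, 1TB": 165_000 / 512,          # ~$322 per core
    "768-core 64HD, 1.5TB":    237_000 / 768,          # ~$309 per core
}

print(f"SeaMicro density: {seamicro_density:.1f} cores/U")
print(f"Calxeda density:  {calxeda_density:.1f} cores/U "
      f"({calxeda_in_10u} cores in 10U)")
for config, price in sm10000_price_per_core.items():
    print(f"{config}: ${price:,.0f} per core")
```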
Anyone who has a clean slate to start from might be able to choose between Calxeda and SeaMicro for their applications and infrastructure. And if density and thermal design point per rack unit are very important, Calxeda will suit your needs too, I would think. But who knows? Maybe your workflow isn’t as massively parallel as a Calxeda server assumes, and you might have a much lower implementation threshold getting started on an Intel system, so again advantage SeaMicro. A real industry analyst would look at these two competing companies as complementary: different architectures for different workflows.
-
NoSQL is What? (via Jeremy Zawodny’s blog)
Great set of comments, along with a very good description of the advantages of using NoSQL in a web application. There seem to be quite a few philosophical differences over whether or not NoSQL needs to be chosen at the earliest stages of ANY project. But Jeremy’s comments more or less prove that you pick the right tool for the job. ‘Nuff said.
Related articles
- NoSQL is a Premature Optimization (smoothspan.wordpress.com)
- NoSQL gaining popularity in enterprises (i-programmer.info)
- Is NoSQL a Premature Optimization that’s Worse than Death? Or the Lady Gaga of the Database World? (highscalability.com)
- NoSQL: NoSQL/NewSQL/MySQL Is Not a Zero Sum Game (themindstorms.blogspot.com)
-
ARM daddy simulates human brain with million-chip super • The Register
While everyone in the IT racket is trying to figure out how many Intel Xeon and Atom chips can be replaced by ARM processors, Steve Furber, the main designer of the 32-bit ARM RISC processor at Acorn in the 1980s and now the ICL professor of engineering at the University of Manchester, is asking a different question, and that is: how many neurons can an ARM chip simulate?
via ARM daddy simulates human brain with million-chip super • The Register.
The phrase reminds me a bit of an old TV commercial that would air during the Saturday cartoons. Tootsie Roll brand lollipops had a center made out of Tootsie Roll, and the challenge was to figure out how many licks it takes to get to the center of a Tootsie Roll Pop. The answer was, “The world may never know.” And so it goes for simulations, large-scale and otherwise, of the human brain.
I also remember reading Stewart Brand’s 1987 book about the MIT Media Lab and its installation of a brand-new multi-processor supercomputer called the Connection Machine. Danny Hillis was the designer and the author of the original concept: stringing together a series of small one-bit computer cores to act like ‘neurons’ in a larger array of CPUs. The design was meant to top out at 65,536 (2^16) processors. At the time, the MIT Media Lab had the machine only a quarter filled but was attempting to do useful work with it at that size. Hillis spun out of MIT to create a startup company called Thinking Machines (to reflect the neuron-style architecture he had pursued as a grad student). In fact, all of Hillis’s ideas stemmed from the research that led up to the original Connection Machine, the CM-1.
Spring forward to today and the sudden appearance of massively parallel, low-power servers: Calxeda using ARM chips, the Quanta SQ-2 using Tilera chips (Tilera also being an MIT spin-out), and similarly the SeaMicro SM10000-64, which uses Intel Atom chips in large quantity. And SeaMicro is making sales TODAY. It almost seems like a stereotypical case of an idea being way ahead of its time, so recognize the opportunity: the person directly responsible for designing the ARM chip is now attacking the same problem Danny Hillis was all those years ago.
Personally, I would like to see Hillis join this program in some way, not as principal investigator but maybe as a background consultant. There’s nothing wrong with a few more eyes on the preliminary designs, especially given Hillis’s background in programming those old mega-scale computers; that is the true black art of trying to do a brain simulator at this scale. Steve Furber might just be able to make lightning strike twice (once for the Acorn/ARM CPUs and once more for simulating the brain in silicon).
Related articles
- Simulating the human brain’s networks (theswarm.wordpress.com)
- SeaMicro Crams 768 Atom Cores in New Cloud Server (pcworld.com)
- ARM says its SpiNNaker chip simulates 1,000 brain neurons (electronista.com)
-
Distracting chatter is useful. But thanks to RSS (remember that?) it’s optional. (via Jon Udell)
I too am a big believer in RSS. And while I am dipping my toes into Facebook and Twitter, the bulk of my consumption comes through the big blogroll I’ve amassed and refined going back to my Radio UserLand days in 2002.
via Jon Udell
-
Atom smasher claims Hadoop cloud migration victory • The Register
SeaMicro has been peddling its SM10000-64 micro server, based on Intel’s dual-core, 64-bit Atom N570 processor and cramming 256 of these chips into a 10U chassis. . .
. . . The SM10000-64 is not so much a micro server as a complete data center in a box, designed for low power consumption and loosely coupled parallel processing, such as Hadoop or Memcached, or small monolithic workloads, like Web servers.
via Atom smasher claims Hadoop cloud migration victory • The Register.
While it is not always easy to illustrate the cost/benefit and return on investment of a lower-power box like the SeaMicro, running it head to head against a bunch of off-the-shelf Xeon boxes on a similar workload really shows the difference. How you calculate the benefit is critical, too. What do you measure? Is it speed? Is it speed per transaction? Is it total volume allowed through? Or is it cost per transaction within a set number of transactions? You’re getting closer with that last one. The test setup used a set number of transactions that needed to be completed in a set period of time; the benchmark then measured the total power dissipated to accomplish that number of transactions in that window. SeaMicro came away the winner in unit cost per transaction in power terms. While the Xeon-based servers had huge excess speed and capacity, their power dissipation put them pretty far into the higher cost-per-transaction category.
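To make that metric concrete, here is a minimal sketch of an energy-per-transaction calculation. The wattages, workload size, and time window below are entirely made-up illustration numbers, not figures from the actual benchmark.

```python
# Illustrative energy-per-transaction comparison. All figures below are
# hypothetical placeholders, not SeaMicro's or Intel's benchmark numbers.

def joules_per_transaction(avg_power_watts: float,
                           elapsed_seconds: float,
                           transactions: int) -> float:
    """Energy spent per transaction: average power * elapsed time / work done."""
    return (avg_power_watts * elapsed_seconds) / transactions

TRANSACTIONS = 1_000_000   # fixed amount of work both systems must finish (assumed)
WINDOW_SECS  = 3_600       # both systems get the same one-hour window (assumed)

# Assumed average draw while running the workload (made-up numbers)
seamicro_jpt  = joules_per_transaction(3_500, WINDOW_SECS, TRANSACTIONS)
xeon_rack_jpt = joules_per_transaction(12_000, WINDOW_SECS, TRANSACTIONS)

print(f"SeaMicro:  {seamicro_jpt:.1f} J per transaction")
print(f"Xeon rack: {xeon_rack_jpt:.1f} J per transaction")
# Even if the Xeon rack finishes with speed and capacity to spare, the metric
# that matters here is energy per unit of completed work, not raw throughput.
```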
However, it is very difficult to communicate this advantage that SeaMicro has over Intel. Future tests and benchmarks need to be constructed with clearly stated goals and criteria, specifically framed as a case history of a particular problem that could be solved either by a SeaMicro server or by a bunch of Intel boxes running Xeon CPUs with big caches. Once that case history is well described, the two architectures can be put to work with the end goal stated in clear terms (cost per transaction). Then and only then will SeaMicro communicate effectively how it does things differently and how that can save money. Otherwise it is too different to measure effectively against an Intel Xeon-based rack of servers.
Related articles
- Big data on micro servers? You bet. (gigaom.com)
- A visit to Silicon Valley’s hot new hardware company: Microserver maker SeaMicro (scobleizer.com)
- eHarmony Switches from Cloud to Atom Servers (datacenterknowledge.com)