Just a very brief look at a couple of patent filings by Apple, with some descriptions of potential applications. Apple seems to want to use augmented reality for navigation, using the onboard video camera. One half of the screen would show the live video feed; the other half would be a ‘virtual’ 3D rendition of that scene, to let you find a path or maybe a parking space in between all those buildings.
The second filing mentions a see-through screen whose opacity can be regulated by the user. The information display will take precedence over the image seen through the LCD panel. It defaults to totally opaque with no voltage applied at all (an In-Plane Switching design for the LCD).
However, the most intriguing part of the story as told by AppleInsider is the use of the device’s sensors to determine angle, direction, and bearing, which are then sent over the network. Why the network? Well, the whole rendering of the 3D scene described in the first patent filing is done somewhere in the cloud and spit back to the iOS device. No onboard 3D rendering is needed, or at least not at that level of detail. Maybe those data centers in North Carolina are really cloud-based 3D rendering farms?
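For what it’s worth, here is a minimal sketch of what the device side of that round trip could look like. Everything here is my own guess for illustration, including the endpoint URL and the JSON payload; the filing itself describes none of this in code:

```python
import json
import urllib.request

# Hypothetical rendering endpoint -- the filing doesn't name one.
RENDER_URL = "https://example.com/render3d"

def request_rendered_scene(lat, lon, heading, pitch, bearing):
    """Send the device's position and orientation to the cloud and
    get back a pre-rendered 3D view of the same scene."""
    payload = json.dumps({
        "lat": lat, "lon": lon,
        "heading": heading,   # compass direction the camera faces
        "pitch": pitch,       # tilt angle from the accelerometer
        "bearing": bearing,   # direction of travel
    }).encode("utf-8")
    req = urllib.request.Request(
        RENDER_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()    # e.g. a JPEG of the rendered scene

# The device would then display this frame beside the live camera feed.
```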
While everyone in the IT racket is trying to figure out how many Intel Xeon and Atom chips can be replaced by ARM processors, Steve Furber, the main designer of the 32-bit ARM RISC processor at Acorn in the 1980s and now the ICL Professor of Computer Engineering at the University of Manchester, is asking a different question: how many neurons can an ARM chip simulate?
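For a rough flavor of the per-neuron arithmetic hiding behind that question, here is a toy leaky integrate-and-fire loop in Python. It is my own illustration of the kind of update each core would repeat every simulation tick, not anything from Furber’s actual project:

```python
import random

def simulate_lif(n_neurons=1000, steps=1000, dt=1.0,
                 tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Toy leaky integrate-and-fire simulation: the kind of small
    arithmetic update an ARM core would repeat every tick for each
    neuron it hosts. All parameters are illustrative."""
    v = [v_rest] * n_neurons   # membrane potential per neuron (mV)
    spikes = 0
    for _ in range(steps):
        for i in range(n_neurons):
            current = random.uniform(0.0, 2.0)           # stand-in input
            v[i] += dt * ((v_rest - v[i]) / tau + current)
            if v[i] >= v_thresh:                         # spike and reset
                spikes += 1
                v[i] = v_reset
    return spikes

print(simulate_lif())
```

How many of these updates a single ARM core can sustain per millisecond is, in effect, Furber’s question.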
That question reminds me a bit of an old TV commercial that would air during the Saturday cartoons. Tootsie Roll brand lollipops had a center made out of Tootsie Roll, and the challenge was to determine how many licks it takes to get to the center of a Tootsie Pop. The answer was, “The world may never know.” And so it goes for simulations, large-scale and otherwise, of the human brain.
I also remember reading Stewart Brand’s 1987 book about the MIT Media Lab and their installation of a brand new multi-processor supercomputer called the Connection Machine. Danny Hillis was the designer and author of the original concept of stringing together a series of small one-bit computer cores to act like ‘neurons’ in a larger array of CPUs. The scale was designed to top out at 65,536 (2^16) processors. At the time the MIT Media Lab had the machine only 1/4 populated, but was attempting to do useful work with it at that size. Hillis spun out of MIT to create a startup company called Thinking Machines (the name reflecting the neuron-style architecture he had pursued as a grad student). In fact, all of Hillis’s ideas stemmed from the research that led up to the original Connection Machine Mark 1.
Spring forward to today and the sudden appearance of massively parallel, low-power servers like Calxeda’s using ARM chips and the Quanta SQ-2 using Tilera chips (Tilera also being an MIT spin-out). Similarly, the SeaMicro SM10000-64 uses Intel Atom chips at large scale and in large quantity. And SeaMicro is making sales TODAY. It almost seems like a stereotypical case of an idea being way ahead of its time. So recognize the opportunity, because now the person directly responsible for designing the ARM chip is attacking the same problem Danny Hillis was attacking all those years ago.
Personally I would like to see Hillis join this program in some way, not as Principal Investigator but maybe as a background consultant. Nothing wrong with a few more eyes on the preliminary designs, especially given Hillis’s background in programming those old mega-scale computers; that is the true black art of attempting a brain simulator at this scale. Steve Furber might just be able to make lightning strike twice (once for the Acorn/ARM CPUs and once more for simulating the brain in silicon).
I too am a big believer in RSS. And while I am dipping my toes into Facebook and Twitter, the bulk of my consumption comes from the big blogroll I’ve amassed and refined going back to the Radio UserLand days in 2002.
When I left the pageview business I walked away from an engine that had, for many years, manufactured an audience for my writing. Four years on I’m still adjusting to the change. I always used to cringe when publishers talked about using content to drive traffic. Of course when the traffic was being herded my way I loved the attention. And when it wasn’t I felt — still feel — its absence. There are plenty of things I don’t miss, though. Among t …
SeaMicro has been peddling its SM10000-64 micro server, based on Intel’s dual-core, 64-bit Atom N570 processor and cramming 256 of these chips into a 10U chassis. . .
. . . The SM10000-64 is not so much a micro server as a complete data center in a box, designed for low power consumption and loosely coupled parallel processing, such as Hadoop or Memcached, or small monolithic workloads, like Web servers.
While it is not always easy to illustrate the cost/benefit and return on investment of a lower-power box like the SeaMicro, running it head to head against a bunch of off-the-shelf Xeon boxes on a similar workload really shows the difference. The calculation of the benefit is critical too. What do you measure? Is it speed? Speed per transaction? Total volume allowed through? Or is it cost per transaction across a set number of transactions? You’re getting closer with that last one. The test setup used a set number of transactions that had to be completed in a set period of time. The benchmark then measured the total power dissipated to accomplish that number of transactions in that window. SeaMicro came away the winner in unit cost per transaction in power terms. While the Xeon-based servers had huge excess speed and capacity, their power dissipation put them pretty far into the higher cost-per-transaction category.
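In back-of-envelope terms the winning metric is just energy divided by work. A quick sketch with made-up numbers, since the benchmark’s raw figures aren’t published here:

```python
def joules_per_transaction(watts, seconds, transactions):
    """Energy cost per transaction: power drawn over the test
    window divided by the work completed in that window."""
    return (watts * seconds) / transactions

# Hypothetical figures for illustration only.
fixed_txns = 1_000_000
window_s = 3600
seamicro = joules_per_transaction(2500, window_s, fixed_txns)  # 2.5 kW box
xeons = joules_per_transaction(8000, window_s, fixed_txns)     # rack of Xeons
print(f"SeaMicro: {seamicro:.2f} J/txn  Xeon rack: {xeons:.2f} J/txn")
```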
However, it is very difficult to communicate this advantage that SeaMicro has over Intel. Future tests/benchmarks need to be constructed with clearly stated goals and criteria, specifically a case history of a particular problem that could be solved either by a SeaMicro server or by a bunch of Intel boxes running Xeon CPUs with big caches. Once that case history is well described, the two architectures can be put to work with the end goal stated in clear terms (cost per transaction). Then and only then will SeaMicro communicate effectively how it does things differently and how that can save money. Otherwise it’s too different to measure effectively against an Intel Xeon-based rack of servers.
Though the AR element is not particularly elegant, merely consisting of a blue dot superimposed on your cell phone screen that guides the user through Tokyo’s streets, we think it’s nevertheless a clever marketing gimmick.
Augmented Reality (AR) is in the news this week, being used for a marketing campaign in Tokyo, Japan. It’s mostly geared toward getting people out to visit bars and restaurants to collect points. Whoever earns enough points can cash them in for Chivas Regal memorabilia. But hey, it’s something, I guess. I just wish the navigation interface were a little more sophisticated.
I also wonder how many different phones you can use as personal navigators to find the locations awarding points. It seems like GPS is an absolute requirement, but so is a Foursquare or Livedoor client.
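Under the hood, this kind of campaign is presumably just a proximity check of your GPS fix against each venue. A minimal sketch of that idea, with a purely hypothetical venue list:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

# Hypothetical venues awarding points when you check in nearby.
venues = [("Bar A", 35.6595, 139.7005, 10),
          ("Restaurant B", 35.6612, 139.7040, 15)]

def award_points(lat, lon, radius_m=50):
    """Sum the points for every venue within radius_m of the fix."""
    return sum(pts for name, vlat, vlon, pts in venues
               if haversine_m(lat, lon, vlat, vlon) <= radius_m)

print(award_points(35.6596, 139.7006))  # near Bar A -> 10 points
```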
Cameron said in an interview posted on the ID conference’s website last month that he was disappointed about the lack of an industry advocate championing what he has dubbed “user-centric identity”, which is about keeping various bits of an individual’s online life totally separated.
CRM, meet VRM: we want our identity separated. This is one of the goals of Vendor Relationship Management, as opposed to “Customer Relationship Management”. I want to share a well-defined set of details with Windows Live!, Facebook, Twitter, and Google. Instead, I exist as separate entities that each of them tries to aggregate and profile, to learn more about what I do outside their respective web apps. So if someone can champion my ability to control what I share with which online service, all the better. If Microsoft understands this, it’s possible someone like Kim Cameron will be able to accomplish some big things with Windows Live! ID logins and profiles. Otherwise, this is just another attempt to capture web traffic into a commercial, private intraweb. I count Apple, Facebook, and Google as private-intraweb competitors.
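In code, the idea reduces to one profile plus a per-service release policy that the user, not the vendor, controls. A toy sketch of that shape; this is my own construction, not Cameron’s actual design:

```python
# One profile, many services, each seeing only what the user allows.
profile = {
    "name": "Alice Example",
    "email": "alice@example.com",
    "birthdate": "1980-01-01",
    "location": "Raleigh, NC",
}

# Per-service release policy, set by the user rather than the vendor.
policy = {
    "facebook.com": {"name"},
    "twitter.com": {"name"},
    "live.com": {"name", "email"},
}

def claims_for(service):
    """Return only the profile fields the user approved for this service."""
    allowed = policy.get(service, set())
    return {k: v for k, v in profile.items() if k in allowed}

print(claims_for("twitter.com"))  # {'name': 'Alice Example'}
```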
Upstart mega-multicore chip maker Tilera has not yet started sampling its future Tile-Gx 3000 series of server processors, and companies have already locked in orders for the chips.
Proof that a shipping product doesn’t always make all the difference, although it might be nice to tout the performance of an actual shipping product. What is becoming more real is the power efficiency of the Tilera architecture, core for core, versus Intel’s x86 architecture. Tilera can provide a much lower Thermal Design Power (TDP) per core than typical Intel chips running the same workloads. So, Tilera for the win, on paper anyway.
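The per-core comparison is just the chip’s rated TDP divided across its cores. A quick sketch with illustrative numbers only, not either vendor’s published specs:

```python
def watts_per_core(chip_tdp_w, cores):
    """TDP spread evenly across cores -- a crude but common metric."""
    return chip_tdp_w / cores

# Illustrative figures for the shape of the comparison, not real specs.
tilera_gx = watts_per_core(48, 100)  # ~0.5 W/core for a 100-core part
xeon = watts_per_core(95, 8)         # ~12 W/core for an 8-core Xeon
print(f"Tilera: {tilera_gx:.2f} W/core  Xeon: {xeon:.2f} W/core")
```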
Thus far, Intel’s Many Integrated Core (MIC) is little more than a research project. Intel picked up the remnants of the failed “Larrabee” graphics card project, rechristened it Knights, and put it solely in the service of the king of computing, the CPU.
Ahhh, alas poor ol’ Larrabee, we hardly knew ye. And yet, somehow your ghost will rise again, and again, and again. I remember the hints at the 80-core CPU, which then fell to 64 cores, then 40 cores, and now just today I read this article to find out it is merely Larrabee and has a grand total of (hold tight, are you ready for this shocker?) 32 cores. Wait, what was that? Did you say 32 cores? Let’s turn back the page to May 15, 2009, when Intel announced the then-new Larrabee graphics processing engine, a 32-core processor. That’s right, nothing (well, maybe not nothing) has happened in TWO YEARS! Or very little has happened: a few die shrinks, and now the upcoming tri-gate (3D) transistors for the 22nm design revision of Intel Architecture CPUs. It also looks like they may have shuffled around the floor plan/layout of the first-gen Larrabee CPU to help speed things up a bit. But other than these incremental touch-ups, the car looks very much like the model from two years ago. What we can also hope has improved since 2009 is the speed and efficiency of the compilers Intel’s engineers have crafted to accompany the release of this re-packaged Larrabee.
This is the shortest, most pragmatic presentation I’ve seen about what SSDs can do for you. He recommends buying Intel 320s and getting your feet wet, the equivalent of moving from a bicycle to a Ferrari. Later on, if you need to go with a PCIe SSD, do it, but that’s like the difference between a Formula 1 race car and a Ferrari. Personally, despite the lack of major difference Artur is trying to illustrate, I still like the idea of buying once and getting more than you need. And if this doesn’t start you down the road of seriously buying SSDs of some sort, check out this interview with Violin Memory CEO Don Basile:
Basile said: “Larry is telling people to use flash … That’s the fundamental shift in the industry. … Customers know their competitors will adopt the technology. Will they be first, second or last in their industry to do so? … It will happen and happen relatively quickly. It’s not just speed; it’s the lowest cost of database transaction in history. [Flash] is faster and cheaper on the exact same software. It’s a no-brainer.”
Violin Memory is the current market leader in data center SSD installations for transactional or analytical processing. The boost folks get from putting their databases on Violin Memory boxes is automatic, requires very little tuning, and the results are just flat-out astounding. The ‘Larry’ quoted above is Larry Ellison of Oracle, the giant database maker. So with that kind of praise I’m going to say the tipping point is near, but please read the article. Chris Mellor lays out a pretty detailed future of evolution in SSD sales and new product development. Three-bit multi-level memory cells in NAND flash are what Mellor thinks will be the tipping point, as price is still the biggest sticking point for anyone responsible for bidding on new storage system installs. That price sticking point is a bigger issue for batch-oriented, off-line data warehouse analysis; for online streaming analysis, SSD is already cheaper per byte per second of throughput. So depending on the style of database work you do, or the performance you need, SSD is putting the big-iron spinning-hard-disk vendors to shame. The inertia of big capital outlays and cozy vendor relationships will make it harder for some shops to adopt the new technology (“But IBM is giving us such a big discount!” “We are an EMC shop,” etc.). However, the competitors of the folks who own those data centers will soon eat all the low-hanging fruit a simple cutover to SSDs affords, and the competitive advantage will swing to the early adopters.
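That cheaper-per-byte-per-second-of-throughput claim is easy to sanity check. A sketch with placeholder 2011-era prices, for illustration rather than actual quotes:

```python
def dollars_per_mb_per_s(price_usd, throughput_mb_s):
    """Cost of each MB/s of sustained throughput a device delivers."""
    return price_usd / throughput_mb_s

# Placeholder figures for illustration only, not real street prices.
hdd_15k = dollars_per_mb_per_s(300, 150)  # 15K RPM disk, ~150 MB/s
ssd = dollars_per_mb_per_s(500, 500)      # SATA SSD, ~500 MB/s
print(f"HDD: ${hdd_15k:.2f}/MB/s  SSD: ${ssd:.2f}/MB/s")
```

Per gigabyte of capacity the disk still wins, which is why the batch/data-warehouse crowd balks; per unit of throughput the SSD wins, which is why the streaming-analysis crowd is cutting over.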
*Late Note: Chris Mellor just followed up Monday night (June 27th) with an editorial further laying out the challenge to disk storage presented by the data center Flash Array vendors. Check it out:
What should the disk drive array vendors do, if this scenario plays out? They should buy in or develop their own all-flash array technology. Having a tier of SSD storage in a disk drive array is a good start, but customers will want the simpler choice of an all-flash array and, anyway, they are here now. Guys like Violin and Whiptail and TMS are knocking on the storage array vendors’ customer doors right now.
Calxeda is in the news again this week with some more announcements regarding its plans. Thinking back to the last article I posted on Calxeda: this company boasts an ARM-based server packing 120 CPUs (each with four cores) into a 2U-high rack enclosure (making it just 3-1/2″ tall). With every evolution in hardware one needs an equal if not greater revolution in software, which is the point of Calxeda’s announcement of its new software partners.
It’s mostly cloud apps, cloud provisioning, and cloud management types of vendors. Through the partnership, each company gets early access to the hardware Calxeda is promising to design, prototype, and eventually manufacture. Both Google and Intel have pooh-poohed the idea of using “wimpy processors” on massively parallel workloads, claiming faster serialized workloads are still easier to manage through existing software/programming techniques. Yet for many years, even as Intel has complained about the programming tools, it has gone the multi-core/multi-thread route, hoping to continue its domination by offering up ‘newer’ and higher-performing products. So while Intel bad-mouths parallelism on competing CPUs, it seems desperate to sell multi-core to willing customers year over year.
Even as power efficient as those cores may be, Intel’s old culture of maximum performance for the money still holds sway. Even the most recent ultra-low-voltage i-series CPUs still hit about 17 watts for chips clocking in around 1.8GHz (speed-boosting up to 2.9GHz in a pinch). Even if Intel allowed these chips to be installed in servers, we are still talking about a lot of Thermal Design Power (TDP) that has to be chilled to keep things running.