Categories
computers data center mobile technology

AnandTech – Applied Micro's X-Gene: The First ARMv8 SoC

APM expects that even with a late 2012 launch it will have a 1 – 2 year lead on the competition. If it can get the X-Gene out on time, hitting power and clock targets (both very difficult goals), the head start will be tangible. Note that by the end of 2012 we'll only just begin to see the first Cortex A15 implementations. ARMv8 based competitors will likely be a full year out, at least.

via AnandTech – Applied Micro's X-Gene: The First ARMv8 SoC.

Chip Diagram for the ARM version 8 as implemented by APM

It’s nice to get confirmation of the production timelines for the Cortex A15 and the next-generation ARM version 8 architecture. Don’t expect to see shipping chips, much less finished products using those chips, until well into 2013 or even later. As for the 4-core ARM A15, finished products will not appear until well into 2012. This means that if Intel is able to scramble, it has time to further refine its Atom chips to reach the power level and Thermal Design Power (TDP) of the competing ARM version 8 architecture. The goal seems to be to jam in more cores per CPU socket than is currently done on the Intel architecture (up to almost 32 in one of the graphics presented with the article).

The target we are talking about is 2W per core @ 3GHz, and it is going to be a hard, hard target to hit for any chip designer or manufacturer. One can only hope that TSMC can help APM get a finished chip out the door on its finest-ruling chip production lines (although an update to the article indicates it will ship on 40nm to get it out the door quicker). The finer the ruling of signal lines on the chip, the lower the TDP and the higher they can run the clock rate. If APM can accomplish its goal of 2W per CPU core @ 3GHz, I think everyone will be astounded. And if this same chip can be sampled at the earliest prototype stages by a current ARM server manufacturer, say Calxeda or even SeaMicro, then hopefully we can get benchmarks to show what kind of performance can be expected from the ARMv8 architecture and instruction set. These will be interesting times.
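The arithmetic behind that target is worth spelling out. The sketch below takes the 2W/core figure and the roughly 32-core socket from the article; the Xeon comparison point is my own, based on the L5520's published 60W TDP.

```python
# Back-of-the-envelope arithmetic for the X-Gene power target discussed above.
# Only the 2 W/core @ 3 GHz target comes from the article; the rest is an
# illustrative comparison.

WATTS_PER_CORE = 2.0      # APM's stated target
CORES_PER_SOCKET = 32     # upper end shown in the article's graphics

socket_power = WATTS_PER_CORE * CORES_PER_SOCKET
print(f"CPU power per {CORES_PER_SOCKET}-core socket: {socket_power:.0f} W")

# For comparison, a contemporary low-voltage Xeon L5520 is a 60 W TDP part
# with 4 cores, i.e. 15 W/core -- seven and a half times the per-core
# budget APM is aiming for (clock rates differ, so this is only a rough cut).
xeon_watts_per_core = 60 / 4
print(f"Xeon L5520: {xeon_watts_per_core:.1f} W/core")
```

Even if APM only gets close, a 64W budget for 32 cores is the kind of number that changes how many sockets fit in a rack.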

Intel Atom CPU Z520, 1.33GHz
Image via Wikipedia
Categories
media mobile

Augmented Reality Start-Up Ready to Disrupt Business – Tech Europe – WSJ

Image representing Layar as depicted in CrunchBase
Image via CrunchBase

“We have added to the platform computer vision, so we can recognize what you are looking at, and then add things on top of them.”

via Augmented Reality Start-Up Ready to Disrupt Business – Tech Europe – WSJ.

I’ve been a fan of Augmented Reality for a while, following the announcements from Layar over the past two years. I’m hoping something more comes out of this work than another channel for selling, advertising and marketing. But innovation always follows where the money is, and artistic, creative pursuits are NOT it. Witness the evolution of Layar from a toolkit to a whole package of brand-loyalty add-ons ready to be sent out whole to any smartphone owner unwitting enough to download the Layar-created app.

The emphasis in this WSJ article, however, is not on how Layar is trying to market itself. Instead, they are more worried about how Layar is creating a ‘virtual’ space where metadata is tagged onto a physical location. So a Layar Augmented Reality squatter can set up a very mundane virtual T-shirt shop (say, like Second Life) in the same physical location as a high-class couturier on a high street in London or Paris. What right does anyone have to squat in the Layar domain? Just like the Domain Name System squatters of today, they have every right by being there first. Which brings to mind how this will evolve into a game of technical one-upmanship whereby each Augmented Reality domain will be subject to the market forces of popularity. Witness the chaotic evolution of social networking, where AOL, Friendster, MySpace, Facebook and now Google+ all usurp market mindshare from one another.

While the Layar squatter has his T-shirt shop today, the question is: who knows this other than other Layar users? And who can say whether anyone else ever will? This leads me to conclude this is a much bigger deal to the WSJ than it is to anyone who might be sniped at or squatted upon within an Augmented Reality cul-de-sac. Though those stores and corporations may not be able to budge the Layar squatters, they can at least lay claim to the rest of their empire and prevent any future miscreants from owning their virtual space. But as I say, in one-upmanship there is no real end game, only the NEXT game.

Categories
computers technology wintel

Single-chip DIMM offers low-power replacement for sticks of RAM | ExtremeTech

A 256Kx4 Dynamic RAM chip on an early PC memor...
Image via Wikipedia

Invensas, a subsidiary of chip microelectronics company Tessera, has discovered a way of stacking multiple DRAM chips on top of each other. This process, called multi-die face-down packaging, or xFD for short, massively increases memory density, reduces power consumption, and should pave the way for faster and more efficient memory chips.

via Single-chip DIMM offers low-power replacement for sticks of RAM | ExtremeTech.

Who says there’s no such thing as progress? Apart from the DDR memory bus data rates moving from DDR-3 to DDR-4 soon, what have you read that was significantly different, much less better, than the first-generation DDR DIMMs from years ago? Chip stacking is de rigueur for manufacturers of Flash memory, especially in mobile devices with limited real estate on the motherboards. This packaging has flowed back into the computer market very handily and has led to small form factors across Flash memory devices. Whether it be thumb drives, aftermarket 2.5″ laptop Solid State Disks, or embedded mSATA modules, everyone is benefiting equally.

Whither stacking of RAM modules? I know there have been some efforts to do this, again for the mobile device market. But any large-scale flowback into the general computing market has been hard to see. I’m hoping this announcement from Invensas eventually becomes a real shipping product and not an attempt to stake a claim on intellectual property that will take the form of lawsuits against current memory designers and manufacturers. Stacking is the way to go; even if it can never be used in, say, a CPU, I would think clock speeds and power-saving requirements on RAM modules might be sufficient to allow some stacking to occur. And if the memory access speeds improve at the same time, so much the better.

Categories
cloud computers data center technology

Tilera routs Intel, AMD in Facebook bakeoff • The Register

Structure of the TILE64 Processor from Tilera

Facebook lined up the Tilera-based Quanta servers against a number of different server configurations making use of Intel's four-core Xeon L5520 running at 2.27GHz and eight-core Opteron 6128 HE processors running at 2GHz. Both of these x64 chips are low-voltage, low power variants. Facebook ran the tests on single-socket 1U rack servers with 32GB and on dual-socket 1U rack servers with 64GB. All three machines ran CentOS Linux with the 2.6.33 kernel and Memcached 1.2.3h.

via Tilera routs Intel, AMD in Facebook bakeoff • The Register.

You will definitely want to read this whole story as presented by El Reg. They have a few graphs displaying the performance of the Tilera-based Quanta data-cloud-in-a-box versus the Intel server rack. And let me tell you, on certain very specific workloads, like web caching using Memcached, I declare advantage Tilera. No doubt data center managers need to pay attention to this and get some more evidence to back up this initial white paper from Facebook, but this is big, big news. And all one need do, apart from tuning the software for the chipset, is add a few PCIe-based SSDs or a TMS RamSan, and you have what could theoretically be the fastest web caching performance possible. Even at this level of performance, there’s still room to grow, I think, on the hard drive storage front. What I would hope to see in future is Facebook doing an exhaustive test on the Quanta SQ-2 product versus Calxeda (ARM cloud in a box) and the SeaMicro SM-10000x64 (64-bit Intel Atom cloud in a box). It would prove an interesting research project just to see how big a role chipsets, chip architectures and instruction sets play in optimizing each for a particular style and category of data center workload. I know I will be waiting and watching.
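For anyone who hasn’t run Memcached: the workload Facebook was measuring is the classic cache-aside pattern. The toy in-process sketch below uses a plain dict standing in for the memcached server pool (a real deployment would use a memcached client library against networked servers), and the key name and backing-store function are made up for illustration; it only shows the get/set pattern being benchmarked.

```python
# A toy, in-process stand-in for the memcached get/set workload pattern the
# Facebook test exercised. The dict plays the role of the cache tier.

cache = {}

def fetch_from_database(key):
    # Hypothetical slow backing store standing in for the real database tier.
    return f"value-for-{key}"

def cached_get(key):
    """Cache-aside read: try the cache first, fall back to the database."""
    if key in cache:
        return cache[key], "hit"
    value = fetch_from_database(key)
    cache[key] = value          # populate so the next read is a cache hit
    return value, "miss"

value, status = cached_get("user:42:profile")
assert status == "miss"        # first read misses and warms the cache
value, status = cached_get("user:42:profile")
assert status == "hit"         # second read is served from memory
```

The benchmark boils down to how many of those hit-path round trips a box can serve per second, and per watt, which is where a many-core part like the TILE64 gets its opening.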

Categories
vague interests

Whodunnit: An Exercise in Passive Voice (via The Daily Post at WordPress.com)

Point taken: try to limit the use of ‘to be’ + past participle + ‘by’. I’m probably more guilty of this than most. That, and the use of ‘probably’.

We've all heard the non-apology "mistakes were made." Chances are that some of us have even used it when trying to admit a mistake without quite fessing up to it. This and similar phrases are so tempting because they're indirect about whodunnit. And they're indirect because they use a little thing called the passive voice. When talking about the passive voice, people often mention that it obscures the agent, which is just a fancy way of saying it … Read More

via The Daily Post at WordPress.com

Categories
cloud computers data center technology

SeaMicro pushes Atom smasher to 768 cores in 10U box • The Register

Image representing SeaMicro as depicted in CrunchBase
Image via CrunchBase

An original SM10000 server with 512 cores and 1TB of main memory cost $139,000. The bump up to the 64-bit Atom N570 for 512 cores and the same 1TB of memory boosted the price to $165,000. A 768-core, 1.5TB machine using the new 64HD cards will run you $237,000. That's 50 per cent more oomph and memory for 43.6 per cent more money. ®

via SeaMicro pushes Atom smasher to 768 cores in 10U box • The Register.

SeaMicro continues to pump out the jams, releasing another updated chassis in less than a year. There is now a grand total of 768 processor cores jammed into that 10U-high box. Which might lead me to believe they have just eclipsed the compute per rack unit of the Tilera and Calxeda massively parallel cloud-servers-in-a-box. But that would be wrong, because Calxeda is making a 2U server rack unit hold 120 four-core ARM CPUs. That gives you a grand total of 480 cores in just 2 rack units alone. Multiply that by 5 and you get 2,400 cores in a 10U rack server. So advantage Calxeda in total core count; however, let’s also consider software. Atom, the CPU SeaMicro has chosen all along, is an Intel-architecture chip, and an x64 chip at that. It is the best of both worlds for anyone who already has a big investment in Intel binary-compatible OSes and applications. It is most often the software and its legacy pieces that drive the choice of which processor goes into your data cloud.
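The core-count and price arithmetic above, spelled out (chassis figures as described in the post and in the quoted Register paragraph; nothing here beyond multiplication and division):

```python
# Core density: SeaMicro SM10000-64HD vs. a hypothetical 10U stack of
# Calxeda 2U chassis, using the figures cited in the post.

seamicro_cores_per_10u = 768                     # per The Register

calxeda_chips_per_2u = 120                       # 120 four-core ARM SoCs in 2U
calxeda_cores_per_2u = calxeda_chips_per_2u * 4  # 480 cores in 2U
calxeda_cores_per_10u = calxeda_cores_per_2u * 5 # 2,400 cores in 10U

print(f"Calxeda: {calxeda_cores_per_2u} cores/2U, "
      f"{calxeda_cores_per_10u} cores/10U vs SeaMicro's {seamicro_cores_per_10u}")

# And the price/performance claim from the quoted paragraph checks out:
cores_ratio = 768 / 512            # 1.5x the cores ("50 per cent more oomph")
price_ratio = 237_000 / 165_000    # ~1.436x the price (43.6 per cent more)
```

Raw core counts are not comparable across architectures, of course, which is exactly why the software question below matters.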

Anyone who has a clean slate to start from might be able to choose between Calxeda and SeaMicro for their applications and infrastructure. And if density and thermal design power per rack unit are very important, Calxeda will suit your needs, I would think. But who knows? Maybe your workflow isn’t as massively parallel as a Calxeda server demands, and you might have a much lower implementation threshold getting started on an Intel system, so again advantage SeaMicro. A real industry analyst would look at these two competing companies as complementary: different architectures for different workflows.

Categories
blogroll cloud gpu macintosh media mobile navigation wired culture

Apple patents hint at future AR screen tech for iPad | Electronista

Structure of liquid crystal display: 1 – verti...
Image via Wikipedia

Apple may be working on bringing augmented reality views to its iPad thanks to a newly discovered patent filing with the USPTO.

via Apple patents hint at future AR screen tech for iPad | Electronista. (Originally posted at AppleInsider; link below.)

Original Article: Apple Insider article on AR

Just a very brief look at a couple of patent filings by Apple, with some descriptions of potential applications. Apple seems to want to use the technology for navigation purposes using the onboard video camera. One half of the screen would show the live video feed; the other half would be a ‘virtual’ rendition of that scene in 3D, to allow you to find a path or maybe a parking space in between all those buildings.

The second filing mentions a see-through screen whose opacity can be regulated by the user. The information display will take precedence over the image seen through the LCD panel. It will default to totally opaque while using no voltage whatsoever (an In-Plane Switching design for the LCD).

However, the most intriguing part of the story as told by AppleInsider is the use of sensors on the device to determine angle, direction and bearing, which are then sent over the network. Why the network? Well, the whole rendering of the 3D scene described in the first patent filing is done somewhere in the cloud and spit back to the iOS device. No onboard 3D rendering is needed, or at least not at that level of detail. Maybe those data centers in North Carolina are really cloud-based 3D rendering farms?

Categories
google media surveillance technology wired culture

Kim Cameron returns to Microsoft as indie ID expert • The Register

Cameron said in an interview posted on the ID conference's website last month that he was disappointed about the lack of an industry advocate championing what he has dubbed “user-centric identity”, which is about keeping various bits of an individual's online life totally separated.

via Kim Cameron returns to Microsoft as indie ID expert • The Register.

CRM, meet VRM: we want our identity separated. This is one of the goals of Vendor Relationship Management, as opposed to “Customer Relationship Management”. I want to share a set of very well-defined details with Windows Live, Facebook, Twitter and Google. But instead I exist as separate entities that they then try to aggregate and profile to learn more beyond what I do on their respective web apps. So if someone can champion my ability to control what I share with which online service, all the better. If Microsoft understands this, it is possible someone like Kim Cameron will be able to accomplish some big things with Windows Live ID logins and profiles. Otherwise, this is just another attempt to capture web traffic into a commercial private Intraweb. I count Apple, Facebook and Google as private Intraweb competitors.

Categories
blogtools cloud media navigation science & technology technology

Goal oriented visualizations? (via Erik Duval’s Weblog)

Charles Minard's 1869 chart showing the losses...
Image via Wikipedia

Visualizations and their efficacy always take me back to Edward Tufte‘s big hardcover books on infographics (or chartjunk, when it’s done badly). As for this specific category, visualization leading to a goal, I think it’s still very much a ‘general case’. But examples are always better than theoretical descriptions of an ideal. So while I don’t have an example to give (which is what Erik Duval really wants), I can at least point to a person who knows how infographics get misused.

I’m also reminded somewhat of the most recent issue of Wired magazine, where there’s an article on feedback loops. How are goal-oriented visualizations different from, or better than, feedback loops? I’d say that’s an interesting question to investigate further. The primary example given in that story is the radar-equipped speed limit sign. It doesn’t tell you the posted speed; it merely tells you how fast you are going, and that by itself, apart from ticketing and making the speed limit signs more noticeable, did more to effect a change in behavior than any other option. So maybe a goal-oriented visualization could also benefit from techniques like feedback loops?

Some of the fine fleur of information visualisation in Europe gathered in Brussels today at the Visualizing Europe meeting. Definitely worth to follow the links of the speakers on the program! Twitter has a good trace of what was discussed. Revisit offers a rather different view on that discussion than your typical twitter timeline. In the Q&A session, Paul Kahn asked the Rather Big Question: how do you choose between different design alterna … Read More

via Erik Duval’s Weblog

Categories
media surveillance web standards wired culture

JSON Activity Streams Spec Hits Version 1.0

This is icon for social networking website. Th...
Image via Wikipedia

The Facebook Wall is probably the most famous example of an activity stream, but just about any application could generate a stream of information in this format. Using a common format for activity streams could enable applications to communicate with one another, and presents new opportunities for information aggregation.

via JSON Activity Streams Spec Hits Version 1.0.

Remember mash-ups? I recall the great wide wonder of putting together web pages that used ‘services’ provided for free through APIs published out to anyone who wanted to use them. There were many at one time; some still exist and others have been culled out. But as newer social networks begat yet newer ones (MySpace, Facebook, FourSquare, Twitter), none of the ‘outputs’ or feeds of any single one was anything more than a way of funneling you into its own login accounts and user screens. The gated community first requires you to be a member in order to play.

We went from ‘open’ to cul-de-sac and stovepipe in less than one full revision of social networking. However, maybe all is not lost; maybe an open standard can help folks re-use their own data at least (maybe I could mash up my own activity stream). Betting on whether or not this will take hold and see wider adoption by social networking websites would be risky. Likely each service provider will closely hold most of the data it collects and only publish the bare minimum necessary to claim compliance.

Another burden upon this sharing is the slowly creeping concern about the security of one’s own Activity Stream. It will no doubt have to be opt-in, and definitely not opt-out, as I’m sure people are more used to having fellow members of their tribe know what they are doing than putting out a feed to the whole Internet. Which makes me think of the old discussion of being able to fine-tune who has access to what (Doc Searls‘ old Vendor Relationship Management idea). Activity Streams could easily fold into that universe, where you regulate which threads of the stream are shared with which people. I would only really agree to use this service if it had that fine-grained level of control.
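For the curious, here is roughly what a single activity looks like under the JSON Activity Streams 1.0 serialization the spec defines; the person, note and IDs below are made up for illustration.

```python
import json

# A minimal Activity Streams 1.0 activity: who (actor) did what (verb)
# to what (object), and when (published). All names/IDs are hypothetical.
activity = {
    "published": "2011-08-01T15:04:55Z",
    "verb": "post",
    "actor": {
        "objectType": "person",
        "id": "urn:example:person:martin",
        "displayName": "Martin Smith",
    },
    "object": {
        "objectType": "note",
        "id": "urn:example:note:123",
        "content": "Remember mash-ups?",
    },
}

# Serialize it into a feed; any consumer speaking the common format can
# parse it back without knowing which social network produced it.
feed = json.dumps({"items": [activity]})
parsed = json.loads(feed)
assert parsed["items"][0]["verb"] == "post"
```

The fine-grained sharing I’m asking for would amount to filtering which items of that feed each audience receives, which is exactly the kind of thing a common format makes practical.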