Speaking of speeds and feeds, ARM claims that when running at 2GHz, the CoreLink CCN-508 can deliver up to 1.6TB/sec of usable system bandwidth – and that’s “T” as in “tera”. When equipped with DDR4 memory, its four-channel memory system can nudge up to around 75GB/sec.
via ARM targets enterprise with 32-core, 1.6TB/sec bandwidth beastie • The Register. Goodbye Calxeda, SeaMicro and Tilera; hello ARM! I’m so happy to see a project like this see the light of day and hopefully get picked up by an ARM licensee. If this part can find its way into a shipping product – whatever device, appliance, gateway or server it might be – that would be fantastic. ARM is predicting pretty high throughput capability on this chip; I just wish they had an equally capable memory bus or memory controller. Four channels of DDR4 RAM will net you only around 75GB/sec of bandwidth when coupled with this chip. But we shouldn’t be too much of a perfectionist and demand the full theoretical throughput of 1.6TB/sec (at least not yet). This is the perfect experimental testing ground to see what hybrid of NVRAM and DRAM might be able to inch up the performance of the memory bus. I’m specifically referencing something like the IBM/SanDisk UltraDIMM and similar products that would act as an integrated memory layer resident in the DIMM slots of a well-designed custom motherboard. That to me would mark the entry into a new class of high-speed computing for general usage or even cloud-type data center usage. I know cloud providers prefer virtualized everything: virtual machines, virtual storage, virtual networking. I just hope that a low-power, high-throughput CPU is matched up with something equal to its I/O capabilities.
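As a sanity check on that 75GB/sec figure, here’s the back-of-the-envelope math. The article doesn’t say which speed grade ARM had in mind, so the DDR4-2400 transfer rate here is my own assumption:

```python
# Back-of-the-envelope DDR memory bandwidth check.
# Assumes DDR4-2400 parts (my assumption; the article doesn't specify).

def ddr_bandwidth_gb_s(transfers_per_sec: float, bus_width_bytes: int = 8,
                       channels: int = 1) -> float:
    """Peak theoretical bandwidth in GB/s for a DDR memory system:
    transfers/sec x bytes per transfer (64-bit channel = 8 bytes) x channels."""
    return transfers_per_sec * bus_width_bytes * channels / 1e9

# DDR4-2400: 2400 million transfers/sec on a 64-bit (8-byte) channel.
per_channel = ddr_bandwidth_gb_s(2400e6)               # 19.2 GB/s
four_channel = ddr_bandwidth_gb_s(2400e6, channels=4)  # 76.8 GB/s
print(per_channel, four_channel)
```

So four DDR4-2400 channels top out around 76.8GB/sec theoretical, meaning the quoted 75GB/sec of usable bandwidth is essentially the memory system’s ceiling – more than an order of magnitude below the 1.6TB/sec the interconnect itself can move.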
I started using lists, but then I stopped. Now I’m back at it again, and will start using them to keep up with the flood of tweets. These are really great tips, and I will definitely be checking out TweetDeck. More columns means better use of my screen real estate.
The central IT outfit I work for is dumping as much on-premises Exchange mailbox hosting as it can. However, we are sticking with Outlook 365 as provisioned by Microsoft (essentially an Outlook’d version of Hotmail). It has the calendar and global address list we have all come to rely on. But as this article goes into great detail about the rest of the Office suite, people aren’t creating as many documents as they once did. We’re still viewing them, yes, but we just aren’t creating them.
I wonder how much of this is due in part to re-use, or to the assignment of authorship duties to much higher, top-level people. Your average admin assistant or secretary doesn’t draft anything dictated to them anymore, and the top-level types would now generally be embarrassed to dictate something to anyone. Plus the culture of secrecy necessitates more one-to-one style communications. And long-form writing? Who does that anymore? No one writes letters; they write brief emails, or even briefer texts, tweets or Facebook updates. Everything is abbreviated to such a degree that you don’t need a thesaurus, pagination, or any of the super-specialized doo-dads and add-ons we all begged M$ and Novell to add to their premier word processors back in the day.
From an evolutionary standpoint, we could get by with the original text editors first made available on timesharing systems. I’m thinking of utilities like line editors (that’s really a step backwards, so I’m being facetious here). The point I’m making is that we’ve gone through a very advanced stage in the evolution of our writing tool of choice, and it became a monopoly. WordPerfect lost out and fell by the wayside. Primary, secondary and middle schools across the U.S. adopted M$ Word and made it a requirement. Every college freshman has been given discounts to further loyalty to the Office suite. Now we don’t write like we used to, much less read. What’s the use of writing something so many pages long that no one will ever read it? We’ve jumped the shark of long-form writing, and therefore the premier app, the killer app for the desktop computer, is slowly receding behind us as we keep speeding ahead. Eventually we’ll see it on the horizon, its sails the last visible part, then the crow’s nest, then poof! It will disappear below the horizon line. We’ll be left with our nostalgic memories of the first time we used MS Word.
The looming introduction of a 64-bit ARM-based server core (production 64-bit ARM server chips are expected from a variety of vendors later this year) also changes the economics of developing a server chip. While Moorhead believes building your own core is a multihundred million dollar process, Andrew Feldman, the corporate vice president and general manager of Advanced Micro Devices’ server chip business, told me last December that it could be in the tens of millions.
Things are changing rapidly in the ARM licensing market. The cost of a license is reasonable; you just need a contract fabricator to help process the silicon wafers for you. As the pull quote indicates, even for someone “dabbling” in the custom silicon CPU market, the threshold and risk for an outfit like Amazon are pretty darned low. And as in so many other areas of the cloud services sector, many others have done a lot of the heavy lifting already. Google and Facebook have both detailed their custom computer build processes (with Facebook going further and drafting the Open Compute Project spec). Apple (though not really a cloud provider) has shown the way toward a workable, scalable and somewhat future-proof path to spinning many revs of custom CPUs (granted, ARM-derived, but still admirable). Between Apple’s contract manufacturing with Samsung and TSMC for its custom mobile CPUs and the knowledge Amazon has in house for its own rack-based computers, there’s no telling how optimized they could make their AWS and EC2 data center services given more time.
No doubt, to stay competitive against Google, Facebook, Microsoft and IBM, Amazon will go the custom route and try to lower ALL the marginal operating and capital costs, at least as far as is technically feasible and cost-effective. There’s a new cold war on in the cloud, and it’s going to be fought with customized, custom-made, ultra-tailored computer configurations. Each player will find its competitive advantage along the way – some will go for MIPS, some for FLOPS, others for TDM – and all the marginal costs and returns will be optimized for each completed instruction in each clock cycle. It’s a brave new closed-source, closed-hardware world and we’re just the ones living in it. Or should I say, living in the cloud.
Interesting news to hear that this Google Glass engineer is jumping ship to join Oculus. I wouldn’t blame anyone for joining up with Oculus; I think it will have a much more outrageously creative future than the lighter-weight wearable stuff from Google.
Here’s my latest DIY project, a smartphone based on a Raspberry Pi. It’s called – wait for it – the PiPhone. It makes use of an Adafruit touchscreen interface and a SIM900 GSM/GPRS module to make phone calls.
Dave Hunt doesn’t just do photography; he’s a Maker through and through. The components are out there – you just need to know where to look to buy them. Once they’re purchased, you get down to the brass tacks of what a cellphone actually IS, and that’s what Dave has documented in his write-up of the PiPhone. Hopefully an effort like this will spawn enough copycats to trigger a landslide of DIY fab-and-assembly projects for people who want their own. I think it would be cool to just have an unlocked phone I could use wherever I wanted with the appropriate carrier’s SIM card.
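For flavor, a SIM900-style module in a build like this is driven over a serial line with standard GSM AT commands. Here’s a minimal sketch – the helper names and the serial device path are my own assumptions, though the ATD/ATH commands themselves are the standard GSM dial and hang-up set:

```python
# Minimal sketch of the AT commands a SIM900-style GSM module expects.
# Helper names and serial device path are assumptions, not from Dave's build.

def dial_command(number: str) -> bytes:
    """ATD<number>; initiates a call; the trailing ';' marks it as voice."""
    return f"ATD{number};\r".encode("ascii")

def hangup_command() -> bytes:
    """ATH terminates the current call."""
    return b"ATH\r"

# On the Pi itself you would write these bytes to the module's serial port,
# e.g. with pyserial (device path assumed):
#   import serial
#   port = serial.Serial("/dev/ttyAMA0", 115200, timeout=1)
#   port.write(dial_command("+353871234567"))
print(dial_command("+353871234567"))
```

The point being: strip away the casing and the touchscreen, and “making a phone call” reduces to a handful of text commands on a UART.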
I think it’s truly remarkable that Dave was able to get lithium-ion gel battery packs and touch-sensitive TFT displays. The original work of designing, engineering and manufacturing those displays alone made them a competitive advantage for folks like Apple. Being first to market with something that capable and forward-looking was a true visionary move. Now the vision is percolating down through the market, and even so-called “feature” phones or dumb-phones might have some type of touch-sensitive display.
This building by bits and pieces reminds me a bit of the research Google is doing in open-hardware, modular cellphone designs like Project Ara, written up by Wired.com. Ara is an interesting experiment in divvying the whole motherboard up into block-sized functions that can be swapped in and out, substituted by the owner according to their needs. If you’re not a camera hound, why spend the extra money on an overly capable, very high-res camera? Why not add a storage module instead, because you like to watch movies or play games? Or, in the case of open-hardware developers, why not develop a new module that others could then manufacture themselves, with a circuit board or even a 3D printer? The possibilities are numerous, and seeing what Dave Hunt did with his PiPhone as a lone individual working on his own proves there’s a lot of potential in the open-hardware area for cellphones. Maybe this device or its future versions will break some of the lock that current monopoly providers have with their closed-hardware, closed-source products.
According to NSA expert and former Guardian columnist Glenn Greenwald’s new book, No Place to Hide, the NSA has intercepted servers and routers from U.S. manufacturers in the delivery process in order to install tracking gear.
In a Guardian excerpt from the book, which comes out tomorrow, Greenwald highlighted a June 2010 report from the NSA’s Access and Target Development department explaining how the intelligence agency installs backdoor surveillance tools on internationally bound routers, servers and other networking equipment before the items are delivered worldwide. Would-be recipients of the equipment have no idea that their items have been tampered with, because the equipment comes delivered with a factory seal.
Through the surveillance tools, Greenwald wrote that the NSA is able to access “entire networks and all their users,” and he singled out an instance in which the NSA was able to exploit and gain access to a network from a…
Even Moverio’s less powerful (compared to VR displays) head tracking would make something like Google Glass overheat, McCracken said, which is why Glass input is primarily voice command or a physical touch. McCracken, who has developed for Glass, said that more advanced uses can only be accomplished with something more powerful.
Epson has swept in and gotten a head start on others in the smart glasses field. I think with their full head-tracking system, and something like a Microsoft Kinect-style projector and receiver pointed outward wherever you are looking, it might be possible to get a very realistic “information overlay”. Microsoft’s Xbox Kinect has a 3D projector/scanner built in, which could potentially become another sensor built into the Epson glasses. The Augmented Reality apps on Moverio only do edge detection to place the information overlay. If you had an additional 3D map (approximating shapes and depth as well), you might be able to correlate the two data feeds (edges and a 3D mesh) to get a really good informational overlay at close range, at normal arm’s-length working distances.
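A toy sketch of what that correlation might look like: given an edge mask from the camera feed and a registered depth map from a Kinect-style sensor, keep only the edge pixels within arm’s length, so the overlay anchors to nearby geometry rather than background clutter. Everything here (function names, synthetic data) is illustrative, not from any real Moverio pipeline:

```python
import numpy as np

# Toy version of "correlate edges with a 3D depth map": anchor an overlay
# on edge pixels that fall within arm's length. All data is synthetic;
# a real pipeline would get edges and depth from the actual sensors.

def overlay_anchor(edges: np.ndarray, depth_m: np.ndarray,
                   max_range_m: float = 1.0):
    """Return the (row, col) centroid of edge pixels closer than
    max_range_m, or None if no nearby edges were detected."""
    nearby = edges & (depth_m < max_range_m)
    ys, xs = np.nonzero(nearby)
    if len(ys) == 0:
        return None
    return ys.mean(), xs.mean()

# Synthetic frame: one "object" edge at ~0.5 m, background at 3 m.
edges = np.zeros((4, 4), dtype=bool)
edges[1, 1] = edges[1, 2] = True
depth = np.full((4, 4), 3.0)
depth[1, 1] = depth[1, 2] = 0.5
anchor = overlay_anchor(edges, depth)  # centroid at row 1.0, col 1.5
print(anchor)
```

Crude as it is, fusing even this much depth information would already beat placing labels on raw 2D edges alone.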
Granted, the Kinect is rather large compared to the Epson Moverio glasses, and its resolution is geared for longer distances. At very short distances the Xbox Kinect may not quite be what you’re looking for to improve the informational overlay. But an Epson Moverio paired with a Kinect-like 3D projector/scanner could tie into the head tracking and allow a greater degree of accurate video overlay. Check out this video for a hack to use the Kinect as a 3D scanner:
Also, as the pull quote mentions, Epson did an interesting cost-benefit analysis and decided a smartphone-level CPU and motherboard were absolutely necessary to make Moverio work. No doubt the light weight and miniature size of cellphone internals have by themselves revolutionized the mobile phone industry. Now it’s time to leverage all that work and see what “else” the super-power-efficient mobile CPUs can do, along with their mobile GPU counterparts. I think this sudden announcement by Epson is going to cause a tidal wave of product announcements similar to the wave following the iPhone introduction in 2007. Prior to that, BlackBerry and its pseudo-smartphone were the monopoly holders in the category they created (the mobile phone as email client). Now Epson is trying to show there’s a much wider application of the technology, beyond Google Glass and Oculus Rift.
More info on the Neurogrid massively parallel computer. Comparing it to other AI experiments in modeling individual neurons is apt. I compare it to Danny Hillis’s Connection Machine (from Thinking Machines Corporation), which used ~65K individual 1-bit processors to model neurons. It was a great idea and a great experiment, but it never got very far in the commercial market.
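For a sense of how simple a per-neuron model can be, here’s a leaky integrate-and-fire neuron – the class of model you could imagine mapping one-per-processor onto hardware like this. The parameter values are illustrative, not taken from Neurogrid or the Connection Machine:

```python
# A leaky integrate-and-fire neuron: the membrane potential decays each
# step, accumulates input, and emits a spike (then resets) on crossing
# threshold. Parameters are illustrative only.

def lif_step(v: float, input_current: float, *, leak: float = 0.9,
             threshold: float = 1.0, reset: float = 0.0):
    """Advance the membrane potential one time step; return (new_v, spiked)."""
    v = leak * v + input_current
    if v >= threshold:
        return reset, True
    return v, False

# Drive the neuron with a constant input until it fires.
v, steps, spiked = 0.0, 0, False
while not spiked:
    v, spiked = lif_step(v, 0.3)
    steps += 1
print(steps)  # prints 4: the neuron spikes on the fourth step
```

A machine with tens of thousands of tiny processors can run one of these per processor and pass spikes between neighbors, which is roughly the appeal both machines were chasing.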