Carpet Bomberz Inc.

Scouring the technology news sites every day

Archive for February 2012

Daring Fireball: Mountain Lion

Wrestling with Mountain Lion

And then the reveal: Mac OS X — sorry, OS X — is going on an iOS-esque one-major-update-per-year development schedule. This year’s update is scheduled for release in the summer, and is ready now for a developer preview release. Its name is Mountain Lion.

via Daring Fireball: Mountain Lion.

Mountain Lion is the next iteration of Mac OS X. And while there are some changes since the original Lion was released just this past summer, they are more like further improvements than real changes. I say this partly because of the concentration on aligning OS X apps with their iOS counterparts, down to small things like sharing the same names:

iCal versus Calendar

iChat versus Messages

Address Book versus Contacts

Reminders and Notes (new stand-alone apps)

etc.

Beneath that superficial level, more of the Carbon-based libraries and apps are being factored out and given full Cocoa library and app equivalents where possible. But one of the bigger changes, one that has been slipping since the release of Mac OS X 10.7, is the use of ‘sandboxing’ as a security measure for apps. The sandbox is implemented by developers to adhere to strict rules set forth by Apple. Apps won’t be allowed to do certain things anymore, like writing to an external filesystem (say, saving out to a USB drive) without asking for special privileges. That seems trivial at first, but for the day-to-day user of a given app it might break things altogether. I’m thinking of iMovie as an example, where you can specify that new video clips be saved into an Event folder kept on an external hard drive. Will iMovie need to be rewritten in order to work on Mountain Lion? Will sandboxing hurt other Apple iApps as well?
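For the curious, adopting the sandbox is largely a matter of declaring entitlements at build time. The sketch below shows the general shape of an entitlements property list using keys from Apple’s published App Sandbox documentation; treat the exact set any given app (iMovie included) would need as illustrative, not definitive:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Opt the app into the App Sandbox -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- Allow read/write only to files the user explicitly picks,
         e.g. an Event folder on an external drive chosen in a dialog -->
    <key>com.apple.security.files.user-selected.read-write</key>
    <true/>
</dict>
</plist>
```

Anything not covered by a declared entitlement, such as silently writing to an arbitrary external volume, is simply refused by the OS.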

Then there is the matter of ‘Gatekeeper’, another OS mechanism that limits trust based on who the developer is. Apple will issue security certificates to registered developers who distribute their software through the App Store, but independents who sell direct can register for these certificates as well, establishing a chain of trust from the developer, to Apple, to the OS X user. From that point you can choose to trust just App Store certified apps, App Store apps plus Apple-certified independent developers, or any app, certified or not. Depending on your needs, the security level can be chosen according to the type of software you use. Some people are big on free software, which is the least likely to carry a certificate but may still be more trustworthy than even the most ‘certified’ App Store software (I’m thinking of Emacs as an example). So sandboxes and gatekeepers conspire to funnel developers into the desktop OS’s rules, and thus make it much harder for malware developers to infect Apple OS X computers.

These changes should be fully ready for consumption when the OS ships in July. But as I mentioned, the sandboxing deadline has been rolled back no less than two times so far: the first roll-back occurred in November, and the most recent came here in February. The next target date for sandboxing is June, which should get all the Apple developers on board prior to the release of Mountain Lion the following month. This reminds me a bit of the flexibility Apple had to show in the face of widespread criticism and active resistance to the Final Cut Pro X release last June. Apple had to scramble for a time to address concerns about bugs and stability under Mac OS X 10.7 (the previous release, Snow Leopard, seemed to work better for some who wrote on Apple support discussion forums). Apple quickly came up with an alternate route for dissatisfied customers, giving copies of Final Cut Studio (with the older Final Cut Pro 7 app included) to people who called its support lines asking to substitute the older version for a recent purchase of FCP X. Flexibility like this seems to be more frequent going forward, and it is great to see Apple’s willingness to adapt to an adverse situation of its own creation. We’ll see how this migration goes come July.

Mac OS X logo

Image via Wikipedia

Written by Eric Likness

February 27, 2012 at 3:00 pm

Maxeler FPGA Project

carpetbomberz:

Great posting by Lucas Szyrmer @ softwaretrading.co.uk. It’s a nice summary of the story from last month about JP Morgan Chase’s use of FPGAs to speed up some of their risk analysis. And it goes into greater detail on the mechanics of translating what one does in software across the divide into something that can be expressed in VHDL/Verilog and written onto the FPGA itself. It is, in a word, a ‘non-trivial’ task, and can take quite a long time to get working.
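To make the ‘non-trivial’ part concrete, here’s a toy sketch (my illustration, not from the article) of the gap that has to be crossed: a calculation written as a sequential loop has to be re-expressed as a fixed, stateless pipeline stage before it can be described in VHDL/Verilog, because on the FPGA every stage runs in parallel on every clock cycle:

```python
def moving_average_software(samples, window=4):
    """The 'software' form: a loop the CPU executes step by step."""
    out = []
    for i in range(len(samples) - window + 1):
        out.append(sum(samples[i:i + window]) / window)
    return out

# The 'hardware' form an FPGA engineer targets: a fixed arrangement of
# registers feeding one adder tree, producing a result every clock tick.
# Modeled here as a single unrolled, stateless stage with no loop control:
def moving_average_pipeline_stage(s0, s1, s2, s3):
    return (s0 + s1 + s2 + s3) / 4
```

Flattening loops, branches and dynamic memory into that second form, for anything more interesting than a four-tap average, is where the long development time goes.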

Originally posted on Software Trading:


Lately, I’ve been exploring a little known corner of high performance computing (HPC) known as FPGAs. Turns out, it’s time to get electrical on yowass (Pulp Fiction reference intentional). You can program these chips in the field, thus speeding up processing speeds dramatically, relative to generic CPUs. It’s possible to customize functionality to very specific needs.

Why this works

The main benefit of FPGAs comes from reorganizing calculations. FPGAs work on a massively parallel basis. You get rid of bottlenecks in typical CPU design. While these bottlenecks are good for general purpose applications, like watching Pulp Fiction, they significantly slow down the amount of calculations that you do per second. In addition to being massively multi-parallel, FPGAs also are faster, according to FPGAdeveloper, because:

  • you aren’t competing with your operating system or applications like anti-virus for CPU cycle time
  • you run at a lower level than the OS, so you don’t have…

View original 427 more words

Written by Eric Likness

February 24, 2012 at 10:42 am

Buzzword: Augmented Reality


Augmented Reality in the Classroom Craig Knapp (Photo credit: caswell_tom)

What it means. “Augmented reality” sounds very “Star Trek,” but what is it, exactly? In short, AR is defined as “an artificial environment created through the combination of real-world and computer-generated data.”

via Buzzword: Augmented Reality.

Nice little survey from the people at Consumer Reports, with specific examples from the Consumer Electronics Show this past January. Whether it’s software or hardware, there are a lot of things that can be labeled and marketed as ‘Augmented Reality’. On this blog I’ve concentrated more on the apps running on smartphones with integrated cameras, accelerometers and GPS. Those pieces are important building blocks for an integrated Augmented Reality-like experience. But as this article from CR shows, your experience may vary quite a bit.

In my commentary on stories posted by others on the Internet, I have covered mostly the examples of AR apps on mobile phones. Specifically I’ve concentrated on the toolkit provided by Layar for adding metadata to existing map points of interest. The idea of ‘marking up’ the existing landscape holds a great deal of promise for me, as the workload is shifted off the creator of the 3D world and onto the people traveling within it. The same could hold true for Massively Multiplayer Games, and some worlds do allow members to do that kind of building and marking up of the environment itself. But Layar provides a set of data you can call up merely by pointing the cell phone camera in a compass direction; the associated data then appears on screen.
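As a rough illustration (hypothetical function names, not Layar’s actual API), the core geometry behind that ‘point the camera and see data’ trick is just computing the compass bearing from the phone to each point of interest and keeping the ones that fall inside the camera’s field of view:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from the phone (lat1, lon1) to a point
    of interest (lat2, lon2), in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def in_view(poi_bearing, heading, fov=60):
    """True if a POI's bearing falls inside the camera's field of view,
    given the compass heading the phone is pointed at."""
    diff = (poi_bearing - heading + 180) % 360 - 180
    return abs(diff) <= fov / 2
```

A POI due east of you (bearing 90°) shows up when the phone points roughly east, and drops off screen as you turn away; the rest is overlaying the metadata on the camera feed.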

It’s a sort of hunt for information, and it works well when the metadata mark-up is well done. But as with many crowd-sourced efforts, some amount of lower-quality work, or worse, vandalism, occurs. That shouldn’t keep anyone from trying to enhance the hidden data that can be discovered through a Layar-enhanced real world. I’m hoping mobile phone based AR applications grow and find a niche, if not a killer app. It’s still early days and mobile phone AR is not being adopted very quickly, but I think there are still a lot of untapped resources there. I don’t think we have discovered all the possible applications of mobile phone AR.

Written by Eric Likness

February 23, 2012 at 3:00 pm

SeaMicro adds Xeons to Atom smasher microservers • The Register

There’s some interesting future possibilities for the SeaMicro machines. First, SeaMicro could extend that torus interconnect to span multiple chassis. Second, it could put a “Patsburg” C600 chipset on an auxiliary card and actually make fatter SMP nodes out of single processor cards and then link them into the torus interconnect. Finally, it could of course add other processors to the boards, such as Tilera’s 64-bit Tile Gx3000s or 64-bit ARM processors when they become available.

via SeaMicro adds Xeons to Atom smasher microservers • The Register.

SeaMicro SM10000

SeaMicro SM10000 (Photo credit: blogeee.net)

Timothy Prickett Morgan, writing for The Register, has a great article on SeaMicro’s recent announcement of a Xeon-based 10U server chassis. Seemingly going against its first two generations of low-power massively parallel server boxes, this one uses a brawny Intel Xeon server chip (albeit one that is fairly low power, with a low Thermal Design Point).

Sad as it may seem to me, the market for the low-power, massively parallel CPU box must not be very lucrative. But it is a true testament to the flexibility of the original 10U server rack design that SeaMicro could ‘swap’ in the higher-power Intel Xeon CPUs. I doubt there are many competitors in this section of the market that could turn on a dime the way SeaMicro appears to have done with this Xeon-based server. Most designs are so heavily optimized for a particular CPU, power supply and form factor layout that changing one component might force a bigger change order in the design department, and the product would take longer to develop and ship as a result.

So even though I hope the 64-bit Intel Atom will remain SeaMicro’s flagship product, I’m also glad they can stay in the fight longer by selling into the ‘established’ older data center accounts worldwide. Adapt or die is the clichéd adage of some technology writers, and I would mark this with a plus (+) in the adapt column.

Written by Eric Likness

February 20, 2012 at 3:00 pm

Tilera preps many-cored Gx chips for March launch • The Register

“We’re here today shipping a 64-bit processor core and we are what looks like two years ahead of ARM,” says Bishara. “The architecture of the Tile-Gx is aligned to the workload and gives one server node per chip rather than a sea of wimpy nodes not acting in a cache coherent manner. We have been in this market for two years now and we know what hurts in data centers and what works. And 32-bit ARM just is not going to cut it. Applied Micro is doing their own core, and that adds a lot of risks.”

via Tilera preps many-cored Gx chips for March launch • The Register.

Tile of a TILE64 processor from Tilera
Image via Wikipedia

Tilera is preparing to ship a 36-core Tile-Gx CPU in March. It’s going to be packaged with a re-compiled CentOS Linux distribution on a development board (TILEncore). It will also include a number of re-compiled Unix utilities and packages, so OEM shops can begin product development as soon as possible.

I’m glad to see Tilera is still duking it out, battling for design wins with manufacturers selling into the data center, as it were. Larger memory addressing will help make the Tilera chips more competitive with commodity Intel hardware in data center shops that build their own machines. Maybe we’ll see full 64-bit memory extensions at some point as a follow-on to the current 40-bit address space extensions. The extensions are necessary to get past the 32-bit limit of 4 GBytes, and an extra 8 bits goes a long, long way toward competing against a fully 64-bit address space.
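The arithmetic is easy to check: every added address bit doubles the reachable memory, so going from 32 to 40 bits multiplies the addressable space by 2^8 = 256, from 4 GBytes to a full TByte:

```python
def addressable_bytes(bits):
    """Bytes reachable with a flat address of the given width."""
    return 2 ** bits

GB = 2 ** 30
print(addressable_bytes(32) // GB)                     # 4 (GBytes)
print(addressable_bytes(40) // GB)                     # 1024 (GBytes, i.e. 1 TByte)
print(addressable_bytes(40) // addressable_bytes(32))  # 256
```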

Also, considering the work being done at ARM to optimize its chip designs for narrower design rules, Tilera should follow suit and attempt to shrink its chip architecture too. This would allow clock speeds to ease upward while keeping the Thermal Design Point consistent with previous-generation Tile architecture chips, making Tile-Gx more competitive in the coming years. ARM announced a month ago that it will develop a 22nm-sized CPU core for future licensing by ARM customers. As it is now, Tilera uses an older fabrication design rule of around 40nm (which is still quite good given the expense required to shrink further), and it has plans to eventually migrate to a narrower design rule. Ideally, though, it would stay no more than one generation behind the top-end process lines of Intel (which is targeting 14nm production lines in the near future).

Written by Eric Likness

February 16, 2012 at 10:28 pm

ARM Pitches Tri-gate Transistors for 20nm and Beyond


Image via Wikipedia

. . . 20 nm may represent an inflection point in which it will be necessary to transition from a metal-oxide semiconductor field-effect transistor (MOSFET) to Fin-Shaped Field Effect Transistors (FinFET) or 3D transistors, which Intel refers to as tri-gate designs that are set to debut with the company’s 22 nm Ivy Bridge product generation.

via ARM Pitches Tri-gate Transistors for 20nm and Beyond.

Three-dimensional transistors are in the news again. Previously, Intel announced it was adopting the new design at its next, smaller design rule with the Ivy Bridge generation of CPUs. Now ARM is doing the work to integrate similar technology into its CPU cores as well. No doubt the need to lower the Thermal Design Point while maintaining clock speed is driving this move to refine and narrow the design rules for the ARM architecture. Knowing Intel is still the top research and development outfit in silicon semiconductors would give pause to anyone competing with it directly, but ARM is king of the low-power semiconductor, and keeping pace with Intel’s design rules is an absolute necessity.

I don’t know how quickly ARM will be able to get a licensee to jump on board and adopt the new design. Hopefully a large operation like Samsung can take this on and get the chip into its design, development and production lines at a chip fabrication facility as soon as possible. Likewise, contract manufacturers like Taiwan Semiconductor Manufacturing Company (TSMC) should try to get this technology into their facilities quickly too. That way the cell phone and tablet markets can benefit as well, since they use a lot of ARM-licensed CPU cores and similar intellectual property in their shipping products. My interest is not so much in the competition between Intel and ARM for low-power computing as in the overall performance of any single ARM design once it has been in production for a while and optimized, the way Apple designs its custom CPUs around ARM-licensed cores. The single most outstanding achievement of Apple’s iPad design and production is its 10-hour battery charge duration, an achievement that to date has not been beaten, even by other manufacturers and products that also license ARM intellectual property. So if the ARM design is good and can be validated and prototyped with useful yields quickly, Apple will no doubt be the first to benefit, and by way of Apple, so will the consumer (hopefully).

Schematic view (L) and SEM view (R) of Intel t...

Image via Wikipedia

Written by Eric Likness

February 13, 2012 at 3:00 pm

AnandTech – Some Thoughts on SandForce’s 3rd Generation SSD Controller

Finally there’s talk about looking at other interfaces in addition to SATA. It’s possible that we may see a PCIe version of SandForce’s 3rd generation controller.

via AnandTech – Some Thoughts on SandForce’s 3rd Generation SSD Controller.

SandForce

Image via Wikipedia

Some interesting notes about future directions SandForce might take, especially now that it has been bought out by LSI. They are hard at work optimizing other parts of their current memory controller technology (speeding up small random reads and writes). There might be another 2x performance gain to be had on the SSD front, but more important is the PCI Express market. Fusion-io has been the team to beat when it comes to integrating components and moving data across the PCIe interface. Now SandForce is looking to come out with a bona fide PCIe SSD controller, something that up until now has been a roll-your-own affair: the engineering and design expertise of companies like Fusion-io was absolutely necessary to get a PCIe SSD card to market. That playing field too will be leveled somewhat, and competitors may now enter the market with equally good performance numbers.

But even more interesting than this wrinkle in the parts design for PCIe SSDs is Fusion-io’s announcement earlier this month of a new software interface for getting around the limits of file I/O on modern-day OSes. Auto Commit Memory: “ACM is a software layer which allows developers to send and receive data stored on Fusion-io’s ioDrive cards directly to and from the CPU, rather than relying upon the operating system” (link to The Verge article listed in my Fusion-io article). SandForce is up against a moving target if it hopes to compete more directly with Fusion-io, which is now investing in hardware AND software engineering at the same time. 1 billion IOPS is nothing to sneeze at given the pace of change since SATA and PCIe SSDs hit the market in quantity.
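A back-of-the-envelope sketch of why cutting the OS out of the I/O path matters (the latency figure below is hypothetical, not a Fusion-io spec): by Little’s law, sustained IOPS is bounded by the number of operations in flight divided by the per-operation latency, so every microsecond the OS adds per operation directly caps throughput:

```python
def aggregate_iops(ops_in_flight, latency_us):
    """Little's law applied to storage:
    throughput = concurrency / per-op latency."""
    return ops_in_flight / latency_us * 1_000_000

# At a hypothetical 10 microseconds per completed operation, hitting
# 1 billion IOPS takes about 10,000 operations in flight at all times:
print(aggregate_iops(10_000, 10))  # 1000000000.0
```

Halve the latency by skipping the OS I/O stack and you either double the IOPS or need half the outstanding operations to hit the same number.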

Written by Eric Likness

February 9, 2012 at 3:00 pm

Posted in flash memory, SSD, technology


Tilera | Wired Enterprise | Wired.com

Tilera’s roadmap calls for its next generation of processors, code-named Stratton, to be released in 2013. The product line will expand the number of processors in both directions, down to as few as four and up to as many as 200 cores. The company is going from a 40-nm to a 28-nm process, meaning they’re able to cram more circuits in a given area. The chip will have improvements to interfaces, memory, I/O and instruction set, and will have more cache memory.

via Tilera | Wired Enterprise | Wired.com.

Image representing Wired Magazine as depicted ...

Image via CrunchBase

I’m enjoying this survey of companies making massively parallel, low-power computing products. Wired.com|Enterprise started last week with a look at SeaMicro and how its two principal founders got their start observing Google’s initial stabs at a warehouse-sized computer. Since that time things have fractured somewhat instead of coalescing, and now three big efforts are competing to deliver the low-power, massively parallel computer in a box. Tilera is the longest-term project of them, a startup out of MIT going back further than Calxeda or SeaMicro.

However, application of this technology has been completely dependent on the software. Whether it’s OSes or applications, everything has to be constructed carefully to take full advantage of the Tile processor architecture. To its credit, Tilera has attempted to insulate application developers from some of the vagaries of the underlying chip by creating an OS that does the heavy lifting of queuing and scheduling. But still, there has to be a learning curve, even if it isn’t quite as daunting as, say, what faces the folks who develop applications for the supercomputers at the National Labs here in the U.S. Suffice it to say, it’s a non-trivial choice to adopt a Tilera CPU for a product or project you are working on. And the people who need a Tilera Gx CPU for their app already know all they need to know about the chip in advance. It’s that kind of choice they are making.

I’m also relieved to know they are continuing development to shrink down the design rules. Intel, the biggest leader in silicon semiconductor manufacturing, continues to shrink its design, development and manufacturing rules. We’re fast approaching 20nm-18nm production lines in both Oregon and Arizona, both Intel fabrication plants, and they’re not about to stop and take a breath. Companies like Tilera, Calxeda and SeaMicro need continuous product development to keep from being blindsided by Intel’s juggernaut. So Tilera is very wise to shrink its design rule from 40nm down to 28nm as fast as it can, and then get good yields on the production lines once they start sampling chips at this size.
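A quick sanity check on what the 40nm-to-28nm move buys (idealized scaling; real-world yields and layouts vary): transistor density goes roughly with the square of the linear shrink, which is where the “cram more circuits in a given area” claim comes from:

```python
def density_gain(old_nm, new_nm):
    """Idealized density gain from a linear design-rule shrink:
    area per transistor scales with the square of the feature size."""
    return (old_nm / new_nm) ** 2

print(round(density_gain(40, 28), 2))  # 2.04, i.e. roughly twice the circuits per area
```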

*UPDATE: Just saw this run through my blogroll last week: Tilera has announced a new chip coming in March. Glad to see Tilera is still duking it out, battling for design wins with manufacturers selling into the data center. Larger memory addressing will help make the Tilera chips more competitive with commodity Intel hardware shops, and maybe we’ll see full 64-bit memory extensions at some point as a follow-on to the 40-bit address space extensions touted in this article from The Register.

English: Block diagram of the Tilera TILEPro64...

Image via Wikipedia

Written by Eric Likness

February 6, 2012 at 3:00 pm

How Google Spawned The 384-Chip Server | Wired Enterprise | Wired.com

SeaMicro’s latest server includes 384 Intel Atom chips, and each chip has two “cores,” which are essentially processors unto themselves. This means the machine can handle 768 tasks at once, and if you’re running software suited to this massively parallel setup, you can indeed save power and space.

via How Google Spawned The 384-Chip Server | Wired Enterprise | Wired.com.

Image representing Wired Magazine as depicted ...

Image via CrunchBase

Great article from Wired.com on SeaMicro and the two principal minds behind its formation. Both of these fellows were quite impressed with Google’s data center infrastructure at the points in time when they each got to visit a Google data center. But rather than just sit back and gawk, they decided to take action and borrow, nay steal, some of the interesting ideas the Google engineers adopted early on. However, the typical naysayers pull a page out of the Google white paper to argue against SeaMicro and the large number of smaller, lower-powered cores it uses in the SM10000 product.
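The quoted 768-task figure is just 384 chips times 2 cores, and the caveat about “software suited to this massively parallel setup” is the embarrassingly parallel pattern in a nutshell: lots of independent tasks with no shared state. A minimal sketch (my illustration, not SeaMicro’s actual stack):

```python
from concurrent.futures import ThreadPoolExecutor

CHIPS, CORES_PER_CHIP = 384, 2  # the SM10000 tally from the article

def handle(request_id):
    """Stand-in for one independent task, e.g. serving a web request."""
    return request_id * 2

# One batch of independent tasks, one per core in the box; because no
# task depends on another, they spread across however many cores exist:
with ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(handle, range(CHIPS * CORES_PER_CHIP)))

print(len(results))  # 768
```

Workloads shaped like this scale with core count regardless of how wimpy each core is; workloads with heavy cross-task coordination are the ones that favor Google’s brawny-core argument.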

SeaMicro SM10000

Image by blogeee.net via Flickr

But nothing speaks of success more than product sales, and SeaMicro is selling its product into data centers. While they may not achieve the level of commerce reached by Apple Inc., it’s a good start. What still needs to be done is more benchmarks and real-world comparisons that reproduce or negate the results of Google’s white paper promoting its choice of off-the-shelf commodity Intel chips. Google is adamant that higher-clock-speed ‘server’ chips attached to single motherboards connected to one another in large quantity is the best way to go. However, the two guys who started SeaMicro insist that while Google’s choice makes perfect sense for Google, NO ONE else is quite like Google in their compute infrastructure requirements; nobody has such a large enterprise or the scale Google requires (except maybe Facebook, and possibly Amazon). So maybe there is a market at the middle and lower end of the data center market? Every data center’s needs are different, especially when it comes to available space, available power and cooling restrictions for a given application. And SeaMicro might be the secret weapon for shops constrained by all three: power/cooling/space.

*UPDATE: Just saw this flash through my Google Reader blogroll this past Wednesday: SeaMicro is now selling an Intel Xeon based server. I guess the market for large numbers of lower-power chips just isn’t strong enough to sustain a business. Sadly this makes all the wonder and speculation surrounding the SM10000 seem kinda moot now. But hopefully there’s enough intellectual property and patents in the original design to keep the idea going for a while. SeaMicro does have quite a head start over competitors like Tilera, Calxeda and Applied Micro. And if they can help finance further development of Atom-based servers by selling a few Xeons along the way, all the better.

Written by Eric Likness

February 2, 2012 at 9:51 pm