Category: technology

General technology, not anything in particular

  • Augmented Reality Start-Up Ready to Disrupt Business – Tech Europe – WSJ

    Image: Layar (via CrunchBase)

    “We have added to the platform computer vision, so we can recognize what you are looking at, and then add things on top of them.”

    via Augmented Reality Start-Up Ready to Disrupt Business – Tech Europe – WSJ.

    I’ve been a fan of Augmented Reality for a while, following Layar’s announcements over the past two years. I’m hoping something more comes out of this work than yet another channel for selling, advertising and marketing. But innovation always follows where the money is, and artistic, creative pursuits are NOT it. Witness the evolution of Layar from a toolkit into a whole package of brand-loyalty add-ons, ready to be sent out wholesale to any smartphone owner unwitting enough to download a Layar-created app.

    The emphasis in this WSJ article, however, is not on how Layar is trying to market itself. Instead, the worry is that Layar is creating a ‘virtual’ space where metadata is tagged onto a physical location. A Layar Augmented Reality squatter can set up a very mundane virtual T-shirt shop (much as in Second Life) in the same physical location as a high-class couturier on a high street in London or Paris. What right does anyone have to squat in the Layar domain? Just like the Domain Name System squatters of today, they have every right, simply by being there first. Which brings to mind how this will evolve into a game of technical one-upmanship, in which each Augmented Reality domain is subject to the market forces of popularity. Witness the chaotic evolution of social networking, where AOL, Friendster, MySpace, Facebook and now Google+ have each usurped market mindshare from one another.

    While the Layar squatter has his T-shirt shop today, the question is: who knows about it other than other Layar users? And who can say whether anyone else ever will? That leads me to conclude this is a much bigger deal to the WSJ than it is to anyone who might be sniped at or squatted upon within an Augmented Reality cul-de-sac. Though those stores and corporations may not be able to budge the Layar squatters, they can at least lay claim to the rest of their empire and prevent any future miscreants from owning their virtual space. But as I say, in one-upmanship there is no real end game, only the NEXT game.

  • $1,279-per-hour, 30,000-core cluster built on Amazon EC2 cloud

    Image: Amazon Web Services logo (via Wikipedia)

    Amazon EC2 and other cloud services are expanding the market for high-performance computing. Without access to a national lab or a supercomputer in your own data center, cloud computing lets businesses spin up temporary clusters at will and stop paying for them as soon as the computing needs are met.

    via $1,279-per-hour, 30,000-core cluster built on Amazon EC2 cloud.

    If you own your data center, you might be a little nervous right now, since even a data center can now be outsourced on an as-needed basis. Especially if you are doing scientific computing, you should weigh the sunk capital costs of acquiring a cluster, and the fixed costs of maintaining it once it is up and running, against simply renting. This story provides one great example of what I think cloud computing could one day become. Rent-a-Center-style data centers and compute clusters seem like an incredible value, especially for a university, but even more so for a business that may not need to keep a real live data center under its own control. Examples abound: even online services like Dropbox lease their compute cycles from the likes of Amazon Web Services and the Elastic Compute Cloud (EC2). And if migrating an application into a data center, along with the data set to be analyzed, can be sped up sufficiently and the cost kept down, who knows what might be possible.

    The opportunity costs of not having access to a sufficiently large number of nodes in a compute cluster are many. With modeling applications in particular, you get to run a simulation at finer time slices and higher resolution, possibly gaining a better understanding of how closely your algorithms match the real world. This isn’t just for business but for science as well, and being saddled with a typical data center installation, with its infrastructure, depreciation and staffing costs, looks far less attractive when the big data center providers are willing to sell part of their compute cycles at a reasonable rate. The best part is that you can shop around, too. In the bad old days of batch computing and the glassed-in data center, before desktops and mini-computers, people were dying to get access to the machine to run their jobs. Now the surplus of computing cycles among the big players is so great that they help subsidize the costs of build-outs and redundancy by letting people bid on the spare compute cycles they have just lying around generating heat. It’s a whole new era of compute-cycle auctions, and I for one am dying to see more stories like this in the future.
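
    To make that bidding-on-spare-cycles idea concrete, here is a minimal sketch of asking EC2 for discounted spot capacity using the boto3 Python SDK; the AMI ID, instance type, node count and price cap are placeholder assumptions of mine, not a recipe from the article.

    ```python
    # Minimal sketch: bid on spare EC2 capacity ("spot instances") with boto3.
    # The AMI ID, instance type, bid price and node count are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.request_spot_instances(
        SpotPrice="0.50",        # maximum price per instance-hour we will pay
        InstanceCount=8,         # eight nodes for a temporary cluster
        Type="one-time",         # give the capacity back when the job is done
        LaunchSpecification={
            "ImageId": "ami-12345678",      # hypothetical cluster AMI
            "InstanceType": "c5.24xlarge",  # many cores per node
            "KeyName": "my-hpc-keypair",    # hypothetical SSH key pair
        },
    )

    for request in response["SpotInstanceRequests"]:
        print(request["SpotInstanceRequestId"], request["State"])
    ```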

  • AppleInsider | Rumor: Apple investigating USB 3.0 for Macs ahead of Intel

    USB Connector

    A new report claims Apple has continued to investigate implementing USB 3.0 in its Mac computers independent of Intel’s plans to eventually support USB 3.0 at the chipset level.

    via AppleInsider | Rumor: Apple investigating USB 3.0 for Macs ahead of Intel.

    This is interesting to read; I have not paid much attention to USB 3.0 because of how slowly it has been adopted by the PC manufacturing world. But in the past Apple has been quicker to adopt some mainstream technologies than its PC manufacturing counterparts. The value-add increases as more and more devices also adopt the new interface, namely anything that runs iOS. The surest sign there’s a move going on will be whether or not there is USB 3.0 support in iOS 5.x and whether or not there is hardware support in the next revision of the iPhone.

    And now it appears Apple is releasing two iPhones, a minor iPhone 4 update and a new iPhone 5, at roughly the same time. Given reports that the new iPhone 5 has a lot of RAM installed, I’m curious how much of the storage is NAND-based flash memory. Will we see something on the order of 64GB again, or more, this time around when the new phones are released? The upshot is that in cases where you tether your device to sync it with the Mac, a USB 3.0-compliant interface would make the file transfer speed worth the chore of pulling out the cables. However, the all-encompassing sharing of data between Apple devices all the time may make the adoption of USB 3.0 seem less necessary if every device can find its partner and sync over the airwaves instead of over iPod connectors.

    Still, it would be nice to have a dedicated high-speed cable for the inevitable external hard drive connection, a necessity in these days of smaller laptops like the MacBook Air or the Mac mini. Less space internally means these machines will need a supplement to the internal hard drive, one that even Apple’s iCloud cannot fulfill, especially considering the size of the video files coming off each new generation of HD video cameras. I don’t care what Apple says, but 250GB of AVCHD files is going to sync very,…very,… slowly. All the more reason to adopt USB 3.0 as soon as possible.
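
    As a rough back-of-the-envelope check (my own assumed numbers: roughly 30 MB/s of sustained real-world throughput for USB 2.0 versus roughly 300 MB/s for USB 3.0), the difference for a 250GB sync works out something like this:

    ```python
    # Rough transfer-time estimate for syncing 250 GB of AVCHD footage.
    # Throughput figures are assumed real-world averages, not spec maximums.
    library_gb = 250
    throughput_mb_s = {"USB 2.0": 30, "USB 3.0": 300}  # assumed sustained MB/s

    for bus, mb_s in throughput_mb_s.items():
        hours = (library_gb * 1024) / mb_s / 3600
        print(f"{bus}: about {hours:.1f} hours")

    # USB 2.0: about 2.4 hours; USB 3.0: about 0.2 hours (roughly 14 minutes).
    ```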

  • Single-chip DIMM offers low-power replacement for sticks of RAM | ExtremeTech

    Image: A 256Kx4 Dynamic RAM chip on an early PC memor... (via Wikipedia)

    Invensas, a subsidiary of chip microelectronics company Tessera, has discovered a way of stacking multiple DRAM chips on top of each other. This process, called multi-die face-down packaging, or xFD for short, massively increases memory density, reduces power consumption, and should pave the way for faster and more efficient memory chips.

    via Single-chip DIMM offers low-power replacement for sticks of RAM | ExtremeTech.

    Who says there’s no such thing as progress? Apart from the DDR memory bus data rates moving from DDR3 to DDR4 soon, what have you read that was significantly different, much less better, than the first-generation DDR DIMMs from years ago? Chip stacking is de rigueur for manufacturers of flash memory, especially in mobile devices with limited real estate on the motherboard. That packaging has flowed back into the computer market very handily and has led to smaller form factors across flash memory devices, whether thumb drives, aftermarket 2.5″ laptop solid state disks, or flash embedded on an mSATA module; everyone is benefiting equally.

    Whither the stacking of RAM modules? I know there have been some efforts to do this, again for the mobile device market, but any large-scale flow back into the general computing market has been hard to see. I’m hoping this Invensas announcement eventually becomes a real shipping product and not an attempt to stake a claim on intellectual property that will take the form of lawsuits against current memory designers and manufacturers. Stacking is the way to go; even if it can never be used in, say, a CPU, I would think the clock speeds and power-saving requirements on RAM modules might be enough to allow some stacking to occur. And if memory access speeds improve at the same time, so much the better.

  • Angelbird Now Shipping SSD RAID Card for 800 MB/s

    If you want more speed, then you will have to look to PCI-Express for the answer. Austrian-based Angelbird has opened its online storefront with its Wings add-in card and SSDs.

    via Angelbird Now Shipping SSD RAID Card for 800 MB/s.

    More than a year after first being announced, Angelbird has designed and manufactured a new PCIe flash card, the design of which is fully expandable over time depending on your budget. Fusion-io has a few ‘expandable’ cards in its inventory too, but the Fusion-io price class is much higher than the consumer-level Angelbird product. So if you cannot afford to build a 1TB flash-based PCIe card, do not worry: buy what you can and outfit it later as your budget allows. Now that’s something any gamer fanboy or desktop enthusiast can get behind.

    Angelbird does warn in advance that the power demands of typical 2.5″ SATA flash modules are higher than what the PCIe bus can typically provide, and recommends using Angelbird’s own memory modules to populate the base-level PCIe card. Until I read those recommendations I had forgotten some of the limitations and workarounds graphics card manufacturers typically use. These have become so routine that even typical desktop manufacturers now provide two or three extra power taps in their machines, all to accommodate the extra power required by today’s graphics chips. It makes me wonder whether Angelbird could do a revision of the base-level PCIe card with a little 4-pin power input or something similar. It doesn’t need another 150 watts; it’s going to be closer to 20 watts for this type of device, I think. I wish Angelbird well, and I hope sales start strong so they can sell out their first production run.
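
    As a rough sanity check on that 20-watt guess, the arithmetic looks something like this (the per-drive draw, drive count and slot allowances are my own assumed figures, not Angelbird’s specifications):

    ```python
    # Back-of-the-envelope power budget for a PCIe card carrying several SATA SSDs.
    # Every figure here is an assumption for illustration, not a vendor spec.
    ssd_active_w = 3.5                                 # assumed active draw per 2.5-inch SATA SSD
    ssd_count = 6                                      # hypothetical fully populated card
    slot_budget_w = {"x1 slot": 10, "x4/x8 slot": 25}  # typical PCIe slot power allowances

    total_w = ssd_active_w * ssd_count
    print(f"Estimated card draw: {total_w:.0f} W")
    for slot, budget in slot_budget_w.items():
        print(f"  vs. {slot}: {budget} W available")

    # Roughly 21 W: nowhere near a graphics card's 150 W, but close enough to
    # the slot budget that a small auxiliary power connector buys real headroom.
    ```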

  • OCZ Launches PCIe-Based HDD/SDD Hybrid Drive

    By bypassing the SATA bottleneck, OCZ’s RevoDrive Hybrid promises transfer speeds up to 910 MB/s and up to 120,000 IOPS 4K random write. The SSD aspect reportedly uses a SandForce SF-2281 controller and the hard drive platters spin at 5,400rpm. On a whole, the hybrid drive makes good use of the company’s proprietary Virtualized Controller Architecture.

    via OCZ Launches PCIe-Based HDD/SDD Hybrid Drive.

    Image: RevoDrive Hybrid PCIe (from Tom's Hardware)

    Good news on the consumer electronics front: OCZ continues to innovate in the desktop aftermarket, introducing a new PCIe flash product that marries a nice 1TB hard drive to a 100GB flash-based SSD, the best of both worlds in one neat little package. Previously you might buy these two devices separately, one average-sized flash drive and one spacious hard drive, configure the flash drive as your system boot drive, and then, using some kind of alias/shortcut trick, point your user folder at the hard drive to hold videos, pictures, etc. This has caused some very conservative types to sit out and wait for even bigger flash drives, hoping to store everything on one logical volume. But what they really want is a hybrid of big storage and fast speed, and that, according to the press release, is what the OCZ Hybrid Drive delivers.

    With a SandForce drive controller and two drives, the whole architecture is hidden away, along with the caching algorithm that moves files between the flash and hard drive storage areas. End users see just one big hard drive (albeit installed in one of their PCI card slots), but they experience faster boot-up times and faster application loading. I’m seriously considering adding one of these devices to a home computer we have and migrating the boot drive and user home directories over to it, using the current hard drives as the Windows backup device. I think that would be a pretty robust setup and could accommodate a lot of future growth and expansion.
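
    OCZ doesn’t publish the details of that caching algorithm, but the general idea of keeping recently used blocks on the flash tier and evicting the least recently used ones back to the platter can be sketched with a simple LRU cache. This is a toy illustration of the concept, not OCZ’s Virtualized Controller Architecture:

    ```python
    # Toy sketch of hybrid-drive caching: keep hot blocks on a small, fast "flash"
    # tier and evict the least recently used blocks back to the big, slow "disk".
    # An illustration of the general LRU idea only, not OCZ's actual algorithm.
    from collections import OrderedDict

    class HybridCache:
        def __init__(self, flash_blocks):
            self.flash = OrderedDict()   # block_id -> data, ordered by recency
            self.capacity = flash_blocks

        def read(self, block_id, read_from_disk):
            if block_id in self.flash:               # cache hit: fast path
                self.flash.move_to_end(block_id)
                return self.flash[block_id]
            data = read_from_disk(block_id)          # cache miss: slow path
            self.flash[block_id] = data
            if len(self.flash) > self.capacity:      # evict the coldest block
                self.flash.popitem(last=False)
            return data

    # Example: a 4-block flash tier in front of a pretend hard drive.
    cache = HybridCache(flash_blocks=4)
    for block in [1, 2, 3, 1, 4, 5, 1]:
        cache.read(block, lambda b: f"data-{b}")
    print(list(cache.flash))   # the most recently used blocks stay on "flash"
    ```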

  • Augmented Reality Maps and Directions Coming to iPhone

    Image: iOS logo (via Wikipedia)

    Of course, there are already turn-by-turn GPS apps for iOS, Android and other operating systems, but having an augmented reality-based navigational system that’s native to the phone is pretty unique.

    via Augmented Reality Maps and Directions Coming to iPhone.

    In the deadly navigation battle between Google Android and Apple iOS, a new front is being formed: Augmented Reality. Apple has also shown that it is driven to create a duplicate of the Google Maps app for iOS in an attempt to maintain its independence from the Googleplex by all means possible. Though Apple may be re-inventing the wheel (of network-available maps), you will be pleasantly surprised at what other bells and whistles get thrown in as well.

    Enter the value-added feature of Augmented Reality. Apple is now filing patents on AR as it relates to handheld device navigation, and maybe this time ’round the Augmented Reality features will be a little more useful than marked-up geo-locations. To date Google Maps hasn’t quite approached this level of functionality, but Google does hold the most valuable dataset (Street View) that would allow it to add an Augmented Reality component of its own. The question is who will get to market first with the most functional and useful version of Augmented Reality maps.
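
    At its core, an AR navigation overlay is simple geometry: given the phone’s GPS fix and compass heading, compute the bearing to a point of interest and decide whether it falls inside the camera’s field of view. Here is a minimal sketch of that math; the coordinates, heading and field of view are made-up example values, not anything from Apple’s patents or Google’s data.

    ```python
    # Minimal sketch of AR overlay math: should a point of interest be drawn
    # on screen, given the device's position, compass heading and field of view?
    # All coordinates and headings below are made-up example values.
    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from point 1 to point 2, in degrees."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        x = math.sin(dlon) * math.cos(phi2)
        y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(x, y)) % 360

    def in_view(device_heading, target_bearing, fov_deg=60):
        """True if the target's bearing falls within the camera's field of view."""
        diff = (target_bearing - device_heading + 180) % 360 - 180
        return abs(diff) <= fov_deg / 2

    device = (51.5077, -0.1280)   # example fix: central London
    poi = (51.5101, -0.1340)      # example point of interest a few blocks away
    b = bearing_deg(*device, *poi)
    print(f"bearing {b:.0f} deg, draw overlay: {in_view(300, b)}")
    ```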

  • ARM vet: The CPUs future is threatened • The Register

    Image: 8-inch silicon wafer with multiple Intel Penti... (via Wikipedia)

    Harkening back to when he joined ARM, Segars said: “2G, back in the early 90s, was a hard problem. It was solved with a general-purpose processor, DSP, and a bit of control logic, but essentially it was a programmable thing. It was hard then – but by today’s standards that was a complete walk in the park.”

    He wasn’t merely indulging in “Hey you kids, get off my lawn!” old-guy nostalgia. He had a point to make about increasing silicon complexity – and he had figures to back it up: “A 4G modem,” he said, “which is going to deliver about 100X the bandwidth … is going to be about 500 times more complex than a 2G solution.”

    via ARM vet: The CPUs future is threatened • The Register.

    A very interesting look at the state of the art in microprocessor manufacturing: The Register talks with one of the principals at ARM, the folks who license their processor designs to almost every cell phone manufacturer worldwide. Looking at the trends in manufacturing, Simon Segars predicts that sustained performance gains will be harder to come by in the near future. Most advancement, he feels, will come from integrating more kinds of processing and coordinating the I/O between those processors on the same die, which is roughly what Intel is attempting by integrating graphics cores, memory controllers and the CPU all on one slice of silicon. But the software integration is the trickiest part, and Intel still sees fit to just add more general-purpose CPU cores to keep making new sales. Processor clocks stay pretty rigidly near the 3GHz boundary and have not shifted significantly since the end of the Pentium 4 era.

    Note, too, the difficulty of scaling up manufacturing as well as designing the next generation of chips. Referring back to my article from Dec. 21, 2010, on 450mm wafers (commentary on an Electronista article), Intel is the only company rich enough to scale up to the next size of wafer. Every step in the manufacturing process has become so specialized that the motivation to create new devices for manufacturing and test just isn’t there, because the total number of manufacturers who can scale up to the next largest silicon wafer size is probably four companies worldwide. That’s a measure of how exorbitantly expensive large-scale chip manufacturing has become. More and more, it seems a plateau is being reached in clock speeds and in the size of wafers finished in manufacturing, and within those limits Simon Segars’ thesis becomes even stronger.

  • Dave’s final questions – POSSE RIT

    What elements need to be present for an open source project to be successful (and really, what is success)? A recruiting pipeline is critical; participation is essential to the life of the project. Eclipse/Sage/Octave/Blender/MySQL/Fedora/Linux/Apache/Firefox/Handbrake/VLC. Give up power early, and let more people participate in the project as early as possible. Advertise the on-ramp for your committers/contributors clearly. Choose a license that is compatible with the target audience (GPL, LGPL, BSD, MIT). Re-use existing technology to get going quicker and avoid re-inventing wheels.

    What does the path to development entry look like? We need to collect some stories from people who came into a developer community and are still with it (for example, the Red Hat interns who started in their mid-teenage years). For the classroom experience, inquiry/active/constructivist-style learning on open source projects is a good start. Providing an outlet for creativity is another path.

    What does small-scale community architecture look like? (Still an open question.) Open source project managers need to look at the contribution pathway, lower barriers, and maximize not just the visibility of the project but also the transparency of its processes and roadmaps.

  • David May, parallel processing pioneer • reghardware

    Image: INMOS T800 Transputer (via Wikipedia)

    The key idea was to create a component that could be scaled from use as a single embedded chip in dedicated devices like a TV set-top box, all the way up to a vast supercomputer built from a huge array of interconnected Transputers.

    Connect them up and you had, what was, for its era, a hugely powerful system, able to render Mandelbrot Set images and even do ray tracing in real time – a complex computing task only now coming into the reach of the latest GPUs, but solved by British boffins 30-odd years ago.

    via David May, parallel processing pioneer • reghardware.

    I remember the Transputer. I remember seeing ISA-based add-on cards for desktop computers back in the 1980s; they would advertise in the back of the popular computer technology magazines of the day. And while it seemed really mysterious what you could do with a Transputer, the price premium on those boards made you realize it must have been pretty magical.

    More recently, while attending a workshop on open source software, I met a couple of former employees of a famous manufacturer of camera film. In their research labs these guys used to build custom machines using arrays of Transputers to speed up image processing tasks inside the products they were developing. Knowing that there are now even denser architectures built from chips like Tilera, Intel Atom and ARM absolutely blows them away; the price/performance ratio of the old arrays doesn’t come close.

    Software was probably the biggest point of friction, in that the tools to integrate the Transputer into an overall design required another level of expertise. The same is true of the general-purpose graphics processing unit (GPGPU) that nVidia championed and now markets with its Tesla product line, and the Chinese have created a hybrid supercomputer mating Tesla boards with commodity CPUs. It’s too bad that the economics of designing and producing the Transputer didn’t scale with the times (the way they have for Intel, by comparison). Clock speeds also fell behind, which allowed general-purpose microprocessors to spend the extra clock cycles performing the same calculations, only faster. That was also the advantage RISC chips had, until they could no longer overcome the performance increases Intel designed in.
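
    The kind of embarrassingly parallel workload the Transputer was built for, such as the Mandelbrot rendering mentioned above, maps just as naturally onto today’s multi-core chips. Here is a toy sketch using Python’s multiprocessing module; the resolution, iteration count and coordinates are arbitrary values of my own choosing.

    ```python
    # Toy sketch: render the Mandelbrot set row-by-row across multiple cores,
    # the same divide-up-the-image strategy Transputer arrays used decades ago.
    # Resolution, iteration count and coordinates are arbitrary example values.
    from multiprocessing import Pool

    WIDTH, HEIGHT, MAX_ITER = 120, 40, 80

    def mandelbrot_row(y):
        """Return one row of iteration counts for the classic Mandelbrot region."""
        row = []
        for x in range(WIDTH):
            c = complex(-2.0 + 3.0 * x / WIDTH, -1.2 + 2.4 * y / HEIGHT)
            z, n = 0j, 0
            while abs(z) <= 2 and n < MAX_ITER:
                z = z * z + c
                n += 1
            row.append(n)
        return row

    if __name__ == "__main__":
        with Pool() as pool:                  # one worker process per CPU core
            rows = pool.map(mandelbrot_row, range(HEIGHT))
        # Crude ASCII rendering: '#' marks points that stayed bounded.
        for row in rows:
            print("".join("#" if n == MAX_ITER else " " for n in row))
    ```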