UK Startup Blippar Confirms It has Acquired AR Pioneer Layar | TechCrunch

Cool Schiphol flights #layar (Photo credit: claudia.rahanmetan)

The acquisition makes Blippar one of the largest AR players globally, giving it a powerful positioning in the AR and visual browsing space, which may help its adoption in the mass consumer space where AR has tended to languish.

via UK Startup Blippar Confirms It has Acquired AR Pioneer Layar | TechCrunch.

Layar was definitely one of the first to get out there and promote Augmented Reality apps on mobile devices. Glad to see there was enough talent and capability still resident there to make it worthwhile acquiring. The article is right that the only other big-name player helping promote Augmented Reality is possibly Oculus Rift. I would add Google Glass to that mix as well, especially for AR (not necessarily VR).


Epson Moverio BT-200 AR Glasses In Person, Hands On

Wikitude Augmented Reality SDK optimized for Epson Moverio BT-200 (Photo credit: WIKITUDE)

Even Moverio’s less powerful (compared to VR displays) head tracking would make something like Google Glass overheat, McCracken said, which is why Glass input is primarily voice command or a physical touch. McCracken, who has developed for Glass, said that more advanced uses can only be accomplished with something more powerful.

via Epson Moverio BT-200 AR Glasses In Person, Hands On.

Epson has swept in and gotten a head start on others in the smart glasses field. With their full head-tracking system, plus something like a Microsoft Kinect-style projector and receiver pointed outward wherever you are looking, it might be possible to get a very realistic “information overlay”. The Kinect has an infrared projector and depth camera built in, and something similar could become another sensor in the Epson glasses. The Augmented Reality apps on Moverio only do edge detection to place the information overlay. If you had an additional 3D map (approximating shapes and depth as well), you might be able to correlate the two data feeds (edges and a 3D mesh) to get a really good informational overlay at close range, at normal arm's-length working distances.
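To make that correlation idea concrete, here's a minimal Python/NumPy sketch of fusing an edge mask with a depth map so overlay anchors only land on edges that are within arm's reach. The data and the function name are my own inventions for illustration, not anything from the Moverio SDK:

```python
import numpy as np

def overlay_anchors(edges, depth, max_range_m=0.8):
    """Toy fusion of an edge mask and a depth map: keep only edge
    pixels that fall within arm's-length working distance, so an
    information overlay can be pinned to nearby physical objects."""
    near = (depth > 0) & (depth <= max_range_m)   # valid, close-range depth
    candidates = edges & near                      # edges backed by 3D data
    ys, xs = np.nonzero(candidates)
    return list(zip(xs.tolist(), ys.tolist()))     # pixel coords for overlay

# Synthetic 4x4 example: one edge pixel close enough, one too far away.
edges = np.zeros((4, 4), dtype=bool)
edges[1, 1] = True
edges[2, 3] = True
depth = np.full((4, 4), 2.0)   # metres; mostly beyond arm's length
depth[1, 1] = 0.5              # the near edge survives the filter
print(overlay_anchors(edges, depth))  # [(1, 1)]
```

A real pipeline would obviously need the two sensors registered to the same coordinate frame first, which is the hard part.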

Granted, the Kinect is rather large compared to the Epson Moverio glasses, and its resolution is geared toward longer distances. At very short range the Kinect may not be quite what you're looking for to improve the informational overlay. But an Epson Moverio paired with a Kinect-like 3D projector/scanner could tie into the head tracking and allow a greater degree of accurate video overlay. Check out this video for a hack that uses the Kinect as a 3D scanner:

3D Scanning with an Xbox Kinect – YouTube

Also, as the pull-quote mentions, Epson has done an interesting cost-benefit analysis and decided a smartphone-level CPU and motherboard were absolutely necessary to make Moverio work. No doubt the light weight and miniature size of cellphones have by themselves revolutionized the mobile phone industry. Now it's time to leverage all that work and see what “else” the super power-efficient mobile CPUs can do along with their mobile GPU counterparts. I think this sudden announcement by Epson is going to cause a tidal wave of product announcements similar to the wave following the iPhone introduction in 2007. Prior to that, BlackBerry and its pseudo-smartphone held a monopoly on the category they created (the mobile phone as email browser). Now Epson is trying to show there's a much wider application of the technology outside of Google Glass and Oculus Rift.


Google X founder Thrun demonstrates Project Glass on TV show | Electronista

Sebastian Thrun, Associate Professor of Computer Science at Stanford University. (Photo credit: Wikipedia)

Google X (formerly Labs) founder Sebastian Thrun debuted a real-world use of his latest endeavor Project Glass during an interview on the syndicated Charlie Rose show which aired yesterday, taking a picture of the host and then posting it to Google+, the company's social network. Thrun appeared to be able to take the picture through tapping the unit, and posting it online via a pair of nods, though the project is still at the prototype stage at this point.

via Google X founder Thrun demonstrates Project Glass on TV show | Electronista.

You may remember Sebastian Thrun the way I do. He was spotlighted a few times on the PBS TV series NOVA in their coverage of the 2005 DARPA Grand Challenge. That was the year Carnegie Mellon University battled Stanford University in a race of driverless vehicles in the desert. The previous year CMU had been the favorite to win, but its vehicle didn't finish the race. By the following year's competition, the stakes were much higher. Stanford started its effort in the summer of 2004, just months after the March Grand Challenge race. By October 2005 the second race was held, with CMU and Stanford battling it out. Sebastian Thrun headed the Stanford team, and had previously been at CMU as a colleague of the Carnegie race team's head, Red Whittaker. In 2001 Thrun took a sabbatical year from CMU and spent it at Stanford. Eventually Thrun left Carnegie Mellon altogether and moved to Stanford in July 2003.

Thrun also took a graduate student of his and Red Whittaker's with him to Stanford, Michael Montemerlo. That combination of CMU experience, with a grad student to boot, helped accelerate the pace at which Stanley, the driverless vehicle, could be developed to compete in October 2005. Now move forward to another academic sabbatical, this time from Stanford to Google. Thrun took a group of students with him to work on Google Street View. Eventually this led to another driverless car, funded completely internally by Google. Thrun's accomplishments have continued to accrue at regular intervals, so much so that he has now given up his tenure at Stanford to join Google as a kind of entrepreneurial research scientist helping head up the Google X Labs. The X Labs is a kind of internal skunkworks that Google funds to work on various and sundry technologies, including the Google Driverless Car. Add to this Sebastian Thrun's other big announcement this year: an open education initiative called Udacity (attempting to 'change' the paradigm of college education). The list, as you see, goes on and on.

So where does that put the Google Project Glass experiment? Sergey Brin showed off a prototype of the system at a party very recently. Now Sebastian Thrun has shown it off as well. Google Project Glass is a prototype, as most online websites have reported. Sebastian Thrun's interview on Charlie Rose attempted to demo what the prototype can do today. According to the article quoted at the top of my blogpost, Google Glass can respond to gestures and voice (though voice was not demonstrated). Questions still remain as to what is included in this package to make it all work. Yes, the glasses do appear 'self-contained', but a wireless connection (as others have pointed out) would not be visible to anyone not specifically shown all the components that make it go. That little bit of visual misdirection (like a magician's) would lead one to believe that everything resides in the glasses themselves. Well, so much the better then for Google to let everyone draw their own conclusions. As to the concept video of Google Glass, I'm still not convinced it's the best way to interact with a device:

Project Glass: One day. . .

As the video shows it’s more centered on voice interaction very much like Apple’s own Siri technology. And that as you know requires two things:

1. A specific iPhone that has a noise cancelling microphone array

2. A broadband cellphone connection back to the Apple mothership data center in North Carolina to do the Speech-to-Text recognition and responses

So the glasses may appear self-contained to an untrained observer, but doing the heavy lifting shown in the concept video is going to require the Google Glasses plus two additional items:

1. A specific Android phone with the Google Glass spec’d microphone array and ARM chip inside

2. A broadband cellphone connection back to the Google motherships, wherever they may be, to do some amount of off-phone processing and, obviously, data retrievals for all the Google Apps included.

It would be interesting to know what passes over the personal area network between the Google Glasses and the cellphone data uplink that a real set of glasses is going to require. The devil is in those details, and they will be the limiting factor on how inexpensively this product could be manufactured and sold.
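For a rough sense of scale, here's a back-of-envelope calculation (my own assumptions, not figures from Google) of the raw audio bitrate the glasses might push over a Bluetooth-class personal area network for off-device speech recognition:

```python
# Back-of-envelope: raw audio bitrate the glasses might push over a
# personal area network to the phone for off-device speech recognition.
# Assumed capture format: 16 kHz, 16-bit, mono (a common speech setting).
sample_rate_hz = 16_000
bits_per_sample = 16
channels = 1

bitrate_bps = sample_rate_hz * bits_per_sample * channels
print(bitrate_bps)          # 256000 bits/s, i.e. 256 kbit/s raw

# Bluetooth 2.1 + EDR tops out around 2.1 Mbit/s of payload, so even
# uncompressed speech audio would use only a fraction of the link.
fraction = bitrate_bps / 2_100_000
print(round(fraction, 2))   # ~0.12
```

So bandwidth on the PAN itself probably isn't the bottleneck; power draw and the cellular uplink are more likely the limiting factors.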

Thomas Hawk’s photo of Sergey Brin wearing Google Glasses

Google shows off Project Glass augmented reality specs • The Register

Thomas Hawk's picture of Sergey Brin wearing the prototype of Project Glass

But it is early days yet. Google has made it clear that this is only the initial stages of Project Glass and it is seeking feedback from the general public on what they want from these spectacles. While these kinds of heads-up displays are popular in films and fiction and dearly wanted by this hack, the poor sales of existing eye-level screens suggests a certain reluctance on the part of buyers.

via Google shows off Project Glass augmented reality specs • The Register.

The video of the Google Glass interface is interesting and problematic at the same time. Stuff floats in and out of view, kind of like the organisms that live in the mucus of your eye. And the latency between when you see something and when you can issue a command gives interaction a halting, staccato cadence. It looks and feels like old-style voice recognition that needed discrete pauses to know when things ended. As a demo it's interesting, but they should issue releases very quickly and get this thing up to speed as fast as they possibly can. And I don't mean having co-founder Sergey Brin show up at a party wearing the thing. According to reports, the 'back pack' the glasses are tethered to is not small. Based on the description, I think Google has a long way to go yet.

And on the smaller-scale tinkerer front, this WordPress blogger fashioned an old-style 'periscope' using a cellphone, a mirror and half-mirrored sunglasses to get a cheaper Augmented Reality experience. The cellphone is an HTC unit strapped onto the brim of a baseball hat. The display is then reflected downwards through a hole cut in the brim, and then off a pair of sunglasses mounted at roughly a 45-degree angle. It's cheap and it works, but I don't know how good the voice activation is. Makes me wonder how well it might work with an iPhone Siri interface. The author even mentions that the HTC is a little heavy and an iPhone might work a little better. I wonder if it wouldn't work better still if the 'periscope' mirror arrangement were scrapped altogether: just mount the phone flat onto the bill of the hat, screen facing downward, so the screen reflects directly off the sunglasses' surface. The number of reflecting surfaces would be reduced, the image would be brighter, and so on. I noticed a lot of people commented on this fellow's blog, which might get some discussion brewing about the longer-term value-add benefits of Augmented Reality. There is a killer app yet to be found, and even Google hasn't captured the flag yet.

This picture shows the Wikitude World Browser on the iPhone looking at the Old Town of Salzburg. Computer-generated information is drawn on top of the screen. This is an example for location-based Augmented Reality. (Photo credit: Wikipedia)

Buzzword: Augmented Reality

Augmented Reality in the Classroom Craig Knapp (Photo credit: caswell_tom)

What it means. “Augmented reality” sounds very “Star Trek,” but what is it, exactly? In short, AR is defined as “an artificial environment created through the combination of real-world and computer-generated data.”

via Buzzword: Augmented Reality.

Nice little survey from the people at Consumer Reports, with specific examples from the Consumer Electronics Show this past January. Whether it's software or hardware, a lot of things can be labeled and marketed as 'Augmented Reality'. On this blog I've concentrated more on apps running on smartphones with integrated cameras, accelerometers and GPS. Those pieces are important building blocks for an integrated Augmented Reality-like experience. But as this article from CR shows, your experience may vary quite a bit.

In my commentary on stories posted by others on the Internet, I have covered mostly the examples of AR apps on mobile phones. Specifically, I've concentrated on the toolkit provided by Layar for adding metadata to existing map points of interest. The idea of 'marking up' the existing landscape holds a great deal of promise for me, as the workload is shifted off the creator of the 3D world and onto the people traveling within it. The same holds true for Massively Multiplayer Games, and some worlds do allow their members to do that kind of building and marking up of the environment itself. With Layar you merely point the cell phone camera in a compass direction and call up the associated data.
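As a rough illustration of how that compass-direction lookup could work, here's a small Python sketch (my own toy code, not Layar's actual API) that filters points of interest down to those within the camera's field of view, given the phone's GPS fix and compass heading:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (degrees, 0 = north) from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def visible_pois(lat, lon, heading, pois, fov_deg=60):
    """Keep the POIs whose bearing falls inside the camera's field of view."""
    hits = []
    for name, plat, plon in pois:
        # Signed angular difference, wrapped into (-180, 180].
        diff = (bearing_deg(lat, lon, plat, plon) - heading + 180) % 360 - 180
        if abs(diff) <= fov_deg / 2:
            hits.append(name)
    return hits

# Hypothetical POIs around an observer at (52.0, 4.0) facing due north.
pois = [("cafe", 52.001, 4.0),      # due north -> inside the view cone
        ("museum", 52.0, 4.001)]    # due east  -> outside a 60-degree FOV
print(visible_pois(52.0, 4.0, heading=0, pois=pois))  # ['cafe']
```

A real layer would also filter by distance and then attach the metadata to each hit, but the bearing test above is the core of the "point and discover" interaction.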

It's a sort of hunt for information, and it works well when the metadata mark-up is well done. But as with many crowd-sourced efforts, some amount of lower-quality work, or worse, vandalism, occurs. That shouldn't keep anyone from trying to enhance the hidden data that can be discovered through a Layar-enhanced Real World. I'm hoping mobile phone based AR applications grow and find a niche, if not a killer app. It's still early days, and mobile phone AR is not being adopted very quickly, but I think there are still a lot of untapped resources there. I don't think we have discovered all the possible applications of mobile phone AR.

Disruptions: Wearing Your Computer on Your Sleeve –

Ubiquitous computing, One Laptop per Child, wearable computers, the iPod Touch, the iPad and the iPhone are all descendants in a long lineage of predictions about the Future of Computing. But the newest wrinkle (pun intended) is the topic of 'wearable' computers. Given how portable and powerful smartphones are these days, why do we need to 'wear' the computer? (more)

Image via Wikipedia: Bad old days of Wearable Computers

Wearable computing is a broad term. Technically, a fancy electronic watch is a wearable computer. But the ultimate version of this technology is a screen that would somehow augment our vision with information and media.

via Disruptions: Wearing Your Computer on Your Sleeve –

Augmented Reality in the news, only this time it's Google, so it's like for rilz, yo! Just kidding. Given Google's investment in the Android OS and power-saving mobile computing, it will be very interesting to see what kind of wearable computers they develop. No offense to the MIT Media Lab, but getting something into the hands of end-users is something Google is much more accomplished at doing (One Laptop Per Child is the counter-argument, of course). I think mobile phones are already kind of a wearable computer. Think back to the first iPod arm bands, right? Essentially, scale the iPod up to the size of an Android phone and it's no different. It's practically wearable today (as Bilton says in his article).

What's different with this effort is the accessorizing of the real wearable computer (the smartphone), giving it the augmentation role we've seen with products like Layar. But it may not be limited to cameras, video screens and information overlays: the next wave would have auxiliary wearable sensors communicating back to the smartphone, like the old Nike accelerometer that fit into special Nike shoes. Also consider the iPod Nano 'wrist watch' fad as it exists today. It may not run Apple's iOS, but it certainly could transmit data to your smartphone if need be. Which leads to the hints and rumors of attempts by Apple to create 'curved glass'.

This has been an ongoing effort by Apple, without being tied to any product or feature in their current line, except maybe the iPhone. Most websites I've read to date speculate the curvature is not very pronounced: a styling cue to further help marketing and sales of the iPhone. But the curvature Bilton is talking about in this article would be more like the face of a bracelet around the wrist, much more pronounced. Thus the emphasis on curved glass might point to more work being done on wearable computers.

Lastly, Bilton's article goes into a typical futuristic projection of what form the video display will take. No news to report on this topic specifically, as it's a lot of hand-waving and make-believe about contact lenses potentially becoming display screens. As for me, the more pragmatic approach of companies like Layar, creating iPhone/Augmented Reality software hybrids, is going to ship sooner and prototype faster than the make-believe video contact lenses of the Future. The takeaway I get from Bilton's article is that there's a more defined move to treat the smartphone as more of a general-purpose computer. Though the MIT Media Lab has labeled this 'wearable computing', think of it more generally as Ubiquitous Computing, where the smartphone and its data connection are with you wherever you go.

Augmented Reality Start-Up Ready to Disrupt Business – Tech Europe – WSJ

The WSJ wants to bring the threat of Augmented Reality to brand managers savvy enough to keep up with new products being offered by companies like Layar. But what threat is there really, if the market uptake of Augmented Reality is so small, and the information store so much like a typical social networking stovepipe, à la Facebook? It is an interesting story, so I encourage you to read the WSJ article about a squatter in the Layar domain. Read on:

Layar logo (Image via CrunchBase)

“We have added to the platform computer vision, so we can recognize what you are looking at, and then add things on top of them.”

via Augmented Reality Start-Up Ready to Disrupt Business – Tech Europe – WSJ.

I've been a fan of Augmented Reality for a while, following the announcements from Layar over the past two years. I'm hoping something more comes out of this work than another channel for selling, advertising and marketing. But innovation always follows where the money is, and artistic creative pursuits are NOT it. Witness the evolution of Layar from a toolkit to a whole package of brand-loyalty add-ons, ready to be sent out whole to any smartphone owner unwitting enough to download a Layar-created app.

The emphasis in this WSJ article, however, is not on how Layar is trying to market itself. Instead, they are more worried about how Layar is creating a 'virtual' space where metadata is tagged onto a physical location. So a Layar Augmented Reality squatter can set up a very mundane virtual T-shirt shop (as in Second Life, say) in the same physical location as a high-class couturier on a high street in London or Paris. What right does anyone have to squat in the Layar domain? Just like the Domain Name System squatters of today, they have every right by being there first. Which brings to mind how this will evolve into a game of technical one-upmanship, whereby each Augmented Reality domain will be subject to the market forces of popularity. Witness the chaotic evolution of social networking, where AOL, Friendster, MySpace, Facebook and now Google+ all usurp market mindshare from one another.

While the Layar squatter has his T-shirt shop today, the question is: who knows this other than other Layar users? And who knows whether anyone else ever will? This leads me to conclude the whole thing is a much bigger deal to the WSJ than it is to anyone who might be sniped at or squatted upon within an Augmented Reality cul-de-sac. Though those stores and corporations may not be able to budge the Layar squatters, they can at least lay claim to the rest of their empire and prevent any future miscreants from owning their virtual space. But as I say, in one-upmanship there is no real end game, only the NEXT game.