Posts Tagged ‘ar’
Sebastian Thrun, founder of Google X (formerly Google Labs), debuted a real-world use of his latest endeavor, Project Glass, during an interview on the syndicated Charlie Rose show which aired yesterday, taking a picture of the host and then posting it to Google+, the company's social network. Thrun appeared to take the picture by tapping the unit and to post it online via a pair of nods, though the project is still at the prototype stage at this point.
You may remember Sebastian Thrun the way I do. He was spotlighted a few times on the PBS TV series NOVA in its coverage of the DARPA Grand Challenge competition in 2005. That was the year Carnegie Mellon University battled Stanford University in a race of driverless vehicles in the desert, with Stanford taking the win. The previous year CMU was the favorite to win, but its vehicle didn't finish the race. By the following year's competition, the stakes were much higher. Stanford started its effort in the summer of 2004, just months after the March Grand Challenge race, and by October 2005 the second race was held, with CMU and Stanford battling it out. Sebastian Thrun was the head of the Stanford team; he had previously been at CMU and a colleague of the Carnegie race team head, Red Whittaker. In 2001 Thrun took a sabbatical year from CMU and spent it at Stanford. Eventually Thrun left Carnegie Mellon altogether and moved to Stanford in July 2003.
Thrun also took a graduate student of his and Red Whittaker's with him to Stanford, Michael Montemerlo. That combination of CMU experience plus a grad student to boot helped accelerate the pace at which Stanley, the driverless vehicle, was developed to compete in October 2005. Now move forward to another academic sabbatical, this time from Stanford to Google Inc. Thrun took a group of students with him to work on Google Street View, which eventually led to another driverless car, funded completely internally by Google. Thrun's accomplishments have continued to accrue at regular intervals, so much so that he has now given up his tenure at Stanford to join Google as a kind of entrepreneurial research scientist helping head up the Google X Labs. The X Labs is a kind of internal skunkworks that Google funds to work on various and sundry technologies, including the Google driverless car. Add to this Sebastian Thrun's other big announcement this year: an open education initiative titled Udacity (attempting to 'change' the paradigm of college education). The list, as you see, goes on and on.
So where does that put the Google Project Glass experiment? Sergey Brin showed off a prototype of the system at a party very recently, and now Sebastian Thrun has shown it off as well. Google Project Glass is a prototype, as most websites have reported, and Thrun's interview on Charlie Rose attempted to demo what that prototype can do today. According to the article quoted at the top of this blog post, Google Glass can respond to gestures and voice (though voice was not demonstrated). Questions still remain as to what is included in this package to make it all work. Yes, the glasses do appear 'self-contained', but a wireless connection (as pointed out by Mashable.com) would not be visible to anyone not specifically shown all the components that make it go. That little bit of visual indirection (like a magician's) would lead one to believe that everything resides in the glasses themselves. Well, so much the better for Google to let everyone draw their own conclusions. As to the concept video of Google Glass, I'm still not convinced it's the best way to interact with a device:
As the video shows, it's centered more on voice interaction, very much like Apple's own Siri technology. And that, as you know, requires two things:
1. A specific iPhone that has a noise cancelling microphone array
2. A broadband cellphone connection back to the Apple mothership data center in North Carolina to do the speech-to-text recognition and responses
So the glasses are guaranteed to look self-contained to an untrained observer, but to do the heavy lifting shown in the concept video they are going to require two additional items:
1. A specific Android phone with the Google Glass spec’d microphone array and ARM chip inside
2. A broadband cellphone connection back to the Google motherships, wherever they may be, to do some amount of off-phone processing and, obviously, data retrieval for all the Google apps included.
It would be interesting to know what passes over the personal area network between the glasses and the cellphone data uplink that a real, shipping set of glasses is going to require. The devil is in those details, and they will be the limiting factor on how inexpensively this product could be manufactured and sold.
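Just to make the speculation concrete, here is a minimal sketch of what one message over that glasses-to-phone link might look like. Everything here is my own guess: the field names, units, and the `make_sensor_update` helper are hypothetical, since Google has published nothing about the actual protocol.

```python
import json

def make_sensor_update(heading_deg, pitch_deg, roll_deg, tap_count, nod_count):
    """Bundle one hypothetical head-tracking sample plus gesture events
    (taps on the frame, nods) into a JSON message for the tethered phone."""
    return json.dumps({
        "type": "sensor_update",
        "heading_deg": heading_deg,  # compass heading of the wearer's gaze
        "pitch_deg": pitch_deg,
        "roll_deg": roll_deg,
        "gestures": {"taps": tap_count, "nods": nod_count},
    })

msg = make_sensor_update(87.5, -3.2, 0.4, 1, 2)
# A message this small fits comfortably in a low-power wireless link's
# packet budget; the expensive traffic would be photos and audio
# going the other direction, up to the phone and on to Google.
print(len(msg.encode("utf-8")))
```

The point of the sketch is only that orientation and gesture data are tiny; the bandwidth (and battery) cost lives in the media the glasses capture.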
- Google’s Sebastian Thrun: 3 Visions in the ‘Age of Disruption’ (wired.com)
- Google Glasses Make Their First TV Appearance (gizmodo.com.au)
- How Google’s Self-Driving Car Works (spectrum.ieee.org)
What it means. “Augmented reality” sounds very “Star Trek,” but what is it, exactly? In short, AR is defined as “an artificial environment created through the combination of real-world and computer-generated data.”
Nice little survey from the people at Consumer Reports, with specific examples from the Consumer Electronics Show this past January. Whether it's software or hardware, there are a lot of things that can be labeled and marketed as 'Augmented Reality'. On this blog I've concentrated more on the apps running on smartphones with integrated cameras, accelerometers and GPS. Those pieces are important building blocks for an integrated Augmented Reality-like experience. But as this article from CR shows, your experience may vary quite a bit.
In my commentary on stories posted by others on the Internet, I have covered mostly examples of AR apps on mobile phones. Specifically, I've concentrated on the toolkit provided by Layar for adding metadata to existing map points of interest. The idea of 'marking up' the existing landscape holds a great deal of promise for me, as the workload is shifted off the creator of the 3D world and onto the people traveling within it. The same could hold true for massively multiplayer games, and some worlds do allow members to do that kind of building and marking up of the environment itself. But Layar provides a set of data you can call up by merely pointing the cell phone camera in a compass direction and bringing up the associated data.
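The core trick behind "point the camera in a compass direction, see the data" can be sketched in a few lines: compute the bearing from the viewer to each POI and keep the ones that fall inside the camera's field of view. This is a generic sketch of the technique, not Layar's actual code; the function names and the 60° field of view are my own choices.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the viewer to a POI, in degrees (0 = north)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def pois_in_view(viewer, heading_deg, pois, fov_deg=60):
    """Return the names of POIs whose bearing from the viewer falls inside
    the camera's horizontal field of view, centered on the compass heading."""
    visible = []
    for poi in pois:
        b = bearing_deg(viewer[0], viewer[1], poi["lat"], poi["lon"])
        # Signed angular difference, wrapped into [-180, 180)
        diff = (b - heading_deg + 180) % 360 - 180
        if abs(diff) <= fov_deg / 2:
            visible.append(poi["name"])
    return visible
```

With the phone's GPS fix as `viewer` and the compass azimuth as `heading_deg`, whatever survives the filter is what an AR browser would draw over the camera image.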
It's a sort of hunt for information, and it works well when the metadata mark-up is well done. But as with many crowd-sourced efforts, some amount of lower-quality work, or worse, vandalism, occurs. That shouldn't keep anyone from trying to enhance the hidden data that can be discovered through a Layar-enhanced real world, though. I'm hoping mobile phone based AR applications grow and find a niche, if not a killer app. It's still early days, and mobile phone AR is not being adopted very quickly, but I think there are still a lot of untapped resources there. I don't think we have discovered all the possible applications of mobile phone AR.
- Buzzword: Augmented Reality (news.consumerreports.org)
- Use Your iPhone As An Augmented Reality HUD In Your Next Game Of Lazer Tag (cultofmac.com)
- Ubiquitous Computing: Education Everywhere with Augmented Reality (futureinstitution.wordpress.com)
Apple may be working on bringing augmented reality views to its iPad thanks to a newly discovered patent filing with the USPTO.
via Apple patents hint at future AR screen tech for iPad | Electronista. (Originally posted at AppleInsider at the link below)
Just a very brief look at a couple of patent filings by Apple, with some descriptions of potential applications. Apple seems to want to use the technology for navigation purposes using the onboard video camera. One half of the screen would show the live video feed; the other half, a 'virtual' 3D rendition of that scene, to let you find a path, or maybe a parking space, in between all those buildings.
The second filing mentions a see-through screen whose opacity can be regulated by the user. The information display takes precedence over the image seen through the LCD panel, and the screen defaults to totally opaque when no voltage is applied (an in-plane switching design for the LCD).
However, the most intriguing part of the story as told by AppleInsider is the use of sensors on the device to determine angle, direction and bearing, which are then sent over the network. Why the network? The whole rendering of the 3D scene described in the first patent filing is done somewhere in the cloud and spit back to the iOS device. No onboard 3D rendering is needed, or at least not at that level of detail. Maybe those data centers in North Carolina are really cloud-based 3D rendering farms?
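The thin-client split the filing hints at is easy to sketch: the device sends only its pose (position plus the sensor-derived angles), and the heavy 3D rendering happens server-side. Everything below is an assumption of mine, including the `DevicePose` fields and the shape of the request body; the patent describes the idea, not a wire format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DevicePose:
    """Hypothetical pose report: GPS fix plus compass and accelerometer angles."""
    lat: float
    lon: float
    heading_deg: float  # compass bearing the camera faces
    tilt_deg: float     # device tilt from the accelerometer

def render_request(pose: DevicePose, screen_w: int, screen_h: int) -> str:
    """Build the JSON body a cloud 3D renderer might expect.
    The server would rasterize the scene for this pose and screen size
    and return a finished frame; the device just displays it."""
    body = asdict(pose)
    body.update({"width": screen_w, "height": screen_h})
    return json.dumps(body)
```

Note how little goes up the wire: a handful of floats per frame, versus a full rendered image coming back down. That asymmetry is what makes the "rendering farm in the cloud" idea plausible for a battery-powered tablet.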
Though the AR element is not particularly elegant, merely consisting of a blue dot superimposed on your cell phone screen that guides the user through Tokyo’s streets, we think it’s nevertheless a clever marketing gimmick.
Augmented Reality (AR) in the news this week, being used for a marketing campaign in Tokyo, Japan. It's mostly geared towards getting people out to visit bars and restaurants to collect points, and whoever collects enough points can cash them in for Chivas Regal memorabilia. But hey, it's something, I guess. I just wish the navigation interface were a little more sophisticated.
I also wonder how many different phones can be used as personal navigators to find the locations awarding points. GPS seems like an absolute requirement, but so does a Foursquare or Livedoor client.
Lens-FitzGerald: I never thought of going into augmented reality, but cyberspace, any form of digital worlds, have always been one of the things I’ve been thinking about since I found out about science fiction. One of the first books I read of the cyber punk genre was Bruce Sterling‘s “Mirror Shades.” Mirror shades, meaning, of course, AR goggles. And that book came out in 1988 and ever since, this was my world.
An interview with the man who created the most significant Augmented Reality (AR) application on handheld devices: Layar. Since the first releases on smartphones like the Android models in Europe, Layar has branched out to cover more of the OSes available on handheld devices. Interest in AR has cooled somewhat, I think, as social networking and location have seemed to rule the day. I would argue even location isn't as fiery hot as it was at the beginning, yet Facebook is still here with a vengeance. So whither the market for AR? What's next, you wonder? Well, it seems Qualcomm has announced its very own AR toolkit today to help jump-start the developer market toward more useful, nay killer, AR apps. Stay tuned.
I remember when I first saw the Verizon Wireless commercial featuring the Layar Reality Browser. It looked like something out of a science fiction movie. When my student web coordinator came in to the office with her iPhone, I asked her if she had ever heard of “Layar.” She had not heard of it so we downloaded it from the App Store. I was amazed at how the app used the phone’s camera, GPS and Internet access to create a virtual layer of information over the image being displayed by the phone. It was my first experience with an augmented reality application.
It's nice to know Layar is getting some wider exposure. When I first wrote about it last year, the smartphone market was still somewhat small, and Layar was targeting phones that already had GPS built in, which the Apple iPhone wasn't quite ready to allow access to in its development tools. Now the iPhone and Droid are willing participants in this burgeoning era of Augmented Reality.
The video in the article was shot on a Droid and does a WAY better job than any of the fanboy websites of showing off the Layar application. Hopefully real-world performance is as good as it appears in the video. I'm pretty sure the company that makes it has been continuously updating it since it first appeared on the iPhone a year ago. Given the recent release of the iPhone 4 and its performance enhancements, I have a feeling Layar would be a cool, cool app to try out and explore.
TomTom is releasing a new personal navigation device (PND) called the TomTom Live 1000. As part of this article, MacNN mentions that TomTom is attempting to get into the app store game by creating its own marketplace for TomTom-specific software add-ons (like the Apple App Store). The reason is the cold war going on among device manufacturers, each trying to gain the upper hand through wholesale adoption of a closed application software universe. Google is doing it with Android, and Apple has done it with the iPhone and iPad. Going all the way back to the iPod, there was interest in running games on those handheld devices but no obvious way to 'sell' them, until the App Store came out. Now TomTom is following suit by redesigning the whole TomTom universe using WebKit as a key component of the new OS on TomTom devices (WebKit is also being used in the Android-based Garmin A10 phone).

Ambivalent about the added value? Other than trying to gain some market share against other PND manufacturers, Harold Goddijn, the CEO of TomTom, says it's all about innovation. The article mentions in passing the possibility of Augmented Reality apps for TomTom devices, but there's the small matter of getting a video feed into the PND that can then be layered with the AR software. And honestly, even Goddijn himself is somewhat ambivalent about seizing the opportunity of Augmented Reality in the TomTom application store universe. As reported on Pocket-lint.com: “Although Goddjin confirmed that the company was looking at the possibility of adding augmented reality in to the mix, the niche technology isn’t a major objective for them.”
It's not enough to just overlay information on an Apple iPhone or TomTom PND screen showing related points of interest (POI). As the iPhone Nearest Tube app from Acrossair shows, knowing the general compass direction to a subway station is useful, but full step-by-step navigation to it seems the next logical step, maps and all. What makes me think of this is the recent announcement of the Garmin A10 smartphone with GPS navigation. If Garmin, TomTom or an independent developer could mash up Augmented Reality with their respective navigation engines, whilst throwing in a bit of Google Street View, one might, just might, have the most useful personal assistant for finding places on foot. Garmin has a whole slew of devices for the hiking and bicycling markets, and they even offer walking/pedestrian directions on their automobile navigation devices. So the overlay of Augmented Reality/points of interest and full-on Garmin navigation would, to me, be a truly killer app.