Category: google

  • Google X founder Thrun demonstrates Project Glass on TV show | Electronista

    Sebastian Thrun, Associate Professor of Computer Science at Stanford University. (Photo credit: Wikipedia)

    Google X (formerly Labs) founder Sebastian Thrun debuted a real-world use of his latest endeavor, Project Glass, during an interview on the syndicated Charlie Rose show, which aired yesterday, taking a picture of the host and then posting it to Google+, the company's social network. Thrun appeared to take the picture by tapping the unit and to post it online via a pair of nods, though the project is still at the prototype stage at this point.

    via Google X founder Thrun demonstrates Project Glass on TV show | Electronista.

    You may remember Sebastian Thrun the way I do. He was spotlighted a few times on the PBS TV series NOVA in its coverage of the 2005 DARPA Grand Challenge, the year Carnegie Mellon University battled Stanford University in a race of driverless vehicles across the desert. The previous year CMU had been the favorite to win, but its vehicle didn't finish the race. By the following year's competition, the stakes were much higher. Stanford started its effort in the summer of 2004, just months after the March Grand Challenge race, and by October 2005 the second race was held, with CMU and Stanford battling it out. Sebastian Thrun headed the Stanford team; he had previously been at CMU, where he was a colleague of the Carnegie race team's head, Red Whittaker. In 2001 Thrun took a sabbatical year from CMU and spent it at Stanford, and eventually he left Carnegie Mellon altogether, moving to Stanford in July 2003.

    Thrun also took a graduate student of his and Red Whittaker's with him to Stanford, Michael Montemerlo. That combination of CMU experience, plus a grad student to boot, helped accelerate the pace at which Stanley, the driverless vehicle, was developed to compete in October 2005. Now move forward to another academic sabbatical, this time from Stanford to Google Inc. Thrun took a group of students with him to work on Google Street View, and eventually this led to another driverless car, funded completely internally by Google. Thrun's accomplishments have continued to accrue at regular intervals, so much so that he has now given up his tenure at Stanford to join Google as a kind of entrepreneurial research scientist, helping head up the Google X Labs. X Labs is an internal skunkworks that Google funds to work on various and sundry technologies, including the Google driverless car. Add to this Sebastian Thrun's other big announcement this year, an open education initiative titled Udacity (attempting to 'change' the paradigm of college education). The list, as you see, goes on and on.

    So where does that put the Google Project Glass experiment? Sergey Brin showed off a prototype of the system at a party very recently, and now Sebastian Thrun has shown it off as well. Google Project Glass is a prototype, as most websites have reported, and Thrun's interview on Charlie Rose attempted to demo what that prototype can do today. According to the article quoted at the top of this blogpost, Google Glass can respond to gestures and voice (though voice was not demonstrated). Questions remain as to what is included in this package to make it all work. Yes, the glasses do appear 'self-contained,' but a wireless connection (as pointed out by Mashable.com) would not be visible to anyone not specifically shown all the components that make it go. That little bit of visual indirection (like a magician's) would lead one to believe that everything resides in the glasses themselves. Well, so much the better for Google to let everyone draw their own conclusions. As for the concept video of Google Glass, I'm still not convinced it's the best way to interact with a device:

    Project Glass: One day...

    As the video shows, it's centered on voice interaction, very much like Apple's own Siri technology. And that, as you know, requires two things:

    1. A specific iPhone that has a noise-cancelling microphone array

    2. A broadband cellphone connection back to the Apple mothership data center in North Carolina to do the speech-to-text recognition and responses

    So to an untrained observer the glasses appear self-contained, but to do the heavy lifting shown in the concept video they are going to require two additional items:

    1. A specific Android phone with the Google Glass-spec'd microphone array and ARM chip inside

    2. A broadband cellphone connection back to the Google motherships, wherever they may be, to do some amount of off-phone processing and, obviously, data retrievals for all the Google Apps included.

    It would be interesting to know what passes over the personal area network between the Google Glasses and the cellphone data uplink that a real set of glasses is going to require. The devil is in those details, and they will be the limiting factor on how inexpensively this product can be manufactured and sold.
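
    Below is a minimal sketch of the three-hop pipeline I'm speculating about: glasses to phone over the personal area network, then phone to cloud over cellular. Every function name here is invented for illustration; none of this is a real Google Glass or Android API.

        # Hypothetical sketch: glasses -> PAN -> phone -> cellular -> cloud.
        # All names below are invented; nothing here is a real Glass API.

        import json


        def glasses_capture_audio() -> bytes:
            """Stand-in for the microphone array on the glasses."""
            return b"raw-pcm-audio..."


        def pan_send_to_phone(payload: bytes) -> bytes:
            """Stand-in for the Bluetooth/PAN hop; the bandwidth and
            battery cost of this link are the 'devil in the details'."""
            return payload


        def phone_upload_for_recognition(audio: bytes) -> str:
            """Stand-in for the cellular hop to an off-phone recognizer.
            The heavy lifting (speech-to-text, data retrieval) happens
            here, and the round trip is where latency would come from."""
            return json.dumps({"transcript": "take a picture"})


        audio = glasses_capture_audio()
        relayed = pan_send_to_phone(audio)
        print(phone_upload_for_recognition(relayed))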

  • Google shows off Project Glass augmented reality specs • The Register

    Thomas Hawk’s picture of Sergey Brin wearing the prototype of Project Glass

    But it is early days yet. Google has made it clear that this is only the initial stage of Project Glass and it is seeking feedback from the general public on what they want from these spectacles. While these kinds of heads-up displays are popular in films and fiction and dearly wanted by this hack, the poor sales of existing eye-level screens suggest a certain reluctance on the part of buyers.

    via Google shows off Project Glass augmented reality specs • The Register.

    The video of the Google Glass interface is interesting and problematic at the same time. Stuff floats in and out of view, kind of like the organisms that live in the mucus of your eye. And the latency between when you see something and when you issue a command gives interaction a halting, staccato cadence. It looks and feels like old-style voice recognition that needed discrete pauses added to know when things ended. As a demo it's interesting, but Google should issue releases very quickly and get this thing up to speed as fast as it possibly can. And I don't mean having co-founder Sergey Brin show up at a party wearing the thing. According to reports, the 'backpack' the glasses are tethered to is not small. Based on that description, I think Google has a long way to go yet.

    http://my20percent.wordpress.com/2012/02/27/baseball-cap-head-up-displa/

    And on the smaller-scale tinkerer front, this WordPress blogger fashioned an old-style 'periscope' using a cellphone, a mirror, and half-mirrored sunglasses to get a cheaper augmented reality experience. The cellphone is an HTC unit strapped onto the brim of a baseball hat. The display is then reflected downward through a hole cut in the brim and bounced off a pair of sunglasses mounted at roughly a 45-degree angle. It's cheap and it works, but I don't know how good the voice activation is; it makes me wonder how well it might work with an iPhone Siri interface. The author even mentions that the HTC is a little heavy and an iPhone might work a little better. I wonder if it wouldn't work better still if the 'periscope' mirror arrangement were scrapped altogether: just mount the phone flat onto the bill of the hat with the screen facing downward, so the screen reflects directly off the sunglasses' surface. The number of reflecting surfaces would be reduced, the image would be brighter, and so on. I noticed a lot of people commented on this fellow's blog, which might get some discussion brewing about the longer-term, value-add benefits of augmented reality. There is a killer app yet to be found, and even Google hasn't captured the flag yet.
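
    Here's the back-of-envelope optics behind 'fewer reflecting surfaces, brighter image.' If each surface in the path passes only a fraction of the light, the fractions multiply. The reflectivity figures below are illustrative guesses, not measurements of the actual rig.

        # Estimate why fewer reflections mean a brighter image. Assumes
        # each mirror/half-mirror passes a fixed fraction of the light;
        # the percentages are illustrative guesses, not measurements.

        def relative_brightness(reflectivities):
            """Multiply the losses of each surface in the optical path."""
            result = 1.0
            for r in reflectivities:
                result *= r
            return result

        # Periscope: ordinary mirror (~90%) then half-mirrored lens (~40%).
        periscope = relative_brightness([0.90, 0.40])
        # Flat-mounted phone: only the half-mirrored lens in the path.
        direct = relative_brightness([0.40])

        print(f"periscope: {periscope:.0%} of screen brightness")  # ~36%
        print(f"direct:    {direct:.0%} of screen brightness")     # ~40%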

    This picture shows the Wikitude World Browser on the iPhone looking at the Old Town of Salzburg. Computer-generated information is drawn on top of the screen. This is an example of location-based Augmented Reality. (Photo credit: Wikipedia)
  • Tilera | Wired Enterprise | Wired.com

    Tilera’s roadmap calls for its next generation of processors, code-named Stratton, to be released in 2013. The product line will expand the number of processors in both directions, down to as few as four and up to as many as 200 cores. The company is going from a 40-nm to a 28-nm process, meaning they’re able to cram more circuits in a given area. The chip will have improvements to interfaces, memory, I/O and instruction set, and will have more cache memory.

    via Tilera | Wired Enterprise | Wired.com.


    I'm enjoying this survey of companies building massively parallel, low-power computing products. Wired.com's Enterprise section started last week with a look at SeaMicro and how its two principal founders got their start observing Google's initial stabs at a warehouse-sized computer. Since that time things have fractured somewhat instead of coalescing, and now three big attempts are competing to deliver the low-power, massively parallel computer in a box. Tilera is the longer-term project of the bunch, a startup out of MIT going back further than Calxeda or SeaMicro.

    However, application of this technology has been completely dependent on the software. Whether it be OSes or applications, they all have to be constructed carefully to take full advantage of the Tile processor architecture. To its credit, Tilera has attempted to insulate application developers from some of the vagaries of the underlying chip by creating an OS that does the heavy lifting of queuing and scheduling. But still, there's got to be a learning curve there, even if it isn't quite as daunting as, say, the one facing folks who develop applications for the supercomputers at the National Labs here in the U.S. Suffice it to say, adopting a Tilera CPU for a product or project you are working on is a non-trivial choice. The people who need a Tilera GX CPU for their app already know all they need to know about the chip in advance; it's that kind of choice they are making.
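
    As a rough illustration of that learning curve, here is the kind of restructuring any many-core part demands, using Python's standard library as a stand-in for Tilera's proprietary toolchain: the work has to be decomposed into independent chunks that a scheduler can farm out across cores.

        # Generic illustration (not Tilera's actual SDK) of decomposing a
        # serial loop into independent chunks spread across many cores.

        from multiprocessing import Pool


        def score_record(record: str) -> int:
            """Stand-in per-item task, e.g. packet inspection."""
            return sum(record.encode())


        if __name__ == "__main__":
            records = [f"record-{i}" for i in range(10_000)]

            # Serial version: one core does everything.
            serial = [score_record(r) for r in records]

            # Parallel version: the pool plays the role Tilera's OS layer
            # plays, hiding queuing and scheduling from the developer.
            with Pool() as pool:
                parallel = pool.map(score_record, records, chunksize=256)

            assert serial == parallel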

    I'm also relieved to know they are continuing development to shrink their design rules. Intel, the clear leader in silicon semiconductor manufacturing, continues to shrink its own design rules and is fast approaching 20nm-18nm production lines at its fabrication plants in both Oregon and Arizona; it is not about to stop and take a breath. Companies like Tilera, Calxeda, and SeaMicro need to do continuous development on their products to keep from being blindsided by Intel's product development juggernaut. So Tilera is very wise to shrink its design rule from 40nm down to 28nm as fast as it can and then get good yields on the production lines once it starts sampling chips at this size.

    *UPDATE: Just saw this run through my blogroll last week: Tilera has announced a new chip coming in March. Glad to see Tilera is still duking it out, battling for design wins with manufacturers selling into the data center. Larger memory addressing will help make the Tilera chips more competitive with commodity Intel hardware shops, and maybe we'll see full 64-bit memory extensions at some point as a follow-on to the current 40-bit address-space extensions touted in this article from The Register.

    Block diagram of the Tilera TILEPro64 (image via Wikipedia)
  • How Google Spawned The 384-Chip Server | Wired Enterprise | Wired.com

    SeaMicro’s latest server includes 384 Intel Atom chips, and each chip has two “cores,” which are essentially processors unto themselves. This means the machine can handle 768 tasks at once, and if you’re running software suited to this massively parallel setup, you can indeed save power and space.

    via How Google Spawned The 384-Chip Server | Wired Enterprise | Wired.com.


    Great article from Wired.com on SeaMicro and the two principal minds behind its formation. Both were quite impressed with Google's data center infrastructure when they each got to visit a Google data center. But rather than just sit back and gawk, they decided to take action and borrow, nay steal, some of the interesting ideas Google's engineers adopted early on. The typical naysayers, however, pull a page out of the Google white paper arguing against SeaMicro and the large number of smaller, lower-powered cores used in the SM10000 product.

    SeaMicro SM10000 (image by blogeee.net via Flickr)

    But nothing speaks of success more than product sales, and SeaMicro is selling its product into data centers. While it may not achieve the level of commerce reached by Apple Inc., it's a good start. What still needs to be done is more benchmarks and real-world comparisons that reproduce or negate the results of Google's white paper promoting its choice of off-the-shelf commodity Intel chips. Google is adamant that higher-clock-speed 'server' chips on single motherboards, connected to one another in large quantity, are the best way to go. However, the two guys who started SeaMicro insist that while Google's choice makes perfect sense for Google, no one else is quite like Google in their compute infrastructure requirements; nobody else has such a large enterprise or operates at that scale (except maybe Facebook, and possibly Amazon). So maybe there is a market at the middle and lower end of the data center business? Every data center's needs will be different, especially when it comes to available space, available power, and cooling restrictions for a given application. And SeaMicro might be the secret weapon for shops constrained by all three: power, cooling, and space.

    *UPDATE: Just saw this flash through my Google Reader blogroll this past Wednesday: SeaMicro is now selling an Intel Xeon-based server. I guess the market for larger numbers of lower-power chips just isn't strong enough to sustain a business. Sadly this makes all the wonder and speculation surrounding the SM10000 seem kind of moot now. But hopefully there's enough intellectual property and patents in the original design to keep the idea going for a while. SeaMicro does have quite a head start over competitors like Tilera, Calxeda, and Applied Micro, and if it can help finance further development of Atom-based servers by selling a few Xeons along the way, all the better.

  • U.S. Requests for Google User Data Spike 29 Percent in Six Months | Threat Level | Wired.com


    The number of U.S. government requests for data on Google users for use in criminal investigations rose 29 percent in the last six months, according to data released by the search giant Monday.

    via U.S. Requests for Google User Data Spike 29 Percent in Six Months | Threat Level | Wired.com.

    Not good news, imho. The reason is the mission creep and abuse that come with absolute power in the form of a National Security Letter. The other part of the equation is that Google's business model runs opposite to the idea of protecting people's information. If you disagree, I ask that you read this blog post from Christopher Soghoian, where he details just what it is Google does when it keeps all your data unencrypted in its data centers. In order to sell AdWords and serve advertisements to you, Google needs to keep everything open and unencrypted. They aren't exactly casual in their stewardship of your data, but they do respond to law enforcement requests for customer data. To quote Soghoian at the end of his blog entry:

    "The end result is that law enforcement agencies can, and regularly do request user data from the company — requests that would lead to nothing if the company put user security and privacy first."

    And that indeed is the moral of the story. Which leaves everyone asking: what's the alternative? Earlier in the same story the blame is placed squarely on end-users for not protecting themselves. Encryption tools for email and personal documents have been around for a long time, and there are often commercial products available to help achieve some level of privacy, even for so-called cloud-hosted data. But the friction points will always be familiarity, ease of use, and cost; until those are addressed, no privacy tool will be as widely adopted as webmail has been since it displaced desktop email clients like Eudora.

    So if you really have concerns, take action; don't wait for Google to act to defend your rights. Encrypt your email and your documents, and make Google one bit less culpable for any law enforcement requests that may or may not include your personal data.
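
    For the documents side, here is a minimal sketch of that advice using the third-party Python 'cryptography' package: encrypt the file yourself before it ever reaches a cloud service, so a request to the provider yields only ciphertext. This shows symmetric encryption for simplicity; encrypted email to other people needs public-key tools like PGP/GPG.

        # Minimal sketch: encrypt a document before any cloud service
        # sees it. Requires the 'cryptography' package
        # (pip install cryptography).

        from cryptography.fernet import Fernet

        key = Fernet.generate_key()   # keep secret, and NOT in the cloud
        fernet = Fernet(key)

        plaintext = b"My private notes destined for a cloud drive."
        ciphertext = fernet.encrypt(plaintext)   # safe to upload

        assert fernet.decrypt(ciphertext) == plaintext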

  • Google confirms Maps with local map downloads as iOS lags | Electronista


    Google Maps gets map downloads in Labs beta

    After a brief unofficial discovery, Google on Thursday confirmed that Google Maps 5.7 has the first experimental support for local map downloads.

    via Google confirms Maps with local map downloads as iOS lags | Electronista.

    Google Maps for Android is starting to show a level of maturity previously seen only on dedicated GPS units. True, there is still no offline routing feature (you need access to Google's servers for that functionality), but you at least get a downloaded map you can zoom in and out of without incurring heavy data charges. Overseas, you can rack up big charges navigating live maps in the Google Maps app on Android; that is now partially solved by downloading in advance the immediate area you will be visiting (within a few miles' radius). It's an incremental improvement to be sure, and it makes Android phones a little more self-sufficient without making you regret the data charges.
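
    To illustrate what 'downloading the immediate area in advance' amounts to, here is the standard web-Mercator tile arithmetic for enumerating the tiles around a point. This is the common slippy-map scheme, not necessarily what Google's new vector format does internally.

        # Standard web-Mercator tile math; illustrates the selection
        # arithmetic behind pre-fetching a small radius around a point.

        import math


        def deg_to_tile(lat: float, lon: float, zoom: int):
            """Convert coordinates to slippy-map tile indices."""
            n = 2 ** zoom
            x = int((lon + 180.0) / 360.0 * n)
            lat_rad = math.radians(lat)
            y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
            return x, y


        def tiles_around(lat: float, lon: float, zoom: int, radius: int = 2):
            """Enumerate the block of tiles within `radius` of a point."""
            cx, cy = deg_to_tile(lat, lon, zoom)
            for x in range(cx - radius, cx + radius + 1):
                for y in range(cy - radius, cy + radius + 1):
                    yield zoom, x, y


        # 5x5 block of zoom-14 tiles centered on Salzburg's Old Town:
        print(len(list(tiles_around(47.8, 13.045, 14))))  # 25 tiles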

    Apple, on the other hand, is behind. They are largely letting third-party GPS development go to folks like Navigon and TomTom, who both charge somewhat hefty fees to license their downloadable content. Apple's Maps app doesn't compare to Navigon or TomTom, much less Google, for actual usefulness in a wide range of situations. And Apple isn't currently using the downloadable vector-based maps introduced with this revision of Google Maps for Android (version 5.7), so it will keep struggling with large JPEG images as you pan and scan around the map to find your location.

  • Kim Cameron returns to Microsoft as indie ID expert • The Register

    Cameron said in an interview posted on the ID conference's website last month that he was disappointed about the lack of an industry advocate championing what he has dubbed "user-centric identity", which is about keeping the various bits of an individual's online life totally separated.

    via Kim Cameron returns to Microsoft as indie ID expert • The Register.

    CRM, meet VRM: we want our identity separated. This is one of the goals of Vendor Relationship Management, as opposed to "Customer Relationship Management". I want to share a well-defined set of details with Windows Live, Facebook, Twitter, and Google. Instead, I exist as separate entities that each service then tries to aggregate and profile, to learn more about what I do outside their respective web apps. So if someone can champion my ability to control what I share with which online service, all the better. If Microsoft understands this, it's possible someone like Kim Cameron will be able to accomplish some big things with Windows Live ID logins and profiles. Otherwise, this is just another attempt to capture web traffic into a commercial, private intraweb; I count Apple, Facebook, and Google as private-intraweb competitors.
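
    A toy model of what user-centric identity would mean in practice: the user holds one profile and decides which minimal claim set each service gets, rather than each service aggregating its own shadow profile. The data structures here are invented purely for illustration.

        # Toy model of user-centric identity: per-service disclosure of
        # a single user-held profile. All data is invented.

        FULL_PROFILE = {
            "name": "J. Smith",
            "email": "jsmith@example.com",
            "birthdate": "1970-01-01",
            "location": "Portland, OR",
            "interests": ["augmented reality", "parallel computing"],
        }

        # The user, not the vendor, draws the lines.
        DISCLOSURE = {
            "facebook": {"name", "interests"},
            "twitter": {"name"},
            "google": {"email"},
        }


        def claims_for(service: str) -> dict:
            allowed = DISCLOSURE.get(service, set())
            return {k: v for k, v in FULL_PROFILE.items() if k in allowed}


        print(claims_for("twitter"))   # {'name': 'J. Smith'}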

  • ARM server hero Calxeda lines up software super friends • The Register

    Calxeda company logo (maker of the massively parallel ARM-based server)

    via ARM server hero Calxeda lines up software super friends • The Register.

    Calxeda is in the news again this week with some more announcements regarding its plans. Thinking back to the last article I posted on Calxeda: this company boasts an ARM-based server packing 120 CPUs (each with four cores) into a 2U-high rack enclosure (making it just 3-1/2″ tall *see note). With every evolution in hardware one needs an equal if not greater revolution in software, which is the point of Calxeda's announcement of its new software partners.

    It's mostly cloud apps, cloud provisioning, and cloud management types of vendors. With the partnership, each company gets early access to the hardware Calxeda is promising to design, prototype, and eventually manufacture. Both Google and Intel have pooh-poohed the idea of using "wimpy processors" on massively parallel workloads, claiming faster serialized workloads are still easier to manage with existing software and programming techniques. Yet for all the years Intel has complained about the programming tools, it has still gone the multi-core/multi-thread route, hoping to continue its domination by offering up ever 'newer' and higher-performing products. So while Intel bad-mouths parallelism on competing CPUs, it seems desperate to sell multi-core to willing customers year over year.

    And as power-efficient as those cores may be, Intel's old culture of maximum performance for the money still holds sway. Even the most recent ultra-low-voltage i-series CPUs still hit about 17 watts for chips clocking in around 1.8GHz (speed-boosting up to 2.9GHz in a pinch). Even if Intel allowed these chips to be installed in servers, we're still talking about a lot of Thermal Design Power (TDP) that has to be chilled to keep things running.
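
    A back-of-envelope watts-per-core comparison shows why this matters at rack scale. The 17W/1.8GHz figure comes from the ULV parts mentioned above; the core counts and the ARM node wattage are my assumptions for illustration, not vendor-published numbers.

        # Back-of-envelope TDP-per-core comparison. Wattages and core
        # counts below are illustrative assumptions, not vendor specs.

        ULV_I_SERIES_TDP_W = 17   # per the ~1.8GHz ULV parts cited above
        ULV_I_SERIES_CORES = 2    # typical dual-core ULV package (assumed)

        ARM_NODE_TDP_W = 5        # assumed per quad-core ARM server node
        ARM_NODE_CORES = 4

        print(f"Intel ULV: {ULV_I_SERIES_TDP_W / ULV_I_SERIES_CORES:.1f} W/core")
        print(f"ARM node:  {ARM_NODE_TDP_W / ARM_NODE_CORES:.2f} W/core")
        # Multiply the difference by hundreds of cores per 2U box and
        # that is the heat that has to be chilled to keep things running.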

  • From Big Data to NoSQL: The ReadWriteWeb Guide to Data Terminology (Part 1)


    Big Data

    In short, big data simply means data sets that are large enough to be difficult to work with. Exactly how big is big is a matter of debate. Data sets that are multiple petabytes in size are generally considered big data (a petabyte is 1,024 terabytes). But the debate over the term doesn't stop there.

    via From Big Data to NoSQL: The ReadWriteWeb Guide to Data Terminology (Part 1).

    There are big doings inside and outside the data center these days. You cannot go a day without a cool new article about some project that's just been open-sourced by one of the departments inside the social networking giants, Hadoop being the biggest example. What, you ask, is Hadoop? It is a project Yahoo started after Google began spilling the beans on its two huge technological leaps in massively parallel databases and large-scale data processing. The first was BigTable, a huge distributed database that could be brought up on an inordinately large number of commodity servers and then ingest all the indexing data sent by Google's web bots as they found new websites. That's the database and ingestion point. The second is how the rankings and 'pertinence' of the indexed websites get calculated for PageRank: Google's invention for processing the collected data is called MapReduce, a way of pulling in, processing, and quickly sorting out the important, highly ranked websites. Yahoo read the white papers Google put out and subsequently created its own version of those technologies, which today powers the Yahoo! search engine. Having put this into production and realized its benefits, Yahoo turned it into an open source project, both to lower the threshold for people wanting to get into the Big Data industry and to get many programmers' eyes looking at the source code, adding features, packaging it, and, all-importantly, debugging what was already there. Hadoop is the name given to that Yahoo bag of software, and it is what a lot of people initially adopt when they are trying to do large-scale collection and analysis of Big Data.
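
    For anyone who has never seen the shape of it, here is a single-machine sketch of the MapReduce idea in plain Python. The real systems distribute the map, shuffle, and reduce phases across thousands of machines; the structure is the same.

        # Minimal single-machine sketch of MapReduce (word count).
        # Real systems distribute these phases across many machines.

        from collections import defaultdict
        from itertools import chain


        def map_phase(document: str):
            """Map: emit (key, value) pairs; here, (word, 1)."""
            for word in document.split():
                yield word.lower(), 1


        def reduce_phase(pairs):
            """Shuffle + reduce: group by key, combine the values."""
            groups = defaultdict(int)
            for key, value in pairs:
                groups[key] += value
            return dict(groups)


        crawled_pages = [
            "the web is big",
            "the web keeps growing",
        ]

        pairs = chain.from_iterable(map_phase(p) for p in crawled_pages)
        print(reduce_phase(pairs))
        # {'the': 2, 'web': 2, 'is': 1, 'big': 1, 'keeps': 1, 'growing': 1}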

    Another discovery along the way to the Big Data movement came from a parallel attempt to overcome the limitations of extending the schema of a typical database holding all the incoming indexed websites. Tables, rows, and Structured Query Language (SQL) have ruled the day since about 1977 or so, and for many kinds of tabular data there is no substitute. However, much of the data being stored now falls into the big amorphous mass of binary large objects (BLOBs) that can slow down a traditional database. So a non-SQL approach was adopted, and there are parts of BigTable and Hadoop that dump the unique keys and relational tables of SQL in order to get the data in and characterize it as quickly as possible, or better yet to re-characterize it by adding elements to the schema after the fact. Whatever you are doing, what you collect might not be structured or easily structurable, so you're going to need to play fast and loose with it, and you need a database equal to that task. Enter the NoSQL movement, to collect and analyze Big Data in its least structured form. So my recommendation to anyone trying to fit the square peg of relational databases into the round hole of their unstructured data is: give up. Go NoSQL and get to work.
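
    A tiny in-memory stand-in shows the schema flexibility being argued for: documents can arrive in whatever shape they have and gain fields after the fact, with no ALTER TABLE ceremony. Any real document store (CouchDB, MongoDB, and kin) elaborates on this same idea.

        # Toy illustration of schemaless, key-value document storage.
        # A stand-in for the idea, not any particular NoSQL product.

        store = {}

        # Ingest first, in whatever shape the data arrived:
        store["page:1"] = {"url": "http://example.com", "html_size": 5120}
        store["page:2"] = {"url": "http://example.org"}  # missing field? fine

        # Re-characterize later by adding elements to the 'schema':
        store["page:1"]["pagerank"] = 0.83

        for key, doc in store.items():
            print(key, doc.get("pagerank", "not yet ranked"))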

    This first article from ReadWriteWeb is good in that it lays the foundation for what the relational database universe looks like and how you can manipulate it. Having established what is, future articles will look at the quick, dirty workarounds and one-off projects people have come up with to fit their needs, and at which 'works for me' solutions have been turned into bigger open source projects that 'work for others', since that is where each of these technologies will really differentiate itself. Ease of use and a low threshold to entry will be the deciding factors in many people's adoption of a NoSQL database, I'm sure.

  • SPDY: An experimental protocol for a faster web – The Chromium Projects


    As part of the “Let’s make the web faster” initiative, we are experimenting with alternative protocols to help reduce the latency of web pages. One of these experiments is SPDY (pronounced “SPeeDY”), an application-layer protocol for transporting content over the web, designed specifically for minimal latency.  In addition to a specification of the protocol, we have developed a SPDY-enabled Google Chrome browser and open-source web server. In lab tests, we have compared the performance of these applications over HTTP and SPDY, and have observed up to 64% reductions in page load times in SPDY. We hope to engage the open source community to contribute ideas, feedback, code, and test results, to make SPDY the next-generation application protocol for a faster web.

    via SPDY: An experimental protocol for a faster web – The Chromium Projects.

    Google wants the World Wide Web to go faster; I think we would all like that as well. But what kind of heavy lifting is it going to take? The transition from the ARPANET's original protocol to TCP/IP took a very long time and required some heavy-handed shoving to accomplish the cutover in 1983. We can all thank Vint Cerf for making that happen so that we could continue to grow and evolve as an online species (tip of the hat). But now what? There's been a move to evolve from IP version 4 to version 6 to accommodate the increase in the number of networked devices, but speed really wasn't a consideration in that revision. I don't know how this project integrates with IPv6, but I hope it can be pursued on a parallel course with that big migration.
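
    For the curious, here is roughly how SPDY frames its control messages, per my reading of the draft spec: a control bit, a 15-bit version, a 16-bit frame type, then 8 flag bits and a 24-bit length. Multiplexing many streams as frames like these over one connection is where the latency savings come from.

        # Sketch of a SPDY control-frame header per the draft spec:
        # |C| version (15) | type (16) | flags (8) | length (24) |

        import struct


        def control_frame_header(version: int, frame_type: int,
                                 flags: int, length: int) -> bytes:
            first16 = 0x8000 | (version & 0x7FFF)  # control bit + version
            last32 = (flags << 24) | (length & 0xFFFFFF)
            return struct.pack(">HHI", first16, frame_type, last32)


        SYN_STREAM = 1  # frame type that opens a new multiplexed stream
        header = control_frame_header(version=2, frame_type=SYN_STREAM,
                                      flags=0, length=10)
        print(header.hex())  # 800200010000000a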

    The worst thing that could happen would be to create another Facebook/Twitter/Apple Store/Google/AOL cul-de-sac that only benefits the account holders loyal to Google. Yes, it would be nice if Google Docs and all the other attendant services provided via Google got onboard the SPDY accelerator train; I would stand to benefit. But things like this should be pushed further up into the wider Internet so that everyone, everywhere gets the same benefits. Otherwise this is just an attempt to steal away user accounts and create churn in competitors' account databases.