Carpet Bomberz Inc.

Scouring the technology news sites every day

Archive for the ‘google’ Category

Jon Udell on filter failure

Jon Udell (Photo credit: Wikipedia)

http://blog.jonudell.net/2014/01/26/its-time-to-engineer-some-filter-failure/

Jon’s article points out his experience of the erosion of serendipity, or at least of exposure to opposing viewpoints, that social media (somewhat accidentally) enforces. I couldn’t agree more. One of the big promises of the Internet was that it was unimaginably vast and continuing to grow. The other big promise was that it was open in the way people could participate. There were no diktats or prescribed methods per se, just etiquette at best. There were FAQs to guide us, and rules of thumb to prevent us from embarrassing ourselves. But the Internet was something so vast one could never know or see everything that was out there, good or bad.

But like the Wild West, search engines began fencing in the old prairie, at once letting us get to the good stuff and waste less time finding it. Therein lies the bargain of the “filter”: giving up control to an authority to help you do something with data or information. All the electrons/photons whizzing back and forth on the series of tubes, existing all at once, available (more or less) all at once. But now, with social networks, like AOL before them, we suffer from the side effects of the filter.

I remember being an AOL member, finally caving in and installing the app from one of the free floppy disks I got in the mail at least once a week. I registered my credit card for the first free 20 hours (can you imagine?). And just like people who ‘try’ Netflix, I never unregistered. I lazily stayed the course and tried to get my money’s worth by spending more time online. At the same time, ISPs, small mom-and-pop type shops, were renting out parts of a fractional T-1 leased line they owned, putting up modem pools and selling access to the “Internet”. Nobody knew why you would want to do that with all teh kewl thingz one could do on AOL: shopping, chat rooms, news, stock quotes. It was ‘like’ the Internet. But not open and free and limitless like the Internet. And that’s where the failure begins to occur.

AOL had to police its population and enforce some codes of conduct. They could kick you off, stop accepting your credit card payments. One could not be kicked off the ‘Internet’ in the same way, especially in those early days. But getting back to Jon’s point about filters that fail and allow you to see the whole world, discover an opposing viewpoint, or better, multiple opposing viewpoints: that is the promise of the Internet, and we’re seeing less and less of it as we corral ourselves into our favorite brand-name social networking community. I skipped MySpace, but I did jump on Flickr, and eventually Facebook. And in so doing I gave up a little of that wildcat freedom and frontier-like experience of dialing up over a PPP or SLIP connection to a modem pool, doing a search first on Yahoo, then AltaVista, and then Google to find the important stuff.


Written by carpetbomberz

February 13, 2014 at 3:00 pm

Sebastian Thrun and Udacity

Sebastian Thrun (Photo credit: novas0x2a)


MOOCs may not be the future of higher education. But they will be the future of corporate training. You can bet money on that; in fact, money IS being bet on that right now. Udacity has gone from attempting to create a paradigm-shifting option for higher ed to a more conventional platform for conducting training for employees of corporations willing to pay to host their courses on it.

Now that’s a change we can all believe in, as trainers and itinerant consultants can sit at their home base and run sessions synchronously or asynchronously as time allows or demands change. Yes, hands-on training may be the best, but the best training is the training you can actually provision, pay for, and get people to attend. Otherwise it’s all an exercise in human potential waiting to be tapped, or, more often, an opportunity lost outright. Just-in-time training? Hardly. Cost-effective training provided in a plentiful manner? Absolutely yes.

Written by Eric Likness

December 9, 2013 at 3:00 pm

Apple, Google Just Killed Portable GPS Devices | Autopia | Wired.com

Note: this is a draft of an article I wrote back in June, when Apple announced it was going to favor its own Maps app over Google Maps and take G-Maps out of the App Store altogether. This blog went on hiatus just two weeks after that, and a whirlwind of staff changes occurred at Apple as a result of the debacle of the iOS Maps product. Top people have been let go, not the least of whom was Scott Forstall, in some people’s view the heir apparent to Steve Jobs. He was not popular, very much a jerk, and when asked by Tim Cook to co-sign the mea culpa Apple put out over the lack of performance and quality of iOS Maps, Scott wouldn’t sign it. So goodbye Scott, hello Google Maps. Somehow Google and Apple are in a period of detente over maps, and Google Maps has now returned to the App Store. Who knew so much could happen in six months, right?

“We think that there is a market for smartphone navigation apps, PNDs [Personal Navigation Devices] and in-dash navigation systems as each of these solutions has their own advantages and use case limitations and ultimately it’s up to the consumer to decide what they prefer,” Garmin told Wired in a statement.

via Apple, Google Just Killed Portable GPS Devices | Autopia | Wired.com.

That’s right: mapping and navigation are just one more app in a universe of software you can run on your latest-generation iPod Touch or iPhone. I suspect that Maps will only be available on the iPhone, as that was a requirement previously placed on the first-gen Maps app on iOS. It would be nice if there were a lower-threshold entry point for participation in the Apple Maps universe.

But I do hear one or two criticisms regarding Apple’s attempt to go its own way. Google has a technology and data-set lead (you know, all those cars driving around and photographing?). Apple has to buy that data from others; it isn’t going to start from scratch and attempt to re-create Google’s Street View data set. Which means Street View won’t be something Maps has as a feature for quite some time. Android’s own Google Maps app includes turn-by-turn navigation AND Street View built right in. It’s just there. How cool is that? You get the same experience on the mobile device as the one you get working in a web browser on a desktop computer.

In this battle between Google and Apple, the pure-play personal navigation device (PND) manufacturers are losing share. I glibly suggested in a tweet yesterday that Garmin needs to partner up with Apple and help out with its POI and map datasets so that potentially both can benefit. It would be cool if a partnership could be struck that allowed Apple to have a feature that didn’t necessarily steal market share from the PNDs, but could somehow raise all boats equally. Maybe a partnership to create a Street View-like add-on for everyone’s mapping datasets would be a good start. That would help level the playing field between Google and the rest of the world.

Written by Eric Likness

December 15, 2012 at 12:22 pm

Posted in google, gpu, mobile, navigation, technology


Google X founder Thrun demonstrates Project Glass on TV show | Electronista

Sebastian Thrun, Associate Professor of Computer Science at Stanford University. (Photo credit: Wikipedia)

Google X (formerly Labs) founder Sebastian Thrun debuted a real-world use of his latest endeavor, Project Glass, during an interview on the syndicated Charlie Rose show which aired yesterday, taking a picture of the host and then posting it to Google+, the company’s social network. Thrun appeared to be able to take the picture through tapping the unit, and posting it online via a pair of nods, though the project is still at the prototype stage at this point.

via Google X founder Thrun demonstrates Project Glass on TV show | Electronista.

You may remember Sebastian Thrun the way I do. He was spotlighted a few times on the PBS TV series NOVA in its coverage of the DARPA Grand Challenge follow-up in 2005. That was the year Carnegie Mellon University battled Stanford University in a race of driverless vehicles in the desert. The previous year CMU was the favorite to win, but their vehicle didn’t finish the race. By the following year’s competition, the stakes were much higher. Stanford started its effort in the summer of 2004, just months after the March Grand Challenge race. By October 2005 the second race was held, with CMU and Stanford battling it out. Sebastian Thrun was the head of the Stanford team, and had previously been at CMU as a colleague of the Carnegie race team head, Red Whittaker. In 2001 Thrun took a sabbatical year from CMU and spent it at Stanford. Eventually Thrun left Carnegie Mellon altogether and moved to Stanford in July 2003.

Thrun also took a graduate student of his and Red Whittaker’s with him to Stanford, Michael Montemerlo. That combination of CMU experience, with a grad student to boot, helped accelerate the pace at which Stanley, the driverless vehicle, was developed to compete in October 2005. Now move forward to another academic sabbatical, this time from Stanford to Google Inc. Thrun took a group of students with him to work on Google Street View. Eventually this led to another driverless car, funded completely internally by Google. Thrun’s accomplishments have continued to accrue at regular intervals, so much so that he has now given up his tenure at Stanford to join Google as a kind of entrepreneurial research scientist helping head up the Google X Labs. The X Labs is a kind of internal skunkworks that Google funds to work on various and sundry technologies, including the Google driverless car. Add to this Sebastian Thrun’s other big announcement this year, an open education initiative titled Udacity (attempting to ‘change’ the paradigm of college education). The list, as you see, goes on and on.

So where does that put the Google Project Glass experiment? Sergey Brin attempted to show off a prototype of the system at a party very recently. Now Sebastian Thrun has shown it off as well. Google Project Glass is a prototype, as most online websites have reported, and Sebastian Thrun’s interview on Charlie Rose attempted to demo what the prototype is able to do today. According to the article quoted at the top of this blogpost, Google Glass can respond to gestures and to voice (though voice was not demonstrated). Questions still remain as to what is included in this package to make it all work. Yes, the glasses do appear ‘self-contained’, but a wireless connection (as pointed out by Mashable.com) would not be visible to anyone not specifically shown all the components that make it go. That little bit of visual misdirection (like a magician’s) would lead one to believe that everything resides in the glasses themselves. Well, so much the better then for Google to let everyone draw their own conclusions. As to the concept video of Google Glass, I’m still not convinced it’s the best way to interact with a device:

Project Glass: One day. . .

As the video shows, it’s centered on voice interaction, very much like Apple’s own Siri technology. And that, as you know, requires two things:

1. A specific iPhone that has a noise-cancelling microphone array

2. A broadband cellphone connection back to the Apple mothership data center in North Carolina to do the Speech-to-Text recognition and responses

So to an untrained observer the glasses appear self-contained, but to do the heavy lifting shown in the concept video, the Google Glasses are going to require two additional items:

1. A specific Android phone with the Google Glass spec’d microphone array and ARM chip inside

2. A broadband cellphone connection back to the Google motherships, wherever they may be, to do some amount of off-phone processing and, obviously, data retrievals for all the Google apps included.

It would be interesting to know what passes over the personal area network between the Google Glasses and the cellphone data uplink that a real set of glasses is going to require. The devil is in those details, and they will be the limiting factor on how inexpensively this product can be manufactured and sold.
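For a rough sense of scale, here is a back-of-the-envelope sketch of the audio traffic such a link would have to carry for off-device speech recognition. All of the figures (sample rate, codec bitrate) are my assumptions for illustration, not anything Google has published:

```python
# Back-of-the-envelope audio bandwidth for off-device speech
# recognition. All figures below are assumptions, not Google specs.

SAMPLE_RATE_HZ = 16_000   # 16 kHz is a typical speech-recognition rate
BIT_DEPTH = 16            # bits per sample
CHANNELS = 1              # a single mono microphone feed

raw_bps = SAMPLE_RATE_HZ * BIT_DEPTH * CHANNELS
print(f"Raw PCM stream: {raw_bps / 1000:.0f} kbit/s")   # 256 kbit/s

# A speech codec in the Speex/AMR class (assumed ~24 kbit/s here)
# would shrink that roughly 10x before it ever hits the radio.
codec_bps = 24_000
print(f"Compressed stream: {codec_bps / 1000:.0f} kbit/s")
print(f"Savings: {raw_bps / codec_bps:.1f}x")
</pre>
```

Either stream fits comfortably inside a Bluetooth-class personal area network; the real cost is likely keeping that radio and the cellular backhaul awake all day.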

Thomas Hawk’s photo of Sergey Brin wearing Google Glasses

Written by Eric Likness

May 10, 2012 at 3:00 pm

Google shows off Project Glass augmented reality specs • The Register

Thomas Hawk's picture of Sergey Brin wearing the prototype of Project Glass

But it is early days yet. Google has made it clear that this is only the initial stages of Project Glass and it is seeking feedback from the general public on what they want from these spectacles. While these kinds of heads-up displays are popular in films and fiction and dearly wanted by this hack, the poor sales of existing eye-level screens suggests a certain reluctance on the part of buyers.

via Google shows off Project Glass augmented reality specs • The Register.

The video of the Google Glass interface is kind of interesting and problematic at the same time. Stuff floats in and out of view, kind of like the organisms that live in the mucus of your eye. And the latency between when you see something and when you can issue a command gives interacting with it a halting, staccato cadence. It looks and feels like old-style voice recognition that needed discrete pauses to know when things ended. As a demo it’s interesting, but they should issue releases very quickly and get this thing up to speed as fast as they possibly can. And I don’t mean having Sergey Brin show up at a party wearing the thing. According to reports, the ‘backpack’ the glasses are tethered to is not small. Based on that description, I think Google has a long way to go yet.

http://my20percent.wordpress.com/2012/02/27/baseball-cap-head-up-displa/

And on the smaller-scale tinkerer front, this WordPress blogger fashioned an old-style ‘periscope’ using a cellphone, a mirror and half-mirrored sunglasses to get a cheaper Augmented Reality experience. The cellphone is an HTC unit strapped onto the brim of a baseball cap. The display is then reflected downwards through a hole cut in the brim, and then off a pair of sunglasses mounted at roughly a 45-degree angle. It’s cheap and it works, but I don’t know how good the voice activation is. Makes me wonder how well it might work with an iPhone Siri interface. The author even mentions that the HTC is a little heavy and an iPhone might work a little better. I wonder if it wouldn’t work better still if the ‘periscope’ mirror arrangement were scrapped altogether: just mount the phone flat onto the bill of the hat with the screen facing downward, so it reflects off the sunglasses directly. The number of reflecting surfaces would be reduced, the image would be brighter, etc. I noticed a lot of people have commented on this fellow’s blog, which might get some discussion brewing about the longer-term, value-add benefits of Augmented Reality. There is a killer app yet to be found, and even Google hasn’t captured the flag yet.
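If you want to gut-check the “fewer bounces means brighter” intuition, here is a toy estimate. Both reflectance figures are hypothetical stand-ins, not measurements of his rig:

```python
# Toy brightness estimate for the two optical paths, using assumed
# (hypothetical) reflectances: a first-surface mirror at ~0.90 and
# half-mirrored sunglasses at ~0.30.
MIRROR_R = 0.90
SUNGLASS_R = 0.30

periscope = MIRROR_R * SUNGLASS_R   # mirror bounce, then sunglasses bounce
direct = SUNGLASS_R                 # screen reflects off sunglasses only

print(f"Periscope path passes {periscope:.0%} of the screen's light")
print(f"Direct path passes {direct:.0%}, {direct / periscope:.2f}x brighter")
```

With a good mirror the gain is modest (about 10 percent here); the bigger practical win of the flat-mount idea is probably mechanical simplicity.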

This picture shows the Wikitude World Browser on the iPhone looking at the Old Town of Salzburg. Computer-generated information is drawn on top of the screen. This is an example of location-based Augmented Reality. (Photo credit: Wikipedia)

Written by Eric Likness

April 19, 2012 at 3:00 pm

Tilera | Wired Enterprise | Wired.com

Tilera’s roadmap calls for its next generation of processors, code-named Stratton, to be released in 2013. The product line will expand the number of processors in both directions, down to as few as four and up to as many as 200 cores. The company is going from a 40-nm to a 28-nm process, meaning they’re able to cram more circuits in a given area. The chip will have improvements to interfaces, memory, I/O and instruction set, and will have more cache memory.

via Tilera | Wired Enterprise | Wired.com.


I’m enjoying the survey of companies doing massively parallel, low-power computing products. Wired.com’s Enterprise section started last week with a look at SeaMicro and how its two principal founders got their start observing Google’s initial stabs at a warehouse-sized computer. Since that time things have fractured somewhat instead of coalescing, and now three big attempts are competing to deliver the low-power, massively parallel computer in a box. Tilera is a longer-term startup out of MIT, going back further than Calxeda or SeaMicro.

However, application of this technology has been completely dependent on the software. Whether OSes or applications, they all have to be constructed carefully to take full advantage of the Tile processor architecture. To their credit, Tilera has attempted to insulate application developers from some of the vagaries of the underlying chip by creating an OS that does the heavy lifting of queuing and scheduling. But still, there’s got to be a learning curve there, even if it isn’t quite as daunting as, say, the one facing folks who develop applications for the supercomputers at the National Labs here in the U.S. Suffice it to say it’s a non-trivial choice to adopt a Tilera CPU for a product or project you are working on. The people who need a Tilera-GX CPU for their app already know all they need to know about the chip in advance. It’s that kind of choice they are making.
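To make the “constructed carefully” point concrete, here is a toy sketch of the queue-and-workers decomposition that this class of many-core chip rewards. It is plain Python multiprocessing, not Tilera’s own C-based toolchain, so treat it purely as an illustration of the shape of the work:

```python
# A minimal sketch of explicit work-queue parallelism: a pile of
# independent work items fanned out to one worker per core. This is
# generic Python multiprocessing, NOT Tilera's SDK, but the software
# structure is the same idea in miniature.

from multiprocessing import Pool, cpu_count

def process_packet(packet: bytes) -> int:
    # Stand-in for per-item work (a trivial checksum here); on a tile
    # processor each core would run a loop like this independently.
    return sum(packet) & 0xFFFF

if __name__ == "__main__":
    packets = [bytes([i % 256] * 1500) for i in range(10_000)]
    with Pool(processes=cpu_count()) as pool:
        checksums = pool.map(process_packet, packets, chunksize=64)
    print(f"{len(checksums)} packets processed on {cpu_count()} cores")
```

The point of the exercise: unless your workload decomposes into independent items like this, 64 or 200 cores buy you very little, which is exactly why adopting such a chip is a deliberate, informed choice.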

I’m also relieved to know they are continuing development to shrink down the design rules. Intel, the leader in silicon semiconductor manufacturing, continues to shrink its design rules for development and manufacturing. We’re fast approaching 20nm-18nm production lines in both Oregon and Arizona, both Intel fabrication plants, and they’re not about to stop and take a breath. Companies like Tilera, Calxeda and SeaMicro need to do continuous development on their products to keep from being blindsided by Intel’s continuous product-development juggernaut. So Tilera is very wise to shrink its design rule from 40nm down to 28nm as fast as it can, and then get good yields on the production lines once they start sampling chips at this size.
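As a first-order sanity check on what a 40nm-to-28nm shrink buys (ignoring yield, power and layout realities), transistor density scales roughly with the inverse square of the feature size:

```python
# First-order density gain from a 40 nm -> 28 nm shrink: transistor
# area scales roughly with the square of the linear feature size.
old_nm, new_nm = 40, 28
gain = (old_nm / new_nm) ** 2
print(f"~{gain:.2f}x the circuits in the same area")   # ~2.04x
```

Roughly double the circuits in the same silicon area, which is exactly the headroom a small chip vendor needs to add cores and cache while holding the line on cost.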

*UPDATE: Just saw this run through my blogroll last week: Tilera has announced a new chip coming in March. Glad to see Tilera is still duking it out, battling for design wins with manufacturers selling into the data center. Larger memory addressing will help make the Tilera chips more competitive with commodity Intel hardware shops, and maybe we’ll see full 64-bit memory extensions at some point as a follow-on to the current 40-bit address-space extensions touted in this article from The Register.

Block diagram of the Tilera TILEPro64 (Image via Wikipedia)

Written by Eric Likness

February 6, 2012 at 3:00 pm

How Google Spawned The 384-Chip Server | Wired Enterprise | Wired.com

SeaMicro’s latest server includes 384 Intel Atom chips, and each chip has two “cores,” which are essentially processors unto themselves. This means the machine can handle 768 tasks at once, and if you’re running software suited to this massively parallel setup, you can indeed save power and space.

via How Google Spawned The 384-Chip Server | Wired Enterprise | Wired.com.


Great article from Wired.com on SeaMicro and the two principal minds behind its formation. Both of these fellows were quite impressed with Google’s data center infrastructure when they each got to visit a Google data center. But rather than just sit back and gawk, they decided to take action and borrow, nay steal, some of the interesting ideas the Google engineers adopted early on. However, the typical naysayers pull a page out of the Google white paper arguing against SeaMicro and the large number of smaller, lower-powered cores used in the SM10000 product.

SeaMicro SM10000 (Image by blogeee.net via Flickr)

But nothing speaks of success more than product sales, and SeaMicro is selling its product into data centers. While they may not achieve the level of commerce reached by Apple Inc., it’s a good start. What still needs to be done is more benchmarks and real-world comparisons that reproduce or negate the results of Google’s whitepaper promoting its choice of off-the-shelf commodity Intel chips. Google is adamant that higher-clock-speed ‘server’ chips attached to single motherboards connected to one another in large quantity is the best way to go. However, the two guys who started SeaMicro insist that while Google’s choice makes perfect sense for Google, NO ONE else is quite like Google in their compute infrastructure requirements. Nobody has such a large enterprise or operates at the scale Google requires (except for maybe Facebook, and possibly Amazon). So maybe there is an opening at the middle and lower end of the data-center market? Every data center’s needs will be different, especially when it comes to available space, available power and cooling restrictions for a given application. And SeaMicro might be the secret weapon for shops constrained by all three: power, cooling and space.

*UPDATE: Just saw this flash through my Google Reader blogroll this past Wednesday: SeaMicro is now selling an Intel Xeon-based server. I guess the market for large numbers of lower-power chips just isn’t strong enough to sustain a business. Sadly, this makes all the wonder and speculation surrounding the SM10000 seem kinda moot now. But hopefully there’s enough intellectual property and enough patents in the original design to keep the idea going for a while. SeaMicro does have quite a head start over competitors like Tilera, Calxeda and Applied Micro. And if they can help finance further development of Atom-based servers by selling a few Xeons along the way, all the better.

Written by Eric Likness

February 2, 2012 at 9:51 pm

U.S. Requests for Google User Data Spike 29 Percent in Six Months | Threat Level | Wired.com


The number of U.S. government requests for data on Google users for use in criminal investigations rose 29 percent in the last six months, according to data released by the search giant Monday.

via U.S. Requests for Google User Data Spike 29 Percent in Six Months | Threat Level | Wired.com.

Not good news, imho. The reason is the mission creep and the abuses that come with absolute power in the form of a National Security Letter. The other part of the equation is that Google’s business model runs opposite to the idea of protecting people’s information. If you disagree, I ask that you read this blog post from Christopher Soghoian, where he details exactly what Google does when it keeps all your data unencrypted in its data centers. In order to sell AdWords and serve advertisements to you, Google needs to keep everything open and unencrypted. They aren’t careless in their stewardship of your data, but they do respond to law enforcement requests for customer data. To quote Soghoian at the end of his blog entry:

“The end result is that law enforcement agencies can, and regularly do request user data from the company — requests that would lead to nothing if the company put user security and privacy first.”

And that indeed is the moral of the story. Which leaves everyone asking: what’s the alternative? Earlier in the same story the blame is placed squarely on the end user for not protecting themselves. Encryption tools for email and personal documents have been around for a long time, and there are often commercial products available to help accomplish some level of privacy even for so-called cloud-hosted data. But the friction points are always going to be familiarity, ease of use and cost, at least until such a product is as widely used and adopted as webmail has been since it supplanted desktop email clients like Eudora.

So if you really have concerns, take action; don’t wait for Google to act to defend your rights. Encrypt your email and your documents, and make Google one bit less culpable for any law enforcement requests that may or may not include your personal data.
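As a minimal sketch of what “encrypt before you upload” looks like in practice, here is one way to do it with the third-party Python `cryptography` package (`pip install cryptography`). The filename is just an example; real email privacy would more likely mean GPG/PGP with your correspondent’s public key:

```python
# Minimal sketch: encrypt a document locally before it ever reaches
# a cloud provider, using Fernet symmetric encryption from the
# `cryptography` package.

from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this key somewhere safe, offline
fernet = Fernet(key)

with open("my_document.txt", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("my_document.txt.enc", "wb") as f:
    f.write(ciphertext)          # this encrypted blob is what you upload

# Later, back on your own machine:
# plaintext = fernet.decrypt(ciphertext)
```

The provider only ever holds ciphertext, so a data request against them yields nothing readable, which is exactly Soghoian’s point.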

Written by Eric Likness

November 7, 2011 at 3:00 pm

Google confirms Maps with local map downloads as iOS lags | Electronista

A common message shown on TomTom OS (Image via Wikipedia)

Google Maps gets map downloads in Labs beta. After a brief unofficial discovery, Google on Thursday confirmed that Google Maps 5.7 has the first experimental support for local maps downloads.

via Google confirms Maps with local map downloads as iOS lags | Electronista.

Google Maps for Android is starting to show a level of maturity only seen on dedicated GPS units. True, there still is no offline routing feature (you need access to Google’s servers for that functionality), but you at least get a downloaded map that you can zoom in and out of without incurring heavy data charges. Overseas you may rack up big charges as you navigate live maps via the Google Maps app on Android; this is now partially solved by downloading in advance the immediate area you will be visiting (within a few miles’ radius). It’s an incremental improvement, to be sure, and it makes Android phones a little more self-sufficient without making you regret the data charges.
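Google hasn’t published the format of those offline downloads, but standard web-Mercator tile math gives a feel for what “grab the surrounding area in advance” means. This sketch uses the XYZ tile convention documented by OpenStreetMap, not Google’s own vector scheme, to enumerate the tiles a prefetch would cover:

```python
# Enumerate the map tiles covering an area around a point, using the
# standard web-Mercator XYZ tile convention (per OpenStreetMap docs).
# Google's actual offline download format is its own vector scheme;
# this just illustrates the bookkeeping behind a prefetch.

import math

def latlon_to_tile(lat: float, lon: float, zoom: int):
    """Convert WGS84 lat/lon to XYZ tile coordinates."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

def tiles_around(lat: float, lon: float, zoom: int, span: int = 2):
    """All tiles within `span` tiles of the center tile."""
    cx, cy = latlon_to_tile(lat, lon, zoom)
    return [(zoom, cx + dx, cy + dy)
            for dx in range(-span, span + 1)
            for dy in range(-span, span + 1)]

# e.g. downtown Salzburg at zoom 15 -> a 5x5 block of 25 tiles to prefetch
print(len(tiles_around(47.8, 13.04, zoom=15)))
```

Multiply that tile count across a dozen zoom levels and you can see why “a few miles’ radius” is the practical limit for a raster prefetch, and why vector maps are the smarter long-term answer.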

Apple, on the other hand, is behind. They are letting third-party GPS development on iOS go to folks like Navigon and TomTom, who both charge somewhat hefty fees for their downloadable content. Apple’s built-in Maps app doesn’t compare to Navigon or TomTom, much less Google Maps on Android, for actual usefulness in a wide range of situations. And Apple isn’t currently using the downloadable vector-based maps introduced with this revision of Google Maps for Android, version 5.7, so it will struggle with large JPEG images as you pan and scan around the map to find your location.

Written by Eric Likness

July 28, 2011 at 3:00 pm

Kim Cameron returns to Microsoft as indie ID expert • The Register

Cameron said in an interview posted on the ID conference’s website last month that he was disappointed about the lack of an industry advocate championing what he has dubbed “user-centric identity”, which is about keeping various bits of an individual’s online life totally separated.

via Kim Cameron returns to Microsoft as indie ID expert • The Register.

CRM, meet VRM. We want our identity separated: this is one of the goals of Vendor Relationship Management, as opposed to “Customer Relationship” Management. I want to share a set of very well-defined details with Windows Live, Facebook, Twitter and Google. But instead I exist as separate entities that they then try to aggregate and profile, to learn more beyond what I do on their respective web apps. So if someone can champion my ability to control what I share with which online service, all the better. If Microsoft understands this, it is possible someone like Kim Cameron will be able to accomplish some big things with Windows Live ID logins and profiles. Otherwise, this is just another attempt to capture web traffic into a commercial, private intraweb. I count Apple, Facebook and Google as private-intraweb competitors.
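For what “control what I share with which online service” could look like mechanically, here is a toy sketch of user-centric selective disclosure. All the services, field names and values are hypothetical:

```python
# Toy sketch of user-centric identity: one master profile, with an
# explicit per-service whitelist deciding which claims each relying
# party ever sees. The user, not the vendor, owns the policy.

MASTER_PROFILE = {
    "display_name": "Eric L.",
    "email": "eric@example.com",
    "birth_year": 1970,
    "city": "Rochester, NY",
    "interests": ["photography", "tech news"],
}

DISCLOSURE_POLICY = {
    "facebook.com": {"display_name", "interests"},
    "twitter.com": {"display_name"},
    "flickr.com": {"display_name", "city", "interests"},
}

def claims_for(service: str) -> dict:
    """Release only the fields the user has whitelisted for a service."""
    allowed = DISCLOSURE_POLICY.get(service, set())
    return {k: v for k, v in MASTER_PROFILE.items() if k in allowed}

print(claims_for("twitter.com"))   # {'display_name': 'Eric L.'}
```

Flip the ownership of that policy dictionary from the user to the vendor and you have today’s CRM world; keeping it on the user’s side is the whole VRM argument.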
