Carpet Bomberz Inc.

Scouring the technology news sites every day

Archive for the ‘google’ Category

Audrey Watters: The Future of Ed-Tech is a Reclamation Project #DLFAB

Audrey Watters Media Predicts 2011 (Photo credit: @Photo.)

We can reclaim the Web and more broadly ed-tech for teaching and learning. But we must reclaim control of the data, content, and knowledge we create. We are not resources to be mined. Learners do not enter our schools and our libraries to become products for the textbook industry and the testing industry and the technology industry and the ed-tech industry to profit from. 

via The Future of Ed-Tech is a Reclamation Project #DLFAB. (by Audrey Watters)

A really philosophical article about what it is Higher Ed is trying to do here. It's not just about student portfolios, it's Everything: the books you check out, the seminars you attend, the videos you watch, the notes you take, all the artifacts of learning. And currently they are all squirreled away and stashed inside data silos like Learning Management Systems.

The original World Wide Web was like the Wild, Wild West, an open frontier without visible limit. Cloud services and commercial offerings have fenced in the frontier in a series of waves of fashion. Whether it was AOL, Tripod.com, Geocities, Friendster, MySpace or Facebook, the web grew in the form of gated communities and cul-de-sacs for "members only". True, the democracy of it all was that membership was open and free; practically anyone could join. All you had to do was hand over control, the keys to YOUR data. That was the bargain: by giving up your privacy, you gained all the rewards of socializing with long-lost friends and acquaintances. From that little spark the surveillance and "data mining" operation hit full speed.

Reclaiming ownership of all this data, especially the part generated over one's lifetime of learning, is a worthy cause. Audrey Watters references Jon Udell for an example of the kind of data we would want to own, and limit access to, our whole lives. From the article:

Udell then imagines what it might mean to collect all of one’s important data from grade school, high school, college and work — to have the ability to turn this into a portfolio — for posterity, for personal reflection, and for professional display on the Web.

Indeed. And though this data may live on the Internet somewhere, access is restricted to those to whom we give explicit permission. That's in part a project unto itself: this mesh of data could be text or other data objects that might need to be translated, converted to future-readable formats so it doesn't grow old and obsolete in an abandoned file format. All of this stuff could be given a very fine level of access control: individuals you have approved could read parts and pieces, or maybe even be given wholesale access. You would make that decision, and maybe share just the absolute minimum necessary. So instead of handing over a portfolio of your whole educational career, you give out the relevant links and just those links. That's what Jon Udell is pursuing now through the Thali Project. Thali is a much more generalized way to share data from many devices, presented in a holistic, rationalized manner to whomever you define as a trusted peer. It's not just about educational portfolios, it's about sharing your data. But first and foremost you have to own the data, or attempt to reclaim it from the wilds and wilderness of the social media enterprise, the educational enterprise, all these folks who want to own your data while giving you free services in return.

Audrey uses the metaphor "Data is the new oil", and that is the heart of the problem. Given the free oil, those who invested in holding onto and storing it are loath to give it up. And like credit reporting agencies with their duplicate and sometimes incorrect datasets, those folks will give access to that unknown quantity to the highest bidder for whatever reason. Whether it's campaign staffers, private detectives or vengeful spouses doesn't matter, because they own the data and set the rules for how it is shared. However, in the future when we've all reclaimed ownership of our piece of the oil field, THEN we'll have something. And when it comes to the digital equivalent of the old manila folder, we too will truly own our education.

Written by Eric Likness

June 12, 2014 at 3:00 pm

Testing, Testing: How Google And Amazon Can Help Make Websites Rock Solid – ReadWrite

It’s not unprecedented: Google already offers a testing suite for Android apps, though that’s focused on making sure they run well on smartphones and tablets, not testing the cloud-based services they connect to. If Google added testing services for the websites and services those apps connect to, it would have an end-to-end lock on developing for both the Web and mobile.

via Testing, Testing: How Google And Amazon Can Help Make Websites Rock Solid – ReadWrite.

Load testing websites and web apps is a market whose time has come. Where I work we have a project group with a guy who manages an installation of Silk as a load tester. Behind that is a little farm of old Latitude E6400s that he manages from the Silk console, pointing them at whichever app is in development/QA/testing before it goes into production. Knowing there's potential for a cloud-based tool for this makes me very, very interested.

As outsourcing goes, the Software as a Service (SaaS), Platform as a Service (PaaS) and even Infrastructure as a Service (IaaS) categories are great as raw materials. But if there were just an app that I could log in to, spin up some VMs, install my load-test tool of choice and then manage them from my desktop, I would feel like I had accomplished something. Or failing that, even just a hosted toolkit for load testing with whatever tool du jour is already available (nothing is perfect that way) would be cool too. Better yet, if the tool were updated centrally whenever I needed to conduct a round of testing, it would take into account things like the Heartbleed bug in a timely fashion. That's the kind of benefit a cloud-based, centrally managed, centrally updated load-test service could provide.
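
To make the idea concrete, here is a minimal sketch of the kind of job such a service would run for you. The target URL, request count and concurrency are hypothetical placeholders; a real tool (Silk, or a hosted service) would add ramp-up profiles, distributed load generators and reporting on top of this.

```python
# Minimal concurrent HTTP load-test sketch (hypothetical target and settings).
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TARGET_URL = "https://staging.example.com/app"  # hypothetical app under test
TOTAL_REQUESTS = 200
CONCURRENCY = 20

def hit(_):
    """Issue one GET and return (success, elapsed seconds)."""
    start = time.perf_counter()
    try:
        resp = requests.get(TARGET_URL, timeout=10)
        ok = resp.status_code == 200
    except requests.RequestException:
        ok = False
    return ok, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(hit, range(TOTAL_REQUESTS)))

latencies = [t for ok, t in results if ok]
errors = sum(1 for ok, _ in results if not ok)
print(f"requests: {TOTAL_REQUESTS}, errors: {errors}")
if latencies:
    print(f"avg latency: {sum(latencies) / len(latencies):.3f}s, max: {max(latencies):.3f}s")
```

The appeal of the cloud version is that the fleet running this loop, and the tool itself, are someone else's problem to patch and scale.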

And now that Microsoft has just announced a partnership with Salesforce on its Azure cloud platform, things get even more interesting. Not only could you develop using an existing toolkit like Salesforce.com, you could also host it on more than one cloud platform (AWS or Azure) as your needs change. And I would hope this would include unit testing, load testing and the whole sweet suite of security auditing one would expect for a webapp (thereby helping prevent vulnerabilities like the Heartbleed OpenSSL bug).

Written by Eric Likness

June 2, 2014 at 3:00 pm

Posted in cloud, google, support

Microsoft Office applications barely used by many employees, new study shows – Techworld.com

The Microsoft Office Core Applications (Photo credit: Wikipedia)

After stripping out unnecessary Office licenses, organisations were left with a hybrid environment, part cloud, part desktop Office.

via Microsoft Office applications barely used by many employees, new study shows – Techworld.com.

The central IT outfit I work for is dumping as much on-premise Exchange mailbox hosting as it can. However, we are sticking with Outlook365 as provisioned by Microsoft (essentially an Outlook'd version of Hotmail). It has the calendar and global address list we have all come to rely on. But as this article details at length, for the rest of the Office suite people aren't creating as many documents as they once did. We're viewing them, yes, but we just aren't creating them.

I wonder how much of this is due in part to re-use, or to authorship shifting to much higher, top-level people. Your average admin assistant or even secretary doesn't draft anything dictated to them anymore. The top-level types now would generally be embarrassed to dictate something out to anyone. Plus the culture of secrecy necessitates more 1-to-1 style communications. And long-form writing? Who does that anymore? No one writes letters; they write brief emails, or even briefer texts, Tweets or Facebook updates. Everything is abbreviated to such a degree that you don't need a thesaurus, pagination, or any of the super-specialized doo-dads and add-ons we all begged M$ and Novell to add to their premiere word processors back in the day.

From an evolutionary standpoint, we could get by with the original text editors first made available on timesharing systems. I'm thinking of utilities like line editors (that's really a step backwards, so I'm being facetious here). The point I'm making is that we've gone through a very advanced stage in the evolution of our writing tool of choice, and it became a monopoly. WordPerfect lost out and fell by the wayside. Primary, middle and secondary schools across the U.S. adopted M$ Word. They made it a requirement. Every college freshman has been given discounts to further the loyalty to the Office suite. Now we don't write like we used to, much less read. What's the use of writing something pages long that no one will ever read? We've jumped the shark of long-form writing, and therefore the premiere app, the killer app for the desktop computer, is slowly receding behind us as we keep speeding ahead. Eventually we'll see it on the horizon, its sails the last visible part, then the crow's nest, then poof! It will disappear below the horizon line. We'll be left with our nostalgic memories of the first time we used MS Word.

Written by Eric Likness

May 19, 2014 at 3:00 pm

Posted in cloud, computers, google, wintel

Google Glass teardown puts rock-bottom price on hardware • The Register

Google Glass OOB Experience 27126 (Photo credit: tedeytan)

A teardown report on Google Glass is raising eyebrows over suggestions that the augmented reality headset costs as little as $80 to produce.

via Google Glass teardown puts rock-bottom price on hardware • The Register.

One more reason not to be a Glasshole: you don't want to be a sucker. Given what the Oculus Rift is being sold for versus Google Glass, one has to ask why Glass is so much more expensive. It doesn't do low-latency stereoscopic 3D. It doesn't come with eye adapters PROVIDED to match your eyeglass correction; Glass requires you to provide prescription lenses if you really need them. It doesn't have a large, full-color, high-res AMOLED display. So why $1,500 when the Rift is $350? And even the recently announced Epson Moverio is priced at $700.

These days, with the proliferation of teardown sites and the experts at iFixit and their partners at Chipworks, it's just a matter of time before someone writes up your Bill of Materials (BOM). Once that hits the Interwebs and gets communicated widely, all the business analysts and Wall Street hedge funders know how to predict the company's profit based on sales. If Google retails Glass at the same price as the development kits, it's going to be really difficult to compete for very long against lower-priced and more capable alternatives. I appreciate what Google's done making it lightweight and power-efficient, but it's still $80 in parts being sold for $1,500. That's the bottom line, that's the Bill of Materials.
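
Just as a rough back-of-the-envelope, treating the $80 teardown estimate as the whole bill of materials (it ignores R&D, assembly, licensing and support, so this is only a sketch of why analysts pounce on BOM numbers):

```python
# Rough margin arithmetic from the teardown figures quoted above.
# The $80 BOM excludes R&D, assembly, licensing and support costs.
bom_cost = 80.0        # estimated parts cost from the teardown
retail_price = 1500.0  # Explorer Edition price

gross_margin = (retail_price - bom_cost) / retail_price
markup = (retail_price - bom_cost) / bom_cost

print(f"gross margin on parts: {gross_margin:.1%}")  # ~94.7%
print(f"markup over parts cost: {markup:.0%}")       # ~1775%
```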

Written by Eric Likness

May 8, 2014 at 3:00 pm

Posted in google, wired culture

Epson Moverio BT-200 AR Glasses In Person, Hands On

Wikitude Augmented Reality SDK optimized for Epson Moverio BT-200 (Photo credit: WIKITUDE)

Even Moverio’s less powerful (compared to VR displays) head tracking would make something like Google Glass overheat, McCracken said, which is why Glass input is primarily voice command or a physical touch. McCracken, who has developed for Glass, said that more advanced uses can only be accomplished with something more powerful.

via Epson Moverio BT-200 AR Glasses In Person, Hands On.

Epson has swept in and gotten a head start on others in the smart glasses field. I think that with their full head-tracking system, and something like a Microsoft Xbox Kinect-style projector and receiver pointed outward wherever you are looking, it might be possible to get a very realistic "information overlay". Microsoft's Xbox Kinect has a 3D projector/scanner built in, which could potentially become another sensor built into the Epson glasses. The Augmented Reality apps on the Moverio only do edge detection to place the information overlay. If you had an additional 3D map (approximating shapes and depth as well) you might be able to correlate the two data feeds (edges and a 3D mesh) to get a really good informational overlay at close range, at normal arm's-length working distances.

Granted, the Kinect is rather large in comparison to the Epson Moverio glasses, and its resolution is geared for longer distances. At very short range the Xbox Kinect may not quite be what you're looking for to improve the informational overlay. But an Epson Moverio paired with a Kinect-like 3D projector/scanner could tie into the head tracking and allow a greater degree of accurate video overlay. Check out this video for a hack that uses the Kinect as a 3D scanner:

3D Scanning with an Xbox Kinect – YouTube
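
For a sense of what fusing those two feeds might look like, here is a minimal sketch in Python with OpenCV: detect edges in the color frame, then use a depth map to keep only edges within roughly arm's length as candidate anchor points for an overlay. The file names, the depth range, and the assumption that the depth image is already registered to the color frame are mine for illustration, not anything Epson or Microsoft ships.

```python
# Sketch: combine color-frame edges with a registered depth map to find
# close-range anchor points for an AR overlay (hypothetical input files).
import cv2
import numpy as np

color = cv2.imread("frame_color.png")                        # frame from the glasses camera
depth = cv2.imread("frame_depth.png", cv2.IMREAD_UNCHANGED)  # depth in mm, assumed aligned to color

gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)          # 2D edges only, no depth information

# Keep edge pixels that fall within roughly arm's length (0.2 m to 0.8 m).
near_mask = (depth > 200) & (depth < 800)
anchors = np.logical_and(edges > 0, near_mask)

ys, xs = np.nonzero(anchors)
print(f"{len(xs)} close-range edge pixels available as overlay anchors")
```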

Also, as the pull-quote mentions, Epson has done an interesting cost-benefit analysis and decided a smartphone-level CPU and motherboard were absolutely necessary to make the Moverio work. No doubt the light weight and miniature size of cellphone hardware have by themselves revolutionized the mobile phone industry. Now it's time to leverage all that work and see what "else" the super-power-efficient mobile CPUs can do along with their mobile GPU counterparts. I think this sudden announcement by Epson is going to cause a tidal wave of product announcements similar to the wave following the iPhone introduction in 2007. Prior to that, BlackBerry and its pseudo-smartphone were the monopoly holders in the category they created (the mobile phone as email browser). Now Epson is trying to show there's a much wider application of the technology outside of Google Glass and the Oculus Rift.

Written by Eric Likness

May 8, 2014 at 3:00 pm

Posted in google, mobile

Jon Udell on filter failure

Jon Udell (Photo credit: Wikipedia)

http://blog.jonudell.net/2014/01/26/its-time-to-engineer-some-filter-failure/

Jon's article points out his experience of the erosion of serendipity, or at least of opposing viewpoints, that social media enforces (somewhat) accidentally. I couldn't agree more. One of the big promises of the Internet was that it was unimaginably vast and continuing to grow. The other big promise was that it was open in the way people could participate. There were no diktats or prescribed methods per se, just etiquette at best. There were FAQs to guide us and rules of thumb to prevent us from embarrassing ourselves. But the Internet was something so vast one could never know or see everything that was out there, good or bad.

But like the Wild West, the old prairie got fenced in by search engines, at once allowing us to get to the good stuff and to waste less time finding the important stuff. But therein lies the bargain of the "filter": giving up control to an authority to help you do something with data or information. All the electrons/photons whizzing back and forth on the series of tubes exist all at once, available (more or less) all at once. But now with social networks, like AOL before them, we suffer from the side effects of the filter.

I remember being an AOL member, finally caving in and installing the app from one of the free floppy disks I would get in the mail at least once a week. I registered my credit card for the first free 20 hours (can you imagine?). And just like people who 'try' Netflix, I never unregistered. I lazily stayed the course and tried to get my money's worth by spending more time online. At the same time ISPs, small mom-and-pop shops, were renting out parts of a fractional T-1 leased line they owned, putting up modem pools and starting to sell access to the "Internet". Nobody knew why you would want to do that with all teh kewl thingz one could do on AOL. Shopping, chat rooms, news, stock quotes. It was 'like' the Internet. But not open and free and limitless like the Internet. And that's where the failure begins to occur.

AOL had to police its population, enforce some codes of conduct. They could kick you off, stop accepting your credit card payments. One could not be kicked off the 'Internet' in the same way, especially in those early days. But getting back to Jon's point about filters that fail and allow you to see the whole world, to discover an opposing viewpoint or, better, multiple opposing viewpoints: that is the promise of the Internet, and we're seeing less and less of it as we corral ourselves into our favorite brand-name social networking community. I skipped MySpace, but I did jump on Flickr, and eventually Facebook. And in so doing I gave up a little of that wildcat freedom and frontier-like experience of dialing up over a PPP or SLIP connection to a modem pool, then searching first on Yahoo, then AltaVista, and then Google to find the important stuff.

Written by carpetbomberz

February 13, 2014 at 3:00 pm

Sebastian Thrun and Udacity

Sebastian Thrun (Photo credit: novas0x2a)

Sebastian Thrun and Udacity

MOOCs may not be the future of higher education. But they will be the future of corporate training. You can bet money on that. In fact, money IS being bet on that right now. Udacity has gone from attempting to create a paradigm-shifting option for higher ed to a more conventional platform for conducting training for employees of corporations willing to pay money to host courses on its platform.

Now that's a change we can all believe in, as trainers and traveling, itinerant consultants can sit at their home base and run the sessions synchronously or asynchronously as time allows or demands change. Yes, hands-on training may be the best, but the best training is the training you can actually provision, pay for and get people to attend. Otherwise it's all an exercise in human potential waiting to be tapped, or more often in opportunities lost outright. Just-in-time training? Hardly. How about cost-effective training provided in a plentiful manner? Absolutely yes.

Written by Eric Likness

December 9, 2013 at 3:00 pm

Apple, Google Just Killed Portable GPS Devices | Autopia | Wired.com

Note: this is a draft of an article I wrote back in June, when Apple announced it was going to favor its own Maps app over Google Maps and take Google Maps out of the App Store altogether. This blog went on hiatus just two weeks after that. A whirlwind of staff changes occurred at Apple as a result of the iOS Maps debacle. Top people have been let go, not least the man some viewed as the heir apparent to Steve Jobs, Scott Forstall. He was not popular, very much a jerk, and when asked by Tim Cook to co-sign the mea culpa Apple put out over its embarrassment about the lack of performance and quality of iOS Maps, Scott wouldn't sign it. So goodbye Scott, hello Google Maps. Somehow Google and Apple are in a period of detente over Maps, and Google Maps has now returned to the App Store. Who knew so much could happen in six months, right?

Garmin told Wired in a statement: "We think that there is a market for smartphone navigation apps, PNDs [Personal Navigation Devices] and in-dash navigation systems as each of these solutions has their own advantages and use case limitations and ultimately it's up to the consumer to decide what they prefer."

via Apple, Google Just Killed Portable GPS Devices | Autopia | Wired.com.

That's right: mapping and navigation are just one more app in a universe of software you can run on your latest-generation iPod Touch or iPhone. I suspect that Maps will only be available on the iPhone, as that was a requirement previously placed on the first-gen Maps app on iOS. It would be nice if there were a lower-threshold entry point for participation in the Apple Maps universe.

But I do hear one or two criticisms regarding Apple's attempt to go its own way. Google has a big technology and dataset lead (you know, all those cars driving around and photographing?). Apple has to buy that data from others; it isn't going to start from scratch and attempt to re-create Google's Street View dataset, which means Street View won't be something Apple's Maps has as a feature for quite some time. Android's own Google Maps app includes turn-by-turn navigation AND Street View built right in. It's just there. How cool is that? You get the same experience on the mobile device as the one you get working in a web browser on a desktop computer.

In this battle between Google and Apple, the pure-play personal navigation device (PND) manufacturers are losing share. I glibly suggested in a tweet yesterday that Garmin needs to partner up with Apple and help out with its POI and map datasets so that potentially both can benefit. It would be cool if a partnership could be struck that gave Apple a feature that didn't necessarily steal market share from the PNDs, but could somehow raise all boats equally. Maybe a partnership to create a Street View-like add-on for everyone's mapping datasets would be a good start. That would help level the playing field between Google and the rest of the world.

Written by Eric Likness

December 15, 2012 at 12:22 pm

Posted in google, gpu, mobile, navigation, technology

Google X founder Thrun demonstrates Project Glass on TV show | Electronista

Sebastian Thrun, Associate Professor of Computer Science at Stanford University. (Photo credit: Wikipedia)

Google X (formerly Labs) founder Sebastian Thrun debuted a real-world use of his latest endeavor Project Glass during an interview on the syndicated Charlie Rose show which aired yesterday, taking a picture of the host and then posting it to Google+, the company's social network. Thrun appeared to be able to take the picture through tapping the unit, and posting it online via a pair of nods, though the project is still at the prototype stage at this point.

via Google X founder Thrun demonstrates Project Glass on TV show | Electronista.

You may remember Sebastian Thrun the way I do. He was spotlighted a few times on the PBS TV series NOVA in its coverage of the DARPA Grand Challenge follow-up competition in 2005. That was the year that Carnegie Mellon University battled Stanford University to win a race of driverless vehicles in the desert. The previous year CMU was the favorite to win, but its vehicle didn't finish the race. By the following year's competition, the stakes were much higher. Stanford started its effort in the summer of 2004, just months after the March Grand Challenge race. By October 2005 the second race was held, with CMU and Stanford battling it out. Sebastian Thrun was the head of the Stanford team, and had previously been at CMU and a colleague of the Carnegie race team head, Red Whittaker. In 2001 Thrun took a sabbatical year from CMU and spent it at Stanford. Eventually Thrun left Carnegie Mellon altogether and moved to Stanford in July 2003.

Thrun also took a graduate student of his and Red Whittaker's with him to Stanford, Michael Montemerlo. That combination of CMU experience plus a grad student to boot helped accelerate the pace at which Stanley, the driverless vehicle, was developed so it could compete in October 2005. Now move forward to another academic sabbatical, this time from Stanford to Google Inc. Thrun took a group of students with him to work on Google Street View. Eventually this led to another driverless car, funded completely internally by Google. Thrun's accomplishments have continued to accrue at regular intervals, so much so that he has now given up his tenure at Stanford to join Google as a kind of entrepreneurial research scientist helping to head up Google X Labs. X Labs is a kind of internal skunkworks that Google funds to work on various and sundry technologies, including the Google driverless car. Add to this Sebastian Thrun's other big announcement this year, an open education initiative titled Udacity (attempting to 'change' the paradigm of college education). The list, as you see, goes on and on.

So where does that put the Google Project Glass experiment? Sergey Brin showed off a prototype of the system at a party very recently. Now Sebastian Thrun has shown it off as well. Google Project Glass is a prototype, as most websites have reported. Sebastian Thrun's interview on Charlie Rose attempted to demo what the prototype is able to do today. It appears, according to the article quoted at the top of this blog post, that Google Glass can respond to gestures and to voice (though voice was not demonstrated). Questions still remain as to what is included in this package to make it all work. Yes, the glasses do appear 'self-contained', but a wireless connection (as pointed out by Mashable.com) would not be visible to anyone not specifically shown all the components that make it go. That little bit of visual indirection (like a magician's) would lead one to believe that everything resides in the glasses themselves. Well, so much the better then for Google to let everyone draw their own conclusions. As to the concept video of Google Glass, I'm still not convinced it's the best way to interact with a device:

Project Glass: One day. . .

As the video shows, it's centered on voice interaction, very much like Apple's own Siri technology. And that, as you know, requires two things:

1. A specific iPhone that has a noise-cancelling microphone array

2. A broadband cellphone connection back to the Apple mothership data center in North Carolina to do the speech-to-text recognition and responses

So while the glasses look self-contained to an untrained observer, doing the heavy lifting shown in the concept video is going to require the Google Glasses plus two additional items:

1. A specific Android phone with the Google Glass spec’d microphone array and ARM chip inside

2. A broadband cellphone connection back to the Google motherships, wherever they may be, to do some amount of off-phone processing and, obviously, data retrieval for all the Google apps included.

It would be interesting to know what passes over the personal area network between the Google Glasses and the cellphone data uplink that a real set of glasses is going to require. The devil is in those details, and they will be the limiting factor on how inexpensively this product could be manufactured and sold.
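
Purely as a thought experiment about what might travel over that link, the sketch below shows a tethered device shipping a short audio clip off to a cloud speech endpoint and getting text back. The endpoint URL, token and response format are invented for illustration; Google has not published how Glass actually handles this.

```python
# Hypothetical sketch of the phone-to-cloud leg of a voice command.
# The endpoint, auth token and response format are invented for illustration,
# not a documented Google API.
import requests

SPEECH_ENDPOINT = "https://speech.example.com/v1/recognize"  # hypothetical service
AUTH_TOKEN = "dummy-token"                                   # placeholder credential

def recognize(audio_path):
    """Send a short audio clip captured on the headset and return the transcript."""
    with open(audio_path, "rb") as f:
        audio_bytes = f.read()
    resp = requests.post(
        SPEECH_ENDPOINT,
        headers={"Authorization": f"Bearer {AUTH_TOKEN}", "Content-Type": "audio/wav"},
        data=audio_bytes,
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json().get("transcript", "")

# e.g. print(recognize("take_a_picture.wav"))
```

Even at this level of hand-waving, the point stands: the round trip, not the glasses, is where the latency and the data charges live.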

Thomas Hawk’s photo of Sergey Brin wearing Google Glasses

Written by Eric Likness

May 10, 2012 at 3:00 pm

Google shows off Project Glass augmented reality specs • The Register

Thomas Hawk's picture of Sergey Brin wearing the prototype of Project Glass

But it is early days yet. Google has made it clear that this is only the initial stages of Project Glass and it is seeking feedback from the general public on what they want from these spectacles. While these kinds of heads-up displays are popular in films and fiction and dearly wanted by this hack, the poor sales of existing eye-level screens suggests a certain reluctance on the part of buyers.

via Google shows off Project Glass augmented reality specs • The Register.

The video of the Google Glass interface is kind of interesting and problematic at the same time. Stuff floats in and out of view, kind of like the organisms that live in the mucus of your eye. And the latency between when you see something and when you issue a command gives interacting with it a kind of halting, staccato cadence. It looks and feels like old-style voice recognition that needed discrete pauses to know when things ended. As a demo it's interesting, but they should issue releases very quickly and get this thing up to speed as fast as they possibly can. And I don't mean having Sergey Brin show up at a party wearing the thing. According to reports, the 'backpack' that the glasses are tethered to is not small. Based on the description, I think Google has a long way to go yet.

http://my20percent.wordpress.com/2012/02/27/baseball-cap-head-up-displa/

And on the smaller-scale tinkerer front, this WordPress blogger fashioned an old-style 'periscope' using a cellphone, a mirror and half-mirrored sunglasses to get a cheaper Augmented Reality experience. The cellphone is an HTC unit strapped onto the bill of a baseball hat. The display is then reflected downwards through a hole cut in the bill and then off a pair of sunglasses mounted at roughly a 45-degree angle. It's cheap, it works, but I don't know how good the voice activation is. Makes me wonder how well it might work with an iPhone Siri interface. The author even mentions that the HTC is a little heavy and an iPhone might work a little better. I wonder if it wouldn't work better still if the 'periscope' mirror arrangement were scrapped altogether: just mount the phone flat onto the bill of the hat with the screen facing downward, and let the screen reflect off the sunglasses' surface. The number of reflecting surfaces would be reduced, the image would be brighter, etc. I noticed a lot of people also commented on this fellow's blog, which might get some discussion brewing about the longer-term value-add benefits of Augmented Reality. There is a killer app yet to be found, and even Google hasn't captured the flag yet.

This picture shows the Wikitude World Browser on the iPhone looking at the Old Town of Salzburg. Computer-generated information is drawn on top of the screen. This is an example for location-based Augmented Reality. (Photo credit: Wikipedia)

Written by Eric Likness

April 19, 2012 at 3:00 pm