On the Net today we face a choice between freedom and captivity, independence and dependence. How we build the Internet of Things has far-reaching consequences for the humans who will use—or be used by—it. Will we push forward, connecting things using forests of silos that are reminiscent of the online services of the 1980s, or will we learn the lessons of the Internet and build a true Internet of Things?
Phil Windley is absolutely right. And when it comes to silos, consider the ones we call app stores and network providers. Cell phones get locked to the provider that subsidizes the phone. The phone gets locked to the app store the manufacturer has built. All of this is designed to “capture” and ensnare users in the cul-de-sac called the “brand.” And it would seem that if we let manufacturers and network providers make all the choices, the Internet of Things will be no different from the cell phone market we see today.
We can reclaim the Web and more broadly ed-tech for teaching and learning. But we must reclaim control of the data, content, and knowledge we create. We are not resources to be mined. Learners do not enter our schools and our libraries to become products for the textbook industry and the testing industry and the technology industry and the ed-tech industry to profit from.
A really philosophical article about what it is Higher Ed is trying to do here. It’s not just about student portfolios, it’s everything: the books you check out, the seminars you attend, the videos you watched, the notes you took, all the artifacts of learning. And currently they are all squirreled away and stashed inside data silos like Learning Management Systems.
The original World Wide Web was like the Wild, Wild West, an open frontier without visible limit. Cloud services and commercial offerings have fenced in the frontier in a series of waves of fashion. Whether it was AOL, Tripod.com, Geocities, Friendster, MySpace, or Facebook, the web grew in the form of gated communities and cul-de-sacs for “members only.” True, the democracy of it all was that membership was open and free; practically anyone could join. All you had to do was hand over control, the keys to YOUR data. That was the bargain: by giving up your privacy, you gained all the rewards of socializing with long-lost friends and acquaintances. From that little spark the surveillance and “data mining” operation hit full speed.
Reclaiming ownership of all this data, especially the portion generated over a lifetime of learning, is a worthy cause. Audrey Watters references Jon Udell with an example of the kind of data we would want to own, and limit access to, over our whole lives. From the article:
“Udell then imagines what it might mean to collect all of one’s important data from grade school, high school, college and work — to have the ability to turn this into a portfolio — for posterity, for personal reflection, and for professional display on the Web.”
Indeed. And though this data may live somewhere on the Internet, access is restricted to those whom we give explicit permission. That’s in part a project unto itself: this mesh of data could be text or other data objects that might need to be translated and converted to future-readable formats so it doesn’t grow old and obsolete in an abandoned file format. All of this stuff could be given a very fine level of access control: individuals you have approved could read parts and pieces, or maybe even get wholesale access. You would make that decision, and maybe share only the absolute minimum necessary. So instead of seeing a portfolio of your whole educational career, someone gets just the relevant links and only those links. That’s what Jon Udell is pursuing now through the Thali project. Thali is a much more generalized way to share data from many devices, presented in a holistic, rationalized manner to whomever you define as a trusted peer. It’s not just about educational portfolios, it’s about sharing your data. But first and foremost you have to own the data, or attempt to reclaim it from the wilds and wilderness of the social media enterprise, the educational enterprise, all these folks who want to own your data while giving you free services in return.
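The “just the relevant links and only those links” idea can be sketched in a few lines of code. This is my own illustrative toy, not Thali’s actual design or API: a personal store that mints an unguessable link per item, so a recipient can reach exactly one thing and nothing else.

```python
# A toy sketch of per-link access control over a personal data store.
# All names here are illustrative, not Thali's real interfaces.
import secrets


class PersonalStore:
    def __init__(self):
        self._items = {}   # item_id -> content
        self._grants = {}  # unguessable token -> item_id

    def add(self, item_id, content):
        self._items[item_id] = content

    def share(self, item_id):
        """Mint a capability link for exactly one item; the recipient
        sees that item and nothing else in the store."""
        token = secrets.token_urlsafe(16)
        self._grants[token] = item_id
        return token

    def fetch(self, token):
        """Resolve a shared link; anything not explicitly granted is refused."""
        item_id = self._grants.get(token)
        if item_id is None:
            raise PermissionError("no such grant")
        return self._items[item_id]
```

So rather than handing over the whole portfolio, you would call `share("transcript")` and send along only that one link, keeping the diary, the library checkouts, and everything else invisible.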
Audrey uses the metaphor “data is the new oil,” and that is the heart of the problem. Given the free oil, those who invested in holding onto and storing it are loath to give it up. And like credit reporting agencies with their duplicate and sometimes incorrect datasets, those folks will give access to that unknown quantity to the highest bidder for whatever reason. Whether it’s campaign staffers, private detectives, or vengeful spouses doesn’t matter, as they own the data and set the rules for how it is shared. However, in the future, when we’ve all reclaimed ownership of our piece of the oil field, THEN we’ll have something. And when it comes to the digital equivalent of the old manila folder, we too will truly own our education.
He continues, “People are upset about privacy, but in one sense they are insufficiently upset because they don’t really understand what’s at risk. They are looking only at the short term.” And to him, there is only one viable answer to these potential risks: “You’re going to control your own data.” He sees the future as one where individuals make active sharing decisions, knowing precisely when, how, and by whom their data will be used. “That’s the most important thing, control of the data,” he reflects. “It has to be done correctly. Otherwise you end up with something like the Stasi.”
Sounds a little bit like VRM and a little bit like Jon Udell’s Thali project. Wearables don’t fix the problem of metadata being collected about you; you still don’t control those ingoing and outgoing feeds of information.
Sandy Pentland points out that a lot can be derived and discerned simply from the people you know. Every contact in your friend list adds one more bit of intelligence about you without anyone ever talking to you directly. This kind of analysis is only possible now because of the End User License Agreements posted by each of the collecting entities (the so-called social networking websites).
An alternative to this wildcat, frontier mentality among data collectors is Vendor Relationship Management. Doc Searls (co-author of the Cluetrain Manifesto) wants people to be able to share the absolute minimum necessary in order to get what they want or need from vendors on the Internet, especially the data-collecting types. And from that point, if an individual wants to share more, they should be rewarded with something more in return from the people they share with (the prime example being vendors, the ‘V’ in VRM).
Thali, in another way, allows you to share data as well. But instead of letting someone into your data mesh in an all-or-nothing way, it lets strongly identified individuals have linkages into or out of your own data streams, whatever form those streams may take. I think Sandy Pentland, Doc Searls, and Jon Udell would all agree there needs to be some amount of ownership and control ceded back to the individual going forward. Too many of the vendors own the data and the metadata right now, and will do what they like with it, including responding to National Security Letters. So instead of being commercial ventures, they are swiftly evolving into branches or de facto subsidiaries of the National Security Agency. If we can place controls on the data, we’ll maybe get closer to the ideal of social networking and controlled data sharing.
Interesting topic covering a wide range of issues. I’m so happy MIT sees fit to host a set of workshops on this and keep the pressure up. But as Andy Oram writes, the whole discussion at MIT was circumscribed by the notion that privacy as such doesn’t exist (an old axiom from Scott McNealy, ex-CEO of Sun Microsystems).
No one at that MIT meeting tried to advocate for users managing their own privacy. Andy Oram mentions the Vendor Relationship Management movement (thanks to Doc Searls and his Cluetrain Manifesto) as one mechanism for individuals to pick and choose what information gets shared out and to what degree. People remain willfully clueless or ignorant of VRM as an option when it comes to privacy. The shades and granularity of VRM are far more nuanced than the bifurcated, binary debate of privacy versus security, and it’s sad this held true for the MIT meet-up as well.
John Podesta’s call-in to the conference mentioned an existing set of rules for electronic data privacy, dating back to the early 1970s and the fear that mainframe computers “knew too much” about private citizens, known as the Fair Information Practices: http://epic.org/privacy/consumer/code_fair_info.html (thanks to the Electronic Privacy Information Center for hosting this page). These issues seem to always exist, just in different forms at earlier times. They are not new; they are old. But each time there’s a debate, we start all over as if the question has never existed and never been addressed. If the Fair Information Practices rules are law, then all the case history and precedents set by those cases STILL apply to NSA and government surveillance.
I did learn one new term from reading about the conference at MIT: differential privacy. Apparently it’s very timely, and some research work is being done in this category. Mostly it applies to datasets and other big data that need to be analyzed without uniquely identifying any individual in the dataset. You want to find out the efficacy of a drug without spilling the beans that someone has a “prior condition.” That’s the sum effect of implementing differential privacy: you get the answer out of the dataset, but you never once know all the fields of the people that make it up. That sounds like a step in the right direction and should honestly apply to phone and Internet company records as well. Just because you collect the data doesn’t mean you should be able to free-wheel through it and do whatever you want. If you’re mining, you should only get the net result of the query rather than snoop through all the fields for each individual. That to me is the true meaning of differential privacy.
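The core trick is easy to sketch. This is a toy version of one standard differential-privacy technique (the Laplace mechanism), not anything specific presented at the MIT workshop: the analyst gets an aggregate count with calibrated random noise added, so no single person’s record can be confidently inferred from the answer.

```python
# Toy sketch of the Laplace mechanism for a counting query.
# A count changes by at most 1 when one person is added or removed
# (sensitivity 1), so Laplace noise of scale 1/epsilon suffices.
import random


def noisy_count(records, predicate, epsilon=0.5):
    """Answer 'how many records satisfy predicate?' with noise.
    Smaller epsilon means more noise and stronger privacy."""
    true_count = sum(1 for r in records if predicate(r))
    # Difference of two exponentials with rate epsilon is Laplace(scale 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise


# Hypothetical example: drug efficacy without exposing any one patient.
patients = [{"improved": True}] * 60 + [{"improved": False}] * 40
estimate = noisy_count(patients, lambda p: p["improved"])
```

The analyst learns the count is about 60, but any individual patient can plausibly deny being in either group, which is exactly the “query out, fields never exposed” property described above.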
This tells me my job with foursquare is to be “driven” like a calf into a local business. Of course, this has been the assumption from the start. But I had hoped that somewhere along the way foursquare could also evolve into a true QS app, yielding lat-lon and other helpful information for those (like me) who care about that kind of thing. (And, to be fair, maybe that kind of thing actually is available, through the foursquare API. I saw a Singly app once that suggested as much.) Hey, I would pay for an app that kept track of where I’ve been and what I’ve done, and made that data available to me in ways I can use.
foursquare as a kind of LifeBits is, I think, what Doc Searls is describing: a form of self-tracking a la Stephen Wolfram or Gordon Bell. Instead, foursquare is the carrot being dangled to lure you into giving your business to a particular retailer. After that you accumulate points for numbers of visits and possibly unlock rewards for your loyalty. But foursquare no doubt accumulates a lot of other data along the way that could be used for the very purpose Doc Searls was hoping for.
Gordon Bell’s work at Microsoft Research bootstrapping the MyLifeBits project is a form of memory enhancement, but also a log of personal data that can be analyzed later. The collection, or ‘instrumentation,’ of one’s environment is what Stephen Wolfram has accomplished by counting things over time. Not to say it’s simpler than MyLifeBits, but it is in some ways lighter-weight data (instead of videos and pictures: mouse clicks, tallies of email activity, times of day, etc.). There is no doubt that foursquare could make a for-profit service for paying users, collecting this location data and serving it up to subscribers, letting them analyze the data after the fact.
I firmly believe a form of MyLifeBits could be aggregated across a wide range of free and paid services, along with personal instrumentation and data collecting of the kind Stephen Wolfram does. If there’s one thing I’ve learned reading stories about inventions like these from MIT’s Media Lab, it’s that it’s never an either/or proposition. You don’t have to adopt just Gordon Bell’s technology or Stephen Wolfram’s techniques or even foursquare’s own data. You can do all, or just pick and choose the ones that suit your personal data collection needs. Then you get to slice, dice, and analyze to your heart’s content. What you do with it after that is completely up to you, and it should be considered as personal as any legal documents or health records you already have.
Which takes me back to an article I wrote some time ago in reference to Jon Udell calling for a federated LifeBits-type service. It wouldn’t be constrained to one kind of data: potentially all the LifeBits aggregated, plus new repositories for stuff that must be locked down and private. So add Doc Searls to the list of bloggers and long-time technology writers who see an opportunity. Advocacy (in the case of Doc’s experience with foursquare) on behalf of sharing unfiltered data with the users on whom data is collected is one step in that direction. I feel Jon Udell is also an advocate for users gaining access to all that collected and aggregated data. But as Jon Udell asks, who is going to be the first to attempt to offer this up as a pay-for service in the cloud, where for a fee you can access your lifebits aggregated into one spot (foursquare, Twitter, Facebook, Gmail, Flickr, Photo Stream, Mint, eRecords, etc.) so that you don’t spend your life logging on and off from service to service to service? Aggregation could be a beautiful thing.
Google X (formerly Google Labs) founder Sebastian Thrun debuted a real-world use of his latest endeavor, Project Glass, during an interview on the syndicated Charlie Rose show which aired yesterday, taking a picture of the host and then posting it to Google+, the company’s social network. Thrun appeared to be able to take the picture by tapping the unit, and to post it online via a pair of nods, though the project is still at the prototype stage at this point.
You may remember Sebastian Thrun the way I do. He was spotlighted a few times on the PBS TV series NOVA in their coverage of the DARPA Grand Challenge competition follow-up in 2005. That was the year Carnegie Mellon University battled Stanford University in a race of driverless vehicles in the desert. The previous year CMU was the favorite to win, but their vehicle didn’t finish the race. By the following year’s competition, the stakes were much higher. Stanford started its effort in the summer of 2004, just months after the March Grand Challenge race. By October 2005 the second race was held, with CMU and Stanford battling it out. Sebastian Thrun was the head of the Stanford team, and had previously been at CMU as a colleague of the Carnegie race team head, Red Whittaker. In 2001 Thrun took a sabbatical year from CMU and spent it at Stanford. Eventually Thrun left Carnegie Mellon altogether and moved to Stanford in July 2003.
Thrun also took a graduate student of his and Red Whittaker’s with him to Stanford, Michael Montemerlo. That combination of CMU experience, plus a grad student to boot, helped accelerate the pace at which Stanley, the driverless vehicle, was developed to compete in October 2005. Now move forward to another academic sabbatical, this time from Stanford to Google Inc. Thrun took a group of students with him to work on Google Street View. Eventually this led to another driverless car, funded completely internally by Google. Thrun’s accomplishments have continued to accrue at regular intervals, so much so that he has now given up his tenure at Stanford to join Google as a kind of entrepreneurial research scientist helping head up the Google X Labs. The X Labs is a kind of internal skunkworks that Google funds to work on various and sundry technologies, including the Google driverless car. Add to this Sebastian Thrun’s other big announcement this year, an open education initiative titled Udacity (attempting to ‘change’ the paradigm of college education). The list, as you see, goes on and on.
So where does that put the Google Project Glass experiment? Sergey Brin showed off a prototype of the system at a party very recently. Now Sebastian Thrun has shown it off as well. Google Project Glass is a prototype, as most online websites have reported. Sebastian Thrun’s interview on Charlie Rose attempted to demo what the prototype can do today. According to the article quoted at the top of my blogpost, Google Glass can respond to gestures and voice (though voice was not demonstrated). Questions still remain as to what is included in this package to make it all work. Yes, the glasses do appear ‘self-contained,’ but a wireless connection (as pointed out by Mashable.com) would not be visible to anyone not specifically shown all the components that make it go. That little bit of visual misdirection (like a magician’s) would lead one to believe that everything resides in the glasses themselves. Well, so much the better for Google to let everyone draw their own conclusions. As to the concept video of Google Glass, I’m still not convinced it’s the best way to interact with a device:
As the video shows, it’s centered on voice interaction, very much like Apple’s own Siri technology. And that, as you know, requires two things:
1. A specific iPhone that has a noise cancelling microphone array
2. A broadband cellphone connection back to the Apple mothership data center in North Carolina to do the Speech-to-Text recognition and responses
So to an untrained observer the glasses appear self-contained, but doing the heavy lifting shown in the concept video is going to require the Google Glasses plus two additional items:
1. A specific Android phone with the Google Glass spec’d microphone array and ARM chip inside
2. A broadband cellphone connection back to the Google motherships, wherever they may be, to do some amount of off-phone processing and obviously data retrieval for all the Google apps included.
It would be interesting to know what passes over the personal area network between the Google Glasses and the cellphone data uplink a real set of glasses is going to require. The devil is in those details, and they will be the limiting factor on how inexpensively this product could be manufactured and sold.
On Tuesday, the company unveiled its new ARM Cortex-M0+ processor, a low-power chip designed to connect non-PC electronics and smart sensors across the home and office.
Previous iterations of the Cortex family of chips had the same goal, but with the new chip, ARM claims much greater power savings. According to the company, the 32-bit chip consumes just nine microamps per megahertz, an impressively low amount even for an 8- or 16-bit chip.
Lower power means a very conservative power budget, especially for devices connected to the network. And 32 bits is nothing to sneeze at, considering most manufacturers would pick a 16- or 8-bit chip to bring down the cost and the power budget too. According to the article, the power savings are so great that in sleep mode the chip consumes almost no power at all. For this market Moore’s Law is paying big dividends, especially given the bonus of a 32-bit core. So not only will you get a very small, low-power CPU, you’ll have a much more diverse range of software that can run on it and take advantage of a larger memory address space as well. I think non-PC electronics could include things as simple as webcams or cellphone cameras. Can you imagine a CMOS camera chip with a whole 32-bit CPU built in? Makes you wonder not just what it could do, but what ELSE it could do, right?
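To see what 9 microamps per megahertz buys you, here is a back-of-envelope battery-life estimate. Only the 9 µA/MHz figure comes from the article; the clock speed, duty cycle, sleep current, and coin-cell capacity are my own illustrative assumptions, not ARM’s datasheet numbers.

```python
# Back-of-envelope battery life for a 9 uA/MHz core on a coin cell.
# Every figure besides UA_PER_MHZ is an assumption for illustration.

UA_PER_MHZ = 9      # active current per MHz, from the article
CLOCK_MHZ = 48      # assumed clock speed
DUTY_CYCLE = 0.01   # assumed: awake 1% of the time, asleep otherwise
SLEEP_UA = 0.5      # assumed deep-sleep current, microamps
BATTERY_MAH = 225   # assumed CR2032 coin-cell capacity

active_ua = UA_PER_MHZ * CLOCK_MHZ                            # current while running
avg_ua = active_ua * DUTY_CYCLE + SLEEP_UA * (1 - DUTY_CYCLE) # time-weighted average
hours = BATTERY_MAH * 1000 / avg_ua                           # mAh -> uAh, then hours
years = hours / (24 * 365)
```

Under these assumptions the average draw works out to under 5 µA and the cell lasts on the order of five years, which is exactly why a sensor node that sleeps almost all the time can afford a 32-bit core.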
The term ‘Internet of Things‘ is bandied about quite a bit as people dream about CPUs and networks connecting ALL the things. And what would be the outcome if your umbrella were connected to the Internet? What if ALL the umbrellas were connected? You could log all kinds of data: whether it was opened or closed, what the ambient temperature was. It would potentially be like a portable weather station for anyone aggregating all the logged data. And the list goes on and on. Instead of tire pressure monitors, why not also capture video of the tire as it is being used commuting to work? It could help measure tire wear and set up an appointment when you need a wheel alignment. It could determine how many times you hit potholes and suggest smoother alternate routes. That’s the kind of blue-sky, wide-open conjecture that a 32-bit low/no-power CPU enables.
One day I’m sure everyone will routinely collect all sorts of data about themselves. But because I’ve been interested in data for a very long time, I started doing this long ago. I actually assumed lots of other people were doing it too, but apparently they were not. And so now I have what is probably one of the world’s largest collections of personal data.
In some ways similar to Stephen Wolfram, Gordon Bell at Microsoft has engaged in an attempt to record his “LifeBits,” using a ‘wearable’ computer to record video and capture what goes on in his life. In my opinion, Stephen Wolfram has gone Gordon Bell one better by collecting data over a much longer period and across a much wider range than Bell accomplished within the scope of LifeBits. Reading Wolfram’s summary of all his data plots is as interesting as seeing the plots themselves. There can be no doubt that Stephen Wolfram has always thought, and will continue to think, differently than most folks, and dare I say most scientists. Bravo!
The biggest difference between MyLifeBits and Wolfram’s personal data collection is Wolfram’s emphasis on non-image-based data. The goal, it seems, for the Microsoft Research group is to fulfill the promise of Vannevar Bush’s old article “As We May Think,” printed in The Atlantic in July 1945. In that article Bush proposes a prototype of a more ‘visual computer’ that would act as a memory-recall and analytic-thinking aid. He named it the Memex.
Gordon Bell and Jim Gemmell of Microsoft Research seemed to be focused on the novelty of a carried camera automatically taking pictures of the area immediately in front of it. This log of ‘what was seen’ was meant to help cement visual memory and recall. Gordon Bell had spent a long period of time digitizing “articles, books, cards, CDs, letters, memos, papers, photos, pictures, presentations, home movies, videotaped lectures, and voice recordings and stored them digitally.” This emphasis on visual data, if used properly, might be useful to some, but I think it is more a product of Gordon Bell’s own personal interest in seeing how much he could capture and then catalog after the fact.
Stephen Wolfram’s data wasn’t even necessarily based on a ‘wearable computer‘ the way MyLifeBits seems to be. Wolfram built a logging/capture system into the things he did daily on a computer. This even included data collected by a digital pedometer to measure the steps he took in a day. The plots of the data are most interesting in comparison to one another, especially given the length of time over which they were collected (a much bigger set than Gordon Bell’s LifeBits, I dare say). So maybe this points to another step forward in the evolution of LifeBits? Wolfram’s data seems more useful in a lot of ways; he’s not as focused on memory and recall of any given day. But a synthesis of Wolfram’s data collection methods and analysis with Gordon Bell’s MyLifeBits capture of image data might be useful to a broader range of people, if someone wanted to embrace and extend these two scientists’ personal data projects.
While I agree there might be a better technical solution than the DNS blocking adopted by the SOPA and PIPA bills, less formal networks are in essence filling the gap. By this I mean the MegaUpload takedown that occurred yesterday at the order of the U.S. Justice Department. Without even the benefit of SOPA or PIPA, it ordered investigations, arrests, and takedowns of the whole MegaUpload enterprise. But what is interesting is the knock-on effect social networks had in the vacuum left by the DNS blocking. Within hours, DNS was replaced by its immediate precursor: folks were sending the IP addresses of available MegaUpload hosts as plain text in tweets the world ’round. And given the announcement today that Twitter is closing in on its 500 millionth account, I’m not too worried about a technical solution to DNS blocking. That too is already moot, by virtue of social networking and simple numeric IP addresses. Long live IPv4 and dotted quads like 255.255.255.xxx
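The workaround those tweets enabled is simple enough to sketch. This is a hedged illustration, not a record of what anyone actually ran, and the address and hostname below are placeholders rather than MegaUpload’s real ones: open a TCP connection straight to a tweeted IP, never touching DNS, and supply the Host header so a name-based virtual host still answers.

```python
# Sketch of bypassing DNS by connecting to a raw IPv4 address.
# The IP and hostname used in the usage note are placeholders.
import socket


def build_request(host: str, path: str = "/") -> bytes:
    """Build a minimal HTTP/1.1 request; the Host header names the site
    even though we never asked DNS to resolve it."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n\r\n").encode()


def fetch_by_ip(ip: str, host: str, path: str = "/") -> bytes:
    """Connect straight to a dotted-quad IP, skipping name resolution."""
    with socket.create_connection((ip, 80), timeout=5) as sock:
        sock.sendall(build_request(host, path))
        chunks = []
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    return b"".join(chunks)


# Usage (placeholder address): fetch_by_ip("203.0.113.7", "example.org")
```

A DNS block removes the name-to-address lookup, but as long as the address itself circulates as 140 characters of plain text, the connection path is untouched.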
Wearable computing is a broad term. Technically, a fancy electronic watch is a wearable computer. But the ultimate version of this technology is a screen that would somehow augment our vision with information and media.
Augmented Reality is in the news, only this time it’s Google, so it’s like for rilz, yo! Just kidding. It will be very interesting, given Google’s investment in the Android OS and power-saving mobile computing, to see what kind of wearable computers they develop. No offense to the MIT Media Lab, but getting something into the hands of end users is something Google is much more accomplished at doing (though One Laptop Per Child is the counter-argument, of course). I think mobile phones are already kind of like a wearable computer. Think back to the first iPod arm bands, right? Now just scale the iPod up to the size of an Android phone and it’s no different. It’s practically wearable today (as Bilton says in his article).
What’s different with this effort, then, is the accessorizing of the real wearable computer (the smartphone), giving it the augmentation role we’ve seen with products like Layar. But maybe not limited to cameras, video screens, and information overlays: the next wave would have auxiliary wearable sensors communicating back to the smartphone, like the old Nike accelerometer that fit into special Nike shoes. Also consider the iPod Nano ‘wrist watch’ fad as it exists today. It may not run Apple’s iOS, but it certainly could transmit data to your smartphone if need be. Which leads to the hints and rumors of attempts by Apple to create ‘curved glass.’
This has been an ongoing effort by Apple, without being tied to any product or feature in their current line, except maybe the iPhone. Most websites I’ve read to date speculate the curvature is not very pronounced, a styling cue to further help marketing and sales of the iPhone. But in this article, the curvature Bilton is talking about would be more like the face of a bracelet around the wrist, much more pronounced. Thus the emphasis on curved glass might point to more work being done on wearable computers.
Lastly, Bilton’s article goes into a typical futuristic projection of what form the video display will take. No news to report on this topic specifically, as it’s a lot of hand-waving and make-believe where contact lenses potentially become display screens. As for me, the more pragmatic approach of companies like Layar, creating iPhone/Augmented Reality software hybrids, is going to ship sooner and prototype faster than the make-believe video contact lenses of the Future. The takeaway I get from Bilton’s article is that there’s a more defined move to give the smartphone more functions as a computer. Though the MIT Media Lab has labeled this ‘wearable computing,’ think of it more generally as ubiquitous computing, where the smartphone and its data connection are with you wherever you go.