The technical aspects of privacy – O’Reilly Radar

[Image: Edward Snowden, via CrunchBase]

The first of three public workshops kicked off a conversation with the federal government on data privacy in the US.

by Andy Oram | @praxagora

via The technical aspects of privacy – O’Reilly Radar.

Interesting topic covering a wide range of issues. I'm glad MIT sees fit to host a set of workshops on this and keep the pressure up. But as Andy Oram writes, the whole discussion at MIT was circumscribed by the notion that privacy as such doesn't exist (an old axiom from Scott McNealy, ex-CEO of Sun Microsystems).

No one at that MIT meeting tried to advocate for users managing their own privacy. Andy Oram mentions the Vendor Relationship Management (VRM) movement (thanks to Doc Searls and the Cluetrain Manifesto) as one mechanism for individuals to pick and choose what information is shared out, and to what degree. People remain willfully clueless or ignorant of VRM as an option when it comes to privacy. The shades and granularity of VRM are far more nuanced than the binary privacy-versus-security debate, and it's sad this held true for the MIT meet-up as well.

John Podesta's call-in to the conference mentioned an existing set of rules for electronic data privacy, dating back to the early 1970s and the fear that mainframe computers "knew too much" about private citizens, known as the Fair Information Practices: http://epic.org/privacy/consumer/code_fair_info.html (thanks to the Electronic Privacy Information Center for hosting this page). These issues have always existed, just in different forms at earlier times. They are not new; they are old. But each time there's a debate, we start all over as though the issue has never existed and never been addressed. If the Fair Information Practices rules are law, then all the case history and precedents set by those cases STILL apply to the NSA and government surveillance.

I did learn one new term from reading about the conference at MIT: differential privacy. Apparently it's very timely, and research work is being done in this category. Mostly it applies to datasets and other big data that need to be analyzed without uniquely identifying any individual in the dataset. You want to find out the efficacy of a drug without spilling the beans that someone has a "prior condition." That's the net effect of implementing differential privacy: you get the query result out of the dataset, but you never learn all the fields of the people that make up that result. That sounds like a step in the right direction and should honestly apply to phone and Internet company records as well. Just because you collect the data doesn't mean you should be able to free-wheel through it and do whatever you want. If you're mining, you should only get the net result of the query rather than snoop through all the fields for each individual. That, to me, is the true meaning of differential privacy.
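To make the idea concrete, here's a minimal sketch of the classic Laplace mechanism for a counting query, the textbook starting point for differential privacy. The dataset, the `dp_count` helper, and the epsilon value are all my own illustration, not anything from the MIT workshop. A count has sensitivity 1 (one person joining or leaving the dataset changes it by at most 1), so adding Laplace noise with scale 1/epsilon means the analyst gets a useful aggregate answer without being able to pin the result on any individual record:

```python
import math
import random

def dp_count(records, predicate, epsilon=0.5):
    """Return an epsilon-differentially-private count of matching records.

    A counting query has sensitivity 1, so Laplace noise with
    scale 1/epsilon suffices for epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Draw Laplace(0, 1/epsilon) noise via the inverse-CDF method
    # on a uniform sample u in (-0.5, 0.5).
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical drug-trial dataset: did each patient respond?
patients = [{"id": i, "responded": i % 3 == 0} for i in range(300)]

# The analyst learns roughly how many patients responded (true count
# is 100), but never inspects any individual's fields directly.
noisy = dp_count(patients, lambda p: p["responded"], epsilon=0.5)
```

With epsilon = 0.5 the noise scale is 2, so the noisy answer is almost always within a few dozen of the true count of 100 — close enough for efficacy analysis, fuzzy enough that no single patient's "prior condition" is exposed by the result.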


Author: carpetbomberz

Technology news & commentary all-in-one!