Category: surveillance

  • U.S. Requests for Google User Data Spike 29 Percent in Six Months | Threat Level | Wired.com

    The number of U.S. government requests for data on Google users for use in criminal investigations rose 29 percent in the last six months, according to data released by the search giant Monday.

    via U.S. Requests for Google User Data Spike 29 Percent in Six Months | Threat Level | Wired.com.

    Not good news, in my opinion. The reason is the mission creep and abuse that come with absolute power in the form of a National Security Letter. The other part of the equation is that Google’s business model runs opposite to the idea of protecting people’s information. If you disagree, I ask that you read this blog post from Christopher Soghoian, where he details just what it is Google does when it keeps all your data unencrypted in its data centers. In order to sell AdWords and serve advertisements to you, Google needs to keep everything open and unencrypted. At the same time, Google isn’t careless in its stewardship of your data, but it does respond to law enforcement requests for customer data. To quote Soghoian at the end of his blog entry:

    “The end result is that law enforcement agencies can, and regularly do, request user data from the company — requests that would lead to nothing if the company put user security and privacy first.”

    And that indeed is the moral of the story. Which leaves everyone asking: what’s the alternative? Earlier in the same story the blame is placed squarely on the end user for not protecting himself. Encryption tools for email and personal documents have been around for a long time, and there are commercial products available to provide some level of privacy even for so-called cloud-hosted data. But the friction points are always going to be familiarity, ease of use and cost; until those come down, no encryption tool will be as widely adopted as webmail has been since it displaced desktop email clients like Eudora.

    So if you really have concerns, take action; don’t wait for Google to act to defend your rights. Encrypt your email and your documents, and make Google one bit less culpable for any law enforcement request that may or may not target your personal data.
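
    Here’s a minimal sketch of that “encrypt before it leaves your machine” idea in Python, using the third-party cryptography package (pip install cryptography). The document and the key handling are simplified for illustration; in real life, key management is the hard part.

    ```python
    # Encrypt a document locally so whatever lands in a webmail draft
    # or a cloud folder is opaque to the provider (and to anyone who
    # subpoenas the provider). Requires: pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # keep this yourself, ideally offline
    f = Fernet(key)

    ciphertext = f.encrypt(b"my private document")
    print(ciphertext)             # all the provider ever needs to see

    # Only the key holder can recover the plaintext:
    print(f.decrypt(ciphertext))  # b'my private document'
    ```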

  • Kim Cameron returns to Microsoft as indie ID expert • The Register

    Cameron said in an interview posted on the ID conference’s website last month that he was disappointed about the lack of an industry advocate championing what he has dubbed “user-centric identity”, which is about keeping various bits of an individual’s online life totally separated.

    via Kim Cameron returns to Microsoft as indie ID expert • The Register.

    CRM, meet VRM: we want our identity separated. This is one of the goals of Vendor Relationship Management, as opposed to “Customer Relationship” Management. I want to share a well-defined set of details with Windows Live!, Facebook, Twitter and Google. Instead I exist as separate entities that each service tries to aggregate and profile, to learn more about what I do outside its respective web app. So if someone can champion my ability to control what I share with which online service, all the better. If Microsoft understands this, it is possible someone like Kim Cameron will be able to accomplish some big things with Windows Live! ID logins and profiles. Otherwise, this is just another attempt to capture web traffic into a commercial private Intraweb. I count Apple, Facebook and Google as private Intraweb competitors.
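
    To make that concrete, here’s a hypothetical sketch in Python of what VRM-style selective disclosure could look like. The profile fields, service names and grant policy are all invented for illustration; no such API exists at any of these services today.

    ```python
    # One profile, held by me; each vendor sees only the fields
    # I have explicitly granted it. (Hypothetical, for illustration.)
    PROFILE = {
        "name": "J. Blogger",
        "email": "me@example.com",
        "zip": "02139",
        "birthday": "1970-01-01",
    }

    GRANTS = {  # my policy, not the services'
        "facebook": ["name"],
        "google": ["name", "email"],
        "twitter": ["name", "zip"],
    }

    def share_with(service: str) -> dict:
        """Return only the profile fields this service may read."""
        return {k: PROFILE[k] for k in GRANTS.get(service, [])}

    print(share_with("google"))  # {'name': 'J. Blogger', 'email': 'me@example.com'}
    ```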

  • JSON Activity Streams Spec Hits Version 1.0

    The Facebook Wall is probably the most famous example of an activity stream, but just about any application could generate a stream of information in this format. Using a common format for activity streams could enable applications to communicate with one another, and presents new opportunities for information aggregation.

    via JSON Activity Streams Spec Hits Version 1.0.

    Remember mash-ups? I recall the great wide wonder of putting together web pages that used ‘services’ provided for free through APIs published for anyone who wanted to use them. There were many at one time; some still exist and others have been culled. But as newer social networks begat yet newer ones (MySpace, Facebook, Foursquare, Twitter), none of the ‘outputs’ or feeds of any single one was anything more than a way of funneling you into its own login accounts and user screens. The gated community first requires you to be a member in order to play.

    We went from ‘open’ to cul-de-sac and stovepipe in less than one full revision of social networking. However, maybe all is not lost; maybe an open standard can help folks re-use their own data at least (maybe I could mash up my own activity stream). Betting on whether this will take hold and see wider adoption by social networking websites would be risky. Likely each service provider will closely hold most of the data it collects and publish only the bare minimum necessary to claim compliance.

    Another burden upon this sharing is the slowly creeping concern about the security of one’s own activity stream. It will no doubt have to be opt-in, and definitely not opt-out, as I’m sure people are more used to having fellow members of their tribe know what they are doing than to putting out a feed of their activity to the whole Internet. Which makes me think of the old discussion of being able to fine-tune who has access to what (Doc Searls’s old Vendor Relationship Management idea). Activity Streams could easily fold into that universe, where you regulate which threads of the stream are shared with which people; the sketch below shows what that might look like. I would only really agree to use this service if it had that fine-grained level of control.
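
    To ground this, here’s a minimal sketch in Python of a single activity in the Activity Streams 1.0 JSON format, plus a hypothetical “audience” field (my own extension, not part of the 1.0 spec) showing the kind of per-group filtering I’m asking for.

    ```python
    import json

    # One activity, following the Activity Streams 1.0 JSON layout:
    # published / actor / verb / object. The "audience" key is NOT in
    # the spec; it is a made-up extension for the filtering idea.
    activity = {
        "published": "2011-04-15T12:00:00Z",
        "actor": {"objectType": "person",
                  "id": "acct:me@example.com",
                  "displayName": "Me"},
        "verb": "post",
        "object": {"objectType": "note", "content": "Hello, tribe."},
        "audience": ["family", "friends"],
    }

    def visible_to(activities, group):
        """Keep only the threads of the stream shared with this group."""
        return [a for a in activities if group in a.get("audience", [])]

    print(json.dumps(visible_to([activity], "friends"), indent=2))
    print(visible_to([activity], "whole-internet"))  # [] -- nothing leaks
    ```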

  • From Big Data to NoSQL: The ReadWriteWeb Guide to Data Terminology (Part 1)

    Big Data

    In short, big data simply means data sets that are large enough to be difficult to work with. Exactly how big is big is a matter of debate. Data sets that are multiple petabytes in size are generally considered big data (a petabyte is 1,024 terabytes). But the debate over the term doesn’t stop there.

    via From Big Data to NoSQL: The ReadWriteWeb Guide to Data Terminology (Part 1).

    There’s big doin’s inside and outside the data center these days. You cannot spend a day without a cool new article about some project that’s just been open sourced from one of the departments inside the social networking giants, Hadoop being the biggest example. What, you ask, is Hadoop? It is a project that grew out of Doug Cutting’s open source web crawler and took off once Yahoo put its weight behind it, after Google started spilling the beans on its huge technological leaps in massively parallel storage and processing. The first was BigTable, a huge distributed database that could be brought up on an inordinately large number of commodity servers and then ingest all the indexing data sent back by Google’s web bots as they found new websites. That’s the database and ingestion point. The second was MapReduce, Google’s invention for churning through all that collected data in big distributed batches: a way of pulling in, processing and quickly sorting the data needed to compute rankings like PageRank and surface the important, highly ranked websites. Yahoo read the white papers Google put out and subsequently built open versions of those technologies, which today power the Yahoo! search engine. Having put this into production and realized the benefits, Yahoo kept it an open source project to lower the threshold for people wanting to get into the Big Data industry, and, just as importantly, to get many programmers’ eyes looking at the source code, adding features, packaging it and debugging what was already there. Hadoop is the name of that bag of software, and it is what a lot of people initially adopt if they are trying to do large-scale collection and analysis of Big Data.
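
    For the curious, here’s a toy sketch in Python of the MapReduce idea itself, counting words. Real Hadoop spreads these two phases across thousands of machines and a distributed filesystem; this just mimics the phases in a single process.

    ```python
    # Toy MapReduce: the map phase emits (key, 1) pairs, the reduce
    # phase sums them per key. Hadoop does exactly this shape of work,
    # only partitioned across a cluster.
    from collections import defaultdict

    def map_phase(docs):
        for doc in docs:
            for word in doc.split():
                yield (word.lower(), 1)

    def reduce_phase(pairs):
        counts = defaultdict(int)
        for word, n in pairs:
            counts[word] += n
        return dict(counts)

    docs = ["the quick brown fox", "the lazy dog"]
    print(reduce_phase(map_phase(docs)))
    # {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
    ```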

    Another discovery along the way toward the Big Data movement was a parallel attempt to overcome the limitations of extending the schema of a typical relational database holding all the incoming indexed websites. Tables and rows and Structured Query Language (SQL) have ruled the day since the late 1970s, and for many kinds of tabular data there is no substitute. However, much of the data being stored now falls into the big amorphous mass of binary large objects (BLOBs) that can slow a traditional database down. So a non-SQL approach was adopted, and systems like BigTable and parts of the Hadoop stack drop SQL’s relational tables and joins in favor of simple key-value storage, just to get the data in and characterize it as quickly as possible, or better yet to re-characterize it by adding elements to the schema after the fact. Whatever you are doing, what you collect might not be structured or easily structurable, so you’re going to need to play fast and loose with it, and you need a database equal to that task. Enter the NoSQL movement, built to collect and analyze Big Data in its least structured form. So my recommendation to anyone trying to fit the square peg of relational databases into the round hole of their unstructured data is to give up. Go NoSQL and get to work.
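
    A quick illustration of the appeal: in a document-style NoSQL store, two records in the same collection need not share a schema, and re-characterizing data after the fact is just setting another key. A sketch using plain Python dicts as stand-ins for stored documents:

    ```python
    import json

    # Two "documents" in one collection, no ALTER TABLE required.
    records = [
        {"_id": 1, "url": "http://example.com", "title": "Example"},
        {"_id": 2, "url": "http://example.org", "links_in": 42,
         "raw_blob": "<crawler payload would go here>"},
    ]

    # Re-characterize record 1 after the fact: just add the key.
    records[0]["links_in"] = 7

    print(json.dumps(records, indent=2))
    ```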

    This first article from ReadWriteWeb is good in that it lays the foundation for what the relational database universe looks like and how you can manipulate it. Having established what is, future articles will look at the quick, dirty workarounds and one-off projects people have come up with to fit their needs, and subsequently which ‘works for me’ solutions have been turned into bigger open source projects that ‘work for others’, as that is where each of these technologies will really differentiate itself. Ease of use and a low threshold to entry will be the deciding factors in many people’s adoption of a NoSQL database, I’m sure.

  • Bye, Flip. We’ll Miss You | Epicenter | Wired.com

    Cisco killed off the much-beloved Flip video camera Tuesday. It was an unglamorous end for a cool device that just a few years earlier shocked us all by coming to dominate the video-camera market, utterly routing established players like Sony and Canon.

    via Bye, Flip. We’ll Miss You | Epicenter | Wired.com.

    I don’t usually write about consumer electronics per se. This particular product got my attention due to its long gestation and its overwhelming domination of a market category that didn’t exist until the Flip created it: the pocket video camera with a built-in flip-out USB connector. Like a USB flash drive with an LCD screen, a lens and one big red button, the Flip pared everything down to the absolute essentials, including the absolute immediacy of online video sharing via YouTube and Facebook. Now the revolution has ended, devices have converged, and many are telling the story of why this happened. Wired.com’s Robert Capps claims Flip lost its way when Cisco got bogged down in the Flip 2 revision, trying to get a WiFi-connected camera out there for people to record their ‘lifestream’.

    Prior to Robert Capps, different writers for different pubs all spouted the conclusion of Cisco’s own media relations folks: the Flip camera was the victim of inevitable convergence, pure and simple. Smartphones, in particular Apple’s iPhone, kept adding features once available only on the Flip. Easy recording, easy sharing, larger resolution, bigger LCD screen, and it could play Angry Birds too! I don’t cotton to that conclusion as fed to us by Cisco. It’s too convenient, and the convergence myth does not account for the one thing the Flip had that the iPhone doesn’t have, has never had and WILL never have: a simple, industry-standard connector. Yes folks, convergence is not simply displacing cherry-picked features from one device and incorporating them into yours. True convergence is picking up all that is BEST about one device and incorporating it, so that fewer and fewer compromises must be made. Which brings me to the issue of the proprietary multi-pin connector Apple has shipped since the early days of the iPod.

    See, the Flip didn’t have a proprietary connector; it had a big old ugly USB connector, just as big and ugly as the ones your mouse and keyboard use to connect to your desktop computer. The beauty of that choice was that the Flip could connect to just about any computer manufactured after 1998 (when USB was first hitting the market). The second thing was that all the apps for playing back the videos you shot, or for cutting them down and editing them, were sitting on the Flip itself, just like on a hard drive, waiting for you to install them on whichever random computer you wanted to use. It didn’t matter whether the computer already had the software; it COULD be installed directly from the Flip itself. Isn’t that slick?! You didn’t have to first search for the software online, download it and install it; it was right there, just double-click and go.

    Compare this to the Apple iOS cul-de-sac we all know as iTunes. Your iPhone, iPod touch or iPad doesn’t get to know your computer simply by communicating through a USB connector. You must first have iTunes installed AND have your proprietary Apple-to-USB cable to link up. Then and only then can your device ‘see’ your computer and the Internet. This gated community provided through iTunes allows Apple to see what you are doing, market directly to you and watch as you connect to YouTube to upload your video, all with the intention of one day acting on that information, maintaining full control at each step along the pathway from shooting your video to sharing it. If this is convergence, I’ll keep my old Flip Mino (non-HD), thank you very much. Freedom (as in choice) is a wonderful thing, and compromising it in the name of convergence (misrecognized as convenience) is no compromise at all. It is a racket, and everyone wants to sell you on the ‘good’ points of the racket. I am not buying it.

  • Intel lets outside chip maker into its fabs • The Register

    According to Greg Martin, a spokesman for the FPGA maker, Achronix can compete with Xilinx and Altera because it has, at 1.5GHz in its current Speedster1 line, the fastest such chips on the market. And by moving to Intel’s 22nm technology, the company could have ramped up the clock speed to 3GHz.

    via Intel lets outside chip maker into its fabs • The Register.

    That kind of says it all in one sentence, or two sentences in this case. The fastest FPGA on the market is quite an accomplishment unto itself. Putting that FPGA on the world’s most advanced production line and silicon wafer technology is what Andy Grove would have called the 10X effect. FPGAs are reconfigurable processors whose circuits can be re-routed and optimized for different tasks over and over again. This is really beneficial for very small batches of processors where you need a custom design. Among the things they can speed up are heavy math and lookups across very large databases. In the past I was always curious whether they could be used as a general purpose computer which could switch gears and optimize itself for different tasks. I didn’t know whether it would work or be worthwhile, but it really seemed like there was a vast untapped reservoir of power in the FPGA.

    Some supercomputer manufacturers have started using FPGAs as special purpose co-processors and have found immense speed-ups as a result. Oil prospecting companies have also used them to speed up analysis of seismic data and place good bets on dropping a well bore in the right spot. But price has always been a big barrier to entry: as quoted in this article, the cost is $1,000 per chip, which limits the appeal to buyers for whom price is no object and speed and time matter more. The two big competitors in the field of FPGA manufacturing are Altera and Xilinx, both of which design the chips but have them manufactured in other countries. This has left FPGAs second-class citizens, built with older-generation process technologies on older manufacturing lines. They always had to make do with what they could get, and performance in terms of clock speed was always lower too.

    During the megahertz and gigahertz wars it was not unusual to see chip speeds increase every month. FPGAs sped up too, but not nearly as fast; I remember seeing 200MHz and 400MHz touted for the top-of-the-line Xilinx and Altera products. With Achronix running at 1.5GHz, things have changed quite a bit. That’s general purpose CPU speed in a completely customizable FPGA, which makes the FPGA even more useful. However, instead of going faster, this article points out that people would rather buy the same speed while using less electricity and generating less heat. There’s no better way to do this than to shrink the size of the circuits on the FPGA, and that is the core competence of Intel. The two companies have just teamed up to put the Achronix FPGA on the smallest-feature-size production line run by the most optimized, cost-conscious manufacturer of silicon chips bar none.

    Another point made in the article is that the market for FPGAs at this level of performance tends to be defense-contract oriented. To maintain the level of security necessary to sell chips to that industry, the chips need to be made in the good ol’ USA, and Intel doesn’t outsource anything when it comes to its top-of-the-line production facilities. Everything is in Oregon, Arizona or New Mexico and is guaranteed not to have any secret backdoors built in to funnel data to foreign governments.

    I would love to see some university research projects start looking at FPGAs again and see whether, as speeds go up and power draw goes down, there’s a happy medium or a mix of general purpose CPUs and FPGAs that might help the average Joe working on his desktop, laptop or iPad. All I know is that Intel entering a market makes it more competitive, and hopefully it will lower the barrier to entry for anyone who would really like to get their hands on a useful processor they can customize to their needs.

  • October 6, 2010 | BI Incorporated

    “We believe the issue is resolved as we have expanded the database threshold to more than 1 trillion records. In the meantime, we are working with Microsoft to develop a warning system on database thresholds so we can anticipate these issues in the future.”

    via October 6, 2010 | BI Incorporated.

    This is the key phrase regarding the recent event in which BI stopped sending out alerts for the offenders it was tracking on behalf of corrections agencies around the country. A company like this should do everything it can to design its tracking systems so an eventuality like this doesn’t happen. How long before they bump up against the new 1 trillion record limit? I ask you. Let’s go back to the original article as it was posted on BBC News Online:

    Thousands of US sex offenders, prisoners on parole and other convicts were left unmonitored after an electronic tagging system shut down because of data overload.

    BI Incorporated, which runs the system, reached its data threshold – more than two billion records – on Tuesday.

    This left authorities across 49 states unaware of offenders’ movement for about 12 hours.

    BI increased its data storage capacity to avoid a repeat of the problem.

    Prisons and other corrections agencies were blocked from getting notifications on about 16,000 people, BI Incorporated spokesman Jock Waldo said on Wednesday.

    So I have a question: how do 16,000 people produce 2 billion records in the database? Do the arithmetic: 2 billion records across 16,000 offenders is roughly 125,000 records per person; at one GPS ping per minute, that’s only about three months of movement history each. Is that really all they are doing? How much old junk data are they keeping for legal purposes, or just because they can keep it for potential future use? And how is it that a company depends on Microsoft to bail it out of such a critical situation? This seems like a very amateurish mistake, one that could have been avoided by anyone with the title of Database Administrator monitoring the server on a regular basis. They should have known the thing was hitting an upper limit months ago and started rolling out a new database and migrating records into it. This also shows the fundamental flaw in using SQL-based record keeping for so-called real-time data; Facebook gave up on it long ago, as did Google. Rows and tables and real-time updates don’t scale well. And if you cannot employ a Database Administrator to tell you when you are hitting a critical limit, but instead dump it on the vendor, well, good luck with that one, guys.
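
    For what it’s worth, the watchdog I’m describing is a few lines of code. A hypothetical sketch in Python follows; the table name, ceiling and alert threshold are invented for illustration, with sqlite3 standing in for whatever database BI actually runs.

    ```python
    import sqlite3  # stand-in for the real RDBMS

    HARD_LIMIT = 2_000_000_000   # the ceiling BI reportedly hit
    WARN_AT = 0.8                # start complaining at 80% of capacity

    def check(conn):
        (rows,) = conn.execute(
            "SELECT COUNT(*) FROM tracking_events").fetchone()
        pct = rows / HARD_LIMIT
        print(f"{rows:,} rows ({pct:.0%} of limit)")
        if pct >= WARN_AT:
            print("WARNING: schedule the migration NOW")
        return rows

    # Demo against an in-memory database; the real thing would point
    # at the production server and run nightly from a scheduler.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tracking_events (offender_id INT, ts TEXT)")
    check(conn)
    ```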


  • Custom superchippery pulls 3D from 2D images like humans • The Register

    Computing brainboxes believe they have found a method which would allow robotic systems to perceive the 3D world around them by analysing 2D images as the human brain does – which would, among other things, allow the affordable development of cars able to drive themselves safely.

    via Custom superchippery pulls 3D from 2D images like humans • The Register.

    The beauty of this new work is that they designed a custom processor using a Xilinx Virtex-6 FPGA (Field Programmable Gate Array). An FPGA, for those who don’t know, is a computer chip that you can ‘re-wire’ through software to take on whatever computational task you can dream up. In the old days this would have required a custom chip to be engineered, validated and manufactured at great cost; with an FPGA you need only a development kit and the chip itself. You can then optimize every step of the computation and speed things up far beyond a general purpose processor (like the Intel chip that powers your Windows or Mac computer). In this research, the custom-designed circuitry uses video images to decide where in the world a robot can safely drive as it maneuvers around on the ground. I know Hans Moravec has done a lot with this at Carnegie Mellon, and this group is from Yale’s engineering department, which is encouraging: the techniques are being embraced and extended by another U.S. university. The low power of this processor and its facility for processing video images in real time are ahead of their time, and hopefully it will find commercial application in robotics or automotive safety controls. As for me, I’m still hoping for a robot chauffeur.

  • Buzz Bombs in the News – Or the Wheel Reinvented

    Slashdot just posted this article for all to read on the Interwebs

    penguinrecorder writes: “The Thunder Generator uses a mixture of liquefied petroleum, cooking gas, and air to create explosions, which in turn generate shock waves capable of stunning people from 30 to 100 meters away. At that range, the weapon is relatively harmless, making people run in panic when they feel the sonic blast hitting their bodies. However, at less than ten meters, the Thunder Generator is capable of causing permanent damage or killing people.”

    I went directly to the article itself and read it. It was very straightforward, more or less indicating this new shockwave gun was an adaptation of the propane-powered “scarecrows” used to budge and shift birds from farm fields in Israel.

    http://www.defensenews.com/story.php?i=4447499&c=FEA&s=TEC

    TEL AVIV – An Israeli-developed shock wave cannon used by farmers to scare away crop-threatening birds could soon be available to police and homeland security forces around the world for nonlethal crowd control and perimeter defense.

    I think Mark Pauline and Survival Research Labs beat the Israelis to the punch in inventing the so-called cannon:

    http://srl.org.nyud.net:8090/srlvideos/machinetests/bigpulsejetQT300.mov

    Prior to Mark Pauline and Survival Research Labs, the German military in WW2 adapted the pulse jet for the V-1 buzz bomb. In short, a German terror weapon has indirectly become the product of an Israeli defense contractor. Irony explodes. The V-1’s pulse jet itself drew on the work of the French inventor Georges Marconnet. Everything old is new again in the war on terror. Some good ideas never die; they just get re-invented, like the wheel.

  • The Eternal Value of Privacy by Bruce Schneier

    Two proverbs say it best: Quis custodiet custodes ipsos? (“Who watches the watchers?”) and “Absolute power corrupts absolutely.”

    via The Eternal Value of Privacy.

    Nobody is the final authority when it comes to monitoring and privacy. There is no surer example than this: when Stalin died, the rules changed. When the East German state ended, the Stasi went away. When the U.S. invaded Iraq, Saddam Hussein fled from power. Those in power try to cleanse their country of all who oppose them (the wrong-thinkers); then their power evaporates, they vanish, and all the rules change again. The same is true of Bush 43.

    George W. Bush was here; now he’s gone. So why not dismantle all that surveillance gear the NSA put into the network facilities at AT&T and Sprint? The rules have changed. You don’t need to acquiesce to the current administration, because it’s not the same people making the same demands. Yet as the events of Christmas Day proved, there’s always a Jaws-like shark fin rising and falling out there in the ocean. The threat is very close by, and we have to be ever vigilant; so the watchers’ claim of authority is re-established with each and every tragic episode. Still, is a single incident cause for the continued erosion of our right to privacy? Given the hair-trigger responses and instant reprisals we try to architect, it’s obvious to me that under the current structure this can never end. So in order to stop the erosion, we need to change our thinking about the threat. True, no one wants to be fearful of flying wherever they may go. And when they go, they don’t want to be faced with having to kill a fellow passenger in order to save themselves, but that’s the situation we have mentally put ourselves in.

    The only way out is to change our thinking. Change how we think about the danger, and you change how much of our freedom we are willing to give up in response. And maybe we can get back to where we once belonged.