Category: wired culture

Those promoters and bandwagoneers of everything in the Fy00tcha!

  • Distracting chatter is useful. But thanks to RSS (remember that?) it’s optional. (via Jon Udell)

    I too am a big believer in RSS. And while I am dipping my toes into Facebook and Twitter, the bulk of my consumption still comes from the big Blogroll I’ve amassed and refined going back to my Radio Userland days in 2002.

    When I left the pageview business I walked away from an engine that had, for many years, manufactured an audience for my writing. Four years on I’m still adjusting to the change. I always used to cringe when publishers talked about using content to drive traffic. Of course when the traffic was being herded my way I loved the attention. And when it wasn’t I felt — still feel — its absence. There are plenty of things I don’t miss, though. Among t …

    via Jon Udell

  • Kim Cameron returns to Microsoft as indie ID expert • The Register

    Cameron said in an interview posted on the ID conference’s website last month that he was disappointed about the lack of an industry advocate championing what he has dubbed “user-centric identity”, which is about keeping various bits of an individual’s online life totally separated.

    via Kim Cameron returns to Microsoft as indie ID expert • The Register.

    CRM, meet VRM: we want our identity separated. This is one of the goals of Vendor Relationship Management, as opposed to “Customer Relationship” Management. I want to share a set of very well-defined details with Windows Live, Facebook, Twitter, and Google. But instead I exist as separate entities that each service then tries to aggregate and profile, to learn more about what I do outside its respective WebApp. So if someone can champion my ability to control what I share with which online service, all the better. If Microsoft understands this, it is possible someone like Kim Cameron will be able to accomplish some big things with Windows Live ID logins and profiles. Otherwise, this is just another attempt to capture web traffic into a commercial private Intraweb. I count Apple, Facebook and Google as private Intraweb competitors.

  • JSON Activity Streams Spec Hits Version 1.0

    The Facebook Wall is probably the most famous example of an activity stream, but just about any application could generate a stream of information in this format. Using a common format for activity streams could enable applications to communicate with one another, and presents new opportunities for information aggregation.

    via JSON Activity Streams Spec Hits Version 1.0.

    Remember Mash-ups? I recall the great wide wonder of putting together web pages that used ‘services’ provided for free through APIs published for anyone who wanted to use them. There were many at one time; some still exist and others have been culled. But as newer social networks begat yet newer ones (MySpace, Facebook, Foursquare, Twitter), none of the ‘outputs’ or feeds of any single one was anything more than a way of funneling you into its own login accounts and user screens. So the gated community first requires you to be a member in order to play.

    We went from ‘open’ to cul-de-sac and stovepipe in less than one full revision of social networking. However, maybe all is not lost; maybe an open standard can help folks re-use their own data at least (maybe I could mash up my own activity stream). Betting on whether this will take hold and see wider adoption by social networking websites would be risky. Likely each service provider will closely hold most of the data it collects and only publish the bare minimum necessary to claim compliance.

    Another burden upon this sharing is the slowly creeping concern about the security of one’s own Activity Stream. It will no doubt have to be opt-in, and definitely not opt-out, as I’m sure people are more used to having fellow members of their tribe know what they are doing than putting out a feed of what they are doing to the whole Internet. Which makes me think of the old discussion of being able to fine-tune who has access to what (Doc Searls’s old Vendor Relationship Management idea). Activity Streams could easily fold into that universe, where you regulate which threads of the stream are shared with which people. I would only really agree to use this service if it had that fine-grained level of control.
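
    To make the format concrete, here is a sketch of a single activity under the 1.0 spec. The names and URLs are invented, but the published/actor/verb/object shape is the heart of the format; piping it through a JSON pretty-printer is just a cheap well-formedness check.

```bash
# A minimal JSON Activity Streams 1.0 activity (identifiers are made up),
# pretty-printed to confirm it is well-formed JSON.
cat <<'EOF' | python -m json.tool
{
  "published": "2011-07-09T12:00:00Z",
  "actor": {
    "objectType": "person",
    "id": "acct:example-user@example.net",
    "displayName": "Example User"
  },
  "verb": "post",
  "object": {
    "objectType": "note",
    "id": "http://example.net/notes/1",
    "content": "Remember mash-ups?"
  }
}
EOF
```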

  • Stop Blaming the Customers – the Fault is on Amazon Web Services – ReadWriteCloud

    Almost as galling as the Amazon Web Services outage itself is the litany of blog posts, such as this one and this one, that place the blame not on AWS for having a long failure and not communicating with its customers about it, but on AWS customers for not being better prepared for an outage.

    via Stop Blaming the Customers – the Fault is on Amazon Web Services – ReadWriteCloud.

    As Klint Finley points out in his article, everyone seems to be blaming the folks who ponied up money to host their websites/webapps on the Amazon data center cloud. Until the outage, I was not really aware of the ins and outs, workflow and configuration required to run something on Amazon’s infrastructure. I am small-scale, small potatoes, mostly relying on free services which, when they work, are great, and when they don’t work, meh! I can take them or leave them; my livelihood doesn’t depend on them (thank goodness). But those who do depend on uptime and pay money for it need some greater level of understanding from their service provider.

    Amazon doesn’t make things explicit enough to follow a best practice in configuring your website installation using their services. It appears some businesses had no outages (despite not following best practices), while some folks had long outages even though they had set everything up ‘by the book’. The services at the center of the outage were the Relational Database Service (RDS) and Elastic Block Store (EBS). Many websites use databases to hold the contents of the website, collect data and transaction information, gather metadata about users’ likes/dislikes, etc., and EBS acts as the container for the data behind RDS. If you have things set up correctly, things fail gracefully: duplicate RDS and EBS containers elsewhere in the Amazon data center cloud take over and continue responding to people clicking on things and typing in information on your website, instead of throwing up error messages or not responding at all (in a word, it just magically continues working). However, if you don’t follow the “guidelines” as specified by Amazon, all bets are off, and you’ve wasted money paying double for the more robust, fault-tolerant failover service.
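
    For the record, the knob at the center of all this is the Multi-AZ option on RDS: pay double, and Amazon keeps a synchronous standby copy of your database in a second availability zone. A hedged sketch using Amazon’s present-day command line tool (the instance names and password are placeholders, and the CLI itself arrived after this outage):

```bash
# Create an RDS database with a standby replica in a second availability
# zone; the --multi-az flag is the fault-tolerant failover option
# discussed above. All identifiers here are made-up examples.
aws rds create-db-instance \
    --db-instance-identifier mysite-db \
    --db-instance-class db.t3.micro \
    --engine mysql \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password 'replace-me-please' \
    --multi-az
```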

    Most people don’t care about this, especially if they weren’t affected by the outages. But the business owners who suffered, and the customers they are liable to, definitely do. So if the entrepreneurial spirit bites you and you’re very interested in online commerce, always be aware: nothing is free, and especially nothing is free if you pay for it and don’t get what you paid for. I would hope a leading online commerce company like Amazon could do a better job and, in future, make good on its promises.

  • Data hand tools – O’Reilly Radar

    Whenever you need to work with data, don’t overlook the Unix “hand tools.” Sure, everything I’ve done here could be done with Excel or some other fancy tool like R or Mathematica. Those tools are all great, but if your data is living in the cloud, using these tools is possible, but painful. Yes, we have remote desktops, but remote desktops across the Internet, even with modern high-speed networking, are far from comfortable. Your problem may be too large to use the hand tools for final analysis, but they’re great for initial explorations. Once you get used to working on the Unix command line, you’ll find that it’s often faster than the alternatives. And the more you use these tools, the more fluent you’ll become.

    via Data hand tools – O’Reilly Radar.

    This is a great remedial refresher on the Unix command line, and for me it reinforces an idea I’ve had that when it comes to computing, We Live Like Kings. What? How is that possible? Well, thinking about what you are trying to accomplish and finding the least complicated, quickest way to that point is a dying art. More often one is forced, or highly encouraged, to set out on a journey with very well-defined protocols/rituals included. You must use the APIs, the tools, the methods as specified by your group. Things falling outside that orthodoxy are frowned upon no matter the speed and accuracy of the result. So doing it quick and dirty using some shell scripting and utilities is going to be embarrassing for those unfamiliar with those same tools.

    My experience doing this involved a very low-end attempt to split web access logs into nice neat bits that began and ended on certain dates. I used grep, split, and a bunch of binaries I borrowed for doing log analysis and formatting the output into a web report. Overall it didn’t take much time, and required very little downloading, uploading, uncompressing, etc. It was all command-line based, with all the output dumped to a directory on the same machine. I probably spent 20 minutes every Sunday running these by hand (as I’m not a cron job master, much less an at job master). None of the work I did was mission critical, other than being a barometer of how much use the websites were getting from the users. I realize now I could have automated the whole works, with variables set up in the shell script to accommodate running on different days of the week, time changes, etc. But editing the scripts by hand in the vi editor only made me quicker and more proficient in vi (which I still gravitate toward using even now).
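
    For flavor, here is a sketch of what that Sunday ritual can look like. The log path and the Apache-style date format are assumptions, but the grep-and-pipe spirit is exactly what the article celebrates:

```bash
#!/bin/sh
# Slice one day's entries out of an Apache-style access log, then count
# hits per day across the whole log. Paths and dates are invented.
LOG=/var/log/apache2/access.log

# Grab everything logged on a given date into its own neat bit.
grep '21/Apr/2011' "$LOG" > 2011-04-21.log

# Hits per day: field 4 looks like "[21/Apr/2011:10:00:00", so strip
# the time and the bracket, then count occurrences of each date.
awk '{print $4}' "$LOG" | cut -d: -f1 | tr -d '[' | sort | uniq -c
```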

    And as low-end as my needs were, and as little experience as I had initially using these tools, I am grateful for the time I spent doing it. I feel so much more comfortable knowing I can figure out how to do these tasks on my own, pipe outputs into inputs for other utilities, and get useful results. I think I understand it, though I’m not a programmer and couldn’t really leverage higher-level things like data structures to get work done. I’m a brute-force kind of guy, and given how fast the CPUs are running, a few ugly, inefficient recursions aren’t going to kill me or my reputation. So here’s to Mike Loukides’s article and how much it reminds me of what I like about Unix.

  • Showcase Your Skills & Analyze Which Skills Are Trending With LinkedIn’s New Tool

    Professional network LinkedIn has just introduced the beta launch of a new feature LinkedIn Skills, a way for you to search for particular skills and expertise, and of course, showcase your own and in LinkedIn’s words, “a whole new way to understand the landscape of skills & expertise, who has them, and how it’s changing over time.”

    via Showcase Your Skills & Analyze Which Skills Are Trending With LinkedIn’s New Tool.

    It may not seem that important at first, especially if people don’t keep their profiles up to date on LinkedIn. However, for the large number of ‘new’ users who are in the job market actively seeking positions, I’m hoping those data will be more useful. They might be worth following over time to see what demand there is for those skills in the marketplace. That is the promise, at least. My concern, though, is that just as grades have inflated over time at most U.S. universities, skills too will be overstated, lied about, and become very untrustworthy as people try to compete with one another on LinkedIn.

  • OpenID: The Web’s Most Successful Failure | Wired.com

    First 37Signals announced it would drop support for OpenID. Then Microsoft’s Dare Obasanjo called OpenID a failure (along with XML and AtomPub). Former Facebooker Yishan Wong’s scathing (and sometimes wrong) rant calling OpenID a failure is one of the more popular answers on Quora.

    But if OpenID is a failure, it’s one of the web’s most successful failures.

    via OpenID: The Web’s Most Successful Failure | Webmonkey | Wired.com.

    I was always of the mind that single sign-on is a good thing, not bad. Any service, whether for work or outside of work, that can re-use an identifier and authentication should make things easier to manage and possibly be more secure in the long run. There are proponents for and against anything that looks or acts like single sign-on. Detractors always argue that if one of the services gets hacked, attackers can somehow gain access to your password and identity and hack into your accounts on all the other systems out there. In reality, with a typical single sign-on service you don’t ever send a password to the place you’re logging into (unless it’s the source of record, like the website that hosts your OpenID). Instead you send something more like a scrambled message that only you could have originated and which the website you’re logging into can verify, vouched for by your OpenID provider, the source of record for your identity online. So nobody is storing your password, and nobody is able to hack into all your other accounts when they hijack your favorite web service.
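
    To be clear, this is not the actual OpenID wire protocol (OpenID works through browser redirects and signed assertions between provider and site), but the underlying trick, proving a message could only have come from you without ever handing over a password, can be played with in a toy way using openssl:

```bash
# Toy version of "prove it's me without sending a password".
# Make a keypair, sign a message, verify it with only the public key.
openssl genrsa -out me.pem 2048              # private key: never leaves me
openssl rsa -in me.pem -pubout -out me.pub   # public key: shared freely

echo "it is really me logging in" > msg.txt
openssl dgst -sha256 -sign me.pem -out msg.sig msg.txt          # the "scrambled message"
openssl dgst -sha256 -verify me.pub -signature msg.sig msg.txt  # the site checks it
```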

    Where I work, I was a strong advocate for centralized identity management like OpenID. Some people thought the only use for this was as a single sign-on service. But real centralized identity management also encompasses the authorizations you have once you have declared and authenticated your identity. And it’s the authorization that is key to what is really useful about a single sign-on service.

    I may be given a ‘role’ within someone’s website or page on a social networking website that either adds or takes away levels of privilege with the person who has declared me a ‘friend’. And if they want to ‘redefine’ my level of privilege, all they have to do is change the privileges for that ‘role’, not for me personally, and all my levels of access change accordingly. Why? Because a role is kind of like a rank or a group membership. Just as everyone in the army who is an officer can enjoy benefits like attending an officers’ club because they hold the role of officer, I can see more of a person’s profile or personal details because I have been declared a friend. Nowhere in this is it necessary to define specific restrictions or levels of privilege for me individually! It’s all based on my membership in a group. And if someone wants to eliminate that group or change the permissions of all its members, they do it once, and only once, to the definition of that role, and it cascades out to all the members from that point on. So OpenID can be authentication (which is where most people stop), and it can additionally be authorization (what am I allowed and not allowed to do once I prove who I am). It’s a very powerful and poorly understood capability.
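
    Plain old Unix groups are a handy small-scale model of this: permissions attach to the role, membership attaches to people, and a change to either cascades automatically. A sketch, with invented usernames and filenames:

```bash
# Role-based access with Unix groups: "friends" is the role.
sudo groupadd friends                  # define the role once
sudo usermod -aG friends alice         # declare alice a 'friend'

chgrp friends vacation-photos.html     # attach the resource to the role
chmod 640 vacation-photos.html         # owner read/write, role read, others nothing

sudo gpasswd -d alice friends          # 'unfriend': alice loses access at once
```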

    The widest application I’ve seen so far using something like OpenID is the Facebook ‘sign-on’ service that allows you to make comments on articles on news websites and weblogs. Disqus is a third-party provider that acts as a hub for anyone who wants to re-use someone’s Facebook or OpenID credentials to prove that they are real and not a rogue spambot. That chain of identity is maintained by Disqus providing the plumbing back to whichever of the many services someone might be subscribed to or participate in. I already have an OpenID, but I also have a Facebook account; Disqus will allow me to use either one. Given how much information might be passed along by Facebook through a third party (something it is notorious for allowing Applications to do), I chose to use my OpenID, which more or less says I am X user at X website and I am the owner of that website as well. A chain of authentication just good enough to allow me to make comments on an article is what OpenID provides. Not too much information, just enough information travels back and forth. And because of this precision, doing away with all the unneeded private detail and with having to create an account on the website hosting the article, I can freely come and go as I please.

    That is the lightweight joy of OpenID.

  • Big Web Operations Turn to Tiny Chips – NYTimes.com

    Stephen O’Grady, a founder at the technology analyst company RedMonk, said the technology industry often has swung back and forth between more standard computing systems and specialized gear.

    via Big Web Operations Turn to Tiny Chips – NYTimes.com.

    A little tip of the hat to Andrew Feldman, CEO of SeaMicro, the startup that announced its first product last week. The giant 512-CPU computer is covered in this NYTimes article to spotlight the ‘exotic’ technologies, both hardware and software, that some companies use to deploy huge web apps. It’s part NoSQL, part low-power massive parallelism.

  • Genius Inventor Alan Kay Reveals All then gets stiffed by the App Store

    I wonder: Is there an opportunity for Alan Kay’s Dynabook? An iPad with a Squeak implementation that enables any user to write his or her own applications, rather than resorting to purchasing an app?

    via Did Steve Jobs Steal The iPad? Genius Inventor Alan Kay Reveals All. (source: 6:20 PM – April 17, 2010 by Wolfgang Gruener, Tom’s Hardware)

    Apple earlier this month instituted a new rule that also effectively blocks meta-platforms: clause 3.3.1, which stipulates that iPhone apps may only be made using Apple-approved programming languages. Many have speculated that the main target of the new rule was Adobe, whose CS5 software, released last week, includes a feature to easily convert Flash-coded software into native iPhone apps.

    Some critics expressed concern that beyond attacking Adobe, Apple’s policies would result in collateral damage potentially stifling innovation in the App Store. Scratch appears to be a victim despite its tie to Jobs’ old friend.

    Apple Rejects Kid-Friendly Programming App (April 20, 2010 2:15 pm)

    What a difference 3 days makes, right? Tom’s Hardware did a great retrospective on the ‘originality’ of the iPad and surveyed a heck of a lot of computer history along the way. At the end of the article they plug Scratch, Alan Kay’s Squeak-based programming environment. It is a free application used to create new graphical programs and to teach mathematical problem-solving through writing programs in Scratch. The iPad was the next logical step in the distribution of the program, giving kids free access to it whenever, and on whatever platform, was available. But three days later the announcement came out that the Apple App Store, the only venue by which to purchase or even download software onto the iPhone or the iPad, had roundly rejected Scratch. The App Store will not allow it to be downloaded, and that’s the end of that. The reasoning is that Scratch (which is really a programming tool) has an interpreter built in, which allows it to execute the programs written within its programming environment. Java does this, Adobe Flash does this; it’s common with anything that’s like a programming tool. But Apple has forbidden anything that looks, sounds, or smells like a potential way of hijacking or hacking into its devices. So Scratch and Adobe Flash are now both forbidden to run on the Apple iPad. How quickly things change, don’t they, especially when the Tom’s Hardware article presents Alan Kay and Steve Jobs as really friendly toward one another.

  • Garmin brings first Android phone to US through T-Mobile | Electronista

    As a phone, Garmin’s entry occupies the lower mid-range with a three-megapixel camera, native T-Mobile 3G and Wi-Fi. Built-in storage hasn’t been mentioned but should be enough to carry offline maps in addition to the usual app and media storage.

    via Garmin brings first Android phone to US through T-Mobile | Electronista.

    After its first attempt to create a Garmin-branded phone, the G60, Garmin is back once again with the A50, this time making a much more strategic choice by adopting an open platform: Google’s Android phone OS. I wrote about Garmin’s response to the coming Smartphone onslaught against its dominance of the GPS navigation market after I read this article in the NYTimes: Move Over GPS, Here Comes the Smartphone (July 8, 2009). At that time Navigon, which had been in the GPS navigation market, dropped out of devices and went to software-only licensing to device manufacturers. Whispers and rumors indicated TomTom was going to license its software as well, and by Fall 2009 TomTom had shipped an iPhone version of its product. It looked like the kind of paradigm shift that kills an industry overnight: GPS navigation was evolving into a software-only industry, with devices themselves better handled by the likes of Samsung, Apple, etc. When the Garmin nuviphone finally reached the market, the only review I found was on Consumer Reports, and it was not overly positive in touting what the phone did differently from a standalone navigation unit. Worse yet, Garmin had spent two years developing the device only to have it hit the market trumped by the TomTom iPhone App. It was a big mistake, and one likely to make Garmin more wary of another attempt at making a device.

    Hope springs eternal, it seems, at Garmin. They have taken a different tack and are now going the open systems route (to an extent). It seems they don’t have to invent everything themselves: they can still manufacture devices and provide software, but they don’t have to also create an OS that allows things to be modularly integrated (phone and GPS). And given that they chose Android, things can only get better. I say this in part because over time it has become obvious to me that Google is a real fan of GPS navigation and certainly of Maps.

    When I bought my first GPS unit from Garmin, I discovered that you can save routes directly out of Google Maps into a format that a Garmin GPS receiver can use. In the past, Garmin forced its users to first purchase a PC application that allowed you to plan and plot routes and then save them back to your receiver. Later it was made less expensive, and eventually it was included with the purchase of new units. I’ve seen screenshots of this software, and it was clunky, black and white, and more like a cartography program than a route planner. Google Maps, on the other hand, was as fast and intuitive as driving your car: you click on a start point and an end point, and it draws the route right on top of the satellite photos. You can zoom in and out and see, actually see, points of interest along your route. It seems in one stroke Google Maps stole route planning away from Garmin.
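
    The usual interchange format on the Garmin side is GPX, an open XML schema that Garmin itself published; whatever tool does the exporting, a minimal waypoint file (invented names and coordinates) looks something like this, written out from the shell:

```bash
# Write a minimal GPX file of the sort a Garmin receiver can import.
# The creator string, names, and coordinates are made-up examples.
cat > route.gpx <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<gpx version="1.1" creator="hand-rolled example"
     xmlns="http://www.topografix.com/GPX/1/1">
  <wpt lat="37.4220" lon="-122.0841"><name>Start</name></wpt>
  <wpt lat="37.7749" lon="-122.4194"><name>Finish</name></wpt>
</gpx>
EOF
```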

    In the intervening time Google also decided to get into the Smartphone business to compete with Apple. Many of Google’s web apps are accessed through iPhones, so why not tap into that user base, who might be willing to adopt a device from the same people running the data center and the applications hosted in it? It might not be a huge number of users, but Google has money and time and can continuously improve anything it does until it becomes the most competitive player in a market it has chosen to enter. Tying this all together, one can see the logical progression from Google Maps to a Google Smartphone. And Google even came up with prototypes showing what this might look like:

    Google Shrinks Another Market With Free Turn-By-Turn Navigation – O’Reilly Radar (December 7, 2009)

    Google made a video showing how Google Maps and Street View could be integrated on an Android 2.0 device. And it looked good. It was everything someone could have wanted: navigation, text-to-speech directions, the ability to zoom in and out, and a jump into Street View to get an accurate photo of the street address. There were some bits of unpolished user interface that they still needed to work on, but prototypes and demos are always rough.

    The video they posted led me to believe I would stick with my Garmin device, as it still had some logical organization that it would take years for Google to finally hit upon. My verdict was to wait and see what happened next. With Garmin’s announcement today, though, things are even a little more interesting than I thought they would be. I can’t wait to see the demo of the final device when it ships. I definitely want to see how they integrate the navigation interface with the web-based Google Maps. If they’re separated as different apps, that’s okay, I guess, but a mashup of Garmin navigation and Google Maps with Street View would be a killer app. Mix in a live network connection for updates on traffic, construction, and points of interest, and there’s no telling how high they will fly. Look at this video from MobileBurn.com:

    Now all I need is a robot chauffeur to drive my car for me.