I too am a big believer in RSS. And while I am dipping my toes into Facebook and Twitter, the bulk of my consumption goes into the big Blogroll I’ve amassed and refined going back to Radio Userland days in 2002.
via Jon Udell
Those promoters and bandwagoneers of everything in the Fy00tcha!
Cameron said in an interview posted on the ID conference’s website last month that he was disappointed about the lack of an industry advocate championing what he has dubbed “user-centric identity”, which is about keeping various bits of an individual’s online life totally separated.
via Kim Cameron returns to Microsoft as indie ID expert • The Register.
CRM, meet VRM: we want our Identity separated. This is one of the goals of Vendor Relationship Management, as opposed to “Customer Relationship” Management. I want to share a well-defined set of details with Windows Live!, Facebook, Twitter, and Google. Instead, I exist as a separate entity on each service, and each of them tries to aggregate and profile me to learn more beyond what I do on their respective WebApps. So if someone can champion my ability to control what I share with which online service, all the better. If Microsoft understands this, it is possible someone like Kim Cameron will be able to accomplish some big things with Windows Live! ID logins and profiles. Otherwise, this is just another attempt to capture web traffic into a commercial private Intraweb. I count Apple, Facebook and Google as Private Intraweb competitors.
Almost as galling as the Amazon Web Services outage itself is the litany of blog posts, such as this one and this one, that place the blame not on AWS for having a long failure and not communicating with its customers about it, but on AWS customers for not being better prepared for an outage.
via Stop Blaming the Customers – the Fault is on Amazon Web Services – ReadWriteCloud.
As Klint Finley points out in his article, everyone seems to be blaming the folks who ponied up money to host their websites/webapps on the Amazon data center cloud. Until the outage, I was not really aware of the ins and outs, workflow and configuration required to run something on Amazon’s infrastructure. I am small-scale, small potatoes, mostly relying on free services which, when they work, are great, and when they don’t work, meh! I can take them or leave them; my livelihood doesn’t depend on them (thank goodness). But those who do depend on uptime and pay money for it need a greater level of understanding from their service provider.
Amazon doesn’t make it explicit enough how to follow best practices when configuring a website installation using its services. It appears some businesses had no outages (but didn’t follow best practices), while some folks who had set everything up ‘by the book’ following best practices had long outages anyway. The services at the center of the outage were the Relational Database Service (RDS) and Elastic Block Store (EBS). Many websites use databases to hold the contents of the website, collect data and transaction information, collect metadata about users’ likes/dislikes, etc. The Elastic Block Store acts as the container for the data in the RDS. If you have things set up correctly, things fail gracefully when your website goes down: you have duplicate RDS and EBS containers in the Amazon data center cloud that take over and keep responding to people clicking on things and typing in information on your website, instead of throwing up error messages or not responding at all (in a word, it just magically continues working). However, if you don’t follow the “guidelines” as specified by Amazon, all bets are off; you wasted money paying double for the more robust, fault-tolerant failover service.
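The knob at the heart of all this is RDS’s Multi-AZ option, which keeps a standby replica in a second Availability Zone. As a rough illustration only (this uses today’s AWS command line tool, which postdates the outage, and every name and credential here is made up), turning it on looks something like this:

    # Hypothetical sketch: create an RDS database with Multi-AZ failover,
    # so a standby copy in another Availability Zone takes over if the
    # primary (and its EBS volumes) goes down. All names are invented.
    aws rds create-db-instance \
        --db-instance-identifier my-store-db \
        --db-instance-class db.m1.small \
        --engine mysql \
        --allocated-storage 20 \
        --master-username admin \
        --master-user-password 'change-me' \
        --multi-az

You pay roughly double for the standby, which is exactly the bet some of the affected customers made and still lost.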
Most people don’t care about this, especially if they weren’t affected by the outages. But the business owners who suffered, and the customers they are liable to, definitely do. So if the entrepreneurial spirit bites you and you’re very interested in online commerce, always be aware: nothing is free, and especially nothing is free if you pay for it and don’t get what you paid for. I would hope a leading online commerce company like Amazon could do a better job and, in the future, make good on its promises.
Whenever you need to work with data, don’t overlook the Unix “hand tools.” Sure, everything I’ve done here could be done with Excel or some other fancy tool like R or Mathematica. Those tools are all great, but if your data is living in the cloud, using these tools is possible, but painful. Yes, we have remote desktops, but remote desktops across the Internet, even with modern high-speed networking, are far from comfortable. Your problem may be too large to use the hand tools for final analysis, but they’re great for initial explorations. Once you get used to working on the Unix command line, you’ll find that it’s often faster than the alternatives. And the more you use these tools, the more fluent you’ll become.
This is a great remedial refresher on the Unix commandline, and for me it kind of reinforces an idea I’ve had that when it comes to computing We Live Like Kings. What? How is that possible? Well, thinking about what you are trying to accomplish and finding the least complicated, quickest way to that point is a dying art. More often one is forced, or at least highly encouraged, to set out on a journey with very well defined protocols/rituals included. You must use the APIs, the tools, the methods as specified by your group. Things falling outside that orthodoxy are frowned upon no matter what the speed and accuracy of the result. So doing it quick and dirty using some shell scripting and utilities is going to be embarrassing for those unfamiliar with those same tools.
My experience doing this involved a very low-end attempt to split Web access logs into nice neat bits that began and ended on certain dates. I used grep, split, and a bunch of binaries I borrowed for doing log analysis and formatting the output into a web report. Overall it didn’t take much time, and required very little downloading, uploading, uncompressing, etc. It was all commandline based, with all the output dumped to a directory on the same machine. I probably spent 20 minutes every Sunday running these by hand (as I’m not a cronjob master, much less an atjob master). And none of the work I did was mission critical, other than being a barometer of how much use the websites were getting from the users. I realize now I could have had the whole works automated, with variables set up in the shell script to accommodate running on different days of the week, time changes, etc. But editing the scripts by hand in the vi editor only made me quicker and more proficient in vi (which I still gravitate towards using even now).
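To give a flavor of what I mean (this is a reconstruction, not my original scripts; the paths and the Apache-style date stamps are assumptions), the whole Sunday ritual boils down to something like:

    #!/bin/sh
    # Hypothetical sketch: pull the last week's entries out of a combined
    # access log and dump them into a report directory. The paths and the
    # log's date format (e.g. 24/Apr/2011, Apache common log format) are
    # assumptions, not my original setup.
    LOG=/var/log/httpd/access_log
    OUT=/var/www/reports
    for i in 0 1 2 3 4 5 6; do
        day=$(date -d "-$i days" +%d/%b/%Y)   # GNU date arithmetic
        grep "$day" "$LOG"
    done > "$OUT/week-ending-$(date +%Y%m%d).log"

And the part I never got around to: a single crontab line like 0 6 * * 0 /usr/local/bin/weekly-logs.sh would have run it every Sunday at 6 AM without me lifting a finger.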
And as low-end as my needs were, and as little experience as I initially had using these tools, I am grateful for the time I spent doing it. I feel so much more comfortable knowing I can figure out how to do these tasks on my own, pipe outputs into inputs for other utilities, and get useful results. I think I understand it, though I’m not a programmer and couldn’t really leverage higher-level things like data structures to get work done. I’m a brute force kind of guy, and given how fast the CPUs are running, a few ugly, inefficient recursions aren’t going to kill me or my reputation. So here’s to Mike Loukides’ article and how much it reminds me of what I like about Unix.
First 37Signals announced it would drop support for OpenID. Then Microsoft’s Dare Obasanjo called OpenID a failure (along with XML and AtomPub). Former Facebooker Yishan Wong’s scathing (and sometimes wrong) rant calling OpenID a failure is one of the more popular answers on Quora.
But if OpenID is a failure, it’s one of the web’s most successful failures.
via OpenID: The Web’s Most Successful Failure | Webmonkey | Wired.com.
I was always of the mind that Single Sign-on is a good thing, not bad. Any service, whether for work or outside of work, that can re-use an identifier and authentication should make things easier to manage and possibly be more secure in the long run. There are proponents for and against anything that looks or acts like a single sign-on. Detractors always argue that if one of the services gets hacked, attackers can somehow gain access to your password and identity and break into your accounts on all the other systems out there. In reality, with a typical single sign-on service you don’t ever send a password to the place you’re logging into (unless it’s the source of record, like the website that hosts your OpenID). Instead you send something more like a scrambled message that only you could have originated and which the website you’re logging into will be able to verify. And that message is vouched for by your OpenID provider, the source of record for your identity online. So nobody else is storing your password, and nobody is able to hack into all your other accounts when they hijack your favorite web service.
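Roughly, and with invented values, the “scrambled message” works like this: the OpenID provider and the website agree on a shared secret up front, and the provider then signs the fields of each login assertion, so the website can check the signature without a password ever changing hands. A minimal sketch (field names are abbreviated and every value here is made up):

    # Illustration only -- all values invented. The provider signs the
    # assertion's key-value fields with a shared "association" secret;
    # the website recomputes the same HMAC to verify the login, and no
    # password travels between them.
    SECRET='hypothetical-association-key'
    FIELDS='op_endpoint:https://openid.example.com/auth
    claimed_id:https://me.example.org/
    response_nonce:2011-05-05T12:00:00Zabc123'
    printf '%s\n' "$FIELDS" | \
        openssl dgst -sha256 -hmac "$SECRET" -binary | base64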
Where I work I was a strong advocate for centralized identity management like OpenID. Some people thought the only use for this was as a single sign-on service. But real centralized identity management also encompasses the authorizations you have once you have declared and authenticated your identity. And it’s the authorization that is key to what is really useful about a Single Sign-on service.
I may be given a ‘role’ within someone’s website or page on a social networking website that either adds or takes away levels of privacy with respect to the person who has declared me a ‘friend’. And if they wanted to ‘redefine’ my level of privilege, all they would have to do is change the privileges for that ‘role’, not for me personally, and all my levels of access would change accordingly. Why? Because a role is kind of like a rank or group membership. Just as everyone in the army who is an officer can enjoy benefits like attending an officers’ club because they have the role ‘officer’, I can see more of a person’s profile or personal details because I have been declared a friend. Nowhere in this is it absolutely necessary to define specific restrictions or levels of privilege for me individually! It’s all based on my membership in a group. And if someone wants to eliminate that group or change the permissions of all its members, they do it once, and only once, to the definition of that role, and it rolls out, cascades out, to all the members after that point. So OpenID can be authentication (which is where most people stop) and it can additionally be authorization (what am I allowed and not allowed to do once I prove who I am). It’s a very powerful and poorly understood capability.
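A toy sketch makes the point (every file, name, and permission here is invented): the permissions hang off the role, so editing one line changes what every member of that role can do.

    # roles.txt   -- role:comma-separated-permissions (invented data)
    #   friend:view_profile,comment
    #   acquaintance:view_profile
    # members.txt -- user:role
    #   dave:friend
    #   mallory:acquaintance
    can() {   # usage: can USER PERMISSION
        role=$(grep "^$1:" members.txt | cut -d: -f2)
        grep "^$role:" roles.txt | cut -d: -f2 | tr ',' '\n' | grep -qx "$2"
    }
    can dave comment && echo "dave may comment"
    # Demote every friend at once by editing the single friend: line in
    # roles.txt -- no per-user changes anywhere.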
The widest application I’ve seen so far using something like OpenID is the Facebook ‘sign-on’ service that allows you to make comments on articles on news websites and weblogs. Disqus is a third-party provider that acts as a hub for anyone who wants to re-use someone’s Facebook or OpenID credentials to prove that they are real and not a rogue spambot. That chain of identity is maintained by Disqus, which provides the plumbing back to whichever of the many services someone might be subscribed to or participate in. I already have an OpenID, but I also have a Facebook account; Disqus will allow me to use either one. Given how much information might be passed along by Facebook through a third party (something it is notorious for allowing Applications to do), I chose to use my OpenID, which more or less says I am X user at X website and I am the owner of that website as well. A chain of authentications just good enough to allow me to make comments on an article is what OpenID provides. Not too much information, just enough information travels back and forth. And because of this absolute precision, doing away with all the unneeded private detail and with having to create an account on the website hosting the article, I can just freely come and go as I please.
That is the lightweight joy of OpenID.
Stephen O’Grady, a founder at the technology analyst company RedMonk, said the technology industry often has swung back and forth between more standard computing systems and specialized gear.
via Big Web Operations Turn to Tiny Chips – NYTimes.com.
A little tip of the hat to Andrew Feldman, CEO of SeaMicro, the startup company that announced its first product last week. The giant 512-CPU computer is being covered in this NYTimes article to spotlight the ‘exotic’ technologies, both hardware and software, some companies use to deploy huge web apps. It’s part NoSQL, part low-power massive parallelism.
I wonder: Is there an opportunity for Alan Kay’s Dynabook? An iPad with a Squeak implementation that enables any user to write his or her own applications, rather than resorting to purchasing an app?
via Did Steve Jobs Steal The iPad? Genius Inventor Alan Kay Reveals All. (source: 6:20 PM – April 17, 2010 by Wolfgang Gruener, Tom’s Hardware)
Apple earlier this month instituted a new rule that also effectively blocks meta-platforms: clause 3.3.1, which stipulates that iPhone apps may only be made using Apple-approved programming languages. Many have speculated that the main target of the new rule was Adobe, whose CS5 software, released last week, includes a feature to easily convert Flash-coded software into native iPhone apps.
Some critics expressed concern that beyond attacking Adobe, Apple’s policies would result in collateral damage potentially stifling innovation in the App Store. Scratch appears to be a victim despite its tie to Jobs’ old friend.
Apple Rejects Kid-Friendly Programming App (April 20, 2010 2:15 pm)
What a difference 3 days makes, right? Tom’s Hardware did a great retrospective on the ‘originality’ of the iPad and learned a heck of a lot of Computer History along the way. At the end of the article they plug Alan Kay’s Squeak-based programming environment called Scratch. It is a free application used to create new graphical programs, and it serves as a means to teach mathematical problem-solving through writing programs in Scratch. The iPad was the next logical step in the distribution of the program, giving kids free access to it whenever, and on whatever platform, was available. But two days later the announcement came out that the Apple App Store, the only venue by which to purchase or even download software onto the iPhone or the iPad, had roundly rejected Scratch. The App Store will not allow it to be downloaded, and that’s the end of that. The reasoning is that Scratch (which is really a programming tool) has an interpreter built in, which allows it to execute the programs written within its programming environment. Java does this, Adobe Flash does this; it’s common with anything that’s like a programming tool. But Apple has forbidden anything that looks, sounds, or smells like a potential way of hijacking or hacking into their devices. So Scratch and Adobe Flash are now both forbidden to run on the Apple iPad. How quickly things change, don’t they? Especially if you read the whole Tom’s Hardware article, where Alan Kay and Steve Jobs are presented as really friendly towards one another.