Categories
google mobile support web standards wired culture

What’s a Chromebook good for? How about running PHOTOSHOP? • The Register

Netscape Communicator (Photo credit: Wikipedia)

Photoshop is the only application from Adobe’s suite that’s getting the streaming treatment so far, but the company says it plans to offer other applications via the same tech soon. That doesn’t mean it’s planning to phase out its on-premise applications, though.

via What’s a Chromebook good for? How about running PHOTOSHOP? • The Register.

Back in 1997 and 1998 I spent a lot of time experimenting and playing with Netscape Communicator “Gold”. It had a built-in web page editor that more or less gave you WYSIWYG rendering of the HTML elements live as you edited, and it also had an email client and news reader built in. I also spent a lot of time reading Netscape white papers on their Netscape Communications server and LDAP server, and on this whole universe of Netscape trying to re-engineer desktop computing in such a way that the web browser was the THING. Instead of a desktop with apps, you had app-like behavior resident in the web browser, and from there you would develop your JavaScript/ECMAScript web applications that did other useful things. Web pages with links in them could take the place of PowerPoint. Netscape Communicator Gold would take the place of Word and Outlook. This is the triumvirate that Google would assail some 10 years later with its own Google Apps and the benefit of AJAX-based web app interfaces and programming.

Turn now to this announcement by Adobe and Google of a joint effort to “stream” Photoshop through a web browser. A long-time stalwart of desktop computing, Adobe Photoshop (prior to being bundled with EVERYTHING else) required a real computer in the early days (ahem, meaning a Macintosh), and it has demanded even more horsepower since, as the article points out, with CS4 attempting to use the GPU as an accelerator for the application. I used to keep up with each year’s new release of the software, but around 1998 I feel like I stopped learning new features, and my “experience” more or less cemented itself in the pre-CS era (let’s call that Photoshop 7.0). Since then I do three to five things at most in Photoshop. I scan. I layer things with text. I color balance things or adjust exposures. I apply a filter (usually Unsharp Mask). I save to a multitude of file formats. That’s it!

Given that there’s even a possibility of streaming Photoshop on a Google Chromebook, I think we’ve now hit that which Netscape discovered long ago: the web browser is the desktop, pure and simple. It was bound to happen, especially now with the erosion into different form factors and mobile OSes. iOS and Android have shown that what we are willing to call an “app” is, most of the time, little more than a glorified link to a web page. So if they can manage to wire up enough of the codebase of Photoshop to make it work in real time through a web browser without tons and tons of plug-ins and client-side JavaScript, I say all the better. Because this means, architecturally speaking, that good old Outlook Web Access (OWA) can only get better and become more like its desktop cousin, Outlook 2013. Microsoft too is eroding the distinction between desktop and mobile. It’s all just a matter of more time passing.

Categories
cloud data center technology web standards

From Big Data to NoSQL: Part 3 (ReadWriteWeb.com)

ReadWriteWeb logo (image via CrunchBase)

In Part One we covered data, big data, databases, relational databases and other foundational issues. In Part Two we talked about data warehouses, ACID compliance, distributed databases and more. Now we’ll cover non-relational databases, NoSQL and related concepts.

via From Big Data to NoSQL: The ReadWriteWeb Guide to Data Terminology Part 3.

I really give a lot of credit to ReadWriteWeb for packaging up this 3-part series (started May 24th, I think). It at least narrows down what is meant by all the fast and loose terms that white papers and ad men throw around to get people to consider their products in RFPs. Just know this, though: in many cases the NoSQL databases that keep coming onto the market tend to be one-off solutions created by big social networking companies who couldn’t get MySQL/Oracle/MS SQL to scale in size or speed during their early build-outs. Just think of Facebook hitting the 500 million user mark and you will know that there has to be a better way than relational algebra and tables with columns and rows.

In part 3 we finally get to what we have all been waiting for: non-relational databases, so-called NoSQL. Google’s MapReduce technology is quickly shown as one of the most widely known examples of a NoSQL-style distributed approach that, while not adhering to absolute or immediate consistency, gets there with ‘eventual consistency’ (consistency being the big C in the acronym ACID). The coolest thing about MapReduce is the similarity (at least in my mind) it bears to the SETI@home project, where ‘work units’ were split out of large data tapes, distributed piecemeal over the Internet, and analyzed on people’s desktop computers. The completed units were then gathered up and brought together into a final result. This is similar to how Google does its big data analysis to get work done in its data centers. And it lives on in Hadoop, an open-source version of MapReduce started by Yahoo and now part of the Apache Software Foundation.
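
To make that split-process-gather idea concrete, here is a minimal, in-memory sketch of the MapReduce pattern in Python. It is only an illustration of the programming model (map, shuffle, reduce), not Google’s or Hadoop’s actual implementation; the function names and sample data are my own.

```python
from collections import defaultdict

# Toy "work units": each document is processed independently (the map step),
# just as SETI@home farmed out independent chunks of tape.
documents = [
    "the web browser is the desktop",
    "the desktop is dead long live the browser",
]

def map_phase(doc):
    # Emit (key, value) pairs: here, (word, 1) for every word seen.
    return [(word, 1) for word in doc.split()]

def shuffle(mapped):
    # Group all values by key so each reducer sees one key's values together.
    groups = defaultdict(list)
    for pairs in mapped:
        for key, value in pairs:
            groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Combine the partial results for one key into a final answer.
    return key, sum(values)

mapped = [map_phase(d) for d in documents]          # maps run in parallel in real systems
results = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(results)   # e.g. {'the': 4, 'browser': 2, ...}
```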

Document databases are cool too, and very much like an object-oriented database where you have a core item with attributes appended. I think also of LDAP directories, which have similarities to object-oriented databases. A person has a ‘Common Name’, or CN, attribute. The CN is as close to a unique identifier as you can get, with all the other attributes strung along, appended on the end as they need to be added, in no particular order. The ability to add attributes as needed is like ‘tagging’ the way social networking, photo sharing, and bookmarking websites do it. You just add an arbitrary tag to help search engines index the site and help relevant web searches find your content.
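
A minimal sketch of that idea, using plain Python dictionaries as stand-in ‘documents’ (no particular document database’s API is implied): each record hangs off something CN-like, and attributes and tags can be bolted on after the fact without touching any other record.

```python
# A toy "collection" of documents, keyed by something CN-like.
people = {
    "jdoe": {"cn": "John Doe", "mail": "jdoe@example.com"},
}

def add_attribute(collection, key, attr, value):
    # No schema to alter: just append the attribute to this one document.
    collection[key][attr] = value

def tag(collection, key, *tags):
    # Tags accumulate the way they do on photo or bookmarking sites.
    collection[key].setdefault("tags", []).extend(tags)

add_attribute(people, "jdoe", "telephoneNumber", "+1 555 0100")
tag(people, "jdoe", "photographer", "php-developer")
print(people["jdoe"])
```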

The relationship between graph databases and mind-mapping is also very interesting. There’s a good graphic illustrating a graph database of blog content, showing how the relation lines are drawn and labeled. Having used mind-mapping products before, I now have a much better understanding of graph databases. Nice parallel there, I think.
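
Here is a small sketch of that blog-content example as a property graph, again in plain Python rather than any particular graph database’s query language; the node and edge labels are invented for illustration.

```python
# Nodes are things (posts, authors, tags); edges are labeled relationships,
# which is exactly what the lines on a mind map represent.
nodes = {"post:42", "author:alice", "tag:nosql"}
edges = [
    ("author:alice", "WROTE", "post:42"),
    ("post:42", "TAGGED_WITH", "tag:nosql"),
]

def neighbors(node, label=None):
    # Follow labeled relation lines out of a node.
    return [dst for src, rel, dst in edges
            if src == node and (label is None or rel == label)]

print(neighbors("author:alice", "WROTE"))   # ['post:42']
```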

At the very end of the article there’s a mention of NewSQL, of which Drizzle is an interesting offshoot. Looking up more about it, I found it interesting as a fork of the MySQL project. Specifically, Drizzle factors out tons of functions that some folks absolutely need but most don’t (like, say, 32-bit legacy support). There has been a lot of effort to shrink the code, so the overall line count went from over 1 million for MySQL to just under 300,000 for the Drizzle project. Speed and simplicity are the order of the day with Drizzle. Add a missing function by simply adding its plug-in to the main app, and you get back some of the MySQL features that might otherwise be missing.

*Note: Older survey of the NoSQL field conducted by ReadWriteWeb in 2009

Categories
media surveillance web standards wired culture

JSON Activity Streams Spec Hits Version 1.0

Social networking icon (image via Wikipedia)

The Facebook Wall is probably the most famous example of an activity stream, but just about any application could generate a stream of information in this format. Using a common format for activity streams could enable applications to communicate with one another, and presents new opportunities for information aggregation.

via JSON Activity Streams Spec Hits Version 1.0.

Remember mash-ups? I recall the great wide wonder of putting together web pages that used ‘services’ provided for free through APIs published for anyone who wanted to use them. There were many at one time; some still exist and others have been culled. But as newer social networks begat yet newer ones (MySpace, Facebook, Foursquare, Twitter), none of the ‘outputs’ or feeds of any single one was anything more than a way of funneling you into its own login accounts and user screens. The gated community first requires you to be a member in order to play.

We went from ‘open’ to cul-de-sac and stovepipe in less than one full revision of social networking. However, maybe all is not lost; maybe an open standard can help folks re-use their own data at least (maybe I could mash up my own activity stream). Betting on whether this will take hold and see wider adoption by social networking websites would be risky. Likely each service provider will closely hold most of the data it collects and only publish the bare minimum necessary to claim compliance. Another burden upon this sharing is the slowly creeping concern about the security of one’s own activity stream. It will no doubt have to be opt-in, and definitely not opt-out, as I’m sure people are more used to having fellow members of their tribe know what they are doing than to putting out a feed of it to the whole Internet. Which makes me think of the old discussion of being able to fine-tune who has access to what (Doc Searls’ old Vendor Relationship Management idea). Activity Streams could easily fold into that universe, where you regulate which threads of the stream are shared with which people. I would only really agree to use this service if it had that fine-grained level of control.
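
For reference, an activity in the JSON Activity Streams 1.0 format boils down to an actor, a verb, and an object. Here is a small Python sketch that builds one; the person, IDs, and URLs are invented for illustration.

```python
import json
from datetime import datetime, timezone

# A single activity, "Jane posted a photo", expressed as actor / verb / object:
# the core triple the Activity Streams 1.0 spec standardizes.
activity = {
    "published": datetime.now(timezone.utc).isoformat(),
    "actor": {
        "objectType": "person",
        "id": "urn:example:person:jane",
        "displayName": "Jane Doe",
    },
    "verb": "post",
    "object": {
        "objectType": "photo",
        "id": "urn:example:photo:123",
        "url": "http://example.org/photos/123",
    },
}

print(json.dumps(activity, indent=2))
```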

Categories
cloud data center google surveillance web standards

From Big Data to NoSQL: The ReadWriteWeb Guide to Data Terminology (Part 1)

Process and data modeling (image via Wikipedia)

Big Data

In short, big data simply means data sets that are large enough to be difficult to work with. Exactly how big is big is a matter of debate. Data sets that are multiple petabytes in size are generally considered big data (a petabyte is 1,024 terabytes). But the debate over the term doesn’t stop there.

via From Big Data to NoSQL: The ReadWriteWeb Guide to Data Terminology (Part 1).

There are big doings inside and outside the data center these days. You cannot go a day without a cool new article about some project that has just been open-sourced by one of the departments inside the social networking giants, Hadoop being the biggest example. What, you ask, is Hadoop? It is a project Yahoo started after Google began spilling the beans on its two huge technological leaps in massively parallel databases and processing of real-time data streams. The first was BigTable, a huge distributed database that could be brought up on an inordinately large number of commodity servers and then ingest all the indexing data sent by Google’s web crawlers as they found new websites. That’s the database and ingestion point. The second is the way the rankings and ‘pertinence’ of the indexed websites are calculated through PageRank. The invention for the real-time processing of all this collected data is called MapReduce, a way of pulling in, processing, and quickly sorting out the important, highly ranked websites. Yahoo read the white papers put out by Google and subsequently created a version of those technologies, which today powers the Yahoo! search engine. Having put this into production and realized its benefits, Yahoo turned it into an open-source project to lower the threshold for people wanting to get into the big data industry. It also wanted many programmers’ eyes looking at the source code, adding features, packaging it, and, all importantly, debugging what was already there. Hadoop was the name given to that Yahoo bag of software, and it is what a lot of people initially adopt if they are trying to do large-scale collection and real-time analysis of big data.
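
As a rough illustration of what makes BigTable-style storage different from rows and columns, here is a toy sketch of its data model in Python: a sparse map keyed by row key, column name, and timestamp. This is only the shape of the idea as described in Google’s paper, not any real client API; the table, columns, and values are invented.

```python
from collections import defaultdict

# BigTable is described as a sparse, distributed, persistent map:
# (row key, column family:qualifier, timestamp) -> value.
# Here the whole "table" is just a nested dict for illustration.
table = defaultdict(dict)

def put(row_key, column, timestamp, value):
    # Cells are versioned by timestamp; nothing else about the row needs to exist.
    table[row_key][(column, timestamp)] = value

put("com.example.www", "contents:html", 1000, "<html>old crawl</html>")
put("com.example.www", "contents:html", 2000, "<html>new crawl</html>")
put("com.example.www", "anchor:cnn.com", 2000, "Example Site")

def latest(row_key, column):
    # Read the most recent version of one cell.
    versions = [(ts, v) for (col, ts), v in table[row_key].items() if col == column]
    return max(versions)[1] if versions else None

print(latest("com.example.www", "contents:html"))   # '<html>new crawl</html>'
```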

Another discovery along the way to the big data movement was a parallel attempt to overcome the limitations of extending the schema of a typical database holding all the incoming indexed websites. Tables, rows, and Structured Query Language (SQL) have ruled the day since about 1977 or so, and for many kinds of tabular data there is no substitute. However, the kinds of data being stored now fall into the big amorphous mass of binary large objects (BLOBs) that can slow down a traditional database. So a non-SQL approach was adopted, and there are parts of BigTable and Hadoop that dump the unique keys and relational tables of SQL just to get the data in and characterize it as quickly as possible, or better yet to re-characterize it by adding elements to the schema after the fact. Whatever you are doing, what you collect might not be structured or easily structured, so you’re going to need to play fast and loose with it, and you need a database equal to that task. Enter the NoSQL movement to collect and analyze big data in its least structured form. So my recommendation to anyone trying to get the square peg of relational databases to fit the round hole of their unstructured data is to give up. Go NoSQL and get to work.
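
The schema friction the NoSQL crowd is avoiding is easy to show with Python’s built-in sqlite3 module: a relational table must be altered before a new attribute can be stored, whereas a schemaless record simply grows. The table and field names here are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (url TEXT PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO pages VALUES (?, ?)", ("http://example.org", "Example"))

# Relational world: a new attribute means changing the schema for every row first.
conn.execute("ALTER TABLE pages ADD COLUMN language TEXT")
conn.execute("UPDATE pages SET language = ? WHERE url = ?", ("en", "http://example.org"))

# Schemaless world: just add the key to the one record that needs it.
page = {"url": "http://example.org", "title": "Example"}
page["language"] = "en"
```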

This first article from ReadWriteWeb is good in that it lays the foundation for what the relational database universe looks like and how you can manipulate it. Having established what IS, future articles will look at the quick, dirty workarounds and one-off projects people have come up with to fit their needs, and subsequently which ‘works for me’ solutions have been turned into bigger open-source projects that ‘work for others’, as that is where each of these technologies will really differentiate itself. Ease of use and a low barrier to entry will be deciding factors in many people’s adoption of a NoSQL database, I’m sure.

Categories
google mobile technology web standards

SPDY: An experimental protocol for a faster web – The Chromium Projects

Google Chromium alpha for Linux (image via Wikipedia)

As part of the “Let’s make the web faster” initiative, we are experimenting with alternative protocols to help reduce the latency of web pages. One of these experiments is SPDY (pronounced “SPeeDY”), an application-layer protocol for transporting content over the web, designed specifically for minimal latency.  In addition to a specification of the protocol, we have developed a SPDY-enabled Google Chrome browser and open-source web server. In lab tests, we have compared the performance of these applications over HTTP and SPDY, and have observed up to 64% reductions in page load times in SPDY. We hope to engage the open source community to contribute ideas, feedback, code, and test results, to make SPDY the next-generation application protocol for a faster web.

via SPDY: An experimental protocol for a faster web – The Chromium Projects.

Google wants the World Wide Web to go faster. I think we would all like that as well. But what kind of heavy lifting is it going to take? The transition from ARPANET to the TCP/IP protocol took a very long time and required some heavy-handed shoving to accomplish the cutover in 1983. We can all thank Vint Cerf for making that happen so that we could continue to grow and evolve as an online species (tip of the hat). But now what? There’s been a move to evolve from IP version 4 to version 6 to accommodate the increase in the number of network devices, but speed really wasn’t a consideration in that revision. I don’t know how this project integrates with IPv6, but I hope it can be pursued on a parallel course with that big migration.

The worst thing that could happen would be to create another Facebook/Twitter/Apple Store/Google/AOL cul-de-sac that only benefits the account holders loyal to Google. Yes, it would be nice if Google Docs and all the other attendant services provided by Google got onboard the SPDY accelerator train. I would stand to benefit, but things like this should be pushed further up into the wider Internet so that everyone, everywhere gets the same benefit. Otherwise this is just an attempt to steal away user accounts and create churn in competitors’ account databases.

Categories
cloud data center technology web standards wired culture

Stop Blaming the Customers – the Fault is on Amazon Web Services – ReadWriteCloud

Amazon Web Services logo (image via CrunchBase)

Almost as galling as the Amazon Web Services outage itself is the litany of blog posts, such as this one and this one, that place the blame not on AWS for having a long failure and not communicating with its customers about it, but on AWS customers for not being better prepared for an outage.

via Stop Blaming the Customers – the Fault is on Amazon Web Services – ReadWriteCloud.

As Klint Finley points out in his article, everyone seems to be blaming the folks who ponied up money to host their websites and web apps on the Amazon data center cloud. Until the outage, I was not really aware of the ins and outs, the workflow and configuration required to run something on Amazon’s infrastructure. I am small-scale, small potatoes, mostly relying on free services; when they work, great, and when they don’t, meh! I can take them or leave them; my livelihood doesn’t depend on them (thank goodness). But those who do depend on uptime and pay money for it need some greater level of understanding from their service provider.

Amazon doesn’t make it explicit enough how to follow best practice when configuring your website installation using their services. It appears some businesses had no outages (despite not following best practices) while some folks had long outages even though they had set everything up ‘by the book’. The services at the center of the outage were the Relational Database Service (RDS) and Elastic Block Store (EBS). Many websites use databases to hold the contents of the site, collect data and transaction information, gather metadata about users’ likes and dislikes, and so on, and EBS acts as the container for the data in RDS. If you have things set up correctly, your website fails gracefully: duplicate RDS and EBS containers in the Amazon data center cloud take over and keep responding to people clicking on things and typing in information, instead of throwing up error messages or not responding at all (in a word, it just magically continues working). However, if you don’t follow the “guidelines” as specified by Amazon, all bets are off, and you’ve wasted money paying double for the more robust, fault-tolerant failover service.
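
For a sense of what ‘by the book’ looks like today, here is a sketch using the boto3 Python SDK (which postdates this outage) to request an RDS instance with Multi-AZ failover enabled. The identifiers and credentials are placeholders, and this is an illustration of the option, not a complete or recommended production setup.

```python
import boto3

# Assumes AWS credentials are already configured in the environment.
rds = boto3.client("rds", region_name="us-east-1")

# Requesting the managed failover option: with MultiAZ=True, RDS maintains a
# standby replica in another Availability Zone and fails over to it automatically.
rds.create_db_instance(
    DBInstanceIdentifier="storefront-db",        # placeholder name
    Engine="mysql",
    DBInstanceClass="db.t3.small",
    AllocatedStorage=20,                         # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-immediately",  # placeholder; use a secrets manager
    MultiAZ=True,
)
```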

Most people don’t care about this, especially if they weren’t affected by the outages. But the business owners who suffered, and the customers they are liable to, definitely do. So if the entrepreneurial spirit bites you and you’re very interested in online commerce, always be aware: nothing is free, and especially nothing is free if you pay for it and don’t get what you paid for. I would hope a leading online commerce company like Amazon could do a better job and, in future, make good on its promises.

Categories
blogtools technology web standards wired culture

OpenID: The Web’s Most Successful Failure | Wired.com

First 37Signals announced it would drop support for OpenID. Then Microsoft’s Dare Obasanjo called OpenID a failure (along with XML and AtomPub). Former Facebooker Yishan Wong’s scathing (and sometimes wrong) rant calling OpenID a failure is one of the more popular answers on Quora.

But if OpenID is a failure, it’s one of the web’s most successful failures.

via OpenID: The Web’s Most Successful Failure | Webmonkey | Wired.com.

I was always of the mind that single sign-on is a good thing, not a bad one. Any service, whether for work or outside of it, that can re-use an identifier and authentication should make things easier to manage and possibly more secure in the long run. There are proponents and detractors for anything that looks or acts like single sign-on. Detractors always argue that if one of the services gets hacked, an attacker can somehow gain access to your password and identity and break into your accounts on all the other systems out there. In reality, with a typical single sign-on service you never send a password to the place you’re logging into (unless it’s the source of record, like the website that hosts your OpenID). Instead you send something more like a scrambled message that only you could have originated and which the website you’re logging into can verify against your OpenID provider, the source of record for your identity online. So nobody else is storing your password, and nobody is able to break into all your other accounts when they hijack your favorite web service.
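
The ‘scrambled message’ idea can be sketched in a few lines of Python. This is a deliberately simplified stand-in for what OpenID actually does (the real protocol involves discovery, association, and signed assertion fields), but it shows the principle: the relying site checks a signature instead of ever seeing your password.

```python
import hmac, hashlib

# A shared secret established between the identity provider and the relying
# site ahead of time; the user's password never leaves the identity provider.
association_secret = b"negotiated-out-of-band"

def provider_assert(identity, nonce):
    # The identity provider vouches for the user by signing the assertion.
    message = f"{identity}|{nonce}".encode()
    return hmac.new(association_secret, message, hashlib.sha256).hexdigest()

def relying_site_verify(identity, nonce, signature):
    # The relying site recomputes the signature; no password is ever sent to it.
    expected = provider_assert(identity, nonce)
    return hmac.compare_digest(expected, signature)

sig = provider_assert("https://example.org/users/alice", "nonce-42")
print(relying_site_verify("https://example.org/users/alice", "nonce-42", sig))  # True
```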

Where I work, I was a strong advocate for centralized identity management like OpenID. Some people thought the only use for this was as a single sign-on service, but real centralized identity management also encompasses the authorizations you have once you have declared and authenticated your identity. And it’s the authorization that is key to what makes a single sign-on service really useful.

I may be given a ‘role’ within someone’s website or page on a social network that either adds or takes away levels of privacy granted by the person who has declared me a ‘friend’. If they want to ‘redefine’ my level of privilege, all they have to do is change the privileges for that role, not for me personally, and all my levels of access change accordingly. Why? Because a role is kind of like a rank or a group membership. Just as everyone in the army who is an officer can enjoy benefits like attending the officers’ club because they hold the role of officer, I can see more of a person’s profile or personal details because I have been declared a friend. Nowhere is it necessary to define specific restrictions or levels of privilege for me individually; it’s all based on my membership in a group. And if someone wants to eliminate that group or change the permissions of all its members, they do it once, and only once, to the definition of that role, and it cascades out to all the members from that point on. So OpenID can be authentication (which is where most people stop), and it can additionally support authorization (what am I allowed and not allowed to do once I prove who I am). It’s a very powerful and poorly understood capability.
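
A small Python sketch of that role-based idea, with invented role and permission names: permissions hang off the role, so changing the role once changes what every member can do.

```python
# Permissions are attached to roles, never to individuals.
role_permissions = {
    "friend":   {"view_profile", "view_photos"},
    "stranger": {"view_profile"},
}

user_roles = {"alice": "friend", "bob": "stranger"}

def can(user, permission):
    # Authorization is answered entirely through the user's role.
    return permission in role_permissions.get(user_roles.get(user), set())

print(can("alice", "view_photos"))   # True

# Redefine the role once and it cascades to every member of it.
role_permissions["friend"].discard("view_photos")
print(can("alice", "view_photos"))   # False
```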

The widest application I’ve seen so far of something like OpenID is the Facebook sign-in service that lets you comment on articles on news websites and weblogs. Disqus is a third-party provider that acts as a hub for anyone who wants to re-use someone’s Facebook or OpenID credentials to prove that they are real and not a rogue spambot. That chain of identity is maintained by Disqus providing the plumbing back to whichever of the many services someone might subscribe to or participate in. I already have an OpenID, but I also have a Facebook account, and Disqus will let me use either one. Given how much information might be passed along by Facebook to a third party (something it is notorious for allowing applications to do), I chose to use my OpenID, which more or less says I am user X at website X and the owner of that website as well. A chain of authentication just good enough to allow me to comment on an article is what OpenID provides. Not too much information; just enough travels back and forth. And because of this precision, which abolishes all the unneeded private detail and the need to create an account on the website hosting the article, I can freely come and go as I please.

That is the lightweight joy of OpenID.

Categories
blogroll blogtools web standards

Dave Winer’s EC2 for poets | Wired.com

Dave Winer (image via Wikipedia)

Winer wants to demystify the server. “Engineers sometimes mystify what they do, as a form of job security,” writes Winer, “I prefer to make light of it… it was easy for me, why shouldn’t it be easy for everyone?”

via A DIY Data Manifesto | Webmonkey | Wired.com.

Dave Winer believes Amazon’s Elastic Compute Cloud (EC2) is the path toward a more self-reliant, self-actualizing future for anyone who keeps any of their data on the Internet. So he has proposed a project entitled EC2 for Poets. Having been a user of Dave’s blogging software, Radio Userland, in the past, I’m very curious to see what the new project looks like.

Back in the old days I paid $40 to Frontier for the privilege of reading and publishing my opinions on articles I subscribed to through the Radio Userland client. It was a great RSS reader at the time, and I loved being able to clip and snip bits of articles and embed my comments around them. I subsequently moved on to Bloglines and then Google Reader, in exactly that order. Now I use WordPress to keep my comments and article snippets organized and published on the Web.

Categories
blogroll blogtools web standards

Personal data stores and pub/sub networks – O’Reilly Radar

Now social streams have largely eclipsed RSS readers, and the feed reading service I’ve used for years — Bloglines — will soon go dark. Dave Winer thinks the RSS ecosystem could be rebooted, and argues for centralized subscription handling on the next turn of the crank. Of course definitions tend to blur when we talk about centralized versus decentralized services.

via Personal data stores and pub/sub networks – O’Reilly Radar.

Here now, more uncertainty and doubt surrounding RSS readers as the future of consuming web pages. I wouldn’t expect this from the one guy I most respect when it comes to future developments in computer technology. I have followed Jon Udell’s shining example each step of the way, from Radio Userland to Bloglines, and I breathed deeply the religion of loosely coupled services tied together by ‘services’ like pub/sub and RSS feeds. The flexibility and robustness of not being beholden to any single vendor or purveyor of a free service was obvious to me. However, I have fallen prey to the siren song of social media, starting with Digg, Flickr, Google Reader, and LinkedIn, each one claiming some amount of market share, but none of them anticipating the wild popularity of Friendster, MySpace, and now Facebook. I actively participate in Facebook to help keep everyone energized and to let them know someone is reading the stuff they post. I want this service to succeed. And by all accounts it is succeeding beyond its wildest dreams, through advertising revenue.

But who wants to be marketed to? Doc Searls argued, rightly, that our personal information is ours and our ‘attention’ is ours. He wants something like a Vendor Relationship Management service where we keep our ‘profile’ information and dole out the absolute minimum necessary to participate online or do commerce. And Jon in this article uses the elmcity project as a sterling example, set against the many stovepipe social networks in which we participate. Jon’s work with elmcity is an ongoing attempt to make events ‘subscribe-enabled’ the way blogs and online news websites already are. Each online calendar program has a web presence, but usually no comparable publish/subscribe format such as RSS or iCalendar associated with it. To ‘really’ know what is going on requires a network of event curators who can manage the data feeds, which then get plugged into an information hub that aggregates all the events in a geographical region. It’s all loosely coupled and more robust than trying to get everyone to adopt a single calendar.
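
A minimal sketch of that curator-and-hub idea, assuming the third-party icalendar library for parsing and using invented feed URLs: each curator publishes an .ics feed, and the hub simply merges the events.

```python
from urllib.request import urlopen
from icalendar import Calendar   # third-party: pip install icalendar

# Curated feeds, one per event curator (URLs are placeholders).
feeds = [
    "http://example.org/library-events.ics",
    "http://example.org/music-venues.ics",
]

def harvest(urls):
    """Pull every VEVENT out of each subscribed calendar feed."""
    events = []
    for url in urls:
        cal = Calendar.from_ical(urlopen(url).read())
        for component in cal.walk("VEVENT"):
            events.append((component.get("DTSTART").dt, str(component.get("SUMMARY"))))
    # The "hub" is nothing more than the merged, date-sorted list of everyone's events.
    return sorted(events, key=lambda e: str(e[0]))

for start, summary in harvest(feeds):
    print(start, summary)
```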

Which brings us back to the online personal data store. Why can’t we have a ‘hub’ that aggregates the ‘services’ we participate in but contains the single source of profile information that we manage and dole out? That way I’m not hostage to end-user licenses and the attendant risks of letting someone else be my profile steward. Instead I manage it, the services subscribe to my hub, and my ‘data stores’ can exist across all the social networks that exist or may exist. No lock-in. Think about this: I cannot export any of the little write-ups and comments I made on headlines I posted in Bloglines. I could export my blogroll, though, using OPML (thanks, Dave Winer!). Similarly, I won’t ever be able to export any of my numerous status updates from Facebook. In fact, as near as I can tell, there is no Export button anywhere for anything. It’s like AOL, an Internet cul-de-sac that we all willingly participate in, never considering the consequences.
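
Since OPML is just XML, getting your blogroll back out is trivially scriptable, which is exactly the kind of portability the walled gardens don’t offer. A short sketch with Python’s standard library (the file name is a placeholder):

```python
import xml.etree.ElementTree as ET

# A Bloglines-style OPML export lists each subscription as an <outline>
# element with the feed address in its xmlUrl attribute.
tree = ET.parse("blogroll.opml")   # placeholder path to an exported blogroll
for outline in tree.iter("outline"):
    feed = outline.get("xmlUrl")
    if feed:
        print(outline.get("title") or outline.get("text"), feed)
```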

Categories
blogtools computers google technology web standards wired culture

Email is crap: The past is yours, the future’s mine!

File this!

Considering the evolution of email and the Internet, it’s a wonder we cling to it so tenaciously. The original Internet was slow and unreliable and had a small number of actual users. Email was a messaging mechanism that allowed communication to occur asynchronously over that slow, unreliable network. And the mechanism used to transport it, prior to the ever popular SMTP server, was something called Unix-to-Unix Copy (UUCP). Your messages to people would get copied over the network as files to another Unix computer, and eventually they would get routed to the mail spool on a machine your recipient had an account on. He could then read the message and reply to it. Kind of like telegrams back and forth. So what if you got a telegram with no subject line? Or a telegram with all kinds of tasks for different projects all wrapped up into a single message?

Dan Dube at dandube.com complains that the filing cabinet, which approximates the desktop computing metaphor, is no good: the extra work required to make the filing cabinet work outweighs the benefit of the activity the email is supposed to support.

Each email is a file, so each email needs an informative, relevant title.  Look in your inbox — I would guess there are almost no emails that fit that bill.

Nobody uses subject lines. I get blank subject lines from people. Or they put the vaguest subjects in the subject line.

Emails don’t happen in a vacuum, people reply to them, are added and subtracted from the distribution list, change the content, etc.  Yet we still treat each email as a singular file.

That’s the truth, especially for group projects, or worse, committee projects where people come and go. Sometimes you don’t know where a requirement or task ever came from because you don’t have the original text in an email from the person who proposed it. There’s no trail, no flight data recorder, for what transpired in that email thread.

Emails don’t always categorize nicely.  If they fit in more than one “folder”, the filing cabinet metaphor will fail.

I couldn’t agree more. If you have a boss who starts using bullet points in an email, you know you will need to file that thing in more than one spot. I have a boss who does this often, and it takes a few minutes to parse out the tasks that are expected to be accomplished. Once that’s done, which “project” do you file that email message into?

Emails are extraordinarily redundant, with the original message copied hundreds of times in long conversations.

Oh, the insanity of quote-all. Worse yet, I think Outlook turns it on by default. Occasionally I will go back through a really long message and delete everything except my own contributions so the email is physically shorter and easier to read.

Files can be emailed, which immediately forks the original file and makes any further edits a synching problem.

This happens all the time. Rather than copying and pasting the text of another file into the body of the email message, the immediacy of ‘attaching’ just makes it too appealing. Someone is ‘dumping’ the task off on you with the minimum effort necessary, which means they attach a file containing the exact same text they could have included in the email. Worse yet, sometimes those attachments are PDFs! Useless, useless, useless. Try keeping track of that mix of files.

All of these gripes apply to the file system of the computer, too.  Regular files (mp3, doc, html, etc) all have the same shortcomings.

Again, it’s hard to associate files in the wide range of ways that make sense across a variety of projects. None of us is limited to one file type in all the projects we do; we might have pictures, audio, video, text, and so on.

Now Dan mentions Google Wave. I wrote a quick blurb about Google Wave about a week after the Google demo in San Francisco. Wave is, by design, very different from email. It’s not copying files from one server to another over an unreliable, slow network; it is meant to give you real-time, text-based communication in whatever collaborative style you prefer. And it keeps a record of everything, so you can step back through a document at each version or stage of editing.

It’s kind of like chat, too. You start a connection with one other person and invite in participants as you go. As part of the record of the ‘wave’, you have buddy icons of all the participants. And everything is a reference to that original wave: file it wherever you want, open it from wherever you want, it all points back to the original and edits that original for you AND all the participants. Like I said, there is but one original, one index, and everyone’s client points to that same EXACT REFERENCE. That’s the genius of the wave format of communication and collaboration. Wave is a giant shared workspace; nobody really keeps private copies and edits them. They always edit the shared copy, no matter what. And so the mailbox/cabinet metaphor is broken at last.
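
Here is a toy Python sketch of that ‘one original, everyone holds a reference’ idea. It is only a conceptual illustration of a shared, versioned object; Google Wave’s real model used operational transformation and a federation protocol, none of which is reproduced here.

```python
class Wave:
    """One shared original: every participant's 'copy' is just a reference to this."""

    def __init__(self, creator, text=""):
        self.participants = {creator}
        self.history = [(creator, text)]   # every edit is kept, oldest first

    def invite(self, person):
        self.participants.add(person)

    def edit(self, person, new_text):
        # Edits always land on the shared object, never on a private copy.
        self.history.append((person, new_text))

    @property
    def current(self):
        return self.history[-1][1]

wave = Wave("alice", "Draft agenda")
alices_view = wave          # "filing" it anywhere is just another reference
wave.invite("bob")
wave.edit("bob", "Draft agenda + budget item")
print(alices_view.current)  # everyone sees the same, single original
```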

So if it’s not a filing cabinet we’re looking for, but Google Wave, what’s the metaphor? Instead of a filing cabinet in my office, I now use a big bulletin board that sits in the hallway of my building. Everyone posts there and edits there, and nobody keeps copies of anything. The original bulletin is there with all its edits recorded, and all the participants in the document are recorded for all to see. Scary, isn’t it?