Categories
blogroll diy technology wired culture

Picture This: Hosted Lifebits in the Personal Cloud | Cloudline | Wired.com

Jon Udell (Photo credit: Wikipedia)

It’s not just photos. I want the same for my whole expanding set of digital objects, including medical and financial records, commercial transactions, personal correspondence, home energy use data, you name it. I want all of my lifebits to be hosted in the cloud under my control. Is that feasible? Technically there are huge challenges, but they’re good ones, the kind that will spawn new businesses.

via Picture This: Hosted Lifebits in the Personal Cloud | Cloudline | Wired.com.

From Gordon Bell’s MyLifeBits to, most recently, Stephen Wolfram’s personal collection of data, and now to Jon Udell: witness the ever-expanding universe of personal data. Thinking back on Gordon Bell’s project, the emphasis from Microsoft Research was always on video and pictures and ‘recollecting’ what happened on any given day. Stephen Wolfram’s emphasis was not so much on collecting the data as on analyzing it after the fact and watching patterns emerge. With Jon Udell we get a nice advancement of the art: a look at possible end-game scenarios. So you have collected a mass of lifebits, now what?

Who’s going to manage this thing? Is anyone going to offer a service that will help manage it? All great questions, because the disparate form social networking lifebits take versus others like health and ‘performance’ lifebits (the kind Stephen Wolfram collects and maintains for himself) points up a big gap in the cloud services sector. That gap is ripe for anyone with an entrepreneurial streak to step in and bootstrap a service like the one Jon Udell proposes. If someone were smart, they could get it up and running cheaply on Amazon Web Services (AWS) until the cost or performance of staying there became prohibitive. That initial foray would let them test the waters, gauge the size and tastes of the market, and adapt the hosted lifebits service to anyone willing to pay. That might just be a recipe for success.
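To make that concrete, here is a minimal sketch, in Python with the boto3 SDK, of what the storage layer of such a bootstrapped lifebits service might look like on AWS. The bucket name, key scheme, and metadata are hypothetical; this is an illustration of the idea, not a design.

    import boto3

    # Hypothetical sketch of the storage layer for a hosted-lifebits service:
    # each user gets a prefix they control, and every lifebit is stored as an
    # encrypted, tagged S3 object. Bucket name, key scheme, and metadata are made up.
    s3 = boto3.client("s3")

    def store_lifebit(user_id: str, category: str, name: str, payload: bytes) -> str:
        key = f"{user_id}/{category}/{name}"
        s3.put_object(
            Bucket="hosted-lifebits",          # placeholder bucket owned by the service
            Key=key,
            Body=payload,
            ServerSideEncryption="AES256",     # keep the bits encrypted at rest
            Metadata={"category": category},   # e.g. "medical", "financial", "photos"
        )
        return key

    # e.g. store_lifebit("user-42", "energy", "2012-03-home-usage.csv", csv_bytes)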

Categories
cloud computers data center

$1,279-per-hour, 30,000-core cluster built on Amazon EC2 cloud

Amazon Web Services logo
Image via Wikipedia

Amazon EC2 and other cloud services are expanding the market for high-performance computing. Without access to a national lab or a supercomputer in your own data center, cloud computing lets businesses spin up temporary clusters at will and stop paying for them as soon as the computing needs are met.

via $1,279-per-hour, 30,000-core cluster built on Amazon EC2 cloud.

If you own your data center, you might be a little nervous right now, because even a data center can now be outsourced on an as-needed basis. Especially if you are doing scientific computing, you should weigh the sunk capital costs of acquiring a cluster against the ongoing costs of maintaining it once it is up and running. This story provides one great example of what I think the cloud computer could one day become. Rent-a-Center-style data centers and compute clusters are an incredible value for a university, and even more so for a business that may not need to keep a real live data center under its control. Examples abound: even online services like Dropbox lease their compute cycles from the likes of Amazon Web Services and the Elastic Compute Cloud (EC2). And if migrating an application into a data center along with the data set to be analyzed can be sped up sufficiently and the cost kept down, who knows what might be possible.

The opportunities are many when you have access to a sufficiently large number of nodes in a compute cluster. With modeling applications in particular, you get to run a simulation at finer time slices and at higher resolution, possibly gaining a better understanding of how closely your algorithms match the real world. This isn’t just for business but for science as well, and being saddled with a typical data center installation, with its infrastructure, depreciation, and staffing costs, looks far less attractive when the big data center providers are willing to sell part of their compute cycles at a reasonable rate. The best part is you can shop around too.

In the bad old days of batch computing and the glassed-in data center, before desktops and mini-computers, people were dying to get access to the machine to run their jobs. Now the surplus of computing cycles among the big players is so great that they help subsidize the costs of build-outs and redundancy by letting people bid on the spare compute cycles they have lying around generating heat. It’s a whole new era of compute cycle auctions, and I for one am eager to see more stories like this in the future.
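To make the compute cycle auction concrete, here is a minimal sketch, in Python with the boto3 SDK, of bidding on spare EC2 capacity as Spot Instances. The AMI ID, instance type, bid price, and cluster size are placeholders, not recommendations.

    import boto3

    # Hypothetical example: bid on spare EC2 capacity for a temporary cluster.
    # The AMI ID, instance type, bid price, and count are placeholders.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.request_spot_instances(
        SpotPrice="0.05",          # maximum price per instance-hour we are willing to pay
        InstanceCount=10,          # size of the temporary cluster
        Type="one-time",           # release the nodes when the job is done
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",   # placeholder AMI with the modeling code baked in
            "InstanceType": "c5.xlarge",
        },
    )

    # Each request is filled only while the market price stays under our bid.
    for req in response["SpotInstanceRequests"]:
        print(req["SpotInstanceRequestId"], req["State"])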

Categories
cloud data center technology web standards wired culture

Stop Blaming the Customers – the Fault is on Amazon Web Services – ReadWriteCloud

Amazon Web Services (Image via CrunchBase)

Almost as galling as the Amazon Web Services outage itself is the litany of blog posts, such as this one and this one, that place the blame not on AWS for having a long failure and not communicating with its customers about it, but on AWS customers for not being better prepared for an outage.

via Stop Blaming the Customers – the Fault is on Amazon Web Services – ReadWriteCloud.

As Klint Finley points out in his article, everyone seems to be blaming the folks who ponied up money to host their websites and web apps on the Amazon data center cloud. Until the outage, I was not really aware of the ins and outs, the workflow and configuration, required to run something on Amazon’s infrastructure. I am small-scale, small potatoes, mostly relying on free services, which are great when they work; when they don’t, meh! I can take them or leave them, and my livelihood doesn’t depend on them (thank goodness). But those who do depend on uptime and pay money for it need a greater level of understanding from their service provider.

Amazon doesn’t make it explicit enough how to follow best practices when configuring a website installation on its services. It appears some businesses had no outages despite not following best practices, while some folks had long outages even though they had set everything up ‘by the book.’ The services at the center of the outage were the Relational Database Service (RDS) and Elastic Block Store (EBS). Many websites use databases to hold the contents of the site, collect data and transaction information, record metadata about users’ likes and dislikes, and so on, and EBS acts as the underlying storage container for the data in RDS. When part of the infrastructure goes down, if you have things set up correctly they fail gracefully: duplicate RDS and EBS resources in the Amazon data center cloud take over and keep responding as people click on things and type in information on your website, instead of the site throwing error messages or not responding at all. In short, it just magically continues working. However, if you don’t follow the guidelines as specified by Amazon, all bets are off, and you have wasted money paying double for the more robust, fault-tolerant failover service.
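For illustration, here is a minimal sketch, in Python with the boto3 SDK, of the kind of ‘by the book’ setup being described: provisioning an RDS database with the Multi-AZ failover option turned on. The identifiers, engine, and sizes are placeholders.

    import boto3

    # Hypothetical sketch: create an RDS database with a synchronous standby
    # in a second Availability Zone, so a failure in one zone triggers an
    # automatic failover instead of an outage. Identifiers and sizes are placeholders.
    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance(
        DBInstanceIdentifier="shop-db",        # placeholder database name
        Engine="mysql",
        DBInstanceClass="db.m5.large",         # placeholder instance class
        AllocatedStorage=100,                  # GiB of EBS-backed storage behind the database
        MasterUsername="admin",
        MasterUserPassword="change-me-please", # placeholder; use a secrets manager in practice
        MultiAZ=True,                          # the "pay double" option: keep a hot standby in another AZ
    )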

Most people don’t care about this, especially if they weren’t affected by the outages. But the business owners who suffered, and the customers they are liable to, definitely do. So if the entrepreneurial spirit bites you and you’re interested in online commerce, always be aware: nothing is free, and paying for something doesn’t guarantee you’ll get what you paid for. I would hope a leading online commerce company like Amazon could do a better job and, in the future, make good on its promises.