What’s a Chromebook good for? How about running PHOTOSHOP? • The Register

Netscape Communicator (Photo credit: Wikipedia)

Photoshop is the only application from Adobe’s suite that’s getting the streaming treatment so far, but the company says it plans to offer other applications via the same tech soon. That doesn’t mean it’s planning to phase out its on-premise applications, though.

via What’s a Chromebook good for? How about running PHOTOSHOP? • The Register.

Back in 1997 and 1998 I spent a lot of time experimenting and playing with Netscape Communicator “Gold”. It had a built-in web page editor that more or less gave you WYSIWYG rendering of the HTML elements live as you edited. It also had an email client and news reader built in. I also spent a lot of time reading Netscape white papers on their Netscape Communications server and LDAP server, and on this whole universe of Netscape trying to re-engineer desktop computing in such a way that the web browser was the THING. Instead of a desktop with apps, you had app-like behavior resident in the web browser. And from there you would develop your JavaScript/ECMAScript web applications that did other useful things. Web pages with links in them could take the place of PowerPoint. Netscape Communicator Gold would take the place of Word and Outlook. This is the triumvirate that Google would assail some 10 years later with its own Google Apps and the benefit of AJAX-based web app interfaces and programming.

Turn now to this announcement by Adobe and Google of a joint effort to “stream” Photoshop through a web browser. A longtime stalwart of desktop computing, Adobe Photoshop (prior to being bundled with EVERYTHING else) required a real computer in the early days (ahem, meaning a Macintosh) and has continued to demand one even more (as the article points out) since CS4 attempted to use the GPU as an accelerator for the application. I used to keep up with new releases of the software each passing year. But around 1998 I feel like I stopped learning new features, and my “experience” more or less cemented itself in the pre-CS era (let’s call that Photoshop 7.0). Since then I do 3-5 things at most in Photoshop, ever. I scan. I layer things with text. I color balance things or adjust exposures. I apply a filter (usually unsharp mask). I save to a multitude of file formats. That’s it!

Given that there’s even a possibility of streaming Photoshop on a Google Chromebook-based device, I think we’ve now hit that which Netscape had discovered long ago. The web browser is the desktop, pure and simple. It was bound to happen, especially now with the erosion into different form factors and mobile OSes. iOS and Android have shown that what we are willing to call an “app” is most times nothing more than a glorified link to a web page, really. So if they can manage to wire up enough of the codebase of Photoshop to make it work in real time through a web browser without tons and tons of plug-ins and client-side JavaScript, I say all the better. Because this means, architecturally speaking, good old Outlook Web Access (OWA) can only get better and become more like its desktop cousin Outlook 2013. Microsoft too is eroding the distinction between desktop and mobile. It’s all just a matter of more time passing.


The CompuServe of Things

Photo of two farm silos (Photo credit: Wikipedia)

Summary

On the Net today we face a choice between freedom and captivity, independence and dependence. How we build the Internet of Things has far-reaching consequences for the humans who will use—or be used by—it. Will we push forward, connecting things using forests of silos that are reminiscent of the online services of the 1980s, or will we learn the lessons of the Internet and build a true Internet of Things?

via The CompuServe of Things.

Phil Windley is absolutely right. And when it comes to silos, consider the silos we call app stores and network providers. Cell phones get locked to the subsidizing provider of the phone. The phone gets locked to the app store the manufacturer has built. All of this is designed to “capture” and ensnare a user in the cul-de-sac called the “brand”. And it would seem that if we let manufacturers and network providers make all the choices, this will be no different from the cell phone market we see today.


Meet the godfather of wearables | The Verge

Stasi HQ building, Berlin, Germany (Photo credit: Wikipedia)

He continues, “People are upset about privacy, but in one sense they are insufficiently upset because they don’t really understand what’s at risk. They are looking only at the short term.” And to him, there is only one viable answer to these potential risks: “You’re going to control your own data.” He sees the future as one where individuals make active sharing decisions, knowing precisely when, how, and by whom their data will be used. “That’s the most important thing, control of the data,” he reflects. “It has to be done correctly. Otherwise you end up with something like the Stasi.”

via Meet the godfather of wearables | The Verge.

Sounds a little bit like VRM and a little bit like Jon Udell’s Thali project. Wearables don’t fix the problem of metadata being collected about you, no. You still don’t control those ingoing/outgoing feeds of information.

Sandy Pentland points out that a lot can be derived and discerned simply from the people you know. Every contact in your friend list adds one more bit of intelligence about you without anyone ever talking to you directly. This kind of analysis is only possible now due to the End User License Agreements posted by each of the collecting entities (so-called social networking websites).

An alternative to this wildcat, frontier mentality by data collectors is Vendor Relationship Management (as proposed in the Cluetrain Manifesto). Doc Searls wants people to be able to share the absolute minimum necessary in order to get what they want or need from vendors on the Internet, especially the data-collecting types. And from that point, if individuals want to share more, they should be rewarded with a higher level of something in return from the people they share with (the prime example being vendors, the ‘V’ in VRM).

Thali, in another way, allows you to share data as well. But instead of letting someone into your data mesh in an all-or-nothing way, it lets strongly identified individuals have linkages into or out of your own data streams, whatever form those data streams may take. I think Sandy Pentland, Doc Searls and Jon Udell would all agree there needs to be some amount of ownership and control ceded back to the individual going forward. Too many of the vendors own the data and the metadata right now, and will do what they like with it, including responding to National Security Letters. So instead of being commercial ventures, they are swiftly evolving into branches or de facto subsidiaries of the National Security Agency. If we can place controls on the data, we’ll maybe get closer to the ideal of social networking and controlled data sharing.

Enhanced by Zemanta

PiPhone – A Raspberry Pi based Smartphone

PiPhone (Photo credit: Stratageme.com)

Here’s my latest DIY project, a smartphone based on a Raspberry Pi. It’s called – wait for it – the PiPhone. It makes use of an Adafruit touchscreen interface and a Sim900 GSM/GPRS module to make phone calls.

via PiPhone – A Raspberry Pi based Smartphone.

Dave Hunt doesn’t just do photography; he’s a Maker through and through. And the components are out there, you just need to know where to look to buy them. Once they’re purchased, you get down to the brass tacks of what a cellphone actually IS. And that’s what Dave has documented in his write-up of the PiPhone. Hopefully an effort like this will spawn enough copycats to trigger a landslide in DIY fab-and-assembly projects for people who want their own. I think it would be cool to just have an unlocked phone I could use wherever I wanted with the appropriate carrier’s SIM card.

I think it’s truly remarkable that Dave was able to get lithium-ion gel battery packs and touch-sensitive TFT displays. The original work of designing, engineering and manufacturing those displays alone made them a competitive advantage for folks like Apple. Being first to market with something that capable and that expandable was a true visionary move. Now the vision is percolating downward through the market, and even so-called “feature” phones or dumb-phones might have some type of touch-sensitive display.

This building by bits and pieces reminds me a bit of the research Google is doing in open hardware, modular cell phone designs like the Ara Project written up by Wired.com. Ara is an interesting experiment in divvying up the whole motherboard into block-sized functions that can be swapped in and out, substituted by the owner according to their needs. If you’re not a camera hound, why spend the extra money on an overly capable, very high-res camera? Why not add a storage module instead, because you like to watch movies or play games? Or in the case of open hardware developers, why not develop a new module that others could then manufacture themselves, with a circuit board or even a 3D printer? The possibilities are numerous, and seeing what Dave Hunt did with his PiPhone as a lone individual working on his own proves there’s a lot of potential in the open hardware area for cell phones. Maybe this device or future versions will break some of the lock current monopoly providers have with their closed-hardware, closed-source products.


Google Glass teardown puts rock-bottom price on hardware • The Register

Google Glass OOB Experience 27126 (Photo credit: tedeytan)

A teardown report on Google Glass is raising eyebrows over suggestions that the augmented reality headset costs as little as $80 to produce.

via Google Glass teardown puts rock-bottom price on hardware • The Register.

One more reason not to be a Glasshole is that you don’t want to be a sucker. Given what the Oculus Rift is being sold for versus Google Glass, one has to ask: why is Glass so much more expensive? It doesn’t do low-latency stereoscopic 3D. It doesn’t come with eye adapters PROVIDED to match your eyeglass correction; Glass requires you to provide prescription lenses if you really need them. It doesn’t have a large, full-color, high-res AMOLED display. So why $1,500 when the Rift is $350? Even the recently announced Epson Moverio is priced at $700.

These days, with the proliferation of teardown sites and the experts at iFixit and their partners at Chipworks, it’s just a matter of time before someone writes up your Bill of Materials (BOM). Once that hits the Interwebs and is communicated widely, all the business analysts and Wall Street hedge funders know how to predict the profit of the company based on sales. If Google retails Glass at the same price as the development kits, it’s going to be really difficult to compete for very long against lower-priced, more capable alternatives. I appreciate what Google’s done making it lightweight and power efficient, but it’s still $80 in parts being sold at $1,500. That’s the bottom line, that’s the Bill of Materials.
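
A quick back-of-the-envelope on the two figures above ($80 BOM, $1,500 retail) shows why analysts pounce on teardown numbers. Keep in mind a BOM excludes R&D, tooling, software and support, so this is a parts-only margin, not true profit:

```python
# Teardown arithmetic: how far is the retail price from the parts cost?
bom_cost = 80.0          # reported bill of materials, in dollars
retail_price = 1500.0    # Explorer Edition price, in dollars

markup_multiple = retail_price / bom_cost                 # price as a multiple of parts
gross_margin = (retail_price - bom_cost) / retail_price   # parts-only margin

print(f"{markup_multiple:.2f}x the BOM, {gross_margin:.0%} margin on parts alone")
```

Run it and you see why the $1,500 sticker raises eyebrows next to a $350 Rift.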


AnandTech | Apple’s Cyclone Microarchitecture Detailed

Image representing Apple as depicted in CrunchBase

So for now, Cyclone’s performance is really used to exploit race to sleep and get the device into a low power state as quickly as possible.

via AnandTech | Apple’s Cyclone Microarchitecture Detailed.

Race to sleep is the new, new thing for mobile CPUs. Power conservation now comes from parceling out a task across more cores or a higher clock speed: all the cores execute and complete the task, then the cores are put to sleep or into a much lower power state. That’s how you get things done and still maintain a 10-hour battery life for an iPad Air or iPhone 5s.

So even though a mobile processor could be the equal of the average desktop CPU, it’s the race-to-sleep state that is the big differentiator now. That is what Apple’s adoption of the 64-bit ARM version 8 architecture is bringing to market: the race to sleep. At the very beginning of the hints and rumors, 64-bit seemed more like an attempt to address more DRAM or gain some desktop-level performance capability. But it’s all for the sake of executing quickly and going into sleep mode to preserve the battery capacity.
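
The arithmetic behind race to sleep fits in a few lines. The wattages and work rates below are made-up illustrative numbers, not measured figures for any Apple part; the point is only that energy is power times time, so finishing fast and dropping into a deep-sleep state can beat running slowly for the whole interval:

```python
# Toy model of "race to sleep": finish the work fast at high power, then
# idle at very low power, versus staying awake at low power the whole time.
# All numbers are illustrative assumptions, not measured CPU figures.

def energy_joules(active_w, active_s, idle_w, idle_s):
    """Energy = power x time, summed over the active and idle phases."""
    return active_w * active_s + idle_w * idle_s

WINDOW_S = 1.0   # a fixed 1-second scheduling window
WORK = 1.0       # abstract units of work due within the window

# Strategy A: a fast core races through the work, then sleeps deeply.
fast_w, fast_rate = 2.0, 4.0                 # 2 W while active, 4 work/s
fast_active = WORK / fast_rate               # 0.25 s of activity
sleep_w = 0.05                               # deep-sleep power draw
energy_fast = energy_joules(fast_w, fast_active, sleep_w, WINDOW_S - fast_active)

# Strategy B: a slow core stays awake for the entire window.
slow_w, slow_rate = 1.0, 1.0                 # 1 W while active, 1 work/s
energy_slow = energy_joules(slow_w, WINDOW_S, 0.0, 0.0)

print(f"race-to-sleep: {energy_fast:.4f} J vs always-on: {energy_slow:.4f} J")
```

With these made-up numbers the racing core burns double the power while awake yet uses roughly half the energy over the window, because the deep-sleep state is so cheap; that is the whole trade Cyclone is exploiting.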

I’m thinking now of some past articles covering the nascent, emerging market for lower-power, massively parallel data center servers. 64 bits was an absolutely necessary first step to get ARM CPUs into blades and rack servers destined for low-power data centers. Memory addressing is considered a non-negotiable feature that even the most power-efficient server must have: no matter what CPU the server is designed around, memory addressing HAS to be 64-bit or it cannot be considered. That rule still applies today and will remain the sticking point for folks sitting back and ignoring the Tilera architecture or SeaMicro’s interesting cloud-in-a-box designs. To date, it seems Apple was first to market with a 64-bit ARM design, without ARM actually supplying the base circuit design and layouts for the new generation of 64-bit ARM. Apple instead did the heavy lifting and engineering itself to get the 64-bit memory addressing it needed to continue its drive to better battery life. Time will tell if this will herald other efficiency or performance improvements in raw compute power.


Jaunt – Meet the Crazy Camera That Can Make Movies for the Oculus Rift (Jordan Kushins-Gizmodo)

Oculus Rift (Photo credit: Digitas Photos)

If Facebook buying Oculus for a cool $2 billion is a step towards democratizing the currently-niche platform, Jaunt seems like an equally monumental step towards making awesome virtual reality content that appeals to folks beyond the gaming community. The VR movies in addition to VR games.

via Meet the Crazy Camera That Can Make Movies for the Oculus Rift.

Amazing story about a stealthy little company with a 3D video recording rig. This isn’t James Cameron-style motion capture for 3D rendering. This is just 2D video, stitched together in real time. No modeling, texture-mapping, or animating required. Just run the video camera, capture the footage, bring it back to the studio, and stitch it all together. Then watch the production on your Oculus Rift headset. If you can produce 3D movies with this without having to invest in James Cameron’s high-end, ultra-expensive virtual sets, you’ve just lowered the barriers to entry.
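
Jaunt hasn’t published its pipeline, but the core idea of stitching overlapping footage can be shown with a toy sketch: cross-fade the region two adjacent frames share. A real stitcher also aligns, warps, and exposure-corrects the frames; the data here is just a made-up row of pixel intensities:

```python
def stitch_pair(left, right, overlap):
    """Join two rows of pixel intensities that share `overlap` columns.

    The end of `left` shows the same scene content as the start of `right`;
    the shared region is blended with a linear cross-fade to hide the seam.
    """
    if overlap <= 0:
        return left + right
    blended = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)   # blend weight ramps toward `right`
        blended.append((1 - w) * left[len(left) - overlap + i] + w * right[i])
    return left[:-overlap] + blended + right[overlap:]

# Two tiny "frames" whose last/first 2 samples show the same content.
frame_a = [10, 10, 20, 30]
frame_b = [20, 30, 40, 50]
panorama = stitch_pair(frame_a, frame_b, overlap=2)
print(panorama)
```

Repeat that around a full ring of cameras and you get the 360-degree canvas a Rift can look around in; no geometry or texture-mapping ever enters the picture.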

I’m also kind of disappointed that in the article the author keeps insisting that you “had to be there”. Telling us words cannot express the experience is like telling me in writing that the “dog ate my homework”. I guess I “had to be there” for that too. Any way you put it, telling me more about the company, the premises, and the prototypes means you’re writing for a venture capital audience, not someone who might make work using the camera or consume the work made by the artists working with it. I say just cave in to the temptation and TRY expressing the experience in words. Don’t worry if you fail; you’ve just increased the comment rate on your story, engaging people long after its initial publication date. In spite of the lack of daring to describe the experience, I picked up enough detail, extrapolated it, and read between the lines enough to suspect this camera rig might well be the killer app, or authoring app, for the Oculus Rift platform. Let’s hope it sees the light of day and makes it to market quicker than the Google Glass prototypes floating around these days.
