Categories
google, mobile, support, web standards, wired culture

What’s a Chromebook good for? How about running PHOTOSHOP? • The Register

Netscape Communicator (Photo credit: Wikipedia)

Photoshop is the only application from Adobe’s suite that’s getting the streaming treatment so far, but the company says it plans to offer other applications via the same tech soon. That doesn’t mean it’s planning to phase out its on-premise applications, though.

via What’s a Chromebook good for? How about running PHOTOSHOP? • The Register.

Back in 1997 and 1998 I spent a lot of time experimenting and playing with Netscape Communicator “Gold”. It had a built-in web page editor that more or less gave you WYSIWYG rendering of the HTML elements live as you edited. It also had an email client and news reader built into it. I also spent a lot of time reading Netscape white papers on their Netscape Communications server, their LDAP server, and this whole universe of Netscape trying to re-engineer desktop computing in such a way that the Web Browser was the THING. Instead of a desktop with apps, you had app-like behavior resident in the web browser. And from there you would develop your JavaScript/ECMAScript web applications that did other useful things. Web pages with links in them could take the place of PowerPoint. Netscape Communicator Gold would take the place of Word and Outlook. This is the triumvirate that Google would assail some 10 years later with its own Google Apps and the benefit of AJAX-based web app interfaces and programming.

Turn now to this announcement by Adobe and Google, a joint effort to “stream” Photoshop through a web browser. A long-time stalwart of desktop computing, Adobe Photoshop (prior to being bundled with EVERYTHING else) required a real computer in the early days (ahem, meaning a Macintosh) and has continued to demand one, even more so (as the article points out) once CS4 attempted to use the GPU as an accelerator for the application. For years I kept up with each new release of the software, but around 1998 I feel like I stopped learning new features, and my “experience” more or less cemented itself in the pre-CS era (let’s call that Photoshop 7.0). Since then I do maybe 3-5 things in Photoshop, ever. I scan. I layer things with text. I color balance things or adjust exposures. I apply a filter (usually unsharp mask). I save to a multitude of file formats. That’s it!

Given that there’s even a possibility of streaming Photoshop on a Google Chromebook-based device, I think we’ve now hit that which Netscape had discovered long ago. The web browser is the desktop, pure and simple. It was bound to happen, especially now with the erosion into different form factors and mobile OSes. iOS and Android have shown that what we are willing to call an “app” is, most of the time, nothing more than a glorified link to a web page. So if they can manage to wire up enough of the codebase of Photoshop to make it work in realtime through a web browser without tons and tons of plug-ins and client-side JavaScript, I say all the better. Because this means, architecturally speaking, good old Outlook Web Access (OWA) can only get better and become more like its desktop cousin, Outlook 2013. Microsoft too is eroding the distinction between desktop and mobile. It’s all just a matter of more time passing.

Categories
cloud, data center, google, macintosh

Apple’s CDN Now Live: Has Paid Deals With ISPs, Massive Capacity In Place – Dan Rayburn – StreamingMediaBlog.com

A sample apple grown around Shenandoah Valley, Va. (Photo credit: Boston Public Library)

Since last year, Apple’s been hard at work building out their own CDN and now those efforts are paying off. Recently, Apple’s CDN has gone live in the U.S. and Europe and the company is now delivering some of their own content, directly to consumers. In addition, Apple has interconnect deals in place with multiple ISPs, including Comcast and others, and has paid to get direct access to their networks.

via Apple’s CDN Now Live: Has Paid Deals With ISPs, Massive Capacity In Place – Dan Rayburn – StreamingMediaBlog.com.

Given some of my experiences attempting to watch the live stream from Apple’s combined iPhone/Watch event, I wanted to address CDNs. Content Distribution Networks are designed to speed the flow of many types of files from data centers, or from video head ends for live events. I note that I started this article back on August 1st, when the original announcement went out. It’s now doubly poignant, as the video stream difficulties at the start of the show (1PM EDT) kind of ruined it for me and for a few others. They lost me in those scant first 10 minutes and they never recovered. I did connect later, but that was after the Apple Watch presentation was half done. Oh well, you get what you pay for. I paid nothing for the live event stream from Apple and got nothing in return.

Back during the Steve Jobs era, one of the biggest supporters of Akamai and its content delivery network was Apple Inc. And this was not just for streaming the keynote speeches at Macworld (before Apple withdrew from that event) but also the Worldwide Developers Conference (WWDC). At the time we enjoyed great access to streams at great performance levels, all for free. But Apple cut way back on those simulcasts, and rivals like Eventbrite began to eat into Akamai’s lower end. Since then the huge Internet companies began to build out their own data centers worldwide. And in so doing, a kind of internal monopoly of content distribution went into effect. Google was first to really scale up in a massive way and then scale out, to make sure all those Gmail accounts ran faster and better in spite of the huge mail spool behind each account. Eventually the second wave of social media outlets joined in (with Facebook leading a revolution in open hardware specs through the Open Compute Project) and created their own version of content delivery as well.

Now Apple has attempted to scale up and scale out to keep people tightly bound to the brand. iCloud really is a thing, but more than that, the real heavy lifting is now being done once and for all. Peering arrangements (anathema to the open Internet) would be signed and deals made to scratch each other’s backs by sharing the load of carrying not just your own internal traffic, but that of others too. And depending on the ISP, you could really get gouged in those negotiations. But no matter: Apple soldiered on, and now they’re ready to put all the prep work to good use. Hopefully the marketing will be sufficient to convey the improved end-user experience at all levels; iTunes, apps, iCloud data storage and everything else should see the boost in speed. If Apple can hold its own against both Facebook and Gmail in this regard, the future’s so bright they’re gonna need shades.

Categories
mobile, science & technology

Batteries take the lithium for charge boost • The Register

To do that, the researchers coated a lithium anode with a layer of hollow carbon nanospheres, to prevent the growth of the dendrites.

via Batteries take the lithium for charge boost • The Register.

As research is being done on incremental improvements in lithium-ion batteries, some occasional discoveries are being made. In this instance, the anode is being switched to pure lithium, with a coating to protect the very reactive metal surface. The problem with using pure lithium is the growth of microcrystalline “dendrites”, kind of like stalagmites/stalactites in caves, along the whole surface. As the dendrites build up, the anode loses its efficiency and the battery slowly loses its ability to charge all the way. This research has shown how to coat a pure lithium anode with a layer of hollow carbon nanospheres that acts as a permeable barrier between the electrolytic liquid in the battery and the pure lithium anode.

In past articles on Carpetbomberz.com we’ve seen announcements of other possible battery technologies like zinc-air and lithium-air, and the possible use of carbon nanotubes as an anode material. This announcement is promising in that its added cost might be somewhat smaller than a wholesale change in battery chemistry. Similarly, the article points out how much lighter elemental lithium is than the current anode materials (carbon and silicon). If the process of coating the anode is sufficiently inexpensive and can be done on an industrial production line, you will see this get adopted. But as with most experiments like these, scaling up and lowering costs is the hardest thing to do. Hopefully this is one that will make it into shipping products and see the light of day.
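
To put that weight advantage in rough perspective, here is a small back-of-the-envelope sketch using the commonly cited theoretical specific capacities of each anode material. These are ballpark textbook figures, not numbers from the paper behind the article, so treat the output as illustrative only.

```python
# Back-of-the-envelope comparison of anode materials by weight, using the
# commonly cited theoretical specific capacities (mAh per gram). Ballpark
# textbook figures, not values taken from the paper in the article.

capacity_mah_per_g = {
    "lithium metal": 3860,
    "silicon":       3580,   # often quoted anywhere from ~3600 to ~4200
    "graphite":       372,
}

target_mah = 3000  # roughly a phone-sized cell
for material, cap in capacity_mah_per_g.items():
    grams = target_mah / cap
    print(f"{material:14s} ~{grams:.2f} g of anode material for {target_mah} mAh")
```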

Categories
technology, wintel

Resentment, Jealousy, Feuds: A Look at Intel’s Founding Team – Michael S. Malone – Harvard Business Review

Michael S. Malone is a U.S. author, a former editor of Forbes magazine and host of a talk show on PBS. (Photo credit: Wikipedia)

Just when you think you understand the trio (as I thought I did up until my final interview with Grove) you learn something new that turns everything upside-down. The Intel Trinity must be considered one of the most successful teams in business history, yet it seems to violate all the laws of successful teams.

via Resentment, Jealousy, Feuds: A Look at Intel’s Founding Team – Michael S. Malone – Harvard Business Review.

Agreed, this is a topic near and dear to my heart, as I’ve read a number of the stories published over the years by the tech press: from Tracy Kidder‘s The Soul of a New Machine, to Fred Brooks’s The Mythical Man-Month, to Steven Levy’s Insanely Great; the story of Xerox PARC as told in Dealers of Lightning, and the ARPANET project as told in Where Wizards Stay Up Late. And moving somewhat along those lines, Stewart Brand’s The Media Lab and Howard Rheingold’s Virtual Reality. All of these are studies, at some level, of organizational theory in the high technology field.

And one thing you commonly find is that there’s one charismatic individual who joins up at some point (early or late, it doesn’t matter) and brings in a flood of followers and talent; that’s the kick in the pants that really gets momentum going. The problem with a startup company like Intel, or its predecessor Fairchild Semiconductor, is that there’s more than one charismatic individual. And keeping that organization stitched together, even just loosely, is probably the biggest challenge of all. So I’ll be curious to read this book by Michael Malone and see how it compares to the other books in my anthology of organizational theory in high tech. Should be a good, worthwhile read.

Categories
surveillance, wired culture

The CompuServe of Things

Photo of two farm silos (Photo credit: Wikipedia)

Summary

On the Net today we face a choice between freedom and captivity, independence and dependence. How we build the Internet of Things has far-reaching consequences for the humans who will use—or be used by—it. Will we push forward, connecting things using forests of silos that are reminiscent of the online services of the 1980’s, or will we learn the lessons of the Internet and build a true Internet of Things?

via The CompuServe of Things.

Phil Windley is absolutely right. And when it comes to silos, consider the silos we call app stores and network providers. Cell phones get locked to the subsidizing provider of the phone. The phone gets locked to the app store the manufacturer has built. All of this is designed to “capture” and ensnare a user into the cul-de-sac called the “brand”. And it would seem that if we let manufacturers and network providers make all the choices, the Internet of Things will be no different from the cell phone market we see today.

Categories
cloud, data center, fpga, science & technology

MIT Puts 36-Core Internet on a Chip | EE Times

Partially connected mesh topology (Photo credit: Wikipedia)

Today many different interconnection topologies are used for multicore chips. For as few as eight cores direct bus connections can be made — cores taking turns using the same bus. MIT’s 36-core processors, on the other hand, are connected by an on-chip mesh network reminiscent of Intel’s 2007 Teraflop Research Chip — code-named Polaris — where direct connections were made to adjacent cores, with data intended for remote cores passed from core-to-core until reaching its destination. For its 50-core Xeon Phi, however, Intel settled instead on using multiple high-speed rings for data, address, and acknowledgement instead of a mesh.

via MIT Puts 36-Core Internet on a Chip | EE Times.

I commented some time back on a similar article on the same topic. It appears the MIT research group now has working silicon of the design. As mentioned in the pull quote, the Xeon Phi (which has made some news in the Top 500 supercomputer stories recently) is a massively multicore architecture, but it uses a different interconnect that Intel designed on its own. These stories, as they appear, get filed into the category of massively multicore or low-power CPU developments. Most times the same CPUs add cores without drawing significantly more power and thus provide a net increase in compute ability. Tilera, Calxeda and yes, even SeaMicro were all working towards those ends. Whether through mergers or cuts in funding, each one has seemed to trail off without succeeding at its original goal (massively multicore, low-power designs). Also along the way, Intel has done everything it can to dull and dent the novelty of the new designs by revising an Atom-based or Celeron-based CPU to provide much lower power at the scale of maybe two cores per CPU.
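
The hop-by-hop forwarding described in the pull quote is commonly implemented as dimension-ordered (XY) routing on a 2D mesh: a packet travels along one axis until it lines up with its destination, then along the other. Here’s a minimal sketch of that textbook scheme; it is only an illustration of the general idea, not MIT’s actual router logic.

```python
# Dimension-ordered (XY) routing on a mesh, the textbook version of the
# core-to-core forwarding described in the article. Illustrative only; this
# is not the routing logic MIT actually implemented.

def xy_route(src, dst):
    """Return the list of (x, y) cores a packet visits from src to dst."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    # First travel along the X dimension...
    while x != dx:
        x += 1 if dx > x else -1
        path.append((x, y))
    # ...then along the Y dimension.
    while y != dy:
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

if __name__ == "__main__":
    # A packet from core (0, 0) to core (5, 3) on a 6x6 mesh makes 8 hops.
    hops = xy_route((0, 0), (5, 3))
    print(f"{len(hops) - 1} hops: {hops}")
```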

Like the chip MIT just announced, Tilera too was originally an MIT research project spun off from the university. Its principals were the PI and a research associate, if I remember correctly. Now that MIT has the working silicon, they’re going to benchmark, test and verify their design. Once they’ve completed their own study, the researchers will release the Verilog hardware description of the chip for anyone to use, research or verify for themselves. It will be interesting to see how much of an incremental improvement this design provides; it could possibly be the launch of another Tilera-style product out of MIT.

Categories
data center, flash memory, science & technology, SSD

AnandTech | Intel SSD DC P3700 Review: The PCIe SSD Transition Begins with NVMe

We don’t see infrequent blips of CPU architecture releases from Intel, we get a regular, 2-year tick-tock cadence. It’s time for Intel’s NSG to be given the resources necessary to do the same. I long for the day when we don’t just see these SSD releases limited to the enterprise and corporate client segments, but spread across all markets – from mobile to consumer PC client and of course up to the enterprise as well.

via AnandTech | Intel SSD DC P3700 Review: The PCIe SSD Transition Begins with NVMe.

Big news in the SSD/flash memory world at Computex in Taipei, Taiwan: Intel has entered the fray with Samsung and SandForce, issuing a fully NVMe-compliant set of drives running on PCIe cards. Throughputs are amazing, and the prices are surprisingly competitive. You can enter the market for as low as $600 for a 400GB PCIe card running as an NVMe-compliant drive. On Windows Server 2012 R2 and Windows 8.1 you get native support for NVMe drives. This is going to get really interesting, especially considering all the markets and tiers of consumers involved. On the budget side is the SATA Express interface, an attempt to factor out some of the slowness inherent in SSDs attached to SATA bus interfaces. Then there’s M.2, the smaller-form-factor PCIe-based drive interface being adopted by manufacturers of light, small-form-factor tablets and laptops. That is a big jump past SATA altogether and brings a speed boost with it, as it communicates directly with the PCIe bus. Last and most impressive of all are the NVMe devices announced by Intel, with yet a further speed boost as they address multiple data lanes on PCI Express. Some concern trolls in the gaming community are quick to point out that those data lanes are being lost to I/O when they are already maxing them out with their 3D graphics boards.

The route forward, it seems, would be Intel motherboard designs with a PCIe 3.0 interface carrying the equivalent data lanes of two full-speed 16x graphics cards, but devoting that extra 16x worth of lanes to I/O instead; or maybe a 1.5x arrangement with one full 16x slot and two more 8x slots, handling regular I/O plus a dedicated 8x NVMe interface. It’s going to require some re-engineering and BIOS updating, no doubt, to get all the speed out of all the devices simultaneously. That’s why I would also like to remind readers of the Flash-DIMM phenomenon, sitting out there on the edges in the high-speed, high-frequency trading houses of the NYC metro area. We haven’t seen or heard much since the original product announcement from IBM for the X6-series servers and the options for Flash-DIMMs on that product line. Smart Memory Technology (the prime designer/manufacturer of Flash-DIMMs for SanDisk) has now been bought out by SanDisk; again, no word on that product line now. The same is true for the Lenovo takeover of IBM’s Intel server product line (of which the X6 series is the jewel in the crown). Mergers and acquisitions have veiled and blunted some of these revolutionary product announcements, but I hope Flash-DIMMs eventually see the light of day, gain full BIOS support and make it into the desktop computer market. As good as NVMe is going forward, I think we also need a mix of Flash-DIMMs to see the full speed of the multi-core x86 Intel chips.
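
For a sense of why the lane allocation matters, here is a quick back-of-the-envelope comparison of theoretical interface bandwidth using the published per-lane signaling rates and line-coding overhead. Real drives never reach these maxima, and the lane counts shown are just common configurations, not any particular motherboard’s layout.

```python
# Theoretical interface bandwidth, before any protocol overhead beyond line
# coding. Per-lane rates and encodings are the published spec figures;
# actual drive throughput will be lower.

GBIT = 1e9 / 8  # bytes per second in one gigabit per second

interfaces = {
    # name: (gigatransfers per second per lane, lanes, coding efficiency)
    "SATA 6Gbps":   (6.0, 1,  8 / 10),     # 8b/10b encoding
    "PCIe 2.0 x4":  (5.0, 4,  8 / 10),     # 8b/10b encoding
    "PCIe 3.0 x4":  (8.0, 4,  128 / 130),  # 128b/130b encoding
    "PCIe 3.0 x8":  (8.0, 8,  128 / 130),
    "PCIe 3.0 x16": (8.0, 16, 128 / 130),
}

for name, (gt_per_s, lanes, efficiency) in interfaces.items():
    mb_per_s = gt_per_s * GBIT * lanes * efficiency / 1e6
    print(f"{name:12s} ~{mb_per_s:7.0f} MB/s")
```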

Categories
blogtools, entertainment, google, media, surveillance

Audrey Watters: The Future of Ed-Tech is a Reclamation Project #DLFAB

Audrey Watters Media Predicts 2011 (Photo credit: @Photo.)

We can reclaim the Web and more broadly ed-tech for teaching and learning. But we must reclaim control of the data, content, and knowledge we create. We are not resources to be mined. Learners do not enter our schools and our libraries to become products for the textbook industry and the testing industry and the technology industry and the ed-tech industry to profit from.

via The Future of Ed-Tech is a Reclamation Project #DLFAB. (by Audrey Watters)

Really philosophical article about what it is Higher Ed is trying to do here. It’s not just about student portfolios, it’s Everything: the books you check out, the seminars you attend, the videos you watched, the notes you took, all the artifacts of learning. And currently they are all squirreled away and stashed inside data silos like Learning Management Systems.

The original World Wide Web was like the Wild, Wild West, an open frontier without visible limit. Cloud services and commercial offerings have fenced in the frontier in a series of waves of fashion. Whether it was AOL, Tripod.com, Geocities, Friendster, MySpace or Facebook, the web grew in the form of gated communities and cul-de-sacs for “members only”. True, the democracy of it all was that membership was open and free and practically anyone could join; all you had to do was hand over control, the keys to YOUR data. That was the bargain: by giving up your privacy, you gained all the rewards of socializing with long-lost friends and acquaintances. From that little spark the surveillance and “data mining” operation hit full speed.

Reclaiming ownership of all this data, especially the part generated over one’s lifetime of learning, is a worthy cause. Audrey Watters references Jon Udell in an example of the kind of data we would want to own, and limit access to, our whole lives. From the article:

Udell then imagines what it might mean to collect all of one’s important data from grade school, high school, college and work — to have the ability to turn this into a portfolio — for posterity, for personal reflection, and for professional display on the Web.

Indeed, and at the same time, though this data may live on the Internet somewhere, access is restricted to those we give explicit permission to access it. That’s in part a project unto itself: this mesh of data could be text or other data objects that might need to be translated and converted to future-readable formats so they don’t grow old and obsolete in an abandoned file format. All of this stuff could be given a very fine level of access control, letting individuals you have approved read parts and pieces, or maybe even granting wholesale access. You would make that decision and maybe just share the absolute minimum necessary. So instead of seeing a portfolio of your whole educational career, people get the relevant links and just those links. That’s what Jon Udell is pursuing now through the Thali project. Thali is a much more generalized way to share data from many devices, presented in a holistic, rationalized manner to whomever you define as a trusted peer. It’s not just about educational portfolios, it’s about sharing your data. But first and foremost you have to own the data, or attempt to reclaim it from the wilds and wilderness of the social media enterprise, the educational enterprise, all these folks who want to own your data while giving you free services in return.
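
To make the idea of fine-grained sharing concrete, here is a toy sketch of a portfolio where each artifact is released only to explicitly approved peers. Everything in it (the item names, the structure, the “public” convention) is invented for illustration; it is not Thali’s actual data model or API.

```python
# Toy sketch of per-item sharing: each artifact in a lifetime learning
# portfolio is visible only to peers explicitly granted access. All names
# and structure here are made up for illustration, not Thali's real model.

portfolio = {
    "grade-school/science-fair-1988.txt": {"peers": {"family"}},
    "college/thesis.pdf":                 {"peers": {"advisor", "employer"}},
    "work/portfolio-site.html":           {"peers": {"public"}},
}

def visible_to(peer):
    """Return only the items this peer has been explicitly granted."""
    return [item for item, acl in portfolio.items()
            if peer in acl["peers"] or "public" in acl["peers"]]

print(visible_to("employer"))  # just the relevant links, nothing more
print(visible_to("stranger"))  # only what was deliberately made public
```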

Audrey uses the metaphor “data is the new oil,” and that is the heart of the problem. Given the oil for free, those who invested in holding onto and storing it are loath to give it up. And like credit reporting agencies with their duplicate and sometimes incorrect datasets, those folks will give access to that unknown quantity to the highest bidder for whatever reason. Whether it’s campaign staffers, private detectives or vengeful spouses doesn’t matter, as they own the data and set the rules for how it is shared. However, in the future, when we’ve all reclaimed ownership of our piece of the oil field, THEN we’ll have something. And when it comes to the digital equivalent of the old manila folder, we too will truly own our education.

Categories
flash memory, macintosh, SSD, wintel

AnandTech | Samsung SSD XP941 Review: The PCIe Era Is Here

Mini PCI-Express Connector on Inspiron 11z Motherboard, Front (Photo credit: DandyDanny)

I don’t think there is any other way to say this other than to state that the XP941 is without a doubt the fastest consumer SSD in the market. It set records in almost all of our benchmarks and beat SATA 6Gbps drives by a substantial margin. It’s not only faster than the SATA 6Gbps drives but it surpasses all other PCIe drives we have tested in the past, including OCZ’s Z-Drive R4 with eight controllers in RAID 0. Given that we are dealing with a single PCIe 2.0 x4 controller, that is just awesome.

via AnandTech | Samsung SSD XP941 Review: The PCIe Era Is Here.

Listen well as you pine away for your very own SATA SSD. One day you will get that new thing. But what you really, really want is the new, NEW thing. And that, my friends, is quite simply the PCIe SSD. True, enterprise-level purchasers have had a host of manufacturers and models to choose from in this form factor. But the desktop market cannot afford Fusion-io products at roughly $15K per card fully configured. That’s a whole different market there. OCZ’s RevoDrive line has had a wider range of products that go from the heights of Fusion-io down to the top-end gamer market with the RevoDrive R-series PCIe drives. But those have always been SATA drives piggy-backed onto a multi-lane PCIe card (4x or 8x, depending on how many controllers were installed onboard the card). Here, now, the evolutionary step of dumping SATA in favor of a more native PCIe-to-NAND memory controller is slowly taking place. Apple has adopted it for the top-end Mac Pro revision (the price and limited availability have made it hard to publicize this architectural choice). It has also been adopted in the laptops Apple has shipped since Summer 2013 (and I have the MacBook Air to prove it). Speedy, yes it is. But how do I get this on my home computer?

AnandTech was able to score the drive as an aftermarket part through a third party in Australia, along with a PCIe adapter card for that very Samsung PCIe drive. So where there is a will, there is a way. From that purchase of both the drive and the adapter, this review of the Samsung PCIe drive has come about. And all one can say after looking through the benchmarks is that we have not seen anything yet. Drive speeds, which have been the bottleneck in desktop and mobile computing since the dawn of the personal computer, are slowly being unshackled, and not by a little but by a lot. This is going to herald a new age in personal computers that comes as close as anything to former Intel chairman Andy Grove’s 10X effect. Samsung’s PCIe-native SSD is the kind of disruptive, perspective-altering product that will put all manufacturers on notice and force a sea change in design and manufacturing.

As end users of the technology, we’ve already felt the big impact SSDs with SATA interfaces have had on our laptops and desktops. But what I’ve been writing about, and trying to find signs of ever since the first introduction of SSDs, is the logical path through the legacy interfaces. Whether it was ATA/BIOS or the bridge chips that glue the motherboard to the CPU, a number of “old” architecture items are still hanging around on the computers of today. Intel’s adoption of UEFI has been a big step forward in shedding the legacy bottleneck components. Beyond that, native on-CPU controllers for PCIe are a good step forward as well. Lastly, the sockets and bridging chips on the motherboard are the neighborhood improvements that again help speed things up. The last mile, however, is dumping the “disk” interface, the ATA/SATA spec built around reading data off a spinning magnetic hard drive. We need to improve that last mile to the NAND memory chips, and then we’re going to see the full benefit of products like the Samsung PCIe drive. That day is nearly upon us with the most recent motherboard/chipset revision from Intel. We may need another revision to get exactly what we want, but the roadmap is there and all the manufacturers had better get on it. Samsung is driving this revolution… NOW.
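
If you want a rough feel for where your own drive sits on that curve, a crude sequential-read timing like the sketch below will do. It is nothing like AnandTech’s controlled benchmarks, the file path is a placeholder, and OS caching will inflate repeat runs.

```python
# Crude sequential-read throughput estimate for whatever drive holds the
# file. Not a proper benchmark: no queue-depth control, and the OS page
# cache will inflate results on repeat runs. The path is a placeholder.
import time

def sequential_read_mb_s(path, block_size=4 * 1024 * 1024, max_bytes=1024**3):
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while total < max_bytes:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / 1e6) / elapsed

if __name__ == "__main__":
    # Point this at any large existing file on the drive you want to test.
    print(f"~{sequential_read_mb_s('big-test-file.bin'):.0f} MB/s")
```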

Categories
surveillance, wired culture

Meet the godfather of wearables | The Verge

Stasi HQ building, Berlin, Germany (Photo credit: Wikipedia)

He continues, “People are upset about privacy, but in one sense they are insufficiently upset because they don’t really understand what’s at risk. They are looking only at the short term.” And to him, there is only one viable answer to these potential risks: “You’re going to control your own data.” He sees the future as one where individuals make active sharing decisions, knowing precisely when, how, and by whom their data will be used. “That’s the most important thing, control of the data,” he reflects. “It has to be done correctly. Otherwise you end up with something like the Stasi.”

via Meet the godfather of wearables | The Verge.

Sounds a little bit like VRM and a little bit like Jon Udell‘s Thali project. Wearables don’t fix the problem of metadata being collected about you, no. You still don’t control those incoming and outgoing feeds of information.

Sandy Pentland points out that a lot can be derived and discerned simply from the people you know. Every contact in your friend list adds one more bit of intelligence about you without anyone ever talking to you directly. This kind of analysis is only possible now due to the End User License Agreements posted by each of the collecting entities (so-called social networking websites).
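
As a toy illustration of how much a friend list alone can leak, here is a tiny sketch that guesses an undisclosed attribute from the majority value among a person’s contacts. It is a simplified homophily heuristic for illustration only, not Pentland’s actual analysis, and all the names and attributes are invented.

```python
# Toy illustration of friend-list leakage: infer an undisclosed attribute
# from the majority value among a person's contacts. A simplistic homophily
# heuristic with invented data; not Sandy Pentland's actual methodology.
from collections import Counter

friends = {"alice": ["bob", "carol", "dave"]}
disclosed_interest = {"bob": "running", "carol": "running", "dave": "cycling"}

def infer_interest(person):
    votes = Counter(disclosed_interest[f]
                    for f in friends[person] if f in disclosed_interest)
    guess, count = votes.most_common(1)[0]
    return guess, count / sum(votes.values())

# Alice never disclosed anything, yet her contacts give her away.
print(infer_interest("alice"))  # roughly ('running', 0.67)
```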

An alternative to this wildcat, frontier mentality among data collectors is Vendor Relationship Management, championed by Doc Searls (co-author of The Cluetrain Manifesto). Searls wants people to be able to share the absolute minimum necessary in order to get what they want or need from vendors on the Internet, especially the data-collecting types. And from that point, if an individual wants to share more, they should be rewarded with a higher level of something in return from the people they share with (the prime example being vendors, the ‘V’ in VRM).

Thali, in another way, allows you to share data as well. But instead of letting someone into your data mesh in an all-or-nothing way, it lets strongly identified individuals have linkages into or out of your own data streams, whatever form those data streams may take. I think Sandy Pentland, Doc Searls and Jon Udell would all agree there needs to be some amount of ownership and control ceded back to the individual going forward. Too many of the vendors own the data and the metadata right now, and will do what they like with it, including responding to National Security Letters. So instead of being commercial ventures, they are swiftly evolving into branches or de facto subsidiaries of the National Security Agency. If we can place controls on the data, we’ll maybe get closer to the ideal of social networking and controlled data sharing.
