Categories
cloud data center google macintosh

Apple’s CDN Now Live: Has Paid Deals With ISPs, Massive Capacity In Place – Dan Rayburn – StreamingMediaBlog.com

A sample apple grown around Shenandoah Valley, Va. (Photo credit: Boston Public Library)

Since last year, Apple’s been hard at work building out their own CDN and now those efforts are paying off. Recently, Apple’s CDN has gone live in the U.S. and Europe and the company is now delivering some of their own content, directly to consumers. In addition, Apple has interconnect deals in place with multiple ISPs, including Comcast and others, and has paid to get direct access to their networks.

via Apple’s CDN Now Live: Has Paid Deals With ISPs, Massive Capacity In Place – Dan Rayburn – StreamingMediaBlog.com.

Given some of my experiences attempting to watch the live stream of Apple’s combined iPhone and Watch event, I wanted to address CDNs. Content Distribution Networks are designed to speed the flow of many types of files from data centers, or from video head ends for live events. I’ll note that I started this article back on August 1st, when the original announcement went out, and it’s doubly poignant now because the video stream difficulties at the start of the show (1PM EDT) kind of ruined it for me and for a few others. They lost me in those first ten minutes and never got me back. I did connect later, but that was after the Apple Watch presentation was half done. Oh well, you get what you pay for. I paid nothing for the live event stream from Apple and got nothing in return.

Back during the Steve Jobs era, one of the biggest supporters of Akamai and its content delivery network was Apple Inc. And this was not just for streaming the keynote speeches at Macworld (before Apple withdrew from that event) but also the Worldwide Developers Conference (WWDC). At the time we enjoyed great access to those streams and great performance, all for free. But Apple cut way back on the simulcasts, and rivals like Eventbrite began to eat into Akamai’s lower end. Since then the huge web companies have built out their own data centers worldwide, and in so doing a kind of internal monopoly on content distribution went into effect. Google was first to really scale up in a massive way and then scale out, to make sure all those Gmail accounts ran faster and better in spite of the huge mail spools on each account. Eventually the second wave of social media outlets joined in (with Facebook leading a revolution with the Open Compute Project and open hardware specs) and created their own versions of content delivery as well.

Now Apple has attempted to scale up and scale out to keep people tightly bound to the brand. iCloud really is a thing, but more than that, the real heavy lifting is now being done once and for all. Peering and interconnect arrangements (anathema to the open Internet) have been signed, and deals made to scratch each other’s backs by sharing the load of carrying not just your own internal traffic but everyone else’s too. And depending on the ISP, you could really get gouged in those negotiations. No matter; Apple soldiered on, and now all that prep work is ready to be put to good use. Hopefully the marketing will be sufficient to convey the improvement, and the end user experience at every level (iTunes, apps, iCloud data storage and everything else) will show the boost in speed. If Apple can hold its own against both Facebook and Gmail in this regard, the future’s so bright they’re gonna need shades.

Categories
flash memory macintosh SSD wintel

AnandTech | Samsung SSD XP941 Review: The PCIe Era Is Here

Mini PCI-Express Connector on Inspiron 11z Motherboard, Front (Photo credit: DandyDanny)

I don’t think there is any other way to say this other than to state that the XP941 is without a doubt the fastest consumer SSD in the market. It set records in almost all of our benchmarks and beat SATA 6Gbps drives by a substantial margin. It’s not only faster than the SATA 6Gbps drives but it surpasses all other PCIe drives we have tested in the past, including OCZ’s Z-Drive R4 with eight controllers in RAID 0. Given that we are dealing with a single PCIe 2.0 x4 controller, that is just awesome.

via AnandTech | Samsung SSD XP941 Review: The PCIe Era Is Here.

Listen well as you pine away for your very own SATA SSD. One day you will get that new thing. But what you really, really want is the new, NEW thing. And that, my friends, is quite simply the PCIe SSD. True, enterprise-level purchasers have had a host of manufacturers and models to choose from in this form factor. But the desktop market cannot afford Fusion-io products at roughly $15K per card fully configured; that’s a whole different market. OCZ’s RevoDrive line has had a wider range of products that run from the heights of Fusion-io down to the top-end gamer market with the RevoDrive R-series PCIe drives. But those have always been SATA drives piggy-backed onto a multi-lane PCIe card (x4 or x8, depending on how many controllers were installed onboard). Here, now, the evolutionary step of dumping SATA in favor of a native PCIe-to-NAND memory controller is slowly taking place. Apple has adopted it for the top-end Mac Pro revision (the price and limited availability have made it hard to publicize this architectural choice). It has also been adopted in the laptops Apple has produced since Summer 2013 (and I have the MacBook Air to prove it). Speedy, yes it is. But how do I get this on my home computer?

AnandTech was able to score the Samsung drive as an aftermarket part through a 3rd party in Australia, along with a PCIe adapter card to mount it in. So where there is a will, there is a way. From that purchase of both the drive and the adapter, this review of the Samsung PCIe drive has come about. And all one can say, looking through the benchmarks, is that we have not seen anything yet. Storage has been the bottleneck in desktop and mobile computing since the dawn of the personal computer, and that bottleneck is finally lifting, not by a little but by a lot. This is going to herald a new age in personal computers that is as close as we have come to former Intel Chairman Andy Grove’s 10X effect. Samsung’s native PCIe SSD is the kind of disruptive, perspective-altering product that will put all manufacturers on notice and force a sea change in design and manufacture.
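To put rough numbers on “not by a little but by a lot,” here is a back-of-the-envelope comparison I worked up (my own arithmetic, not AnandTech’s benchmark data) of the theoretical payload ceilings of a SATA 6Gbps link versus the PCIe 2.0 x4 link the XP941 uses, with PCIe 3.0 x4 thrown in as the obvious next step:

```python
# Back-of-the-envelope interface ceilings (theoretical, before protocol overhead).
# SATA 6Gbps and PCIe 2.0 use 8b/10b encoding (10 bits on the wire per payload byte);
# PCIe 3.0 switched to the leaner 128b/130b encoding (~8.125 bits per byte).

def payload_mb_per_s(gigatransfers: float, wire_bits_per_byte: float, lanes: int = 1) -> float:
    """Usable payload bandwidth of a serial link in MB/s."""
    return gigatransfers * 1000 / wire_bits_per_byte * lanes

sata_6g  = payload_mb_per_s(6.0, 10)                  # ~600 MB/s
pcie2_x4 = payload_mb_per_s(5.0, 10, lanes=4)         # ~2000 MB/s
pcie3_x4 = payload_mb_per_s(8.0, 130 / 16, lanes=4)   # ~3940 MB/s

print(f"SATA 6Gbps : {sata_6g:6.0f} MB/s")
print(f"PCIe 2.0 x4: {pcie2_x4:6.0f} MB/s ({pcie2_x4 / sata_6g:.1f}x SATA)")
print(f"PCIe 3.0 x4: {pcie3_x4:6.0f} MB/s ({pcie3_x4 / sata_6g:.1f}x SATA)")
```

Real drives land below those ceilings once protocol overhead and the controller get involved, but the gap between the interfaces is the point.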

As end users of the technology, we have already felt the big impact SSDs with SATA interfaces have had on our laptops and desktops. But what I’ve been writing about, and trying to find signs of ever since SSDs were first introduced, is a logical path through the legacy interfaces. Whether it’s ATA/BIOS or the bridge chips that glue the motherboard to the CPU, a number of “old” architecture items are still hanging around on the computers of today. Intel’s adoption of UEFI has been a big step toward shedding the legacy bottleneck components. Beyond that, native on-CPU PCIe controllers are a good step forward as well. Lastly, the sockets and bridging chips on the motherboard are the neighborhood improvements that again help speed things up. The last mile, however, is dumping the “disk” interface, the ATA/SATA spec built around reading data off a spinning magnetic hard drive. Improve that last mile to the NAND memory chips and we will see the full benefit of products like this Samsung drive. That day is nearly upon us with the most recent motherboard/chipset revision from Intel. We may need another revision to get exactly what we want, but the roadmap is there, and all the manufacturers had better get on it. Samsung is driving this revolution… NOW.

Categories
macintosh mobile wired culture

AnandTech | Apple’s Cyclone Microarchitecture Detailed


So for now, Cyclone’s performance is really used to exploit race to sleep and get the device into a low power state as quickly as possible.

via AnandTech | Apple’s Cyclone Microarchitecture Detailed.

Race to sleep is the new, new thing for mobile CPUs. Power conservation now comes from parceling a task out across more cores, or running at a higher clock speed for a shorter burst: all the cores execute and complete the task, then they are put to sleep or dropped into a much lower power state. That’s how you get things done and still maintain a 10-hour battery life on an iPad Air or iPhone 5s.

So even though a mobile processor can now be the equal of an average desktop CPU, it’s the race to the sleep state that is the big differentiator. That is what Apple’s adoption of the 64-bit ARMv8 architecture is bringing to market: the race to sleep. When the first hints and rumors appeared, 64-bit seemed more like an attempt to address more DRAM or gain some desktop-level performance. But it’s really about executing quickly and dropping into sleep mode to preserve battery capacity.
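A toy energy budget makes the idea concrete. The power and time figures below are made-up, illustrative numbers (not Apple’s), but the arithmetic is the whole argument: finishing fast and idling at near-zero power can cost less total energy than crawling along at a lower clock.

```python
# Toy race-to-sleep energy budget over a fixed 2-second window.
# All power/time figures are illustrative, not measured values for any real SoC.

WINDOW_S = 2.0
SLEEP_POWER_W = 0.02  # deep-sleep power once the task is done

def energy_joules(active_power_w: float, active_time_s: float) -> float:
    """Energy spent running the task, plus sleeping for the rest of the window."""
    sleep_time = WINDOW_S - active_time_s
    return active_power_w * active_time_s + SLEEP_POWER_W * sleep_time

slow_and_steady = energy_joules(active_power_w=0.45, active_time_s=2.0)  # never sleeps
race_to_sleep   = energy_joules(active_power_w=1.00, active_time_s=0.7)  # sprints, then naps

print(f"slow and steady: {slow_and_steady:.2f} J")  # 0.90 J
print(f"race to sleep:   {race_to_sleep:.2f} J")    # 0.73 J
```

The absolute numbers don’t matter; what matters is that the idle floor is so low that sprinting and then napping wins.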

I’m thinking now of some past articles covering the nascent market for low-power, massively parallel data center servers. 64 bits was an absolutely necessary first step to get ARM CPUs into blades and rack servers destined for low-power data centers. 64-bit memory addressing is considered a non-negotiable feature that even the most power-efficient server must have: it doesn’t matter what CPU the box is designed around, the addressing has to be 64-bit or it cannot be considered. That rule still applies today and remains the sticking point for folks sitting back and ignoring the Tilera architecture or SeaMicro’s interesting cloud-in-a-box designs. To date, it seems Apple was first to market with a 64-bit ARM design without ARM actually supplying the base circuit design and layout for the new 64-bit generation. Apple instead did the heavy lifting and engineering itself to get the 64-bit capability it needed to continue its drive toward better battery life. Time will tell whether this heralds other efficiency or raw compute performance improvements.
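The arithmetic behind that non-negotiable rule fits in a couple of lines: a 32-bit pointer tops out at 4 GiB of addressable memory, already less than a single DIMM in a modern server, while 64 bits is effectively unlimited for the foreseeable future.

```python
# The 32-bit ceiling that rules 32-bit CPUs out of the data center.
GIB = 2**30  # bytes in a gibibyte
EIB = 2**60  # bytes in an exbibyte

addressable_32bit = 2**32  # bytes reachable with a 32-bit pointer
addressable_64bit = 2**64  # bytes reachable with a 64-bit pointer

print(f"32-bit ceiling: {addressable_32bit / GIB:.0f} GiB")  # 4 GiB
print(f"64-bit ceiling: {addressable_64bit / EIB:.0f} EiB")  # 16 EiB
```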

Categories
computers gpu h.264 macintosh technology wintel

AnandTech – Testing OpenCL Accelerated Handbrake with AMD’s Trinity


AMD, and NVIDIA before it, has been trying to convince us of the usefulness of its GPUs for general purpose applications for years now. For a while it seemed as if video transcoding would be the killer application for GPUs, that was until Intel’s Quick Sync showed up last year.

via AnandTech – What We’ve Been Waiting For: Testing OpenCL Accelerated Handbrake with AMD’s Trinity.

There’s a lot to talk about when it comes to accelerated video transcoding. Not least is HandBrake’s general dominance for anyone shrinking a DVD collection down to sizes suitable for mobile devices. We owe it all to the open-source x264 encoder and all the programmers who have contributed to it over the years, standing on one another’s shoulders and allowing us to effortlessly encode or transcode gigabytes of video down to manageable sizes. But Intel has attempted to rock the boat by inserting itself into the fray, tooling its Quick Sync technology to accelerate the compression and decompression of video frames. It is, however, a proprietary path pursued by a few small-scale software vendors, which prompts the question: when is open source going to benefit from this kind of acceleration? Maybe it’s going to take a long time. Maybe it won’t happen at all. Luckily for the HandBrake users in the audience, an attempt is now being made to re-engineer the x264-based pipeline to take advantage of any OpenCL-compliant hardware on a given computer.
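The nice thing about the OpenCL route is that it isn’t tied to one vendor’s silicon; any conformant GPU (or CPU) driver advertises itself the same way. The snippet below is not HandBrake’s code, just a minimal sketch using the pyopencl bindings (assuming they are installed) of the device discovery any OpenCL-aware transcoder has to do before it can offload work like frame scaling.

```python
# Not HandBrake's code: a minimal sketch of OpenCL device discovery using the
# pyopencl bindings (assumed installed), the kind of enumeration any
# OpenCL-aware transcoder performs before offloading work to a GPU.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        kind = cl.device_type.to_string(device.type)
        mem_mb = device.global_mem_size // (1024 ** 2)
        print(f"  {device.name} [{kind}], "
              f"{device.max_compute_units} compute units, {mem_mb} MB global memory")
```

HandBrake itself is C code, of course; the point is only that OpenCL lets the same transcoder see an AMD APU, an NVIDIA card, or even the CPU through one interface.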

Categories
macintosh mobile technology

AnandTech – The iPad 2,4 Review: 32nm Brings Better Battery Life

New A5 chip from Apple: a 32nm A5 CPU from a new-model Apple TV, the same CPU being installed in a small number of iPad 2s

I would like to applaud Apple’s 32nm migration plan. By starting with lower volume products and even then, only on a portion of the iPad 2s available on the market, Apple maintains a low profile and gets great experience with Samsung’s 32nm HK+MG process.

via AnandTech – The iPad 2,4 Review: 32nm Brings Better Battery Life.

Anand Lal Shimpi at AnandTech.com does a great turn explaining some of the electrical engineering minutiae behind Apple’s unpublicized switch to a smaller design rule for some of its 2nd-generation iPads. Specifically, this iPad’s firmware reads as iPad 2,4, indicating a 32nm version of the Apple A5 chip. And boy howdy, is there a difference between the 45nm and the 32nm A5 in the iPad 2.

Anand first explains the process technology involved in making the new chip (metal gate electrodes and high dielectric constant, or high-k, gate oxides). Both are chosen to keep electricity from leaking through the transistor “switches” that populate the circuits on the processor. The high-k gate oxide can be made physically thicker than the old silicon dioxide layer while still switching the transistor just as strongly, so far less current tunnels through the gate; the metal gate electrode replaces the old polysilicon gate, which doesn’t behave well electrically when paired with a high-k oxide. A great explanation, I think, of those two on-die changes in the new Samsung 32nm design rule. Both changes help keep electrical current from leaking all over the processor.

What does this change mean? Well, the follow-up to that question is the set of benchmarks Anand runs in the rest of the article, checking battery life at each step of the way. Informally it appears the iPad 2,4 gets roughly one extra hour of battery life compared to the original iPad 2,1 with the larger 45nm A5. Graphics and CPU performance are exactly the SAME as the first-generation A5. So, as the article title indicates, this change was a straightforward die shrink from 45nm to 32nm, and it is no doubt helping validate the A5 architecture on the new production line’s process technology. That validation will absolutely be required to wedge the very large current-generation A5X CPU from the iPad 3 into a new iPhone in Fall 2012.

But consider this: even as Apple and Samsung both refine and innovate on the ARM architecture for mobile devices, Intel is still the process technology leader, bar none. Intel has 22nm production lines up and running and is releasing Ivy Bridge CPUs on that design rule this Summer 2012. While Intel doesn’t really compete in the mobile chip industry (there have been attempts in the past), it can at least tout having the most dense, power-efficient chips in the categories it dominates. I cannot help but wonder what kind of gains could be made if an innovator like Apple had access to an ARM chip foundry with all of Intel’s process engineering and optimization. What would an A5X look like at the 22nm design rule with all that power efficiency and silicon process technology applied to it? How large would the die be? What kind of battery life would you see if you die-shrunk an A5X all the way down to 22nm? That to me is the Andy Grove 10X improvement I would like to see. Could we get 11-12 continuous hours of battery life on a cell phone? Could we see a cell phone with more CPU/graphics capability than the current generation of Xbox and PlayStation? Hard to tell, I know, but it’s just so darned much fun to think about.
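For a rough sense of the stakes, ideal die-area scaling goes with the square of the design rule. Taking the ~163 mm² figure AnandTech measured for the 45nm A5X (covered in the next post below), the first-order arithmetic looks like this; real shrinks never hit the ideal, since I/O pads, analog blocks and SRAM don’t scale as nicely as logic, so treat these as optimistic ceilings.

```python
# Ideal (first-order) die-area scaling: area shrinks with the square of the
# design rule. Real-world shrinks fall short because pads, analog and SRAM
# don't scale as well as logic, so these numbers are optimistic ceilings.

A5X_AREA_MM2 = 162.94  # AnandTech's measured A5X die size at 45nm
OLD_NODE_NM = 45

def ideal_shrink(area_mm2: float, old_nm: int, new_nm: int) -> float:
    return area_mm2 * (new_nm / old_nm) ** 2

for new_node in (32, 22):
    shrunk = ideal_shrink(A5X_AREA_MM2, OLD_NODE_NM, new_node)
    print(f"A5X at {new_node}nm: ~{shrunk:.0f} mm^2 (vs {A5X_AREA_MM2:.0f} mm^2 at 45nm)")
```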

Design rules at 45nm (left) and 32nm (right) indicate the scale being discussed in the Anandtech article.
Categories
computers gpu macintosh mobile technology

Apple A5X CPU in Review

Apple Inc. (Photo credit: marcopako)

A meta-analysis of the Apple A5X system on chip

(from the currently shipping 3rd Gen iPad)

New iPad’s A5X beats NVIDIA Tegra 3 in some tests (MacNN|Electronista)

Apple’s A5X Die (and Size?) Revealed (Anandtech.com)

Chip analysis reveals subtle changes to new iPad innards (AppleInsider-quoting Anandtech)

Apple A5X Die Size Measured: 162.94mm^2, Samsung 45nm LP Confirmed (Update from Anandtech based on a more technical analysis of the chip)

Reading through all the hubbub and hand-waving from the technology ‘teardown’ press outlets, one would have expected a bigger leap from Apple’s chip designers. What Apple came up with to drive the higher-resolution screen (the so-called Retina Display) is a fairly large chip with an enormous graphics processor integrated onto the die. The design rule is still a pretty conservative 45nm, rather than pushing the envelope with 32nm or smaller to bring down the power requirements. Apple likewise had to boost battery capacity to almost 2X that of the first-generation iPad to feed this power-hungry pixel demon. So for roughly the ‘same’ endurance (10 hours of reserve power), you get the higher-resolution display. But a bigger chip and a higher-resolution display add up to extra heat, generally speaking, which leads us to a controversy.

Given this, there has been a recent back-and-forth argument over the thermal design point of the 3rd-generation iPad. Consumer Reports published an online article saying the power/heat dissipation was much higher than on previous-generation iPads, including thermal photographs showing the hot spots on the back of the device and their relative temperatures. While the iPad doesn’t run hotter than a lot of other handheld devices (say, Android tablets), it does run hotter than, say, an iPod touch. But as Apple points out, that has ALWAYS been the case. So you gain some things, you give up some things, and still Apple is the market leader in this form factor, years ahead of the competition. And now the tempest in a teapot is winding down, as Consumer Reports (via LATimes.com) has rated the 3rd-generation iPad its no. 1 tablet on the market (big surprise). So while they aren’t willing to retract their original claim of high heat, they are willing to say it doesn’t count as ’cause for concern’. You be the judge when you try out the iPad in the Apple Store: run it through its paces, since a full-screen video or two should heat up the GPU and CPU enough to get the electrons really racing through the device.

The Apple A5X, the new system-on-chip used in the 3rd-generation iPad
Categories
computers diy macintosh wintel wired culture

Hope for a Tool-Less Tomorrow | iFixit.org

I’ve seen the future, and not only does it work, it works without tools. It’s moddable, repairable, and upgradeable. Its pieces slide in and out of place with hand force. Its lid lifts open and eases shut. It’s as sleek as an Apple product, without buried components or proprietary screws.

via Hope for a Tool-Less Tomorrow | iFixit.org.

HP Z1 workstation

Oh how I wish this were true today for Apple. I say this as a recent purchaser of an Apple-refurbished 27″ iMac. My reasoning for going with refurbished over new was based on a few bits of knowledge gained from reading Macintosh weblogs. The rumors I read included the idea that Apple-refurbished items are strenuously tested before being resold; in some cases returned items are not even broken, they are returns based on buyer’s remorse or cosmetic problems. So there’s a good chance the logic board and LCD have no problems. Reading back this past Summer, just after the launch of Mac OS X 10.7 (Lion), I saw lots of reports of crashes on 27″ iMacs, so I figured a safer bet would be the 21″ iMac. But then I started thinking about flash-based solid state disks, and looking at the prohibitively high prices Apple charges for its factory-installed SSDs, I decided I needed something I could upgrade myself.

But as you may know, iMacs have never been, and continue not to be, user-upgradable. That’s not to say people haven’t tried, or succeeded, in upgrading their own iMacs over the years; enter the aftermarket for SSD upgrades. Apple has attempted to zig and zag as the hobbyists swap in newer components like larger hard drives and SSDs. Witness the Apple temperature sensor on the boot drive in the 27″ iMac, where a sensor wire is added to measure the internal heat of the hard drive; as the Mac monitors this signal it will rev up the internal fans. Any iMac hobbyist attempting to swap a 3TB or 4TB drive in for the stock Apple 2TB drive will suffer the inevitable panic mode of the iMac, which cannot see its temperature sensor (the replacement drives don’t have the sensor built in) and assumes the worst. They say the noise is deafening when those fans spin up, and they never, EVER slow down. This is Apple’s attempt to ensure sanctity through obscurity: no one is allowed to mod or repair, and that includes anyone foolish enough to attempt to swap the internal hard drive on their iMac.

But there’s a workaround, thank goodness: the 27″ iMac’s internal case is just large enough to install a secondary drive. You can slip a 2.5″ SSD into that chassis; you just gotta know how to open it up. And therein lies the theme of this essay: the user-upgradable, user-friendly computer case design. The antithesis of this idea IS the 27″ iMac, if you read the steps from iFixit and the photographer Brian Tobey. Both of those websites make clear the excruciating minutiae of finding and disconnecting the myriad miniature cables that connect the logic board to the rest of the computer. Without going through those steps one cannot gain access to the spare SATA connectors facing the back of the iMac case. I decided to go through these steps to add an SSD to my iMac right after it was purchased, and I thought Brian Tobey’s directions were slightly better, with more visuals pertinent to the way I was working on the iMac as I opened up the case.

It is, in a word, non-trivial. You need the right tools, the right screwdrivers; in fact you even need suction cups! (thank you, Apple). However, there is another way, even for so-called all-in-one computer designs like the iMac. It’s a new product from Hewlett-Packard targeted at the desktop engineering and design crowd: an all-in-one workstation that is user-upgradable, and it’s all done without any tools at all. Let me repeat that last bit: it is a ‘tool-less’ design. What, you may ask, is a tool-less design? I hadn’t heard of it either until I read this article on iFixit. And after following the links to the NewEgg.com website to see what other items were tagged as ‘tool-less’, I began to remember some hints and stabs at this I had seen in some Dell Optiplex desktops years back. The ‘carrier’ brackets for the CD/DVD and HDD drive bays were green plastic rails that simply ‘pushed’ into the sides of the drive, no screws necessary.

And when I consider that my experience working on the 27″ iMac actually went pretty well (it booted up the first time, no problems) after all I had done to it, I count myself very lucky. But it could have been better, and there’s no reason it cannot be better for EVERYONE. It also made me think of the XO laptop (the One Laptop Per Child project), and I wondered how tool-less that laptop might be. How accessible are any of these designs? And it made me recall the Facebook story I recently commented on, about Facebook designing its own hard drive storage units to make them easier to maintain (no little screws to get lost, dropped onto a fully powered motherboard, and short things out). So I have much more hope than when I first embarked on the do-it-yourself journey of upgrading my iMac. Tool-less design today, tool-less design tomorrow, and tool-less design forever.

Categories
blogroll macintosh support technology

Daring Fireball: Mountain Lion

Wrestling with Mountain Lion

And then the reveal: Mac OS X — sorry, OS X — is going on an iOS-esque one-major-update-per-year development schedule. This year’s update is scheduled for release in the summer, and is ready now for a developer preview release. Its name is Mountain Lion.

via Daring Fireball: Mountain Lion.

Mountain Lion is the next iteration of Mac OS X. And while there are changes since the original Lion was released just this past Summer, they are more like refinements than wholesale changes. I say this in part because of the concentration on aligning OS X apps with their iOS counterparts, down to small things like using the same names:

iCal versus Calendar

iChat versus Messages

Address Book versus Contacts

Reminders and Notes (new apps brought over from iOS)

etc.

Beneath that superficial level, more of the Carbon-based libraries and apps are being factored out and given full Cocoa equivalents where possible. But one of the bigger changes, one that has been slipping since the release of Mac OS X 10.7, is the use of ‘sandboxing’ as a security measure for apps. The sandbox is adopted by developers to adhere to strict rules set forth by Apple. Apps are no longer allowed to do certain things, like writing to an external filesystem (saving out to a USB drive, say) without asking for special privileges. That seems trivial at first, but for the day-to-day user of a given app it might break things altogether. I’m thinking of iMovie as an example, where you can specify that new video clips be saved into an Event folder kept on an external hard drive. Will iMovie need to be rewritten in order to work on Mountain Lion? Will sandboxing hurt other Apple apps as well?

Then there is the matter of ‘Gatekeeper’, another OS mechanism that limits trust based on who the developer is. Apple issues security certificates to registered developers who post their software through the App Store, but independents who sell direct can also register for these certs, thus establishing a chain of trust from the developer, to Apple, to the OS X user. From that point you can choose to trust only App Store apps, App Store apps plus Apple-certified independent developers, or apps from anywhere, including unknown, uncertified ones. Depending on your needs, the security level can be chosen according to which type of software you use. Some people are big on free software, which is the least likely to carry a certificate but may still be more trustworthy than even the most ‘certified’ App Store software (I’m thinking of Emacs as an example). So sandboxes and gatekeepers all conspire to funnel developers through Apple’s chain of trust and thus make it much harder for malware authors to infect OS X computers.
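To make those three trust levels concrete, here is a toy model of the launch decision. This is my own sketch, not Apple’s implementation, and the names are made up for illustration.

```python
# A toy model (not Apple's implementation) of the three Gatekeeper trust levels
# described above: App Store only, App Store plus identified developers, or anywhere.
from enum import Enum

class Signature(Enum):
    APP_STORE = "signed by Apple via the Mac App Store"
    IDENTIFIED_DEVELOPER = "signed with an Apple-issued Developer ID"
    UNSIGNED = "no recognized certificate"

class Policy(Enum):
    APP_STORE_ONLY = 1
    APP_STORE_AND_IDENTIFIED = 2
    ANYWHERE = 3

def gatekeeper_allows(sig: Signature, policy: Policy) -> bool:
    """Return True if an app with this signature may launch under this policy."""
    if policy is Policy.ANYWHERE:
        return True
    if policy is Policy.APP_STORE_AND_IDENTIFIED:
        return sig in (Signature.APP_STORE, Signature.IDENTIFIED_DEVELOPER)
    return sig is Signature.APP_STORE  # strictest setting

# Example: an unsigned build of Emacs under the middle setting.
print(gatekeeper_allows(Signature.UNSIGNED, Policy.APP_STORE_AND_IDENTIFIED))  # False
```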

These changes should be fully ready for consumption when the OS is released in July. But as I mentioned, sandboxing has been rolled back no fewer than two times so far: the first rollback came in November, the most recent here in February. The next target date for sandboxing is June, which should get all the Apple developers on board prior to the release of Mountain Lion the following month. This reminds me a bit of the flexibility Apple had to show in the face of widespread criticism and active resistance to the Final Cut Pro X release last June. Apple had to scramble for a time to address concerns about bugs and stability under Mac OS X 10.7 (the previous release, Snow Leopard, seemed to work better for some who wrote on Apple’s support discussion forums). Apple quickly came up with an alternate route for dissatisfied customers, giving copies of Final Cut Studio (with the older Final Cut Pro 7 app included) to people who called their support lines asking to substitute the older version for a recent purchase of FCP X. Flexibility like this seems to be more frequent going forward, and it’s great to see Apple’s willingness to adapt to an adverse situation of its own creation. We’ll see how this migration goes come July.

Categories
flash memory macintosh SSD

More PCI-express SSD cards coming to OS X | MacFixIt – CNET Reviews

The card will use the Marvell 88SE9455 RAID controller that will interface with the SandForce 2200-based daughter cards that can be added to the main controller on demand. This will allow for user-configurable drive sizes from between 60GB and 2TB in size, allowing you to expand your storage as your need for it increases.

via More PCI-express SSD cards coming to OS X | MacFixIt – CNET Reviews.

Other World Computing (OWC) logo

I’m a big fan of Other World Computing (OWC) and have always marveled at its ability to create new products under its own brand. In the article they talk about a new Mac-compatible PCIe SSD. It sounds like an uncanny doppelganger of the Angelbird board announced about two years ago, which started shipping in Fall 2011; the add-on sockets especially remind me of that upgradable Angelbird design. There are not many PCIe SSD cards with sockets for flash memory modules, and OWC’s would be only the second one I have seen since I started commenting on these devices as they hit the consumer market. Putting sockets on the board makes it easier to enter the market at a lower price point for users to whom price matters most, while at the high end capacity is king for some purchasers of PCIe SSDs. So the oddball upgradeable PCIe SSD fills a niche, that’s for sure.

Performance projections for this card are really good and typical of most competing PCIe SSD cards, so depending on your needs you might find it perfect. Price, however, is always harder to pin down. Angelbird sold a bare PCIe card, with no SSDs attached, for around $249, and it came with 32GB of onboard flash at that price. What was really nice was that the card used SATA sockets set far enough apart to mount full-sized SSDs without crowding each other. That brought to the consumer market the possibility of slowly upgrading to higher-speed or larger-capacity drives over time.

Wings from Angelbird, a Mac-compatible PCIe SSD

But what’s cooler still is that Angelbird’s card could run under ANY OS, even Mac OS, because it was engineered as a free-standing computer with a large flash memory attached to it. That let it pre-boot into an embedded OS before handing control over to the host OS, whatever flavor that might be. I don’t know whether the OWC card works similarly, but it does NOT use SATA sockets or provide room to plug in full-sized SSD drives. The plug-in modules for this device use the mSATA-style sockets found in tablets and netbook-class computers, so the modules will most likely need to be purchased directly from OWC to perform capacity upgrades over the life of the card. Prices have not yet been set, according to the article.

Categories
computers macintosh mobile technology wintel wired culture

The PC is dead. Why no angry nerds? :: The Future of the Internet — And How to Stop It

Famously proprietary Microsoft never dared to extract a tax on every piece of software written by others for Windows—perhaps because, in the absence of consistent Internet access in the 1990s through which to manage purchases and licenses, there’d be no realistic way to make it happen.

via The PC is dead. Why no angry nerds? :: The Future of the Internet — And How to Stop It.

While it’s true that Microsoft didn’t tax software developers who sold products running on Windows, a kind of tax levy did exist for hardware manufacturers building desktop PCs with Intel chips inside. But message received, I get the bigger point: cul-de-sacs don’t make good computers. They do, however, make good appliances. And as the author Jonathan Zittrain points out, we are becoming less aware of the distinction between a computer and an appliance, and have lowered our expectations accordingly.

In fact this points to a bigger trend, and not just computers becoming silos of information and entertainment consumption, no, not by a long shot. The trend was preceded by the wild popularity of MySpace, followed quickly by Facebook and now Twitter: all ‘platforms’, as described by their owners, with some amount of published APIs and hooks to let in 3rd-party developers (like game maker Zynga). But so what if I can play Scrabble or Farmville with my ‘friends’ on a social networking ‘platform’? Am I still getting access to the Internet? Probably not; you are most likely reading whatever filters into or out of the central, all-encompassing data store of the social networking platform.

Like the old world maps in the days before Columbus, there be dragons, and the world ends HERE, even though the platform owners might say otherwise. It is an intranet, pure and simple, a gated community that forces unique identities on all participants. Worse yet, it is a Big Brother-like panopticon where each step and every little movement is monitored and tallied. You take quizzes, you like, you share; all these things are collection points, checkpoints to gather more data about you. And that is the TAX levied on anyone who voluntarily participates in a social networking platform.

So long live the Internet, even though its frontier, wildcatting days are nearly over. There will be books and movies like How Cyberspace Was Won, and the pioneers will all be noted and revered. We’ll remember when we could go anywhere we wanted and do things we never dreamed of. But those days are slipping away as new laws get passed under very suspicious pretenses, all in the name of Commerce. As for me, I much prefer Freedom over Commerce, and you can log that in your stupid little database.
