Categories
technology wintel

Resentment, Jealousy, Feuds: A Look at Intel’s Founding Team – Michael S. Malone – Harvard Business Review

Michael S. Malone is a U.S. author, a former editor of Forbes magazine and host of a talk show on PBS. (Photo credit: Wikipedia)

Just when you think you understand the trio (as I thought I did up until my final interview with Grove) you learn something new that turns everything upside-down. The Intel Trinity must be considered one of the most successful teams in business history, yet it seems to violate all the laws of successful teams.

via Resentment, Jealousy, Feuds: A Look at Intel’s Founding Team – Michael S. Malone – Harvard Business Review.

Agreed, this is a topic near and dear to my heart, as I've read a number of these stories published over the years by the tech press: Tracy Kidder's The Soul of a New Machine, Fred Brooks's The Mythical Man-Month, Steven Levy's Insanely Great, the story of Xerox PARC as told in Dealers of Lightning, and the ARPANET project as told in Where Wizards Stay Up Late. Somewhat along those lines are Stewart Brand's The Media Lab and Howard Rheingold's Virtual Reality. All of these are, at some level, studies of organizational theory in the high-technology field.

And one thing you find in common is a single charismatic individual who joins up at some point (early or late, it doesn't matter) and brings in a flood of followers and talent, the kick in the pants that really gets momentum going. The problem with a startup like Intel, or its predecessor Fairchild Semiconductor, is that there's more than one charismatic individual. And keeping that organization stitched together, even loosely, is probably the biggest challenge of all. So I'll be curious to read this book by Michael Malone and see how it compares to the other books in my anthology of organizational theory in high tech. It should be a good, worthwhile read.

 

Categories
flash memory macintosh SSD wintel

AnandTech | Samsung SSD XP941 Review: The PCIe Era Is Here

Mini PCI-Express Connector on Inspiron 11z Motherboard, Front (Photo credit: DandyDanny)

I don’t think there is any other way to say this other than to state that the XP941 is without a doubt the fastest consumer SSD in the market. It set records in almost all of our benchmarks and beat SATA 6Gbps drives by a substantial margin. It’s not only faster than the SATA 6Gbps drives but it surpasses all other PCIe drives we have tested in the past, including OCZ’s Z-Drive R4 with eight controllers in RAID 0. Given that we are dealing with a single PCIe 2.0 x4 controller, that is just awesome.

via AnandTech | Samsung SSD XP941 Review: The PCIe Era Is Here.

Listen well as you pine away for your very own SATA SSD. One day you will get that new thing. But what you really, really want is the new, NEW thing, and that, my friends, is quite simply the PCIe SSD. True, enterprise purchasers have had a host of manufacturers and models to choose from in this form factor. But the desktop market cannot afford Fusion-io products at ~$15K per card fully configured; that's a whole different market. OCZ's RevoDrive line has offered a wider range, from near Fusion-io heights down to the top-end gamer market with the RevoDrive R-series PCIe drives. But those have always been SATA drives piggy-backed onto a multi-lane PCIe card (x4 or x8, depending on how many controllers were installed on the card). Now the evolutionary step of dumping SATA in favor of a native PCIe-to-NAND controller is slowly taking place. Apple has adopted it for the top-end Mac Pro revision (though price and limited availability have made it hard to publicize this architectural choice), and in the laptops Apple has shipped since Summer 2013 (I have the MacBook Air to prove it). Speedy, yes it is. But how do I get this on my home computer?

AnandTech was able to score the Samsung drive aftermarket through a third party in Australia, along with a PCIe adapter card, so where there is a will, there is a way. From that purchase of both the drive and the adapter, this review of the Samsung PCIe drive has come about. And all one can say looking through the benchmarks is that we haven't seen anything yet. Drive speed has been the bottleneck in desktop and mobile computing since the dawn of the personal computer, and that bottleneck is slowly lifting, not by a little but by a lot. This is going to herald a new age in personal computing, as close as we've come to former Intel chairman Andy Grove's "10X" effect. Samsung's native PCIe SSD is that kind of disruptive, perspective-altering product, one that will put all manufacturers on notice and force a sea change in design and manufacturing.

As end users of the technology, we've already felt the big impact SATA SSDs have had on our laptops and desktops. But what I've been writing about, and trying to find signs of ever since SSDs were first introduced, is the logical path through the legacy interfaces. Whether it's ATA/BIOS or the bridge chips that glue the motherboard to the CPU, a number of "old" architecture items still hang around on today's computers. Intel's adoption of UEFI has been a big step toward shedding the legacy bottleneck components. Native on-CPU PCIe controllers are a good step forward as well, and the sockets and bridge chips on the motherboard are the neighborhood improvements that again help speed things up. The last mile, however, is dumping the "disk" interface, the ATA/SATA spec designed around reading data off a spinning magnetic hard drive. Once we improve that last mile to the NAND memory chips, we're going to see the full benefit of products like this Samsung drive. And that day is nearly upon us with the most recent motherboard/chipset revision from Intel. We may need another revision to get exactly what we want, but the roadmap is there, and all the manufacturers had better get on it. Samsung is driving this revolution NOW.
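
If you want to see where your own machine sits while you wait for the PCIe wave, a crude sequential-read timer is enough to show which side of the SATA ceiling you're on. Here is a minimal Python sketch; the file path is a placeholder, and the OS page cache will flatter repeat runs, so point it at a large, freshly written file:

    import time

    CHUNK = 4 * 1024 * 1024                # read in 4MB chunks
    path = "/path/to/large_test_file.bin"  # placeholder: any multi-GB file

    start = time.time()
    total = 0
    with open(path, "rb") as f:
        while True:
            buf = f.read(CHUNK)
            if not buf:
                break
            total += len(buf)
    elapsed = time.time() - start

    # A SATA 6Gbps SSD tops out near ~500MB/s sequential; a PCIe 2.0 x4
    # drive like the XP941 has roughly triple that ceiling.
    print("read %.1f MB in %.2f s -> %.1f MB/s"
          % (total / 1e6, elapsed, total / 1e6 / elapsed))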

Categories
cloud computers google wintel

Microsoft Office applications barely used by many employees, new study shows – Techworld.com

The Microsoft Office Core Applications (Photo credit: Wikipedia)

After stripping out unnecessary Office licenses, organisations were left with a hybrid environment, part cloud, part desktop Office.

via Microsoft Office applications barely used by many employees, new study shows – Techworld.com.

The Center IT outfit I work for is dumping as much on-premise Exchange mailbox hosting as it can. However, we are sticking with Outlook365 as provisioned by Microsoft (essentially an Outlook'd version of Hotmail). It has the calendar and global address list we have all come to rely on. But as this article details at length for the rest of the Office suite, people aren't creating as many documents as they once did. We're still viewing them; we just aren't creating them.

I wonder how much of this is due to re-use, or to authorship shifting up to the top-level people themselves. Your average admin assistant or secretary doesn't draft anything dictated to them anymore, and the top-level types would now generally be embarrassed to dictate anything to anyone. The culture of secrecy also necessitates more one-to-one communications. And long-form writing? Who does that anymore? No one writes letters; they write brief emails, or even briefer texts, tweets, and Facebook updates. Everything is abbreviated to such a degree that you don't need the thesaurus, pagination, or any of the super-specialized doo-dads and add-ons we all begged M$ and Novell to add to their premier word processors back in the day.

From an evolutionary standpoint, we could get by with the original text editors first made available on timesharing systems; I'm thinking of utilities like line editors (that's really a step backwards, so I'm being facetious here). The point is that our writing tool of choice went through a very advanced stage of evolution and became a monopoly. WordPerfect lost out and fell by the wayside. Primary, middle, and secondary schools across the U.S. adopted M$ Word and made it a requirement, and every college freshman has been given discounts to further the loyalty to the Office suite. Now we don't write like we used to, much less read. What's the use of writing something pages long that no one will ever read? We've jumped the shark of long-form writing, and so the premier app, the killer app for the desktop computer, is slowly receding behind us as we keep speeding ahead. Eventually we'll see it on the horizon, its sails the last visible part, then the crow's nest, then poof! It will disappear below the horizon line, and we'll be left with our nostalgic memories of the first time we used MS Word.

Categories
computers mobile technology wintel

DDR4 Heir-Apparent Makes Progress | EE Times

The first DDR4 memory module was manufactured by Samsung and announced in January 2011. (Photo credit: Wikipedia)

The current paradigm has become increasingly complex, said Black, and HMC is a significant shift. It uses a vertical conduit called through-silicon via (TSV) that electrically connects a stack of individual chips to combine high-performance logic with DRAM die. Essentially, the memory modules are structured like a cube instead of being placed flat on a motherboard. This allows the technology to deliver 15 times the performance of DDR3 at only 30% of the power consumption.

via DDR4 Heir-Apparent Makes Progress | EE Times.

Even though DDR4 memory modules have only been available in quantity for a short time, people are resistant to change. And the need for speed, whether it's SSDs stymied by SATA throughput or systems married to their DRAM modules, is still pretty constant. Yet many manufacturers and analysts wonder aloud, "Isn't this speed good enough?" That is true to an extent: the current OSes and the chipset and motherboard manufacturers are perfectly happy cranking out product supporting the current state of the art. But no one wants to be the first to keep pushing the ball of compute speed down the field. At least this industry group is attempting to get a plan in place for the next generation of memory modules. With any luck the spec will continue to evolve, and sample products will be sent 'round for everyone to review.

Given the changes and advances in storage and CPUs (PCIe SSDs, 15-core Xeons), eventually a wall will be hit in compute per watt or raw I/O. Desktops will eventually benefit from any speed increases, but it will take time, and we won't see 10% gains with each generation of hardware. Prices will need to come down before any of the mainstream consumer goods manufacturers adopt these technologies. But as previous articles have noted, the "time to idle" measurement that laptops and mobile devices strive to optimize might be reason enough for tablet and laptop manufacturers to push the state of the art and adopt these technologies faster than the desktop.
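
It is worth doing the arithmetic on the HMC claim quoted above, because performance per watt is exactly what the "time to idle" crowd cares about. A quick sanity check in Python, using only the figures from the quote:

    perf_ratio = 15.0    # HMC performance relative to DDR3 (quoted figure)
    power_ratio = 0.30   # HMC power relative to DDR3 (quoted figure)

    # 15x the work at 30% of the power is roughly a 50x gain in
    # performance per watt.
    print("performance-per-watt gain: %.0fx" % (perf_ratio / power_ratio))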

Categories
computers flash memory SSD wintel

AnandTech | Testing SATA Express And Why We Need Faster SSDs

PCIe and PCI slots compared (Photo credit: Wikipedia)

Even a PCIe 2.0 x2 link offers about a 40% increase in maximum throughput over SATA 6Gbps. Like most interfaces, PCIe 2.0 isn’t 100% efficient and based on our internal tests the bandwidth efficiency is around 78-79%, so in the real world you should expect to get ~780MB/s out of a PCIe 2.0 x2 link, but remember that SATA 6Gbps isn’t 100% either (around 515MB/s is the typical maximum we see). The currently available PCIe SSD controller designs are all 2.0 based but we should start to see some PCIe 3.0 drives next year. We don’t have efficiency numbers for 3.0 yet but I would expect to see nearly twice the bandwidth of 2.0, making +1GB/s a norm.

via AnandTech | Testing SATA Express And Why We Need Faster SSDs.
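
The quote's numbers are easy to reproduce. Here is the same arithmetic as a small Python sketch, using the spec's raw per-lane rate and AnandTech's measured ~78% efficiency figure (their measurement, not mine):

    def effective_mb_s(lanes, raw_mb_s_per_lane, efficiency):
        """Usable bandwidth of a PCIe link after protocol overhead."""
        return lanes * raw_mb_s_per_lane * efficiency

    pcie2_x2 = effective_mb_s(2, 500, 0.78)  # PCIe 2.0: ~500MB/s raw per lane
    pcie2_x4 = effective_mb_s(4, 500, 0.78)  # a Samsung XP941-style link
    sata_6g = 515                            # typical real-world SATA 6Gbps max

    print("PCIe 2.0 x2: ~%d MB/s" % pcie2_x2)  # ~780, matching the quote
    print("PCIe 2.0 x4: ~%d MB/s" % pcie2_x4)  # ~1560, triple the SATA ceiling
    # Comparing real-world figures to real-world figures:
    print("x2 over SATA 6Gbps: +%.0f%%" % (100 * (pcie2_x2 / sata_6g - 1)))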

As I’ve watched the SSD market slowly grow and bloom it does seem as though the rate at which big changes occur has slowed. The SATA controllers on the drives themselves were kicked up a notch as the transition from SATA-1 to SATA-2 gave us consistent 500MB/sec read/write speeds. And that has stayed stable forever due to the inherent limit of SATA-2. I had been watching very closely developments in PCIe based SSDs but the prices were  always artificially high due to the market for these devices being data centers. Proof positive of this is Fusion-io catered mostly to two big purchasers of their product, Facebook and Apple. Subsequently their prices always put them in the enterprise level $15K for one PCIe slot device (at any size/density of storage).

Apple has come to the rescue, in every sense of the word, by adopting PCIe SSDs as the standard storage in its portable computers. Starting in Summer 2013, Apple began shipping laptops with PCIe SSDs, and then designed them into the rest of the line as well. The last step was to fully adopt them in the desktop Mac Pro (which has been slow to hit the market). The performance of the PCIe SSD in the Mac Pro, compared with any other shipping computer, is the highest for a consumer-level product. As the Mac gains share of all computers shipped, Mac buyers are gaining more speed from their SSDs as well.

So what is in the works for the rest of the industry? SATA Express seems to be the way forward for the 90% of the market still buying Windows PCs, and it's a new standard being put forth by the SATA-IO standards body. With any luck the enthusiast motherboard manufacturers will adopt it as fast as it clears the committees, and we'll see AnandTech or Tom's Hardware do a real benchmark analysis of how well it stacks up against the previous generation of hardware.

Categories
gpu technology wintel

The Memory Revolution | Sven Andersson | EE Times

A 256Kx4 Dynamic RAM chip on an early PC memory card. (Photo by Ian Wilson) (Photo credit: Wikipedia)

In almost every kind of electronic equipment we buy today, there is memory in the form of SRAM and/or flash memory. Following Moore's law, memories have doubled in size every second year. When Intel introduced the 1103 1Kbit dynamic RAM in 1971, it cost $20. Today, we can buy a 4Gbit SDRAM for the same price.

via The Memory Revolution | Sven Andersson | EE Times

Read now: a look back from an Ericsson engineer surveying the use of solid-state, chip-based memory in electronic devices. It is always interesting to know how these things started and evolved over time. Advances in RAM design and manufacturing are the quintessential example of Moore's Law, even more so than the advances in processors over the same period. Yes, CPUs are cool and very much a foundation upon which everything else rests. But remember, Intel didn't start out making microprocessors; it started out as a dynamic RAM company just as DRAM was entering the market. That was the foundation from which Gordon Moore came to know the rate of change possible in silicon-based semiconductor manufacturing.
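
Andersson's numbers line up almost exactly with the fabled two-year doubling. A quick check in Python, taking the quoted $20-per-chip prices at face value and assuming 2014 (when these posts date from) as "today":

    import math

    bits_1971 = 1 * 1024      # Intel 1103: 1Kbit for $20 in 1971
    bits_today = 4 * 1024**3  # 4Gbit SDRAM for the same $20 (quoted)
    years = 2014 - 1971

    doublings = math.log2(bits_today / bits_1971)
    print("capacity doublings at a fixed price: %.0f" % doublings)  # 22
    print("years per doubling: %.1f" % (years / doublings))         # ~2.0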

Now we’re looking at mobile smartphone processors and System on Chip (SoC) advancing the state of the art. Desktop and server CPUs are making incremental gains but the smartphone is really trailblazing in showing what’s possible. We went from combining the CPU with the memory (so-called 3D memory) and now graphics accelerators (GPU) are in the mix. Multiple cores and soon fully 64bit clean cpu designs are entering the market (in the form of the latest model iPhones). It’s not just a memory revolution, but it is definitely a driver in the market when we migrated from magnetic core memory (state of the art in 1951-52 while developed at MIT) to the Dynamic RAM chip (state of the art in 1968-69). That drive to develop the DRAM brought all other silicon based processes along with it and all the boats were raised. So here’s to the DRAM chip that helped spur the revolution. Without those shoulders, the giants of today wouldn’t be able to stand.

Categories
computers gpu h.264 macintosh technology wintel

AnandTech – Testing OpenCL Accelerated Handbrake with AMD’s Trinity


AMD, and NVIDIA before it, has been trying to convince us of the usefulness of its GPUs for general purpose applications for years now. For a while it seemed as if video transcoding would be the killer application for GPUs, that was until Intel’s Quick Sync showed up last year.

via AnandTech – What We’ve Been Waiting For: Testing OpenCL Accelerated Handbrake with AMD’s Trinity.

There’s a lot to talk about when it comes to accelerated video transcoding, really. Not the least of which is HandBrake’s dominance generally for anyone doing small scale size reductions of their DVD collections for transport on mobile devices. We owe it all to the open source x264 codec and all the programmers who have contributed to it over the years, standing on one another’s shoulders allowing us to effortlessly encode or transcode gigabytes of video to manageable sizes. But Intel has attempted to rock the boat by inserting itself into the fray by tooling its QuickSync technology for accelerating the compression and decompression of video frames. However it is a proprietary path pursued by a few small scale software vendors. And it prompts the question, when is open source going to benefit from the proprietary Intel QuickSync technology? Maybe its going to take a long time. Maybe it won’t happen at all. Lucky for the HandBrake users in the audience some attempt is being made now to re-engineer the x264 codec to take advantage of any OpenCL compliant hardware on a given computer.

Categories
computers gpu h.264 technology wintel

AnandTech – The Intel Ivy Bridge Core i7 3770K Review

Similarly disappointing for everyone who isn't Intel, it's been more than a year after Sandy Bridge's launch and none of the GPU vendors have been able to put forth a better solution than Quick Sync. If you're constantly transcoding movies to get them onto your smartphone or tablet, you need Ivy Bridge. In less than 7 minutes, and with no impact to CPU usage, I was able to transcode a complete 130 minute 1080p video to an iPad friendly format—that's over 15x real time.

via AnandTech – The Intel Ivy Bridge Core i7 3770K Review.

QuickSync, for anyone who doesn't follow Intel's technology white papers and CPU releases, is a special feature of the embedded graphics in Intel's recent CPUs; the hardware dates back to the Clarkdale series with embedded graphics (the first round of the 32nm design rule). It can speed up decoding of video streams saved in a number of popular formats (VC-1, H.264, MP4, and so on), and now it is marketed to anyone trying to speed up transcoding video from one format to another. The first Sandy Bridge CPUs using QuickSync's hardware encoding showed incredible speeds compared with the GPU-accelerated encoders of that era. And things have been kicked up a further notch in the embedded graphics of the Ivy Bridge series.

In the quote at the beginning of this article, I included a summary from the AnandTech review of the Intel Core i7 3770K, which gives a sense of the magnitude of the improvement. A full 130-minute 1080p movie was converted at better than 15 times real time, meaning every minute of video coming off the disk is transcoded in about 4 seconds! That is major progress for anyone who has followed this niche of desktop computing. Having spent time capturing, editing, and exporting video, I will admit transcoding between formats is a lengthy process that eats CPU. Offloading that burden to the embedded graphics completely changes the traditional experience of the machine slowing to a crawl while you walk away and let it work.
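
The arithmetic behind that claim is worth spelling out, using only the quote's own figures:

    source_min = 130.0   # length of the 1080p movie (quoted)
    transcode_min = 7.0  # "in less than 7 minutes", so this is a ceiling

    multiple = source_min / transcode_min
    print("realtime multiple: over %.0fx" % multiple)  # ~19x, i.e. "over 15x"
    # At a flat 15x, each minute of source takes 4.0 seconds; at ~19x
    # it is closer to 3.2 seconds.
    print("seconds per source minute: %.1f" % (60 / multiple))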

Now transcoding is trivial; it costs almost nothing in CPU load. Any time it runs faster than real time means you don't have to walk away from your computer (or at least not for long); 10x real time makes that doubly true, and now we are fully at 15x for a full-length movie. The time spent is so short you would never stop to ask, "Will this transcode slow down the computer?" It won't. You can keep doing your other work, be productive, have fun, and carry on just as if you hadn't asked your computer to do what was, until now, the most complicated, time-consuming chore you could possibly give it.

Knowing this application of the embedded graphics is so useful on the desktop makes me wonder about scientific computing. What could Intel provide in the way of performance for simulation and computation in a supercomputer cluster? Watching hybrid supercomputers that mix NVIDIA Tesla GPU co-processors with Intel CPUs slowly march up the Top 500 list makes me think Intel could leverage QuickSync further, much further. Unfortunately this performance boost depends solely on a few vendors of proprietary transcoding software. Open-source developers have no opening into QuickSync that would let them write a library to redirect a video stream into its acceleration pipeline. When somebody does accomplish that feat, it may not be long before you see Linux compute clusters try QuickSync as an embedded algorithm accelerator too.

Timeline of Intel processor codenames including released, future and canceled processors. (Photo credit: Wikipedia)
Categories
cloud computers data center gpu technology wintel

AMD Snatches New-Age Server Maker From Under Intel | Wired Enterprise | Wired.com


Chip designer and chief Intel rival AMD has signed an agreement to acquire SeaMicro, a Silicon Valley startup that seeks to save power and space by building servers from hundreds of low-power processors.

via AMD Snatches New-Age Server Maker From Under Intel | Wired Enterprise | Wired.com.

It was bound to happen eventually, I guess: SeaMicro has been acquired by AMD. We'll see what happens as a result, since SeaMicro has been a customer of Intel's Atom chips and, most recently, Xeon server chips as well. I have no idea where this is going or what AMD intends to do, but hopefully it won't scare off any current or near-future customers.

SeaMicro’s competitive advantage has been and will continue to be the development work they performed on that custom ASIC chip they use in all their systems. That bit of intellectual property was in essence the reason AMD decided to acquire SeaMicro and hopefully let it gain an engineering advantage for systems it might put out on the market in the future for large scale Data Centers.

While this is all pretty cool technology, I think that SeaMicro’s best move was to design its ASIC so that it could take virtually any common CPU. In fact, SeaMicro’s last big announcement introduced its SM10000-EX option, which uses low-power, quad-core Xeon processors to more than double compute performance while still keeping the high density, low-power characteristics of its siblings.

via SeaMicro acquisition: A game-changer for AMD • The Register.

So there you have it: Wired and The Register are reporting the whole transaction pretty positively. On the surface it looks like a win for AMD, which can now design new server products and get them to market quickly using the SeaMicro ASIC as a key ingredient. SeaMicro can keep servicing its current customers, and AMD can upsell or upgrade them as needed to keep the ball rolling. And with AMD's Fusion architecture marrying GPUs with CPU cores, who knows what cool new servers might be possible? But as usual the naysayers, the spreaders of fear, uncertainty, and doubt, have questioned the value of SeaMicro and its original product, the SM10000.

Diane Bryant, general manager of Intel's data center and connected systems group, had this to say at a press conference for the launch of new Xeon processors, when asked about SeaMicro's attempt to interest Intel in buying the company: "We looked at the fabric and we told them thereafter that we weren't even interested in the fabric." To Intel, nothing in SeaMicro is special enough to warrant buying the company. Bryant further told Wired.com:

“…Intel has its own fabric plans. It just isn’t ready to talk about them yet. “We believe we have a compelling solution; we believe we have a great road map,” she said. “We just didn’t feel that the solution that SeaMicro was offering was superior.”

This is a move straight out of Microsoft's marketing department circa 1992: pre-announcing a product that never shipped and was barely developed beyond a prototype. If Intel were really working on this as a new product offering, you would have seen an announcement by now rather than a vague, tangential reference that reads more like a parting shot than a strategic direction. So I will be watching intently in the coming months, and years if needed, to see what Intel "fabric technology", if any, makes its way from the research lab to the development lab to a shipping product. But don't be surprised if this is Intel attempting to undermine AMD's purchase of SeaMicro. Likewise, Forbes.com later reported, via a SeaMicro representative, that the company had never tried to encourage Intel to acquire it. It is anyone's guess who is correct and being 100% honest in their recollections. However, I am still betting on SeaMicro's long-term strategy of low-power, ultra-dense, massively parallel servers. It is an idea whose time has come.

Categories
computers diy macintosh wintel wired culture

Hope for a Tool-Less Tomorrow | iFixit.org

I’ve seen the future, and not only does it work, it works without tools. It’s moddable, repairable, and upgradeable. Its pieces slide in and out of place with hand force. Its lid lifts open and eases shut. It’s as sleek as an Apple product, without buried components or proprietary screws.

via Hope for a Tool-Less Tomorrow | iFixit.org.

HP Z1 workstation

Oh how I wish this were true today for Apple. I say this as a recent purchaser of an Apple-refurbished 27″ iMac. My reasoning for going refurbished over new was based on a few bits of knowledge gleaned from Macintosh weblogs. The rumors include the idea that Apple-repaired items are strenuously tested before being resold, and that in some cases returned items are not even broken; they are returns based on buyer's remorse or cosmetic problems. So there's a good chance the logic board and LCD have no problems. Reading back over the Summer, just after the launch of Mac OS X 10.7 (Lion), I saw lots of reports of crashes on 27″ iMacs, so I figured a safer bet would be the 21″ iMac. But then I started thinking about flash-based solid-state disks, and given the prohibitively high prices Apple charges for its factory-installed SSDs, I decided I needed something I could upgrade myself.

But as you may know, iMacs have never been, and remain, not user-upgradable. That's not to say people haven't tried, and succeeded, in upgrading their own iMacs over the years; enter the aftermarket for SSD upgrades. Apple has zigged and zagged as hobbyists swap in newer components like larger hard drives and SSDs. Witness the temperature sensor on the boot drive of the 27″ iMac, where Apple added a sensor wire to measure the drive's internal heat; the Mac monitors this signal and revs up the internal fans accordingly. Any hobbyist swapping a 3TB or 4TB drive for the stock Apple 2TB drive suffers the iMac's inevitable panic mode: the machine cannot see its temperature sensor (replacement drives don't carry one), assumes the worst, and spins the fans up. They say the noise is deafening, and the fans never, EVER slow down. This is Apple's attempt to ensure sanctity through obscurity: no one is allowed to mod or repair, least of all anyone foolish enough to swap the iMac's internal hard drive.
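
To make that failure mode concrete, here is a toy sketch of the fallback logic as hobbyists describe it. The thresholds and RPM figures are invented for illustration; this is not Apple's actual SMC firmware:

    FAN_MIN_RPM = 1200  # illustrative values, not Apple's
    FAN_MAX_RPM = 5500

    def fan_speed(drive_temp_c):
        """Map a drive temperature reading to a fan RPM; None = no sensor."""
        if drive_temp_c is None:
            # Aftermarket drive, no sensor wire: assume the worst and
            # pin the fans at maximum, forever.
            return FAN_MAX_RPM
        # Simple proportional ramp between 40C and 60C.
        t = min(max((drive_temp_c - 40) / 20.0, 0.0), 1.0)
        return int(FAN_MIN_RPM + t * (FAN_MAX_RPM - FAN_MIN_RPM))

    print(fan_speed(45))    # stock drive: a modest ramp
    print(fan_speed(None))  # swapped drive: 5500 RPM, and deafening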

But, there’s a workaround thank goodness and that is the 27″ iMac whose internal case is just large enough to install a secondary hard drive. You can slip a 2.5″ SSD into that chassis. You just gotta know how to open it up. And therein lies the theme of this essay, the user upgradable, user friendly computer case design. The antithesis of this idea IS the iMac 27″ if you read these steps from iFixit and the photographer Brian Tobey. Both of these websites make clear the excruciating minutiae of finding and disconnecting the myriad miniature cables that connect the logic board to the computer. Without going through those steps one cannot gain access to the spare SATA connectors facing towards the back of the iMac case. I decided to go through these steps to add an SSD to my iMac right after it was purchased. I thought Brian Tobey’s directions were just slightly better and had more visuals pertinent to the way I was working on the iMac as I opened up the case.

It is, in a word, a non-trivial task. You need the right tools, the right screwdrivers; in fact, you even need suction cups (thank you, Apple). However, there is another way, even for so-called all-in-one designs like the iMac. It's a new product from Hewlett-Packard targeted at the desktop engineering and design crowd: an all-in-one workstation that is user-upgradable, all without any tools. Let me repeat that last bit: it is a "tool-less" design. What, you may ask, is a tool-less design? I hadn't heard of it either until I read this iFixit article. After following the links to NewEgg.com to see what other items were tagged "tool-less", I remembered some hints and stabs at this I had seen in Dell Optiplex desktops some years back: the carrier brackets for the CD/DVD and HDD drive bays were green plastic rails that simply pushed into the sides of the drive, no screws necessary.

And considering all I had done to it, my work on the 27″ iMac actually went pretty well (it booted up the first time, no problems), so I count myself very lucky. But it could have been better, and there's no reason it cannot be better for EVERYONE. It also made me think of the XO laptop from the One Laptop Per Child project, and wonder how tool-less that design might be. How accessible are any of these designs? And it recalled the Facebook story I recently commented on, about Facebook designing its own hard-drive storage units to make them easier to maintain (no little screws to get lost, dropped onto a powered motherboard, and short things out). So I have much more hope than when I first embarked on the do-it-yourself journey of upgrading my iMac. Tool-less design today, tool-less design tomorrow, and tool-less design forever.
