Category: technology

General technology, not anything in particular

  • Nvidia: No magic compilers for HPC coprocessors • The Register


    And with clock speeds topped out and electricity use and cooling being the big limiting issue, Scott says that an exaflops machine running at a very modest 1GHz will require one billion-way parallelism, and parallelism in all subsystems to keep those threads humming.

    via Nvidia: No magic compilers for HPC coprocessors • The Register.

    Interesting write-up of a blog entry from Nvidia's chief of super-computing, including his thoughts on scaling up to an exascale supercomputer. I'm surprised at how power-efficient a GPU is for floating point operations, and amazed at these companies' ability to measure power consumption down to the single-operation level. Microjoules and picojoules are worlds apart from one another, and here's the illustration:

    1 microjoule is one millionth of a joule, or 1×10⁻⁶ (six decimal places), whereas 1 picojoule is 1×10⁻¹², twice as many decimal places, for a total of 12. So that is a HUGE difference: 6 orders of magnitude from an electrical consumption standpoint. The Nvidia author, Steve Scott, estimates that to get to exascale supercomputers, any hybrid CPU/GPU machine would need GPUs with one order of magnitude higher efficiency in joules per floating point operation (FLOP), or 1×10⁻¹³ joules per FLOP, one whole decimal place better. To borrow a cliche, supercomputer manufacturers have their work cut out for them. The way forward is efficiency, the GPU has the edge per operation, and all they need to do is improve efficiency by that one decimal place to get closer to the exascale league of super-computing.
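
    Here's a minimal Python sketch of that back-of-the-envelope arithmetic, using the figures above; the exaflop target is 10^18 FLOP/s by definition, the 1 GHz clock comes from the quote, and the power numbers count only the arithmetic itself, ignoring memory movement and everything else that eats power in a real machine.

    ```python
    # Back-of-the-envelope exascale arithmetic, following the figures quoted above.

    EXAFLOPS = 1e18          # floating point operations per second in one exaflop/s
    CLOCK_HZ = 1e9           # the "very modest 1GHz" clock from the quote

    # How many operations must complete every clock cycle to hit an exaflop/s?
    parallelism = EXAFLOPS / CLOCK_HZ
    print(f"required parallelism: {parallelism:.0e} concurrent operations")  # ~1e9, billion-way

    # Energy per floating point operation, per the post: today's GPUs are in the
    # picojoule range (1e-12 J); the exascale target discussed is 1e-13 J/FLOP.
    for label, joules_per_flop in [("~1 pJ/FLOP (today)", 1e-12),
                                   ("0.1 pJ/FLOP (target)", 1e-13)]:
        watts = EXAFLOPS * joules_per_flop   # J/s consumed by the arithmetic alone
        print(f"{label}: {watts / 1e6:.1f} MW just for the FLOPs")
    ```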

    Why is exascale important to the scientific community at large? In some fields there are never enough cycles per second to satisfy the scale of the computations being done. Models of systems can be created, but the simulations they provide may not have enough fine-grained detail. A weather model simulating a period of time in the future needs to know the current conditions before it can start the calculation, but the 'resolution', the fine-grained detail of those conditions, is what limits the accuracy over time, especially when small errors get amplified by each successive cycle of calculation. One way to limit the damage from these small errors is to increase the resolution, shrinking the patch of land to which you assign a single 'current condition'. So instead of 10 miles of resolution (meaning each block on the face of the planet is 10 miles square), you switch to 1-mile resolution. Any error in a one-mile-square patch is less likely to cause huge errors in the future weather prediction. But now you have to calculate roughly 100 times the number of squares compared to the previous best model at 10 miles of resolution (10 times more in each horizontal direction, as the sketch below shows). That's probably the easiest way to see how demands on the computer increase as people increase the resolution of their weather prediction models. But it's not limited to weather. The same power could be used to simulate a nuclear weapon aging over time, or to decrypt foreign messages intercepted by NSA satellites; the speed of the computer would allow more brute-force attempts at decrypting any message they capture.
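
    Here's that scaling argument as a tiny Python sketch; the 10-mile and 1-mile cell sizes come from the paragraph above, while the 1000-mile region is just an illustrative assumption.

    ```python
    # How the number of grid cells (and roughly the compute cost) grows as a
    # weather model's horizontal resolution is refined. Region size is arbitrary.

    REGION_MILES = 1000  # hypothetical square region, 1000 miles on a side

    def cell_count(cell_size_miles: float, region_miles: float = REGION_MILES) -> int:
        """Number of square grid cells needed to cover the region."""
        per_side = region_miles / cell_size_miles
        return int(per_side ** 2)

    coarse = cell_count(10.0)   # 10-mile cells, as in the example above
    fine = cell_count(1.0)      # 1-mile cells

    print(f"10-mile grid: {coarse:,} cells")        # 10,000 cells
    print(f" 1-mile grid: {fine:,} cells")          # 1,000,000 cells
    print(f"ratio: {fine // coarse}x more cells")   # 100x, not 10x, in two dimensions

    # In practice the time step usually has to shrink along with the cell size,
    # so total work grows even faster than the cell count alone.
    ```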

    Nvidia Riva TNT2 M64 GPU (Photo credit: Wikipedia)

    In spite of all the gains to be had with an exascale computer, you still have to program the bloody thing to work with your simulation. And that's really the gist of this article: there is no free lunch in High Performance Computing. The level of knowledge of the hardware required to get anything like the maximum theoretical speed is a lot higher than one would think. There's no magic bullet or 're-compile' button that's going to get your old software running smoothly on an exascale computer. More likely you and a team of the smartest scientists are going to work for years to tailor your simulation to the hardware you want to run it on. And therein lies the rub: the hardware alone isn't going to get you the extra performance.

  • What went wrong with the Hubble Space Telescope and what managers can learn from it – leadership, collaboration – IT Services – Techworld

    Astronauts Steven L. Smith and John M. Grunsfeld appear as small figures in this wide scene photographed during extravehicular activity (EVA). On this space walk they are replacing gyroscopes, contained in rate sensor units (RSU), inside the Hubble Space Telescope. A wide expanse of waters, partially covered by clouds, provides the backdrop for the photograph. (Photo credit: Wikipedia)

    “There's a bunch of research I've come across in this work, where people say that the social context is a 78-80 per cent determinant of performance; individual abilities are 10 per cent. So why do we make this mistake? Because we spend all of these years in higher education being trained that it's about individual abilities.”

    via What went wrong with the Hubble Space Telescope and what managers can learn from it – leadership, collaboration – IT Services – Techworld.

    Former NASA director of astrophysics Charlie Pellerin is now a consultant on how to prevent failures on teams charged with carrying out large-scale, seemingly impossible projects. He faced two big ones while at NASA: the Challenger explosion and, later and more directly, the Hubble Space Telescope mirror failure. In that time he tried to really look at the source of the failures rather than just let the investigative committees do all the work. And what he's decided is that culture is a bigger part of the chain of failure than technical ability.

    Which leads me to ask: how often does this happen in other circumstances as well? I've seen the PBS NOVA program on the 747 runway collision in Tenerife back in 1977. At that time the KLM airliner more or less started taking off before the Pan American 747 had taxied off of the runway. In spite of all the protocols and controls in place to manage planes on the ground, the captain of the KLM 747 made the decision to take off not once, but TWICE! The first time, his co-pilot corrected him, saying they didn't have clearance from the tower. The second time, the co-pilot and flight engineer both sat back and let the whole thing unfold, to their own detriment. No one survived in that KLM 747 after it crashed into the Pan American 747 and bounced down the runway. In the article I link to above there's an anecdote Charlie Pellerin relates about a number of Korean Air crashes that occurred in the 1990s. Similarly, it was the cockpit 'culture' that led to bad decisions being made, resulting in the loss of the airplane and the passengers on board.

    Some people like to call it 'top-down' management, where everyone defers to the person recognized as being in charge. Worse yet, the person in charge doesn't always realize this. They go on about their decision-making process never once thinking that people are restraining themselves, holding back questions. The danger is that once this pattern is in place, any mistake by the person in charge gets amplified over time. In Charlie Pellerin's judgement, modern airliners are designed to be run by a team who SHARE the responsibility of conducting the airplane. And while the planes themselves have many safety systems in place to make things run smoothly, the designers always assume a TEAM. But when you have a hierarchy of people in charge and people who defer to them, the TEAM as such doesn't exist, and you have broken the primary design principle of the aircraft's designer. No TEAM, no plane, and there are many examples that show this, not just in the airline accident investigations.

    Polishing the Hubble Mirror at Perkin-Elmer

    In the case of the Hubble Telescope mirror, things broke down when a simple calibration step was rushed. The sub-contractor in charge of measuring the point of focus on the mirror followed the procedure as given to him, but skipped a step that threw the whole calibration off. The step he skipped was simply to apply spray paint onto two end caps that would then be placed onto a very delicately measured and finely cut metal rod. The black spray paint was meant to act as a non-reflective surface, exposing only a very small bit of the rod end to the laser that would measure the distance to the focus point. What happened instead: because the whole telescope program was over budget and constantly delayed, all sub-contractors were pressured to 'hurry up'. When the guy responsible for this calibration step couldn't find matte black spray paint to put on the end caps, he improvised (like a true engineer). He got black electrical tape, wrapped it around the end of the cap, cut a hole with the tip of an X-Acto knife, and began his calibration step.

    But that one detail was what put the whole Hubble Space Telescope in jeopardy. In the rush to get this step done, the X-Acto knife nicked a bit off the metal end cap and a small, shiny metal burr was created. Almost too small to see, the burr poked out into the hole cut into the black electrical tape for the laser to shine through. When the engineer ran the calibration, the small burr reflected light back to the sensors measuring the distance. The burr sat only 1mm off the polished surface of the calibration rod, yet that 1mm offset was registered as 'in spec', and the full distance to the focus point had 1mm added to it. Considering how accurate a mirror has to be for telescope work, and how long the Hubble mirror spent being ground and polished, 1mm might as well be a mile. And this was the source of the 'blur' in the Hubble Telescope when it was first turned on after being deployed by the Space Shuttle. The hurry-up-and-get-it-done, we're-behind-schedule culture jeopardized a billion-dollar space telescope mission that was already over budget and way behind schedule.

    All these cautionary tales reiterate the over-arching theme: the big failures are not technical. These failures are cultural, and everyone has the capacity to do better every chance they get. I encourage anyone and everyone reading this article to read the original interview with Charlie Pellerin, as he's got a lot to say on this subject, and some fixes that can be applied to avoid the fire next time. Because statistically speaking, there will always be a next time.

    KLM's 747-406 PH-BFW, nose (Photo credit: caribb)
  • Apple A5X CPU in Review

    Apple Inc. (Photo credit: marcopako)

    A meta-analysis of the Apple A5X system on chip

    (from the currently shipping 3rd Gen iPad)

    New iPad's A5X beats NVIDIA Tegra 3 in some tests (MacNN|Electronista)

    Apple’s A5X Die (and Size?) Revealed (Anandtech.com)

    Chip analysis reveals subtle changes to new iPad innards (AppleInsider-quoting Anandtech)

    Apple A5X Die Size Measured: 162.94mm^2, Samsung 45nm LP Confirmed (Update from Anandtech based on a more technical analysis of the chip)

    Reading through all the hubbub and hand-waving from the technology 'teardown' press outlets, one would have expected a bigger leap from Apple's chip designers. A fairly large chip sporting an enormous graphics processor integrated into the die is what Apple came up with to help boost itself to the next-higher-resolution display (the so-called Retina Display). The design rule is still a pretty conservative 45nm (rather than trying to push the envelope by going to 32nm or smaller to bring down the power requirements). Apple similarly had to boost battery capacity, to almost 2X that of the first-gen iPad, to feed this power-hungry pixel demon. So for almost the 'same' battery life (10 hours of reserve power), you get the higher-resolution display. But a bigger chip and a higher-resolution display add up to some extra heat being generated, generally speaking. Which leads us to a controversy.

    Given this knowledge, there has been a recent back-and-forth argument over the thermal design of the 3rd generation iPad. Consumer Reports published an online article saying the power/heat dissipation was much higher than in previous-generation iPads, and included some thermal photographs indicating the hot spots on the back of the device and relative temperatures. While the iPad doesn't run hotter than a lot of other handheld devices (say, Android tablets), it does run hotter than, say, an iPod Touch. But as Apple points out, that has ALWAYS been the case. So you gain some things, you give up some things, and still Apple is the market leader in this form factor, years ahead of the competition. And now the tempest in the teapot is winding down as Consumer Reports (via LATimes.com) has rated the 3rd Gen iPad as its no. 1 tablet on the market (big surprise). So while they aren't willing to retract their original claim of high heat, they are willing to say it doesn't count as 'cause for concern'. So you be the judge when you try out the iPad in the Apple Store. Run it through its paces; a full-screen video or two should heat up the GPU and CPU enough to get the electrons really racing through the device.

    The Apple A5X, the new system-on-chip used in the Apple 3rd generation iPad
  • ARM Wants to Put the Internet in Your Umbrella | Wired Enterprise | Wired.com


    On Tuesday, the company unveiled its new ARM Cortex-M0+ processor, a low-power chip designed to connect non-PC electronics and smart sensors across the home and office.

    Previous iterations of the Cortex family of chips had the same goal, but with the new chip, ARM claims much greater power savings. According to the company, the 32-bit chip consumes just nine microamps per megahertz, an impressively low amount even for an 8- or 16-bit chip.

    via ARM Wants to Put the Internet in Your Umbrella | Wired Enterprise | Wired.com.

    Lower power means a very conservative power budget, especially for devices connected to the network. And 32 bits is nothing to sneeze at, considering most manufacturers would pick a 16- or 8-bit chip to bring down the cost and the power budget too. According to this article the power savings are so great that in sleep mode the chip consumes almost no power at all. For this market Moore's Law is paying off big, especially given the bonus of a 32-bit core. So not only will you get a very small, low-power CPU, you'll have a much more diverse range of software that can run on it and take advantage of a larger memory address space as well. I think non-PC electronics could include things as simple as web cams or cellphone cameras. Can you imagine a CMOS camera chip with a whole 32-bit CPU built in? Makes you wonder not just what it could do, but what ELSE it could do, right?
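
    To put the quoted nine-microamps-per-megahertz figure in rough perspective, here's a back-of-the-envelope Python sketch; the 48 MHz clock, 1% duty cycle, sleep current and coin-cell capacity are my own illustrative assumptions, not numbers from the article.

    ```python
    # Rough battery-life estimate for a sensor node built around a low-power
    # 32-bit MCU, using the ~9 uA/MHz active-current figure quoted above.
    # Clock speed, duty cycle, sleep current and battery size are assumptions.

    UA_PER_MHZ = 9.0          # quoted active current per MHz
    CLOCK_MHZ = 48.0          # assumed clock speed
    DUTY_CYCLE = 0.01         # awake 1% of the time, asleep the rest
    SLEEP_UA = 0.5            # assumed deep-sleep current ("almost no power")
    BATTERY_MAH = 225.0       # assumed coin-cell capacity

    active_ua = UA_PER_MHZ * CLOCK_MHZ                      # ~432 uA while running
    avg_ua = DUTY_CYCLE * active_ua + (1 - DUTY_CYCLE) * SLEEP_UA
    hours = (BATTERY_MAH * 1000.0) / avg_ua                 # mAh -> uAh, then / uA

    print(f"active draw : {active_ua:.0f} uA")
    print(f"average draw: {avg_ua:.2f} uA")
    print(f"battery life: {hours / 24:.0f} days (~{hours / 24 / 365:.1f} years)")
    ```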

    The term 'Internet of Things' is bandied about quite a bit as people dream about CPUs and networks connecting ALL the things. And what would be the outcome if your umbrella were connected to the Internet? What if ALL the umbrellas were connected? You could log all kinds of data: whether it was opened or closed, what the ambient temperature is. It would potentially be like a portable weather station for anyone aggregating all the logged data. And the list goes on and on. Instead of just tire pressure monitors, why not also capture video of the tire as it is being used commuting to work? It could help measure tire wear and set up an appointment when you need a wheel alignment. It could determine how many times you hit potholes and suggest smoother alternate routes. That's the kind of blue-sky, wide-open conjecture that is enabled by a 32-bit low/no-power CPU.

    Moore's Law, The Fifth Paradigm (Photo credit: Wikipedia)
  • Accidental Time Capsule: Moments from Computing in 1994 (from RWW)

    Byte Magazine is one of the reasons I'm here today, doing what I do. Every month, Byte set its sights on the bigger picture, a significant trend that might be far ahead or way far ahead. And in July 1994, Jon Udell, to this very day among the most insightful people ever to sign his name to an article, was setting his sights on the inevitable convergence between the computer and the telephone.

    via Accidental Time Capsule: Moments from Computing in 1994, by Scott Fulton (ReadWriteWeb).

    Jon Udell (Photo credit: Wikipedia)

    I also liked Tom Halfhill, Jerry Pournelle, Steve Gilmore, and many other writers at Byte over the years too. I couldn't agree more with Scott Fulton, as I am still a big fan of Jon Udell and any project he has worked on and documented. I can credit Jon Udell for getting me curious about weblogging, Radio Userland, WordPress, Flickr and del.icio.us (the social bookmarking website). And I've been watching his progress on a 'calendar of public calendars', the elmcity project. Jon is attempting to catalog and build an aggregated list of calendars with RSS-style feeds that anyone can subscribe to. No need for automated emails filling a filtered email box; you just fire up a browser and read what's posted. You find out what's going on and just add the event to your calendar.

    As Jon has discovered, the calendars exist and the events are there, they just aren't evenly distributed yet (i.e., much like the future). So in his analysis of 'what works', Jon has found some sterling examples of calendar keeping and maintenance, some of which have popped up in interesting places, like public school systems. However, the biggest downfall of all event calendars is the all-too-common practice of taking Word documents and exporting them as PDF files which get posted to a website. THAT is the calendar for far too many organizations, and it fails utterly as a means of 'discovering' what's going on.

    Suffice it to say elmcity is a long-term organizing and curatorial effort that Jon is trying to get an informal network of like-minded people involved in. And as different cities form calendar 'hubs', Jon is collecting them into larger networks so that you can search one spot, find out 'what's happening', and then add those events to your own calendar in a very seamless and lightweight manner. I highly recommend following Jon's weblog, as he's got the same ability to explain and analyze these technologies that he excelled at while at Byte, and he continues to follow his bliss and curiosity about computers, networks and, more generally, technology.
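
    As a rough illustration of the kind of aggregation an elmcity-style hub performs (this is not Jon's actual implementation, and the feed URLs are placeholders), here's a small Python sketch that pulls a couple of iCalendar feeds, merges their events, and prints one chronological list, using the third-party icalendar package.

    ```python
    # Sketch of an elmcity-style calendar hub: fetch several public iCalendar
    # feeds, merge their events, and print one chronological "what's happening"
    # list. Feed URLs are hypothetical; requires the `icalendar` package.
    from urllib.request import urlopen

    from icalendar import Calendar

    FEEDS = [
        "https://example.org/library/events.ics",   # hypothetical public feeds
        "https://example.org/schools/calendar.ics",
    ]

    def fetch_events(url):
        """Download one .ics feed and yield (start, summary) pairs."""
        cal = Calendar.from_ical(urlopen(url).read())
        for event in cal.walk("VEVENT"):
            start = event.get("DTSTART").dt
            summary = str(event.get("SUMMARY", ""))
            yield start, summary

    def build_hub(feeds):
        """Merge all feeds into one list, sorted by start time."""
        merged = [item for url in feeds for item in fetch_events(url)]
        return sorted(merged, key=lambda pair: str(pair[0]))

    if __name__ == "__main__":
        for start, summary in build_hub(FEEDS):
            print(start, "-", summary)
    ```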

  • Stephen Wolfram Blog : The Personal Analytics of My Life

    Publicity photo of Stephen Wolfram (Photo credit: Wikipedia)

    One day I’m sure everyone will routinely collect all sorts of data about themselves. But because I’ve been interested in data for a very long time, I started doing this long ago. I actually assumed lots of other people were doing it too, but apparently they were not. And so now I have what is probably one of the world’s largest collections of personal data.

    via Stephen Wolfram Blog : The Personal Analytics of My Life.

    Gordon Bell (Photo credit: Wikipedia)

    In some ways similar to Stephen Wolfram, Gordon Bell at Microsoft has engaged in an attempt to record his life with 'MyLifeBits', using a wearable computer to record video and capture what goes on in his life. In my opinion, Stephen Wolfram has done Gordon Bell one better by collecting data over a much longer period, and of a much wider range, than Gordon Bell accomplished within the scope of MyLifeBits. Reading Wolfram's summary of all his data plots is as interesting as seeing the plots themselves. There can be no doubt that Stephen Wolfram has always thought, and will continue to think, differently than most folks, and dare I say most scientists. Bravo!

    The biggest difference between MyLifeBits and Wolfram's personal data collection is Wolfram's emphasis on non-image-based data. The goal, it seems, for the Microsoft Research group is to fulfill the promise of Vannevar Bush's old article titled "As We May Think", printed in The Atlantic in July 1945. In that article Bush proposes a prototype of a more 'visual computer' that would act as a memory-recall and analytic-thinking aid. He named it the Memex.

    Gordon Bell and Jim Gemmell of Microsoft Research seemed to be focused on the novelty of a carried camera automatically taking pictures of the area immediately in front of it. This log of 'what was seen' was meant to help cement visual memory and recall. Gordon Bell had spent a long period of time digitizing "articles, books, cards, CDs, letters, memos, papers, photos, pictures, presentations, home movies, videotaped lectures, and voice recordings and stored them digitally." This emphasis on visual data might, if used properly, be useful to some, but it is more a product of Gordon Bell's own personal interest in seeing how much he could capture and then catalog after the fact.

    Stephen Wolfram's data wasn't even necessarily based on a 'wearable computer' the way MyLifeBits is. Wolfram built a logging/capture system into the things he did daily on a computer, and even included data collected by a digital pedometer to measure the steps he took in a day. The plots of the data are most interesting in comparison to one another, especially given the length of time over which they were collected (a much bigger set than Gordon Bell's MyLifeBits, I dare say). So maybe this points to another step forward in the evolution of lifebits. Wolfram's data seems more useful in a lot of ways; he's not as focused on memory and recall of any given day. But a synthesis of Wolfram's data collection methods and analysis with Gordon Bell's MyLifeBits capture of image data might be useful to a broader range of people, if someone wanted to embrace and extend these two scientists' personal data projects.
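
    In the spirit of Wolfram's plots (though not his actual pipeline), here's a minimal matplotlib sketch that turns a plain file of timestamps, one per line, into the classic time-of-day versus date scatter; the events.log file name and the ISO timestamp format are assumptions for illustration.

    ```python
    # Minimal personal-analytics plot in the spirit of Wolfram's: each logged
    # event (an email sent, a file saved, a pedometer reading) becomes one dot,
    # with the date on the x-axis and the time of day on the y-axis.
    # Assumes a hypothetical "events.log" with one ISO timestamp per line.
    from datetime import datetime

    import matplotlib.pyplot as plt

    with open("events.log") as fh:
        stamps = [datetime.fromisoformat(line.strip()) for line in fh if line.strip()]

    dates = [ts.date() for ts in stamps]
    hours = [ts.hour + ts.minute / 60.0 for ts in stamps]   # fractional hour of day

    plt.figure(figsize=(10, 4))
    plt.scatter(dates, hours, s=2, alpha=0.4)
    plt.xlabel("date")
    plt.ylabel("hour of day")
    plt.ylim(0, 24)
    plt.title("Events by date and time of day")
    plt.tight_layout()
    plt.savefig("events_scatter.png")
    ```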

  • Nginx is some serious voodoo for serving up websites. I'm bowled over by this level of performance from a consumer-level Intel box running Ubuntu. And from 200 connections to 1,000 connections, performance stays high without any big increase in latency. Amazing. A rough sketch of how the same sweep might be scripted follows the excerpt.

    The Low Latency Web

    A modern HTTP server running on somewhat recent hardware is capable of servicing a huge number of requests with very low latency. Here’s a plot showing requests per second vs. number of concurrent connections for the default index.html page included with nginx 1.0.14.


    With this particular hardware & software combination the server quickly reaches over 500,000 requests/sec and sustains that with gradually increasing latency. Even at 1,000 concurrent connections, each requesting the page as quickly as possible, latency is only around 1.5ms.

    The plot shows the average requests/sec and per-request latency of 3 runs of wrk -t 10 -c N -r 10m http://localhost:8080/index.html where N = number of connections. The load generator is wrk, a scalable HTTP benchmarking tool.

    Software

    The OS is Ubuntu 11.10 running Linux 3.0.0-16-generic #29-Ubuntu SMP Tue Feb 14 12:48:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux. The following kernel parameters were changed to increase…

    View original post 149 more words
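
    For anyone curious how a sweep like that might be scripted, here's a rough Python harness around the same wrk invocation quoted in the post; the connection counts, output file names, and the assumption that nginx is already listening on localhost:8080 are mine.

    ```python
    # Rough harness for the benchmark sweep described above: run the quoted
    # wrk command for several connection counts and save the raw output.
    # Assumes wrk is installed and nginx is serving on localhost:8080.
    import subprocess

    URL = "http://localhost:8080/index.html"
    CONNECTIONS = [100, 200, 500, 1000]   # assumed sweep points
    RUNS_PER_POINT = 3                    # the post averages 3 runs per point

    for n in CONNECTIONS:
        for run in range(1, RUNS_PER_POINT + 1):
            cmd = ["wrk", "-t", "10", "-c", str(n), "-r", "10m", URL]
            result = subprocess.run(cmd, capture_output=True, text=True, check=True)
            outfile = f"wrk_c{n}_run{run}.txt"
            with open(outfile, "w") as fh:
                fh.write(result.stdout)           # raw wrk report for later parsing
            print(f"c={n} run={run} -> {outfile}")
    ```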

  • Raspberry Pi Hits a Slight Manufacturing Delay

    Extract from a Raspberry Pi board (Image via Wikipedia)

    The $35 Raspberry Pi "Model B" is the board of choice to ship out to consumers first. It contains two USB ports, 256 MB of RAM, an Ethernet port and a 700 MHz Broadcom BCM2835 SoC. The Videocore 4 GPU within the SoC is roughly equivalent to the original Xbox's level of performance, providing Open GL ES 2.0, hardware-accelerated OpenVG, and 1080p30 H.264 high-profile decode.

    via Raspberry Pi Hits a Slight Manufacturing Delay.

    Raspberry Pi boards are on the way and the components list is still pretty impressive for $35 USD. Not bad, given they had a manufacturing delay. The re-worked boards should ship out as a second batch once they have been tested fully. It also appears all the other necessary infrastructure is slowly falling into place to help create a rich environment for curious and casually interested purchasers of the Raspberry Pi. For instance let’s look at the Fedora remixes for Raspberry Pi.

    A remix in the open source software community refers to a distribution of an OS that is pre-built to run on a particular chip architecture, whether it be the Raspberry Pi's Broadcom chip or an Intel x86 variety, without the user having to compile anything. In addition to the OS, a number of other pre-configured applications are included so that you can start using the computer right away instead of having to download lots of apps. The best part of this is not only the time savings but the lowering of the threshold for less technical users. Also of note are the desktop environments chosen for the Fedora remix, LXDE and XFCE, both noted for being less resource-intensive and smaller in size. The documentation on the Fedora website indicates these two desktops are geared toward older, less capable, less powerful computers that you would still like to use around the house. And for a Raspberry Pi user, getting a tuned OS specifically compiled for your CPU and ready to go is a big boon.

    Logo for Raspberry Pi
    Raspberry Pi project

    What's even more encouraging is the potential for a Raspberry Pi community to begin optimizing and developing a new range of apps specifically geared towards this new computer architecture. I know the Fedora Yum project is a great software package manager using the RPM format for adding and removing software components as things change. And having a Yum app geared specifically for Raspberry Pi users might give a more app-store-like experience to the more casual users interested in dabbling. Right now there's a group at Seneca College in Toronto, Canada doing work on an app-store-like application that would facilitate the process of discovering, downloading and trying out different software pre-compiled for the Raspberry Pi computer. A rough sketch of the idea follows.
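
    Here's a sketch of what such an app-store-like front end might do under the hood; this is not the Seneca College project itself, just a toy wrapper around the yum commands mentioned above, and the package name is a placeholder.

    ```python
    # Toy "app store" front end for a Raspberry Pi Fedora remix: search for and
    # install precompiled packages by shelling out to yum. A sketch of the idea
    # only, not the Seneca College project; run it with root privileges.
    import subprocess

    def search(keyword: str) -> str:
        """Return yum's search results for a keyword, e.g. 'game' or 'editor'."""
        result = subprocess.run(["yum", "search", keyword],
                                capture_output=True, text=True)
        return result.stdout

    def install(package: str) -> bool:
        """Install one package non-interactively; True if yum succeeded."""
        result = subprocess.run(["yum", "install", "-y", package])
        return result.returncode == 0

    if __name__ == "__main__":
        print(search("game"))              # browse what's available
        if install("supertux"):            # placeholder package name
            print("installed - try it out")
    ```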

  • AMD Snatches New-Age Server Maker From Under Intel | Wired Enterprise | Wired.com


    Chip designer and chief Intel rival AMD has signed an agreement to acquire SeaMicro, a Silicon Valley startup that seeks to save power and space by building servers from hundreds of low-power processors.

    via AMD Snatches New-Age Server Maker From Under Intel | Wired Enterprise | Wired.com.

    It was bound to happen eventually, I guess: SeaMicro has been acquired by AMD. We'll see what happens as a result, since SeaMicro is a customer of Intel's Atom chips and, most recently, Xeon server chips as well. I have no idea where this is going or what AMD intends to do, but hopefully this won't scare off any current or near-future customers.

    SeaMicro's competitive advantage has been, and will continue to be, the development work it did on the custom ASIC it uses in all of its systems. That bit of intellectual property was in essence the reason AMD decided to acquire SeaMicro, and it should give AMD an engineering advantage for systems it might bring to market in the future for large-scale data centers.

    While this is all pretty cool technology, I think that SeaMicro’s best move was to design its ASIC so that it could take virtually any common CPU. In fact, SeaMicro’s last big announcement introduced its SM10000-EX option, which uses low-power, quad-core Xeon processors to more than double compute performance while still keeping the high density, low-power characteristics of its siblings.

    via SeaMicro acquisition: A game-changer for AMD • The Register.

    So there you have it: Wired and The Register are reporting the whole transaction pretty positively. On the surface it looks like a win for AMD, as it can design new server products and get them to market quickly using the SeaMicro ASIC as a key ingredient. SeaMicro can still service its current customers, and eventually AMD can upsell or upgrade as needed to keep the ball rolling. And with AMD's Fusion architecture marrying GPUs with CPU cores, who knows what cool new servers might be possible? But as usual the nay-sayers, the spreaders of Fear, Uncertainty and Doubt, have questioned the value of SeaMicro and its original product, the SM10000.

    Diane Bryant, the general manager of Intel's data center and connected systems group, was asked at a press conference for the launch of new Xeon processors about SeaMicro's attempt to interest Intel in buying the company. She had this to say: "We looked at the fabric and we told them thereafter that we weren't even interested in the fabric." To Intel there's nothing special enough in SeaMicro to warrant buying the company. Furthermore, Bryant told Wired.com:

    “…Intel has its own fabric plans. It just isn’t ready to talk about them yet. “We believe we have a compelling solution; we believe we have a great road map,” she said. “We just didn’t feel that the solution that SeaMicro was offering was superior.”

    This is a move straight out of Microsoft's marketing department circa 1992, when it would pre-announce a product that never shipped and was barely developed beyond the prototype stage. If Intel were really working on this as a new product offering, you would have seen an announcement by now, rather than a vague, tangential reference that reads more like a parting shot than a strategic direction. So I will be watching intently in the coming months, and years if needed, to see what Intel 'fabric technology', if any, makes its way from the research lab to the development lab and into a shipping product. Don't be surprised if this is Intel attempting to undermine AMD's decision to purchase SeaMicro. Likewise, Forbes.com later reported, citing a SeaMicro representative, that the company had not tried to encourage Intel to acquire it. It is anyone's guess who is really correct and being 100% honest in their recollections. However, I am still betting on SeaMicro's long-term strategy of pursuing low-power, ultra-dense, massively parallel servers. It is an idea whose time has come.

  • Hope for a Tool-Less Tomorrow | iFixit.org

    I’ve seen the future, and not only does it work, it works without tools. It’s moddable, repairable, and upgradeable. Its pieces slide in and out of place with hand force. Its lid lifts open and eases shut. It’s as sleek as an Apple product, without buried components or proprietary screws.

    via Hope for a Tool-Less Tomorrow | iFixit.org.

    HP Z1 workstation

    Oh how I wish this were true today for Apple. I say this as a recent purchaser of an Apple-refurbished 27″ iMac. My logic and reasoning for going with refurbished over new was based on a few bits of knowledge gained from reading Macintosh weblogs. The rumors I read included the idea that Apple-repaired items are strenuously tested before being re-sold, and in some cases returned items are not even broken; they are returns based on buyer's remorse or cosmetic problems. So there's a good chance the logic board and LCD have no problems. Reading back this summer, just after the launch of Mac OS X 10.7 (Lion), I saw lots of reports of crashes on 27″ iMacs, so I figured a safer bet would be to get a 21″ iMac. But then I started thinking about flash-based solid state disks, and looking at the prohibitively high prices Apple charges for its installed SSDs, I decided I needed something that I could upgrade myself.

    But as you may know, iMacs have never been, and continue not to be, user-upgradable. That's not to say people haven't tried, or succeeded, in upgrading their own iMacs over the years. Enter the aftermarket for SSD upgrades. Apple has attempted to zig and zag as the hobbyists swap in newer components like larger hard drives and SSDs. Witness the Apple temperature sensor on the boot drive in the 27″ iMac, where they have added a sensor wire to measure the internal heat of the hard drive. As the Mac monitors this signal it will rev up the internal fans. Any iMac hobbyist attempting to swap in a 3TB or 4TB drive for the stock Apple 2TB drive will suffer the inevitable panic mode of the iMac as it cannot see its temperature sensor (these replacement drives don't have the sensor built in) and assumes the worst. They say the noise is deafening when those fans speed up, and they never, EVER slow down. This is Apple's attempt to ensure sanctity through obscurity: no one is allowed to mod or repair, and that includes anyone foolish enough to attempt to swap the internal hard drive on their iMac.

    But there's a workaround, thank goodness: the 27″ iMac's internal case is just large enough to install a secondary hard drive. You can slip a 2.5″ SSD into that chassis; you just gotta know how to open it up. And therein lies the theme of this essay: the user-upgradable, user-friendly computer case design. The antithesis of this idea IS the 27″ iMac, if you read the teardown steps from iFixit and the photographer Brian Tobey. Both of these websites make clear the excruciating minutiae of finding and disconnecting the myriad miniature cables that connect the logic board to the computer. Without going through those steps one cannot gain access to the spare SATA connectors facing the back of the iMac case. I decided to go through these steps to add an SSD to my iMac right after it was purchased. I thought Brian Tobey's directions were slightly better and had more visuals pertinent to the way I was working on the iMac as I opened up the case.

    It is, in a word, a non-trivial task. You need the right tools, the right screwdrivers; in fact you even need suction cups! (thank you, Apple). However, there is another way, even for so-called all-in-one style computer designs like the iMac. It's a new product from Hewlett-Packard targeted at the desktop engineering and design crowd: an all-in-one workstation that is user-upgradable, and it's all done without any tools at all. Let me repeat that last bit: it is a 'tool-less' design. What, you may ask, is a tool-less design? I hadn't heard of it either until I read this article on iFixit. And after following the links to the NewEgg.com website to see what other items were tagged as 'tool-less', I began to remember some hints and stabs at this I had seen in some Dell Optiplex desktops some years back. The 'carrier' brackets for the CD/DVD and HDD drive bays were green plastic rails that simply 'pushed' into the sides of the drive (no screws necessary).

    And when I consider that my experience working inside the 27″ iMac actually went pretty well (it booted up the first time, no problems) after all I had done to it, I count myself very lucky. But it could have been better, and there's no reason it cannot be better for EVERYONE. It also made me think of the XO laptop (the One Laptop Per Child project), and I wondered how tool-less that laptop might be. How accessible are any of these designs? It also made me recall the Facebook story I recently commented on, about how Facebook is designing its own hard drive storage units to make them easier to maintain (no little screws to get lost, dropped onto a fully powered motherboard, and short things out). So I have much more hope than when I first embarked on the do-it-yourself journey of upgrading my iMac. Tool-less design today, tool-less design tomorrow, and tool-less design forever.
