Category: science & technology

This is what I read to find out what else is going on, beyond just the Internet and desktop computers.

  • Intel’s Tri-Gate gamble: It’s now or never • The Register

    Analysis  There are two reasons why Intel is switching to a new process architecture: it can, and it must.

    via Intel’s Tri-Gate gamble: It’s now or never • The Register.

    Nearly every die shrink of a computer processor comes with an attendant evolution of the technology used to produce it. I think back to the recent introduction of immersion lithography using ultra-purified water. The goal of immersion lithography was to improve the ability to resolve the fine wire traces of the photomasks as they were exposed onto the photosensitive emulsion coating a silicon wafer. The problem is that the light travels from the photomask to the surface of the wafer through ‘air’. There’s a small gap, and air is full of optically scattering atoms and molecules that make the projected photomask slightly blurry. If you put a layer of water between the mask and the wafer, you have in a sense a ‘lens’ made of optically superior water molecules that behave more predictably than ‘air’. The result is better resolution, better chip yields, more profit, higher margins and so on.
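
    To put rough numbers on why the water helps, here’s a minimal sketch of the textbook Rayleigh resolution criterion, R = k1 · λ / NA, where the numerical aperture NA scales with the refractive index of the medium between the final lens and the wafer. The k1 factor and the NA values below are illustrative assumptions, not figures from the article.

    ```python
    # Rough Rayleigh-criterion sketch: resolution R = k1 * wavelength / NA.
    # Immersion lithography raises NA because NA = n * sin(theta), and water
    # (n ~ 1.44) replaces air (n ~ 1.0) between the lens and the wafer.
    # The k1 and NA values are illustrative assumptions, not measured figures.

    WAVELENGTH_NM = 193.0   # ArF excimer laser source
    K1 = 0.35               # assumed process factor

    def min_feature_nm(numerical_aperture: float) -> float:
        """Smallest printable half-pitch for a given numerical aperture."""
        return K1 * WAVELENGTH_NM / numerical_aperture

    dry_na = 0.93        # roughly the practical limit with air in the gap
    immersion_na = 1.35  # achievable once water fills the gap

    print(f"dry:       ~{min_feature_nm(dry_na):.0f} nm features")
    print(f"immersion: ~{min_feature_nm(immersion_na):.0f} nm features")
    ```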

    As the wire traces on microchips continue to get thinner and the transistors smaller, the physics involved gets harder to control. Electrical behavior begins to follow the laws of quantum electrodynamics rather than Maxwell’s equations. This makes it harder to tell when a transistor has switched on or off, and the basic digits of the digital computer (1s and 0s) become harder and harder to measure and register properly. IBM and Intel waged a die-shrinking war all through the ’80s and ’90s. IBM chose to adopt new, sometimes exotic materials (copper traces instead of aluminum, silicon-on-insulator, high-k dielectric gates). Intel chose to go the direction of improving what it had, using higher-energy light sources and only adopting very new processes when absolutely, positively necessary. At the same time, Intel was cranking out such volumes of current-generation product that it almost seemed as though it didn’t need to innovate at all. But IBM kept Intel honest, as did Taiwan Semiconductor Manufacturing Co. (the contract manufacturer of microprocessors). And Intel continued to maintain its volume and technological advantage.

    ARM (originally the Acorn RISC Machine) became a CPU maker during the golden age of RISC computers (the early and mid-1980s). Over time it got out of manufacturing and started selling its processor designs to anyone who wanted to embed a core microprocessor into a bigger chip design. Eventually ARM became the de facto standard microchip for smart handheld devices and telephones before Intel had to react. Intel had come up with a market-leading low-voltage, cheap CPU in the Atom processor. But it did not have the specialized knowledge and capability ARM had with embedded CPUs. Licensees of ARM designs began cranking out newer generations of higher-performance, lower-power CPUs faster than Intel’s research labs could create them, and the stage was set for a battle royale of low power versus high performance.

    Which brings us to the current attempt to continue scaling down processor power requirements through the same brute force that worked in the past. Moore’s Law, an epigram attributed to Intel’s Gordon Moore, described the rate at which the ‘industry’ would keep scaling down the size of the ‘wires’ in silicon chips, increasing speed and lowering costs. Speeds would double, prices would halve, and this would continue ad infinitum into some distant future. The problem has always been that the future is now. Intel hit a brick wall back around the end of the Pentium 4 era when it couldn’t get speeds to double anymore without also doubling the amount of waste heat coming off the chip. That heat was harder and harder to remove efficiently, and soon it appeared the chips would create so much heat they might melt. Intel worked around this by putting multiple CPU cores on the same silicon dies it was already producing and got some amount of performance scaling to work. Along those lines it has run research projects to create first an 80-core processor, then a 48-core and now a 24-core processor (which might actually turn into a shippable product).

    But what about Moore’s Law? Well, the scaling has continued downward, and power requirements have improved, but it’s getting harder and harder to shave down those little wire traces and get the bang that drives profits for Intel. Now Intel is going the full-on research and development route by adopting a new way of making transistors on silicon. It’s called the Fin Field Effect Transistor, or FinFET. The gate wraps over the top and both sides of a raised silicon fin, effectively giving three surfaces of control over the channel instead of one. If Intel can get this to work on a modern-day silicon chip production line, it will be able to continue differentiating its product, keeping its costs manageable and selling more chips. But it’s a big risk, and a bet I’m sure everyone hopes will pay off.
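
    As a back-of-the-envelope illustration of the ‘doubling’ framing of Moore’s Law used above, here’s a tiny sketch that compounds a transistor budget every two years. The starting count and the cadence are assumptions for illustration only, not Intel roadmap figures.

    ```python
    # Toy illustration of the "double every two years" reading of Moore's Law.
    # The starting transistor count and the 2-year cadence are assumptions for
    # illustration only, not actual roadmap figures.

    START_YEAR = 2011
    START_TRANSISTORS = 1_000_000_000   # assume a ~1B-transistor CPU baseline
    DOUBLING_PERIOD_YEARS = 2

    def projected_transistors(year: int) -> int:
        doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
        return int(START_TRANSISTORS * 2 ** doublings)

    for year in (2011, 2013, 2015, 2017):
        print(year, f"{projected_transistors(year):,}")
    ```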

  • Chip upstart Tilera in the news

    Diagram of a partial mesh network

    As of early 2010, Tilera had over 50 design wins for the use of its SoCs in future networking and security appliances, which was followed up by two server wins with Quanta and SGI. The company has had a dozen more design wins since then and now claims to have over 150 customers who have bought prototypes for testing or chips to put into products.

    via Chip upstart Tilera lines up $45m in funding • The Register.

    There hasn’t been a lot of news about Tilera recently, but they are still selling products and raising funds through private investment. Their product road map is showing great promise as well. I want to see more of their shipping product get tested by the online technology press; I don’t care whether Infoworld, Network World, Tom’s Hardware or AnandTech does it. Whether it’s security devices or actual multi-core servers, it would be cool to see Tilera compared, even if it were an apples-and-oranges type of test. On paper, the mesh network of Tilera’s multi-core CPUs is designed to set it apart from any other product currently available on the market. Similarly, the ease of accessing the cores through the mesh network is meant to make the use of a single system image much easier, as it is distributed across all the cores almost invisibly. In a word, Tilera and its next closest competitor, SeaMicro, are cloud computing in a single solitary box.
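
    To make the mesh idea concrete, here is a toy sketch of dimension-ordered (XY) routing on a 2D grid of cores, the general kind of on-chip network a mesh design uses to pass data between tiles. The grid size and routing policy are simplifying assumptions of mine, not Tilera’s actual interconnect.

    ```python
    # Toy model of an on-chip 2D mesh: each core sits at (x, y) and messages
    # hop neighbour-to-neighbour using dimension-ordered (XY) routing.
    # A conceptual sketch only, not Tilera's real mesh implementation.

    GRID = 8  # assume an 8x8 grid of cores (64 tiles)

    def xy_route(src, dst):
        """Return the list of (x, y) hops from src to dst, X direction first."""
        x, y = src
        path = [(x, y)]
        while x != dst[0]:
            x += 1 if dst[0] > x else -1
            path.append((x, y))
        while y != dst[1]:
            y += 1 if dst[1] > y else -1
            path.append((x, y))
        return path

    route = xy_route((0, 0), (5, 3))
    print(f"hops: {len(route) - 1}")   # 8 hops across the mesh
    print(route)
    ```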

    Cloud computing, for those who don’t know, is an attempt to create a utility like the water system or electrical system in the town where you live. The utility has excess capacity, and what it doesn’t use it sells off to connected utility systems. So you will always have enough power to cover your immediate needs, with a little in reserve for emergencies. On the days when people don’t use as much electricity, you cut back on production a little or sell off the excess to someone who needs it. Now imagine that the electricity is computer cycles doing additions, subtractions or longer-form mathematical analysis, all in parallel and scaling out to extra computer cores as needed depending on the workload. Amazon already sells a service like this; Microsoft does too. You sign up to use their ‘compute cloud’, load your applications and your data, and just start crunching away while the meter runs. You get billed based on how much of the computing resource you used.
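
    Here’s a minimal sketch of that ‘the meter runs while you compute’ billing model; the hourly rate and usage pattern are made-up numbers for illustration, not Amazon’s or Microsoft’s actual pricing.

    ```python
    # Toy metered-compute bill: pay only for the core-hours you actually used.
    # The hourly rate and usage pattern are made-up illustrative numbers,
    # not any provider's real pricing.

    RATE_PER_CORE_HOUR = 0.08  # assumed price in dollars

    # (cores rented, hours they ran) for each day of a hypothetical workload
    usage = [(16, 4), (64, 2), (8, 12), (0, 0), (128, 1)]

    core_hours = sum(cores * hours for cores, hours in usage)
    print(f"core-hours consumed: {core_hours}")
    print(f"bill: ${core_hours * RATE_PER_CORE_HOUR:.2f}")
    ```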

    Nowadays, unfortunately, data centers are full of single-purpose servers doing one thing and sitting idle most of the time. This has become such a concern that a whole industry has grown up around splitting those machines into thinner slices with software like VMware. Those little slivers of a real computer then take up all the idle time of that once single-purpose machine and occupy a lot more of its resources. But you still have that full-sized hog of an old desktop tower sitting in a 19-inch rack, generating heat and sucking up too much power. Now it’s time to scale down the computer again, and that’s where Tilera comes in with its multi-core, low-power, mesh-networked CPUs. And investment partners are rolling in as a result of the promise of this new approach!

    Numerous potential customers, venture capital outfits, and even fabrication partners are jumping in to provide a round of funding that wasn’t even really being solicited by the company. Tilera just had people falling all over themselves writing checks to get a piece of the pie before things take off. It’s a good sign in these stagnant times for startup companies. And hopefully this will buy more time for the company’s roadmap of future CPUs, scaling up to the 200-core CPU that would be the peak achievement in this quest for high-performance, low-power computing.

  • 450mm chip wafers | Electronista

    via Intel factory to make first 450mm chip wafers | Electronista.

    Being a student of the history of technology, I know that the silicon semiconductor industry has been able to scale production according to Moore’s Law. However, apart from the advances in how small the transistors can be made (the real basis of Moore’s Law), the other scaling factor has been the size of the wafers. Back in the old days, silicon crystals had to be drawn out of a furnace at a very even, steady rate, which forced them to be thin cylinders 1-2″ in diameter. However, as techniques improved (including a neat trick where the crystal was re-melted to purify it), the crystals increased in diameter to a nice 4″ size that helped bring down costs. Then came the big migration to 6″ wafers, then 8″, and now the 300mm wafer (roughly 12″). Now Intel is still on its freight train to bring down costs further by moving the wafers up to the next largest size (450mm) while still shrinking the parts (down to an unbelievably skinny 22nm). As the wafers continue to grow, the cost of the processing equipment goes up, and the cost of the whole production facility does too. The big price point for a new production fab for Intel was always $2 billion. There may be multiple production lines in that fab, but you always needed to have that money upfront in order to be competitive. And Intel was more than competitive: it could put three lines into production in three years (blowing the competition out of the water for a while) and make things very difficult in the industry.
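
    The economics here are mostly geometry: the usable area of a wafer grows with the square of its diameter, so each step up buys a lot more chips per wafer. A rough sketch, using an assumed die size purely for illustration:

    ```python
    # Rough wafer-area scaling: usable area grows with the square of the
    # diameter, which is why each wafer-size jump cuts the cost per chip.
    # The die size below is an assumption for illustration, not a real product.

    import math

    def gross_dies(wafer_diameter_mm: float, die_area_mm2: float) -> int:
        """Crude gross-die estimate: wafer area / die area with a simple
        edge-loss correction; real fabs use more careful formulas."""
        d, s = wafer_diameter_mm, die_area_mm2
        return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

    DIE_AREA = 100.0  # assume a 100 mm^2 die

    for wafer in (200, 300, 450):
        print(f"{wafer} mm wafer: ~{gross_dies(wafer, DIE_AREA)} dies")

    print(f"area ratio, 450 mm vs 300 mm: {(450 / 300) ** 2:.2f}x")
    ```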

    Where things will really shake up is in the flash memory production lines. The design rules for current flash memory chips at Intel are right around 22nm. Intel and Samsung are both trying to shrink the feature sizes of all the circuits on their single- and multi-level-cell flash memory chips. Add to this the stacking of chips into super sandwiches, and you find they can glue together eight of their 8GByte chips, making a single very thin 64GByte memory package. That package is then mated to a memory controller and voilà, the iPhone suddenly hits 64GBytes of storage for all your apps and MP4s from iTunes. Similarly, on the hard drive end of the scale, things will also wildly improve. Solid-state disk capacities should creep further upwards (beyond the top-of-the-line 512GByte SSDs), as will PCI Express based storage devices (probably doubling in capacity to 2 terabytes) after 450mm wafers take hold across the semiconductor industry. So it’s going to be a big deal if Chinese, Japanese and American companies get on the large silicon wafer bandwagon.

  • A Conversation with Ed Catmull – ACM Queue

    EC: Here are the things I would say in support of that. One of them, which I think is really important—and this is true especially of the elementary schools—is that training in drawing is teaching people to observe.
    PH: Which is what you want in scientists, right?
    EC: That’s right. Or doctors or lawyers. You want people who are observant. I think most people were not trained under artists, so they have an incorrect image of what an artist actually does. There’s a complete disconnect with what they do. But there are places where this understanding comes across, such as in that famous book by Betty Edwards [Drawing on the Right Side of the Brain].

    via A Conversation with Ed Catmull – ACM Queue.

    This interview is with a computer scientist named Ed Catmull. In the time since Ed Catmull entered the field, we’ve gone from computers crunching numbers like a desktop calculator to computers producing full 3D animated films. Ed Catmull’s single most important goal was to create an animated film using a computer. He eventually accomplished that and more once he helped form Pixar. All of his research and academic work was focused on that one goal.

    I’m always surprised to see what references or influences people quote in interviews. In fact, I am really encouraged. It was about 1988 or so when I took a copy of Betty Edwards’s book that my mom had and started reading it and doing some of the exercises in it. Stranger still, I went back to college and majored in art (not drawing but photography). So I think I understand exactly what Ed Catmull means when he talks about being observant. In every job I’ve had, computer related or otherwise, that ability to be observant just doesn’t exist in a large number of people. Eventually people begin to ask me, how do you know all this stuff, and when did you learn it? Most times, the things they are most impressed by are things like noticing something and trying a different strategy when attempting to fix a problem. The proof is, I can do this with things I am unfamiliar with and usually make some headway towards fixing them. Whether the thing is mechanical or computer related doesn’t matter. I make good guesses, and it’s not because I’m an expert in anything; I merely notice things. That’s all it is.

    So maybe everyone should read and work through Betty Edwards’s book Drawing on the Right Side of the Brain. If nothing else it might make you feel a little dislocated and uncomfortable. It might shake you up and make you question some preconceived notions about yourself, like the feeling that you can’t draw or that you are not good at art. I think with practice anyone can draw, and with practice anyone can become observant.

  • Intel lets outside chip maker into its fabs • The Register

    Intel and Achronix: two great tastes that taste great together

    According to Greg Martin, a spokesman for the FPGA maker, Achronix can compete with Xilinx and Altera because it has, at 1.5GHz in its current Speedster1 line, the fastest such chips on the market. And by moving to Intel’s 22nm technology, the company could have ramped up the clock speed to 3GHz.

    via Intel lets outside chip maker into its fabs • The Register.

    That kind of says it all in one sentence, or two sentences in this case. The fastest FPGA on the market is quite an accomplishment unto itself. Putting that FPGA on the world’s most advanced production line and silicon wafer technology is what Andy Grove would call the 10X effect. FPGAs are reconfigurable processors that can have their circuits re-routed and optimized for different tasks over and over again. This is really beneficial for very small batches of processors where you need a custom design. Some of the things they can speed up are heavy math and lookups in very large database searches. In the past I was always curious whether they could be used as a general-purpose computer that could switch gears and optimize itself for different tasks. I didn’t know whether or not it would work or be worthwhile, but it really seemed like there was a vast untapped reservoir of power in the FPGA.
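
    For a feel of what ‘re-wiring through software’ means, here is a toy sketch of the lookup tables (LUTs) that FPGAs are built from: the same hardware cell computes whatever Boolean function you load into it. This is a conceptual model in Python, not how you would actually program an Achronix or Xilinx part (that is done in hardware description languages through vendor toolchains).

    ```python
    # Toy model of an FPGA lookup table (LUT): the same cell implements whatever
    # Boolean function its configuration bits describe. Conceptual only; real
    # FPGAs are programmed in HDLs through vendor toolchains, not Python.

    from itertools import product

    class LUT2:
        """A 2-input LUT: four configuration bits, one per input combination."""
        def __init__(self, truth_table):
            self.bits = truth_table  # e.g. {(0, 0): 0, (0, 1): 1, ...}

        def __call__(self, a, b):
            return self.bits[(a, b)]

    # "Reconfigure" the identical hardware cell as two different gates.
    xor_lut = LUT2({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0})
    and_lut = LUT2({(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})

    for a, b in product((0, 1), repeat=2):
        print(f"a={a} b={b}  xor={xor_lut(a, b)}  and={and_lut(a, b)}")
    ```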

    Some supercomputer manufacturers have started using FPGAs as special-purpose co-processors and have found immense speed-ups as a result. Oil prospecting companies have also used them to speed up analysis of seismic data and place good bets on dropping a well bore in the right spot. But price has always been a big barrier to entry; as quoted in this article, the cost is $1,000 per chip. That limits the appeal to buyers for whom price is no object and speed and time matter more. The two big competitors in the field of FPGA manufacturing are Altera and Xilinx, both of which design the chips but have them manufactured in other countries. This has left FPGAs second-class citizens, built with older-generation chip technologies on old manufacturing lines. They always had to deal with what they could get, and performance in terms of clock speed was always lower too.

    During the megahertz and gigahertz wars it was not unusual to see chip speeds increasing every month. FPGAs sped up too, but not nearly as fast. I remember seeing 200MHz and 400MHz touted for the top-of-the-line Xilinx and Altera products. With Achronix running at 1.5GHz, things have changed quite a bit. That’s a general-purpose CPU speed in a completely customizable FPGA, which makes the FPGA even more useful. However, instead of going faster, this article points out that people would rather buy the same speed but use less electricity and generate less heat. There’s no better way to do this than to shrink the size of the circuits on the FPGA, and that is the core philosophy of Intel. They have just teamed up to put the Achronix FPGA on the smallest-feature-size production line, run by the most optimized, cost-conscious manufacturer of silicon chips bar none.

    Another point made in the article is that the market for FPGAs at this level of performance also tends to be more defense-contract oriented. As a result, to maintain the level of security necessary to sell chips to this industry, the chips need to be made in the good ol’ USA, and Intel doesn’t outsource anything when it comes to its top-of-the-line production facilities. Everything is in Oregon, Arizona or Washington State and is guaranteed not to have any secret backdoors built in to funnel data to foreign governments.

    I would love to see some university research projects start looking at FPGAs again and see whether, as speeds go up and power comes down, there’s a happy medium or mix of general-purpose CPUs and FPGAs that might help the average Joe working on his desktop, laptop or iPad. All I know is that Intel entering a market will make it more competitive and hopefully lower the barrier to entry for anyone who would really like to get their hands on a useful processor they can customize to their needs.

  • Custom superchippery pulls 3D from 2D images like humans • The Register

    Computing brainboxes believe they have found a method which would allow robotic systems to perceive the 3D world around them by analysing 2D images as the human brain does – which would, among other things, allow the affordable development of cars able to drive themselves safely.

    via Custom superchippery pulls 3D from 2D images like humans • The Register.

    The beauty of this new work is that they designed a custom processor using a Virtex 6 FPGA (Field Programmable Gate Array). An FPGA, for those who don’t know, is a computer chip that you can ‘re-wire’ through software to take on whatever mathematical task you can dream up. In the old days this would have required a custom chip to be engineered, validated and manufactured at great cost. FPGAs require only a development kit and the FPGA chips you need to program. With them you can optimize every step within the processor and speed things up much more than a general-purpose computer processor can (like the Intel chip that powers your Windows or Mac computer). In this research, the custom-designed computer circuitry uses video images to decide where in the world a robot can safely drive as it maneuvers around on the ground. I know Hans Moravec has done a lot with this at Carnegie Mellon, and it seems this group is from Yale’s engineering department, which makes it encouraging to see the techniques embraced and extended by another U.S. university. The low power of this processor and its facility for processing video images in real time is ahead of its time, and hopefully it will find some commercial application either in robotics or in automotive safety controls. As for me, I’m still hoping for a robot chauffeur.
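
    Much of the per-pixel work in vision pipelines like this boils down to sliding small filter kernels over the image, which is exactly the kind of regular arithmetic an FPGA can lay out in parallel hardware. Here’s a minimal sketch of that operation in plain Python; it is a generic illustration of the workload, not the Yale group’s actual algorithm.

    ```python
    # Minimal 2D convolution over a grayscale image patch: the sort of regular,
    # per-pixel arithmetic that maps well onto FPGA hardware. A generic
    # illustration only, not the algorithm from the article.

    def convolve(image, kernel):
        """Valid-mode 2D convolution of a 2D list 'image' with a 3x3 'kernel'."""
        h, w = len(image), len(image[0])
        out = []
        for y in range(h - 2):
            row = []
            for x in range(w - 2):
                acc = 0
                for ky in range(3):
                    for kx in range(3):
                        acc += image[y + ky][x + kx] * kernel[ky][kx]
                row.append(acc)
            out.append(row)
        return out

    # A simple vertical-edge detector applied to a tiny synthetic image.
    edge_kernel = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    image = [[0, 0, 0, 9, 9, 9]] * 6
    print(convolve(image, edge_kernel))
    ```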

  • Drive suppliers hit capacity increase difficulties • The Register

    Hard disk drive suppliers are looking to add platters to increase capacity because of the expensive and difficult transition to next-generation recording technology.

    via Drive suppliers hit capacity increase difficulties • The Register.

    This is a good survey of upcoming HDD platter technologies. HAMR (Heat Assisted Magnetic Recording) and BPM (Bit Patterned Media) are the next generation coming as the current Perpendicular Magnetic Recording (PMR) slowly hits the top end of its ability to squash together the 1s and 0s on a spinning hard drive platter. HAMR is reminiscent of the old magneto-optical drive technology from the days of Steve Jobs’s NeXT Computer company. It uses a laser to heat the surface of the drive platter before the read/write head starts recording data to the drive. This ‘change’ in the state of the surface of the drive (the heat) helps align the magnetism of the bits written, so that the tracks of the drive and the bits recorded inside them can be more tightly spaced. In the world of HAMR, heat + magnetism = bigger hard drives on the same old 3.5″ and 2.5″ platters we have now. With BPM, the whole drive is manufactured to hold a set number of bits and tracks in advance. Each bit is created directly on the platter as a ‘well’ with a ring of insulating material surrounding it. The wells are sufficiently small and dense to allow a slightly tighter spacing than PMR. But as is often the case, the new technologies aren’t ready for manufacturing. A few test samples of possible devices are out in limited or custom-made engineering prototypes to test the waters.

    Given the slowdown in silicon CMOS chip speeds from the likes of Intel and AMD, along with the wall facing PMR, it would appear the frontier days of desktop computing are coming to a close. Gone are the days of megahertz wars; now gigabyte wars are waged in the labs of review sites and test benches across the interwebs. The torrid pace of change in hardware we all experienced from the release of Windows 95 to this year’s release of Windows 7 has slowed to a radical incrementalism. Intel releases so many chips with ‘slight’ variations in clock speed and cache that one cannot keep up with them all. Hard drive manufacturers try to increment their disks by about 0.5TBytes every six months, but now that will stop. Flash-based SSDs will be the biggest change for most of us and will help break through the inherent speed barriers imposed by SATA and spinning-disk technologies. I hope a hybrid approach is used, mixing SSDs and HDDs for speed and size in desktop computers: fast things that need to be fast can use the SSD, while slow things that are huge in size or quantity go to the HDD. As for next-gen disk-based technologies, I’m sure there will be a change to the next higher-density technology. But it will no doubt be a long time in coming.
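
    The hybrid split described above is essentially a placement policy: small, frequently touched data goes to the SSD, big or rarely touched data goes to the HDD. A toy sketch of that decision, with thresholds that are purely illustrative assumptions:

    ```python
    # Toy SSD/HDD placement policy for a hybrid setup: hot, small items go to
    # the fast-but-small SSD, everything else to the big, slow HDD.
    # The thresholds are illustrative assumptions, not tuned values.

    SSD_MAX_FILE_MB = 512        # assumed: bigger files live on the HDD
    HOT_ACCESSES_PER_DAY = 5     # assumed: anything touched this often is "hot"

    def place(file_size_mb: float, accesses_per_day: float) -> str:
        if file_size_mb <= SSD_MAX_FILE_MB and accesses_per_day >= HOT_ACCESSES_PER_DAY:
            return "SSD"
        return "HDD"

    workload = [
        ("os_swapfile", 256, 200),
        ("video_library.mp4", 4096, 1),
        ("project_db.sqlite", 300, 40),
        ("old_backup.tar", 900, 0.1),
    ]
    for name, size, rate in workload:
        print(f"{name:20s} -> {place(size, rate)}")
    ```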

  • Tilera, SeaMicro: The era of ultra high density computing

    The Register did an article recently following up on a press release from Tilera. The news this week is that Tilera is now working on the next big thing: Quanta will be shipping a 2U rack-mounted computer with 512 processing cores inside. Why is that significant? Well, 512 is the magic number quoted in the announcement last week from upstart server maker SeaMicro. The SM10000 from SeaMicro boasts 512 Intel cores inside a 10U box. Which makes me wonder: who or what is all this good for? Based solely on the press releases and articles written to date about Tilera, their targeted customers aren’t quite as general as, say, SeaMicro’s. Even though each core in a Tilera CPU can run its own OS and share data, it is up to the device manufacturers licensing the Tilera chip to do the heavy lifting of developing the software and applications that make all that raw iron do useful work. The CPUs in the SeaMicro hardware, however, are fully x86-capable Intel Atom CPUs tied together with a lot of management hardware and software provided by SeaMicro. Customers in this case are most likely going to load software applications they already have in operation on existing Intel hardware. Development time, re-coding or recompiling is unnecessary, as SeaMicro’s value-add is the management interface for all that raw iron.

    Quanta is packaging up the Tilera chips in a way that will make them more palatable to a potential customer who might also be considering SeaMicro’s product. It all depends on what apps you want to run, what performance you expect, and how dense you need all your cores to be when they are mounted in the rack. Numerically speaking, the Quanta SQ2 wins the race for ultimate density right now, with 512 general-purpose cores in a 2U rack mount; SeaMicro has 512 in a 10U rack mount. However, that in no way reflects the differences in the OSes, the types of applications and the performance you might see when using either piece of hardware.
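
    Using the core counts and rack heights quoted above, here is a quick sketch of cores per rack unit and per full 42U rack. It treats the boxes as directly comparable purely for arithmetic’s sake, which, as noted, they are not.

    ```python
    # Cores-per-rack-unit arithmetic using the figures quoted in the post.
    # This ignores the very real differences in ISA, OS support and per-core
    # performance between the two systems; it is density arithmetic only.

    RACK_UNITS = 42

    systems = {
        "Quanta SQ2 (Tilera)": {"cores": 512, "units": 2},
        "SeaMicro SM10000":    {"cores": 512, "units": 10},
    }

    for name, s in systems.items():
        per_u = s["cores"] / s["units"]
        per_rack = (RACK_UNITS // s["units"]) * s["cores"]
        print(f"{name:22s} {per_u:6.1f} cores/U  ~{per_rack:,} cores in a 42U rack")
    ```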

    http://www.theregister.co.uk/2007/08/20/tilera_tile64_chip/ (The Register August 20, 2007)

    “Hot Chips The multi-core chip revolution advanced this week with the emergence of Tilera – a start-up using so-called mesh processor designs to go after the networking and multimedia markets.”

    http://www.theregister.co.uk/2007/09/28/tilera_new_ceo/ (The Register September 28, 2007)

    “Tahernia arrives at Tilera from FPGA shop Xilinx where he was general manager in charge of the Processing Solutoins (sic) Group.”

    http://www.linuxfordevices.com/c/a/News/64way-chip-gains-Linux-IDE-dev-cards-design-wins/
    (Linux for Devices April 30 2008)

    “Tilera introduced a Linux-based development kit for its scalable, 64-core Tile64 SoC (system-on-chip). The company also announced a dual 10GbE PCIExpress card based on the chip (pictured at left), revealed a networking customer win with Napatech, and demo’d the Tile64 running real-time 1080P HD video.”

    http://www.theregister.co.uk/2008/09/23/tilera_cpu_upgrade/ (The Register September 23 2008)

    “This week, Tilera is putting its second-generation chips into the field and is getting some traction among various IT suppliers, who want to put the Tile64 processors and their homegrown Linux environment to work.”

    “Tilera was founded in Santa Clara, California, in October 2004. The company’s research and development is done in its Westborough, Massachusetts lab, which makes sense given that the Tile64 processor that is based on an MIT project called Raw. The Raw project was funded by the U.S. National Science Foundation and the Defense Advanced Research Projects Agency, the research arm of the U.S. Department of Defense, back in 1996, and it delivered a 16-core processor connected by a mesh of on-core switches in 2002.”

    http://www.theregister.co.uk/2009/10/26/tilera_third_gen_mesh_chips/ (The Register October 26 2009)

    “Upstart massively multicore chip designer Tilera has divulged the details on its upcoming third generation of Tile processors, which will sport from 16 to 100 cores on a single die.”

    http://www.goodgearguide.com.au/article/323692/tilera_targets_intel_amd_100-core_processor/#comments
    (Good Gear Guide October 26 2009)

    “Look at the markets Tilera is aiming these chips at. These applications have lots of parallelism, require very high throughput, and need a low power footprint. The benefits of a system using a custom processor are large enough that paying someone to write software for the job is more than worth it.”

    http://www.theregister.co.uk/2009/11/02/tilera_quanta_servers/ (The Register November 2 2009)

    “While Doud was not at liberty to reveal the details, he did tell El Reg that Tilera had inked a deal with Quanta that will see the Taiwanese original design manufacturer make servers based on the future Tile-Gx series of chips, which will span from 16 to 100 RISC cores and which will begin to ship at the end of 2010.”

    http://www.theregister.co.uk/2010/03/09/tilera_vc_funding/ (The Register March 9 2010)

    “The current processors have made some design wins among networking, wireless infrastructure, and communications equipment providers, but the Tile-Gx series is going to give gear makers a slew of different options.”

  • Big Web Operations Turn to Tiny Chips – NYTimes.com

    Stephen O’Grady, a founder at the technology analyst company RedMonk, said the technology industry often has swung back and forth between more standard computing systems and specialized gear.

    via Big Web Operations Turn to Tiny Chips – NYTimes.com.

    A little tip of the hat to Andrew Feldman, CEO of SeaMicro, the startup company that announced its first product last week. The giant 512-CPU computer is covered in this NYTimes article to spotlight the ‘exotic’ technologies, both hardware and software, that some companies use to deploy huge web apps. It’s part NoSQL, part low-power massive parallelism.

  • SeaMicro Announces SM10000 Server with 512 Atom CPUs

    From where I stand, the SM10000 looks like the type of product that if you could benefit from having it, you’ve been waiting for something like it. In other words, you will have been asking for something like the SM10000 for quite a while already. SeaMicro is simply granting your wish.

    via SeaMicro Announces SM10000 Server with 512 Atom CPUs and Low Power Consumption – AnandTech :: Your Source for Hardware Analysis and News.

    This announcement has been making the rounds this Monday, June 14th; it has hit Wired.com, AnandTech, Slashdot, everywhere. It is a press-release full-court press. But it is an interesting product on paper for anyone who is doing analysis of datasets using large numbers of CPUs for regressions, or large-scale simulations too. And at its core it is virtual machines with virtual peripherals (memory, disk, networking). I don’t know how you benchmark something like this, but it is impressive in its low power consumption and size. It takes up only 10U of a 42U rack, and it fits 512 CPUs in that 10U space.

    Imagine 324 of these plugged in and racked up

    This takes me back to the days of RLX Technologies, when blade servers were so new nobody knew what they were good for. The top-of-the-line RLX setup had 324 CPUs in a 42U rack. Each blade had a Transmeta Crusoe processor, which was designed to run at a lower clock speed and much more efficiently from a thermal standpoint. When managed by the RLX chassis hardware and software and paired with an F5 Networks BIG-IP load balancer, the whole thing was an elegant design. However, the advantage of using Transmeta’s CPU was lost on a lot of people, including technology journalists who bashed it for being too low-performance for most IT shops and data centers. Nobody had considered the total cost of ownership, including the cooling and electricity. In those days, clock speed was the only measure of a server’s usefulness.

    Enter Google into the data center market, and the whole scale changes. Google didn’t care about clock speed nearly as much as lowering the total overall costs of its huge data centers. Even the technical journalists began to understand the cost savings of lowering the clock speed a few hundred megahertz and packing servers more densely into a fixed-size data center. Movements in high-performance computing also led to large-scale installations of commodity servers bound together into one massively parallel supercomputer. More space was needed for physical machines racked up in the data centers, and everyone could see the only ways to build out were to build more data centers, build bigger data centers or pack more servers into the existing footprint of current data centers. Manufacturers like Compaq got into the blade server market, along with IBM and Hewlett-Packard. Everyone engineered their own proprietary interfaces and architectures, but all of them focused on the top-of-the-line server CPUs from Intel. As a result, the heat dissipation was enormous and the densities of these blade centers were pretty low (possibly 14 CPUs in a 4U rack mount).

    The Blue Gene supercomputer has high-density motherboards: look at all those CPUs on one board!

    IBM began to experiment with lower-clocked PowerPC chips in a massively parallel supercomputer called Blue Gene. In my opinion this started to change people’s beliefs about what direction data center architectures could go. The density of the ‘drawers’ in the Blue Gene server cabinets is pretty high: a lot more CPUs, power supplies, storage and RAM in each unit than in a comparable base-level commodity server from Dell or HP (previously the most common building block for massively parallel supercomputers). Given these trends it’s very promising to see what SeaMicro has done with its first product. I’m not saying this is a supercomputer in a 10U box, but there are plenty of workloads that would fit within the scope of this server’s capabilities. And what’s cooler is the virtual abstraction of all the hardware, from the RAM to the networking to the storage. It’s like the golden age of IBM machine partitioning and virtual machines, but on an Intel architecture. Depending on how quickly they can ramp up production and market their goods, SeaMicro might be a game changer, or it might be a takeover target for the likes of HP or IBM.