Tag: seamicro

  • SeaMicro drops 64-bit Atom bomb server • The Register

    The base configuration of the original SM10000 came with 512 cores, 1 TB of memory, and a few disks; it was available at the end of July last year and cost $139,000. The new SM10000-64 uses the N570 processors, for a total of 256 chips but 512 cores, the same 1 TB of memory, eight 500 GB disks, and eight Gigabit Ethernet uplinks, for $148,000. Because there are half as many chipsets on the new box compared to the old one, it burns about 18 percent less power, too, when configured and doing real work.

    via SeaMicro drops 64-bit Atom bomb server • The Register.

    I don’t want to claim that SeaMicro is taking a page out of the Apple playbook, but keeping your name in the technology news press is always a good thing. I have to say it is a blistering turnaround to release a second system board for the SM10000 server so quickly. And knowing they have some sales to back up the need for further development makes me think this company really could make a go of it. 512 CPU cores in a 10U rack is still a record of some sort, and I hope one day to see SeaMicro publish some white papers and testimonials from current customers so we can learn what killer application this machine has in the data center.
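
    For fun, here is the quick per-core arithmetic on the prices quoted above. A throwaway Python sketch; the figures come from The Register article, the math is mine:

    # Quick arithmetic on the SM10000 prices quoted above.
    old = {"cores": 512, "price": 139_000}  # original SM10000
    new = {"cores": 512, "price": 148_000}  # SM10000-64 (N570 boards)

    for name, box in (("SM10000", old), ("SM10000-64", new)):
        print(f"{name}: ${box['price'] / box['cores']:.0f} per core")

    # ~$271 vs ~$289 per core: the ~$18 premium buys 64-bit cores and,
    # per the article, roughly 18 percent lower power draw under load.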

  • Chip upstart Tilera in the news

    Diagram Of A Partial Mesh Network

    As of early 2010, Tilera had over 50 design wins for the use of its SoCs in future networking and security appliances, which was followed up by two server wins with Quanta and SGI. The company has had a dozen more design wins since then and now claims to have over 150 customers who have bought prototypes for testing or chips to put into products.

    via Chip upstart Tilera lines up $45m in funding • The Register.

    There hasn’t been a lot of news about Tilera recently, but they are still selling products and raising funds through private investment, and their product road map shows great promise. I want to see more of their shipping product tested in the online technology press; I don’t care whether InfoWorld, Network World, Tom’s Hardware, or AnandTech does it. Whether it’s security appliances or actual multi-core servers, it would be cool to see Tilera compared to the competition, even in an apples-and-oranges kind of test. On paper, the mesh network connecting Tilera’s CPU cores is designed to set it apart from any other product currently on the market. Similarly, the ease of accessing cores through the mesh is meant to make running a single system image much easier, since the image is distributed across all the cores almost invisibly. In short, Tilera and its closest competitor SeaMicro are cloud computing in a single, solitary box.
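
    To picture the mesh idea, here is a minimal sketch of hop counting in an 8x8 grid of tiles with simple dimension-ordered (XY) routing. This illustrates the concept only; the coordinate scheme and routing are my assumptions, not Tilera’s actual interconnect logic:

    # Sketch: hop count between tiles in an 8x8 mesh (64 cores),
    # assuming simple dimension-ordered (XY) routing. An illustration
    # of the mesh idea only, not Tilera's real interconnect design.

    MESH_WIDTH = 8  # 8 x 8 grid = 64 tiles, Tile64-sized

    def tile_coords(core_id):
        """Map a core number to its (x, y) position in the grid."""
        return core_id % MESH_WIDTH, core_id // MESH_WIDTH

    def hops(src_core, dst_core):
        """Hops a message travels: cross the X distance, then the Y."""
        sx, sy = tile_coords(src_core)
        dx, dy = tile_coords(dst_core)
        return abs(sx - dx) + abs(sy - dy)

    # Worst case is corner to corner; the average stays low, which is
    # why neighboring cores can share data so cheaply.
    print(hops(0, 63))   # 14 hops, corner to corner
    print(hops(27, 28))  # 1 hop between adjacent tiles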

    Cloud computing, for those who don’t know, is an attempt to create a utility like the water or electrical system in the town where you live. The utility keeps excess capacity, and what it doesn’t use it sells off to connected utility systems, so you always have enough power to cover your immediate needs with a little in reserve for emergencies. On days when people don’t use as much electricity, you cut back production a little or sell the excess to someone who needs it. Now imagine that electricity is compute cycles: additions, subtractions, or longer-form mathematical analysis, all running in parallel and scaling out to extra cores as the workload demands. Amazon already sells a service like this; Microsoft does too. You sign up for their ‘compute cloud’, load your applications and your data, and just start crunching away while the meter runs. You get billed based on how much of the computing resource you used.
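
    To make the metered-utility idea concrete, here is a toy sketch of usage-based billing. The rate and the workload numbers are invented for illustration; real Amazon or Microsoft pricing works differently:

    # Toy sketch of utility-style billing: pay for the core-hours you
    # actually consume, scaling up and down with the workload. The rate
    # and the jobs below are invented for illustration.

    RATE_PER_CORE_HOUR = 0.085  # hypothetical price in dollars

    def bill(usage):
        """usage: list of (cores_used, hours_run) tuples, one per job."""
        core_hours = sum(cores * hours for cores, hours in usage)
        return core_hours * RATE_PER_CORE_HOUR

    # A bursty month: a light steady load plus one big parallel crunch.
    month = [
        (2, 600),   # baseline service ticking over on two cores
        (256, 8),   # scale out overnight for a big analysis job
        (2, 100),   # back down to the baseline
    ]
    print(f"${bill(month):.2f}")  # $293.08 -- billed only for what ran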

    Nowadays, unfortunately, data centers are full of single-purpose servers doing one thing and sitting idle most of the time. This has been such a concern that a whole industry has grown up around splitting those machines into thinner slices with software like VMware. Those little slivers of a real computer soak up the idle time of that once single-purpose machine and occupy far more of its resources. But you still have that full-sized hog of an old desktop tower sitting in a 19-inch rack, generating heat and drawing too much power. Now it’s time to scale the computer down again, and that’s where Tilera comes in with its multi-core, low-power, mesh-networked CPUs. And investment partners are rolling in as a result of the promise of this new approach!

    Numerous potential customers, venture capital outfits, and even fabrication partners jumped in to provide a round of funding the company wasn’t even really soliciting; Tilera had people falling all over themselves writing checks to get a piece of the pie before things take off. It’s a good sign in these stagnant times for startup companies. And hopefully it buys more time for the company’s CPU roadmap, scaling up to the 200-core chip that would be the peak achievement in this quest for high-performance, low-power computing.

  • Tilera, SeaMicro: The era of ultra high density computing

    The Register recently ran an article following up on a press release from Tilera. The news this week is that Tilera is working on the next big thing: Quanta will be shipping a 2U rack-mounted computer with 512 processing cores inside. Why is that significant? Well, 512 is the magic number quoted in last week’s announcement from upstart server maker SeaMicro, whose SM10000 boasts 512 Intel cores inside a 10U box. Which makes me wonder: who or what is all this good for?

    Based solely on the press releases and articles written to date, Tilera’s target customers aren’t quite as general as SeaMicro’s. Even though each core in a Tilera CPU can run its own OS and share data, it is up to the device manufacturers licensing the Tilera chip to do the heavy lifting of developing the software that makes all that raw iron do useful work. The CPUs in the SeaMicro hardware, however, are full x86-capable Intel Atoms tied together with a lot of management hardware and software from SeaMicro. Customers in this case will most likely load applications they already run on existing Intel hardware; no development time, re-coding, or recompiling is needed, because SeaMicro’s value-add is the management interface for all that raw iron. Quanta is packaging up the Tilera chips in a way that should make them more palatable to a potential customer who might also be considering SeaMicro’s product.

    It all depends on what apps you want to run, what performance you expect, and how densely you need your cores packed in the rack. Numerically speaking, the Quanta SQ2 wins the race for ultimate density right now, with 512 general-purpose cores in a 2U rack mount versus SeaMicro’s 512 in a 10U rack mount, though that in no way reflects the differences in OSes, application types, and performance you might see from either piece of hardware. The simple rack math is sketched below.
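
    Back-of-the-envelope arithmetic on the density claim, in Python; the core and rack-unit counts come from the articles, the whole-rack extrapolation is mine:

    # Back-of-the-envelope density math using the figures quoted above.
    systems = {
        "Quanta SQ2 (Tilera)": {"cores": 512, "units": 2},
        "SeaMicro SM10000":    {"cores": 512, "units": 10},
    }

    RACK_HEIGHT_U = 42  # a standard full-height rack

    for name, s in systems.items():
        per_u = s["cores"] / s["units"]
        boxes = RACK_HEIGHT_U // s["units"]  # whole boxes per rack
        print(f"{name}: {per_u:.0f} cores/U, "
              f"{boxes} boxes and {boxes * s['cores']} cores per rack")

    # Quanta SQ2 (Tilera): 256 cores/U, 21 boxes and 10752 cores per rack
    # SeaMicro SM10000: 51 cores/U, 4 boxes and 2048 cores per rack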

    http://www.theregister.co.uk/2007/08/20/tilera_tile64_chip/ (The Register August 20, 2007)

    “Hot Chips The multi-core chip revolution advanced this week with the emergence of Tilera – a start-up using so-called mesh processor designs to go after the networking and multimedia markets.”

    http://www.theregister.co.uk/2007/09/28/tilera_new_ceo/ (The Register September 28, 2007)

    “Tahernia arrives at Tilera from FPGA shop Xilinx where he was general manager in charge of the Processing Solutoins (sic) Group.”

    http://www.linuxfordevices.com/c/a/News/64way-chip-gains-Linux-IDE-dev-cards-design-wins/ (Linux for Devices April 30, 2008)

    “Tilera introduced a Linux-based development kit for its scalable, 64-core Tile64 SoC (system-on-chip). The company also announced a dual 10GbE PCIExpress card based on the chip (pictured at left), revealed a networking customer win with Napatech, and demo’d the Tile64 running real-time 1080P HD video.”

    http://www.theregister.co.uk/2008/09/23/tilera_cpu_upgrade/ (The Register September 23, 2008)

    “This week, Tilera is putting its second-generation chips into the field and is getting some traction among various IT suppliers, who want to put the Tile64 processors and their homegrown Linux environment to work.”

    “Tilera was founded in Santa Clara, California, in October 2004. The company’s research and development is done in its Westborough, Massachusetts lab, which makes sense given that the Tile64 processor is based on an MIT project called Raw. The Raw project was funded by the U.S. National Science Foundation and the Defense Advanced Research Projects Agency, the research arm of the U.S. Department of Defense, back in 1996, and it delivered a 16-core processor connected by a mesh of on-core switches in 2002.”

    http://www.theregister.co.uk/2009/10/26/tilera_third_gen_mesh_chips/ (The Register October 26, 2009)

    “Upstart massively multicore chip designer Tilera has divulged the details on its upcoming third generation of Tile processors, which will sport from 16 to 100 cores on a single die.”

    http://www.goodgearguide.com.au/article/323692/tilera_targets_intel_amd_100-core_processor/#comments (Good Gear Guide October 26, 2009)

    “Look at the markets Tilera is aiming these chips at. These applications have lots of parallelism, require very high throughput, and need a low power footprint. The benefits of a system using a custom processor are large enough that paying someone to write software for the job is more than worth it.”

    http://www.theregister.co.uk/2009/11/02/tilera_quanta_servers/ (The Register November 2, 2009)

    “While Doud was not at liberty to reveal the details, he did tell El Reg that Tilera had inked a deal with Quanta that will see the Taiwanese original design manufacturer make servers based on the future Tile-Gx series of chips, which will span from 16 to 100 RISC cores and which will begin to ship at the end of 2010.”

    http://www.theregister.co.uk/2010/03/09/tilera_vc_funding/ (The Register March 9, 2010)

    “The current processors have made some design wins among networking, wireless infrastructure, and communications equipment providers, but the Tile-Gx series is going to give gear makers a slew of different options.”

  • Big Web Operations Turn to Tiny Chips – NYTimes.com

    Stephen O’Grady, a founder at the technology analyst company RedMonk, said the technology industry often has swung back and forth between more standard computing systems and specialized gear.

    via Big Web Operations Turn to Tiny Chips – NYTimes.com.

    A little tip of the hat to Andrew Feldman, CEO of SeaMicro, the startup that announced its first product last week. The giant 512-CPU computer is covered in this NYTimes article spotlighting the ‘exotic’ technologies, both hardware and software, that some companies use to deploy huge web apps. It’s part NoSQL, part low-power massive parallelism.

  • SeaMicro Announces SM10000 Server with 512 Atom CPUs

    From where I stand, the SM10000 looks like the type of product that if you could benefit from having it, you’ve been waiting for something like it. In other words, you will have been asking for something like the SM10000 for quite a while already. SeaMicro is simply granting your wish.

    via SeaMicro Announces SM10000 Server with 512 Atom CPUs and Low Power Consumption – AnandTech :: Your Source for Hardware Analysis and News.

    This announcement has been making the rounds this Monday, June 14th, hitting Wired.com, AnandTech, Slashdot, everywhere. It is a press-release full court press. But it is an interesting product on paper for anyone analyzing datasets with large numbers of CPUs, running regressions, or doing large-scale simulations. And at its core it is virtual machines with virtual peripherals (memory, disk, networking). I don’t know how you benchmark something like this, but it is impressive in its low power consumption and size: it takes up only 10U of a 42U rack and fits 512 CPUs in that space.
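
    For the embarrassingly parallel work mentioned above, the programming model is plain process parallelism. A minimal, self-contained sketch, estimating pi by Monte Carlo across however many cores the OS reports; the workload is purely illustrative:

    # Minimal sketch of an embarrassingly parallel job: estimate pi by
    # Monte Carlo, fanned out across every core the OS reports. On a
    # box like the SM10000 the same pattern just gets more workers.
    import random
    from multiprocessing import Pool, cpu_count

    def sample(n):
        """Count random points landing inside the unit quarter-circle."""
        hits = 0
        for _ in range(n):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:
                hits += 1
        return hits

    if __name__ == "__main__":
        workers = cpu_count()
        per_worker = 1_000_000
        with Pool(workers) as pool:
            hits = sum(pool.map(sample, [per_worker] * workers))
        total = workers * per_worker
        print(f"pi ~= {4 * hits / total:.5f} using {workers} workers")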

    Imagine 324 of these plugged in and racked up

    This takes me back to the days of RLX Technologies, when blade servers were so new nobody knew what they were good for. The top-of-the-line RLX unit put 324 CPUs in a 42U rack, and each blade had a Transmeta Crusoe processor designed to run at a lower clock speed and much more efficiently from a thermal standpoint. Managed by the RLX chassis hardware and software and paired with an F5 Networks BIG-IP load balancer, the whole thing was an elegant design. However, the advantage of using Transmeta’s CPU was lost on a lot of people, including technology journalists who bashed it as too low-performance for most IT shops and data centers. Nobody had considered the total cost of ownership, including cooling and electricity; in those days, clock speed was the only measure of a server’s usefulness.

    Enter Google into the data center market, and the whole scale changes. Google didn’t care about clock speed nearly as much as lowering the total overall cost of its huge data centers. Even the technical journalists began to understand the savings of dropping the clock speed a few hundred megahertz and packing servers more densely into a fixed-size data center. Movements in high-performance computing also led to large-scale installations of commodity servers bound together into one massively parallel supercomputer. More space was needed for the physical machines racked up in the data centers, and everyone could see the only ways to build out were to build more data centers, build bigger data centers, or pack more servers into the existing footprint. Manufacturers like Compaq got into the blade server market, along with IBM and Hewlett-Packard. Everyone engineered their own proprietary interfaces and architectures, but all of them focused on the top-of-the-line server CPUs from Intel. As a result, the heat dissipation was enormous and the densities of these blade centers were pretty low (perhaps 14 CPUs in a 4U rack mount).

    Blue Gene supercomputer: look at all those CPUs on one high-density motherboard!

    IBM began to experiment with lower-clocked PowerPC chips in a massively parallel supercomputer called Blue Gene, which in my opinion started to change people’s beliefs about what direction data center architectures could go. The density of the ‘drawers’ in the Blue Gene cabinets is pretty high: a lot more CPUs, power supplies, storage, and RAM in each unit than in a comparable base-level commodity server from Dell or HP (previously the most common building block for massively parallel supercomputers). Given these trends, it’s very promising to see what SeaMicro has done with its first product. I’m not saying this is a supercomputer in a 10U box, but there are plenty of workloads that would fit within the scope of this server’s capabilities. And what’s cooler is the virtual abstraction of all the hardware, from the RAM to the networking to the storage. It’s like the golden age of IBM machine partitioning and virtual machines, but on Intel architecture. Depending on how quickly they can ramp up production and market their goods, SeaMicro might be a game changer, or it might be a takeover target for the likes of HP or IBM.