Blog

  • Quanta crams 512 cores into pizza box server • The Register

    Two of these boards are placed side-by-side in the chassis and stacked two high, for a total of eight server nodes. Eight nodes at 64 cores each gives you 512 total cores in a 2U chassis. The server boards slide out on individual trays and share two 1,100 watt power supplies that are stacked on top of each other and that are put in the center of the chassis. Each node has three SATA II ports and can have three 2.5-inch drives allocated to it; the chassis holds two dozen drives, mounted in the front and hot pluggable.

    via Quanta crams 512 cores into pizza box server • The Register.

    Amazing how power efficient Tilera has made its shipping products: Quanta has jammed 512 cores into a box just 2 rack units high, roughly one-fifth the size of the SeaMicro SM10000, which packs its 512 Intel Atom cores into 10U. Now that there’s a shipping product, I would like to see benchmarks or comparisons run on similar workloads using both sets of hardware. Numerically speaking it would be an apples-to-apples comparison, but each of these products is unique, and they are going to be difficult to judge in the coming year.
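
    For what it’s worth, here is the back-of-the-envelope math behind that one-fifth figure (a quick Python sketch; the SeaMicro numbers are the published SM10000 specs):

        # Back-of-the-envelope density math for the two systems above.
        quanta_cores, quanta_units = 8 * 64, 2      # 8 Tilera nodes x 64 cores in 2U
        seamicro_cores, seamicro_units = 512, 10    # SM10000: 512 Atom cores in 10U

        quanta_density = quanta_cores / quanta_units        # 256 cores per U
        seamicro_density = seamicro_cores / seamicro_units  # 51.2 cores per U

        print(f"Quanta:   {quanta_density:.1f} cores/U")
        print(f"SeaMicro: {seamicro_density:.1f} cores/U")
        print(f"Quanta needs {quanta_units / seamicro_units:.0%} of the space")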

    First off, the Intel Atom is an x86-compatible low-power chip that helped launch the Asus/Acer netbook revolution (which, until the iPad killed it, was a big deal). But to get higher density on its hardware, Quanta has chosen a different CPU than the Atom used by SeaMicro. Instead, Quanta is the primary customer of an innovative chip company we have covered on carpetbomberz.com previously: Tilera. For those who have not been following the company’s press releases, Tilera is a spin-off of an MIT research project in chip-scale networking. The idea was to create very simplified systems-on-a-chip (whole computers scaled down to a single chip) and then network them together on the same slice of silicon. The speeds would be faster because most of the physical interfaces and buses sit directly in the chip’s circuits instead of out on the computer’s motherboard. The promise of the Tilera chip is that you scale up on the silicon die rather than across racks and racks of equipment in the datacenter. Performance of the Tilera chips has been somewhat of a secret; no benchmarks or real comparisons against commercially shipping CPUs have been published. But the general feeling is that any single core within a Tilera chip should be about as capable as the processor in your smartphone, and every bit as power efficient. Tilera has long planned to scale up to 100 cores within a single processor die, and the parts shipping today have reached 64.
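
    To make the chip-scale networking idea a bit more concrete, here is a toy model (my own illustration, not Tilera’s actual topology or routing): treat the 64 cores as an 8×8 grid where a message between two cores travels a Manhattan-distance number of on-die hops.

        # Toy model only: average hop count between cores on an 8x8 on-chip mesh.
        from itertools import product

        SIDE = 8  # 8 x 8 grid = 64 cores, matching the 64-core parts

        def hops(a, b):
            """Manhattan distance between two (row, col) tiles on the mesh."""
            return abs(a[0] - b[0]) + abs(a[1] - b[1])

        tiles = list(product(range(SIDE), repeat=2))
        pairs = [(a, b) for a in tiles for b in tiles if a != b]
        avg = sum(hops(a, b) for a, b in pairs) / len(pairs)
        print(f"Average hops between any two of {len(tiles)} cores: {avg:.2f}")
        # Worst case is corner to corner: 14 hops, every one an on-die link
        # rather than a trip across a motherboard bus.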

    I suspect both SeaMicro and Quanta will have their own custom OSes which run as a central supervisor, letting administrators install and set up instances of their favorite workhorse OSes. Each OS instance will be doled out to an available CPU core and then linked up to a virtual network and virtual storage interface. Boom! You’ve got a web server, file server, rendering station, streaming server, whatever you need, in one fell swoop. And it is all bound together with two 1,100 watt power supplies in each 2 rack unit box. I don’t know how that compares to the SeaMicro power supply, but I imagine the draw per core is likely smaller than the SM10000’s, which can only mean that in the war for datacenter power efficiency Quanta may have delivered a huge shot across SeaMicro’s bow. All I can say is let the games begin, and let the market determine the winner.
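
    Since this is all speculation on my part, here is a purely hypothetical sketch of what that supervisor logic might look like; every name and structure below is invented for illustration, as neither vendor has published how its management layer works.

        # Hypothetical supervisor: dole each OS image out to an idle core.
        class Node:
            def __init__(self, core_id):
                self.core_id = core_id
                self.instance = None  # no OS image assigned yet

        class Supervisor:
            """Hands out idle cores to OS images, one instance per core."""
            def __init__(self, core_count):
                self.nodes = [Node(i) for i in range(core_count)]

            def provision(self, os_image):
                # Find the first idle core and assign the image to it. A real
                # system would also attach a virtual NIC and a storage slice.
                node = next(n for n in self.nodes if n.instance is None)
                node.instance = os_image
                return node

        sup = Supervisor(core_count=512)
        web = sup.provision("linux-webserver.img")
        print(f"core {web.core_id} now runs {web.instance}")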

  • AppleInsider | Expanded GPU support in Apple’s Mac OS X 10.6.7 hints at future Mac hardware

    “Could Apple be opening up the platform more?” he asked. “What happens to NVIDIA? Why support for cards that aren’t in Macs yet? Will the 2011 Sandy Bridge iMacs contain one or more of these new 6xxx cards?”

    via AppleInsider | Expanded GPU support in Apple’s Mac OS X 10.6.7 hints at future Mac hardware.

    This is an interesting tidbit of news. A Macintosh hacker has discovered, within the most recent update to Mac OS X 10.6, a number of hardware drivers for ATI graphics cards that do not ship in and are currently ‘unsupported’ on the Mac. Anyone who has attempted to buy aftermarket, third-party OEM graphics cards for Macs knows this is a treacherous minefield to navigate. The principal problem is that Apple absolutely, positively does not want people sticking any old graphics card in a Mac Pro tower, or even in old legacy towers going back to the first PowerPC/PCI-based Macs. No, you must buy the bona fide supported hardware direct from Apple, with drivers they supply. In a pinch you might be able to fake it with a PC graphics card that has had its BIOS flashed to make it appear to be a genuine Apple part.

    But now, if Apple is bundling up drivers for various and sundry graphics cards (albeit from one supplier: ATI), is it possible you could finally buy any card you wanted and have it just work? That would be big news indeed for any owner of an end-user-upgradeable Mac Pro, and welcome news at that. I’m hoping this story continues to develop and Apple comes out with a policy or strategy statement heralding a change in its past stance toward peripheral manufacturers. More devices being supported would be a great thing.

  • Microsoft Research Watch: AI, NoSQL and Microsoft’s Big Data Future

    Probase is a Microsoft Research project described as an “ongoing project that focuses on knowledge acquisition and knowledge serving.” Its primary goal is to “enable machines to understand human behavior and human communication.” It can be compared to Cyc, DBpedia or Freebase in that it is attempting to compile a massive collection of structured data that can be used to power artificial intelligence applications.

    via Microsoft Research Watch: AI, NoSQL and Microsoft’s Big Data Future – ReadWriteCloud.

    Who knew Microsoft was so interested in things only IBM Research’s Watson could demonstrate? The AI (artificial intelligence) work seems to be targeted at Bing search results. And to back it all up, they have to set aside their huge commitment to Microsoft SQL Server and go with a NoSQL database to hold all the unstructured data. This seems like a huge shift away from desktop and data center applications toward something much more oriented to cloud computing, where collected data is money in the bank. The story’s Google vs. Facebook example expresses this best: Google may collect data, but its real business is delivering ads to eyeballs, whereas Facebook collects the data itself and shares it with the highest bidder. It seems Microsoft is going the Facebook route of wanting to collect and own the data rather than merely hosting other people’s data (like Google and Yahoo).
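
    As a rough illustration of why this kind of data wants a schemaless store (my own toy example, which reflects nothing of Probase’s actual design), knowledge bases like Probase, Cyc and Freebase boil down to big piles of subject-predicate-object assertions, and new kinds of facts arrive constantly with no schema migration:

        # Toy triple store: facts as subject-predicate-object assertions.
        from collections import defaultdict

        facts = defaultdict(list)

        def assert_fact(subject, predicate, obj):
            facts[subject].append((predicate, obj))

        assert_fact("watson", "is_a", "question answering system")
        assert_fact("watson", "built_by", "IBM Research")
        assert_fact("probase", "is_a", "knowledge base")
        # A brand-new predicate needs no ALTER TABLE, unlike a SQL schema:
        assert_fact("probase", "developed_at", "Microsoft Research")

        for predicate, obj in facts["probase"]:
            print(f"probase --{predicate}--> {obj}")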

  • OCZ Acquires Indilinx SSD Controller Maker

    Prior to SandForce‘s arrival, Indilinx was regarded as the leading makers of controllers for solid-state drives. The company gained both consumer and media favoritism when it demonstrated that drives based on its own controllers were competitive with lead drives made by Intel. Indilinx’s controllers allowed many SSD manufacturers to bring SSD prices down to a level where a large number of mainstream consumers started to take notice.

    via OCZ Acquires Indilinx SSD Controller Maker.

    This is surprising news, especially following the announcement and benchmark testing of OCZ’s most recent SSDs. They are the highest-performing SATA-based SSDs on the market, and the boost in speed comes primarily from their drive controller chip, which is supplied by SandForce, not Indilinx. Buying a competing manufacturer is no doubt going to disappoint their suppliers at SandForce. And I worry a bit that SandForce’s technical lead is something even a good competitor like Indilinx won’t be able to overcome. I’m sticking with any drive that has a SandForce controller inside, given their track record of increasing performance and reliability with each new generation of product.

    So I am of two minds. I guess it’s cool that OCZ has enough power and money to provide its own drive controllers for its SSDs. But at the same time, that second-place controller is a much slower, lower-performance part than the top competitor’s. In the future I hope OCZ introduces price variation by offering both SandForce-based and Indilinx-based SSDs and charging less for the Indilinx models. If not, I don’t know how they will achieve technological superiority now that SandForce has such a lead.

  • Calxeda boasts of 5 watt ARM server node • The Register

    Calxeda is not going to make and sell servers, but rather make chips and reference machines that it hopes other server makers will pick up and sell in their product lines. The company hopes to start sampling its first ARM chips and reference servers later this year. The first reference machine has 120 server nodes in a 2U rack-mounted format, and the fabric linking the nodes together internally can be extended to interconnect multiple enclosures together.

    via Calxeda boasts of 5 watt ARM server node • The Register.

    SeaMicro and now Calxeda are going gangbusters for the ultra-dense, low-power server market. Unlike SeaMicro, Calxeda wants to create reference designs it licenses to manufacturers, who will build machines with 120 server nodes in 2 rack units. SeaMicro’s record right now is 512 cores per 10U, or roughly 102 cores per 2U. The other difference is that the SeaMicro product uses Intel’s low-power Atom CPU, whereas Calxeda is using a processor found more often in smartphones and tablet computers. SeaMicro has hinted it is not wedded to the Intel architecture, but it is more interested in shipping real live product than in drawing up generic designs others can license. In the long run it’s entirely possible SeaMicro will switch to a different CPU; they have indicated previously that their servers are designed with enough flexibility to swap in any other processor if necessary. It would be really cool to see an apples-to-apples comparison of a SeaMicro server using first Intel CPUs and then ARM-based CPUs.
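
    Here is the quick arithmetic behind those density figures (nodes and cores are not strictly comparable, so treat this as a rough sketch):

        # Figures from the article; SeaMicro's per-2U number is just its
        # 10U density scaled down, so both are approximations.
        calxeda_nodes_per_2u = 120
        calxeda_watts_per_node = 5              # the headline "5 watt ARM server node"
        seamicro_cores_per_2u = 512 / 10 * 2    # 102.4 -> "roughly 102 cores"

        print(f"Calxeda:  {calxeda_nodes_per_2u} nodes per 2U, "
              f"~{calxeda_nodes_per_2u * calxeda_watts_per_node} W for the nodes")
        print(f"SeaMicro: ~{seamicro_cores_per_2u:.0f} cores per 2U")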

  • AppleInsider | Insider Mac OS X 10.7 Lion: Auto Save, File Versions and Time Machine

    However, Windows’ Shadow Copy is really intended for creating a snapshot of an entire volume for backup purposes; users can’t trigger the creation of a new version of an individual file in Windows. This makes Lion’s Versions a very different beast: it’s more akin to a versioning file system that works like Time Machine, but local to the user’s own disk.

    via AppleInsider | Insider Mac OS X 10.7 Lion: Auto Save, File Versions and Time Machine [Page 2].

    Reading this article from AppleInsider’s series of previews of Mac OS X 10.7 has been an education in both the iOS-based universe and the good ol’ desktop universe I already know and love. At first I was apprehensive about the desktop OS taking such a back seat to the mobile devices Apple has been introducing at an increasingly fast pace. From iPods to iPhones to the iPod Touch and now the iPad, there’s no end to the permutations iOS-based devices can take. Prior to the iPhone and iPod Touch releases, Apple was using an embedded OS with none of the sophistication and capability of a real desktop operating system. This was both a frugal and a conservative approach, as media players, while having real CPUs inside, were never intended to have network stacks, garbage collection on UI servers, and so on. There was always just enough there to present a user interface of some sort, with access to a local file system and the ability to sync files between a host-based iTunes client and the device (whichever generation of iPod it might be). Along with that, each generation of hardware most likely varied by degrees as video playback became a touted feature in newer iPods with bigger internal hard drives (the so-called video iPods). I can imagine that got complicated quickly as CPUs, video chips and media playback capabilities ranged widely up and down the product line. As each device required its own tweaks to the embedded OS, and iTunes was tweaked to accommodate these local variations, I’m sure the all-seeing eye of Steve Jobs began to wince at the increasing complexity of the iPod product line. Enter iOS: a smaller, cleaner, fully optimized OS for low-power mobile devices. It’s got everything a desktop OS has without any of the legacy device concerns (backward compatibility) of a typical desktop OS. That allowed for ‘just enough’ capability in the networking stack, the UI server and local storage. Apps written for iOS were unique to that environment, though they might have started out as Mac OS X apps. By taking the original code base, refactoring it and doing complete low-level rewrites from top to bottom, you got a version of the Safari web browser on a mobile device that could display any web page and even do some display optimizations of the page on the fly. And there were any number of developers rushing to get an app running on the new devices. So whither Mac OS X?

    Well, in the rush of creating an iOS app universe, the iOS development team added many features along the way. One great gap was the missing cut & paste long enjoyed on desktop OSes. Eventually that feature made it in, and others like it slowly got integrated. Apple’s custom A4 chip, built around an ARM Cortex-A8 CPU, was tearing up the charts, helping Apple out-compete every other mobile phone on the market. The iPad took the same approach of getting out there with new features and becoming a more desktop-like mobile device. A year has passed since the original iPad hit the market, the Mac OS is due for a change, and the big question is what Steve Jobs thinks. There were hints and rumors he wanted everyone to enjoy the clean-room design of iOS and dump the legacy messiness of old Mac OS X. Dan Lyons gave voice to these concerns quite clearly in his June 8 article in Newsweek, and Steve Jobs eventually replied directly to him, stating emphatically that he was wrong. Still, actions speak louder than words, and Apple’s Worldwide Developers Conference in 2010 seemed to hard-sell the advantages of developing for the new iOS. Conversely, Microsoft has proven over and over again that legacy support in an OS is a wonderful source of income once you have established your monopoly. But Apple has navigated the legacy-hardware seas before, with its first big migration from Motorola 68000 processors to the PowerPC chip and then the subsequent migration from PowerPC to Intel. From a software standpoint, attrition occurs as people dump their legacy hardware anyway (it is not uncommon for Apple users to eventually get rid of their older machines). To that end, requirements are now in place such that certain first-generation Intel-based Macs won’t be able to run the newest Mac OS X (that’s the word now), and legacy support for PowerPC-native apps running in emulation on Intel (via the Rosetta software) will also go away. Which then brings us to the point of this whole blog posting: where’s the beef?

    The beef, dear reader, is not in our computers but in ourselves. As the Macintosh OS evolves, so does the workflow, and the new paradigm being foisted upon us by mobile devices is the lack of any need to go to the File menu and choose Save or Save As… That’s what the new iOS design portends. The same goes for open documents in progress; everything is done for you at long last. The computer finally does what you always thought it did, and what Microsoft eventually built into Word (not into the OS itself): autosave. Newly developed versions of TextEdit, made by Apple to run under OS X 10.7, were tried out to see how they work under the new Auto Save and Versions architecture. Now you just make a new document, and the computer (safely) assumes you will most likely want to save it as you work, and that you may want to go back and undo some changes you made. After all these years of using desktop computers, this is finally built right in. So from the command line to the GUI and now to the mobile OS, computer architects and UI engineers have a good idea of what you might want to do before you choose to do it, and it’s built in at the lowest level of the OS. All of this is going to be in the next version of Mac OS X, due for release in July 2011. After reading these articles from AppleInsider and looking at the screenshots, I’m far more enthused and willing to adapt the way I work to the new regime of hybrid iOS and Mac OS X going forward.
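
    To illustrate the idea (and only the idea; this toy sketch has nothing to do with Apple’s actual implementation), Auto Save plus Versions amounts to quietly snapshotting a document’s state on every change, so the user can step backward without ever having chosen Save As…

        # Toy illustration of the Auto Save + Versions idea.
        import time

        class Document:
            def __init__(self, text=""):
                self.text = text
                self.versions = []  # (timestamp, snapshot) pairs

            def edit(self, new_text):
                # Auto Save: snapshot the current state before applying the edit.
                self.versions.append((time.time(), self.text))
                self.text = new_text

            def revert(self, steps_back=1):
                """Step back through saved versions, Time Machine style."""
                _, snapshot = self.versions[-steps_back]
                self.text = snapshot

        doc = Document("Hello")
        doc.edit("Hello, world")
        doc.edit("Hello, world!")
        doc.revert(2)
        print(doc.text)  # -> "Hello"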

  • OCZ Vertex 3 Preview – AnandTech

    The main categories here are SF-2100, SF-2200, SF-2500 and SF-2600. The 2500/2600 parts are focused on the enterprise. They’re put through more aggressive testing, their firmware supports enterprise specific features and they support the use of a supercap to minimize dataloss in the event of a power failure. The difference between the SF-2582 and the SF-2682 boils down to one feature: support for non-512B sectors. Whether or not you need support for this really depends on the type of system it’s going into. Some SANs demand non-512B sectors in which case the SF-2682 is the right choice.

    via OCZ Vertex 3 Preview: Faster and Cheaper than the Vertex 3 Pro – AnandTech :: Your Source for Hardware Analysis and News.

    The cat is out of the bag: OCZ has not one but two SandForce SF-2000 series based SSDs on the market now, and performance-wise the consumer-level product is even slightly faster than the enterprise-level product at lower cost. These are interesting times indeed. The speeds are so fast with the newer SandForce drive controllers that over a SATA 6 Gb/s interface you get throughput close to what could previously only be purchased in a PCIe-based SSD array for $1,200 or so. The economics are getting topsy-turvy, with new generations of single drives outdistancing previous top-end products (I’m talking about you, Fusion-io, and you, Violin Memory). SandForce has become the drive controller for the rest of us, and with speeds like 500 MB/sec read and write, what more could you possibly ask for? I would say the final bottleneck on the desktop/laptop computer is quickly vanishing, and we’ll have to wait and see just how much faster SSDs become. My suspicion is the motherboard BIOS will slowly creep up to be the last link in the chain of noticeable computer speed. Once we get a full range of UEFI motherboards and fully optimized firmware to configure them, we will have, theoretically, the fastest personal computers one could possibly design.
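
    A quick bit of arithmetic shows why 500 MB/sec is knocking on the interface’s ceiling: SATA 6 Gb/s links use 8b/10b encoding, so every byte of data costs ten bits on the wire.

        # Usable bandwidth of a SATA 6 Gb/s link under 8b/10b encoding.
        line_rate_bps = 6e9
        usable_bytes_per_sec = line_rate_bps / 10  # 10 wire bits per data byte
        print(f"Theoretical max: {usable_bytes_per_sec / 1e6:.0f} MB/s")
        # -> 600 MB/s before protocol overhead, so 500 MB/sec drives are
        #    close to saturating the interface itself.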

  • TidBITS Macs & Mac OS X: Apple Reveals More about Mac OS X Lion

    Finally, despite Apple’s dropping of the Xserve line (see “A Eulogy for the Xserve: May It Rack in Peace,” 8 November 2010), Mac OS X Server will make the transition to Lion, with Apple promising that the new version will make setting up a server easier than ever. That’s in part because Lion Server will be built directly into Lion, with software that guides you through configuring the Mac as a server. Also, a new Profile Manager will add support for setting up and managing Mac OS X Lion, iPhone, iPad, and iPod touch devices. Wiki Server 3 will offer improved navigation and a new Page Editor. And Lion Server’s WebDAV support will provide iPad users the ability to access, copy, and share server-based documents.

    via TidBITS Macs & Mac OS X: Apple Reveals More about Mac OS X Lion.

    Here’s to seeing a great democratization of OS X Server once and for all. While Apple deserved to make some extra cash on a server version of the OS, I’m sure it had very little impact on their overall sales (positive or negative). Including it with the base-level OS and letting it be unlocked (for money or for free) can only be a good thing. Where I work I already run a single-CPU, four-core Intel Xserve. I think I should buy some cheap RAM to max out its memory and upgrade this summer to OS X Lion Server.

  • links for 2011-03-09

    • Great tutorial on how to deal with floats, float collapse, container divs and the 'normal flow' of HTML elements in a web page. Highly Recommended.
  • SeaMicro drops 64-bit Atom bomb server • The Register

    The base configuration of the original SM10000 came with 512 cores, 1 TB of memory, and a few disks; it was available at the end of July last year and cost $139,000. The new SM10000-64 uses the N570 processors, for a total of 256 chips but 512 cores, the same 1 TB of memory, eight 500 GB disks, and eight Gigabit Ethernet uplinks, for $148,000. Because there are half as many chipsets on the new box compared to the old one, it burns about 18 percent less power, too, when configured and doing real work.

    via SeaMicro drops 64-bit Atom bomb server • The Register.

    I don’t want to claim that SeaMicro is taking a page out of the Apple playbook, but keeping your name in the technology news press is always a good thing. I have to say it is a blistering turnaround to release a second system board for the SM10000 server so quickly. And knowing they have some sales to back up the need for further development makes me think this company really could make a go of it. 512 CPU cores in a 10U rack is still a record of some sort, and I hope one day to see SeaMicro publish some white papers and testimonials from current customers, showing what this machine’s killer application in the data center turns out to be.
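
    For the record, here is the simple math from the article’s figures (the per-core cost is my own derived number, not something SeaMicro publishes; the original SM10000 used single-core Atoms, hence twice as many chips):

        # Numbers straight from the quoted article.
        old = {"chips": 512, "cores": 512, "price": 139_000}  # original SM10000
        new = {"chips": 256, "cores": 512, "price": 148_000}  # SM10000-64 (dual-core N570)

        for name, box in (("SM10000", old), ("SM10000-64", new)):
            print(f"{name}: {box['chips']} chips, {box['cores']} cores, "
                  f"${box['price'] / box['cores']:.0f} per core")
        # Half as many chips means half as many chipsets, which is where
        # the article's quoted ~18 percent power saving comes from.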