
Facebook Shakes Hardware World With Own Storage Gear | Wired Enterprise | Wired.com


Now, Facebook has provided a new option for these big name Wall Street outfits. But Krey also says that even among traditional companies who can probably benefit from this new breed of hardware, the project isn’t always met with open arms. “These guys have done things the same way for a long time,” he tells Wired.

via Facebook Shakes Hardware World With Own Storage Gear | Wired Enterprise | Wired.com.

Interesting article further telling the story of Facebook's Open Compute Project, this time concentrating on the mass storage needs of the social media company. It also reveals that Wall Street data center designers and builders aren't as enthusiastic about Open Compute as one might think. As Peter Krey says, the old school Wall Streeters have been doing things the same way for a very long time. But that gets to the heart of what the members of the Open Compute Project hope to accomplish. Rackspace AND Goldman Sachs are both members, each contributing to and getting pointers from the other. Rackspace is even beginning to virtualize equipment down to the functional level, replacing motherboards with a virtual I/O service. That would allow components to be grouped together based on how often they need replacement and maintenance. According to the article, CPUs could sit in one rack cabinet, DRAM in another, and disks in yet another (which is already the case today with storage area networks).

The newest item to come into the Open Compute circus tent is storage. Up until now that's been left to Value Added Resellers (VARs) to provide, so different brand loyalties and technologies still hold sway in many data center shops, including Open Compute ones. Now Facebook is redesigning the disk storage rack as a totally tool-less design: no screws, no drive carriers, just a drive and a latch, and that is it. I looked further into this tool-less phenomenon and found an interesting video at HP:

HP Z1 all-in-one CAD workstation

Along with this professional video touting how easy it is to upgrade this all-in-one design:

The Making of the HP Z1

Having recently purchased a similarly sized 27″ iMac and upgraded it by adding a single SSD drive inside the case, I can tell you this HP Z1 demonstrates in every way possible the miracle of tool-less design. I was bowled over, and it brought back memories of various Dell tower designs over the years (some more tool-less than others). If a tool-less future is inevitable, I say bring it on. And if Facebook ushers in the era of tool-less storage racks as a central design tenet of Open Compute, so much the better.


Tilera routs Intel, AMD in Facebook bakeoff • The Register

Structure of the TILE64 processor from Tilera

Facebook lined up the Tilera-based Quanta servers against a number of different server configurations making use of Intel's four-core Xeon L5520 running at 2.27GHz and eight-core Opteron 6128 HE processors running at 2GHz. Both of these x64 chips are low-voltage, low power variants. Facebook ran the tests on single-socket 1U rack servers with 32GB and on dual-socket 1U rack servers with 64GB. All three machines ran CentOS Linux with the 2.6.33 kernel and Memcached 1.2.3h.

via Tilera routs Intel, AMD in Facebook bakeoff • The Register.

You will definitely want to read this whole story as presented by El Reg. They have a few graphs displaying the performance of the Tilera-based Quanta "data cloud in a box" versus the Intel and AMD servers. And let me tell you, on certain very specific workloads, like web caching with Memcached, the advantage goes to Tilera. No doubt data center managers will need to pay attention to this and gather more evidence to back up this initial white paper from Facebook, but this is big, big news. Apart from tuning the software for the chipset, all one need do is add a few PCIe-based SSDs or a TMS RamSan and you have what could theoretically be the fastest web performance possible. Even at this level of performance, I think there's still room to grow on the hard drive storage front. What I would hope to see in the future is Facebook doing an exhaustive test of the Quanta SQ-2 product versus Calxeda (an ARM cloud in a box) and the SeaMicro SM10000-64 (a 64-bit Intel Atom cloud in a box). It would prove an interesting research project just to see how big a role chipsets, chip architectures and instruction sets play in optimizing for a particular style and category of data center workload. I know I will be waiting and watching.
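For anyone who hasn't poked at Memcached directly, the workload being measured here is the classic cache-aside pattern big web properties lean on: check the cache first, and only fall through to the database on a miss. Below is a minimal sketch of that pattern in Python, assuming a local memcached instance on the default port 11211 and the pymemcache client library; the fetch_user_from_db helper is purely illustrative and has nothing to do with Facebook's white paper.

```python
# Cache-aside lookups against memcached: the read-heavy pattern
# the Facebook/Tilera benchmark exercises at scale.
import json

from pymemcache.client.base import Client  # assumes pymemcache is installed

cache = Client(("localhost", 11211))  # assumes a local memcached on the default port


def fetch_user_from_db(user_id):
    """Stand-in for the real database query that runs on a cache miss."""
    return {"id": user_id, "name": "example"}


def get_user(user_id):
    key = "user:%d" % user_id

    # 1. Try the cache first -- this GET is the operation being benchmarked.
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    # 2. On a miss, hit the backing store and repopulate the cache.
    user = fetch_user_from_db(user_id)
    cache.set(key, json.dumps(user), expire=300)  # keep it warm for 5 minutes
    return user


if __name__ == "__main__":
    print(get_user(42))  # first call misses and fills the cache
    print(get_user(42))  # second call is served from memcached
```

The number that matters in a bake-off like this is how many of those get and set round trips a single box can sustain, and at what power draw, which is where Tilera's many small cores apparently earn their keep.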


Provocative article on Fibre Channel Storage

Over at the Register there’s an article on a report about the Future of Fibre Channel in the Data Centre (British spellings of course). The trends being spotted now are twofold.

  1. Internal disks on storage arrays are moving to Serial Attached SCSI (SAS), whose interface speeds keep jumping with each new generation (from 3 Gbit/s to 6 Gbit/s, with 12 Gbit/s on the roadmap).
  2. Optical Fibre Channel interconnects, along with the attendant switches and directors, are deemed too difficult to manage. Between the software and the hardware, far too much expertise is required or has to be added to existing data center staffing.

Following these trends to their logical conclusions, a recent development called Fibre Channel over Ethernet (FCoE) is usurping the mindshare that FC over fibre optics once had. Costs and expertise have dictated that the cheaper, less complicated interface be used wherever and whenever possible. The prediction now is that Serial Attached SCSI will be the next big thing, the next wave of migrations within the data center. The possibilities extend to SAS over Ethernet as the eventual target of these consolidations and migrations, so FCoE may just be a bridge to SASoE. As data center migrations go, it may be that new installs adopt the new technology while older FC-based systems are left to migrate when they reach the end of their operational lifespan (10 years for a data center hardware/software combo?).