Carpet Bomberz Inc.

Scouring the technology news sites every day

Posts Tagged ‘facebook’

Facebook Opens Up Hardware World With Magic Hinge | Wired Enterprise | Wired.com


Profile shown on Thefacebook in 2005 (Photo credit: Wikipedia)

Codenamed “Knox,” Facebook’s storage prototype holds 30 hard drives in two separate trays, and it fits into a nearly 8-foot-tall data center rack, also designed by Facebook. The trick is that even if Knox sits at the top of the rack — above your head — you can easily add and remove drives. You can slide each tray out of the rack, and then, as if it were a laptop display, you can rotate the tray downwards, so that you’re staring straight into those 15 drives.

via Facebook Opens Up Hardware World With Magic Hinge | Wired Enterprise | Wired.com.

Nice article on Facebook’s own data center design and engineering efforts. I think their approach is going to advance the state of the art far more than Apple’s, Google’s, or Amazon’s protected and secretive data center efforts. Those companies have the money and resources to plow into custom-engineered bits for their data centers, but Facebook can at least show off what it has learned as it scaled up to a huge number of daily users. Not the least of those lessons is expressed by their hard drive rack design, a tool-less masterpiece.

This article emphasizes the physical aspects of the racks in which the hard drives are kept. It’s a tool-less design not unlike what I talked about in an article from a month ago: HP has adopted a tool-less design for its all-in-one (AIO) engineering workstation (see Introducing the HP Z1 Workstation). The video link demonstrates the idea of a tool-less design applied to what is arguably not the easiest device to build without proprietary connectors, fasteners, and so on. I use my personal experience of attempting to upgrade my 27″ iMac as the foil for what is presented in the HP promo video. If Apple adopted a tool-less design for its iMacs, there’s no telling what kind of aftermarket might spring up for hobbyists or even casually interested Mac owners.

I don’t know how much of Facebook’s decision-making around its data center designs is driven by the tool-less methodology. But I can honestly say that any large outfit like Facebook or HP going tool-less in some way is a step in the right direction. Companies like O’Reilly’s Make: magazine and iFixit.org are providing a ready path for anyone willing to put in the work to learn how to fix the things they own. Throw into that mix outfits that are less about technology and more about home maintenance, like Repair Clinic: not as sexy technologically, but I can vouch for their ability to teach me how to fix a fan in my fridge.

Borrowing the phrase “If you can’t fix it, you don’t own it,” let me say I wholeheartedly agree. And borrowing from the old Apple commercial: here’s to the crazy ones, because they change things and have no respect for the status quo. So let’s stop throwing away those devices, appliances, and automobiles, and let’s start by fixing some things.

Written by Eric Likness

May 21, 2012 at 3:00 pm

Facebook Shakes Hardware World With Own Storage Gear | Wired Enterprise | Wired.com

Image representing Facebook as depicted in CrunchBase

Image via CrunchBase

Now, Facebook has provided a new option for these big name Wall Street outfits. But Krey also says that even among traditional companies who can probably benefit from this new breed of hardware, the project isn’t always met with open arms. “These guys have done things the same way for a long time,” he tells Wired.

via Facebook Shakes Hardware World With Own Storage Gear | Wired Enterprise | Wired.com.

Interesting article further telling the story of Facebook’s Open Compute Project. This part of the story concentrates on the mass storage needs of the social media company, and it turns out Wall Street data center designers and builders aren’t as enthusiastic about Open Compute as one might think. As Peter Krey says, the old-school Wall Streeters have been doing things the same way for a very long time. But that gets to the heart of what the members of the Open Compute Project hope to accomplish. Rackspace AND Goldman Sachs are members, both contributing and getting pointers from one another. Rackspace is even beginning to virtualize equipment down to the functional level, replacing motherboards with a virtual I/O service. That would allow components to be grouped together based on the frequency of their replacement and maintenance: according to the article, CPUs could be in one rack cabinet, DRAM in another, and disks in yet another (which is already the case now with storage area networks).

The newest item to come into the Open Compute circus tent is storage. Up until now that’s been left to value-added resellers (VARs) to provide, so different brand loyalties and technologies still hold sway in many data center shops, Open Compute included. Now Facebook is redesigning the disk storage rack to create a totally tool-less design: no screws, no drive carriers, just a drive and a latch, and that is it. I looked further into this tool-less phenomenon and found an interesting video at HP.

HP Z1 all-in-one CAD workstation

Along with this professional video touting how easy it is to upgrade this all-in-one design:

The Making of the HP Z1

Having recently purchased a similarly sized 27″ iMac and upgraded it by adding a single SSD into the case, I can tell you this HP Z1 demonstrates in every way possible the miracle of tool-less design. I was bowled over, and it took me back to memories of different Dell tower designs over the years (some more tool-less-aware than others). If a tool-less future is inevitable, I say bring it on. And if Facebook ushers in an era of tool-less storage racks as a central design tenet of Open Compute, so much the better.

Image representing Goldman Sachs as depicted in CrunchBase

Image via CrunchBase

Written by Eric Likness

March 5, 2012 at 3:00 pm

The PC is dead. Why no angry nerds? :: The Future of the Internet — And How to Stop It

Famously proprietary Microsoft never dared to extract a tax on every piece of software written by others for Windows—perhaps because, in the absence of consistent Internet access in the 1990s through which to manage purchases and licenses, there’d be no realistic way to make it happen.

via The PC is dead. Why no angry nerds? :: The Future of the Internet — And How to Stop It.

While it’s true that Microsoft didn’t tax software developers who sold products running on the Windows OS, a kind of tax levy did exist for hardware manufacturers creating desktop PCs with Intel chips inside. But message received; I get the bigger point: cul-de-sacs don’t make good computers. They do, however, make good appliances. But as the author Jonathan Zittrain points out, we are becoming less aware of the distinction between a computer and an appliance, and have lowered our expectations accordingly.

In fact this points to a bigger trend than just computers becoming silos of information and entertainment consumption; no, not by a long shot. This trend was preceded by the wild popularity of MySpace, followed quickly by Facebook and now Twitter: all ‘platforms,’ as described by their owners, with some amount of API publishing and hooks to let in third-party developers (like game maker Zynga). But so what if I can play Scrabble or FarmVille with my ‘friends’ on a social networking ‘platform’? Am I still getting access to the Internet? Probably not; you are most likely reading whatever filters into or out of the central, all-encompassing data store of the social networking platform.

Like the old world maps in the days before Columbus: here be dragons, and the world ends HERE, even though platform owners might say otherwise. It is an intranet pure and simple, a gated community that forces unique identities on all participants. Worse yet, it is a Big Brother-like panopticon where each step and every little movement is monitored and tallied. You take quizzes, you like, you share; all these things are collection points, checkpoints to gather more data about you. And that is the TAX levied on anyone who voluntarily participates in a social networking platform.

So long live the Internet, even though its frontier, wildcatting days are nearly over. There will be books and movies like How the Cyberspace Was Won, and the pioneers will all be noted and revered. We’ll remember when we could go anywhere we wanted and do things we never dreamed of. But those days are slipping away as new laws get passed under very suspicious pretenses, all in the name of commerce. As for me, I much prefer freedom over commerce, and you can log that in your stupid little database.

Cover of “The Future of the Internet — And How to Stop It”

Cover via Amazon

Written by Eric Likness

December 19, 2011 at 3:00 pm

Tilera routs Intel, AMD in Facebook bakeoff • The Register

Structure of the TILE64 processor from Tilera

Facebook lined up the Tilera-based Quanta servers against a number of different server configurations making use of Intel’s four-core Xeon L5520 running at 2.27GHz and eight-core Opteron 6128 HE processors running at 2GHz. Both of these x64 chips are low-voltage, low-power variants. Facebook ran the tests on single-socket 1U rack servers with 32GB and on dual-socket 1U rack servers with 64GB. All three machines ran CentOS Linux with the 2.6.33 kernel and Memcached 1.2.3h.

via Tilera routs Intel, AMD in Facebook bakeoff • The Register.

You will definitely want to read this whole story as presented by El Reg. They have a few graphs displaying the performance of the Tilera-based Quanta data-cloud-in-a-box versus the Intel server racks. And let me tell you, on certain very specific workloads, like web caching using Memcached, advantage Tilera. No doubt data center managers will want more evidence to back up this initial white paper from Facebook, but this is big, big news. And all one need do, apart from tuning the software for the chipset, is add a few PCIe-based SSDs or a TMS RamSan, and you would have what could theoretically be the fastest web performance possible. Even at this level there’s still room to grow, I think, on the hard drive storage front. What I would hope to see in the future is Facebook doing an exhaustive test of the Quanta SQ-2 product versus Calxeda (ARM cloud in a box) and the SeaMicro SM10000-64 (64-bit Intel Atom cloud in a box). It would prove an interesting research project just to see how much chipsets, chip architectures, and instruction sets matter in optimizing for a particular style and category of data center workload. I know I will be waiting and watching.
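For readers who haven’t touched Memcached, the workload Facebook benchmarked is essentially a get/set key-value cache sitting in front of the database. Here’s a toy sketch of that pattern in Python (my own illustration of the idea, not the real Memcached protocol or any actual client library):

```python
import time

class ToyCache:
    """A tiny in-memory key-value cache illustrating the get/set
    pattern Memcached serves. Purely illustrative; real Memcached
    is a networked daemon with its own wire protocol."""

    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl=60):
        # Memcached stores each value with an expiration time.
        self._store[key] = (value, time.time() + ttl)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None  # cache miss: the caller falls back to the database
        value, expires = item
        if time.time() > expires:
            del self._store[key]  # lazily expire stale entries
            return None
        return value

cache = ToyCache()
cache.set("user:42:profile", {"name": "Alice"})
print(cache.get("user:42:profile"))  # cache hit
```

The point of the benchmark is that this workload is embarrassingly parallel: millions of independent get/set requests, which is exactly where lots of small Tilera cores can shine.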

Written by Eric Likness

August 15, 2011 at 3:00 pm

JSON Activity Streams Spec Hits Version 1.0

Icon for a social networking website

Image via Wikipedia

The Facebook Wall is probably the most famous example of an activity stream, but just about any application could generate a stream of information in this format. Using a common format for activity streams could enable applications to communicate with one another, and presents new opportunities for information aggregation.

via JSON Activity Streams Spec Hits Version 1.0.

Remember mash-ups? I recall the great wide wonder of putting together web pages that used ‘services’ provided for free through APIs published for anyone who wanted to use them. There were many at one time; some still exist and others have been culled. But as newer social networks begat yet newer ones (MySpace, Facebook, Foursquare, Twitter), none of the ‘outputs’ or feeds of any single one was anything more than a way of funneling you into its own login accounts and user screens. The gated community first requires you to be a member in order to play.

We went from ‘open’ to cul-de-sac and stovepipe in less than one full revision of social networking. However, maybe all is not lost; maybe an open standard can help folks re-use their own data at least (maybe I could mash up my own activity stream). Betting on wider adoption by social networking websites would be risky; likely each service provider will closely hold most of the data it collects and only publish the bare minimum necessary to claim compliance. Another burden upon this sharing is the slowly creeping concern about the security of one’s own activity stream. It will no doubt have to be opt-in, and definitely not opt-out, as I’m sure people are more used to having fellow members of their tribe know what they are doing than putting out a feed of it to the whole Internet. Which makes me think of the old discussion of being able to fine-tune who has access to what (Doc Searls’ old Vendor Relationship Management idea). Activity Streams could easily fold into that universe, where you regulate which threads of the stream are shared with which people. I would only really agree to use this service if it had that fine-grained level of control.
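For the curious, an activity in the 1.0 spec’s JSON shape boils down to an actor, a verb, and an object. A minimal sketch in Python (the field names follow the spec; the identifiers, dates, and content here are made up for illustration):

```python
import json

# One activity in the JSON Activity Streams 1.0 shape:
# who did it (actor), what they did (verb), and to what (object).
activity = {
    "published": "2011-06-14T15:00:00Z",
    "actor": {
        "objectType": "person",
        "id": "urn:example:person:eric",      # hypothetical identifier
        "displayName": "Eric",
    },
    "verb": "post",
    "object": {
        "objectType": "note",
        "content": "Remember mash-ups?",
    },
}

# Serialize it; any service speaking the same format could aggregate it.
stream_entry = json.dumps(activity)
print(json.loads(stream_entry)["verb"])  # -> post
```

That sameness is the whole pitch: if every network emitted entries in this one shape, mashing up your own combined stream would be a simple merge-and-sort by the published timestamp.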

Written by Eric Likness

June 14, 2011 at 3:00 pm

Facebook: No ‘definite plans’ to ARM data centers • The Register

Image representing Facebook as depicted in CrunchBase

Image via CrunchBase

Clearly, ARM and Tilera are a potential threat to Intel’s server business. But it should be noted that even Google has called for caution when it comes to massively multicore systems. In a paper published in IEEE Micro last year, Google senior vice president of operations Urs Hölzle said that chips that spread workloads across more energy-efficient but slower cores may not be preferable to processors with faster but power-hungry cores.

“So why doesn’t everyone want wimpy-core systems?” Hölzle writes. “Because in many corners of the real world, they’re prohibited by law – Amdahl’s law.”

via Facebook: No ‘definite plans’ to ARM data centers • The Register.

The explanation given here by Google’s top systems person comes down to latency versus parallel-processing overhead. If your workload has to do all its steps in order, a small number of fast cores delivers much higher performance, and responsiveness is the measure all the users of your service will judge you by. Making things massively parallel might provide the same throughput at a lower energy cost, but the communication and processing overhead of splitting the work up, then assembling all the data and sending it over the wire, offsets any advantage in power efficiency. In other words, everything takes longer, latency increases, and users will deem your service slow and unresponsive. That’s the dilemma of Amdahl’s Law: the point of diminishing returns when adopting parallel computer architectures.
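To put rough numbers on Hölzle’s point: Amdahl’s Law says that if only a fraction p of a job can be parallelized, then n cores give a speedup of 1 / ((1 − p) + p/n). A quick sketch (the 90%-parallel workload is illustrative, not Facebook’s actual numbers):

```python
# Amdahl's law: with parallel fraction p and n cores, the overall
# speedup is S(n) = 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even a 90%-parallel workload tops out below 10x no matter how many
# wimpy cores you throw at it: the serial 10% dominates.
for n in (4, 16, 64, 1024):
    print(n, round(amdahl_speedup(0.9, n), 2))
```

With p = 0.9, going from 64 cores to 1,024 cores barely moves the needle (roughly 8.8x to 9.9x), which is exactly why a pile of slow cores can end up feeling laggy to users even when it wins on watts.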

Now compare this to something we know much more concretely, like the airline industry. As the cost of tickets came down, the attempts to cut costs went up; schedules for landings and gate assignments got more complicated, and service levels have suffered terribly. No one is really all that happy about the service they get, even from the best airline currently operating. So maybe Amdahl’s Law doesn’t bite when there’s a false ceiling placed on what is acceptable in terms of the latency of a ‘system’. If airlines are not on time, but you still make your connection 99% of the time, who will complain? By way of comparison, there may be a middle ground where more parallelizing of compute tasks lowers the energy required by a data center. It will mean greater latency and a worse experience for the users. But if everyone suffers equally and the service is adequate if not great, then the company will be able to cut costs by deploying more parallel processors in its data centers.

I think Tilera holds a special attraction for Facebook, especially since Quanta, their hardware assembler of choice, is already putting together computers with the Tilera chip for customers now. This chain of associations might give Facebook a way to test the waters at a scale large enough to figure out the cost/benefit of massively parallel CPUs in the data center. Maybe it will take the build-out of another new data center to get there, but no doubt it will happen eventually.

Written by Eric Likness

April 27, 2011 at 3:00 pm
