Categories
computers data center technology wired culture

Facebook Opens Up Hardware World With Magic Hinge | Wired Enterprise | Wired.com

Profile shown on Thefacebook in 2005 (Photo credit: Wikipedia)

Codenamed “Knox,” Facebook’s storage prototype holds 30 hard drives in two separate trays, and it fits into a nearly 8-foot-tall data center rack, also designed by Facebook. The trick is that even if Knox sits at the top of the rack — above your head — you can easily add and remove drives. You can slide each tray out of the rack, and then, as if it were a laptop display, you can rotate the tray downwards, so that you’re staring straight into those 15 drives.

via Facebook Opens Up Hardware World With Magic Hinge | Wired Enterprise | Wired.com.

Nice article about Facebook’s own data center design and engineering efforts. I think their approach is going to advance the state of the art far more than Apple’s, Google’s, or Amazon’s protected, secretive data center efforts. Those companies have plenty of money and resources to plow into custom-engineered bits for their data centers, but Facebook can at least show off what it has learned while scaling up to a huge number of daily users. Not the least of those lessons is expressed by its hard drive rack design, a tool-less masterpiece.

This article emphasizes the physical aspects of the racks in which the hard drives are kept. It’s a tool-less design not unlike what I talked about in this article from a month ago. HP has adopted a tool-less design for its all-in-one (AIO) engineering workstation; see Introducing the HP Z1 Workstation. The video at that link demonstrates the idea of a tool-less design for what is arguably not the easiest device to build without proprietary connectors, fasteners, and the like. I use my personal experience of attempting to upgrade my 27″ iMac as the foil for what is presented in the HP promo video. If Apple adopted a tool-less design for its iMacs, there’s no telling what kind of aftermarket might spring up for hobbyists or even casually interested Mac owners.

I don’t know how much of Facebook’s decision-making around its data center designs is driven by the tool-less methodology. But I can honestly say that any large outfit like Facebook or HP going tool-less in some way is a step in the right direction. Companies like O’Reilly’s Make: magazine and iFixit.org are providing a ready path for anyone willing to put in the work to learn how to fix the things they own. Also throw into that mix the less-technology, more home-maintenance style outfits like Repair Clinic: not as sexy technologically, but I can vouch for their ability to teach me how to fix a fan in my fridge.

Borrowing the phrase “If you can’t fix it, you don’t own it,” let me say I wholeheartedly agree. And also borrowing from the old Apple commercial: here’s to the crazy ones, because they change things and have no respect for the status quo. So let’s stop throwing away those devices, appliances, and automobiles, and let’s start first by fixing some things.

Categories
cloud data center flash memory SSD support technology

Artur Bergman Wikia on SSDs @ OReilly Media Conferences/Don Bazile CEO of Violin Memory

Image representing Violin Memory (via CrunchBase)

Artur Bergman of Wikia explains why you should buy and use Solid State Disks (strong language)

via Artur Bergman Wikia on SSDs on OReilly Media Conferences – live streaming video powered by Livestream.

This is the shortest and most pragmatic presentation I’ve seen on what SSDs can do for you. He recommends buying Intel 320s and getting your feet wet; the move from a hard disk to an SSD is like going from a bicycle to a Ferrari. Later on, if you need to go with a PCIe SSD, do it, but that’s more like the difference between a Ferrari and a Formula 1 race car. Personally, in spite of the modest difference Artur is trying to illustrate, I still like the idea of buying once and getting more than you need. And if this doesn’t start you down the road of seriously buying SSDs of some sort, check out this interview with Violin Memory CEO Don Basile:

Violin tunes up for billion dollar flash gig: Chris Mellor@theregister.co.uk (Saturday June 25th)

Basile said: “Larry is telling people to use flash … That’s the fundamental shift in the industry. … Customers know their competitors will adopt the technology. Will they be first, second or last in their industry to do so? … It will happen and happen relatively quickly. It’s not just speed; it’s the lowest cost of database transaction in history. [Flash] is faster and cheaper on the exact same software. It’s a no-brainer.”

Violin Memory is the current market leader in data center SSD installations for transactional and analytical processing. The boost folks get from putting their databases on Violin Memory boxes is automatic, requires very little tuning, and the results are just flat-out astounding. The ‘Larry’ quoted above is Larry Ellison of Oracle, the giant database maker. With that kind of praise I’m going to say the tipping point is near, but please read the article. Chris Mellor lays out a pretty detailed future of evolution in SSD sales and new product development. He thinks 3-bit multi-level cell NAND flash will be the tipping point, since price is still the biggest sticking point for anyone responsible for bidding on new storage system installs.

However, while price is a bigger issue for batch-oriented, off-line data warehouse analysis, for online streaming analysis SSD is already cheaper per byte per second of throughput. So depending on the typical style of database work you do, or the performance you need, SSD is putting the big-iron spinning hard disk vendors to shame. The inertia of big capital outlays and cozy vendor relationships will make it harder for some shops to adopt the new technology (“But IBM is giving us such a big discount!” … “WE are an EMC shop,” etc.). However, the competitors of the folks who own those data centers will soon eat all the low-hanging fruit that a simple cutover to SSDs affords, and the competitive advantage will swing to the early adopters.
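To make that “cheaper per byte per second” claim concrete, here is a quick back-of-the-envelope sketch in Python. The prices and throughput figures are hypothetical placeholders of my own, not numbers from Mellor’s article, so substitute your own vendor quotes:

    # Back-of-the-envelope: dollars per MB/s of sustained throughput.
    # All prices and speeds below are hypothetical placeholders, not vendor quotes.

    def cost_per_throughput(price_usd, throughput_mb_s):
        """Return dollars spent per MB/s of sustained throughput."""
        return price_usd / throughput_mb_s

    hdd = cost_per_throughput(price_usd=400, throughput_mb_s=150)    # hypothetical 15K RPM SAS drive
    ssd = cost_per_throughput(price_usd=1200, throughput_mb_s=500)   # hypothetical enterprise SSD

    print("HDD: $%.2f per MB/s" % hdd)   # ~$2.67 per MB/s
    print("SSD: $%.2f per MB/s" % ssd)   # ~$2.40 per MB/s
    # Per gigabyte the spinning disk still wins easily, but per unit of
    # throughput the flash device can already come out ahead.

The same arithmetic done per gigabyte still favors the hard drive, which is exactly the batch versus streaming split described above.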

*Late Note: Chris Mellor just followed up Monday night (June 27th) with an editorial further laying out the challenge to disk storage presented by the data center Flash Array vendors. Check it out:

What should the disk drive array vendors do, if this scenario plays out? They should buy in or develop their own all-flash array technology. Having a tier of SSD storage in a disk drive array is a good start, but customers will want the simpler choice of an all-flash array and, anyway, they are here now. Guys like Violin and Whiptail and TMS are knocking on the storage array vendors’ customer doors right now.

via All aboard the flash array train? • The Register.

Categories
cloud computers data center flash memory SSD technology

EMC’s all-flash benediction: Turbulence ahead • The Register

msystems (image via Wikipedia)

A flash array controller needs: “An architecture built from the ground up around SSD technology that sizes cache, bandwidth, and processing power to match the IOPS that SSDs provide while extending their endurance. It requires an architecture designed to take advantage of SSDs unique properties in a way that makes a scalable all-SSD storage solution cost-effective today.”

via EMC’s all-flash benediction: Turbulence ahead • The Register.

I think that storage controllers are the point of differentiation now for the SSDs coming on the market today. Similarly, the device that ties those SSDs into the computer and its OS is equally, nay more, important. I’m thinking specifically about a product like the SandForce 2000 series SSD controllers. They more or less provide a SATA or SAS interface into a small array of flash memory chips that are made to look and act like a spinning hard drive. However, the time is coming when all those transitional conventions can just go away and a clean-slate design can go forward. That’s why I’m such a big fan of PCIe-based flash storage products. I would love to see SandForce create a disk controller with one interface that speaks PCIe 2.0/3.0 and another that is open to whatever technology flash memory manufacturers are using today. Ideally, the host bus would always be a high-speed PCI Express interface, which could be licensed or designed from the ground up to speed I/O in and out of the flash memory array. On the memory-facing side it could be almost like an FPGA, made to order for the features and idiosyncrasies of whatever flash memory architecture is shipping at the time of manufacture. The same would apply for any type of error correction and over-provisioning for failed memory cells as the SSD ages through multiple read/write cycles.
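To give a feel for the kind of bookkeeping such a controller does behind whatever host interface it presents, here is a toy flash translation layer sketch in Python. It is purely my own illustration of the remapping, wear-tracking, and over-provisioning ideas mentioned above, not a description of SandForce’s or anyone else’s actual design:

    # Toy flash translation layer: logical blocks are remapped to physical
    # pages on every write, with simple wear counters and a spare pool.
    # Purely illustrative; real SSD controllers are far more sophisticated.

    class ToyFTL:
        def __init__(self, physical_pages, overprovision=0.2):
            self.mapping = {}                        # logical block -> physical page
            self.pages = {}                          # physical page -> data payload
            self.erase_counts = [0] * physical_pages
            self.free = list(range(physical_pages))  # free pool, includes spare capacity
            # Spare-pool size a real controller would hold back (not enforced in this toy).
            self.reserve = int(physical_pages * overprovision)

        def write(self, logical_block, data):
            # Flash cannot overwrite in place: retire the old page and point
            # the logical block at the least-worn free page instead.
            old = self.mapping.get(logical_block)
            if old is not None:
                self.erase_counts[old] += 1          # old page needs an erase before reuse
                self.free.append(old)
            page = min(self.free, key=lambda p: self.erase_counts[p])
            self.free.remove(page)
            self.mapping[logical_block] = page
            self.pages[page] = data
            return page                              # where the data actually landed

        def read(self, logical_block):
            return self.pages[self.mapping[logical_block]]

    ftl = ToyFTL(physical_pages=10)
    print(ftl.write(0, b"hello"))   # first write of logical block 0 lands on page 0
    print(ftl.write(0, b"world"))   # rewrite lands on a different, less-worn page
    print(ftl.read(0))              # b'world'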

In the article I quoted at the top from The Register, the big storage array vendors are attempting to market new products by adding flash memory to one component of the whole array product or, in the case of EMC, by using flash-based SSDs throughout. That more aggressive approach has seemed cost-prohibitive given how cheaply large-capacity commodity hard drives can be manufactured. But the problem is, in the market where these vendors compete, everyone pays an enormous price premium for the hard drives, storage controllers, cabling, and software that make it all work. Though the hard drive might be cheaper to manufacture, the storage array is not, and that margin is what makes storage arrays a very profitable business to be in.

As stated last week in the benchmark comparisons of high-throughput storage arrays, flash-based arrays are ‘faster’ per dollar than a well-designed, top-of-the-line hard drive based storage array from IBM. So for the segment of the industry that needs throughput more than total space, EMC will likely win out. But Texas Memory Systems (TMS) is out there too, attempting to sign OEM contracts with folks selling into the storage array market. The Register does a very good job surveying the current field of vendors and manufacturers, and considering which companies might buy a smaller outfit like TMS. But the more important trend spotted throughout the survey is the decidedly strong move toward native flash memory in the storage arrays being sold into the enterprise market. EMC has a lead that most will be following real soon now.

Categories
cloud computers data center technology wintel

Microsoft Research Watch: AI, NoSQL and Microsoft’s Big Data Future

Image representing Microsoft (via CrunchBase)

Probase is a Microsoft Research project described as an “ongoing project that focuses on knowledge acquisition and knowledge serving.” Its primary goal is to “enable machines to understand human behavior and human communication.” It can be compared to  Cyc, DBpedia or Freebase in that it is attempting to compile a massive collection of structured data that can be used to power artificial intelligence applications.

via Microsoft Research Watch: AI, NoSQL and Microsoft’s Big Data Future – ReadWriteCloud.

Who knew Microsoft was so interested in things only IBM Research’s Watson could demonstrate? The AI (artificial intelligence) work seems to be targeted at Bing search engine results. And in order to back this all up, they have to ditch their huge commitment to Microsoft SQL Server and go for a NoSQL database to hold all the unstructured data. This seems like a huge shift away from desktop and data center applications and toward something much more oriented to cloud computing, where collected data is money in the bank. This is best expressed in the story’s example of Google vs. Facebook: Google may collect data, but it is really in the business of delivering ads to eyeballs, whereas Facebook is just collecting the data and sharing it with the highest bidder. It seems Microsoft is going the Facebook route of wanting to collect and own the data rather than merely hosting other people’s data (like Google and Yahoo).

Categories
support technology

EDS mainframe goes titsup, crashes RBS cheque system • The Register

HP managers are reaping the harvest of their deep cost-cutting at EDS, in the form of a massive mainframe failure that crippled some very large clients, including the taxpayer-owned bank RBS.

via EDS mainframe goes titsup, crashes RBS cheque system • The Register.

Royal Bank of Scotland had a big datacenter outage

The Royal Bank of Scotland is a national bank and a big player in the European banking market. In data center speak, ‘five nines’ of availability is a guarantee the computer will stay up and running 99.999% of the time, which works out to roughly 5.26 minutes of downtime allowed PER YEAR. This Royal Bank of Scotland system was down 12 hours, which translates to about 99.8% availability. I think HP and EDS owe some people money for breaking the terms of their contract. It just proves outsourcing is not a cure-all for cost savings. You as the customer don’t know when they are going to start dropping head count to inflate the value of their stock on Wall Street. And when the economy soured, they dropped head count like you wouldn’t believe. What does that mean for outstanding contracts to provide data center services? It means all bets are off; you get whatever they are willing to give you. If you are employed to make and manage contracts like this for your company, be forewarned: your outsourcing company can fire everyone at the drop of a hat.
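For anyone who wants to check that arithmetic, here is the standard availability calculation, sketched in Python:

    # Availability arithmetic behind the "five nines" claim.
    MINUTES_PER_YEAR = 365 * 24 * 60                 # 525,600 minutes

    # 99.999% uptime leaves 0.001% of the year as the downtime budget.
    five_nines_budget = MINUTES_PER_YEAR * (1 - 0.99999)
    print("Five nines allows %.2f minutes of downtime per year" % five_nines_budget)   # ~5.26

    # A single 12-hour outage measured against the whole year:
    outage_minutes = 12 * 60
    availability = 1 - outage_minutes / MINUTES_PER_YEAR
    print("A 12-hour outage leaves %.3f%% availability" % (availability * 100))        # ~99.863%

That rounds down to the 99.8% figure above, and it blows through a five-nines downtime budget by more than two orders of magnitude.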