The base configuration of the original SM10000 came with 512 cores, 1 TB of memory, and a few disks; it was available at the end of July last year and cost $139,000. The new SM10000-64 uses the N570 processors, for a total of 256 chips but 512 cores, the same 1 TB of memory, eight 500 GB disks, and eight Gigabit Ethernet uplinks, for $148,000. Because there are half as many chipsets on the new box compared to the old one, it burns about 18 percent less power, too, when configured and doing real work.
I don’t want to claim that SeaMicro is taking a page out of the Apple playbook, but keeping your name in the technology news press is always a good thing. I have to say it is a blistering turnaround to release a second system board for the SM10000 server so quickly. And knowing they have some sales to back up the need for further development makes me think this company really could make a go of it. 512 CPU cores in a 10U chassis is still a record of some sort, and I hope one day to see SeaMicro publish some white papers and testimonials from current customers so we can see what killer application this machine serves in the data center.
Monday IBM announced a partnership with UK chip developer ARM to develop 14-nm chip processing technology. The news confirms the continuation of an alliance between both parties that launched back in 2008 with an overall goal to refine SoC density, routability, manufacturability, power consumption and performance.
Interesting that IBM is striking out so far ahead of the current state-of-the-art process node for silicon chips. 22nm or thereabouts is what most producers of flash memory are targeting for their next-generation product. Smaller sizes mean more chips per wafer, and higher density means storage sizes go up for both flash drives and SSDs without increasing in physical size (who wants to use brick-sized external SSDs, right?). It is also interesting that ARM is IBM’s partner for its most aggressive target yet in chip design rule sizes. But it appears that System-on-Chip (SoC) designers like ARM are now the state of the art in power- and waste-heat-optimized computing. Look at Apple’s custom A4 processor for the iPad and iPhone. That chip has some of the lowest power requirements of any application processor on the market, and it helps the iPad lead the pack for battery life (10 hours!). So maybe it does make sense to choose ARM right now, as they can benefit the most and the fastest from any shrink in the size of the wire traces used to create a microprocessor or a whole integrated system on a chip. Strength built on strength is a winning combination, and it shows that IBM and ARM have an affinity for the lower-power-consumption future of cell phone and tablet computing.
But consider this alongside the last article I wrote about Tilera’s product plans for cloud computing in a box. ARM chips could easily be the basis for much lower-power, much higher-density computing clouds. Imagine a GooglePlex-style datacenter running ARM CPUs on cookie trays instead of commodity Intel parts. That’s a lot of CPUs and a lot less power draw, both big pluses for a Google design team working on a new data center. True, legacy software concerns might overrule a switch to lower-power parts. But if the savings on electricity would offset the opportunity cost of switching to a new CPU (and having to re-compile software for the new chip), then Google would be crazy not to seize on this.
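Just for fun, here is a rough back-of-the-envelope sketch in Python of how that trade-off might pencil out; every figure below (node count, wattages, electricity price) is my own guess for illustration, not anything Google or ARM has published.

# Hypothetical comparison of fleet electricity costs; every figure here is
# an assumption for illustration, not a published number.
SERVERS = 10_000            # assumed node count
X86_WATTS = 250             # assumed average draw per x86 node, in watts
ARM_WATTS = 80              # assumed average draw per ARM node, in watts
PRICE_PER_KWH = 0.07        # assumed industrial electricity price, $/kWh
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts_per_node):
    """Yearly electricity bill for the whole fleet, in dollars."""
    kwh = SERVERS * watts_per_node / 1000 * HOURS_PER_YEAR
    return kwh * PRICE_PER_KWH

savings = annual_cost(X86_WATTS) - annual_cost(ARM_WATTS)
print(f"Estimated yearly electricity savings: ${savings:,.0f}")
# If that number beats the one-time cost of porting and re-compiling the
# software stack, the low-power parts start to pay for themselves.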
As of early 2010, Tilera had over 50 design wins for the use of its SoCs in future networking and security appliances, which was followed up by two server wins with Quanta and SGI. The company has had a dozen more design wins since then and now claims to have over 150 customers who have bought prototypes for testing or chips to put into products.
There has not been a lot of news about Tilera recently, but they are still selling products and raising funds through private investments. Their product roadmap is showing great promise as well. I want to see more of their shipping product get tested in the online technology press; I don’t care if Infoworld, Network World, Tom’s Hardware or AnandTech does it. Whether it’s security devices or actual multi-core servers, it would be cool to see Tilera compared, even if it was an apples-and-oranges type of test. On paper, the mesh network of Tilera’s multi-core CPUs is designed to set it apart from any other product currently on the market. Similarly, the ease of accessing the cores through the mesh network is meant to make running a single system image much easier, as it is distributed across all the cores almost invisibly. In short, Tilera and its next closest competitor, SeaMicro, are cloud computing in a single, solitary box.
Cloud computing for those who don’t know is an attempt to create a utility like the water system or electrical system in the town where you live. The utility has excess capacity, and what it doesn’t use it sells off to connected utility systems. So you always will have enough power to cover your immediate needs with a little in reserve for emergencies. On the days where people don’t use as much electricity you cut back on production a little or sell off the excess to someone who needs it. Now imagine that electricity is computer cycles doing additions, subtractions or longer form mathematical analysis all in parallel and scaling out to extra computer cores as needed depending on the workload. Amazon has a service they sell like this already, Microsoft too. You sign up to use their ‘compute cloud’ and load your applications, your data and just start crunching away while the meter runs. You get billed based on how much of the computing resource you used.
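To make the metering idea concrete, here is a tiny Python sketch of pay-as-you-go billing; the instance sizes and hourly rates are invented for illustration and are not Amazon’s or Microsoft’s actual prices.

# Minimal sketch of pay-as-you-go billing; rates and instance names are
# invented for illustration, not any real provider's pricing.
HOURLY_RATES = {
    "small": 0.085,   # $ per instance-hour, hypothetical
    "large": 0.340,
}

def monthly_bill(usage_hours):
    """Total the metered charges, like reading an electricity meter."""
    return sum(HOURLY_RATES[size] * hours for size, hours in usage_hours.items())

# Example: a job that burned 400 small-instance hours and 120 large-instance hours.
print(f"This month's bill: ${monthly_bill({'small': 400, 'large': 120}):.2f}")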
Nowadays, unfortunately, data centers are full of single-purpose servers doing one thing and sitting idle most of the time. This has been enough of a concern that a whole industry has cropped up around splitting those machines into thinner slices with software like VMware. Those little slivers of a real computer then take up all the idle time of that once single-purpose machine and occupy a lot more of its resources. But you still have that full-sized hog of an old desktop tower sitting in a 19-inch rack, generating heat and sucking up too much power. Now it’s time to scale down the computer again, and that’s where Tilera comes in with its multi-core, low-power, mesh-networked CPUs. And investors are rolling in on the promise of this new approach!
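As a rough sketch of the consolidation math (the server count and utilization figures below are assumptions for illustration, not measurements from any real data center):

# Rough consolidation math; all figures are assumptions for illustration.
PHYSICAL_SERVERS = 40          # single-purpose boxes, mostly idle
AVG_UTILIZATION = 0.08         # assumed 8% average CPU utilization each
HOST_TARGET = 0.65             # how hot we let a virtualization host run

# Each old box becomes a VM carrying roughly its old average load, and the
# hosts are assumed to have the same capacity as the boxes they replace.
total_load = PHYSICAL_SERVERS * AVG_UTILIZATION
hosts_needed = int(-(-total_load // HOST_TARGET))   # ceiling division
print(f"{PHYSICAL_SERVERS} servers consolidate onto ~{hosts_needed} hosts")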
Numerous potential customers, venture capital outfits, and even fabrication partners are jumping in to provide a round of funding that wasn’t even really being solicited by the company. Tilera just had people falling all over themselves writing checks to get a piece of the pie before things take off. It’s a good sign in these stagnant times for startup companies. And hopefully this will buy more time for the company’s roadmap of future CPUs, scaling up to the 200-core chip that would be the peak achievement in this quest for high-performance, low-power computing.
Last week during CES 2011, The Tech Report spotted OCZ’s Vertex 3 Pro SSD–running in a demo system–using a next-generation SandForce SF-2582 controller and a 6Gbps Serial ATA interface. OCZ demonstrated its read and write speeds by running the ATTO Disk Benchmark which clearly showed the disk hitting sustained read speeds of 550 MB/s and sustained write speeds of 525 MB/s.
Big news: test samples of the SandForce SF-2000 series flash memory controllers are being shown in products demoed at the Consumer Electronics Show. And SSDs with SATA interfaces are testing through the roof. The numbers quoted for a 6 Gb/s SATA SSD are in the 500+ MB/s range. Previously you would need to choose a PCIe-based SSD from OCZ or Fusion-io to get anywhere near that sustained speed. Combine this with the possibility of the SF-2000 being installed on future PCIe-based SSDs and there’s no telling how far the throughput will scale. If four of the Vertex drives were bound together as a RAID 0 set with SF-2000 controllers managing it, is it possible to see linear scaling of throughput? Could we see 2,000 MB/s on PCIe x8 SSD cards? And what would be the price on such a card fully configured with 1.2 TB of SSD storage? Hard to say what things may come, but just the thought of being able to buy retail versions of these makes me think a paradigm shift is in the works that neither Intel nor Microsoft are really thinking about right now.
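Here is the back-of-the-envelope arithmetic behind that speculation, assuming ideal linear RAID 0 scaling across a hypothetical four-drive stripe and ignoring controller and bus overhead:

# Ideal linear RAID 0 scaling, ignoring controller and bus overhead.
DRIVE_READ_MBS = 550      # sustained read quoted for one Vertex 3 Pro
DRIVE_WRITE_MBS = 525     # sustained write quoted for one Vertex 3 Pro
DRIVES = 4                # hypothetical four-drive stripe set

print(f"Ideal striped read:  {DRIVES * DRIVE_READ_MBS} MB/s")    # 2200 MB/s
print(f"Ideal striped write: {DRIVES * DRIVE_WRITE_MBS} MB/s")   # 2100 MB/s

# A PCIe 2.0 x8 slot offers roughly 8 lanes x 500 MB/s = 4000 MB/s per
# direction, so the slot itself would not be the bottleneck at ~2,000 MB/s.
print(f"PCIe 2.0 x8 ceiling: {8 * 500} MB/s")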
One comment on this article as posted on the original website, Tom’s Hardware, included the observation that the speeds quoted for this SATA 6 Gb/s drive are approaching the memory bandwidth of several-generations-old PC-133 SDRAM. And as I have said previously, I still have an old first-generation Titanium PowerBook from Apple that uses that same PC-133 memory standard. So given that SSDs are closing in on the speed of somewhat older main memory, I can only say we are fast approaching a paradigm shift in desktop and enterprise computing. I dub thee the All Solid State (ASS) era, where no magnetic or rotating mechanical media enter into the equation. We run on silicon semiconductors from top to bottom, no Giant Magneto-Resistive technology necessary. Even our removable media are flash-memory-based USB drives we put in our pockets and walk around with on key chains.
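A quick sanity check on that comparison, using the standard theoretical peak for PC-133 SDRAM (a 133 MHz clock on a 64-bit bus) against the sustained read speed quoted above:

# Theoretical peak of PC-133 SDRAM versus the quoted SSD read speed.
BUS_MHZ = 133             # PC-133 clock rate
BUS_WIDTH_BYTES = 8       # 64-bit data bus
pc133_peak = BUS_MHZ * BUS_WIDTH_BYTES    # ~1,064 MB/s theoretical

SSD_READ_MBS = 550
print(f"PC-133 peak bandwidth: {pc133_peak} MB/s")
print(f"Vertex 3 Pro read:     {SSD_READ_MBS} MB/s "
      f"({SSD_READ_MBS / pc133_peak:.0%} of PC-133)")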
The next wave of high end consumer SSDs will begin shipping this month, and I believe Corsair may be the first out the gate. Micron will follow shortly with its C400 and then we’ll likely see a third generation offering from Intel before eventually getting final hardware based on SandForce’s SF-2000 controllers in May.
This just in from the Consumer Electronics Show in Las Vegas, via AnandTech: the SandForce SF-2000 is scheduled to drop in May of this year. Get ready, as you will see a huge upsurge in releases of new SSD products attempting to best one another in the sustained read/write category. And I’m not talking just SSDs, but PCIe-based cards with SSD RAIDs embedded on them, communicating through an eight-lane (x8) second-generation PCI Express interface. I’m going to take a wild guess and say you will see products fitting this description easily hitting 700 to 900 MB/s sustained read and write. Prices will be at the top end of the scale, as even the current shipping products all fall into the $1,200 to $1,500 range. Expect the top end to be LSI-based products for $15,000, or third-party OEM manufacturers who might be willing to sell a fully configured 1 TB card for maybe ~$2,000. After the SF-2000 is released, I don’t know how long it will take for designers to prototype and release to manufacturing any new designs incorporating this top-of-the-line SSD flash memory controller. It’s possible that as the top end continues to increase in performance, current shipping products might start to fall in price to clear out the older, lower-performance designs.
Being a student of the history of technology, I know that the silicon semiconductor industry has been able to scale production according to Moore’s Law. However, apart from the advances in how small the transistors can be made (the real basis of Moore’s Law), the other scaling factor has been the size of the wafers. Back in the old days, silicon crystals had to be drawn out from a furnace at a very even, steady rate, which forced them to be thin cylinders 1-2″ in diameter. However, as techniques improved (including a neat trick where the crystal was re-melted to purify it), the crystals increased in diameter to a nice 4″ size that helped bring down costs. Then came the big migration to 6″ wafers, then 8″, and now the 300mm wafer (roughly 12″). Now Intel is still on its freight train to further bring down costs by moving the wafers up to the next largest size (450mm) and is still shrinking the parts (down to an unbelievably skinny 22nm). As the wafers continue to grow, the cost of processing equipment goes up, and the cost of the whole production facility will too. The last big price point for a new production fab for Intel was always $2 billion. There may be multiple production lines in that fab, but you always needed that money up front in order to be competitive. And Intel was more than competitive: it could put three lines into production in three years (blowing the competition out of the water for a while) and make things very difficult for the rest of the industry.
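The wafer-size argument is mostly geometry: the number of candidate chips per wafer scales with wafer area. A quick sketch (ignoring edge loss and yield) shows why the 450mm transition matters:

# First-order wafer economics: candidate die per wafer scale with area.
# Edge loss and yield are ignored; this is only the geometric ratio.
import math

def wafer_area_mm2(diameter_mm):
    return math.pi * (diameter_mm / 2) ** 2

for diameter in (100, 150, 200, 300, 450):   # 4", 6", 8", 300mm, 450mm
    ratio = wafer_area_mm2(diameter) / wafer_area_mm2(300)
    print(f"{diameter:>3} mm wafer: {ratio:.2f}x the area of a 300 mm wafer")
# The 450mm wafer comes out at 2.25x the area of a 300mm wafer, which is
# where the cost-per-chip advantage of the next transition comes from.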
Where things will really shake up is in the flash memory production lines. The design rules for current flash memory chips at Intel are right around 22nm. Intel and Samsung are both trying to shrink the feature sizes of all the circuits on their single- and multi-level cell flash memory chips. Add to this the stacking of chips into super sandwiches, and you find they can glue together eight of their 8 GB chips, making a single, very thin 64 GB memory package. This package is then mated to a memory controller and voila, the iPhone suddenly hits 64 GB of storage for all your apps and MP4s from iTunes. Similarly, on the hard drive end of the scale, things will also wildly improve. Solid state disk capacities should creep further upwards (beyond the top-of-the-line 512 GB SSDs), as will the PCI Express-based storage devices (probably doubling in capacity to 2 TB), after 450mm wafers take hold across the semiconductor industry. So it’s going to be a big deal if Chinese, Japanese and American companies get on the large silicon wafer bandwagon.
Hitachi GST flash drives are hitting the streets and, at last, ending STEC’s monopoly in the supply of Fibre Channel interface SSDs.
EMC startled the enterprise storage array world by embracing STEC SSDs (solid state drives) in its arrays last year as a way of dramatically lowering the latency for access to the most important data in the arrays. It has subsequently delivered FAST automated data movement across different tiers of storage in its arrays, ensuring that sysadmins don’t have to be involved in managing data movement at a tedious and time-consuming level.
In the computer world, the data center is often the measure of all things in terms of speed and performance. Time was, the disk drive interface of choice was SCSI, and then its higher-speed evolutions, Fast/Wide and Ultra SCSI. But then a new interface hit that used fibre optic cables to move storage out of the computer box into a separate box that managed all the hard drives in one spot, and this was called a storage array. The new connector/cable combo was named Fibre Channel, and it was fast, fast, fast. It became the brand name for all vendors trying to sell more and more hard drives into the data center. Newer, evolved versions of Fibre Channel came to market, each one faster than the last. And eventually Fibre Channel was built right into the hard drives themselves, so that you could be assured the speed was native Fibre Channel from one end to the other. But Fibre Channel has always been prohibitively expensive, even though a lot of it has been sold over the years. Volume has not brought down the price of Fibre Channel one bit in the time that it has been the most widely deployed enterprise disk drive interface. A few competitors have cropped up: the old Parallel ATA and Serial ATA drives from the desktop market have attempted to compete, and a newer SCSI drive interface called Serial Attached SCSI is now seeing some wider acceptance. However, the old guard, who are mentally and emotionally attached to their favorite Fibre Channel drive interface, are not about to give up, even as spinning disk speeds have been trumped by the almighty flash-memory-based solid state drive (SSD). And a company named STEC knew it could sell a lot of SSDs if only someone would put a Fibre Channel interface on the circuit board, allaying the Fibre Channel adherents’ fears that they would need to evolve and change.
Yes, it’s true: STEC was the only game in town for what I consider the legacy Fibre Channel interface used by old-line storage array manufacturers. They have sold tons of their drives to third parties who package up their wares into turnkey ‘Enterprise’ solutions for drive arrays and cache controllers (all of which just speed things up). And being the first-est with the most-est is a good business strategy until the second source for your product comes online. So it’s always a race to sell as much as you can until the deadline hits and everyone rushes to the second source. Here now is Hitachi’s announcement that it is manufacturing an SSD with a Fibre Channel interface onboard for enterprise data center customers.
Tuesday LSI Corp announced the WarpDrive SLP-300 PCIe-based acceleration card, offering 300 GB of SLC solid state storage and performance up to 240,000 sustained IOPS. It also delivers I/O performance equal to hundreds of mechanical hard drives while consuming less than 25W of power–all for a meaty $11,500 USD.
This is the cost of entry for anyone working on an enterprise-level project. You cannot participate unless you can cross the threshold of a PCIe card costing $11,500 USD. This is the first time I have seen an actual price quote on one of these cards that swims in the data center consulting and provisioning market. Fusion-io cannot be too far off of this price when its cards are not sold as a full package as part of a larger project RFP. I am somewhat stunned at the price premium, but LSI is a top engineering firm, and they can definitely design their own custom silicon to get the top speed out of just about any commercial off-the-shelf flash memory chips. I am impressed they went with the eight-lane (x8) PCI Express interface; I’m guessing that’s a requirement for more server owners, whereas x4 is for the desktop market. Still, I don’t see any x16 interfaces as yet (that’s the interface most desktops use for their graphics cards from AMD and nVidia). One more thing that makes this a premium offering is the choice of Single-Level Cell (SLC) flash memory chips for the ultimate in speed and reliability, along with the Serial Attached SCSI (SAS) interface onboard the PCIe card itself. Desktop models opt for SATA-to-PCIe bridge chips, forcing you to translate and re-order your data multiple times. I have a feeling SAS bridges to PCIe at full x8 interface speeds, and that is the key to getting past 1,000 MB/s for writes and reads. This part is quoted as reaching the range of ~1,400 MB/s, and other than some very expensive turnkey boxes from manufacturers like Violin, this is a great user-installable part for getting the benefit of a really fast SSD array on a PCIe card.
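For reference, here is the rough per-direction bandwidth arithmetic that makes the x8 slot matter, using the standard figures of 250 MB/s per lane for PCIe 1.x and 500 MB/s per lane for PCIe 2.0 after 8b/10b encoding (real cards lose a bit more to protocol overhead):

# Approximate usable PCIe bandwidth per direction, per generation and lane count.
PER_LANE_MBS = {"PCIe 1.x": 250, "PCIe 2.0": 500}

for gen, lane_mbs in PER_LANE_MBS.items():
    for lanes in (4, 8, 16):
        print(f"{gen} x{lanes}: ~{lanes * lane_mbs} MB/s")

# The WarpDrive's quoted ~1,400 MB/s fits inside a PCIe 1.x x8 or a
# PCIe 2.0 x4 link, but an x8 Gen 2 slot leaves plenty of headroom.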
EC: Here are the things I would say in support of that. One of them, which I think is really important—and this is true especially of the elementary schools—is that training in drawing is teaching people to observe. PH: Which is what you want in scientists, right?
EC: That’s right. Or doctors or lawyers. You want people who are observant. I think most people were not trained under artists, so they have an incorrect image of what an artist actually does. There’s a complete disconnect with what they do. But there are places where this understanding comes across, such as in that famous book by Betty Edwards [Drawing on the Right Side of the Brain].
This interview is with the computer scientist Ed Catmull. In the time since Ed Catmull entered the field, we’ve gone from computers crunching numbers like desktop calculators to computers producing full 3D animated films. Ed Catmull’s single most important goal was to create an animated film using a computer. He eventually accomplished that and more once he helped form Pixar. All of his research and academic work was focused on that one goal.
I’m always surprised to see what references or influences people quote in interviews. In fact, I am really encouraged. It was about 1988 or so when I took a copy of Betty Edwards’s book my mom had and started reading it and doing some of the exercises in it. Stranger still, I went back to college and majored in art (not drawing, but photography). So I think I understand exactly what Ed Catmull means when he talks about being observant. In every job I’ve had, computer-related or otherwise, that ability to be observant just doesn’t exist in a large number of people. Eventually people begin to ask me: how do you know all this stuff, and when did you learn it? Most times, the things they are most impressed by are things like noticing something and trying a different strategy when attempting to fix a problem. The proof is, I can do this with things I am unfamiliar with and usually make some headway towards fixing them. Whether the thing is mechanical or computer-related doesn’t matter. I make good guesses, and it’s not because I’m an expert in anything; I merely notice things. That’s all it is.
So maybe everyone should read and work through Betty Edwards’s book Drawing on the Right Side of the Brain. If nothing else it might make you feel a little dislocated and uncomfortable. It might shake you up and make you question some preconceived notions about yourself, like the feeling that you can’t draw or that you are not good at art. I think with practice anyone can draw, and with practice anyone can become observant.
Apple’s Xserve was born in the spring of 2002 and is scheduled to die in the winter of 2011, and I now step up before its mourners to speak the eulogy for Apple’s maligned and misunderstood server product.
Chuck Goolsbee’s eulogy is spot on, and every point rings true even in my limited experience. I’ve purchased two different Xserves since they were introduced. One is a second-generation G4 model, the other a 2006 Intel model (thankfully I skipped the G5 altogether). Other than a weird bug in the Intel-based Xserve (an odd blue video screen), there have been no bumps or quirks to report. I agree that the form factor of the housing is way too long. Even in the rack I used (a discarded Sun Microsystems unit), the thing was really inelegant. The drive bays are also a sore point for me. I dearly wanted to re-arrange, reconfigure, and upgrade the drive bays on both the old and newer Xserve, but the expense of acquiring new units was prohibitive at best, and they went out of manufacture very quickly after being introduced. If you neglected to buy your Xserve fully configured with the maximum storage available when it shipped, you were more or less left to fend for yourself. You could troll eBay and bulletin boards to score a bona fide Apple drive bay, but the supply was so limited that it drove up prices and created a black market. The XRaid didn’t help things either, as drive bays were not consistently swappable between the Xserve and the XRaid box. Given the limited time most sysadmins have to research purchases like this for upgrading an existing machine, it was a total disaster: a big fail, and an unsurprising one.
I will continue to run my Xserve units until the drives or power supplies fail. It could happen any day, at any time, and hopefully I will have sufficient warning to get a new Mac mini server in to replace them. Until then I, along with Chuck Goolsbee and the rest of the Xserve sysadmins, will wonder what could have been.