The main categories here are SF-2100, SF-2200, SF-2500 and SF-2600. The 2500/2600 parts are focused on the enterprise: they're put through more aggressive testing, their firmware supports enterprise-specific features, and they support the use of a supercap to minimize data loss in the event of a power failure. The difference between the SF-2582 and the SF-2682 boils down to one feature: support for non-512B sectors. Whether or not you need that really depends on the type of system the drive is going into. Some SANs demand non-512B sectors, in which case the SF-2682 is the right choice.
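If you're not sure what sector size your target system expects, a quick look on a Linux host will show what an existing drive reports. This is just a minimal sketch; the device name "sda" is an example, not a recommendation.

```python
# Minimal sketch: read the sector sizes a drive reports via Linux sysfs.
from pathlib import Path

dev = "sda"  # hypothetical device name; substitute your own
queue = Path(f"/sys/block/{dev}/queue")

logical = (queue / "logical_block_size").read_text().strip()
physical = (queue / "physical_block_size").read_text().strip()

print(f"{dev}: logical sector {logical} bytes, physical sector {physical} bytes")
# Enterprise SANs sometimes want sector sizes other than 512 bytes, which is
# the kind of detail the SF-2582 vs. SF-2682 choice hinges on.
```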
The cat is out of the bag: OCZ has not one but two SandForce SF-2000 series based SSDs out on the market now. And performance-wise, the consumer-level product slightly outperforms the enterprise-level product at a lower cost. These are interesting times indeed. The newer SandForce drive controllers are so fast that, over a SATA 6Gb/s interface, you get speeds close to what could previously only be purchased in a PCIe-based SSD array for $1,200 or so. The economics of this are getting topsy-turvy, with new generations of single drives outdistancing previous top-end products (I'm talking about you, Fusion-io, and you, Violin Memory). SandForce has become the drive controller for the rest of us, and with speeds like these (500MB/sec read and write) what more could you possibly ask for? I would say the final bottleneck on the desktop/laptop computer is quickly vanishing, and we'll have to wait and see just how much faster SSDs become. My suspicion is that the motherboard BIOS will now slowly creep up to be the last link in the chain of noticeable computer speed. Once we get a full range of UEFI motherboards and fully optimized embedded software to configure them, we will have, theoretically, the fastest personal computers one could possibly design.
Finally, despite Apple’s dropping of the Xserve line (see “A Eulogy for the Xserve: May It Rack in Peace,” 8 November 2010), Mac OS X Server will make the transition to Lion, with Apple promising that the new version will make setting up a server easier than ever. That’s in part because Lion Server will be built directly into Lion, with software that guides you through configuring the Mac as a server. Also, a new Profile Manager will add support for setting up and managing Mac OS X Lion, iPhone, iPad, and iPod touch devices. Wiki Server 3 will offer improved navigation and a new Page Editor. And Lion Server’s WebDAV support will provide iPad users the ability to access, copy, and share server-based documents.
Here's to seeing a great democratization of OS X Server once and for all. While Apple did deserve to make some extra cash on a server version of the OS, I'm sure it had very little impact on their sales overall (positive or negative). However, bundling it with the base-level OS and letting it be unlocked (for money or for free) can only be a good thing. Where I work, I already run a single-CPU, 4-core Intel Xserve. I think I should buy some cheap RAM to max out the memory and upgrade this summer to OS X Lion Server.
Professional network LinkedIn has just introduced the beta launch of a new feature, LinkedIn Skills, a way for you to search for particular skills and expertise and, of course, showcase your own. In LinkedIn's words, it is "a whole new way to understand the landscape of skills & expertise, who has them, and how it's changing over time."
It may not seem that important at first, especially if people don't keep their LinkedIn profiles up to date. However, for the large number of 'new' users actively seeking positions in the job market, I'm hoping this data will be more useful. It might be worth following over time to see what demand there is for particular skills in the marketplace. That is the promise, at least. My concern, though, is that just as grades have inflated over time at most U.S. universities, skills will be overstated, lied about, and ultimately untrustworthy as people try to compete with one another on LinkedIn.
Interesting indeed: it appears Apple is letting supplies of the iPod Classic run low. No word yet as to why, but there could be a number of reasons, as speculated in this article. Most technology news websites understand the divide between the iPhone/iPod touch operating system and all the old legacy iPod devices (an embedded OS that only runs the device itself). Apple would like to consolidate its consumer product development efforts by slowly winnowing out non-iOS-based iPods. However, due to the hardware requirements demanded by iOS, Apple will be hard-pressed to jam such a full-featured bit of software into the iPod nano and iPod shuffle. So whither the old click-wheel iPod empire?
Seeing this announcement reminded me a little of the old IBM Microdrive: a tiny spinning disk, with a one-inch platter, that fit into a CompactFlash Type II form factor. Those drives held 340MB at the time, an astoundingly dense storage format that digital photographers gravitated to very quickly. The Microdrive was eventually improved to around 1GByte per drive in the same small form factor. But the market for this storage dried up as smaller and smaller cameras became available with larger and larger amounts of internal storage and slots for removable media like Sony's Memory Stick or the SD Card format. The Microdrive was also impeded by a very high cost per MByte versus other available storage by the end of its useful lifespan.
But no one knew what new, innovative products might hit the market. Laptop manufacturers continued to improve on their expansion bus, known as PCMCIA, then PC Card, and eventually CardBus. The idea was you could plug any kind of device you wanted into that expansion bus to connect to a dial-up network, a wired Ethernet network, or a wireless network. CardBus was 32-bit clean and designed to be as close to the desktop PCI expansion bus as possible. Folks like Toshiba were making small hard drives that would fit the tiny dimensions of that slot, containing all the drive electronics within the CardBus card itself. Storage size improved as the hard drive market itself improved the density of its larger 2.5″ and 3.5″ desktop hard drive products.
I remember the first 5GByte CardBus hard drive and marveling at how far the folks at Toshiba and Samsung had outdistanced IBM. It was followed soon after by a 10GByte drive. However, just as we were wondering how cool this was, Apple created a copy of a product being popularized under the Rio brand: a new kind of hand-held music player that could play back .mp3 audio files. It could hold 5GBytes of music (compared to the 128MBytes and 512MBytes of most top-of-the-line Rio products at the time). It had a slick, very easy-to-navigate interface with a spinning wheel you could click down on with your thumb. Yes, it was the first-generation iPod, and it demanded a large quantity of those itty-bitty hard drives Samsung and Toshiba were bringing to market.
Each year storage density would increase and a new generation of drives would arrive. Each year a new iPod would hit the market taking advantage of the new hard drives. The numbers seemed to double very quickly: 20GB, 30GB (the first 'video'-capable iPod), 40GB, 60GB, 120GB, and finally today's iPod Classic at a whopping 160GBytes of storage! And then came the great freeze: the slowdown and the transition to flash-memory-based iPods, which were solid state. No moving parts, no chance of mechanical failure, no loss of data, and speeds unmatched by any hard drive of any size currently on the market. The flash storage transition also meant lower power requirements, longer battery life, and, for the first time, the real option of marrying a cell phone with your iPod (I do know there was an abortive attempt to do this on a smaller scale with Motorola phones at Cingular). The first two options were 4GB and 8GB iPhones using solid-state flash memory. So whither the iPod Classic?
The iPod Classic is still on the market for those wishing to pay slightly less than the price of an iPod touch. You get a much larger amount of total storage (for both video and audio), but things have stayed put at 160GBytes for a very long time now. Manufacturers like Toshiba hadn't come out with any new product, seeing the end in sight for the small 1.8″ hard drive. Samsung dropped its 1.8″ hard drives altogether, seeing where Apple was going with its product plans. So I'm both surprised and slightly happy to see Toshiba soldier onward and bring out a new product. I'm thinking Apple should really do a product refresh on the iPod Classic. They could also add iOS as a means of up-scaling and up-marketing the device to people who cannot afford the iPod touch, leaving the price right where it is today.
Monday IBM announced a partnership with UK chip developer ARM to develop 14-nm chip processing technology. The news confirms the continuation of an alliance between both parties that launched back in 2008 with an overall goal to refine SoC density, routability, manufacturability, power consumption and performance.
Interesting that IBM is striking out so far beyond the current state-of-the-art process node for silicon chips. 22nm or thereabouts is what most producers of flash memory are targeting for their next-generation products. Smaller feature sizes mean more chips per wafer; higher density means storage sizes go up for both flash drives and SSDs without any increase in physical size (who wants to use a brick-sized external SSD, right?). It is also interesting that ARM is IBM's partner for its farthest target yet in chip design-rule sizes. It appears that system-on-chip (SoC) designers like ARM are now the state-of-the-art producers of power- and waste-heat-optimized computing. Look at Apple's custom A4 processor for the iPad and iPhone: that chip has lower power requirements than just about anything comparable on the market, and it is currently leading the pack for battery life in the iPad (10 hours!). So maybe it does make sense to choose ARM right now, as they can benefit the most, and the fastest, from any shrink in the size of the wire traces used to create a microprocessor or a whole integrated system on a chip. Strength built on strength is a winning combination, and it shows that IBM and ARM have an affinity for the lower-power future of cell phone and tablet computing.
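To see why a shrink matters so much, here is a rough back-of-the-envelope calculation. It assumes ideal area scaling with the square of the feature size, which real processes only approximate, so treat the numbers as illustrative.

```python
# Ideal density gain from a 22nm -> 14nm shrink, assuming die area scales
# with the square of the feature size (real processes only approximate this).

old_node_nm = 22.0
new_node_nm = 14.0

density_gain = (old_node_nm / new_node_nm) ** 2
print(f"Ideal density gain: {density_gain:.2f}x")   # roughly 2.5x

# Applied to a hypothetical 64Gbit flash die, the same-sized die could hold:
print(f"About {64 * density_gain:.0f} Gbit in the same footprint")
```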
But consider also the last article I wrote, about Tilera's product plans for cloud computing in a box. ARM chips could easily be the basis for much-lower-power, much-higher-density computing clouds. Imagine a Googleplex-style datacenter running ARM CPUs on cookie trays instead of commodity Intel parts. That's a lot of CPUs and a lot less power draw, both big pluses for a Google design team working on a new data center. True, legacy software concerns might overrule a switch to lower-power parts. But if the savings on electricity would offset the opportunity cost of switching to a new CPU (and having to recompile software for the new chip), then Google would be crazy not to seize on this.
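Whether that trade is worth it comes down to a simple break-even question. Below is a minimal sketch with entirely hypothetical numbers: the fleet size, per-server power delta, electricity price, and porting cost are all assumptions, not measurements.

```python
# Break-even sketch (hypothetical numbers): how long would electricity savings
# take to pay back the cost of porting software to a lower-power server fleet?

servers = 10_000
watts_saved_per_server = 100          # assumed power delta per server
electricity_cost_per_kwh = 0.10       # assumed $/kWh
migration_cost = 5_000_000            # assumed one-time porting/recompile cost, $

kwh_saved_per_year = servers * watts_saved_per_server * 24 * 365 / 1000
savings_per_year = kwh_saved_per_year * electricity_cost_per_kwh
payback_years = migration_cost / savings_per_year

print(f"Savings per year: ${savings_per_year:,.0f}")
print(f"Payback period: {payback_years:.1f} years")
```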
As of early 2010, Tilera had over 50 design wins for the use of its SoCs in future networking and security appliances, which was followed up by two server wins with Quanta and SGI. The company has had a dozen more design wins since then and now claims to have over 150 customers who have bought prototypes for testing or chips to put into products.
There hasn't been a lot of news about Tilera recently, but they are still selling products and raising funds through private investments. Their product roadmap is showing great promise as well. I want to see more of their shipping products get tested in the online technology press; I don't care whether InfoWorld, Network World, Tom's Hardware, or AnandTech does it. Whether it's security devices or actual multi-core servers, it would be cool to see Tilera compared, even if it were an apples-and-oranges type of test. On paper, the mesh network of Tilera's multi-core CPUs is designed to set it apart from any other product currently on the market. Similarly, the ease of accessing the cores through the mesh network is meant to make running a single system image much easier, as it is distributed across all the cores almost invisibly. In a word, Tilera and its next-closest competitor, SeaMicro, are cloud computing in a single, solitary box.
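To give a feel for why a mesh scales, here is a toy model of message routing on a grid of cores. It only illustrates dimension-ordered routing on a generic 2D mesh, not Tilera's actual iMesh implementation, and the 8x8 grid size is just an example.

```python
# Toy model of a 2D mesh interconnect: cores sit on a grid and a message
# travels |dx| + |dy| hops under simple X-then-Y (dimension-ordered) routing.
# Illustrative only; not Tilera's actual iMesh implementation.

def mesh_hops(src: tuple[int, int], dst: tuple[int, int]) -> int:
    """Hop count between two cores on a 2D mesh with X-then-Y routing."""
    return abs(dst[0] - src[0]) + abs(dst[1] - src[1])

grid = 8  # a hypothetical 8x8, 64-core layout
print(mesh_hops((0, 0), (grid - 1, grid - 1)))   # worst case: 14 hops
print(mesh_hops((3, 3), (3, 4)))                 # neighbor: 1 hop

# Worst-case hops grow with the grid's side length, not the core count,
# which is why mesh designs keep scaling where a shared bus bogs down.
```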
Cloud computing, for those who don't know, is an attempt to create a utility like the water or electrical system in the town where you live. The utility has excess capacity, and what it doesn't use it sells off to connected utility systems. So you will always have enough power to cover your immediate needs, with a little in reserve for emergencies. On days when people don't use as much electricity, you cut back on production a little or sell off the excess to someone who needs it. Now imagine that the electricity is compute cycles: additions, subtractions, or longer-form mathematical analysis, all running in parallel and scaling out to extra cores as needed, depending on the workload. Amazon already sells a service like this; Microsoft does too. You sign up to use their 'compute cloud', load your applications and your data, and just start crunching away while the meter runs. You get billed based on how much of the computing resource you used.
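The billing side of that utility model is simple metering: usage times rate. Here is a toy sketch with made-up prices, not any provider's actual rate card.

```python
# Toy metered-billing sketch (hypothetical rates, not real pricing):
# the "utility" model boils down to usage multiplied by a rate.

rate_per_instance_hour = 0.08     # assumed $/hour for one virtual machine
rate_per_gb_month = 0.10          # assumed $/GB-month of storage

instances = 20
hours_run = 300                   # hours the instances ran this month
storage_gb = 500

bill = instances * hours_run * rate_per_instance_hour + storage_gb * rate_per_gb_month
print(f"Monthly bill: ${bill:,.2f}")   # pay only for what the meter recorded
```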
Nowadays, unfortunately, data centers are full of single-purpose servers doing one thing and sitting idle most of the time. This has been such a concern that a whole industry has cropped up around splitting those machines into thinner slices with software like VMware. Those little slivers of a real computer then soak up the idle time of that once single-purpose machine and occupy a lot more of its resources. But you still have that full-sized hog of an old desktop tower sitting in a 19-inch rack, generating heat and sucking up too much power. Now it's time to scale down the computer again, and that's where Tilera comes in with its multi-core, low-power, mesh-networked CPUs. And investment partners are rolling in as a result of the promise of this new approach!
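The virtualization math behind that industry is straightforward. A minimal sketch, assuming a 10% average utilization on the old single-purpose boxes and a 70% target on the shared host (both numbers invented for illustration):

```python
# Consolidation sketch with assumed utilization figures: how many mostly-idle
# single-purpose servers can be packed onto one virtualized host?

avg_utilization = 0.10        # assumed: each legacy server is busy ~10% of the time
target_utilization = 0.70     # assumed: how hard we let the shared host run

vms_per_host = int(target_utilization / avg_utilization)
print(f"Roughly {vms_per_host} virtual machines per physical host")  # ~7

legacy_servers = 100
hosts_needed = -(-legacy_servers // vms_per_host)   # ceiling division
print(f"{legacy_servers} old boxes shrink to about {hosts_needed} hosts")
```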
Numerous potential customers, venture capital outfits, and even fabrication partners are jumping in to provide a round of funding that wasn't even really being solicited by the company. Tilera just had people falling all over themselves writing checks to get a piece of the pie before things take off. It's a good sign in these stagnant times for startup companies. And hopefully this will buy more time for the company's roadmap of future CPUs, scaling up to the 200-core chip that would be the peak achievement in this quest for high-performance, low-power computing.
I’ve decided I want to blog more. Rather than just thinking about doing it, I’m starting right now. I will be posting on this blog once a week for all of 2011.
I know it won't be easy, but it might be fun, inspiring, awesome, and wonderful. Therefore I'm promising to make use of The DailyPost, and the community of other bloggers with similar goals, to help me along the way, including asking for help when I need it and encouraging others when I can.
If you already read my blog, I hope you’ll encourage me with comments and likes, and good will along the way.