Category: computers

Interesting pre-announced products that may or may not ship, and may or may not have an impact on desktop/network computing

  • Tilera preps 100-core chips for network gear • The Register

    [Image: One Blue Gene/L node board, via Wikipedia]

    Upstart multicore chip maker Tilera is using the Interop networking trade show as the coming out party for its long-awaited Tile-Gx series of processors, which top out at 100 cores on a single die.

    via Tilera preps 100-core chips for network gear • The Register.

    A further update on Tilera’s product launches, timed for the old Interop trade show for network switch and infrastructure vendors in Las Vegas. Tilera has tweaked the chip packaging of its CPUs and is now marketing different CPUs to different industries. This family of Tilera chips is called the 8000 series and will be followed by a next generation of 3000 and 5000 series chips. Projections are that by the time the Tilera 3000 series is released, chip density will be sufficient to pack upwards of 20,000 Tilera CPU cores into a single 42-unit-tall, 19-inch-wide server rack, with a future revision possibly doubling that number of cores to 40,000. That road map is very aggressive but promising, and it shows there is a lot of scaling possible in the Tilera product line over time. Hopefully these plans will lead to some big customers signing up to use Tilera in shipping products in the near future.

    What I’m most interested in knowing is how the Quanta server currently shipping with the Tilera CPU benchmarks against an Intel Atom based or ARM based server on a generic web-server benchmark. While white papers and press releases make regular appearances on the technology weblogs, very few people have attempted to get sample product and run it through its paces. I suspect, though I cannot confirm, that potential customers are signing non-disclosure agreements and receiving shipping samples to test in their data centers before making any big purchases. I also suspect that, as is often the case, the applications for these low-power, massively parallel, dense servers are very narrow, not unlike those for a supercomputer. IBM’s Blue Gene supercomputers, for example, are built around what is essentially a PowerPC architecture with some extra optimizations and streamlining to make very specific workloads and algorithms run faster. In a supercomputing environment you really need to tune your software to get the most out of the huge up-front investment in the ‘iron’ you got from the manufacturer. There’s not a lot of value-add available in that scientific and supercomputing environment; you more or less roll your own solution, or beg, borrow or steal it from a colleague at another institution using the same architecture as you. So the Quanta S2Q server using the Tilera chip is similarly likely to be a one-off or niche product, but a very valuable one to those who purchase it. Tilera will need a software partner to really pump up the volumes of shipping product if it expects a wider market for its chips.
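
    For what it’s worth, here is a minimal sketch of the kind of generic web-server benchmark I have in mind, using ApacheBench against each box serving the same static page. The hostnames are placeholders, and nothing like this has been published by Tilera or Quanta as far as I know.

        # Hypothetical hosts; both serve the identical static index.html
        ab -n 100000 -c 256 http://tilera-box.example.com/index.html
        ab -n 100000 -c 256 http://atom-box.example.com/index.html
        # Compare the "Requests per second" lines from the two runs, then
        # divide by measured watts at the wall for a crude perf-per-watt figure.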

    But using a Tilera processor in a network switch or a ‘security’ device or some other inspection engine might prove very lucrative. I’m thinking of your typical warrantless wire-tapping application, like the NSA’s attempt to scoop up and analyze all the internet traffic at large carriers around the U.S. Analyzing data traffic in real time spares an outfit like the NSA from capturing and moving around large volumes of useless data just to have it analyzed at a central location. Instead, localized computing nodes can do the initial inspection in real time, keying on phrases, words, numbers, etc., which then trigger the capture process and send the tagged data back to the NSA for further analysis. Doing that in parallel on a 100-core CPU would be very advantageous, in that a much smaller footprint would be required in the secret closets the NSA maintains at those big carriers’ operations centers. Smaller racks and less power make for a much less obvious presence in the data center.

  • TMS flash array blows Big Blue away • The Register

    [Image: memory collection, by teclasorg via Flickr]

    Texas Memory Systems has absolutely creamed the SPC-1 storage benchmark with a system that comfortably exceeds the current record-holding IBM system at a cost per transaction of 95 per cent less.

    via TMS flash array blows Big Blue away • The Register.

    One might ask a simple question: how is this even possible, given the cost of the storage media involved? How did a flash-based storage array, the Texas Memory Systems RamSan, beat a huge pile of IBM hard drives all networked and bound together in a massive storage system? And how did it do it for less? Woe be to those unschooled in the ways of the Per-feshunal Data Center purchasing dept. You cannot enter the halls of the big players unless you’ve got million-dollar budgets for big iron servers and big iron storage. Fibre Channel and InfiniBand rule the day when it comes to big data throughput. All those spinning drives are accessed simultaneously as if each one held one slice of the data you were asking for, each one delivering up its 1/10 of 1% of the total file you were trying to retrieve. The resulting speed makes it look like one hard drive that is a hundred times faster than your desktop computer’s hard drive, all through the smoke and mirrors of the storage controllers and the software that makes them go. But what if, just what if, we decided to take flash memory chips and knit them together with a storage controller that made them appear to be just like a big iron storage system? Well, since flash obviously costs something more than $1 per gigabyte and disk drives cost somewhere less than 10 cents per gigabyte, the flash storage loses, right?

    In terms of total storage capacity, flash will lose for quite some time if you are talking about holding everything on disk all at the same time. But that is not what’s being benchmarked here at all. No, in fact what is being benchmarked is the rate at which input (writing of data) and output (reading of data) is done through the storage controllers. IOPS measure the total number of completed reads/writes done in a given amount of time. Prior to this latest example, the RamSan-630, IBM was king of the mountain with its huge striped Fibre Channel arrays all linked up through its own storage array controllers. The RamSan came in at 400,503.2 IOPS, compared to 380,489.3 for IBM’s top-of-the-line SAN Volume Controller. That’s not very much difference, you say, especially considering how much less data a RamSan can hold… And that would be a valid argument, but consider again: that’s not what we’re benchmarking. It is the IOPS.

    Total cost for the IBM benchmarked system was $18.83 per IOPS. The RamSan (which beat IBM in total IOPS) came in at a measly $1.05 per IOPS. That is literally 95% less than IBM’s cost. Why? Consider that IBM’s benchmarked system costs $7.17 million (even if that price was steeply discounted, as most tech writers will note as a caveat). Remember, I said you need million-dollar budgets to play in the data center space. Now consider that the RamSan-630 costs $419,000. If you want speed, dump your spinning hard drives. Flash is here to stay, and you cannot argue with the speed versus the price at this level of performance. No doubt this is going to threaten the livelihood of a few big iron storage manufacturers. But through disruption, progress is made.
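
    Just to show the arithmetic behind those cost-per-IOPS figures (the awk one-liner is mine, using the prices and IOPS numbers quoted above, not anything taken from the SPC-1 report itself):

        awk 'BEGIN {
          printf "IBM SVC:    $%.2f per IOPS\n", 7170000 / 380489.3;
          printf "RamSan-630: $%.2f per IOPS\n",  419000 / 400503.2;
        }'
        # Prints roughly $18.84 and $1.05, i.e. about 94-95% less per IOPS,
        # in line with the headline figure.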

  • Intel’s Tri-Gate gamble: It’s now or never • The Register

    Analysis  There are two reasons why Intel is switching to a new process architecture: it can, and it must.

    via Intel’s Tri-Gate gamble: It’s now or never • The Register.

    Usually every time there’s a die shrink of a computer processor there’s an attendant evolution of the technology used to produce it. I think back to the relatively recent introduction of immersion lithography using ultra-purified water. The goal of immersion lithography was to improve the ability to resolve the fine wire traces of the photomasks as they are exposed onto the photosensitive emulsion coating a silicon wafer. The problem is that the light travels from the projection optics to the surface of the wafer through a small gap of ‘air’, and air’s low refractive index limits how fine a feature the optics can resolve. If you put a layer of water in that gap, you have in a sense a better ‘lens’: water’s higher refractive index lets the system print sharper, finer features than ‘air’ ever could. Likewise you get better chip yields, more profit, higher margins, etc.
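
    As a rough illustration of why the water layer helps (my own back-of-the-envelope numbers, not from the article): the standard Rayleigh criterion puts the smallest printable feature at about k1 × λ / NA, and immersion in water raises the achievable numerical aperture because water’s refractive index (~1.44) is higher than air’s.

        # Assumed, typical 193 nm (ArF) values; k1 and NA here are illustrative only.
        awk 'BEGIN {
          k1 = 0.30; lambda = 193;                       # wavelength in nm
          printf "dry lens   (NA ~0.93): ~%.0f nm features\n", k1 * lambda / 0.93;
          printf "water lens (NA ~1.35): ~%.0f nm features\n", k1 * lambda / 1.35;
        }'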

    As the wire traces on microchips continue to get thinner and the transistors smaller, the physics involved gets harder to control. Behavior starts to be governed by quantum electrodynamics rather than the classical world of Maxwell’s equations. This makes it harder to tell when a transistor has switched on or off, and the basic digits of the digital computer (1s and 0s) become harder and harder to measure and register properly. IBM and Intel waged a war of die shrinks all through the ’80s and ’90s. IBM chose to adopt new, sometimes exotic materials (copper traces instead of aluminum, silicon-on-insulator, high-k dielectric gates). Intel chose to go the direction of improving what it had, using higher-energy light sources and only adopting very new processes when absolutely, positively necessary. At the same time, Intel was cranking out such volumes of current-generation product that it almost seemed as though it didn’t need to innovate at all. But IBM kept Intel honest, as did Taiwan Semiconductor Manufacturing Co. (the big contract manufacturer of microprocessors). And Intel continued to maintain its volume and technological advantage.

    ARM (originally the Acorn RISC Machine) became a CPU maker during the golden age of RISC computing (the early and mid-1980s). Over time the company got out of manufacturing and started licensing its processor designs to anyone who wanted to embed a core microprocessor into a bigger chip design. Eventually ARM became the de facto standard microchip for smart handheld devices and telephones before Intel had to react. Intel had come up with a market-leading, low-voltage, cheap CPU in the Atom processor, but it did not have the specialized knowledge and capability ARM had with embedded CPUs. Licensees of ARM designs began cranking out newer generations of higher-performance, lower-power CPUs faster than Intel’s research labs could answer, and the stage was set for a battle royale of low power/high performance.

    Which brings us now to an attempt to continue scaling down processor power requirements through the same brute force that worked in the past. Moore’s Law, an epigram attributed to Intel’s Gordon Moore, predicted the rate at which the industry would keep shrinking the ‘wires’ in silicon chips, increasing speed and lowering costs. Speeds would double, prices would halve, and this would continue ad infinitum into some distant future. The problem has always been that the future is now. Intel hit a brick wall back around the end of the Pentium 4 era, when it couldn’t double speeds anymore without also doubling the amount of waste heat coming off the chip. That heat was harder and harder to remove efficiently, and soon, it appeared, the chips would create so much heat they might melt. Intel worked around this by putting multiple CPU cores on the same silicon dies it used for previous-generation chips and got some amount of performance scaling to work. Along those lines it has run research projects to create first an 80-core processor, then a 48-core and now a 24-core processor (which might actually turn into a shippable product). But what about Moore’s Law? Well, the scaling has continued downward, and power requirements have improved, but it’s getting harder and harder to shave down those little wire traces and get the bang that drives profits for Intel. Now Intel is going the full-on research and development route by adopting a new way of making transistors on silicon. It’s called a Fin Field Effect Transistor, or FinFET, and it makes use of not just the top surface of the channel but the top and the left and right sides, effectively giving you roughly three times the surface through which to move the electrons. If Intel can get this to work on a modern silicon chip production line, it will be able to continue differentiating its product, keeping its costs manageable and selling more chips. But it’s a big risk and a bet I’m sure everyone hopes will pay off.
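
    To put a number on that ‘doubling’ epigram (my arithmetic, not Intel’s), assuming one doubling roughly every two years, a decade of Moore’s Law compounds to about a 32x improvement:

        awk 'BEGIN {
          density = 1;
          for (year = 0; year <= 10; year += 2) {
            printf "year %2d: %3dx the starting transistor density\n", year, density;
            density *= 2;
          }
        }'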

  • Viking Modular plugs flash chips into memory sockets • The Register

    What a brilliant idea: put flash chips into memory sockets. That’s what Viking Modular is doing with its SATADIMM product.

    via Viking Modular plugs flash chips into memory sockets • The Register.

    This sounds like an interesting evolution of the SSD type of storage. But I don’t know if there is a big advantage in forcing a RAM memory controller to be the bridge to a flash memory controller. In terms of bandwidth, the speed seems comparable to a 4x PCIe interface. I’m thinking now of how it might compare to PCIe-based SSDs from OCZ or Fusion-io. It seems like the advantage is still held by PCIe in terms of total bandwidth and capacity (above 500MB/sec and 2 terabytes of total storage). The SATADIMM may come in at a slightly lower cost, but the use of single-level cell (SLC) flash memory chips raises the cost considerably for any given amount of storage, and this product from Viking uses SLC flash. I think if this product ships, it will not compete very well against products like consumer-level SSDs, PCIe SSDs, etc. However, if Viking continues to develop and evolve the product, there might be a niche where it can be performance or price competitive.
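
    For context, here are the rough interface ceilings I’m weighing this against (assumed back-of-the-envelope numbers; SATA and second-generation PCIe both lose about 20% of their line rate to 8b/10b encoding):

        awk 'BEGIN {
          printf "SATA 3Gb/s:  ~%d MB/s usable\n", 3000 * 0.8 / 8;   # ~300
          printf "SATA 6Gb/s:  ~%d MB/s usable\n", 6000 * 0.8 / 8;   # ~600
          printf "PCIe 2.0 x4: ~%d MB/s usable\n", 4 * 500;          # ~2000
        }'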

  • Data hand tools – O’Reilly Radar

    [Image: a shebang (hashbang), via Wikipedia]

    Whenever you need to work with data, don’t overlook the Unix “hand tools.” Sure, everything I’ve done here could be done with Excel or some other fancy tool like R or Mathematica. Those tools are all great, but if your data is living in the cloud, using these tools is possible, but painful. Yes, we have remote desktops, but remote desktops across the Internet, even with modern high-speed networking, are far from comfortable. Your problem may be too large to use the hand tools for final analysis, but they’re great for initial explorations. Once you get used to working on the Unix command line, you’ll find that it’s often faster than the alternatives. And the more you use these tools, the more fluent you’ll become.

    via Data hand tools – O’Reilly Radar.

    This is a great remedial refresher on the Unix command line, and for me it reinforces an idea I’ve had that when it comes to computing We Live Like Kings. What? How is that possible? Well, thinking about what you are trying to accomplish and finding the least complicated, quickest way to that point is a dying art. More often one is forced, or at least highly encouraged, to set out on a journey with very well-defined protocols and rituals included. You must use the APIs, the tools, the methods specified by your group. Things falling outside that orthodoxy are frowned upon, no matter the speed and accuracy of the result. So doing it quick and dirty with some shell scripting and utilities is going to be embarrassing for those unfamiliar with those same tools.

    My experience doing this involved a very low-end attempt to split web access logs into nice neat pieces that began and ended on certain dates. I used grep, split, and a bunch of binaries I borrowed for doing log analysis and formatting the output into a web report. Overall it didn’t take much time and required very little downloading, uploading, uncompressing, etc. It was all command-line based, with all the output dumped to a directory on the same machine. I probably spent 20 minutes every Sunday running these by hand (as I’m not a cron job master, much less an at job master). And none of the work I did was mission critical, other than being a barometer of how much use the websites were getting from the users. I realize now I could have automated the whole works, with variables set up in the shell script to accommodate running on different days of the week, time changes, etc. But editing the scripts by hand in the vi editor only made me quicker and more proficient in vi (which I still gravitate towards using even now).
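
    Something like this minimal sketch would have covered my Sunday routine; it assumes Apache combined-format logs, and the paths and dates are placeholders from memory rather than the originals.

        # Pull one day's entries out of the combined access log:
        grep '\[15/May/2011' /var/log/httpd/access_log > access-2011-05-15.log
        # Or tally hits per day across the whole log in one pass:
        awk -F'[[/:]' '{ print $2 "/" $3 "/" $4 }' /var/log/httpd/access_log | sort | uniq -c
        # And the cron job I never wrote, to run it every Sunday at 06:00:
        # 0 6 * * 0 /home/me/bin/split_logs.sh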

    And as low end as my needs were, and as little experience as I had initially using these tools, I am grateful for the time I spent doing it. I feel so much more comfortable knowing I can figure out how to do these tasks on my own, pipe outputs into inputs for other utilities, and get useful results. I think I understand it, though I’m not a programmer and couldn’t really leverage higher-level things like data structures to get work done. I’m a brute force kind of guy, and given how fast CPUs run now, a few ugly, inefficient recursions aren’t going to kill me or my reputation. So here’s to Mike Loukides’ article and how much it reminds me of what I like about Unix.

  • Quanta crams 512 cores into pizza box server • The Register

    Two of these boards are placed side-by-side in the chassis and stacked two high, for a total of eight server nodes. Eight nodes at 64 cores each gives you 512 total cores in a 2U chassis. The server boards slide out on individual trays and share two 1,100 watt power supplies that are stacked on top of each other and that are put in the center of the chassis. Each node has three SATA II ports and can have three 2.5-inch drives allocated to it; the chassis holds two dozen drives, mounted in the front and hot pluggable.

    via Quanta crams 512 cores into pizza box server • The Register.

    Amazing how power efficient Tilera has made its shipping products, now that Quanta has jammed 512 cores into a 2-rack-unit-high box. Roughly speaking this is 20% the size of the SeaMicro SM10000, which is based on Intel Atom CPUs. Now that there’s a shipping product, I would like to see benchmarks or comparisons made on similar workloads using both sets of hardware. Numerically speaking it would be an apples-to-apples comparison, but each of these products is unique, and they are going to be difficult to judge in the coming year.

    First off, the Intel Atom is an x86-compatible low-power chip that helped launch the Asus/Acer netbook revolution (which, until the iPad killed it, was a big deal). However, in order to get higher density in its hardware, Quanta has chosen a different CPU than the Intel Atom (as used by SeaMicro). Instead, Quanta is the primary customer of an innovative young chip company we have covered on carpetbomberz.com previously: Tilera. For those who have not been following the company’s press releases, Tilera is a spin-off of an MIT research project in chip-scale networking. The idea was to create very simplified systems-on-a-chip (whole computers scaled down to a single chip) and then network many of them together on the same slice of silicon. The speeds would be faster because most of the physical interfaces and buses are contained directly in the chip’s circuits instead of out on the computer’s motherboard. The promise of the Tilera chip is that you can scale up on the silicon die as opposed to across racks and racks of equipment within the data center. Performance of the Tilera chips has been somewhat of a secret; no benchmarks or real comparisons to commercially shipping CPUs have been published. But the general feeling is that any single core within a Tilera chip should be about as capable as the processor in your smartphone, and every bit as power efficient. Tilera plans to scale up to 100 cores within a single processor die eventually, and has reached 64 cores on the chips shipping in this Quanta box.

    I suspect both SeaMicro and Quanta will have their own custom supervisor OSes, allowing administrators to install and set up instances of their favorite workhorse OSes. Each OS instance will be doled out to an available CPU core and then linked up to a virtual network and virtual storage interface. Boom! You’ve got a web server, file server, rendering station, streaming server, whatever you need, in one fell swoop. And it is all bound together with two 1,100-watt power supplies in each 2-rack-unit-sized box. I don’t know how that compares to the SeaMicro power supply, but I imagine it is likely smaller per core than the SM10000’s. Which can only mean that in the war for data center power efficiency, Quanta might deliver a huge shot across the bow of SeaMicro. All I can say is let the games begin, and let the market determine the winner.

  • Microsoft Research Watch: AI, NoSQL and Microsoft’s Big Data Future

    Probase is a Microsoft Research project described as an “ongoing project that focuses on knowledge acquisition and knowledge serving.” Its primary goal is to “enable machines to understand human behavior and human communication.” It can be compared to  Cyc, DBpedia or Freebase in that it is attempting to compile a massive collection of structured data that can be used to power artificial intelligence applications.

    via Microsoft Research Watch: AI, NoSQL and Microsoft’s Big Data Future – ReadWriteCloud.

    Who knew Microsoft was so interested in the kind of things only IBM Research’s Watson has been able to demonstrate? The AI (artificial intelligence) work seems to be targeted at Bing search results. And in order to back this all up, Microsoft has to ditch its huge commitment to Microsoft SQL Server and go with a NoSQL data store to hold all the unstructured data. This seems like a huge shift away from desktop and data center applications toward something much more oriented to cloud computing, where collected data is money in the bank. This is best expressed in the story’s example of Google vs. Facebook. Google may collect data, but its business is really delivering ads to eyeballs, whereas Facebook is collecting the data itself and sharing it with the highest bidder. It seems Microsoft is going the Facebook route of wanting to collect and own the data rather than merely hosting other people’s data (like Google and Yahoo).

  • Calxeda boasts of 5 watt ARM server node • The Register

    Calxeda is not going to make and sell servers, but rather make chips and reference machines that it hopes other server makers will pick up and sell in their product lines. The company hopes to start sampling its first ARM chips and reference servers later this year. The first reference machine has 120 server nodes in a 2U rack-mounted format, and the fabric linking the nodes together internally can be extended to interconnect multiple enclosures together.

    via Calxeda boasts of 5 watt ARM server node • The Register.

    SeaMicro and now Calxeda are going gangbusters for the ultra-dense, low-power server market. Unlike SeaMicro, Calxeda wants to create reference designs it licenses to manufacturers, who will build machines with 120 server nodes in a 2U enclosure. SeaMicro’s record right now is 512 cores per 10U chassis, or roughly 102 cores per 2U. The difference is that the SeaMicro product uses Intel’s low-power Atom CPU, whereas Calxeda is using a processor found more often in smartphones and tablet computers. SeaMicro has hinted it is not wedded to the Intel architecture, but it is more interested in shipping real, live product than in coming up with generic designs others can license. In the long run it’s entirely possible SeaMicro may switch to a different CPU; the company has indicated previously that its servers are designed with enough flexibility to swap in another processor if necessary. It would be really cool to see an apples-to-apples comparison of a SeaMicro server using Intel CPUs versus ARM-based CPUs.

  • AppleInsider | Insider Mac OS X 10.7 Lion: Auto Save, File Versions and Time Machine

    [Image: original 1984 Macintosh desktop, via Wikipedia]

    However, Windows’ Shadow Copy is really intended for creating a snapshot of an entire volume for backup purposes; users can’t trigger the creation of a new version of an individual file in Windows. This makes Lion’s Versions a very different beast: its more akin to a versioning file system that works like Time Machine, but local to the user’s own disk.

    via AppleInsider | Insider Mac OS X 10.7 Lion: Auto Save, File Versions and Time Machine [Page 2].

    Reading this article from AppleInsider’s series of previews of Mac OS X 10.7 has been an education in both the iOS-based universe and the good ol’ desktop universe I already know and love. At first I was apprehensive about the desktop OS taking such a back seat to the mobile devices Apple has been introducing at an increasingly fast pace. From iPods to iPhones to the iPod Touch and now the iPad, there’s no end to the permutations iOS-based devices can take. Prior to the iPhone and iPod Touch releases, Apple was using an embedded OS with none of the sophistication and capability of a real desktop operating system. This was both a frugal and a conservative approach, as media players, while having real CPUs inside, were never intended to have network stacks, garbage collection, UI servers, etc. There was always just enough there to present a user interface of some sort, with access to a local file system and the ability to sync files between a host-based iTunes client and the device (whichever generation of iPod it might be). Along with that, each generation of hardware most likely varied by degrees as video playback became a touted feature in newer iPods with bigger internal hard drives (the so-called video iPods). I can imagine that got complicated quickly, as CPUs, video chips and media playback capabilities ranged widely up and down the product line. As each device required its own tweaks to the embedded OS, and iTunes was tweaked to accommodate these local variations, I’m sure the all-seeing eye of Steve Jobs began to wince at the increasing complexity of the iPod product line. Enter iOS: a smaller, cleaner, fully optimized OS for low-power mobile devices. It’s got everything a desktop OS has without any of the legacy device concerns (backward compatibility) of a typical desktop OS. This allowed for creating ‘just enough’ capability in the networking stack, the UI server and the local storage. Apps written for iOS were unique to that environment, though they might have started out as Mac OS X apps. By taking the original code base, re-factoring it and doing complete low-level rewrites from top to bottom, you got a version of the Safari web browser on a mobile device. It could display ANY webpage and even do some display optimization of the page on the fly. And there were plenty of developers rushing to get apps running on the new devices. So whither the Apple Mac OS X?

    Well, in the rush of creating an iOS app universe, the iOS development team added many features along the way. One great gap was the missing cut & paste functionality long enjoyed on desktop OSes. Eventually this feature made it in, and others like it slowly got integrated. Apple’s custom A4 chip, built around an ARM Cortex-A8 core, was tearing up the charts, out-competing every other mobile phone platform on the market. Similarly, the iPad took that same approach of getting out there with new features and becoming a more desktop-like mobile device. A year has passed since the original iPad hit the market, the Mac OS is due for a change, and the big question is: what does Steve Jobs think? There were hints and rumors he wanted everyone to enjoy the clean-room design of iOS and dump the legacy messiness of old Mac OS X. Dan Lyons of Newsweek gave voice to these concerns quite clearly in his June 8 article, and Steve Jobs eventually replied directly to the author, stating emphatically that he was wrong. Still, actions speak louder than words, and Apple’s Worldwide Developers Conference in 2010 seemed to be a very hard sell for the advantages of developing for the new iOS. Conversely, Microsoft has proven over and over again that legacy support in an OS is a wonderful source of income once you have established your monopoly. However, Apple has navigated the legacy hardware seas before, with its first big migration from Motorola 68000 processors to the PowerPC chip, and then the subsequent migration from PowerPC to Intel chips. From a software standpoint, attrition occurs as people dump their legacy hardware anyway (it’s not uncommon for Apple users to eventually get rid of their older machines). So, to help deliver the benefits of newer software, requirements are now fully in place such that even certain first-generation Intel-based Macs won’t be able to run the newest Mac OS X (that’s the word now). Similarly, legacy support for PowerPC-native apps running in emulation on Intel (using the Rosetta software) will also go away. Which then brings us to the point of this whole blog posting: where’s the beef?

    The beef, dear reader, is not in our computers but in ourselves. As the Macintosh OS evolves, so do our workflows, and the new paradigm being carried over from the mobile devices is that there is no longer any need to go to the File menu and choose Save or Save As… That’s what the new iOS-inspired design portends for the future. The same goes for open documents in progress: everything is done for you at long last. The computer finally does what you always assumed it was doing, and what Microsoft eventually built into Word (not into the OS itself): autosave. Newly developed versions of TextEdit, made by Apple to run under OS X 10.7, were tried out to see how they work under the new Auto Save and Versions architecture. Now you just make a new document, and the computer (safely) assumes you will most likely want to save it as you work on it, and that you may want to go back and undo some changes you made. After all these years of using desktop computers, this is finally built right in. So from the command line to the GUI and now to the mobile OS, computer architects and UI engineers have a good idea of what you might want to do before you choose to do it, and it’s built in at the lowest level of the OS at last. All of this is coming in the next version of Mac OS X, due for release in July 2011. After reading these articles from AppleInsider and looking at the screenshots, I’m much more enthused and willing to change and adapt the way I work to the new regime of hybrid iOS and Mac OS X going forward.

  • OCZ Vertex 3 Preview – AnandTech

    [Image: UEFI logo, via Wikipedia]

    The main categories here are SF-2100, SF-2200, SF-2500 and SF-2600. The 2500/2600 parts are focused on the enterprise. They’re put through more aggressive testing, their firmware supports enterprise specific features and they support the use of a supercap to minimize dataloss in the event of a power failure. The difference between the SF-2582 and the SF-2682 boils down to one feature: support for non-512B sectors. Whether or not you need support for this really depends on the type of system it’s going into. Some SANs demand non-512B sectors in which case the SF-2682 is the right choice.

    via OCZ Vertex 3 Preview: Faster and Cheaper than the Vertex 3 Pro – AnandTech :: Your Source for Hardware Analysis and News.

    The cat is out of the bag: OCZ has not one but two SandForce SF-2000 series based SSDs on the market now. And performance-wise, the consumer-level product is even slightly faster than the enterprise-level product, at lower cost. These are interesting times indeed. The speeds are so fast with the newer SandForce drive controllers that, over a SATA 6Gb/s drive interface, you get speeds close to what could previously only be purchased in a PCIe-based SSD drive array for $1,200 or so. The economics of this are getting topsy-turvy, with new generations of single drives out-distancing previous top-end products (I’m talking about you, Fusion-io, and you, Violin Memory). SandForce has become the drive controller for the rest of us, and with speeds like 500MB/sec read and write, what more could you possibly ask for? I would say the final bottleneck in the desktop/laptop computer is quickly vanishing, and we’ll have to wait and see just how much faster SSD drives become. My suspicion is that the motherboard’s BIOS will slowly creep up to be the last link in the chain of noticeable computer speed. Once we get a full range of UEFI motherboards and fully optimized firmware to configure them, we will have, theoretically, the fastest personal computers one could possibly design.
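
    For anyone who wants to sanity-check those 500MB/sec claims on their own hardware, here is a minimal sketch using plain dd. The mount point and file name are placeholders, and simple sequential dd runs are only a rough stand-in for the vendors’ benchmark numbers.

        # Sequential write test (~4 GB), bypassing the page cache:
        dd if=/dev/zero of=/mnt/ssd/testfile bs=1M count=4096 oflag=direct
        # Sequential read test of the same file:
        dd if=/mnt/ssd/testfile of=/dev/null bs=1M iflag=direct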