Category: flash memory

  • EMC’s all-flash benediction: Turbulence ahead • The Register


    A flash array controller needs: “An architecture built from the ground up around SSD technology that sizes cache, bandwidth, and processing power to match the IOPS that SSDs provide while extending their endurance. It requires an architecture designed to take advantage of SSDs unique properties in a way that makes a scalable all-SSD storage solution cost-effective today.”

    via EMC’s all-flash benediction: Turbulence ahead • The Register.

    I think that storage controllers are the point of differentiation for the SSDs coming on the market today. Similarly, the device that ties those SSDs into the computer and its OS is equally, nay more, important. I'm thinking specifically of a product like the SandForce 2000 series SSD controllers. They more or less provide a SATA or SAS interface into a small array of flash memory chips, making them look and act like a spinning hard drive. However, the time is coming soon when all those transitional conventions can just go away and a clean-slate design can go forward. That's why I'm such a big fan of PCIe-based flash storage products. I would love to see SandForce create a disk controller with one interface that speaks PCIe 2.0/3.0 and another that is open to whatever technology flash memory manufacturers are using at the time. Ideally the host bus would always be a high-speed PCI Express interface, licensed or designed from the ground up to speed I/O in and out of the flash memory array. The memory-facing side could be almost like an FPGA, made to order according to the features and idiosyncrasies of whatever flash memory architecture is shipping at the time of manufacture. The same would apply to error correction and to over-provisioning for failed memory cells as the SSD ages through multiple read/write cycles.
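    The split I'm imagining, a fixed PCIe host personality in front of a swappable flash-facing back end, can be sketched as a pair of interfaces. This is purely illustrative Python; the class and method names (`PCIeController`, `FlashBackend`, `ToyNAND`) are my own invention, not any SandForce API:

    ```python
    from abc import ABC, abstractmethod

    class FlashBackend(ABC):
        """Flash-facing side: swapped out per NAND generation,
        like a made-to-order FPGA."""
        @abstractmethod
        def read_page(self, block: int, page: int) -> bytes: ...
        @abstractmethod
        def program_page(self, block: int, page: int, data: bytes) -> None: ...

    class PCIeController:
        """Host-facing side: always the same high-speed interface,
        no matter which flash generation sits behind it."""
        def __init__(self, backend: FlashBackend, pages_per_block: int = 64):
            self.backend = backend
            self.pages_per_block = pages_per_block

        def write(self, lba: int, data: bytes) -> None:
            # Map a logical block address onto (block, page) in the NAND.
            self.backend.program_page(lba // self.pages_per_block,
                                      lba % self.pages_per_block, data)

        def read(self, lba: int) -> bytes:
            return self.backend.read_page(lba // self.pages_per_block,
                                          lba % self.pages_per_block)

    class ToyNAND(FlashBackend):
        """Trivial in-memory stand-in for one flash generation."""
        def __init__(self):
            self.pages = {}
        def read_page(self, block, page):
            return self.pages.get((block, page), b"")
        def program_page(self, block, page, data):
            self.pages[(block, page)] = data
    ```

    Swapping in next year's NAND would then mean writing a new `FlashBackend`, with the host-facing half untouched.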

    In the article I quoted at the top from The Register, the big storage array vendors are attempting to market new products by adding flash memory to either one component of the whole array product or, in the case of EMC, throughout the whole product via flash-based SSDs. That more aggressive approach has seemed overly cost prohibitive given the low manufacturing cost of large-capacity commodity hard drives. But the problem is, in the market where these vendors compete, everyone pays an enormous price premium for the hard drives, storage controllers, cabling and software that make it all work. Though the hard drive might be cheaper to manufacture, the storage array is not, and that margin is what makes storage a very profitable business to be in. As stated last week in the benchmark comparisons of high-throughput storage arrays, flash-based arrays are 'faster' per dollar than a well-designed, engineered, top-of-the-line hard drive based storage array from IBM. So for the segment of the industry that needs throughput more than total space, EMC will likely win out. But Texas Memory Systems (TMS) is out there too, attempting to sign OEM contracts with companies selling into the storage array market. The Register does a very good job surveying the current field of vendors and manufacturers, looking at which companies might buy a smaller player like TMS. But the more important trend spotted throughout the survey is the decidedly strong move toward native flash memory in the storage arrays being sold into the enterprise market. EMC has a lead that most will be following real soon now.

  • TMS flash array blows Big Blue away • The Register


    Texas Memory Systems has absolutely creamed the SPC-1 storage benchmark with a system that comfortably exceeds the current record-holding IBM system at a cost per transaction of 95 per cent less.

    via TMS flash array blows Big Blue away • The Register.

    One might ask a simple question: how is this even possible, given the cost of the storage media involved? How did a flash-based RamSan storage array beat a huge pile of IBM hard drives, all networked and bound together into a massive storage system? And how did it do it for less? Woe be to those unschooled in the ways of the professional data center purchasing department. You cannot enter the halls of the big players unless you have million-dollar budgets for big iron servers and big iron storage. Fibre Channel and InfiniBand rule the day when it comes to big data throughput. All those spinning drives are accessed simultaneously, as if each one held one slice of the data you were asking for, each delivering up its 1/10 of 1% of the total file you were trying to retrieve. The resulting speed makes it look like one hard drive many times faster than your desktop computer's, all through the smoke and mirrors of the storage controllers and the software that makes them go. But what if, just what if, we took flash memory chips and knit them together with a storage controller that made them appear to be just like a big iron storage system? Well, since flash obviously costs something more than $1 per gigabyte and disk drives cost somewhere less than 10 cents per gigabyte, the flash storage loses, right?
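    The slicing trick described above, each drive holding one stripe and serving it in parallel, is easy to show in miniature. A toy RAID 0 style round-robin split (the chunk size and drive count here are arbitrary, for illustration only):

    ```python
    def stripe(data: bytes, n_drives: int, chunk: int = 4):
        """Split data round-robin across n_drives, RAID 0 style."""
        drives = [bytearray() for _ in range(n_drives)]
        for i, off in enumerate(range(0, len(data), chunk)):
            drives[i % n_drives].extend(data[off:off + chunk])
        return drives

    def unstripe(drives, chunk: int = 4) -> bytes:
        """Reassemble by reading chunk-sized pieces from each drive in turn."""
        out = bytearray()
        offsets = [0] * len(drives)
        i = 0
        while True:
            d = i % len(drives)
            piece = drives[d][offsets[d]:offsets[d] + chunk]
            if not piece:          # ran past the end of the data
                break
            out.extend(piece)
            offsets[d] += chunk
            i += 1
        return bytes(out)
    ```

    With n drives each serving its stripe simultaneously, sequential throughput approaches n times a single drive, which is the whole point of the big iron arrangement.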

    In terms of total storage capacity, flash will lose for quite some time when you are talking about holding everything on disk at once. But that is not what's being benchmarked here at all. No, what is being benchmarked is the rate at which input (writing of data) and output (reading of data) pass through the storage controllers. IOPS measures the total number of completed reads/writes done in a given amount of time. Prior to this latest example, the RamSan-630, IBM was king of the mountain with its huge striped Fibre Channel arrays all linked up through its own storage array controllers. The RamSan came in at 400,503.2 IOPS, compared to 380,489.3 for IBM's top-of-the-line SAN Volume Controller. That's not very much difference, you say, especially considering how much less data a RamSan can hold... And that would be a valid argument, but consider again: that's not what we're benchmarking. It is the IOPS.

    Total cost for the IBM benchmarked system was $18.83 per IOP. The RamSan (which bested IBM in total IOPS) came in at a measly $1.05 per IOP. The cost is literally 95% less than IBM's. Why? Consider that IBM's benchmarked system costs $7.17 million (even if that was steeply discounted, as most tech writers will note as a caveat). Remember, I said you need million-dollar budgets to play in the data center space. Now consider that the RamSan-630 costs $419,000. If you want speed, dump your spinning hard drives; flash is here to stay, and you cannot argue with the speed versus the price at this level of performance. No doubt this is going to threaten the livelihood of a few big iron storage manufacturers. But through disruption, progress is made.
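    Working the cost-per-IOPS figures from the article back through the arithmetic:

    ```python
    # SPC-1 figures quoted above
    ibm_cost, ibm_iops = 7_170_000, 380_489.3   # dollars, IOPS
    tms_cost, tms_iops = 419_000,   400_503.2

    ibm_per_iop = ibm_cost / ibm_iops           # ~$18.84 per IOP
    tms_per_iop = tms_cost / tms_iops           # ~$1.05 per IOP
    savings = 1 - tms_per_iop / ibm_per_iop     # ~0.94, i.e. roughly 95% cheaper
    ```

    The numbers check out: roughly a twentieth of the price per unit of work, while also winning on raw IOPS.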

  • Viking Modular plugs flash chips into memory sockets • The Register


    What a brilliant idea: put flash chips into memory sockets. That's what Viking Modular is doing with its SATADIMM product.

    via Viking Modular plugs flash chips into memory sockets • The Register.

    This sounds like an interesting evolution of the SSD type of storage. But I don't know that there is a big advantage in forcing a RAM memory controller to be the bridge to a flash memory controller. In terms of bandwidth, the speed seems comparable to a 4x PCIe interface. I'm thinking now of how it might compare to a PCIe-based SSD from OCZ or Fusion-io. It seems like the advantage is still held by PCIe in terms of total bandwidth and capacity (above 500MB/sec and 2 Terabytes of total storage). The SATADIMM may be slightly lower cost, but the use of Single Level Cell flash memory chips raises the cost considerably for any given amount of storage, and this product from Viking uses Single Level Cell flash. I think if this product ships, it will not compete very well against products like consumer-level SSDs, PCIe SSDs, etc. However, if they continue to develop and evolve the product, there might be a niche where it can be performance or price competitive.
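    For a rough sense of the ceilings being compared, here are the nominal interface peaks (my assumptions, not measured product throughput; real drives land well below these):

    ```python
    # Nominal peak bandwidths in MB/s.
    sata3_mbs  = 6000 // 10       # SATA 6 Gb/s after 8b/10b encoding -> ~600 MB/s
    pcie2_lane = 500              # PCIe 2.0 per lane, after encoding overhead
    pcie2_x4   = 4 * pcie2_lane   # ~2000 MB/s for a 4x slot
    ```

    So even a 4x PCIe card has several times the headroom of a single SATA 6 Gb/s link, which is why I think the advantage stays with PCIe for now.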

  • Toshiba unwraps 24nm flash memory in possible iPhone 5 clue | Electronista


    The schedules may help back mounting beliefs that the iPhone 5 will get a 64GB option; a 64GB iPhone 4 prototype appeared last month that hinted Apple was exploring the idea as early as last year. Just on Tuesday, a possible if disputed iPod touch with 128GB of storage also appeared and hinted at an upgrade for the MP3 player as well. Both the iPhone and the iPod have been stuck at 32GB and 64GB of storage respectively since 2009 and are increasingly overdue for additional space.

    via Toshiba unwraps 24nm flash memory in possible iPhone 5 clue | Electronista.

    Toshiba has revised its flash memory production lines again to keep pace with the likes of Intel, Micron and Samsung. Higher densities and smaller form factors seem to indicate they are gearing up for a big production run of the highest capacity memory modules they can make. It's looking like a new iPhone might be the candidate to receive the newer multi-layer single chip 64GB flash memory modules this year.

    A note of caution in this arms race of ever smaller feature sizes on flash memory modules: the smaller you go, the fewer read/write cycles you get. Each new generation of flash memory production has lost some amount of robustness. This problem has been camouflaged, maybe even handled outright, by increased over-provisioning of chips for a given size of Solid State Disk (sometimes as little as 17% more chips than what is usable when the drive is full). Through careful statistical modeling and algorithms, an ideal shuffling of the deck of available flash memory cells allows the load to be spread out. No single chip fails, as its workload is shifted continuously to ensure it doesn't receive anywhere near its maximum number of reliable read/write cycles. Similarly, attempts to 'recover' data from failing memory cells within a chip module also make up for these problems. Last but not least, outright error-correcting hardware has been implemented on chip to ensure everything just works, from the beginning of the life of the Solid State Disk (SSD) to the final days of its useful life.
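    That "shuffling of the deck" can be illustrated with a toy wear-leveler that always programs the least-worn free block. This is a deliberately simplified sketch of the general idea (real controllers also track hot/cold data, spare blocks, and ECC state; the `WearLeveler` name is mine):

    ```python
    import heapq

    class WearLeveler:
        """Toy wear-leveling: new writes always land on the least-worn block."""
        def __init__(self, n_blocks: int):
            # min-heap of (program/erase cycles used, block id)
            self.free = [(0, b) for b in range(n_blocks)]
            heapq.heapify(self.free)

        def allocate(self):
            """Hand out the block with the fewest erase cycles on it."""
            return heapq.heappop(self.free)

        def release(self, wear: int, block: int):
            """Block erased and returned to the pool, one cycle more worn."""
            heapq.heappush(self.free, (wear + 1, block))
    ```

    Run a thousand writes against ten blocks this way and each block absorbs exactly a hundred cycles; without leveling, one unlucky block would have eaten all thousand and died early.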

    We may not see the SSD eclipse the venerable king of high density storage, the Hard Disk Drive (HDD). Given the point of diminishing returns in Moore's Law style scaling (scaling down increases density, increases speed, lowers costs), flash may never get down to the level of density we enjoy in a typical consumer-brand HDD (2TBytes). We may have to settle for schemes that get us to that target through other means. Which brings me to my favorite product of the moment, the PCIe-based SSD, which is nothing more than a big circuit board with a bunch of SSDs tied together in a disk array with a big fat memory controller/error-correction controller sitting on it. In terms of speed over the PCI Express bus, there are current products that beat single SATA 6Gb/s SSDs by a factor of two. And given the room a PCI card affords, the form factor of any given module could be several times bigger, built from two-generations-older chips, and still reach the desired 2 Terabyte capacity of a typical SATA hard drive of today. Which to me sounds like a great deal, if we could also see drops in price and increases in reliability by using older previous-generation products and technology.

    But the mobile market is hard to please, and it is driving most decisions when it comes to what kind of flash memory modules get ordered en masse. No doubt Apple, Samsung and everyone else in consumer electronics will advise manufacturers to consistently shrink their chip sizes to increase density and keep prices up on the final shipping product. I don't know how efficiently an iPhone or iPad uses the available memory on, say, a 64GByte iPod touch. Most of it goes into storing the music, TV shows, and apps people want readily available while passing time. The beauty of that design is it rewards consumption by providing more capacity and raising marginal profit at the same time. This engine of consumer electronics design doesn't look likely to stall in spite of the physical limitations of shrinking flash memory chips. But there will be a day of reckoning soon, not unlike when Intel hit the wall at 4GHz serial processors and had to go multi-core to keep its marginal revenue flowing. It's been very lateral progress in terms of processor performance since then. It is more than likely flash memory chips cannot get much smaller without becoming really unreliable and defective, thereby sliding into the same lateral incrementalism Intel has adopted. Get ready for the plateau.

  • OCZ Acquires Indilinx SSD Controller Maker


    Prior to SandForce's arrival, Indilinx was regarded as the leading maker of controllers for solid-state drives. The company gained both consumer and media favoritism when it demonstrated that drives based on its own controllers were competitive with lead drives made by Intel. Indilinx's controllers allowed many SSD manufacturers to bring SSD prices down to a level where a large number of mainstream consumers started to take notice.

    via OCZ Acquires Indilinx SSD Controller Maker.

    This is surprising news, especially following the announcement and benchmark testing of OCZ's most recent SSD drives. They are the highest performing SATA-based SSDs on the market, and the boost in speed derives primarily from their drive controller chip, supplied by SandForce, not Indilinx. Buying a competing manufacturer is no doubt going to disappoint their suppliers at SandForce. And I worry a bit that SandForce's technical lead is something even a good competitor like Indilinx won't be able to overcome. I'm sticking with any drive that has a SandForce disk controller inside, due to their track record of increasing performance and reliability with each new generation of product.

    So I am of two minds. I guess it's cool that OCZ has enough power and money to provide its own drive controllers for its SSDs. But at the same time, the second-place drive controller is a much slower, lower performance part than the top competitor's. In the future I hope OCZ is able to introduce price variation by offering both SandForce- and Indilinx-based SSDs and charging less for the Indilinx models. If not, I don't know how they will achieve technological superiority now that SandForce has such a lead.

  • OCZ Vertex 3 Preview – AnandTech


    The main categories here are SF-2100, SF-2200, SF-2500 and SF-2600. The 2500/2600 parts are focused on the enterprise. They’re put through more aggressive testing, their firmware supports enterprise specific features and they support the use of a supercap to minimize dataloss in the event of a power failure. The difference between the SF-2582 and the SF-2682 boils down to one feature: support for non-512B sectors. Whether or not you need support for this really depends on the type of system it’s going into. Some SANs demand non-512B sectors in which case the SF-2682 is the right choice.

    via OCZ Vertex 3 Preview: Faster and Cheaper than the Vertex 3 Pro – AnandTech :: Your Source for Hardware Analysis and News.

    The cat is out of the bag: OCZ has not one but two SandForce SF-2000 series based SSDs out on the market now. And performance-wise, the consumer-level product is even slightly better performing than the enterprise-level product, at less cost. These indeed are interesting times. The speeds are so fast with the newer SandForce drive controllers that, with a SATA 6Gb/s drive interface, you get speeds close to what could previously only be purchased in a PCIe-based SSD drive array for $1200 or so. The economics of this are getting topsy-turvy, with new generations of single drives outdistancing previous top-end products (I'm talking about you, Fusion-io, and you, Violin Memory). SandForce has become the drive controller for the rest of us, and with speeds like 500MB/sec read and write, what more could you possibly ask for? I would say the final bottleneck on the desktop/laptop computer is quickly vanishing, and we'll have to wait and see just how much faster the SSDs become. My suspicion is that a computer motherboard's BIOS will slowly creep up to be the last link in the chain of noticeable computer speed. Once we get a full range of UEFI motherboards and fully optimized embedded software to configure them, we will have theoretically the fastest personal computers one could possibly design.

  • iPod classic stock dwindling at Apple


    Apple could potentially upgrade the Classic to use a recent 220GB Toshiba drive, sized at the 1.8 inches the player would need.

    via iPod classic stock dwindling at Apple, other retailers | iPodNN.

    Interesting indeed: it appears Apple is letting supplies run low for the iPod classic. No word immediately as to why, but there could be a number of reasons, as speculated in this article. Most technology news websites understand the divide between the iPhone/iPod touch operating system and all the old legacy iPod devices (an embedded OS that only runs the device itself). Apple would like to consolidate its consumer products development efforts by slowly winnowing out non-iOS based iPods. However, due to the hardware requirements demanded by iOS, Apple will be hard pressed to jam such a full-featured bit of software into the iPod nano and iPod shuffle. So whither the old click-wheel iPod empire?

  • Next-Gen SandForce Controller Seen on OCZ SSD


    Last week during CES 2011, The Tech Report spotted OCZ’s Vertex 3 Pro SSD–running in a demo system–using a next-generation SandForce SF-2582 controller and a 6Gbps Serial ATA interface. OCZ demonstrated its read and write speeds by running the ATTO Disk Benchmark which clearly showed the disk hitting sustained read speeds of 550 MB/s and sustained write speeds of 525 MB/s.

    via Next-Gen SandForce Controller Seen on OCZ SSD.

    Big news: test samples of the SandForce SF-2000 series flash memory controllers are being shown in products demoed at the Consumer Electronics Show. And SSDs with SATA interfaces are testing through the roof. The numbers quoted for a 6Gb/s SATA SSD are in the 500+MB/sec range. Previously you would need a PCIe-based SSD drive from OCZ or Fusion-io to get anywhere near that speed, sustained. Combine this with the future possibility of the SF-2000 being installed on PCIe-based SSDs, and there's no telling how much the throughput will scale. If four of the Vertex drives were bound together as a RAID 0 set with SF-2000 drive controllers managing it, is it possible to see linear scaling of throughput? Could we see 2,000 MB/sec on PCIe 8x SSD cards? And what would be the price of such a card fully configured with 1.2 TB of SSDs? Hard to say what may come, but just the thought of being able to buy retail versions of these makes me think a paradigm shift is in the works that neither Intel nor Microsoft is really thinking about right now.
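    The back-of-envelope math behind that question, assuming perfectly linear RAID 0 scaling (an optimistic assumption; real arrays lose some to controller overhead):

    ```python
    single_drive = 550        # MB/s sustained read from the CES demo above
    n_drives = 4
    ideal_raid0 = n_drives * single_drive   # 2200 MB/s if scaling were perfect
    pcie2_x8 = 8 * 500                      # ~4000 MB/s: a PCIe 2.0 8x slot
    ```

    So an 8x slot would have room to spare for four such drives, which is what makes the PCIe card form factor so tantalizing.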

    One comment on this article as posted on the original website, Tom's Hardware, included the observation that the speeds quoted for this SATA 6Gb/s drive are approaching the memory bandwidth of PC-133 DRAM, now several generations old. And as I have said previously, I still have an old first-generation Titanium PowerBook from Apple that uses that same PC-133 memory standard. So given that SSDs are fast approaching the speed of somewhat older main memory chips, I can only say we are fast approaching a paradigm shift in desktop and enterprise computing. I dub thee the All Solid State (ASS) era, where no magnetic or rotating mechanical media enter into the equation. We run on silicon semiconductors from top to bottom; no Giant Magneto-Resistive technology necessary. Even our removable media are flash-based USB drives we put in our pockets and walk around with on key chains.

  • CES 2011: Corsair Performance Series 3 SSD Benchmarks – AnandTech :: Your Source for Hardware Analysis and News


    The next wave of high end consumer SSDs will begin shipping this month, and I believe Corsair may be the first out the gate. Micron will follow shortly with its C400 and then we’ll likely see a third generation offering from Intel before eventually getting final hardware based on SandForce’s SF-2000 controllers in May.

    via CES 2011: Corsair Performance Series 3 SSD Benchmarks – AnandTech :: Your Source for Hardware Analysis and News.

    This just in from the Consumer Electronics Show in Las Vegas, via AnandTech: SandForce's SF-2000 is scheduled to drop in May of this year. Get ready, as you will see a huge upsurge in releases of new SSD products attempting to best one another in the sustained read/write category. And I'm not talking just SSDs, but PCIe-based cards with SSD RAIDs embedded on them, communicating through an 8x PCI Express interface. I'm going to take a wild guess and say you will see products fitting this description easily hitting 700 to 900 MB/s sustained read and write. Prices will be on the top end of the scale, as even the current shipping products all fall into the $1200 to $1500 range. Expect the top end to be LSI-based products for $15,000, or third-party OEM manufacturers who might be willing to sell a fully configured 1TByte card for maybe ~$2,000. After the SF-2000 is released, I don't know how long it will take for designers to prototype and release to manufacturing any new designs incorporating this top-of-the-line flash memory controller. It's possible that as the top end continues to increase in performance, current shipping product might start to fall in price to clear out the older, lower performance designs.

  • Micron’s ClearNAND: 25nm + ECC


    via Micron's ClearNAND: 25nm + ECC, Combats Increasing Error Rates – AnandTech.

    This is a really good technical article on attempts by Micron and Intel to fix read/write errors in solid-state memory based on flash chips. Each revision of their design and manufacturing materials helps decrease the size of the individual memory cells on the flash chip; however, as the design rules (the distance between the wires) shrink, random errors increase. And the materials themselves suffer from fatigue with each read and write cycle. The fatigue is due in no small part (pun intended) to the size, specifically the thickness, of some layers in the sandwich that makes up a flash memory cell. Thinner materials just wear out quicker. Typically this wearing out was addressed by adding extra unused memory cells that could act as spares whenever one finally gave up the ghost and stopped working altogether. Another technique is to spread reads/writes over an area much greater (sometimes 23% bigger) than the storage capacity advertised on the outside of the packaging. This is called wear leveling, and it's like rotating your tires to ensure they don't get bare patches too quickly.

    All these techniques will only go so far as sizes and thicknesses continue to shrink. So, taking a chapter out of the bad old days of computing, we are back to Error Correcting Codes, or ECC. When memory errors were common and you needed to guarantee your electronic logic was not creating spontaneous errors, bits of data called parity bits would be woven into all operations to ensure something didn't accidentally flip from a 1 to a 0. ECC memory is still widely used in data center computers that need to guarantee bits don't get spontaneously flipped by, say, a stray cosmic ray raining down upon us. Now ECC is becoming the next tool, after spare memory cells and wear leveling, to ensure flash memory can continue to grow smaller and still be reliable.
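    The parity-bit idea scales up to codes that don't just detect a flipped bit but pinpoint and repair it. A classic Hamming(7,4) code, four data bits protected by three parity bits, shows the principle (flash controllers use far stronger BCH-class codes, but the mechanics are the same in spirit):

    ```python
    def hamming74_encode(nibble):
        """Encode 4 data bits as 7 bits: parity at positions 1, 2, 4."""
        d1, d2, d3, d4 = nibble
        p1 = d1 ^ d2 ^ d4   # covers positions 1,3,5,7
        p2 = d1 ^ d3 ^ d4   # covers positions 2,3,6,7
        p3 = d2 ^ d3 ^ d4   # covers positions 4,5,6,7
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_decode(code):
        """Return the 4 data bits, correcting any single flipped bit."""
        c = list(code)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3   # names the bad bit position, or 0
        if syndrome:
            c[syndrome - 1] ^= 1          # flip it back
        return [c[2], c[4], c[5], c[6]]
    ```

    Any single bit error in the 7-bit codeword, data or parity, decodes back to the original nibble, which is exactly the guarantee a flash controller needs as cells get flakier.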

    There are two approaches in operation today. One is to build the ECC logic into the flash memory modules themselves. This raises the cost of the chip, but lowers the cost to the manufacturer of a Solid State Disk or MP3 player: they don't have to add error correction after the fact or buy another part and integrate it into their design. The other, more 'state of the art' method is to build the error correction into the flash memory controller (as opposed to the memory chips), providing much more leeway in how it can be implemented and updated over time. As it turns out, the premier designer of flash memory controllers, SandForce, already does this with the current shipping version of their SF-1200 flash memory controller. SandForce still has two more advanced controllers yet to hit the market, so they are only going to become stronger, given they have already adopted ECC in their current shipping product.

    Which way the market goes will depend on how low the target price is for the final shipping product. Low-margin, high-volume goods will most likely go without error correction and take their chances. Higher-end goods may adopt the embedded ECC from Micron and Intel. Top-of-the-line data center purchasers will not stray far from the cream of the crop, the high-margin SandForce controllers, as they still provide great performance and value even in their early generation products.