Categories
flash memory, macintosh, SSD, technology, wintel

AnandTech – Intel and Micron IMFT Announce World's First 128Gb 20nm MLC NAND

NAND Flash memory circuit (image via Wikipedia)

The big question is endurance; however, we won't see a reduction in write cycles this time around. IMFT's 20nm client-grade compute NAND used in consumer SSDs is designed for 3K – 5K write cycles, identical to its 25nm process.

via AnandTech – Intel and Micron IMFT Announce World's First 128Gb 20nm MLC NAND.

If true, this will help considerably in driving down the cost of Flash memory chips while maintaining the current level of wear and performance drop seen over the lifetime of a chip. Stories I have read previously suggested that Flash memory might not continue to evolve using the current generation of silicon chip manufacturing technology. Performance drops occur as memory cells wear out, and memory cells have been wearing out faster and faster as the wires and transistors on the Flash memory chip get smaller and narrower.

The reason for this is that memory cells have to be erased in order to free them up, and every write and erase operation takes a toll on the memory cell. Single-Level Cell (SLC) memory is the most robust and can go through many tens or even hundreds of thousands of write and erase cycles before it wears out. However, the cost per megabyte of SLC memory makes it an Enterprise-level product at a premium price, generally aimed at corporate customers. Multi-Level Cell (MLC) memory, which stores two bits per cell, is much more cost effective, but the structure of the cells makes it less durable than SLC. And as the wires connecting the cells get thinner and narrower, the number of write and erase cycles they can endure without failing drops significantly. Enterprise customers in the past would not purchase products specifically because of this limitation of MLC memory.
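To put those write-cycle figures in perspective, here is a rough back-of-the-envelope sketch in Python. The drive capacity, daily write volume and write-amplification factor are assumptions for illustration, not numbers from the article:

    # Rough SSD lifetime estimate from per-cell program/erase (P/E) cycle ratings.
    # All inputs are illustrative assumptions, not vendor specifications.

    def drive_lifetime_years(capacity_gb, pe_cycles, gb_written_per_day, write_amplification=2.0):
        """Estimate how long a drive lasts before its cells exhaust their P/E budget."""
        # Total data the NAND can absorb before wearing out, assuming ideal wear levelling.
        total_nand_writes_gb = capacity_gb * pe_cycles
        # Host writes get multiplied by write amplification inside the drive.
        nand_writes_per_day_gb = gb_written_per_day * write_amplification
        return total_nand_writes_gb / nand_writes_per_day_gb / 365.0

    # A 128GB consumer MLC drive rated at 3,000 cycles, with ~10GB of host writes per day:
    print(f"MLC: ~{drive_lifetime_years(128, 3_000, 10):.0f} years")    # roughly 50 years
    # The same workload on SLC rated at 100,000 cycles lasts far longer still:
    print(f"SLC: ~{drive_lifetime_years(128, 100_000, 10):.0f} years")

Even at 3,000 cycles the arithmetic works out to decades of ordinary desktop use, which is why the endurance rating matters less than it sounds, as long as it does not keep shrinking with each new process generation.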

As companies like Intel and Samsung tried to make Flash memory chips smaller and less expensive to manufacture, the durability of the chips kept dropping. The question everyone asked is whether there is a point of diminishing returns where smaller design rules and thinner wires make chips too fragile. The solution for most manufacturers is to add spare memory cells, "over-provisioning," so that when a cell fails you can unlock a spare and continue using the whole chip. This not-so-secret over-provisioning trick has been the way most Solid State Drives (SSDs) have handled the write/erase problem for MLC memory. But even then, the question is how much do you over-provision? Another technique, called wear-levelling, has the memory controller distribute writes and erases over ALL the chips available to it; a statistical scheme makes sure each and every chip suffers equally and takes the same amount of wear and tear. It's a difficult balancing act for the Flash memory manufacturers and the storage product makers who consume those chips: build products that perform adequately, don't fail unexpectedly, and don't cost too much for laptop and desktop manufacturers to offer to their customers.
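For the curious, here is a minimal toy sketch of the wear-levelling idea described above: keep an erase count per block and always direct the next write to the least-worn block, with the over-provisioned spares included in the pool from day one. Real controllers also juggle mapping tables, garbage collection and bad-block retirement; everything here is simplified for illustration:

    # Toy wear-levelling sketch: spread erases evenly across all blocks,
    # including the over-provisioned spares. The block counts and the ~7%
    # over-provisioning figure are illustrative assumptions.

    class ToyWearLeveller:
        def __init__(self, visible_blocks, spare_blocks):
            # Spares ("over-provisioning") join the same pool as the visible blocks.
            self.erase_counts = [0] * (visible_blocks + spare_blocks)

        def pick_block_for_write(self):
            # Always use the block erased the fewest times so far, so wear
            # accumulates evenly instead of burning out a few hot blocks.
            block = min(range(len(self.erase_counts)), key=self.erase_counts.__getitem__)
            self.erase_counts[block] += 1
            return block

    wl = ToyWearLeveller(visible_blocks=1000, spare_blocks=70)  # ~7% over-provisioning
    for _ in range(10_000):
        wl.pick_block_for_write()
    print(max(wl.erase_counts) - min(wl.erase_counts))  # wear stays within one erase of even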

If Intel and Micron can successfully address the fragility of Flash chips as the wiring and design rules get smaller and smaller, we will start to see larger memories included in more mobile devices. I predict you will see iPhones and Samsung Android smartphones with upwards of 128GBytes of Flash memory storage. Similarly, tablets and ultra-mobile laptops will also start to offer larger and larger SSDs. Costs should stay about where they are now in comparison to current shipping products; we'll just have more products to choose from, say 1TByte SSDs instead of the more typical high-end 512GByte SSDs we see today. Prices might also come down, but that's bound to take a little longer until all the other Flash memory manufacturers catch up.

A flash memory cell (image via Wikipedia: wiring of a Flash memory cell)
Categories
flash memory, technology

Micron intros SSD speed king • The Register

The RealSSD P300 comes in a 2.5-inch form factor and in 50GB, 100GB and 200GB capacity points, and is targeted at servers, high-end workstations and storage arrays. The product is being sampled with customers now and mass production should start in October.

via Micron intros SSD speed king • The Register.

Crucial C300 SSD drive
The C300 as it appears on AnandTech.com

I am now, for the first time since SSDs hit the market, looking at the drive performance of each new product being offered. What I've begun to realize is that the speeds of each product are starting to fall into a familiar range. For instance, I can safely say that for a drive in the 120GB range with Multi-Level Cells you're going to see a minimum of 200MB/sec read/write speeds (reading is usually faster than writing by some amount on every drive). This is a rough estimate of course, but it's becoming more and more common. Smaller drives have slower speeds and suffer on benchmarks due in part to the smaller number of parallel data channels; bigger-capacity drives have more channels and can therefore move more data per second. A good capacity for a boot/data drive is going to be in the 120-128GB category. And while it won't be the best for archiving all your photos and videos, that's fine: use a big old 2-3TB SATA drive for those heavy-lifting duties. I think that will be a more common architecture in the future and not a premium choice as it is now: SSD for boot/data and a typical HDD for big archives and backup.
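That channel arithmetic is simple enough to sketch. The per-channel rate below is an assumed round number for illustration, not a measured specification; the point is just that throughput scales with how many channels the controller can keep busy:

    # Why bigger SSDs tend to benchmark faster: more NAND dies means more
    # channels the controller can read and write in parallel.
    # 40MB/sec per channel is an assumed figure for illustration only.

    PER_CHANNEL_MB_PER_SEC = 40

    for channels in (2, 4, 8, 10):
        print(f"{channels} channels -> roughly {channels * PER_CHANNEL_MB_PER_SEC} MB/sec sequential")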

On the enterprise front things are a little different: speed and throughput are important, but the drive interface is as well. With SATA being the most widely used interface for consumer hardware, big drive arrays for the data center are wedded to some form of Serial Attached SCSI (SAS) or Fibre Channel (FC). So now manufacturers and designers like Micron need to engineer niche products for the high-margin markets that require SAS or FC versions of the SSD. As was the case with the transition from Parallel ATA to Serial ATA, the first products are going to carry SATA-to-X interface adapters and electronics on board to make them compatible. Likely this will be the standard procedure for quite a while, as a 'native' Fibre Channel or SAS interface will require a fair bit of engineering, plus cost increases to accommodate the enterprise interfaces. Speeds, however, will likely always be tuned for the higher-volume consumer market, and the SATA version of each drive will likely be the highest-throughput version in each drive category. I'm thinking the data center folks should adapt and adjust and go with consumer-level gear, adopting SATA SSDs now that the drives are not mechanically spinning disks. Similarly, as more and more manufacturers do their own error correction and wear leveling on the memory chips in SSDs, the reliability will equal or exceed that of FC or SAS spinning disks.

And speaking of spinning disks, the highest throughput I've ever seen quoted for a SATA spinning disk was 150MB/sec; hands down, that was theoretically the best it could ever do. More likely you would only see 80MB/sec (which takes me back to the old days of Fast/Wide SCSI and the Barracuda). Given the limits of moving media like spinning disks and read/write heads tracking across their surface, Flash throughput is just stunning. We are now in an era where Flash SSDs, while slower than RAM, are awfully fast, and fast enough to notice when booting a computer. I think the only real speed enhancement beyond the drive interface is to put Flash SSDs on the motherboard directly and build a SATA drive controller directly into the CPU to handle read/write requests. I doubt it would be cost effective for the amount of improvement, but it would eliminate some of the motherboard electronics and smooth the data flow a bit. Something to look for, certainly, in netbook or slate-style computers in the future.
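As a quick sanity check on those figures, here is the arithmetic for reading a boot volume at each of the rates mentioned above. The 20GB size for an operating system plus applications is just an assumed round number:

    # Time to read an assumed 20GB boot volume at various sustained rates.
    # The rates echo the figures discussed above; 20GB is an illustrative guess.

    boot_volume_mb = 20 * 1024

    for label, mb_per_sec in [("typical spinning SATA disk", 80),
                              ("theoretical SATA spinning-disk ceiling", 150),
                              ("mid-range MLC SSD", 200)]:
        minutes = boot_volume_mb / mb_per_sec / 60
        print(f"{label}: ~{minutes:.1f} minutes")

Cutting the read time roughly in half or better is exactly the kind of difference you notice while sitting in front of the machine as it boots.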