Archive for the ‘flash memory’ Category
Even a PCIe 2.0 x2 link offers about a 40% increase in maximum throughput over SATA 6Gbps. Like most interfaces, PCIe 2.0 isn’t 100% efficient; based on our internal tests the bandwidth efficiency is around 78-79%, so in the real world you should expect to get ~780MB/s out of a PCIe 2.0 x2 link. But remember that SATA 6Gbps isn’t 100% efficient either (around 515MB/s is the typical maximum we see). The currently available PCIe SSD controller designs are all 2.0 based, but we should start to see some PCIe 3.0 drives next year. We don’t have efficiency numbers for 3.0 yet, but I would expect nearly twice the bandwidth of 2.0, making 1GB/s+ the norm.
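The arithmetic behind those figures can be sketched out. The 78% efficiency number comes from the paragraph above, the ~86% SATA figure is implied by the ~515MB/s quoted, and the per-lane raw rate follows from PCIe 2.0's 5 GT/s signaling with 8b/10b encoding:

```python
# Back-of-envelope throughput math for the links discussed above.
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding -> 500 MB/s raw per lane.
# SATA 6Gbps: 6 Gb/s, 8b/10b encoding -> 600 MB/s raw.
def effective_mb_s(raw_mb_s: float, efficiency: float) -> float:
    """Usable throughput after protocol overhead."""
    return raw_mb_s * efficiency

pcie2_x2_raw = 2 * 500    # two PCIe 2.0 lanes at 500 MB/s each
sata6_raw = 600

pcie_real = effective_mb_s(pcie2_x2_raw, 0.78)   # the article's ~78%
sata_real = effective_mb_s(sata6_raw, 0.86)      # implied by ~515 MB/s

print(pcie_real)   # 780.0 MB/s
print(sata_real)   # 516.0 MB/s, close to the ~515 quoted
```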
As I’ve watched the SSD market slowly grow and bloom, it does seem as though the rate at which big changes occur has slowed. The SATA controllers on the drives themselves were kicked up a notch as the transition from SATA 3Gbps to SATA 6Gbps gave us consistent 500MB/sec read/write speeds, and things have stayed there ever since due to the inherent limit of the SATA 6Gbps interface. I had been watching developments in PCIe-based SSDs very closely, but prices were always artificially high because the market for these devices was data centers. Proof positive of this is that Fusion-io catered mostly to two big purchasers of its product, Facebook and Apple. Consequently its prices always put it at the enterprise level: $15K for one PCIe slot device (at any size/density of storage).
Apple has come to the rescue in every sense of the word by adopting PCIe SSDs as the base-level storage for its portable computers. Starting in Summer 2013, Apple began releasing MacBook Air laptops with PCIe SSDs and then designed them into the MacBook Pro as well. The last step was to fully adopt them in the desktop Mac Pro (which has been slow to hit the market). The performance of the PCIe SSD in the Mac Pro is the highest of any shipping consumer-level computer. As the Mac gains market share among all computers shipped, Mac buyers are gaining more speed from their SSDs as well.
So what further plans are in the works for the REST of the industry? Well, SATA Express seems to be the way forward for the 90% of the market still buying Windows PCs. It’s a new standard being put forth by the SATA-IO standards committee. With any luck the enthusiast-market motherboard manufacturers will adopt it as fast as it passes the committees, and we’ll see an AnandTech or Tom’s Hardware review doing a real benchmark and analysis of how well it matches up against the previous generation of hardware.
This week during Mobile World Congress 2014, SanDisk introduced the world’s highest capacity microSDXC memory card, weighing a hefty 128 GB. That’s a huge leap in storage compared to the 128 MB microSD card launched 10 years ago.
Amazing to think how small the form factor and how large the storage size have gotten with microSD-format memory cards. I remember the introduction of SDXC cards and the jump from 32GB to 64GB full-size SD cards. It didn’t take long after that before the SDXC format shrunk down to microSD size. Given the card’s size and the options to expand the memory on certain devices (notably, Apple is absent from this group), this capacity is going to allow a much longer timeline for storing pictures, music and video on our handheld devices. Prior to this, you would have needed a much larger M.2 or mSATA storage card to achieve this level of capacity, and a tablet or a netbook to plug those larger cards into.
Now you can have 128GB at your disposal just by dropping $200 at Amazon. Once you’ve installed it in your Samsung Galaxy, you’ve got what amounts to a complete upgrade to a much more expensive phone (especially if that phone were an iPhone). I also think a microSDXC card would lend itself to moving a large amount of data in a device like one of these hollowed-out nickels: http://www.amazon.com/2gb-MicroSD-Bundle-Mint-Nickel/dp/B0036VLT28
My interest in this would be taking a cell phone overseas and back through U.S. Customs and Immigration, where it’s been shown in the past that they will hold onto devices for further screening. If I knew I could keep 128GB of storage hidden in a metal coin that passed through the baggage X-ray without issue, I would feel a greater sense of security. A card this capacity is practically as large as the hard drives in my home computer and work laptops. It’s really a fundamental change in the portability of a large quantity of personal data outside the series of tubes called the Interwebs. Knowing that stash could be kept away from the prying eyes or casual security of hosting providers would certainly give me more peace of mind.
As NAND flash is supplemented over the next few years by new technologies with improved durability and the same performance as system memory, “we’ll be able to start thinking about building systems where memory and storage are combined into one entity,” he said. “This is the megachange to computer architecture that SNIA is looking at now and preparing the industry for when these new technologies happen.”
More good news on the ULLtraDIMM (non-volatile DIMM) front: a group is forming to begin setting standards for the new form factor. To date SanDisk is the only company known to have architected and manufactured a shipping non-volatile DIMM memory product, and then only under contract to IBM for the X6 Intel-based server line. By all reports SanDisk is not shipping this or under contract to make it for anyone else, but that’s not keeping its competitors from getting new products into heavy sampling and QA testing. We might begin seeing a rush of different products, with varying interconnects and form factors, all of which claim to plug into a typical RAM DIMM slot on an Intel-based motherboard. But as the article on the IBM eXFlash DIMM indicates, this isn’t a simple 1:1 swap of DIMMs for ULLtraDIMMs. Heavy lifting and revisions are needed at the firmware/BIOS level to take advantage of the ULLtraDIMMs populating the DIMM slots on the motherboard. This is not easy, nor is it cheap, and as far as OS support goes, you may need to see whether your OS of choice will also speed the plow by doing caching, loading and storing of memory differently once it has become “aware” of the ULLtraDIMMs on the motherboard.
Without the OS and firmware support, you would be wasting valuable money and time trying to get a real boost from off-the-shelf ULLtraDIMMs in your own randomly chosen Intel-based servers. IBM’s X6 line is just hitting the market and has been sampled by some heavy-hitting real-time financial trading data centers to double-check the claims made about speed and performance. IBM has used this period to make sure the product delivers a difference worth whatever premium it plans to charge for the ULLtraDIMM on customized X6 orders.

But knowing that further down the line a group is at least attempting to organize and set standards means this can become a competitive market for a new memory form factor, and EVERYONE may eventually be able to buy something like an ULLtraDIMM if they need it for their data center server farm. It’s too early to tell where this will lead, but re-using the JEDEC DIMM connection interface is a good start. If Intel wanted to help accelerate this, its onboard memory controllers could become less DRAM-specific and more generalized, handling anything plugged into the DIMM slots on the motherboard. That might prove the final step in really opening the market to a wave of ULLtraDIMM designers and manufacturers. Keep an eye on Intel and see where its chipset architecture, and more specifically its memory controller road maps, lead for future support of NVDIMM and similar technologies.
- IBM Goes Modular And Flashy With X6 Systems – Timothy Prickett Morgan (carpetbomberz.com)
“The eXFlash DIMM is an option for IBM’s System x3850 and x3950 X6 servers providing up to 12.8 TB of flash capacity. (Although just as this story was being written, IBM announced it was selling its x86 server business to Lenovo for $2.3 billion).”
Sadly, it seems the party is over before it even got started for sales of ULLtraDIMM-equipped IBM x86 servers. If Lenovo snatches up this product line, I’m sure the customers will still be perfectly happy, but I worry that the level of innovation and product testing that led to the introduction of the ULLtraDIMM may slow.
I’m not criticizing Lenovo for this; they have done a fine job taking over the laptop and desktop brands from IBM. But the motivation to keep creating early samples of very risky, untried technologies seems to come from IBM’s interest in maintaining its technological lead in the data center, and I don’t know how Lenovo figures into that equation. How much will Lenovo sell in the way of rackmount servers like the X6 line? And just recently there have been rumblings that IBM wants to sell off its long history of semiconductor manufacturing as well.
It’s almost too much to think that IBM would give up R&D in semiconductors. Outside of Bell Labs, IBM’s fundamental work in this field brought silicon-on-insulator, copper interconnects and myriad other firsts to ever smaller, finer design rules. While Intel followed its own process R&D agenda, IBM went its own way too, always trying to find advantage in its inventions. That blistering pace of patent filings, though, means IBM will likely never see all the benefits of that research and development. At best it can hope to enforce its patents in a Nathan Myhrvold-like way, filing lawsuits against all infringers to protect its intellectual property. That’s going to be a sad day for all of us who marveled at what they demoed, prototyped and manufactured. So long IBM, hello IBM Global Services.
‘Leaked Intel roadmap’ promises… er, gear that could die after 7 months [Chris Mellor for theregister.com]
Chris Mellor – The Register http://www.theregister.co.uk/2013/12/09/intel_ssd_roadmappery/
Chris does a quick write-up of a leaked SSD roadmap from Intel. It seems we’re now on a performance plateau at the consumer/business end of the scale for SATA-based SSDs. I haven’t seen an uptick in read/write performance in a long time. Back in the heady days of OCZ/Crucial/SanDisk releasing new drives with new memory controllers on a roughly six-month schedule, speeds slowly marched up the scale until we were seeing 200MB/s reads and 150MB/s writes (equaling some of the fastest magnetic hard drives at the time). Then, yowza, we blew right past that figure to 250MB/s, 275MB/s and higher, with Intel vs. Samsung competing for the top speed championship. SandForce was helping people enter the market at acceptable performance levels (250/200). Prices were not really edging downward, but speeds kept going up, up, up.
Now we’re in the PCIe era, with everyone building their own custom design for a particular platform, make and model. Apple is using its own design of PCIe SSDs for its laptops and soon for the Mac Pro desktop workstations. One or two other manufacturers are adapting M.2-sized memory devices as PCIe add-in cards for different ultra-lightweight designs. But there’s no wave of equivalent aftermarket, third-party SSDs like the one we saw when SATA drives were king. So now we’re left with a very respectable, and still somewhat under-utilized, SATA SSD market with read speeds around 500MB/s and write speeds somewhat below that. Until PCIe starts to converge, consolidate and come up with a common form factor (card size, pin-out, edge connector), we’ll see a long, slow commoditization of SATA SSDs, with a lucky few spinning their own PCIe products. Hopefully there will be an upset and someone will form a group to support a PCIe SSD mezzanine or expansion slot EVERYWHERE. When that time comes, we’ll get the second wave of SSD performance I think we’re all looking for.
Seems like it was only two years ago when OCZ bought memory controller designer and intellectual property (IP) holder Indilinx for its own branded SSD products. At the time everyone was buying SandForce memory controllers to keep up with the Joneses; speed-wise and performance-wise, SandForce was king. But with so many competitors using the same memory controller there was no way to make a profit on a commodity technology. The thinking was that performance isn’t always the prime directive for SSDs; going forward, price would be much more important, and anyone owning their own intellectual property wouldn’t have to pay license fees to companies like SandForce to stay in the business. So OCZ, on a wild profitable tear, bought Indilinx, a designer of NAND flash memory controllers. The die was cast and OCZ was in the driver’s seat, creating the consumer market for high-performance, lower-cost SSDs. Market value went up and up, and whispers were reported of a possible buyout of OCZ by larger hard drive manufacturers, with a price of $1 billion mentioned in connection with the rumors.
Two years later, much has changed. There’s been some shift in the market from 2.5″ SATA drives to smaller and more custom designs. Apple jumped from SATA to PCIe with its MacBook Air in 2013. The M.2 form factor is really well liked in the tablet and lightweight-laptop sector. So who knew OCZ was losing its glamor to such a degree that it would sell? And not just at a level 10x cheaper than its highest rumored price from two years ago; closer to 30x cheaper, roughly $35 million, along with a number of negotiated guarantees to keep the support/warranty system in place and not tarnish the OCZ brand (for now). This story is told over and over again to entrepreneurs and magnate wannabes: sell, sell, sell. No harm in that. But just make sure you’re selling too early rather than too late.
May the SandForce be with you
Nice write-up from AnandTech regarding the press release from LSI about its new third-generation flash memory controllers. The SF3700 series takes over from the SF-2200 and SF-1200 series that preceded it, launched as the era of SSDs was just beginning to dawn (remember those heady days of 32GB SSDs?). Like the frontier days of old, things are starting to consolidate and find an equilibrium of price vs. performance. Commodity pricing rules the day, but SSDs, much less PCIe flash interfaces, are only just creeping into the high end of the market in Apple laptops and soon Apple desktops (apologies to the iMac, which has already adopted the PCIe interface for its flash drives; the Mac Pro is still waiting in the wings).
Things continue to improve in terms of future-proofing the interfaces. Between SATA and PCIe, little was done to force a migration to one interface or the other, as each market had its own peculiarities: SATA SSDs were for the price-conscious consumer market, and PCIe was pretty much only for the enterprise. You had to pick and choose your controller very wisely to maximize the return on a new device design. According to AnandTech, LSI did some heavy lifting, refactoring and redesigning the whole controller so a manufacturer can buy one part and use it either way: as a SATA SSD controller or as a PCIe flash memory controller. The speeds of each interface suggest this is true at the theoretical-throughput end of the scale. LSI reports the PCIe throughput is not too far off the theoretical max (in the ~1.45GB/sec range). Not bad for a chip that can also be used as a SATA SSD controller at 500MB/sec. This is going to make designers, and hopefully consumers, happy.
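As a sanity check on LSI's quoted numbers, here is the link-efficiency math. The x4 lane count for the PCIe mode is my assumption for illustration; the quoted throughput figures are from the paragraph above:

```python
# Quoted dual-mode controller throughput vs. theoretical link maxima.
pcie_quoted = 1450        # MB/s, the "~1.45GB/sec range" above
pcie_raw = 4 * 500        # assumed PCIe 2.0 x4: four 500 MB/s lanes
print(round(pcie_quoted / pcie_raw, 3))   # 0.725 link efficiency

sata_quoted = 500         # MB/s in SATA mode
sata_raw = 600            # SATA 6Gbps after 8b/10b encoding
print(round(sata_quoted / sata_raw, 3))   # 0.833
```

Both efficiencies land in the realistic range for their respective protocols, which lends some credibility to the claim that neither mode is badly compromised by the dual-personality design.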
On a more technical note, as written about in earlier articles mentioning the great peak-flash density/price limit, LSI is fully aware of the memory architectures and the failure and error rates they accumulate over time.
What would happen if we replaced those 16 disk-based V7000s with all-flash V7000s? Each of the disk-based ones delivered 32,502.7 IOPS. Let’s substitute them with 16 all-flash V7000s, like the one above, and, extrapolate linearly; we would get 1,927,877.4 SPC-1 IOPS – nearly 2 million IOPS. Come on IBM: go for it.
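The extrapolation in that quote is straight linear scaling; here is the implied per-array arithmetic worked backward (my calculation, not IBM's or Mellor's):

```python
# Mellor's 16-array all-flash V7000 projection, worked backward.
total_flash_iops = 1_927_877.4    # the quoted 16-array figure
arrays = 16
per_array_flash = total_flash_iops / arrays
print(per_array_flash)            # 120492.3375 SPC-1 IOPS per all-flash V7000

per_array_disk = 32_502.7         # the measured disk-based V7000
print(round(per_array_flash / per_array_disk, 1))   # ~3.7x per array
```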
That’s right: IBM understands the flash-based SSD SAN market and is building some benchmark systems to help market its disk arrays. Finally we’re seeing best-case scenarios for these high-end throughput monsters. It’s entirely possible to create a 2-million-IOPS storage SAN; you just have to assemble the correct components and optimize your storage controllers. What was once a theoretical maximum (1M IOPS) is now achievable with nothing more than a purchase order and an account representative from IBM Global Services. It’s not cheap, not by a long shot, but your Big Data project or OLAP dashboard may just see orders-of-magnitude increases in speed. It’s all just a matter of money. And probably some tweaking via an IBM consultant as well (touché).
Granted, IBM doesn’t have this as a shipping product, but that isn’t really the point. The point is that, on paper, what can be achieved by mixing and matching enterprise storage appliances, disk arrays and software controllers is beyond what any other company is selling. There’s a goldmine to be had if anyone outside a high-frequency-trading skunkworks shares a little in-house knowledge and product familiarity. No doubt it’s not just the network connections that make things faster; it’s the IOPS that will win out no matter what. Write vs. read performance and latency will always trump the fastest access to an updated price in my book. Then again, I don’t work for a high-frequency-trading skunkworks, so I’m not privy to the demands made upon those engineers and consultants. But still, we are now in the best, boldest time yet of nearly too much speed on the storage front. The only thing holding us back is network access times.
- Extreme Blogging (ibm.com)
- IBM i Storage Options Overview (Which storage is right for me?) (ibm.com)
Does Fusion-io have a sustainable competitive advantage or will it get blown away by a hurricane of other PCIe flash card vendors attacking the market, such as EMC, Intel, Micron, OCZ, TMS, and many others?
More updates on the data center uptake of PCIe SSD cards, in the form of two big wins from Facebook and Apple. Price/performance for database applications seems to be skewed heavily toward Fusion-io versus the big guns in large-scale SAN roll-outs. Thanks to its smaller scale and faster speed, a PCIe SSD outstrips the resources needed for an equally fast disk-based storage array (including power and the square feet taken up by all the racks). Typically a large rack of spinning disks is aggregated using RAID controllers and caches to look like a very large, high-speed hard drive. Fibre Channel connections add yet another layer of aggregation on top of that, so the underlying massive disk array can be split into virtual logical drives that fit the storage needs of individual servers and OSes along the way. But to get speed equal to a Fusion-io-style PCIe SSD, say to speed up JUST your MySQL server, the number of equivalent drives, racks, RAID controllers, caches and Fibre Channel host bus adapters is so large and costs so much that it isn’t worth it.
A single PCIe SSD won’t have the same total storage capacity as that larger-scale SAN. But for a one-off speed-up of a MySQL database, you don’t need the massive storage so much as the massive speed-up in I/O, and that’s where the PCIe SSD comes into play. With the newest PCIe 3.0 interfaces and x8 (eight-lane) connectors, the current generation of cards can sustain 2GB/sec of throughput on a single card. To achieve that using the older SAN technology is not just cost-prohibitive but seriously SPACE-prohibitive in all but the largest data centers. The race now is to see how dense and energy-efficient a data center can be, so it comes as no surprise that Facebook and Apple (who are trying to lower costs all around) are leading this charge toward higher density and higher power efficiency.
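To put the space argument in numbers, here is a rough drive-count equivalence for random I/O. Both figures below are my assumptions for illustration, not from the article (a 15K RPM SAS drive typically manages on the order of 175-200 random IOPS):

```python
# Hypothetical sketch: how many 15K RPM drives would a SAN need
# to match one PCIe flash card on random IOPS?
card_iops = 500_000     # assumed figure for a current PCIe flash card
hdd_iops = 180          # assumed per 15K RPM drive
drives_needed = card_iops / hdd_iops
print(round(drives_needed))   # ~2778 drives, before RAID/parity overhead
```

Even granting generous short-stroking tricks on the disk side, thousands of spindles plus their racks, controllers and power budget is exactly the "SPACE prohibitive" problem described above.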
Don’t get me wrong when I tout the PCIe SSD so heavily. Disk storage will never go away in my lifetime; it’s just too cost-effective, and it is fast enough. But for the SysOps in charge of deploying production apps and hitting performance brick walls, the PCIe SSD is going to save the day, and if nothing else will act as a bridge until a better solution can be designed and procured for any given situation. That alone, I think, makes the cost of trying out a PCIe SSD well worth it. Longer term, which vendor will win is still a toss-up. I’m not well versed in the scale of enterprise sales among the big vendors in the PCIe SSD market, but Fusion-io is doing a great job keeping its name in the press and marketing to some big, identifiable names.
But I also give OCZ some credit with its Z-Drive R5, though it’s not quite considered an enterprise data center player. Design-wise, the R5 is helping push the state of the art by trying out new controllers and new designs that attempt to raise the total IOPS and bandwidth on a single card. I’ve seen one story so far, about a test sample at Computex (AnandTech), where a brand-new, clean R5 hit nearly 800,000 IOPS in benchmark tests. That peak performance eventually eroded as the flash chips filled up, falling to around 530,000 IOPS, but the trend is clear. We may see 1 million IOPS on a single PCIe SSD before long. And that, my readers, is going to be an Andy Grove-style 10X difference that brings changes we never thought possible.
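The fresh-vs-filled numbers quoted above work out to roughly a one-third drop once the drive reaches steady state:

```python
# Z-Drive R5 benchmark figures from the Computex story above.
fresh_iops = 800_000      # brand-new, empty drive
steady_iops = 530_000     # after the flash chips filled up
drop = 1 - steady_iops / fresh_iops
print(f"{drop:.0%}")      # 34% lower at steady state
```

That kind of fresh-out-of-box inflation is why steady-state numbers are the ones worth comparing across SSD reviews.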
- SanDisk Reveals Lightning PCI Express SSD Cards (news.softpedia.com)
- Intel’s SSD 910: Finally a PCIe SSD from Intel (anandtech.com)
- Three questions Fusion-io’s rivals face after flash API bombshell (go.theregister.com)
- Souping Up the Mac Pro: OWC Accelsior PCI Express SSD (barefeats.com)
NoSQL database supplier Couchbase says it is tweaking its key-value storage server to hook into Fusion-io’s PCIe flash ioMemory products – caching the hottest data in RAM and storing lukewarm info in flash. Couchbase will use the ioMemory SDK to bypass the host operating system’s IO subsystems and buffers to drill straight into the flash cache.
Can you hear it? It’s starting to happen. Can you feel it? The biggest single meme of the last two years, Big Data/NoSQL, is mashing up with PCIe SSDs and in-memory databases. What does it mean? One can only guess, but the performance gains to be had using a product like Couchbase to overcome the limits of a traditional tables-and-rows SQL database will be amplified when optimized and paired with PCIe SSD data stores. I’m imagining something like a 10X boost in data reads/writes on the Couchbase back end, and something more like realtime performance from what might previously have been treated as a data mart/data warehouse. If the move to use the ioMemory SDK and directFS technology with Couchbase is successful, you are going to see some interesting benchmarks and white papers about the performance gains.
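The "hottest data in RAM, lukewarm in flash" arrangement can be sketched as a two-tier key-value cache. This is a toy illustration of the tiering idea only, not Couchbase's or Fusion-io's actual implementation; the flash tier here is just a dict standing in for an SSD-backed store:

```python
from collections import OrderedDict

class TwoTierCache:
    """Toy RAM-hot / flash-warm key-value cache, illustrating the
    tiering idea described above (not the real ioMemory SDK)."""

    def __init__(self, ram_capacity: int):
        self.ram = OrderedDict()   # hot tier, kept in LRU order
        self.flash = {}            # warm tier (stand-in for SSD store)
        self.ram_capacity = ram_capacity

    def put(self, key, value):
        self.ram[key] = value
        self.ram.move_to_end(key)              # mark as most recently used
        while len(self.ram) > self.ram_capacity:
            cold_key, cold_val = self.ram.popitem(last=False)
            self.flash[cold_key] = cold_val    # demote coldest entry to flash

    def get(self, key):
        if key in self.ram:
            self.ram.move_to_end(key)
            return self.ram[key]
        if key in self.flash:
            value = self.flash.pop(key)        # promote back to the hot tier
            self.put(key, value)
            return value
        return None
```

Here a `get()` on a demoted key promotes it back to RAM, which in turn may push the coldest RAM entry down to flash; that promote/demote churn is what an ioMemory-backed Couchbase node would be doing, just at vastly higher speed and scale.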
What is Violin Memory Inc. doing in this market segment of tiered database caches? Violin is teaming with SAP to create a tiered cache for the HANA in-memory database from SAP. The SSD SAN array provided by Violin could be multi-tasked to other duties (providing a cache to any machine on the SAN network). More likely, though, this product would be a dedicated caching store to speed up all operations of a RAM-based HANA installation, accelerating online transaction processing and parallel queries on realtime data. No doubt SAP users stand to gain a lot if they are already invested heavily in the SAP universe of products. But for the more enterprising, entrepreneurial types, I think Fusion-io and Couchbase could help get a legacy-free group of developers up and running with equal performance and scale. Whichever one you pick is likely to do the job once it’s been purchased, installed and is up and running in a QA environment.
- Fusion-io Software Development Kit Enables Native Flash Memory Access (sacbee.com)
- Roundup: Fusion-io, Oracle, Teradata (datacenterknowledge.com)
- Fusion-io shoves OS aside, lets apps drill straight into flash – The Register (carpetbomberz.com)