Archive for the ‘flash memory’ Category
As NAND flash is supplemented over the next few years by new technologies with improved durability and the same performance as system memory, “we’ll be able to start thinking about building systems where memory and storage are combined into one entity,” he said. “This is the megachange to computer architecture that SNIA is looking at now and preparing the industry for when these new technologies happen.”
More good news on the UltraDIMM, non-volatile DIMM front: a group is forming to begin setting standards for a new form factor. To date, SanDisk is the only company known to have architected and manufactured a shipping non-volatile DIMM product, and then only under contract to IBM for the X6 Intel-based server line. By all reports SanDisk is not shipping this product, or under contract to make it, for anyone else, but that isn't keeping its competitors from getting products of their own into heavy sampling and QA testing. We may begin seeing a rush of different products, with varying interconnects and form factors, all of which claim to plug into a typical RAM DIMM slot on an Intel-based motherboard. But as the article on the IBM UltraDIMM indicates, this isn't a simple 1:1 swap of DIMMs for UltraDIMMs. Heavy lifting and revisions are needed at the firmware/BIOS level to take advantage of the UltraDIMMs populating the DIMM slots on the motherboard. That is neither easy nor cheap, and as far as OS support goes, you will need to see whether your OS of choice will also speed the plow by doing caching, loading and storing of memory differently once it has become "aware" of the UltraDIMMs on the motherboard.

Without the OS and firmware support, you would be wasting valuable money and time trying to get a real boost from off-the-shelf UltraDIMMs in your own randomly chosen Intel-based servers. IBM's X6 line is just hitting the market and has been sampled by some heavy-hitting real-time financial trading data centers to double-check the claims made about speed and performance. IBM has used this period to make sure the product delivers a difference worth whatever premium it plans to charge for the UltraDIMM on customized X6 orders. But knowing that further down the line a group is at least attempting to organize and set standards means this can become a competitive market for a new memory form factor, and EVERYONE may eventually be able to buy something like an UltraDIMM if they need it for their data center server farm. It's too early to tell where this will lead, but re-using the JEDEC DIMM connection interface is a good start. If Intel wanted to help accelerate this, its onboard memory controllers could also become less DRAM-specific and more generalized, acting as a memory controller for anything plugged into the DIMM slots on the motherboard. That might prove the final step in really opening the market to a wave of UltraDIMM designers and manufacturers. Keep an eye on Intel and see where its chipset architecture, and more specifically its memory controller road maps, lead for future support of NVDIMM or similar technologies.
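For a concrete picture of what that OS "awareness" buys you, here is a minimal sketch of load/store access to a non-volatile DIMM on Linux. Everything in it is an assumption on my part: /dev/pmem0 is just the Linux pmem driver's naming convention, and whether an UltraDIMM ever shows up that way depends on exactly the firmware/BIOS work described above; nothing here comes from IBM or SanDisk documentation.

```python
import mmap
import os

# Hypothetical sketch: assumes the firmware/BIOS has advertised the NVDIMM to
# the kernel and the pmem driver exposes it as /dev/pmem0 (Linux convention).
DEV = "/dev/pmem0"

fd = os.open(DEV, os.O_RDWR)
try:
    # Map 4 KiB of the device straight into our address space. On an
    # NVDIMM-aware kernel this becomes a load/store window onto the module,
    # the "memory and storage combined into one entity" idea in practice.
    buf = mmap.mmap(fd, 4096, mmap.MAP_SHARED)
    buf[0:13] = b"hello, nvdimm"   # a plain memory store, no write() syscall
    buf.flush()                    # msync: ask the kernel to make it durable
    buf.close()
finally:
    os.close(fd)
```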
- IBM Goes Modular And Flashy With X6 Systems – Timothy Prickett Morgan (carpetbomberz.com)
“The eXFlash DIMM is an option for IBM’s System x3850 and x3950 X6 servers providing up to 12.8 TB of flash capacity. (Although just as this story was being written, IBM announced it was selling its x86 server business to Lenovo for $2.3 billion).”
Sadly, it seems the party is over before it even got started for sales and shipping of UltraDIMM-equipped IBM x86 servers. If Lenovo snatches up this product line, I'm sure the customers will still be perfectly happy, but I worry that the level of innovation and product testing that led to the introduction of the UltraDIMM may be slowed.

I'm not criticizing Lenovo for this; they have done a fine job taking over the laptop and desktop brands from IBM. But the motivation to keep creating new, early samples of very risky and untried technologies seems to stem from IBM's interest in maintaining its technological lead in the data center. I don't know how Lenovo figures into that equation. How much will Lenovo sell in the way of rackmount servers like the X6 line? And just recently there have been rumblings that IBM wants to sell off its long history of semiconductor manufacturing as well.

It's almost too much to think IBM would give up R&D in semiconductors. Outside of Bell Labs, IBM's fundamental work in this field brought firsts like silicon-on-insulator, copper interconnects and myriad others at ever smaller, finer design rules. While Intel followed its own process R&D agenda, IBM went its own way too, always trying to find advantage in its inventions. That blistering pace of patent filings, though, means IBM will likely never see all the benefits of that research and development. At best it can hope to enforce its patents in a Nathan Myhrvold-like way, filing lawsuits against all infringers to protect its intellectual property. That's going to be a sad day for all of us who marveled at what IBM demoed, prototyped and manufactured. So long IBM, hello IBM Global Services.
‘Leaked Intel roadmap’ promises… er, gear that could die after 7 months [Chris Mellor for theregister.com]
Chris Mellor – The Register http://www.theregister.co.uk/2013/12/09/intel_ssd_roadmappery/
Chris does a quick write-up of a leaked SSD roadmap from Intel. It seems we're now at a performance plateau on the consumer/business end of the scale for SATA-based SSDs. I haven't seen an uptick in read/write performance in a long time. Back in the heady days of OCZ/Crucial/SanDisk releasing new drives with new memory controllers on a roughly 6-month schedule, speeds slowly marched up the scale until we were seeing 200MB/sec read and 150MB/sec write (equalling some of the fastest magnetic hard drives of the time). Then, yowza, we blew right past that figure to 250MB/sec, 275MB/sec and higher, with Intel vs. Samsung as the top speed champions at that point. SandForce was helping people enter the market at acceptable performance levels (250/200). Prices were not really edging downward, but speeds kept going up, up, up.

Now we're in the PCIe era, with everyone building their own custom design for a particular platform, make and model. Apple is using its own PCIe SSD design in its laptops and soon in the Mac Pro desktop workstation. One or two other manufacturers are adapting M.2-sized memory devices as PCIe add-in cards for different ultra-lightweight designs. But there's no wave of the equivalent aftermarket, 3rd-party SSDs we saw when SATA drives were king. So we're left with a very respectable, and still somewhat under-utilized, SATA SSD market with speeds around 500MB/sec read and somewhat less for write. Until PCIe starts to converge, consolidate and settle on a common form factor (card size, pin-out, edge connector), we'll see a long, slow commoditization of SATA SSDs with the lucky few spinning their own PCIe products. Hopefully there will be an upset and someone will form a group to support a PCIe SSD mezzanine or expansion slot EVERYWHERE. When that time comes, we'll get the second wave of SSD performance I think we're all looking for.
It seems like only two years ago that OCZ bought memory controller designer and intellectual property (IP) holder Indilinx for its own branded SSD products. At the time everyone was buying SandForce memory controllers to keep up with the Joneses; speed-wise and performance-wise, SandForce was king. But with so many competitors using the same memory controller there was no way to make a profit on a commodity technology. The thinking was that performance isn't always the prime directive for SSDs; going forward, price would matter much more, and anyone owning their own intellectual property wouldn't have to pay license fees to companies like SandForce to stay in the business. So OCZ, then on a wildly profitable tear, bought Indilinx, a designer of NAND flash memory controllers. The die was cast and OCZ was in the driver's seat, creating the consumer market for high-performance, lower-cost SSDs. Market value went up and up, and there were whispered reports of a possible buyout of OCZ by larger hard drive manufacturers, with a price of $1 billion mentioned in connection with the rumors.

Two years later, much has changed. There's been a shift in the market from 2.5″ SATA drives to smaller and more custom designs. Apple jumped from SATA to PCIe with its MacBook Air just this past Fall 2013. The M.2 form factor is well liked in the tablet and lightweight laptop sector. So who knew OCZ was losing its glamor to such a degree that it would sell? And not just at a level 10x cheaper than its highest-profile price from two years ago. No, not 10x, but more like 100x cheaper than what it would have asked two years ago: two whole orders of magnitude less, roughly $35 million, along with a large number of negotiated guarantees to keep the support/warranty system in place and not tarnish the OCZ brand (for now). This story is told over and over again to entrepreneurs and magnate wannabes. Sell, sell, sell. No harm in that. But just make sure you're selling too early rather than too late.
May the SandForce be with you
Nice write-up from Anandtech regarding the press release from LSI about its new 3rd-generation flash memory controllers. The 3000 series takes over from the 2200 and 1200 series that preceded it as the era of SSDs was just beginning to dawn (remember those heady days of 32GB SSDs?). Like the frontier days of old, things are starting to consolidate and find an equilibrium of price vs. performance. Commodity pricing rules the day, yet SSDs, to say nothing of PCIe flash interfaces, are only just creeping into the high end of the market in Apple laptops and soon Apple desktops (apologies to the iMac, which has already adopted the PCIe interface for its flash drives; the Mac Pro is still waiting in the wings).

Things continue to improve in terms of future-proofing the interfaces. From SATA to PCIe, little was done to force a migration to one interface or the other, as each market had its own peculiarities. SATA SSDs were for the price-conscious consumer market, and PCIe was pretty much only for the enterprise. You had to pick and choose your controller very wisely to maximize the return on a new device design. According to Anandtech, LSI did some heavy lifting by refactoring and redesigning the whole controller, allowing a manufacturer to buy one part and use it either way: as a SATA SSD controller or as a PCIe flash memory controller. The speeds of each interface indicate this is true at the theoretical-throughput end of the scale. LSI reports the PCIe throughput is not too far off the theoretical max (in the ~1.45GB/sec range). Not bad for a chip that can also be used as a SATA controller at 500MB/sec. This is going to make designers, and hopefully consumers, happy as well.
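Those reported speeds square with the raw link math. Here's a quick back-of-envelope check; the Gen2 x4 assumption on the PCIe side is mine, inferred from the quoted ~1.45GB/sec figure, not something stated in the press release.

```python
# Back-of-envelope check on the quoted figures. Link rates and encoding
# overheads are standard; the Gen2 x4 lane configuration is an assumption.

def effective_gb_per_sec(gbits_per_lane, lanes, encoding_efficiency):
    """Raw line rate x lanes x encoding efficiency, converted to bytes."""
    return gbits_per_lane * lanes * encoding_efficiency / 8

sata3 = effective_gb_per_sec(6.0, 1, 8 / 10)      # SATA III, 8b/10b coding
pcie2_x4 = effective_gb_per_sec(5.0, 4, 8 / 10)   # PCIe Gen2 x4, 8b/10b

print(f"SATA III ceiling:    {sata3:.2f} GB/s (drives deliver ~0.5 GB/s)")
print(f"PCIe 2.0 x4 ceiling: {pcie2_x4:.2f} GB/s (LSI quotes ~1.45 GB/s)")
```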
On a more technical note, as written about in earlier articles mentioning the great peak-flash memory density/price limit, LSI is fully aware of the memory architectures and the failure and error rates they accumulate over time.
What would happen if we replaced those 16 disk-based V7000s with all-flash V7000s? Each of the disk-based ones delivered 32,502.7 IOPS. Let’s substitute them with 16 all-flash V7000s, like the one above, and, extrapolating linearly, we would get 1,927,877.4 SPC-1 IOPS – nearly 2 million IOPS. Come on IBM: go for it.
That's right: IBM understands the flash-based SSD SAN market and is building benchmark systems to help market its disk arrays. Finally we're seeing some best-case scenarios for these high-end throughput monsters. It's entirely possible to create a 2-million-IOPS storage SAN; you just have to assemble the correct components and optimize your storage controllers. What was once a theoretical maximum throughput (1M IOPS) is now achievable with nothing more than a purchase order and an account representative from IBM Global Services. It's not cheap, not by a long shot, but your Big Data project or OLAP dashboard may just see orders-of-magnitude increases in speed. It's all just a matter of money. And probably some tweaking by an IBM consultant as well (touché).
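Chris Mellor's extrapolation is easy to replay. Note that the single all-flash V7000 figure below is implied by his numbers rather than stated outright in the quote.

```python
# Replaying the quoted extrapolation. The single all-flash figure is implied
# by the quote's numbers, not stated there directly.

disk_v7000_iops = 32_502.7           # per disk-based V7000, from the quote
quoted_total = 1_927_877.4           # the 16-array all-flash projection
flash_v7000_iops = quoted_total / 16

print(f"Implied all-flash V7000: {flash_v7000_iops:,.1f} SPC-1 IOPS")
print(f"Per-array speed-up over disk: {flash_v7000_iops / disk_v7000_iops:.1f}x")
```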
Granted, IBM doesn't have this as a shipping product, but that isn't really the point. The point is that, on paper, what can be achieved by mixing and matching enterprise storage appliances, disk arrays and software controllers is beyond what any other company is selling. There's a goldmine to be had if anyone outside a high-frequency-trading skunkworks shares even a little in-house knowledge and product familiarity. No doubt it's not just the network connections that make things faster; it's the IOPS that will out, no matter what. Write vs. read performance and latency will always trump the fastest access to an updated price in my book. But I don't work for a high-frequency-trading skunkworks either, so I'm not privy to the demands made on those engineers and consultants. Still, we are now in the best, boldest time yet of nearly too much speed on the storage front. The only thing holding us back is network access times.
- Extreme Blogging (ibm.com)
- IBM i Storage Options Overview (Which storage is right for me?) (ibm.com)
Does Fusion-io have a sustainable competitive advantage or will it get blown away by a hurricane of other PCIe flash card vendors attacking the market, such as EMC, Intel, Micron, OCZ, TMS, and many others?
More updates on the data center uptake of PCIe SSD cards, in the form of two big wins at Facebook and Apple. Price/performance for database applications seems heavily skewed toward Fusion-io versus the big guns of large-scale SAN roll-outs. Thanks to its smaller scale and faster speed, a PCIe SSD sidesteps the pile of resources needed to field an equally fast disk-based storage array (including the power and the square footage taken up by all the racks). Typically, a large rack of spinning disks is aggregated using RAID controllers and caches to look like one very large, high-speed hard drive. Fibre Channel connections add yet another layer of aggregation on top, so the underlying massive disk array can be split into virtual logical drives that fit the storage needs of individual servers and OSes along the way. But to get speed equal to a Fusion-io-style PCIe SSD, say to speed up JUST your MySQL server, the number of equivalent drives, racks, RAID controllers, caches and Fibre Channel host bus adapters is so large, and costs so much, that it isn't worth it.

A single PCIe SSD won't have the same total capacity as that larger-scale SAN. But for a one-off speed-up of a MySQL database you don't need the massive storage so much as the massive increase in I/O, and that's where the PCIe SSD comes into play. With the newest PCIe 3.0 interfaces and x8 (eight-lane) connectors, the current generation of cards can sustain 2GB/sec of throughput on a single card. Achieving that with older SAN technology is not just cost-prohibitive but seriously SPACE-prohibitive in all but the largest data centers. The race now is to see how dense and energy-efficient a data center can be, so it comes as no surprise that Facebook and Apple (who are attempting to lower costs all around) are the ones leading this charge toward higher density and higher power efficiency.
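To see why, try a rough sizing exercise. The per-disk and per-rack figures below are my own assumptions (a 15K RPM drive is commonly credited with roughly 175 to 200 random IOPS), not anything from the articles.

```python
# Rough sizing of the claim: how much spinning hardware does it take to
# match one PCIe SSD on random I/O? All inputs are era-appropriate
# assumptions, not vendor numbers.

card_iops = 500_000      # order-of-magnitude sustained IOPS for a PCIe card
disk_iops = 180          # assumed 15K RPM drive on a random workload
disks_per_rack = 480     # assumed dense shelves, ~60 drives x 8 shelves

disks_needed = card_iops / disk_iops
print(f"Disks to match one card: {disks_needed:,.0f}")
print(f"Racks of drives (at {disks_per_rack}/rack): {disks_needed / disks_per_rack:.1f}")
```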
Don't get me wrong when I tout the PCIe SSD so heavily. Disk storage will never go away in my lifetime; it's just too cost-effective, and it is fast enough. But for the SysOps in charge of deploying production apps and hitting performance brick walls, the PCIe SSD is really going to save the day, and if nothing else it will act as a bridge until a better solution can be designed and procured for any given situation. That alone, I think, makes the cost of trying out a PCIe SSD well worth it. Longer term, which vendor will win is still a toss-up. I'm not well versed in the scale of the big vendors' enterprise sales in the PCIe SSD market, but Fusion-io is doing a great job keeping its name in the press and marketing to some big, identifiable names.

But I also give OCZ some credit with its Z-Drive R5, though it's not quite considered an enterprise data center player. Design-wise, the R5 is helping push the state of the art by trying out new controllers and new designs that attempt to raise the total I/O count and bandwidth of a single card. I've seen one story so far, about a test sample at Computex (Anandtech), in which a brand-new, clean R5 hit nearly 800,000 IOPS in benchmark tests. That peak performance eventually eroded as the flash chips filled up, falling to around 530,000 IOPS, but the trend is clear: we may see 1 million IOPS on a single PCIe SSD before long. And that, my readers, is going to be an Andy Grove-style 10X difference that brings changes we never thought possible.
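For readers wondering what those benchmarks are actually counting, here's a bare-bones sketch of a random-read IOPS measurement. Single-threaded Python won't get anywhere near 800K; the point is just to show what one "I/O" is. The file path is a hypothetical scratch file you'd pre-create.

```python
import os
import random
import time

PATH = "testfile.bin"    # hypothetical scratch file, pre-created, a few MB+
BLOCK = 4096             # 4 KiB, the usual unit for random-I/O benchmarks
SECONDS = 5

fd = os.open(PATH, os.O_RDONLY)
blocks = os.fstat(fd).st_size // BLOCK

ops = 0
deadline = time.perf_counter() + SECONDS
while time.perf_counter() < deadline:
    offset = random.randrange(blocks) * BLOCK
    os.pread(fd, BLOCK, offset)    # one random 4 KiB read = one I/O
    ops += 1
os.close(fd)

print(f"{ops / SECONDS:,.0f} IOPS (4 KiB random reads)")
```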
- SanDisk Reveals Lightning PCI Express SSD Cards (news.softpedia.com)
- Intel’s SSD 910: Finally a PCIe SSD from Intel (anandtech.com)
- Three questions Fusion-io’s rivals face after flash API bombshell (go.theregister.com)
- Souping Up the Mac Pro: OWC Accelsior PCI Express SSD (barefeats.com)
NoSQL database supplier Couchbase says it is tweaking its key-value storage server to hook into Fusion-io’s PCIe flash ioMemory products – caching the hottest data in RAM and storing lukewarm info in flash. Couchbase will use the ioMemory SDK to bypass the host operating system’s IO subsystems and buffers to drill straight into the flash cache.
Can you hear it? It's starting to happen. Can you feel it? The biggest single meme of the last two years, Big Data/NoSQL, is mashing up with PCIe SSDs and in-memory databases. What does it mean? One can only guess, but the performance gains to be had using a product like Couchbase to overcome the limits of a traditional tables-and-rows SQL database will be amplified when optimized and paired up with PCIe SSD data stores. I'm imagining something like a 10X boost in data reads/writes on the Couchbase back end, and something more like realtime performance from a workload that might previously have been treated like a data mart/data warehouse. If the move to use the ioMemory SDK and directFS technology with Couchbase is successful, you are going to see some interesting benchmarks and white papers about the performance gains.
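Couchbase's actual hooks go through Fusion-io's proprietary ioMemory SDK, which I can't reproduce here. As a stand-in, here is the stock-Linux analogue of "bypassing the OS's I/O subsystems and buffers": an O_DIRECT read that skips the kernel page cache. The file path is hypothetical, and O_DIRECT insists on page-aligned buffers, hence the anonymous mmap.

```python
import mmap
import os

PATH = "datafile.bin"        # hypothetical file sitting on the flash device

fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
buf = mmap.mmap(-1, 4096)    # anonymous map gives a page-aligned 4 KiB buffer
try:
    # Read straight from the device into our buffer, no page-cache copy,
    # the same idea (in miniature) as drilling past the OS I/O layers.
    nread = os.preadv(fd, [buf], 0)
    print(f"read {nread} bytes, first 16: {buf[:16]!r}")
finally:
    buf.close()
    os.close(fd)
```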
What is Violin Memory Inc. doing in this market segment of tiered database caches? Violin is teaming with SAP to create a tiered cache for the HANA in-memory database from SAP. The SSD SAN array provided by Violin could be multi-tasked for other duties (providing a cache to any machine on the SAN network), but this product would most likely be a dedicated caching store to speed up all operations of a RAM-based HANA installation, accelerating online transaction processing and parallel queries on realtime data. No doubt SAP users stand to gain a lot if they are already heavily invested in the SAP universe of products. But for the more enterprising, entrepreneurial types, I think Fusion-io and Couchbase could get a legacy-free group of developers up and running with equal performance and scale. Whichever one you pick is likely to do the job once it's been purchased, installed and is up and running in a QA environment.
- Fusion-io Software Development Kit Enables Native Flash Memory Access (sacbee.com)
- Roundup: Fusion-io, Oracle, Teradata (datacenterknowledge.com)
- Fusion-io shoves OS aside, lets apps drill straight into flash – The Register (carpetbomberz.com)
As a result of this impending price war, if you are planning on upgrading your system with an SSD, you might consider waiting for a few months to watch the market and see how much prices fall.
Great analysis and news from Topher Kessler at C|Net regarding competition in the flash memory industry. I have to say: keep your eyes peeled between now and September and track those prices closely through both Amazon and Newegg; they are neck and neck when it comes to prices on any of the big name-brand SSDs. Samsung and Intel would be at the top of my list going into the Fall, but don't be too quick to purchase your gear. Just wait it out as Intel goes up against OCZ and Crucial and Kingston.

How much prices change will likely vary with the total capacity of each drive (there's a floor set by the cost of the chips in the device), so don't expect a 512GB SSD to drop by 50% by the end of summer. It's not going to be that drastic. What will really disappear, once the smaller vendors are eliminated from the market, is the price premium brought about by the semi-false scarcity of SSDs. I will be curious to see how Samsung fares in this battle between the other manufacturers, as they were not specifically listed as a participant in the price war. Being a chip manufacturer, however, gives them a genuine advantage: they supply flash memory chips to many of the people who design and manufacture SSDs.
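To put some illustrative numbers on that chip-count floor (the dollar figures in this sketch are assumptions made up for the arithmetic, not figures from Topher's article):

```python
# Illustrative only: the dollar inputs are assumptions. The point is that the
# NAND itself puts a capacity-proportional floor under SSD prices, so a price
# war can squeeze margin and scarcity premium, but not the chip cost.

capacity_gb = 512
nand_cost_per_gb = 0.60    # assumed component cost for the era, $/GB
retail_price = 450.00      # assumed street price for a 512 GB drive

nand_floor = capacity_gb * nand_cost_per_gb
squeezable = retail_price - nand_floor
print(f"NAND cost floor: ${nand_floor:.0f} of a ${retail_price:.0f} drive")
print(f"Max cut before selling at a loss: {squeezable / retail_price:.0%}")
```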
- Low-cost Intel 330 series SSDs sport SandForce SF-2281 SSD Controller (denalimemoryreport.wordpress.com)
- SSD Price Wars (robert.accettura.com)
- SSD Prices To Fall Thanks To Price War (itproportal.com)
Like the native API libraries, directFS is implemented directly on ioMemory, significantly reducing latency by entirely bypassing operating system buffer caches, file system and kernel block I/O layers. Fusion-io directFS will be released as a practical working example of an application running natively on flash to help developers explore the use of Fusion-io APIs.
via (Chris Mellor) Fusion-io shoves OS aside, lets apps drill straight into flash • The Register.
Another interesting announcement from the folks at Fusion-io regarding their brand of PCIe SSD cards. There was a proof-of-concept project covered previously by Chris Mellor in which Fusion-io attempted to top out at 1 billion IOPS using a novel architecture in which the PCIe SSDs were not treated as storage at all. Instead, the Fusion-io cards were turned into a memory tier, bypassing most of the OS's own buffers and queues for handling a traditional filesystem. Doing this reaped many benefits by removing the latency inherent in a filesystem that has to communicate through the OS kernel to the memory subsystem and back again.

Considering also the work done over the last 4 or more years on so-called "in-memory" databases and big data projects in general, a product like directFS might pair nicely with them. The limit with in-memory databases is always the amount of RAM available and the total number of CPU nodes managing those memory subsystems. Tack on the storage necessary to load and snapshot the database over time and you have a very traditional-looking database server. But if you supplement that traditional-looking architecture with a tier of storage like directFS, the SAN network becomes a third tier of storage, almost like a tape backup device. It sounds more interesting the more I daydream about it.
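Here's a toy sketch of that daydream: a RAM tier in front, a flash-backed tier behind it, and the SAN stubbed out as the slow third tier. Every name in it is hypothetical; in a real deployment, something like directFS would sit where the shelve file does.

```python
import shelve

class TieredStore:
    """Toy three-tier lookup: RAM dict, flash-backed store, SAN stub."""

    def __init__(self, flash_path="flash_tier.db"):
        self.ram = {}                          # tier 1: in-memory
        self.flash = shelve.open(flash_path)   # tier 2: file on flash

    def get(self, key):
        if key in self.ram:                    # hot: served from RAM
            return self.ram[key]
        if key in self.flash:                  # warm: promote from flash
            self.ram[key] = self.flash[key]
            return self.ram[key]
        value = self._fetch_from_san(key)      # cold: slow tier, like tape
        self.flash[key] = value
        self.ram[key] = value
        return value

    def _fetch_from_san(self, key):
        return f"<blob for {key}>"             # stub for the SAN round trip

store = TieredStore()
print(store.get("row:42"))   # cold path the first time...
print(store.get("row:42"))   # ...RAM-resident thereafter
```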
- Three questions Fusion-io’s rivals face after flash API bombshell (go.theregister.com)
- Fusion-io SDK gives developers native memory access, keys to the NAND realm (engadget.com)
- Fusion-io demos billion IOPS server config – The Register (carpetbomberz.com)