Categories
data center flash memory SSD

Fusion-io demos billion IOPS server config • The Register

Fusion-io has achieved a billion IOPS from eight servers in a demonstration at the DEMO Enterprise event in San Francisco.

The cracking performance needed just eight HP DL370 G6 servers, each running Linux 2.6.35.6-45 on two 6-core Intel processors with 96GB of RAM. Each server was fitted with eight 2.4TB ioDrive2 Duo PCIe flash drives; that’s 19.2TB of flash per server and 153.6TB of flash in total.
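A quick sanity check of that arithmetic, sketched in Python. The server and drive counts are as reported; the per-card figure is just the headline billion IOPS divided evenly, which is a simplification:

```python
# Back-of-the-envelope arithmetic for the demo configuration described above.
servers = 8
drives_per_server = 8           # ioDrive2 Duo PCIe cards per server
capacity_per_drive_tb = 2.4

flash_per_server_tb = drives_per_server * capacity_per_drive_tb
total_flash_tb = servers * flash_per_server_tb
iops_per_drive = 1_000_000_000 / (servers * drives_per_server)

print(round(flash_per_server_tb, 1))   # 19.2 TB per server
print(round(total_flash_tb, 1))        # 153.6 TB in total
print(iops_per_drive)                  # 15,625,000 IOPS per card, if spread evenly
```

So each card would be carrying roughly 15.6 million IOPS in this demo, which shows just how far past the old million-IOPS bar the whole rig has gone.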

via Fusion-io demos billion IOPS server config • The Register.

This is, in a word, no mean feat. One million IOPS was the target to beat just two years ago for anyone attempting to buy or build their own Flash-based storage from the top Enterprise-level manufacturers. So the bar has risen a full three orders of magnitude above that previous high-water mark. Add to that the magic sauce of bypassing the host OS and using the Flash memory as just an enhanced large memory.

This makes me wonder, how exactly does the Flash memory get used alongside the RAM memory pool?

How do the Applications use the Flash memory, and how does the OS use it?

Those are the details I think no one other than Fusion-io can provide as a value-add beyond the PCIe-based flash memory modules themselves. Instead of hardware being the main differentiator (drive controllers, Single Level Cells, etc.), Fusion-io is using a different path through the OS to the Flash memory. The File I/O system traditionally tied to hard disk storage, and more generically to ‘storage’ of some kind, is being sacrificed. But I understand the logic, design and engineering of bypassing the overhead of the ‘storage’ route and redefining the Flash memory as another form of system memory.

Maybe the old-style von Neumann or Harvard architecture computers are too old school for this new paradigm, with a larger tiered memory pool of DRAM and Flash memory modules making up the most important parts of the computer. Maybe disk storage could be used as a mere backup of the data held in the Flash memory? Hard to say, and I think Fusion-io is right to hold this info close, as they might be able to make this a more general-case solution to the I/O problems facing some customers (not just Wall Street-type high frequency traders).

Categories
cloud computers data center flash memory SSD technology

EMC’s all-flash benediction: Turbulence ahead • The Register


A flash array controller needs: “An architecture built from the ground up around SSD technology that sizes cache, bandwidth, and processing power to match the IOPS that SSDs provide while extending their endurance. It requires an architecture designed to take advantage of SSDs unique properties in a way that makes a scalable all-SSD storage solution cost-effective today.”

via EMC’s all-flash benediction: Turbulence ahead • The Register.

I think that storage controllers are the point of differentiation now for the SSDs coming on the market today. Similarly, the device that ties those SSDs into the computer and its OS is equally, nay more, important. I’m thinking specifically about a product like the SandForce 2000 series SSD controllers. They more or less provide a SATA or SAS interface into a small array of flash memory chips that are made to look and act like a spinning hard drive. However, the time is coming soon when all those transitional conventions can just go away and a clean-slate design can go forward. That’s why I’m such a big fan of the PCIe-based flash storage products. I would love to see SandForce create a disk controller with one interface that speaks PCIe 2.0/3.0 and the other left open to whatever technology Flash memory manufacturers are using today. Ideally the host bus would always be a high speed PCI Express interface, which could be licensed or designed from the ground up to speed I/O in and out of the Flash memory array. On the memory-facing side it could be almost like an FPGA, made to order according to the features and idiosyncrasies of whatever Flash memory architecture is shipping at the time of manufacture. The same would apply for any type of error correction and over-provisioning for failed memory cells as the SSD ages through multiple read/write cycles.

In the article I quoted at the top from The Register, the big storage array vendors are attempting to market new products by adding Flash memory to either one component of the whole array product or, in the case of EMC, using Flash memory-based SSDs throughout the whole product. That more aggressive approach has seemed overly cost prohibitive given the low manufacturing cost of large capacity commodity hard drives. But the problem is, in the market where these vendors compete, everyone pays an enormous price premium for the hard drives, storage controllers, cabling and software that makes it all work. Though the hard drive might be cheaper to manufacture, the storage array is not, and that margin is what makes storage a very profitable business to be in. As stated last week in the benchmark comparisons of high-throughput storage arrays, Flash-based arrays are ‘faster’ per dollar than a well designed, well engineered, top-of-the-line hard drive-based storage array from IBM. So for the segment of the industry that needs the throughput more than the total space, EMC will likely win out. But Texas Memory Systems (TMS) is out there too, attempting to sign OEM contracts with folks selling into the storage array market. The Register does a very good job surveying the current field of vendors and manufacturers, looking at which companies might buy a smaller company like TMS. But the more important trend spotted throughout the survey is the decidedly strong move towards native Flash memory in the storage arrays being sold into the Enterprise market. EMC has a lead that most will be following real soon now.

Categories
computers data center flash memory SSD technology

TMS flash array blows Big Blue away • The Register


Texas Memory Systems has absolutely creamed the SPC-1 storage benchmark with a system that comfortably exceeds the current record-holding IBM system at a cost per transaction of 95 per cent less.

via TMS flash array blows Big Blue away • The Register.

One might ask a simple question: how is this even possible, given the cost of the storage media involved? How is it that a Flash-based storage array like the RamSan beat a huge pile of IBM hard drives all networked and bound together in a massive storage system? And how did it do it for less? Woe be to those unschooled in the ways of the Per-feshunal Data Center purchasing dept. You cannot enter the halls of the big players unless you’ve got million dollar budgets for big iron servers and big iron storage. Fibre Channel and InfiniBand rule the day when it comes to big data throughput. All those spinning drives are accessed simultaneously as if each one held one slice of the data you were asking for, each one delivering up its 1/10 of 1% of the total file you were trying to retrieve. And the resulting speed makes it look like one hard drive that is 100X faster than your desktop computer hard drive, all through the smoke and mirrors of the storage controllers and the software that makes them go. But what if, just what if, we decided to take Flash memory chips and knit them together with a storage controller that made them appear to be just like a big iron storage system? Well, since Flash obviously costs something more than $1 per gigabyte and disk drives cost somewhere less than 10 cents per gigabyte, the Flash storage loses, right?

In terms of total storage capacity Flash will lose for quite some time when you are talking about holding everything on disk all at the same time. But that is not what’s being benchmarked here at all. No, in fact what is being benchmarked is the rate at which Input (writing of data) and Output (reading of data) is done through the storage controllers. IOPS measures the total number of completed reads/writes done in a given amount of time. Prior to this latest example, the RamSan-630, IBM was king of the mountain with its huge striped Fibre Channel arrays all linked up through its own storage array controllers. The RamSan came in at 400,503.2 IOPS as compared to IBM’s top-of-the-line SAN Volume Controller with 380,489.3. That’s not very much difference, you say, especially considering how much smaller the amount of data a RamSan can hold… And that would be a valid argument, but consider again: that’s not what we’re benchmarking; it is the IOPS.

Total cost for the IBM benchmarked system was $18.83 per IOPS. The RamSan (which bested IBM in total IOPS) was a measly $1.05 per IOPS. The cost is literally 95% less than IBM’s. Why? Consider that the price (even if it was steeply discounted, as most tech writers will caveat) of IBM’s benchmarked system was $7.17 million. Remember, I said you need million dollar budgets to play in the data center space. Now consider the RamSan-630 costs $419,000. If you want speed, dump your spinning hard drives; Flash is here to stay and you cannot argue with the speed versus the price at this level of performance. No doubt this is going to threaten the livelihood of a few big iron storage manufacturers. But through disruption, progress is made.
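The math is easy to verify with a quick Python sketch, using the prices and IOPS figures quoted above (reported discounts and rounding account for the pennies of difference from the published numbers):

```python
# Rechecking the SPC-1 price/performance figures quoted above.
ibm_price, ibm_iops = 7_170_000, 380_489.3
ramsan_price, ramsan_iops = 419_000, 400_503.2

ibm_dollars_per_iops = ibm_price / ibm_iops           # ~$18.84
ramsan_dollars_per_iops = ramsan_price / ramsan_iops  # ~$1.05
savings = 1 - ramsan_dollars_per_iops / ibm_dollars_per_iops

print(round(ibm_dollars_per_iops, 2), round(ramsan_dollars_per_iops, 2))
print(f"{savings:.0%} cheaper per IOPS")  # ~94%, which the article rounds up to 95%
```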

Categories
computers flash memory technology

Viking Modular plugs flash chips into memory sockets • The Register


What a brilliant idea: put flash chips into memory sockets. That’s what Viking Modular is doing with its SATADIMM product.

via Viking Modular plugs flash chips into memory sockets • The Register.

This sounds like an interesting evolution of the SSD type of storage. But I don’t know if there is a big advantage in forcing a RAM memory controller to be the bridge to a Flash memory controller. In terms of bandwidth, the speed seems comparable to a 4x PCIe interface. I’m thinking now of how it might compare to PCIe-based SSDs from OCZ or Fusion-io. It seems like the advantage is still held by PCIe in terms of total bandwidth and capacity (above 500MB/sec and 2 Terabytes of total storage). It may be slightly lower cost, but the use of Single Level Cell flash memory chips raises the cost considerably for any given size of storage, and this product from Viking uses Single Level Cell flash memory. I think if this product ships, it will not compete very well against products like consumer-level SSDs, PCIe SSDs, etc. However, if they continue to develop and evolve the product, there might be a niche where it can be performance or price competitive.

Categories
blogroll flash memory macintosh technology

Toshiba unwraps 24nm flash memory in possible iPhone 5 clue | Electronista


The schedules may help back mounting beliefs that the iPhone 5 will get 64GB of storage; a 64GB iPhone 4 prototype appeared last month that hinted Apple was exploring the idea as early as last year. Just on Tuesday, a possible if disputed iPod touch with 128GB of storage also appeared and hinted at an upgrade for the MP3 player as well. Both the iPhone and the iPod have been stuck at 32GB and 64GB of storage respectively since 2009 and are increasingly overdue for additional space.

via Toshiba unwraps 24nm flash memory in possible iPhone 5 clue | Electronista.

Toshiba has revised its flash memory production lines again to keep pace with the likes of Intel, Micron and Samsung. Higher densities and smaller form factors seem to indicate it is gearing up for a big production run of the highest capacity memory modules it can make. It’s looking like a new iPhone might be the candidate to receive the newer multi-layer, single-chip 64GB Flash memory modules this year.

A note of caution in this arms race of ever smaller feature sizes on the flash memory modules: the smaller you go, the fewer read/write cycles you get. Each new generation of flash memory production has lost some robustness. This problem has been camouflaged, maybe even handled outright, by the increase in over-provisioning of chips on a given size Solid State Disk (sometimes as little as 17% more capacity than what is usable when the drive is full). Through careful statistical modeling and use of algorithms, an ideal shuffling of the deck of available flash memory chips allows the load to be spread out. No single chip fails, as its workload is shifted continuously to ensure it doesn’t come anywhere near the maximum number of reliable read/write cycles. Similarly, attempts to ‘recover’ data from failing memory cells within a chip module are also making up for these problems. Last but not least, outright error-correcting hardware has been implemented on chip to ensure everything just works from the beginning of the life of the Solid State Disk (SSD) to the final days of its useful life.
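To make that shuffling-the-deck idea concrete, here’s a toy wear-leveling sketch in Python. The block count and erase limit are made-up illustrative numbers, not anything a real controller uses, and real firmware also has to move live data around; this only shows the core idea of always steering the next write at the least-worn block:

```python
import heapq

class WearLeveler:
    """Toy wear leveler: writes always land on the least-erased block."""

    def __init__(self, num_blocks, max_erases=3000):
        self.max_erases = max_erases
        # min-heap of (erase_count, block_id); the least-worn block sits on top
        self.heap = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.heap)

    def write(self):
        erases, block = heapq.heappop(self.heap)
        if erases >= self.max_erases:
            raise RuntimeError("flash exhausted")
        heapq.heappush(self.heap, (erases + 1, block))
        return block

leveler = WearLeveler(num_blocks=8)
for _ in range(80):
    leveler.write()

# 80 writes spread over 8 blocks: every block has been erased exactly 10 times,
# so no single block races toward the erase limit ahead of the others.
print(sorted(count for count, _ in leveler.heap))  # [10, 10, 10, 10, 10, 10, 10, 10]
```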

We may not see the SSD eclipse the venerable king of high density storage, the Hard Disk Drive (HDD). Given the point of diminishing returns provided by Moore’s Law (scaling down increases density, increases speed, lowers costs), Flash may never get down to the level of density we enjoy in a typical consumer-brand HDD (2TBytes). We may have to settle for other schemes that get us to that target through other means. Which brings me to my favorite product of the moment, the PCIe-based SSD. It is nothing more than a big circuit board with a bunch of SSDs tied together in a disk array with a big fat memory controller/error-correction controller sitting on it. In terms of speeds over the PCI Express bus, there are current products that beat single SATA 6Gb/s SSDs by a factor of two. And given the generous dimensions a PCIe card allows, any given module could be several times bigger and use chips two generations older to reach the desired 2TByte storage of a typical SATA hard drive of today. Which to me sounds like a great deal if we could also see drops in price and increases in reliability by using older, previous generation products and technology.
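The interface arithmetic helps explain why the PCIe cards have so much headroom over SATA drives. A rough sketch, assuming the standard 8b/10b encoding that both SATA 6Gb/s and PCIe 2.0 use (real drives deliver less than the bus maximum, so these are ceilings, not product speeds):

```python
# Usable bytes per second = line rate x 8/10 (encoding) / 8 (bits per byte).
sata3_mb_s = 6000 * 8 / 10 / 8        # ~600 MB/s ceiling for one SATA 6Gb/s drive
pcie2_lane_mb_s = 5000 * 8 / 10 / 8   # ~500 MB/s per PCIe 2.0 lane
pcie2_x8_mb_s = 8 * pcie2_lane_mb_s   # ~4000 MB/s ceiling for an x8 card

print(sata3_mb_s, pcie2_x8_mb_s)      # 600.0 4000.0
print(pcie2_x8_mb_s / sata3_mb_s)     # an x8 PCIe 2.0 card has ~6.7x the bus bandwidth
```

So even a card that only doubles SATA SSD speeds today is using a fraction of what the slot can carry.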

But the mobile market is hard to please, and it is driving most decisions when it comes to what kind of Flash memory modules get ordered en masse. No doubt Apple, Samsung and anyone in consumer electronics will advise manufacturers to consistently shrink their chip sizes to increase density and keep prices up on final shipping product. I don’t know how efficiently an iPhone or iPad uses the available memory on, say, a 64GByte iPod touch. Most of that goes into storing the music, TV shows, and Apps people want to have readily available while passing time. The beauty of that design is it rewards consumption by providing more capacity and raising marginal profit at the same time. This engine of consumer electronics design doesn’t look likely to stall in spite of the physical limitations of shrinking down Flash memory chips. But there will be a day of reckoning soon, not unlike when Intel hit the wall at 4GHz serial processors and had to go multi-core to keep its marginal revenue flowing. It’s been very lateral progress in terms of processor performance since then. It is more than likely Flash memory chips cannot get much smaller without becoming really unreliable and defective, thereby sliding into the same lateral incrementalism Intel has adopted. Get ready for the plateau.

Categories
flash memory technology

Disk I/O: PCI Based SSDs (via makeitfaster)

Great article with lots of hardcore, important details like drivers and throughput. It’s early days yet for the PCI-based SSDs, so there are going to be lots of changes in architecture until a great design or a cheap design begins to dominate the market. And while some PCIe cards may not be ready for the Enterprise Data Center, there may be a market in the high end gamer fanboy product segment. Stay tuned!

Disk I/O: PCI Based SSDs The next step up from a regular sata based Solid State Disk is the PCIe based solid state disk. They bypass the SATA bottleneck and go straight through the PCI-Express bus, and are able to achieve better throughput. The access time is similar to a normal SSD, as that limit is imposed by the NAND chips themselves, and not the controller. So how is this different than taking a high end raid controller in a PCIe slot and slapping 8 or 12 good SSDs o … Read More

via makeitfaster

Categories
flash memory technology

PCIe based Flash caches

Let me start by saying Chris Mellor of The Register has been doing a great job of keeping up with the product announcements from the big vendors of server-based Flash memory products. I’m not talking simply Solid State Disks (SSDs) with flash memory modules and Serial ATA (SATA) controllers. The new Enterprise-level product that supersedes the SSD is a much higher speed (faster than SATA) cache that plugs into the PCIe slots of rack-based servers. The fashion followed by many data center storage farms was to host large arrays of hot online, or warm nearly-online, spinning disks. Over time, de-duplication was added to prevent unnecessary copies and backups being made on this valuable and scarce resource. Offline storage to tape backup could be made throughout the day as a third tier of storage, with the disks acting as the second tier. What was the first tier? Well, it would be the disks on the individual servers themselves, or the vast RAM memory that the online transactional databases were running on. So RAM, disk, tape: the three-tier fashion came into being. But as data grows and grows, more people want some of the stuff that was being warehoused out to tape, to do regression analysis on historical data. Everyone wants to create a model for trends they might spot in the old data. So what to do?

So as new data comes in and old data gets analyzed, it would seem there’s a need to hold everything in memory all the time, right? Why can’t we just always have it available? Arguing against this in a corporate environment is useless. Similarly, explaining why you can’t speed up the analysis of historical data is also futile. Thank god there’s a technological solution, and that is higher throughput. Spinning disks are a hard limit in terms of Input/Output (I/O). You can only copy so many Gbits per second over the SATA interface of a spinning disk hard drive. Even if you fake it by copying alternate bits to adjacent hard drives using RAID techniques, you’re still limited. So Flash-based SSDs have helped considerably as a tier of storage between the old disk arrays and the demands made by the corporate overseers who want to see all their data all the time. The big three disk storage array makers, IBM/Hitachi, EMC, and NetApp, are all making hybrid Flash SSD and spinning disk arrays, optimizing throughput through the software running the whole mess. Speeds have improved considerably. More companies are doing online analysis of data that previously would have been loaded from tape for offline analysis.
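The tiering idea above can be sketched as a simple lookup that tries the fastest tier first. The data and tier contents here are purely illustrative; a real array's software does this placement and migration automatically:

```python
def read(key, ram, flash, disk):
    """Return (value, tier_name) from the fastest tier holding the key."""
    for tier_name, store in (("ram", ram), ("flash", flash), ("disk", disk)):
        if key in store:
            return store[key], tier_name
    raise KeyError(key)

# Hypothetical placement: hot data in RAM, warm data on the flash tier,
# cold historical data on spinning disk.
ram = {"hot_row": b"recent transaction"}
flash = {"warm_row": b"last quarter"}
disk = {"cold_row": b"2008 archive"}

print(read("warm_row", ram, flash, disk))  # (b'last quarter', 'flash')
print(read("cold_row", ram, flash, disk))  # (b'2008 archive', 'disk')
```

The corporate demand to "hold everything in memory" amounts to pushing ever more of the disk tier's keys up into the flash tier.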

And the interconnects to the storage arrays have improved considerably too. Fibre Channel was a godsend in the storage farm, as it allowed much higher speeds (first 2Gbits per second, then doubling with each new generation). The proliferation of Fibre Channel alone made up for a number of failings in the speed of spinning disks, and acted as a way of abstracting or virtualizing the physical and logical disks of the storage array. With Fibre Channel, the storage control software offers up a ‘virtual’ disk but can manage it on the storage array itself any way it sees fit. Flexibility and speed reign supreme. But still there’s an upper limit where the Fibre Channel adapter meets the motherboard of the server itself: the PCIe interface. And even with PCIe 2.0 there’s an upper limit to how much throughput you can get off the machine and back onto the machine. Enter the PCIe disk cache.

In this article I review the survey of PCIe-based SSD and Flash memory disk caches since they entered the market, as written up in The Register. It’s not really a mainstream technology. It’s prohibitively expensive and is going to be purchased by those who can afford it in order to gain the extra speed. But even in the short time since STEC was marketing its SSDs to the big three storage makers, a lot of engineering and design has created a brand new product category, and the performance within that category has made steady progress.

LSI’s entry into the market is still very early and shipping product isn’t being widely touted. The Register is the only website actively covering this product segment right now. But the speeds and the density of the chips on these products just keep getting bigger, better and faster, which provides a nice parallel to Moore’s Law in a storage device context. Prior to the PCIe flash cache market opening up, SATA and Serial Attached SCSI (SAS) were the upper limit of what could be accomplished with even a flash memory chip. Soldering those chips directly onto an add-on board connected to the CPU through an 8-lane PCIe channel is nothing short of miraculous in the speeds it has gained. Now the competition between current vendors is to build one-off, customized setups to bench test the theoretical top limit of what can be done with these new products. And a recent article from Chris Mellor shines a light on the newest product on the market, the LSI SSS6200. In it Chris concludes:

None of these million IOPS demos can be regarded as benchmarks and so are not directly comparable. But they do show how the amount of flash kit you need to get a million IOPS has been shrinking

Moore’s Law holds true now for the Flash caches, which are becoming the high speed storage option for many datacenters that absolutely have to have the highest disk I/O throughput available. And as the size and quantity of the chips continue to shrink while storage volume increases, who knows what the upper limit might be? News travels swiftly: Chris Mellor got a whitepaper press release from Samsung and began drawing some conclusions.

Interestingly, the owner of the Korean Samsung 20nm process foundry has just taken a stake in Fusion-io, a supplier of PCIe-connected flash solid-state drives. This should mean an increase in Fusion-io product capacities, once Samsung makes parts for Fusion using the new process

The Flash memory makers are now in an arms race with the product manufacturers. Apple and Fusion-io get first dibs on shipping product as the new generation of Flash chips enters the market: Apple has Toshiba, and Fusion-io gets Samsung. In spite of LSI’s benchmark of 1 million IOPS in its test system, I give the advantage to Fusion-io in the very near future. Another recent announcement from Fusion-io is a small round of venture capital funding that will hopefully cement its future as a going concern. Let’s hope its next generation of caches tops out at a size that is competitive with its rivals, and at a speed equal to or faster than currently shipping product.

Outside the datacenter, however, things are more boring. I’m not seeing anyone try to peer into the future of the desktop or laptop and create a flash cache that performs at this level. Fusion-io does have a desktop product currently shipping, mostly targeted at the PC gaming market. I have not seen Tom’s Hardware try it out or attempt to integrate it into a desktop system. The premium price is enough to make it very limited in its appeal (it lists at an MSRP of $799, I think). But let’s step back and imagine what the future might be like. Given that Intel has incorporated the RAM memory controller into its i7 CPUs, and given that its design rules have shrunk so far that adding the memory controller was not a big sacrifice, is it possible the PCIe interface electronics could be migrated on-CPU, away from the Northbridge chipset? I’m not saying there should be no chipset at all. A bridge chip is absolutely necessary for really slow I/O devices like the USB interface. But maybe there could be at least one 16x PCIe lane directly into the CPU, or possibly an 8x PCIe lane. If this product existed, a Fusion-io cache could have almost 1TB of flash storage directly connected to the CPU, acting as the highest speed storage yet available on the desktop.

Other routes to higher speed storage could even include another tier of memory slots, with an accompanying JEDEC standard for ‘storage’ memory. So RAM would go in one set of slots, Flash in the other, and you could mix, match and add on as much Flash memory as you liked. This could potentially be addressed through the same memory controllers already built into Intel’s currently shipping CPUs. Why does this even matter, or why do I think about it at all? I am awaiting the next big speed increase in desktop computing, that’s why. Ever since the Megahertz Wars died out, much of the increase in performance has been so micro-incremental that there’s not a dime’s worth of difference between any currently shipping PCs. Disk storage has become painfully obvious as the last link in the I/O chain that has stayed pretty static. The Parallel ATA migration to Serial ATA improved things, but nothing like the march of improvements that occurred with each new generation of Intel chips. So I vote for dumping disks once and for all. Move to 2TByte Flash memory storage and let’s run it through the fastest channel we can onto and off the CPU. There’s no telling what new things we might be able to accomplish with the speed boost. Not just games, not just watching movies, and not just scientific calculations. It seems to me everything, OS and Apps both, would receive a big benefit by dumping the disk.

Categories
computers science & technology technology

Which way the wind blows: Flash Memory in the Data Center

STEC Zeus IOPs solid state disk (ssd)
This hard drive with a Fibre Channel interface launched the flash revolution in the datacenter

First let’s just take a quick look backwards to see what was considered state of the art a year ago. A company called STEC was making Flash-based hard drives and selling them to big players in the enterprise storage market like IBM and NetApp. I depend solely on The Register for this information, as you can read here: STEC becalmed as Fusion-io streaks ahead

STEC flooded the market, according to The Register, and subsequently the people using its product were left with a glut of these Fibre Channel-based Flash drives (Solid State Disk Drives, or SSDs). The gains in storage array performance followed. However, supply exceeded demand, and EMC is stuck with a raft of last year’s product that it hasn’t marked up and re-sold to its current customers. Which created an opening for a similar but sexier product: Fusion-io and its PCIe-based Flash hard drive. Why sexy?

The necessity of a Fibre Channel interface for the Enterprise storage market has long been an accepted performance standard. You need at minimum the theoretical multi-gigabit throughput of FC interfaces to compete. But for those in the middle levels of the Enterprise, who don’t own the heavy iron of giant multi-terabyte storage arrays, there was/is now an entry point through the magic of the PCIe 2.0 interface. Any given PC, whether a server or not, will have open PCIe slots in which a Fusion-io SSD card could be installed.

Fusion-io Duo PCIe Flash cache card
This is Fusion-io's entry into the Flash cache competition

That lower threshold (though not necessarily a lower price) has made Fusion-io the new darling for anyone wanting to add SSD throughput to their servers and storage systems. And now everyone wants Fusion-io, not the re-branded STEC Fibre Channel SSDs everyone was buying a year ago.

Anyone who has studied history knows in the chain of human relations there’s always another competitor out there that wants to sit on your head. Enter LSI and Seagate with a new product for the wealthy, well-heeled purchasing agent at your local data center: LSI and Seagate take on Fusion-io with flash

Rather than create a better/smarter Fibre Channel SSD, LSI and Seagate are assembling a card that plugs into the PCIe slot of a storage array or server to act as a high speed cache for the slower spinning disks. The Register refers to three form factors in the market now: RamSan, STEC and Fusion-io. Because Fusion-io seems to have moved into the market at the right time and is selling like hot cakes, LSI/Seagate are targeting that particular form factor with the SSS6200.

LSI's PCIe Flash hard drive card
This is LSI's entry into the Flash hard drive market

STEC is also going to create a product with a PCIe interface, and Micron is going to design a product too. LSI’s product will not be available to ship until the end of the year. In terms of performance, the speeds being targeted are comparable between the Fusion-io Duo and the LSI SSS6200 (both using single level cell memory). So let the price war begin! Once we finally get some competition in the market, I would hope the entry-level price of Fusion-io (~$35,000) finally erodes a bit. It is a premium product right now, intended to help some folks do some heavy lifting.

My hope for the future is that we could see something comparable (though much less expensive and scaled down) available on desktop machines. I don’t care if it’s built into a spinning SATA hard drive (say, as a high speed but very large cache) or some kind of card plugging into a bus on the motherboard (like the failed Intel Turbo Memory cache). If a high speed flash cache could become part of the standard desktop PC architecture, sitting in front of monstrous single hard drives (2TB or higher nowadays), we might get faster response from our OS of choice, and possibly better optimization of reads/writes to fairly fast but incredibly dense and possibly more error-prone HDDs. I say this after reading about the big charge by Western Digital to move from smaller blocks of data to the 4K block.

Much wailing and gnashing of teeth has accompanied WD’s recent move to address the overhead of error correction and Cyclic Redundancy Check (CRC) algorithms on hard drives. Because 2TByte drives have so many 512-byte blocks, more and more time and space is taken up doing the checks as data is read and written to the drive. A larger block of 4,096 bytes instead of 512 makes the whole thing roughly 4x less wasteful and possibly more reliable, even if some space is wasted on small text files or web pages. I understand completely the implication, and even more so, old-timers like Steve Gibson at GRC.com understand the danger of ever larger single hard drives. The potential for catastrophic loss of data as more data blocks need to be audited can become numerically overwhelming to even the fastest CPU and SATA bus. I think I remember Steve Gibson expressing doubts as to how large hard drives could theoretically become.
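The sector arithmetic behind that ‘4x’ figure can be checked directly. The 50- and 100-byte ECC field sizes below are the figures commonly cited for the Advanced Format transition, so treat them as approximate:

```python
# Back-of-the-envelope numbers for WD's 512-byte -> 4K sector change.
drive_bytes = 2 * 10**12                # a 2TB drive
legacy_sectors = drive_bytes // 512     # ~3.9 billion legacy sectors
af_sectors = drive_bytes // 4096        # 8 legacy sectors folded into one

legacy_ecc_bytes = 50 * legacy_sectors  # ~50 bytes of ECC per 512-byte sector
af_ecc_bytes = 100 * af_sectors         # ~100 bytes of ECC per 4K sector

print(legacy_sectors // af_sectors)     # 8 -> 8x fewer sectors to gap, sync and check
print(legacy_ecc_bytes / af_ecc_bytes)  # 4.0 -> the roughly 4x drop in ECC overhead
```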

Steve Gibson's SpinRite 6
Steve Gibson's data recovery product SpinRite

As the creator of the SpinRite data recovery utility, he knows fundamentally the limits of the design of the Parallel ATA interface. Despite advances in speeds, error-correcting hasn’t changed much and neither has the quality of the magnetic medium used on the spinning disks. One thing that has changed is the physical size of the blocks of data: they have gotten infinitesimally smaller with each larger size of disk storage. The smaller the block of data, the more error correcting must be done; the more error correcting, the more space needed to write the error-correcting information. Gibson himself observes that something as random as cosmic rays can flip bits within a block of data at the incredibly small physical scale of a data block on a 2TByte disk.

So my hope for the future is a new look at the current state-of-the-art motherboard, chipset and I/O bus architecture. Let’s find a middle-level, safe area to store the data we’re working on, one that doesn’t spontaneously degrade and isn’t too susceptible to random errors (i.e. cosmic rays). Let the Flash caches flow, let’s get better throughput, and let’s put disks into the class of reliable but slower backing stores for our SSDs.

Categories
media technology

Next Flash Version Will Support Private Browsing

Slashdot Your Rights Online Story | Next Flash Version Will Support Private Browsing.

I’m beginning to think Adobe should just make Flash into a web browser that plays back its own movie format. That would end all the debates over open standards and so forth, and provide better support and integration. There is nothing wrong with a fragmented browser market; it’s what we already have right now.

If you have ever heard from someone you trust that Adobe Flash is buggy and crashes a lot, then please believe them. It’s not the worst thing ever invented, but it certainly could be better. Given Adobe’s monopoly on web-delivered video (i.e. YouTube), one would think they could maintain their competitive advantage by creating a better user experience (the way Apple did when entering the smartphone market). Instead they have piled on features as a way of staying competitive, and Flash has bloated up to accommodate all kinds of ActionScript and interactivity that used to exist only in desktop applications. So why should Adobe settle for being just a tool maker and browser plug-in? I say show everyone what the web browser should be, and compete.

Categories
macintosh science & technology technology

64GBytes is the new normal (game change on the way)

Panasonic SDXC flash memory card
Flash memory chips are getting smaller and denser

I remember reading announcements of the 64GB SDXC card format coming online from Toshiba. And just today Samsung announced it is making a single-chip 64GB flash memory module with a built-in memory controller. Apple’s iPhone design group has been a big fan of Toshiba’s large-footprint single-chip flash memory: they bought up all of Toshiba’s supply of 32GB modules before releasing the iPhone 3GS last summer, and Samsung too was providing 32GB modules to Apple prior to the launch. Each summer, newer and bigger modules make for insanely great things the iPhone can do.

Between the new flash memory camcorders from Panasonic/JVC/Canon and the iPhone, what will we do with storage doubling every year? Surely there will be a point of diminishing returns, where the chips cannot be made any thinner or stacked any higher to build these huge single-chip modules. I think back to the slow evolution and radical incrementalism of the iPod’s history: 5GB of storage to start, then the move to 30GB and video! Remember that? The video iPod at 30GBytes was dumbfounding at the time. Eventually it would top out at 120, and now 160GBytes, on the iPod classic.

At the current rate of change in the flash memory market, the modules will double in density again by this time next year, reaching 128GBytes for a single-chip module with an embedded memory controller. At that density, a single SDHC-sized memory card will be able to hold that amount of storage as well. When we reach the 128GByte mark, we will be fast approaching the optimal size for any amount of video we could ever want to record and still edit: at that size we’ll be able to record upwards of 20 hours of 1080p video on today’s cameras. Who wants to edit, much less watch, 20 hours of 1080p video?

But for the iPhone, things are different: more apps means more fun. At 128GB of storage you never have to delete an app, a single song from your iTunes library, or a single picture or video; just keep everything. Similarly, for those folks using GPS, you could keep all the maps you ever wanted right onboard rather than downloading them all the time, providing continuous navigation capabilities like you would get with a dedicated GPS unit. I can only imagine the iPhone’s functionality increasing as a result of the extra storage 64GB flash memory modules would provide. Things can only get better. And speaking of better, The Register just reported today on some future directions.
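How far 128GB actually stretches depends entirely on the recording bitrate. A quick sketch, assuming AVCHD-class 1080p bitrates; the 12/17/24 Mbit/s figures are typical consumer-camcorder quality settings, not the specs of any particular model:

```python
# Rough recording-time estimates for a given flash capacity.
# Bitrates are illustrative AVCHD-class 1080p settings, not camera specs.

def recording_hours(capacity_gb, bitrate_mbps):
    """Hours of video a card holds at a constant bitrate (decimal GB/Mb)."""
    bytes_per_hour = bitrate_mbps * 1e6 / 8 * 3600
    return capacity_gb * 1e9 / bytes_per_hour

for bitrate in (12, 17, 24):  # Mbit/s
    hours = recording_hours(128, bitrate)
    print(f"128 GB @ {bitrate} Mbit/s 1080p: about {hours:.0f} hours")
```

At the lower-quality settings the 20-hour figure holds up; at the top 24 Mbit/s setting a 128GB card is closer to a half-day of footage.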

There could be a die-process shrink in the next generation of flash memory products, along with opportunities to use slightly denser memory cells. The combination of the two refinements might give the research and design departments at Toshiba and Panasonic the ability to double the density of SDXC cards and flash memory modules, to the point where we could see 128GBytes and then 256GBytes in successive revisions of the technology. So don’t be surprised to see a flash memory module as standard equipment on every motherboard, holding the base operating system, with the option of a hard drive for backup or some kind of slower secondary storage. I would love to see netbooks or full-sized laptops take that direction.

http://www.electronista.com/articles/09/04/27/toshiba.32nm.flash.early/ (Toshiba) Apr 27, 2009

http://www.electronista.com/articles/09/05/12/samsung.32gb.movinand.ship/ (Samsung) May 13, 2009

http://www.theregister.co.uk/2010/01/14/samsung_64gbmovinand/ (Samsung) Jan 14, 2010