Blog

  • SLC vs MLC – does it matter any more?

    A very good survey of the flash memory choices for enterprise storage. SLC was at one time the only memory technology reliable enough for enterprise storage. Now MLC is catching up, allowing larger drives to be purchased at the same or a slightly lower price than the SLC versions from the same manufacturers. It’s a sign that MLC is maturing and becoming “good enough” for most uses.

    Eric Slack, StorageSwiss.com – The Home of Storage Switzerland

    When an IT professional starts looking into solid state drives (SSDs) they quickly learn that flash is very different from magnetic disk drives. Flash employs a much more complex write and erase process than traditional hard disk drive (HDD) technology does, a process that impacts performance, reliability and the device’s lifespan (flash eventually wears out). To address potential concerns, vendors have historically sold different types of flash, with the more expensive SLC (Single-Level Cell) being used in more critical environments. But with advances in controller technologies, is the more economical MLC (Multi-Level Cell) good enough for the enterprise?

    SLC or MLC

    As indicated above there are actually different types of NAND flash used in storage products. SLC NAND supports two logical states enabling each cell to store one bit of data. MLC NAND, and its “enterprise” cousin eMLC, have a capacity of up to four bits per cell. This increased…

    View original post 888 more words
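
    A quick numeric aside on the excerpt above (my gloss, not Storage Switzerland’s): a cell storing n bits has to distinguish 2^n distinct charge levels, so every extra bit per cell halves the voltage margin between adjacent levels:

        $$\text{levels} = 2^{n}:\quad \text{SLC } (n=1) \to 2,\quad \text{MLC } (n=2) \to 4,\quad \text{TLC } (n=3) \to 8$$

    Those shrinking margins are why denser cells wear out sooner, and why the controller advances the article mentions (stronger error correction, wear leveling) are what make MLC viable in the enterprise.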

  • 10 Reasons OpenCL Will Change Your Design Strategy & Market Position | EE Times

    OpenCL logo (Photo credit: Wikipedia)

    OpenCL is a breakthrough precisely because it enables developers to accelerate the real-time execution of their algorithms quickly and easily — particularly those that lend themselves to the considerable parallel processing capabilities of FPGAs (which yield superior compute densities and far better performance/Watt than CPU- and GPU-based solutions)

    via 10 Reasons OpenCL Will Change Your Design Strategy & Market Position | EE Times.

    There’s still a lot of untapped energy available in the OpenCL programming tools. Apple is still the single largest manufacturer to have adopted OpenCL, across a large number of its products (OS and application software). And I know from reading about supercomputing on GPUs that some large-scale hybrid CPU/GPU computers have been ranked worldwide (the Chinese Tianhe machines being the first and biggest example). This article from EE Times encourages anyone with a background in C programming to give it a shot and see what algorithms could be accelerated using the resources on the motherboard alone. But being EE Times, they are also touting the benefits of bringing FPGAs into the mix.

    To date the low-hanging fruit for desktop PC makers and their peripheral designers and manufacturers has been to reuse the GPU as a massively parallel co-processor where it makes sense. But as the EE Times writer emphasizes, FPGAs can be equal citizens too, and might provide even more flexible acceleration. Interest in the FPGA as a co-processor for desktop through enterprise data center motherboards was brought to the fore by AMD back in 2006 with the Torrenza CPU socket. The hope back then was that giving a secondary specialty processor (at the time, an FPGA) its own socket might open up a market no one had addressed to that point. So depending on your needs and what extra processors you have available on your motherboard, OpenCL might be generic enough going forward to get a boost from ALL the available co-processors.

    Whether or not we see benefits at the consumer-level desktop depends heavily on OS-level support for OpenCL. To date the biggest adopter of OpenCL has been Apple, which needed an OS-level acceleration API for video-intensive apps like video editing suites. Eventually Adobe recompiled some of its Creative Suite apps to take advantage of OpenCL on Mac OS. On the PC side, Microsoft has always had DirectX as its API for accelerating any number of multimedia apps (playback, editing) and is less motivated to incorporate OpenCL at the OS level. But that’s not to say a third-party developer who saw a benefit to OpenCL over DirectX couldn’t create their own plumbing and libraries, ship a runtime package that used OpenCL to support their apps, and license it to anyone who wanted it as part of a larger installer (say for a game or a multimedia authoring suite).

    For the data center this makes far more sense than for the desktop, as DirectX isn’t seen as a scientific computing API or a means of using a GPU as a numeric accelerator for scientific calculations. In this context OpenCL might be a nice, open, easy-to-adopt library for people working on compute farms with massive numbers of general-purpose CPUs and GPUs handing off parts of a calculation to one another over the PCI bus or across CPU sockets on a motherboard. Everyone’s needs are going to vary, widely in some cases, but OpenCL could make that variation easier to address by providing a common library that lets you touch every co-processor available when a computation needs speeding up (a minimal sketch of that device-spanning view follows below). So keep an eye on OpenCL as a competitor to any GPGPU-style API and library put out by nVidia, AMD or Intel. OpenCL might help people bridge the differences between those manufacturers too.
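
    To make that “one library, every co-processor” point concrete, here is a minimal C sketch (mine, not from the EE Times article) that asks the OpenCL runtime to enumerate every platform and device it can see. On a machine with the right drivers installed, a multicore CPU, a GPU and an FPGA accelerator card all surface through the same two calls. It assumes an OpenCL 1.x SDK is available (compile with -lOpenCL).

        /* List every OpenCL platform and device on this machine. */
        #include <stdio.h>
        #include <CL/cl.h>

        int main(void) {
            cl_platform_id platforms[8];
            cl_uint nplat = 0;
            clGetPlatformIDs(8, platforms, &nplat);
            if (nplat > 8) nplat = 8;          /* clamp to our buffer */

            for (cl_uint p = 0; p < nplat; p++) {
                char pname[256];
                clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                                  sizeof pname, pname, NULL);
                printf("Platform: %s\n", pname);

                cl_device_id devices[16];
                cl_uint ndev = 0;
                clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                               16, devices, &ndev);
                if (ndev > 16) ndev = 16;

                for (cl_uint d = 0; d < ndev; d++) {
                    char dname[256];
                    cl_device_type type;
                    clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                    sizeof dname, dname, NULL);
                    clGetDeviceInfo(devices[d], CL_DEVICE_TYPE,
                                    sizeof type, &type, NULL);
                    printf("  Device: %s (%s)\n", dname,
                           type & CL_DEVICE_TYPE_GPU ? "GPU" :
                           type & CL_DEVICE_TYPE_CPU ? "CPU" : "accelerator");
                }
            }
            return 0;
        }

    The enumeration step is exactly where OpenCL’s portability shows: the kernel code you eventually dispatch doesn’t care which device type answered.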

  • MLC vs. SLC – Podcast

    Definitely worth checking this out if you’re a solutions architect spec’ing new hardware for a project going into a data center. I’ll be listening, that’s for sure.

    Charlie Hodges, StorageSwiss.com – The Home of Storage Switzerland

    Does the difference between MLC vs SLC matter anymore? Storage Switzerland Senior Analyst Eric Slack and I talk about his latest article on MLC and SLC and how manufacturers are working to make MLC more acceptable in the Enterprise.

    To read Eric’s report, click on the picture below.

    Link to Eric Slack’s new report on MLC vs SLC.

    View original post

  • Seagate’s LaCie touts a 25TB (not a typo) box o’ disks for your DESK • The Register

    Image of a dismantled Seagate ST-225 harddisk: a 5¼″ MFM harddisk with a stepper actuator. Capacity: 21.4 MB; speed: 3,600 rpm; average seek time: 65 ms; heads: 4. (Photo credit: Wikipedia)

    Seagate subsidiary LaCie has launched a set of external storage boxes using a 5TB Seagate hard drive – even though disk maker Seagate hasn’t officially launched a 5TB part.

    via Seagate’s LaCie touts a 25TB (not a typo) box o’ disks for your DESK • The Register.

    There isn’t a whole lot of activity in new designs and advances for spinning magnetic hard drives these days. The capacity wars have plateaued around 4TB or so. The next big threshold to cross is either shingled recording or HAMR (which uses a laser to heat the surface just before a write is committed to the disk). Owing to the technical advances required, and to a shrinking field of manufacturers (there aren’t as many as there were a while ago), the speed at which higher-density devices hit the market has slowed. We saw 1TB and 2TB drives show up quickly, one after the other, but the 3TB and 4TB drives followed more slowly, and usually priced at the premium end of the market. Now Seagate has stitched together a 5TB drive, and LaCie is rushing it into a number of its own desktop and prosumer-level products.

    The assumption for now is that Seagate has adopted the shingled recording method, which writes blocks of data in an overlapping pattern to increase density. The trade-off is that overlapped tracks can’t be rewritten in place, so modifying data means rewriting a whole band of tracks, which tends to hurt random-write performance. We’ll see how well that design decision holds up over the coming months as the early adopters and fanbois, needing each and every last terabyte of storage they can get for their game ROMs, warez and film/music collections, put these boxes through their paces.

  • The Memory Revolution | Sven Andersson | EE Times

    A 256Kx4 dynamic RAM chip on an early PC memory card. (Photo by Ian Wilson) (Photo credit: Wikipedia)

    In almost every kind of electronic equipment we buy today, there is memory in the form of SRAM and/or flash memory. Following Moore’s law, memories have doubled in size every second year. When Intel introduced the 1103 1Kbit dynamic RAM in 1971, it cost $20. Today, we can buy a 4Gbit SDRAM for the same price.

    via The Memory Revolution | Sven Andersson | EE Times

    Read now: a look back from an Ericsson engineer surveying the use of solid-state, chip-based memory in electronic devices. It is always interesting to know how these things started and evolved over time. Advances in RAM design and manufacture are the quintessential example of Moore’s Law, even more so than the advances in processors over the same period. Yes, CPUs are cool and very much a foundation upon which everything else rests (especially dynamic RAM storage). But remember: Intel didn’t start out making microprocessors; it started out as a dynamic RAM company at a time when DRAM was just entering the market. That’s the foundation from which even Gordon Moore gauged the rate of change possible in silicon-based semiconductor manufacturing.
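
    As a back-of-the-envelope check on the numbers in the excerpt (my arithmetic, taking 1971 to roughly 2013 as the span):

        $$\frac{4\ \text{Gbit}}{1\ \text{Kbit}} = \frac{2^{32}\ \text{bits}}{2^{10}\ \text{bits}} = 2^{22}, \qquad \frac{42\ \text{years}}{22\ \text{doublings}} \approx 1.9\ \text{years per doubling}$$

    Twenty-two doublings in about forty-two years: almost exactly the “doubled in size every second year” cadence the author describes.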

    Now we’re looking at mobile smartphone processors and Systems on Chip (SoCs) advancing the state of the art. Desktop and server CPUs are making incremental gains, but the smartphone is really trailblazing, showing what’s possible. We went from combining the CPU with memory (so-called 3D memory), and now graphics accelerators (GPUs) are in the mix too. Multiple cores and fully 64-bit-clean CPU designs are entering the market (in the form of the latest-model iPhones). It’s not just a memory revolution, but memory was definitely a driver in the market when we migrated from magnetic core memory (state of the art in 1951–52, developed at MIT) to the dynamic RAM chip (state of the art in 1968–69). The drive to develop DRAM brought every other silicon-based process along with it, and all the boats were raised. So here’s to the DRAM chip that helped spur the revolution. Without those shoulders, the giants of today wouldn’t be able to stand.

  • Google announces Project Tango, augmented reality for all

    I hope they work directly with the Google Glass group and turn it into a “suite” of friendly, interoperable pieces and components. That would be a big plus. VR or AR doesn’t matter to me; I just want the augmentation to be real, and useful.

    vrzonesg, Tech News for Geeks

    Google officially unveils project Tango, an initiative that seeks to utilize smartphones to build upon Google’s already dominating mapping empire.

    Project Tango, a project that involves ‘specialists’ from the world over, will put the ability to create augmented reality data into the hands of co…

    Read more: http://vr-zone.com/articles/google-announces-project-tango-augmented-reality/72419.html

    View original post

  • Extending SSD’s Lifespan

    Glad to help out. I think SSDs are the first big thing in a while to help speed up desktop computers. After Intel came out with the i-series CPUs and hard drives hit 4TB, things had been changing very slowly and incrementally. So the SSD at least is giving people some extra speed boost. But with new tech come new problems. Like lifespan…

    markobroz, cheapchipsmemory

    Thanks to my fellow blogger Carpetbomberz, I now have something to write. Thanks, mate!

    Well, today I will go around talking about SSDs.

    As a PC owner, I know for a fact that you have experienced quite a number of problems in this department. In my case, there are instances when my PC can’t read the SSD; it is prone to freezing and becomes unresponsive, which leads to its eventual downfall. Most PCs use an SSD as the main storage, which is why it is a big problem if it is rendered useless, not to mention the files and data stored on it that will be lost forever. The SSD is a good find, but there are downsides to all this. The cost of the device is definitely a problem too, which is why you have to use it to your full advantage.

    Thankfully, there are ways to extend its…

    View original post 101 more words

  • Group Forms to Drive NVDIMM Adoption | EE Times

    Flash memory (Photo credit: Wikipedia)

    As NAND flash is supplemented over the next few years by new technologies with improved durability and the same performance as system memory, “we’ll be able to start thinking about building systems where memory and storage are combined into one entity,” he said. “This is the megachange to computer architecture that SNIA is looking at now and preparing the industry for when these new technologies happen.”

    via Group Forms to Drive NVDIMM Adoption | EE Times.

    More good news on the ULLtraDIMM, non-volatile DIMM front: a group is forming to begin setting standards for a new form factor. To date, SanDisk is the only company known to have architected and manufactured a shipping non-volatile DIMM memory product, and then only under contract to IBM for the X6 Intel-based server line. By all reports SanDisk is not shipping this product or under contract to make it for anyone else, but that’s not keeping its competitors from getting products of their own into heavy sampling and QA testing. We might begin seeing a rush of different products, with varying interconnects and form factors, all claiming to plug into a typical RAM DIMM slot on an Intel-based motherboard. But as the article on the IBM ULLtraDIMM indicates, this isn’t a simple 1:1 swap of DIMMs for ULLtraDIMMs. Heavy lifting and revisions are required at the firmware/BIOS level to take advantage of the ULLtraDIMMs populating the DIMM slots on the motherboard. That is neither easy nor cheap, and as far as OS support goes, you may need to see whether your OS of choice will also help speed the plow by doing caching, loading and storing of memory differently once it has become “aware” of the ULLtraDIMMs on the motherboard.

    Without the OS and firmware support you would be wasting valuable money and time trying to get a real boost from off-the-shelf ULLtraDIMMs in your own randomly chosen Intel-based servers. IBM’s X6 line is just hitting the market and has been sampled by some heavy-hitting real-time financial trading data centers to double-check the claims made about speed and performance. IBM has used this period to make sure the product delivers a difference worth whatever premium it plans on charging for the ULLtraDIMM on customized X6 orders. But knowing that further down the line a group is at least attempting to organize and set standards means this can become a competitive market for a new memory form factor, and EVERYONE may eventually be able to buy something like an ULLtraDIMM for their data center server farm. It’s too early to tell where this will lead, but re-using the JEDEC DIMM connection interface is a good start. If Intel wanted to help accelerate things, its onboard memory controllers could also become less DRAM-specific and more generalized, able to drive anything plugged into the DIMM slots on the motherboard. That might prove the final step in really opening the market to a wave of NVDIMM designers and manufacturers. Keep an eye on Intel and see where its chipset architecture, and more specifically its memory controller road map, leads for future support of NVDIMM and similar technologies.
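
    To make “memory and storage combined into one entity” concrete, here is a minimal C sketch of what that OS and firmware support ultimately buys you: storage you address with ordinary loads and stores instead of block I/O. The /dev/pmem0 device node is hypothetical, standing in for whatever interface a future OS exposes; the point is the shape of the code, not a real API for today’s ULLtraDIMM.

        /* Sketch: treating a non-volatile DIMM as byte-addressable storage,
         * assuming (hypothetically) the OS exposes it as a memory-mappable
         * device node -- the firmware/OS plumbing discussed above. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void) {
            int fd = open("/dev/pmem0", O_RDWR);   /* hypothetical device */
            if (fd < 0) { perror("open"); return 1; }

            size_t len = 4096;
            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            /* An ordinary store is now a write to "storage": no read()/write()
             * syscalls, no block layer, no filesystem in the data path. */
            strcpy(p, "hello, persistent memory");

            msync(p, len, MS_SYNC);   /* ask the OS to make the store durable */
            munmap(p, len);
            close(fd);
            return 0;
        }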

  • AnandTech | The Pixel Density Race and its Technical Merits

    Description of a pixel (Photo credit: Wikipedia)

    If there is any single number that people point to for resolution, it is the 1 arcminute value that Apple uses to indicate a “Retina Display”.

    via AnandTech | The Pixel Density Race and its Technical Merits.

    Earlier in my job, I had to try to recommend the resolution people needed to get a good picture from a scanner or a digital camera. As we know, the resolution arms race knows no bounds: first in scanners, then in digital cameras. The same is true now for displays. How fine is fine enough? Is it noticeable? Is it beneficial? The technical limits that enforce lower resolutions are usually tied to cost. A consumer-level product has to fit into a narrow price range, and the perceived benefits of “higher quality” or sharpness are rarely enough to get someone to spend more. But as phones can be upgraded for free, and printers and scanners are now commodity items, you just keep slowly migrating up to the next model at little to no entry cost. And everything is just “better”: all higher-rez, and therefore by association higher quality, sharper, etc.

    I used to quote, or try to pin down, a rule of thumb I found once regarding the acuity of the human eye. Some of this was just gained by noticing things when I started out using Photoshop and trying to print to imagesetters and laser printers. At some point in the past someone decided 300 dpi is what a laser printer needed in order to reproduce text on letter-size paper. As for displays, I bumped into a quote from an IBM study on visual acuity indicating the human eye can discern display pixels up to around 225 ppi. I tried many times to find the actual publication where that appears so I could cite it. No luck; I only ever found it as a footnote on a webpage from another manufacturer. Now in this article we get far more stats on human vision, much more extensive than that vague footnote from all those years ago.
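
    For what it’s worth, those rules of thumb square with the article’s 1-arcminute criterion once you pick a viewing distance. A pixel stops being discernible when it subtends less than one arcminute, so the required density falls out of simple trigonometry (the 12- and 15-inch distances below are my assumptions, not figures from the article):

        $$\text{PPI} \approx \frac{1}{d \cdot \tan(1')} \approx \frac{1}{d \times 0.000291}, \qquad d = 12\ \text{in} \Rightarrow \approx 286\ \text{ppi}, \qquad d = 15\ \text{in} \Rightarrow \approx 229\ \text{ppi}$$

    So Apple’s roughly 300 ppi “Retina” figure and that old 225 ppi IBM footnote aren’t necessarily in conflict; they may simply assume slightly different viewing distances.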

    What can one conclude from all the data in this article? Just the same thing: resolution arms races are still being waged by manufacturers. This time, however, it’s in mobile phones, not printers, not scanners, not digital cameras. Those battles were fought, and now there’s damned little product differentiation. Mobile phones will fall into the same pattern, and people will be less and less Apple fanbois or Samsung fanbois. We’ll all just upgrade to a newer version of whatever phone is cheap and expect the increased hardware specs, the higher resolution, better quality, all that jazz. It is one more case where everything old is new again. My suspicion is we’ll see it happen again when a true VR goggle hits the market, with real competitors attempting to gain advantage through technical superiority or more research and development. Bring on the VR Wars, I say.

  • Jon Udell on filter failure

    Jon Udell (Photo credit: Wikipedia)

    It’s time to engineer some filter failure

    Jon’s article points out his experience of the erosion of serendipity, or at least of opposing viewpoints, that social media (somewhat accidentally) enforces. I couldn’t agree more. One of the big promises of the Internet was that it was unimaginably vast and continuing to grow. The other big promise was that it was open in the way people could participate. There were no diktats or prescribed methods per se, just etiquette at best. There were FAQs to guide us, and rules of thumb to keep us from embarrassing ourselves. But the Internet: it was something so vast one could never know or see everything that was out there, good or bad.

    But as in the Wild West, search engines began fencing in the old prairie, at once allowing us to get to the good stuff and to waste less time doing the important stuff. And therein lies the bargain of the “filter”: giving up control to an authority to help you do something with data or information. All the electrons/photons whizzing back and forth on the series of tubes exist all at once, available (more or less) all at once. But now, with social networks, as with AOL before them, we suffer from the side effects of the filter.

    I remember being an AOL member, finally caving in and installing the app from one of the free floppy disks I got in the mail at least once a week. I registered my credit card for the first free 20 hours (can you imagine?). And just like people who “try” Netflix, I never unregistered. I lazily stayed the course and tried to get my money’s worth by spending more time online. At the same time ISPs, small mom-and-pop shops, were renting out parts of a fractional T-1 leased line they owned, putting up modem pools and selling access to the “Internet”. Nobody knew why you would want to do that, with all teh kewl thingz one could do on AOL: shopping, chat rooms, news, stock quotes. It was “like” the Internet. But not open and free and limitless like the Internet. And that’s where the failure begins to occur.

    AOL had to police its population and enforce some codes of conduct. They could kick you off, or stop accepting your credit card payments. One could not be kicked off the “Internet” in the same way, especially in those early days. But getting back to Jon’s point about filters that fail and allow you to see the whole world, to discover an opposing viewpoint, or better, multiple opposing viewpoints: that is the promise of the Internet, and we’re seeing less and less of it as we corral ourselves into our favorite brand-name social networking communities. I skipped MySpace, but I did jump on Flickr, and eventually Facebook. And in so doing I gave up a little of that wildcat freedom and frontier-like experience of dialing up over a PPP or SLIP connection to a modem pool, and searching first on Yahoo, then AltaVista, then Google to find the important stuff.
