Blog

  • Review: Corning’s 33-foot Optical Thunderbolt cable allows you to move your Thunderbolt devices (or Mac) far away from your desk

    I’m so happy this is finally making it to market. The promise of Thunderbolt in the early days was that it would be faster than any other connector on the market. Now, at long last, the optical flavor of Thunderbolt is slowly, painfully making its way out of development and into manufacturing. Here is a review of an optical Thunderbolt cable from Corning.

  • AMD Launches First ARM-based Server CPU | EE Times


    In addition, AMD is planning to contribute to the Open Compute Project with a new micro-server design that utilizes the Opteron A-series, along with other architecture specifications for motherboards that Facebook helped develop, called “Group Hug,” an agnostic server board design that can support traditional x86 processors as well as ARM chips.

    via AMD Launches First ARM-based Server CPU | EE Times.

    Kudos to Facebook for continuing to support the Open Compute Project, which they spearheaded some years back to encourage more widespread expertise and knowledge of large-scale data centers. This new charge is to allow a pick-and-choose, best-of-breed kind of design whereby a CPU is not a fixed quantity but can be chosen or changed like a hard drive or a RAM module, with the motherboard firmware remaining more or less consistent regardless of the CPU chosen. This would allow mass customization based solely on the best CPU for a given job (HTTP, DNS, compute, storage, etc.). Spare capacity might be allowed to erode a little, so that a general-purpose CPU could be scheduled somewhat more aggressively while some of its former, less efficient services are migrated to more specialized mobile-derived CPUs on another cluster, each CPU handling the set of protocols and services it inherently does best. This flies further in the face of always choosing general-purpose CPUs and letting the software do most of the heavy lifting once the programming is completed.
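    Just to make that idea concrete, here is a toy sketch (in Python) of what “best CPU for a given job” placement could look like. The CPU classes, service names, and performance-per-watt figures are all invented for illustration; none of this comes from the Open Compute or “Group Hug” specifications.

        # Toy illustration, not an Open Compute spec: pick the CPU class for each
        # service from a hypothetical performance-per-watt table. All numbers and
        # names below are made up for the sake of the example.

        PERF_PER_WATT = {  # requests/sec per watt, invented figures
            "x86-big-core": {"http": 90, "dns": 60, "compute": 120, "storage": 70},
            "arm-a-series": {"http": 110, "dns": 95, "compute": 60, "storage": 85},
        }

        def best_cpu_for(service: str) -> str:
            """Return the CPU class with the best (made-up) efficiency for a service."""
            return max(PERF_PER_WATT, key=lambda cpu: PERF_PER_WATT[cpu][service])

        if __name__ == "__main__":
            for service in ("http", "dns", "compute", "storage"):
                print(f"{service:8s} -> {best_cpu_for(service)}")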

  • SLC vs MLC – does it matter any more?

    A very good survey of the flash memory choices for enterprise storage. SLC was at one time the only memory technology reliable enough for enterprise storage. Now MLC is catching up, allowing larger drives to be purchased at the same or a slightly lower price than the SLC versions from the same manufacturers. It’s a sign that MLC is maturing and becoming “good enough” for most uses (see the quick sketch after the excerpt below).

    Eric Slack, StorageSwiss.com – The Home of Storage Switzerland

    When an IT professional starts looking into solid state drives (SSDs) they quickly learn that flash is very different from magnetic disk drives. Flash employs a much more complex write and erase process than traditional hard disk drive (HDD) technology does, a process that impacts performance, reliability and the device’s lifespan (flash eventually wears out). To address potential concerns, vendors have historically sold different types of flash, with the more expensive SLC (Single-Level Cell) being used in more critical environments. But with advances in controller technologies, is the more economical MLC (Multi-Level Cell) good enough for the enterprise?

    SLC or MLC

    As indicated above there are actually different types of NAND flash used in storage products. SLC NAND supports two logical states enabling each cell to store one bit of data. MLC NAND, and its “enterprise” cousin eMLC, have a capacity of up to four bits per cell. This increased…

    View original post 888 more words
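    To put some rough numbers behind the excerpt above: bits per cell is just log2 of the number of voltage states, and endurance falls off sharply as states are added. The sketch below uses commonly quoted ballpark P/E-cycle ratings, not any vendor’s actual specifications.

        # Back-of-the-envelope sketch of the SLC vs. MLC tradeoff. The endurance
        # figures are rough, commonly quoted ballpark ratings, not vendor specs.
        import math

        flash_types = {
            # name: (voltage states per cell, approx. P/E cycle rating)
            "SLC": (2, 100_000),
            "eMLC": (4, 30_000),
            "MLC": (4, 3_000),
            "TLC": (8, 1_000),
        }

        for name, (states, pe_cycles) in flash_types.items():
            bits = math.log2(states)
            print(f"{name:4s}: {bits:.0f} bit(s)/cell, ~{pe_cycles:,} P/E cycles")

    The more states a cell has to hold, the tighter the voltage margins get, which is exactly why the controller advances the excerpt mentions matter so much for MLC in the enterprise.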

  • 10 Reasons OpenCL Will Change Your Design Strategy & Market Position | EE Times

    OpenCL logo (Photo credit: Wikipedia)

    OpenCL is a breakthrough precisely because it enables developers to accelerate the real-time execution of their algorithms quickly and easily — particularly those that lend themselves to the considerable parallel processing capabilities of FPGAs (which yield superior compute densities and far better performance/Watt than CPU- and GPU-based solutions)

    via 10 Reasons OpenCL Will Change Your Design Strategy & Market Position | EE Times.

    There’s still a lot of untapped energy available in the OpenCL programming tools. Apple is still the single largest manufacturer that has adopted OpenCL across a large number of its products (OS and app software). And I know from reading about supercomputing on GPUs that some large-scale hybrid CPU/GPU computers have been ranked worldwide (the Chinese Tianhe being the first and biggest example). This article from EE Times encourages anyone with a background in C programming to give it a shot and see which algorithms could stand to be accelerated using the resources on the motherboard alone. But being EE Times, they are also touting the benefits of using FPGAs in the mix as well.

    To date the low-hanging fruit for desktop PC makers and their peripheral designers and manufacturers has been to reuse the GPU as a massively parallel co-processor where it makes sense. But as the EE Times writer emphasizes, FPGAs can be equal citizens too and might provide even more flexible acceleration. Interest in the FPGA as a co-processor for desktop through higher-end enterprise data center motherboards was brought to the fore by AMD back in 2006 with the Torrenza CPU socket. The hope back then was that giving a secondary specialty processor (at the time an FPGA) its own socket might open a market no one had addressed up to that point. So depending on your needs and what extra processors you might have available on your motherboard, OpenCL might be generic enough going forward to get a boost from ALL the available co-processors on your motherboard.

    Whether or not we see benefits at the consumer-level desktop is very dependent on OS-level support for OpenCL. To date the biggest adopter of OpenCL has been Apple, as they needed an OS-level acceleration API for video-intensive apps like video editing suites. Eventually Adobe recompiled some of its Creative Suite apps to take advantage of OpenCL on MacOS. On the PC side Microsoft has always had DirectX as its API for accelerating any number of multimedia apps (for playback and editing) and is less motivated to incorporate OpenCL at the OS level. But that’s not to say a third-party developer who saw a benefit to OpenCL over DirectX couldn’t create their own plumbing and libraries, ship a runtime package that used OpenCL to support their apps, or license it to anyone who wanted it as part of a larger installer (say for a game or a multimedia authoring suite).

    For the data center this makes far more sense than for the desktop, as DirectX isn’t seen as a scientific computing API or a means of using a GPU as a numeric accelerator for scientific calculations. In this context, OpenCL might be a nice, open, easy-to-adopt library for people working on compute farms with massive numbers of both general-purpose CPUs and GPUs handing off parts of a calculation to one another over the PCI bus or across CPU sockets on a motherboard. Everyone’s needs are going to vary, widely in some cases, but OpenCL might help make that variation easier to address by providing a common library that lets you touch all the co-processors available when a computation needs to be sped up. So keep an eye on OpenCL as a competitor to any GPGPU-style API and library put out by nVidia, AMD, or Intel. OpenCL might help people bridge the differences between these manufacturers too.
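    To give a concrete taste of how generic OpenCL is, here is a minimal sketch using the pyopencl bindings that simply enumerates every platform and co-processor the installed runtime exposes (CPUs, GPUs, and accelerators such as FPGA boards that ship with an OpenCL SDK). It assumes pyopencl and a vendor OpenCL runtime are installed; the platform and device names will obviously vary from machine to machine.

        # Minimal sketch: list every OpenCL platform and device the installed
        # runtime exposes. Requires the pyopencl package plus a vendor runtime
        # (Apple, AMD, Intel, nVidia, or an FPGA vendor's SDK).
        import pyopencl as cl

        TYPE_NAMES = {
            cl.device_type.CPU: "CPU",
            cl.device_type.GPU: "GPU",
            cl.device_type.ACCELERATOR: "ACCELERATOR",
        }

        for platform in cl.get_platforms():
            print(f"Platform: {platform.name} ({platform.vendor})")
            for dev in platform.get_devices():
                kind = TYPE_NAMES.get(dev.type, "OTHER")
                print(f"  {kind:12s} {dev.name}  "
                      f"{dev.max_compute_units} compute units, "
                      f"{dev.global_mem_size // (1024 ** 2)} MB global memory")

    On a Mac this typically shows the Apple platform with the Intel CPU and whatever GPU is fitted; the appeal for the data center is that FPGA cards and other accelerators can show up in the very same list and be targeted with the same kernels.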

  • MLC vs. SLC – Podcast

    Definitely worth checking out if you’re a solutions architect spec’ing new hardware for a project going into a data center. I’ll be listening, that’s for sure.

    Charlie Hodges, StorageSwiss.com – The Home of Storage Switzerland

    Does the difference between MLC vs SLC matter anymore? Storage Switzerland Senior Analyst Eric Slack and I talk about his latest article on MLC and SLC and how manufacturers are working to make MLC more acceptable in the Enterprise.

    To read Eric’s report, click on the picture below.

    Link to Eric Slack’s new report on MLC vs SLC.

    View original post

  • Seagate’s LaCie touts a 25TB (not a typo) box o’ disks for your DESK • The Register

    Image of a dismantled Seagate ST-225 hard disk: a 5¼″ MFM hard disk with a stepper actuator. Technical data: capacity 21.4 MB, speed 3600 rpm, average seek time 65 ms, 4 heads. (Photo credit: Wikipedia)

    Seagate subsidiary LaCie has launched a set of external storage boxes using a 5TB Seagate hard drive – even though disk maker Seagate hasn’t officially launched a 5TB part.

    via Seagate’s LaCie touts a 25TB (not a typo) box o’ disks for your DESK • The Register.

    There isn’t a whole lot of activity in new designs and advances in spinning magnetic hard drives these days. The capacity wars have plateaued around 4TB or so. The next big threshold to cross is either shingled recording or HAMR (which uses a laser to heat the surface just prior to a write being committed to the disk). Due to the technical advances required and a smaller field of manufacturers (there aren’t as many as there were a while ago), the speed at which higher-density devices hit the market has slowed. We saw 1TB and 2TB drives show up quickly one after the other, but the 3TB and 4TB drives followed much more slowly, and usually they were priced at the premium end of the market. Now Seagate has stitched together a 5TB drive and LaCie is rushing it into a number of its own desktop and prosumer-level products.

    The assumption for now is that Seagate has adopted the shingled recording method (which overlaps written tracks in a roof-shingle pattern to increase density). We’ll see how well that design decision performs over the coming months as the early adopters and fanbois who need every last terabyte of storage for their game ROMs, warez, and film/music collections put it through its paces.
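    For a rough sense of what shingling buys, here is a toy calculation. The track-pitch figures are illustrative guesses, not Seagate’s actual geometry.

        # Toy model of shingled magnetic recording (SMR): overlapping writes leave
        # a narrower exposed track, so more tracks fit on the same platter area.
        # Both pitch values are invented for illustration.

        conventional_track_pitch_nm = 75  # hypothetical conventional (PMR) pitch
        shingled_track_pitch_nm = 60      # hypothetical exposed pitch after overlap

        density_gain = conventional_track_pitch_nm / shingled_track_pitch_nm
        print(f"Track density gain: {density_gain:.2f}x")
        print(f"A 4 TB conventional layout becomes roughly {4 * density_gain:.1f} TB shingled")

    A gain in that neighborhood is all it takes to turn a 4TB-class platter stack into a 5TB drive, which is why shingling is the leading suspect here; the tradeoff is slower rewrites of regions that have already been written.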

  • The Memory Revolution | Sven Andersson | EE Times

    A 256Kx4 dynamic RAM chip on an early PC memory card. (Photo by Ian Wilson; photo credit: Wikipedia)

    In almost every kind of electronic equipment we buy today, there is memory in the form of SRAM and/or flash memory. Following Moore’s law, memories have doubled in size every second year. When Intel introduced the 1103 1Kbit dynamic RAM in 1971, it cost $20. Today, we can buy a 4Gbit SDRAM for the same price.

    via The Memory Revolution | Sven Andersson | EE Times

    Read now: a look back from an Ericsson engineer surveying the use of solid-state, chip-based memory in electronic devices. It is always interesting to know how these things started and evolved over time. Advances in RAM design and manufacture are the quintessential example of Moore’s Law, even more so than the advances in processors over the same period. Yes, CPUs are cool and very much a foundation upon which everything else rests (especially dynamic RAM storage). But remember, Intel didn’t start out making microprocessors; it started out as a dynamic RAM chip company at a time when DRAM was just entering the market. That’s the foundation from which Gordon Moore knew the rate at which change was possible with silicon-based semiconductor manufacturing.

    Now we’re looking at mobile smartphone processors and Systems on Chip (SoCs) advancing the state of the art. Desktop and server CPUs are making incremental gains, but the smartphone is really trailblazing in showing what’s possible. We went from combining the CPU with the memory (so-called 3D memory) to putting graphics accelerators (GPUs) into the mix, and now multiple cores and fully 64-bit-clean CPU designs are entering the market (in the form of the latest-model iPhones). It’s not just a memory revolution, but memory was definitely a driver in the market when we migrated from magnetic core memory (state of the art in 1951-52, developed at MIT) to the dynamic RAM chip (state of the art in 1968-69). The drive to develop the DRAM brought all other silicon-based processes along with it, and all the boats were raised. So here’s to the DRAM chip that helped spur the revolution. Without those shoulders, the giants of today wouldn’t be able to stand.
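    The figures in the quoted excerpt hold together nicely. Here is a quick sanity check of the “doubled in size every second year” claim, starting from the 1 Kbit Intel 1103 at $20 in 1971 and ending at a 4 Gbit part for the same price.

        # Sanity check of the doubling claim quoted above: how many two-year
        # doublings does it take to go from 1 Kbit to 4 Gbit per $20?
        import math

        start_bits = 1 * 1024       # 1 Kbit (Intel 1103, 1971)
        end_bits = 4 * 1024 ** 3    # 4 Gbit
        doublings = math.log2(end_bits / start_bits)
        years = doublings * 2       # one doubling every second year

        print(f"{doublings:.0f} doublings, about {years:.0f} years")
        print(f"1971 + {years:.0f} = {1971 + years:.0f}")

    That works out to 22 doublings, or about 44 years, landing right around the time the article was written. Moore’s Law, almost to the year.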

  • Google announces Project Tango, augmented reality for all

    I hope they work directly with the Google Glass group and turn it into a “suite” of friendly, interoperable pieces and components. That would be a big plus. VR or AR doesn’t matter to me; I just want the augmentation to be real, and useful.

    vrzonesg, Tech News for Geeks

    Google officially unveils project Tango, an initiative that seeks to utilize smartphones to build upon Google’s already dominating mapping empire.

    Project Tango, a project that involves ‘specialists’ from the world over, will put the ability to create augmented reality data into the hands of co…

    Read more: http://vr-zone.com/articles/google-announces-project-tango-augmented-reality/72419.html

    View original post

  • Extending SSD’s Lifespan

    Glad to help out. I think SSDs are the first big thing in a while to help speed up desktop computers. After Intel came out with the i-series CPUs and hard drives hit 4TB, things have been changing very slowly and incrementally, so the SSD at least is giving people some extra speed. But with new tech come new problems, like lifespan (there’s a back-of-the-envelope lifespan estimate after the excerpt below).

    markobroz, cheapchipsmemory

    Thanks to my fellow blogger Carpetbomberz, I now have something to write. Thanks, mate!

    Well, today I will go around talking about SSDs.

    As a PC owner, I know for a fact that you have experienced quite a number of problems in this department. In my case, there are instances when my PC can’t read the SSD; it is prone to “freezing” and becomes unresponsive, which leads to its eventual downfall. Most PCs use an SSD as their main storage, which is why it is a big problem if it is rendered useless, not to mention the files and data stored on it that will be forever lost. The SSD is a good find, but there are downsides to all this. The cost of the device is definitely a problem too, which is why you have to use it to your full advantage.

    Thankfully, there are ways to extend its…

    View original post 101 more words
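    For anyone worried about lifespan, a back-of-the-envelope estimate helps put it in perspective. Every figure below (capacity, P/E rating, write amplification, daily writes) is an assumption picked for illustration, not the spec of any particular drive.

        # Rough SSD lifespan estimate. All figures are illustrative assumptions.

        capacity_gb = 128          # drive capacity
        pe_cycles = 3_000          # assumed MLC program/erase rating
        write_amplification = 3.0  # internal writes per host write (assumed)
        host_writes_gb_per_day = 50

        total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
        lifespan_years = total_host_writes_gb / host_writes_gb_per_day / 365

        print(f"Endurance budget: ~{total_host_writes_gb / 1024:.0f} TB of host writes")
        print(f"At {host_writes_gb_per_day} GB/day that is ~{lifespan_years:.0f} years")

    With those assumptions the drive lasts around seven years, longer than most desktops stay in service; the practical point is that wear-leveling, TRIM, and keeping some free space are what hold the write amplification figure down and protect that budget.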

  • Group Forms to Drive NVDIMM Adoption | EE Times

    Flash memory (Photo credit: Wikipedia)

    As NAND flash is supplemented over the next few years by new technologies with improved durability and the same performance as system memory, “we’ll be able to start thinking about building systems where memory and storage are combined into one entity,” he said. “This is the megachange to computer architecture that SNIA is looking at now and preparing the industry for when these new technologies happen.”

    via Group Forms to Drive NVDIMM Adoption | EE Times.

    More good news on the Ultradimm, non-volatile DIMM front: a group is forming to begin setting standards for a new form factor. To date SanDisk is the only company known to have architected and manufactured a shipping non-volatile DIMM memory product, and then only under contract to IBM for the X6 Intel-based server line. SanDisk is not shipping this, or under contract to make it, for anyone else by all reports, but that’s not keeping its competitors from getting a new product into heavy sampling and QA testing. We might begin seeing a rush of different products, with varying interconnects and form factors, all of which claim to plug into a typical RAM DIMM slot on an Intel-based motherboard. But as the article on the IBM Ultradimm indicates, this isn’t a simple 1:1 swap of DIMMs for Ultradimms. You need heavy lifting and revisions at the firmware/BIOS level to take advantage of the Ultradimms populating the DIMM slots on the motherboard. This is not easy, nor is it cheap, and as far as OS support goes, you may need to see whether your OS of choice will also help speed the plow by doing caching, loading, and storing of memory differently once it has become “aware” of the Ultradimms on the motherboard.

    Without the OS and firmware support you would be wasting your valuable money and time trying to get a real boost from using Ultradimms off the shelf in your own randomly chosen Intel-based servers. IBM’s X6 line is just hitting the market and has been sampled by some heavy-hitting real-time financial trading data centers to double-check the claims made about speed and performance. IBM has used this period to make sure the product makes a difference worth whatever premium they plan on charging for the Ultradimm on customized orders for the X6. But knowing that further down the line a group is at least attempting to organize and set standards means this can become a competitive market for a new memory form factor, and EVERYONE may eventually be able to buy something like an Ultradimm if they need it for their data center server farm. It’s too early to tell where this will lead, but re-using the JEDEC DIMM connection interface is a good start. If Intel wanted to help accelerate this, their onboard memory controllers could also become less DRAM-specific and more generalized, able to handle anything plugged into the DIMM slots on the motherboard. That might prove the final step in really opening the market to a wave of Ultradimm designers and manufacturers. Keep an eye on Intel and see where their chipset architecture, and more specifically their memory controller road maps, lead for future support of NVDIMM or similar technologies.
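    If and when the OS does become “aware” of non-volatile DIMMs, one plausible way applications would see them is as a byte-addressable, memory-mappable device rather than as a block device behind a storage driver. Here is a minimal sketch of that idea; the /dev/pmem0 path and the plain-mmap approach are assumptions for illustration, not anything the article, IBM, or SanDisk specify.

        # Minimal sketch: treat a non-volatile DIMM as memory-mapped storage,
        # assuming the OS exposes it as a device file (the path is hypothetical).
        import mmap
        import os

        DEVICE = "/dev/pmem0"  # hypothetical NVDIMM device node
        LENGTH = 4096          # map a single page for the example

        fd = os.open(DEVICE, os.O_RDWR)
        try:
            buf = mmap.mmap(fd, LENGTH, mmap.MAP_SHARED,
                            mmap.PROT_READ | mmap.PROT_WRITE)
            buf[0:13] = b"hello, nvdimm"  # ordinary loads/stores, no block I/O path
            buf.flush()                   # ask the OS to make the write durable
            buf.close()
        finally:
            os.close(fd)

    The interesting part is what’s missing: no file system, no SATA/SAS stack, no page cache in the middle. That is the “memory and storage combined into one entity” future the SNIA quote above is pointing at.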

