Category: technology

General technology, not anything in particular

  • MIT boffin: Salted disks hold SIX TIMES more data • The Register

    Close-up of a hard disk head resting on a disk...
    Image via Wikipedia

    This method shows, Yang says, that “bits can be patterned more densely together by reducing the number of processing steps”. The HDD industry will be fascinated to understand how BPM drives can be made at a perhaps lower-than-anticipated cost.

    via MIT boffin: Salted disks hold SIX TIMES more data • The Register.

    Moore’s Law applies to semiconductors built on silicon wafers, and to a lesser extent it has applied to hard disk drive storage as well. When IBM created its GMR (Giant Magneto-Resistive) read/write head technology and developed it into a shipping product, a real storage arms race began. Densities increased, prices dropped, and before you knew it hard drives went from 1Gbyte to 10Gbytes practically overnight. Soon a 30Gbyte drive was the default boot and data drive for every shipping PC, when just a few years before a 700Mbyte drive was the norm. That was a greater-than-10X improvement with the adoption of a single new technology.
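
    Just to put some numbers on that GMR-era jump: the capacities above come from the post, but the four-year span is my own rough assumption for illustration.

    ```python
    # Back-of-the-envelope math on the GMR-era capacity jump described above.
    # 700 MB -> 30 GB are the figures from the post; the 4-year span is assumed.

    def growth_factor(old_gb: float, new_gb: float) -> float:
        """Return the multiple by which capacity increased."""
        return new_gb / old_gb

    def cagr(old_gb: float, new_gb: float, years: float) -> float:
        """Compound annual growth rate over the given span of years."""
        return (new_gb / old_gb) ** (1.0 / years) - 1.0

    factor = growth_factor(0.7, 30)   # 700 MB -> 30 GB, roughly 43x
    rate = cagr(0.7, 30, 4)           # assuming that took about 4 years

    print(f"{factor:.0f}x capacity increase")
    print(f"~{rate * 100:.0f}% compound annual growth")
    ```

    Even with a generous time window, the annual growth rate comes out well north of 100% a year, which is why that era felt like an arms race.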

    I remember a lot of those touted technologies being added and tacked on at the same time. PRML (Partial Response Maximum Likelihood) and Perpendicular Magnetic Recording (PMR) both helped keep the ball rolling in terms of storage density. IBM even did some pretty advanced work layering magnetic films between magnetically insulating layers (thin layers of ruthenium) to create even stronger magnetic recording media for the newer higher-density drives.

    However, each new incremental advance has now run its course, and the advances in storage technology are slowing down again. But there’s still one shining hope: Bit-Patterned Media (BPM). And in all the speculation about which technology is going to keep the storage density ball rolling, this new announcement is sure to play its part. A competing technique that uses lasers to heat the disk surface before writing data (heat-assisted magnetic recording) is also being researched and discussed, but it is likely to force a lot of storage vendors to agree to transition to that technology simultaneously. BPM, on the other hand, isn’t so different and revolutionary that it must be rolled out en masse by every drive vendor at once to ensure everyone stays compatible. Better yet, BPM may be a much lower-cost and more immediate way to increase storage densities without incurring big equipment and manufacturing upgrade costs.

    So I’m thinking we’ll be seeing BPM much more quickly and we’ll continue to enjoy the advances in drive density for a little while longer.

  • Intels Plans for New SSDs in 2012 Detailed

    Logo of Intel, Jul 1968 - Dec 2005
    Image via Wikipedia

    Through first quarter of 2012, Intel will be releasing new SSDs: Intel SSD 520 “Cherryville” Series replacement for the Intel SSD 510 Series, Intel SSD 710 “Lyndonville” Series Enterprise HET-MLC SSD replacement for X25-E series, and Intel SSD 720 “Ramsdale” Series PCIe based SSD. In addition, you will be seeing two additional mSATA SSDs codenamed “Hawley Creek” by the end of the fourth quarter 2011.

    via Intels Plans for New SSDs in 2012 Detailed.

    That’s right, folks: Intel is jumping on the high-performance PCIe SSD bandwagon with the Intel SSD 720 in the first quarter of 2012. I don’t know what price they will charge, but given quotes and pre-release specs it’s going to compete against products from competitors like the RamSan, Fusion-io and OCZ’s top-level PCIe product, the R4. My best guess, based on pricing for those products, is that it will land in the roughly $10,000+ category with an x8 PCIe interface and a full complement of Flash memory (usually over 1TB on this class of PCIe card).

    Knowing that Intel’s got some big engineering resources behind their SSD designs, I’m curious to see how close they can come to the performance statistics quoted in this table here:

    http://www.tomshardware.com/gallery/intel-ssd-leak,0101-296920-0-2-3-1-jpg-.html

    2200 Mbytes/sec of Read throughput and 1100 Mbytes/sec of Write throughput. Those are some pretty hefty numbers compared to currently shipping products in the upper prosumer and lower Enterprise Class price category. Hopefully Anandtech will get a shipping or even pre-release version before the end of the year and give it a good torture test. Following Anand Lal Shimpi on his Twitter feed, I’m seeing all kinds of tweets about how a lot of pre-release SSDs and PCIe SSDs from manufacturers fail during the benchmarks. That doesn’t bode well for the Quality Control departments at the manufacturers assembling and testing these products. Especially considering the price premium of these items, it would be much more reassuring if the testing were more rigorous and conservative.
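
    To get a feel for what those leaked numbers would mean in practice, here is a quick sketch. The 2200/1100 Mbytes/sec rates are the figures quoted above; the 1TB capacity is an assumption for illustration, since cards in this class usually carry about that much Flash.

    ```python
    # What the leaked Intel SSD 720 throughput figures would mean in practice.
    # Rates come from the table linked above; 1 TB capacity is assumed.

    def seconds_to_transfer(capacity_gb: float, rate_mb_per_s: float) -> float:
        """Time to move capacity_gb at a sustained rate of rate_mb_per_s."""
        return capacity_gb * 1000 / rate_mb_per_s

    read_s = seconds_to_transfer(1000, 2200)    # full sequential read of 1 TB
    write_s = seconds_to_transfer(1000, 1100)   # full sequential write of 1 TB

    print(f"Read the whole drive in ~{read_s / 60:.1f} minutes")
    print(f"Write the whole drive in ~{write_s / 60:.1f} minutes")
    ```

    Reading the entire card in under eight minutes is the kind of number that makes the Enterprise price premium easier to swallow.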

  • AnandTech – Qualcomms New Snapdragon S4: MSM8960 & Krait Architecture Explored

    Qualcomm remains the only active player in the smartphone/tablet space that uses its architecture license to put out custom designs. The benefit to a custom design is typically better power and performance characteristics compared to the more easily synthesizable designs you get directly from ARM. The downside is development time and costs go up tremendously.

    via AnandTech – Qualcomms New Snapdragon S4: MSM8960 & Krait Architecture Explored.

    The Snapdragon CPU
    Image via the Qualcomm website

    I’m very curious to see how the different ARM-based processors fare against one another in each successive generation, especially with the move to the Cortex-A15, none of which will see a quick implementation in a handheld mobile device. The Cortex-A15 is a long way off yet, but it appears that in spite of the next big thing in ARM-designed cores, there’s a ton of incremental improvement and evolutionary progress being made on current-generation ARM cores. The Cortex-A8 and Cortex-A9 have a lot of life in them for the foreseeable future, including die shrinks that allow either faster clock speeds, or constant clock speeds with lower power drain and a lower Thermal Design Power (TDP).

    Apple is also moving steadily toward a die shrink in order to cement the gains made in its A5 chip design. Taiwan Semiconductor Manufacturing Company (TSMC) is the biggest partner in this direction and is attempting to run the next iteration of Apple mobile processors on its state-of-the-art 28-nanometer process.

  • Rise of the Multi-Core Mesh Munchkins: Adapteva Announces New Epiphany Processor – HotHardware

    Epiphany Processor from Adapteva
    Epiphany Block Diagram

    Many-core processors are apparently the new black for 2011. Intel continues to work on both its single chip cloud computer and Knights Corner, Tilera made headlines earlier this year, and now a new company, Adapteva, has announced its own entry into the field.

    via Rise of the Multi-Core Mesh Munchkins: Adapteva Announces New Epiphany Processor – HotHardware.

    A competitor to Tilera and Intel’s MIC has entered the field as a mobile co-processor. Given the volatile nature of chip architectures in the mobile market, this is going to be a hard sell for some device designers, I think. I say this because each new generation of mobile CPU gains more and more integrated features as each new die shrink allows more embedded functions. Graphics processors are now being embedded wholesale into every smartphone CPU. Other features like memory controllers and baseband processors will no doubt soon be added to the list as well. If Adapteva wants any traction at all in the mobile market, they will need to further develop the Epiphany into a synthesizable core that can be added to an existing CPU (most likely a design from ARM). Otherwise, trying to stick with being a separate auxiliary chip is going to hamper and severely limit the potential applications of their product.

    Witness the integration of the graphics processing unit. Not long ago it was a way to differentiate a phone, but it required integration into the motherboard design along with whatever power it demanded. Very shortly after GPUs were added to cell phones, they were integrated into the CPU chip sandwich to help keep manufacturing costs and the power budget in check. If the Epiphany had been introduced around the golden age of discrete chips on cell phone motherboards, it would make a lot more sense. But now you need to be embedded, integrated and 100% ARM compatible, with a fully baked developer toolkit. Otherwise, it’s all uphill from the product introduction forward. If there’s an application for the Epiphany co-processor, I hope they concentrate on the tools to fully use the device and develop a niche right out of the gate, rather than attempt to get some big-name but small-scale wins on individual devices in the Android market. Those seem like the most likely candidates for shipping product right now.

  • Birck Nanotechnology Center – Ferroelectric RAM

    Schematic drawing of original designs of DRAM ...
    Image via Wikipedia

    The FeTRAMs are similar to state-of-the-art ferroelectric random access memories, FeRAMs, which are in commercial use but represent a relatively small part of the overall semiconductor market. Both use ferroelectric material to store information in a nonvolatile fashion, but unlike FeRAMS, the new technology allows for nondestructive readout, meaning information can be read without losing it.

    via Discovery Park – Birck Nanotechnology Center – News.

    I’m always pleasantly surprised to read that work is still being done on alternative materials for Random Access Memory (RAM). I was closely following developments in the ferroelectric RAM category by folks like Samsung and HP. Very few of these projects promised enough return on investment to be developed into products, and some notable efforts by big manufacturers were abandoned altogether.

    If this research effort can be licensed to a big chip manufacturer and not turned into a form of patent-trolling ammunition, I would feel the effort was not wasted. Too often these days, patented technologies are not used as a means of advancing the art of computer technology; instead they become a portfolio for a litigator seeking rent on the patents.

    Given the frequency of abandoned projects in the alternative DRAM technology category, I’m hoping the compatibility of this chip’s manufacturing process with existing chip-making technology will be a big step forward. A paradigm-shifting technology like ferroelectric RAM might just push us to the next big mountain top of power conservation, performance and capability that the CPU enjoyed from 1969 to roughly 2005, when chip speeds began to plateau.

  • AnandTech – OCZ Z-Drive R4 CM88 1.6TB PCIe SSD Review

    In the enterprise segment where 1U and 2U servers are common, PCI Express SSDs are very attractive. You may not always have a ton of 2.5″ drive bays, but there’s usually at least one high-bandwidth PCIe slot unused. The RevoDrive family of PCIe SSDs were targeted at the high-end desktop or workstation market, but for an enterprise-specific solution OCZ has its Z-Drive line.

    via AnandTech – OCZ Z-Drive R4 CM88 1.6TB PCIe SSD Review.

    Anandtech is breaking new ground covering some Enterprise-level segments of the Solid State Disk industry. While I doubt he’ll be rating Violin and Texas Memory Systems gear any time soon, OCZ’s low-end Enterprise PCIe card is beginning to approach that target. We’re talking $10,000 USD and up for anyone who wants to participate, which puts it in the middle to high end of Fusion-io’s range and barely touches the lower end of Violin and TMS, not to mention Virident. Given that, it is still wild to see what kind of architecture and performance optimization one gets for the money. SandForce rules the day at OCZ for anything requiring top write performance. It’s also interesting to learn that the SandForce 25xx series uses super-capacitors to hold enough reserve power to flush the write caches on a power outage. It’s expensive, but it moves the product up a few notches on the Enterprise-level reliability scale.

  • Augmented Reality Start-Up Ready to Disrupt Business – Tech Europe – WSJ

    Image representing Layar as depicted in CrunchBase
    Image via CrunchBase

    “We have added to the platform computer vision, so we can recognize what you are looking at, and then add things on top of them.”

    via Augmented Reality Start-Up Ready to Disrupt Business – Tech Europe – WSJ.

    I’ve been a fan of Augmented Reality for a while, following the announcements from Layar over the past two years. I’m hoping something comes out of this work beyond another channel for selling, advertising and marketing. But innovation always follows where the money is, and artistic, creative pursuits are NOT it. Witness the evolution of Layar from a toolkit to a whole package of brand-loyalty add-ons, ready to be shipped wholesale to any smartphone owner unwitting enough to download a Layar-created app.

    The emphasis in this WSJ article, however, is not on how Layar is trying to market itself. Instead they are more worried about how Layar is creating a ‘virtual’ space where metadata is tagged onto a physical location. So a Layar Augmented Reality squatter can set up a very mundane virtual T-shirt shop (much like Second Life) in the same physical location as a high-class couturier on a high street in London or Paris. What right does anyone have to squat in the Layar domain? Just like the Domain Name System squatters of today, they have every right, by being there first. Which brings to mind how this will evolve into a game of technical one-upsmanship, whereby each Augmented Reality domain will be subject to the market forces of popularity. Witness the chaotic evolution of social networking, where AOL, Friendster, MySpace, Facebook and now Google+ all usurp market mindshare from one another.

    While the Layar squatter has his T-shirt shop today, the question is: who knows this other than other Layar users? And who can say whether anyone else ever will? This leads me to conclude this is a much bigger deal to the WSJ than it is to anyone who might be sniped at or squatted upon within an Augmented Reality cul-de-sac. Though those stores and corporations may not be able to budge the Layar squatters, they can at least lay claim to the rest of their empire and prevent any future miscreants from owning their virtual space. But as I say, in one-upsmanship there is no real end game, only the NEXT game.
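
    The first-come-first-served dynamic described above can be sketched as a registry keyed by physical coordinates, where the earliest claimant wins. None of these names come from Layar’s actual API; this is purely an illustrative toy model.

    ```python
    # Hypothetical sketch of AR location squatting: a registry keyed by
    # rounded coordinates, where the first claimant keeps the spot.
    # All names here are invented for illustration, not Layar's real API.

    class GeoRegistry:
        def __init__(self, precision: int = 4):
            self.precision = precision   # 4 decimal places is roughly 11 m
            self.claims = {}             # (lat, lon) -> owner

        def _key(self, lat: float, lon: float):
            """Quantize coordinates so nearby points map to one cell."""
            return (round(lat, self.precision), round(lon, self.precision))

        def claim(self, lat: float, lon: float, owner: str) -> bool:
            """Register owner at a location; fails if someone got there first."""
            key = self._key(lat, lon)
            if key in self.claims:
                return False
            self.claims[key] = owner
            return True

    registry = GeoRegistry()
    registry.claim(48.8530, 2.3499, "virtual-tshirt-shop")  # squatter is first
    ok = registry.claim(48.8530, 2.3499, "couturier")       # real tenant too late
    print(ok)   # False: first come, first served
    ```

    The interesting design question is exactly the one the article raises: nothing in a scheme like this ties a virtual claim to any legal right over the physical address underneath it.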

  • $1,279-per-hour, 30,000-core cluster built on Amazon EC2 cloud

    Amazon Web Services logo
    Image via Wikipedia

    Amazon EC2 and other cloud services are expanding the market for high-performance computing. Without access to a national lab or a supercomputer in your own data center, cloud computing lets businesses spin up temporary clusters at will and stop paying for them as soon as the computing needs are met.

    via $1,279-per-hour, 30,000-core cluster built on Amazon EC2 cloud.

    If you own your Data Center, you might be a little nervous right now, as even a Data Center can be outsourced on an as-needed basis. Especially if you are doing scientific computing, you should weigh the fixed costs of acquiring and maintaining all that sunk capital after the cluster is up and running. This story provides one great example of what I think Cloud Computing could one day become. Rent-a-Center-style data centers and compute clusters seem like an incredible value, especially for a university, but even more so for a business that may not need to keep a real live data center under its control. Examples abound: even online services like Dropbox lease their compute cycles from the likes of Amazon Web Services and the Elastic Compute Cloud (EC2). And if migrating an application into a Data Center along with the data set to be analyzed can be sped up sufficiently, and the cost kept down, who knows what might be possible.

    The opportunity costs are many when it comes to having access to a sufficiently large number of nodes in a compute cluster. With modeling applications especially, you get to run a simulation at finer time slices and at higher resolution, possibly gaining a better understanding of how closely your algorithms match the real world. This isn’t just for business but for science as well, and I think being saddled with a typical Data Center installation and its infrastructure, depreciation and staffing costs makes ownership seem less attractive if the big Data Center providers are willing to sell part of their compute cycles at a reasonable rate. The best part is you can shop around too. In the bad old days of batch computing and the glassed-in data center, before desktops and mini-computers, people were dying to get access to the machine to run their jobs. Now the surplus of computing cycles is so great for the big players that they help subsidize the costs of build-outs and redundancies by letting people bid on the spare compute cycles they have just lying around generating heat. It’s a whole new era of compute-cycle auctions, and I for one am dying to see more stories like this in the future.
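
    The headline figures make for striking unit economics. Using just the numbers from the article title ($1,279 per hour, 30,000 cores):

    ```python
    # Unit-cost arithmetic on the headline EC2 cluster figures.
    cluster_cost_per_hour = 1279.0   # USD, from the article title
    cores = 30_000

    cost_per_core_hour = cluster_cost_per_hour / cores   # ~$0.043
    full_day_cost = cluster_cost_per_hour * 24           # a 24-hour run

    print(f"~${cost_per_core_hour:.4f} per core-hour")
    print(f"${full_day_cost:,.0f} for a 24-hour run of the whole cluster")
    ```

    Just over four cents per core-hour, and roughly $30,700 to run all 30,000 cores for a full day, with no hardware, power, or staffing on your own books afterward.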

  • AppleInsider | Rumor: Apple investigating USB 3.0 for Macs ahead of Intel

    USB Connector

    A new report claims Apple has continued to investigate implementing USB 3.0 in its Mac computers independent of Intel’s plans to eventually support USB 3.0 at the chipset level.

    via AppleInsider | Rumor: Apple investigating USB 3.0 for Macs ahead of Intel.

    This is interesting to read. I have not paid much attention to USB 3.0, given how slowly it has been adopted by the PC manufacturing world. But in the past, Apple has been quicker to adopt some mainstream technologies than its PC manufacturing counterparts. The value-add increases as more and more devices also adopt the new interface, namely anything that runs iOS. The surest sign a move is afoot will be whether there is USB 3.0 support in iOS 5.x, and whether there is hardware support in the next revision of the iPhone.

    And now it appears Apple is releasing two iPhones, a minor iPhone 4 update and a new iPhone 5, at roughly the same time. Given reports that the new iPhone 5 has a lot of RAM installed, I’m curious how much of the storage is NAND-based Flash memory. Will we see something on the order of 64GB again, or more, this time around when the new phones are released? The upshot is that for instances where you tether your device to sync it to the Mac, a USB 3.0-compliant interface’s file transfer speed will make the chore of pulling out the cables worth the effort. However, the all-encompassing sharing of data between Apple devices may make the adoption of USB 3.0 seem less necessary if every device can find its partner and sync over the airwaves instead of over iPod connectors.

    Still, it would be nice to have a dedicated high-speed cable for the inevitable external hard drive connection necessary in these days of smaller laptops like the MacBook Air or the Mac mini. Less space internally means these devices will need a supplement to the internal hard drive, one that even Apple’s iCloud cannot provide, especially considering the size of video files coming off each new generation of HD video cameras. I don’t care what Apple says, but 250GB of AVCHD files is going to sync very, very slowly. All the more reason to adopt USB 3.0 as soon as possible.
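
    A rough sanity check on that 250GB complaint. The effective throughput figures below are my own assumptions about real-world rates, not the bus maximums: USB 2.0 tops out around 35 Mbytes/sec in practice, and early USB 3.0 external drives manage a few hundred Mbytes/sec.

    ```python
    # How long would 250 GB of AVCHD take to move over each bus?
    # Sustained rates are assumed real-world figures, not spec maximums.

    def hours_to_sync(size_gb: float, rate_mb_per_s: float) -> float:
        """Hours to transfer size_gb at a sustained rate of rate_mb_per_s."""
        return size_gb * 1000 / rate_mb_per_s / 3600

    usb2 = hours_to_sync(250, 35)    # assumed effective USB 2.0 rate
    usb3 = hours_to_sync(250, 300)   # assumed effective USB 3.0 rate

    print(f"USB 2.0: ~{usb2:.1f} hours")
    print(f"USB 3.0: ~{usb3 * 60:.0f} minutes")
    ```

    Call it two hours versus a quarter of an hour, and that is before Wi-Fi sync enters the picture at a fraction of either rate.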

  • Single-chip DIMM offers low-power replacement for sticks of RAM | ExtremeTech

    A 256Kx4 Dynamic RAM chip on an early PC memor...
    Image via Wikipedia

    Invensas, a subsidiary of chip microelectronics company Tessera, has discovered a way of stacking multiple DRAM chips on top of each other. This process, called multi-die face-down packaging, or xFD for short, massively increases memory density, reduces power consumption, and should pave the way for faster and more efficient memory chips.

    via Single-chip DIMM offers low-power replacement for sticks of RAM | ExtremeTech.

    Who says there’s no such thing as progress? Apart from the DDR memory bus data rates soon moving from DDR3 to DDR4, what have you read that was significantly different, much less better, than the first-generation DDR DIMMs from years ago? Chip stacking is de rigueur for manufacturers of Flash memory, especially in mobile devices with limited real estate on the motherboards. This packaging has flowed back into the computer market very handily and has led to small form factors in all manner of Flash memory devices. Whether it be thumb drives, aftermarket 2.5″ laptop Solid State Disks, or chips embedded on an mSATA module, everyone benefits equally.

    Whither the stacking of RAM modules? I know there have been some efforts to do this, again for the mobile device market, but any large-scale flow back into the general computing market has been hard to see. I’m hoping this Invensas announcement eventually becomes a real shipping product and not an attempt to stake a claim on intellectual property that will take the form of lawsuits against current memory designers and manufacturers. Stacking is the way to go; even if it can never be used in, say, a CPU, I would think the clock speed and power-savings requirements on RAM modules might be sufficient to allow some stacking to occur. And if memory access speeds improve at the same time, so much the better.