Category: blogroll

This is what I subscribe to myself.

  • Expect the First Windows 8 Snapdragon PC Late 2012

    Qualcomm CEO Paul Jacobs, speaking during the San Diego semiconductor company’s annual analyst day in New York, said Qualcomm is currently working with Microsoft to ensure that the upcoming Windows 8 operating system will run on its ARM-based Snapdragon SoCs.

    via Expect the First Windows 8 Snapdragon PC Late 2012.

    Windows 8 is a’comin’ down the street, and I bet you’ll see it sooner rather than later, maybe as early as June on some products. The reason, of course, is that the tablet market is sucking all the air out of the room, and Microsoft needs a win to keep mindshare favorable to its view of the consumer computer market. Part of that drive is fostering a new level of cooperation with system-on-chip manufacturers who until now have been devoted to the mobile phone and smartphone market. Everyone now wants a great big Microsoft hope to conquer the Apple iPad in the tablet market, and this may be Microsoft’s only chance to accomplish that in the coming year.

    Forrester Research, however, predicted just two days ago that the Windows 8 tablet would be dead on arrival:

    IDG News Service – Interest in tablets with Microsoft’s Windows 8 is plummeting, Forrester Research said in a study released on Tuesday.

    http://www.computerworld.com/s/article/9222238/Interest_waning_on_Windows_8_tablet_Forrester_says

    The key to making a mark in the tablet computing market is content, content, content. Performance and specs alone will not create a Windows 8 tablet market in what is an Apple-dominated marketplace, as the article says. It also appears that previous players in the failed PC tablet market will make a valiant second attempt, this time using Windows 8 (I’m thinking of Fujitsu, HP and Dell, according to this article).

  • MIT boffin: Salted disks hold SIX TIMES more data • The Register

    [Image via Wikipedia: close-up of a hard disk head resting on a disk]

    This method shows, Yang says, that “bits can be patterned more densely together by reducing the number of processing steps”. The HDD industry will be fascinated to understand how BPM drives can be made at a perhaps lower-than-anticipated cost.

    via MIT boffin: Salted disks hold SIX TIMES more data • The Register.

    Moore’s Law applies to semiconductors built on silicon wafers, and to a lesser extent it has had some application to hard disk drive storage as well. When IBM created its GMR (Giant Magneto-Resistive) read/write head technology and developed it into a shipping product, a real storage arms race began. Densities increased, prices dropped, and before you knew it hard drives went from 1GByte to 10GBytes practically overnight. Soon a 30GByte drive was the default boot and data drive on every shipping PC, when just a few years before a 700MByte drive had been the norm. That was a greater-than-10X improvement with the adoption of a single new technology.

    I remember that a lot of those touted technologies were tacked on around the same time. PRML (Partial Response Maximum Likelihood) and Perpendicular Magnetic Recording (PMR) both helped keep the ball rolling in terms of storage density. IBM even did some pretty advanced work layering magnetic films between thin spacer layers of ruthenium to help create even stronger magnetic recording media for the newer, higher-density drives.

    However, each of those incremental advances has now run its course, and the advances in storage technology are slowing down again. There’s still one shining hope: Bit-Patterned Media (BPM). In all the speculation about which technology is going to keep the storage-density ball rolling, this new announcement is sure to play its part. A competing technique that uses lasers to heat the disk surface before writing data is also being researched and discussed, but it would likely force the storage vendors to agree to transition to that technology simultaneously. BPM, on the other hand, isn’t so different and revolutionary that it must be rolled out en masse by every drive vendor at once to ensure everyone stays compatible. Better yet, BPM may be a much lower-cost and more immediate way to increase storage densities without incurring big equipment and manufacturing upgrade costs.

    So I’m thinking we’ll be seeing BPM much more quickly and we’ll continue to enjoy the advances in drive density for a little while longer.

  • Birck Nanotechnology Center – Ferroelectric RAM

    [Image via Wikipedia: schematic drawing of original DRAM designs]

    The FeTRAMs are similar to state-of-the-art ferroelectric random access memories, FeRAMs, which are in commercial use but represent a relatively small part of the overall semiconductor market. Both use ferroelectric material to store information in a nonvolatile fashion, but unlike FeRAMS, the new technology allows for nondestructive readout, meaning information can be read without losing it.

    via Discovery Park – Birck Nanotechnology Center – News.

    I’m always pleasantly surprised to read that work is still being done on alternate materials for Random Access Memory (RAM). I used to follow developments in ferroelectric RAM by folks like Samsung and HP quite closely. Very few of those efforts promised enough return on investment to be developed into products, and some notable attempts by big manufacturers were abandoned altogether.

    If this research can be licensed to a big chip manufacturer and not turned into patent-trolling ammunition, I would feel the effort was not wasted. Too often lately, patented technologies like this are not used to advance the art of computing; instead they become a portfolio handed to a litigator seeking rent on the patents.

    Given how many projects in the alternative-DRAM category have been abandoned, I’m hoping the compatibility of this chip’s manufacturing process with existing chip-making technology will prove to be the big step forward. A paradigm-shifting technology like magnetic RAM might just push us to the next big mountaintop of power conservation, performance and capability, the kind of run the CPU enjoyed from 1969 to roughly 2005, when clock speeds began to plateau.

  • AppleInsider | Rumor: Apple investigating USB 3.0 for Macs ahead of Intel

    [Image: USB connector]

    A new report claims Apple has continued to investigate implementing USB 3.0 in its Mac computers independent of Intel’s plans to eventually support USB 3.0 at the chipset level.

    via AppleInsider | Rumor: Apple investigating USB 3.0 for Macs ahead of Intel.

    This is interesting to read. I have not paid much attention to USB 3.0, given how slowly it has been adopted by the PC manufacturing world, but in the past Apple has been quicker to adopt some mainstream technologies than its PC manufacturing counterparts. The value only increases as more devices adopt the new interface, namely anything that runs iOS. The surest sign a move is on will be whether there is USB 3.0 support in iOS 5.x and whether there is hardware support in the next revision of the iPhone.

    And now it appears Apple is releasing two iPhones at roughly the same time: a minor iPhone 4 update and a new iPhone 5. Given reports that the new iPhone 5 has a lot of RAM installed, I’m curious how much of the storage is NAND-based flash memory. Will we see something on the order of 64GB again, or more, when the new phones are released? The upshot: in cases where you tether your device to sync it with the Mac, a USB 3.0-compliant interface would make the file-transfer speed worth the chore of pulling out the cables. However, the all-encompassing, always-on sharing of data between Apple devices may make adopting USB 3.0 seem less necessary if every device can find its partner and sync over the airwaves instead of over iPod connectors.

    Still, it would be nice to have a dedicated high-speed cable for the inevitable external hard drive connection in these days of smaller machines like the MacBook Air or the Mac mini. Less internal space means those devices will need a supplement to the internal drive, one that even Apple’s iCloud cannot fulfill, especially considering the size of the video files coming off each new generation of HD video cameras. I don’t care what Apple says: 250GB of AVCHD files is going to sync very, very slowly. All the more reason to adopt USB 3.0 as soon as possible.
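
    A quick back-of-the-envelope comparison makes the point. The throughput figures below are my own assumptions (roughly 35 MB/s of real-world USB 2.0 throughput versus perhaps 350 MB/s for an early USB 3.0 external drive), not measured numbers:

        # Back-of-the-envelope sync-time estimate for a 250GB AVCHD library.
        # Throughput figures are assumptions, not measured values.
        LIBRARY_GB = 250
        USB2_MB_S = 35    # assumed real-world USB 2.0 throughput, MB/s
        USB3_MB_S = 350   # assumed early USB 3.0 external-drive throughput, MB/s

        def hours_to_sync(size_gb, throughput_mb_s):
            """Hours needed to move size_gb gigabytes at throughput_mb_s."""
            return size_gb * 1024 / throughput_mb_s / 3600

        print(f"USB 2.0: {hours_to_sync(LIBRARY_GB, USB2_MB_S):.1f} hours")  # ~2.0 hours
        print(f"USB 3.0: {hours_to_sync(LIBRARY_GB, USB3_MB_S):.1f} hours")  # ~0.2 hours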

  • Single-chip DIMM offers low-power replacement for sticks of RAM | ExtremeTech

    [Image via Wikipedia: a 256Kx4 Dynamic RAM chip on an early PC memory…]

    Invensas, a subsidiary of chip microelectronics company Tessera, has discovered a way of stacking multiple DRAM chips on top of each other. This process, called multi-die face-down packaging, or xFD for short, massively increases memory density, reduces power consumption, and should pave the way for faster and more efficient memory chips.

    via Single-chip DIMM offers low-power replacement for sticks of RAM | ExtremeTech.

    Who says there’s no such thing as progress? Apart from DDR memory bus data rates moving from DDR3 to DDR4 soon, what have you read that was significantly different, much less better, than the first-generation DDR DIMMs from years ago? Chip stacking is de rigueur for manufacturers of flash memory, especially in mobile devices with limited real estate on the motherboard. That packaging has flowed back into the computer market very handily and has led to smaller form factors across flash memory devices: whether thumb drives, aftermarket 2.5″ laptop solid state disks, or flash embedded on an mSATA module, everyone benefits equally.

    Whither the stacking of RAM modules? I know there have been some efforts to do this, again for the mobile device market, but any large-scale flow back into the general computing market has been hard to see. I’m hoping this Invensas announcement eventually becomes a real shipping product and not an attempt to stake a claim on intellectual property that will take the form of lawsuits against current memory designers and manufacturers. Stacking is the way to go; even if it can never be used in, say, a CPU, I would think the clock speeds and power-saving requirements on RAM modules might be modest enough to allow some stacking to occur. And if memory access speeds improve at the same time, so much the better.

  • OCZ Launches PCIe-Based HDD/SDD Hybrid Drive

    By bypassing the SATA bottleneck, OCZ’s RevoDrive Hybrid promises transfer speeds up to 910 MB/s and up to 120,000 IOPS 4K random write. The SSD aspect reportedly uses a SandForce SF-2281 controller and the hard drive platters spin at 5,400rpm. On a whole, the hybrid drive makes good use of the company’s proprietary Virtualized Controller Architecture.

    via OCZ Launches PCIe-Based HDD/SDD Hybrid Drive.

    [Image from Tom's Hardware: RevoDrive Hybrid PCIe]

    Good news on the consumer electronics front: OCZ continues to innovate in the desktop aftermarket, introducing a new PCIe flash product that marries a nice 1TByte hard drive to a 100GB flash-based SSD, the best of both worlds in one neat little package. Previously you might buy those two devices separately, one average-sized flash drive and one spacious hard drive, configure the flash drive as your system boot drive, and then use some kind of alias/shortcut trick to put your user folder (videos, pictures, etc.) on the hard drive. That approach has caused some very conservative types to sit out and wait for even bigger flash drives, hoping to store everything on one logical volume. What they really want is a hybrid of big storage and fast speed, and according to the press release that is what the OCZ hybrid drive delivers.

    With a SandForce drive controller and two drives, the whole architecture is hidden away, along with the caching algorithm that moves files between the flash and hard-drive storage areas. End users see just one big hard drive (albeit installed in one of their PCIe card slots) but experience faster boot times and faster application loading. I’m seriously considering adding one of these devices to a home computer we have, migrating the boot drive and user home directories over to it, and using the current hard drives as the Windows backup device. I think that would be a pretty robust setup and could accommodate a lot of future growth and expansion.
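
    Just to illustrate the general idea of such a caching policy, here is a minimal hot/cold tiering sketch of my own invention; it is not OCZ’s Virtualized Controller Architecture or its actual algorithm, and the capacity figure and file paths are made up:

        # Minimal sketch of a hot/cold file-tiering policy: a small, fast flash tier
        # and a large, slow hard-drive tier. Purely illustrative -- NOT the algorithm
        # OCZ ships in the RevoDrive Hybrid.
        from collections import Counter

        class HybridVolume:
            def __init__(self, flash_capacity_gb=100):
                self.flash_capacity_gb = flash_capacity_gb
                self.hits = Counter()   # access count per file
                self.sizes = {}         # file path -> size in GB
                self.on_flash = set()   # files currently promoted to the flash tier

            def record_access(self, path, size_gb):
                """Count an access, then re-promote the hottest files to flash."""
                self.hits[path] += 1
                self.sizes[path] = size_gb
                self._rebalance()

            def _rebalance(self):
                """Greedily fill the flash tier with the most frequently used files."""
                self.on_flash.clear()
                used = 0.0
                for path, _count in self.hits.most_common():
                    if used + self.sizes[path] <= self.flash_capacity_gb:
                        self.on_flash.add(path)
                        used += self.sizes[path]

            def tier_of(self, path):
                return "flash" if path in self.on_flash else "hard drive"

        vol = HybridVolume()
        for _ in range(20):                                   # boot files get hit constantly
            vol.record_access("C:/Windows/System32/ntoskrnl.exe", 0.01)
        vol.record_access("D:/Videos/vacation.m2ts", 25.0)    # a big video, touched once
        print(vol.tier_of("C:/Windows/System32/ntoskrnl.exe"))  # flash
        print(vol.tier_of("D:/Videos/vacation.m2ts"))           # flash (it still fits)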

  • Augmented Reality Maps and Directions Coming to iPhone

    [Image via Wikipedia: iOS logo]

    Of course, there are already turn-by-turn GPS apps for iOS, Android and other operating systems, but having an augmented reality-based navigational system that’s native to the phone is pretty unique.

    via Augmented Reality Maps and Directions Coming to iPhone.

    In the deadly navigation battle between Google Android and Apple iOS, a new front is forming: augmented reality. Apple has also shown that it’s driven to create its own counterpart to the Google Maps app for iOS, in an attempt to maintain its independence from the Googleplex by all means possible. Though Apple may be re-inventing the wheel (of network-delivered maps), you may be pleasantly surprised by what other bells and whistles get thrown in as well.

    Enter the value-added feature of augmented reality. Apple is now filing patents on AR related to handheld-device navigation, and maybe this time ’round the augmented reality features will be a little more useful than marked-up geo-locations. To date Google Maps hasn’t quite approached this level of functionality, but Google does have the most valuable dataset (Street View), one that would allow it to add an augmented reality component as well. The question is who will get to market first with the most functional and useful version of augmented reality maps.

  • ARM vet: The CPUs future is threatened • The Register

    [Image via Wikipedia: 8-inch silicon wafer with multiple Intel Pentium…]

    Harkening back to when he joined ARM, Segars said: “2G, back in the early 90s, was a hard problem. It was solved with a general-purpose processor, DSP, and a bit of control logic, but essentially it was a programmable thing. It was hard then – but by today’s standards that was a complete walk in the park.”

    He wasn’t merely indulging in “Hey you kids, get off my lawn!” old-guy nostalgia. He had a point to make about increasing silicon complexity – and he had figures to back it up: “A 4G modem,” he said, “which is going to deliver about 100X the bandwidth … is going to be about 500 times more complex than a 2G solution.”

    via ARM vet: The CPUs future is threatened • The Register.

    A very interesting look at the state of the art in microprocessor manufacturing: The Register talks with one of the principals at ARM, the company that licenses its processor designs to almost every cell phone manufacturer worldwide. Looking at manufacturing trends, Simon Segars predicts that sustained performance gains will be harder to come by in the near future. Most advancement, he feels, will come from integrating more kinds of processing and coordinating the I/O between those processors on the same die, which is roughly what Intel is attempting by integrating graphics cores, memory controllers and CPU all on one slice of silicon. But the software integration is the trickiest part, and Intel still sees fit to just add more general-purpose CPU cores to keep new sales coming. Processor clocks have stayed pretty rigidly near the 3GHz boundary and have not shifted significantly since the end of the Pentium 4 era.

    Note, too, the difficulty of scaling up manufacturing as well as designing the next generation of chips. Referring back to my article from Dec. 21, 2010 on 450mm wafers (commentary on an Electronista article), Intel is the only company rich enough to scale up to the next size of wafer. Every step in the manufacturing process has become so specialized that the motivation to build new manufacturing and test equipment just isn’t there, because the total number of manufacturers who could move to the next largest silicon wafer is probably four companies worldwide. That’s a measure of how exorbitantly expensive large-scale chip manufacturing has become. More and more, a plateau is being reached in both clock speeds and the size of wafers coming out of the fabs. With these limits, Simon Segars’ thesis becomes even stronger.

  • David May, parallel processing pioneer • reghardware

    [Image via Wikipedia: INMOS T800 Transputer]

    The key idea was to create a component that could be scaled from use as a single embedded chip in dedicated devices like a TV set-top box, all the way up to a vast supercomputer built from a huge array of interconnected Transputers.

    Connect them up and you had, what was, for its era, a hugely powerful system, able to render Mandelbrot Set images and even do ray tracing in real time – a complex computing task only now coming into the reach of the latest GPUs, but solved by British boffins 30-odd years ago.

    via David May, parallel processing pioneer • reghardware.

    I remember the Transputer. I remember seeing ISA-based add-on cards for desktop computers back in the early 1980s. They would advertise in the back of the popular computer technology magazines of the day. And while it seemed really mysterious what you could do with a Transputer, the price premium to buy those boards made you realize it must have been pretty magical.

    More recently, while attending a workshop on open source software, I met a couple of former employees of a famous manufacturer of camera film. In their research labs these guys used to build custom machines using arrays of Transputers to speed up image-processing tasks inside the products they were developing. Knowing that there are now even denser architectures built from chips like Tilera, Intel Atom and ARM parts absolutely blows them away; the price/performance ratio of those old arrays doesn’t come close.

    Software was probably the biggest point of friction, in that the tools to integrate the Transputer into an overall design required another level of expertise. That is true too of the general-purpose graphics processing unit (GPGPU) that nVidia championed and now markets with its Tesla product line, and the Chinese have created a hybrid supercomputer mating Tesla boards with commodity CPUs. It’s too bad the economics of designing and producing the Transputer didn’t scale over time (the way they did for Intel, by comparison). Clock speeds fell behind too, which let general-purpose microprocessors spend their extra cycles performing the same calculations, only faster. That was also the advantage RISC chips held until they could no longer overcome the performance increases Intel designed in.

  • From Big Data to NoSQL: Part 3 (ReadWriteWeb.com)

    In Part One we covered data, big data, databases, relational databases and other foundational issues. In Part Two we talked about data warehouses, ACID compliance, distributed databases and more. Now we’ll cover non-relational databases, NoSQL and related concepts.

    via From Big Data to NoSQL: The ReadWriteWeb Guide to Data Terminology Part 3.

    I give a lot of credit to ReadWriteWeb for packaging up this three-part series (started May 24th, I think). It at least narrows down what is meant by all the fast-and-loose terms that white papers and admen throw around to get people to consider their products in RFPs. Just know this, though: in many cases the NoSQL databases that keep coming onto the market tend to be one-off solutions created by big social networking companies that couldn’t get MySQL/Oracle/MSQL to scale sufficiently in size or speed during their early build-outs. Just think of Facebook hitting the 500 million user mark and you will know there’s got to be a better way than relational algebra and tables with columns and rows.

    In Part 3 we finally get to what we have all been waiting for: non-relational databases, so-called NoSQL. Google’s MapReduce technology is quickly presented as one of the most widely known examples of a NoSQL-style distributed approach, one that doesn’t adhere to absolute or immediate consistency but gets there with ‘eventual consistency’ (consistency being the big C in the acronym ACID). The coolest thing about MapReduce is the similarity (at least in my mind) it bears to the SETI@home project, where ‘work units’ were split out of large data tapes, distributed piecemeal over the Internet, and analyzed on people’s desktop computers. The completed units were then gathered up and brought together into a final result. That is similar to how Google splits up its big-data analysis to get work done in its data centers, and it carries on in Hadoop, an open-source implementation of MapReduce fostered at Yahoo and now part of the Apache Software Foundation.
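
    As a toy illustration of that split-the-work-then-gather-the-results idea, here is a minimal, single-process word-count sketch in the MapReduce style. It is not Google’s or Hadoop’s implementation, and in a real system the map calls would run on separate machines:

        # Toy word count in the MapReduce style: map each chunk of input independently
        # (the "work units"), then reduce the partial results into one final answer.
        # Single process only; real MapReduce/Hadoop spreads the map calls across machines.
        from collections import Counter
        from functools import reduce

        chunks = [
            "big data needs new databases",
            "nosql databases trade consistency for scale",
            "eventual consistency is good enough for big data",
        ]

        def map_phase(chunk):
            """Emit a partial word count for one chunk of input."""
            return Counter(chunk.split())

        def reduce_phase(left, right):
            """Merge two partial counts into one."""
            return left + right

        partials = [map_phase(c) for c in chunks]             # could run anywhere, in parallel
        totals = reduce(reduce_phase, partials, Counter())    # gather the pieces back together
        print(totals.most_common(3))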

    Document databases are cool too, and very much like an object-oriented database where you have a core item with attributes appended. I think also of LDAP directories, which have similarities to object-oriented databases: a person has a ‘Common Name’ or CN attribute, which is as close to a unique identifier as you can get, with all the other attributes strung along, appended on the end as they need to be added, in no particular order. The ability to add attributes as needed is like ‘tagging’ the way social networking, photo and bookmarking websites do it: you just add an arbitrary tag to help search engines index the site and help relevant web searches find your content.
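
    A rough sketch of what that looks like in practice: schema-less records whose attributes and tags are bolted on as needed (the field names and records below are made up purely for illustration):

        # Schema-less "documents": each record is just a bag of attributes, and new
        # attributes or tags can be appended at any time without altering a table schema.
        people = [
            {"cn": "Jane Doe", "mail": "jane@example.com", "tags": ["photography", "databases"]},
            {"cn": "John Roe", "title": "Engineer"},   # no mail, no tags -- and that's fine
        ]

        def find_by_tag(docs, tag):
            """Tags work like the ad-hoc labels photo and bookmark sites use."""
            return [doc["cn"] for doc in docs if tag in doc.get("tags", [])]

        people[1]["tags"] = ["databases"]        # bolt on an attribute later, no migration needed
        print(find_by_tag(people, "databases"))  # ['Jane Doe', 'John Roe']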

    The relationship between graph databases and mind-mapping is also very interesting. There’s a good graphic illustrating a graph database of blog content, showing how the relation lines are drawn and labeled. Having used mind-mapping products before, I now have a much better understanding of graph databases. Nice parallel there, I think.
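
    Something like that blog-content graphic can be modeled with nothing more than nodes and labeled edges. This is a minimal sketch of the concept, not any particular graph database’s API:

        # Minimal labeled graph of blog content: (source, label, target) edges,
        # the same shapes a mind map draws as boxes joined by labeled relation lines.
        edges = [
            ("author:alice", "WROTE",      "post:42"),
            ("post:42",      "TAGGED_AS",  "tag:nosql"),
            ("comment:7",    "REPLIES_TO", "post:42"),
        ]

        def neighbors(node, label=None):
            """Follow outgoing edges from a node, optionally filtered by relation label."""
            return [dst for src, lbl, dst in edges if src == node and label in (None, lbl)]

        print(neighbors("author:alice", "WROTE"))  # ['post:42']
        print(neighbors("post:42"))                # ['tag:nosql']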

    At the very end of the article there’s a mention of NewSQL, of which Drizzle is an interesting offshoot. Looking it up, I found it interesting as a fork of the MySQL project. Specifically, Drizzle factors out tons of functions that some folks absolutely need but many don’t (like, say, 32-bit legacy support). There has been a real effort to shrink the codebase, so the overall line count went from over 1 million for MySQL to just under 300,000 for the Drizzle project. Speed and simplicity are the order of the day with Drizzle; add a missing function by simply adding its plug-in to the main app, and you get back some of the MySQL features that might otherwise be missing.

    *Note: Older survey of the NoSQL field conducted by ReadWriteWeb in 2009