Category: technology

General technology, not anything in particular

  • Which way the wind blows: Flash Memory in the Data Center

    STEC ZeusIOPS solid state disk (SSD)
    This hard drive with a Fibre Channel interface launched the flash revolution in the data center

    First let’s just take a quick look backwards to see what was considered state of the art a year ago. A company called STEC was making Flash-based hard drives and selling them to big players in the enterprise storage market like IBM and NetApp. I depend solely on The Register for this information, as you can read here: STEC becalmed as Fusion-io streaks ahead

    STEC flooded the market, according to The Register, and the people using its product were suddenly left with a glut of these Fibre Channel based Flash drives (Solid State Disk drives – SSDs). The gains in storage array performance followed. However, supply exceeded demand, and EMC is stuck with a raft of last year’s product that it hasn’t marked up and re-sold to its current customers. Which created an opening for a similar but sexier product: Fusion-io and its PCIe based Flash hard drive. Why sexy?

    A Fibre Channel interface has long been the accepted performance standard for the Enterprise Storage market; you need at minimum the multi-gigabit-per-second throughput of FC to compete. But for those in the middle levels of the Enterprise who don’t own the heavy iron of giant multi-terabyte storage arrays, there is now an entry point through the magic of the PCIe 2.0 interface. Any given PC, whether a server or not, will have open PCIe slots in which a

    Fusion-io Duo PCIe Flash cache card
    This is Fusion-io's entry into the Flash cache competition

    Fusion-io SSD card could be installed. That lower threshold (though not necessarily a lower price) has made Fusion-io the new darling for anyone wanting to add SSD throughput to their servers and storage systems. And now everyone wants Fusion-io, not the re-branded STEC Fibre Channel SSDs everyone was buying a year ago.
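
    For a rough sense of why the PCIe route is attractive, here is a back-of-envelope comparison. This is only a sketch; the nominal line rates and encoding overheads are my assumptions about era-typical hardware, not figures from The Register.

        # Nominal, single-direction bandwidth ceilings (assumed figures).
        # Both links use 8b/10b encoding, so usable bytes ~= line rate / 10.
        FC_8G_MBPS = 8_000 / 10        # 8Gb Fibre Channel: ~800 MB/s usable
        PCIE2_LANE_MBPS = 5_000 / 10   # PCIe 2.0 lane at 5 GT/s: ~500 MB/s

        for lanes in (4, 8):
            print(f"PCIe 2.0 x{lanes}: ~{PCIE2_LANE_MBPS * lanes:,.0f} MB/s "
                  f"vs 8Gb FC: ~{FC_8G_MBPS:,.0f} MB/s")

    Even a modest x4 slot has more raw headroom than a single Fibre Channel link, which is part of the appeal for servers without FC plumbing.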

    Anyone who has studied history knows in the chain of human relations there’s always another competitor out there that wants to sit on your head. Enter LSI and Seagate with a new product for the wealthy, well-heeled purchasing agent at your local data center: LSI and Seagate take on Fusion-io with flash

    Rather than create a better/smarter Fibre Channel SSD, LSI and Seagate are assembling a card that plugs into a PCIe slot of a storage array or server to act as a high speed cache in front of the slower spinning disks. The Register refers to three form factors in the market now: RamSan, STEC, and Fusion-io. Because Fusion-io seems to have moved into the market at the right time and is selling like hot cakes, LSI/Seagate are targeting that particular form factor with their SSS6200.

    LSI's PCIe Flash hard drive card
    This is LSI's entry into the Flash hard drive market

    STEC is also going to create a product with a PCIe interface, and Micron is going to design one too. LSI’s product will not ship until the end of the year. In terms of performance, the targeted speeds of the Fusion-io Duo and the LSI SSS6200 are comparable (both use single level cell memory). So let the price war begin! Once we finally get some competition in the market, I hope the entry level price of Fusion-io (~$35,000) finally erodes a bit. It is a premium product right now, intended to help some folks do some heavy lifting.

    My hope for the future is that we see something comparable (though much less expensive and scaled down) on desktop machines. I don’t care if it’s built into a spinning SATA hard drive (say, as a very large high speed cache) or some kind of card plugging into a bus on the motherboard (like Intel’s failed Turbo Memory cache). If a high speed flash cache became part of the standard desktop PC architecture, sitting in front of today’s monstrous single hard drives (2TB or higher), we might get faster response from our OS of choice, and possibly better optimization of reads/writes to these fast but incredibly dense and possibly more error prone HDDs. I say this after reading about the big push by Western Digital to move from small data blocks to the 4K block.
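
    To make the idea concrete, here is a minimal sketch of the read-cache behavior I have in mind: flash absorbs the hot reads, and the spinning disk is only touched on a miss. The class and names are hypothetical, purely for illustration.

        from collections import OrderedDict

        class FlashReadCache:
            """Toy model of a flash cache in front of a slow HDD.

            Reads are served from flash when possible and fall through
            to the spinning disk only on a miss. LRU eviction keeps the
            cache bounded. An illustration, not a real driver.
            """
            def __init__(self, backing_store, capacity_blocks=1024):
                self.backing = backing_store        # block number -> bytes
                self.capacity = capacity_blocks
                self.cache = OrderedDict()          # flash contents

            def read(self, block):
                if block in self.cache:
                    self.cache.move_to_end(block)   # mark recently used
                    return self.cache[block]        # fast path: flash
                data = self.backing[block]          # slow path: HDD seek
                self.cache[block] = data
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)  # evict least recently used
                return data

        hdd = {n: f"block-{n}".encode() for n in range(4096)}
        cache = FlashReadCache(hdd, capacity_blocks=256)
        cache.read(42)   # first read goes to the disk
        cache.read(42)   # repeat read is served from the cache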

    Much wailing and gnashing of teeth has accompanied the recent move by WD to address the overhead of error correction – the Cyclic Redundancy Check (CRC) and ECC data – on hard drives. Because 2-terabyte drives have so many 512-byte sectors, more and more time and space is taken up by error correction as data is read from and written to the drive. A larger block of 4,096 bytes instead of 512 makes the whole scheme roughly four times less wasteful, and possibly more reliable, even if some space is wasted on small text files or web pages. I understand the implications, and even more so, old-timers like Steve Gibson at GRC.com understand the danger of ever larger single hard drives. The potential for catastrophic loss of data as more and more data blocks need to be audited can become numerically overwhelming even for the fastest CPU and SATA bus. I think I remember Steve Gibson expressing doubts as to how large hard drives could theoretically become.
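
    A rough worked example of where the savings come from. The per-sector byte counts here are assumptions in the ballpark of published Advanced Format material, not WD’s exact numbers.

        # Compare on-platter overhead for the same 4KB of user data.
        GAP_SYNC_MARK = 15   # assumed gap/sync/address-mark bytes per sector
        ECC_512 = 50         # assumed ECC bytes guarding a 512-byte sector
        ECC_4K = 100         # assumed ECC bytes guarding a 4,096-byte sector

        legacy = 8 * (512 + GAP_SYNC_MARK + ECC_512)   # eight small sectors
        advanced = 4096 + GAP_SYNC_MARK + ECC_4K       # one 4K sector

        print(f"legacy format:   {legacy} platter bytes per 4KB of data")
        print(f"advanced format: {advanced} platter bytes per 4KB of data")
        print(f"bookkeeping: {legacy - 4096} vs {advanced - 4096} bytes, "
              f"{(legacy - 4096) / (advanced - 4096):.1f}x less")

    With these assumed figures, the bookkeeping drops from 520 bytes to 115 per 4KB of data, which is where the “roughly four times less wasteful” intuition comes from.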

    Steve Gibson's SpinRite 6
    Steve Gibson's data recovery product SpinRite

    As the creator of the SpinRite data recovery utility he knows fundamentally the limits of the Parallel ATA interface design. Despite advances in speed, error correction hasn’t changed, and neither has the quality of the magnetic medium on the spinning disks. One thing that has changed is the physical size of the blocks of data: they have gotten dramatically smaller with each jump in disk capacity. The smaller the block of data, the more error correcting must be done; and the more error correcting, the more space goes to writing the error-correcting information. Gibson himself observes that something as random as a cosmic ray can flip bits within a block of data at the incredibly small scales found on a 2TByte disk.
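
    For illustration, a checksum really does catch the single flipped bit Gibson worries about. This tiny sketch uses Python’s zlib.crc32 as a stand-in; real drives use on-platter ECC that can also correct errors, so treat this as an analogy for the detection side only.

        import zlib

        sector = bytearray(b"x" * 512)   # pretend 512-byte sector
        stored = zlib.crc32(sector)      # checksum kept alongside the data

        sector[100] ^= 0x01              # a cosmic ray flips one bit
        recomputed = zlib.crc32(sector)

        print(f"stored {stored:#010x}, recomputed {recomputed:#010x}: "
              f"{'corruption detected' if stored != recomputed else 'missed'}")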

    So my hope for the future is a fresh look at the current state of the art in motherboard, chipset, and I/O bus architecture. Let’s find a middle level, a safe area to store the data we’re working on, one that doesn’t spontaneously degrade and isn’t too susceptible to random errors (i.e. cosmic rays). Let the Flash caches flow, let’s get better throughput, and let’s put disks into the class of reliable but slower backing stores for our SSDs.

  • Apple A4 processor really stripped-down Cortex A8? | Electronista

    The custom A4 processor in the iPad is in reality a castrated Cortex A8 ARM design, say several sources.

    via Apple A4 processor really stripped-down Cortex A8? | Electronista.

    This is truly interesting, and it shows a real attempt to optimize the chip around ‘known’ working designs. Covering the first announcement of the A4 chip by Bright Side of News, I tried to argue that customizing a chip by licensing a core design from ARM Holdings Inc. isn’t all that custom. Following this, Ashlee Vance wrote in the NYTimes that the cost of development for the A4 ‘could be’ upwards of $1Billion. And now just today MacNN/Electronista is saying Apple used the ARM Cortex A8, a licensed core already in use in the Apple iPhone 3GS. It is a proven, known CPU core that engineers at Apple are familiar with. Given that level of familiarity, it’s a much smaller step to optimize the same core for speed and integrate it with other functions; for instance, the GPU and memory controllers can be tightly bound into the final chip. Add a dose of power management and you get good performance and good battery life. It’s not cutting edge to be sure, but it is more guaranteed to work right out of the gate. That’s a bloodthirsty step in the right direction of market domination.

    However, the market hasn’t yet shown itself to be so large and self-sustaining that slate devices are a sure thing in the casual/auxiliary/secondary computing device market. You may have an iPhone and you may have a laptop, but this device is going to be purchased IN ADDITION to, not INSTEAD OF, those two existing devices. So anyone who can afford a third device is the likely target market for the iPad, as opposed to people who want to substitute an iPad for either the iPhone or the laptop.

  • AppleInsider | Custom Apple A4 iPad chip estimated to be $1 billion investment

    In bypassing a traditional chip maker like Intel and creating its own custom ARM-based processor for the iPad, Apple has likely incurred an investment of about $1 billion, a new report suggests.

    via AppleInsider | Custom Apple A4 iPad chip estimated to be $1 billion investment.

    After reading the NYTimes article linked to within this article I can only conclude it’s a very generalized statement that it costs $1Billion to create a custom chip. The exact quote from the NYTimes article author Ashlee Vance is: “Even without the direct investment of a factory, it can cost these companies about $1 billion to create a smartphone chip from scratch.”

    Given that is one third the full price of building a chip fabrication plant, why so expensive? What is the breakdown of those costs? Apple did invest money in PA Semiconductor to get some chip building expertise (they primarily designed chips that were fabricated at overseas contract manufacturing plants). Given that Qualcomm created the Snapdragon CPU using similar CPU cores from ARM Holdings Inc., must they have $1Billion to throw around too? Qualcomm was once dominant in the cell phone market, licensing its CDMA technology to the likes of Verizon, but its financial success is nothing like the old days. So how does Qualcomm come up with $1Billion to develop the Snapdragon CPU for smartphones? Does that seem possible?

    Qualcomm and Apple are licensing the biggest building blocks and core intellectual property from ARM; all they need to do is place, route, and verify the design. Where does the $1Billion figure come into it? Is it the engineers? Is it the masks for exposing the silicon wafers? I argue now, as I did in my first posting about the Apple A4 chip, that the chip is an adaptation of intellectual property, a license to a CPU design provided by ARM. It’s not literally created from ‘scratch’, starting with no base design or using completely new proprietary intellectual property from Apple. This is why I am confused. Maybe ‘from scratch’ means different things to different people.

  • Next Flash Version Will Support Private Browsing

    Slashdot Your Rights Online Story | Next Flash Version Will Support Private Browsing.

    I’m beginning to think Adobe should just make Flash into a web browser that plays back its own movie format. That would end all the debates over open standards and so forth, and provide better support/integration. There is nothing wrong with a fragmented browser market; it’s what we already have right now.

    If you have ever heard someone say that Adobe Flash is buggy and crashes a lot, trust their judgment. It’s not the worst thing ever invented, but it certainly could be better. Given Adobe’s monopoly on web delivered video (i.e. YouTube), one would think they could maintain their competitive advantage by creating a better user experience (the way Apple did entering the smart phone market). Instead they have tried to innovate their way to competitiveness, and Flash has bloated up to accommodate all kinds of ActionScript and interactivity that used to exist only in desktop applications. So why should Adobe settle for being just a tool maker and browser plug-in? I say show everyone what the web browser should be, and compete.

  • Google Chrome bookmark sync

    I used to do this with a plug-in called Google Browser Sync on Mozilla back in the day. Since then, there has been a Firefox plug-in for Delicious that would keep things synced up with that bookmark sharing site. But that’s not really what I wanted. I wanted Google Browser Sync, and now I finally have it again, cross platform.

    At long last, the Mac and PC versions of the Google Chrome web browser can save bookmarks to Google Docs and sync all changes/additions/deletions against that single central file. I’m so happy that I went through and did a huge house cleaning of all my accumulated bookmarks. Soon I will follow up to find out which ones are dead and get everything ship-shape once again. It’s sad when a utility like Browser Sync is taken away; I assume the decision was based on arbitrary measures of popularity and success. Google’s stepping back and shuttering Browser Sync gave some developers a competitive edge for a while, but I wanted Browser Sync no matter who did the final software development. And now I finally have it again.

    Why is bookmark syncing useful? The time I’ve spent finding good sources of info on the web is wasted if all I ever do is Google searches. The worst part is that every Google search is an opportunity for Google to serve me AdWords related to my search terms. What I really want is the website with that particularly interesting article or photo gallery. Keeping bookmarks pointing directly to those websites bypasses Google as the middleman. Better yet, I have a link I can share with friends who need a well vetted, curated source of info. This is how it should be, and luckily, with Chrome, I now have it.

  • Apple A4 SOC unveiled – It’s an ARM CPU and the GPU! – Bright Side Of News*

    Getting back to Apple A4, Steve Jobs incorrectly addressed Apple A4 as a CPU. We’re not sure was this to keep the mainstream press enthused, but A4 is not a CPU. Or we should say, it’s not just a CPU. Nor did PA Semi/Apple had anything to do with the creation of the CPU component.

    via Apple A4 SOC unveiled – It’s an ARM CPU and the GPU! – Bright Side Of News*.

    Apple's press release image of the A4 SoC

    Interesting info on the Apple A4 System on Chip being used by the recently announced iPad tablet computer. The world of mobile, low power processors is dominated by the designs of ARM Holdings Inc., and according to this article ARM is providing the graphics processor intellectual property too. So in the commodity CPU/GPU and System on Chip (SoC) market, ARM is the only way to go. You buy the license, lay out the chip with all the core components you’ve licensed, and shop the design around to a chip foundry. Samsung has a lot of expertise fabricating these made-to-order chips using the ARM designs, but apparently another competitor, GlobalFoundries, is shrinking its design rules (meaning lower power and higher clock speeds) and may become the foundry of choice.

    Unfortunately, outfits like iFixit can only figure out which chips and components go into an electronics device. They cannot reverse engineer the components inside the A4, and anyone who did spill the beans on the A4’s exact layout and components would probably be sued by Apple. But because everyone is working from the same set of Lego blocks for the CPUs and GPUs, and forming them into full Systems on a Chip, some similarities are bound to occur.

    The heart of the new Apple A4 System on Chip

    One thing pointed out in this article is the broad adoption of the same clock speed for all these ARM-derived SoCs: 1GHz across the board, despite differences in manufacturers and devices. The reason is that everyone is using the same ARM CPU cores, and they are designed to run optimally at a 1GHz clock rate. So the more things change (faster and faster time to market for more earth shaking designs), the more they stay the same (everyone adopts commodity CPU designs and converges on similar performance). It will take a big investment for Apple and PA Semiconductor to truly differentiate themselves with a unique, proprietary CPU of any type. They just don’t have the time, though they may have the money. So when Jobs tells you something is exclusive to Apple, that may be true of the industrial design. But for the CPU/GPU/SoC… don’t believe the hype surrounding the Apple A4.

    Also check out AppleInsider’s coverage of this same topic.

    Update: http://www.nytimes.com/2010/02/02/technology/business-computing/02chip.html

    The NYTimes weighs in on the Apple A4 chip and what it means for the iPad maintaining its competitive advantage. The NYTimes gives Samsung more credit than Apple because Samsung manufactures the chip. What they will not speculate on is ARM Holdings Inc.’s sale of a Cortex-A9 license to Apple. They do hint that the nVidia Tegra CPU is going to compete directly against Apple’s iPad and its A4. However, as Steve Jobs has pointed out more than once, “Great Products Ship.” Anyone else in the market who has licensed the Cortex-A9 from ARM had better get going: you’ve got 60 or 90 days, depending on your sales/marketing projections, to compete directly with the iPad.

  • Some people are finding Google Wave useful

    Posterous Logo

    I use google wave every single day. I start off the day by checking gmail. Then I look at a few news sites to see if anything of interest happened. Then I open google wave: because thats where my business lives. Thats how I run a complicated network of collaborators, make hundreds of decisions every day and organise the various sites that made me $14.000 in december.
    On how Google Wave surprisingly changed my life – This is so Meta.

    I’m glad some people are making use of Google Wave. After the first big spurt of interest and invite-sending, people’s interest tapered off quickly. I would log in and see no activity whatsoever. No one was coming back to see what people had posted. So, like everyone else, I stopped coming back too.

    Compare this to the Facebook ebb and flow. I notice NYTimes.com occasionally slagging Facebook with an editorial in their Tech News section. Usually the slagging is conducted by someone I would classify as a pseudo technology enthusiast (the kind that doesn’t back up their files, then subsequently writes an article complaining about it). Between iPhone upgrades and write-ups of the latest free web service, they occasionally rip Facebook in order to get some controversy going.

    But as I’ve seen, Facebook has a rhythm of lulls in participation followed by periods of intense participation. Sometimes it’s lonely; people don’t post or read for months and months. It makes me wonder what interrupts their lives long enough that they stop reading or writing posts. I would assume Google Wave suffers the same kind of ebb and flow, even when used for ‘business’ purposes.

    So the question is: does anyone besides this lone individual on Posterous use Google Wave on a daily basis for work purposes?

    Google Wave logo

  • Intel linked with HPC boost buy • The Register

    Comment With Intel sending its “Larrabee” graphics co-processor out to pasture late last year – before it even reached the market – it is natural to assume that the chip maker is looking for something to boost the performance of high performance compute clusters and the supercomputer workloads they run. Nvidia has its Tesla co-processors and its CUDA environment. Advanced Micro Devices has its FireStream co-processors and the OpenCL environment it has helped create. And Intel has been relegated to a secondary role.

    via Intel linked with HPC boost buy • The Register.

    Intel’s long term graphics accelerator project was code-named “Larrabee.” An unfortunate side effect of losing all that money to the project’s delays is that Intel is now forced to reuse the processor as a component in High Performance Computing (so-called Super Computers). The competition has been providing hooks into their CPUs and motherboards for auxiliary processors, or co-processors, for a number of years. AMD notably created a CPU socket with open specs that FPGAs could slide into. Field Programmable Gate Arrays are big chips full of general purpose logic with all kinds of ways to reconfigure the circuits inside them, so huge optimizations can be made in hardware that were previously done in Machine Code/Assembler by the compilers for a particular CPU. Moving from a high level programming language to an optimized hardware implementation of an algorithm can speed a calculation up by several orders of magnitude (1,000 times in some examples), as sketched below.

    AMD has had a number of wins in some small niches of the High Performance Computing market. But not all algorithms are created equal, and not all of them lend themselves to implementation in hardware (the FPGA or its cousin, the ASIC). So co-processors are a very limited market for any manufacturer trying to sell into HPC. Intel isn’t going to garner a lot of extra sales by throwing development versions of Larrabee out to HPC developers. Another strike is the dependence on the PCI Express bus for communication with the Larrabee chipset. While PCI Express is more than fast enough for graphics processing, an HPC setup would prefer a CPU socket adjacent to the general purpose CPUs; the way AMD designs their motherboards, all sockets sit on the same board and communicate directly with one another instead of over the PCI Express bus. Thus, Intel loses again trying to market Larrabee to HPC. One can only hope that other secret code-named projects, like the CPU with 80 cores, see the light of day while they can still make a difference, rather than suffer the opportunity costs of a very delayed launch like Larrabee’s.
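
    As a software-only analogy for that speedup claim: the same arithmetic, expressed closer to the machine, runs far faster. Here NumPy stands in for the optimized implementation (my choice of example; a real FPGA or ASIC pushes this much further than any library can).

        import time
        import numpy as np

        n = 5_000_000
        data = np.arange(n, dtype=np.float64)

        t0 = time.perf_counter()
        slow = sum(x * x for x in data)   # interpreted, one element at a time
        t1 = time.perf_counter()
        fast = np.dot(data, data)         # vectorized, optimized machine code
        t2 = time.perf_counter()

        print(f"speedup: {(t1 - t0) / (t2 - t1):.0f}x, "
              f"results match: {np.isclose(slow, fast)}")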

  • Buzz Bombs in the News – Or the Wheel Reinvented

    Slashdot just posted this article for all to read on the Interwebs

    penguinrecorder writes: “The Thunder Generator uses a mixture of liquefied petroleum, cooking gas, and air to create explosions, which in turn generate shock waves capable of stunning people from 30 to 100 meters away. At that range, the weapon is relatively harmless, making people run in panic when they feel the sonic blast hitting their bodies. However, at less than ten meters, the Thunder Generator is capable of causing permanent damage or killing people.”

    I went directly to the article itself and read it. It was very straightforward, more or less indicating that this new shockwave gun is an adaptation of the propane powered “scare crows” used to shift birds off farm fields in Israel.

    http://www.defensenews.com/story.php?i=4447499&c=FEA&s=TEC

    TEL AVIV – An Israeli-developed shock wave cannon used by farmers to scare away crop-threatening birds could soon be available to police and homeland security forces around the world for nonlethal crowd control and perimeter defense.

    I think Mark Pauline and Survival Research Labs beat the Israelis to the punch in inventing the so-called cannon:

    http://srl.org.nyud.net:8090/srlvideos/machinetests/bigpulsejetQT300.mov

    Prior to Mark Pauline and Survival Research Labs, the German military in WW2 adapted the pulse jet for the V-1 buzz bomb. In short, a German terror weapon has indirectly become the product of an Israeli defense contractor. Irony explodes. The V-1 buzz bomb was itself influenced by the French inventor Georges Marconnet. Everything old is new again in the war on terror. Some good ideas never die; they just get re-invented, like the wheel.

  • 64GBytes is the new normal (game change on the way)

    Panasonic SDXC flash memory card
    Flash memory chips are getting smaller and denser

    I remember reading announcements of the 64GB SDXC card format coming online from Toshiba. And just today Samsung has announced it’s making a single chip 64GB flash memory module with a built-in memory controller. Apple’s iPhone design group has been a big fan of the single chip, large footprint flash memory from Toshiba; they bought up all of Toshiba’s supply of 32GB modules before releasing the iPhone 3GS last Summer. Samsung too was providing 32GB modules to Apple prior to the launch. Each Summer, newer, bigger modules make for insanely great things that the iPhone can do.

    Between the new flash memory recorders from Panasonic/JVC/Canon and the iPhone, what will we do with a doubling of storage every year? Surely there will be a point of diminishing returns, where the chips cannot be made any thinner or stacked any higher to make these huge single chip modules. I think back to the slow evolution and radical incrementalism of the iPod’s history: 5GBytes of storage to start, then 30GB and video! Remember that? The video iPod at 30GBytes was dumbfounding at the time. Eventually it topped out at 120 and now 160GBytes on the iPod classic.

    At the current rate of change in the flash memory market, the memory modules will double in density again by this time next year, reaching 128GBytes for a single chip module with embedded memory controller. At that density, a single SDHC sized memory card will be able to hold that amount of storage as well. We are fast approaching the optimal size for any amount of video recording we could ever want to do (and still edit) once we reach the 128GByte mark. At that size we’ll be able to record upwards of 20 hours or more of 1080p video on today’s video cameras. Who wants to edit, much less watch, 20 hours of 1080p video? But for the iPhone, things are different: more apps means more fun. With 128GB of storage you never have to delete an app, or a single song from your iTunes library, or a single picture or video; just keep everything. Similarly for those folks using GPS, you could keep all the maps you ever wanted right onboard rather than downloading them all the time, providing continuous navigation capabilities like you would get with a dedicated GPS unit. I can only imagine the functionality of the iPhone increasing as a result of the storage these 64GB flash memory modules would provide. Things can only get better. And speaking of better, The Register just reported today on some future directions.
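
    A quick sanity check on that 20-hour figure, assuming an AVCHD-class camcorder averaging about 13 Mbit/s for 1080p (the bitrate is my assumption, not a number from the announcements):

        capacity_bits = 128e9 * 8   # 128GB module, decimal gigabytes
        bitrate = 13e6              # assumed average bits per second for 1080p

        hours = capacity_bits / bitrate / 3600
        print(f"~{hours:.0f} hours of 1080p video on a 128GB module")

    That works out to roughly 22 hours, so the 20-hour ballpark holds at consumer bitrates; higher, broadcast-grade bitrates would cut it considerably.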

    There could be a die process shrink in the next generation of flash memory products, and there are also opportunities to use slightly denser memory cells in the next gen modules. The combination of the two refinements might let the research and design departments at Toshiba and Panasonic double the density of SDXC cards and flash memory modules again, to the point where we could see 128GBytes and then 256GBytes in successive revisions of the technology. So don’t be surprised if you see a flash memory module as standard equipment on every motherboard, holding the base Operating System, with the option of a hard drive for backup or some kind of slower secondary storage. I would love to see netbooks or full-sized laptops take that direction.

    http://www.electronista.com/articles/09/04/27/toshiba.32nm.flash.early/ (Toshiba) Apr 27, 2009

    http://www.electronista.com/articles/09/05/12/samsung.32gb.movinand.ship/ (Samsung) May 13, 2009

    http://www.theregister.co.uk/2010/01/14/samsung_64gbmovinand/ (Samsung) Jan 14, 2010