Blog

  • Garmin brings first Android phone to US through T-Mobile | Electronista

    As a phone, Garmin’s entry occupies the lower mid-range with a three-megapixel camera, native T-Mobile 3G and Wi-Fi. Built-in storage hasn’t been mentioned but should be enough to carry offline maps in addition to the usual app and media storage.

    via Garmin brings first Android phone to US through T-Mobile | Electronista.

    After its first attempt to create a Garmin-branded phone, the G60, Garmin is back once again with the A50, and this time it has made a much more strategic choice by adopting an open platform: Google’s Android phone OS. I wrote about Garmin’s response to the coming smartphone onslaught on its dominance of the GPS navigation market after I read this article in the NYTimes: Move Over GPS, Here Comes the Smartphone (July 8, 2009). At that time Navigon, which had been in the GPS navigation market, dropped out of hardware and moved to software-only licensing to device manufacturers. Whispers and rumors indicated TomTom was going to license its software as well, and by Fall 2009 TomTom had shipped an iPhone version of its product. It looked like the kind of paradigm shift that kills an industry overnight: GPS navigation was evolving into a software-only business, and the devices themselves were better handled by the likes of Samsung, Apple, etc. When the Garmin nuvifone finally reached the market, the only review I found was on Consumer Reports, and it was not overly positive about what the phone did differently from a standalone navigation unit. Worse yet, Garmin had spent two years developing the device only to have it hit the market trumped by the TomTom iPhone app. It was a big mistake, and one likely to make Garmin more wary of another attempt at making a device.

    Hope springs eternal, it seems, at Garmin. They have taken a different tack and are now going the open-systems route (to an extent). It turns out they don’t have to invent everything themselves: they can still manufacture devices and provide software, but they don’t have to also create the OS that lets the phone and the GPS be modularly integrated. And given that they chose Android, things can only get better. I say this in part because over time it has become obvious to me that Google is a real fan of GPS navigation, and certainly of Maps.

    When I bought my first GPS unit from Garmin, I discovered that you can save routes directly from Google Maps into a format that a Garmin GPS receiver can use. In the past, Garmin forced its users to first purchase a PC application that let you plan and plot routes and then save them back to your receiver. Later it was made less expensive, and eventually it was included with the purchase of new units. I’ve seen screenshots of this software, and it was clunky, black and white, and more like a cartography program than a route planner. Google Maps, on the other hand, was as fast and intuitive as driving your car. You click a start point and an end point, and it draws the route right on top of the satellite photos. You can zoom in and out and see, actually see, points of interest along your route. In one stroke Google Maps stole route planning away from Garmin.
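
    To make that hand-off concrete, here is a minimal sketch of the kind of route file involved, assuming the receiver accepts a standard GPX route (the common interchange format of that era). The route name, coordinates, and file name are made up purely for illustration.

    ```python
    # Minimal sketch: write a GPX route file of the sort a GPS receiver can import.
    # Route name, coordinates, and file name are invented for illustration only.
    gpx = """<?xml version="1.0" encoding="UTF-8"?>
    <gpx version="1.1" creator="example" xmlns="http://www.topografix.com/GPX/1/1">
      <rte>
        <name>Home to Office</name>
        <rtept lat="40.7128" lon="-74.0060"><name>Start</name></rtept>
        <rtept lat="40.7484" lon="-73.9857"><name>End</name></rtept>
      </rte>
    </gpx>
    """

    with open("route.gpx", "w") as f:
        f.write(gpx)   # copy the file into the receiver's GPX folder to load the route
    ```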

    In the intervening time, Google also decided to get into the smartphone business to compete with Apple. Many of Google’s web apps are accessed through iPhones, so why not tap into that user base, who might be willing to adopt a device from the same people running the datacenters and the applications hosted in them? It might not be a huge number of users, but Google has money and time, and it can continuously improve anything it does until it becomes the most competitive player in a market it has chosen to enter. Tying this all together, one can see the logical progression from Google Maps to a Google smartphone. And Google even came up with some prototypes showing what this might look like:

    Google Shrinks Another Market With Free Turn-By-Turn Navigation – O’Reilly Radar (December 7, 2009)

    Google made a video showing how Google Maps and Street View could be integrated on an Android 2.0 device. And it looked good. It was everything someone could have wanted: navigation, text-to-speech directions, the ability to zoom in and out, and jumping into Street View to get an accurate photo of the street address. There were some bits of unpolished user interface that they still needed to work on, but prototypes and demos are always rough.

    The video they posted led me to believe I would stick with my Garmin device, as it still had some logical organization that it would take Google years to finally hit upon. My verdict was to wait and see what happened next. With Garmin’s announcement today, though, things are even a little more interesting than I thought they would be. I can’t wait to see the demo of the final device when it ships. I definitely want to see how they integrate the navigation interface with the web-based Google Maps. If they’re separated as different apps, that’s okay I guess, but a mashup of Garmin navigation and Google Maps with Street View would be a killer app. Mix in a live network connection for updates on traffic, construction, and points of interest, and there’s no telling how high they will fly. Look at this video from MobileBurn.com:

    Now all I need is a robot chauffeur to drive my car for me.

  • PCIe based Flash caches

    Let me start by saying Chris Mellor of The Register has been doing a great job of keeping up with product announcements from the big vendors of server-based flash memory products. I’m not talking simply about Solid State Disks (SSDs) with flash memory modules and Serial ATA (SATA) controllers. The new enterprise-level product that supersedes the SATA SSD is a much higher-speed cache (faster than SATA) that plugs into the PCIe slots of rack-based servers. The fashion followed by many data center storage farms was to host large arrays of hot online, or warm nearly-online, spinning disks. Over time, de-duplication was added to prevent unnecessary copies and backups being made on this valuable and scarce resource. Offline storage to tape backup could be done throughout the day as a third tier of storage, with the disk arrays acting as the second tier. What was the first tier? The disks on the individual servers themselves, or the vast RAM the online transactional databases were running in. So RAM, disk, tape: the three-tier fashion came into being. But as data grows and grows, more people want some of the stuff that was being warehoused out to tape so they can run regression analysis on historical data. Everyone wants to build a model for trends they might spot in the old data. So what to do?

    So as new data comes in and old data gets analyzed, it would seem there’s a need to hold everything in memory all the time, right? Why can’t we just always have it available? Arguing against this in a corporate environment is useless. Similarly, explaining why you can’t speed up the analysis of historical data is also futile. Thank god there’s a technological solution, and that is higher throughput. Spinning disks are a hard limit in terms of Input/Output (I/O): you can only copy so many gigabits per second over the SATA interface of a spinning disk hard drive. Even if you fake it by striping the bits across adjacent hard drives using RAID techniques, you’re still limited. So flash-based SSDs have helped considerably as a tier of storage between the old disk arrays and the demands made by the corporate overseers who want to see all their data all the time. The big three disk storage array makers, IBM/Hitachi, EMC, and NetApp, are all making hybrid flash SSD and spinning disk arrays and optimizing the throughput through the software running the whole mess. Speeds have improved considerably, and more companies are doing online analysis of data that previously would have been loaded from tape for offline analysis.
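
    To put rough numbers on that limit, here is an illustrative back-of-the-envelope sketch; the per-drive and controller figures are assumptions typical of drives of this era, not benchmarks of any particular product.

    ```python
    # Illustrative only: why striping spinning disks doesn't solve the throughput problem.
    # The per-drive and controller figures are rough, era-appropriate assumptions.
    HDD_SEQ_MB_S = 100          # sustained sequential throughput of one 7200rpm drive
    HDD_RANDOM_IOPS = 180       # random 4KB operations per second for the same drive
    CONTROLLER_CAP_MB_S = 1000  # what the RAID controller and bus can move in practice

    def raid0_sequential(drives):
        """Aggregate sequential MB/s of a stripe set, capped by the controller."""
        return min(drives * HDD_SEQ_MB_S, CONTROLLER_CAP_MB_S)

    for n in (1, 2, 4, 8, 16):
        print(f"{n:2d} drives striped: ~{raid0_sequential(n):4d} MB/s sequential, "
              f"~{n * HDD_RANDOM_IOPS:5d} random IOPS")

    # Even a 16-drive stripe tops out around the controller limit, and its random IOPS
    # remain orders of magnitude below what a single flash device can deliver.
    ```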

    And the interconnects to the storage arrays have improved considerably too. Fibre Channel was a godsend in the storage farm, as it allowed much higher speeds (first 2Gbit/sec, then roughly doubling with each new generation). The proliferation of Fibre Channel alone made up for a number of failings in the speed of spinning disks and acted as a way of abstracting, or virtualizing, the physical and logical disks of the storage array. Over Fibre Channel, the storage control software offers up a ‘virtual’ disk but can manage it on the array itself any way it sees fit. Flexibility and speed reign supreme. But there is still an upper limit between the Fibre Channel adapter and the motherboard of the server itself: the PCIe interface. And even with PCIe 2.0 there’s an upper limit to how much throughput you can get off the machine and back onto it. Enter the PCIe disk cache.
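
    For a sense of scale, here is a rough comparison of the interface ceilings in play. The per-direction figures are approximate: the 8b/10b arithmetic is standard for these links, and the Fibre Channel number is the commonly quoted usable rate rather than a measurement.

    ```python
    # Approximate per-direction bandwidth ceilings, in MB/s.
    def usable_mb_s(gbit_line_rate, encoding=8 / 10):
        """Convert a serial line rate (Gbit/s) to usable MB/s after 8b/10b encoding."""
        return gbit_line_rate * encoding * 1000 / 8

    sata2 = usable_mb_s(3.0)        # SATA 3Gbit/s       -> ~300 MB/s
    pcie2_x1 = usable_mb_s(5.0)     # one PCIe 2.0 lane  -> ~500 MB/s
    pcie2_x8 = 8 * pcie2_x1         # an x8 slot         -> ~4000 MB/s
    fc_8g = 800                     # 8Gbit Fibre Channel, commonly quoted usable rate

    print(f"SATA 3Gbit/s        ~{sata2:.0f} MB/s")
    print(f"8Gbit Fibre Channel ~{fc_8g} MB/s")
    print(f"PCIe 2.0 x8 slot    ~{pcie2_x8:.0f} MB/s")
    ```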

    In this article I survey PCIe-based SSD and flash memory disk caches since they entered the market, as chronicled in The Register. It’s not really a mainstream technology: it’s prohibitively expensive and will be purchased by those who can afford it in order to gain the extra speed. But even in the short time since STEC was marketing its SSDs to the big three storage makers, a lot of engineering and design has created a brand new product category, and performance within that category has made steady progress.

    LSI’s entry into the market is still very early, and shipping product isn’t being widely touted. The Register is the only website actively covering this product segment right now. But the speed and density of the chips on these products just keep getting bigger, better, and faster, which provides a nice parallel to Moore’s Law in a storage device context. Before the PCIe flash cache market opened up, SATA and Serial Attached SCSI (SAS) were the upper limit of what could be accomplished even with flash memory chips. Soldering those chips directly onto an add-in board connected to the CPU through an 8-lane PCIe channel is nothing short of miraculous in the speeds it has gained. Now the competition between current vendors is to build one-off, customized setups to bench-test the theoretical top limit of what can be done with these new products. And this recent article from Chris Mellor shines a light on the newest product on the market, the LSI SSS6200. In it, Chris concludes:

    None of these million IOPS demos can be regarded as benchmarks and so are not directly comparable. But they do show how the amount of flash kit you need to get a million IOPS has been shrinking

    Moore’s Law now holds for these flash caches, which are becoming the high-speed storage option for datacenters that absolutely must have the highest disk I/O throughput available. And as the chips continue to shrink while their storage volume increases, who knows what the upper limit might be? News travels swiftly, though: Chris Mellor got a whitepaper press release from Samsung and began drawing some conclusions:

    Interestingly, the owner of the Korean Samsung 20nm process foundry has just taken a stake in Fusion-io, a supplier of PCIe-connected flash solid-state drives. This should mean an increase in Fusion-io product capacities, once Samsung makes parts for Fusion using the new process

    The flash memory makers are now in an arms race with the product manufacturers: Apple and Fusion-io get first dibs on shipping silicon as each new generation of flash chips enters the market. Apple has Toshiba, and Fusion-io gets Samsung. In spite of LSI’s benchmark of 1 million IOPS in its test system, I give the advantage to Fusion-io in the very near future. Another recent announcement from Fusion-io is a small round of venture capital funding that will hopefully cement its future as a going concern. Let’s hope its next-generation caches top out at a capacity competitive with its rivals, and at a speed equal to or faster than currently shipping product.
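
    To put the ‘million IOPS’ headline in perspective, here is a simple conversion, assuming the 4KB transfer size typically used in these vendor demos; the spinning-disk figure is the usual rule of thumb, not a measurement.

    ```python
    # What a million IOPS means in raw bandwidth, assuming 4KB transfers.
    iops = 1_000_000
    block_bytes = 4 * 1024

    bandwidth_gb_s = iops * block_bytes / 1e9
    print(f"{iops:,} IOPS at 4KB = ~{bandwidth_gb_s:.1f} GB/s")

    # A 15K rpm spinning disk manages on the order of 200 random IOPS, so matching
    # the demo with spindles alone would take thousands of drives.
    spindles_needed = iops // 200
    print(f"Equivalent 15K rpm spindles at ~200 IOPS each: ~{spindles_needed:,}")
    ```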

    Outside the datacenter, however, things are more boring. I’m not seeing anyone try to peer into the future of the desktop or laptop and create a flash cache that performs at this level. Fusion-io does have a desktop product currently shipping, mostly targeted at the PC gaming market, but I have not seen Tom’s Hardware try it out or attempt to integrate it into a desktop system. The premium price is enough to make its appeal very limited (it lists at an MSRP of $799, I think). But let’s step back and imagine what the future might be like. Given that Intel has incorporated the RAM memory controller into its i7 CPUs, and given that its design rules have shrunk so far that adding the memory controller was not a big sacrifice, is it possible the PCIe interface electronics could be migrated onto the CPU, away from the northbridge chipset? I’m not saying there should be no chipset at all; a bridge chip is absolutely necessary for really slow I/O devices like the USB interface. But maybe there could be at least one 16-lane PCIe link directly into the CPU, or possibly even an 8-lane one. If such a product existed, a Fusion-io cache could put almost 1TB of flash storage directly on the CPU’s doorstep and act as the highest-speed storage yet available on the desktop.

    Another route to higher-speed storage could be a second tier of memory slots with an accompanying JEDEC standard for ‘storage’ memory. RAM would go in one set of slots, flash in the other, and you could mix, match, and add as much flash memory as you liked. This could potentially be addressed through the same memory controllers already built into Intel’s currently shipping CPUs. Why does this matter, and why do I think about it at all? Because I am awaiting the next big speed increase in desktop computing. Ever since the megahertz wars died out, much of the increase in performance has been so incremental that there’s not a dime’s worth of difference between currently shipping PCs. Disk storage has become painfully obvious as the last link in the I/O chain that has stayed pretty much static. The migration from Parallel ATA to Serial ATA improved things, but nothing like the march of improvements that came with each new generation of Intel chips. So I vote for dumping disks once and for all. Move to 2TByte flash memory storage and run it through the fastest channel we can get onto and off of the CPU. There’s no telling what new things we might be able to accomplish with the speed boost. Not just games, not just watching movies, and not just scientific calculations: it seems to me the OS and applications alike would receive a big benefit from dumping the disk.

  • AppleInsider | Inside the iPad: Apple’s A4 processor

    Another report, appearing in The New York Times in February, stated that Apple, Nvidia and Qualcomm were all working to develop their own ARM-based chips before noting that “it can cost these companies about $1 billion to create a smartphone chip from scratch.” Developing an SoC based on licensed ARM designs is not “creating a chip from scratch,” and does not cost $1 billion, but the article set off a flurry of reports that said Apple has spent $1 billion on the A4.

    via AppleInsider | Inside the iPad: Apple’s A4 processor.

    Thank you, AppleInsider, for trying to set the record straight. I doubted the veracity of the NYTimes article when I saw that $1 billion figure thrown around (it seems more like the price of an Intel chip development project, which usually does start from scratch). And knowing now, from this article (a link to a PA Semi historical account), that PA Semi made a laptop version of a dual-core G5 chip leads me to believe power savings is something they would be brilliant at engineering solutions for (the G5 was a heat monster, meaning its electrical power draw was large). P.A. Semi set out to make the G5 power-efficient enough to fit into a laptop and they did it, but Apple had already migrated to Intel chips for its laptops.

    Intrinsity + P.A. Semiconductor + Apple = A4. Learning that Intrinsity is an ARM developer knits a nice, neat picture of a team of chip designers, QA folks, and validation folks who would all team up to make the A4 a resounding success. No truer mark of accomplishment can be shown for this effort than Walt Mossberg and David Pogue both stating in their iPad reviews yesterday that they got over 10 hours of run time from their iPads. Kudos to Apple: you may not have made a unique chip, but you sure as hell made a well-optimized one. Score, score, score.

  • iPad release imminent – caveat emptor

    Apropos of the big Easter weekend, Apple is releasing the iPad to the U.S. market. David Pogue of the NYTimes has done two reviews in one: rather than anger his technophile readers or alienate his average readers, he gave each audience its own review of a real, hands-on iPad. Where’s Walt Mossberg on this topic? (Walt likes it.) Pogue more or less says the lack of a physical keyboard is a showstopper for many; users who need a keyboard should get a laptop of some sort instead. Otherwise, for what it accomplishes through finger gestures and software design, the iPad is a pretty incredible end-user experience. Whether or not your personality and demeanor are compatible with the iPad is up for debate. But try before you buy: hands-on time will tell you much more than doing a web order and hoping for the best, and given the price, that’s the wise choice. Walt Mossberg, too, feels you had better actually try it before you buy. It is, in his own words, not like any other computer but in a different class all its own. So don’t trust other people to tell you whether or not it will work for you.

    One thing David Pogue is also very enthused by is that the data plan seems less onerous than the first- and second-generation iPhone contracts with AT&T. The dam is about to burst on mandatory data plans: in the iPad universe you can subscribe and lapse, then re-subscribe and lapse again, depending on your needs. So don’t pay for a long-term contract if you don’t need it. That addresses a long-standing problem I have had with the iPhone as it is currently marketed by Apple and AT&T. Battery life is another big upshot. The review models that Mossberg and Pogue used had ‘longer’, read that again, LONGER run times than stated by Apple. Both of them did real heavy network use and video playback on the devices and still went past the 10-hour battery life claimed by Apple. Score a big win for the iPad in that category.

    Lastly, Pogue hinted at maps looking and feeling like real maps on the bigger display. Mossberg points out that the hardware isn’t what’s really important; it’s what shows up on the App Store specifically for the iPad. I think I’ve heard a few M.I.T. types say this before: it’s unimportant what it does, the question is what ‘else’ it does. And that ‘else’ is the software developer’s coin of the realm. Without developers these products have no legs, no markets outside of the loyal fan base. What may come, no one can tell, but it will be interesting times for iPad owners, that’s for sure.

  • Which way the wind blows: Flash Memory in the Data Center

    STEC ZeusIOPS solid state disk (SSD)
    This hard drive with a Fibre Channel interface launched the flash revolution in the datacenter

    First, let’s take a quick look backwards to see what was considered state of the art a year ago. A company called STEC was making flash-based hard drives and selling them to big players in the enterprise storage market like IBM and NetApp. I depend solely on The Register for this information, as you can read here: STEC becalmed as Fusion-io streaks ahead

    STEC flooded the market, according to The Register, and the companies buying its product were suddenly left with a glut of these Fibre Channel based flash drives (Solid State Disk drives, SSDs). The gains in storage array performance followed. However, supply exceeded demand, and EMC is stuck with a raft of last year’s product that it hasn’t marked up and re-sold to its current customers. That created an opening for a similar but sexier product: Fusion-io and its PCIe-based flash drive. Why sexy?

    The necessity of a Fibre Channel interface has long been the accepted performance standard in the enterprise storage market: you need its multi-gigabit-per-second throughput just to compete. But for those in the middle levels of the enterprise who don’t own the heavy iron of giant multi-terabyte storage arrays, there is now an entry point through the magic of the PCIe 2.0 interface. Any given PC, whether a server or not, will have open PCIe slots in which a Fusion-io SSD card could be installed.

    Fusion-io Duo PCIe Flash cache card
    This is Fusion-io's entry into the Flash cache competition

    That lower threshold (though not necessarily a lower price) has made Fusion-io the new darling for anyone wanting to add SSD throughput to their servers and storage systems. And now everyone wants Fusion-io, not the re-branded STEC Fibre Channel SSDs everyone was buying a year ago.

    Anyone who has studied history knows that in the chain of human relations there’s always another competitor out there who wants to sit on your head. Enter LSI and Seagate with a new product for the wealthy, well-heeled purchasing agent at your local data center: LSI and Seagate take on Fusion-io with flash

    Rather than create a better or smarter Fibre Channel SSD, LSI and Seagate are assembling a card that plugs into a PCIe slot of a storage array or server to act as a high-speed cache in front of the slower spinning disks. The Register refers to three form factors in the market now: RamSan, STEC, and Fusion-io. Because Fusion-io seems to have moved into the market at the right time and is selling like hot cakes, LSI/Seagate are targeting that particular form factor with their SSS6200.

    LSI's PCIe Flash hard drive card
    This is LSI's entry into the Flash hard drive market

    STEC is also going to create a product with a PCIe interface, and Micron is going to design one too. LSI’s product will not be available to ship until the end of the year. In terms of performance, the targeted speeds are comparable between the Fusion-io Duo and the LSI SSS6200 (both using single-level cell memory). So let the price war begin! Once we finally get some competition in this market, I would hope the entry-level price of Fusion-io (~$35,000) finally erodes a bit. It is a premium product right now, intended to help some folks do some heavy lifting.

    My hope for the future is that we see something comparable (though much less expensive and scaled down) on desktop machines. I don’t care if it’s built into a spinning SATA hard drive (say, as a high-speed but very large cache) or comes as some kind of card plugging into a bus on the motherboard (like Intel’s failed Turbo Memory flash cache). If a high-speed flash cache could become part of the standard desktop PC architecture and sit in front of today’s monstrous single hard drives (2TB or higher nowadays), we might get faster response from our OS of choice, and possibly better optimization of reads and writes to fairly fast but incredibly dense and possibly more error-prone HDDs. I say this after reading about the big push by Western Digital to move from smaller blocks of data to 4K blocks.
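
    As a thought experiment, the ‘flash cache in front of a big disk’ idea boils down to a read-through cache keyed by block number. The sketch below is purely illustrative of that idea, not how any shipping drive or controller actually works; the class and its parameters are invented for the example.

    ```python
    from collections import OrderedDict

    class FlashReadCache:
        """Illustrative read-through block cache: a small, fast flash tier sitting in
        front of a large, slow disk. A sketch of the idea, not a real controller."""

        def __init__(self, backing_store, capacity_blocks):
            self.backing = backing_store          # callable: block number -> data
            self.capacity = capacity_blocks
            self.cache = OrderedDict()            # block number -> data, in LRU order

        def read(self, block_no):
            if block_no in self.cache:
                self.cache.move_to_end(block_no)  # hit: served at flash speed
                return self.cache[block_no]
            data = self.backing(block_no)         # miss: go to the spinning disk
            self.cache[block_no] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)    # evict the least recently used block
            return data

    # Toy usage: the "disk" is just a function that fabricates a block's contents.
    cache = FlashReadCache(lambda n: f"block-{n}".encode(), capacity_blocks=1024)
    print(cache.read(7), cache.read(7))           # the second read is a cache hit
    ```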

    Much wailing and gnashing of teeth has accompanied WD’s recent move to address the overhead of error correction and Cyclic Redundancy Check (CRC) fields on its hard drives. Because 2-terabyte drives have so many 512-byte blocks, more and more time and space is taken up doing those checks as data is read from and written to the drive. A larger block of 4,096 bytes instead of 512 cuts the error-correction overhead roughly fourfold and may be more reliable, even if some space is wasted on small text files or web pages. I understand the implication completely, and even more so, old-timers like Steve Gibson at GRC.com understand the danger of ever-larger single hard drives. The potential for catastrophic loss of data as more data blocks need to be audited can become numerically overwhelming for even the fastest CPU and SATA bus. I think I remember Steve Gibson expressing doubts as to how large hard drives could theoretically become.
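
    For a rough feel of where that ‘roughly fourfold’ comes from, here is a small sketch. The per-sector ECC figures are the approximate numbers cited in Advanced Format write-ups, used here purely for illustration.

    ```python
    # Rough ECC-overhead comparison of 512-byte vs. 4KB sectors, using figures
    # commonly cited in Advanced Format write-ups (treat them as approximations).
    ECC_PER_512B_SECTOR = 50    # bytes of ECC per legacy 512-byte sector (approx.)
    ECC_PER_4K_SECTOR = 100     # bytes of ECC per 4KB Advanced Format sector (approx.)

    legacy_ecc = (4096 // 512) * ECC_PER_512B_SECTOR   # eight small sectors -> 400 bytes
    advanced_ecc = ECC_PER_4K_SECTOR                   # one big sector      -> 100 bytes

    print(f"ECC bytes per 4KB of user data, 512-byte sectors: {legacy_ecc}")
    print(f"ECC bytes per 4KB of user data, 4KB sectors:      {advanced_ecc}")
    print(f"Reduction: {legacy_ecc // advanced_ecc}x, before counting the seven "
          f"inter-sector gaps and sync/address marks that also disappear")
    ```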

    Steve Gibson's SpinRite 6
    Steve Gibson's data recovery product SpinRite

    As the creator of the SpinRite data recovery utility, he knows fundamentally the limits of the Parallel ATA interface design. Despite advances in speed, error correction hasn’t changed much, and neither has the quality of the magnetic medium on the spinning disks. One thing that has changed is the physical size of the blocks of data: they have gotten ever smaller with each larger generation of disk storage. The smaller the block of data, the more error correcting must be done; and the more error correcting, the more space is spent writing the error-correcting information. Gibson himself observes that something as random as a cosmic ray can flip bits within a block of data at the incredibly small physical scale of a block on a 2TByte disk.

    So my hope for the future is a new look at the current state-of-the-art motherboard, chipset, and I/O bus architecture. Let’s find a middle-level, safe area to store the data we’re working on, one that doesn’t spontaneously degrade and isn’t too susceptible to random errors (i.e. cosmic rays). Let the flash caches flow, let’s get better throughput, and let’s put disks into the class of reliable but slower backing stores for our SSDs.

  • Apple A4 processor really stripped-down Cortex A8? | Electronista

    The custom A4 processor in the iPad is in reality a castrated Cortex A8 ARM design, say several sources.

    via Apple A4 processor really stripped-down Cortex A8? | Electronista.

    This is truly interesting, and it really shows some attempt to optimize the chip around ‘known’ working designs. Covering the first announcement of the A4 chip by Bright Side of News, I tried to argue that customizing a chip by licensing a core design from ARM Holdings isn’t all that custom. Following this, Ashlee Vance wrote in the NYTimes that the cost of development for the A4 ‘could be’ upwards of $1 billion. And now, just today, MacNN/Electronista is saying Apple used the ARM Cortex A8, a licensed core already in use in the Apple iPhone 3GS. It is a proven, known CPU core that engineers at Apple are familiar with. Given that level of familiarity, it’s a much smaller step to optimize the same core for speed and integration with other functions; the GPU and memory controllers, for instance, can be tightly bound into the final chip. Add a dose of power management and you get good performance and good battery life. It’s not cutting edge, to be sure, but it is far more likely to work right out of the gate. That’s a bloodthirsty step in the right direction of market domination. However, the market hasn’t yet shown itself to be so large and self-sustaining that slate devices are a sure thing as casual, auxiliary, secondary computing devices. You may have an iPhone and you may have a laptop, but this device is going to be purchased IN ADDITION to, not INSTEAD OF, those two. So anyone who can afford a third device is probably the target market for the iPad, as opposed to Apple creating a new platform for people who want to substitute an iPad for either the iPhone or the laptop.

  • AppleInsider | Custom Apple A4 iPad chip estimated to be $1 billion investment

    In bypassing a traditional chip maker like Intel and creating its own custom ARM-based processor for the iPad, Apple has likely incurred an investment of about $1 billion, a new report suggests.

    via AppleInsider | Custom Apple A4 iPad chip estimated to be $1 billion investment.

    After reading the NYTimes article linked within this article, I can only conclude that it is a very generalized statement that it costs $1 billion to create a custom chip. The exact quote from the NYTimes article’s author, Ashlee Vance, is: “Even without the direct investment of a factory, it can cost these companies about $1 billion to create a smartphone chip from scratch.”

    Given that that is roughly one third the full price of building a chip fabrication plant, why so expensive? What is the breakdown of those costs? Apple did invest money in P.A. Semiconductor to get some chip-building expertise (they primarily designed chips that were fabricated at overseas contract manufacturing plants). And given that Qualcomm has created the Snapdragon CPU using similarly licensed CPU cores from ARM Holdings, it must have $1 billion to throw around too? Qualcomm was once dominant in the cell phone market, licensing its CDMA technology to the likes of Verizon, but its financial success is nothing like the old days. So how does Qualcomm come up with $1 billion to develop the Snapdragon CPU for smartphones? Does that seem possible?

    Qualcomm and Apple are licensing the biggest building blocks and core intellectual property from ARM; all they need to do is place, route, and verify the design. So where does the $1 billion figure come in? Is it the engineers? Is it the masks for exposing the silicon wafers? I argue now, as I did in my first posting about the Apple A4 chip, that the chip is an adaptation of intellectual property, a license to a CPU design provided by ARM. It is not literally created ‘from scratch’, starting with no base design or with completely new proprietary intellectual property from Apple. This is why I am confused. Maybe ‘from scratch’ means different things to different people.

  • Next Flash Version Will Support Private Browsing

    Slashdot Your Rights Online Story | Next Flash Version Will Support Private Browsing.

    I’m beginning to think Adobe should just make Flash into a web browser that plays back its own movie format. That would end all the debates over open standards and so forth, and it would provide better support and integration. There is nothing wrong with a fragmented browser market; it’s what we already have right now.

    If you have ever heard from someone that Adobe Flash is buggy and crashes a lot, and you have to trust their judgment, then please do. It’s not the worst thing ever invented, but it certainly could be better. Given Adobe’s near-monopoly on web-delivered video (i.e. YouTube), one would think they could maintain their competitive advantage by creating a better user experience (the way Apple did when it entered the smartphone market). But instead they have tried to innovate their way to competitiveness, and so Flash has bloated up to accommodate all kinds of ActionScript and interactivity that used to exist only in desktop applications. So why should Adobe settle for being just a tool maker and browser plug-in? I say show everyone what the web browser should be, and compete.