Blog

  • The PC is dead. Why no angry nerds? :: The Future of the Internet — And How to Stop It

    Famously proprietary Microsoft never dared to extract a tax on every piece of software written by others for Windows—perhaps because, in the absence of consistent Internet access in the 1990s through which to manage purchases and licenses, there’d be no realistic way to make it happen.

    via The PC is dead. Why no angry nerds? :: The Future of the Internet — And How to Stop It.

    While it’s true that Microsoft didn’t tax software developers who sold products running on the Windows OS, a kind of tax did exist for hardware manufacturers building desktop PCs with Intel chips inside. But message received; I get the bigger point: cul-de-sacs don’t make good computers. They do, however, make good appliances. As the author Jonathan Zittrain points out, we are becoming less aware of the distinction between a computer and an appliance, and have lowered our expectations accordingly.

    In fact this points to a bigger trend than computers becoming silos of information and entertainment consumption; no, not by a long shot. That trend was preceded by the wild popularity of MySpace, followed quickly by Facebook and now Twitter: all “platforms,” as described by their owners, with some amount of API publishing and hooks to let in third-party developers (like game maker Zynga). But so what if I can play Scrabble or Farmville with my ‘friends’ on a social networking ‘platform’? Am I still getting access to the Internet? Probably not, as you are most likely reading whatever filters into or out of the central, all-encompassing data store of the social networking platform.

    Like the old world maps in the days before Columbus: here be dragons, and the world ends HERE, even though platform owners might say otherwise. It is an intranet, pure and simple, a gated community that forces unique identities on all participants. Worse yet, it is a Big Brother-like panopticon where each step and every little movement is monitored and tallied. You take quizzes, you like, you share; all these things are collection points, checkpoints to gather more data about you. And that is the TAX levied on anyone who voluntarily participates in a social networking platform.

    So long live the Internet, even though its frontier, wildcatting days are nearly over. There will be books and movies like How Cyberspace Was Won, and the pioneers will all be noted and revered. We’ll remember when we could go anywhere we wanted and do things we never dreamed of. But those days are slipping away as new laws get passed under very suspicious pretenses, all in the name of commerce. As for me, I much prefer Freedom over Commerce, and you can log that in your stupid little database.

    Cover of “The Future of the Internet--And…” (via Amazon)
  • AnandTech – Intel and Micron IMFT Announce World’s First 128Gb 20nm MLC NAND

    NAND flash memory circuit (image via Wikipedia)

    The big question is endurance, however we won’t see a reduction in write cycles this time around. IMFT’s 20nm client-grade compute NAND used in consumer SSDs is designed for 3K – 5K write cycles, identical to its 25nm process.

    via AnandTech – Intel and Micron IMFT Announce World’s First 128Gb 20nm MLC NAND.

    If true, this will help considerably in driving down the cost of flash memory chips while maintaining the current level of wear and performance drop seen over the lifetime of a chip. Stories I have read previously indicated that flash memory might not continue to evolve using the current generation of silicon chip manufacturing technology. Performance drops occur as memory cells wear out, and memory cells were wearing out faster and faster as the wires and transistors on the flash memory chip got smaller and narrower.

    The reason for this is that memory cells have to be erased in order to free them up, and writing and erasing take a toll on the memory cell each time one of these operations is performed. Single-level cells (SLC) are the most robust and can go through many tens of thousands of write/erase cycles before they wear out. However, the cost per megabyte of single-level cells puts them at an enterprise-level premium price, generally speaking, for corporate customers. Multi-level cells (MLC), which store two bits per cell, are much more cost effective, but the structure of the cells makes them less durable than single-level cells. And as the wires connecting them get thinner and narrower, the number of write/erase cycles they can endure without failing drops significantly. Enterprise customers in the past would not purchase products specifically because of this limitation of the multi-level cell.
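    The erase-before-rewrite constraint described above can be sketched as a toy model. This is illustrative only; the block size, page values, and controller logic are simplified assumptions, not any vendor’s actual firmware. The key property it captures: programming can only flip bits from 1 to 0, so rewriting a page forces an erase of the whole block, and each erase adds wear.

```python
# Toy model of NAND flash wear (illustrative assumptions, not real firmware):
# programming can only clear bits (1 -> 0), so overwriting a page requires
# erasing its entire block back to all 1s, and every erase wears the block.

BLOCK_PAGES = 4  # pages per erase block (real chips use 64-256)

class Block:
    def __init__(self):
        self.pages = [0xFF] * BLOCK_PAGES  # erased state: all bits set to 1
        self.erase_count = 0               # wear accumulates per erase

    def erase(self):
        self.pages = [0xFF] * BLOCK_PAGES
        self.erase_count += 1

    def program(self, page, value):
        # Programming can only clear bits; stale data forces an erase first.
        if (self.pages[page] & value) != value:
            raise ValueError("page holds stale data; erase the block first")
        self.pages[page] &= value

blk = Block()
blk.program(0, 0xA5)          # fresh page: programming just clears bits, OK
try:
    blk.program(0, 0x5A)      # in-place overwrite: rejected
except ValueError:
    blk.erase()               # must wipe the whole block (wear +1)...
    blk.program(0, 0x5A)      # ...before the page can be rewritten
print(blk.erase_count)        # -> 1
```

    Every rewrite costing a block erase is why write/erase endurance, not read endurance, is the number that matters in the quoted 3K – 5K figure.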

    As companies like Intel and Samsung tried to make flash memory chips smaller and less expensive to manufacture, the durability of the chips kept dropping. The question everyone asked: is there a point of diminishing returns where smaller design rules and thinner wires make chips too fragile? The solution for most manufacturers is to add spare memory cells, “over-provisioning,” so that when a cell fails you can unlock a spare and continue using the whole chip. The not-so-secret trick of over-provisioning has been the way most solid-state drives (SSDs) have handled the write/erase problem for multi-level cells. But even then, the question is how much do you over-provision? Another technique, called wear-leveling, has the memory controller distribute writes and erases over ALL the chips available to it. A statistical scheme ensures each and every chip suffers equally and gets the same amount of wear and tear applied to it. It’s a difficult balancing act for flash memory manufacturers, and for the storage product manufacturers who consume those chips, to make products that perform adequately, do not fail unexpectedly, and do not cost too much for laptop and desktop makers to offer to their customers.
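    A minimal wear-leveling sketch, assuming a naive “always write to the least-worn block” policy. The class and method names are made up for illustration; real controllers also track hot/cold data, garbage-collect, and remap logical addresses. Over-provisioned spare blocks simply join the pool, which is why they extend the life of the advertised capacity.

```python
import heapq

# Hypothetical wear-leveling sketch (not any real controller's algorithm):
# every write lands on the least-worn block, so erase wear spreads evenly
# instead of hammering one block until it fails.

class WearLeveler:
    def __init__(self, num_blocks, spare_blocks=2):
        # Over-provisioning: spare blocks beyond the advertised capacity
        # join the same pool and absorb their share of the wear.
        total = num_blocks + spare_blocks
        self.heap = [(0, b) for b in range(total)]  # (erase_count, block_id)
        heapq.heapify(self.heap)

    def write(self, data):
        wear, block = heapq.heappop(self.heap)      # pick least-worn block
        # ... erase the block and program `data` into it ...
        heapq.heappush(self.heap, (wear + 1, block))
        return block

wl = WearLeveler(num_blocks=4, spare_blocks=2)
for _ in range(600):
    wl.write(b"payload")

counts = sorted(w for w, _ in wl.heap)
print(counts)   # 600 writes over 6 blocks -> each block erased 100 times
```

    With 3K – 5K cycles per MLC block, spreading 600 writes as 100 each, instead of 600 on one block, is the whole game.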

    If Intel and Micron can successfully address the fragility of Flash chips as the wiring and design rules get smaller and smaller, we will start to see larger memories included in more mobile devices. I predict you will see iPhones and Samsung Android smartphones with upwards of 128GBytes of Flash memory storage. Similarly, tablets and ultra-mobile laptops will also start to have larger and larger SSDs available. Costs should stay about where they are now in comparison to current shipping products. We’ll just have more products to choose from, say like 1TByte SSDs instead of the more typical high end 512GByte SSDs we see today. Prices might also come down, but that’s bound to take a little longer until all the other Flash memory manufacturers catch up.

    Wiring of a flash memory cell (image via Wikipedia)
  • Samsung: 2 GHz Cortex-A15 Exynos 5250 Chip

    Samsung also previewed a 2 GHz dual-core ARM Cortex-A15 application processor, the Exynos 5250, also designed on its 32-nm process. The company said that the processor is twice as fast as a 1.5 GHz A9 design without having to jump to a quad-core layout.

    via Samsung Reveals 2 GHz Cortex-A15 Exynos 5250 Chip.

    Official logo of the ARM processor architecture (image via Wikipedia)

    More news on the release dates and the details of Samsung’s version of the ARM Cortex-A15 CPU for mobile devices. Samsung is helping ramp up performance by shrinking the design rule down to 32nm and, in this A15 part, dropping two of the four possible cores. That choice makes room for the integrated graphics processor. It’s a deluxe system-on-a-chip that will no doubt give any A9-equipped tablet a run for its money. Indications from Samsung at this point are that this A15 will be a tablet-only CPU, not adapted to smartphone use.

    Early in the fall there were indications that the memory addressing of the Cortex-A15 would be enhanced to allow larger memories (greater than 4GBytes) to be added to devices. That enhancement is the 40-bit Large Physical Address Extension (LPAE), which lifts the 4GByte physical limit of the current-generation Cortex-A9. However, the instructions are still the same 32-bit instruction set longtime users of the ARM architecture are familiar with, and, as always, backward compatible with previous-generation software. It would appear that the biggest advantages of moving to the Cortex-A15 are the potential for higher clock rates, decent power management, and room on the die for embedded graphics.
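    The address-space arithmetic behind the 4GByte limit and the 40-bit extension is easy to check:

```python
# Address-space arithmetic for the figures in the text: a 32-bit physical
# address reaches 4 GiB, while 40-bit LPAE addressing reaches 1 TiB, even
# though the instruction set itself stays 32-bit.

GiB = 2**30
max_32bit = 2**32 // GiB   # plain 32-bit physical addressing
max_lpae = 2**40 // GiB    # 40-bit Large Physical Address Extension

print(max_32bit)   # -> 4     (GiB)
print(max_lpae)    # -> 1024  (GiB, i.e. 1 TiB)
```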

    Apple, in its designs using Cortex processors, has stayed one generation behind the rest of the manufacturers and used all possible knowledge and brute force to eke out a little more power savings. Witness how the iPad’s battery life still tops most other devices on the market. By creating a fully customized Cortex-A8, Apple absolutely set the bar for power management on the die, and on the motherboard as well. If Samsung goes the route of pure power and clock speed, sacrificing two cores to get the power level down, I just hope they can justify that effort with equally amazing advancements in the software that runs on this new chip. Whether it’s a game or, better yet, a snazzy user interface, they need to differentiate themselves and show off their new CPU.

  • Super Tiny Computer Puts Android on Your TV, Laptop - with a side of Raspberry Pi

    Early this year we got to see, through ARM-powered devices such as the Motorola Atrix, that it doesn’t take even a netbook to run basic computing functions. At a live demonstration in New York City, FXI Technologies showed off the next evolution of that idea: an ARM-based computer on a USB stick without any of that extra smartphone or tablet baggage.

    via Super Tiny Computer Puts Android on Your TV, Laptop.

    An example of a newly minted Raspberry Pi motherboard

    As time marches onward, the term ‘computer’ becomes more and more diffuse. Consider the cell phone: it is no longer a phone but a computer connected to a network that can act like a phone. Or your TV: a computer, also connected to a network, on which you can watch broadcasts or streamed videos, or attach a game console. Now what if you could turn any bit of electronics with a USB connector and a video display into a bona fide computer? Size is no limit when using a mobile CPU like an ARM chip. As Android evolves, I hope efforts like Raspberry Pi show what can be done in a wholly open-source context. FXI Technologies is showing us the way, but so are other efforts like the Raspberry Pi computer.

    I attended a workshop this past summer sponsored by Red Hat covering a wide range of topics, including open-source communities. The main technical person leading the workshop also volunteers some of his time to Mozilla, specifically Mozilla’s work targeting ARM CPUs, like the Raspberry Pi computer. He told us a little about how astoundingly cheap the device is: it was originally intended as the main board for the Slingbox time-shifting TV controller. The first-generation design was meant to be as low cost as possible but didn’t quite make it to market. Succeeding generations of the original design did, as did the custom ARM CPU that Broadcom created for the original design. That CPU has now given birth to the Raspberry Pi project, using the Broadcom BCM2835 system-on-a-chip (SoC). This is an ARM11-based core, which puts it a generation behind Apple’s A4 and A5 iPhone/iPad CPUs and their Cortex-A8 and now Cortex-A9 cores. It is of course cheap compared to anything else calling itself a computer or a tablet, and that is the reasoning behind making the board layout open source, along with targeting a Linux distribution specifically at this computer.

    USB flash drive (image via Wikipedia)
  • AnandTech – Applied Micro’s X-Gene: The First ARMv8 SoC

    APM expects that even with a late 2012 launch it will have a 1 – 2 year lead on the competition. If it can get the X-Gene out on time, hitting power and clock targets, both very difficult goals, the headstart will be tangible. Note that by the end of 2012 we’ll only just begin to see the first Cortex A15 implementations. ARMv8 based competitors will likely be a full year out, at least.

    via AnandTech – Applied Micro’s X-Gene: The First ARMv8 SoC.

    Chip Diagram for the ARM version 8 as implemented by APM

    It’s nice to get confirmation of the production timelines for the Cortex-A15 and the next-generation ARM version 8 architecture. Don’t expect to see shipping ARMv8 chips, much less finished products using those chips, until well into 2013 or even later. As for the four-core ARM A15, finished product will not appear until well into 2012. This means that if Intel is able to scramble, it has time to further refine its Atom chips to reach the power level and Thermal Design Power (TDP) of the competing ARM version 8 architecture. The goal, it seems, is to jam more cores into a CPU socket than is currently done on the Intel architecture (up to almost 32 in one of the graphics presented with the article).

    The target we are talking about is 2W per core at 3GHz, and it is going to be a hard, hard target to hit for any chip designer or manufacturer. One can only hope that TSMC can help APM get a finished chip out the door on its finest-ruled production lines (although an update to the article indicates it will ship on 40nm to get out the door quicker). The finer the ruling of signal lines on the chip, the lower the TDP and the higher the clock rate can go. If APM can accomplish its goal of 2W per CPU core at 3GHz, I think everyone will be astounded. And if this same chip can be sampled at the earliest prototype stages by a current ARM server manufacturer, say Calxeda or even SeaMicro, then hopefully we can get benchmarks showing what kind of performance to expect from the ARMv8 architecture and instruction set. These will be interesting times.
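    A quick back-of-the-envelope check on that target, assuming the roughly 32 cores per socket shown in the article’s graphic (the uncore, memory controllers, and I/O would add to this, so treat it as a floor, not a spec):

```python
# Rough socket-power estimate for the 2 W-per-core @ 3 GHz target, at the
# ~32-core count shown in the article's diagram. Cores only; uncore and
# I/O power are not included.

watts_per_core = 2
cores = 32
socket_watts = watts_per_core * cores

print(socket_watts)   # -> 64 (watts for the cores alone)
```

    64W of cores running at 3GHz would land well under a contemporary server Xeon’s TDP, which is the whole point of the exercise.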

    Intel Atom CPU Z520, 1.33 GHz (image via Wikipedia)
  • Expect the First Windows 8 Snapdragon PC Late 2012


    Qualcomm CEO Paul Jacobs, speaking during the San Diego semiconductor company’s annual analyst day in New York, said Qualcomm is currently working with Microsoft to ensure that the upcoming Windows 8 operating system will run on its ARM-based Snapdragon SoCs.

    via Expect the First Windows 8 Snapdragon PC Late 2012.


    Windows 8 is a’comin’ down the street. And I bet you’ll see it sooner rather than later, maybe as early as June on some products. The reason of course is that the tablet market is sucking all the air out of the room, and Microsoft needs a win to keep mindshare favorable to its view of the consumer computer market. Part of that drive is fostering a new level of cooperation with system-on-chip manufacturers who until now have been devoted to the mobile phone and smartphone market. Everyone now wants a great big Microsoft hope to conquer the Apple iPad in the tablet market, and this may be Microsoft’s only chance to accomplish that in the coming year.

    Just two days ago, however, Forrester Research predicted the Windows 8 tablet would be dead on arrival:


    IDG News Service – Interest in tablets with Microsoft’s Windows 8 is plummeting, Forrester Research said in a study released on Tuesday.

    http://www.computerworld.com/s/article/9222238/Interest_waning_on_Windows_8_tablet_Forrester_says

    Key to making a mark in the tablet computing market is content, content, content. Performance and specs alone will not create a Windows 8 tablet market in what is an Apple-dominated tablet marketplace, as the article says. It also appears previous players in the failed PC tablet market will make a valiant second attempt, this time using Windows 8 (I’m thinking Fujitsu, HP, and Dell, according to the article).

  • Fusion plays its card: The Ten of Terabytes • The Register

    Fusion-io has crammed eight ioDrive flash modules on one PCIe card to give servers 10TB of app-accelerating flash.

    This follows on from its second generation ioDrives: PCIe-connected flash cards using single level cell and multi-level cell flash to provide from 400GB to 2.4TB of flash memory, which can be used by applications to get stored data many times faster than from disk. By putting eight 1.28TB multi-level cell ioDrive 2 modules on a single wide ioDrive Octal PCIe card Fusion reaches a 10TB capacity level.

    via Fusion plays its card: The Ten of Terabytes • The Register.


    This is some big news in the fight to be king of the PCIe SSD market. I declare: advantage Fusion-io. They now have the lead not just in speed but also in overall capacity at their target price point. As densities increase and prices more or less stay flat, the value-add is that more data can stay resident on the PCIe card rather than being swapped out to Fibre Channel array storage on the Storage Area Network (SAN). Performance is likely to be wicked fast, and early adopters will no doubt reap big benefits in transaction processing and online analytic processing as well.
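    The capacity arithmetic in the quoted piece checks out, with a little rounding down for the marketing number:

```python
# Sanity check on the quoted configuration: eight 1.28 TB ioDrive 2 MLC
# modules on a single ioDrive Octal PCIe card.

modules = 8
tb_per_module = 1.28
total_tb = modules * tb_per_module

print(round(total_tb, 2))   # -> 10.24 (TB), marketed as "10TB"
```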

  • CMOS sensor inventor Eric Fossum discusses digital image sensors: Digital Photography Review

    CMOS sensor inventor Eric Fossum discusses digital image sensors: Digital Photography Review.

    Check out the video of the lecture. Dr. Fossum attempts to address the societal and privacy implications of his invention, the CMOS sensor. You don’t find too many scientists willing to engage in this type of presentation. And he brings up the thorny issues early in the presentation, rather than sticking them at the end where he might run out of time to cover them.

    Also interesting in this video is Dr. Fossum’s story about how he was assigned the task of improving the reliability of CCDs (charge-coupled devices) being sent into space. Defects could occur when a highly energetic particle entered the sensor and damaged it, ruining the ability to read data out of the chip accurately. A CCD works by collecting a charge sample, then moving it one step at a time out to the edge of the chip, where it gets amplified, read, and recorded. So if a defect occurs, the charge packets moving along a particular row or column of pixels will hit the defect, which alters the reading or stops it altogether.

    Dr. Fossum was able to get around this by building an amplifier into each pixel. This was achievable thanks to the scaling down of micro-electronics in silicon semiconductors, per Moore’s Law. A double benefit of using CMOS semiconductors for the sensor is that you can add all kinds of OTHER electronic circuits on the same chip as the sensor, so things get really interesting: you can integrate them on the silicon, bringing up performance and bringing down costs. As Dr. Fossum says, “basically we can integrate so many things, we can create a full camera on a chip. All you do is add power, and out comes an image,…”
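    A toy simulation (my own illustration, not real sensor code) of why a single defect hurts a CCD so much more than a CMOS active-pixel sensor: CCD charge packets are bucket-brigaded through every cell between them and the one output amplifier, so everything shifted past a defect is corrupted, while the per-pixel amplifier reads each CMOS pixel in place.

```python
# Toy readout model: a defect that traps half the charge of any packet
# passing through it. In a CCD, packets at or beyond the defect must shift
# through it on the way to the single output amplifier; in a CMOS
# active-pixel sensor, each pixel is amplified and read in place.

column = [10, 20, 30, 40, 50]   # charge collected along one pixel column
DEFECT = 2                      # index of the radiation-damaged cell

def ccd_readout(col, defect):
    # Every packet at index >= defect passes through the bad cell,
    # so multiple pixels come out corrupted.
    return [q // 2 if i >= defect else q for i, q in enumerate(col)]

def cmos_readout(col, defect):
    # Per-pixel amplifier: only the defective pixel itself reads wrong.
    return [q // 2 if i == defect else q for i, q in enumerate(col)]

print(ccd_readout(column, DEFECT))   # three pixels corrupted
print(cmos_readout(column, DEFECT))  # only one pixel corrupted
```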

    Also liked this quote, “The force of marketing is greater than the force of engineering…”

    Lastly, he covers his research on the quanta image sensor (QIS), which sounds pretty interesting too.

    CMOS image sensor with photodiodes (PD) (image via Wikipedia)
  • Intel Responds to Calxeda/HP ARM Server News (Wired.com)

    Now, you’re probably thinking, isn’t Xeon the exact opposite of the kind of extreme low-power computing envisioned by HP with Project Moonshot? Surely this is just crazy talk from Intel? Maybe, but Walcyzk raised some valid points that are worth airing.

    via Cloudline | Blog | Intel Responds to Calxeda/HP ARM Server News: Xeon Still Wins for Big Data.

    TILE64 mesh network processor from Tilera (image via Wikipedia)

    So Intel gets an interview with a Condé Nast writer for a sub-blog of Wired.com. I doubt many purchasers or data center architects consult Cloudline@Wired.com. All the same, I saw through the many thinly veiled bits of handwaving and old saws from Intel saying, “Yes, this exists, but we’re already addressing it with our existing product lines…” So I wrote a comment on this very article, especially regarding a throwaway line about the ‘future’ of the data center and the direction the data center and cloud computing market is headed. The moderator never published the comment. In effect, I raised the question: whither Tilera? And the Quanta SM-2 server based on the Tilera chip?

    Aren’t they exactly what the author John Stokes describes as a network of cores on a chip? Given the scale of Tilera’s product plans and the fact that they are not concentrating just on network gear but on actual compute clouds too, I’d say both Stokes and Walcyzk are asking the wrong questions and directing our attention the wrong way. This is not a PR battle but a flat-out technology battle. You cannot win it with words and white papers; it requires benchmarks, deployments, and case histories. Technical merit and superior technology will differentiate the players in the cloud-in-a-box race. That hasn’t been the case in the past as Intel battled AMD in the desktop consumer market. In the data center, Fear, Uncertainty, and Doubt is the only weapon Intel has.

    And I’ll quote directly from John Stokes’s article here describing EXACTLY the kind of product that Tilera has been shipping already:

    “Instead of Xeon with virtualization, I could easily see a many-core Atom or ARM cluster-on-a-chip emerging as the best way to tackle batch-oriented Big Data workloads. Until then, though, it’s clear that Intel isn’t going to roll over and let ARM just take over one of the hottest emerging markets for compute power.”

    The key phrase here is “cluster on a chip,” in essence exactly what Tilera has strived to achieve with its TILE64-based architecture. To review from previous blog entries on this website following the announcements and timelines published by Tilera: