Category: technology

General technology, not anything in particular

  • Apple A5X CPU in Review

    Apple Inc. (Photo credit: marcopako)

    A meta-analysis of the Apple A5X system on chip

    (from the currently shipping 3rd Gen iPad)

    New iPad’s A5X beats NVIDIA Tegra 3 in some tests (MacNN|Electronista)

    Apple’s A5X Die (and Size?) Revealed (Anandtech.com)

    Chip analysis reveals subtle changes to new iPad innards (AppleInsider-quoting Anandtech)

    Apple A5X Die Size Measured: 162.94mm^2, Samsung 45nm LP Confirmed (Update from Anandtech based on a more technical analysis of the chip)

    Reading through all the hubbub and hand-waving from the technology ‘teardown’ press outlets, one would have expected a bigger leap from Apple’s chip designers. What Apple came up with to boost itself to the next higher-resolution display (the so-called Retina Display) is a fairly large chip with an enormous graphics processor integrated into the die. The design rule is still a pretty conservative 45nm (rather than pushing the envelope with 32nm or smaller to bring down the power requirements). Apple likewise had to boost the battery capacity to almost 2X that of the first-gen iPad to feed this power-hungry pixel demon. So for almost the ‘same’ effective battery life (10 hours of reserve power), you get the higher-resolution display. But a bigger chip and a higher-resolution display will, generally speaking, add up to some extra heat being generated. Which leads us to a controversy.

    Given this, there has been a recent back-and-forth argument over the thermal design point of the 3rd generation iPad. Consumer Reports published an online article saying the power/heat dissipation was much higher than in previous-generation iPads, including thermal photographs indicating the hot spots on the back of the device and their relative temperatures. While the iPad doesn’t run hotter than a lot of other handheld devices (say, Android tablets), it does run hotter than, say, an iPod Touch. But as Apple points out, that has ALWAYS been the case. So you gain some things, you give up some things, and still Apple is the market leader in this form factor, years ahead of the competition. And now the tempest in the teapot is winding down, as Consumer Reports (via LATimes.com) has rated the 3rd Gen iPad as its no. 1 tablet on the market (big surprise). So while they aren’t willing to retract their original claim of high heat, they are willing to say it doesn’t count as ‘cause for concern’. So you be the judge when you try out the iPad in the Apple Store. Run it through its paces; a full-screen video or two should heat up the GPU and CPU enough to get the electrons really racing through the device.

    A picture of the Apple A5X
    This is the new System on Chip used by the Apple 3rd generation iPad
  • ARM Wants to Put the Internet in Your Umbrella | Wired Enterprise | Wired.com

    Image representing Wired Magazine (via CrunchBase)

    On Tuesday, the company unveiled its new ARM Cortex-M0+ processor, a low-power chip designed to connect non-PC electronics and smart sensors across the home and office.

    Previous iterations of the Cortex family of chips had the same goal, but with the new chip, ARM claims much greater power savings. According to the company, the 32-bit chip consumes just nine microamps per megahertz, an impressively low amount even for an 8- or 16-bit chip.

    via ARM Wants to Put the Internet in Your Umbrella | Wired Enterprise | Wired.com.

    Lower power means a very conservative power budget, especially for devices connected to the network. And 32 bits is nothing to sneeze at, considering most manufacturers would pick a 16- or 8-bit chip to bring down the cost and the power budget too. According to this article the power savings are so great, in fact, that in sleep mode the chip consumes almost no power at all. For this market Moore’s Law is paying big dividends, especially given the bonus of a 32-bit core. So not only will you get a very small, low-power CPU, you’ll have a much more diverse range of software that could run on it and take advantage of a larger memory address space as well. I think non-PC electronics could include things as simple as web cams or cellphone cameras. Can you imagine a CMOS camera chip with a whole 32-bit CPU built in? Makes you wonder not just what it could do, but what ELSE it could do, right?
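    To put that nine-microamps-per-megahertz figure in perspective, here is a rough back-of-the-envelope sketch. The clock speed, coin-cell capacity and duty cycle below are my own assumptions for illustration, not figures from ARM or the article:

```python
# Back-of-the-envelope battery life for a Cortex-M0+-class sensor node.
# The 9 uA/MHz active draw comes from the article; the clock speed,
# coin-cell capacity and duty cycle are assumed for illustration.

UA_PER_MHZ = 9          # active current draw, microamps per MHz
CLOCK_MHZ = 48          # assumed clock speed
BATTERY_MAH = 225       # typical CR2032 coin-cell capacity
DUTY_CYCLE = 0.01       # awake 1% of the time, near-zero draw asleep

active_ma = UA_PER_MHZ * CLOCK_MHZ / 1000.0   # mA while running
average_ma = active_ma * DUTY_CYCLE           # mA averaged over sleep
hours = BATTERY_MAH / average_ma

print("Active draw: %.3f mA" % active_ma)
print("Estimated battery life: %.1f years" % (hours / (24 * 365)))
```

    Even running flat out, the chip draws under half a milliamp, and with aggressive sleeping a sensor could plausibly run for years on a coin cell, which is exactly the ‘almost no power at all’ sleep-mode story described above.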

    The term ‘Internet of Things’ is bandied about quite a bit as people dream about CPUs and networks connecting ALL the things. And what would be the outcome if your umbrella was connected to the Internet? What if ALL the umbrellas were connected? You could log all kinds of data: whether it was opened or closed, what the ambient temperature was. It would potentially be like a portable weather station for anyone aggregating all the logged data. And the list goes on and on. Instead of just tire-pressure monitors, why not also capture video of the tire as it is being used commuting to work? It could help measure tire wear and set up an appointment when you need a wheel alignment. It could determine how many times you hit potholes and suggest smoother alternate routes. That’s the kind of blue-sky, wide-open conjecture that is enabled by a 32-bit low/no-power CPU.

    Moore’s Law, The Fifth Paradigm. (Photo credit: Wikipedia)
  • Accidental Time Capsule: Moments from Computing in 1994 (from RWW)

    Byte Magazine is one of the reasons I’m here today, doing what I do. Every month, Byte set its sights on the bigger picture, a significant trend that might be far ahead or way far ahead. And in July 1994, Jon Udell, to this very day among the most insightful people ever to sign his name to an article, was setting his sights on the inevitable convergence between the computer and the telephone.

    via Accidental Time Capsule: Moments from Computing in 1994, by Scott Fulton.

    Jon Udell (Photo credit: Wikipedia)

    I also liked Tom Halfhill, Jerry Pournelle, Steve Gilmore, and many other writers at Byte over the years too. I couldn’t agree more with Scott Fulton, as I am still a big fan of Jon Udell and any projects he has worked on and documented. I can credit Jon Udell for getting me curious about weblogging, Radio Userland, WordPress, Flickr and del.icio.us (the social bookmarking website). I have also been watching his progress on a ‘Calendar of Public Calendars’, the elmcity project. Jon is attempting to catalog and build an aggregated list of calendars that have RSS-style feeds anyone can subscribe to. No need for automated emails filling a filtered email box. No, you just fire up a browser and read what’s posted. You find out what’s going on and just add the event to your calendar.

    As Jon has discovered, the calendars exist and the events are there; they just aren’t evenly distributed yet (i.e., much like the future). So in his analysis of ‘what works’ Jon has found some sterling examples of calendar keeping and maintenance, some of which have popped up in interesting places, like public school systems. However, the biggest downfall of all event calendars is the all too common practice of taking Word documents and exporting them as PDF files which get posted to a website. THAT is the calendar for far too many organizations, and it fails utterly as a means of ‘discovering’ what’s going on.

    Suffice it to say elmcity has been a long-term project of organizing and curatorial work that Jon is attempting to get an informal network of like-minded people involved in. And as different cities form calendar ‘hubs’, Jon is collecting them into larger networks so that you can search one spot, find out ‘what’s happening’, and then add those events to your own calendar in a very seamless and lightweight manner. I highly recommend following Jon’s weblog, as he’s got the same ability to explain and analyze these technologies that he excelled at while at Byte, and he continues to follow his bliss and curiosity about computers, networks and, more generally, technology.
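    The machine-readable calendars elmcity aggregates are just structured text under the hood. Here is a minimal sketch of pulling event summaries out of an iCalendar-style feed; the feed content is invented for the example, and a real hub would fetch the .ics over HTTP and handle line folding, time zones, and recurrence:

```python
# Minimal sketch: extract (summary, start) pairs from VEVENT blocks in
# an iCalendar feed. The feed text below is made-up sample data.

ICS_FEED = """BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:Town Council Meeting
DTSTART:20120402T190000
END:VEVENT
BEGIN:VEVENT
SUMMARY:Farmers Market
DTSTART:20120407T090000
END:VEVENT
END:VCALENDAR"""

def parse_events(ics_text):
    """Collect (summary, start) pairs from each VEVENT block."""
    events, current = [], {}
    for line in ics_text.splitlines():
        if line == "BEGIN:VEVENT":
            current = {}
        elif line == "END:VEVENT":
            events.append((current.get("SUMMARY"), current.get("DTSTART")))
        elif ":" in line:
            key, value = line.split(":", 1)
            current[key] = value
    return events

for summary, start in parse_events(ICS_FEED):
    print(start, summary)
```

    Once events are in this form, aggregating many feeds into one searchable hub is just a matter of merging lists, which is the lightweight, subscription-based model Jon advocates over PDF postings.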

  • Stephen Wolfram Blog : The Personal Analytics of My Life

    Publicity photo of en:Stephen Wolfram. (Photo credit: Wikipedia)

    One day I’m sure everyone will routinely collect all sorts of data about themselves. But because I’ve been interested in data for a very long time, I started doing this long ago. I actually assumed lots of other people were doing it too, but apparently they were not. And so now I have what is probably one of the world’s largest collections of personal data.

    via Stephen Wolfram Blog : The Personal Analytics of My Life.

    Gordon Bell (Photo credit: Wikipedia)

    In some ways similar to Stephen Wolfram, Gordon Bell at Microsoft has engaged in an attempt to record his “LifeBits”, using a ‘wearable’ computer to record video and capture what goes on in his life. In my opinion, Stephen Wolfram has done Gordon Bell one better by collecting data over a much longer period, and of a much wider range, than Gordon Bell accomplished within the scope of LifeBits. Reading Wolfram’s summary of all his data plots is as interesting as seeing the plots themselves. There can be no doubt that Stephen Wolfram has always thought differently than most folks, dare I say most scientists, and will continue to do so. Bravo!

    The biggest difference between MyLifeBits and Wolfram’s personal data collection is Wolfram’s emphasis on non-image-based data. The goal, it seems, for the Microsoft Research group is to fulfill the promise of Vannevar Bush’s old article titled “As We May Think”, printed in The Atlantic, July 1945. In this article Bush proposes a prototype of a more ‘visual computer’ that would act as a memory-recall and analytic-thinking aid. He named it the Memex.

    Gordon Bell and Jim Gemmell of Microsoft Research seemed to be focused on the novelty of a carried camera automatically taking pictures of the area immediately in front of it. This log of ‘what was seen’ was meant to help cement visual memory and recall. Gordon Bell had spent a long period of time digitizing “articles, books, cards, CDs, letters, memos, papers, photos, pictures, presentations, home movies, videotaped lectures, and voice recordings and stored them digitally.” This emphasis on visual data might, if used properly, be useful to some, but I think it is more a product of Gordon Bell’s own personal interest in seeing how much he could capture and then catalog after the fact.

    Stephen Wolfram’s data wasn’t even necessarily based on a ‘wearable computer’ the way MyLifeBits seems to be. Wolfram built a logging/capture system into the things he did daily on a computer. This even included data collected by a digital pedometer to measure the steps he would take in a day. The plots of the data are most interesting in comparison to one another, especially given the length of time over which they were collected (a much bigger set than Gordon Bell’s LifeBits, I dare say). So maybe this points to another step forward in the evolution of LifeBits? Wolfram’s data seems to be more useful in a lot of ways, since he’s not as focused on memory and recall of any given day. But a synthesis of Wolfram’s data collection methods and analysis with Gordon Bell’s MyLifeBits capture of image data might be useful to a broader range of people, if someone wanted to embrace and extend these two scientists’ personal data projects.
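    The core of Wolfram-style personal analytics is simple: log a timestamp for every event, then aggregate by day for plotting. Here is a minimal sketch of that idea; the event types and dates are invented sample data, not anything from Wolfram’s actual system:

```python
# A minimal sketch of personal-analytics logging: append a timestamped
# record per event (an email sent, a pedometer reading) and tally
# counts per day for later plotting. The events are invented samples.

from collections import Counter
from datetime import datetime

log = []

def record(kind, when):
    """Append one timestamped event to the in-memory log."""
    log.append((when, kind))

record("email", datetime(2012, 3, 1, 9, 15))
record("email", datetime(2012, 3, 1, 22, 40))
record("steps", datetime(2012, 3, 2, 8, 5))

per_day = Counter(when.date() for when, kind in log)
for day, count in sorted(per_day.items()):
    print(day, count)
```

    The interesting part, as Wolfram shows, isn’t any single day’s tally but what decades of these per-day counts reveal when plotted against one another.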

  • Nginx is some serious voodoo for serving up websites. I’m bowled over by this level of performance from a consumer-level Intel box running Ubuntu. And from 200 connections to 1,000 connections, performance stays high without any big increases in latency. Amazing.

    The Low Latency Web

    A modern HTTP server running on somewhat recent hardware is capable of servicing a huge number of requests with very low latency. Here’s a plot showing requests per second vs. number of concurrent connections for the default index.html page included with nginx 1.0.14.


    With this particular hardware & software combination the server quickly reaches over 500,000 requests/sec and sustains that with gradually increasing latency. Even at 1,000 concurrent connections, each requesting the page as quickly as possible, latency is only around 1.5ms.

    The plot shows the average requests/sec and per-request latency of 3 runs of wrk -t 10 -c N -r 10m http://localhost:8080/index.html where N = number of connections. The load generator is wrk, a scalable HTTP benchmarking tool.

    Software

    The OS is Ubuntu 11.10 running Linux 3.0.0-16-generic #29-Ubuntu SMP Tue Feb 14 12:48:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux. The following kernel parameters were changed to increase…

    View original post 149 more words
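    Those throughput and latency figures hang together via Little’s Law (concurrent requests ≈ throughput × latency). A quick sanity check using the numbers quoted above, just arithmetic, no benchmarking required:

```python
# Sanity check of the nginx benchmark via Little's Law:
# concurrency = throughput * latency, so the implied per-request
# latency at full load is concurrency / throughput.

concurrency = 1000        # concurrent connections in the test
throughput = 500000       # requests per second reported

latency_ms = concurrency / float(throughput) * 1000.0
print("Implied latency: %.1f ms" % latency_ms)
```

    The implied 2 ms is in the same ballpark as the roughly 1.5 ms the post reports, which suggests the server is keeping its internal queues short rather than letting requests pile up.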

  • Raspberry Pi Hits a Slight Manufacturing Delay

    Extract from Raspberry Pi board at Tr... (Image via Wikipedia)

    The $35 Raspberry Pi “Model B” is the board of choice to ship out to consumers first. It contains two USB ports, 256 MB of RAM, an Ethernet port and a 700 MHz Broadcom BCM2835 SoC. The Videocore 4 GPU within the SoC is roughly equivalent to the original Xbox’s level of performance, providing Open GL ES 2.0, hardware-accelerated OpenVG, and 1080p30 H.264 high-profile decode.

    via Raspberry Pi Hits a Slight Manufacturing Delay.

    Raspberry Pi boards are on the way and the components list is still pretty impressive for $35 USD. Not bad, given they had a manufacturing delay. The re-worked boards should ship out as a second batch once they have been tested fully. It also appears all the other necessary infrastructure is slowly falling into place to help create a rich environment for curious and casually interested purchasers of the Raspberry Pi. For instance let’s look at the Fedora remixes for Raspberry Pi.

    A remix in the open source software community refers to a distribution of an OS that can run without compiling on a particular chip architecture, whether it be the Raspberry Pi’s Broadcom chip or an Intel x86 variety. In addition to the OS, a number of other pre-configured applications will be included so that you can start using the computer right away instead of having to download lots of apps. The best part of this is not only the time savings but the lowering of the threshold for less technical users. Also of note are the particular desktop environments chosen for the Fedora remixes, LXDE and XFCE, both noted for being less resource-intensive and smaller in physical size. The documentation on the Fedora website indicates these two are geared for older, less capable, less powerful computers that you would still like to use around the house. And for a Raspberry Pi user, getting a tuned OS specifically compiled for your CPU and ready to go is a big boon.

    What’s even more encouraging is the potential for a Raspberry Pi community to begin optimizing and developing a new range of apps specifically geared towards this new computer architecture. I know the Fedora project’s Yum is a great software package manager using the RPM format for adding and removing software components as things change. And having a Yum app geared specifically for Raspberry Pi users might give a more App Store-like experience to the more casual users interested in dabbling. Right now there’s a group at Seneca College in Toronto, CA doing work on an app store-like application that would facilitate the process of discovering, downloading and trying out different software pre-compiled for the Raspberry Pi computer.

    Logo for Raspberry Pi (Raspberry Pi project)

    Broadcom (Image via Wikipedia)
  • AMD Snatches New-Age Server Maker From Under Intel | Wired Enterprise | Wired.com

    Image representing AMD (via CrunchBase)

    Chip designer and chief Intel rival AMD has signed an agreement to acquire SeaMicro, a Silicon Valley startup that seeks to save power and space by building servers from hundreds of low-power processors.

    via AMD Snatches New-Age Server Maker From Under Intel | Wired Enterprise | Wired.com.

    It was bound to happen eventually, I guess: SeaMicro has been acquired by AMD. We’ll see what happens as a result, since SeaMicro has been a customer of Intel’s Atom chips and, most recently, Xeon server chips as well. I have no idea where this is going or what AMD intends to do, but hopefully this won’t scare off any current or near-future customers.

    SeaMicro’s competitive advantage has been, and will continue to be, the development work it performed on the custom ASIC chip used in all its systems. That bit of intellectual property was in essence the reason AMD decided to acquire SeaMicro, and it will hopefully give AMD an engineering advantage in systems it brings to market in the future for large-scale data centers.

    While this is all pretty cool technology, I think that SeaMicro’s best move was to design its ASIC so that it could take virtually any common CPU. In fact, SeaMicro’s last big announcement introduced its SM10000-EX option, which uses low-power, quad-core Xeon processors to more than double compute performance while still keeping the high density, low-power characteristics of its siblings.

    via SeaMicro acquisition: A game-changer for AMD • The Register.

    So there you have it: Wired and The Register are reporting the whole transaction pretty positively. On the surface it looks to be a win for AMD, as it can design new server products and get them to market quickly using the SeaMicro ASIC as a key ingredient. SeaMicro can still service its current customers, and eventually AMD can upsell or upgrade them as needed to keep the ball rolling. And with AMD’s Fusion architecture marrying GPUs with CPU cores, who knows what cool new servers might be possible? But as usual the nay-sayers, the spreaders of Fear, Uncertainty and Doubt, have questioned the value of SeaMicro and its original product, the SM10000.

    Diane Bryant, the general manager of Intel’s data center and connected systems group, had this to say at a press conference for the launch of new Xeon processors, when asked about SeaMicro’s attempt to interest Intel in buying out the company: “We looked at the fabric and we told them thereafter that we weren’t even interested in the fabric.” To Intel there’s nothing special enough in SeaMicro to warrant buying the company. Furthermore, Bryant told Wired.com:

    “…Intel has its own fabric plans. It just isn’t ready to talk about them yet. “We believe we have a compelling solution; we believe we have a great road map,” she said. “We just didn’t feel that the solution that SeaMicro was offering was superior.”

    This is a move straight out of Microsoft’s marketing department circa 1992, where they would pre-announce a product that never shipped and was barely developed beyond a prototype stage. If Intel were really working on this as a new product offering, you would have seen an announcement by now, rather than a vague, tangential reference that appears more like a parting shot than a strategic direction. So I will be watching intently in the coming months, and years if needed, to see what Intel ‘fabric technology’, if any, makes its way from the research lab to the development lab and into final shipping product. Don’t be surprised if this is Intel attempting to undermine AMD’s choice to purchase SeaMicro. Likewise, Forbes.com later reported, citing a SeaMicro representative, that the company had not tried to encourage Intel to acquire it. It is anyone’s guess who is really correct and being 100% honest in their recollections. However, I am still betting on SeaMicro’s long-term strategy of pursuing low-power, ultra-dense, massively parallel servers. It is an idea whose time has come.

    Image representing Intel (via CrunchBase)
  • Hope for a Tool-Less Tomorrow | iFixit.org

    I’ve seen the future, and not only does it work, it works without tools. It’s moddable, repairable, and upgradeable. Its pieces slide in and out of place with hand force. Its lid lifts open and eases shut. It’s as sleek as an Apple product, without buried components or proprietary screws.

    via Hope for a Tool-Less Tomorrow | iFixit.org.

    HP Z1 workstation

    Oh how I wish this were true today for Apple. I say this as a recent purchaser of an Apple refurbished 27″ iMac. My logic and reasoning for going with refurbished over new was based on a few bits of knowledge gained reading Macintosh weblogs. The rumors I read included the idea that Apple repaired items are strenuously tested before being re-sold. In some cases returned items are not even broken; they are returns based on buyer’s remorse or cosmetic problems. So there’s a good chance the logic board and LCD have no problems. Reading back this summer, just after the launch of Mac OS X 10.7 (Lion), I read about lots of problems with crashes of 27″ iMacs. So I figured a safer bet would be to get a 21″ iMac. But then I started thinking about Flash-based solid state disks. And looking at the prohibitively high prices Apple charges for its installed SSDs, I decided I needed something that I could upgrade myself.

    But as you may know, iMacs have never been, and continue not to be, user-upgradable. However, that’s not to say people haven’t tried, or succeeded, in upgrading their own iMacs over the years. Enter the aftermarket for SSD upgrades. Apple has attempted to zig and zag as the hobbyists swap in newer components like larger hard drives and SSDs. Witness the Apple temperature sensor on the boot drive in the 27″ iMac, where they have added a sensor wire to measure the internal heat of the hard drive. As the Mac monitors this signal it will rev up the internal fans. Any iMac hobbyist attempting to swap in a 3TByte or 4TByte drive for the stock Apple 2TByte drive will suffer the inevitable panic mode of the iMac, as it cannot see its temperature sensor (these replacement drives don’t have the sensor built in) and assumes the worst. They say the noise is deafening when those fans speed up, and they never, EVER slow down. This is Apple’s attempt to insure sanctity through obscurity. No one is allowed to mod or repair, and that includes anyone foolish enough to attempt to swap the internal hard drive on their iMac.

    But, thank goodness, there’s a workaround: the 27″ iMac’s internal case is just large enough to install a secondary hard drive. You can slip a 2.5″ SSD into that chassis. You just gotta know how to open it up. And therein lies the theme of this essay: the user-upgradable, user-friendly computer case design. The antithesis of this idea IS the 27″ iMac, if you read the steps from iFixit and the photographer Brian Tobey. Both of these websites make clear the excruciating minutiae of finding and disconnecting the myriad miniature cables that connect the logic board to the computer. Without going through those steps one cannot gain access to the spare SATA connectors facing towards the back of the iMac case. I decided to go through these steps to add an SSD to my iMac right after it was purchased. I thought Brian Tobey’s directions were slightly better and had more visuals pertinent to the way I was working on the iMac as I opened up the case.

    It is, in a word, a non-trivial task. You need the right tools, the right screwdrivers. In fact you even need suction cups! (thank you, Apple). However, there is another way, even for so-called all-in-one style computer designs like the iMac. It’s a new product from Hewlett-Packard targeted at the desktop engineering and design crowd: an all-in-one workstation that is user-upgradable, and it’s all done without any tools at all. Let me repeat that last bit again, it is a ‘tool-less’ design. What, you may ask, is a tool-less design? I hadn’t heard of it either until I read this article on iFixit. And after having followed the links to the NewEgg.com website to see what other items were tagged as ‘tool-less’, I began to remember some hints and stabs at this I had seen in some Dell Optiplex desktops some years back. The ‘carrier’ brackets for the CD/DVD and HDD drive bays were green plastic rails that simply ‘pushed’ into the sides of the drive (no screws necessary).

    And when I consider that my experience working with the 27″ iMac actually went pretty well (it booted up the first time, no problems) after all I had done to it, I count myself very lucky. But it could have been better. And there’s no reason it cannot be better for EVERYONE. It also made me think of the XO laptop (One Laptop Per Child project), and I wondered how tool-less that laptop might be. How accessible are any of these designs? And it also made me recall the Facebook story I recently commented on, about how Facebook is designing its own hard drive storage units to make them easier to maintain (no little screws to get lost and dropped onto a fully powered motherboard to short things out). So I have much more hope than when I first embarked on the do-it-yourself journey of upgrading my iMac. Tool-less design today, tool-less design tomorrow and tool-less design forever.

    Image representing Hewlett-Packard (via CrunchBase)
  • Flash DOOMED to drive itself off a cliff – boffins • The Register

    A flash memory cell (Image via Wikipedia)

    Microsoft and University of California San Diego researchers have said flash has a bleak future because smaller and more densely packed circuits on the chips’ silicon will make it too slow and unreliable. Enterprise flash cost/bit will stagnate and the cutting edge that is flash will become a blunted blade.

    via Flash DOOMED to drive itself off a cliff – boffins • The Register. As reported by Chris Mellor for The Register (http://www.theregister.co.uk/)

    More information regarding semiconductor manufacturers’ rumors and speculation of a wall being hit in the shrinking down of Flash memory chips (see this link to the previous Carpetbomber article from Dec. 15). This report has a more definitive ring to it, as actual data has been collected and projections made based on models of that data. The trend according to these researchers is lower performance due to increasingly bad error rates and signaling on the chip itself. Higher-density chips = lower performance per memory cell.

    To hedge against this dark future for NAND flash memory, companies are attempting to develop novel, and in some cases exotic, technologies. IBM has “racetrack memory”, Hewlett-Packard and Hynix have the memristor, and the list goes on. Nobody in the industry has any idea what comes next, so bets are being placed all over the map. My advice to anyone reading this article is: do not choose a winner until it has won. I say this as someone who has watched a number of technologies fight for supremacy in the market: Sony Betamax versus JVC VHS, HD-DVD versus Blu-ray, LCD versus plasma display panels, etc. I will admit these battles can be waged over long periods, which can make it harder to tell who has won, though the time spans seem to have gotten shorter in more recent battles. And who is to say Blu-ray has really been adopted widely enough to call it the be-all and end-all, when DVD and CD discs are both still widely used as recordable media? Just know that to go any further in improving the cost vs. performance ratio, NAND will need to be forsaken to get to the next technological benchmark in high-speed, random access, long-term, durable storage media.

    Things to look out for as the NAND bandwagon slows down are triple-level memory cells, or worse yet quadruple-level cells. These are not going to be the big saviors the average consumer hopes they will be. Flash memory that gangs up bits per memory cell has higher error rates at the beginning, and even higher rates over time. The number of cells assigned for being ‘over-provisioned’ will be so high as to negate the cost benefit of choosing the higher-density memory cells. Also being touted as a way to stave off the end of the road are error-correcting circuits and digital signal processors onboard the chips and controllers. As the age of the chip begins to affect its reliability, more statistical quality-control techniques are applied to offset the losses of signal quality in the chip. This is a technique used today by at least one manufacturer (Intel), but how widely and how successfully it can be adopted is another question altogether. It would seem each memory manufacturer has its own culture and, as a result, its own technique for fixing the problem. Whoever has the best marketing and sales campaigns will, as history has shown, be the winner.
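    To see how heavier over-provisioning can eat the density gain, here is an illustrative calculation. The raw capacity, the 1.5x density factor, and the reserve percentages are assumptions chosen for the sake of the arithmetic, not vendor figures:

```python
# Illustrative only: heavier over-provisioning can cancel much of the
# density gain from packing more bits per cell. Reserve fractions and
# raw capacity below are assumed, not vendor data.

raw_gb = 128.0                      # raw NAND on a hypothetical drive

def usable(raw, reserve):
    """Capacity left after setting aside spare cells for wear/errors."""
    return raw * (1.0 - reserve)

mlc_usable = usable(raw_gb, 0.07)        # 2 bits/cell, modest reserve
tlc_usable = usable(raw_gb * 1.5, 0.28)  # 3 bits/cell, heavy reserve

gain = tlc_usable / mlc_usable
print("MLC usable: %.1f GB" % mlc_usable)
print("TLC usable: %.1f GB" % tlc_usable)
print("Usable gain: %.0f%%" % ((gain - 1) * 100))
```

    Under these assumed numbers a 50% raw density advantage shrinks to roughly a 16% usable advantage, before counting the slower programming and extra ECC the denser cells require, which is exactly the cost-benefit squeeze described above.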

  • I don’t know how accurate or specific this criticism of Apple’s Press conference from Wednesday is, but many people are commenting on it. I contributed a comment as well.