Blog

  • Picture This: Hosted Lifebits in the Personal Cloud | Cloudline | Wired.com

    Jon Udell (Photo credit: Wikipedia)

    It’s not just photos. I want the same for my whole expanding set of digital objects, including medical and financial records, commercial transactions, personal correspondence, home energy use data, you name it. I want all of my lifebits to be hosted in the cloud under my control. Is that feasible? Technically there are huge challenges, but they’re good ones, the kind that will spawn new businesses.

    via Jon Udell, Picture This: Hosted Lifebits in the Personal Cloud | Cloudline | Wired.com.

    From Gordon Bell’s MyLifeBits, to most recently Stephen Wolfram’s personal collection of data, and now to Jon Udell: witness the ever-expanding universe of personal data. Thinking about Gordon Bell now, I think the emphasis from Microsoft Research was always on video and pictures and ‘recollecting’ what happened in any given day. Stephen Wolfram’s emphasis was not so much on collecting the data as on analyzing it after the fact and watching patterns emerge. Now with Jon Udell we get a nice advancing of the art by looking at possible end-game scenarios. So you have collected a mass of lifebits, now what?

    Who’s going to manage this thing? Is anyone going to offer a service to help manage it? All great questions, because the disparate form social-networking lifebits take versus others, like the health and ‘performance’ lifebits Stephen Wolfram collects and maintains for himself, points up a big gap in the cloud services sector. Ripe pickings for anyone of an entrepreneurial bent to step in and bootstrap a service like the one Jon Udell proposes. If someone were really smart they could get it up and running cheaply on Amazon Web Services (AWS) until it became too cost- and performance-prohibitive to keep it hosted there. That would allow an initial foray to test the waters, gauge the size and tastes of the market, and adapt the hosted-lifebits service to anyone willing to pay up. That might just be a recipe for success.

  • Apple A5 from the Apple TV 3 – and an iPad 2! » Chipworks

    The two Apple A5 CPUs in question (image: 9to5mac)

    Not only did Apple roll out a new processor that was not what it was advertised to be, but it also snuck in a new process technology for the manufacturing of this new A5. The previous generation A5, part number APL0498, was manufactured on Samsung Semiconductors’ 45 nm LP CMOS process. This new A5 processor is manufactured on Samsung’s new 32 nm high-k metal gate, gate first, LP CMOS process technology.

    via Update – Apple A5 from the Apple TV 3 – and an iPad 2! » Technology Blog » Chipworks.

    Check out the article at the Chipworks website; just follow the link above. They have a great rundown of what they discovered in their investigation of the most recent Apple A5 chips. These chips are appearing in the newly revised Apple TV but have also appeared in more recently manufactured iPad 2 units as well. There was some surprise that Apple didn’t adopt a shrunk-down die for the A5X used in the iPad 3. Most of the work went into the integrated graphics of the A5X, as it was driving a much higher-rez ‘Retina’ display.

    Very, very sneaky of Apple to slip the next-generation smaller die size into a ‘hobby’ product like the Apple TV. This is proof positive that when someone says something is a hobby, it isn’t necessarily so. I for one am both heartened and intrigued that Apple is attempting to get a 32nm processor out there on its low-power, low-cost products. Now that this part has also been discovered in more recently built iPad 2 units, I wonder what the heat and battery-life differences are versus an early-model iPad 2 using the original A5, part number APL0498?

    Keeping up with the Samsungs is all-important these days, and Apple has got to keep its CPU die rulings in step with the next generation from the chip fabrication giants. Intel is pushing 22nm, Samsung has been on 32nm for a while, and then there’s Apple sitting 1 or 2 generations behind the cutting edge. I fear this may have contributed to some of the heat issues that Consumer Reports first brought to people’s attention weeks after the introduction of the iPad 3. With any luck and process-engineering speed, the A5X can jump ship to the 32nm fabrication line at Samsung sooner rather than later.
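
    To put rough numbers on why the die ruling matters, here is a back-of-the-envelope sketch of the ideal area shrink between process nodes. It assumes perfect linear scaling, which real chips never quite achieve, so treat the results as upper bounds rather than anything Chipworks measured.

    ```python
    # Ideal die-area scaling between process nodes (a rough upper bound;
    # real shrinks rarely achieve the full theoretical reduction).

    def ideal_area_scale(old_nm: float, new_nm: float) -> float:
        """Fraction of the original die area after an ideal linear shrink."""
        return (new_nm / old_nm) ** 2

    for old, new in [(45, 32), (32, 22)]:
        scale = ideal_area_scale(old, new)
        print(f"{old}nm -> {new}nm: ~{scale:.0%} of the original die area")
    ```

    On those idealized terms, a move from 45nm to 32nm roughly halves the die area (and, ideally, much of the power with it), which is why slipping the 32nm A5 into the Apple TV and newer iPad 2 units is such an interesting tell.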

  • Always good suggestions on the WordPress News blog. I like that they try to build the community.

  • Nvidia: No magic compilers for HPC coprocessors • The Register

    Nvidia (image via CrunchBase)

    And with clock speeds topped out and electricity use and cooling being the big limiting issue, Scott says that an exaflops machine running at a very modest 1GHz will require one billion-way parallelism, and parallelism in all subsystems to keep those threads humming.

    via Nvidia: No magic compilers for HPC coprocessors • The Register.

    Interesting write-up of a blog entry from Nvidia’s super-computing chief, Steve Scott, including his thoughts on scaling up to an exascale supercomputer. I’m surprised at how power-efficient a GPU is for floating point operations, and I’m amazed at these companies’ ability to measure power consumption down to the single-operation level. Microjoules and picojoules are worlds apart from one another, and here’s the illustration:

    1 microjoule is one millionth of a joule, or 1×10^-6 J (six decimal places), whereas 1 picojoule is 1×10^-12 J, twice as many decimal places. So that is a HUGE difference: 6 orders of magnitude in efficiency from an electrical-consumption standpoint. Nvidia’s Steve Scott estimates that to get to exascale supercomputers, any hybrid CPU/GPU machine would need GPUs with one order of magnitude higher efficiency in joules per floating point operation (FLOP), or roughly 1×10^-13 J, one whole decimal place better. To borrow a cliché, supercomputer manufacturers have their work cut out for them. The way forward is efficiency, and the GPU has the edge per operation; all they need do is improve the efficiency by that one decimal place to get closer to the exascale league of super-computing.
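
    To see why that one decimal place matters so much, here is a back-of-the-envelope sketch translating energy per FLOP into sustained power draw for a 1 exaFLOPS machine. It simply takes the figures quoted above at face value; it is not based on any vendor’s actual design.

    ```python
    # Power draw of a sustained 1 exaFLOPS machine at various energy-per-FLOP
    # levels. Joules per operation times operations per second gives watts.

    EXAFLOPS = 1e18  # floating point operations per second

    scenarios = [
        ("1 microjoule per FLOP", 1e-6),
        ("1 picojoule per FLOP", 1e-12),
        ("0.1 picojoule per FLOP (the cited target)", 1e-13),
    ]

    for label, joules_per_flop in scenarios:
        watts = EXAFLOPS * joules_per_flop
        print(f"{label}: {watts / 1e6:,.1f} MW sustained")
    ```

    At a microjoule per FLOP the machine would need a terawatt; at a picojoule it is about a megawatt, and the cited target brings it down to a tenth of that. The arithmetic is trivial, but it makes the six-orders-of-magnitude gap very concrete.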

    Why is exascale important to the scientific community at large? In some fields there are never enough cycles per second to satisfy the scale of the computations being done. Models of systems can be created, but the simulations they provide may not have enough fine-grained ‘detail’. A weather model simulating a period of time in the future needs to know the current conditions before it can start the calculation, but the ‘resolution’, the fine-grained detail of those conditions, is what limits the accuracy over time, especially when small errors get amplified by each successive cycle of calculation. One way to limit the damage from these small errors is to increase the resolution, shrinking the land area over which you assign a ‘current condition’. So instead of 10 miles of resolution (meaning each block on the face of the planet is 10 miles square), you switch to 1-mile resolution. Any error in a one-mile-square patch is less likely to cause huge errors in the future weather prediction. But now you have to calculate 100x the number of squares (10x along each side) compared to the previous best model at 10 miles of resolution. That’s probably the easiest way to see how demands on the computer increase as people increase the resolution of their weather prediction models. But it’s not limited to weather. It could be used to simulate a nuclear weapon aging over time. Or it could be used to decrypt foreign messages intercepted by NSA satellites, where the speed of the computer would allow more brute-force attempts at decrypting any message they capture.
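
    The resolution arithmetic is easy to check. Here is a small sketch of how the cell count grows when you refine the grid; the surface area is an illustrative stand-in (very roughly the contiguous United States), not a figure from the article.

    ```python
    # How the number of grid cells in a weather model grows as resolution improves.
    # Halving the grid spacing quadruples the cell count; a 10x refinement means 100x.

    def surface_cells(area_sq_miles: float, resolution_miles: float) -> float:
        """Number of square cells of the given size needed to tile the area."""
        return area_sq_miles / (resolution_miles ** 2)

    AREA = 3_100_000  # square miles, roughly the contiguous United States

    coarse = surface_cells(AREA, 10)  # 10-mile grid
    fine = surface_cells(AREA, 1)     # 1-mile grid

    print(f"10-mile grid: {coarse:,.0f} cells")
    print(f" 1-mile grid: {fine:,.0f} cells ({fine / coarse:.0f}x more)")
    ```

    And that 100x is only the horizontal story: finer grids usually force smaller time steps too, so the real computational cost climbs even faster.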

    Nvidia Riva TNT2 M64 GPU (Photo credit: Wikipedia)

    In spite of all the gains to be had with an exascale computer, you still have to program the bloody thing to work with your simulation. And that’s really the gist of this article: no free lunch in High Performance Computing. The level of knowledge of the hardware required to get anything like the maximum theoretical speed is a lot higher than one would think. There’s no magic bullet or ‘re-compile’ button that’s going to get your old software running smoothly on an exascale computer. More likely you and a team of the smartest scientists are going to work for years to tailor your simulation to the hardware you want to run it on. And therein lies the rub: the hardware alone isn’t going to get you the extra performance.

  • What went wrong with the Hubble Space Telescope and what managers can learn from it – leadership, collaboration – IT Services – Techworld

    Astronauts Steven L. Smith and John M. Grunsfeld appear as small figures in this wide scene photographed during extravehicular activity (EVA). On this space walk they are replacing gyroscopes, contained in rate sensor units (RSU), inside the Hubble Space Telescope. A wide expanse of waters, partially covered by clouds, provides the backdrop for the photograph. (Photo credit: Wikipedia)

    “There’s a bunch of research I’ve come across in this work, where people say that the social context is a 78-80 per cent determinant of performance; individual abilities are 10 per cent. So why do we make this mistake? Because we spend all of these years in higher education being trained that it’s about individual abilities.”

    via What went wrong with the Hubble Space Telescope and what managers can learn from it – leadership, collaboration – IT Services – Techworld.

    Former NASA astrophysics director Charlie Pellerin is now a consultant on how to prevent failures on teams charged with carrying out large-scale, seemingly impossible projects. He faced two biggies while at NASA: the Challenger explosion and, subsequent to that and more directly, the Hubble Space Telescope mirror failure. In that time he tried to really look at the source of the failures rather than just let the investigative committees do all the work. And what he’s decided is that culture is a bigger part of the chain of failure than technical ability.

    Which leads me to ask: how often does this happen in other circumstances as well? I’ve seen the PBS NOVA program on the 747 runway collision in Tenerife back in 1977. At that time the KLM airliner more or less started taking off before the Pan American 747 had taxied off of the runway. In spite of all the protocols and controls in place to manage planes on the ground, the captain of the KLM 747 made the decision to take off not once, but TWICE! The first time it happened his co-pilot corrected him, saying they didn’t have clearance from the tower. The second time, the co-pilot and flight engineer both sat back and let the whole thing unfold, to their own detriment. No one survived in that KLM 747 after it crashed into the Pan American 747 and bounced down the runway. In the article I link to above there’s an anecdote Charlie Pellerin relates about a number of Korean Air crashes that occurred in the 1990s. Similarly, it was the cockpit ‘culture’ that was leading to the bad decisions being made, resulting in the loss of the airplane and the passengers on board.

    Some people like to call it ‘top-down’ management, where everyone defers to the person recognized as the one in charge. Worse yet, sometimes the person in charge doesn’t even realize this. They go on about their decision-making process never once thinking people are restraining themselves, holding back questions. The danger is that once this pattern is in place, any mistake by the person in charge gets amplified over time. In Charlie Pellerin’s judgement, modern airliners are designed to be run by a team who SHARE the responsibilities of conducting the airplane. And while the planes themselves have many safety systems in place to make things run smoothly, the designers always assume a TEAM. But when you have a hierarchy of people in charge and people who defer to them, the TEAM as such doesn’t exist, and you have broken the primary design principle of the aircraft’s designer. No TEAM, no plane, and there are many examples that show this, not just in the airline accident investigations.

    Polishing the Hubble Mirror at Perkin-Elmer

    In the case of the Hubble Telescope mirror, things broke down when a simple calibration step was rushed. The sub-contractor in charge of measuring the point of focus on the mirror followed the procedure as given to him, except for one step, and skipping it threw the whole calibration off. The step was simply to apply spray paint to two end caps that would then be placed onto a very delicately measured and finely cut metal rod. The black spray paint was meant to act as a non-reflective surface, exposing only a very small bit of the rod end to a laser that would measure the distance to the focus point. What happened instead? Because the whole telescope program was going over budget and was constantly delayed, all sub-contractors were pressured to ‘hurry up’. When the guy responsible for this calibration step couldn’t find matte black spray paint to put on the end caps, he improvised (like a true engineer). He got black electrical tape, wrapped it over the end of the cap, cut a hole with the tip of an X-Acto knife and began his calibration step.

    But that one detail was what put the whole Hubble Space Telescope in jeopardy. In the rush to get this step done, the X-Acto knife nicked a bit off the metal end cap and a small, shiny metal burr was created. Almost too small to see, the burr poked out into the hole cut into the black electrical tape for the laser to shine through. When the engineer ran the calibration, the small burr reflected light back to the sensors measuring the distance. The burr was only 1mm off the polished surface of the calibration rod, and that 1mm offset was registered as ‘in spec’, so the full distance to the focus point had 1mm added to it. Considering how accurate a mirror has to be for telescope work, and how long the Hubble mirror spent being ground and polished, 1mm is the equivalent of a mile in the real world. And this was the source of the ‘blur’ in the Hubble Telescope when it was first turned on after being deployed by the Space Shuttle. The hurry-up-and-get-it-done, we’re-behind-schedule culture jeopardized a billion-dollar space telescope mission that was already over budget and way behind schedule.

    All these cautionary tales reiterate the over-arching theme: the big failures are not technical. These failures are cultural, and everyone has the capacity to do better every chance they get. I encourage anyone and everyone reading this article to read the original interview with Charlie Pellerin, as he’s got a lot to say on this subject, along with some fixes that can be applied to avoid the fire next time. Because statistically speaking, there will always be a next time.

    KLM’s 747-406 PH-BFW – nose (Photo credit: caribb)
  • Apple A5X CPU in Review

    Apple Inc. (Photo credit: marcopako)

    A meta-analysis of the Apple A5X system on chip

    (from the currently shipping 3rd Gen iPad)

    New iPad’s A5X beats NVIDIA Tegra 3 in some tests (MacNN|Electronista)

    Apple’s A5X Die (and Size?) Revealed (Anandtech.com)

    Chip analysis reveals subtle changes to new iPad innards (AppleInsider-quoting Anandtech)

    Apple A5X Die Size Measured: 162.94mm^2, Samsung 45nm LP Confirmed (Update from Anandtech based on a more technical analysis of the chip)

    Reading through all the hubbub and hand-waving from the technology ‘teardown’ press outlets, one would have expected a bigger leap from Apple’s chip designers. A fairly large chip sporting an enormous graphics processor integrated into the die is what Apple came up with to help boost itself to the next-higher-rez display (the so-called Retina Display). The design rule is still a pretty conservative 45nm (rather than pushing the envelope by going to 32nm or smaller to bring down the power requirements). Apple similarly had to boost its battery capacity, to almost 2X that of the first-gen iPad, to feed this power-hungry pixel demon. So for roughly the ‘same’ battery life (10 hours of reserve power), you get the higher-rez display. But a bigger chip and a higher-rez display will add up to some extra heat being generated, generally speaking. Which leads us to a controversy.

    Given this knowledge, there has been a recent back-and-forth argument over the thermal design point of the 3rd generation iPad. Consumer Reports published an online article saying the power/heat dissipation was much higher than previous-generation iPads, and included some thermal photographs indicating the hot spots on the back of the device and their relative temperatures. The iPad doesn’t run hotter than a lot of other handheld devices (say, Android tablets), but it does run hotter than, say, an iPod touch. As Apple points out, that has ALWAYS been the case. So you gain some things, you give up some things, and still Apple is the market leader in this form factor, years ahead of the competition. And now the tempest in the teapot is winding down as Consumer Reports (via LATimes.com) has rated the 3rd Gen iPad as its no. 1 tablet on the market (big surprise). So while they aren’t willing to retract their original claim of high heat, they are willing to say it doesn’t count as ’cause for concern’. So you be the judge when you try out the iPad in the Apple Store. Run it through its paces; a full-screen video or two should heat up the GPU and CPU enough to get the electrons really racing through the device.

    The Apple A5X, the new system-on-chip used in the Apple 3rd generation iPad
  • ARM Wants to Put the Internet in Your Umbrella | Wired Enterprise | Wired.com

    Wired Magazine (image via CrunchBase)

    On Tuesday, the company unveiled its new ARM Cortex-M0+ processor, a low-power chip designed to connect non-PC electronics and smart sensors across the home and office.

    Previous iterations of the Cortex family of chips had the same goal, but with the new chip, ARM claims much greater power savings. According to the company, the 32-bit chip consumes just nine microamps per megahertz, an impressively low amount even for an 8- or 16-bit chip.

    via ARM Wants to Put the Internet in Your Umbrella | Wired Enterprise | Wired.com.

    Lower power means a very conservative power budget, especially for devices connected to the network. And 32 bits is nothing to sneeze at, considering most manufacturers would pick a 16- or 8-bit chip to bring down the cost and the power budget too. According to this article the degree of power savings is so great, in fact, that in sleep mode the chip consumes almost no power at all. For this market Moore’s Law is paying off big benefits, especially given the bonus of a 32-bit core. So not only do you get a very small, lower-power CPU, you also get a much more diverse range of software that could run on it and take advantage of a larger memory address space as well. I think non-PC electronics could include things as simple as web cams or cellphone cameras. Can you imagine a CMOS camera chip with a whole 32-bit CPU built in? Makes you wonder not just what it could do, but what ELSE it could do, right?
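
    That quoted 9 microamps per megahertz invites a back-of-the-envelope battery estimate. The sketch below takes the figure at face value; the clock speed, coin-cell capacity and duty cycle are purely illustrative assumptions, not specs from ARM.

    ```python
    # Rough battery-life estimate for a duty-cycled sensor node built around a
    # core drawing ~9 uA/MHz while active. All inputs besides that figure are
    # illustrative assumptions.

    UA_PER_MHZ = 9       # active current per MHz (the figure quoted above)
    CLOCK_MHZ = 48       # assumed clock speed
    CELL_MAH = 220       # assumed CR2032-class coin cell capacity
    DUTY_CYCLE = 0.01    # awake 1% of the time; sleep draw treated as negligible

    active_ma = UA_PER_MHZ * CLOCK_MHZ / 1000
    average_ma = active_ma * DUTY_CYCLE
    hours = CELL_MAH / average_ma

    print(f"Active draw: {active_ma:.2f} mA, duty-cycled average: {average_ma * 1000:.1f} uA")
    print(f"Estimated battery life: {hours:,.0f} hours (~{hours / 24 / 365:.1f} years)")
    ```

    Under those assumptions a single coin cell lasts on the order of years, which is exactly the regime where ‘put a CPU in the umbrella’ starts to sound plausible rather than silly.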

    The term ‘Internet of Things’ is bandied about quite a bit as people dream about CPUs and networks connecting ALL the things. And what would be the outcome if your umbrella was connected to the Internet? What if ALL the umbrellas were connected? You could log all kinds of data: whether it was opened or closed, what the ambient temperature is. It would potentially be like a portable weather station for anyone aggregating all the logged data. And the list goes on and on. Instead of tire pressure monitors, why not also capture video of the tire as it is being used commuting to work? It could help measure tire wear and set up an appointment when you need to get a wheel alignment. It could determine how many times you hit potholes and suggest smoother alternate routes. That’s the kind of blue-sky, wide-open conjecture that is enabled by a 32-bit low/no-power CPU.

    Moore’s Law, The Fifth Paradigm. (Photo credit: Wikipedia)
  • Accidental Time Capsule: Moments from Computing in 1994 (from RWW)

    Byte Magazine is one of the reasons I’m here today, doing what I do. Every month, Byte set its sights on the bigger picture, a significant trend that might be far ahead or way far ahead. And in July 1994, Jon Udell (to this very day, among the most insightful people ever to sign his name to an article) was setting his sights on the inevitable convergence between the computer and the telephone.

    via Accidental Time Capsule: Moments from Computing in 1994, by Scott Fulton, ReadWriteWeb.

    Jon Udell (Photo credit: Wikipedia)

    I also liked Tom Halfhill, Jerry Pournelle, Steve Gilmore, and many other writers at Byte over the years too. I couldn’t agree more with Scott Fulton, as I am still a big fan of Jon Udell and any project he has worked on and documented. I can credit Jon Udell for getting me curious about weblogging, Radio Userland, WordPress, Flickr and del.icio.us (the social bookmarking website). And I have been watching his progress on a ‘calendar of public calendars’, the elmcity project. Jon’s attempting to catalog and build an aggregated list of calendars that publish standard feeds anyone can subscribe to. No need for automated emails filling a filtered mailbox; you just fire up a browser and read what’s posted. You find out what’s going on and just add the event to your calendar.
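
    In practice ‘subscribe to a calendar’ means pulling an iCalendar (.ics) feed and reading the events out of it. Here is a minimal sketch of that idea; it assumes the third-party requests and icalendar packages, and the feed URL is a hypothetical placeholder, not one of elmcity’s actual hubs.

    ```python
    # Fetch a public iCalendar feed and list its events.
    # FEED_URL is a hypothetical placeholder for any public .ics feed.

    import requests
    from icalendar import Calendar

    FEED_URL = "https://example.org/events.ics"

    resp = requests.get(FEED_URL, timeout=10)
    resp.raise_for_status()

    cal = Calendar.from_ical(resp.content)
    for event in cal.walk("VEVENT"):
        start = event.get("DTSTART").dt   # date or datetime of the event
        summary = event.get("SUMMARY")    # human-readable title
        print(f"{start}  {summary}")
    ```

    That is the whole trick: no email blasts, no PDFs, just a feed that any calendar client (or ten lines of script) can read and merge with others.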

    As Jon has discovered, the calendars exist and the events are there; they just aren’t evenly distributed yet (much like the future). So in his analysis of ‘what works’, Jon has found some sterling examples of calendar keeping and maintenance, some of which have popped up in interesting places, like public school systems. However, the biggest downfall of all events calendars is the all-too-common practice of taking Word documents, exporting them as PDF files and posting them to a website. THAT is the calendar for far too many organizations, and it fails utterly as a means of ‘discovering’ what’s going on.

    Suffice it to say, elmcity is a long-term organizing and curatorial effort that Jon is trying to get an informal network of like-minded people involved in. As different cities form calendar ‘hubs’, Jon is collecting them into larger networks so that you can search one spot, find out ‘what’s happening’, and then add those events to your own calendar in a very seamless and lightweight manner. I highly recommend following Jon’s weblog, as he’s got the same ability to explain and analyze these technologies that he excelled at while at Byte, and he continues to follow his bliss and curiosity about computers, networks and, more generally, technology.

  • Stephen Wolfram Blog : The Personal Analytics of My Life

    Publicity photo of Stephen Wolfram (Photo credit: Wikipedia)

    One day I’m sure everyone will routinely collect all sorts of data about themselves. But because I’ve been interested in data for a very long time, I started doing this long ago. I actually assumed lots of other people were doing it too, but apparently they were not. And so now I have what is probably one of the world’s largest collections of personal data.

    via Stephen Wolfram Blog : The Personal Analytics of My Life.

    Gordon Bell (Photo credit: Wikipedia)

    In some ways similar to Stephen Wolfram, Gordon Bell at Microsoft has engaged in an attempt to record his ‘LifeBits’, using a ‘wearable’ computer to record video and capture what goes on in his life. In my opinion, Stephen Wolfram has done Gordon Bell one better by collecting data over a much longer period, and across a much wider range, than Bell accomplished within the scope of LifeBits. Reading Wolfram’s summary of all his data plots is as interesting as seeing the plots themselves. There can be no doubt that Stephen Wolfram has always thought, and will continue to think, differently from most folks, and dare I say most scientists. Bravo!

    The biggest difference between MyLifeBits and Wolfram’s personal data collection is Wolfram’s emphasis on non-image-based data. The goal, it seems, for the Microsoft Research group is to fulfill the promise of Vannevar Bush’s old article “As We May Think”, printed in The Atlantic in July 1945. In that article Bush proposes a prototype of a more ‘visual computer’ that would act as a memory-recall and analytic-thinking aid. He named it the Memex.

    Gordon Bell and Jim Gemmell of Microsoft Research seemed to be focused on the novelty of a carried camera automatically taking pictures of the area immediately in front of it. This log of ‘what was seen’ was meant to help cement visual memory and recall. Gordon Bell had spent a long period of time digitizing “articles, books, cards, CDs, letters, memos, papers, photos, pictures, presentations, home movies, videotaped lectures, and voice recordings” and storing them digitally. This emphasis on visual data, if used properly, might be useful to some, but it is more a product of Gordon Bell’s own personal interest in seeing how much he could capture and then catalog after the fact.

    Stephen Wolfram’s data wasn’t even necessarily based on a ‘wearable computer’ the way MyLifeBits seems to be. Wolfram built a logging/capture system into the things he did daily on a computer, and even included data collected by a digital pedometer to measure the steps he took in a day. The plots of the data are most interesting in comparison to one another, especially given the length of time over which they were collected (a much bigger set than Gordon Bell’s LifeBits, I dare say). So maybe this points to another step forward in the evolution of lifebits? Wolfram’s data seems more useful in a lot of ways; he’s not as focused on memory and recall of any given day. But a synthesis of Wolfram’s data collection methods and analysis with Gordon Bell’s MyLifeBits capture of image data might be useful to a broader range of people, if someone wanted to embrace and extend these two scientists’ personal data projects.
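
    For a flavor of what Wolfram-style personal analytics looks like in practice, here is a minimal sketch that builds his famous ‘when do I send email’ histogram from a local mail archive. The mbox filename is a hypothetical placeholder; any exported mail archive in mbox format would do.

    ```python
    # Histogram of sent-mail times by hour of day, built from a local mbox archive.
    # "sent-mail.mbox" is a placeholder path, not a real file.

    import mailbox
    from collections import Counter
    from email.utils import parsedate_to_datetime

    ARCHIVE = "sent-mail.mbox"

    hours = Counter()
    for msg in mailbox.mbox(ARCHIVE):
        date_header = msg.get("Date")
        if not date_header:
            continue
        try:
            hours[parsedate_to_datetime(date_header).hour] += 1
        except (TypeError, ValueError):
            continue  # skip messages with malformed Date headers

    for hour in range(24):
        print(f"{hour:02d}:00  {'#' * hours[hour]}")
    ```

    Swap the mbox for a pedometer log, a browser history or a keystroke counter and the same pattern applies: capture continuously, analyze later, and watch the rhythms of a life emerge from the plots.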

  • Nginx is some serious voodoo for serving up websites. I’m bowled over by this level of performance for a consumer level Intel box running Ubuntu. And from 200 connections to 1000 connections performance stays high without any big increases in latency. Amazing.

    The Low Latency Web

    A modern HTTP server running on somewhat recent hardware is capable of servicing a huge number of requests with very low latency. Here’s a plot showing requests per second vs. number of concurrent connections for the default index.html page included with nginx 1.0.14.


    With this particular hardware & software combination the server quickly reaches over 500,000 requests/sec and sustains that with gradually increasing latency. Even at 1,000 concurrent connections, each requesting the page as quickly as possible, latency is only around 1.5ms.

    The plot shows the average requests/sec and per-request latency of 3 runs of wrk -t 10 -c N -r 10m http://localhost:8080/index.html where N = number of connections. The load generator is wrk, a scalable HTTP benchmarking tool.

    Software

    The OS is Ubuntu 11.10 running Linux 3.0.0-16-generic #29-Ubuntu SMP Tue Feb 14 12:48:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux. The following kernel parameters were changed to increase…

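    A quick sanity check of the numbers quoted above, using Little’s Law (concurrent in-flight requests = throughput × latency). This just re-uses the figures from the post; nothing here is a new measurement.

    ```python
    # Little's Law sanity check on the quoted nginx benchmark figures.

    throughput_rps = 500_000  # requests per second (quoted)
    latency_s = 0.0015        # ~1.5 ms average latency (quoted)

    in_flight = throughput_rps * latency_s
    print(f"Implied concurrent in-flight requests: {in_flight:.0f}")
    ```

    That works out to roughly 750 requests in flight at any instant, comfortably inside the 1,000 connections wrk kept open, so the throughput and latency figures are at least self-consistent.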