Category: blogroll

This is what I subscribe to myself.

  • AnandTech – The Intel Ivy Bridge Core i7 3770K Review

    Similarly disappointing for everyone who isn't Intel, it's been more than a year after Sandy Bridge's launch and none of the GPU vendors have been able to put forth a better solution than Quick Sync. If you're constantly transcoding movies to get them onto your smartphone or tablet, you need Ivy Bridge. In less than 7 minutes, and with no impact to CPU usage, I was able to transcode a complete 130 minute 1080p video to an iPad friendly format—that's over 15x real time.

    via AnandTech – The Intel Ivy Bridge Core i7 3770K Review.

    QuickSync, for anyone who doesn't follow Intel's technology white papers and CPU releases, is a special feature of Sandy Bridge-era Intel CPUs. Its roots are as old as the Clarkdale series with embedded graphics (the first round of the 32nm design rule), where its job was simply speeding up the decoding of video streams saved in a number of popular formats (VC-1, H.264, MP4, etc.). Now it's marketed to anyone trying to speed up the transcoding of video from one format to another. The first Sandy Bridge CPUs using the hardware encoding portion of QuickSync showed incredible speeds compared to the GPU-accelerated encoders of that era. However, things have been kicked up a further notch in the embedded graphics of the Intel Ivy Bridge series CPUs.

    In the quote at the beginning of this article, I included a summary from the AnandTech review of the Intel Core i7 3770K which gives a better sense of the magnitude of the improvement. The full 130-minute Blu-ray movie was converted at a rate of 15 times real time, meaning for every minute of video coming off the disc, QuickSync is able to transcode it in 4 seconds! That is major progress for anyone who has followed this niche of desktop computing. Having spent time capturing, editing and exporting video, I will admit transcoding between formats is a lengthy process that uses up a lot of CPU resources. Offloading all that burden to the embedded graphics controller completely does away with the traditional impediment of the computer slowing to a crawl while you walk away and let it work.
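
    A quick back-of-the-envelope check of that arithmetic (assuming a flat 15x speed-up across the whole file; AnandTech's measured time of under 7 minutes actually implies something closer to 18-19x):

        # Rough arithmetic behind the "15x real time" claim
        movie_minutes = 130          # length of the source video
        speedup = 15                 # QuickSync transcode speed vs. real time

        transcode_minutes = movie_minutes / speedup      # ~8.7 minutes total
        seconds_per_source_minute = 60 / speedup         # 4 seconds per minute of video

        print(f"total transcode time: {transcode_minutes:.1f} minutes")
        print(f"per minute of source video: {seconds_per_source_minute:.0f} seconds")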

    Now transcoding is trivial; it costs nothing in terms of CPU load. Any time a transcode runs faster than real time means you don't have to walk away from your computer (or at least not for very long), and 10x faster than real time makes that doubly true. Now we are fully at 15x real time for a full-length movie. The time spent is so short you wouldn't ever have a second thought about "Will this transcode slow down the computer?" It won't. In fact you can continue doing all your other work, be productive, have fun and continue on your way just as if you hadn't just asked your computer to do the most complicated, time-consuming chore that (up until now) you could possibly ask it to do.

    Knowing this application of the embedded graphics is so useful for desktop computers makes me wonder about scientific computing. What could Intel provide in terms of performance increases for simulations and computation in a supercomputer cluster? Seeing how hybrid supercomputers using nVidia Tesla GPU co-processors mixed with Intel CPUs have slowly marched up the list of the Top 500 Supercomputers makes me think Intel could leverage QuickSync further... much further. Unfortunately this performance boost is solely dependent on a few vendors of proprietary transcoding software. Open-source developers do not have an opening into the QuickSync tech that would let them write a library to re-direct a video stream into the QuickSync acceleration pipeline. When somebody does accomplish that feat, it may not be long before you see some Linux compute clusters attempt to use QuickSync as an embedded algorithm accelerator too.

    Timeline of Intel processor codenames including released, future and canceled processors. (Photo credit: Wikipedia)
  • Owning Your Words: Personal Clouds Build Professional Reputations | Cloudline | Wired.com

    My first blogging platform was Dave Winer’s Radio UserLand. One of Dave’s mantras was: “Own your words.” As the blogosphere became a conversational medium, I saw what that could mean. Radio UserLand did not, at first, support comments. That turned out to be a constraint well worth embracing. When conversation emerged, as it inevitably will in any system of communication, it was a cross-blog affair. I’d quote something from your blog on mine, and discuss it. You’d notice, and perhaps write something on your blog referring back to mine.

    via Owning Your Words: Personal Clouds Build Professional Reputations | Cloudline | Wired.com.

    I would love to be able to comment on an article or a blog entry by passing it a link to a blog entry within my own WordPress instance on WordPress.com. However, rendering that ‘feed’ back into the comments section of the originating article/blog page doesn’t seem to be common. At best I think I could drop a permalink into the comments section so people might be tempted to follow the link to my blog. But it’s kind of unfair to an unsuspecting reader to force them to jump, and in a sense re-direct, to another website just to follow a commentary. So I fully agree there needs to be a pub/sub style way of passing my blog entry by reference back into the comments section of the originating article/blog. Better yet, one that gives me some ability to amend and edit my poor choice of words the first time I publish a response. Too often silly mistakes get preserved in the ‘amber’ of the comments fields in the back-end MySQL databases of the content management systems housing many online web magazines. So there’s plenty of room for improvement, and I think RSS could easily embrace and extend this style of commenting if someone were driven to develop it.
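
    The closest existing machinery for the notification half of this is the pingback protocol WordPress already speaks. A minimal sketch of telling an article "my post references you" might look like the following; the URLs and XML-RPC endpoint below are hypothetical examples, and pingbacks still don't solve the edit-after-the-fact problem described above.

        # Sketch: the standard XML-RPC pingback.ping(source, target) call.
        # All URLs here are made-up examples.
        import xmlrpc.client

        source = "https://example.wordpress.com/2012/04/my-response-post/"    # my commentary
        target = "https://www.wired.com/cloudline/owning-your-words/"         # article being discussed

        server = xmlrpc.client.ServerProxy("https://www.wired.com/xmlrpc.php")
        try:
            result = server.pingback.ping(source, target)
            print("pingback accepted:", result)
        except xmlrpc.client.Fault as fault:
            print("pingback rejected:", fault.faultString)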

  • Fusion-io shoves OS aside, lets apps drill straight into flash • The Register

    Like the native API libraries, directFS is implemented directly on ioMemory, significantly reducing latency by entirely bypassing operating system buffer caches, file system and kernel block I/O layers. Fusion-io directFS will be released as a practical working example of an application running natively on flash to help developers explore the use of Fusion-io APIs.

    via (Chris Mellor) Fusion-io shoves OS aside, lets apps drill straight into flash • The Register.

    Image representing Fusion-io (via CrunchBase)

    Another interesting announcement from the folks at Fusion-io regarding their brand of PCIe SSD cards. There was a proof-of-concept project covered previously by Chris Mellor in which Fusion-io attempted to top out at 1 billion IOPS using a novel architecture where the PCIe SSD drives were not treated as storage. Instead the Fusion-io was turned into a memory tier, bypassing most of the OS's own buffers and queues for handling a traditional filesystem. Doing this reaped many benefits by cutting the latency inherent in a filesystem, which has to communicate through the OS kernel to the memory subsystem and back again.
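
    Fusion-io's native APIs aren't spelled out in the article, but the general idea of keeping the kernel's buffer cache out of the data path can be sketched with ordinary Linux direct I/O. This only gestures at the concept: O_DIRECT still goes through the block layer, which directFS bypasses entirely, and the device path below is made up.

        # Sketch: plain Linux direct I/O, which skips the kernel page cache.
        # The path is a hypothetical example of a Fusion-io mount point.
        import os
        import mmap

        fd = os.open("/mnt/iomemory/sample.dat", os.O_RDONLY | os.O_DIRECT)

        # Direct I/O needs aligned buffers; mmap hands back page-aligned memory.
        buf = mmap.mmap(-1, 4096)
        bytes_read = os.readv(fd, [buf])   # device -> our buffer, no page-cache copy
        os.close(fd)

        print(f"read {bytes_read} bytes without touching the OS buffer cache")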

    Considering also the work done within the last 4 years or more on so-called ‘in-memory’ databases and big data projects in general, a product like directFS might pair nicely with them. The limit with in-memory databases is always the amount of RAM available and the total number of CPU nodes managing those memory subsystems. Tack on the necessary storage to load and snapshot the database over time and you have a very traditional-looking database server. However, if you supplement that traditional-looking architecture with a tier of storage like directFS, the SAN becomes a third tier of storage, almost like a tape backup device. It sounds more interesting the more I daydream about it.

    Shows the kernel’s role in a computer. (Photo credit: Wikipedia)
  • Google shows off Project Glass augmented reality specs • The Register

    Thomas Hawk’s picture of Sergey Brin wearing the prototype of Project Glass

    But it is early days yet. Google has made it clear that this is only the initial stages of Project Glass and it is seeking feedback from the general public on what they want from these spectacles. While these kinds of heads-up displays are popular in films and fiction and dearly wanted by this hack, the poor sales of existing eye-level screens suggests a certain reluctance on the part of buyers.

    via Google shows off Project Glass augmented reality specs • The Register.

    The video of the Google Glass interface is kind of interesting and problematic at the same time. Stuff floats in and out of view, kind of like the organisms that live in the mucus of your eye. And the latency between when you see something and when you can issue a command gives interacting with it a halting, staccato cadence. It looks and feels like old-style voice recognition that needed discrete pauses added so it would know when things ended. As a demo it’s interesting, but they should issue releases very quickly and get this thing up to speed as fast as they possibly can. And I don’t mean having co-founder Sergey Brin show up at a party wearing the thing. According to reports the ‘back pack’ the glasses are tethered to is not small. Based on the description I think Google has a long way to go yet.

    http://my20percent.wordpress.com/2012/02/27/baseball-cap-head-up-displa/

    And on the smaller-scale tinkerer front, this WordPress blogger fashioned an older-style ‘periscope’ using a cellphone, mirror and half-mirrored sunglasses to get a cheaper Augmented Reality experience. The cellphone is an HTC unit strapped onto the brim of a baseball hat. The display is then reflected downwards through a hole cut in the brim and then reflected off a pair of sunglasses mounted at roughly a 45-degree angle. It’s cheap, it works, but I don’t know how good the voice activation is. Makes me wonder how well it might work with an iPhone Siri interface. The author even mentions that the HTC is a little heavy and an iPhone might work a little better. I wonder if it wouldn’t work better still if the ‘periscope’ mirror arrangement was scrapped altogether. Instead just mount the phone flat onto the bill of the hat and let the screen face downward. The screen would then reflect off the sunglasses’ surface. The number of reflecting surfaces would be reduced, the image would be brighter, etc. I noticed a lot of people also commented on this fellow’s blog, which might get some discussion brewing about the longer-term, value-add benefits of Augmented Reality. There is a killer app yet to be found and even Google hasn’t captured the flag yet.

    This picture shows the Wikitude World Browser on the iPhone looking at the Old Town of Salzburg. Computer-generated information is drawn on top of the screen. This is an example of location-based Augmented Reality. (Photo credit: Wikipedia)
  • Picture This: Hosted Lifebits in the Personal Cloud | Cloudline | Wired.com

    Jon Udell (Photo credit: Wikipedia)

    It’s not just photos. I want the same for my whole expanding set of digital objects, including medical and financial records, commercial transactions, personal correspondence, home energy use data, you name it. I want all of my lifebits to be hosted in the cloud under my control. Is that feasible? Technically there are huge challenges, but they’re good ones, the kind that will spawn new businesses.

    via (Jon Udell) Picture This: Hosted Lifebits in the Personal Cloud | Cloudline | Wired.com.

    From Gordon Bell‘s MyLifeBits to, most recently, Stephen Wolfram‘s personal collection of data and now to Jon Udell: witness the ever-expanding universe of personal data. Thinking about Gordon Bell now, I think the emphasis from Microsoft Research was always on video and pictures and ‘recollecting’ what happened in any given day. Stephen Wolfram’s emphasis was not so much on collecting the data but on analyzing it after the fact and watching patterns emerge. Now with Jon Udell we get a nice advancing of the art by looking at possible end-game scenarios. So you have collected a mass of lifebits, now what?

    Who’s going to manage this thing? Is anyone going to offer a service that will help manage it? All great questions, because the disparate forms lifebits take, from social networking data to health and ‘performance’ data (like what Stephen Wolfram collects and maintains for himself), point up a big gap in the cloud services sector. Ripe pickings for anyone in the entrepreneurial vein to step in and bootstrap a service like the one Jon Udell proposes. If someone was really smart they could get it up and running cheaply on Amazon Web Services (AWS) until it got to be too cost- and performance-prohibitive to keep it hosted there. That would allow an initial foray to test the waters, see the size and tastes of the market, and adapt the hosted lifebits service to anyone willing to pay up. That might just be a recipe for success.

  • Nvidia: No magic compilers for HPC coprocessors • The Register

    Image representing NVIDIA (via CrunchBase)

    And with clock speeds topped out and electricity use and cooling being the big limiting issue, Scott says that an exaflops machine running at a very modest 1GHz will require one billion-way parallelism, and parallelism in all subsystems to keep those threads humming.

    via Nvidia: No magic compilers for HPC coprocessors • The Register.

    Interesting write-up of a blog entry from nVidia‘s chief of supercomputing, including his thoughts on scaling up to an exascale supercomputer. I’m surprised at how power-efficient a GPU is for floating point operations, and I’m amazed at these companies’ ability to measure power consumption down to the single-operation level. Microjoules and picojoules are worlds apart from one another, and here’s the illustration:

    1 microjoule is one millionth of a joule, or 1×10⁻⁶ (six decimal places), whereas 1 picojoule is 1×10⁻¹² (twice as many decimal places, twelve in all). So that is a HUGE difference: 6 orders of magnitude in efficiency from an electrical consumption standpoint. nVidia’s Steve Scott estimates that to get to exascale supercomputers, any hybrid CPU/GPU machine would need GPUs with one order of magnitude higher efficiency in joules per floating point operation (FLOP), or 1×10⁻¹³, one whole decimal place better. To borrow a cliche, supercomputer manufacturers have their work cut out for them. The way forward is efficiency, the GPU has the edge per operation, and all they need to do is improve the efficiency by that one decimal place to get closer to the exascale league of supercomputing.
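
    Spelling those exponents out (the figures here come from the article's own numbers):

        # The energy-per-operation gap, spelled out
        microjoule = 1e-6     # joules per operation
        picojoule  = 1e-12    # joules per operation

        ratio = microjoule / picojoule
        print(f"1 microjoule = {ratio:,.0f} picojoules")    # 1,000,000 -> six orders of magnitude

        # The improvement Scott calls for: one more order of magnitude per FLOP
        today_per_flop  = 1e-12               # roughly a picojoule per FLOP today
        target_per_flop = today_per_flop / 10
        print(f"target energy per FLOP: {target_per_flop:.0e} joules")   # 1e-13 joules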

    Why is exascale important to the scientific community at large? In some fields there are never enough cycles per second to satisfy the scale of the computations being done. Models of systems can be created, but the simulations they provide may not have enough fine-grained ‘detail’. A weather model simulating a period of time in the future, say, needs to know the current conditions before it can start the calculation. But the ‘resolution’, the fine-grained detail of those ‘conditions’, is what limits the accuracy over time, especially when small errors get amplified by each successive cycle of calculating. One way to limit the damage from these small errors is to increase the resolution, shrinking the land area over which you assign a single ‘current condition’. So instead of 10-mile resolution (meaning each block on the face of the planet is 10 miles square), you switch to 1-mile resolution. Any error in a one-mile-square patch is less likely to cause huge errors in the future weather prediction. But now you have 10 times the squares along each dimension, so 100 times the number of squares to calculate compared with the previous best model at 10-mile resolution (see the quick calculation below). That’s probably the easiest way to see how demands on the computer increase as people increase the resolution of their weather prediction models. But it’s not limited just to weather. It could be used to simulate a nuclear weapon aging over time. Or it could be used to decrypt foreign messages intercepted by NSA satellites; the speed of the computer would allow more brute-force attempts at decrypting any message they capture.
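
    A quick sanity check of that cell-count growth over a hypothetical square region (the region size is my own example value; real models also shrink their time steps as the grid gets finer, which multiplies the work yet again):

        # How grid cell counts grow as resolution improves (fixed region, square cells)
        region_miles = 1000          # hypothetical region, 1000 miles on a side

        for cell_miles in (10, 1):
            cells_per_side = region_miles // cell_miles
            total_cells = cells_per_side ** 2
            print(f"{cell_miles:>2}-mile cells: {total_cells:,} squares per time step")

        # 10-mile cells:    10,000 squares
        #  1-mile cells: 1,000,000 squares  -> 100x more work per step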

    Nvidia Riva TNT2 M64 GPU (Photo credit: Wikipedia)

    In spite of all the gains to be had with an exascale computer, you still have to program the bloody thing to work with your simulation. And that’s really the gist of this article: there’s no free lunch in High Performance Computing. The level of knowledge of the hardware required to get anything like the maximum theoretical speed is a lot higher than one would think. There’s no magic bullet or ‘re-compile’ button that’s going to get your old software running smoothly on the exascale computer. More likely you and a team of the smartest scientists are going to work for years to tailor your simulation to the hardware you want to run it on. And therein lies the rub: the hardware alone isn’t going to get you the extra performance.

  • What went wrong with the Hubble Space Telescope and what managers can learn from it – leadership, collaboration – IT Services – Techworld

    Astronauts Steven L. Smith and John M. Grunsfeld appear as small figures in this wide scene photographed during extravehicular activity (EVA). On this space walk they are replacing gyroscopes, contained in rate sensor units (RSU), inside the Hubble Space Telescope. A wide expanse of waters, partially covered by clouds, provides the backdrop for the photograph. (Photo credit: Wikipedia)

    “There’s a bunch of research I’ve come across in this work, where people say that the social context is a 78-80 per cent determinant of performance; individual abilities are 10 per cent. So why do we make this mistake? Because we spend all of these years in higher education being trained that it’s about individual abilities.”

    via What went wrong with the Hubble Space Telescope and what managers can learn from it – leadership, collaboration – IT Services – Techworld.

    Former NASA director Charlie Pellerin is now a consultant on how to prevent failures on teams charged with carrying out large-scale, near-impossible projects. He faced two biggies while at NASA: the Challenger explosion and, subsequent to that and more directly, the Hubble Space Telescope mirror failure. In that time he tried to really look at the source of the failures rather than just let the investigative committees do all the work. And what he’s decided is that culture is a bigger part of the chain of failure than technical ability.

    Which leads me to ask: how often does this happen in other circumstances as well? I’ve seen the PBS NOVA program on the 747 runway collision in Tenerife back in 1977. At that time the KLM airliner more or less started taking off before the Pan American 747 had taxied off of the runway. In spite of all the protocols and controls in place to manage planes on the ground, the captain of the KLM 747 made the decision to take off not once, but TWICE! The first time it happened his co-pilot corrected him, saying they didn’t have clearance from the tower. The second time, the co-pilot and flight engineer both sat back and let the whole thing unfold, to their own detriment. No one survived in that KLM 747 after it crashed into the Pan American 747 and bounced down the runway. In the article I link to above there’s an anecdote Charlie Pellerin relates about a number of Korean Air crashes that occurred in the 1990s. Similarly, it was the cockpit ‘culture’ that was leading to the bad decisions being made, resulting in the loss of the airplane and the passengers on board.

    Some people like to call it ‘top-down’ management, where everyone defers to the person recognized as the one in charge. Worse yet, the person in charge doesn’t always realize this. They go on with their decision-making process never once thinking that people are restraining themselves, holding back questions. The danger is that once this pattern is in place, any mistake by the person in charge gets amplified over time. In Charlie Pellerin’s judgement, modern airliners are designed to be run by a team who SHARE the responsibilities of conducting the airplane. And while the planes themselves have many safety systems in place to make things run smoothly, the designers always assume a TEAM. But when you have a hierarchy of people in charge and people who defer to them, the TEAM as such doesn’t exist, and you have broken the primary design principle of the aircraft’s designers. No TEAM, no plane, and there are many examples that show this, not just in airline accident investigations.

    Polishing the Hubble Mirror at Perkin-Elmer

    In the case of the Hubble Telescope mirror, things broke down when a simple calibration step was rushed. The sub-contractor in charge of measuring the point of focus on the mirror followed the procedure as given to him but skipped one step, and that threw the whole calibration off. The step he skipped was simply applying spray paint to two end caps that would then be placed onto a very delicately measured and finely cut metal rod. The black spray paint was meant to act as a non-reflective surface, exposing only a very small bit of the rod end to the laser that would measure the distance to the focus point. What happened instead: because the whole telescope program was over budget and constantly delayed, all the sub-contractors were pressured to ‘hurry up’. When the guy responsible for this calibration step couldn’t find matte black spray paint to put on the end caps, he improvised (like a true engineer). He got black electrical tape, wrapped it on the end of the cap, cut a hole with the tip of an X-Acto knife and began his calibration step.

    But that one detail was what put the whole Hubble Space Telescope in jeopardy. In the rush to get this step done, the X-Acto knife nicked a bit off the metal end cap and a small, shiny metal burr was created. Almost too small to see, the burr poked out into the hole cut in the black electrical tape for the laser to shine through. When the engineer calibrated it, the small burr was reflecting light back to the sensors measuring the distance. The burr was only 1mm off the polished surface of the calibration rod. That 1mm distance was registered as ‘in spec’, and the full distance to the focus point had 1mm added to it. Considering how accurate a mirror has to be for telescope work, and how long the Hubble mirror spent being ground and polished, 1mm might as well be a mile in the real world. And this was the source of the ‘blur’ in the Hubble Telescope when it was first turned on after being deployed by the Space Shuttle. The culture of ‘hurry up and get it done, we’re behind schedule’ jeopardized a billion-dollar space telescope mission that was over budget and way behind schedule.

    All these cautionary tales reiterate the over-arching theme: big failures are not technical. These failures are cultural, and everyone has the capacity to do better every chance they get. I encourage anyone and everyone reading this article to read the original interview with Charlie Pellerin, as he’s got a lot to say on this subject and some fixes that can be applied to avoid the fire next time. Because statistically speaking, there will always be a next time.

    KLM’s 747-406 PH-BFW – nose (Photo credit: caribb)
  • Apple A5X CPU in Review

    Apple Inc. (Photo credit: marcopako)

    A meta-analysis of the Apple A5X system on chip

    (from the currently shipping 3rd Gen iPad)

    New iPad’s A5X beats NVIDIA Tegra 3 in some tests (MacNN|Electronista)

    Apple’s A5X Die (and Size?) Revealed (Anandtech.com)

    Chip analysis reveals subtle changes to new iPad innards (AppleInsider, quoting AnandTech)

    Apple A5X Die Size Measured: 162.94mm^2, Samsung 45nm LP Confirmed (Update from Anandtech based on a more technical analysis of the chip)

    Reading through all the hubbub and hand-waving from the technology ‘teardown’ press outlets, one would have expected a bigger leap from Apple’s chip designers. A fairly large chip sporting an enormous graphics processor integrated into the die is what Apple came up with to drive the next higher-res display (the so-called Retina Display). The design rule is still a pretty conservative 45nm (rather than trying to push the envelope by going to 32nm or thinner to bring down the power requirements). Apple similarly had to boost its battery capacity, by almost 2x over the first-gen iPad, to feed this power-hungry pixel demon. So for almost the ‘same’ battery life (10 hours of reserve power), you get the higher-res display. But a bigger chip and a higher-res display will add up to some extra heat being generated, generally speaking. Which leads us to a controversy.

    Given this knowledge, there has been a recent back-and-forth argument over the thermals of the 3rd generation iPad. Consumer Reports published an online article saying the power/heat dissipation was much higher than on previous-generation iPads, and included some thermal photographs indicating the hot spots on the back of the device and their relative temperatures. While the iPad doesn’t run hotter than a lot of other handheld devices (say, Android tablets), it does run hotter than, say, an iPod Touch. But as Apple points out, that has ALWAYS been the case. So you gain some things, you give up some things, and still Apple is the market leader in this form factor, years ahead of the competition. And now the tempest in the teapot is winding down, as Consumer Reports (via LATimes.com) has rated the 3rd Gen iPad as its no. 1 tablet on the market (big surprise). So while they aren’t willing to retract their original claim of high heat, they are willing to say it doesn’t count as ’cause for concern’. So you be the judge when you try out the iPad in the Apple Store. Run it through its paces; a full-screen video or two should heat up the GPU and CPU enough to get the electrons really racing through the device.

    The Apple A5X, the new system-on-chip used in Apple’s 3rd generation iPad
  • ARM Wants to Put the Internet in Your Umbrella | Wired Enterprise | Wired.com

    Image representing Wired Magazine (via CrunchBase)

    On Tuesday, the company unveiled its new ARM Cortex-M0+ processor, a low-power chip designed to connect non-PC electronics and smart sensors across the home and office.

    Previous iterations of the Cortex family of chips had the same goal, but with the new chip, ARM claims much greater power savings. According to the company, the 32-bit chip consumes just nine microamps per megahertz, an impressively low amount even for an 8- or 16-bit chip.

    via ARM Wants to Put the Internet in Your Umbrella | Wired Enterprise | Wired.com.

    Lower power means a very conservative power budget, especially for devices connected to the network. And 32 bits is nothing to sneeze at, considering most manufacturers would pick a 16- or 8-bit chip to bring down the cost and the power budget too. According to this article the power savings are so great, in fact, that in sleep mode the chip consumes almost no power at all. For this market Moore’s Law is paying off big benefits, especially given the bonus of a 32-bit core. So not only do you get a very small, low-power CPU, you get a much more diverse range of software that could run on it and take advantage of a larger memory address space as well. I think non-PC electronics could include things as simple as web cams or cellphone cameras. Can you imagine a CMOS camera chip with a whole 32-bit CPU built in? Makes you wonder not just what it could do, but what ELSE it could do, right?
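
    To get a feel for what ‘nine microamps per megahertz’ means in practice, here's a rough estimate. Only the 9 µA/MHz figure comes from the article; the clock speed, supply voltage and battery size below are my own assumed example values.

        # Back-of-the-envelope power draw for a Cortex-M0+-class part
        ua_per_mhz = 9            # from the article
        clock_mhz = 48            # assumed clock speed
        supply_volts = 3.0        # assumed supply voltage
        battery_mah = 225         # assumed coin-cell (CR2032-sized) capacity

        current_ma = ua_per_mhz * clock_mhz / 1000    # 0.432 mA running flat out
        power_mw = current_ma * supply_volts          # ~1.3 mW
        hours = battery_mah / current_ma              # ~520 hours, ignoring sleep mode

        print(f"{current_ma:.3f} mA, {power_mw:.2f} mW, ~{hours:.0f} h on a coin cell")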

    The term ‘Internet of Things‘ is bandied about quite a bit as people dream about CPUs and networks connecting ALL the things. And what would be the outcome if your umbrella was connected to the Internet? What if ALL the umbrellas were connected? You could log all kinds of data: whether it was opened or closed, what the ambient temperature is. It would potentially be like a portable weather station for anyone aggregating all the logged data. And the list goes on and on. Instead of just tire pressure monitors, why not also capture video of the tire as it is being used commuting to work? It could help measure tire wear and set up an appointment when you need to get a wheel alignment. It could determine how many times you hit potholes and suggest smoother alternate routes. That’s the kind of blue-sky, wide-open conjecture that is enabled by a 32-bit low/no-power CPU.

    Moore’s Law, The Fifth Paradigm. (Photo credit: Wikipedia)
  • Accidental Time Capsule: Moments from Computing in 1994 (from RWW)

    Byte Magazine is one of the reasons I’m here today, doing what I do. Every month, Byte set its sights on the bigger picture, a significant trend that might be far ahead or way far ahead. And in July 1994, Jon Udell (to this very day, among the most insightful people ever to sign his name to an article) was setting his sights on the inevitable convergence between the computer and the telephone.

    via Accidental Time Capsule: Moments from Computing in 1994, by Scott Fulton (ReadWriteWeb).

    Jon Udell (Photo credit: Wikipedia)

    I also liked Tom Halfhill, Jerry Pournelle, Steve Gilmore, and many other writers at Byte over the years. I couldn’t agree more with Scott Fulton, as I am still a big fan of Jon Udell and any project he has worked on and documented. I can credit Jon Udell for getting me curious about weblogging, Radio UserLand, WordPress, Flickr and del.icio.us (the social bookmarking website), and I have been watching his progress on a ‘calendar of public calendars’, the elmcity project. Jon is attempting to catalog and build an aggregated list of calendars with RSS-style feeds that anyone can subscribe to. No need for automated emails filling a filtered mailbox. No, you just fire up a browser and read what’s posted. You find out what’s going on and just add the event to your calendar.
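
    That lightweight subscription model is easy to sketch. Here's what skimming one of those feeds might look like using the feedparser library; the feed URL is a hypothetical stand-in for a calendar hub.

        # Sketch: subscribe to a public events feed and skim what's coming up
        import feedparser

        feed = feedparser.parse("https://elmcity.example.org/hub/events.rss")   # hypothetical URL

        for entry in feed.entries[:10]:
            # Each entry carries a title, a link back to the source and a published date
            print(entry.get("published", "date unknown"), "-", entry.title)
            print("   ", entry.link)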

    As Jon has discovered, the calendars exist and the events are there; they just aren’t evenly distributed yet (much like the future). So in his analysis of ‘what works’ Jon has found some sterling examples of calendar keeping and maintenance, some of which have popped up in interesting places, like public school systems. However, the biggest downfall of events calendars is the all-too-common practice of taking Word documents and exporting them as PDF files that get posted to a website. THAT is the calendar for far too many organizations, and it fails utterly as a means of ‘discovering’ what’s going on.

    Suffice it to say elmcity is a long-term organizing and curatorial effort that Jon is attempting to get an informal network of like-minded people involved in. And as different cities form calendar ‘hubs’, Jon is collecting them into larger networks so that you can search one spot, find out ‘what’s happening’ and then add those events to your own calendar in a very seamless and lightweight manner. I highly recommend following Jon’s weblog, as he’s got the same ability to explain and analyze these technologies that he excelled at while at Byte, and he continues to follow his bliss and curiosity about computers, networks and, more generally, technology.