Blog

  • Question to Carpetbomberz Readers out there (E = mc²)

    This is an interactive quiz, and I don't know the answer in advance. But possibly, through crowd-sourcing the solution, we can come to a quicker and more accurate answer. I remember once on a PBS program hearing a number given for the 'mass' of the amount of sunshine that strikes the Earth in one year. Does anyone have a rough scheme for how to calculate the mass of the sunlight that strikes the Earth in one year, and then convert that from, say, kilograms into pounds?
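    Here's one rough scheme (a back-of-the-envelope sketch, not an authoritative answer): treat the Earth as a disc of radius R intercepting the solar constant S, multiply by the seconds in a year to get the energy, then divide by c² to get the equivalent mass. The constants below are rounded textbook values.

    ```python
    import math

    S = 1361.0       # solar constant at top of atmosphere, W/m^2 (rounded)
    R = 6.371e6      # mean Earth radius, m
    C = 2.998e8      # speed of light, m/s
    YEAR = 3.156e7   # seconds in one year

    # The Earth intercepts sunlight over its cross-sectional disc, pi*R^2,
    # not over its whole surface area.
    power = S * math.pi * R**2    # watts striking the Earth
    energy = power * YEAR         # joules in one year
    mass_kg = energy / C**2       # E = mc^2, solved for m
    mass_lb = mass_kg * 2.20462   # kilograms to pounds

    print(f"Energy per year:  {energy:.3e} J")
    print(f"Equivalent mass:  {mass_kg:.3e} kg ({mass_lb:.3e} lb)")
    ```

    That works out to roughly 6 × 10⁷ kg, on the order of 130 million pounds a year, before accounting for the roughly 30% that the Earth's albedo reflects straight back to space.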

  • Google X founder Thrun demonstrates Project Glass on TV show | Electronista

    Sebastian Thrun, Associate Professor of Computer Science at Stanford University. (Photo credit: Wikipedia)

    Google X (formerly Labs) founder Sebastian Thrun debuted a real-world use of his latest endeavor Project Glass during an interview on the syndicated Charlie Rose show which aired yesterday, taking a picture of the host and then posting it to Google+, the company's social network. Thrun appeared to be able to take the picture through tapping the unit, and posting it online via a pair of nods, though the project is still at the prototype stage at this point.

    via Google X founder Thrun demonstrates Project Glass on TV show | Electronista.

    You may remember Sebastian Thrun the way I do. He was spotlighted a few times on the PBS TV series NOVA in its coverage of the DARPA Grand Challenge follow-up competition in 2005. That was the year Carnegie Mellon University battled Stanford University in a race of driverless vehicles in the desert. The previous year CMU was the favorite to win, but their vehicle didn't finish the race. By the following year's competition, the stakes were much higher. Stanford started its effort in the summer of 2004, just months after the March Grand Challenge race. By October 2005 the second race was held, with CMU and Stanford battling it out. Sebastian Thrun was the head of the Stanford team, and had previously been at CMU as a colleague of the Carnegie race team head, Red Whittaker. In 2001 Thrun took a sabbatical year from CMU and spent it at Stanford. Eventually Thrun left Carnegie Mellon altogether and moved to Stanford in July 2003.

    Thrun also took a graduate student of his and Red Whittaker's with him to Stanford, Michael Montemerlo. That combination of CMU experience, plus a grad student to boot, helped accelerate the pace at which Stanley, the driverless vehicle, was developed to compete in October 2005. Now move forward to another academic sabbatical, this time from Stanford to Google Inc. Thrun took a group of students with him to work on Google Street View. Eventually this led to another driverless car, funded completely internally by Google. Thrun's accomplishments have continued to accrue at regular intervals, so much so that he has now given up his tenure at Stanford to join Google as a kind of entrepreneurial research scientist, helping head up the Google X Labs. The X Labs is a kind of internal skunkworks that Google funds to work on various and sundry technologies, including the Google driverless car. Add to this Sebastian Thrun's other big announcement this year, an open education initiative titled Udacity (attempting to 'change' the paradigm of college education). The list, as you see, goes on and on.

    So where does that put the Google Project Glass experiment? Sergey Brin attempted to show off a prototype of the system at a party very recently. Now Sebastian Thrun has shown it off as well. Google Project Glass is a prototype, as most online websites have reported. Sebastian Thrun's interview on Charlie Rose attempted to demo what the prototype is able to do today. According to the article quoted at the top of this blog post, Google Glass can respond to gestures and voice (though voice was not demonstrated). Questions still remain as to what is included in this package to make it all work. Yes, the glasses do appear 'self-contained', but a wireless connection (as pointed out by Mashable.com) would not be visible to anyone not specifically shown all the components that make it go. That little bit of visual misdirection (like a magician's) would lead one to believe that everything resides in the glasses themselves. Well, so much the better then for Google to let everyone draw their own conclusions. As for the concept video of Google Glass, I'm still not convinced it's the best way to interact with a device:

    Project Glass: One day...

    As the video shows, it's centered more on voice interaction, very much like Apple's own Siri technology. And that, as you know, requires two things:

    1. A specific iPhone that has a noise cancelling microphone array

    2. A broadband cellphone connection back to the Apple mothership data center in North Carolina to do the Speech-to-Text recognition and responses

    So the glasses are guaranteed to look self-contained to an untrained observer, but doing the heavy lifting shown in the concept video is going to require the Google Glasses plus two additional items:

    1. A specific Android phone with the Google Glass spec’d microphone array and ARM chip inside

    2. A broadband cellphone connection back to the Google motherships, wherever they may be, to do some amount of off-phone processing and, obviously, data retrievals for all the Google Apps included.

    It would be interesting to know what passes over the personal area network between the Google Glasses and the cellphone data uplink that a real set of glasses is going to require. The devil is in those details, and they will be the limiting factor on how inexpensively this product could be manufactured and sold.
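    As a purely speculative back-of-the-envelope sketch (none of these figures come from Google; the bitrates and the Bluetooth throughput are assumptions), here is one way to size that personal-area-network link:

    ```python
    # Hypothetical traffic on a glasses <-> phone Bluetooth link.
    # Every figure here is an assumption for illustration, not a Google spec.
    AUDIO_UPLINK_KBPS = 64      # compressed microphone audio for voice commands
    DISPLAY_KBPS = 200          # sparse UI updates for a small heads-up display
    PHOTO_MB = 2.0              # one JPEG snapshot, roughly
    BT_PRACTICAL_KBPS = 1400    # Bluetooth 2.1 EDR, realistic sustained rate

    steady = AUDIO_UPLINK_KBPS + DISPLAY_KBPS
    headroom = BT_PRACTICAL_KBPS - steady
    photo_seconds = (PHOTO_MB * 8 * 1000) / headroom

    print(f"Steady-state load: {steady} kbps "
          f"({100 * steady / BT_PRACTICAL_KBPS:.0f}% of the link)")
    print(f"One photo upload:  ~{photo_seconds:.0f} s alongside that load")
    ```

    If numbers in this neighborhood hold, continuous voice plus a modest display fit comfortably inside a Bluetooth link, and it's the bursty traffic (photos, and especially video) that would stress it.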

    Thomas Hawk’s photo of Sergey Brin wearing Google Glasses
  • Nice technical abstract on optimizing a messaging architecture on the theoretical level. Many parts to the puzzle.

  • Fusion-io's flash drill threatens to burst Violin's pipes • The Register

    Violin Memory logo
    Violin Memory Inc.

    NoSQL database supplier Couchbase says it is tweaking its key-value storage server to hook into Fusion-io's PCIe flash ioMemory products – caching the hottest data in RAM and storing lukewarm info in flash. Couchbase will use the ioMemory SDK to bypass the host operating system's IO subsystems and buffers to drill straight into the flash cache.

    via Fusion-io's flash drill threatens to burst Violin's pipes • The Register.

    Can you hear it? It's starting to happen. Can you feel it? The biggest single meme of the last two years, Big Data/NoSQL, is mashing up with PCIe SSDs and in-memory databases. What does it mean? One can only guess, but the performance gains to be had using a product like Couchbase to overcome the limits of a traditional tables-and-rows SQL database will be amplified when optimized and paired up with PCIe SSD data stores. I'm imagining something like a 10X boost in data reads/writes on the Couchbase back end, and something more like realtime performance from something that might previously have been treated like a Data Mart/Data Warehouse. If the move to use the ioMemory SDK and directFS technology with Couchbase is successful, you are going to see some interesting benchmarks and white papers about the performance gains.
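    The architecture the article describes (hottest keys in RAM, lukewarm keys on flash) is easy to picture as a two-tier key-value cache. Here is a minimal sketch; it assumes nothing about Couchbase's actual internals or the ioMemory SDK:

    ```python
    import shelve

    class TieredCache:
        """Toy two-tier store: hot keys in a RAM dict, demoted keys on flash."""

        def __init__(self, flash_path, ram_capacity=1024):
            self.ram = {}                         # hot tier, nanosecond access
            self.flash = shelve.open(flash_path)  # warm tier, flash-backed file
            self.capacity = ram_capacity

        def put(self, key, value):
            if key not in self.ram and len(self.ram) >= self.capacity:
                # Demote an arbitrary entry to flash; a real store would
                # evict the least-recently-used key instead.
                old_key, old_value = self.ram.popitem()
                self.flash[old_key] = old_value
            self.ram[key] = value

        def get(self, key):
            if key in self.ram:
                return self.ram[key]     # RAM hit
            value = self.flash[key]      # flash hit (KeyError on a miss)
            self.put(key, value)         # promote back to the hot tier
            return value
    ```

    The point of the ioMemory SDK, per the quote above, is to make that warm-tier flash access skip the kernel's block and buffer layers entirely, which is where the latency win comes from.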

    What is Violin Memory Inc. doing in this market segment of tiered database caches? Violin is teaming with SAP to create a tiered cache for the HANA in-memory database from SAP. The SSD SAN array provided by Violin could be multi-tasked to do other duties (providing a cache to any machine on the SAN network). However, this product most likely would be a dedicated caching store to speed up all operations of a RAM-based HANA installation, speeding up online transaction processing and parallel queries on realtime data. No doubt SAP users could stand to gain a lot if they are already invested heavily in the SAP universe of products. But for the more enterprising, entrepreneurial types, I think Fusion-io and Couchbase could help get a legacy-free group of developers up and running with equal performance and scale. Whichever one you pick is likely to do the job once it's been purchased, installed, and is up and running in a QA environment.

    Image representing Fusion-io (via CrunchBase)
  • SSD prices may drop following impending price war | MacFixIt – CNET Reviews

    Image representing Newegg (via CrunchBase)

    As a result of this impending price war, if you are planning on upgrading your system with an SSD, you might consider waiting for a few months to watch the market and see how much prices fall.

    via SSD prices may drop following impending price war | MacFixIt – CNET Reviews.

    Great analysis and news from Topher Kessler at C|Net regarding competition in the flash memory industry. I have to say, keep your eyes peeled between now and September and track those prices closely through both Amazon and Newegg. They are neck and neck when it comes to prices on any of the big name-brand SSDs. Samsung and Intel would be at the top of my list going into the Fall, but don't be too quick to purchase your gear. Just wait for it, as Intel goes up against OCZ, Crucial and Kingston.

    The amount of change in prices will likely vary based on the total capacity of each drive (capacity is a fixed cost due to the chip count in the device). So don't expect a 512GB SSD to drop by 50% by the end of the summer; it's not going to be that drastic. But the price premium brought about by the semi-false scarcity of SSDs is what is really going to disappear once the smaller vendors are eliminated from the market. I will be curious to see how Samsung fares in this battle, as they were not specifically listed as a participant in the price war. However, being a chip manufacturer gives them a genuine advantage: they supply flash memory chips to many of the companies that design and manufacture SSDs.
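    To see why capacity puts a floor under price, here's a hedged cost sketch; the dollar figures are made-up placeholders, not actual component prices:

    ```python
    # Illustrative-only costs; the real figures in 2012 differed.
    NAND_PER_GB = 0.70   # assumed flash cost, $/GB
    FIXED_COST = 25.00   # assumed controller, DRAM cache, PCB, enclosure

    for capacity_gb in (64, 128, 256, 512):
        bom = FIXED_COST + NAND_PER_GB * capacity_gb
        print(f"{capacity_gb:4d} GB: ~${bom:6.2f} build cost, "
              f"${bom / capacity_gb:.2f}/GB")
    ```

    The bigger the drive, the more its price is dominated by the NAND chips themselves, so a price war can squeeze margins and fixed overhead much more easily than it can squeeze the per-gigabyte chip cost. Hence big drops on small drives, modest drops on big ones.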

    2008 Intel Developer Forum in Taipei: Samsung muSATA_128GB_SSD. (Photo credit: Wikipedia)
  • AnandTech – The Intel Ivy Bridge Core i7 3770K Review

    Similarly disappointing for everyone who isn't Intel, it's been more than a year after Sandy Bridge's launch and none of the GPU vendors have been able to put forth a better solution than Quick Sync. If you're constantly transcoding movies to get them onto your smartphone or tablet, you need Ivy Bridge. In less than 7 minutes, and with no impact to CPU usage, I was able to transcode a complete 130 minute 1080p video to an iPad friendly format—that's over 15x real time.

    via AnandTech – The Intel Ivy Bridge Core i7 3770K Review.

    QuickSync, for anyone who doesn't follow Intel's technology white papers and CPU releases, is a special feature of Sandy Bridge-era Intel CPUs. Its lineage at Intel goes back to the Clarkdale series with embedded graphics (the first round of the 32nm design rule). It can do things like simply speeding up the decoding of a video stream saved in any of a number of popular video formats (VC-1, H.264, MP4, etc.). Now it's marketed to anyone trying to speed up the transcoding of video from one format to another. The first Sandy Bridge CPUs using the hardware encoding portion of QuickSync showed incredible speeds compared to the GPU-accelerated encoders of that era. However, things have been kicked up a further notch in the embedded graphics of the Intel Ivy Bridge series CPUs.

    In the quote at the beginning of this article, I included a summary from the AnandTech review of the Intel Core i7 3770K, which gives a better sense of the magnitude of the improvement. The full 130-minute Blu-ray movie was converted at better than 15 times real time, meaning that for every minute of video coming off the disc, QuickSync is able to transcode it in about 4 seconds! That is major progress for anyone who has followed this niche of desktop computing. Having spent time capturing, editing and exporting video, I will admit transcoding between formats is a lengthy process that uses up a lot of CPU resources. Offloading all that burden to the embedded graphics controller totally changes the traditional experience of the computer slowing to a crawl while you walk away and let it work.
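    It's worth making that arithmetic explicit. A quick sketch of what an N-times-real-time transcode means in wall-clock terms (15x is the figure quoted above; the other speedups are just points of comparison):

    ```python
    def transcode_minutes(video_minutes: float, speedup: float) -> float:
        """Wall-clock minutes to transcode at `speedup` times real time."""
        return video_minutes / speedup

    MOVIE = 130  # minutes, the Blu-ray from the AnandTech review
    for speedup in (1, 5, 10, 15):
        wall = transcode_minutes(MOVIE, speedup)
        print(f"{speedup:2d}x real time: {wall:5.1f} min "
              f"({60 / speedup:.0f} s per minute of video)")
    ```

    At exactly 15x that's still under nine minutes for the whole film, and AnandTech's "less than 7 minutes" implies the actual rate was closer to 19x.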

    Now transcoding is trivial; it costs nothing in terms of CPU load. Any time transcoding is faster than realtime, you don't have to walk away from your computer (or at least not for very long), and 10X faster than real time makes that doubly true. Now we are fully at 15X realtime for a full-length movie. The time spent is so short you wouldn't ever have a second thought about "Will this transcode slow down the computer?" It won't; in fact, you can continue doing all your other work, be productive, have fun and continue on your way just as if you hadn't asked your computer to do the most complicated, time-consuming chore that (up until now) you could possibly ask of it.

    Knowing this application of the embedded graphics is so useful for desktop computers makes me wonder about scientific computing. What could Intel provide in terms of performance increases for simulations and computation in a supercomputer cluster? Seeing how hybrid supercomputers mixing nVidia Tesla GPU co-processors with Intel CPUs have slowly marched up the list of the Top 500 Supercomputers makes me think Intel could leverage QuickSync further... much further. Unfortunately this performance boost is solely dependent on a few vendors of proprietary transcoding software. Open-source developers do not have an opening into the QuickSync tech that would let them write a library to redirect a video stream into the QuickSync acceleration pipeline. When somebody does accomplish this feat, it may be shortly afterwards that you see some Linux compute clusters attempt to use QuickSync as an embedded algorithm accelerator too.

    Timeline of Intel processor codenames including released, future and canceled processors. (Photo credit: Wikipedia)
  • Owning Your Words: Personal Clouds Build Professional Reputations | Cloudline | Wired.com

    My first blogging platform was Dave Winer’s Radio UserLand. One of Dave’s mantras was: “Own your words.” As the blogosphere became a conversational medium, I saw what that could mean. Radio UserLand did not, at first, support comments. That turned out to be a constraint well worth embracing. When conversation emerged, as it inevitably will in any system of communication, it was a cross-blog affair. I’d quote something from your blog on mine, and discuss it. You’d notice, and perhaps write something on your blog referring back to mine.

    via Owning Your Words: Personal Clouds Build Professional Reputations | Cloudline | Wired.com.

    I would love to be able to comment on an article or a blog entry by passing it a link to a blog entry within my own WordPress instance on WordPress.com. However, rendering that 'feed' back into the comments section of the originating article/blog page doesn't seem to be common. At best, I could drop a permalink into the comments section so people might be tempted to follow the link to my blog. But it's kind of unfair to force an unsuspecting reader to jump, and in a sense redirect, to another website just to follow a commentary. So I fully agree there needs to be a pub/sub-style way of passing my blog entry by reference back into the comments section of the originating article/blog. Better yet would be something that gives me the ability to amend and edit my poor choice of words after I first publish a response. Too often, silly mistakes get preserved in the 'amber' of the comments fields in the back-end MySQL databases of the content management systems housing many online web magazines. So there's plenty of room for improvement, and I think RSS could easily embrace and extend this style of commenting if someone were driven to develop it.
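    Something like the first half of this already exists in WordPress's Pingback protocol, which passes a comment "by reference" as a pair of URLs over XML-RPC. A minimal sketch of sending one (the URLs are placeholders, and whether the target renders it as more than a bare link in its comments is up to its theme):

    ```python
    import xmlrpc.client

    # Placeholder URLs, for illustration only.
    source = "https://carpetbomberz.wordpress.com/2012/04/my-response/"
    target = "https://example.com/2012/04/original-article/"

    # WordPress blogs expose their Pingback endpoint at /xmlrpc.php.
    server = xmlrpc.client.ServerProxy("https://example.com/xmlrpc.php")
    result = server.pingback.ping(source, target)
    print(result)  # a success message, or an xmlrpc fault on failure
    ```

    What's still missing is the pub/sub half: a subscription so that later edits to the source post propagate back into the target's comment thread, which Pingback doesn't do.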

  • Fusion-io shoves OS aside, lets apps drill straight into flash • The Register

    Like the native API libraries, directFS is implemented directly on ioMemory, significantly reducing latency by entirely bypassing operating system buffer caches, file system and kernel block I/O layers. Fusion-io directFS will be released as a practical working example of an application running natively on flash to help developers explore the use of Fusion-io APIs.

    via (Chris Mellor) Fusion-io shoves OS aside, lets apps drill straight into flash • The Register.

    Image representing Fusion-io (via CrunchBase)

    Another interesting announcement from the folks at Fusion-io regarding their brand of PCIe SSD cards. There was a proof-of-concept project, covered previously by Chris Mellor, in which Fusion-io attempted to top out at 1 billion IOPS using a novel architecture where the PCIe SSD drives were not treated as storage. In fact, the Fusion-io cards were turned into a memory tier, bypassing most of the OS's own buffers and queues for handling a traditional filesystem. Doing this reaped many benefits by removing the latency inherent in a filesystem that has to communicate through the OS kernel to the memory subsystem and back again.
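    To make "bypassing buffer caches" concrete: an ordinary read is copied through the kernel's page cache on its way to the application, while a flag like Linux's O_DIRECT skips that copy. This is only a rough analogy for what directFS does (directFS also bypasses the filesystem and block layers, which this sketch does not):

    ```python
    import mmap
    import os

    PATH = "/data/testfile"  # placeholder path
    BLOCK = 4096             # O_DIRECT needs block-aligned, block-sized buffers

    fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK)    # anonymous mmap is page-aligned, as required
    nread = os.readv(fd, [buf])   # data comes straight from the device,
    os.close(fd)                  # never landing in the kernel page cache
    print(f"read {nread} bytes; first 16: {bytes(buf[:16])!r}")
    ```

    Skipping the cache also means giving up its benefits, which is why this style of I/O is reserved for applications, like databases and key-value stores, that manage their own caching.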

    Considering also the work done within the last four years or more on so-called 'in-memory' databases and big data projects in general, a product like directFS might pair nicely with them. The limit with in-memory databases is always the amount of RAM available and the total number of CPU nodes managing those memory subsystems. Tack on the storage necessary to load and snapshot the database over time and you have a very traditional-looking database server. However, if you supplement that traditional-looking architecture with a tier of storage like directFS, the SAN network becomes a third tier of storage, almost like a tape backup device. It sounds more interesting the more I daydream about it.

    Shows the kernel’s role in a computer. (Photo credit: Wikipedia)
  • Cloud Is Bigger Than Its Backbone, Research Finds | Cloudline | Wired.com

    Innovative cloud apps are hitting legacy-based industries too; TV broadcasters being one of them. Consumers want internet access to shows and sports on tablets and phones.

    via Cloud Is Bigger Than Its Backbone, Research Finds | Cloudline | Wired.com.

    Image representing Amazon Web Services (via CrunchBase)

    Sun Microsystems once marketed itself with the phrase, "The Network IS the Computer." Now the Cloud is bigger than the network. Witness the recent story that Amazon's cloud services and its data centers pass 1% of ALL internet traffic. That's a lot of data for any single company to handle at any point in the day. I wonder, too, how this compares to the volume the NSA handles by rerouting the whole Internet through its secret server closets at key data network providers around the U.S. A very few seem to be handling an outsize portion of the data that moves on what we consider the 'internet'. But in fact a great deal of it moves within each company's own network as a form of intranet (think Amazon Web Services and Google's data center infrastructure). Those private nets exist so their owners can 'add value' to the traffic and to the advertisers who help subsidize it by delivering eyeballs to the adverts.

    But this article that I'm commenting on isn't really looking at Internet traffic; instead it is all about the growth of these companies in terms of total employees. Cloud-based service providers grew at a much faster rate than any other kind of technology company worldwide. The prerequisite for this phenomenon is the erosion of what were high-end technologies into everyday commodities. High-speed networks, large data centers and mass virtualization all work in concert to bring down the incremental cost of moving, storing and processing a data packet. The turn towards the data center (after a turn away from it toward desktop computers) is becoming profitable as the big-data cloud providers differentiate their assets. New services like Spotify pop up on the back of Amazon Web Services because they need the infrastructure but don't have the capital to build it themselves. This would not have been possible in the old days of the Internet bubble, when Sun Microsystems and Cisco ruled the day.

    And finally, the entertainment industry too feels the tug of the Cloud, through content delivery services which at one time were an option only for the very well-heeled companies that could afford them. Now CDNs are a commodity as well, though maybe not as inexpensive as, say, a simple web-mail inbox. Still, if you need something to move reliably to the endpoints of the Internet and you cannot afford to provision the data network and co-location servers 'round the world, you just get someone else to do it. And not just someone who only does content delivery networks; no, the specialization of these cloud providers isn't that narrow. They are like big utility companies, providing much more integration between all these pieces. Akamai and Limelight were the go-to guys, but now, if you're already using AWS, why not just tack on the CDN service too while you're at it (that lower friction is a tremendous value-add)? All of these utility computing services, and the heavy lifting required to make them scale, are what will no doubt drive further growth in hiring, and they will make the Cloud providers much richer than the guys owning and operating the point-of-presence nodes on the Internet.

    Diagram showing overview of cloud computing including Google, Salesforce, Amazon, Microsoft, Yahoo & Lundi Matin (Photo credit: Wikipedia)
  • Google shows off Project Glass augmented reality specs • The Register

    Thomas Hawk’s picture of Sergey Brin wearing the prototype of Project Glass

    But it is early days yet. Google has made it clear that this is only the initial stages of Project Glass and it is seeking feedback from the general public on what they want from these spectacles. While these kinds of heads-up displays are popular in films and fiction and dearly wanted by this hack, the poor sales of existing eye-level screens suggests a certain reluctance on the part of buyers.

    via Google shows off Project Glass augmented reality specs • The Register.

    The video of the Google Glass interface is kind of interesting and problematic at the same time. Stuff floats in and out of view, kind of like the organisms that live in the mucus of your eye. And the latency between when you see something and when you can issue a command gives interaction a kind of halting, staccato cadence. It looks and feels like old-style voice recognition that needed discrete pauses added so it would know when things ended. As a demo it's interesting, but they should issue releases very quickly and get this thing up to speed as fast as they possibly can. And I don't mean having co-founder Sergey Brin show up at a party wearing the thing. According to reports, the 'back pack' that the glasses are tethered to is not small. Based on the description, I think Google has a long way to go yet.

    http://my20percent.wordpress.com/2012/02/27/baseball-cap-head-up-displa/

    And on the smaller-scale tinkerer front, this WordPress blogger fashioned an old-style 'periscope' using a cellphone, a mirror and half-mirrored sunglasses to get a cheaper Augmented Reality experience. The cellphone is an HTC unit strapped onto the brim of a baseball hat. The display is then reflected downwards through a hole cut in the brim, and then reflects off a pair of sunglasses mounted at roughly a 45-degree angle. It's cheap and it works, but I don't know how good the voice activation is. Makes me wonder how well it might work with an iPhone Siri interface. The author even mentions that the HTC is a little heavy and an iPhone might work a little better. I wonder if it wouldn't work better still if the 'periscope' mirror arrangement were scrapped altogether. Instead, just mount the phone flat onto the bill of the hat and let the screen face downward. The screen would then reflect off the sunglasses' surface. The number of reflecting surfaces would be reduced, the image would be brighter, etc. I noticed a lot of people also commented on this fellow's blog, which might get some discussion brewing about the longer-term value-add benefits of Augmented Reality. There is a killer app yet to be found, and even Google hasn't captured the flag yet.
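    The brightness argument can be put in rough numbers. Purely for illustration, assume the half-mirrored lens reflects about 50% of the light and a plain mirror about 90% (real coatings will differ):

    ```python
    # Assumed reflectances, for illustration only.
    MIRROR = 0.90        # plain mirror inside the 'periscope'
    HALF_MIRROR = 0.50   # half-silvered sunglass lens

    periscope = MIRROR * HALF_MIRROR   # two bounces: mirror, then lens
    direct = HALF_MIRROR               # one bounce: screen straight to lens

    print(f"Periscope path: {periscope:.0%} of screen brightness")
    print(f"Direct path:    {direct:.0%} of screen brightness")
    print(f"Dropping the mirror gains ~{direct / periscope - 1:.0%} more light")
    ```

    Under these assumptions the direct path is only about 10% brighter, so the bigger wins from the flat-mount idea are probably mechanical: one less part to align, and less weight hanging off the brim.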

    This picture shows the Wikitude World Browser on the iPhone looking at the Old Town of Salzburg. Computer-generated information is drawn on top of the screen. This is an example for location-based Augmented Reality. (Photo credit: Wikipedia)