Blog

  • ARM creators Sophie Wilson and Steve Furber • reghardware

    BBC Micro (Photo credit: Wikipedia)

    Unsung Heroes of Tech: Back in the late 1970s you wouldn't have guessed that this shy young Cambridge maths student named Wilson would be the seed for what has now become the hottest-selling microprocessor in the world.

    via Chris Bidmead: ARM creators Sophie Wilson and Steve Furber • reghardware.

    This is an amazing story of how a small computer company in Britain was able to jump into the chip design business and accidentally create a new paradigm in low-power chips. It's astounding what seemingly small groups can come up with: complete product categories unto themselves. The BBC Micro was the single most important project that kept the company going; it was produced as a learning aid for the BBC television show The Computer Programme, part of the BBC Computer Literacy Project. From that humble beginning of making the BBC Micro, Furber and Wilson's ability to engineer a complete computer was well demonstrated.

    But whereas the BBC Micro used an off-the-shelf MOS 6502 CPU, a later computer used a custom (bespoke) chip designed in-house by Wilson and Furber. This is the vaunted Acorn RISC Machine (ARM), used in the Archimedes desktop computer. And that one chip helped launch a revolution unto itself: the very first time they powered up a sample chip, the multimeter hooked up to it registered no power draw. At first one would think this was a flaw and ask, "What the heck is happening here?" But when further inspection showed that the multimeter was correct, the engineers discovered that the whole CPU was running off power leaking from the logic circuits within the chip itself. Yes, that first sample chip of the ARM CPU in 1985 ran on 1/10 of a watt of electricity. And that 'bug' went on to become a feature in later generations of the ARM architecture.

    Today we know the ARM CPU cores as a bit of licensed intellectual property that any chip maker can acquire and implement in its mobile processor designs. ARM has come to dominate designs from manufacturers as diverse as Qualcomm and Apple Inc. But none of it would ever have happened were it not for that somewhat surprising discovery of how power-efficient that first sample chip really was when it was plugged into a development board. So thank you, Sophie Wilson and Steve Furber: the designers and engineers of today are able to stand upon your shoulders the way you once stood on the shoulders of the people who designed the MOS 6502.

    MOS 6502 microprocessor in a dual in-line package, an extremely popular 8-bit design (Photo credit: Wikipedia)
  • Mark Cuban weighs in on the College Loan Debt problem in the U.S. Hopefully this can sort itself out soon without any major disruptions in the industry. (Fingers crossed.)

    blog maverick

    This is what I see when i think about higher education in this country today:

    Remember the housing meltdown ? Tough to forget isn’t it. The formula for the housing boom and bust was simple. A lot of easy money being lent to buyers who couldn’t afford the money they were borrowing. That money was then spent on homes with the expectation that the price of the home would go up and it could easily be flipped or refinanced at a profit.  Who cares if you couldn’t afford the loan. As long as prices kept on going up, everyone was happy. And prices kept on going up. And as long as pricing kept on going up real estate agents kept on selling homes and finding money for buyers.

    Until the easy money stopped.  When easy money stopped, buyers couldn’t sell. They couldn’t refinance.  First sales slowed, then prices started falling…


  • Question to Carpetbomberz Readers out there (E=m*c^2)

    This is an interactive quiz and I don't know the answer in advance. But possibly through crowd-sourcing the solution we can come to a quicker and more accurate answer. I remember once on a PBS program hearing a number given for the 'mass' of the amount of sunshine that strikes the Earth in one year. Does anyone have a rough scheme for calculating the mass of the sunlight that strikes the Earth in one year, and then converting that from, say, kilograms into pounds? A back-of-the-envelope sketch of one way to do it follows below.
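    One rough approach (my own sketch, assuming the standard solar constant of roughly 1361 W/m² above the atmosphere and treating the Earth as the flat disc it presents to the Sun): multiply the solar constant by the Earth's cross-sectional area and the number of seconds in a year to get the energy intercepted, then divide by c² to get the equivalent mass.

        # Back-of-the-envelope estimate of the mass-equivalent of one year of
        # sunlight striking the Earth, using E = m * c^2 (so m = E / c^2).
        import math

        SOLAR_CONSTANT = 1361.0            # W/m^2, roughly, at the top of the atmosphere
        EARTH_RADIUS = 6.371e6             # m
        LIGHT_SPEED = 2.998e8              # m/s
        SECONDS_PER_YEAR = 365.25 * 24 * 3600

        cross_section = math.pi * EARTH_RADIUS ** 2        # disc the Earth presents to the Sun, m^2
        power = SOLAR_CONSTANT * cross_section              # total intercepted power, W
        energy_per_year = power * SECONDS_PER_YEAR          # joules per year
        mass_kg = energy_per_year / LIGHT_SPEED ** 2        # kilograms
        mass_lb = mass_kg * 2.20462                         # pounds

        print(f"{mass_kg:.2e} kg  (about {mass_lb:.2e} lb)")

    Run as written, that comes out to roughly 6 x 10^7 kg (about 1.3 x 10^8 pounds) per year, though the real figure depends on the exact solar constant and how you treat atmospheric reflection.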

  • Google X founder Thrun demonstrates Project Glass on TV show | Electronista

    Sebastian Thrun, Associate Professor of Computer Science at Stanford University. (Photo credit: Wikipedia)

    Google X (formerly Labs) founder Sebastian Thrun debuted a real-world use of his latest endeavor, Project Glass, during an interview on the syndicated Charlie Rose show which aired yesterday, taking a picture of the host and then posting it to Google+, the company's social network. Thrun appeared to be able to take the picture through tapping the unit, and posting it online via a pair of nods, though the project is still at the prototype stage at this point.

    via Google X founder Thrun demonstrates Project Glass on TV show | Electronista.

    You may remember Sebastian Thrun the way I do. He was spotlighted a few times on the PBS TV series NOVA in its coverage of the 2005 DARPA Grand Challenge. That was the year Carnegie Mellon University battled Stanford University to win a race of driverless vehicles in the desert. The previous year CMU had been the favorite to win, but its vehicle didn't finish the race. By the following year's competition, the stakes were much higher. Stanford started its effort in the summer of 2004, just months after the March Grand Challenge race. By October 2005 the second race was held, with CMU and Stanford battling it out. Sebastian Thrun was the head of the Stanford team and had previously been at CMU, where he was a colleague of the Carnegie race team head, Red Whittaker. In 2001 Thrun took a sabbatical year from CMU and spent it at Stanford. Eventually Thrun left Carnegie Mellon altogether and moved to Stanford in July 2003.

    Thrun also took a graduate student of his and Red Whittaker's with him to Stanford, Michael Montemerlo. That combination of CMU experience, plus a grad student to boot, helped accelerate the pace at which Stanley, the driverless vehicle, was developed to compete in October of 2005. Now move forward to another academic sabbatical, this time from Stanford to Google Inc. Thrun took a group of students with him to work on Google Street View. Eventually this led to another driverless car, funded completely internally by Google. Thrun's accomplishments have continued to accrue at regular intervals, so much so that he has now given up his tenure at Stanford to join Google as a kind of entrepreneurial research scientist, helping head up the Google X Labs. The X Labs is a kind of internal skunkworks that Google funds to work on various and sundry technologies, including the Google driverless car. Add to this Sebastian Thrun's other big announcement this year, an open education initiative titled Udacity (an attempt to 'change' the paradigm of college education). The list, as you can see, goes on and on.

    So where does that put the Google Project Glass experiment? Sergey Brin showed off a prototype of the system at a party very recently. Now Sebastian Thrun has shown it off as well. Google Project Glass is a prototype, as most online websites have reported. Sebastian Thrun's interview on Charlie Rose attempted to demo what the prototype is able to do today. It appears, according to the article quoted at the top of this blog post, that Google Glass can respond to gestures and voice (though voice was not demonstrated). Questions still remain as to what is included in this package to make it all work. Yes, the glasses do appear 'self-contained', but a wireless connection (as pointed out by Mashable.com) would not be visible to anyone not specifically shown all the components that make it go. That little bit of visual misdirection (like a magician's) would lead one to believe that everything resides in the glasses themselves. Well, so much the better then for Google to let everyone draw their own conclusions. As to the concept video of Google Glass, I'm still not convinced it's the best way to interact with a device:

    Project Glass: One day. . .

    As the video shows, it's more centered on voice interaction, very much like Apple's own Siri technology. And that, as you know, requires two things:

    1. A specific iPhone that has a noise-cancelling microphone array

    2. A broadband cellphone connection back to the Apple mothership data center in North Carolina to do the speech-to-text recognition and responses

    So it’s guaranteed that the glasses are self-contained to an untrained observer, but to do the required heavy lifting as it appears in the concept video is going to require the Google Glasses and two additional items:

    1. A specific Android phone with the Google Glass spec'd microphone array and ARM chip inside

    2. A broadband cellphone connection back to the Google motherships, wherever they may be, to do some amount of off-phone processing and, obviously, data retrieval for all the Google apps included.

    It would be interesting to know what passes over the personal area network between the Google Glasses and the cellphone data uplink that a real set of glasses is going to require. The devil is in those details, and they will be the limiting factor on how inexpensively this product could be manufactured and sold.

    Thomas Hawk's photo of Sergey Brin wearing Google Glasses
  • Nice technical abstract on optimizing a messaging architecture at the theoretical level. Many parts to the puzzle.

    Fusion-io's flash drill threatens to burst Violin's pipes • The Register

    Violin Memory logo
    Violin Memory Inc.

    NoSQL database supplier Couchbase says it is tweaking its key-value storage server to hook into Fusion-io's PCIe flash ioMemory products – caching the hottest data in RAM and storing lukewarm info in flash. Couchbase will use the ioMemory SDK to bypass the host operating system's IO subsystems and buffers to drill straight into the flash cache.

    via Fusion-io's flash drill threatens to burst Violin's pipes • The Register.

    Can you hear it? It's starting to happen. Can you feel it? The biggest single meme of the last two years, Big Data/NoSQL, is mashing up with PCIe SSDs and in-memory databases. What does it mean? One can only guess, but the performance gains to be had using a product like Couchbase to overcome the limits of a traditional tables-and-rows SQL database will be amplified when optimized and paired up with PCIe SSD data stores. I'm imagining something like a 10X boost in data reads/writes on the Couchbase back end, and something more like realtime performance from workloads that might previously have been treated like a data mart/data warehouse. If the move to use the ioMemory SDK and directFS technology with Couchbase is successful, you are going to see some interesting benchmarks and white papers about the performance gains. The sketch below gives a rough idea of the hot-in-RAM, warm-in-flash caching pattern being described.
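    To be clear, this is not Couchbase's actual code or the ioMemory SDK API; it's just a minimal sketch, under my own assumptions, of the general pattern the article describes: keep the hottest keys in RAM, demote lukewarm ones to a flash-backed store, and fall through to the database only on a miss.

        # Minimal sketch of a hot-in-RAM / warm-in-flash key-value cache.
        # Not the Couchbase or Fusion-io ioMemory SDK API, just the general idea.
        import os
        import pickle
        from collections import OrderedDict

        class TieredCache:
            def __init__(self, flash_dir="/mnt/pcie_ssd/cache", ram_limit=1000):
                self.ram = OrderedDict()       # tier 1: hottest items, kept in memory
                self.flash_dir = flash_dir     # tier 2: lukewarm items, spilled to PCIe flash
                self.ram_limit = ram_limit
                os.makedirs(flash_dir, exist_ok=True)

            def put(self, key, value):
                self.ram[key] = value
                self.ram.move_to_end(key)      # mark as most recently used
                if len(self.ram) > self.ram_limit:
                    cold_key, cold_value = self.ram.popitem(last=False)
                    with open(os.path.join(self.flash_dir, cold_key), "wb") as f:
                        pickle.dump(cold_value, f)    # demote the coldest item to flash

            def get(self, key):
                if key in self.ram:            # RAM hit
                    self.ram.move_to_end(key)
                    return self.ram[key]
                path = os.path.join(self.flash_dir, key)
                if os.path.exists(path):       # flash hit: promote back into RAM
                    with open(path, "rb") as f:
                        value = pickle.load(f)
                    self.put(key, value)
                    return value
                return None                    # miss: caller falls through to the database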

    What is Violin Memory Inc. doing in this market segment of tiered database caches? Violin is teaming with SAP to create a tiered cache for HANA, the in-memory database from SAP. The SSD SAN array provided by Violin could be multi-tasked to do other duties (providing a cache to any machine on the SAN network). However, this product most likely would be a dedicated caching store to speed up all operations of a RAM-based HANA installation, accelerating online transaction processing and parallel queries on realtime data. No doubt SAP users stand to gain a lot if they are already invested heavily in the SAP universe of products. But for the more enterprising, entrepreneurial types, I think Fusion-io and Couchbase could help get a legacy-free group of developers up and running with equal performance and scale. Whichever one you pick is likely to do the job once it's been purchased, installed, and is up and running in a QA environment.

    Image representing Fusion-io (via CrunchBase)
  • SSD prices may drop following impending price war | MacFixIt – CNET Reviews

    Image representing Newegg (via CrunchBase)

    As a result of this impending price war, if you are planning on upgrading your system with an SSD, you might consider waiting for a few months to watch the market and see how much prices fall.

    via SSD prices may drop following impending price war | MacFixIt – CNET Reviews.

    Great analysis and news from Topher Kessler at C|Net regarding competition in the flash memory industry. I have to say, keep your eyes peeled between now and September and track those prices closely through both Amazon and Newegg. They are neck and neck when it comes to prices on any of the big name-brand SSDs. Samsung and Intel would be at the top of my list going into the fall, but don't be too quick to purchase your gear. Just wait for it as Intel goes up against OCZ, Crucial and Kingston.

    The amount of change in prices will likely vary based on the total capacity of each drive (that's a fixed cost due to the chip count in the device). So don't expect a 512GB SSD to drop by 50% by the end of summer; it's not going to be that drastic. But the price premium brought about by the semi-false scarcity of SSDs is what is really going to disappear once the smaller vendors are eliminated from the market. I will be curious to see how Samsung fares in this battle between the other manufacturers, as they were not specifically listed as a participant in the price war. However, being a chip manufacturer gives them a genuine advantage, as they supply flash memory chips to many of the people who design and manufacture SSDs.

    2008 Intel Developer Forum in Taipei: Samsung muSATA 128GB SSD. (Photo credit: Wikipedia)
  • AnandTech – The Intel Ivy Bridge Core i7 3770K Review

    Similarly disappointing for everyone who isn't Intel, it's been more than a year after Sandy Bridge's launch and none of the GPU vendors have been able to put forth a better solution than Quick Sync. If you're constantly transcoding movies to get them onto your smartphone or tablet, you need Ivy Bridge. In less than 7 minutes, and with no impact to CPU usage, I was able to transcode a complete 130 minute 1080p video to an iPad friendly format—that's over 15x real time.

    via AnandTech – The Intel Ivy Bridge Core i7 3770K Review.

    QuickSync, for anyone who doesn't follow Intel's own technology white papers and CPU releases, is a special feature of Sandy Bridge-era Intel CPUs. Its lineage goes back as far as the Clarkdale series with embedded graphics (the first round of the 32nm design rule). It can do things like simply speed up the decoding of a video stream saved in a number of popular video formats (VC-1, H.264, MP4, etc.). Now it's marketed to anyone trying to speed up the transcoding of video from one format to another. The first Sandy Bridge CPUs using the hardware encoding portion of QuickSync showed incredible speeds compared to the GPU-accelerated encoders of that era. However, things have been kicked up a further notch in the embedded graphics of the Intel Ivy Bridge series CPUs.

    In the quote at the beginning of this article, I included a summary from the AnandTech review of the Intel Core i7 3770K which gives a better sense of the magnitude of the improvement. The full 130-minute 1080p movie was converted at better than 15 times real time, meaning for every minute of video coming off the disc, QuickSync is able to transcode it in about 4 seconds (the quick arithmetic below makes that concrete). That is major progress for anyone who has followed this niche of desktop computing. Having spent time capturing, editing and exporting video, I will admit transcoding between formats is a lengthy process that uses up a lot of CPU resources. Offloading all that burden to the embedded graphics controller removes the traditional impediment of slowing the computer to a crawl and having to walk away and let it work.
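    Just to make the arithmetic concrete, using the 130-minute runtime and the 15x figure from the review:

        # Quick sanity check of the "over 15x real time" claim from the review.
        runtime_min = 130      # length of the source video, in minutes
        speedup = 15           # QuickSync transcode speed relative to real time

        transcode_min = runtime_min / speedup      # total transcode time, ~8.7 minutes
        sec_per_video_min = 60 / speedup           # ~4 seconds per minute of footage

        print(f"{transcode_min:.1f} minutes total, {sec_per_video_min:.0f} s per minute of video")

    At exactly 15x that works out to roughly 8.7 minutes for the whole film; Anand's 'less than 7 minutes' figure actually implies a multiple closer to 19x, which is why the review calls it 'over 15x'.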

    Now transcoding is trivial; it costs nothing in terms of CPU load. Any time it can run faster than realtime means you don't have to walk away from your computer (or at least not for very long), and 10X faster than real time makes that doubly true. Now we are fully at 15X realtime for a full-length movie. The time spent is so short you wouldn't ever have a second thought about "Will this transcode slow down the computer?" It won't; in fact you can continue doing all your other work, be productive, have fun and continue on your way just as if you hadn't just asked your computer to do the most complicated, time-consuming chore that (up until now) you could possibly ask it to do.

    Knowing this application of the embedded graphics is so useful for desktop computers makes me wonder about scientific computing. What could Intel provide in terms of performance increases for simulations and computation in a supercomputer cluster? Seeing how hybrid supercomputers using nVidia Tesla GPU co-processors mixed with Intel CPUs have slowly marched up the list of the Top 500 Supercomputers makes me think Intel could leverage QuickSync further... much further. Unfortunately this performance boost is solely dependent on a few vendors of proprietary transcoding software. Open-source developers do not yet have an opening into the QuickSync tech that would let them write a library to redirect a video stream into the QuickSync acceleration pipeline. When somebody does accomplish this feat, it may be shortly afterward that you see some Linux compute clusters attempt to use QuickSync as an embedded algorithm accelerator too.

    Timeline of Intel processor codenames including released, future and canceled processors. (Photo credit: Wikipedia)
  • Owning Your Words: Personal Clouds Build Professional Reputations | Cloudline | Wired.com

    My first blogging platform was Dave Winer's Radio UserLand. One of Dave's mantras was: "Own your words." As the blogosphere became a conversational medium, I saw what that could mean. Radio UserLand did not, at first, support comments. That turned out to be a constraint well worth embracing. When conversation emerged, as it inevitably will in any system of communication, it was a cross-blog affair. I'd quote something from your blog on mine, and discuss it. You'd notice, and perhaps write something on your blog referring back to mine.

    via Owning Your Words: Personal Clouds Build Professional Reputations | Cloudline | Wired.com.

    I would love to be able to comment on an article or a blog entry by passing it a link to a blog entry within my own WordPress instance on WordPress.com. However, rendering that 'feed' back into the comments section of the originating article/blog page doesn't seem to be common. At best I think I could drop a permalink into the comments section so people might be tempted to follow the link to my blog. But it's kind of unfair to an unsuspecting reader to force them to jump and, in a sense, redirect to another website just to follow a commentary. So I fully agree there needs to be a pub/sub-style way of passing my blog entry by reference back into the comments section of the originating article/blog, something along the lines of the pingback sketch below. Better yet would be something that gives me the ability to amend and edit my poor choice of words the first time I publish a response. Too often silly mistakes get preserved in the 'amber' of the comments fields in the back-end MySQL databases of the content management systems housing many online web magazines. So there's plenty of room for improvement, and I think RSS could easily embrace and extend this style of commenting if someone were driven to develop it.
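    WordPress's existing XML-RPC Pingback mechanism already does a piece of this, notifying the original article that my post links back to it, even though it doesn't render my full entry into the comment thread the way I'd like. A minimal sketch of that by-reference notification, with purely hypothetical URLs:

        # Minimal sketch of a Pingback-style, by-reference comment notification.
        # The URLs below are hypothetical; a real target advertises its endpoint
        # via an X-Pingback header or a <link rel="pingback"> tag on the page.
        import xmlrpc.client

        source = "https://carpetbomberz.com/my-response-post/"   # my blog entry (hypothetical)
        target = "https://example.com/original-article/"         # the article being responded to (hypothetical)

        server = xmlrpc.client.ServerProxy("https://example.com/xmlrpc.php")
        result = server.pingback.ping(source, target)             # standard Pingback XML-RPC method
        print(result)

    Something pub/sub-like built on top of that, which pulled the actual text of my entry into the target's comment thread and tracked later edits, is the part that doesn't exist yet.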

  • Fusion-io shoves OS aside, lets apps drill straight into flash • The Register

    Like the native API libraries, directFS is implemented directly on ioMemory, significantly reducing latency by entirely bypassing operating system buffer caches, file system and kernel block I/O layers. Fusion-io directFS will be released as a practical working example of an application running natively on flash to help developers explore the use of Fusion-io APIs.

    via (Chris Mellor) Fusion-io shoves OS aside, lets apps drill straight into flash • The Register.

    Image representing Fusion-io (via CrunchBase)

    Another interesting announcement from the folks at Fusion-io regarding their brand of PCIe SSD cards. There was a proof-of-concept project covered previously by Chris Mellor in which Fusion-io attempted to top out at 1 billion IOPS using a novel architecture where the PCIe SSD drives were not treated as storage. In fact the Fusion-io card was turned into a memory tier, bypassing most of the OS's own buffers and queues for handling a traditional filesystem. Doing this reaped many benefits by reducing the latency inherent in a filesystem that has to communicate through the OS kernel to the memory subsystem and back again.
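    For a rough idea of what 'bypassing the operating system's buffer caches' means in practice, here is a minimal sketch using Linux's generic O_DIRECT flag. This is not the Fusion-io ioMemory SDK or directFS API, and the device path is hypothetical; it just illustrates the same basic idea of reading a device without the kernel page cache sitting in the middle.

        # Illustrative only: read one block without the kernel page cache,
        # using O_DIRECT on Linux. Not the Fusion-io ioMemory SDK or directFS.
        import os
        import mmap

        BLOCK = 4096                                    # O_DIRECT needs aligned, block-sized I/O
        fd = os.open("/mnt/iomemory/sample.dat",        # hypothetical path on the PCIe flash card
                     os.O_RDONLY | os.O_DIRECT)
        buf = mmap.mmap(-1, BLOCK)                      # anonymous mmap gives a page-aligned buffer
        nread = os.readv(fd, [buf])                     # data comes straight from the device
        os.close(fd)
        print(f"read {nread} bytes; first 16: {bytes(buf[:16]).hex()}")

    The real SDK goes further still, exposing the flash through native API libraries or a filesystem (directFS) implemented directly on the ioMemory rather than routing through the kernel block I/O layer at all.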

    Considering also the work done within the last four years or more on so-called 'in-memory' databases and big data projects in general, a product like directFS might pair nicely with them. The limit with in-memory databases is always the amount of RAM available and the total number of CPU nodes managing those memory subsystems. Tack on the necessary storage to load and snapshot the database over time and you have a very traditional-looking database server. However, if you supplement that traditional-looking architecture with a tier of storage like directFS, the SAN network becomes a third tier of storage, almost like a tape backup device. Sounds interesting the more I daydream about it.

    Shows the kernel's role in a computer. (Photo credit: Wikipedia)