Category: blogroll

This is what I subscribe to myself.

  • David May, parallel processing pioneer • reghardware

    INMOS T800 Transputer
    Image via Wikipedia

    The key idea was to create a component that could be scaled from use as a single embedded chip in dedicated devices like a TV set-top box, all the way up to a vast supercomputer built from a huge array of interconnected Transputers.

    Connect them up and you had, what was, for its era, a hugely powerful system, able to render Mandelbrot Set images and even do ray tracing in real time – a complex computing task only now coming into the reach of the latest GPUs, but solved by British boffins 30-odd years ago.

    via David May, parallel processing pioneer • reghardware.

    I remember the Transputer. I remember seeing ISA-based add-on cards for desktop computers back in the 1980s. They would advertise in the back of the popular computer technology magazines of the day. And while it seemed really mysterious what you could do with a Transputer, the price premium on those boards made you realize they must have been pretty magical.

    More recently, while attending a workshop on Open Source software, I met a couple of former employees of a famous manufacturer of camera film. In their research labs these guys used to build custom machines using arrays of Transputers to speed up image-processing tasks inside the products they were developing. So knowing that there are now even denser architectures built from chips like Tilera, Intel Atom and ARM absolutely blows them away. The price/performance ratio of their old hardware doesn't even come close.

    Software was probably the biggest point of friction, in that the tools to integrate the Transputer into the overall design required another level of expertise. That is true too of the general-purpose graphics processing unit (GPGPU) that Nvidia championed and now markets with its Tesla product line. And the Chinese have created a hybrid supercomputer mating Tesla boards up with commodity CPUs. It's too bad that the economics of designing and producing the Transputer didn't scale over time (the way they have for Intel, as a comparison). Clock speeds fell behind too, which allowed general-purpose microprocessors to spend the extra clock cycles performing the same calculations, only faster. This was also the advantage RISC chips enjoyed, until they couldn't keep pace with the performance increases Intel designed in.

  • From Big Data to NoSQL: Part 3 (ReadWriteWeb.com)

    Image representing ReadWriteWeb as depicted in...
    Image via CrunchBase

    In Part One we covered data, big data, databases, relational databases and other foundational issues. In Part Two we talked about data warehouses, ACID compliance, distributed databases and more. Now we'll cover non-relational databases, NoSQL and related concepts.

    via From Big Data to NoSQL: The ReadWriteWeb Guide to Data Terminology Part 3.

    I really give a lot of credit to ReadWriteWeb for packaging up this 3-part series (started May 24th, I think). It at least narrows down what is meant by all the fast and loose terms White Papers and admen are throwing around to get people to consider their products in RFPs. Just know this, though: in many cases the NoSQL databases that keep coming onto the market tend to be one-off solutions created by big social networking companies who couldn't get MySQL/Oracle/MSQL to scale in size or speed sufficiently during their early build-outs. Just think of Facebook hitting the 500 million user mark and you will know that there's got to be a better way than relational algebra and tables with columns and rows.

    In part 3 we finally get to what we have all been waiting for: non-relational databases, the so-called NoSQL. Google's MapReduce technology is quickly presented as one of the most widely known examples of NoSQL-style distributed data processing, one that doesn't adhere to absolute or immediate consistency but gets there with 'eventual consistency' (consistency being the big C in the acronym ACID). The coolest thing about MapReduce is the similarity (at least in my mind) it bears to the SETI@home project, where 'work units' were split out of large data tapes, distributed piecemeal over the Internet and analyzed on people's desktop computers. The completed units were then gathered up and brought together into a final result. This is similar to how Google does its big data analysis to get work done in its data centers. And it lives on in Hadoop, an open source version of MapReduce started at Yahoo and now part of the Apache organization.
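    The split-distribute-gather pattern described above can be sketched in a few lines of Python (a toy word count, not Google's actual implementation; every name here is my own invention):

```python
from collections import defaultdict

def map_phase(chunk):
    # One 'work unit': emit a (word, 1) pair for every word in this chunk.
    return [(word, 1) for word in chunk.split()]

def shuffle(mapped):
    # Gather: group every emitted value under its key.
    groups = defaultdict(list)
    for pairs in mapped:
        for key, value in pairs:
            groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: collapse each key's values into a final count.
    return {key: sum(values) for key, values in groups.items()}

chunks = ["the quick brown fox", "the lazy dog", "the fox"]
mapped = [map_phase(c) for c in chunks]   # farmed out piecemeal
counts = reduce_phase(shuffle(mapped))    # brought together at the end
print(counts["the"])  # → 3, each chunk contributed a partial count
```

    In a real cluster the map calls run on different machines over different slices of the data, which is exactly the SETI@home resemblance: the framework only has to promise the results eventually converge.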

    Document databases are cool too, and very much like an object-oriented database where you have a core item with attributes appended. I think also of LDAP directories, which bear their own similarities to object-oriented databases. A person has a 'Common Name' or CN attribute. The CN is as close to a unique identifier as you can get, with all the other attributes strung along, appended on the end as they need to be added, in no particular order. The ability to add attributes as needed is like 'tagging' the way social networking sites (picture- and bookmark-sharing websites) do it. You just add an arbitrary tag to help search engines index the site and help relevant web searches find your content.
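    A rough sketch of that append-attributes-as-needed idea, using a plain Python dict as a stand-in for a document store (the record and field names are hypothetical, not from any particular database):

```python
# A core item (the CN-like identifier) with attributes strung along after it.
person = {"cn": "Jane Doe"}              # as close to a unique ID as you get
person["mail"] = "jane@example.org"      # appended as needed...
person["title"] = "Engineer"             # ...in no particular order
person.setdefault("tags", []).append("photography")  # arbitrary 'tagging'
person.setdefault("tags", []).append("databases")
# No schema migration was needed to add any of these fields.
print(sorted(person))  # → ['cn', 'mail', 'tags', 'title']
```

    Contrast that with a relational table, where adding "tags" would mean an ALTER TABLE or a whole new join table.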

    The relationship between Graph Databases and Mind-Mapping is also very interesting. There’s a good graphic illustrating a Graph database of blog content to show how relation lines are drawn and labeled. So now I have a much better understanding of Graph databases as I have used mind-mapping products before. Nice parallel there I think.
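    For what it's worth, the labeled-relation-lines idea can be sketched with a plain edge list in Python (the nodes and labels here are made up, not taken from the article's graphic):

```python
# Nodes joined by labeled relation lines, stored as (source, label, target).
edges = [
    ("alice",   "wrote",        "post-42"),
    ("post-42", "tagged-as",    "nosql"),
    ("bob",     "commented-on", "post-42"),
]

def follow(node, label):
    # Traverse only the edges carrying the given relation label,
    # the way a graph query walks a mind-map from node to node.
    return [dst for src, lbl, dst in edges if src == node and lbl == label]

print(follow("alice", "wrote"))  # → ['post-42']
```

    A real graph database indexes those edges so the traversal is cheap, but the mental model, boxes joined by labeled arrows, is the same one a mind-mapping tool gives you.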

    At the very end of the article there's mention of NewSQL, of which Drizzle is an interesting offshoot. Looking up more about it, I found it interesting as a fork of the MySQL project. Specifically, Drizzle factors out tons of functionality that some folks absolutely need but most don't use (like, say, 32-bit legacy support). There has been a concerted effort to shrink the code, so the overall line count went from over 1 million for MySQL to just under 300,000 for the Drizzle project. Speed and simplicity are the order of the day with Drizzle. You add missing functions by simply adding a plug-in to the main app, and you get back some of the MySQL features that might otherwise be missing.
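    That plug-in idea, keep the core small and opt features back in only when you need them, can be sketched like this in Python (a toy registry of my own devising, not Drizzle's actual plug-in API):

```python
# The core app stays small; optional features are registered on demand.
registry = {}

def plugin(name):
    # Decorator that opts a feature back in by name.
    def wrap(fn):
        registry[name] = fn
        return fn
    return wrap

@plugin("checksum")           # a made-up optional feature
def checksum(row):
    # Toy per-row checksum: sum of character codes, modulo 256.
    return sum(ord(c) for c in row) % 256

# The core only carries what was explicitly loaded:
print("checksum" in registry)  # → True
```

    The payoff is the same one Drizzle is after: everyone pays for the small core, and only the people who load a plug-in pay for its complexity.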

    *Note: Older survey of the NoSQL field conducted by ReadWriteWeb in 2009

  • History of Sage

    A screenshot of Sagemath working.
    Image via Wikipedia

    The Sage Project Webpage http://www.sagemath.org/

    Sage is mathematical software, very much in the same vein as MATLAB, MAGMA, Maple, and Mathematica. Unlike these systems, every component of Sage is GPL-compatible. The interpretative language of Sage is Python, a mainstream programming language. Use Sage for studying a huge range of mathematics, including algebra, calculus, elementary to very advanced number theory, cryptography, numerical computation, commutative algebra, group theory, combinatorics, graph theory, and exact linear algebra.

    Explanation of what Sage does by the original author William Stein 

    (Long – roughly 50 minutes)

    Original developer http://wstein.org/ and his history of Sage mathematical software development. Wiki listing http://wiki.sagemath.org/ with a list of participating committers. Discussion lists for developers: mostly done through Google Groups with associated RSS feeds. Mercurial repository (start date Sat Feb 11 01:13:08 2006); Gonzalo Tornaria seems to have loaded the project in at this point. Current list of source code in Trac, with a listing of committers for the most recent release of Sage (4.7).

    • William Stein (wstein) Still very involved, based on frequency of commits
    • Michael Abshoff (mabs) Ohloh has him ranked second only to William Stein in commits and time on project. He has now left the project, according to the Trac log.
    • Jeroen Demeyer (jdemeyer) commits a lot
    • J.H. Palmieri (palmieri) has done a number of tutorials and documentation; he's on the IRC channel
    • Minh Van Nguyen (nguyenminh2) has done some tutorials, documentation and work on the Categories module. He also appears to be the sysadmin on the Wiki
    • Mike Hansen (mhansen) is on the IRC channel irc.freenode.net#sagemath and is a big contributor
    • Robert Bradshaw (robertwb) has done some very recent commits

    Changelog for the most recent release (4.7) of Sage. Moderators of irc.freenode.net#sagemath: Keshav Kini (who maintains the Ohloh info) & schilly@boxen.math.washington.edu. Big milestone release of version 4.7, with tickets listed by module: Click Here. And the Ohloh listing of top contributors to the project. There's an active developer and end-user community. Workshops are tracked here; Sage Days workshops tend to be hackfests for interested parties. But more importantly, developers can read up on this page about how to get started and what the process is as a Sage developer.

    Further questions that need to be considered. Look at the source repository and the developer blogs, and ask the following questions:

    1. Who approves patches? How many people? (There's a large number of people responsible for reviewing patches; if I had to guess, it could be 12 in total, based on the most recent changelog)
    2. Who has commit access? & how many?
    3. Who is involved in the history of the project? (That’s pretty easy to figure out from the Ohloh and Trac websites for Sage)
    4. Who are the principal contributors, and have they changed over time?
    5. Who are the maintainers?
    6. Who is on the front end (user interface) and back end (processing or server side)?
    7. What have been some of the major bugs/problems/issues that have arisen during development? Who is responsible for quality control and bug repair?
    8. How is the project's participation trending, and why? (Seems to have stabilized after a big peak of 41 contributors about two years ago; the Ohloh graph of commits shows peak activity in 2009 and 2010.)

    Note that the period the Gource visualization covers starts in 2009, while the earliest entry I could find in the Mercurial repository was from 2006. Sage was already a going concern before the Mercurial repository was put on the web, so the simulation doesn't show the full history of development.

  • AppleInsider | Apple seen merging iOS, Mac OS X with custom A6 chip in 2012

    Steve Jobs while introducing the iPad in San F...
    Image via Wikipedia

    Rumors of an ARM-based MacBook Air are not new. In May, one report claimed that Apple had built a test notebook featuring the same low-power A5 processor found in the iPad 2. The report, which came from Japan, suggested that Apple officials were impressed by the results of the experiment.

    via AppleInsider | Apple seen merging iOS, Mac OS X with custom A6 chip in 2012.

    Following up on an article they ran back on May 27th, and one prior to that on May 6th, AppleInsider does a bit of prediction and prognostication about the eventual fusion of iOS and Mac OS X. What they see triggering it is an ARM chip able to execute 64-bit binaries across all of the product lines (the fabled ARM A6). How long would it take to do this consolidation and interweaving? How many combined updaters, security patches and Pro App updaters would it take to get OS X 10.7 to be 'more' like iOS than it is today? The software development is going to take a while, and it's not just a matter of cross-compiling to an ARM chip from a software base built for Intel chips.

    Given that 64-bit Intel Atom chips are already running in the new SeaMicro SM10000 (x64), it won't be long now, I'm sure, before the ARM equivalent, the Cortex-A15, hits full stride. The designers have been aiming for a four-core design encompassed by the Cortex-A15 release, coming real soon now (RSN). The next step, after that chip is licensed, piloted, tested and put into production, will be a 64-bit clean design. I'm curious to see whether 64-bit will be applied across ALL the different product lines within Apple. Especially once power usage and thermal design power (TDP) are considered: will 64-bit ARM chips be as battery friendly? I wonder. True, Intel jumped the 64-bit divide on the desktop with the Core 2 Duo line some time ago and made those chips somewhat battery friendly. But they cannot compare at all to the 10-plus hours one gets today on a 32-bit ARM chip in the iPad.

    Lastly, app developers will also need to keep their Xcode environments up to date and merge in new changes constantly, right up to the big cutover to ARM x64. No telling what that's going to be like, apart from the two problems I have already raised here. In the run-up to 10.7 Lion, Apple was very late in providing the support and tools developers needed to get their apps ready. I will say, though, that in the history of hardware/software migrations, Apple has done more of them, more successfully, than any other company. So I think they will pull it off, no doubt, but there will be much wailing and gnashing of teeth. And hopefully we end users will see something better come out of it, something better than a fatter profit margin for Apple (though that seems to be the prime mover in most recent cases, as Steve Jobs does the long slow fade into obscurity).

    If ARM x64 is inevitable and iOS on Everything too, then I’m hoping things don’t change so much I can’t do things similarly to the way I do them now on the desktop. Currently on OS X 10.7 I am ignoring completely:

    1. Gestures
    2. Mission Control
    3. Launch Pad
    4. AppStore (not really because I had to download Lion)

    Let’s hope this roster doesn’t get even longer over time as iOS becomes the de facto OS on all Apple products. Because I was sure hoping the future would be brighter than this. And as AppleInsider quoted back on May 6th,

    “In addition to laptops, the report said that Apple would ‘presumably’ be looking to move its desktop Macs to ARM architecture as well. It characterized the transition to Apple-made chips for its line of computers as a ‘done deal’.”

  • Apple patents hint at future AR screen tech for iPad | Electronista

    Structure of liquid crystal display: 1 – verti...
    Image via Wikipedia

    Apple may be working on bringing augmented reality views to its iPad thanks to a newly discovered patent filing with the USPTO.

    via Apple patents hint at future AR screen tech for iPad | Electronista. (Originally posted at AppleInsider at the following link below)

    Original Article: Apple Insider article on AR

    Just a very brief look at a couple of patent filings by Apple, with some descriptions of potential applications. They seem to want to use the technology for navigation, using the onboard video camera. One half of the screen would show the live video feed, while the other half shows a ‘virtual’ 3D rendition of that scene, to let you find a path, or maybe a parking space, in between all those buildings.

    The second filing mentions a see-through screen whose opacity can be regulated by the user. The information display takes precedence over the image seen through the LCD panel, and it defaults to totally opaque when no voltage is applied (an in-plane switching design for the LCD).

    However, the most intriguing part of the story as told by AppleInsider is the use of sensors on the device to determine angle, direction and bearing, which are then sent over the network. Why the network? Well, the whole rendering of the 3D scene described in the first patent filing is done somewhere in the cloud and spit back to the iOS device. No onboard 3D rendering needed, or at least not at that level of detail. Maybe those data centers in North Carolina are really cloud-based 3D rendering farms?
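    As a back-of-the-envelope sketch, the kind of sensor payload such a device might ship off to a cloud renderer could look like this (the field names and values are purely my guesses, not anything from the patent filings):

```python
import json

# The device reads its own sensors and sends a tiny pose over the network;
# the heavy 3D rendering then happens server-side, not on the handset.
pose = {
    "lat": 47.6062, "lon": -122.3321,  # position (example coordinates)
    "bearing_deg": 128.0,              # compass direction the camera faces
    "pitch_deg": -12.5,                # angle the device is tilted
}
request_body = json.dumps(pose)        # what would actually cross the wire
echoed = json.loads(request_body)      # round-trips without loss
print(echoed["bearing_deg"])  # → 128.0
```

    The interesting part is how little has to go up: a few floats describing the pose, in exchange for a fully rendered scene coming back down.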

  • Distracting chatter is useful. But thanks to RSS (remember that?) it’s optional. (via Jon Udell)

    editing my radio userland instiki from my 770
    Image by Donovan Watts via Flickr

    I too am a big believer in RSS. And while I am dipping toes into Facebook and Twitter the bulk of my consumption goes into the big Blogroll I’ve amassed and refined going back to Radio Userland days in 2002.

    When I left the pageview business I walked away from an engine that had, for many years, manufactured an audience for my writing. Four years on I’m still adjusting to the change. I always used to cringe when publishers talked about using content to drive traffic. Of course when the traffic was being herded my way I loved the attention. And when it wasn’t I felt — still feel — its absence. There are plenty of things I don’t miss, though. Among t …

    via Jon Udell

  • A cocktail of AR and social marketing | Japan Pulse

    From top left: Shinjuku, Tokyo Tower, Rainbow ...
    Image via Wikipedia

    Though the AR element is not particularly elegant, merely consisting of a blue dot superimposed on your cell phone screen that guides the user through Tokyo’s streets, we think it’s nevertheless a clever marketing gimmick.

    via A cocktail of AR and social marketing | Japan Pulse.

    Augmented Reality (AR) in the news this week, being used for a marketing campaign in Tokyo, Japan. It’s mostly geared toward getting people out to visit bars and restaurants to collect points; whoever collects enough points can cash them in for Chivas Regal memorabilia. But hey, it’s something, I guess. I just wish the navigation interface were a little more sophisticated.

    I also wonder how many different phones can serve as personal navigators to find the locations awarding points. GPS seems like an absolute requirement, but so is a phone with a Foursquare or Livedoor client.

  • Kim Cameron returns to Microsoft as indie ID expert • The Register

    Cameron said in an interview posted on the ID conference’s website last month that he was disappointed about the lack of an industry advocate championing what he has dubbed “user-centric identity”, which is about keeping various bits of an individual’s online life totally separated.

    via Kim Cameron returns to Microsoft as indie ID expert • The Register.

    CRM, meet VRM: we want our identity separated. This is one of the goals of Vendor Relationship Management, as opposed to “Customer Relationship” Management. I want to share a well-defined set of details with Windows Live!, Facebook, Twitter and Google. Instead, I exist as separate entities that they each try to aggregate and profile, to learn more beyond what I do on their respective web apps. So if someone can champion my ability to control what I share with which online service, all the better. If Microsoft understands this, it’s possible someone like Kim Cameron will be able to accomplish some big things with Windows Live! ID logins and profiles. Otherwise, this is just another attempt to capture web traffic into a commercial, private Intraweb. I count Apple, Facebook and Google as private Intraweb competitors.

  • Tilera throws gauntlet at Intel’s feet • The Register

    Upstart mega-multicore chip maker Tilera has not yet started sampling its future Tile-Gx 3000 series of server processors, and companies have already locked in orders for the chips.

    via Tilera throws gauntlet at Intel’s feet • The Register.

    Proof that sometimes a shipping product doesn’t make all the difference, although it might be nice to tout the performance of an actual shipping product. What’s becoming more real is the power efficiency of the Tilera architecture, core for core, versus Intel’s x64 architecture. Tilera can deliver a much lower thermal design power (TDP) per core than typical Intel chips running the same workloads. So: Tilera for the win, on paper anyway.

  • Intel readying MIC x64 coprocessor for 2012 • The Register

    Image representing Intel as depicted in CrunchBase
    Image via CrunchBase

    Thus far, Intel’s Many Integrated Core (MIC) is little more than a research project. Intel picked up the remnants of the failed “Larrabee” graphics card project, rechristened it Knights and put it solely in the service of the king of computing, the CPU.

    via Intel readying MIC x64 coprocessor for 2012 • The Register.

    Ahhh, alas, poor ol’ Larrabee, we hardly knew ye. And yet, somehow, your ghost will rise again, and again, and again. I remember the hints at the 80-core CPU, which then fell to 64 cores, then 40, and now, just today, I read this article to find out it is merely Larrabee and has a grand total of (hold tight, are you ready for this shocker?) 32 cores. Wait, what was that? Did you say 32 cores? Let’s turn back the page to May 15, 2009, when Intel announced the then-new Larrabee graphics processing engine as a 32-core processor. That’s right: nothing (well, maybe not nothing) has happened in TWO YEARS! Or very little has happened: a few die shrinks, and now the upcoming 3D (tri-gate) transistors for the 22nm design revision of Intel Architecture CPUs. It also looks like they may have shuffled around the floor plan/layout of the first-gen Larrabee CPU to help speed things up a bit. But other than these incremental touch-ups, the car looks very much like the model from two years ago. What we can also hope has improved since 2009 is the speed and efficiency of the compilers Intel’s engineers have crafted to accompany the release of this repackaged Larrabee.

    Intel shows glimpse of 32-core Larrabee beast (Chris Mellor @ http://www.theregister.co.uk)