Category: blogroll

This is what I subscribe to myself.

  • ARM server hero Calxeda lines up software super friends • The Register


    via ARM server hero Calxeda lines up software super friends • The Register.

    Calxeda is in the news again this week with more announcements about its plans. Thinking back to the last article I posted on Calxeda: this company boasts an ARM-based server packing 120 CPUs (each with four cores) into a 2U chassis (making it just 3-1/2″ tall *see note). With every evolution in hardware you need an equal, if not greater, revolution in software, which is the point of Calxeda’s announcement of its new software partners.

    It’s mostly cloud apps, cloud provisioning and cloud management types of vendors, and with the partnership each company gets early access to the hardware Calxeda is promising to design, prototype and eventually manufacture. Both Google and Intel have pooh-poohed the idea of using “wimpy processors” on massively parallel workloads, claiming faster serialized workloads are still easier to manage through existing software and programming techniques. Yet for all the years Intel has complained about the programming tools, it has still gone the multi-core/multi-thread route itself, hoping to continue its domination by offering up ‘newer’ and higher-performing products. So while Intel bad-mouths parallelism on competing CPUs, it seems desperate to sell multi-core to willing customers year over year.

    Even as power efficient as those cores may be, Intel’s old culture of maximum performance for the money still holds sway. Even the most recent ultra-low-voltage i-series CPUs are still hitting about 17 watts for chips clocking in around 1.8GHz (speed boosting up to 2.9GHz in a pinch). Even if Intel allowed these chips to be installed into servers, we’re still talking about a lot of Thermal Design Power (TDP) that has to be chilled to keep things running.
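
    To make the density-versus-power argument concrete, here is a rough back-of-envelope sketch in Python. The per-node and per-chassis figures are my own illustrative assumptions (they are not Calxeda or Intel numbers), so treat the output as the shape of the argument rather than a benchmark.

    ```python
    # Back-of-envelope rack-power comparison of "wimpy" vs. conventional cores.
    # Every wattage and node-count figure below is an assumption for illustration,
    # not a vendor specification.

    CALXEDA_NODES_PER_2U = 120   # quad-core ARM nodes claimed per 2U chassis
    WATTS_PER_ARM_NODE = 5       # assumed watts per ARM node (SoC plus DRAM)

    ULV_CHIPS_PER_2U = 4         # assumed ULV i-series sockets in a 2U box
    WATTS_PER_ULV_CHIP = 17      # ULV i-series TDP cited above

    def chassis_watts(units: int, watts_each: float) -> float:
        """Total power for one 2U chassis, ignoring fans, PSU losses, etc."""
        return units * watts_each

    arm_watts = chassis_watts(CALXEDA_NODES_PER_2U, WATTS_PER_ARM_NODE)
    ulv_watts = chassis_watts(ULV_CHIPS_PER_2U, WATTS_PER_ULV_CHIP)

    print(f"ARM 2U chassis: {CALXEDA_NODES_PER_2U * 4} cores at roughly {arm_watts:.0f} W")
    print(f"ULV 2U chassis: {ULV_CHIPS_PER_2U * 2} cores at roughly {ulv_watts:.0f} W")
    ```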

  • Goal oriented visualizations? (via Erik Duval’s Weblog)

    Image: Charles Minard's 1869 chart showing the losses of Napoleon's Russian campaign (via Wikipedia)

    Visualizations and their efficacy always take me back to Edward Tufte’s big hardcover books on infographics (or “chart junk” when it’s done badly). In terms of this specific category, visualization that leads toward a goal, I think it’s still very much a ‘general case’. But examples are always better than theoretical descriptions of an ideal. So while I don’t have an example to give (which is what Erik Duval really wants), I can at least point to a person who knows how infographics get misused.

    I’m also reminded of the most recent issue of Wired Magazine, where there’s an article on feedback loops. How are goal-oriented visualizations different from, or better than, feedback loops? I’d say that’s an interesting question to investigate further. The primary example given in that story is the radar-equipped speed limit sign. It doesn’t tell you the posted speed; it merely tells you how fast you are going, and that by itself, apart from ticketing or making the speed limit signs more noticeable, did more to change behavior than any other option. So maybe a goal-oriented visualization could also borrow some feedback-loop techniques, along the lines of the little sketch below.
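
    As a toy illustration (my own construction, not anything from the Wired piece or Erik Duval’s post), here is a minimal Python sketch contrasting a radar-sign-style feedback display, which only mirrors current behavior, with a goal-oriented display that also shows the gap to a target. The speed values and the 30 mph goal are made-up numbers.

    ```python
    # Toy contrast between a pure feedback loop and a goal-oriented visualization.
    # The speeds and the 30 mph goal are made-up example values.

    def feedback_display(current_speed_mph: float) -> str:
        """Radar-sign style: just mirror the measured behavior back to the driver."""
        return f"YOUR SPEED: {current_speed_mph:.0f}"

    def goal_display(current_speed_mph: float, goal_mph: float = 30.0) -> str:
        """Goal-oriented: show the measurement and the gap to the target."""
        gap = current_speed_mph - goal_mph
        status = "OVER" if gap > 0 else "OK"
        return (f"YOUR SPEED: {current_speed_mph:.0f} "
                f"(goal {goal_mph:.0f}, {status} by {abs(gap):.0f})")

    for speed in (27, 34, 41):
        print(feedback_display(speed), "|", goal_display(speed))
    ```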

    Some of the fine fleur of information visualisation in Europe gathered in Brussels today at the Visualizing Europe meeting. Definitely worth to follow the links of the speakers on the program! Twitter has a good trace of what was discussed. Revisit offers a rather different view on that discussion than your typical twitter timeline. In the Q&A session, Paul Kahn asked the Rather Big Question: how do you choose between different design alterna … Read More

    via Erik Duval’s Weblog

  • Macintouch Reader Reports: User Interface Issues iOS/Lion


    Anyways, I predict a semi-chaos, where – for example – a 3 fingers swipe from left to right means something completely different in Apple than in any other platform. We are already seeing signs of this in Android, and in the new Windows 8. Also, users will soon need “cheat sheets” to remember the endless possible combinations. Would be interesting to hear other people’s thoughts.

    via User Interface Issues.

    After the big WWDC keynote presentation by Steve Jobs et al., the question I have too is: what’s up with all the finger combos for swiping? In the bad old days people needed wire-bound notebooks to tell them all the commands to run their IBM PC. And who can forget the users of WordPerfect, who had keyboard template overlays to remind themselves of the ‘menu’ of possible key combos (Ctrl/Alt/Shift). Now we are faced with endless and seemingly arbitrary combinations of finger swipes, pinches, flicks and the rest.

    Like other readers who responded to this question on the Macintouch message boards, I have to ask: what about the bad old days of the Apple one-button mouse? Remember when Apple finally capitulated and provided two mouse buttons? It did so through software. Just before the Magic Mouse hit town, Apple provided a second mouse button (at long last), bringing the Mac in line for the first time with the Windows PC convention of left and right mouse buttons. How recently did this happen? Maybe two years ago, when Apple introduced the wired and wireless versions of the Mighty Mouse, and even then it was virtual, not a literal two-button experience. Now we have the Magic Mouse with no buttons and no clicking; it’s one rounded-over trackpad that accepts the Lionized gestures. To quote John Wayne, “It’s gettin’ to be Ri-goddamn-diculous.”

    So whither the haptic touch interface conventions of the future? Who is going to win the gesture arms race? Who is going to figure out less is more when it comes to gestures? It ain’t Apple.

  • JSON Activity Streams Spec Hits Version 1.0


    The Facebook Wall is probably the most famous example of an activity stream, but just about any application could generate a stream of information in this format. Using a common format for activity streams could enable applications to communicate with one another, and presents new opportunities for information aggregation.

    via JSON Activity Streams Spec Hits Version 1.0.

    Remember mash-ups? I recall the great wide wonder of putting together web pages that used ‘services’ provided for free through APIs published for anyone who wanted to use them. There were many at one time; some still exist and others have been culled. But as newer social networks begat yet newer ones (MySpace, Facebook, Foursquare, Twitter), none of the ‘outputs’ or feeds of any single one was anything more than a way of funneling you into its own login accounts and user screens. So the gated community first requires you to be a member in order to play.

    We went from ‘open’ to cul-de-sac and stovepipe in less than one full revision of social networking. However, maybe all is not lost; maybe an open standard can at least help folks re-use their own data (maybe I could mash up my own activity stream). Betting on whether this will take hold and see wider adoption by social networking websites would be risky. Likely each service provider will closely hold most of the data it collects and only publish the bare minimum necessary to claim compliance.

    Another burden on this kind of sharing is the slowly creeping concern about the security of one’s own activity stream. It will no doubt have to be opt-in, and definitely not opt-out, as I’m sure people are more comfortable having fellow members of their tribe know what they are doing than putting out a feed of it to the whole Internet. Which makes me think of the old discussion about being able to fine-tune who has access to what (Doc Searls’ old Vendor Relationship Management idea). Activity Streams could easily fold into that universe, where you regulate which threads of the stream are shared with which people. I would only really agree to use this kind of service if it had that fine-grained level of control.
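
    For the curious, an activity in the 1.0 spec is essentially a small JSON object with an actor, a verb, an object and a timestamp. Below is a minimal Python sketch that builds one; the names, IDs and content in it are placeholders of my own invention, not values taken from the spec.

    ```python
    import json
    from datetime import datetime, timezone

    # A minimal JSON Activity Streams 1.0-style activity: actor / verb / object.
    # All identifiers and display values are placeholders for illustration only.
    activity = {
        "published": datetime.now(timezone.utc).isoformat(),
        "actor": {
            "objectType": "person",
            "id": "tag:example.org,2011:user/carlos",
            "displayName": "Carlos",
        },
        "verb": "post",
        "object": {
            "objectType": "note",
            "id": "tag:example.org,2011:note/42",
            "content": "Trying out the Activity Streams 1.0 format.",
        },
    }

    print(json.dumps(activity, indent=2))
    ```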

  • AnandTech – Computex 2011: OCZs RevoDrive 3


    There’s a new PCIe SSD in town: the RevoDrive 3. Armed with two SF-2281 controllers and anywhere from 128 – 256GB of NAND (120/240GB capacities), the RevoDrive 3 is similar to its predecessors in that the two controllers are RAIDed on card. Here’s where things start to change though.

    via AnandTech – Computex 2011: OCZs RevoDrive 3 & RevoDrive 3 X2, Now With TRIM.

    OCZ is back with a revision of its consumer-grade PCIe SSD, the RevoDrive. This time out the SandForce SF-2281 makes an appearance, and to great I/O effect. The bus interface is a true PCIe bridge chip, as opposed to the last version’s PCI-X to PCIe bridge. This device can also be managed completely through the OS’s own drive utilities, with TRIM support. All combined, this is the most native and well-supported PCIe SSD to hit the market. No benchmarks yet from a commercially shipping product, but my fingers are crossed that this thing is going to be faster than OCZ’s Vertex 3 and Vertex 3 Pro (I hope) while possibly holding more flash memory chips than those SATA 6Gb/s SSDs.

    One other upshot of this revised product is full OS boot support. So not only will TRIM work, but your motherboard and the PCIe card’s electronics will let you boot directly off the card. This is by far the most evolved and versatile PCIe-based SSD to date. Pricing is the next big question on my mind after reading the specifications; hopefully it will not be enterprise grade (greater than $1,200). I’ve found most of the prosumer and gamer upgrade manufacturers are comfortable setting prices at the $1,200 price point for these PCIe SSDs, and that trend has been pretty reliable going back to the original RevoDrive.
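
    Incidentally, on Linux you can get a rough idea of whether a block device advertises TRIM/discard support by peeking at sysfs, as in the sketch below. The device name is an assumption, and a non-zero discard_granularity is only a hint that the device accepts discard commands, not proof that the whole storage stack passes them through.

    ```python
    from pathlib import Path

    def supports_discard(device: str = "sda") -> bool:
        """Heuristic TRIM/discard check via sysfs (Linux only).

        A non-zero discard_granularity suggests the device accepts discard/TRIM
        commands; it says nothing about filesystem or RAID-layer support.
        """
        path = Path(f"/sys/block/{device}/queue/discard_granularity")
        try:
            return int(path.read_text().strip()) > 0
        except (FileNotFoundError, ValueError):
            return False

    print("sda discard/TRIM capable:", supports_discard("sda"))
    ```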

  • 2WAY Q&A: Layar’s Maarten Lens-FitzGerald on Building a Digital Layer on Top of the World


    Lens-FitzGerald: I never thought of going into augmented reality, but cyberspace, any form of digital worlds, have always been one of the things I’ve been thinking about since I found out about science fiction. One of the first books I read of the cyber punk genre was Bruce Sterling‘s “Mirror Shades.” Mirror shades, meaning, of course, AR goggles. And that book came out in 1988 and ever since, this was my world.

    via 2WAY Q&A: Layar’s Maarten Lens-FitzGerald on Building a Digital Layer on Top of the World.

    An interview with the man who created the most significant augmented reality (AR) application on handheld devices, Layar. In the time since the first releases on smartphones (Android handsets in Europe, initially), Layar has branched out to cover more of the OSes available on handheld devices. Interest in AR has, I think, cooled somewhat as social networking and location have seemed to rule the day, and I would argue even location isn’t as fiery hot as it was at the beginning. But Facebook is still here with a vengeance. So whither the market for AR? What’s next, you wonder? Well, it seems Qualcomm has today announced its very own AR toolkit to help jump-start the developer market toward more useful, nay killer, AR apps. Stay tuned.

  • AnandTech – OCZ Agility 3 240GB Review


    There’s another issue holding users back from the Vertex 3: capacity. The Vertex 3 is available in 120, 240 and 480GB versions; there is no 60GB model. If you’re on a budget or like to plan frequent but rational upgrades, the Vertex 3 can be a tough sell.

    via AnandTech – OCZ Agility 3 240GB Review.

    OCZ, apart from having the fastest SSD on the market right now, is attempting to branch out and move down-market simultaneously. And by down-market I don’t mean anything other than the almighty PRICE. It’s all about the upgrade market for the PC fanboys who want to trade up to the next higher-performing part for their gaming computer (if people still do that, play games on their PeeCees). The Agility 3 is designed to be less expensive, and this review shows it is not the highest-speed part. So if you demand to own an OCZ-branded SSD and won’t settle for anything less, but you don’t want to pay $499 to get it, the Agility 3 is just for you. Also, if you read the full review, the charts show how all the current-generation SATA 6Gb/s drives are shaping up (Intel included) versus the previous generation of SATA 3Gb/s drives. The OCZ Vertex 3 is still king of the mountain at the 240GB size, but still very much at a price premium.

  • From Big Data to NoSQL: The ReadWriteWeb Guide to Data Terminology (Part 1)


    Big Data

    In short, big data simply means data sets that are large enough to be difficult to work with. Exactly how big is big is a matter of debate. Data sets that are multiple petabytes in size are generally considered big data (a petabyte is 1,024 terabytes). But the debate over the term doesn’t stop there.

    via From Big Data to NoSQL: The ReadWriteWeb Guide to Data Terminology (Part 1).

    There’s big doin’s inside and outside the data center these days. You cannot go a day without a cool new article about some project that has just been open-sourced from one of the departments inside the social networking giants, Hadoop being the biggest example. What, you ask, is Hadoop? It is a project Yahoo backed after Google started spilling the beans on its two huge technological leaps in massively parallel databases and large-scale data processing. The first was BigTable, a huge distributed database that could be brought up on an inordinately large number of commodity servers and then ingest all the indexing data sent by Google’s web bots as they found new websites. That’s the database and ingestion point. The second was the way the rankings and ‘pertinence’ of the indexed websites would be calculated through PageRank; the invention for processing all that collected data is called MapReduce, a way of pulling in, processing and quickly sorting out the important, highly ranked websites. Yahoo read the white papers put out by Google and subsequently created a version of those technologies, which today powers the Yahoo! search engine. Having put this into production and realized its benefits, Yahoo turned it into an open-source project to lower the threshold for people wanting to get into the Big Data industry, and to get many programmers’ eyes looking at the source code, adding features, packaging it and, all-importantly, debugging what was already there. Hadoop is the name given to that Yahoo bag of software, and it is what a lot of people initially adopt if they are trying to do large-scale collection and analysis of Big Data.
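
    If MapReduce sounds abstract, the classic toy example is a word count: a map step that emits (word, 1) pairs and a reduce step that sums them per key. Here is a single-machine Python sketch of that pattern; it is nothing like the distributed Hadoop implementation, just the shape of the idea, with made-up input documents.

    ```python
    from collections import defaultdict

    # Single-machine sketch of the MapReduce word-count pattern.
    # Real Hadoop distributes the map and reduce phases across many machines.

    def map_phase(documents):
        """Map: emit a (key, value) pair for every word in every document."""
        for doc in documents:
            for word in doc.lower().split():
                yield word, 1

    def reduce_phase(pairs):
        """Reduce: sum the values for each key (here, occurrences per word)."""
        totals = defaultdict(int)
        for word, count in pairs:
            totals[word] += count
        return dict(totals)

    docs = ["the quick brown fox", "the lazy dog", "the fox again"]
    print(reduce_phase(map_phase(docs)))   # e.g. {'the': 3, 'fox': 2, ...}
    ```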

    Another development along the way to the Big Data movement was a parallel attempt to overcome the limitations of extending the schema of a typical relational database holding all the incoming indexed websites. Tables, rows and Structured Query Language (SQL) have ruled the day since about 1977 or so, and for many kinds of tabular data there is no substitute. However, the kinds of data being stored now fall into the big amorphous mass of binary large objects (BLOBs) that can slow down a traditional database. So a non-SQL approach was adopted, and there are parts of BigTable and Hadoop that dump the unique keys and relational tables of SQL just to get the data in and characterize it as quickly as possible, or better yet to re-characterize it by adding elements to the schema after the fact. Whatever you are doing, what you collect might not be structured or easily structurable, so you are going to need to play fast and loose with it, and you need a database equal to that task. Enter the NoSQL movement, built to collect and analyze Big Data in its least structured form. So my recommendation to anyone trying to force the square peg of relational databases into the round hole of their unstructured data is: give up. Go NoSQL and get to work.
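
    The practical difference is that a schema-less store lets every record carry its own shape, so you can re-characterize data after the fact without an ALTER TABLE. A minimal sketch of that idea, assuming nothing beyond Python’s standard library, with a plain dictionary of JSON documents standing in for a real document store:

    ```python
    import json

    # Toy document store: keys map to JSON documents, and each document can have
    # a different shape. No up-front schema; fields can be added later.
    store: dict[str, str] = {}

    def put(key: str, document: dict) -> None:
        store[key] = json.dumps(document)

    def get(key: str) -> dict:
        return json.loads(store[key])

    # Two records with two different shapes; a relational table would need a migration.
    put("site:1", {"url": "http://example.org", "crawled": "2011-06-14"})
    put("site:2", {"url": "http://example.net", "crawled": "2011-06-15",
                   "rank_hint": 0.42, "outlinks": ["http://example.org"]})

    # Re-characterize an existing record after the fact.
    doc = get("site:1")
    doc["language"] = "en"
    put("site:1", doc)

    print(get("site:1"))
    ```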

    This first article from ReadWriteWeb is good in that it lays the foundation for what the relational database universe looks like and how you can manipulate it. Having established what IS, future articles will look at the quick-and-dirty workarounds and one-off projects people have come up with to fit their needs, and then at which ‘works for me’ solutions have been turned into bigger open-source projects that ‘work for others’, as that is where each of these technologies will really differentiate itself. Ease of use and lowering the threshold will be deciding factors in many people’s adoption of a NoSQL database, I’m sure.

  • Tilera preps 100-core chips for network gear • The Register


    Upstart multicore chip maker Tilera is using the Interop networking trade show as the coming out party for its long-awaited Tile-Gx series of processors, which top out at 100 cores on a single die.

    via Tilera preps 100-core chips for network gear • The Register.

    A further update on Tilera’s product launches as the old Interop trade show for network switch and infrastructure vendors is held in Las Vegas. Tilera has tweaked the chip packaging of its CPUs and is now going to market different CPUs to different industries. This family of Tilera chips is called the 8000 series and will be followed by a next generation of 3000 and 5000 series chips. Projections are that by the time the Tilera 3000 series is released, chip density will be sufficient to pack upwards of 20,000 Tilera CPU cores into a single 42U-tall, 19-inch-wide server rack, with a future revision possibly doubling that to 40,000 cores. That roadmap is very aggressive but promising, and it shows there is a lot of scaling possible with the Tilera product over time. Hopefully these plans will lead to some big customers signing up to use Tilera in shipping products in the immediate and near future.

    What I’m most interested in knowing is how the currently shipping Quanta server that uses the Tilera CPU benchmarks against an Intel Atom-based or ARM-based server on a generic web-server workload. While white papers and press releases have made regular appearances on the technology weblogs, very few outlets have attempted to get sample product and run it through its paces. I suspect, though I cannot confirm, that potential customers are given non-disclosure agreements and shipping samples to test in their data centers before making any big purchases. I also suspect that, as is often the case, the applications for these low-power, massively parallel, dense servers are very narrow, not unlike those for a supercomputer. IBM’s Cell processor (the chip behind the Roadrunner supercomputer rather than the PowerPC-based Blue Gene machines) is essentially a PowerPC architecture with extra optimizations and streamlining to make it run very specific workloads and algorithms faster. In a supercomputing environment you really need to tune your software to get the most out of the huge up-front investment in the ‘iron’ you bought from the manufacturer. There’s not a lot of off-the-shelf value-add in that scientific and supercomputing world; you more or less roll your own solution, or beg, borrow or steal it from a colleague at another institution using the same architecture as you. So the Quanta S2Q server using the Tilera chip is similarly likely to be a one-off or niche product, but a very valuable one to those who purchase it. Tilera will need a software partner to really pump up the volumes of shipping product if it expects a wider market for its chips.

    But using a Tilera processor in a network switch, a ‘security’ device or some other inspection engine might prove very lucrative. I’m thinking of your typical warrantless wiretapping application, like the NSA’s attempt to scoop up and analyze all the Internet traffic at large carriers around the U.S. Analyzing data traffic in real time spares folks like the NSA from capturing and moving around large volumes of useless data just to have it analyzed at a central location. Instead, localized computing nodes can do the initial inspection in real time, keying on phrases, words, numbers and so on, which then triggers the capture process and sends the tagged data back to the NSA for further analysis. Doing that in parallel on a 100-core CPU would be very advantageous, in that a much smaller footprint would be required in the secret closets the NSA maintains at those big carriers’ operations centers. Smaller racks and less power make for a much less obvious presence in the data center.
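
    The appeal of a many-core part for that kind of inline inspection is that each core can scan its own slice of the traffic independently. Here is a toy Python sketch of that idea using a multiprocessing pool; the payload strings and keyword list are made-up illustrative values, since I have no idea what the real systems key on.

    ```python
    from multiprocessing import Pool

    # Toy model of parallel keyword inspection: each worker scans its own slice of
    # the traffic, which is what makes many small cores attractive for the job.
    # The payloads and keyword list below are made-up illustrative values.

    KEYWORDS = ("alpha", "bravo", "charlie")

    def inspect_packet(payload: str) -> bool:
        """Return True if the payload contains any watched keyword."""
        lowered = payload.lower()
        return any(keyword in lowered for keyword in KEYWORDS)

    if __name__ == "__main__":
        packets = [f"payload {i} " + ("bravo" if i % 7 == 0 else "noise")
                   for i in range(1000)]
        with Pool(processes=8) as pool:   # in principle, one worker per core
            flags = pool.map(inspect_packet, packets)
        print(f"flagged {sum(flags)} of {len(packets)} packets")
    ```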

  • Intel’s Tri-Gate gamble: It’s now or never • The Register


    Analysis  There are two reasons why Intel is switching to a new process architecture: it can, and it must.

    via Intel’s Tri-Gate gamble: It’s now or never • The Register.

    Usually every time there’s a die shrink of a computer processor there’s an attendant evolution of the technology used to produce it. I think back to the recent introduction of ultra-pure-water immersion lithography. The goal of immersion lithography was to increase the ability to resolve the fine wire traces of the photomasks as they were exposed onto the photosensitive emulsion coating a silicon wafer. The problem is that the light travels from the photomask to the surface of the wafer through ‘air’. There’s a small gap, and air is full of optically scrambling atoms and molecules that make the projected photomask slightly blurry. If you put a layer of water between the mask and the wafer, you have in a sense a ‘lens’ made of optically superior water molecules that behave more predictably than ‘air’. You get sharper features, and with them better chip yields, more profit, higher margins and so on.

    As the wire traces on microchips continue to get thinner and transistors smaller, the physics involved becomes harder to control. Electrodynamics begins to follow the laws of quantum electrodynamics rather than Maxwell’s equations. This makes it harder to tell when a transistor has switched on or off, and the basic digits of the digital computer (1s and 0s) become harder and harder to measure and register properly. IBM and Intel waged a war on shrinking their dies all through the ’80s and ’90s. IBM chose to adopt new, sometimes exotic materials (copper traces instead of aluminum, silicon-on-insulator, high-k dielectric gates). Intel chose to go the direction of improving what it had, using higher-energy light sources and adopting very new processes only when absolutely, positively necessary. At the same time, Intel was cranking out such volumes of current-generation product that it almost seemed as though it didn’t need to innovate at all. But IBM kept Intel honest, as did Taiwan Semiconductor Manufacturing Co. (the big contract manufacturer of microprocessors), and Intel continued to maintain its volume and technological advantage.

    ARM (formerly the Acorn RISC Machine) got its start during the golden age of RISC computing (the early and mid-1980s). Over time the company got out of making its own silicon and started selling its processor designs to anyone who wanted to embed a core microprocessor into a bigger chip design. Eventually ARM became the de facto standard microchip for smart handheld devices and telephones, and Intel had to react. Intel had come up with a market-leading low-voltage, cheap CPU in the Atom processor, but it did not have the specialized knowledge and capability ARM had with embedded CPUs. Licensees of ARM designs began cranking out newer generations of higher-performance, lower-power CPUs than Intel’s research labs could create, and the stage was set for a battle royale of low power versus high performance.

    Which brings us now to an attempt to keep scaling down processor power requirements through the same brute force that worked in the past. Moore’s Law, the epigram attributed to Intel’s Gordon Moore, held that the ‘industry’ would keep scaling down the size of the ‘wires’ in silicon chips, increasing speed and lowering costs: speeds would double, prices would halve, and this would continue ad infinitum into some distant future. The problem has always been that the future is now. Intel hit a brick wall back around the end of the Pentium 4 era, when it couldn’t get speeds to double anymore without also doubling the amount of waste heat coming off the chip. That heat was harder and harder to remove efficiently, and soon it appeared the chips would create so much heat they might melt. Intel worked around this by putting multiple CPU cores on the same silicon it used for previous-generation chips and got some amount of performance scaling to work. Along those lines it has run research projects to create first an 80-core processor, then a 48-core and now a 24-core processor (which might actually turn into a shippable product).

    But what about Moore’s Law? The scaling has continued downward, and power requirements have improved, but it is getting harder and harder to shave down those little wire traces and get the bang that drives profits for Intel. Now Intel is going the full-on research and development route by adopting a new way of making transistors on silicon: the fin field-effect transistor, or FinFET, which Intel brands Tri-Gate. It makes use of not just the top surface of the gate but the top plus the left and right sides, effectively giving you roughly three times the surface through which to control the electrons moving around the processor. If Intel can get this to work on a modern-day silicon chip production line, it will be able to keep differentiating its product, keeping its costs manageable and selling more chips. But it’s a big risk, and a bet I’m sure everyone hopes will pay off.
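
    As a quick worked example of what that doubling cadence implies (using round, assumed numbers rather than Intel’s actual roadmap), here is a tiny Python sketch projecting a transistor budget that doubles every two years:

    ```python
    # Worked example of the Moore's Law doubling cadence: the transistor budget
    # doubles roughly every two years. The starting count is an assumed round number.

    transistors = 1_000_000_000     # assume about 1 billion transistors in year 0
    doubling_period_years = 2

    for year in range(0, 11, 2):
        count = transistors * 2 ** (year // doubling_period_years)
        print(f"year {year:2d}: about {count / 1e9:.0f} billion transistors")
    ```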