Category: blogroll

This is what I subscribe to myself.

  • Macintouch Reader Reports: User Interface Issues iOS/Lion

    Anyways, I predict a semi-chaos, where – for example – a 3 fingers swipe from left to right means something completely different in Apple than in any other platform. We are already seeing signs of this in Android, and in the new Windows 8. Also, users will soon need “cheat sheets” to remember the endless possible combinations. Would be interesting to hear other people’s thoughts.

    via User Interface Issues.

    After the big WWDC keynote presentation by Steve Jobs et al., the question I have too is: what’s up with all the finger combos for swiping? In the bad old days people needed wire-bound notebooks to tell them all about the commands to run their IBM PC. And who can forget the users of WordPerfect who had keyboard template overlays to remind themselves of the ‘menu’ of possible key combos (Ctrl/Alt/Shift). Now we are faced with endless and seemingly arbitrary combinations of finger swipes/pinches/flicks, etc.

    Like other readers who responded to this question on the Macintouch message boards, what about the bad old days of the Apple one-button mouse? Remember when Apple finally capitulated and provided two mouse buttons? (No?) Well, they did it through software. Just before the Magic Mouse hit town, Apple provided a second mouse button (at long last), bringing the Mac in line for the first time with the Windows PC convention of left and right mouse buttons. How recently did this happen? Maybe just two years ago, when Apple introduced the wired and wireless versions of the Mighty Mouse? And even then it was virtual, not a literal two-button experience. Now we have the Magic Mouse with no buttons and no clicking. It’s one rounded-over trackpad that accepts the Lionized gestures. To quote John Wayne, “It’s gettin’ to be Ri-goddamn-diculous”.

    So whither the haptic touch interface conventions of the future? Who is going to win the gesture arms race? Who is going to figure out less is more when it comes to gestures? It ain’t Apple.

  • JSON Activity Streams Spec Hits Version 1.0

    The Facebook Wall is probably the most famous example of an activity stream, but just about any application could generate a stream of information in this format. Using a common format for activity streams could enable applications to communicate with one another, and presents new opportunities for information aggregation.

    via JSON Activity Streams Spec Hits Version 1.0.

    Remember mash-ups? I recall the great wide wonder of putting together web pages that used ‘services’ provided for free through APIs published out to anyone who wanted to use them. There were many at one time; some still exist and others have been culled. But as newer social networks begat yet newer ones (MySpace, Facebook, Foursquare, Twitter), none of the ‘outputs’ or feeds of any single one was anything more than a way of funneling you into its own login accounts and user screens. So the gated community first requires you to be a member in order to play.

    We went from ‘open’ to cul-de-sac and stovepipe in less than one full revision of social networking. However, maybe all is not lost; maybe an open standard can help folks re-use their own data at least (maybe I could mash up my own activity stream). Betting on whether or not this will take hold and see wider adoption by social networking websites would be risky. Likely each service provider will closely hold most of the data it collects and only publish the bare minimum necessary to claim compliance.

    However, another burden upon this sharing is the slowly creeping concern about the security of one’s own activity stream. It will no doubt have to be opt-in, and definitely not opt-out, as I’m sure people are more used to having fellow members of their tribe know what they are doing than putting out a feed to the whole Internet of what they are doing. Which makes me think of the old discussion of being able to fine-tune who has access to what (Doc Searls’s old Vendor Relationship Management idea). Activity streams could easily fold into that universe, where you regulate which threads of the stream are shared with which people. I would only really agree to use this service if it had that fine-grained level of control.
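
    Since the whole point of the spec is that an activity stream is plain JSON, here is a minimal sketch (written as Python so it runs as-is) of what a single activity and a one-item stream might look like. The field names follow my reading of the 1.0 spec’s actor/verb/object model; the identifiers, URLs and display names are made up purely for illustration.

    ```python
    import json

    # One activity, roughly in the shape the JSON Activity Streams 1.0 spec
    # describes: an actor performs a verb on an object at a point in time.
    # The exact field set here is my reading of the spec, not a normative example.
    activity = {
        "published": "2011-06-12T09:30:00Z",
        "actor": {
            "objectType": "person",
            "id": "acct:someuser@example.org",  # hypothetical identifier
            "displayName": "Example User",
        },
        "verb": "post",
        "object": {
            "objectType": "article",
            "id": "tag:example.org,2011:activity-streams-1-0",
            "displayName": "JSON Activity Streams Spec Hits Version 1.0",
            "url": "http://example.org/posts/activity-streams-1-0",
        },
    }

    # A stream is just a collection of these activities.
    stream = {"totalItems": 1, "items": [activity]}
    print(json.dumps(stream, indent=2))
    ```

    The fine-grained sharing I’m wishing for above would then amount to filtering which items in that list get published to which audience.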

  • AnandTech – Computex 2011: OCZ’s RevoDrive 3

    There’s a new PCIe SSD in town: the RevoDrive 3. Armed with two SF-2281 controllers and anywhere from 128 – 256GB of NAND (120/240GB capacities), the RevoDrive 3 is similar to its predecessors in that the two controllers are RAIDed on card. Here’s where things start to change though.

    via AnandTech – Computex 2011: OCZ’s RevoDrive 3 & RevoDrive 3 X2, Now With TRIM.

    OCZ is back with a revision of its consumer-grade PCIe SSD, the RevoDrive. This time out the SandForce SF-2281 makes an appearance, and to great I/O effect. The bus interface is a true PCIe bridge chip as opposed to the last version’s PCI-X to PCIe bridge. Also, this device can be controlled completely through the OS’s own drive utilities, with TRIM support. All combined, this is the most natively and well supported PCIe SSD to hit the market. No benchmarks yet from a commercially shipping product, but my fingers are crossed that this thing is going to be faster than OCZ’s Vertex 3 and Vertex 3 Pro (I hope) while possibly holding more flash memory chips than those SATA 6Gb/s based SSDs.

    One other upshot of this revised product is full OS booting support. So not only will TRIM work, but your motherboard and the PCIe card’s electronics will allow you to boot directly off of the card. This is by far the most evolved and versatile PCIe-based SSD to date. Pricing is the next big question on my mind after reading the specifications. Hopefully it will not be Enterprise grade (greater than $1,200). I’ve found most of the prosumer and gamer upgrade-market manufacturers are comfortable setting prices at the $1,200 price point for these PCIe SSDs, and that trend has been pretty reliable going back to the original RevoDrive.

  • 2WAY Q&A: Layar’s Maarten Lens-FitzGerald on Building a Digital Layer on Top of the World

    Lens-FitzGerald: I never thought of going into augmented reality, but cyberspace, any form of digital worlds, have always been one of the things I’ve been thinking about since I found out about science fiction. One of the first books I read of the cyber punk genre was Bruce Sterling‘s “Mirror Shades.” Mirror shades, meaning, of course, AR goggles. And that book came out in 1988 and ever since, this was my world.

    via 2WAY Q&A: Layar’s Maarten Lens-FitzGerald on Building a Digital Layer on Top of the World.

    An interview with the man who created Layar, the most significant Augmented Reality (AR) application on handheld devices. In the time since the first releases on smartphones like the Android handsets in Europe, Layar has branched out to cover more of the OSes available on handheld devices. Interest in AR has, I think, cooled somewhat as social networking and location have seemed to rule the day. And I would argue even location isn’t as fiery hot as it was at the beginning. But Facebook is still here with a vengeance. So whither the market for AR? What’s next, you wonder? Well, it seems Qualcomm today announced its very own AR toolkit to help jump-start the developer market toward more useful, nay killer, AR apps. Stay tuned.

  • AnandTech – OCZ Agility 3 240GB Review

    There’s another issue holding users back from the Vertex 3: capacity. The Vertex 3 is available in 120, 240 and 480GB versions, there is no 60GB model. If you’re on a budget or like to plan frequent but rational upgrades, the Vertex 3 can be a tough sell.

    via AnandTech – OCZ Agility 3 240GB Review.

    OCZ, apart from having the fastest SSD on the market, is now attempting to branch out and move down-market simultaneously. And by down-market I don’t mean anything other than the almighty PRICE. It’s all about the upgrade market for the PC fanboys who want to trade up to the next higher-performing part for their gaming computer (if people still do that, play games on their PeeCees). It is designed to be less expensive, and performance-wise this SSD shows it: it is not the highest-speed part. So if you demand to own an OCZ-branded SSD and won’t settle for anything less, but you don’t want to pay $499 to get it, the Agility 3 is just for you. Also, if you read the full review, the charts will show how all the current generation SATA 6Gb/s drives are shaping up (Intel included) versus the previous generation SATA 2.0 drives (3Gb/s). The OCZ Vertex 3 is still the king of the mountain at the 240GB size, but it is still very much at a price premium.

  • From Big Data to NoSQL: The ReadWriteWeb Guide to Data Terminology (Part 1)

    Big Data

    In short, big data simply means data sets that are large enough to be difficult to work with. Exactly how big is big is a matter of debate. Data sets that are multiple petabytes in size are generally considered big data (a petabyte is 1,024 terabytes). But the debate over the term doesn’t stop there.

    via From Big Data to NoSQL: The ReadWriteWeb Guide to Data Terminology (Part 1).

    There’s big doin’s inside and outside the data center these days. You cannot spend a day without seeing a cool new article about some project that has just been open sourced from one of the departments inside the social networking giants, Hadoop being the biggest example. What, you ask, is Hadoop? It is a project Yahoo! threw its weight behind after Google started spilling the beans on its two huge technological leaps in massively parallel databases and in processing real-time data streams. The first one was called BigTable. It is a huge distributed database that could be brought up on an inordinately large number of commodity servers and then ingest all the indexing data sent by Google’s web bots as they found new websites. That’s the database and ingestion point. The second was the way in which the rankings and ‘pertinence’ of the indexed websites would be calculated through PageRank. The invention for processing this collected data in something close to real time is called MapReduce. It was a way of pulling in, processing and quickly sorting out the important, highly ranked websites. Yahoo! read the white papers put out by Google and subsequently created a version of those technologies, which today power the Yahoo! search engine. Having put this into production and realized the benefits, Yahoo! backed it as an open source project to lower the threshold for people wanting to get into the Big Data industry. Similarly, they wanted to get many programmers’ eyes looking at the source code, adding features, packaging it and, all-importantly, debugging what was already there. Hadoop is the name given to this bag of software, and it is what a lot of people initially adopt if they are trying to do large-scale collection and real-time analysis of Big Data.
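
    To make the MapReduce idea a little more concrete, here is a toy, single-process sketch of the programming model in Python: a map step emits key/value pairs, a shuffle groups them by key, and a reduce step folds each group down to a result. Real Hadoop spreads those phases across a whole cluster; the word count and the two tiny “documents” below are only there to show the shape of the thing.

    ```python
    from collections import defaultdict

    # Toy single-process MapReduce. Real Hadoop distributes the map, shuffle
    # and reduce phases across a cluster; this only illustrates the model.

    def map_phase(doc_id, text):
        """Emit a (word, 1) pair for every word in one document."""
        for word in text.lower().split():
            yield (word, 1)

    def reduce_phase(word, counts):
        """Fold all the counts for one word into a single total."""
        return (word, sum(counts))

    def mapreduce(documents):
        groups = defaultdict(list)  # the "shuffle": group emitted values by key
        for doc_id, text in documents.items():
            for key, value in map_phase(doc_id, text):
                groups[key].append(value)
        # Each group reduces independently, which is where the easy parallelism comes from.
        return dict(reduce_phase(k, v) for k, v in groups.items())

    if __name__ == "__main__":
        docs = {  # hypothetical crawl fragments
            "page1": "big data big tables",
            "page2": "map reduce big data",
        }
        print(mapreduce(docs))  # {'big': 3, 'data': 2, 'tables': 1, 'map': 1, 'reduce': 1}
    ```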

    Another discovery along the way toward the Big Data movement was a parallel attempt to overcome the limitations of extending the schema of a typical database holding all the incoming indexed websites. Tables and rows and Structured Query Language (SQL) have ruled the day since about 1977 or so, and for many kinds of tabular data there is no substitute. However, the kinds of data being stored now fall into the big amorphous mass of binary large objects (BLOBs) that can slow down a traditional database. So a non-SQL approach was adopted, and there are parts of the BigTable database and Hadoop that dump the unique key values and relational tables of SQL just to get the data in and characterize it as quickly as possible, or better yet to re-characterize it by adding elements to the schema after the fact. Whatever you are doing, what you collect might not be structured or easily structured, so you are going to need to play fast and loose with it, and you need a database of some sort equal to that task. Enter the NoSQL movement, to collect and analyze Big Data in its least structured form. So my recommendation to anyone trying to get the square peg of relational databases to fit the round hole of their unstructured data is to give up. Go NoSQL and get to work.
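
    As a minimal illustration of that ‘characterize it later’ idea, here is a sketch using nothing but Python dictionaries as stand-ins for the documents a NoSQL store would hold: records go in with whatever fields they happen to have, and a new attribute gets layered on after the fact with no ALTER TABLE ceremony. A real document store would add persistence, indexing and replication on top of the same basic shape.

    ```python
    # Schema-on-read, sketched with plain dicts standing in for the documents
    # a NoSQL store would hold. Nothing here is tied to a particular product.

    crawl_store = []  # the "collection"

    # Ingest whatever arrives, structured or not -- no fixed columns required.
    crawl_store.append({"url": "http://example.org/a", "title": "Page A"})
    crawl_store.append({"url": "http://example.org/b", "raw_html": "<html>...</html>"})

    # Re-characterize after the fact: add a field the original ingest never had.
    for doc in crawl_store:
        doc["has_title"] = "title" in doc

    # "Query" by reading whichever fields each document happens to carry.
    print([d["url"] for d in crawl_store if d["has_title"]])  # ['http://example.org/a']
    ```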

    This first article from ReadWriteWeb is good in that it lays the foundation for what the relational database universe looks like and how you can manipulate it. Having established what IS, future articles will look at the quick-and-dirty workarounds and one-off projects people have come up with to fit their needs, and subsequently which ‘Works for Me’ type solutions have been turned into bigger open source projects that ‘Work for Others’, as that is where each of these technologies will really differentiate itself. Ease of use and lowering the threshold will be deciding factors in many people’s adoption of a NoSQL database, I’m sure.

  • Tilera preps 100-core chips for network gear • The Register

    Upstart multicore chip maker Tilera is using the Interop networking trade show as the coming out party for its long-awaited Tile-Gx series of processors, which top out at 100 cores on a single die.

    via Tilera preps 100-core chips for network gear • The Register.

    A further update on Tilera’s product launches as the old Interop tradeshow for network switch and infrastructure vendors is held in Las Vegas. They have tweaked the chip packaging of their CPUs and are now going to market different CPUs to different industries. This family of Tilera chips is called the 8000 series and will be followed by a next generation of 3000 and 5000 series chips. Projections are that by the time the Tilera 3000 series is released, the density of the chips will be sufficient to pack upwards of 20,000 Tilera CPU cores into a single 42U-tall, 19-inch-wide server rack, with a future revision possibly doubling that number of cores to 40,000. That road map is very aggressive but promising, and it shows that there is lots of scaling possible with the Tilera product over time. Hopefully these plans will lead to some big customers signing up to use Tilera in shipping products in the immediate and near future.

    What I’m most interested in knowing is how the currently shipping Quanta server that uses the Tilera CPU benchmarks compared to an Intel Atom-based or ARM-based server on a generic webserver benchmark. While white papers and press releases have made regular appearances on the technology weblogs, very few have attempted to get sample product and run it through its paces. I suspect, but cannot confirm, that potential customers are given non-disclosure agreements and shipping samples to test in their data centers before making any big purchases. I also suspect that, as is often the case, the applications for these low-power, massively parallel, dense servers are very narrow. Not unlike those for a supercomputer. IBM’s Cell processor, like the embedded PowerPC cores that power the Blue Gene supercomputers, is essentially a PowerPC architecture with some extra optimizations and streamlining to make it run very specific workloads and algorithms faster. In a supercomputing environment you really need to tune your software to get the most out of the huge up-front investment in the ‘iron’ that you got from the manufacturer. There’s not a lot of value-add available in that scientific and supercomputing environment. You more or less roll your own solution, or beg, borrow or steal it from a colleague at another institution using the same architecture as you. So the Quanta S2Q server using the Tilera chip is similarly likely to be a one-off or niche product, but a very valuable one to those who purchase it. Tilera will need a software partner to really pump up the volumes of shipping product if they expect a wider market for their chips.

    But using a Tilera processor in a network switch or a ‘security’ device or some other inspection engine might prove very lucrative. I’m thinking of your typical warrantless wire-tapping application, like the NSA’s attempt to scoop up and analyze all the Internet traffic at large carriers around the U.S. Analyzing data traffic in real time saves folks like the NSA from capturing and moving around large volumes of useless data in order to have it analyzed at a central location. Instead, localized computing nodes can do the initial inspection in real time, keying on phrases, words, numbers, etc., which then triggers the capture process and sends the tagged data back to the NSA for further analysis. Doing that in parallel with a 100-core CPU would be very advantageous in that a much smaller footprint would be required in the secret closets the NSA maintains at those big carriers’ operations centers. Smaller racks and less power make for a much less obvious presence in the data center.

  • Intel’s Tri-Gate gamble: It’s now or never • The Register

    Analysis  There are two reasons why Intel is switching to a new process architecture: it can, and it must.

    via Intel’s Tri-Gate gamble: It’s now or never • The Register.

    Every time there’s a die shrink of a computer processor there’s an attendant evolution of the technology to produce it. I think back to the recent introduction of immersion lithography using super-filtered water. The goal of immersion lithography was to increase the ability to resolve the fine-line wire traces of the photomasks as they were exposed onto the photosensitive emulsion coating a silicon wafer. The problem is that the light travels from the photomask to the surface of the wafer through ‘air’. There’s a small gap, and air is full of optically scattering atoms and molecules that make the projected image slightly blurry. If you put a layer of water between the mask and the wafer, you have in a sense a ‘lens’ made of optically superior water molecules that act more predictably than ‘air’. In turn you get better chip yields, more profit, higher margins, etc.

    As the wire traces on microchips continue to get thinner and transistors smaller, the physics involved are harder to control. Electrodynamics begin to follow the laws of quantum electrodynamics rather than Maxwell’s equations. This makes it harder to tell when a transistor has switched on or off, and the basic digits of the digital computer (1s and 0s) become harder and harder to measure and register properly. IBM and Intel waged a war over shrinking their dies all through the ’80s and ’90s. IBM chose to adopt new, sometimes exotic materials (copper metal for traces instead of aluminum, silicon-on-insulator, high-k dielectric gates). Intel chose to go the direction of improving what it had, using higher-energy light sources and only adopting very new processes when absolutely, positively necessary. At the same time, Intel was cranking out such volumes of current generation product it almost seemed as though it didn’t need to innovate at all. But IBM kept Intel honest, as did Taiwan Semiconductor Manufacturing Co. (a contract manufacturer of microprocessors), and Intel continued to maintain its volume and technological advantage.

    ARM (formerly the Acorn RISC Machine) got its start during the golden age of RISC computers (the early and mid-1980s). Over time the company got out of making chips itself and started selling its processor designs to anyone who wanted to embed a core microprocessor into a bigger chip design. Eventually ARM became the de facto standard microchip for smart handheld devices and telephones before Intel had to react. Intel had come up with a market-leading, low-voltage, cheap CPU in the Atom processor. But it did not have the specialized knowledge and capability ARM had with embedded CPUs. Licensees of ARM designs began cranking out newer generations of higher-performance, lower-power CPUs than Intel’s research labs could create, and the stage was set for a battle royale of low power/high performance.

    Which brings us now to an attempt to continue to scale down processor power requirements through the same brute force that worked in the past. Moore’s Law, an epigram attributed to Intel’s Gordon Moore, indicated the rate at which the ‘industry’ would continue to scale down the size of the ‘wires’ in silicon chips, increasing speed and lowering costs. Speeds would double, prices would halve, and this would continue ad infinitum into some distant future. The problem has always been that the future is now. Intel hit a brick wall back around the end of the Pentium 4 era when it couldn’t get speeds to double anymore without also doubling the amount of waste heat coming off of the chip. That heat was harder and harder to remove efficiently, and soon it appeared the chips would create so much heat they might melt. Intel worked around this by putting multiple CPU cores on the same die using the same silicon processes as previous generation chips, and got some amount of performance scaling to work. Along those lines it has research projects to create first an 80-core processor, then a 48-core, and now a 24-core processor (which might actually turn into a shippable product). But what about Moore’s Law? Well, the scaling has continued downward, and power requirements have improved, but it’s getting harder and harder to shave down those little wire traces and get the bang that drives profits for Intel. Now Intel is going the full-on research and development route by adopting a new way of making transistors on silicon. It’s called a Fin Field-Effect Transistor, or FinFET. It makes use of not just the top surface of the channel but the top and the left and right sides, effectively giving the gate roughly 3x the surface over which to control the flow of electrons. If Intel can get this to work on a modern-day silicon chip production line, it will be able to continue differentiating its product, keeping its costs manageable and selling more chips. But it’s a big risk, and a bet I’m sure everyone hopes will pay off.
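
    As a back-of-the-envelope illustration of why those shrinks matter: each full process node has historically been roughly a 0.7x linear shrink, which squares to about half the area per transistor, i.e. roughly double the transistors in the same silicon. The little sketch below just runs that arithmetic; the 0.7 factor and the 45nm starting point are rounded assumptions for illustration, not Intel’s actual roadmap numbers.

    ```python
    # Back-of-the-envelope Moore's Law arithmetic: a ~0.7x linear shrink per
    # process node halves transistor area, roughly doubling density.
    # The starting node and shrink factor are rounded, assumed values.

    node_nm = 45.0   # assumed starting feature size in nanometres
    shrink = 0.7     # approximate linear shrink per full node
    density = 1.0    # transistor density relative to the starting node

    for generation in range(1, 5):
        node_nm *= shrink
        density /= shrink ** 2  # area scales with the square of the shrink
        print(f"gen {generation}: ~{node_nm:.0f} nm, ~{density:.1f}x the starting density")
    ```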

  • SPDY: An experimental protocol for a faster web – The Chromium Projects

    As part of the “Let’s make the web faster” initiative, we are experimenting with alternative protocols to help reduce the latency of web pages. One of these experiments is SPDY (pronounced “SPeeDY”), an application-layer protocol for transporting content over the web, designed specifically for minimal latency.  In addition to a specification of the protocol, we have developed a SPDY-enabled Google Chrome browser and open-source web server. In lab tests, we have compared the performance of these applications over HTTP and SPDY, and have observed up to 64% reductions in page load times in SPDY. We hope to engage the open source community to contribute ideas, feedback, code, and test results, to make SPDY the next-generation application protocol for a faster web.

    via SPDY: An experimental protocol for a faster web – The Chromium Projects.

    Google wants the World Wide Web to go faster. I think we all would like to have that as well. But what kind of heavy lifting is it going to take? The transition from ARPANET to the TCP/IP protocol took a very long time and required some heavy-handed shoving to accomplish the cutover in 1983. We can all thank Vint Cerf for making that happen so that we could continue to grow and evolve as an online species (tip of the hat). But now what? There’s been a move to evolve from IP version 4 to version 6 to accommodate the increase in the number of networked devices. Speed really wasn’t a consideration in that revision. I don’t know how this project integrates with IPv6, but I hope maybe it can be pursued on a parallel course with the big migration to IPv6.

    The worst thing that could happen would be to create another Facebook/Twitter/Apple Store/Google/AOL cul-de-sac that only benefits the account holders loyal to Google. Yes, it would be nice if Google Docs and all the other attendant services provided via/through Google got on board the SPDY accelerator train. I would stand to benefit, but things like this should be pushed further up into the wider Internet so that everyone, everywhere has the same benefits. Otherwise this is an attempt to steal away user accounts and create churn in the competitors’ account databases.
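
    For a rough sense of where SPDY’s latency savings come from, one of its headline tricks is multiplexing many requests over a single connection instead of fetching resources one after another the way HTTP/1.x typically does. The toy simulation below, with made-up per-resource latencies, only illustrates that serial-versus-multiplexed difference; it implements none of SPDY’s actual framing, header compression or prioritization.

    ```python
    import asyncio

    # Made-up round-trip latencies (seconds) for the resources on one page.
    PAGE_RESOURCES = {"index.html": 0.12, "style.css": 0.08, "app.js": 0.10, "logo.png": 0.09}

    async def fetch(name, latency):
        await asyncio.sleep(latency)  # stand-in for a real network round trip
        return name

    async def serial_page_load():
        # HTTP/1.x-style: one request at a time over one connection.
        for name, latency in PAGE_RESOURCES.items():
            await fetch(name, latency)

    async def multiplexed_page_load():
        # SPDY-style idea: issue all the requests concurrently over one connection.
        await asyncio.gather(*(fetch(n, l) for n, l in PAGE_RESOURCES.items()))

    async def main():
        loop = asyncio.get_running_loop()
        for label, job in (("serial", serial_page_load), ("multiplexed", multiplexed_page_load)):
            start = loop.time()
            await job()
            print(f"{label:12s} page load ~{loop.time() - start:.2f}s")

    asyncio.run(main())
    # Serial comes out around the sum of the latencies (~0.39s);
    # multiplexed comes out around the slowest single resource (~0.12s).
    ```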

  • Cloud on a chip: Sometimes the best hypervisor is none at all • The Register

    On the cloud front, one of the more interesting projects that Held is working on is called the Single-chip Cloud Computer, or SCC for short.

    via Cloud on a chip: Sometimes the best hypervisor is none at all • The Register.

    Single-chip Cloud Computer sounds a lot like those 80-core and 48-core CPU experiments Intel had been working on a while back. There is a note that the core is a Pentium P54C, and that rings a bell too, as it was the same core used for those multi-core CPUs. Now the research appears to be centered on the communications links between those cores and getting an optimal amount of work out of a given amount of interconnectivity. Twenty-four cores is a big step down from 80 and 48 cores. I’m thinking Intel’s manufacturing process engineers are attempting to rein in the scope of this research to make it more worthy of manufacture. Whatever happens, you will likely see adaptations or bits and pieces of these technologies in a future shipping product. I’m a little disappointed, though, that the scope has grown smaller. I had real high hopes Intel could pull off a big technological breakthrough with an 80-core CPU, but change comes slowly, and chip fab lines are incredibly expensive to build, pilot and line out as they make new products. Conservatism is to be expected in an industry that has the highest level of up-front capital expenditure required before there’s a return on the investment. If nothing else, companies like SeaMicro, Tilera and ARM will continue to goose Intel into research efforts like this and into innovating its old serial processors a little bit more.

    On the other side of the argument there is the massive virtualization of OSes on more typical serial-style multi-core CPUs from Intel. VMware and its competitors still continue to slice up the clock cycles of an Intel processor to make it appear to be more than one physical machine. Datacenters have found the performance compromises of this scheme to be well worth the effort in staff and software licenses, given the amount of space saved through consolidation. The less rack space and power required, the higher the marginal return for that one computer host sitting on the network. But what this article from The Register is trying to say is that if a sufficiently dense multi-core CPU is used and the power requirements are scaled down enough, you get the same kind of consolidation of rack space, but without the layer of software on top of it all to provide the virtualized computers themselves. A one-to-one relationship between CPU core and actual virtual machine can be had without the typical machinations and complications required by a hypervisor-style OS riding herd over the virtualized computers. In that case, less hypervisor is more. More robust, that is, in terms of total compute cycles devoted to hosts, and a more robust design architecture that minimizes single points of failure and choke points. So I say there’s plenty of room yet to innovate in the virtualization industry, given that CPUs and their architectures are at an early stage of the move to massively multi-core designs.