This tells me my job with foursquare is to be “driven” like a calf into a local business. Of course, this has been the assumption from the start. But I had hoped that somewhere along the way foursquare could also evolve into a true QS (quantified self) app, yielding lat-lon coordinates and other helpful information for those of us (like me) who care about that kind of thing. (And, to be fair, maybe that kind of thing actually is available through the foursquare API. I saw a Singly app once that suggested as much.) Hey, I would pay for an app that kept track of where I’ve been and what I’ve done, and made that data available to me in ways I can use.
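For what it's worth, the foursquare v2 API does appear to expose a check-in history endpoint. Here is a minimal sketch of pulling your own lat-lon trail from it; the endpoint name, parameters, and response shape are my assumptions from the public API docs, the OAuth token is a placeholder you'd obtain by authorizing your own app, and the requests library is a third-party dependency:

```python
import requests  # third-party HTTP library

# Placeholder token; you'd get a real one by OAuth-authorizing your own app.
OAUTH_TOKEN = "YOUR_FOURSQUARE_OAUTH_TOKEN"

# Assumed v2 endpoint for the authenticated user's check-in history.
resp = requests.get(
    "https://api.foursquare.com/v2/users/self/checkins",
    params={"oauth_token": OAUTH_TOKEN, "v": "20120601", "limit": 250},
)
resp.raise_for_status()

# Pull out the timestamp, venue name, and lat-lon of each check-in.
for checkin in resp.json()["response"]["checkins"]["items"]:
    venue = checkin.get("venue", {})
    loc = venue.get("location", {})
    print(checkin.get("createdAt"), venue.get("name"), loc.get("lat"), loc.get("lng"))
```

Nothing fancy, but it is exactly the "where I've been" data I would happily pay to keep.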
What Doc Searls is describing, I think, is foursquare as a kind of Lifebits: a form of self-tracking a la Stephen Wolfram or Gordon Bell. Instead, foursquare is the carrot being dangled to lure you into giving your business to a particular retailer. After that you accumulate points for repeat visits and possibly unlock rewards for your loyalty. But foursquare no doubt accumulates a lot of other data along the way that could be used for the very purpose Doc Searls was hoping for.
Gordon Bell’s work at Microsoft Research bootstrapping the MyLifeBits project is a form of memory enhancement, but also a log of personal data that can be analyzed later. The collection, or ‘instrumentation’, of one’s environment is what Stephen Wolfram has accomplished by counting things over time. That’s not to say it’s simpler than MyLifeBits, but it is in some ways lighter-weight data (instead of videos and pictures, it’s mouse clicks, tallies of email activity, times of day, etc.). There is no doubt that foursquare could build a for-profit service for paying users, collecting this location data and serving it back to subscribers so they can analyze it after the fact.
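To make the "lighter-weight data" idea concrete, here is a tiny sketch of Wolfram-style counting: tallying your own email by hour of day from a local mbox archive. The file path is a placeholder of my own; mailbox and email.utils are in Python's standard library:

```python
import mailbox
from collections import Counter
from email.utils import parsedate_to_datetime

# Placeholder path to a local mbox archive of mail.
MBOX_PATH = "sent-mail.mbox"

hour_counts = Counter()
for message in mailbox.mbox(MBOX_PATH):
    date_header = message.get("Date")
    if not date_header:
        continue
    try:
        hour_counts[parsedate_to_datetime(date_header).hour] += 1
    except (TypeError, ValueError):
        continue  # skip malformed Date headers

# A crude text histogram of email activity by hour of day.
for hour in range(24):
    print(f"{hour:02d}:00  {'#' * hour_counts[hour]}")
```

Crude, but that little text histogram is exactly the kind of tally Wolfram has been keeping for decades.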
I firmly believe a form of MyLifeBits could be aggregated across a wide range of free and paid services, along with personal instrumentation and data collecting like the kind Stephen Wolfram does. If there’s one thing I’ve learned reading stories about inventions like these from MIT’s Media Lab, it’s that it’s never an either-or proposition. You don’t have to adopt just Gordon Bell’s technology or Stephen Wolfram’s techniques or even foursquare’s own data. You can do all of them, or pick and choose the ones that suit your personal data collection needs. Then you get to slice, dice, and analyze to your heart’s content. What you do with it after that is completely up to you, and it should be considered as personal as any legal documents or health records you already have.
Which takes me back to an article I wrote some time ago about Jon Udell’s call for a federated LifeBits-type service. It wouldn’t be constrained to one kind of data; potentially all your LifeBits could be aggregated, with new repositories for the stuff that must be locked down and kept private. So add Doc Searls to the list of bloggers and long-time technology writers who see an opportunity. Advocacy (in the case of Doc’s experience with foursquare) for sharing unfiltered data with the users it was collected from is one step in that direction. Jon Udell is also an advocate for users gaining access to all that collected and aggregated data. But as Jon Udell asks, who is going to be the first to offer this as a pay-for service in the cloud, where for a fee you can access your lifebits aggregated into one spot (foursquare, Twitter, Facebook, Gmail, Flickr, Photo Stream, Mint, eRecords, etc.) so that you don’t spend your life logging on and off from service to service to service? Aggregation could be a beautiful thing.
AMD, and NVIDIA before it, has been trying to convince us of the usefulness of its GPUs for general purpose applications for years now. For a while it seemed as if video transcoding would be the killer application for GPUs, that was until Intel’s Quick Sync showed up last year.
There’s a lot to talk about when it comes to accelerated video transcoding. Not the least of it is HandBrake’s dominance for anyone doing small-scale size reductions of their DVD collections for transport on mobile devices. We owe it all to the open source x264 encoder and all the programmers who have contributed to it over the years, standing on one another’s shoulders and allowing us to effortlessly encode or transcode gigabytes of video down to manageable sizes. But Intel has attempted to rock the boat by inserting itself into the fray, tooling its QuickSync technology to accelerate the compression and decompression of video frames. It is, however, a proprietary path pursued by a few small-scale software vendors. And it prompts the question: when is open source going to benefit from the proprietary Intel QuickSync technology? Maybe it’s going to take a long time. Maybe it won’t happen at all. Luckily for the HandBrake users in the audience, an attempt is now being made to re-engineer x264 to take advantage of any OpenCL-compliant hardware on a given computer.
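In the meantime, the CPU-bound x264 path is still perfectly scriptable. Here is a minimal sketch of driving HandBrake's command-line interface from Python; the file names and quality value are placeholders of my own, and it assumes HandBrakeCLI is installed and on your PATH:

```python
import subprocess

# Placeholder paths; point these at your own ripped DVD and output file.
source = "my_dvd_rip.mkv"
target = "my_dvd_rip_ipad.mp4"

# -e x264 selects the software x264 encoder; -q 20 is a constant-quality
# setting (lower numbers mean higher quality and larger files).
cmd = [
    "HandBrakeCLI",
    "-i", source,
    "-o", target,
    "-e", "x264",
    "-q", "20",
]

subprocess.run(cmd, check=True)
```

If the OpenCL work pans out, the hope is that a job like this picks up whatever GPU happens to be in the machine with little more than a switch of encoder.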
But moreover, Yahoo needed to leverage this thing that it had just bought. Yahoo wanted to make sure that every one of its registered users could instantly use Flickr without having to register for it separately. It wanted Flickr to work seamlessly with Yahoo Mail. It wanted its services to sing together in harmony, rather than in cacophonous isolation. The first step in that is to create a unified login. That’s great for Yahoo, but it didn’t do anything for Flickr, and it certainly didn’t do anything for Flickr’s (extremely vocal) users.
This is from a Gizmodo article on how Yahoo first bought Flickr and then proceeded to let it erode. As the old cliche sez, the road to hell is paved with good intentions. Personally, I didn’t really mind the issue others had with the Yahoo login. I was allowed to use the Flickr login for a long time after the takeover. But I still had to create a Yahoo account, even if I never used it for anything other than accessing Flickr. Once I realized this was the case, I dearly wished Google had bought them instead, as I was already using Gmail and other Google services.
Most recently there have been a lot of congratulations spread around following the release of a new Flickr uploader. I always had to purchase an add-on for Apple’s iPhoto in order to streamline the cataloging, annotating, and arranging of picture sets. Doing the uploads one at a time through the Web interface was not on; I needed bulk uploads, but I refused to export picture sets out of iPhoto just to get them into Flickr. So an aftermarket arose for people like me who were heavily invested in iPhoto. These add-on programs worked great, but they would go out of date or become incompatible with newer versions of iPhoto, and you would have to go back and drop another $10 USD on a newer version of your iPhoto-to-Flickr exporter.
And by this time Facebook had so taken over the social networking aspects of picture sharing that no one could see the point of a single-purpose service (just picture sharing). When Facebook let you converse, play games, and poke your friends, why would you log out and open Flickr just to manage your photos? The friction was too high and the integration too low for the bulk of Internet users. Facebook had gained the mindshare, reduced the friction, and made everything seamless; it just worked the way everyone thought it should. It is hard to come back from a defeat like that, given the millions of sign-ups Facebook was enjoying. Yahoo should have had an app for that early on, letting people share their Flickr sets with similar access controls and levels of security.
I would have found Flickr a lot more useful if it had been well bridged into the Facebook universe during the critical period of 2008-2010. That was exactly when things were chaotically ramping up in terms of new Facebook account creation. An insanely great Flickr app for Facebook could have made a big difference in growing community awareness, and possibly garnered a few new Flickr accounts along the way. However, agendas so often act as blinders, closing you off from the environment in which you operate. The Yahoo merger and the agenda of ‘integration’ were more or less the only things Flickr had going on during the giant Facebook ramp-up. And so it goes: Yahoo stumbles more than once, takes a perfectly good Web 2.0 app, and lets it slowly erode like Friendster and MySpace before it. So long, Flickr, it’s been good to know yuh.
Paul Otellini, CEO of Intel (Photo credit: Wikipedia)
During Intel’s annual investor day on Thursday, CEO Paul Otellini outlined the company’s plan to leverage its multi-billion-dollar chip fabrication plants, thousands of developers and industry sway to catch up in the lucrative mobile device sector, reports Forbes.
But what you are seeing is a form of Fear, Uncertainty and Doubt (FUD) being spread about to sow the seeds of mobile Intel processor sales. The doubt is not as obvious as questioning the performance of ARM chips, or the ability of manufacturers like Samsung to meet their volume targets and reject rates for each new mobile chip. No, it’s more subtle than that, and only noticeable to people who know details like which design rule Intel is currently using versus the ones used by Samsung or TSMC (Taiwan Semiconductor Manufacturing Company). Intel is just now releasing its next-generation 22nm chips while companies like Samsung are still trying to recoup their investment in 45nm and 32nm production lines. Apple is just beginning to sample some 32nm chips from Samsung in iPad 2 and Apple TV products; its current flagship iPad and iPhone models both use a 45nm chip produced by Samsung. Intel’s message is that the older-generation technology, while good, doesn’t carry the weight of Intel’s massive investment in next-generation chip technology. The new chips will be smaller, more energy efficient, and less expensive: all the things needed to make a higher profit on the consumer devices that use them. However, Intel doesn’t do ARM chips; it has Atom, and that is the one thing that has hampered any big design wins in cellphones or tablets to date. At any given design rule, ARM chips almost always use less power than a comparably sized Atom chip from Intel. So whether this is really an attempt to spread FUD can be debated one way or another. But the message is clear: Intel is trying to fight back against ARM. Why? Let’s turn back the clock to March of this year and a previous article, also appearing in AppleInsider:
This article is referenced in the original article quoted at the top of the page, and it points out why Intel is trying to get Apple to take notice of its mobile chip commitments. Apple designs its own chips and contracts the manufacturing out to a foundry. To date Samsung has been the sole source of the A-series processors used in iPhone/iPod/iPad devices, while Apple tries to get TSMC up to speed as a second source. Meanwhile, sales of Apple devices continue to grow handsomely in spite of these supply limits. More important to Intel is that blistering growth in spite of Apple being on older foundry technology and design rules. Intel has a technological and investment advantage over Samsung now. What it does not have is a chip that is BETTER than Apple’s in-house-designed ARM chip. That’s why the underlying message from Intel is that it has to make its Atom chip so much better than an A4, A5, or A5X at ANY design rule that Apple cannot ignore Intel’s superior design and manufacturing capability. Apple will still use Intel chips, but not in its flagship products, until Intel achieves that much greater level of technical capability and sophistication in its mobile microprocessors.
Intel is planning a two-pronged attack on the smartphone and tablet markets, with dual Atom lines going down to 14 nanometers and Android providing the special sauce to spur sales.
Lastly, Iain Thomson from The Register weighs in on what the underlying message from Intel really is. It’s all about the future of microprocessors for the consumer market. The emphasis in this article, however, is that Android OS devices, whether they be phones, tablets, or netbooks, will be Intel’s way to compete AGAINST Apple. But again, it’s not Apple as such; it’s the microprocessor Apple is using in its best-selling devices that scares Intel the most. Intel has, since its inception, been geared toward the ‘mainstream’ market, selling into enterprises and the consumer space for years. It has milked the desktop PC revolution it more or less helped create, starting with its forays into integrated microprocessor chips and chipsets. It reminds me a little of the old steel plants in the U.S. during the 1970s, while Japan was building NEW steel plants with a much more energy-efficient design and a steel-making technology that produced a higher-quality product. Less expensive, higher-quality steel was only possible in brand-new steel plants, and the old-line U.S. plants couldn’t justify the expense, so they wrapped up and shut down operations all over the place. Intel, while able to make that type of investment in newer technology, is still not able to create the energy-saving mobile processor that will outperform an ARM-core CPU.
Profile shown on Thefacebook in 2005 (Photo credit: Wikipedia)
Codenamed “Knox,” Facebook’s storage prototype holds 30 hard drives in two separate trays, and it fits into a nearly 8-foot-tall data center rack, also designed by Facebook. The trick is that even if Knox sits at the top of the rack — above your head — you can easily add and remove drives. You can slide each tray out of the rack, and then, as if it were a laptop display, you can rotate the tray downwards, so that you’re staring straight into those 15 drives.
Nice article about Facebook’s own data center design and engineering efforts. I think its approach is going to advance the state of the art far more than Apple’s, Google’s, or Amazon’s protected and secretive data center efforts. Although those companies have money and resources to plow into custom-engineered bits for their data centers, Facebook can at least show off what it has learned as it has scaled up to a huge number of daily users. Not the least of which is expressed best by its hard drive rack design, a tool-less masterpiece.
This article emphasizes the physical aspects of the racks in which the hard drives are kept. It’s a tool-less design not unlike what I talked about in an article from a month ago: HP has adopted a tool-less design for its all-in-one (AIO) engineering workstation (see Introducing the HP Z1 Workstation). The video link demonstrates the idea of a tool-less design for what is arguably not the easiest device to build without proprietary connectors, fasteners, etc. I use my personal experience of attempting to upgrade my 27″ iMac as the foil for what is presented in the HP promo video. If Apple adopted a tool-less design for its iMacs, there’s no telling what kind of aftermarket might spring up for hobbyists or even casually interested Mac owners.
I don’t know how much of Facebook’s decision-making around its data center designs is driven by the tool-less methodology. But I can honestly say that any large outfit like Facebook or HP attempting to go tool-less is in some ways a step in the right direction. Companies like O’Reilly (publisher of Make: magazine) and iFixit are readily providing a path for anyone willing to put in the work to learn how to fix the things they own. Also throw into that mix less technological, more home-maintenance-style outfits like Repair Clinic; while not as sexy technologically, I can vouch for its ability to teach me how to fix a fan in my fridge.
Borrowing the phrase “If you can’t fix it, you don’t own it,” let me say I wholeheartedly agree. And also borrowing from the old Apple commercial: here’s to the crazy ones, because they change things and have no respect for the status quo. So let’s stop throwing away those devices, appliances, and automobiles, and let’s start first by fixing some things.
This is the 32nm A5 CPU from a new-model Apple TV, the same CPU being installed in a small number of iPad 2s.
I would like to applaud Apple’s 32nm migration plan. By starting with lower volume products and even then, only on a portion of the iPad 2s available on the market, Apple maintains a low profile and gets great experience with Samsung’s 32nm HK+MG process.
Anand Lal Shimpi at Anandtech.com does a great turn explaining some of the electrical engineering minutiae entailed by Apple’s unpublicized switch to a smaller design rule for some of its 2nd-generation iPads. Specifically, this iPad’s firmware identifies it as iPad2,4, indicating a 32nm version of the Apple A5 chip. And boy howdy, is there a difference between the 45nm A5 and the 32nm A5 in the iPad 2.
Anand first explains the process technology involved in making the new chip (metal gate electrodes and a high dielectric constant, or high-k, gate oxide). Most of it is chosen to keep electricity from leaking across the transistor “switches” that populate the circuits of the processor. The high-k gate oxide can be made physically thicker while still behaving, electrically, like a very thin one, so far less current tunnels through it, and the metal gate electrodes replace the polysilicon gates that don’t pair well with the new oxide material. A great explanation, I think, of those two on-die changes in the new Samsung 32nm design rule. Both changes help keep electrical current from leaking all over the processor.
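As a back-of-envelope aside (my gloss, not Anand's, and it skips over a lot of device physics), the benefit of the high-k oxide comes down to equivalent oxide thickness:

```latex
% Equivalent oxide thickness (EOT) of a high-k gate dielectric:
%   EOT = t_high-k * (kappa_SiO2 / kappa_high-k), with kappa_SiO2 ~ 3.9.
% A hafnium-based dielectric with kappa ~ 20 can be roughly five times
% thicker than plain SiO2 while presenting the same gate capacitance,
% and gate tunneling leakage falls off steeply with physical thickness.
\[
  \mathrm{EOT} = t_{\text{high-}k}\,\frac{\kappa_{\mathrm{SiO_2}}}{\kappa_{\text{high-}k}}
  \approx t_{\text{high-}k}\,\frac{3.9}{20}
\]
```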
What does this change mean? The follow-up to that question comes from the benchmarks Anand runs in the rest of the article, checking battery life each step of the way. Informally, it appears the iPad2,4 will have roughly one extra hour of battery life compared to the original iPad2,1 using the larger 45nm A5 chip. Performance of the graphics and CPU is exactly the SAME as the first-generation A5. So, as the article title indicates, this change was just a straightforward die shrink from 45nm to 32nm, and it no doubt is helping validate the A5 architecture on the new production line’s process technology. That will absolutely be required to wedge the very large current-generation A5X CPU from the iPad 3 into a new iPhone in Fall 2012.
But consider this: even as Apple and Samsung both refine and innovate on the ARM architecture for mobile devices, Intel is still the process technology leader, bar none. Intel has 22nm production lines up and running and is releasing Ivy Bridge CPUs on that design rule this Summer 2012. While Intel doesn’t literally compete in the mobile chip industry (there have been attempts in the past), it can at least tout the most dense, power-efficient chips in the categories it dominates. I cannot help but wonder what kind of gains could be made if an innovator like Apple had access to an ARM chip foundry with all of Intel’s process engineering and optimization. What would an A5X chip look like at the 22nm design rule, with all of Intel’s power efficiency and silicon process technology applied to it? How large would the die be? What kind of battery life would you see if you die-shrunk an A5X all the way down to 22nm? That to me is the Andy Grove 10X improvement I would like to see. Could we get 11-12 continuous hours of battery life on a cell phone? Could we see a cell phone with more CPU/graphics capability than the current generation Xbox and PlayStation? Hard to tell, I know, but it’s just so darned much fun to think about.
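To make the "how large would the die be" question concrete, here is the naive arithmetic (my own back-of-envelope, assuming ideal area scaling with the square of the design rule, which real processes never quite achieve because analog blocks and I/O don't shrink as well):

```latex
% Ideal die-area scaling with the design rule (linear dimension squared):
%   A_new = A_old * (L_new / L_old)^2
% 45nm -> 32nm: (32/45)^2 ~ 0.51, roughly half the area.
% 45nm -> 22nm: (22/45)^2 ~ 0.24, roughly a quarter of the area.
\[
  A_{\text{new}} = A_{\text{old}}\left(\frac{L_{\text{new}}}{L_{\text{old}}}\right)^{2},
  \qquad
  \left(\tfrac{32}{45}\right)^{2} \approx 0.51,
  \qquad
  \left(\tfrac{22}{45}\right)^{2} \approx 0.24
\]
```

So a hypothetical 22nm A5X would occupy something like a quarter of its 45nm die area before those real-world caveats, which is exactly why the battery life question is so tantalizing.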
Design rules at 45nm (left) and 32nm (right) indicate the scale being discussed in the Anandtech article.
Unsung Heroes of Tech: Back in the late 1970s you wouldn’t have guessed that this shy young Cambridge maths student named Wilson would be the seed for what has now become the hottest-selling microprocessor in the world.
This is an amazing story of how a small computer company in Britain was able to jump into the chip design business and accidentally create a new paradigm in low-power chips. It is astounding what seemingly small groups can come up with: complete product categories unto themselves. The BBC Micro was the single most important project keeping the company going, produced as a learning aid for the BBC television show The Computer Programme, part of the BBC Computer Literacy Project. From that humble beginning of making the BBC Micro, Furber and Wilson’s ability to engineer a complete computer was well demonstrated.
But whereas the BBC Micro used an off-the-shelf MOS 6502 CPU, a later computer used a custom (bespoke) chip designed in-house by Wilson and Furber. This is the vaunted Acorn RISC Machine (ARM) used in the Archimedes desktop computer. And that one chip helped launch a revolution unto itself: the very first time they powered up a sample chip, the multimeter hooked up to it registered no power draw. At first one would think this was a flaw and ask, “What the heck is happening here?” But when further inspection showed the multimeter was correct, the engineers discovered that the whole CPU was running off power leaking from the logic circuits within the chip itself. Yes, that first 1985 sample of the ARM CPU ran on a tenth of a watt of electricity. And that ‘bug’ went on to become a feature in later generations of the ARM architecture.
Today we know the ARM CPU cores as licensed intellectual property that any chip maker can acquire and implement in its mobile processor designs. ARM cores have come to dominate mobile chips from manufacturers as diverse as Qualcomm and Apple Inc. But none of it would ever have happened were it not for that somewhat surprising discovery of just how power efficient the first sample chip really was when it was plugged into a development board. So thank you, Sophie Wilson and Steve Furber: the designers and engineers of today stand upon your shoulders the way you once stood on the shoulders of the people who designed the MOS 6502.
MOS 6502 microprocessor in a dual in-line package, an extremely popular 8-bit design (Photo credit: Wikipedia)
Sebastian Thrun, Associate Professor of Computer Science at Stanford University. (Photo credit: Wikipedia)
Google X (formerly Labs) founder Sebastian Thrun debuted a real-world use of his latest endeavor, Project Glass, during an interview on the syndicated Charlie Rose show which aired yesterday, taking a picture of the host and then posting it to Google+, the company’s social network. Thrun appeared to be able to take the picture through tapping the unit, and posting it online via a pair of nods, though the project is still at the prototype stage at this point.
You may remember Sebastian Thrun the way I do. He was spotlighted a few times on the PBS TV series NOVA in its coverage of the DARPA Grand Challenge competition in 2005. That was the year Carnegie Mellon University battled Stanford University in a race of driverless vehicles in the desert. The previous year CMU had been the favorite to win, but its vehicle didn’t finish the race. By the following year’s competition, the stakes were much higher. Stanford started its effort in the summer of 2004, just months after the March Grand Challenge race, and by October 2005 the second race was held, with CMU and Stanford battling it out. Sebastian Thrun was the head of the Stanford team and had previously been at CMU, a colleague of the Carnegie race team head, Red Whittaker. In 2001 Thrun took a sabbatical year from CMU and spent it at Stanford. Eventually Thrun left Carnegie Mellon altogether and moved to Stanford in July 2003.
Thrun also took a graduate student of his and Red Whittaker’s with him to Stanford, Michael Montemerlo. That combination of CMU experience plus a grad student helped accelerate the pace at which Stanley, the driverless vehicle, was developed to compete in October of 2005. Now move forward to another academic sabbatical, this time from Stanford to Google Inc. Thrun took a group of students with him to work on Google Street View, and eventually that led to another driverless car, funded completely internally by Google. Thrun’s accomplishments have continued to accrue at regular intervals, so much so that he has now given up his tenure at Stanford to join Google as a kind of entrepreneurial research scientist helping head up Google X Labs. X Labs is a kind of internal skunkworks that Google funds to work on various and sundry technologies, including the Google driverless car. Add to this Sebastian Thrun’s other big announcement this year, an open education initiative titled Udacity (attempting to ‘change’ the paradigm of college education). The list, as you can see, goes on and on.
So where does that put the Google Project Glass experiment? Sergey Brin showed off a prototype of the system at a party very recently; now Sebastian Thrun has shown it off as well. Google Project Glass is a prototype, as most websites have reported, and Sebastian Thrun’s interview on Charlie Rose attempted to demo what the prototype can do today. According to the article quoted at the top of this blog post, Google Glass can respond to gestures and voice (though voice was not demonstrated). Questions remain as to what is included in the package to make it all work. Yes, the glasses do appear ‘self-contained’, but then a wireless connection (as pointed out by Mashable.com) would not be visible to anyone not specifically shown all the components that make it go. That little bit of visual misdirection (like a magician’s) would lead one to believe that everything resides in the glasses themselves. Well, so much the better for Google to let everyone draw their own conclusions. As for the concept video of Google Glass, I’m still not convinced it’s the best way to interact with a device:
As the video shows, it’s centered more on voice interaction, very much like Apple’s own Siri technology. And that, as you know, requires two things:
1. A specific iPhone that has a noise cancelling microphone array
2. A broadband cellphone connection back to the Apple mothership data center in North Carolina to do the speech-to-text recognition and responses
So yes, to an untrained observer the glasses look self-contained, but doing the heavy lifting shown in the concept video is going to require the Google Glasses plus two additional items:
1. A specific Android phone with the Google Glass spec’d microphone array and ARM chip inside
2. A broadband cellphone connection back to the Google motherships, wherever they may be, to do some amount of off-phone processing and, obviously, data retrieval for all the Google apps included
It would be interesting to know what passes over the personal area network between the Google Glasses and the cellphone data uplink that a real set of glasses is going to require. The devil is in those details, and they will be the limiting factor on how inexpensively this product can be manufactured and sold.
Thomas Hawk’s photo of Sergey Brin wearing Google Glasses
Similarly disappointing for everyone who isn’t Intel, it’s been more than a year after Sandy Bridge’s launch and none of the GPU vendors have been able to put forth a better solution than Quick Sync. If you’re constantly transcoding movies to get them onto your smartphone or tablet, you need Ivy Bridge. In less than 7 minutes, and with no impact to CPU usage, I was able to transcode a complete 130 minute 1080p video to an iPad friendly format—that’s over 15x real time.
QuickSync, for anyone who doesn’t follow Intel’s technology white papers and CPU releases, is a special feature of Sandy Bridge-era Intel CPUs. Its roots go back as far as the Clarkdale series with embedded graphics (the first round of the 32nm design rule), which could already speed up the decoding of video streams saved in a number of popular formats: VC-1, H.264, MP4, and so on. Now it’s marketed to anyone trying to speed up transcoding video from one format to another. The first Sandy Bridge CPUs using the hardware encoding portion of QuickSync showed incredible speeds compared to the GPU-accelerated encoders of that era. And things have been kicked up a further notch in the embedded graphics of the Intel Ivy Bridge series CPUs.
In the quote at the beginning of this article, I included a summary from the Anandtech review of the Intel Core i7 3770, which gives a sense of the magnitude of the improvement. The full 130-minute Blu-ray movie was converted at better than 15 times real time, meaning that for every minute of video coming off the disc, QuickSync transcodes it in about 4 seconds! That is major progress for anyone who has followed this niche of desktop computing. Having spent time capturing, editing, and exporting video, I will admit transcoding between formats is a lengthy process that eats up a lot of CPU resources. Offloading all that burden to the embedded graphics controller completely changes the traditional trade-off of slowing the computer to a crawl and having to walk away and let it work.
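The arithmetic behind those figures is simple enough to write down (my own back-of-envelope, using the 130-minute runtime and the speed figures from the review):

```latex
% Transcode wall-clock time at an N-times-real-time rate: T = T_video / N.
% At 15x, each minute of footage takes 60 / 15 = 4 seconds;
% finishing the 130-minute film in under 7 minutes implies an effective
% rate of about 130 / 7 ~ 18.6x, comfortably "over 15x" as Anand says.
\[
  T_{\text{transcode}} = \frac{T_{\text{video}}}{N},
  \qquad
  \frac{60\ \text{s}}{15} = 4\ \text{s per minute of video},
  \qquad
  \frac{130\ \text{min}}{7\ \text{min}} \approx 18.6\times
\]
```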
Now transcoding is trivial; it costs nothing in terms of CPU load. Any time it runs faster than real time means you don’t have to walk away from your computer (or at least not for very long), and 10X faster than real time makes that doubly true. Now we are fully at 15X real time for a full-length movie. The time spent is so short you would never give a second thought to “Will this transcode slow down the computer?” It won’t; in fact, you can continue doing all your other work, be productive, have fun, and carry on just as if you hadn’t asked your computer to do the most complicated, time-consuming chore that (up until now) you could possibly ask of it.
Knowing this application of the embedded graphics is so useful for desktop computers makes me wonder about scientific computing. What could Intel provide in terms of performance increases for simulations and computation in a supercomputer cluster? Seeing how hybrid supercomputers mixing NVIDIA Tesla GPU co-processors with Intel CPUs have slowly marched up the Top 500 supercomputer list makes me think Intel could leverage QuickSync further... much further. Unfortunately this performance boost is solely dependent on a few vendors of proprietary transcoding software. Open source developers have no opening into the QuickSync technology, so they cannot write a library that redirects a video stream into the QuickSync acceleration pipeline. When somebody does accomplish that feat, it may be shortly afterward that you see some Linux compute clusters attempt to use QuickSync as an embedded algorithm accelerator too.
Timeline of Intel processor codenames including released, future and canceled processors. (Photo credit: Wikipedia)
My first blogging platform was Dave Winer’s Radio UserLand. One of Dave’s mantras was: “Own your words.” As the blogosphere became a conversational medium, I saw what that could mean. Radio UserLand did not, at first, support comments. That turned out to be a constraint well worth embracing. When conversation emerged, as it inevitably will in any system of communication, it was a cross-blog affair. I’d quote something from your blog on mine, and discuss it. You’d notice, and perhaps write something on your blog referring back to mine.
I would love to be able to comment on an article or a blog entry by passing it a link to a post in my own WordPress instance on WordPress.com. However, rendering that ‘feed’ back into the comments section of the originating article or blog page doesn’t seem to be common. At best I could drop a permalink into the comments section so people might be tempted to follow the link to my blog. But it’s kind of unfair to an unsuspecting reader to force them to jump, in effect redirecting to another website, just to follow a commentary. So I fully agree there needs to be a pub/sub-style way of passing my blog entry by reference back into the comments section of the originating article or post. Better yet, it should give me some ability to amend and edit my poor choice of words after I first publish a response. Too often silly mistakes get preserved in the ‘amber’ of the comment fields in the back-end MySQL databases of the content management systems housing many online web magazines. So there’s plenty of room for improvement, and RSS could easily embrace and extend this style of commenting, I think, if someone were driven to develop it.
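The closest existing building block is probably the Pingback mechanism WordPress already supports, which notifies the originating post of your response by reference over XML-RPC. Here is a minimal sketch of sending one; the URLs are placeholders of my own, and it assumes the target blog exposes the standard xmlrpc.php endpoint with pingbacks enabled:

```python
import xmlrpc.client

# Hypothetical URLs: my response post and the article I'm responding to.
source = "https://myblog.wordpress.com/2012/05/my-response/"
target = "https://example-magazine.com/2012/05/original-article/"

# WordPress blogs typically expose their XML-RPC endpoint at /xmlrpc.php;
# the Pingback 1.0 spec defines the pingback.ping(source, target) method.
endpoint = xmlrpc.client.ServerProxy("https://example-magazine.com/xmlrpc.php")

try:
    result = endpoint.pingback.ping(source, target)
    print("Pingback accepted:", result)
except xmlrpc.client.Fault as fault:
    print("Pingback rejected:", fault.faultCode, fault.faultString)
```

It still renders on the far end as little more than a link excerpt, which is exactly why a richer, editable, pub/sub-style version of this would be such an improvement.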