Category: blogroll

This is what I subscribe to myself.

  • Angelbird to Bring PCIe SSD on the Cheap and Iomega has a USB 3 external SSD

     

Image: msystems (via Wikipedia)

     

    From Tom’s Hardware:

    Extreme SSD performance over PCI-Express on the cheap? There’s hope!

    A company called Angelbird is working on bringing high-performance SSD solutions to the masses, specifically, user upgradeable PCI-Express SSD solution.

    via Angelbird to Bring PCIe SSD on the Cheap.

This is one of a pair of SSD announcements that came in on Tuesday. SSDs are all around us now and the product announcements are coming in faster and harder. The first one is from an Austrian company named Angelbird. Looking at the website announcing the specs of their product, it is on paper a very fast PCIe-based SSD drive, right up there with Fusion-io in terms of what you get for the dollars spent. I'm a little concerned, however, about its reliance on an OS hosted in the firmware of the PCIe card. I would prefer something more peripheral-like that the OS supports natively, rather than having the card become the OS. But this is all speculative until actual production or test samples hit the review websites and we see some kind of benchmarks from the likes of Tom's Hardware or AnandTech.

    From MacNN|Electronista:

    Iomega threw itself into external solid-state drives today through the External SSD Flash Drive. The storage uses a 1.8-inch SSD that lets it occupy a very small footprint but still outperform a rotating hard drive:

    Read more: http://www.electronista.com/articles/10/10/15/iomega.outs.external.usb.30.ssd/

The second story covers a new product from Iomega: for the first time we have an external SSD from a mainstream manufacturer. The price is at a premium compared to the performance, but if you like the looks you'll be willing to pay. The read and write speeds aren't bad, but they're not the best for the amount of money you're paying. And why do they still use a 2.5″ external case if it's internally a 1.8″ drive? Couldn't they shrink it down to the old Firefly HDD size from back in the day? It should be smaller.

  • Personal data stores and pub/sub networks – O’Reilly Radar

    Now social streams have largely eclipsed RSS readers, and the feed reading service I’ve used for years — Bloglines — will soon go dark. Dave Winer thinks the RSS ecosystem could be rebooted, and argues for centralized subscription handling on the next turn of the crank. Of course definitions tend to blur when we talk about centralized versus decentralized services.

    via Personal data stores and pub/sub networks – O’Reilly Radar.

Here now, more Uncertainty and Doubt surrounding RSS readers as the future of consuming web pages. I wouldn't expect this from the one guy I most respect when it comes to future developments in computer technology. I have followed Jon Udell's shining example each step of the way from Radio UserLand to Bloglines. And I breathed deeply the religion of loosely coupled services tied together with 'services' like pub/sub or RSS feeds. The flexibility and robustness of not being beholden to a single vendor or purveyor of a free service were obvious to me. However, I have fallen prey to the siren song of social media, starting with Digg, Flickr, Google Reader and LinkedIn. Each one claimed some amount of market share, but none of them anticipated the wild popularity of Friendster, MySpace and now Facebook. I actively participate in Facebook to help keep everyone energized and to let them know someone is reading the stuff they post. I want this service to succeed. And by all accounts it's succeeding beyond its wildest dreams, through advertising revenue.

But who wants to be marketed to? Doc Searls argued rightly that our personal information is ours, our 'attention' is ours. He wants something like a Vendor Relationship Management service where we keep our 'profile' information and dole out the absolute minimum necessary to participate online or do commerce. And Jon in this article uses the elmcity project as a sterling example of just how many stovepipe social networks we participate in. Jon's work with elmcity is an ongoing attempt to make events 'subscribe-enabled' the way blogs and online news websites already are. Each online calendar program has a web presence, but usually no comparable publish/subscribe service like the RSS or iCalendar formats associated with it. To 'really' know what is going on requires a network of event curators who can manage the data feeds, which then get plugged into an information hub that aggregates all the events in a geographical region. It's all loosely coupled and more robust than trying to get everyone to adopt a single calendar.

Which brings us back to the online personal data store: why can't we have a 'hub' that aggregates the 'services' we participate in but contains the single source of profile information that we manage and dole out? That way I'm not hostage to End User License Agreements and the attendant risks of letting someone else be my profile steward. Instead I can manage it and let the services subscribe to my hub, and all my 'data stores' can exist across all the social networks that exist or may exist. No lock-in. Think about this: I cannot export all the little write-ups and comments I made on headlines I posted in Bloglines. I could export my blogroll, though, using OPML (thanks Dave Winer!). Similarly I won't ever be able to export any of my numerous status updates in Facebook. In fact, as near as I can tell, there is no Export button anywhere for anything. It's like AOL, an internet cul-de-sac that we all willingly participate in, never considering the consequences.
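The hub idea above can be sketched in a few lines. This is purely illustrative: the `ProfileHub` class, field names and service names are all hypothetical, just to show the "owner-managed profile, services subscribe and get only the minimum" shape of the thing.

```python
# Hypothetical sketch of a personal data store "hub": the owner holds
# the single source of profile truth, and each subscribing service is
# granted only the fields it genuinely needs.

class ProfileHub:
    def __init__(self, profile):
        self.profile = profile        # the single source of truth
        self.subscribers = {}         # service name -> set of allowed fields

    def subscribe(self, service, allowed_fields):
        """Grant a service access to a minimal set of fields."""
        self.subscribers[service] = set(allowed_fields)

    def view_for(self, service):
        """Dole out only what the owner has granted that service."""
        allowed = self.subscribers.get(service, set())
        return {k: v for k, v in self.profile.items() if k in allowed}

hub = ProfileHub({"name": "Jane", "email": "jane@example.com", "phone": "555-0100"})
hub.subscribe("social-site", ["name"])
hub.subscribe("store", ["name", "email"])
print(hub.view_for("social-site"))  # {'name': 'Jane'}
```

The point of the design is that revoking a service is one dictionary entry, not a support ticket to someone else's profile steward.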

  • Intel Debuts New Atom System-on-Chip Processor

Image: an Altera Flex FPGA with 20,000 cells (via Wikipedia)

    At an IDF keynote, Intel launched “Tunnel Creek,” a new Atom E600 SoC processor. One particular processor detailed is codenamed “Stellarton,” which consists of the Atom E600 processor paired with an Altera FPGA on a multi-chip package that provides additional flexibility for customers who want to incorporate proprietary I/O or acceleration.

    via Intel Debuts New Atom System-on-Chip Processor.

Intel has announced a future product that pairs an Intel Atom processor with an Altera FPGA. Now this is interesting: I just mentioned FPGA (field-programmable gate array) chips, and out of the blue Intel has summoned the same kind of chip and married it to a little Atom core processor. They say it could be used as an accelerator of some sort. I'm wondering what specifically they had in mind (something very esoteric and niche like a TCP/IP offload processor). I would like to see some touting of its possible uses rather than just, "We want to see what happens." Unfortunately, the way competition works in consumer electronics, you never tell people what's inside. You let folks like iFixit do a teardown and put pictures up. You let industry websites research all the chips and what they cost, estimate the ones that are custom integrated circuits, and report the cost to manufacture the device. That's what they do with every Apple iPhone these days.

It would be cool if Intel could also sell this as a development kit for Stellarton's users. Keep the price high enough to prevent people from releasing products based just on the kit's CPU, but low enough to get people to try out some interesting projects. I'm guessing it would be a great tool for video transcoding, muxing/demuxing video streams, etc. If anyone does release a shipping product, though, it would be cool if they put a "Stellarton Inside" logo on it, so we know FPGAs are doing the heavy lifting. The other possibility Intel mentions is using the FPGA for proprietary I/O, so possibly something like an InfiniBand network interface? I still have hopes it gets used in the consumer electronics world.

  • Custom superchippery pulls 3D from 2D images like humans • The Register

    Computing brainboxes believe they have found a method which would allow robotic systems to perceive the 3D world around them by analysing 2D images as the human brain does – which would, among other things, allow the affordable development of cars able to drive themselves safely.

    via Custom superchippery pulls 3D from 2D images like humans • The Register.

The beauty of this new work is that they designed a custom processor using a Xilinx Virtex-6 FPGA (field-programmable gate array). An FPGA, for those who don't know, is a computer chip that you can 're-wire' through software to take on any mathematical task you can dream up. In the old days this would have required a custom chip to be engineered, validated and manufactured at great cost; with an FPGA you need only a development kit and the chips you program. That lets you optimize every step within the processor and speed things up far beyond a general-purpose processor (like the Intel chip that powers your Windows or Mac computer). In this research the custom-designed circuitry uses video images to decide where in the world a robot can safely drive as it maneuvers around on the ground. I know Hans Moravec has done a lot with this at Carnegie Mellon. And it seems this group is from Yale's engineering department, which is encouraging: the techniques are being embraced and extended by another U.S. university. The low power of this processor and its facility for processing video images in real time are ahead of their time, and hopefully it will find some commercial application in robotics or automotive safety controls. As for me, I'm still hoping for a robot chauffeur.

  • The Ask.com Blog: Bloglines Update

Image: Steve Gillmor (via CrunchBase)

As Steve Gillmor pointed out in TechCrunch last year, being locked in an RSS reader makes less and less sense to people as Twitter and Facebook dominate real-time information flow. Today RSS is the enabling technology – the infrastructure, the delivery system. RSS is a means to an end, not a consumer experience in and of itself. As a result, RSS aggregator usage has slowed significantly, and Bloglines isn't the only service to feel the impact. The writing is on the wall.

    via The Ask.com Blog: Bloglines Update.

I don’t know if I agree with the conclusion that RSS readers are a form of lock-in. I consider Facebook participation a form of lock-in, as all my quips, photos and posts in that social-networking cul-de-sac will never be exported back out again. There’s no way to do it, never ever. With an RSS reader, at least my blogroll can easily be exported and imported again using OPML-formatted ASCII text. How cool is that in the era of proprietary binary formats (mp4, pdf, doc)? No, I would say RSS is innately good in and of itself. Enabling technologies are like that, and while RSS readers are not the only way to consume or create feeds, I haven’t found one that couldn’t import my blogroll. Try doing that with Twitter or Facebook (click the don’t-like button).
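That portability is easy to see because OPML is just plain XML. Here is a minimal sketch of reading a blogroll's feed URLs with nothing but standard-library Python; the feed titles and URLs below are made-up placeholders, not my actual subscriptions.

```python
# OPML blogrolls are plain XML: each feed is an <outline> element with
# an xmlUrl attribute. Parsing one takes a few stdlib lines.
import xml.etree.ElementTree as ET

opml = """<opml version="1.0">
  <head><title>My Blogroll</title></head>
  <body>
    <outline text="Jon Udell" type="rss" xmlUrl="http://example.com/udell.rss"/>
    <outline text="The Register" type="rss" xmlUrl="http://example.com/register.rss"/>
  </body>
</opml>"""

root = ET.fromstring(opml)
# Collect every RSS outline's feed URL, in document order.
feeds = [o.attrib["xmlUrl"] for o in root.iter("outline") if o.get("type") == "rss"]
print(feeds)
```

Any reader that speaks OPML can ingest that list, which is exactly the export-and-walk-away freedom the social networks don't offer.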

  • Blog U.: Augmented Reality and the Layar Reality Browser

    I remember when I first saw the Verizon Wireless commercial featuring the Layar Reality Browser. It looked like something out of a science fiction movie. When my student web coordinator came in to the office with her iPhone, I asked her if she had ever heard of “Layar.” She had not heard of it so we downloaded it from the App Store. I was amazed at how the app used the phone’s camera, GPS and Internet access to create a virtual layer of information over the image being displayed by the phone. It was my first experience with an augmented reality application.

    via Blog U.: Augmented Reality and the Layar Reality Browser – Student Affairs and Technology – Inside Higher Ed.

It’s nice to know Layar is getting some wider exposure. When I first wrote about it last year, the smartphone market was still somewhat small, and Layar was targeting phones that already had GPS built in, which the Apple iPhone wasn’t quite ready to allow access to in its development tools. Now the iPhone and Droid are willing participants in this burgeoning era of Augmented Reality.

The video in the article was shot on a Droid and does a WAY better job of showing off the Layar application than any of the fanboy websites. Hopefully real-world performance is as good as it appears in the video. And I’m pretty sure the software company behind it has been continuously updating it since it first appeared on the iPhone a year ago. Given the recent release of the iPhone 4 and its performance enhancements, I have a feeling Layar would be a cool, cool app to try out and explore.

  • Drive suppliers hit capacity increase difficulties • The Register

    Hard disk drive suppliers are looking to add platters to increase capacity because of the expensive and difficult transition to next-generation recording technology.

    via Drive suppliers hit capacity increase difficulties • The Register.

This is a good survey of upcoming HDD platter technologies. HAMR (Heat-Assisted Magnetic Recording) and BPM (Bit-Patterned Media) are the next generation, coming as the current Perpendicular Magnetic Recording (PMR) slowly hits the top end of its ability to squash together the 1s and 0s on a spinning hard drive platter. HAMR is reminiscent of the old magneto-optical technology from the halls of Steve Jobs’s NeXT Computer company. It uses a laser to heat the surface of the drive platter before the read/write head starts recording data. This ‘change’ in the state of the surface (the heat) helps align the magnetism of the bits being written, so the tracks of the drive and the bits recorded inside them can be more tightly spaced. In the world of HAMR, heat + magnetism = bigger hard drives on the same old 3.5″ and 2.5″ platters we have now. With BPM, the whole drive is manufactured to hold a set number of bits and tracks in advance. Each bit is created directly on the platter as a ‘well’ with a ring of insulating material surrounding it. The wells are sufficiently small and dense to allow much tighter spacing than PMR. But as is often the case, the new technologies aren’t ready for manufacturing; only a few test samples are out, in limited or custom-made engineering prototypes, to test the waters.

Given the slowdown in silicon CMOS chip speeds from the likes of Intel and AMD, along with the wall facing PMR, it would appear the frontier days of desktop computing are coming to a close. Gone are the days of the Megahertz wars, and the Gigabyte wars waged in the review sites and test labs across the Interwebs are winding down too. The torrid pace of change in hardware we all experienced from the release of Windows 95 to the release this year of Windows 7 has slowed to a radical incrementalism. Intel releases so many chips with ‘slight’ variations in clock speed and cache that one cannot keep up with them all. Hard drive manufacturers have tried to increment their disks about 0.5 TB every 6 months, but now that will stop. Flash-based SSDs will be the biggest change for most of us and will help break through the inherent speed barriers imposed by SATA and spinning-disk technologies. I hope a hybrid approach is used, mixing SSDs and HDDs for speed and size in desktop computers: fast things that need to be fast can use the SSD, while slow things that are huge in size or quantity go to the HDD. As for next-gen disk-based technologies, I’m sure there will be a change to the next higher-density technology. But it will no doubt be a long time in coming.

  • Seagate unveils first-ever 3TB external drive | Electronista

    Seagate is selling the drive today for $250. Cables to add new interfaces or support vary from $20 to $50. Internal drives are expected in the future but may wait until more systems can properly boot; using a larger than 2.1TB disk as a boot drive requires EFI firmware that most Windows PCs don’t have.

    via Seagate unveils first-ever 3TB external drive | Electronista.

No doubt the internal version, known as Constellation, is still to be released. And take note: EFI, or Extensible Firmware Interface, is the one thing differentiating Mac desktops from the large mass of Wintel desktops now on the market. Dell, HP, IBM, Acer, Asus, etc. are all still wedded to the old Intel BIOS-based motherboard architecture. The Mac alone adopted EFI and has used it consistently since Apple first adopted Intel chips for its computer products. Now the necessity of EFI is becoming embarrassingly clear, especially for the gamer fanboys out there who must have the largest hard drives on the market. Considering the size of these drives, it’s amazing to think you could pack 4 of them into a Mac Pro desktop and get 12TB of storage, all internally connected.

Regarding the internals of the drive itself: some speculation in this article suggested the drive used 4 platters in total to reach 3TB of storage. Computing the capacity per platter puts the density at 750 GB/platter, which would mark a significant increase over the more common 640 GB/platter in currently shipping drives. In fact, in a follow-up to the original announcement, Seagate said yesterday that it is using a total of 5 platters in this external hard drive. That computes to 600 GB/platter, which is more in line with currently shipping single-platter drives and even slightly less dense than the 640 GB platters at the top of the storage density scale.

  • Tilera, SeaMicro: The era of ultra high density computing

The Register did an article recently following up on a press release from Tilera. The news this week is that Tilera is now working on the next big thing: Quanta will be shipping a 2U rack-mounted computer with 512 processing cores inside. Why is that significant? Well, 512 is the magic number quoted in last week’s announcement from upstart server maker SeaMicro, whose SM10000 boasts 512 Intel cores inside a 10U box. Which makes me wonder: who or what is all this good for? Based solely on press releases and articles written to date about Tilera, their targeted customers aren’t quite as general as, say, SeaMicro’s. Even though each core in a Tilera CPU can run its own OS and share data, it is up to the device manufacturers licensing the Tilera chip to do the heavy lifting of developing the software and applications that make all that raw iron do useful work. The CPUs in the SeaMicro hardware, however, are fully x86-capable Intel Atom CPUs tied together with a lot of management hardware and software provided by SeaMicro. Customers in this case are most likely going to load software applications they already run on existing Intel hardware; re-coding or recompiling is unnecessary, as SeaMicro’s value-add is the management interface for all that raw iron. Quanta is packaging up the Tilera chips in a way that will make them more palatable to a potential customer who might also be considering SeaMicro’s product. It all depends on what apps you want to run, what performance you expect, and how dense you need all your cores to be when they are mounted in the rack. Numerically speaking, the Quanta SQ2 wins the race for ultimate density right now, with 512 general-purpose CPUs in a 2U rack mount; SeaMicro has 512 in a 10U rack mount. However, that in no way reflects the differences in OSes, types of applications, and performance you might see when using either piece of hardware.
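To put numbers on that density race, normalize both boxes to cores per rack unit:

```python
# Cores per rack unit (U) for the two 512-core boxes discussed above.
quanta_sq2 = 512 / 2        # Tilera-based, 2U chassis
seamicro_sm10000 = 512 / 10 # Atom-based, 10U chassis
print(quanta_sq2, seamicro_sm10000)  # 256.0 51.2
```

A 5x density edge for Quanta on paper, though as noted above, raw core count says nothing about what software you can actually run on Tilera cores versus off-the-shelf x86.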

    http://www.theregister.co.uk/2007/08/20/tilera_tile64_chip/ (The Register August 20, 2007)

    “Hot Chips The multi-core chip revolution advanced this week with the emergence of Tilera – a start-up using so-called mesh processor designs to go after the networking and multimedia markets.”

    http://www.theregister.co.uk/2007/09/28/tilera_new_ceo/ (The Register September 28, 2007)

    “Tahernia arrives at Tilera from FPGA shop Xilinx where he was general manager in charge of the Processing Solutoins (sic) Group.”

    http://www.linuxfordevices.com/c/a/News/64way-chip-gains-Linux-IDE-dev-cards-design-wins/
    (Linux for Devices April 30 2008)

    “Tilera introduced a Linux-based development kit for its scalable, 64-core Tile64 SoC (system-on-chip). The company also announced a dual 10GbE PCIExpress card based on the chip (pictured at left), revealed a networking customer win with Napatech, and demo’d the Tile64 running real-time 1080P HD video.”

    http://www.theregister.co.uk/2008/09/23/tilera_cpu_upgrade/ (The Register September 23 2008)

    “This week, Tilera is putting its second-generation chips into the field and is getting some traction among various IT suppliers, who want to put the Tile64 processors and their homegrown Linux environment to work.”

    “Tilera was founded in Santa Clara, California, in October 2004. The company’s research and development is done in its Westborough, Massachusetts lab, which makes sense given that the Tile64 processor that is based on an MIT project called Raw. The Raw project was funded by the U.S. National Science Foundation and the Defense Advanced Research Projects Agency, the research arm of the U.S. Department of Defense, back in 1996, and it delivered a 16-core processor connected by a mesh of on-core switches in 2002.”

    http://www.theregister.co.uk/2009/10/26/tilera_third_gen_mesh_chips/ (The Register October 26 2009)

    “Upstart massively multicore chip designer Tilera has divulged the details on its upcoming third generation of Tile processors, which will sport from 16 to 100 cores on a single die.”

    http://www.goodgearguide.com.au/article/323692/tilera_targets_intel_amd_100-core_processor/#comments
    (Good Gear Guide October 26 2009)

    “Look at the markets Tilera is aiming these chips at. These applications have lots of parallelism, require very high throughput, and need a low power footprint. The benefits of a system using a custom processor are large enough that paying someone to write software for the job is more than worth it.”

    http://www.theregister.co.uk/2009/11/02/tilera_quanta_servers/ (The Register November 2 2009)

    “While Doud was not at liberty to reveal the details, he did tell El Reg that Tilera had inked a deal with Quanta that will see the Taiwanese original design manufacturer make servers based on the future Tile-Gx series of chips, which will span from 16 to 100 RISC cores and which will begin to ship at the end of 2010.”

    http://www.theregister.co.uk/2010/03/09/tilera_vc_funding/ (The Register March 9 2010)

    “The current processors have made some design wins among networking, wireless infrastructure, and communications equipment providers, but the Tile-Gx series is going to give gear makers a slew of different options.”

  • Big Web Operations Turn to Tiny Chips – NYTimes.com

    Stephen O’Grady, a founder at the technology analyst company RedMonk, said the technology industry often has swung back and forth between more standard computing systems and specialized gear.

    via Big Web Operations Turn to Tiny Chips – NYTimes.com.

A little tip of the hat to Andrew Feldman, CEO of SeaMicro, the startup that announced its first product last week. The giant 512-CPU computer is covered in this NYTimes article to spotlight the ‘exotic’ technologies, both hardware and software, that some companies use to deploy huge web apps. It’s part NoSQL, part low-power massive parallelism.