Category: technology

General technology, not anything in particular

  • The Ask.com Blog: Bloglines Update

    Steve Gillmor (image via CrunchBase)

    As Steve Gillmor pointed out in TechCrunch last year, being locked in an RSS reader makes less and less sense to people as Twitter and Facebook dominate real-time information flow. Today RSS is the enabling technology – the infrastructure, the delivery system. RSS is a means to an end, not a consumer experience in and of itself. As a result, RSS aggregator usage has slowed significantly, and Bloglines isn’t the only service to feel the impact. The writing is on the wall.

    via The Ask.com Blog: Bloglines Update.

    I don’t know if I agree with the conclusion that RSS readers are a form of lock-in. I consider Facebook participation a form of lock-in, as all my quips, photos and posts in that social networking cul-de-sac will never be exported back out again. There’s no way to do it, never ever. With an RSS reader, at least my blogroll can easily be exported and imported again as OPML-formatted plain text. How cool is that in the era of proprietary binary formats (mp4, pdf, doc)? No, I would say RSS is innately good in and of itself. Enabling technologies are like that, and while RSS readers are not the only way to consume or create feeds, I haven’t found one that couldn’t import my blogroll. Try doing that with Twitter or Facebook (click the “don’t like” button).
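    As a small aside on just how open that format is: OPML is plain XML, so a blogroll export can be read back with nothing but a scripting language's standard library. Here is a minimal sketch in Python; the feed titles and URLs are hypothetical placeholders, not my actual blogroll.

    ```python
    # Minimal sketch: reading an exported blogroll (OPML is just XML).
    # The feed entries below are hypothetical placeholders.
    import xml.etree.ElementTree as ET

    OPML_SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
    <opml version="1.0">
      <head><title>My blogroll</title></head>
      <body>
        <outline text="Example Tech Blog" type="rss"
                 xmlUrl="http://example.com/feed" htmlUrl="http://example.com"/>
        <outline text="Another Feed" type="rss"
                 xmlUrl="http://example.org/rss" htmlUrl="http://example.org"/>
      </body>
    </opml>"""

    def list_feeds(opml_text):
        """Return (title, feed_url) pairs from an OPML blogroll."""
        root = ET.fromstring(opml_text)
        feeds = []
        for outline in root.iter("outline"):
            url = outline.get("xmlUrl")
            if url:  # only outlines that actually point at a feed
                feeds.append((outline.get("text", ""), url))
        return feeds

    if __name__ == "__main__":
        for title, url in list_feeds(OPML_SAMPLE):
            print(title, "->", url)
    ```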

  • Blog U.: Augmented Reality and the Layar Reality Browser

    I remember when I first saw the Verizon Wireless commercial featuring the Layar Reality Browser. It looked like something out of a science fiction movie. When my student web coordinator came in to the office with her iPhone, I asked her if she had ever heard of “Layar.” She had not heard of it so we downloaded it from the App Store. I was amazed at how the app used the phone’s camera, GPS and Internet access to create a virtual layer of information over the image being displayed by the phone. It was my first experience with an augmented reality application.

    via Blog U.: Augmented Reality and the Layar Reality Browser – Student Affairs and Technology – Inside Higher Ed.

    It’s nice to know Layar is getting some wider exposure. When I first wrote about it last year, the smartphone market was still somewhat small, and Layar was targeting phones that already had GPS built in, something the Apple iPhone wasn’t quite ready to expose in its development tools. Now the iPhone and the Droid are willing participants in this burgeoning era of Augmented Reality.

    The video in the article was shot on a Droid and does a WAY better job of showing off Layar than any of the fanboy websites for the application. Hopefully real-world performance is as good as it appears in the video. And I’m pretty sure the software company behind Layar has been updating it continuously since it first appeared on the iPhone a year ago. Given the recent release of the iPhone 4 and its performance enhancements, I have a feeling Layar would be a cool, cool app to try out and explore.
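    Under the hood, a geo-AR browser of this kind boils down to fairly simple geometry: take the phone's GPS fix and compass heading, work out the distance and bearing to a point of interest fetched over the network, and decide whether it falls inside the camera's field of view. Here is a rough sketch of that calculation; the coordinates, heading and field of view are made-up values, and this is my own illustration, not Layar's actual code.

    ```python
    # Rough sketch of the geometry behind a geo-AR overlay (not Layar's code).
    # Given the phone's GPS fix and compass heading, work out where a point of
    # interest sits relative to the camera view. Coordinates below are made up.
    import math

    EARTH_RADIUS_M = 6_371_000

    def distance_and_bearing(lat1, lon1, lat2, lon2):
        """Great-circle distance (m) and initial bearing (degrees) from point 1 to 2."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
        dist = 2 * EARTH_RADIUS_M * math.atan2(math.sqrt(a), math.sqrt(1 - a))
        y = math.sin(dlon) * math.cos(p2)
        x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
        bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
        return dist, bearing

    # Is the point of interest inside the camera's horizontal field of view?
    phone = (42.3601, -71.0589)        # hypothetical phone position
    poi = (42.3611, -71.0570)          # hypothetical point of interest
    heading, fov = 50.0, 60.0          # compass heading and camera FOV, degrees
    dist, bearing = distance_and_bearing(*phone, *poi)
    offset = (bearing - heading + 180) % 360 - 180   # signed angle from screen center
    print(f"{dist:.0f} m away, {offset:+.1f} degrees off center",
          "-> draw marker" if abs(offset) <= fov / 2 else "-> off screen")
    ```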

  • Micron intros SSD speed king • The Register

    The RealSSD P300 comes in a 2.5-inch form factor and in 50GB, 100GB and 200GB capacity points, and is targeted at servers, high-end workstations and storage arrays. The product is being sampled with customers now and mass production should start in October.

    via Micron intros SSD speed king • The Register.

    Crucial RealSSD C300 SSD drive
    The C300 as it appears on Anandtech.com

    For the first time since SSDs hit the market, I am looking at the drive performance of each new product being offered. What I’ve begun to realize is that the speeds of each product are starting to fall into a familiar range. For instance, I can safely say that a drive in the 120GB range with multi-level cell (MLC) flash will give you a minimum of roughly 200MB/sec sequential throughput (reads are usually a bit faster than writes on these drives). This is a rough estimate of course, but it’s becoming more and more common. Smaller drives have slower speeds and suffer on benchmarks due in part to the smaller number of parallel data channels; bigger capacity drives have more channels and therefore can move more data per second. A good capacity for a boot/data drive is going to be in the 120-128GB category. And while it won’t be the best for archiving all your photos and videos, that’s fine: use a big old 2-3TB SATA drive for those heavy lifting duties. I think that will be a more common architecture in the future and not a premium choice as it is now: SSD for boot/data and a typical HDD for big archive and backup.
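    A back-of-the-envelope illustration of why capacity and speed track together: if each NAND channel contributes a roughly fixed slice of bandwidth, throughput scales with the channel count, and channel count scales with capacity. The per-channel figure below is an assumption for the sake of the arithmetic, not a quoted spec.

    ```python
    # Back-of-the-envelope: why bigger SSDs tend to be faster. Assume each NAND
    # channel contributes roughly 40 MB/s of sustained bandwidth (an illustrative
    # figure, not a real spec) and that capacity scales with the channel count.
    PER_CHANNEL_MB_S = 40

    for capacity_gb, channels in [(40, 4), (80, 5), (120, 8), (256, 10)]:
        print(f"{capacity_gb:>3} GB drive, {channels:>2} channels "
              f"-> ~{channels * PER_CHANNEL_MB_S} MB/s sequential throughput")
    ```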

    On the enterprise front things are a little different: speed and throughput are important, but the drive interface is as well. With SATA being the most widely used interface for consumer hardware, big drive arrays for the data center are wedded to a form of Serial Attached SCSI (SAS) or Fibre Channel (FC). So manufacturers and designers like SanDisk need to engineer niche products for the high-margin markets that require SAS or FC versions of the SSD. As was the case with the transition from Parallel ATA to Serial ATA, the first products are going to be SATA drives with on-board adapter electronics to make them compatible with the other interfaces. Likely this will be the standard procedure for quite a while, since a ‘native’ Fibre Channel or SAS interface will require extra engineering and cost increases to accommodate the enterprise interfaces. Speeds, however, will likely always be tuned for the higher-volume consumer market, and the SATA version of each drive will likely be the highest-throughput version in each drive category. I’m thinking the data center folks should adapt and go with consumer-level SATA SSDs now that the drives are no longer mechanically spinning disks. Similarly, as more and more manufacturers do their own error correction and wear leveling on the memory chips in SSDs, reliability will equal or exceed that of FC or SAS spinning disks.

    And speaking of spinning disks, the highest sustained throughput I’ve ever seen quoted for a SATA spinning disk was about 150MB/sec; hands down, that was theoretically the best it could ever do. More likely you would only see 80MB/sec (which takes me back to the old days of Fast/Wide SCSI and the Barracuda). Given the limits of moving media like spinning disks and read/write heads tracking across their surface, flash throughput is just stunning. We are now in an era in which flash SSDs, while slower than RAM, are awfully fast, and fast enough to notice when booting a computer. I think the only real speed enhancement beyond the drive interface is to put flash directly on the motherboard and build the drive controller into the CPU to handle read/write requests. I doubt it would be cost effective for the amount of improvement, but it would eliminate some of the motherboard electronics and smooth the flow a bit. Something to look for in netbook or slate-style computers in the future.
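    To put those throughput numbers side by side, here is some quick arithmetic on how long it takes to pull a few gigabytes of boot-time data off the drive at each speed. The 2GB working set and the SSD figure are illustrative guesses, not measurements.

    ```python
    # Quick arithmetic: time to read 2 GB of boot/startup data at various sustained
    # throughputs. The 2 GB working set and the SSD figure are illustrative guesses.
    BOOT_DATA_MB = 2 * 1024

    for label, mb_per_s in [("typical spinning SATA disk", 80),
                            ("best-case spinning SATA disk", 150),
                            ("mainstream SATA SSD", 250)]:
        print(f"{label:<30} {BOOT_DATA_MB / mb_per_s:6.1f} s")
    ```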

  • Drive suppliers hit capacity increase difficulties • The Register

    Hard disk drive suppliers are looking to add platters to increase capacity because of the expensive and difficult transition to next-generation recording technology.

    via Drive suppliers hit capacity increase difficulties • The Register.

    This is a good survey of upcoming HDD platter technologies. HAMR (Heat Assisted Magnetic Recording) and BPM (Bit Patterned Media) are the next generation to follow as the current Perpendicular Magnetic Recording (PMR) slowly hits the top end of its ability to squash together the 1’s and 0’s on a spinning hard drive platter. HAMR is reminiscent of the old magneto-optical drive technology from the halls of Steve Jobs’ old NeXT Computer company: it uses a laser to heat the surface of the platter just before the read/write head records data to it. That momentary heating lowers the coercivity of the recording surface, helping align the magnetism of the bits being written so that the tracks of the drive and the bits recorded inside them can be more tightly spaced. In the world of HAMR, heat + magnetism = bigger hard drives on the same old 3.5″ and 2.5″ platters we have now. With BPM, the whole platter is manufactured to hold a set number of bits and tracks in advance: each bit is created directly on the platter as a ‘well’ with a ring of insulating material surrounding it, and the wells are sufficiently small and dense to allow much tighter spacing than PMR. But as is often the case, the new technologies aren’t ready for manufacturing; only a few test samples are out in limited or custom-made engineering prototypes to test the waters.
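    For a rough sense of the payoff, take the roughly 640GBytes per platter that today’s densest PMR drives manage (a figure that comes up again below) and apply some purely illustrative density multipliers for HAMR and BPM. The multipliers are my own guesses for the sake of the arithmetic, not vendor claims.

    ```python
    # Illustrative only: how platter capacity scales with areal density.
    # 640 GB/platter is roughly today's top PMR figure; the multipliers for
    # HAMR and BPM are assumptions made purely for the arithmetic.
    PMR_GB_PER_PLATTER = 640

    for tech, density_multiplier in [("PMR (today)", 1.0),
                                     ("HAMR (~2x?)", 2.0),
                                     ("BPM (~4x?)", 4.0)]:
        per_platter = PMR_GB_PER_PLATTER * density_multiplier
        print(f"{tech:<12} ~{per_platter:.0f} GB/platter, "
              f"~{per_platter * 5 / 1000:.1f} TB in a 5-platter drive")
    ```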

    Given the slowdown in silicon CMOS chip speeds from the likes of Intel and AMD, along with the wall facing PMR, it would appear the frontier days of desktop computing are coming to a close. Gone are the days of the megahertz wars, and now the gigabyte wars are waged in the test labs of review sites across the Interwebs. The torrid pace of change in hardware we all experienced from the release of Windows 95 to the release of Windows 7 has slowed to a radical incrementalism. Intel releases so many chips with ‘slight’ variations in clock speed and cache that one cannot keep up with them all. Hard drive manufacturers try to bump their disks by about 0.5TBytes every 6 months, but now that will stop. Flash-based SSDs will be the biggest change for most of us and will help break through the inherent speed barriers imposed by SATA and spinning disk technologies. I hope a hybrid approach is used, mixing SSDs and HDDs for speed and size in desktop computers: fast things that need to be fast can use the SSD, and slow things that are huge in size or quantity can go to the HDD. As for next-gen disk-based technologies, I’m sure there will be a change to the next higher-density technology, but it will no doubt be a long time in coming.

  • Seagate unveils first-ever 3TB external drive | Electronista

    Seagate is selling the drive today for $250. Cables to add new interfaces or support vary from $20 to $50. Internal drives are expected in the future but may wait until more systems can properly boot; using a larger than 2.1TB disk as a boot drive requires EFI firmware that most Windows PCs don’t have.

    via Seagate unveils first-ever 3TB external drive | Electronista.

    No doubt the internal version, known as Constellation, is still to be released. And take note: EFI, or Extensible Firmware Interface, is the one thing differentiating Mac desktops from the large mass of Wintel desktops now on the market. Dell, HP, IBM, Acer, Asus, etc. are all still wedded to the old Intel BIOS-based motherboard architecture. Apple alone adopted EFI and has used it consistently since it first adopted Intel chips for its computer products. Now the necessity of EFI is becoming embarrassingly clear, especially for the gamer fanboys out there who must have the largest hard drives on the market. Considering the size of these drives, it’s amazing to think you could pack 4 of them into a Mac Pro desktop and get 12TB of storage, all internally connected.
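    The 2.1TB boot ceiling mentioned in the excerpt isn’t arbitrary; it falls straight out of the 32-bit sector addresses in the legacy BIOS/MBR partitioning scheme that EFI’s GPT replaces. The arithmetic, as a quick sketch:

    ```python
    # Why legacy BIOS/MBR tops out around 2.2 TB: MBR stores sector addresses as
    # 32-bit values, and drives still report 512-byte logical sectors.
    SECTOR_BYTES = 512
    MAX_SECTORS = 2 ** 32

    limit_bytes = MAX_SECTORS * SECTOR_BYTES
    print(f"MBR limit: {limit_bytes / 1e12:.2f} TB "   # decimal terabytes
          f"({limit_bytes / 2**40:.2f} TiB)")          # binary tebibytes
    # -> about 2.20 TB (2.00 TiB); anything bigger needs GPT, hence EFI to boot.
    ```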

    Regarding the internals of the drive itself: some speculation in this article suggested that this hard drive uses 4 platters in total to reach 3TB of storage. Computing how many GBytes per platter that would require puts the density at 750GBytes/platter, which would mark a significant increase over the more common 640GBytes/platter in currently shipping drives. In fact, in a follow-up to the original announcement yesterday, Seagate said it is using a total of 5 platters in this external hard drive. That computes to 600GBytes/platter, which is more in line with currently shipping single-platter drives and even slightly less dense than the 640GByte-per-platter drives at the top of the storage density scale.
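    The per-platter figures above are just simple division, as the quick sketch below shows.

    ```python
    # The per-platter math from the paragraph above: total capacity divided by
    # platter count (decimal gigabytes, as drive makers quote them).
    TOTAL_GB = 3000  # 3 TB drive

    for platters in (4, 5):
        print(f"{platters} platters -> {TOTAL_GB / platters:.0f} GB per platter")
    # 4 platters -> 750 GB/platter (would be a density jump);
    # 5 platters -> 600 GB/platter (in line with current ~640 GB/platter parts).
    ```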

  • OCZ’s RevoDrive Preview: An Affordable PCIe SSD – AnandTech

    We have seen a turnaround however. At last year’s IDF Intel showed off a proof of concept PCIe SSD that could push 1 million IOPS. And with the consumer SSD market dominated by a few companies, the smaller players turned to building their own PCIe SSDs to go after the higher margin enterprise market. Enterprise customers had the budget and the desire to push even more bandwidth. Throw a handful of Indilinx controllers on a PCB, give it a good warranty and you had something you could sell to customers for over a thousand dollars.

    via OCZ’s RevoDrive Preview: An Affordable PCIe SSD – AnandTech :: Your Source for Hardware Analysis and News.

    Anandtech reviews the OCZ RevoDrive, a PCIe SSD for the consumer market. It’s not as fast as a Fusion-io, but then it isn’t nearly as expensive either. How fast is it, say, compared to a typical SATA SSD? Based on the benchmarks in this review, the RevoDrive is a little faster than most SATA SSDs, but it also costs about $20 more than a really good 120GB SSD. Be warned that this is the suggested retail price and no shipping product exists yet; prices may vary once this PCIe card finally hits the market. But I agree 100% with this quote from the end of the review:

    “If OCZ is able to deliver a single 120GB RevoDrive at $369.99 this is going to be a very tempting value.”

    Indeed, much more reasonable than a low end Fusion-io priced closer to $700+, but not as fast either. You picks your products, you pays yer money.
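    One way to weigh that value claim is on price per gigabyte. The sketch below uses the prices quoted above; the Fusion-io capacity is my own assumption, since only its rough price is mentioned here.

    ```python
    # Rough value comparison in dollars per gigabyte, using the prices quoted in
    # the commentary above. The Fusion-io capacity is my assumption (only its
    # rough price is given), so treat that row as a guess.
    drives = [
        ("OCZ RevoDrive (PCIe)", 369.99, 120),
        ("Good SATA SSD",        350.00, 120),   # "about $20 less" than the RevoDrive
        ("Low-end Fusion-io",    700.00, 80),    # capacity assumed, price "closer to $700+"
    ]

    for name, price, gb in drives:
        print(f"{name:<22} ${price / gb:5.2f} per GB")
    ```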

  • Tilera, SeaMicro: The era of ultra high density computing

    The Register recently ran an article following up on a press release from Tilera. The news this week is that Tilera is on to the next big thing: Quanta will be shipping a 2U rack-mounted computer with 512 processing cores inside. Why is that significant? Well, 512 is the magic number quoted in last week’s announcement from upstart server maker SeaMicro, whose SM10000 boasts 512 Intel cores inside a 10U box. Which makes me wonder: who or what is all this good for?

    Based solely on the press releases and articles written to date about Tilera, its targeted customers aren’t quite as general as SeaMicro’s. Even though each core in a Tilera CPU can run its own OS and share data, it is up to the device manufacturers licensing the Tilera chip to do the heavy lifting of developing the software and applications that make all that raw iron do useful work. The CPUs in the SeaMicro hardware, however, are full Intel x86-capable Atom CPUs tied together with a lot of management hardware and software provided by SeaMicro. Customers in this case are most likely going to load software applications they already run on existing Intel hardware; re-coding or recompiling is unnecessary, as SeaMicro’s value-add is the management interface for all that raw iron. Quanta is packaging up the Tilera chips in a way that will make them more palatable to a potential customer who might also be considering SeaMicro’s product.

    It all depends on what apps you want to run, what performance you expect, and how dense you need all your cores to be when they are mounted in the rack. Numerically speaking, in the race for ultimate density the Quanta SQ2 wins right now, with 512 general-purpose cores in a 2U rack mount versus SeaMicro’s 512 in a 10U rack mount. However, that in no way reflects the differences in the OSes, the types of applications, and the performance you might see when using either piece of hardware.
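    Strictly on packaging density, the comparison is easy arithmetic, as sketched below.

    ```python
    # Raw packaging density from the two announcements: cores per rack unit,
    # and how many cores would fit if a standard 42U rack were filled.
    systems = [
        ("Quanta SQ2 (Tilera)", 512, 2),    # 512 cores in a 2U box
        ("SeaMicro SM10000",    512, 10),   # 512 Atom cores in a 10U box
    ]

    for name, cores, rack_units in systems:
        print(f"{name:<20} {cores / rack_units:5.1f} cores per U, "
              f"{cores * (42 // rack_units)} cores in a full 42U rack")
    ```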

    http://www.theregister.co.uk/2007/08/20/tilera_tile64_chip/ (The Register August 20, 2007)

    “Hot Chips The multi-core chip revolution advanced this week with the emergence of Tilera – a start-up using so-called mesh processor designs to go after the networking and multimedia markets.”

    http://www.theregister.co.uk/2007/09/28/tilera_new_ceo/ (The Register September 28, 2007)

    “Tahernia arrives at Tilera from FPGA shop Xilinx where he was general manager in charge of the Processing Solutoins (sic) Group.”

    http://www.linuxfordevices.com/c/a/News/64way-chip-gains-Linux-IDE-dev-cards-design-wins/
    (Linux for Devices April 30 2008)

    “Tilera introduced a Linux-based development kit for its scalable, 64-core Tile64 SoC (system-on-chip). The company also announced a dual 10GbE PCIExpress card based on the chip (pictured at left), revealed a networking customer win with Napatech, and demo’d the Tile64 running real-time 1080P HD video.”

    http://www.theregister.co.uk/2008/09/23/tilera_cpu_upgrade/ (The Register September 23 2008)

    “This week, Tilera is putting its second-generation chips into the field and is getting some traction among various IT suppliers, who want to put the Tile64 processors and their homegrown Linux environment to work.”

    “Tilera was founded in Santa Clara, California, in October 2004. The company’s research and development is done in its Westborough, Massachusetts lab, which makes sense given that the Tile64 processor that is based on an MIT project called Raw. The Raw project was funded by the U.S. National Science Foundation and the Defense Advanced Research Projects Agency, the research arm of the U.S. Department of Defense, back in 1996, and it delivered a 16-core processor connected by a mesh of on-core switches in 2002.”

    http://www.theregister.co.uk/2009/10/26/tilera_third_gen_mesh_chips/ (The Register October 26 2009)

    “Upstart massively multicore chip designer Tilera has divulged the details on its upcoming third generation of Tile processors, which will sport from 16 to 100 cores on a single die.”

    http://www.goodgearguide.com.au/article/323692/tilera_targets_intel_amd_100-core_processor/#comments
    (Good Gear Guide October 26 2009)

    “Look at the markets Tilera is aiming these chips at. These applications have lots of parallelism, require very high throughput, and need a low power footprint. The benefits of a system using a custom processor are large enough that paying someone to write software for the job is more than worth it.”

    http://www.theregister.co.uk/2009/11/02/tilera_quanta_servers/ (The Register November 2 2009)

    “While Doud was not at liberty to reveal the details, he did tell El Reg that Tilera had inked a deal with Quanta that will see the Taiwanese original design manufacturer make servers based on the future Tile-Gx series of chips, which will span from 16 to 100 RISC cores and which will begin to ship at the end of 2010.”

    http://www.theregister.co.uk/2010/03/09/tilera_vc_funding/ (The Register March 9 2010)

    “The current processors have made some design wins among networking, wireless infrastructure, and communications equipment providers, but the Tile-Gx series is going to give gear makers a slew of different options.”

  • Big Web Operations Turn to Tiny Chips – NYTimes.com

    Stephen O’Grady, a founder at the technology analyst company RedMonk, said the technology industry often has swung back and forth between more standard computing systems and specialized gear.

    via Big Web Operations Turn to Tiny Chips – NYTimes.com.

    A little tip of the hat to Andrew Feldman, CEO of SeaMicro, the startup that announced its first product last week. The giant 512-CPU computer is covered in this NYTimes article to spotlight the ‘exotic’ technologies, both hardware and software, that some companies use to deploy huge web apps. It’s part NoSQL, part low-power massive parallelism.

  • SeaMicro Announces SM10000 Server with 512 Atom CPUs

    From where I stand, the SM10000 looks like the type of product that if you could benefit from having it, you’ve been waiting for something like it. In other words, you will have been asking for something like the SM10000 for quite a while already. SeaMicro is simply granting your wish.

    via SeaMicro Announces SM10000 Server with 512 Atom CPUs and Low Power Consumption – AnandTech :: Your Source for Hardware Analysis and News.

    This announcement has been making the rounds this Monday, June 14th; it has hit Wired.com, Anandtech, Slashdot, everywhere. It is a press release full court press. But it is an interesting product on paper for anyone doing analysis of large datasets, using large numbers of CPUs for regressions, or running large-scale simulations. And at its core it is virtual machines with virtual peripherals (memory, disk, networking). I don’t know how you benchmark something like this, but it is impressive in its low power consumption and size: it takes up only 10U of a 42U rack and fits 512 CPUs in that 10U space.

    Imagine 324 of these plugged in and racked up

    This takes me back to the days of RLX Technologies, when blade servers were so new nobody knew what they were good for. The top-of-the-line RLX setup put 324 CPUs in a 42U rack, and each blade had a Transmeta Crusoe processor, which was designed to run at a lower clock speed and much more efficiently from a thermal standpoint. When managed by the RLX chassis hardware and software and paired with an F5 Networks BIG-IP load balancer, the whole thing was an elegant design. However, the advantage of using Transmeta’s CPU was lost on a lot of people, including technology journalists who bashed it as too low-performance for most IT shops and data centers. Nobody had considered the total cost of ownership, including the cooling and electricity. In those days, clock speed was the only measure of a server’s usefulness.

    Enter Google into the data center market, and the whole scale changes. Google didn’t care about clock speed nearly as much as lowering the total overall costs for its huge data centers. Even the technical journalists began to understand the cost savings of lowering the clock speed a few hundred megahertz and packing servers more densely into a fixed-size data center. Movements in High Performance Computing also led to large-scale installations of commodity servers bound together into one massively parallel supercomputer. More space was needed for physical machines racked up in the data centers, and everyone could see the only ways to build out were to build more data centers, build bigger data centers, or pack more servers into the existing footprint of current data centers. Manufacturers like Compaq got into the blade server market, along with IBM and Hewlett-Packard. Everyone engineered their own proprietary interfaces and architectures, but all of them focused on the top-of-the-line server CPUs from Intel. As a result, the heat dissipation was enormous and the densities of these blade centers were pretty low (possibly 14 CPUs in a 4U rack mount).

    Blue Gene super computer has high density motherboards
    Look at all those CPUs on one motherboard!

    IBM began to experiment with lower-clocked PowerPC chips in a massively parallel supercomputer called Blue Gene. In my opinion this started to change people’s beliefs about what direction data center architectures could go. The density of the ‘drawers’ in the Blue Gene server cabinets is pretty high: a lot more CPUs, power supplies, storage and RAM in each unit than in a comparable base-level commodity server from Dell or HP (previously the most common building block for massively parallel supercomputers). Given these trends, it’s very promising to see what SeaMicro has done with its first product. I’m not saying this is a supercomputer in a 10U box, but there are plenty of workloads that would fit within the scope of this server’s capabilities. And what’s cooler is the virtual abstraction of all the hardware, from the RAM to the networking to the storage. It’s like the golden age of IBM machine partitioning and virtual machines, but on an Intel architecture. Depending on how quickly it can ramp up production and market its goods, SeaMicro might be a game changer or it might be a takeover target for the likes of HP or IBM.

  • Seagate, Toshiba to Make SSD + HDD Hybrid?

    Seagate, Toshiba to Make SSD + HDD Hybrid?.

    Some people may remember the poorly marketed and badly implemented Microsoft ReadyBoost technology hyped prior to the launch of Windows Vista. Microsoft’s intention was to speed up throughput on machines without sufficient RAM to cache large parts of the Windows OS and shared libraries. By using a small Flash memory module on

    Intel Turbo memory module for PCIe
    Intel Turbo Memory to be used as ReadyDrive storage cache

    the motherboard (Intel’s Turbo Memory) or a USB-connected Flash memory stick, one could create a Flash memory cache to offset the effect of having 512MB or less RAM installed. In early testing done by folks like Anandtech and Tom’s Hardware, system performance suffered terribly on computers with more than the 512MB of RAM Microsoft was targeting; by trying to use these techniques to offset a lack of RAM on machines that already had plenty, the computers actually ran slower under Vista. I had great hopes for ReadyBoost at the time; the flash-cache method of speeding up throughput on a desktop PC seemed to herald a new era of desktop PC performance. In the end it was all a myth created by the Microsoft marketing department.

    Some time has passed since Vista was released. RAM prices have slowly gone down, and even low-end machines now have more than adequate RAM installed to run Vista or Windows 7 (no more machines with 512MB of RAM), so working around those RAM limits is no longer necessary. However, total system-level I/O has seen some real gains through the use of somewhat expensive Flash-based SSDs (solid state disks). Really, this is what we have all been waiting for all along: flash memory modules like the ones Intel tried using for its ReadyDrive-capable Turbo Memory technology, but wired into a PCIe controller and optimized for fast I/O, faster than a real spinning hard disk. The advantage over ReadyBoost was the speed of the PCIe interface connected to the Flash memory chips. Enterprise data centers have begun using some Flash SSDs as caches, with some very high-end products using all Flash SSDs in their storage arrays. The entry-level price, though, can be daunting to say the least: 500GB SSDs are top-of-the-line, premium-priced products and not likely to sell in large quantities until prices come down.

    Seagate is now offering a product that has a hybrid Flash cache and spinning disk all tied into one SATA disk controller.

    Seagate hybrid hard drive
    Seagate Momentus XT

    The beauty of this design is that the OS doesn’t enter into the fray, so it’s OS-agnostic. Similarly, the OS doesn’t try to be a disk controller: Seagate manages all the details on its side of the SATA controller, and the OS just sees what it thinks is an ordinary hard disk to which it sends read/write commands. In theory this sounds like a step up from simple spinning disks and maybe a step below a full Flash-based SSD. So what is the performance of a hybrid drive like this?

    As it turns out, The Register did publish a follow-up with a quick benchmark (performed by Seagate) of the Seagate Momentus XT compared to middle- and top-of-the-line spinning hard drives. The Seagate hybrid drive performs almost as well as the Western Digital SSD included in the benchmark. Its flash memory caches the data that needs quick access, and it is able to refine what it stores over time based on what is accessed most often by the OS. Your boot times speed up and file read/write times speed up, all as a result of the internal controller on the hybrid drive. Availability, if you check Amazon’s website, is 1-2 months, which means you and I cannot yet purchase this item. But it’s encouraging, and I would like to see some more innovation in this product category. No doubt lots of optimizations and algorithms can be tried out to balance the Flash memory and the spinning hard disk; I say this because of the 32MByte RAM cache that’s also built into the Momentus XT. Deciding when data goes in and out, which cache it uses (RAM or Flash), and when it finally gets written to disk is one of those difficult Computer Science optimization problems, and there are likely as many answers as there are computer scientists to work on it. There will be lots of room to innovate if this product segment takes hold.
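    As a toy illustration of the kind of decision the drive’s controller has to make, here is a sketch of a simple frequency-based admission policy. This is my own illustration of the general idea, not Seagate’s actual caching algorithm.

    ```python
    # Toy sketch of a frequency-based flash admission policy, the flavor of
    # decision a hybrid drive controller makes. Illustrative only; this is not
    # Seagate's actual Momentus XT algorithm.
    from collections import Counter

    class HybridCache:
        def __init__(self, flash_blocks, promote_after=3):
            self.flash_capacity = flash_blocks      # how many LBA blocks fit in flash
            self.promote_after = promote_after      # reads before a block earns a slot
            self.read_counts = Counter()
            self.flash = set()

        def read(self, lba):
            self.read_counts[lba] += 1
            if lba in self.flash:
                return "flash hit"
            # Promote hot blocks; evict the coldest resident block if flash is full.
            if self.read_counts[lba] >= self.promote_after:
                if len(self.flash) >= self.flash_capacity:
                    coldest = min(self.flash, key=lambda b: self.read_counts[b])
                    self.flash.discard(coldest)
                self.flash.add(lba)
            return "disk read"

    cache = HybridCache(flash_blocks=2)
    for lba in [7, 7, 7, 9, 9, 9, 7, 3, 3, 3, 9]:
        print(lba, cache.read(lba))
    ```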