Blog

  • October 6, 2010 | BI Incorporated

    “We believe the issue is resolved as we have expanded the database threshold to more than 1 trillion records. In the meantime, we are working with Microsoft to develop a warning system on database thresholds so we can anticipate these issues in the future.”

    via October 6, 2010 | BI Incorporated.

    This is the key phrase regarding the recent event where BI stopped sending out alerts for the criminals it was tracking on behalf of police departments around the country. A company like this should do everything it can to design its tracking systems so that an eventuality like this doesn’t happen. How long before they bump up against the 1 trillion record limit? I ask you. Let’s go back to the original article as it was posted on BBC Online:

    Thousands of US sex offenders, prisoners on parole and other convicts were left unmonitored after an electronic tagging system shut down because of data overload.

    BI Incorporated, which runs the system, reached its data threshold – more than two billion records – on Tuesday.

    This left authorities across 49 states unaware of offenders’ movement for about 12 hours.

    BI increased its data storage capacity to avoid a repeat of the problem.

    Prisons and other corrections agencies were blocked from getting notifications on about 16,000 people, BI Incorporated spokesman Jock Waldo said on Wednesday.

    So the question I have is how 16,000 people result in 2 billion records in the database? Is that really all they are doing? How much old junk data are they keeping for legal purposes, or just because they can keep it for potential future use? And how is it that a company depends on Microsoft to bail them out of such a critical situation? This seems like a very amateurish mistake, one that could have been avoided by anyone with the title of Database Administrator who monitors the server on a regular basis. They should have known this thing was hitting an upper limit months ago and started rolling out a new database and moving records into it. This also shows the fundamental flaw in using SQL-based record keeping for so-called real-time data. Facebook gave up on it long ago, as did Google. Rows and tables with real-time updates don’t scale well. And if you cannot employ a Database Administrator to tell you when you are hitting a critical limit, but are dumping it off on the vendor, well, good luck with that one, guys.
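
    For what it’s worth, the arithmetic is less mysterious than it sounds: 2 billion records spread across 16,000 people works out to roughly 125,000 records each, and if each record were, say, one GPS ping every 30 seconds (my assumption, not anything BI has disclosed), that’s only about six weeks of location history per offender. As for the monitoring, here is a minimal sketch, in Python against a toy SQLite database, of the kind of scheduled capacity check any DBA could have run; the table name and warning threshold are made up for illustration:

```python
import sqlite3

CAPACITY = 2_000_000_000   # the vendor's stated record threshold
WARN_AT = 0.80             # start shouting at 80% full

def capacity_report(conn, table):
    # COUNT(*) is the crude version; a real DBA would track growth rate too.
    (count,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    used = count / CAPACITY
    status = "WARNING" if used >= WARN_AT else "ok"
    return f"{status}: {table} holds {count:,} records ({used:.4%} of capacity)"

# Demo against an in-memory database standing in for the real thing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alerts (offender_id INTEGER, ts TEXT)")
conn.executemany("INSERT INTO alerts VALUES (?, ?)",
                 [(i, "2010-10-05T12:00:00") for i in range(1000)])
print(capacity_report(conn, "alerts"))
```

    Run something like that from a daily cron job and you have months of warning before any threshold is reached.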


  • Microsoft GPU video encoding patent could hurt creatives | Electronista

    Microsoft has now been granted the patent despite it having been first filed in September 2004, but it may face challenges to the claims from companies that began using GPU video encoding independently after the patent application was filed but before it was published.

    via Microsoft GPU video encoding patent could hurt creatives | Electronista.

    Given that it took nVidia quite a while before they got any developers to work on shipping products that took advantage of their programmable GPUs (the CUDA architecture), it’s a surprise to me that Microsoft even filed a patent on this. Previously I have re-posted some press releases surrounding the products known as Avivo (from ATI/AMD) and Badaboom, which were designed to speed up this very thing. You rip a DVD and you want to save it to a smaller file size, or one that’s compatible with a portable video player. But it takes forever on your computer, so what’s a person to do? Well, thanks to nVidia and product X, you just add a little software and speed up that transcoding to .mp4 format. It’s like discovering your car can do something you didn’t know was even possible, like turning into a Corvette on straight, flat roadways. Now be advised not all roads are straight or flat, but when they are: Boom! You can go as fast as you want. That’s what accelerated video encoding is like. It’s specialized, but when you use it, it really works and it really speeds things up. I think part of why Microsoft wants to enforce this is the hope of collecting licensing fees, but part of it is also maintaining its bullying prowess on the desktop computer. They own the OS, right? So why not remind everyone that were it not for their generosity and research labs we would all be using pocket calculators to do our taxes. This is a premier example of how patents stifle innovation. And I would love to see this patent never enforced, or struck down.

  • Personal data stores and pub/sub networks – O’Reilly Radar

    Now social streams have largely eclipsed RSS readers, and the feed reading service I’ve used for years — Bloglines — will soon go dark. Dave Winer thinks the RSS ecosystem could be rebooted, and argues for centralized subscription handling on the next turn of the crank. Of course definitions tend to blur when we talk about centralized versus decentralized services.

    via Personal data stores and pub/sub networks – O’Reilly Radar.

    Here now, more Uncertainty and Doubt surrounding RSS readers as the future of consuming web pages. I wouldn’t expect this from the one guy I most respect when it comes to future developments in computer technology. I have followed Jon Udell’s shining example each step of the way from Radio Userland to Bloglines. And I breathed deeply the religion of loosely coupled services tied together with ‘services’ like pub/sub or RSS feeds. The flexibility and robustness of not being beholden to a single vendor or purveyor of a free service was obvious to me. However, I have fallen prey to the siren song of social media, starting with Digg, Flickr, Google Reader and LinkedIn, each one claiming some amount of market share, but none of them anticipating the wild popularity of Friendster, MySpace and now Facebook. I actively participate in Facebook to help keep everyone energized and to let them know someone is reading the stuff they post. I want this service to succeed. And by all accounts it’s succeeding beyond its wildest dreams, through advertising revenue.

    But who wants to be marketed to? Doc Searls argued rightly that our personal information is ours, our ‘attention’ is ours. He wants something like a Vendor Relationship Management service where we keep our ‘profile’ information and dole out the absolute minimum necessary to participate online or do commerce. And Jon in this article holds up the elmcity project as a sterling counterexample to the many stovepipe social networks in which we participate. Jon’s work with elmcity is an ongoing attempt to make events ‘subscribe-enabled’ the way blogs and online news websites already are. Each online calendar program has a web presence, but usually no comparable publish/subscribe service like the RSS or iCalendar formats associated with it. To really know what is going on requires a network of event curators who manage the data feeds, which then get plugged into an information hub that aggregates all the events in a geographical region (see the sketch below). It’s all loosely coupled and more robust than trying to get everyone to adopt a single calendar.
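
    As a toy illustration of what such a hub does, here is a short Python sketch that merges minimal iCalendar feeds from two hypothetical curators into one time-sorted event list. The feed contents and curator names are invented; a real hub like elmcity fetches actual .ics URLs over HTTP.

```python
# Two curators' feeds, inlined as strings; real ones would be fetched over HTTP.
FEEDS = {
    "library": """BEGIN:VCALENDAR
BEGIN:VEVENT
DTSTART:20101012T190000
SUMMARY:Author reading
END:VEVENT
END:VCALENDAR""",
    "city-hall": """BEGIN:VCALENDAR
BEGIN:VEVENT
DTSTART:20101011T180000
SUMMARY:Council meeting
END:VEVENT
END:VCALENDAR""",
}

def events(ics):
    """Yield (dtstart, summary) pairs from a minimal iCalendar text."""
    start = summary = None
    for line in ics.splitlines():
        if line.startswith("DTSTART:"):
            start = line.split(":", 1)[1]
        elif line.startswith("SUMMARY:"):
            summary = line.split(":", 1)[1]
        elif line == "END:VEVENT":
            yield start, summary
            start = summary = None

# The "hub": one merged, time-sorted view across every curator's feed.
hub = sorted((when, what, curator)
             for curator, ics in FEEDS.items()
             for when, what in events(ics))
for when, what, curator in hub:
    print(when, what, f"(via {curator})")
```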

    Which brings us back to the online personal data store: why can’t we have a ‘hub’ that aggregates these ‘services’ we participate in but contains the single source of profile information that we manage and dole out? That way I’m not hostage to End User Licenses and the attendant risks of letting someone else be my profile steward. Instead I can manage it and let the services subscribe to my hub, and all my ‘data stores’ can exist across all the social networks that exist or may exist. No lock-in. Think about this: I cannot export all the little write-ups and comments I made on headlines I posted in Bloglines. I could export my blogroll, though, using OPML (thanks, Dave Winer!). Similarly I won’t ever be able to export any of my numerous status updates in Facebook. In fact, as near as I can tell, there is no Export button anywhere for anything. It’s like AOL, an internet cul-de-sac that we all willingly participate in, never considering the consequences.
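
    That asymmetry is easy to demonstrate. OPML is just an XML outline, which is why a blogroll survives a service shutdown: anything can parse it. Here’s a minimal sketch using only the Python standard library; the two feeds are stand-ins for a real Bloglines export, and the titles and URLs are illustrative:

```python
import xml.etree.ElementTree as ET

# A two-feed blogroll, inlined; a real one would be the OPML file
# exported from Bloglines.
OPML = """<opml version="1.0">
  <head><title>My blogroll</title></head>
  <body>
    <outline text="O'Reilly Radar" type="rss"
             xmlUrl="http://radar.oreilly.com/atom.xml"/>
    <outline text="Jon Udell" type="rss"
             xmlUrl="http://blog.jonudell.net/feed/"/>
  </body>
</opml>"""

root = ET.fromstring(OPML)
for node in root.iter("outline"):
    if node.get("type") == "rss":
        print(node.get("text"), "->", node.get("xmlUrl"))
```

    A dozen lines to walk the whole subscription list; that’s the difference between an enabling format and a walled garden.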

  • Intel Debuts New Atom System-on-Chip Processor

    [Image: an Altera Flex FPGA with 20,000 cells, via Wikipedia]

    At an IDF keynote, Intel launched “Tunnel Creek,” a new Atom E600 SoC processor. One particular processor detailed is codenamed “Stellarton,” which consists of the Atom E600 processor paired with an Altera FPGA on a multi-chip package that provides additional flexibility for customers who want to incorporate proprietary I/O or acceleration.

    via Intel Debuts New Atom System-on-Chip Processor.

    Intel has announced a future product that pairs an Intel Atom processor with an Altera FPGA. Now this is interesting: I just mentioned FPGA (field programmable gate array) chips, and out of the blue Intel has summoned the same kind of chip and married it to a little Atom core processor. They say it could be used as an accelerator of some sort. I’m wondering what specifically they had in mind (something very esoteric and niche like a TCP/IP offload processor). I would like to see some touting of its possible uses, not just “We want to see what happens.” Unfortunately, the way competition works in Consumer Electronics, you never tell people what’s inside. You let folks like iFixit do a teardown and put pictures up. You let industry websites research all the chips and what they cost, estimate the ones that are custom Integrated Circuits, and report the cost to manufacture the device. That’s what they do with every Apple iPhone these days.

    It would be cool if Intel could also sell this as a development kit for Stellarton users. Keep the price high enough to prevent people from releasing products based just on the kit’s CPU, but low enough to get people to try out some interesting projects. I’m guessing it would be a great tool for video transcoding, muxing/demuxing video streams, etc. If anyone does release a shipping product, though, it would be cool if they put a “Stellarton Inside” logo on it, so we know that FPGAs are doing the heavy lifting. The other possibility Intel mentions is using the FPGA for proprietary I/O, so possibly something like an InfiniBand network interface? I still have hopes it’s used in the Consumer Electronics world.

  • Custom superchippery pulls 3D from 2D images like humans • The Register

    Computing brainboxes believe they have found a method which would allow robotic systems to perceive the 3D world around them by analysing 2D images as the human brain does – which would, among other things, allow the affordable development of cars able to drive themselves safely.

    via Custom superchippery pulls 3D from 2D images like humans • The Register.

    The beauty of this new work is that they designed a custom processor using a Virtex 6 FPGA (Field Programmable Gate Array). An FPGA, for those who don’t know, is a computer chip that you can ‘re-wire’ through software to take on any computational task you can dream up. In the old days this would have required a custom chip to be engineered, validated and manufactured at great cost; FPGAs require only a development kit and the chips you program yourself. With this you can optimize every step within the processor and speed things up far beyond a general purpose CPU (like the Intel chip that powers your Windows or Mac computer). In this research, the custom designed circuitry uses video images to decide where in the world a robot can safely drive as it maneuvers around on the ground. I know Hans Moravec has done a lot with this at Carnegie Mellon, and it seems this group is from Yale’s engineering department, which is encouraging: the techniques are being embraced and extended by another U.S. university. The low power of this processor and its facility for processing video images in real time is ahead of its time, and hopefully it will find some commercial application either in robotics or automotive safety controls. As for me, I’m still hoping for a robot chauffeur.

  • The Ask.com Blog: Bloglines Update

    [Image: Steve Gillmor, via CrunchBase]

    As Steve Gillmor pointed out in TechCrunch last year, being locked in an RSS reader makes less and less sense to people as Twitter and Facebook dominate real-time information flow. Today RSS is the enabling technology – the infrastructure, the delivery system. RSS is a means to an end, not a consumer experience in and of itself. As a result, RSS aggregator usage has slowed significantly, and Bloglines isn’t the only service to feel the impact. The writing is on the wall.

    via The Ask.com Blog: Bloglines Update.

    I don’t know if I agree with the conclusion that RSS readers are a form of lock-in. I consider Facebook participation a form of lock-in, as all my quips, photos and posts in that social networking cul-de-sac will never be exported back out again. There’s no way to do it, never ever. With an RSS reader, at least my blogroll can easily be exported and imported again using OPML-formatted ASCII text. How cool is that in the era of proprietary binary formats (mp4, pdf, doc)? No, I would say RSS is innately good in and of itself. Enabling technologies are like that, and while RSS readers are not the only way to consume or create feeds, I haven’t found one that couldn’t import my blogroll. Try doing that with Twitter or Facebook (click the ‘don’t like’ button).

  • Blog U.: Augmented Reality and the Layar Reality Browser

    I remember when I first saw the Verizon Wireless commercial featuring the Layar Reality Browser. It looked like something out of a science fiction movie. When my student web coordinator came in to the office with her iPhone, I asked her if she had ever heard of “Layar.” She had not heard of it so we downloaded it from the App Store. I was amazed at how the app used the phone’s camera, GPS and Internet access to create a virtual layer of information over the image being displayed by the phone. It was my first experience with an augmented reality application.

    via Blog U.: Augmented Reality and the Layar Reality Browser – Student Affairs and Technology – Inside Higher Ed.

    It’s nice to know Layar is getting some wider exposure. When I first wrote about it last year, the smartphone market was still somewhat small, and Layar was targeting phones that already had GPS built in, which the Apple iPhone wasn’t quite ready to allow access to in its development tools. Now the iPhone and Droid are willing participants in this burgeoning era of Augmented Reality.

    The video in the article was shot on a Droid, and it does a WAY better job of showing off the Layar application than any of the fanboy websites. Hopefully real-world performance is as good as it appears in the video. And I’m pretty sure the software company that makes it has been continuously updating it since it first appeared on the iPhone a year ago. Given the recent release of the iPhone 4 and its performance enhancements, I have a feeling Layar would be a cool, cool app to try out and explore.

  • Micron intros SSD speed king • The Register

    The RealSSD P300 comes in a 2.5-inch form factor and in 50GB, 100GB and 200GB capacity points, and is targeted at servers, high-end workstations and storage arrays. The product is being sampled with customers now and mass production should start in October.

    via Micron intros SSD speed king • The Register.

    [Image: the Crucial RealSSD C300, as it appears on Anandtech.com]

    For the first time since SSDs hit the market, I am looking at the drive performance of each new product being offered. What I’ve begun to realize is that the speeds of each product are starting to fall into a familiar range. For instance, I can safely say that for a drive in the 120GB range with Multi-Level Cells you’re going to see a minimum of 200MB/sec read/write speeds (reading is usually faster than writing by some amount on every drive). This is a vague estimate of course, but it’s becoming more and more common. Smaller drives have slower speeds and suffer on benchmarks due in part to the smaller number of parallel data channels; bigger capacity drives have more channels and can therefore move more data per second. A good capacity for a boot/data drive is going to be in the 120-128GB category. And while it won’t be the best for archiving all your photos and videos, that’s fine: use a big old 2-3TB SATA drive for those heavy lifting duties. I think that will be a common architecture in the future and not a premium choice as it is now: SSD for boot/data and a typical HDD for big archives and backup.
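
    A back-of-envelope sketch of that channel scaling, with an assumed (not datasheet) per-channel figure, shows why the capacity tiers line up with the speed tiers:

```python
# Throughput scales with the number of parallel NAND channels. 25 MB/s
# per channel is an illustrative guess, not a number from any datasheet.
PER_CHANNEL_MB_S = 25

for channels, capacity_gb in [(4, 64), (8, 128), (16, 256)]:
    print(f"{capacity_gb:>3} GB class, {channels:>2} channels: "
          f"~{channels * PER_CHANNEL_MB_S} MB/s")
```

    With those made-up but plausible numbers, the 128GB class lands right at the ~200MB/sec figure quoted above, and doubling the capacity doubles the headline speed.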

    On the enterprise front things are a little different: speed and throughput are important, but the drive interface is as well. With SATA the most widely used interface for consumer hardware, big drive arrays for the data center are wedded to a form of Serial Attached SCSI (SAS) or Fibre Channel (FC). So now manufacturers and designers like Micron need to engineer niche products for the high-margin markets that require SAS or FC versions of the SSD. As was the case with the transition from Parallel ATA to Serial ATA, the first products are going to use SATA-to-SAS or SATA-to-FC adapter electronics on board to make them compatible. Likely this will be the standard procedure for quite a while, as a ‘native’ Fibre or SAS interface will require a bit of engineering and added cost to accommodate the enterprise interfaces. Speeds, however, will likely always be tuned for the higher-volume consumer market, and the SATA version of each drive will likely be the highest-throughput version in each drive category. I’m thinking the data center folks should adapt and adjust and go with consumer-level SATA SSDs now that drives are no longer mechanically spinning disks. Similarly, as more and more manufacturers do their own error correction and wear leveling on the memory chips in SSDs, reliability will equal or exceed that of FC or SAS spinning disks.

    And speaking of spinning disks, the highest sustained throughput I’ve ever seen quoted for a SATA disk was 150MB/sec. Hands down, that was theoretically the best it could ever do; more likely you would only see 80MB/sec (which takes me back to the old days of Fast/Wide SCSI and the Barracuda). Given the limits of moving media, with read/write heads tracking across spinning platters, Flash throughput is just stunning. We are now in an era where Flash SSDs, while slower than RAM, are awfully fast, and fast enough to notice when booting a computer. I think the only real speed enhancement beyond the drive interface is to put the Flash on the motherboard directly and build the drive controller into the CPU to make read/write requests. I doubt it would be cost effective for the amount of improvement, but it would eliminate some of the motherboard electronics and smooth the flow a bit. Something to look for certainly in netbook or slate-style computers in the future.
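
    To put numbers on that “fast enough to notice when booting” claim, here’s the rough arithmetic, assuming a hypothetical 4GB of OS files read at boot and the sequential rates mentioned above (real boot I/O is largely random, which favors the SSD even more):

```python
BOOT_READ_GB = 4  # hypothetical volume of OS files read at boot

for name, mb_s in [("spinning SATA disk", 80), ("SATA MLC SSD", 200)]:
    seconds = BOOT_READ_GB * 1024 / mb_s
    print(f"{name}: ~{seconds:.0f}s to read {BOOT_READ_GB}GB at {mb_s}MB/s")
```

    Roughly 51 seconds versus 20 seconds: the kind of difference you feel every single morning.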

  • Drive suppliers hit capacity increase difficulties • The Register

    Hard disk drive suppliers are looking to add platters to increase capacity because of the expensive and difficult transition to next-generation recording technology.

    via Drive suppliers hit capacity increase difficulties • The Register.

    This is a good survey of upcoming HDD platter technologies. HAMR (Heat Assisted Magnetic Recording) and BPM (Bit Patterned Media) are the next generation, arriving as the current Perpendicular Magnetic Recording (PMR) slowly hits the top end of its ability to squash together the 1s and 0s on a spinning hard drive platter. HAMR is reminiscent of the old magneto-optical technology from the halls of Steve Jobs’s old NeXT Computer company. It uses a laser to heat the surface of the drive platter before the read/write head records data. This change in the state of the surface (the heat) helps align the magnetism of the bits written, so the tracks of the drive and the bits recorded inside them can be more tightly spaced. In the world of HAMR, heat + magnetism = bigger hard drives on the same old 3.5″ and 2.5″ platters we have now. With BPM, the whole drive is manufactured to hold a set number of bits and tracks in advance. Each bit is created directly on the platter as a ‘well’ with a ring of insulating material surrounding it. The wells are sufficiently small and dense to allow much tighter spacing than PMR. But as is often the case, the new technologies aren’t ready for manufacturing; a few test samples are out in limited or custom-made engineering prototypes to test the waters.

    Given the slowdown in silicon CMOS chip speeds from the likes of Intel and AMD, along with the wall PMR is hitting, it would appear the frontier days of desktop computing are coming to a close. Gone are the days of Megahertz wars, and now the Gigabyte wars waged in the labs of review sites and test benches across the Interwebs are fading too. The torrid pace of change in hardware we all experienced from the release of Windows 95 to last year’s release of Windows 7 has slowed to a radical incrementalism. Intel releases so many chips with slight variations in clock speed and cache that one cannot keep up with them all. Hard drive manufacturers try to increment their disks by about .5TB every 6 months, but now that will stop. Flash-based SSDs will be the biggest change for most of us and will help break through the inherent speed barriers imposed by SATA and spinning disk technologies. I hope a hybrid approach is used, mixing SSDs and HDDs for speed and size in desktop computers: fast things that need to be fast can use the SSD, slow things that are huge in size or quantity will go to the HDD. As for next-gen disk-based technologies, I’m sure there will be a change to the next higher density technology, but it will no doubt be a long time in coming.
