Category: technology

General technology, not anything in particular

  • End of the hiatus

    I am now at a point in my daily work where I can begin posting to my blog once again. It’s not so much that I’m catching up as that I don’t care as much about falling behind. Look forward to more desktop-related posts, as that is now my full-time responsibility where I work.

    Posted from WordPress for Windows Phone

  • A Hiatus is being announced (carpetbomberz.com is taking a pause)

    Sadly, my job has changed. I was forced into a re-assignment to a different part of the same organization I have worked for over the last 16 years. Luckily I still have a job, which is more than can be said for some who have suffered through these last 4 years of recession. So I thank my lucky stars that I can continue to pay the bills for the foreseeable future. I am lucky; there’s no other word for it.

    As for my commentary on technology news, that will have to wait a while until I can sort out my daily schedule. It may take some time to develop a good work/life balance again, follow tech news a little more closely, and try to project what future trends may emerge. I’m glad to have had a good, consistent run, and hopefully I can get back to a regular twice-weekly schedule again soon. In the meantime, enjoy the archive of older articles (there are literally hundreds of them) and try throwing in some comments on the older posts. I’ll respond, no problem at all. And on that happy suggestion, I bid you adieu!

  • The wretched state of GPU transcoding – ExtremeTech

    The spring 2005 edition of ExtremeTech magazine (Photo credit: Wikipedia)

    For now, use Handbrake for simple, effective encodes. Arcsoft or Xilisoft might be worth a look if you know you’ll be using CUDA or Quick Sync and have no plans for any demanding work. Avoid MediaEspresso entirely.

    via Joel Hruska @ ExtremeTech: The wretched state of GPU transcoding – Slideshow | ExtremeTech.

    Joel Hruska does a great survey of GPU-enabled video encoders. He even goes back to the original Avivo and Badaboom encoders put out by AMD and nVidia when they were first promoting GPU-accelerated video encoding. Sadly, the results don’t live up to the hype. Even Intel’s most recent entry in the race, Quick Sync, is found wanting. HandBrake appears to be the best option for most people and the most reliable and repeatable in the results it gives.

    Ideally the maintainers of the HandBrake project might get a boost from a fork of the source code that adds Intel Quick Sync support. But not everyone is interested in a proprietary Intel technology like Quick Sync, as expressed in this article from AnandTech. OpenCL seems like a more attractive option for the open source community at large, so the OpenCL/HandBrake development is at least a little encouraging. Still, as Joel Hruska points out, the CPU remains the best option for encoding high quality at smaller frame sizes; it beats the pants off all the GPU-accelerated options available to date.
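
    For anyone who just wants the reliable CPU path Joel recommends, a plain x264 encode is a one-liner with HandBrake’s command line interface. Here’s a rough sketch that wraps that call in Python so it can run over a whole folder; the folder names, output format and quality value are illustrative choices of mine, not anything from the article.

        import subprocess
        from pathlib import Path

        def encode_with_handbrake(src: Path, dst: Path, quality: int = 20) -> None:
            """Run a CPU-only x264 encode via HandBrakeCLI (no GPU acceleration)."""
            cmd = [
                "HandBrakeCLI",
                "-i", str(src),      # input file (a DVD rip, MKV, etc.)
                "-o", str(dst),      # output container, e.g. .mp4
                "-e", "x264",        # the software x264 encoder HandBrake is known for
                "-q", str(quality),  # constant-quality RF value; lower means higher quality
            ]
            subprocess.run(cmd, check=True)

        if __name__ == "__main__":
            # Hypothetical batch run over a folder of source files.
            Path("encoded").mkdir(exist_ok=True)
            for src in Path("rips").glob("*.mkv"):
                encode_with_handbrake(src, Path("encoded") / (src.stem + ".mp4"))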

  • Facebook smacks away hardness, sticks MySQL stash on flash • The Register

    Does Fusion-io have a sustainable competitive advantage or will it get blown away by a hurricane of other PCIe flash card vendors attacking the market, such as EMC, Intel, Micron, OCZ, TMS, and many others? 

    via Facebook smacks away hardness, sticks MySQL stash on flash • The Register.

    More updates on data center uptake of PCIe SSD cards, in the form of two big wins from Facebook and Apple. Price/performance for database applications seems to be skewed heavily toward Fusion-io versus the big guns in large-scale SAN roll-outs. Because of its smaller footprint and higher speed, a PCIe SSD needs far fewer resources than an equally fast disk-based storage array (including power and the square footage taken up by all the racks). Typically a large rack of spinning disks is aggregated using RAID controllers and caches to look like one very large, high-speed hard drive. The Fibre Channel connections add yet another layer of aggregation on top of that, so the underlying massive disk array can be split into virtual logical drives that fit the storage needs of individual servers and OSes along the way. But to match the speed of a Fusion-io style PCIe SSD, say to speed up just your MySQL server, the number of equivalent drives, racks, RAID controllers, caches and Fibre Channel host bus adapters is so large and costs so much that it isn’t worth it.

    A single PCIe SSD won’t have the same total storage capacity as that larger-scale SAN. But for a one-off speed-up of a MySQL database you don’t need the massive storage so much as the massive improvement in I/O, and that’s where the PCIe SSD comes into play. With the newest PCIe 3.0 interfaces and x8 (eight-lane) connectors, the current generation of cards can sustain 2GB/sec of throughput on a single card. Achieving that with older SAN technology is not just cost prohibitive but seriously space prohibitive in all but the largest of data centers. The race now is to see how dense and energy efficient a data center can be, so it comes as no surprise that Facebook and Apple (who are attempting to lower costs all around) are the ones leading this charge toward higher density and higher power efficiency.
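
    To put that 2GB/sec number in context, here’s a quick back-of-the-envelope calculation of my own (not from the article) for the theoretical ceiling of a PCIe link by generation and lane count.

        # Rough theoretical PCIe bandwidth per direction, ignoring protocol overhead.
        # Gen2 uses 8b/10b encoding (80% efficient), Gen3 uses 128b/130b (~98.5%).
        GEN_GTS = {2: 5.0, 3: 8.0}               # giga-transfers per second, per lane
        GEN_ENCODING = {2: 8 / 10, 3: 128 / 130}

        def pcie_bandwidth_gb_s(gen: int, lanes: int) -> float:
            """Approximate usable bandwidth in GB/s for a PCIe link."""
            return GEN_GTS[gen] * GEN_ENCODING[gen] * lanes / 8  # bits -> bytes

        if __name__ == "__main__":
            print(f"PCIe 2.0 x8: ~{pcie_bandwidth_gb_s(2, 8):.1f} GB/s")  # ~4.0 GB/s
            print(f"PCIe 3.0 x8: ~{pcie_bandwidth_gb_s(3, 8):.1f} GB/s")  # ~7.9 GB/s

    In other words, a card sustaining 2GB/sec is using only a fraction of what a Gen3 x8 slot can carry, so the interface itself isn’t the bottleneck.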

    Don’t get me wrong when I tout the PCIe SSD so heavily. Disk storage will never go away in my lifetime; it’s just too cost effective, and it is fast enough. But for the sysops in charge of deploying production apps and hitting performance brick walls, the PCIe SSD is going to really save the day, and if nothing else it will act as a bridge until a better solution can be designed and procured for a given situation. That alone, I think, would make the cost of trying out a PCIe SSD well worth it. Longer term, which vendor will win is still a toss-up. I’m not well versed in the scale of each big vendor’s enterprise sales in the PCIe SSD market, but Fusion-io is doing a great job keeping its name in the press and marketing to some big, identifiable names.

    I also give OCZ some credit with its Z-Drive R5, though it’s not quite considered an enterprise data center player. Design-wise, the OCZ R5 is helping push the state of the art by trying out new controllers and new designs that attempt to raise the total number of I/Os and the bandwidth of a single card. I’ve seen one story so far, from a test sample at Computex (AnandTech), in which a brand new, clean R5 hit nearly 800,000 IOPS in benchmark tests. That peak performance eventually eroded to around 530,000 IOPS as the flash chips filled up, but the trend is clear. We may see 1 million IOPS on a single PCIe SSD before long. And that, my readers, is going to be an Andy Grove style 10X difference that brings changes we never thought possible.
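
    For a sense of scale, here’s my own quick arithmetic converting those IOPS figures into raw throughput. The article doesn’t give a block size, so the common 4 KiB benchmark transfer size is assumed; smaller-block benchmarks would yield proportionally lower bandwidth.

        # Convert IOPS figures into approximate bandwidth, assuming 4 KiB transfers.
        BLOCK_BYTES = 4 * 1024  # 4 KiB, a typical random-I/O benchmark block size

        def iops_to_gb_s(iops: int, block_bytes: int = BLOCK_BYTES) -> float:
            return iops * block_bytes / 1e9

        if __name__ == "__main__":
            fresh, steady = 800_000, 530_000  # figures quoted from the Computex sample
            print(f"Fresh drive:  ~{iops_to_gb_s(fresh):.1f} GB/s")   # ~3.3 GB/s
            print(f"Filled drive: ~{iops_to_gb_s(steady):.1f} GB/s")  # ~2.2 GB/s
            print(f"Drop-off:     {(1 - steady / fresh):.0%}")        # ~34%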

    (In Only the Paranoid Survive, Andy Grove describes a 10x change as things improving by a whole order of magnitude before reaching a new equilibrium.)

  • Very cool tips for monitoring GPU usage on Win8. Didn’t know you could do this without having to install a card manufacturer’s own utility software.

    McAkins Online

    To analyze how Windows 8 uses your device’s GPU, please follow these simple steps:

    1. Download Process Explorer from Microsoft and install as usual.

    2. Run Process Explorer as Admin by right-clicking its icon on the Start Screen and selecting Run as Administrator on the App bar below. You need to do this to see the GPU readouts.

    3. Click on any of the graphs on the toolbar; the System Information window opens:

    4. Select the GPU tab and click the Engine button. The GPU Engine History screen opens:

    Run any graphics-intensive application, or just use Windows 8 as usual, and come back to see your GPU usage history.

    View original post
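
    As a side note for readers on newer machines: I believe later Windows releases (Windows 10 onward, not Windows 8 itself) expose the same per-engine GPU utilization through a “GPU Engine” performance counter set, which makes it scriptable without Process Explorer. A rough sketch, assuming that counter set exists on your build; treat the counter path as an assumption to verify.

        import subprocess

        # Poll per-engine GPU utilization via the built-in typeperf tool.
        # NOTE: the "GPU Engine" counter set is not present on Windows 8;
        # there you still need Process Explorer (or vendor tools) as shown above.
        COUNTER = r"\GPU Engine(*)\Utilization Percentage"

        def sample_gpu_utilization(samples: int = 5) -> str:
            """Collect a few samples of GPU engine utilization as CSV text."""
            result = subprocess.run(
                ["typeperf", COUNTER, "-sc", str(samples)],
                capture_output=True, text=True, check=True,
            )
            return result.stdout

        if __name__ == "__main__":
            print(sample_gpu_utilization())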

  • My colleagues over at the MedCenter are announcing the big upgrade this weekend. Let’s hope this downtime goes smoothly and everything just works come Monday (fingers crossed)

    URMC Learn

    We will be updating the Blackboard Learning Management system from 9.1 Service Pack 5 to 9.1 Service Pack 7 on Saturday, June 9 between midnight and 10AM. The Blackboard Learning System functions will be unavailable. Portal services including UR ePay, Student/Instructor/Advisor Access and email will be available.

    Most of the changes are performance improvements and enabling some new data features.

    • Performance Improvements
    • SCORM Integration
    • Browser Compatibility updates

    Additionally, the Learning Objects Blog and Wiki tools will no longer be available. You can find out more about the Blackboard native Blog and Wiki tools and exporting your Learning Objects data before June 30, 2012.

    If you have questions please contact Blackboard Support via phone at 275-6865 or via email at blackboard@urmc.rochester.edu.

    View original post

  • Doc Searls Weblog · Won and done

    Doc Searls (Photo credit: Wikipedia)

    This tells me my job with foursquare is to be “driven” like a calf into a local business. Of course, this has been the assumption from the start. But I had hoped that somewhere along the way foursquare could also evolve into a true QS app, yielding lat-lon and other helpful information for those (like me) who care about that kind of thing. (And, to be fair, maybe that kind of thing actually is available, through the foursquare API. I saw a Singly app once that suggested as much.) Hey, I would pay for an app that kept track of where I’ve been and what I’ve done, and made  that data available to me in ways I can use.

    via Doc Searls Weblog · Won and done.

    foursquare as a kind of LifeBits is, I think, what Doc Searls is describing: a form of self-tracking a la Stephen Wolfram or Gordon Bell. Instead, foursquare is the carrot being dangled to lure you into giving your business to a particular retailer. After that you accumulate points for the number of visits and possibly unlock rewards for your loyalty. But foursquare no doubt accumulates a lot of other data along the way that could be used for the very purpose Doc Searls was hoping for.

    Gordon Bell’s work at Microsoft Research bootstrapping the MyLifeBits project is a form of memory enhancement, but also a log of personal data that can be analyzed later. The collection, or ‘instrumentation’, of one’s environment is what Stephen Wolfram has accomplished by counting things over time. Not to say it’s simpler than MyLifeBits, but it is in some ways lighter-weight data (instead of videos and pictures: mouse clicks, tallies of email activity, times of day, etc.). There is no doubt that foursquare could make a for-profit service for paying users, collecting this location data and serving it up to subscribers, letting them analyze it after the fact.

    I firmly believe a form of MyLifeBits could be aggregated across a wide range of free and paid services, along with personal instrumentation and data collection like the kind Stephen Wolfram does. If there’s one thing I’ve learned reading stories about inventions like these from MIT’s Media Lab, it is that it’s never an either/or proposition. You don’t have to adopt just Gordon Bell’s technology, or Stephen Wolfram’s techniques, or even foursquare’s own data. You can use all of them, or pick and choose the ones that suit your personal data collection needs. Then you get to slice, dice, and analyze to your heart’s content. What you do with it after that is completely up to you and should be considered as personal as any legal documents or health records you already have.

    Which takes me back to an article I wrote some time ago about Jon Udell’s call for a federated LifeBits-type service. It wouldn’t be constrained to one kind of data; potentially all of your LifeBits would be aggregated, with new repositories for the stuff that must be locked down and kept private. So add Doc Searls to the list of bloggers and long-time technology writers who see an opportunity. Advocacy (in the case of Doc’s experience with foursquare) for sharing unfiltered data with the users from whom it was collected is one step in that direction. I feel Jon Udell is also an advocate for users gaining access to all that collected and aggregated data. But as Jon Udell asks, who is going to be the first to offer this as a pay-for service in the cloud, where for a fee you can access your lifebits aggregated into one spot (foursquare, twitter, facebook, gmail, flickr, photostream, mint, eRecords, etc.) so that you don’t spend your life logging on and off from service to service to service? Aggregation could be a beautiful thing.
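
    On Doc’s point about lat-lon being available through the foursquare API: at the time, the v2 API did let you pull your own check-in history, which is exactly the raw material a personal LifeBits archive needs. A minimal sketch, assuming the users/self/checkins endpoint and an OAuth token you’ve already obtained; the response field names are from my recollection of the v2 format, so treat them as illustrative.

        import json
        import urllib.request

        # Hypothetical sketch: fetch your own foursquare v2 check-in history and
        # print the timestamp, venue name and lat/lon for each visit.
        OAUTH_TOKEN = "YOUR_OAUTH_TOKEN"  # obtained via foursquare's OAuth flow
        URL = ("https://api.foursquare.com/v2/users/self/checkins"
               f"?oauth_token={OAUTH_TOKEN}&v=20120601&limit=250")

        def fetch_checkins():
            with urllib.request.urlopen(URL) as resp:
                payload = json.load(resp)
            return payload["response"]["checkins"]["items"]

        if __name__ == "__main__":
            for checkin in fetch_checkins():
                venue = checkin.get("venue", {})
                loc = venue.get("location", {})
                print(checkin.get("createdAt"), venue.get("name"),
                      loc.get("lat"), loc.get("lng"))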

  • AnandTech – Testing OpenCL Accelerated Handbrake with AMD’s Trinity

    AMD, and NVIDIA before it, has been trying to convince us of the usefulness of its GPUs for general purpose applications for years now. For a while it seemed as if video transcoding would be the killer application for GPUs, that was until Intel’s Quick Sync showed up last year.

    via AnandTech – What We’ve Been Waiting For: Testing OpenCL Accelerated Handbrake with AMD’s Trinity.

    There’s a lot to talk about when it comes to accelerated video transcoding, not the least of which is HandBrake’s general dominance for anyone doing small-scale size reductions of a DVD collection for transport on mobile devices. We owe it all to the open source x264 codec and all the programmers who have contributed to it over the years, standing on one another’s shoulders and allowing us to effortlessly encode or transcode gigabytes of video down to manageable sizes. But Intel has attempted to rock the boat by inserting itself into the fray, tooling its Quick Sync technology for accelerating the compression and decompression of video frames. However, it is a proprietary path pursued by a few small-scale software vendors, and it prompts the question: when is open source going to benefit from proprietary Intel Quick Sync technology? Maybe it’s going to take a long time. Maybe it won’t happen at all. Luckily for the HandBrake users in the audience, an attempt is now being made to re-engineer the x264 codec to take advantage of any OpenCL-compliant hardware on a given computer.
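
    That “any OpenCL-compliant hardware” point is the real attraction: the same code path can discover whatever CPU, integrated GPU, or discrete GPU a machine happens to have. A quick sketch of that discovery step, assuming the pyopencl package is installed (my own illustration, not something from the AnandTech article):

        import pyopencl as cl

        # Enumerate every OpenCL platform and device on this machine, the same
        # discovery step an OpenCL-accelerated encoder would do before picking
        # a target for its kernels.
        for platform in cl.get_platforms():
            print(f"Platform: {platform.name} ({platform.vendor})")
            for device in platform.get_devices():
                kind = cl.device_type.to_string(device.type)
                print(f"  {kind}: {device.name}, "
                      f"{device.max_compute_units} compute units, "
                      f"{device.global_mem_size // (1024 ** 2)} MB global memory")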

  • How Yahoo Killed Flickr and Lost the Internet

    But moreover, Yahoo needed to leverage this thing that it had just bought. Yahoo wanted to make sure that every one of its registered users could instantly use Flickr without having to register for it separately. It wanted Flickr to work seamlessly with Yahoo Mail. It wanted its services to sing together in harmony, rather than in cacophonous isolation. The first step in that is to create a unified login. That’s great for Yahoo, but it didn’t do anything for Flickr, and it certainly didn’t do anything for Flickr’s (extremely vocal) users.

    via How Yahoo Killed Flickr and Lost the Internet.

    Gizmodo article on how Yahoo first bought Flickr and then proceeded to let it erode. As the old cliche goes, the road to hell is paved with good intentions. Personally, I didn’t really mind the issue others had with the Yahoo login. I was allowed to use the Flickr login for a long time after the takeover. But I still had to create a Yahoo account, even if I never used it for anything other than accessing Flickr. Once I realized this was the case, I dearly wished Google had bought them instead, since I was already using GMail and other Google services.

    More recently there have been a lot of congratulations spread around following the release of a new Flickr uploader. I always had to purchase an add-on for Apple iPhoto in order to streamline the cataloging, annotating, and arranging of picture sets. Doing the uploads one at a time through the Web interface was a non-starter; I needed bulk uploads, but I refused to export picture sets out of iPhoto just to get them into Flickr. So an aftermarket arose for people like me who were heavily invested in iPhoto. These add-on programs worked great, but they would go out of date or become incompatible with newer versions of iPhoto, so you would have to go back and drop another $10 USD on a newer version of your iPhoto/Flickr exporter.

    And by this time Facebook had so taken over the social networking aspects of picture sharing that no one could see the point of a single-medium service (just picture sharing). When Facebook let you converse, play games, and poke your friends, why would you log out and open Flickr just to manage your photos? The friction was too high for the bulk of Internet users. Facebook gained the mindshare, reduced the friction, and made everything seamless, working the way everyone thought it should. It is hard to come back from a defeat like that, given the millions of sign-ups Facebook was enjoying. Yahoo should have had an app for that early on and let people share their Flickr sets using similar access controls and levels of security.

    I would have found Flickr a lot more useful if it had been well bridged into the Facebook universe during the critical period of 2008-2010. That was just the period when things were chaotically ramping up in terms of total new Facebook account creations. An insanely great Flickr app for Facebook could have made a big difference in growing community awareness and might have garnered a few new Flickr accounts along the way. But agendas act like blinders; they close you off from the environment in which you operate. The Yahoo/Flickr merger and the agenda of ‘integration’ were more or less the most important things going on internally during the giant Facebook ramp-up. And so it goes: Yahoo stumbles more than once, takes a perfectly good Web 2.0 app, and lets it slowly erode, like Friendster and MySpace before it. So long Flickr, it’s been good to know yuh.

  • Intel looks to build ultra-efficient mobile chips Apple can’t ignore

    Paul Otellini, CEO of Intel (Photo credit: Wikipedia)

    During Intel’s annual investor day on Thursday, CEO Paul Otellini outlined the company’s plan to leverage its multi-billion-dollar chip fabrication plants, thousands of developers and industry sway to catch up in the lucrative mobile device sector, reports Forbes.

    via Intel looks to build ultra-efficient mobile chips Apple can’t ignore (Apple Insider)

    But what you are seeing is a form of Fear, Uncertainty and Doubt (FUD) being spread about to sow the seeds of mobile Intel processor sales. The doubt is not as obvious as questioning the performance of ARM chips, or the ability of manufacturers like Samsung to meet their volume targets and reject rates for each new mobile chip. No, it’s more subtle than that, and only noticeable to people who know details like what design rule Intel is currently using versus what Samsung or TSMC (Taiwan Semiconductor Manufacturing Corp.) uses. Intel is just now releasing its next-gen 22nm chips, while companies like Samsung are still trying to recoup their investment in 45nm and 32nm production lines. Apple is just beginning to sample some 32nm chips from Samsung in the iPad 2 and Apple TV, while its current flagship iPad and iPhone both use a 45nm chip produced by Samsung. Intel is trying to say that the older-generation technology, while good, doesn’t have the weight of Intel’s massive investment in next-generation chip technology behind it. The new chips will be smaller, more energy efficient, and less expensive, all the things needed to make a higher profit on the consumer devices using them. However, Intel doesn’t do ARM chips; it has Atom, and that is the one thing that has hampered any big design wins in cellphone or tablet designs to date. At any given design rule, ARM chips almost always use less power than a comparably sized Atom chip from Intel. So whether it’s really an attempt to spread FUD can be debated one way or another. But the message is clear: Intel is trying to fight back against ARM. Why? Let’s turn back the clock to March of this year and a previous article also appearing in Apple Insider:

    Apple could be top mobile processor maker by end of 2012 (Apple Insider, March 20, 2012)

    This article is referenced in the original article quoted at the top of the page, and it points out why Intel is trying to get Apple to take notice of its own mobile chip commitments. Apple designs its own chips and contracts the manufacturing out to a foundry. To date Samsung has been the sole source of the A-series processors used in iPhone/iPod/iPad devices, while Apple tries to get TSMC up to speed as a second source. Meanwhile, sales of Apple devices continue to grow handsomely in spite of these supply limits. More important to Intel is that blistering growth in spite of being on older foundry technology and design rules. Intel has a technological and investment advantage over Samsung now. It does not, however, have a chip that is better than Apple’s in-house designed ARM chips. That’s why the underlying message is that Intel has to make its Atom chip so much better than an A4, A5, or A5X at any design rule that Apple cannot ignore Intel’s superior design and manufacturing capability. Apple will still use Intel chips, but not in its flagship products, until Intel achieves that much greater level of technical capability and sophistication in its mobile microprocessors.
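
    To put the design-rule advantage in rough numbers: to a first approximation, die area scales with the square of the feature size, so the same design shrunk from 45nm to 22nm would ideally take about a quarter of the silicon. A back-of-the-envelope sketch of my own (real shrinks never hit the ideal ratio):

        # Ideal (first-order) area scaling between process nodes: area ~ feature_size^2.
        def ideal_area_ratio(old_nm: float, new_nm: float) -> float:
            return (new_nm / old_nm) ** 2

        if __name__ == "__main__":
            for old, new in [(45, 32), (45, 22), (32, 22)]:
                print(f"{old}nm -> {new}nm: ~{ideal_area_ratio(old, new):.0%} of original area")
            # 45nm -> 22nm works out to roughly 24%, i.e. about four times as many dies
            # per wafer in the ideal case; in practice the gain is smaller.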

    Twin-track development plan for Intel’s expansion into smartphones (The Register, May 11, 2012)

    Intel is planning a two-pronged attack on the smartphone and tablet markets, with dual Atom lines going down to 14 nanometers and Android providing the special sauce to spur sales. 

    Lastly, Iain Thomson from The Register weighs in on what the underlying message from Intel really is: it’s all about the future of microprocessors for the consumer market. The emphasis in this article is that Android OS devices, whether phones, tablets, or netbooks, will be the way to compete against Apple. But again, it’s not Apple as such; it’s the microprocessor Apple is using in its best-selling devices that scares Intel the most. Intel has, since its inception, been geared toward the ‘mainstream’ market, selling into enterprises and to consumers for years. It has milked the desktop PC revolution, which it helped create, more or less starting with its forays into integrated microprocessor chips and chipsets. It reminds me a little of the old steel plants in the U.S. during the 1970s, while Japan was building new steel plants with a much more energy-efficient design and a steel-making technology that produced a higher quality product. Less expensive, higher quality steel was only possible in brand new plants, but the old-line U.S. plants couldn’t justify the expense and so wrapped up and shut down operations all over the place. Intel, while able to make that type of investment in newer technology, is still not able to create the energy-efficient mobile processor that will outperform an ARM-core CPU.