Carpet Bomberz Inc.

Scouring the technology news sites every day

Human/machine partnership for problems otherwise Too Hard


carpetbomberz:

Clippit asking if the user needs help (Photo credit: Wikipedia)

Agreed. I think insofar as a computer AI can watch what we’re doing and step in to prompt us with some questions, THAT will be the killer app. It won’t be Clippy the assistant from MS Word, but a friendly prompt saying, “I just watched you do something 3 times in a row, would you like some help doing a bunch of them without having to go through the steps yourself?” Then you’ve got an offer of assistance that’s timely and non-threatening. You won’t have to turn on a macro recorder to tell the computer what you want to do and let it see the steps. It (the computer) will have already recognized you are doing a repetitive task it can automate. And as Jon points out, it’s just a matter of successive approximations until you get the slam-dunk series of steps that gets the heavy lifting done. Then the human can address the exceptions list: the 20-50 examples that didn’t work quite right or that the AI felt diverged from the pattern. That exception list is what the human should really be working on, not the 1,000 self-similar items that can be handled with the assistance of an AI.
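As a rough illustration of the “noticed you did that three times” idea, here’s a minimal sketch in Python. The event log and action names are made up; a real assistant would be watching application events rather than strings.

```python
from collections import deque

def detect_repetition(actions, pattern_len=2, min_repeats=3):
    """Return the repeated action pattern if the tail of `actions`
    is the same short sequence performed `min_repeats` times in a row."""
    window = pattern_len * min_repeats
    if len(actions) < window:
        return None
    tail = list(actions)[-window:]
    pattern = tail[:pattern_len]
    # Every successive chunk of the tail must match the first chunk.
    if all(tail[i:i + pattern_len] == pattern
           for i in range(0, window, pattern_len)):
        return pattern
    return None

# Hypothetical event log built up while watching the user work.
log = deque(maxlen=50)
for action in ["copy-cell", "paste-cell", "copy-cell", "paste-cell",
               "copy-cell", "paste-cell"]:
    log.append(action)
    repeated = detect_repetition(log)
    if repeated:
        print(f"I noticed you did {repeated} three times in a row. "
              "Want me to repeat it for the rest of the list?")
```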

Originally posted on Jon Udell:

My recent post about redirecting a page of broken links weaves together two different ideas. First, that the titles of the articles on that page of broken links can be used as search terms in alternate links that lead people to those articles’ new locations. Second, that non-programmers can create macros to transform the original links into alternate search-driven links.
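As a sketch of that first idea, turning an article title into a search-driven link could look something like this in Python; the site and title here are placeholders, not the actual archive:

```python
from urllib.parse import quote_plus

def search_link(title, site="example.com"):
    """Turn an article title into a search URL that should surface the
    article wherever it now lives."""
    query = quote_plus(f'"{title}" site:{site}')
    return f"https://www.google.com/search?q={query}"

# Hypothetical broken link: keep the title, swap the dead URL for a search.
print(search_link("Some Article Title From The Old Archive"))
```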

There was lots of useful feedback on the first idea. As Herbert Van de Sompel and Michael Nelson pointed out, it was a really bad idea to discard the original URLs, which retain value as lookup keys into one or more web archives. Alan Levine showed how to do that with the Wayback Machine. That method, however, leads the user to sets of snapshots that don’t consistently mirror the original article, because (I think) Wayback’s captures happened both before and after the breakage.
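For the archival angle, the original URLs can serve as lookup keys against the Wayback Machine’s availability API, roughly like this (a sketch; the example URL is made up, and the optional timestamp lets you bias toward captures from before the breakage):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def wayback_snapshot(original_url, timestamp=None):
    """Look up the closest Wayback Machine capture for a (possibly broken)
    URL, using the original URL as the lookup key."""
    params = {"url": original_url}
    if timestamp:  # e.g. "20120101" to prefer captures near that date
        params["timestamp"] = timestamp
    with urlopen("https://archive.org/wayback/available?" + urlencode(params)) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None

print(wayback_snapshot("http://example.com/some/vanished/article"))
```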

So for now I’ve restored…

View original 431 more words

Written by carpetbomberz

December 19, 2014 at 6:26 pm

Posted in technology

Retrotechtacular: Supersonic Transport Initiatives


carpetbomberz:

Back in the day MIT and WGBH produced films together. Imagine that. Well, imagine no longer. This was an actual production done at NASA facilities in Virginia and California surveying research into Supersonic Transports (SSTs) from 1966, a few years before Robert McNamara shut it all down to save money to spend on the war in Vietnam.

Originally posted on Hackaday:

In the early days of PBS member station WGBH-Boston, the station, in conjunction with MIT, produced a program called Science Reporter. The program’s aim was explaining modern technological advances to a wide audience through the use of interviews and demonstrations. This week, we have a 1966 episode called “Ticket Through the Sound Barrier”, which outlines the then-current state of supersonic transport (SST) initiatives being undertaken by NASA.

MIT reporter and basso profondo [John Fitch] opens the program at NASA’s Ames Research Center. Here, he outlines the three major considerations of the SST initiative. First, the aluminium typically used in subsonic aircraft fuselage cannot withstand the extreme temperatures caused by air friction at supersonic speeds. Although the Aérospatiale-BAC Concorde was skinned in aluminium, it was limited to Mach 2.02 because of heating issues. In place of aluminium, a titanium alloy with a melting point of 3,000°F is being developed and tested.

View original 331 more words

Written by carpetbomberz

December 9, 2014 at 4:28 pm

Posted in technology

The same, but different!


carpetbomberz:

I too am a big believer in doing some amount of testing when the opportunity comes along. Most recently I had to crunch down some video to smaller file sizes. I decided to use Handbrake, as that’s the hammer I use for every nail. And in the time since I first started using it, a number of options have cropped up, all surrounding the use of the open source x264 encoding libraries. There are now more command-line tweaks and options than you could ever imagine. Thankfully the maintainers of Handbrake have simplified some of the settings through the GUI-based version of the software. Now I wasn’t going for quality but for file size, and I got it using the “constant quality” output option as opposed to my classic fave, “constant bitrate”. Let’s just say after a few hours of doing different tweaks on the same file, I got bit rates way down without using constant bitrate. And it seems to work no matter what the content is (static shots or fast-moving action). So kudos to Spreadys for giving a head-to-head comparison. Much appreciated.
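For what it’s worth, the GUI’s constant quality setting maps onto HandBrake’s command-line tool roughly as follows; this is just a sketch driven from Python, and the filenames and RF value are illustrative:

```python
import subprocess

def compress_constant_quality(src, dst, rf=24):
    """Re-encode a clip with x264 at a constant quality (RF) level rather
    than a fixed bitrate; higher RF means smaller files, lower quality."""
    cmd = [
        "HandBrakeCLI",
        "-i", src,        # source file
        "-o", dst,        # destination file
        "-e", "x264",     # use the x264 encoder
        "-q", str(rf),    # constant quality (RF) instead of an average bitrate
    ]
    subprocess.run(cmd, check=True)

compress_constant_quality("lecture_raw.mov", "lecture_small.mp4", rf=26)
```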

Originally posted on Spreadys.com:

After another long phone call, I decided to repeat a test previously conducted 3 years ago.

The conversation surrounded a small experiment on transcoders and players. It highlighted an issue in that any documented process must include what software was used, the settings, and then a comparison of the results. It originally proved that just specifying a player type and/or container format was useless, as the video file itself could have been created in a million different ways. I was asked, “would the same issues happen today?” With updated software and higher-spec PCs, would issues still arise? I said yes… but then thought I had better check!

Disclaimer!! – This is by no means scientific. I have replicated the real world and not dug too deep into encoding parameters or software settings, and the PC used is mid-range. I have posted this information in order to…

View original 885 more words

Written by carpetbomberz

December 5, 2014 at 5:56 pm

Posted in technology

Study Finds Internet Congestion Really Is About Business, Not Technology

carpetbomberz:

That’s right, the fault, dear reader, is not in our stars but in ourselves. We have slow internet speeds ’cuz business. That’s the briefest synopsis yet that I’ve written. Whether it’s carriers allowing each other’s traffic to run across their networks, or peering arrangements, or whatever, each business is trying to mess with the other guy’s traffic. And the consumers, the customers, all lose as a result.

Originally posted on Consumerist:


Various enormous corporations have this year been at each other’s throats over how well or how poorly internet traffic travels through their systems. A new report indicates that some of the mud-slinging this year is true: interconnection, or peering, between ISPs is why end-users are getting terrible internet traffic. But, they say, it’s business, and not technology, that’s making your Netflix buffer.

DSL Reports points the way to the study, from an internet research organization called M-Lab. M-Lab studied how traffic does (or doesn’t) make it to you through the peering connections it travels through.

Peering has come up a lot this year, most notably around Netflix. The streaming-video behemoth contended that major ISPs — particularly but not solely Comcast and Verizon — were deliberately letting Netflix traffic clog up.

The congestion was happening at interconnection points, the places where the transit ISPs Netflix partnered with…

View original 645 more words

Written by carpetbomberz

October 30, 2014 at 5:29 pm

Posted in technology

120 Node Raspberry Pi Cluster for Website Testing

carpetbomberz:

I’m always fascinated by these one-off, one-of-a-kind clustered systems like this Raspberry Pi rig. Kudos for doing the assembly and getting it all running. As the comments mention, it may not be practical in terms of price. But still, it’s pretty cool for what it is.

Originally posted on Hackaday:


[alexandros] works for resin.io, a website which plans to allow users to update firmware on embedded devices with a simple git push command. The first target devices will be Raspberry Pis running node.js applications. How does one perform alpha testing while standing up such a service? Apparently by building a monster tower of 120 Raspberry Pi computers with Adafruit 2.8″ PiTFT displays. We’ve seen some big Raspberry Pi clusters before, but this one may take the cake.


The tower is made up of 5 hinged sections of plywood. Each section contains 24 Pis, two Ethernet switches and two USB hubs. The 5 sections can be run on separate networks, or as a single 120 node monster cluster. When the sections are closed in, they form a pentagon-shaped tower that reminds us of the classic Cray-1 supercomputer.

Raspberry Pi machines are low power, at least when compared to a desktop PC. A standard Raspi consumes less…

View original 65 more words

Written by carpetbomberz

October 7, 2014 at 9:57 pm

Posted in technology

What’s a Chromebook good for? How about running PHOTOSHOP? • The Register

Netscape Communicator (Photo credit: Wikipedia)

Photoshop is the only application from Adobe’s suite that’s getting the streaming treatment so far, but the company says it plans to offer other applications via the same tech soon. That doesn’t mean it’s planning to phase out its on-premise applications, though.

via What’s a Chromebook good for? How about running PHOTOSHOP? • The Register.

Back in 1997 and 1998 I spent a lot of time experimenting and playing with Netscape Communicator “Gold”. It had a built-in web page editor that more or less gave you WYSIWYG rendering of the HTML elements live as you edited. It also had an email client and news reader built into it. I also spent a lot of time reading Netscape white papers on their Netscape Communications server and LDAP server, and this whole universe of Netscape trying to re-engineer desktop computing in such a way that the Web Browser was the THING. Instead of a desktop with apps, you had some app-like behavior resident in the web browser. And from there you would develop your JavaScript/ECMAScript web applications that did other useful things. Web pages with links in them could take the place of PowerPoint. Netscape Communicator Gold would take the place of Word and Outlook. This is the triumvirate that Google would assail some 10 years later with its own Google Apps and the benefit of AJAX-based web app interfaces and programming.

Turn now to this announcement by Adobe and Google of a joint effort to “stream” Photoshop through a web browser. A long-time stalwart of desktop computing, Adobe Photoshop (prior to being bundled with EVERYTHING else) required a real computer in the early days (ahem, meaning a Macintosh) and has continued to do so even more (as the article points out) since CS4 attempted to use the GPU as an accelerator for the application. I used to keep up with each new release of the software, but around 1998 I feel like I stopped learning new features, and my “experience” more or less cemented itself in the pre-CS era (let’s call that Photoshop 7.0). Since then I do 3-5 things at most in Photoshop, ever. I scan. I layer things with text. I color balance things or adjust exposures. I apply a filter (usually unsharp mask). I save to a multitude of file formats. That’s it!
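Incidentally, that short list of tasks doesn’t strictly need Photoshop at all; a rough equivalent in Python with the Pillow library (not anything Adobe ships, and the filenames here are hypothetical) looks something like this:

```python
from PIL import Image, ImageDraw, ImageEnhance, ImageFilter

# Hypothetical scanned file; the steps mirror the short list above.
img = Image.open("scan.tif").convert("RGB")

img = ImageEnhance.Brightness(img).enhance(1.1)   # nudge the exposure
img = ImageEnhance.Color(img).enhance(1.05)       # mild color-balance adjustment
img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))

draw = ImageDraw.Draw(img)
draw.text((20, 20), "Figure 1: scanned page", fill="white")  # layer some text

for name in ("out.png", "out.jpg", "out.tif"):                # save to a few formats
    img.save(name)
```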

Given that there’s even a possibility to stream Photoshop to a Google Chromebook-based device, I think we’ve now hit what Netscape had discovered long ago: the web browser is the desktop, pure and simple. It was bound to happen, especially now with the erosion into different form factors and mobile OSes. iOS and Android have shown that what we are willing to call an “app” most times is nothing more than a glorified link to a web page, really. So if they can manage to wire up enough of the codebase of Photoshop to make it work in real time through a web browser without tons and tons of plug-ins and client-side JavaScript, I say all the better. Because this means, architecturally speaking, good old Outlook Web Access (OWA) can only get better and become more like its desktop cousin, Outlook 2013. Microsoft too is eroding the distinction between desktop and mobile. It’s all just a matter of more time passing.

Written by carpetbomberz

October 2, 2014 at 3:00 pm

HP Ships First ARM Servers | EE Times

The software ecosystem for ARM servers “is still shaky, there needs to be a lot more software development going on and it will take time,” says Gwennap.

via HP Ships First ARM Servers | EE Times.

Previous generations of multi-core, massively parallel, ARM-based servers came from one-off manufacturers with their own toolsets and Linux distros. HP’s attempt to really market to this segment will hopefully be substantial enough to get an Ubuntu distro with enough libraries and packages to make it function right out of the box. The article says companies are using the ProLiant ARM-based system as a memcached server. I would speculate that if that’s what people want, the easier you can make that happen from an OS and app-server standpoint the better. There’s a reason folks like to buy Synology and BuffaloTech NAS products, and that’s the ease with which you spin them up and get a lot of storage attached in a short amount of time. If the ProLiant can do that for people needing quicker and more predictable page loads on their web apps, then HP should optimize for memcached performance and make it easy to configure and put into production.

Now what, you may ask, is memcached? If you’re running a web server or a web application that requires a lot of speed, so that purchases or other transactions complete and show some visual cue that they were successful, the easiest way to do that is through caching. The web page contents are kept in a high-speed storage location separate from the actual web page, and when required the server will redirect, or point, to the stuff that sits over in that high-speed location. By swapping in the high-speed stored stuff for the slower stuff, you get a really good experience, with the web page refreshing automagically to show your purchases in a shopping cart, or that your tax refund is on its way. The website world is built on caching so we don’t see spinning watches or other indications that processing is going on in the background.
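In code, the pattern is about as simple as caching gets. Here’s a minimal sketch using the pymemcache client against a memcached server on localhost; the key name and render function are just placeholders:

```python
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))  # a memcached server somewhere nearby

def get_page(key, render):
    """Serve a rendered page out of memcached when possible; otherwise
    render it the slow way and cache the result for the next request."""
    page = cache.get(key)
    if page is None:
        page = render()                   # slow path: hit the database, build HTML
        cache.set(key, page, expire=60)   # keep it hot for the next minute
    return page

html = get_page("cart:42", lambda: b"<html>...rendered shopping cart...</html>")
```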

To date, this type of caching was done by different software packages, first for Apache web servers, but now, in the world of social media, it’s done for any type of web server. Whether it’s Amazon, Google, or Facebook, memcached or a similar caching server is sending you that actual web page as you click, submit, and wait for the page to refresh. And if a data center owner like Amazon, Google, or Facebook can lower the cost of each of its memcached servers, it can lower its operating costs for each of those cached web pages and keep everyone happy with the speed of its websites. Whether or not ARM-based servers see wider application depends on apps being written specifically for that chip architecture. But at least now people can point to memcached and web page acceleration as a big first win that might see wider adoption longer term.

Written by carpetbomberz

September 29, 2014 at 3:00 pm

Posted in cloud, data center, mobile
