Homages can be a great thing when they force one to reconsider the thing being honored. Parodies, too, can be useful as homage. Austin Powers forced the producers of James Bond to reconsider all that had gone before. So they rebooted the franchise with Casino Royale and Daniel Craig, and the rest, as one might say, is history. Apple, take note.
Originally posted on 9to5Mac:
Beautiful renderings from German site Curved/labs depict a stunning metallic ode to Apple’s original Macintosh computer. While acknowledging the enhanced functionality of Apple’s latest computers, such as the Retina iMac, Curved/labs suggests that the company often neglects its own design history when releasing new machines – the inspiration for this “tribute.”
Agreed. I think insofar as a computer AI can watch what we’re doing and step in to prompt us with some questions, THAT will be the killer app. It won’t be Clippy the assistant from MS Word, but a friendly prompt saying, “I just watched you do something 3 times in a row; would you like some help doing a bunch of them without having to go through the steps yourself?” Then you’ve got an offer of assistance that is timely and non-threatening. You won’t have to turn on a macro recorder to tell the computer what you want to do and let it see the steps. The computer will have already recognized that you are doing a repetitive task it can automate. And as Jon points out, it’s just a matter of successive approximations until you get the slam-dunk series of steps that gets the heavy lifting done. Then the human can address the exceptions list: the 20-50 examples that didn’t work quite right, or that the AI felt diverged from the pattern. That exception list is what the human should really be working on, not the 1,000 self-similar items that can be handled with the assistance of an AI.
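A minimal sketch of the kind of watcher described above might look like this. Everything here is hypothetical (class name, threshold, the idea of feeding it a stream of action labels); it just shows the "noticed you did that 3 times in a row" trigger:

```python
from collections import deque


class RepetitionWatcher:
    """Watches a stream of user actions and offers to automate a
    sequence once it has been performed back-to-back `threshold` times.
    Illustrative sketch only; names and thresholds are made up."""

    def __init__(self, pattern_length=3, threshold=3):
        self.pattern_length = pattern_length
        self.threshold = threshold
        # Only need enough history to hold `threshold` repeats of the pattern.
        self.history = deque(maxlen=pattern_length * threshold)

    def observe(self, action):
        """Record one user action; return True when help should be offered."""
        self.history.append(action)
        return self._should_offer_help()

    def _should_offer_help(self):
        if len(self.history) < self.pattern_length * self.threshold:
            return False
        acts = list(self.history)
        pattern = acts[:self.pattern_length]
        # True only if the same pattern repeats back-to-back `threshold` times.
        return all(
            acts[i * self.pattern_length:(i + 1) * self.pattern_length] == pattern
            for i in range(self.threshold)
        )
```

Feeding it, say, "copy", "paste", "rename" three times over would make the last `observe` call return True, which is the moment the friendly prompt would appear. The exceptions list is then just whatever items the automated pass flags as not matching the pattern.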
Originally posted on Jon Udell:
My recent post about redirecting a page of broken links weaves together two different ideas. First, that the titles of the articles on that page of broken links can be used as search terms in alternate links that lead people to those articles’ new locations. Second, that non-programmers can create macros to transform the original links into alternate search-driven links.
There was lots of useful feedback on the first idea. As Herbert Van de Sompel and Michael Nelson pointed out, it was a really bad idea to discard the original URLs, which retain value as lookup keys into one or more web archives. Alan Levine showed how to do that with the Wayback Machine. That method, however, leads the user to sets of snapshots that don’t consistently mirror the original article, because (I think) Wayback’s captures happened both before and after the breakage.
So for now I’ve restored…
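The two ideas in Udell's post can be sketched in a few lines. This is a rough illustration, not his actual macro: the search endpoint is a generic placeholder, and the second helper just reflects the Van de Sompel/Nelson point that the original URL should be kept as a lookup key into an archive rather than discarded:

```python
from urllib.parse import quote_plus


def search_link(title, site=None):
    """Build a search-driven alternate link from an article title.
    Hypothetical sketch: a real macro would target whatever search
    engine or site search actually indexes the moved articles."""
    query = f'"{title}"'
    if site:
        query += f" site:{site}"
    return "https://www.google.com/search?q=" + quote_plus(query)


def wayback_link(original_url):
    """Keep the original URL as a lookup key into the Wayback Machine
    instead of discarding it; this lists all captured snapshots."""
    return "https://web.archive.org/web/*/" + original_url
```

A non-programmer's macro would do essentially the same string substitution over each row of the broken-links page, which is exactly why Udell argues the task doesn't need a programmer.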
Back in the day, MIT and WGBH produced films together. Imagine that. Well, imagine no longer. This was an actual production done at NASA facilities in Virginia and California surveying research on Supersonic Transports (SST) from 1966, a few years before Robert McNamara shut it all down to save money to spend on the War in Vietnam.
Originally posted on Hackaday:
In the early days of PBS member station WGBH-Boston, the station, in conjunction with MIT, produced a program called Science Reporter. The program’s aim was explaining modern technological advances to a wide audience through the use of interviews and demonstrations. This week, we have a 1966 episode called “Ticket Through the Sound Barrier”, which outlines the then-current state of supersonic transport (SST) initiatives being undertaken by NASA.
MIT reporter and basso profondo [John Fitch] opens the program at NASA’s Ames research center. Here, he outlines the three major considerations of the SST initiative. First, the aluminium typically used in subsonic aircraft fuselage cannot withstand the extreme temperatures caused by air friction at supersonic speeds. Although the Aérospatiale-BAC Concorde was skinned in aluminium, it was limited to Mach 2.02 because of heating issues. In place of aluminium, a titanium alloy with a melting point of 3,000°F is being developed and tested.
I too am a big believer in doing some amount of testing when the opportunity comes along. Most recently I had to crunch down some video to smaller file sizes. I decided to use Handbrake, as that’s the hammer I use for every nail. And in the time since I first started using it, a number of options have cropped up, all surrounding the use of the open-source x264 encoding libraries. There are now more command-line tweaks and options than you could ever imagine. Thankfully the maintainers of Handbrake have simplified some of the settings through the GUI-based version of the software. Now I wasn’t going for quality but for file size, and I got it using the “Constant Quality” output option as opposed to my classical fave, “Constant Bitrate”. Let’s just say after a few hours of doing different tweaks on the same file I got bit rates way down without using Constant Bitrate. And it seems to work no matter what the content is (static shots or fast-moving action). So kudos to Spreadys for giving a head-to-head comparison. Much appreciated.
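For reference, the two approaches compared above map onto HandBrakeCLI's real flags: `-q` sets a constant-quality RF value (lower means higher quality, larger file), while `-b` sets an average bitrate in kbps. The file names and chosen values below are placeholders, and the functions print the commands rather than run them, as a dry-run sketch:

```shell
# Dry-run sketch: each helper prints the HandBrakeCLI command it would run.
# -q = constant quality (RF); -b = average bitrate in kbps.
encode_cq() { echo HandBrakeCLI -i "$1" -o "${1%.*}_cq.mp4" -e x264 -q 24; }
encode_cbr() { echo HandBrakeCLI -i "$1" -o "${1%.*}_cbr.mp4" -e x264 -b 1500; }

encode_cq sample.mov
encode_cbr sample.mov
```

With constant quality, the encoder spends bits only where the content needs them, which is why it shrinks both static shots and fast action without the tuning a fixed bitrate demands.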
Originally posted on Spreadys.com:
After another long phone call, I decided to repeat a test previously conducted 3 years ago.
The conversation surrounded a small experiment on transcoders and players. It highlighted an issue in that any documented process must include what software was used, the settings, and then a comparison of the results. It originally proved that just specifying a player type and/or container format was useless, as the video file itself could have been created in a million different ways. I was asked, “Would the same issues happen today?” With updated software and higher-spec PCs, would issues still arise? I said yes… but then thought I had better check!
Disclaimer!! – This is by no means scientific. I have replicated the real world and not dug too deep into encoding parameters or software settings, and the PC used is mid-range. I have posted this information in order to…
That’s right, the fault, dear reader, is not in our stars but in ourselves. We have slow internet speeds, ’Cuz Business. That’s the briefest synopsis yet that I’ve written. Whether it’s carriers allowing each other’s traffic to run across their networks, or peering arrangements, or whatever, each business is trying to mess with the other guy’s traffic. And the consumers, the customers, all lose as a result.
Originally posted on Consumerist:
Various enormous corporations have this year been at each other’s throats over how well or how poorly internet traffic travels through their systems. A new report indicates that some of the mud-slinging this year is true: interconnection, or peering, between ISPs is why end-users are getting terrible internet traffic. But, they say, it’s business, and not technology, that’s making your Netflix buffer.
DSL Reports points the way to the study, from an internet research organization called M-Lab. M-Lab studied how traffic does (or doesn’t) make it to you through the peering connections it travels through.
Peering has come up a lot this year, most notably around Netflix. The streaming-video behemoth contended that major ISPs — particularly but not solely Comcast and Verizon — were deliberately letting Netflix traffic clog up.
The congestion was happening at interconnection points, the places where the transit ISPs Netflix partnered with — companies like…
I’m always fascinated by these one-off, one-of-a-kind clustered systems like this Raspberry Pi rig. Kudos for doing the assembly and getting it all running. As the comments mention, it may not be practical in terms of price. But still, it’s pretty cool for what it is.
Originally posted on Hackaday:
[alexandros] works for resin.io, a website which plans to allow users to update firmware on embedded devices with a simple git push command. The first target devices will be Raspberry Pis running node.js applications. How does one perform alpha testing while standing up such a service? Apparently by building a monster tower of 120 Raspberry Pi computers with Adafruit 2.8″ PiTFT displays. We’ve seen some big Raspberry Pi clusters before, but this one may take the cake.
The tower is made up of 5 hinged sections of plywood. Each section contains 24 Pis, two Ethernet switches and two USB hubs. The 5 sections can be run on separate networks, or as a single 120 node monster cluster. When the sections are closed in, they form a pentagon-shaped tower that reminds us of the classic Cray-1 supercomputer.
Raspberry Pi machines are low power, at least when compared to a desktop PC. A standard Raspi consumes less…
Photoshop is the only application from Adobe’s suite that’s getting the streaming treatment so far, but the company says it plans to offer other applications via the same tech soon. That doesn’t mean it’s planning to phase out its on-premise applications, though.
Turn now to this announcement by Adobe and Google of a joint effort to “stream” Photoshop through a web browser. A long-time stalwart of desktop computing, Adobe Photoshop (prior to being bundled with EVERYTHING else) required a real computer in the early days (ahem, meaning a Macintosh), and has demanded even more of one (as the article points out) since CS4 attempted to use the GPU as an accelerator for the application. With each passing year I used to keep up with new releases of the software. But around 1998 I feel like I stopped learning new features, and my “experience” more or less cemented itself in the pre-CS era (let’s call that Photoshop 7.0). Since then I do 3-5 things at most in Photoshop, ever: I scan. I layer things with text. I color-balance things or adjust exposures. I apply a filter (usually Unsharp Mask). I save to a multitude of file formats. That’s it!