Category: media

Anything relating to writing about technology, the media, or the blogosphere.

  • Apple patents hint at future AR screen tech for iPad | Electronista

    Image via Wikipedia: structure of a liquid crystal display.

    Apple may be working on bringing augmented reality views to its iPad thanks to a newly discovered patent filing with the USPTO.

    via Apple patents hint at future AR screen tech for iPad | Electronista. (Originally posted at AppleInsider; see the link below.)

    Original Article: AppleInsider article on AR

    Just a very brief look at a couple of patent filings by Apple, with some descriptions of potential applications. They seem to want to use it for navigation, using the onboard video camera. One half of the screen shows the live video feed; the other half is a ‘virtual’ rendition of that scene in 3D to let you find a path, or maybe a parking space, in between all those buildings.

    The second filing mentions a see-through screen whose opacity can be regulated by the user. The information display takes precedence over the image seen through the LCD panel, and the screen defaults to totally opaque while drawing no voltage whatsoever (an in-plane switching design for the LCD).

    However, the most intriguing part of the story as told by AppleInsider is the use of sensors on the device to determine angle, direction, and bearing, which are then sent over the network. Why the network? Well, the whole rendering of the 3D scene described in the first patent filing is done somewhere in the cloud and spit back to the iOS device. No onboard 3D rendering needed, or at least not at that level of detail. Maybe those datacenters in North Carolina are really cloud-based 3D rendering farms?
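
    Just to make that division of labor concrete, a minimal sketch of the device's side of such an exchange might look like the following. Everything specific here is my own assumption for illustration: the endpoint URL, the field names, and the response format are invented, not taken from the patent filings.

    ```python
    import json
    import urllib.request

    # Hypothetical sketch: the device sends its pose (position, bearing,
    # tilt) to a remote service that renders the matching 3D scene.
    # The endpoint URL and all field names are invented for illustration.
    RENDER_SERVICE = "https://render.example.com/v1/scene"

    pose = {
        "latitude": 43.1566,     # device GPS fix
        "longitude": -77.6088,
        "heading_deg": 270.0,    # compass bearing
        "pitch_deg": 12.5,       # tilt from accelerometer/gyro
        "fov_deg": 60.0,         # camera field of view
    }

    request = urllib.request.Request(
        RENDER_SERVICE,
        data=json.dumps(pose).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    # The service would return a rendered frame (e.g. JPEG bytes) that
    # the device composites next to the live camera feed.
    with urllib.request.urlopen(request) as response:
        rendered_frame = response.read()
    ```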

  • Distracting chatter is useful. But thanks to RSS (remember that?) it’s optional. (via Jon Udell)

    Image by Donovan Watts via Flickr: editing my Radio UserLand Instiki from my 770.

    I too am a big believer in RSS. And while I am dipping my toes into Facebook and Twitter, the bulk of my consumption comes from the big blogroll I’ve amassed and refined going back to my Radio UserLand days in 2002.

    When I left the pageview business I walked away from an engine that had, for many years, manufactured an audience for my writing. Four years on I’m still adjusting to the change. I always used to cringe when publishers talked about using content to drive traffic. Of course when the traffic was being herded my way I loved the attention. And when it wasn’t I felt — still feel — its absence. There are plenty of things I don’t miss, though. Among t …

    via Jon Udell

  • Kim Cameron returns to Microsoft as indie ID expert • The Register

    Cameron said in an interview posted on the ID conference’s website last month that he was disappointed about the lack of an industry advocate championing what he has dubbed “user-centric identity”, which is about keeping the various bits of an individual’s online life totally separated.

    via Kim Cameron returns to Microsoft as indie ID expert • The Register.

    CRM, meet VRM: we want our identity separated. This is one of the goals of Vendor Relationship Management, as opposed to “Customer Relationship” Management. I want to share a well-defined set of details with Windows Live!, Facebook, Twitter, and Google. Instead, I exist as separate entities that each of them tries to aggregate and profile to learn more about what I do outside their respective web apps. So if someone can champion my ability to control what I share with which online service, all the better. If Microsoft understands this, it is possible someone like Kim Cameron will be able to accomplish some big things with Windows Live! ID logins and profiles. Otherwise, this is just another attempt to capture web traffic into a commercial private Intraweb. I count Apple, Facebook, and Google as private Intraweb competitors.
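
    As a thought experiment, the control I’m describing could be as small as a user-held disclosure policy that filters a single profile per service. This is purely an illustrative sketch; the profile fields, service names, and policy shape are all invented, not any real Windows Live! or VRM API.

    ```python
    # Illustrative sketch only: a VRM-style selective disclosure policy.
    # Fields and service names are hypothetical examples.
    FULL_PROFILE = {
        "display_name": "carpetbomberz",
        "email": "me@example.com",
        "birthday": "1970-01-01",
        "city": "Rochester, NY",
        "interests": ["storage", "cpus", "infographics"],
    }

    # The user, not the vendor, decides which attributes each service sees.
    DISCLOSURE_POLICY = {
        "windows_live": {"display_name", "email"},
        "facebook": {"display_name", "city", "interests"},
        "twitter": {"display_name"},
        "google": {"display_name", "email", "city"},
    }

    def profile_for(service: str) -> dict:
        """Return only the attributes the user has released to this service."""
        allowed = DISCLOSURE_POLICY.get(service, set())
        return {k: v for k, v in FULL_PROFILE.items() if k in allowed}

    print(profile_for("twitter"))  # {'display_name': 'carpetbomberz'}
    ```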

  • Goal oriented visualizations? (via Erik Duval’s Weblog)

    Image via Wikipedia: Charles Minard's 1869 chart showing the losses...

    Visualizations and their efficacy always take me back to Edward Tufte’s big hardcover books on infographics (or chartjunk, when it’s done badly). In terms of this specific category, visualization leading to a goal, I think it’s still very much a ‘general case’. But examples are always better than theoretical descriptions of an ideal. So while I don’t have an example to give (which is what Erik Duval really wants), I can at least point to a person who knows how infographics get misused.

    I’m also reminded of the most recent issue of Wired magazine, where there’s an article on feedback loops. How are goal-oriented visualizations different from, or better than, feedback loops? I’d say that’s an interesting question to investigate further. The primary example given in that story is the radar-equipped speed limit sign. It doesn’t tell you the posted speed. It merely tells you how fast you are going, and that by itself, apart from ticketing or making the speed limit signs more noticeable, did more to effect a change in behavior than any other option. So maybe a goal-oriented visualization could also benefit from techniques like feedback loops?

    Some of the fine fleur of information visualisation in Europe gathered in Brussels today at the Visualizing Europe meeting. Definitely worth following the links of the speakers on the program! Twitter has a good trace of what was discussed. Revisit offers a rather different view on that discussion than your typical twitter timeline. In the Q&A session, Paul Kahn asked the Rather Big Question: how do you choose between different design alterna …

    via Erik Duval’s Weblog

  • JSON Activity Streams Spec Hits Version 1.0

    Image via Wikipedia: icon for a social networking website.

    The Facebook Wall is probably the most famous example of an activity stream, but just about any application could generate a stream of information in this format. Using a common format for activity streams could enable applications to communicate with one another, and presents new opportunities for information aggregation.

    via JSON Activity Streams Spec Hits Version 1.0.

    Remember mash-ups? I recall the great wide wonder of putting together web pages that used ‘services’ provided for free through APIs published to anyone who wanted to use them. There were many at one time; some still exist and others have been culled. But as newer social networks begat yet newer ones (MySpace, Facebook, Foursquare, Twitter), none of the ‘outputs’ or feeds of any single one was anything more than a way of funneling you into its own login accounts and user screens. The gated community first requires you to be a member in order to play.

    We went from ‘open’ to cul-de-sac and stovepipe in less than one full revision of social networking. However, maybe all is not lost; maybe an open standard can help folks re-use their own data at least (maybe I could mash up my own activity stream). Betting on whether this will take hold and see wider adoption by social networking websites would be risky. Likely each service provider will closely hold most of the data it collects and only publish the bare minimum necessary to claim compliance. Another burden upon this sharing is the slowly creeping concern about the security of one’s own activity stream. It will no doubt have to be opt-in, and definitely not opt-out, as I’m sure people are more comfortable having fellow members of their tribe know what they are doing than putting out a feed of it to the whole Internet. Which makes me think of the old discussion of being able to fine-tune who has access to what (Doc Searls’ old Vendor Relationship Management idea). Activity Streams could easily fold into that universe, where you regulate which threads of the stream are shared with which people. I would only really agree to use this service if it had that fine-grained level of control.
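
    For a feel of what the format looks like, here is a minimal, hand-rolled activity in the spirit of the 1.0 spec: an actor, a verb, and an object, bundled into a stream. The identifiers and values are made up for illustration; consult the spec itself for the authoritative schema.

    ```python
    import json

    # A minimal activity in the spirit of JSON Activity Streams 1.0.
    # All ids, names, and URLs below are invented examples.
    activity = {
        "published": "2011-07-20T15:04:55Z",
        "actor": {
            "objectType": "person",
            "id": "tag:example.com,2011:carpetbomberz",
            "displayName": "carpetbomberz",
        },
        "verb": "post",
        "object": {
            "objectType": "article",
            "id": "tag:example.com,2011:abc123",
            "displayName": "JSON Activity Streams Spec Hits Version 1.0",
            "url": "http://example.com/posts/abc123",
        },
    }

    # A stream is just a collection of such activities, which is what
    # makes mashing up your own data across services plausible.
    stream = {"totalItems": 1, "items": [activity]}
    print(json.dumps(stream, indent=2))
    ```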

  • The Sandy Bridge Review: Intel Core i7-2600K – AnandTech

    Quick Sync is just awesome. It’s simply the best way to get videos onto your smartphone or tablet. Not only do you get most if not all of the quality of a software based transcode, you get performance that’s better than what high-end discrete GPUs are able to offer. If you do a lot of video transcoding onto portable devices, Sandy Bridge will be worth the upgrade for Quick Sync alone.

    For everyone else, Sandy Bridge is easily a no brainer. Unless you already have a high-end Core i7, this is what you’ll want to upgrade to.

    via The Sandy Bridge Review: Intel Core i7-2600K, i5-2500K and Core i3-2100 Tested – AnandTech :: Your Source for Hardware Analysis and News.

    Previously in this blog I have recounted stories from Tom’s Hardware and AnandTech.com surrounding the wicked cool idea of tapping the vast resources contained within your GPU while you’re not playing video games. Producers of GPUs like nVidia and AMD both wanted to market their products to people who not only gamed but occasionally ripped video from DVDs and played it back on iPods or other mobile devices. The time sunk into these kinds of conversions was made somewhat less painful by the ability to run the process on a dual-core Wintel computer, browsing web pages while re-encoding the video in the background. But to get better speeds one almost always needs to monopolize all the cores on the machine, and free software like HandBrake will take advantage of those extra cores, slowing your machine but effectively speeding up the transcoding process. There was hope that GPUs could accelerate the transcoding process beyond what was achievable with a multi-core CPU from Intel. Another example is Apple’s widespread adoption of OpenCL as a pipeline for sending the GPU any video rendering or processing that may need to be done in iTunes, QuickTime or the iLife applications. And where I work, we get asked to do a lot of transcoding of video to different formats for customers. Usually someone wants a rip from a DVD that they can put on a flash drive and take with them into a classroom.

    However, now it appears there is a revolution in speed in the works, where Intel is giving you faster transcodes for free. I’m talking about Intel’s new Quick Sync technology, which uses the integrated graphics core as a video transcode accelerator. The transcoding speeds are amazingly fast and, given that speed, trivial for anyone, including the casual user. In the past everyone seemed to complain about how slow their computer was, especially for ripping DVDs or transcoding the rips to smaller, more portable formats. Now it takes a few minutes to get an hour of video into the right format. No more blue Monday. Follow the link to the story and analysis from AnandTech.com as they ran head-to-head comparisons of all the available techniques for re-encoding/transcoding a Blu-ray video release into a smaller .mp4 file encoded as H.264. They compared Intel quad-core CPUs (which took the longest and got pretty good quality) versus GPU-accelerated transcodes versus the new Quick Sync technology coming out soon on the Sandy Bridge generation of Intel Core i7 CPUs. It is wicked cool how fast these transcodes are, and it will make the process of transcoding trivial compared to how long it takes to actually ‘watch’ the video you spent all that time converting.
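
    Quick Sync itself is reached through Intel’s Media SDK, which is beyond the scope of this post. But as a point of comparison, here is a sketch of the CPU-bound route described above: shelling out to HandBrakeCLI and letting x264 chew through the file on every core. The flag values are illustrative, not a recommended recipe; check HandBrakeCLI’s help output for your version.

    ```python
    import subprocess

    # Hypothetical helper wrapping HandBrakeCLI, the free transcoder
    # named above. Flag values here are illustrative defaults.
    def transcode_to_mp4(src: str, dst: str, quality: float = 20.0) -> None:
        """Transcode src into an H.264 .mp4, saturating all CPU cores."""
        cmd = [
            "HandBrakeCLI",
            "-i", src,           # input: a DVD rip or video file
            "-o", dst,           # output container (.mp4)
            "-e", "x264",        # software H.264 encoder
            "-q", str(quality),  # constant quality (lower = better)
        ]
        subprocess.run(cmd, check=True)

    transcode_to_mp4("lecture.mpg", "lecture.mp4")
    ```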

    Links to older GPU accelerated video articles:

    https://carpetbomberz.com/2008/06/25/gpu-accelerated-h264-encoding/
    https://carpetbomberz.com/2009/06/12/anandtech-avivo/
    https://carpetbomberz.com/2009/06/23/vreveal-gpu/
    https://carpetbomberz.com/2010/10/18/microsoft-gpu-video-encoding-patent/

  • Drive suppliers hit capacity increase difficulties • The Register

    Hard disk drive suppliers are looking to add platters to increase capacity because of the expensive and difficult transition to next-generation recording technology.

    via Drive suppliers hit capacity increase difficulties • The Register.

    This is a good survey of upcoming HDD platter technologies. HAMR (Heat Assisted Magnetic Recording) and BPM (Bit Patterned Media) are the next generation to follow as the current Perpendicular Magnetic Recording (PMR) slowly hits the top end of its ability to squash together the 1’s and 0’s on a spinning hard drive platter. HAMR is reminiscent of the old Floptical technology from the halls of Steve Jobs’ old NeXT Computer company. It uses a laser to heat the surface of the drive platter before the read/write head starts recording data. This change in the state of the surface (the heat) helps align the magnetism of the bits written, so that the tracks of the drive and the bits recorded inside them can be more tightly spaced. In the world of HAMR, heat + magnetism = bigger hard drives on the same old 3.5″ and 2.5″ platters we have now. With BPM, the whole drive is manufactured to hold a set number of bits and tracks in advance. Each bit is created directly on the platter as a ‘well’ with a ring of insulating material surrounding it. The wells are sufficiently small and dense to allow much tighter spacing than PMR. But as is often the case, the new technologies aren’t ready for manufacturing. A few test samples are out in limited or custom-built engineering prototypes to test the waters.
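
    To put rough numbers on what tighter spacing buys, here is a back-of-the-envelope sketch of how areal density turns into platter capacity. The densities and platter radii are my own ballpark assumptions, not figures from the article, and real drives lose some capacity to formatting and ECC overhead.

    ```python
    import math

    # Rough assumptions, not vendor specs:
    GBIT_PER_SQIN_PMR = 500    # roughly where PMR topped out circa 2011
    GBIT_PER_SQIN_HAMR = 1500  # the kind of density HAMR/BPM promise

    def platter_gb(density_gbit_per_sqin: float,
                   outer_radius_in: float = 1.75,
                   inner_radius_in: float = 0.75) -> float:
        """Approximate capacity in GB of one two-sided 3.5-inch platter."""
        area = math.pi * (outer_radius_in**2 - inner_radius_in**2)  # one side
        total_gbit = density_gbit_per_sqin * area * 2               # two sides
        return total_gbit / 8  # bits -> bytes

    print(f"PMR:  ~{platter_gb(GBIT_PER_SQIN_PMR):.0f} GB per platter")
    print(f"HAMR: ~{platter_gb(GBIT_PER_SQIN_HAMR):.0f} GB per platter")
    ```

    At those assumed numbers, a two-sided 3.5″ platter comes out to roughly 1 TB for PMR and roughly 3 TB at the HAMR-class density, which is the scale of jump being hinted at.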

    Given the slowdown in silicon CMOS chip speeds from the likes of Intel and AMD, along with the wall facing PMR, it would appear the frontier days of desktop computing are coming to a close. Gone are the days of megahertz wars, and now the gigabyte wars waged in the labs of review sites across the interwebs are slowing too. The torrid pace of hardware change we all experienced from the release of Windows 95 to the release of Windows 7 has slowed to a radical incrementalism. Intel releases so many chips with ‘slight’ variations in clock speed and cache that one cannot keep up with them all. Hard drive manufacturers try to increment their disks by about 0.5 TB every six months, but now that will stop. Flash-based SSDs will be the biggest change for most of us and will help break through the inherent speed barriers imposed by SATA and spinning disk technologies. I hope a hybrid approach is used, mixing SSDs and HDDs for speed and size in desktop computers: fast things that need to be fast can use the SSD; slow things that are huge in size or quantity go to the HDD. As for next-gen disk-based technologies, I’m sure there will be a change to the next higher-density technology. But it will no doubt be a long time coming.
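
    A minimal sketch of that hybrid placement idea, with invented mount points and an arbitrary size threshold standing in for whatever a real tiering driver would use, might look like this:

    ```python
    from pathlib import Path

    # Illustrative only: small or frequently-touched files go to the SSD,
    # big bulk data goes to the HDD. Paths and threshold are made up.
    SSD_ROOT = Path("/mnt/ssd")
    HDD_ROOT = Path("/mnt/hdd")
    SIZE_THRESHOLD = 256 * 1024 * 1024  # 256 MB

    def choose_tier(size_bytes: int, hot: bool) -> Path:
        """Pick a storage root: speed for hot/small, capacity for the rest."""
        if hot or size_bytes < SIZE_THRESHOLD:
            return SSD_ROOT
        return HDD_ROOT

    print(choose_tier(4 * 1024, hot=True))                 # /mnt/ssd
    print(choose_tier(8 * 1024 * 1024 * 1024, hot=False))  # /mnt/hdd
    ```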

  • Announcing the first free software Blu-ray encoder

    Diary Of An x264 Developer (4/25/2010)

    For many years it has been possible to make your own DVDs with free software tools. Over the course of the past decade, DVD creation evolved from the exclusive domain of the media publishing companies to something basically anyone could do on their home computer.

    The move towards Blu-ray encoding is very encouraging. In reading the article I don’t see a mention of CUDA or OpenCL acceleration of the encoding process. As was the case for MPEG-2, a glaring need for acceleration became painfully obvious once people started converting long-form videos. I know x264 encoding can be accelerated by splitting the work across threads on a multi-core processor. But why not open the floodgates and get some extra horsepower from the ATI or nVidia graphics card too? We’re talking large frames and high frame rates, and the only way to guarantee adoption of the new format is to make the encoding process fast, fast, fast.
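
    For the CPU side that does exist today, a sketch of driving the x264 command-line encoder across all cores might look like this. The flags are illustrative, and genuinely Blu-ray-compliant output needs additional constraint settings beyond what is shown here.

    ```python
    import os
    import subprocess

    # A sketch of the multi-threaded CPU path mentioned above: invoking
    # the x264 command-line encoder directly. Flag values are
    # illustrative; x264 treats --threads 0 as auto-detect.
    def encode_h264(src_y4m: str, dst: str, crf: int = 18) -> None:
        """Encode a Y4M source to a raw H.264 stream using every CPU core."""
        cmd = [
            "x264",
            "--threads", str(os.cpu_count() or 0),  # 0 = auto-detect
            "--preset", "slow",                     # quality/speed tradeoff
            "--crf", str(crf),                      # constant quality mode
            "-o", dst,                              # output elementary stream
            src_y4m,                                # input video
        ]
        subprocess.run(cmd, check=True)

    encode_h264("master.y4m", "movie.264")
    ```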

  • iPad release imminent – caveat emptor

    Apropos of the big Easter weekend, Apple is releasing the iPad to the U.S. market. David Pogue of the NYTimes has done two reviews in one: rather than anger his technophile readers or alienate his average readers, he gave each audience its own review of a real hands-on iPad. Where’s Walt Mossberg on this topic? (Walt likes it.) Pogue more or less says the lack of a physical keyboard is a showstopper for many; users who need a keyboard should get a laptop of some sort. Otherwise, for what it accomplishes through finger gestures and software design, the iPad is a pretty incredible end-user experience. Whether your personality and demeanor are compatible with the iPad is up for debate. But try before you buy: hands-on will tell you much more than doing a web order and hoping for the best. And given the price, that’s the wise choice. Walt Mossberg too feels you had better actually try it before you buy. It is, in his own words, not like any other computer but in a different class all its own. So don’t trust other people to tell you whether or not it will work for you.

    One other thing David Pogue is very enthused by: the data plan seems less onerous than the first- and second-generation iPhone contracts with AT&T. The dam is about to burst on mandatory data plans; in the iPad universe you can subscribe and lapse, then re-subscribe and lapse again depending on your needs. So don’t pay for a long-term contract if you don’t need it. That addresses a long-standing problem I have had with the iPhone as it is currently marketed by Apple and AT&T. Battery life is another big plus. The review models that Mossberg and Pogue used had longer, read that again, LONGER run times than stated by Apple. Both tried doing really heavy network use and video playback on the devices and went over the 10-hour battery life claimed by Apple. Score a big win for the iPad in that category.

    Lastly, Pogue hinted at maps looking and feeling like real maps on the bigger display. Mossberg points out the hardware isn’t what’s really important; no, it’s what’s going to show up on the App Store specifically for the iPad. I think I’ve heard a few M.I.T. types say this before: it’s unimportant what it does, the question is what ‘else’ it does. And that ‘else’ is the software developer’s coin of the realm. Without developers these products have no legs, no markets outside the loyal fan base. What may come no one can tell, but it will be interesting times for iPad owners, that’s for sure.

  • AppleInsider | Custom Apple A4 iPad chip estimated to be $1 billion investment

    In bypassing a traditional chip maker like Intel and creating its own custom ARM-based processor for the iPad, Apple has likely incurred an investment of about $1 billion, a new report suggests.

    via AppleInsider | Custom Apple A4 iPad chip estimated to be $1 billion investment.

    After reading the NYTimes article linked within this one, I can only conclude that it’s a very generalized statement that it costs $1 billion to create a custom chip. The exact quote from NYTimes author Ashlee Vance is: “Even without the direct investment of a factory, it can cost these companies about $1 billion to create a smartphone chip from scratch.”

    Given that this is one third the full price of building a chip fabrication plant, why so expensive? What is the breakdown of those costs? Apple did invest money in P.A. Semi to get some chip-building expertise (they primarily designed chips that were fabricated at overseas contract manufacturing plants). Given that Qualcomm created the Snapdragon CPU using similar CPU cores from ARM Holdings, must they have had $1 billion to throw around too? Qualcomm was once dominant in the cell phone market, licensing its CDMA technology to the likes of Verizon, but its financial success is nothing like the old days. So how does Qualcomm come up with $1 billion to develop the Snapdragon CPU for smartphones? Does that seem possible?

    Qualcomm and Apple are licensing the biggest building blocks and core intellectual property from ARM; all they need to do is place and route and verify the design. So where does the $1 billion figure come in? Is it the engineers? Is it the masks for exposing the silicon wafers? I argue now, as I did in my first posting about the Apple A4 chip, that the chip is an adaptation of intellectual property, a license to a CPU design provided by ARM. It’s not literally created from ‘scratch’, starting with no base design or using completely new proprietary intellectual property from Apple. This is why I am confused. Maybe ‘from scratch’ means different things to different people.