OpenCL is a breakthrough precisely because it enables developers to accelerate the real-time execution of their algorithms quickly and easily, particularly those that lend themselves to the considerable parallel processing capabilities of FPGAs (which yield superior compute densities and far better performance per watt than CPU- and GPU-based solutions). via 10 Reasons OpenCL Will Change Your Design Strategy & Market Position | EE Times
AMD, and NVIDIA before it, has been trying to convince us of the usefulness of its GPUs for general purpose applications for years now. For a while it seemed as if video transcoding would be the killer application for GPUs; that was until Intel’s Quick Sync showed up last year. via AnandTech – Testing OpenCL Accelerated Handbrake with AMD’s Trinity
Similarly disappointing for everyone who isn’t Intel, it’s been more than a year after Sandy Bridge’s launch and none of the GPU vendors have been able to put forth a better solution than Quick Sync. If you’re constantly transcoding movies to get them onto your smartphone or tablet, you need Ivy Bridge. In less than… via AnandTech – The Intel Ivy Bridge Core i7 3770K Review
And with clock speeds topped out and electricity use and cooling being the big limiting issues, Scott says that an exaflops machine running at a very modest 1GHz will require one-billion-way parallelism, and parallelism in all subsystems to keep those threads humming. via Nvidia: No magic compilers for HPC coprocessors • The Register. Interesting…
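Scott's billion-way figure falls straight out of the arithmetic, and it's worth seeing why. A quick back-of-the-envelope sketch (assuming, as the quote does, one floating-point operation per processing element per cycle at a 1GHz clock):

```python
# Back-of-the-envelope: how much parallelism does an exaflops
# machine need at a modest 1 GHz clock, if each processing
# element retires one flop per cycle?

target_flops = 1e18  # one exaflops = 10^18 floating-point ops per second
clock_hz = 1e9       # 1 GHz = 10^9 cycles per second

# Operations that must complete on every single cycle, i.e. the
# number of concurrent threads/lanes needed across the whole machine:
parallelism = target_flops / clock_hz

print(f"{parallelism:.0e} concurrent operations per cycle")  # prints "1e+09 concurrent operations per cycle"
```

A billion concurrent operations on every clock tick, which is why the article stresses that the parallelism has to reach into every subsystem, not just the ALUs.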
Many people have predicted the demise of Moore’s Law, only to have a new process or technology rush in to save the day. Current tools are variations on a theme started in the 1960s by Shockley, Fairchild, and Intel, and they have continued to be refined over the years. Pure research into the tools and technologies underlying semiconductor manufacturing has been going on for decades. Work on Extreme UV has gone on for years, and yet it is still not widely adopted, because the old tools have continued to scale down the chips’ design rules. But time is running out, and a former principal at ARM is letting the cat out of the bag.
It may surprise some PC Fanboys, but on a tower-based Mac Pro you cannot just throw any old graphics card into the machine, install drivers, and expect it to work. Oh Noes. It is like this: Apple tests, in small quantities, hardware that works with its machines — engineering samples that Apple will sell as configurable items shipped with new machines. You might get a choice of three cards in total with a new machine. After-market pickings are even slimmer, and completely dependent on AMD/ATI, who have to purpose-build and ship a Mac-only version of a graphics card that might be a slightly newer or faster (usually not, though) version of a 2-3 generation old PC graphics card. It’s insulting. But hope springs eternal, and I see this news story as a ray of light for the Mac Fanboys.
Intel’s executives were quite brash when talking about Larrabee even though most of its public appearances were made on PowerPoint slides. They said that Larrabee would roar onto the scene and outperform competing products. via Intel Gets Graphic with Chip Delay – Bits Blog – NYTimes.com. And so now finally the NY Times nails the…
Intel and Apple are making big bets on the mobile graphics market. Look at the percentage of ownership they both have in the company called Imagination. Imagination makes the graphics core used in the iPhone and the Palm Pre. Intel has yet to use the PowerVR architecture in any products.
vReveal is trying to take advantage of NVIDIA GPUs to clean up poorly shot video in the consumer/video-sharing market. Accelerating video processing is always going to be a killer app for anyone trying to leverage a PC with a beefy GPU available.
ATI Avivo is not ready for prime time. But you should read the AnandTech review all the same to see why. Maybe it can be revved and turned into something comparable to NVIDIA’s own GPU acceleration efforts.