Carpet Bomberz Inc.

Scouring the technology news sites every day

Archive for the ‘gpu’ Category

Nvidia Pulls off ‘Industrial Light and Magic’-Like Tools | EE Times



The president of VMware said after seeing it (and not knowing what he was seeing), “Wow, what movie is that?” And that’s what it’s all about — dispersion of disbelief. You’ve heard me talk about this before, and we’re almost there. I famously predicted at a prestigious event three years ago that by 2015 there would be no more human actors, it would be all CG. Well I may end up being 52% or better right (phew).    - Jon Peddie

via Nvidia Pulls off ‘Industrial Light and Magic’-Like Tools | EE Times. Jon Peddie has covered the 3D animation, modeling and simulation market for YEARS. And when you can get a rise out of him like the quote above from EE Times, you have accomplished something. Between Nvidia’s hardware and now its GameWorks suite of software modeling tools, you have, in a word, Digital Cinema. Jon goes on to talk about how the digital simulation demo convinced a VMware exec that it was real live actors on a set. That’s how good things are getting.

And the comparison of Nvidia’s off-the-shelf toolkits to ILM is also telling. No longer does one need to keep computer scientists, physicists and mathematicians on staff to model and simulate things like particle systems and hair. It’s all there in the toolkit, ready to use, along with ocean waves and smoke. Putting these tools into the hands of users will only herald a new era in which access to the best algorithms and tools is less esoteric, less high-end, and less exclusive.

nVidia GameWorks by itself will be useful to some people, but re-packaging it in a way that embeds it in an existing workflow will widen adoption, whether that’s for a casual user or a student in a 3D modeling and animation course at a university. The follow-on to this is getting the APIs published so that current off-the-shelf tools like AutoCAD, 3ds Max, Blender, Maya, etc. can tap into it. Once the favorite tools can bring up a dialog box and start adding a particle system or full ray tracing to a scene at this level of quality, things will really start to take off. The other possibility is to flesh out GameWorks into a more standalone, easily adopted package that creatives could pick up and migrate to over time. That would be another path to using GameWorks as an end-to-end digital cinema creation package.


Written by Eric Likness

April 10, 2014 at 3:00 pm

Virtual Reality | Oculus Rift – Consumer Reports

Oculus Intel (Photo credit: .michael.newman.)

Imagine being able to immerse yourself in another world, without the limitations of a TV or movie screen. Virtual reality has been a dream for years, but judging by current trends, it may not be just a dream for much longer.

via Virtual Reality | Oculus Rift – Consumer Reports.

I won’t claim that when a technology gets written up in Consumer Reports it has “jumped the shark”, no. Instead I would rather give Consumer Reports kudos for keeping tabs on others writing up and lauding the Oculus Rift VR headset. The specifications of this device continue to improve even before it hits the market. Hopes are still high for the price to be reasonable (really, it needs to cost no more than a bottom-of-the-line iPad if there’s any hope of it taking off). Whether the price meets everyone’s expectations depends heavily on the sources for the materials going into the headset, and the single most expensive item is the display.

OLED (organic LED) has been used in mobile phones to great effect; the displays use less power and have somewhat brighter color than backlit LCD panels. But they cost more, and the bigger the display the higher the cost. The developers of the Oculus Rift have now pushed the cost maybe a little higher by choosing a very high refresh rate and low latency for the OLED screens in the headset. This came after the first wave of user feedback indicating too much lag and subsequent headaches because the screen couldn’t keep up with head movements (a classic downfall of most VR headsets, no matter the display technology). However, Oculus has continued to work on the lag in the current-generation headset, and by all accounts it’s nearly ready for public consumption. They may well have fixed the lag issue, and most beta testers to date are complimenting the changes in the hardware. This might be the device that launches a thousand 3D headsets.

As 3D goes, the market and appeal may be very limited; that historically has been the case. Whether it was used in academia for data visualization or in the military for simulation, 3D virtual reality was an expensive niche catering to people with lots of money to spend. Because the Oculus Rift is targeted at a lower price range, yet with fantastic visual performance, who knows what market may follow its actual release. So as everyone is whipped up into a frenzy over the final release of the Oculus Rift VR headset, keep an eye out for this: it’s going to be a hot item in limited supply for a while, I would bet. And yes, I do think I would love to try one out myself, not just for gaming purposes but for any of the as-yet-unseen applications it might have (like the next Windows OS or Mac OS?).


Written by Eric Likness

March 13, 2014 at 3:00 pm

Posted in gpu, science & technology


10 Reasons OpenCL Will Change Your Design Strategy & Market Position | EE Times

OpenCL logo (Photo credit: Wikipedia)

OpenCL is a breakthrough precisely because it enables developers to accelerate the real-time execution of their algorithms quickly and easily — particularly those that lend themselves to the considerable parallel processing capabilities of FPGAs (which yield superior compute densities and far better performance/Watt than CPU- and GPU-based solutions)

via 10 Reasons OpenCL Will Change Your Design Strategy & Market Position | EE Times.

There’s still a lot of untapped energy available with the OpenCL programming tools. Apple is still the single largest manufacturer that has adopted OpenCL across a large number of its products (OS and app software). And I know from reading about supercomputing on GPUs that some large-scale hybrid CPU/GPU computers have been ranked worldwide (the Chinese Tianhe being the first and biggest example). This article from EE Times encourages anyone with a background in C programming to give it a shot and see what algorithms could stand to be accelerated using the resources on the motherboard alone. But being EE Times, they are also touting the benefits of using FPGAs in the mix as well.

To date, the low-hanging fruit for desktop PC makers and their peripheral designers and manufacturers has been to reuse the GPU as a massively parallel co-processor where it makes sense. But as the EE Times writer emphasizes, FPGAs can be equal citizens too and might provide even more flexible acceleration. Interest in the FPGA as a co-processor for desktop through higher-end enterprise data center motherboards was brought to the fore by AMD back in 2006 with the Torrenza CPU socket. The hope back then was that a socket for a secondary specialty processor (at the time, an FPGA) might open up a market no one had addressed to that point. So depending on your needs and what extra processors you might have available on your motherboard, OpenCL might be generic enough going forward to get a boost from ALL the available co-processors on your motherboard.

Whether or not we see benefits at the consumer desktop level depends heavily on OS-level support for OpenCL. To date the biggest adopter of OpenCL has been Apple, which needed an OS-level acceleration API for video-intensive apps like video editing suites. Eventually Adobe recompiled some of its Creative Suite apps to take advantage of OpenCL on Mac OS. On the PC side, Microsoft has always had DirectX as its API for accelerating any number of different multimedia apps (for playback and editing) and is less motivated to incorporate OpenCL at the OS level. But that’s not to say a third-party developer who saw a benefit to OpenCL over DirectX couldn’t create their own plumbing and libraries and ship a runtime package that used OpenCL to support their apps, or license it to anyone who wanted it as part of a larger package installer (say, for a game or for a multimedia authoring suite).

For the data center this makes far more sense than for the desktop, as DirectX isn’t seen as a scientific computing API or a means of using a GPU as a numeric accelerator for scientific calculations. In this context, OpenCL might be a nice, open, easy-to-adopt library for people working on compute farms with massive numbers of both general-purpose CPUs and GPUs handing off parts of a calculation to one another over the PCIe bus or across CPU sockets on a motherboard. Everyone’s needs are going to vary, widely in some cases, but OpenCL might make that variation easier to address by providing a common library that lets you touch all the co-processors available when a computation needs to be sped up. So keep an eye on OpenCL as a competitor to any GPGPU-style API and library put out by Nvidia, AMD, or Intel. OpenCL might help people bridge the differences between these manufacturers too.
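To make that concrete, here is a minimal sketch (my own, not from the EE Times article) of how an OpenCL host program discovers every co-processor on a machine using the standard OpenCL 1.x C API. CPUs, GPUs, and FPGA boards that ship with an OpenCL driver all show up through the same handful of calls.

    /* Minimal sketch: list every OpenCL platform and device on this machine.
       Compile with something like: cc list_cl.c -lOpenCL */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;
        clGetPlatformIDs(8, platforms, &num_platforms);
        if (num_platforms > 8) num_platforms = 8;

        for (cl_uint p = 0; p < num_platforms; p++) {
            char pname[256];
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);
            printf("Platform: %s\n", pname);

            cl_device_id devices[16];
            cl_uint num_devices = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &num_devices);
            if (num_devices > 16) num_devices = 16;

            for (cl_uint d = 0; d < num_devices; d++) {
                char dname[256];
                cl_device_type type;
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
                clGetDeviceInfo(devices[d], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
                printf("  Device: %s (%s)\n", dname,
                       (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                       (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "accelerator");
            }
        }
        return 0;
    }

Once the kernels themselves are written in OpenCL C, the same host code path can hand work to whichever of those devices is the best fit, which is exactly the "common library across all co-processors" argument above.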



Written by Eric Likness

March 3, 2014 at 3:00 pm

Posted in computers, fpga, gpu


The Memory Revolution | Sven Andersson | EE Times

A 256Kx4 Dynamic RAM chip on an early PC memory card. (Photo by Ian Wilson) (Photo credit: Wikipedia)

In almost every kind of electronic equipment we buy today, there is memory in the form of SRAM and/or flash memory. Following Moore’s law, memories have doubled in size every second year. When Intel introduced the 1103 1Kbit dynamic RAM in 1971, it cost $20. Today, we can buy a 4Gbit SDRAM for the same price.

via The Memory Revolution | Sven Andersson | EE Times

Read now: a look back from an Ericsson engineer surveying the use of solid-state, chip-based memory in electronic devices. It is always interesting to know how these things started and evolved over time. Advances in RAM design and manufacture are the quintessential example of Moore’s Law, even more so than the advances in processors over the same period. Yes, CPUs are cool and very much a foundation upon which everything else rests (especially dynamic RAM storage). But remember: Intel didn’t start out making microprocessors; it started out as a memory chip company at a time when DRAM was just entering the market. That’s the foundation from which Gordon Moore understood the rate of change possible with silicon-based semiconductor manufacturing.
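A quick back-of-the-envelope check of the figures in the quoted passage (my arithmetic, not the article’s): going from the 1 Kbit Intel 1103 in 1971 to a 4 Gbit part at the same price is about 22 doublings, which over roughly 43 years works out to a doubling about every two years, right on the Moore’s Law schedule.

    /* Doubling rate implied by the quote: 1 Kbit (1971) -> 4 Gbit (~2014)
       at roughly the same price. Compile with: cc dram.c -lm */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double bits_1971 = 1024.0;                    /* Intel 1103: 1 Kbit */
        double bits_2014 = 4.0 * 1024 * 1024 * 1024;  /* 4 Gbit SDRAM       */
        double years     = 2014 - 1971;

        double doublings = log2(bits_2014 / bits_1971);   /* = 22 */
        printf("%.0f doublings over %.0f years\n", doublings, years);
        printf("one doubling every %.2f years\n", years / doublings);
        return 0;
    }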

Now we’re looking at mobile smartphone processors and systems on chip (SoCs) advancing the state of the art. Desktop and server CPUs are making incremental gains, but the smartphone is really trailblazing in showing what’s possible. We went from combining the CPU with the memory (so-called 3D memory) to now having graphics accelerators (GPUs) in the mix. Multiple cores and soon fully 64-bit CPU designs are entering the market (in the form of the latest-model iPhones). It’s not just a memory revolution, but memory was definitely a driver in the market as we migrated from magnetic core memory (state of the art in 1951-52, developed at MIT) to the dynamic RAM chip (state of the art in 1968-69). The drive to develop DRAM brought all the other silicon-based processes along with it, and all the boats were raised. So here’s to the DRAM chip that helped spur the revolution. Without those shoulders, the giants of today wouldn’t be able to stand.


Written by Eric Likness

February 24, 2014 at 3:00 pm

Posted in gpu, technology, wintel


AnandTech | The Pixel Density Race and its Technical Merits

Description of a pixel (Photo credit: Wikipedia)

If there is any single number that people point to for resolution, it is the 1 arcminute value that Apple uses to indicate a “Retina Display”.

via AnandTech | The Pixel Density Race and its Technical Merits.

Earlier in my job, I had to recommend the resolution people needed to get a good picture from a scanner or a digital camera. As we know, the resolution arms race knows no bounds: first in scanners, then in digital cameras. The same is true now for displays. How fine is fine enough? Is it noticeable? Is it beneficial? The technical limits that enforce lower resolution are usually tied to cost. For a consumer-level product, cost has to fit into a narrow range, and the perceived benefit of “higher quality” or sharpness is rarely enough to get someone to spend more. But as phones can be upgraded for free and printers and scanners are now commodity items, you just keep slowly migrating up to the next model at little to no entry cost. And everything is just ‘better’: all higher rez, and therefore by association higher quality, sharper, etc.

I used to quote, or try to pin down, a rule of thumb I found once regarding the acuity of the human eye. Some of this was just gained by noticing things when I started out using Photoshop and trying to print to imagesetters and laser printers. At some point in the past, someone decided 300 dpi is what a laser printer needed in order to reproduce text on letter-size paper. As for displays, I bumped into a quote from an IBM study on visual acuity indicating the human eye can discern display pixels in the 225 ppi range. I tried many times to find the actual publication where that appears so I could cite it, but no luck; I only found it as a footnote on a webpage from another manufacturer. Now in this article we get more stats on human vision, much more extensive than that vague footnote all those years ago.
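To put numbers behind those rules of thumb (my own arithmetic, not AnandTech’s): Apple’s 1 arcminute “Retina” criterion only turns into a required pixel density once you fix a viewing distance, which is why a phone held at 10-12 inches needs roughly 300 ppi while a monitor at arm’s length gets away with far less.

    /* Pixel density at which one pixel subtends 1 arcminute, by viewing
       distance. Compile with: cc retina.c -lm */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double PI = 3.14159265358979323846;
        double arcmin_rad = (1.0 / 60.0) * PI / 180.0;        /* 1 arcminute in radians */
        double distances_in[] = { 10.0, 12.0, 18.0, 24.0 };   /* phone ... desktop monitor */

        for (int i = 0; i < 4; i++) {
            double d = distances_in[i];
            double pixel_in = 2.0 * d * tan(arcmin_rad / 2.0);  /* pixel size in inches */
            printf("%5.1f in viewing distance -> %4.0f ppi\n", d, 1.0 / pixel_in);
        }
        return 0;
    }

At 10 inches that comes out to roughly 344 ppi and at 12 inches roughly 287 ppi, which brackets both the 326 ppi iPhone figure and the older 300 dpi and 225 ppi rules of thumb mentioned above.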

What can one conclude from all the data in this article? Just the same thing: resolution arms races are still being waged by manufacturers. This time, however, it’s in mobile phones, not printers, not scanners, not digital cameras. Those battles were fought, and now there’s damned little product differentiation. Mobile phones will fall into that pattern, and people will be less and less Apple fanbois or Samsung fanbois. We’ll all just upgrade to a newer version of whatever phone is cheap and expect to always have the increased-spec hardware, the higher resolution, better quality, all that jazz. It is one more case where everything old is new again. My suspicion is we’ll see this happen again when a true VR goggle hits the market, with real competitors attempting to gain advantage through technical superiority or more research and development. Bring on the VR wars, I say.


Written by Eric Likness

February 17, 2014 at 3:00 pm

Posted in art, gpu, mobile


nVidia G-Sync video scaler on the horizon


http://www.eetimes.com/author.asp?section_id=36&doc_id=1320783

nVidia is making a new bit of electronics hardware to be added to LCD displays made by third-party manufacturers. The idea is to send syncing data to the display to let it know when a frame has been rendered by the 3D hardware on the video card. Having this bit of extra electronics will smooth out the high-rez, high-frame-rate games played by elite desktop gamers.
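As a rough illustration of what that syncing buys you (a toy model of my own, not anything from nVidia’s implementation): on a fixed 60 Hz display, a frame that misses the 16.7 ms refresh window has to wait for the next one, while a variable-refresh display simply scans out when the frame is ready.

    /* Toy comparison: frame delivery time on a fixed 60 Hz display vs. a
       variable-refresh (G-Sync-style) display. Compile with: cc gsync.c -lm */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double refresh_ms = 1000.0 / 60.0;                       /* 16.7 ms scanout slots */
        double render_ms[] = { 12.0, 15.0, 18.0, 20.0, 25.0 };   /* per-frame GPU times   */

        for (int i = 0; i < 5; i++) {
            /* Fixed refresh: the finished frame waits for the next vblank slot. */
            double fixed = ceil(render_ms[i] / refresh_ms) * refresh_ms;
            /* Variable refresh: the display waits for the frame instead
               (ignoring the panel's own minimum/maximum refresh limits). */
            double variable = render_ms[i];
            printf("render %4.1f ms -> fixed v-sync %4.1f ms, variable %4.1f ms\n",
                   render_ms[i], fixed, variable);
        }
        return 0;
    }

That jump from 16.7 ms to 33.3 ms on a barely-missed deadline is the stutter elite players notice; smoothing it out is the whole point of putting this logic in the display.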

It would be cool to also see this adopted for the game console market, meaning TV manufacturers could use the same idea and make your PS4 and Xbox One play smoother as well. It’s a chicken-and-egg situation, though: unless someone like Steam or another manufacturer tries to push this out to a wider audience, it will get stuck as a niche product for the highest end of high-end PC desktop gamers. But it is definitely a step in the right direction and helps push us further away from the old VGA standard of years ago. Video cards AND displays should both be smart; there’s no reason, no excuse, not to have them both be somewhat more aware of their surroundings and coordinate things. And if AMD decides it too needs this capability, how soon after that will both AMD and nVidia have to come to the table and get a standard going? I hope that would happen sooner rather than later, and that too would help drive this technology to a wider audience.


Written by carpetbomberz

February 12, 2014 at 3:00 pm

John Carmack – Oculus Rift two great tastes…

Oculus Rift VR screen view (Photo credit: tribehut)

http://www.theregister.co.uk/2013/11/22/carmack_goes_to_oculus_rift/

id Software has formally announced that Carmack has left the building. Prior to this week he was on sabbatical from id, doing consulting/advisory work for the folks putting the Oculus Rift together. The work being done now is to improve the refresh speed of the video screens. That’s really the last big hurdle to clear before this set of VR goggles and motion sensors hits the open market. The beta units are still out there, and people are experimenting with the Oculus versions of some first-person shooters, but the revolution is not here,… yet.

Oculus will need to pull off some optimizations for the headset. Among the outstanding issues are not just the refresh rate but also which display technology will be chosen. OLED is still under consideration over backlit LCDs, but that may be what it takes to solve the refresh problem. Latency in the frame rate of the displays is causing motion sickness in the current crop of beta testers of the Oculus Rift VR headset. The amount they’re attempting to speed things up by is roughly half the fastest frame refresh they can achieve now. Hopefully this problem can be engineered out of the next revision of the beta units.
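For a sense of scale (my arithmetic, not Oculus’s published numbers): the per-frame budget shrinks quickly as the refresh rate climbs, and going from a 60 Hz panel toward the 90-120 Hz range roughly halves the time each frame sits in front of your eyes, which is where the "roughly half" figure above comes from.

    /* Frame time at common refresh rates -- the budget a VR headset has before
       the displayed image falls visibly behind head movement. */
    #include <stdio.h>

    int main(void) {
        int rates_hz[] = { 60, 75, 90, 120 };
        for (int i = 0; i < 4; i++) {
            printf("%3d Hz -> %5.2f ms per frame\n", rates_hz[i], 1000.0 / rates_hz[i]);
        }
        return 0;
    }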

Written by Eric Likness

December 12, 2013 at 3:00 pm

Apple, Google Just Killed Portable GPS Devices | Autopia | Wired.com

Note: this is a draft of an article I wrote back in June, when Apple announced it was going to favor its own Maps app over Google Maps and take Google Maps out of the App Store altogether. This blog went on hiatus just two weeks after that, and a whirlwind of staff changes occurred at Apple as a result of the iOS Maps debacle. Top people have been let go, not the least of which was Scott Forstall, in some people’s view the heir apparent to Steve Jobs. He was not popular, very much a jerk, and when asked by Tim Cook to co-sign the mea culpa Apple put out over its embarrassment about the performance and quality of iOS Maps, Scott wouldn’t sign it. So goodbye Scott, hello Google Maps. Somehow Google and Apple are now in a period of detente over Maps, and Google Maps has returned to the App Store. Who knew so much could happen in six months, right?

Garmin told Wired in a statement: “We think that there is a market for smartphone navigation apps, PNDs [Personal Navigation Devices] and in-dash navigation systems as each of these solutions has their own advantages and use case limitations and ultimately it’s up to the consumer to decide what they prefer.”

via Apple, Google Just Killed Portable GPS Devices | Autopia | Wired.com.

That’s right, mapping and navigation are just one more app in a universe of software you can run on your latest-generation iPod Touch or iPhone. I suspect that Maps will only be available on the iPhone, as that was a requirement previously placed on the first-gen Maps app on iOS. It would be nice if there were a lower-threshold entry point for participation in the Apple Maps universe.

But I do hear one or two criticisms regarding Apple’s attempt to go its own way. Google has a technology and data-set lead (you know, all those cars driving around photographing everything?). Apple has to buy that data from others; it isn’t going to start from scratch and attempt to re-create Google’s Street View data set. Which means Street View-style imagery won’t be a feature Maps has for probably quite some time. Android’s own Google Maps app includes turn-by-turn navigation AND Street View built right in. It’s just there. How cool is that? You get the same experience on the mobile device as the one you get working in a web browser on a desktop computer.

In this battle between Google and Apple, the pure-play personal navigation device (PND) manufacturers are losing share. I glibly suggested in a tweet yesterday that Garmin needs to partner up with Apple and help out with its POI and map datasets so that both could potentially benefit. It would be cool if a partnership could be struck that gave Apple features without necessarily stealing market share from the PNDs, but somehow raised all boats equally. Maybe a partnership to create a Street View-like add-on for everyone’s mapping datasets would be a good start. That would help level the playing field between Google and the rest of the world.

Written by Eric Likness

December 15, 2012 at 12:22 pm

Posted in google, gpu, mobile, navigation, technology


The wretched state of GPU transcoding – ExtremeTech

The spring 2005 edition of ExtremeTech magazine (Photo credit: Wikipedia)

For now, use Handbrake for simple, effective encodes. Arcsoft or Xilisoft might be worth a look if you know you’ll be using CUDA or Quick Sync and have no plans for any demanding work. Avoid MediaEspresso entirely.

via The wretched state of GPU transcoding – Slideshow | ExtremeTech, by Joel Hruska.

Joel Hruska does a great survey of GPU-enabled video encoders. He even goes back to the original Avivo and Badaboom encoders put out by AMD and nVidia when they were promoting GPU-accelerated video encoding. Sadly, the results don’t live up to the hype. Even Intel’s most recent competitor in the race, Quick Sync, is left wanting. HandBrake appears to be the best option for most people, and the most reliable and repeatable in the results it gives.

Ideally, the maintainers of the HandBrake project might get a boost from a fork of the source code that adds Intel Quick Sync support. But there’s little indication right now that anyone is interested in a proprietary Intel technology like Quick Sync, as expressed in this article from AnandTech. OpenCL seems like a more attractive option for the open source community at large, so the OpenCL/HandBrake development is at least a little encouraging. Still, as Joel Hruska points out, the CPU is the best option for encoding high quality at smaller frame sizes; it just beats the pants off all the GPU-accelerated options available to date.


Written by Eric Likness

June 14, 2012 at 3:00 pm

AnandTech – Testing OpenCL Accelerated Handbrake with AMD’s Trinity


AMD, and NVIDIA before it, has been trying to convince us of the usefulness of its GPUs for general purpose applications for years now. For a while it seemed as if video transcoding would be the killer application for GPUs, that was until Intel’s Quick Sync showed up last year.

via AnandTech – What We’ve Been Waiting For: Testing OpenCL Accelerated Handbrake with AMD’s Trinity.

There’s a lot to talk about when it comes to accelerated video transcoding, really, not the least of which is HandBrake’s general dominance for anyone doing small-scale size reductions of their DVD collections for transport on mobile devices. We owe it all to the open source x264 codec and all the programmers who have contributed to it over the years, standing on one another’s shoulders and allowing us to effortlessly encode or transcode gigabytes of video down to manageable sizes. But Intel has attempted to rock the boat by inserting itself into the fray, tooling its Quick Sync technology to accelerate the compression and decompression of video frames. However, it is a proprietary path pursued by a few small-scale software vendors, and it prompts the question: when is open source going to benefit from proprietary Intel Quick Sync technology? Maybe it’s going to take a long time. Maybe it won’t happen at all. Luckily for the HandBrake users in the audience, some attempt is now being made to re-engineer the x264 encoder to take advantage of any OpenCL-compliant hardware on a given computer.


Written by Eric Likness

June 4, 2012 at 3:00 pm
