Posts Tagged ‘nVidia’
The president of VMware said after seeing it (and not knowing what he was seeing), “Wow, what movie is that?” And that’s what it’s all about — suspension of disbelief. You’ve heard me talk about this before, and we’re almost there. I famously predicted at a prestigious event three years ago that by 2015 there would be no more human actors, it would be all CG. Well I may end up being 52% or better right (phew). - Jon Peddie
via Nvidia Pulls off ‘Industrial Light and Magic’-Like Tools | EE Times. Jon Peddie has covered the 3D animation, modeling and simulation market for YEARS. And when you can get a rise out of him like the quote above from EE Times, you have accomplished something. Between NVidia’s hardware and now its GameWorks suite of software modeling tools, the company has in effect created Digital Cinema. Jon goes on to talk about how the digital simulation demo convinced a VMware exec it was real live actors on a set. That’s how good things are getting.
And the comparison of NVidia’s off-the-shelf toolkits to ILM is also telling. No longer does one need computer scientists, physicists and mathematicians on staff to help model and simulate things like particle systems and hair. It’s all there in the toolkit, along with ocean waves and smoke, ready to use. Putting these tools into the hands of users will herald a new era in which the best algorithms and tools are no longer esoteric, high-end, exclusive territory.
nVidia GameWorks by itself will be useful to some people, but re-packaging it in a way that embeds it in an existing workflow will widen adoption, whether that’s for a casual user or a student in a university 3D modeling and animation course. The follow-on to this is getting the APIs published so that current off-the-shelf tools like AutoCAD, 3ds Max, Blender, Maya, etc. can tap into it. Once the favorite tools can bring up a dialog box and start adding a particle system or full ray tracing to a scene at this level of quality, things will really start to take off. The other possibility is to flesh out GameWorks into more of a standalone, easily adopted package that creatives could adopt and eventually migrate to over time. That would be another path to using GameWorks as an end-to-end digital cinema creation package.
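To make the “particle systems” idea concrete, here is a toy sketch of the kind of per-frame update a particle simulation performs. This is not the GameWorks API, just the underlying idea; toolkits like these run far more elaborate versions of this loop on the GPU, and every constant below is made up for illustration.

```python
# Toy particle-system step: each particle carries position and velocity,
# and each frame we integrate gravity into velocity, then velocity into
# position. Effects packages do this (plus collisions, turbulence, etc.)
# for millions of particles per frame.
GRAVITY = -9.8   # m/s^2, pulling particles down
DT = 1 / 60      # one frame at 60 fps

def step(particles, dt=DT):
    """Advance each (x, y, vx, vy) particle by one frame."""
    out = []
    for (x, y, vx, vy) in particles:
        vy += GRAVITY * dt                       # gravity changes velocity
        out.append((x + vx * dt, y + vy * dt, vx, vy))
    return out

# One particle launched up and to the right, advanced three frames.
p = [(0.0, 0.0, 1.0, 5.0)]
for _ in range(3):
    p = step(p)
print(p)
```

Scaling this naive loop to hair, smoke, and ocean waves is exactly the work the toolkit saves you.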
nVidia is making a new bit of electronics hardware to be added to LCD displays made by third-party manufacturers. The idea is to send syncing data to the display to let it know when a frame has been rendered by the 3D hardware on the video card. This bit of extra electronics will smooth out the high-rez, high-frame-rate games played by elite desktop gamers.
It would be cool to see this adopted for the game console market as well, meaning TV manufacturers could use the same idea and make your PS4 and Xbox One play smoother too. It’s a chicken-and-egg situation, though: unless someone like Steam or another manufacturer pushes this out to a wider audience, it will get stuck as a niche product for the highest end of high-end PC desktop gamers. But it is definitely a step in the right direction and helps push us further away from the old VGA standard of years past. Video cards AND displays should both be smart; there’s no reason, no excuse, for them not to be somewhat more aware of each other and coordinate things. And if AMD decides it too needs this capability, how soon after that will both AMD and nVidia have to come to the table and get a standard going? I hope that happens sooner rather than later, as that too would drive this technology to a wider audience.
For now, use Handbrake for simple, effective encodes. Arcsoft or Xilisoft might be worth a look if you know you’ll be using CUDA or Quick Sync and have no plans for any demanding work. Avoid MediaEspresso entirely.
via By Joel Hruska @ ExtremeTech The wretched state of GPU transcoding – Slideshow | ExtremeTech.
Joel Hruska does a great survey of GPU-enabled video encoders. He even goes back to the original Avivo and Badaboom encoders put out when AMD and nVidia were first promoting GPU-accelerated video encoding. Sadly the results don’t live up to the hype. Even Intel’s most recent competitor in the race, Quick Sync, is left wanting. HandBrake appears to be the best option for most people and the most reliable and repeatable in the results it gives.
Ideally the maintainers of the HandBrake project might get a boost from a fork of the source code that adds Intel Quick Sync support. But as this article from AnandTech suggests, not everyone is interested in proprietary Intel technology like Quick Sync. OpenCL seems like a more attractive option for the Open Source community at large, so the OpenCL/HandBrake development is at least a little encouraging. Still, as Joel Hruska points out, the CPU is still the best option for encoding high quality at smaller frame sizes; it beats the pants off all the GPU-accelerated options available to date.
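For reference, a hedged sketch of what the recommended plain CPU-bound HandBrake encode looks like, driven from Python. The filenames are placeholders and the quality value is just a common default; `-e x264` selects the software H.264 encoder and `-q 20` a constant-quality encode.

```python
# Build (and optionally run) a HandBrakeCLI command for a CPU x264 encode.
import shlex

infile, outfile = "input.mkv", "output.mp4"   # placeholder filenames
cmd = f"HandBrakeCLI -i {infile} -o {outfile} -e x264 -q 20"
args = shlex.split(cmd)
print(args)

# To actually run it, HandBrakeCLI must be on your PATH:
# import subprocess
# subprocess.run(args, check=True)
```

Keeping the command as a list avoids shell-quoting surprises once real filenames with spaces show up.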
- AnandTech – Testing OpenCL Accelerated Handbrake with AMD’s Trinity (carpetbomberz.com)
- The Wretched State of GPU Transcoding (tech.slashdot.org)
- Lucid Demonstrates XLR8 Frame Rate Boosting Technology (tomshardware.com)
And with clock speeds topped out and electricity use and cooling being the big limiting issue, Scott says that an exaflops machine running at a very modest 1GHz will require one billion-way parallelism, and parallelism in all subsystems to keep those threads humming.
Interesting write-up of a blog entry from nVidia’s chief of super-computing, Steve Scott, including his thoughts on scaling up to an exascale supercomputer. I’m surprised at how power-efficient a GPU is for floating point operations. I’m amazed at these companies’ ability to measure power consumption down to the single-operation level. Microjoules and picojoules are worlds apart from one another, and here’s the illustration:
1 microjoule is one millionth of a joule, or 1×10^-6 J (six decimal places), whereas 1 picojoule is 1×10^-12 J, twice as many decimal places, a total of 12. That is a HUGE difference: 6 orders of magnitude in efficiency from an electrical consumption standpoint. Steve Scott estimates that to get to exascale, any hybrid CPU/GPU supercomputer would need GPUs with one order of magnitude higher efficiency in joules per floating point operation (FLOP), or 1×10^-13 J, one whole decimal place better. To borrow a cliché, supercomputer manufacturers have their work cut out for them. The way forward is efficiency, and the GPU has the edge per operation; all they need do is improve that one decimal place to get closer to the exascale league of super-computing.
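The arithmetic is easy to check for yourself: sustained power is just operations per second times energy per operation. A quick sketch using illustrative per-FLOP energies in the range discussed above:

```python
# Back-of-envelope power draw for an exascale machine:
# power (watts) = (FLOP/s) x (joules per FLOP).
EXAFLOPS = 1e18  # one exaFLOP/s, operations per second

def power_watts(joules_per_flop, flops=EXAFLOPS):
    """Sustained power in watts for a machine at the given efficiency."""
    return flops * joules_per_flop

print(power_watts(1e-12))  # 1 pJ/FLOP   -> about 1e6 W, a megawatt
print(power_watts(1e-13))  # 0.1 pJ/FLOP -> about 1e5 W, 100 kilowatts
```

Each decimal place of per-operation efficiency knocks a full factor of ten off the machine’s electric bill, which is why that last order of magnitude matters so much.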
Why is exascale important to the scientific community at large? In some fields there are never enough cycles per second to satisfy the scale of the computations being done. Models of systems can be created, but the simulations they provide may not have enough fine-grained detail. A weather model simulating a period of time in the future needs to know the current conditions before it can start the calculation, but the ‘resolution’ of those conditions is what limits accuracy over time, especially as small errors get amplified by each successive cycle of calculation. One way to limit the damage from these small errors is to increase the resolution, shrinking the area to which you assign a single ‘current condition’. So instead of 10-mile resolution (meaning each block on the face of the planet is 10 miles square), you switch to 1-mile resolution. Any error in a one-mile-square patch is less likely to cause huge errors in the future weather prediction. But now you have to calculate 100x the number of squares compared to the previous model at 10-mile resolution, since each 10-mile square subdivides into 100 one-mile squares. That’s probably the easiest way to see how demands on the computer grow as people increase the resolution of their weather prediction models. But it’s not limited to weather. The same horsepower could simulate a nuclear weapon aging over time, or decrypt foreign messages intercepted by NSA satellites, where the speed of the computer allows more brute-force attempts at decrypting any captured message.
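The scaling above can be sanity-checked in a few lines; the region size here is hypothetical, and the grid is treated as a flat square for simplicity.

```python
# Count grid cells for a square region at two different resolutions.
def cells(region_miles, cell_miles):
    """Number of square cells covering a region_miles x region_miles area."""
    per_side = region_miles / cell_miles
    return int(per_side) ** 2

region = 1000                 # a hypothetical 1000 x 1000 mile region
coarse = cells(region, 10)    # 10-mile squares
fine = cells(region, 1)       # 1-mile squares
print(coarse, fine, fine // coarse)  # 10000 1000000 100
```

Halving the cell edge quadruples the cell count, and a 10x finer grid means 100x the cells (more still once you add vertical layers and shorter time steps), which is where the appetite for exascale cycles comes from.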
In spite of all the gains to be had with an exascale computer, you still have to program the bloody thing to work with your simulation. And that’s really the gist of this article: there is no free lunch in High Performance Computing. The level of knowledge of the hardware required to get anything like the maximum theoretical speed is far higher than one would think. There’s no magic bullet or ‘re-compile’ button that will get your old software running smoothly on an exascale computer. More likely you and a team of the smartest scientists will work for years to tailor your simulation to the hardware you want to run it on. And therein lies the rub: the hardware alone isn’t going to get you the extra performance.
- ExaFLOP computers: Faster than 50 million laptops – the race to go exascale (talesfromthelou.wordpress.com)
- Exascale: The Faraway Frontier of Computing? (lcitnetworks.wordpress.com)
- Nvidia: No magic compilers for HPC coprocessors (go.theregister.com)
Microsoft hasn’t been granted the patent despite it having been first filed in September 2004, but it may face challenges to the claims from companies that began using GPU video encoding independently after the patent application was filed but before it was published.
Given that it took nVidia quite a while to get developers working on shipping products that took advantage of its programmable GPUs (the CUDA architecture), it’s a surprise to me that Microsoft even filed a patent on this. Previously I have re-posted some press releases surrounding the products known as Avivo (from ATI/AMD) and Badaboom, which were designed to speed up this very thing. You rip a DVD and you want to save it to a smaller file size, or one that’s compatible with a portable video player. But it takes forever on your computer, so what’s a person to do? Well, thanks to nVidia and product X, you just add a little software and speed up that transcoding to .mp4 format. It’s like discovering your car can do something you didn’t know was possible, like turning into a Corvette on straight, flat roadways. Not all roads are straight or flat, but when they are: Boom! You can go as fast as you want. That’s what accelerated video encoding is like. It’s specialized, but when you use it, it really works and it really speeds things up. I think part of why Microsoft wants to enforce this is the hope of collecting licensing fees, but part of it is also maintaining its bullying prowess on the desktop computer. They own the OS, right? So why not remind everyone that were it not for their generosity and research labs we would all be using pocket calculators to do our taxes. This is a premier example of how patents stifle innovation, and I would love to see this patent go unenforced or be struck down.
There’s a new trend in personal home video: companies are lining up to provide aftermarket tools to process and correct camera phone video. Pure Digital’s Flip camera line has some tools available to do minor cutting of video clips and publish them to sharing websites. All of which presents an entrepreneurial opportunity to provide paid tools that help improve poorly shot video.
Some tools are provided within video editing suites like Apple’s iMovie (which corrects camera shake). Now on the PC there are two new products, one of which is designed to take advantage of nVidia GPU acceleration of parallel programming. That product is called vReveal.
While vReveal works with Windows XP or Vista (and not with Macs), it will make its enhancements much faster if the machine contains a recent graphics processing card from Nvidia, Dr. Varah said. Nvidia is an investor and a marketing partner with vReveal; a specific list of cards is at vReveal’s Web site.