Carpet Bomberz Inc.

Scouring the technology news sites every day

Posts Tagged ‘nVidia’

AMD Clears the Air Around Project FreeSync


A/V connectors currently on the market

AMD has been making lots of noise about Project FreeSync these past few months, but has also left plenty of questions unanswered.

via AMD Clears the Air Around Project FreeSync.

AMD’s FreeSync and nVidia’s G-Sync are both attempts to get smoother 3D rendering out of today’s graphics cards, no matter what part of the market those cards are aimed at. But as with other “features” introduced by graphics card manufacturers, there’s now a drive to set a standard common to the card makers and, hopefully, to the display panel manufacturers as well.

Adaptive-Sync is the grail AMD is searching, promoting and lobbying for going forward. It’s not too manufacturer-specific and is just open enough to be adopted by most folks. The benefits are real, too: as the article states, Tom’s Hardware has tried out nVidia’s G-Sync and it works. That’s reassuring, given that these “features” sometimes turn out to be marketing talking points rather than big revolutionary strides in engineering.

AMD has been successful so far in pushing adoption by the folks who make RAMDACs and video scaler circuits for the display manufacturers. That’s the real heavy lifting in driving the standard. And with some slight delays, you may see the display panel manufacturers adopt the Adaptive-Sync standard within the next year.

 

Written by Eric Likness

July 28, 2014 at 3:00 pm

Posted in gpu, h.264


Nvidia Pulls off ‘Industrial Light and Magic’-Like Tools | EE Times


The president of VMware said after seeing it (and not knowing what he was seeing), “Wow, what movie is that?” And that’s what it’s all about — dispersion of disbelief. You’ve heard me talk about this before, and we’re almost there. I famously predicted at a prestigious event three years ago that by 2015 there would be no more human actors, it would be all CG. Well I may end up being 52% or better right (phew).    – Jon Peddie

via Nvidia Pulls off ‘Industrial Light and Magic’-Like Tools | EE Times.

Jon Peddie has covered the 3D animation, modeling and simulation market for YEARS, and when you can get a rise out of him like the quote above, you have accomplished something. Between NVidia’s hardware and now its GameWorks suite of software modeling tools, you have, in a phrase, digital cinema. Jon goes on to tell how the digital simulation demo convinced a VMware exec that he was watching real live actors on a set. That’s how good things are getting.

And the comparison of ILM to NVidia’s off-the-shelf toolkits is also telling. No longer does one need computer scientists, physicists and mathematicians on staff to model and simulate things like particle systems and hair. It’s all there in the toolkit, ready to use, along with ocean waves and smoke. Putting these tools into the hands of users heralds a new era in which the best algorithms and tools are no longer esoteric, high-end, exclusive-access things.
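To make that concrete, here’s a toy sketch in Python (my own made-up constants and particle counts) of the kind of particle-system update loop a middleware kit packages up behind a menu. GameWorks’ real solvers run on the GPU and are far more sophisticated; this just shows the shape of the problem:

    import numpy as np

    # Toy particle-system step: gravity plus simple drag on N particles.
    N = 10_000
    rng = np.random.default_rng(42)
    pos = rng.uniform(0.0, 1.0, size=(N, 3))    # positions, metres
    vel = rng.normal(0.0, 0.5, size=(N, 3))     # velocities, metres/second
    GRAVITY = np.array([0.0, -9.81, 0.0])
    DRAG = 0.02
    DT = 1.0 / 60.0                             # one 60 fps frame

    def step(pos, vel):
        """Advance every particle by one frame (semi-implicit Euler)."""
        vel = vel + GRAVITY * DT - DRAG * vel * DT
        pos = pos + vel * DT
        # Bounce off the ground plane, losing some energy.
        below = pos[:, 1] < 0.0
        pos[below, 1] = 0.0
        vel[below, 1] *= -0.6
        return pos, vel

    for _ in range(60):                         # one second of animation
        pos, vel = step(pos, vel)
    print(pos.mean(axis=0))                     # rough centre of the cloud

Every particle updates independently, which is why this kind of workload moves so naturally onto a GPU.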

nVidia GameWorks by itself will be useful to some people, but re-packaging it in a way that embeds it in an existing workflow will widen adoption, whether that’s for a casual user or a student in a university 3D modeling and animation course. The follow-on is getting the APIs published so that current off-the-shelf tools like AutoCAD, 3D Studio Max, Blender, Maya, etc. can tap into this. Once the favorite tools can bring up a dialog box and add a particle system or full ray tracing to a scene at this level of quality, things will really start to take off. The other possibility is to flesh out GameWorks into a standalone, easily adopted package that creatives could pick up and migrate to over time. That would be another path to using GameWorks as an end-to-end digital cinema creation package.


Written by Eric Likness

April 10, 2014 at 3:00 pm

nVidia G-Sync video scaler on the horizon


http://www.eetimes.com/author.asp?section_id=36&doc_id=1320783

nVidia is making a new bit of electronics hardware to be added to LCD displays made by third-party manufacturers. The idea is to send syncing data to the display to let it know when a frame has been rendered by the 3D hardware on the video card. This bit of extra electronics will smooth out the high-res, high-frame-rate games played by elite desktop gamers.
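Here’s a back-of-the-envelope way to see the win. This little Python toy (my numbers, not nVidia’s protocol) compares how long a finished frame sits in the buffer waiting on a fixed 60 Hz panel versus a panel that refreshes the moment the frame is done:

    import random

    # Toy model of frame pacing: a fixed 60 Hz panel shows a finished frame
    # only at the next refresh tick, while an adaptive panel (the G-Sync idea)
    # refreshes as soon as the GPU says the frame is done. Render times are
    # made-up numbers for illustration.
    REFRESH = 1000.0 / 60.0          # fixed panel tick, ms
    random.seed(1)
    render_times = [random.uniform(12.0, 24.0) for _ in range(1000)]  # ms/frame

    t = 0.0
    fixed_waits = []
    for r in render_times:
        t += r                                      # frame finishes at time t
        next_tick = ((t // REFRESH) + 1) * REFRESH  # next fixed refresh
        fixed_waits.append(next_tick - t)           # frame sits in the buffer

    print(f"fixed 60 Hz: mean wait {sum(fixed_waits)/len(fixed_waits):.1f} ms")
    print("adaptive   : ~0 ms (panel refreshes on frame completion)")

On average a finished frame waits several milliseconds for the fixed tick, and the wait varies frame to frame; that variation is the stutter the syncing hardware is meant to remove.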

It would be cool to see this adopted in the game console market as well, meaning TV manufacturers could use the same idea and make your PS4 and Xbox One play smoother too. It’s a chicken-and-egg situation, though: unless someone like Steam or another manufacturer pushes this out to a wider audience, it will get stuck as a niche product for the highest end of the high-end PC desktop gaming market.

But it is definitely a step in the right direction, and it pushes us further away from the old VGA standard of years ago. Video cards AND displays should both be smart; there’s no reason, no excuse, not to have them both be somewhat more aware of their surroundings and coordinate things. And if AMD decides it too needs this capability, how soon after that will AMD and nVidia have to come to the table and get a standard going? I hope that happens sooner rather than later, and that too would drive this technology to a wider audience.


Written by carpetbomberz

February 12, 2014 at 3:00 pm

The wretched state of GPU transcoding – ExtremeTech

The spring 2005 edition of ExtremeTech magazine (Photo credit: Wikipedia)

For now, use Handbrake for simple, effective encodes. Arcsoft or Xilisoft might be worth a look if you know you’ll be using CUDA or Quick Sync and have no plans for any demanding work. Avoid MediaEspresso entirely.

via Joel Hruska @ ExtremeTech: The wretched state of GPU transcoding – Slideshow | ExtremeTech.

Joel Hruska does a great survey of GPU-enabled video encoders. He even goes back to the original Avivo and Badaboom encoders put out by AMD and nVidia when they were first promoting GPU-accelerated video encoding. Sadly, the results don’t live up to the hype. Even Intel’s most recent entrant in the race, Quick Sync, is left wanting. HandBrake appears to be the best option for most people, and the most reliable and repeatable in the results it gives.

Ideally the maintainers of the HandBrake project might get a boost from a fork of the source code that adds Intel Quick Sync support. There’s no indication that everyone is interested in proprietary Intel technology like Quick Sync, as expressed in this article from AnandTech, and OpenCL seems like a more attractive option for the open source community at large. So the OpenCL/HandBrake development is at least a little encouraging. Still, as Joel Hruska points out, the CPU is still the best option for encoding high quality at smaller frame sizes; it just beats the pants off all the GPU-accelerated options available to date.
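For anyone who wants to reproduce the CPU baseline themselves, HandBrake ships a command-line front end. Here’s a minimal sketch wrapping it from Python; the file names are placeholders, and the -e x264 / -q flags are standard HandBrakeCLI options for the software encoder and constant-quality target:

    import subprocess
    import time

    # Time a plain CPU (x264) encode through HandBrakeCLI.
    # "input.mkv" and "output.mp4" are placeholder paths.
    cmd = [
        "HandBrakeCLI",
        "-i", "input.mkv",
        "-o", "output.mp4",
        "-e", "x264",     # software encoder, the CPU baseline
        "-q", "20",       # constant-quality target
    ]

    start = time.monotonic()
    subprocess.run(cmd, check=True)
    print(f"encode took {time.monotonic() - start:.1f} s")

Run the same clip through a GPU-accelerated encoder at matching settings and compare both the wall-clock time and, more importantly, the visual quality of the output.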


Written by Eric Likness

June 14, 2012 at 3:00 pm

Nvidia: No magic compilers for HPC coprocessors • The Register


And with clock speeds topped out and electricity use and cooling being the big limiting issue, Scott says that an exaflops machine running at a very modest 1GHz will require one billion-way parallelism, and parallelism in all subsystems to keep those threads humming.

via Nvidia: No magic compilers for HPC coprocessors • The Register.

Interesting write-up of a blog entry from nVidia‘s supercomputing chief, including his thoughts on scaling up to an exascale supercomputer. I’m surprised at how power-efficient a GPU is for floating point operations, and amazed at these companies’ ability to measure power consumption down to the single-operation level. Microjoules and picojoules are worlds apart from one another, and here’s the illustration:

1 microjoule is one millionth of a joule, or 1×10⁻⁶ (six decimal places), whereas 1 picojoule is 1×10⁻¹², twice as many decimal places for a total of 12. So that is a HUGE difference: 6 orders of magnitude in efficiency from an electrical consumption standpoint. The nVidia author, Steve Scott, estimates that to reach exascale, any hybrid CPU/GPU machine would need GPUs with one more order of magnitude of efficiency in joules per floating point operation (FLOP), or 1×10⁻¹³, one whole decimal place better. To borrow a cliché, supercomputer manufacturers have their work cut out for them. The way forward is efficiency, the GPU has the edge per operation, and all they need do is improve efficiency by that one decimal place to get closer to the exascale league of supercomputing.
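The arithmetic is easy to check. Here it is worked out in a few lines of Python, using the joules-per-FLOP figures above:

    # Power draw of a 1-exaflops machine (1e18 FLOP/s) at different
    # energy-per-FLOP efficiencies.
    EXAFLOPS = 1e18  # floating point operations per second

    for label, joules_per_flop in [
        ("1 microjoule/FLOP  (1e-6 J) ", 1e-6),
        ("1 picojoule/FLOP   (1e-12 J)", 1e-12),
        ("0.1 picojoule/FLOP (1e-13 J)", 1e-13),
    ]:
        watts = EXAFLOPS * joules_per_flop
        print(f"{label}: {watts:.0e} W ({watts / 1e6:.4g} MW)")

At a microjoule per FLOP an exaflops machine would draw a terawatt, which is absurd; at a picojoule it’s a megawatt, and at 10⁻¹³ joules it’s a tenth of that. That’s why the whole exascale conversation turns on energy per operation.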

Why is exascale important to the scientific community at large? In some fields there are never enough cycles per second to satisfy the scale of the computations being done. Models of systems can be created, but the simulations they run may not have fine-grained enough detail. A weather model simulating some period in the future needs to know the current conditions before it can start calculating, and the ‘resolution’, the fine-grained detail of those conditions, is what limits accuracy over time, especially when small errors get amplified by each successive cycle of calculation. One way to limit the damage from these small errors is to increase the resolution, shrinking the patch of land to which you assign a single ‘current condition’. So instead of 10 miles of resolution (meaning each block on the face of the planet is 10 miles square), you switch to 1-mile resolution. Any error in a one-mile-square patch is less likely to cause huge errors in the future weather prediction. But now you have to calculate 100× the number of squares compared to the previous model at 10-mile resolution: 10× along each edge, squared. That’s probably the easiest way to see how demands on the computer grow as people increase the resolution of their weather models. And it’s not limited to weather. The same power could simulate a nuclear weapon aging over time, or decrypt foreign messages intercepted by NSA satellites, where the speed of the computer allows more brute-force attempts at decrypting any message they capture.
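The grid arithmetic is worth working out, because it’s easy to underestimate. A quick sketch (the shrinking-timestep factor is my addition, a standard caveat for these models, not something from the article):

    # Grid-size arithmetic from the weather example: refining the cell edge
    # multiplies the number of surface cells by the square of the ratio.
    coarse_edge_miles = 10
    fine_edge_miles = 1

    ratio = coarse_edge_miles / fine_edge_miles
    cells_factor_2d = ratio ** 2          # 100x the surface cells
    print(f"{ratio:.0f}x finer edge -> {cells_factor_2d:.0f}x the cells")

    # The timestep usually shrinks with cell size too (a stability
    # requirement), so total work grows by roughly another factor of `ratio`.
    print(f"with a {ratio:.0f}x shorter timestep: ~{cells_factor_2d * ratio:.0f}x the work")

So a 10× improvement in resolution can mean something closer to a 1000× increase in computation, which is how you get from today’s machines to needing exascale.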

Nvidia Riva TNT2 M64 GPU (Photo credit: Wikipedia)

In spite of all the gains to be had with an exascale computer, you still have to program the bloody thing to work with your simulation. And that’s really the gist of this article: there is no free lunch in High Performance Computing. The level of hardware knowledge required to get anything like the maximum theoretical speed is much higher than one would think. There’s no magic bullet or ‘re-compile’ button that will get your old software running smoothly on an exascale computer. More likely you and a team of the smartest scientists will work for years to tailor your simulation to the hardware you want to run it on. And therein lies the rub: the hardware alone isn’t going to get you the extra performance.

Written by Eric Likness

April 9, 2012 at 3:00 pm

Microsoft GPU video encoding patent could hurt creatives | Electronista

Microsoft hasn’t been granted the patent despite it having been first filed in September 2004, but it may face challenges to the claims from companies that began using GPU video encoding independently after the patent application was filed but before it was published.

via Microsoft GPU video encoding patent could hurt creatives | Electronista.

Given that it took nVidia quite a while to get developers shipping products that took advantage of its programmable GPUs (the CUDA architecture), it’s a surprise to me that Microsoft even filed a patent on this. Previously I have re-posted press releases for the products known as Avivo (from ATI/AMD) and Badaboom, which were designed to speed up this very thing: you rip a DVD and want to save it at a smaller file size, or in a format compatible with a portable video player, but it takes forever on your computer. So what’s a person to do? Well, thanks to nVidia and product X, you just add a little software and speed up that transcoding to .mp4 format. It’s like discovering your car can do something you didn’t know was possible, like turning into a Corvette on straight, flat roadways. Not all roads are straight or flat, but when they are: Boom! You can go as fast as you want. That’s what accelerated video encoding is like. It’s specialized, but when you use it, it really works and it really speeds things up.

I think part of why Microsoft wants to enforce this is the hope of collecting licensing fees, but part of it is also maintaining its bullying prowess on the desktop computer. They own the OS, right? So why not remind everyone that were it not for their generosity and research labs, we would all be using pocket calculators to do our taxes. This is a premier example of how patents stifle innovation, and I would love to see this patent never enforced, or struck down.

Written by Eric Likness

October 18, 2010 at 3:00 pm

Posted in vague interests


vReveal uses GPU to accelerate video fixes

Before and After

There’s a new trend in personal home video: companies are lining up to provide aftermarket tools to process and correct camera-phone video. Pure Digital’s Flip camera line has some tools available to do minor cutting of video clips and publish them to sharing websites. All of which presents an entrepreneurial opportunity to provide paid tools that help improve poorly shot video.

Some tools are provided within video editing suites, like Apple’s iMovie (which corrects camera shake). Now on the PC there are two new products, one of which is designed to take advantage of nVidia GPU acceleration of parallel programming. That product is called vReveal.
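As an illustration of why this kind of work maps so well onto a GPU, here’s a toy unsharp-mask sharpen on a single frame in Python/NumPy. It’s a generic filter, not vReveal’s actual algorithm; the point is that every output pixel depends only on a small neighborhood, exactly the shape of work hundreds of GPU cores chew through in parallel:

    import numpy as np

    def box_blur(frame: np.ndarray) -> np.ndarray:
        """3x3 box blur via shifted copies (np.roll wraps at the edges)."""
        acc = np.zeros_like(frame, dtype=np.float64)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                acc += np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
        return acc / 9.0

    def unsharp(frame: np.ndarray, amount: float = 1.5) -> np.ndarray:
        """Sharpen by adding back the difference from a blurred copy."""
        blurred = box_blur(frame.astype(np.float64))
        sharpened = frame + amount * (frame - blurred)
        return np.clip(sharpened, 0, 255).astype(np.uint8)

    # One random greyscale "frame" standing in for a 640x480 video frame.
    frame = np.random.default_rng(0).integers(0, 256, (480, 640)).astype(np.uint8)
    print(unsharp(frame).shape)  # (480, 640)

Multiply that by 30 frames a second across a whole clip and the appeal of offloading it to the graphics card is obvious.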

While vReveal works with Windows XP or Vista (and not with Macs), it will make its enhancements much faster if the machine contains a recent graphics processing card from Nvidia, Dr. Varah said. Nvidia is an investor and a marketing partner with vReveal; a specific list of cards is at vReveal’s Web site.

via Novelties – Making a Fuzzy Video Come Into Focus – NYTimes.com.

Written by Eric Likness

June 23, 2009 at 3:20 pm

Posted in computers, gpu, media, technology, wintel
