Carpet Bomberz Inc.

Scouring the technology news sites every day

Archive for the ‘gpu’ Category

AMD Clears the Air Around Project FreeSync

Image: examples of A/V connectors currently on the market

AMD has been making lots of noise about Project FreeSync these past few months, but has also left plenty of questions unanswered.

via AMD Clears the Air Around Project FreeSync.

AMD's FreeSync and Nvidia's G-Sync are both attempts to get smoother 3D rendering out of today's graphics cards, no matter what part of the market those cards are aimed at. But as with other "features" introduced by graphics card manufacturers, there is now a drive to set a standard common to the card makers and, hopefully, to the display panel manufacturers as well.

Adaptive-Sync is the grail AMD is promoting and lobbying for going forward. It is not too manufacturer-specific and is just open enough to be adopted by most of the industry. The benefits are real, too: as the article states, Tom's Hardware has tried out Nvidia's G-Sync and it works. That is reassuring, given that these "features" sometimes turn out to be less revolutionary strides in engineering than marketing talking points.

AMD has been successful so far in pushing adoption by the folks who make the RAMDAC and video scaler circuits used by the display manufacturers. That is the real heavy lifting in driving the standard. Allowing for some slight delays, you may see the display panel manufacturers adopt the Adaptive-Sync standard within the next year.

 

Written by Eric Likness

July 28, 2014 at 3:00 pm

Posted in gpu, h.264


Nvidia Pulls off ‘Industrial Light and Magic’-Like Tools | EE Times


The president of VMware said after seeing it (and not knowing what he was seeing), “Wow, what movie is that?” And that’s what it’s all about — dispersion of disbelief. You’ve heard me talk about this before, and we’re almost there. I famously predicted at a prestigious event three years ago that by 2015 there would be no more human actors, it would be all CG. Well I may end up being 52% or better right (phew).    – Jon Peddie

via Nvidia Pulls off ‘Industrial Light and Magic’-Like Tools | EE Times.

Jon Peddie has covered the 3D animation, modeling and simulation market for years, and when you can get a rise out of him like the quote above, you have accomplished something. Between Nvidia's hardware and now its GameWorks suite of software modeling tools, Nvidia has, in a word, created Digital Cinema. Jon goes on to describe how the digital simulation demo convinced a VMware exec he was watching real actors on a set. That is how good things are getting.

And the comparison of ILM to Nvidia's off-the-shelf toolkits is also telling. No longer does a studio need computer scientists, physicists and mathematicians on staff to model and simulate things like particle systems and hair. It is all there in the toolkit, ready to use, along with ocean waves and smoke. Putting these tools into the hands of users heralds an era in which access to the best algorithms is less esoteric, less high-end and less exclusive.
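
To make that concrete, here is a minimal, generic sketch (in plain C, and emphatically not the GameWorks API) of the kind of particle update step such toolkits wrap up for you, minus the collision handling, turbulence and GPU acceleration that make the real thing cinema-grade.

```c
#include <stddef.h>

/* A single particle: position, velocity, remaining lifetime in seconds. */
typedef struct {
    float pos[3];
    float vel[3];
    float life;
} Particle;

/* Advance every live particle by dt seconds using simple Euler integration.
 * Returns the new count after dead particles are compacted out. */
size_t step_particles(Particle *p, size_t count, float dt)
{
    const float gravity = -9.8f;
    size_t live = 0;
    for (size_t i = 0; i < count; ++i) {
        p[i].vel[1] += gravity * dt;          /* apply gravity */
        p[i].pos[0] += p[i].vel[0] * dt;      /* integrate position */
        p[i].pos[1] += p[i].vel[1] * dt;
        p[i].pos[2] += p[i].vel[2] * dt;
        p[i].life   -= dt;                    /* age the particle */
        if (p[i].life > 0.0f)
            p[live++] = p[i];                 /* keep only live particles */
    }
    return live;
}
```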

nVidia GameWorks by itself will be useful to some people, but re-packaging it so that it embeds into an existing workflow will widen adoption, whether that is for a casual user or a student in a university 3D modeling and animation course. The follow-on to this is getting the APIs published so that current off-the-shelf tools like AutoCAD, 3ds Max, Blender, Maya and so on can tap into it. Once the favorite tools can bring up a dialog box and add a particle system or full ray tracing to a scene at this level of quality, things will really start to take off. The other possibility is to flesh out GameWorks into a standalone, easily adopted package that creatives could pick up and migrate to over time. That would be another path to using GameWorks as an end-to-end digital cinema creation package.


Written by Eric Likness

April 10, 2014 at 3:00 pm

Virtual Reality | Oculus Rift – Consumer Reports

Oculus Intel (Photo credit: .michael.newman.)

Imagine being able to immerse yourself in another world, without the limitations of a TV or movie screen. Virtual reality has been a dream for years, but judging by current trends, it may not be just a dream for much longer.

via Virtual Reality | Oculus Rift – Consumer Reports.

I won't claim that when a technology gets written up in Consumer Reports it has "jumped the shark", no. Instead I would rather give Consumer Reports kudos for keeping tabs on the others writing up and lauding the Oculus Rift VR headset. The specifications of this device continue to improve even before it hits the market. Hopes are still high that the price will be reasonable (really, it needs to cost no more than a bottom-of-the-line iPad if it is to have any hope of taking off). Whether the price meets everyone's expectations depends heavily on the sourcing of the components going into the headset, and the single most expensive item is the display.

OLED (organic LED) has been used in mobile phones to great effect: the displays use less power and have somewhat brighter color than backlit LCD panels. But they cost more, and the bigger the display, the higher the cost. The developers of the Oculus Rift have likely pushed the cost a little higher still by choosing a very high refresh rate and low latency for the OLED screens in the headset. This came after a first wave of user feedback reporting too much lag, and headaches, because the screen could not keep up with head movements (a classic downfall of most VR headsets no matter the display technology). Oculus has continued to work on the lag in the current-generation headset, and by all accounts it is nearly ready for public consumption. They may well have fixed the lag issue, and most beta testers to date are complimenting the changes in the hardware. This might be the device that launches a thousand 3D headsets.
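
As a rough back-of-the-envelope illustration of why refresh rate matters here (the ~20 ms motion-to-photon figure is a commonly cited comfort target I am assuming for the sake of the example, not an Oculus specification):

```c
#include <stdio.h>

int main(void)
{
    /* Assumed motion-to-photon comfort budget, for illustration only. */
    const double budget_ms = 20.0;
    const int rates[] = { 60, 75, 90 };

    for (int i = 0; i < 3; ++i) {
        double frame_ms = 1000.0 / rates[i];  /* time one refresh takes */
        printf("%d Hz: %.1f ms per refresh, ~%.1f ms left for tracking and "
               "rendering within a %.0f ms budget\n",
               rates[i], frame_ms, budget_ms - frame_ms, budget_ms);
    }
    return 0;
}
```

The higher the refresh rate, the smaller the slice of the budget the display itself eats, which is exactly why the headset's screens are being pushed so hard.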

As 3D goes, the market and appeal may be very limited; that historically has been the case. Whether it was used in academia for data visualization or in the military for simulation, 3D virtual reality was an expensive niche catering to people with lots of money to spend. Because the Oculus Rift is targeted at a lower price range, yet with fantastic visual performance, who knows what market may follow its actual release. So as everyone is whipped into a frenzy over the final release of the Oculus Rift headset, keep an eye out: it is going to be a hot item in limited supply for a while, I would bet. And yes, I would love to try one out myself, not just for gaming but for any of the as-yet-unseen applications it might have (the next Windows or Mac OS, perhaps?).


Written by Eric Likness

March 13, 2014 at 3:00 pm

Posted in gpu, science & technology


10 Reasons OpenCL Will Change Your Design Strategy & Market Position | EE Times

OpenCL logo (Photo credit: Wikipedia)

OpenCL is a breakthrough precisely because it enables developers to accelerate the real-time execution of their algorithms quickly and easily — particularly those that lend themselves to the considerable parallel processing capabilities of FPGAs (which yield superior compute densities and far better performance/Watt than CPU- and GPU-based solutions)

via 10 Reasons OpenCL Will Change Your Design Strategy & Market Position | EE Times.

There is still a lot of untapped energy in the OpenCL programming tools. Apple remains the single largest manufacturer to have adopted OpenCL across a large number of its products (both the OS and application software). And from reading about supercomputing on GPUs, I know that some large-scale hybrid CPU/GPU machines have ranked highly worldwide (the Chinese Tianhe being the first and biggest example). This article from EE Times encourages anyone with a background in C programming to give it a shot and see which algorithms could stand to be accelerated using the resources on the motherboard alone. But being EE Times, they are also touting the benefits of adding FPGAs to the mix.
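
To give a sense of how approachable this is for a C programmer, here is a minimal OpenCL C kernel for adding two vectors; each work-item handles one element, and the host-side boilerplate (picking a device, building the program, queuing the work) is omitted for brevity.

```c
/* OpenCL C kernel: one work-item adds one pair of elements. */
__kernel void vector_add(__global const float *a,
                         __global const float *b,
                         __global float *result)
{
    size_t i = get_global_id(0);   /* this work-item's index in the range */
    result[i] = a[i] + b[i];
}
```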

To date, the low-hanging fruit for desktop PC makers and their peripheral designers has been to reuse the GPU as a massively parallel co-processor where it makes sense. But as the EE Times writer emphasizes, FPGAs can be equal citizens too and might provide even more flexible acceleration. Interest in the FPGA as a co-processor, from the desktop up to enterprise data center motherboards, was brought to the fore by AMD back in 2006 with the Torrenza CPU socket. The hope back then was that giving a secondary specialty processor (at the time, an FPGA) its own socket might open up a market no one had addressed to that point. So depending on your needs and what extra processors happen to be available on your motherboard, OpenCL might be generic enough to get a boost from all of them.

Whether we see benefits on the consumer desktop depends heavily on OS-level support for OpenCL. To date the biggest adopter has been Apple, which needed an OS-level acceleration API for video-intensive applications like editing suites; eventually Adobe recompiled some of its Creative Suite apps to take advantage of OpenCL on Mac OS. On the PC side, Microsoft has always had DirectX as its API for accelerating multimedia apps (playback, editing) and is less motivated to incorporate OpenCL at the OS level. But that is not to say a third-party developer who saw a benefit to OpenCL over DirectX couldn't build their own plumbing: an OpenCL runtime package to support their own apps, licensable to anyone who wanted to bundle it in a larger installer (say, for a game or a multimedia authoring suite).

For the data center this makes far more sense than for the desktop, since DirectX is not seen as a scientific computing API or as a means of using the GPU as a numeric accelerator. In that context, OpenCL could be a nice, open, easy-to-adopt library for people running compute farms with large numbers of general-purpose CPUs and GPUs handing off parts of a calculation to one another over the PCIe bus or across CPU sockets. Everyone's needs will vary, in some cases widely, but OpenCL could make that variation easier to address by providing a common library that touches all the available co-processors whenever a computation needs to be sped up. So keep an eye on OpenCL as a competitor to the GPGPU-style APIs put out by Nvidia, AMD or Intel; it might also help bridge the differences between those manufacturers.
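
As a sketch of that "touch all the co-processors" idea, the standard OpenCL host API lets you enumerate every device a vendor's driver exposes, whether it reports itself as a CPU, a GPU or an accelerator such as an FPGA board (assuming the vendor ships an OpenCL implementation for it):

```c
/* Minimal sketch: list every OpenCL device on the system.
 * Link with -lOpenCL; on OS X use the OpenCL framework and
 * #include <OpenCL/opencl.h> instead. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);
    if (num_platforms > 8) num_platforms = 8;

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[16];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16,
                       devices, &num_devices);
        if (num_devices > 16) num_devices = 16;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            cl_device_type type = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE,
                            sizeof(type), &type, NULL);
            printf("%s (%s)\n", name,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                   (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "accelerator/other");
        }
    }
    return 0;
}
```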



Written by Eric Likness

March 3, 2014 at 3:00 pm

Posted in computers, fpga, gpu


The Memory Revolution | Sven Andersson | EE Times

A 256Kx4 dynamic RAM chip on an early PC memory card. (Photo by Ian Wilson; photo credit: Wikipedia)

In almost every kind of electronic equipment we buy today, there is memory in the form of SRAM and/or flash memory. Following Moores law, memories have doubled in size every second year. When Intel introduced the 1103 1Kbit dynamic RAM in 1971, it cost $20. Today, we can buy a 4Gbit SDRAM for the same price.

via The Memory Revolution | Sven Andersson | EE Times

This is a look back from an Ericsson engineer surveying the use of solid-state, chip-based memory in electronic devices. It is always interesting to know how these things started and evolved over time. Advances in RAM design and manufacture are the quintessential example of Moore's Law, even more so than the advances in processors over the same period. Yes, CPUs are cool and very much a foundation on which everything else, dynamic RAM storage included, rests. But remember: Intel did not start out making microprocessors; it started out as a dynamic RAM chip company at a time when DRAM was just entering the market. That is the experience from which Gordon Moore gauged the rate of change possible in silicon-based semiconductor manufacturing.
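
A quick bit of arithmetic on the figures quoted above shows how steep that curve really is: the same $20 buys roughly four million times as many bits today as it did in 1971.

```c
#include <stdio.h>

int main(void)
{
    /* Figures taken from the quoted article: $20 for 1 Kbit in 1971,
     * $20 for 4 Gbit today. */
    const double price_usd  = 20.0;
    const double bits_1971  = 1024.0;                   /* 1 Kbit */
    const double bits_today = 4.0 * 1024 * 1024 * 1024; /* 4 Gbit */

    double cost_per_bit_1971  = price_usd / bits_1971;
    double cost_per_bit_today = price_usd / bits_today;

    printf("1971:  about %.4f dollars per bit\n", cost_per_bit_1971);
    printf("Today: about %.2e dollars per bit\n", cost_per_bit_today);
    printf("Roughly a %.0fx drop in cost per bit\n",
           cost_per_bit_1971 / cost_per_bit_today);
    return 0;
}
```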

Now we are looking at smartphone processors and systems-on-chip (SoCs) advancing the state of the art. Desktop and server CPUs are making incremental gains, but the smartphone is trailblazing in showing what is possible. We have gone from stacking memory with the CPU (so-called 3D memory) to bringing graphics accelerators (GPUs) into the mix, and multiple cores and fully 64-bit CPU designs are entering the market (in the form of the latest iPhones). It is not just a memory revolution, but memory was definitely the driver when we migrated from magnetic core memory (the state of the art when it was developed at MIT in 1951-52) to the dynamic RAM chip (the state of the art in 1968-69). The drive to develop DRAM carried all the other silicon processes along with it, and all boats were raised. So here's to the DRAM chip that helped spur the revolution: without those shoulders, the giants of today would have nowhere to stand.


Written by Eric Likness

February 24, 2014 at 3:00 pm

Posted in gpu, technology, wintel


AnandTech | The Pixel Density Race and its Technical Merits

Diagram of a pixel (Photo credit: Wikipedia)

If there is any single number that people point to for resolution, it is the 1 arcminute value that Apple uses to indicate a “Retina Display”.

via AnandTech | The Pixel Density Race and its Technical Merits.

Earlier in my career I had to recommend the resolution people needed to get a good picture from a scanner or a digital camera. As we know, the resolution arms race knows no bounds: first in scanners, then in digital cameras, and now in displays. How fine is fine enough? Is it noticeable, is it beneficial? The technical limits that hold resolution down are usually tied to cost. A consumer-level product has to fit a narrow price range, and the perceived benefit of "higher quality" or sharpness is rarely enough to get someone to spend more. But since phones can be upgraded for free and printers and scanners are now commodity items, you just keep slowly migrating up to the next model at little to no entry cost. And everything is just 'better': higher rez, and therefore, by association, higher quality, sharper, and so on.

I used to quote, or try to pin down, a rule of thumb I found once regarding the acuity of the human eye. Some of it came from simply noticing things when I started out using Photoshop and printing to imagesetters and laser printers. At some point someone decided 300 dpi was what a laser printer needed to reproduce text on letter-size paper. As for displays, I bumped into a quote from an IBM study on visual acuity indicating that the human eye can discern display pixels up to roughly the 225 ppi range. I tried many times to find the actual publication where that appears so I could cite it, but no luck; I only ever found it as a footnote on another manufacturer's web page. Now, in this article, we get far more extensive statistics on human vision than that vague footnote from all those years ago.
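
For what it's worth, the 1-arcminute figure the article leans on can be turned into a quick calculation of when pixels stop being individually visible; the viewing distances below are my own example values, not anything from the article.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Pixels-per-inch at which one pixel subtends one arcminute of the
     * visual field, the threshold tied to "Retina"-class displays. */
    const double pi = 3.14159265358979;
    const double arcmin_rad = (1.0 / 60.0) * pi / 180.0;
    const double distances_in[] = { 10.0, 12.0, 24.0 };   /* inches */

    for (int i = 0; i < 3; ++i) {
        double pixel_pitch = distances_in[i] * tan(arcmin_rad);
        printf("At %.0f inches: ~%.0f ppi before pixels blur together\n",
               distances_in[i], 1.0 / pixel_pitch);
    }
    return 0;
}
```

At a phone-like 10 to 12 inches that works out to roughly 290-345 ppi, which is the neighborhood the marketing numbers have been circling.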

What can one conclude from all the data in this article? The same thing as before: resolution arms races are still being waged by manufacturers. This time, however, it is in mobile phones, not printers, not scanners, not digital cameras. Those battles were fought, and now there is precious little product differentiation. Mobile phones will fall into the same pattern, and people will be less and less Apple fanbois or Samsung fanbois. We will all just upgrade to a newer version of whatever phone is cheap and expect the bumped-up hardware specs, higher resolution, better quality, all that jazz. It is one more case of everything old being new again. My suspicion is we will see it happen again when a true VR goggle hits the market, with real competitors attempting to gain advantage through technical superiority or more research and development. Bring on the VR wars, I say.


Written by Eric Likness

February 17, 2014 at 3:00 pm

Posted in art, gpu, mobile


nVidia G-Sync video scaler on the horizon


http://www.eetimes.com/author.asp?section_id=36&doc_id=1320783

nVidia is making a new piece of electronics to be added to LCD displays built by third-party manufacturers. The idea is to send syncing data to the display to let it know when a frame has been rendered by the 3D hardware on the video card. This bit of extra electronics will smooth out the high-rez, high-frame-rate games played by elite desktop gamers.
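
For context, here is a sketch of the classic fixed-vsync render loop the scheme improves on, written against GLFW (a real windowing library, though the loop is purely illustrative and not tied to G-Sync's actual mechanism). With plain vsync the GPU waits for the display's fixed refresh tick; G-Sync inverts that, so the display waits for the frame and a frame that misses the tick is shown as soon as it is ready rather than a full refresh late.

```c
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    GLFWwindow *win = glfwCreateWindow(1280, 720, "frame pacing", NULL, NULL);
    if (!win) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(win);
    glfwSwapInterval(1);            /* lock buffer swaps to the display's
                                       fixed refresh rate (vsync on) */

    while (!glfwWindowShouldClose(win)) {
        /* ... render the frame here ... */
        glfwSwapBuffers(win);       /* blocks until the next refresh tick:
                                       the stall adaptive sync removes */
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
```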

It would be cool to see this adopted for the game console market as well, meaning TV manufacturers could use the same idea and make your PS4 and Xbox One play smoother too. It is a chicken-and-egg situation, though: unless someone like Steam or another manufacturer pushes it out to a wider audience, it will get stuck as a niche product for the highest end of the high-end PC desktop gamers. But it is definitely a step in the right direction and pushes us further away from the old VGA standard of years past. Video cards AND displays should both be smart; there is no reason, no excuse, not to have them both be more aware of their surroundings and coordinate with each other. And if AMD decides it too needs this capability, how soon before AMD and nVidia have to come to the table and get a standard going? I hope that happens sooner rather than later, and that too could drive this technology to a wider audience.


Written by carpetbomberz

February 12, 2014 at 3:00 pm