CUDA vs. OpenCL vs. OpenGL

What is CUDA? What about OpenCL and OpenGL? And why should we care? The answers to these questions are difficult to pin down (the computing equivalent of the metaphysical unanswerables), but we'll attempt a clear explanation in simple-to-understand language, with perhaps a bit of introspection thrown in as well.

There comes a time in a video editor’s life when they inevitably ponder the basic questions: “Is this all the speed that I have? Is there nothing more?” Like the search for the meaning of life or a grand unified theory, this simple thought launches you down an endless and infinitely deep chasm of contemplation and research, until you inevitably land on a question that you just can’t get a true answer to, and there the search stalls.

Now, we can’t help you with any grand unified theory, but we can say that the information wall you hit on your search for video processing speed will ultimately come down to this: “What is CUDA, what is OpenCL, and why do I care?”


The truth is that in order to understand CUDA and OpenGL, you’ll need to know about OpenCL as well. “Now wait,” you say. “The headline says OpenGL. There must be a typo.” No, there are just a lot of people with no sympathy for naming standards. Now, you could jump on the Internet and look up all of these terms, read all the forums, and visit the sites that maintain these standards, but you’d still walk away confused. In this article, we’ll come to your metaphysical video conundrum’s rescue in as simple a language as possible. You won’t see any circular talk about terms like “application programming interface” here! So, like all quests for answers, let’s logically start… in the middle.

What is CUDA?

Created by graphics card maker Nvidia, CUDA, in the simplest terms possible, lets your programs use the brains of your graphics card as a sub-CPU. Your CPU passes certain tasks off to the CUDA-enabled card, which specializes in calculating things like lighting, movement and interaction as quickly as possible. Graphics cards are specifically designed to process such information as fast as possible, even sending it through multiple lanes at once, as if you had four checkout lanes at the supermarket for one shopping cart. The results of this work are then passed back to the CPU, which has since moved on to bigger and better things.
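The core idea here, data parallelism, is easy to sketch in plain code. Below is a minimal, hypothetical Python illustration (not actual CUDA, which requires an Nvidia card and toolkit): the same independent operation is applied to every pixel, which is exactly the kind of work a GPU can spread across thousands of threads at once.

```python
# A plain-Python sketch of the data-parallel idea behind CUDA.
# On a GPU, each pixel could be handled by its own thread ("checkout
# lane"); here we simply apply one independent operation per element.

def brighten(pixel, amount):
    """Raise one grayscale pixel's value, clamped to the 0-255 range."""
    return min(255, pixel + amount)

def brighten_frame(frame, amount):
    # Every pixel is independent of its neighbors, so this loop could
    # be split across thousands of GPU threads running simultaneously.
    return [brighten(p, amount) for p in frame]

frame = [0, 100, 200, 250]          # a toy 4-pixel grayscale "frame"
print(brighten_frame(frame, 20))    # [20, 120, 220, 255]
```

Because no pixel depends on any other, the order of the work doesn't matter, and that independence is what makes GPU acceleration possible.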

The Benefits

For programmers, it’s relatively simple to integrate. Because it’s software based, much of the system must be written into the program’s code, so its function can vary or be customized. For the user, since CUDA’s primary strengths lie in calculation, data generation and image manipulation, your effects processing, rendering and export times can be greatly reduced, especially when upscaling or downscaling. Image analysis can also be improved, as can simulations like fluid dynamics and predictive processes like weather modeling. CUDA is also great at calculating light sources and ray tracing. All of this means functions such as effects rendering, video encoding and conversion will process much faster.

The Downside

Did you notice that one little disclaimer earlier? This works for CUDA-enabled graphics cards only. Since CUDA is proprietary to Nvidia, you need a graphics card made by that company to take advantage of it. If you have, say, a trashcan-style Mac Pro, this is simply not an option for you, since those come only with AMD graphics cards. There are third-party options here, but Apple supplies only AMD in its packages. You’ll also find that fewer programs support CUDA than its alternative, so let’s talk about that other option.

Well, then what’s OpenCL?

OpenCL is a relatively new system, and for our discussion it can be considered an alternative to CUDA. It is an open standard, however, meaning anyone can use its functionality in their hardware or software without paying for proprietary technology or licenses. Whereas CUDA uses the graphics card as a co-processor, OpenCL passes the work off entirely, treating the graphics card as a separate, general-purpose peer processor. It’s a minor philosophical distinction, but there’s a quantifiable difference in the end. For the programmer, it’s a little harder to code for. For the user, you’re not tied to any one vendor, and support is so widespread that most programs don’t even mention its adoption.
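For the curious, here is roughly what an OpenCL kernel looks like. The snippet below shows a minimal kernel's source as a string only, since actually running it would require an OpenCL driver and a host library; the plain-Python function beside it mirrors the same per-element math. The names (`scale`, `scale_reference`) are our own illustration, not from any particular program.

```python
# The shape of a minimal OpenCL kernel. The kernel source is shown as
# a string for illustration; compiling and running it needs an OpenCL
# driver plus a host API, which we don't assume here.
KERNEL_SOURCE = """
__kernel void scale(__global const float *in,
                    __global float *out,
                    const float factor) {
    int i = get_global_id(0);   /* each work-item handles one element */
    out[i] = in[i] * factor;
}
"""

def scale_reference(values, factor):
    # Pure-Python mirror of the kernel above: one "work-item" per element.
    return [v * factor for v in values]

print(scale_reference([1.0, 2.0, 3.0], 0.5))  # [0.5, 1.0, 1.5]
```

The key detail is `get_global_id(0)`: rather than looping, the kernel body describes the work for a single element, and the OpenCL runtime launches as many parallel copies of it as there are elements.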

Finally, OpenGL

OpenGL is really the beginning of the story. It’s not about using the graphics card as a general-purpose processor; instead, it’s simply about drawing pixels or vertices on the screen. It’s the system that lets your graphics card create 2D and 3D displays for your computer much faster than your CPU could. Just as CUDA and OpenCL are alternatives to one another, OpenGL is an alternative to systems like DirectX on Windows. Put simply, OpenGL draws everything on your screen really fast, while OpenCL and CUDA process the calculations necessary when your videos interact with your effects and other media. OpenGL may place your video within the editing interface and make it play, but when you throw color correction onto it, CUDA or OpenCL does the calculations to alter each pixel of the video properly.
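That color-correction example boils down to simple arithmetic. What follows is a hypothetical Python sketch of a basic gain-and-lift adjustment (the exact math real editing software uses will differ); on a real system, CUDA or OpenCL would run this same per-pixel calculation for millions of pixels in parallel, and OpenGL would then draw the result to the screen.

```python
# A hedged sketch of the per-pixel color-correction math described
# above: multiply by a gain, add a lift, and clamp to the legal range.
# Real grading tools use more elaborate curves, but the shape is similar.

def correct_pixel(value, gain, lift):
    """Apply gain (multiply) then lift (add), clamped to 0-255."""
    return max(0, min(255, round(value * gain + lift)))

def correct_frame(frame, gain, lift):
    # One independent calculation per pixel -- ideal work for a GPU.
    return [correct_pixel(v, gain, lift) for v in frame]

print(correct_frame([0, 64, 128, 255], 1.2, 10))  # [10, 87, 164, 255]
```

Note how the brightest pixel clamps at 255 rather than overflowing; handling millions of such independent operations per frame is precisely what the GPU is built for.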

OpenGL can be implemented at the hardware level, meaning coders don’t have to include the code in their programs; they just have to call on it. In addition, hardware vendors have the option to expand on the core functionality with extensions, meaning some hardware might be better at certain tasks than others. This allows for very specific customization.

Where the user will see the benefits of OpenGL is in the operational performance of the software. Previews are rendered especially fast. In many programs, it’s also utilized for accelerated interface and overlays, such as timelines, footage, windows, grids, guides, rulers and bounding boxes.

In the end, OpenGL for the user is a non-issue, as both OpenCL and CUDA can and do utilize the OpenGL system. What you need to understand here is that with any graphics card offering current OpenGL support, you will almost always work faster than on a computer with a CPU and integrated graphics alone.

In a Nutshell

So, what does this all mean for you and your workstation? Which is better: CUDA or OpenCL? We’ll assume that you’ve done the first step and checked your software, and that whatever you use supports both options. If you have an Nvidia card, then use CUDA; it’s considered faster than OpenCL much of the time. Note, too, that Nvidia cards do support OpenCL. The general consensus is that they’re not as good at it as AMD cards are, but they’re getting closer all the time. Is it worth going out and buying an Nvidia card just for CUDA support? That depends on too many case-specific factors for us to cover here. You’ll need to look at your needs and do your research: not just what kind of work your company does, but even down to the individual machine and what its workload and function will be. And if you can, test before you invest.

Adobe, for example, states on its website that, with very few exceptions, everything CUDA does for Premiere Pro can also be done by OpenCL. It also states that it uses neither of these for encoding or decoding, though both can be used for rendering previews and final exports. The majority of those who have compared the two seem to lean toward CUDA being faster with Adobe products. CUDA has the advantage of being self-contained, which, thanks to better optimization, can result in faster performance.

Personal Experience

I would also like to make a rare move and share my personal experience here. Note, however, that I’ve done no concrete testing; I’m speaking strictly for myself, so take it for what it’s worth. My experience is that CUDA, when available, is great and can increase your speed noticeably. However, I believe I’ve had a few more crashes or glitches when rendering, transcoding and exporting with it. On a few rare occasions, I ran out of options and resorted to switching CUDA off, which finally produced a successful result; I’ve never had to do the opposite. I must qualify this by saying that my time with CUDA has been very limited, and I’m not knocking CUDA, as the issue was most likely the way the software vendor utilized it. I did experience these issues with more than one program, though, so perhaps I had an old version of CUDA. I just felt it was important to mention that you should look out for such things. Again, I’ll ultimately say you can’t go wrong with either choice, and which to use should be determined on a case-by-case basis.

So, going back on the record, what we can say with confidence is this: if you’re actually in the position where it’s truly your graphics card, and not your entire system, that is slowing your workflow down, then chances are upgrading to any more recent card will be a tremendous improvement over what you’re currently experiencing. Regardless of which path you choose, understanding what these systems do will help you identify where you need to invest to make your workstations the baddest they can possibly be. Investing in a card that supports using the GPU to supplement or offload work from the CPU will speed up your workflow considerably.

Peter Zunitch is an award-winning video editor based in New York.
