Nvidia’s OpenACC Standard

A good idea that needs support, and less marketing hype…

SC11 is turning out to be a big event for Nvidia (NASDAQ:NVDA) this year. The company launched a new programming standard today which will no doubt spark a lot of speculation among customers and investors alike. OpenACC, a new parallel programming standard designed to accelerate C and FORTRAN code, was announced by Nvidia along with Cray, the Portland Group, and CAPS enterprise as another effort on their part to make it easier for programmers to create GPU accelerated code. Nvidia is positioning OpenACC as a kind of high level gateway to its lower level CUDA GPU programming language. To this end Nvidia has rather conspicuously used the word “open”, a word which it has done its best to avoid when naming past software standards.

Nvidia began a big marketing push following this announcement, and banners can be seen around the ongoing SC11 conference touting a doubling in application speed after only four weeks of rewriting your code. Now the key here is not to take Nvidia’s advertisements out of context, as so many are prone to do and as Nvidia’s banners seem to beg you to do. Nvidia is talking about taking an application that contains parallel workloads and using its OpenACC programming standard to tune it to run on GPUs by adding what Nvidia calls directives. These directives identify which areas of code you want to run on your GPU versus on your CPU. From there a speedup in execution can be realized by compiling your new OpenACC compliant code with a compiler from one of Nvidia’s launch partners. Only in the case of moving from a GPU unaware application to a GPU aware application that has been rewritten to take advantage of the OpenACC standard is Nvidia promising you a twofold speedup.
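
To give a sense of what that looks like in practice, here is a minimal sketch of a directive-annotated loop in C. The pragma and the overall shape follow the directive model described above, but the array names, sizes, and the loop itself are illustrative rather than taken from Nvidia’s materials, and the compile line is only an example of how a launch partner’s compiler (for instance the Portland Group’s pgcc with its -acc flag) might build it.

    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        /* Hypothetical data: a simple pair of large arrays. */
        static float x[N], y[N];

        for (int i = 0; i < N; i++) {
            x[i] = (float)i;
            y[i] = 2.0f;
        }

        /* The OpenACC directive below asks the compiler to offload the
           following loop to the accelerator (the GPU). A compiler that
           doesn't understand it simply ignores the pragma and compiles
           a normal serial CPU loop. */
        #pragma acc kernels
        for (int i = 0; i < N; i++) {
            y[i] = 2.0f * x[i] + y[i];
        }

        printf("y[42] = %f\n", y[42]);
        return 0;
    }

    /* Example build line (assumed, not from Nvidia's announcement):
       pgcc -acc saxpy.c -o saxpy */

The point Nvidia is selling is that the loop body itself is untouched; the directive is a hint, so the same source still builds and runs serially on a machine or compiler that knows nothing about GPUs.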

Despite Nvidia’s perennial marketing hype, OpenACC is still a positive for a number of different industries. HPC stands to gain from this new standard, as it will hopefully allow scientists to create code for Nvidia’s new supercomputers in record time. Additionally, consumer applications such as rendering and image processing stand to benefit once again from the increased parallelism that GPUs have to offer. The real effect of Nvidia’s OpenACC remains to be seen, though; at this point no one has really had a chance to spend much time with it, so we’ll reserve judgment on its usefulness for now.

One of the key points to keep an eye on, though, is how “open” OpenACC turns out to be. It’s hard to say at this point if OpenACC will be “open” in the same way that Nvidia says its CUDA language is “open”, or if Nvidia has finally figured out how to capture the power of open standards a la OpenCL. History points to the former, but it seems as if Nvidia’s PR department would like us to believe the latter. In the end it’s hard to tell due to the relative immaturity of the standard. Hopefully Jen-Hsun Huang’s hour-and-a-half SC11 keynote will shed some light on the subject.

In any case, if Nvidia’s press releases and banner ads have led us to believe anything, it’s that Nvidia is aiming OpenACC at companies and institutions that are wary of jumping into the GPGPU coding pool due to the size of the investment it requires. The OpenACC standard isn’t meant to be an alternative to CUDA, but a stepping stone into Nvidia’s tightly controlled GPU compute environment. They’ll hook you with a twofold increase in performance, reel you in with a four week code development time, and sink you with a set of compilers that only support GPUs from one vendor.

Admittedly, Nvidia is offering OpenACC up as a standard that other hardware and compiler vendors can support; it’s just that no one else does at the moment. We can see the OpenACC standard gaining traction based solely on its own merits within Nvidia’s ecosystem, but for OpenACC to really become a standard it’s going to need to gain support from other GPU vendors like Intel and AMD. Other than its Larrabee/Knights Ferry/Knights Corner parts Intel hasn’t shown GPGPU much love. The GPU on its Sandy Bridge chips still doesn’t offer proper OpenCL support, but it’s possible that Ivy Bridge will change that. AMD on the other hand has been supporting OpenCL for years and promoting GPUs for compute even longer. At the end of the day the long term success of the OpenACC standard comes down to two factors: ease of implementation for developers, and support from AMD.

When it comes to ease of implementation it appears that Nvidia has that end of the equation worked out. Nvidia wouldn’t be hanging banners around SC11 and preparing keynotes on the subject if it didn’t think OpenACC had a chance. But support from AMD is a less surefire achievement. If OpenACC is really all that Nvidia is touting it to be, then AMD will probably have support for it by the time AFDS12 rolls around in June. On the other hand, if nothing happens then Nvidia’s back to playing its old games. Time will tell.S|A

Updated: 12.45pm day of publication to fix typo.
