With the announcement of DirectX 12 features like low-level programming, it appears we're having a revival of the DirectX vs. OpenGL debates—and we can toss AMD's Mantle into the mix in place of Glide (RIP, 3dfx). I was around back in the days of the flame wars between OGL and DX1/2/3 devotees, with id Software's John Carmack and others weighing in on behalf of OGL at the time. As Microsoft continued to add features to DX, and with a healthy dose of marketing muscle, the subject mostly faded away after a few years. Today, the vast majority of Windows games run on DirectX, but with mobile platforms predominantly using variants of OpenGL (smartphones and tablets use a subset called OpenGL ES—the ES being for "Embedded Systems"), we're seeing a bit of a resurgence in OGL use. There's also the increasing support for Linux and OS X, making a cross-platform graphics API even more desirable.

At the Game Developers Conference 2014, in a panel including NVIDIA's Cass Everitt and John McDonald, AMD's Graham Sellers, and Intel's Tim Foley, explanations and demonstrations were given suggesting OpenGL could unlock as much as a 7X to 15X improvement in performance. Even without fine-tuning, they note that in general OpenGL code is around 1.3X faster than DirectX. It almost makes you wonder why we ever settled for DirectX in the first place—particularly considering many developers felt DirectX code was always a bit more complex than OpenGL code. (Short summary: DX was able to push new features into the API and get them working faster than OpenGL in the DX8/9/10/11 days.) Anyway, if you have an interest in graphics programming (or happen to be a game developer), you can find the full set of 130 slides from the presentation on NVIDIA's blog. Not surprisingly, Valve is also promoting OpenGL in various ways; the same link also has a video from a couple of weeks back at Steam Dev Days covering the same topic.

The key to unlocking the improved performance appears to be pretty straightforward: reduce driver overhead and increase the number of draw calls the CPU can issue. These are both items targeted by AMD's Mantle API, and presumably the low-level DX12 API as well. I suspect the claimed "7-15X improved performance" is going to be far more than we'll see in most real-world situations (i.e. games), but even a 50-100% performance improvement would be huge. Many of the mainstream laptops I test can hit 30-40 FPS at high-quality 1080p settings, but there are periodic dips into the low 20s or maybe even the teens. Double the frame rates and everything becomes substantially smoother.
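
To give a concrete picture of what that means in code, here's a rough sketch of the OpenGL 4.3 multi-draw-indirect approach the presenters advocate. This is my own illustration rather than anything taken from the slides, and it assumes the scene's shared vertex/index buffers and shader are already set up:

    #include <GL/glew.h>
    #include <vector>

    // Per-object draw parameters, laid out as the GL spec requires for
    // indirect draws.
    struct DrawElementsIndirectCommand {
        GLuint count;          // number of indices for this object
        GLuint instanceCount;  // usually 1
        GLuint firstIndex;     // offset into the shared index buffer
        GLuint baseVertex;     // offset into the shared vertex buffer
        GLuint baseInstance;   // useful for per-object lookups in the shader
    };

    // Assumes a VAO with the shared vertex/index buffers and a shader
    // program are already bound.
    void drawSceneInOneCall(const std::vector<DrawElementsIndirectCommand>& objects)
    {
        // Upload every object's draw parameters to a buffer once...
        GLuint indirectBuffer;
        glGenBuffers(1, &indirectBuffer);
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
        glBufferData(GL_DRAW_INDIRECT_BUFFER,
                     objects.size() * sizeof(DrawElementsIndirectCommand),
                     objects.data(), GL_STATIC_DRAW);

        // ...then submit the whole batch with a single draw call instead of
        // one call per object; the per-call driver work is what gets cut.
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                    nullptr, (GLsizei)objects.size(), 0);
    }

Whether a given engine can restructure its rendering around big batches like this is a separate question, of course, but it illustrates where the driver-overhead savings come from.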

I won't pretend to have a definitive answer on which API is "best", but just as being locked into a single hardware platform or OS can lead to stagnation, I think it's always good to have alternatives. Obviously there's a lot that goes into developing a game engine, and sometimes slower code that's easier to use and understand is preferable to fast but difficult code. There's also far more to making a "good" game than graphics, which is a topic unto itself. Regardless, code for some of the testing scenarios presented by John McDonald is available on GitHub if you're interested in checking it out. It should work on Windows and Linux but may require some additional work to get it running on OS X for now.

Source: NVIDIA Blog - GDC 2014

Comments

  • TheJian - Wednesday, March 26, 2014

    Chromebooks took 21% of ALL notebook sales already. Let me know when Win8 stops that. I expect to see Denver (and S805/810 etc.) come for the low-end desktops next and move up the chain until someone stops them. DX12 isn't due until Xmas 2015 if it isn't late (big if), which gives people two years to use OpenGL to cut draw-call overhead, kill Mantle, and reach a larger audience overall with mobile pushing it. Since most people don't have a DX11 card (China still runs 70% XP, so not even DX10 there), there aren't a TON of DX11-only games even today (you can almost always fall back to DX9.0c), and if you want your game to run on mobile+PC it's best to choose an engine and probably avoid DX.

    Microsoft will be 3rd in all things mobile until Google/Apple mess something up badly. DX means NOTHING there; OpenGL is pretty much everything, which is why they are pushing ES 3.1 for it now. Also, MS continues to price themselves to death (see Surface/2/Pro etc.), so Windows is going nowhere on mobile fast. A Tegra tablet adds a modem and the price goes up $50; MS adds the same one and the price rises $130. See the point? You can't tell me NV is charging them almost triple for a software modem...LOL (an SoC goes for $10-45).

    IMHO MS has screwed themselves into losing DX dominance (DX12 is two years away while OpenGL has low-overhead, Mantle-like stuff NOW), and in turn gamers, and in the end a lot of the PC market will go to Linux/Android/SteamOS etc. (or any combo of them) on a PC running custom ARM chips from Apple, Qcom, or NV, with NV likely winning a SoC battle as the GPU overtakes the modem in importance on mobile (who has 20 years of gaming/GPU+dev experience here? I see only NV raising their hand). Eventually we'll have a 500W PC-like box running these and probably a discrete NV card for the high end, just like now on WINTEL (maybe AMD too one day, if they live that long or get bought).

    NV wants payback for Intel killing their chipset business (the hate is strong between those two, or Intel would have already figured out how to buy them and pay Jen-Hsun a few billion to leave, which would easily triple his current wealth), and they're planning to get it with Denver/Maxwell chips and beyond moving into your desktops/servers (and higher-end notebooks too). With Google/Valve/Apple helping OpenGL, now is the perfect time (with Win8 hurting PC sales) to strike with in-house 64-bit ARM CPUs and move games to ARM/OpenGL via the massive mobile gaming audience, whose unit numbers already dwarf PC gamers. They are selling 1.3B mobile units now and will be over 2B in the next few years. Only ~34 million gamers on Steam (IIRC) says the PC market is pretty small compared to mobile (also only ~50 million discrete cards sold yearly). The GDC 2013+2014 surveys show devs have moved massively to mobile, matching PCs at 51/52% making games for both. Consoles, on the other hand, are under 15%, showing devs are leaving them massively (next gen hasn't moved the needle from last year).

    Why can't I run a quad-boot (tri? whatever) of Chrome OS, Android, Linux & SteamOS? All are free and all run on ARM or will; SteamOS is surely coming soon, and NV/Valve working hand in hand on Steam boxes for a few years says it will be on ARM at some point. At some point we'll all just run VMs of whatever we used to do (if needed for X software). We do it already, but I mean even home users will be using them like they do in enterprise now. I guess I pretty much think the exact opposite of what you think ;)

    Free OSes will take over as games move there, and then apps will follow too. The only question I have is how much Wintel will be hurt (and consoles are dead, it's just a matter of when in this cycle). I think Intel comes out of this best (they can fab for others in the worst case), while MS has nothing to stop Windows/DX/Office from going down (slow or quick, but going down), though an NV buyout could help stop it. I think that's a better fit for an Intel purchase, though, and I'd rather see 14/10nm GPUs shortly as opposed to MS getting them, since MS can't do much for my future hardware without fabs.
  • didibus - Tuesday, August 5, 2014

    I don't know why you say Java is a weakness; it's the fastest interpreted language on PC. The only things that beat Java in performance are compiled languages. In that regard, Java and C# are pretty much on equal levels, with Java winning slightly in benchmarks.

    Obviously, Dalvik isn't up to par with Oracle's Java VM. But that's something that will be remedied over time; Google has already started with ART.

    If you didn't know, Windows 8 is moving towards more C# and less C++. Apps are mostly going to be C#- and HTML5+JavaScript-based. That's the whole point of WinRT.

    You can also make apps in C++ for Android, if you didn't know that. The Chrome and Mozilla browsers are written in C++. A lot of games for Android are made in C++. And of course, most of Android itself is C/C++.
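
    For example, a trivial native function exposed through the NDK/JNI looks roughly like this (the package and class names are made up for illustration; only the JNI naming convention is real):

        #include <jni.h>
        #include <string>

        // Exposed to Java as com.example.demo.NativeLib.stringFromNative();
        // the class is hypothetical, but the Java_<package>_<class>_<method>
        // pattern is the standard JNI convention.
        extern "C" JNIEXPORT jstring JNICALL
        Java_com_example_demo_NativeLib_stringFromNative(JNIEnv* env, jobject /* this */)
        {
            // Any C++ can run here: game loops, OpenGL ES rendering, etc.
            std::string message = "Hello from C++";
            return env->NewStringUTF(message.c_str());
        }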

    Dart is way slower than Java, just look here: http://benchmarksgame.alioth.debian.org/u64q/dart....

    Dart is a good language, and in a short time they've managed to make a pretty good VM for it, but it still has a ways to go to beat Java or .NET.

    A problem for Microsoft is that only WinRT apps work on both Windows Phone and Windows 8. But right now, Windows 8's productivity advantage comes from its huge library of legacy, non-WinRT apps. So Windows Phone and Windows 8 will face challenges integrating properly with one another. In that realm, it's much nicer for a simple user to have an Android phone and an Android tablet, both sharing the exact same interface and apps (they don't even need to buy the app again in most cases), or similarly an iPad and an iPhone. Windows will need to slowly push more and more devs to convert their apps to WinRT, but right now WinRT is way behind Android and iOS in terms of both productivity apps and non-productivity ones.

    Anyway, just look at the new Tegra K1 Android devices coming out, like the Shield Tablet and all the Android TVs: they blow the Xbox 360 out of the water with their OpenGL 4.4 support and raw power, and Android doesn't seem to be slowing them down at all.

    Also, remember SteamOS will be OpenGL only, and OS X is OpenGL only, and PS4 is OpenGL, and iOS and Android are OpenGL; a lot of counterweight exists right now against DirectX 12. So it will be an interesting scenario. We shall see if it's a repeat, or if this time history goes a different way.
  • gobaers - Monday, March 24, 2014

    An interesting historical note: one of NVIDIA's first successful products, the Riva 128, had a key differentiator in having good Direct3D performance. No, it wasn't as fast as the 3dfx parts, but it was the first serviceable challenger. It combined decent 3D with great 2D in a single card, and the rest is history. The unit I had was this one, reviewed by Anand:

    http://www.anandtech.com/show/21
  • dakishimesan - Monday, March 24, 2014

    My first card was the RIVA 128ZX, the higher-clocked version that had more memory (8MB!). Loved that card and playing Moto GT racer for the first time.
  • JarredWalton - Monday, March 24, 2014

    I hail back to the glory days of ISA cards, sadly. My first PC, I think, had something like the earliest Cirrus Logic controllers -- capable of an astounding 640x480 with 256 colors! SVGA FTW. I did manage to scrape together the funds in high school to buy an S3 911, I think. Then around '95 things changed radically and I was able to stop running Windows on top of DOS and start booting straight into Windows 95. Hard to believe all current high school students were born after Win95 launched. LOL
  • gobaers - Monday, March 24, 2014

    I wonder what type of video board my very first computer had. It was a Hyundai AT clone running at 9MHz in 'Turbo' mode, paired with an EGA monitor instead of the CGA that everyone else had.

    I didn't start tinkering with the insides of my machine until the next one, a 386SX-25. The day I bought and installed a SoundBlaster Pro is still memorable for my brother and me. We must have stayed up until 2AM playing with Dr. SBAITSO.
  • althaz - Monday, March 24, 2014

    We had a Riva 128; it was relatively poor, but when the TNT came out, that was THE SHIT.

    We added a Voodoo (1) to our PC when I was a kid and just couldn't believe how awesome it was, but it had nothing on the Riva TNT. The TNT was faster than the Voodoo (though not, IIRC, the Voodoo 2, though performance was close), plus it had incredible 32-bit colour, so image quality was FAR superior. The other bonus for nVidia (and their customers) was that theirs was a single-card solution and absolutely mauled 3dfx's single-card solution, the Voodoo Banshee (which was the fastest 2D card on the market then, but had less than Voodoo 1 levels of 3D performance, from memory).
  • coder543 - Monday, March 24, 2014

    I've never understood this. Maybe DirectX once held a strong lead in terms of features or performance, but it's been quite a few years since that was the case. If I were starting a new project any time in the last 5 years, it would seem completely unreasonable to lock myself into DirectX unless I were targeting the Xbox. I'm no Richard Stallman, but locking *your* code into a platform where you have no freedom to move it to something else without rewriting all of the graphics calls just seems like a poor decision. By supporting OpenGL, you not only get arguably better performance, but you can run it on almost every platform in existence. That would seem worthwhile to me, regardless of what Microsoft says.
  • inighthawki - Monday, March 24, 2014

    Any modern game engine is flexible enough to support almost all rendering APIs. Everyone talks about how "Oh, everyone should use OpenGL because it's up to date and supported everywhere!"

    Except that it's not. Sure, you have it on PC, and OpenGL ES is on mobile devices. But then you have consoles - PS3, PS4, 360, XBO, Wii [U] - none of which use OpenGL. If you want to effectively target these, you need a flexible design that supports multiple APIs. At that point, most developers choose to support DirectX because Windows is the largest market for their games, DirectX tends to provide more consistent results across different hardware vendors, and DirectX tends to be more pleasant to work with than OpenGL. DX has a long history of significantly better graphics tools, so a lot of developers will write a DX version first just to get it working, then write an OpenGL layer afterwards.
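
    Roughly, the kind of abstraction I mean looks like this (a sketch with invented names, not any particular engine):

        #include <memory>

        // Whatever the engine needs per draw: mesh, material, transform, etc.
        struct DrawItem {
            int meshId;
            int materialId;
        };

        // Game code only ever talks to this interface.
        class IRenderBackend {
        public:
            virtual ~IRenderBackend() = default;
            virtual void beginFrame() = 0;
            virtual void submit(const DrawItem& item) = 0;
            virtual void endFrame() = 0;
        };

        // One backend per API; only this class sees the D3D headers.
        class D3D11Backend : public IRenderBackend {
        public:
            void beginFrame() override { /* bind render targets, clear */ }
            void submit(const DrawItem&) override { /* translate to DrawIndexed */ }
            void endFrame() override { /* Present */ }
        };

        // Pick a backend at startup; a GL or console backend slots in the same way.
        std::unique_ptr<IRenderBackend> createBackend()
        {
            return std::make_unique<D3D11Backend>();
        }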
  • inighthawki - Monday, March 24, 2014

    My optional U in brackets after Wii seemed to denote underlining. Sorry about that.
