glNext

By Shamus Posted Sunday Sep 21, 2014

Filed under: Programming 82 comments

Big things are going on in the world of graphics API. A graphics API is what a programmer uses to talk to the graphics hardware. This is a complicated job. You write some videogame code, which talks to the graphics API, which talks to the graphics driver, which makes the graphics card give up the shiny pixels for the player.

For a lot of years, there were really only two players in town: OpenGL and DirectX. OpenGL is so old that the original code was written in hieroglyphs on stone tablets, and all of the documentation was localized for Mesopotamia. The first version was released in 1992, back when developers were still living on Pangaea. It was built in a development world very unlike the one we have today. Before C++ rose to become the language of choice for AAA game development. Before shaders existed, and indeed before consumer-level graphics cards evolved.

This means that the OpenGL API looks pretty weird to modern coders. There’s an alternative, but…

The only other alternative is DirectX, which is controlled by Microsoft. That means it's only available in places where Microsoft chooses. So if you make a game using DirectX and want to port your game over to (say) Linux, but Microsoft has decided they don't care to make a Linux version available, then you can't. You don't have access to a platform unless Microsoft goes there first, which gives them a scary amount of power over various platforms. This is a big part of how Microsoft works: Slowly get a stranglehold over something and then use that control to strong-arm rivals and reward obedience. Every game made with DirectX makes them just a little stronger. If you're a developer taking the long view, then you'd probably rather not add to this.

Example: Microsoft is working on DirectX 12. It will have shiny new features that developers will want to use. But they probably won't make it available on Windows 7. So suddenly, to play the latest game, you'll need to buy a new version of Windows you don't want or need, even if your machine has plenty of power to run the game in question. They did the same thing to XP.

So by making a game with DirectX 12, you’ll effectively turn your game into a sales pitch for Microsoft Windows™.

On the other hand, Microsoft can write good software. I mean, when they want to. When they don’t care you get Games For Windows Live, a program that would qualify as Malware if it wasn’t so buggy. But when they care they make awesome stuff like Visual Studio. And DirectX is indeed one of the things they care about. I’m not riding the cutting edge with the hotshot AAA developers, but the buzz seems to be that DirectX is significantly better and faster.

Image unrelated.

So for years developers have had to make a choice: Do I go with the open platform that sucks or the closed platform that puts me at the mercy of Microsoft? This choice is no fun. So what do we do?

AMD tried to “help” by coming out with a third graphics API, called Mantle. Note that AMD is one of two major players in the graphics card industry, with the other being NVIDIA. They claimed it would be an open standard, meaning it could work (or could be made to work) on NVIDIA hardware, but these two idiots have been cock-blocking each other at every turn, and there’s no way one would accept a standard devised by the other. I’d be willing to bet that AMD would design Mantle to favor their hardware at the expense of NVIDIA hardware (so games using Mantle would run faster on AMD cards than NVIDIA cards) and I’m sure NVIDIA would design their stuff to preemptively sabotage any such move. This entire enterprise sounds like a non-starter to me. And even if it worked, you’d just be at the mercy of AMD instead of Microsoft, which is just a different devil.

At the same time, Apple rolls out yet another API, which is (because things aren't confusing enough yet) Metal. (Not Mantle.) The names Mettle, Molten, and Mental are still available if anyone else wants to enter the fray and make things more confusing[1]. But it's Apple, which means it has all the drawbacks of Microsoft. EDIT: Also, as Nick points out below, Metal is only for iOS, so it's only useful if you're targeting Apple devices.

So what else can we do? Lots of graphics engines act as "wrappers" for the big two: They use DirectX where that makes sense, and OpenGL where DX isn't available. You just talk to the graphics engine and don't worry your pretty little head about what's happening under the hood. That's nice, but graphics engines are expensive, and they end up being yet another layer between you and the hardware. This might mean that your game runs slower than if you used OpenGL or DirectX natively. It also means you might end up cut off from certain features. Maybe there's some exotic (or new) thing that graphics cards can do, but it's not supported by your graphics engine. Either the developers didn't know about it, didn't think it was important[2], or it didn't exist yet when the engine was written. Because you're so far from the hardware, you won't be able to use that feature.

So… expensive, slow, and limiting. This isn’t an attractive option either.

Image unrelated.

Of course, you could just write all your code twice. Write a DirectX version of your game and an OpenGL version. Have fun doing three times the work for code that will be a support nightmare. Loser.

Valve has been funding an open-source alternative to OpenGL called Mesa. This could fix (or perhaps has already fixed) some of the speed problems of OpenGL, but it can't really help with the slightly idiosyncratic and very dated OpenGL API. It would get faster, but not easier to use.

It should be clear by now that the real solution that would make everyone[3] happy would be for OpenGL to stop sucking. The Khronos Group[4] has been trying to do this for years. I'm not in the loop enough to understand all their moves because I'm so far back on the tech curve, but the buzz I generally hear is that their changes are too few, too rare, and too incremental. They seem to be an incredibly cautious and conservative[5] bunch, and they're not eager to make Big Scary Changes.

But things have gotten bad enough – or at least worrisome enough – that they seem to be doing exactly that. They're working on glNext, which is a complete re-write of OpenGL. Now, I recognize that re-writes are generally foolish and self-indulgent, the work of engineers obsessing over code "cleanness" rather than the usefulness of the product. But if re-writes are ever warranted, then I think this would be such a case.

On the other hand, if this is a complete re-write without regard for backwards compatibility, then from the perspective of a game developer it’s basically just another API entering the field. If – like me – you’ve been faithfully working on OpenGL stuff for years, you’re not going to get a magic speed boost for nothing. You’re going to need to re-write your code to do things the new way.

On the Oculus, it's not just framerate that matters, but also latency. It's possible to have a demo running at 75fps where each frame is slightly delayed by a fraction of a second due to some clog in the operating system. In this case, you'll turn your head and the thing you're looking at will seem to move with you, then snap back to where it should be. It looks like everything vibrates slightly when you turn your head, with vibrations getting more extreme the faster you turn. This is called "juddering". It's not pleasant.

For me, the more immediate problem is that the Oculus SDK doesn't really properly support OpenGL right now. For a product going for a broad, multi-platform release, this strikes me as being really odd. The OpenGL version of the SDK is incomplete, and there's almost nothing in the way of example code if you're looking to figure out which parts work and which parts don't. Specifically, you can't render directly to the device. You have to set it up as an extra monitor, then create a maximized window on that second monitor. That ends up being really laggy for a lot of annoying reasons. Your rendered images get squeezed through some extra layers of Windows processing to be turned into images on your "desktop", and that latency makes the Rift really uncomfortable to use.

Since I got the Rift because I wanted to experiment with simulation quality and how to minimize VR sickness, this basically puts me out of business. Trying to measure smoothness and user experience in this mode is like trying to play Jenga on a rollercoaster. The noise in the system is larger than the thing I’m trying to measure. I can either drop what I’m doing and go learn DirectX (a massive and frustrating investment of time) or I can shelve my Rift until the SDK is finally updated. And there’s no ETA on when they will add OpenGL support, so I could end up waiting a long time.

There are no easy answers. Only very ugly tradeoffs.

 

Footnotes:

[1] Mercifully, Mattel is taken.

[2] ”Why would anyone do that?” is one of the most natural yet infuriating questions an engineer can ask.

[3] Except Microsoft, who is the main beneficiary of this mess.

[4] The not-for-profit member-funded industry consortium that maintains OpenGL.

[5] I value these attributes in engineers. Generally I think the daring, devil-may-care types are best using technology, and the people who invent the technology should be slightly obsessive and paranoid. The astronaut should be bouncing around in space and giving thumbs-up to the world while the eggheads at home wring their hands and worry about every thing that could ever go wrong. Seems to be the best way to Get Stuff Done.




82 thoughts on “glNext”

  1. Bryan says:

    Have you ever written about why you use OpenGL over Direct3D? From what I’ve read, you’ve usually used Windows and Visual Studio for your development work. Since you were already using Microsoft products, what made you deviate away from them for your graphics API?

    I can understand how you may want your code to run on multiple platforms now, but I imagine that wasn’t nearly as important when you started to learn graphics programming.

    1. mhoff12358 says:

I think it's more because D3D is closed than because OpenGL is open.
      The personal benefits of having your code be cross-compatible might not be high, but if you’re yet another developer writing yet another Microsoft restricted application now you’re part of the problem.

      Business practices have the power to turn engineering decisions into political ones, and it can make people really divided.

    2. silver Harloe says:

      Clearly IANSY(*), but I can speculate that his answer is going to be three pronged:

      1) learned OpenGL first because of a job requirement or something back when he had a regular programming gig (no matter what your strengths are, if you go to work at company X and they like library A, you get to learn library A or you get to see if company Y is hiring)

      2) Using Visual Studio to program is nothing like tying your program to a MS library. VS is a great editor for C++ with all kinds of features that work regardless of whether you use DirectX or OpenGL (**), and it would be hard for MS to make VS not support OpenGL without making it hard to use any third party libraries, and there are so many useful ones out there. In any case, once your program is compiled, you can’t tell if the programmer used Visual Studio, or Emacs, or Notepad (***) – nothing of the editor is meaningful to the ability to distribute the code.

      3) too many friends who like Linux to completely ignore it as a platform. Even though the “cross compiler” setup to make VS just produce a Linux binary itself may be difficult or impossible, as long as the code is doing graphics with OpenGL instead of DirectX, one of the hugest burdens of letting someone else copy your code and make it run on Linux is already accomplished. There will still be difficulties, but they will *usually* be resolvable with some #ifdefs that change some types or a couple library calls, and not require fundamentally rethinking how textures are created (or whatever the differences are)

      (*) “I am not a lawyer” except adapted to this context.

      (**) In a way, I’m kind of surprised they don’t have ‘code suggestion’ features that make DX easier to use so that people will naturally gravitate towards it – or maybe they do if you buy the Pro version instead of using the Commie Freeloader edition

      (***) this might be technically false on the Visual Studio vs plain-text-editors front – it's possible that there's something it inserts differently, but I don't think it's likely. The noticeable difference would be use of MSC++ vs gcc to compile – that would almost certainly have different stylistic things in how the machine code is laid out that someone could possibly know how to analyze to tell which compiler was used. Then again, it's quite possible that VS is configurable to use gcc if you work at it (being able to configure different compilers might seem to go against MS Monopolism, but it also lets MS reuse the VS code for a C#, Java.net, Visual Basic.net, etc platform and/or change out underlying versions of C++ more easily – and such configurability may just have the side-effect of allowing completely non-MS compilers to be used)

      1. DrMcCoy says:

        Actually, different compilers do lay out things differently and are pretty easy to differentiate if you know where to look.

        For example calling conventions, like how arguments are transferred to a function, where return values are stored and whether the caller or the callee is responsible for cleaning up the stack.

        Or C++ name mangling, which takes care of disambiguating overloaded functions by renaming them in a way that incorporates the argument types into the names.

        Or how class objects are instantiated. Is the caller responsible for allocating and freeing the memory for the object or the constructor/destructor?

        Or more subtle things like preference of which NOP to use, how switch() is implemented or how standard functions are implemented.
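
        To make the name-mangling point concrete, here's a rough illustration (the mangled strings below are typical examples from memory and can vary with compiler version and calling convention, so treat them as approximate rather than gospel):

        // One and the same C++ function...
        int foo(int x);
        // ...ends up under very different symbol names in the object file:
        //   g++/clang (Itanium C++ ABI):  _Z3fooi
        //   Visual C++:                   ?foo@@YAHH@Z
        // A symbol dump or disassembler makes the originating compiler fairly obvious.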

        1. Bryan says:

          Well, for a single OS, probably the only thing that’s different is the last paragraph and maybe the second-to-last one. The calling convention is standardized by the OS. The name mangling algorithm is either standardized by the OS or C++ code can’t use shared libraries unless everything was the same compiler.

          Class object instantiation might be standardized or might not be; I’m not completely sure.

          NOP use, switch implementation, standard functions, though — yes. Also which standard functions get inlined and how — e.g. strcpy.

          So yeah, in general, this is right, but some of the specifics only apply on different OSes.

          1. DrMcCoy says:

            Nope, wrong.

            Calling conventions for library functions are standardized. Internally, the program can do whatever it damn well pleases.

            If it wants and doesn’t care about recursion, a program doesn’t even need to set up stack frames for internal functions and can just pass stuff using registers. Often done for handwritten, optimized assembly code linked into the program.

            Shared library functions are resolved using exports. gcc on Windows (MinGW) still mangles names differently than Visual C++, as does the Intel compiler or Borland or what-have-you.

            Object instantiation isn’t standardized either. Again, just run something compiled with MinGW and something compiled with MSVC through a disassembler.

            But you are somewhat correct: In contrast to C libraries, C++ libraries generally aren’t ABI compatible between gcc and MSVC. In certain cases, libraries compiled with different gcc versions aren’t either, because you can compile gcc to support different ways of implementing exceptions (dwarf2, sjlj (setjump/longjump), seh). Compatibility was also broken between gcc 2.x and gcc 3.x back in the days.

            1. Bryan says:

              Well, on the one hand, sure, it’s possible to do whatever you want in a single compilation unit (if there’s no recursion, you *might* even be able to just inline every call), and maybe even within a single set of .o files (or .obj with most windows compilers). But as you say, there’s a standard for libraries — and it applies to both static and shared libraries. And given that, especially the “static” part, I have a hard time seeing the point in making a different convention for same-source-file functions; just reuse the code you already need to use to generate static library calls.

              So yes, it’s possible to do something different, you’re right there. But I’ll take refuge in the fact that I believe it’s extremely unlikely to happen in practice, so you’re highly unlikely to be able to use the calling convention to ID a compiler.

              Object instantiation can also be different — unless the OS has a defined C++ ABI. Like Linux has. :-) Any compiler that conforms to that can’t be told apart using that kind of info. OTOH, different compilers can build programs that use shared libraries, and (mostly due to soname support, and people keeping track of what “binary compatibility” means) Linux has *way* more shared libs hanging around, even C++ ones.

              (Off this system: boost (duh), libcdio++, libgmpxx, libgnutlsxx, libgs, libMagick++, libpango*, libpcrecpp, librsvg, libsmpeg, libtiffxx, maybe others. So CD support, math, encryption, postscript interpretation, image manipulation, font rendering, regular expressions, SVG rendering, MPEG rendering, TIFF image rendering, and maybe others, all have the option for C++. And that’s just the packages that I’ve either specifically enabled it on or not-noticed it.)

              With no standard C++ ABI, what you say is probably true. Of course it comes with the downside of being unable to do C++ across libraries. Insert the standard “standards” rant.

              As for gcc2 to gcc3… I remember that transition being a pain (and the less said about gcc 2.96 the better; building the kernel correctly is kinda overrated right? …). That C++ ABI, though, is what the 2->3 upgrade changed; g++ 2.x didn’t conform to it.

        2. nm says:

          None of that is relevant when distributing source code. The build system that comes tacked on with VS project files implicates VS, but the code itself could as easily have been written in notepad. In fact, when I have to use IDEs to build things, I usually use my own editor because the IDE’s editor is not Emacs.

          1. DrMcCoy says:

            Yes, of course, I’m not disputing that.

            You can sometimes guess that people use a full-fledged IDE instead of a text editor by weird/stupid IDE idiosyncrasies. Like "smart" placement of indenting tabs on empty lines. There is no word to describe how much I hate that, and trailing whitespace in general. And everything apart from The One True Indenting Style™: tabs for indenting, spaces for alignment. I never understand why most IDEs actually make it harder for you to follow properly strict indenting rules.

            Me, I’m a vim fanatic. And I’m generally GNU/Linux-only and a FLOSS zealot. :)

            1. Richard says:

              On the Tabs v Spaces front, pretty much all decent IDEs have configuration settings where you can pick what you want.

              Combined with the “Apply code style to this file” command, there’s very little reason to care which style anybody else uses.

              The only thing that matters is the style in the repository.

              As all decent source control software has hooks for both “pre check-in” and “post check-out”, you can simply set up auto-styling commands for those two and bingo, you can use whatever style you like in your C/C++ files and I’ll use whatever style I like.

              And neither of us has to give a damn about what the other prefers, which is exactly as it should be.

              Bracing and indentation style is no different to colouring your comments, constants, keywords etc.

  2. Alexander The 1st says:

    *Clearly* the solution is to write your own OpenGL SDK for the Oculus, and call it glBest.

    1. Phantos says:

      With blackjack! And hookers!

      1. Daemian Lucifer says:

        In fact,forget the graphics!

        1. Tse says:

          Forget the blackjack!

          1. Retsam says:

            Ehhh, forget the whole thing.

  3. Nick says:

    Not to be pedantic, but Metal is very iDevice specific and built around the way the graphics hardware there works (hence the name) rather than trying to be another general API.

    Disclaimer: I’m an iOS developer, but I’ve never actually looked at the Metal APIs.

    1. Jnosh says:

      Disclaimer: I know some OpenGL but I am certainly not an experienced OGL programmer, I don’t know DirectX and Mantle at all and I have only looked at Metal so far as I don’t have a compatible device around but…

      … from what I know about OpenGL and what I've seen of Metal, they still work fundamentally the same, ignoring programming languages, API naming conventions, etc. Metal essentially looks a lot like a streamlined and cleaned up version of OpenGL ES 3.x to me.

      One of the big issues of OpenGL is that there are often many ways to achieve the same thing. This makes it harder for beginners to wrap their heads around how to do things and since often only one of these ways will actually be XXTREMELY FAST (TM), it is easy to ruin your performance without knowing why unless you have a deeper understanding of how OpenGL and the hardware work. Also, the best way to do things changes every few years as the hardware gets more capable which is also the reason why there are so many approaches in the first place…

      Now Metal essentially gives you just the most modern, fast way of doing things (as appropriate for OGL ES 3.x GPUs) and lets you use it in a nicer and simpler way than OpenGL which is still bound by many of the API design decisions made in the early 90s and the general “limitations” of C.

      What I am trying to say is that if you know graphics programming, this still works the same way as before it’s just a “nicer” and “easier” way to do it.

      To use good old car analogies:
      OpenGL allows you to drive your car like a hand-, ox-, donkey-, or horse-cart, horse carriage or actual car and lets you decide which to use no matter how much sense that makes for the actual car you are using.
      Metal instead just gives you a steering wheel and automatic transmission and calls it a day.

      Now I would very much expect OpenGL Next to take the same general approach, but cross-platform and supporting all the features that mobile GPUs currently are lacking. Yes the API will look different and the programming languages used will certainly differ but I would be very surprised if OpenGL Next worked radically different than Metal.

      As for DirectX 12 and Mantle, I would again expect them to work along similar lines but it would be great to hear from someone who has actual knowledge of these :-)

      Also while Metal is certainly built around the current feature set of iOS devices, I don’t see any particular reason why it couldn’t be expanded to cover desktop GPUs as well. Will Apple do it? I don’t know but I sure wouldn’t be surprised. And knowing them, the desktop GPU driver team probably found out about Metal at the same time we did…

  4. Eathanu says:

    Calling AMD a “different devil” is pretty unfair. Based on the moves I’ve seen the company make over the years, their priorities list seems to read something like:
    1. Fix the hideous mess that is graphics hardware and programming standards
    2. Make money
    3. Push open source as much as possible

    whereas Nvidia’s is:
    1. Make shitloads of money
    2. Push AMD out of business
    3. Make fucktons of money

    They’re the closest we’ll ever have to a Valve of graphics hardware.

    1. Except for the bit where current Mantle game code isn't even faster than DX code on GCN1.2 hardware out of the box, so when they call it a closer-to-metal API they mean it is hardware specific. Not only will it be basically impossible to make Mantle work on other high end hardware (essential for the 60%+ of users who have nVidia silicon), you have to re-engineer your render code to get it to work on future AMD GPU designs. It's a blip like Glide that will expire as AMD change their hardware so the old metal API stops working as an efficient layer (after the GCN revision, that next AMD GPU will run faster using DX/OGL than Mantle code – that's an incredible issue), with only thousands of wasted rendering coder hours to its name.

      AMD have constantly (over the last few years) brought better silicon to the market for better prices than nVidia, but refuse to invest in the software that would give them better experiences in games (crippling both their performance and their value-add features). When you're chasing your competition on features by using Adware, there's something wrong. They shout about nVidia working with devs as 'locking in' nVidia's advantage (by giving away code to programmers who want it) when Mantle is the most egregious move in this arena for a decade, and TressFX was exactly the same move as nVidia plays – they're both involved with research into branded tech (FXAA vs MLAA – two teams building the same tech with their own branding). The open source move is a desperation play to offload some work from the driver engineers who haven't yet left AMD to work somewhere else.

      1. Overall, I think Mantle is DOA. Nobody is going to want to code on something that effectively cuts their market by half or more.

    2. steves says:

      It is a sorry state of affairs. The relentless pursuit of money from all the major players makes things less than optimal, but we are seeing progress, and there is always going to be a balance between standards and proprietary innovation.

      Your list of Nvidia’s priorities is not far wrong, and they are a bit annoying at times, but they do make very good stuff that works. Stuff I can buy right now. A 780GTX with a 120Hz G-Sync monitor is a thing of beauty, and miles ahead of what I was using just a couple of years ago.

      Same deal with Apple on mobile – people I know who do iOS graphics stuff are salivating over Swift + Metal, and whilst I’ve only just dipped my toes into that particular pool, it does look like a developers dream…as long as you only care about iOS.

      And then you have VR. If that takes off, as I hope it will, then everything changes, and it’ll be the wild west of the mid 90’s again, complete with 3DFX Voodoo style madness, and that was kind of fun times from what I remember!

    3. Felblood says:

      We all trusted Google once, too.

      That's how they managed to acquire more power than any one company should wield.

      It’s like the Ring of Gyges. Absolute power corrupts absolutely.

      1. Purple Library Guy says:

        I’ve always wondered about that. Power certainly seems to corrupt, but I think it’s some kind of limit function. Because, like, most of the corruption of power seems to involve the attempt to make it closer to absolute–reaching for more power, or staving off challenges to that power, or trying to increase control so that no challenges can emerge. All of this would become irrelevant given power that actually was absolute–which has never happened and almost certainly never could. I expect you could come up with physics-of-information reasons why power could never be absolute. But if it could, you’d have no more need for all the corruption; you’d be in a very different space from the realm of merely “extremely powerful”. There would be no more to grasp for and no possibility of challenge and nothing to prove. Truly absolute power might not corrupt at all.
        So it might be truer to say that corruption approaches absolute as power approaches absolute.

  5. Licaon_Kter says:

    Mesa is an open-source OpenGL implementation, not an alternative.
    BTW, AMD offered Mantle for glNext.

    1. Yep, that seems to be a common confusion over what Mesa is/does.

      The AMD offer seems to be that they will not attack, via copyright or patent claims, any code/ideas lifted from Mantle. This is excellent news because OpenGL needs more involvement from AMD. The year gap between Khronos announcing a new OpenGL revision (with nVidia releasing their beta driver on the same day and making this the stable driver weeks later) and AMD releasing a driver that supports it (using extensions to patch any holes but not provide full support for the latest API) has been a strange deficiency that made AMD seem somewhat estranged from OpenGL.

      GLnext is exciting, both as a proper cross-platform API without any cruft and for what that should mean. XNA (which started out as Managed DirectX, where I first picked up doing serious stuff in 3D) was a great tool for education and teaching 3D rendering to novice users. A clean, modern GLnext may well provide the same opportunity for a cross-platform API.

      1. Bryan says:

        As long as it doesn’t turn into:

        http://xkcd.com/927/

        I really hope it doesn’t, but … yeah.

        1. Felblood says:

          Ugh.

          It’ll be musical chord notation all over again.

    2. Bryan says:

      Yeah, I was going to mention that.

      Mesa is the OpenGL library for lots and lots of Linux machines out there — anything that uses the open-source in-kernel direct-rendering support is going to use Mesa’s libGL. That’s every Intel video chip, at least. AMD/ATI cards I’m not terribly well versed on, but it *looks* like either the open-source driver (built off the chip documentation that AMD provides) or the AMD binary driver, both use Mesa’s libGL. (As they both seem to use the standard DRI kernel interface.)

      On the other hand, nvidia cards can use either the reverse-engineered “nouveau” driver, which also uses Mesa’s libGL and DRI, or they can use nvidia’s own binary-only driver, which comes with its own libGL.

      (…And because I’m just now realizing that this may not be obvious… libGL is the library that implements the OpenGL API. You can run either a software-renderer version of it, and Mesa has one, or a version that knows how to talk to your hardware accelerator, either through DRI, which Mesa also has one of, or through whatever custom thing that nvidia uses. Or if you use a libGL from someone else, then maybe it uses some other backend. But the library interface is the same.)

  6. Suddenly, the core concept for games like Dwarf Fortress doesn't seem all that original if the guy coding it did any sort of graphics programming, and it explains why graphics are the last thing Dwarf Fortress' developer wants to touch. :)

    1. MadTinkerer says:

      Text mode game programming was originally born out of necessity. Way back on the mainframes, sometimes text mode was what you had available. Then a decade later on the earliest consoles you did have some capability for sprites and bit-maps (gratuitous dash inserted to emphasize that we’re talking about one bit per pixel bitmaps, not the ridiculous memory hogging 32 bit color we’re used to), but everything was low level programming and you’re either an employee of the manufacturer or a former employee because it’s the 1970s and there’s no other means to learn how to make games for those systems.

      Then personal computers came along and you had a huge rift between home computers, which required BASIC and games because that's what the kids wanted, and serious business machines engineered by motherfuckers who thought that monochrome-green was good enough for every possible serious use of their serious computers. The graphics standards on the "kids" computers were different for each machine, but wholly integrated into the system and usually a difference of just a few BASIC commands for how the pixels got put on the screen. Unfortunately, the business machines took over the home computer market, which meant that you learned how to program for Video Graphics Adapter peripheral cards (in Pascal, I think) or you decided to make do with text mode.

      The really sucky thing is that until Carmack figured out his very clever scrolling graphics hack that Nintendo was too dumb to allow id to make into a SMB3 port, even with the peripherals the business machines lagged behind the home consoles in terms of graphics capability. Imagine trying to make games for the IBM-compatible between the U.S. release of Super Mario Bros. in 1986 and the release of Commander Keen in 1990. Imagine the frustration of me in 1988, eight years old and trying to figure out how to make scrolling 2D graphics in BASIC on my 286 and not even realizing the full scope of the problem until long after Carmack solved it.

      In a time and place where 2D scrolling is a cutting-edge feature of a licensed graphics engine (no plural until later AFAIK), and even non-scrolling 2D requires you to… [hang on, I'm going to get out my copy of Programmer's Guide to the EGA and VGA Graphics Cards] oh holy cow I was wrong, there's circuit diagrams and examples in a mix of C and assembler. There's page registers, shift registers, sequencer registers, attribute controller registers, Bresenham Line Algorithm, light pen support (oh hey: doesn't this mean all modern computers have some kind of light pen support for backwards compatibility?), data rotate register, non-Microsoft "windows", color registers, and a chapter named "Memory Addressing Techniques".

      So yeah, in that kind of environment, text mode looks like a pretty nice option for getting anything done.

  7. Rob says:

    No discussion of the shortcomings of OpenGL (especially historically) is complete without this link.

    It’s not just that it’s a product of its times and slow to change, but also that whenever they do update it with new functionality they seem to miss the mark completely more often than not.

  8. General Karthos says:

    I’d argue that Apple products don’t have ALL the drawbacks of Microsoft products, but I can see it would turn into a “Mac vs. PC” debate, and it’s really not worth all the animosity just because I’m right.

    What I will say is that even if you’re helping a super-evil mega-corporation, DirectX is better than the competition. This I know because I’m stuck with OpenGL.

  9. Rick says:

    And then they go and release a new hardware revision while you’re waiting for the last one’s SDK to be finished :(

  10. Bryan says:

    Wait, OpenGL 4.5 is going to do direct-state-access?

    WHERE DO I SIGN UP?!

    Forget C versus C++. The big problem I’ve always had with OpenGL is not that it’s tied to C (C++ compilers are *crap* on a lot of architectures that OpenGL can’t very well be “cross platform” without supporting; just because x86 is as popular as it is doesn’t mean that this other stuff doesn’t exist), it’s that it forces you to control hidden state. Which forces the code to be pretty hostile to threading: you can’t very well change the bound texture in one thread while trying to do anything else with the GL context in a different thread. So your rendering is stuck singlethreaded.

    Which might actually be good, come to think of it … but the annoyance of having to handle multiple textures being possibly bound when the function starts, and correctly restoring that state when it exits, is still a giant pain. And this will drop that.

    I’m kinda dancing in my seat here… :-)
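
    For anyone who hasn't met "direct state access": the difference being celebrated here looks roughly like this (a sketch assuming a GL 4.5 context; error handling omitted):

    /* Classic bind-to-edit: to touch a texture you must bind it, which
       clobbers whatever the rest of the code had bound. This is the hidden
       state that makes threading (and just plain reasoning) painful. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    /* GL 4.5 direct state access: the object is named explicitly, so no
       global binding point gets disturbed. */
    GLuint tex2;
    glCreateTextures(GL_TEXTURE_2D, 1, &tex2);
    glTextureParameteri(tex2, GL_TEXTURE_MIN_FILTER, GL_LINEAR);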

  11. MichaelG says:

    From what little I’ve looked at in the Oculus SDK, here’s the situation:

    Both OpenGL and DirectX apps are building a texture image (one for each eye), and then handing that off to the SDK. So you can do anything OpenGL you like, and the SDK is not involved. Just generate that texture image. Then the SDK is supposed to distort it to correct for the lenses, and send it to the display.

    Under DirectX, the SDK does this with some sneaky use of a call found in the device driver called GetDX11SwapChain. There’s some magic there to extract the data and get it to their custom display driver. I can’t follow it all.

    Under OpenGL, there’s no weird hook to the swap chain, and it looks like they can’t get the data. This results in the situation Shamus describes. It’s not that you can’t do OpenGL calls. They just can’t use their modified display driver since they don’t know how to get the data under the covers.

    If there really is no way to get the data, this means they completely screwed up OpenGL support. Or perhaps there is some sneaky way to do it, and they just haven’t coded it up. Or there’s some broken version there already, but I haven’t found it.

    1. Geebs says:

      I don't understand this really; in OpenGL it's trivially easy to render to a couple of framebuffers. Why can't the SDK just specify a couple of buffers to render to and then do its thing? Hell, why can't it just ask you to render everything to a single unfiltered texture a bit bigger than twice the size of their viewport and just grab that?
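
      The render-to-texture setup being described really is short in plain OpenGL. A rough sketch (eyeWidth and eyeHeight are placeholders for the per-eye resolution, and error checking is omitted):

      /* A texture to act as the render target for one eye. */
      GLuint eyeTex, eyeFbo;
      glGenTextures(1, &eyeTex);
      glBindTexture(GL_TEXTURE_2D, eyeTex);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, eyeWidth, eyeHeight, 0,
                   GL_RGBA, GL_UNSIGNED_BYTE, NULL);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

      /* Attach it to a framebuffer object and render the scene into it. */
      glGenFramebuffers(1, &eyeFbo);
      glBindFramebuffer(GL_FRAMEBUFFER, eyeFbo);
      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                             GL_TEXTURE_2D, eyeTex, 0);
      /* ...draw this eye's view, then hand eyeTex to whatever performs the
         lens distortion and final presentation. */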

  12. Decius says:

    Doesn’t Valve have the talent, budget, position, incentives, and culture required to create a new alternative to DirectX?

    1. Matt Downie says:

      They have the resources to make Half-Life 3 too, but I’m not holding my breath.

      1. Eathanu says:

        12:42 am posting time. HL3 confirmed!
        (12 / 4 = 3, 3 comes after 2. Flawless logic.)

        1. Retsam says:

          I bet someone’s come up with a script that can take an arbitrary comment and turn it into a Half-Life 3 confirmation. If not, I might.

    2. RandomInternetCommenter says:

      Valve knows how to do game design and how to do game marketing, but I'm not convinced they're all that when it comes to game programming. I've experienced more bugs and crashes in each Valve game than in the entire id Software catalog, and to this day Steam is still infuriatingly clunky and prone to random errors.

  13. Bropocalypse says:

    Speaking as someone who is just starting to learn OpenGL…. I have no idea what any of you are talking about.

    1. Bropocalypse says:

      To clarify, having used openGL lightly off and on for a few months, I’m learning how to properly use glRotatef() and glTranslatef() as well as the family of glBegin(WHATEVER) commands. Today I just heard about glPushMatrix(). I’m choosing to blame the lack of good documentation and the normal problem among programmers that a lot of stuff goes unexplained and even fewer include any ‘You should also read these related topics’-style links. I didn’t even know what the modelview matrix did, exactly, until a couple of hours ago.
      Sometimes I look for an answer for what something does, only to be greeted with extremely technical mathematics and jargon when all I wanted was an example. Compounding these issues is that because openGL seems to be virtually always used as a library for a higher-level language like C++ or Java or whatever, and even within THOSE there seem to be different ‘styles’ of OpenGL commands, it’s almost impossible for me to find an example that I can intuit into the language and openGL implementation I am currently using.
      And THEN I look at the things that are linked in this article, the article’s links, and in the comments HERE, and I see things like ‘driver compatibility’ and ‘optimization’ and so on and on, and I can’t even imagine beginning to learn how to comprehend what those things would entail. Is this all just because OpenGL is so old and bloated, or is graphical programming inherently complex at every level? I just want to make the game rolling around inside my head.
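
      For anyone else at the same stage, those calls fit together roughly like this (a sketch of the old fixed-function/immediate-mode style; the numbers are arbitrary):

      glMatrixMode(GL_MODELVIEW);
      glPushMatrix();                      /* save the current modelview matrix */
      glTranslatef(0.0f, 0.0f, -5.0f);     /* move the triangle away from the camera */
      glRotatef(45.0f, 0.0f, 0.0f, 1.0f);  /* spin it 45 degrees around the Z axis */

      glBegin(GL_TRIANGLES);               /* hand GL one vertex at a time */
      glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
      glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
      glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  1.0f, 0.0f);
      glEnd();

      glPopMatrix();                       /* restore the matrix saved above */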

      1. Matt Downie says:

        “Is this all just because OpenGL is so old and bloated, or is graphical programming inherently complex at every level?”
        A bit of both.

      2. Zukhramm says:

        If you're using glBegin and glEnd you're using the parts that are supposed to be gone and never used, pretty much.

        I've gone through the same thing this last half year: not only figuring out OpenGL, but GLSL, the specifics of the non-C implementation I was using (Java), and the mathematics.

        The problem for me was that most materials I found online were more interested in getting you to write code than in getting you to understand what was actually going on.

        1. Bropocalypse says:

          Wh-what? What’s replacing glBegin and glEnd?

          1. MichaelG says:

            There are probably some up to date OpenGL tutorial sites. The NeHe series (http://nehe.gamedev.net/) was good, but I’m not sure they are up to date.

            Or you can read a book. I used the OpenGL SuperBible, by Richard S. Wright.

            To answer your question about glBegin and glEnd: these days, you create vertex buffers, attach shaders, draw the vertex buffers, then call SwapBuffers to exchange the front and back buffers.

            It’s not any harder, but the learning curve is steeper. You can’t really put a triangle on the screen without writing at least a simple shader. Or copying one from the book.
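
            To give a feel for the modern path, a stripped-down sketch of pushing one triangle through a vertex buffer (it assumes a shader program has already been compiled and linked into "program", and leaves out error checking):

            /* One triangle: three vertices, x/y only. */
            const float verts[] = { -0.5f, -0.5f,   0.5f, -0.5f,   0.0f, 0.5f };

            GLuint vao, vbo;
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

            /* Describe how attribute 0 maps onto the buffer. */
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);

            /* Every frame: */
            glUseProgram(program);           /* your compiled vertex + fragment shaders */
            glDrawArrays(GL_TRIANGLES, 0, 3);
            /* ...then SwapBuffers() or your windowing system's equivalent. */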

      3. WILL says:

        Stop all this glRotatef/glBegin nonsense. Switch to OpenGL 3+ right now. Fixed pipeline graphics have been dead for almost a decade.

        1. Shamus says:

          Actually, those calls have their place. Certainly you don’t want to be using glBegin in PRODUCTION code, but not everyone is writing production code.

          It takes time to set up all the buffers and to load shaders and all that other screwing around. Sometimes you just want to put a few triangles on the screen. Not all programs are intended to ship to consumers. There are prototype projects. Educational projects. Legacy projects. Lo-fi old-school projects where the polycounts are so low that performance can’t possibly matter. For these situations, it’s just WAY easier to use immediate mode.

          If the new version of GL gets rid of that (I’m certain they will, assuming they ever finish it) then so be it. But you might as well use the tool while you have it. I’m grateful I can still put a few triangles on screen without pages of vertex buffer loading, shader loading, error checking, handle tracking, and GLSL value-passing.

          1. Shamus says:

            Oh, and I forgot: Implement your own matrix stack.

            1. Shamus says:

              Don’t forget writing your own shaders, which requires you to know a completely different language.

          2. Zukhramm says:

            Trying to learn OpenGL this year, I thought immediate mode was a lot more confusing with its hidden matrices and mysterious function calls. I'm a lot more comfortable seeing what's going on. Sure, writing my own matrix and vector classes took an extra afternoon but it's work that needs to be done only once, the mathematics are not suddenly going to change.

            Same thing with shader loading and creating vertex buffers. Do it once and you've got a quick way to put triangles on the screen from then on. It just makes more sense to me to go to the thing you'll actually end up using most of the time than to first learn this additional thing that works in an entirely different way and that you actually won't use.
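
            For concreteness, the kind of code being written by hand here is roughly this (a rough sketch, column-major to match OpenGL's convention):

            /* Column-major 4x4 matrix: element (row r, column c) lives at m[c*4 + r]. */
            typedef struct { float m[16]; } Mat4;

            Mat4 mat4_identity(void) {
                Mat4 r = {{1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,0,1}};
                return r;
            }

            Mat4 mat4_translate(float x, float y, float z) {
                Mat4 r = mat4_identity();
                r.m[12] = x; r.m[13] = y; r.m[14] = z;   /* the offset sits in the last column */
                return r;
            }

            Mat4 mat4_mul(Mat4 a, Mat4 b) {              /* result = a * b */
                Mat4 r;
                for (int c = 0; c < 4; ++c)
                    for (int row = 0; row < 4; ++row) {
                        float sum = 0.0f;
                        for (int k = 0; k < 4; ++k)
                            sum += a.m[k*4 + row] * b.m[c*4 + k];
                        r.m[c*4 + row] = sum;
                    }
                return r;
            }

            Rotation, perspective and look-at matrices follow the same pattern; they are fiddlier to get right, which is really the point Shamus makes below.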

            1. Shamus says:

              Sigh.

              “Sure, writing my own matrix and vector classes took an extra afternoon but it's work that needs to be done only once, the mathematics are not suddenly going to change.”

              Except, what if you DIDN’T know how to do that? What if that stuff was opaque to you? Then to do something SUPER SIMPLE (make 1 triangle) you would have days and days of hassle. You need to learn everything about how matrices work (both modelview and projection) before you could even begin writing that code, and then you’d need to finish that code before you could begin your project. How long did it take you to learn all of that the first time? (I’m still learning.)

              It's the classic engineer problem, "I know how to do this and so it's easy. Everyone else should just learn what I know and then they wouldn't have a problem."

              Of course, this is why we make APIs – so we can abstract problems so everyone doesn’t need to know everything in order to get anything done.

              EDIT: Removed several flagrant typos.

              1. Zukhramm says:

                But you need to know that to use them in immediate mode too, right? So how does it benefit learners to spend their time learning yet another thing, and one they won’t be using much anyway?

                Edit/Addition:

                I mean, if you don’t know anything about matrices, isn’t the time spent learning something new better spent at learning that instead of a form of opengl that won’t be that useful? It seems inefficient to learn one additional, not very useful thing just to hold off on learning the one really useful one.

                And it’s not like you need full understanding of linear algebra to start drawing with VBOs and shaders either. A well written tutorial should be able to get someone started and then gently introduce one matrix at a time.

                1. Zukhramm says:

                  Meh. Ran out of editing time. But this is something I meant to post as another edit:

                  Second edit: Before we go anywhere else with this I want to say that my initial "extra afternoon" wasn't really meant as a comment on how simple it would be for someone learning, but more as a response to the prototyping/"just let me get something on the screen now" point, in that it's not something that needs to be redone for every new project. But of course, I mixed it in with everything else in my own comment about my own learning, and that's not what I meant.

              2. silver Harloe says:

                just to check if I’m understanding:

                1) OpenGL contains something called “immediate mode” which they’re hoping to deprecate, but which provides a way to make simple graphics happen now, if inefficiently

                2) they apparently also have a more efficient version, but you have to write your own matrix library, because they aren’t gonna

                3) “immediate mode” contains a matrix implementation which is hidden from the user and not extensible

                assuming I followed, I have a question:
                1) are either of these subsets of their library implemented in terms of the other?

                2) if no, why not implement immediate mode as “reference matrix library + the other subset used correctly” and call it a set of examples which happen to also be usable?

                1. Shamus says:

                  Yep. You’ve got a pretty good grip on what’s going on.

                  I'm sure #2 is what will happen. I don't really begrudge OpenGL gearing itself primarily towards high-performance development. I'm just saying that in the meantime, if all you want is to shove some polygons around then you might as well use immediate mode while it's available.

                  1. Richard says:

                    I disagree.

                    To get some triangles on screen in “proper OpenGL 3” mode, you need a simple vertex shader and a trivial fragment shader.

                    Then loop dropping a new value into the shader each iteration and tada! Spinny cube.

                    The MVP matrix math etc is done by these shaders, which the tutorials simply hand you as black boxes.

                    That’s no different to asking the fixed-function pipeline to do it – both are “Copy this piece of magic, don’t worry about how it works yet”. You can learn what is actually going on later. (I still have trouble visualising quaternions)

                    Everything else is the same stuff in both techniques, just described differently.

                    Personally I found the idea of arrays of vertices much easier than building the same geometry in immediate mode – it suddenly made sense, given my previous 3D modelling experience.

                    Then you can start pulling apart the fragment shader (do that first ‘cos pretty colours) and then the vertex shader, passing it more or less stuff and seeing what happens.

                    However, everyone learns in different ways.
                    I just worry that if you learn immediate mode first, you’ll get into bad habits that are hard to break.

                    1. Domochevsky says:

                      Ah, so a “simple” vertex shader and a “trivial” fragment shader. Well, if it’s that easy…

                      “Personally I found the idea of arrays of vertices much easier than building the same geometry in immediate mode – it suddenly made sense, given my previous 3D modelling experience.”

                      I guess that’s where the disconnect lies. That stuff is of little use to me, while building something out of coordinates of points seems a lot more intuitive with this “immediate mode”.

                      As a note: I do indeed want to "just put some triangles on the screen", since I'm primarily an artist (secondarily a coder), doing an isometric 2D game.
                      I’ve got at best a few hundred triangles on screen at any given time, so performance in that regard simply doesn’t matter. Ease of use does. And Immediate Mode tutorials are much easier to find and comprehend.

                    2. Richard says:

                      The “trivial” fragment shader is output = input:

                      #version 330

                      out vec4 outputColor;
                      in vec4 inputColor;
                      void main()
                      {
                      outputColor = inputColor;
                      }

                      The ‘simple’ vertex shader does the MVP transform (model space > view space > projection space) and passes on the vertex colour.
                      If you aren’t a maths person this one is harder to understand.

                      However, you can treat it as a black box – possibly forever.

                      Oh yes, linky to a tutorial I found useful:
                      http://www.arcsynthesis.org/gltut/index.html
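
                      To pair with that fragment shader, the 'simple' vertex shader being described might look roughly like this (a sketch; the attribute and uniform names are made up to match the fragment shader above, not taken from any particular tutorial):

                      #version 330

                      layout(location = 0) in vec4 position;
                      layout(location = 1) in vec4 color;

                      uniform mat4 modelViewProjection;   /* the MVP matrix supplied by your C/C++ code */

                      out vec4 inputColor;                /* feeds the fragment shader's "in vec4 inputColor" */

                      void main()
                      {
                          gl_Position = modelViewProjection * position;
                          inputColor = color;
                      }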

      4. Bryan says:

        Would it be better to start with webgl?

        There’s no possibility of immediate mode there, and there are a bunch of matrix stacks in javascript already (personally, I like glmatrix). The docs are also not terrible, but grabbing a tutorial is actually not nearly as problematic as with desktop GL, because there’s only the one (ES) interface.

        On the other hand, webgl doesn’t have a lot of recent extensions that allow much lower driver overhead (most of the stuff in the GDC talk linked way up above is not possible in webgl, for instance). So maybe. It’ll at least get you a way to avoid the current setup, *hopefully* without tearing your hair out.

  14. Samopsa says:

    Yo Shamus, Oculus just announced a couple of days ago that the new version of Unity (both the free and paid versions) ship with the newest Oculus Dev Tools for free. As Unity is open source and multiplatform that might be a good environment to tool around with!

  15. Geebs says:

    I’m already anticipating somebody writing a wrapper for glNext to allow them to use all of the ancient direct mode calls, and then writing a long, pissy blog post about it….

    Also: CARMACK Y U NO LIKE OGL NO MOR ??????!

      1. MichaelGC says:

        That is certainly quite pissy. With politeness and friendliness evidently deprecated in the comments…

      2. WJS says:

        Jeez, that thing again. Written by someone who clearly doesn’t get how deprecating stuff is supposed to work. You keep deprecated stuff in for legacy programs. On a new platform like mobile, there are no legacy programs – everything needs to be built new for the new architecture.

  16. Volfram says:

    Personally I use OpenGL because it has better cross-platform compatibility, AND because it has about 10% better performance than DirectX in all comparisons I've performed or read about.

    Seriously, the repeated statements of “DirectX is faster” confused me. In not one of my experiences has DirectX ever been faster than OpenGL. Just use OpenGL, it’s just plain better.

    1. Alexander The 1st says:

      Presumably when they say DirectX is faster they also mean in development time.

      Especially since I've never heard anyone complain about DirectX documentation; a 10% speed boost isn't really useful on machines that won't stutter with a 10% slowdown because of how fast the computer runs anyway, and even if it is about benchmarks – DirectX probably is faster in the stupider implementations, when you don't optimise. Like in Shamus' old example of people putting 5 50% opacity blinds next to each other to get an 80% opacity effect.

      Or in other words, DirectX could be faster in worst case scenarios.

      1. Volfram says:

        I can see how improved documentation would make for better turnaround time.

        I’ve never actually worked directly with DirectX, but I’ve messed around with a couple of graphics engines that support both, before I decided to branch off and make my own exclusively OGL system.

  17. From what I hear, DirectX 12 will be coming to Windows 7, thank Christ. I also THINK I heard that it will not require new hardware, but I can't tell if that is what the guy is saying or not.

    Anyways, The original video is here:
    https://www.youtube.com/watch?v=AfsedRZKX1c&list=UU_Yypfyx5fKHpd640wPGFzQ (5: 06)

    Still hope that the GLnext thing works out, I cannot for the life of me get anything I program in OGL to work!

    1. Zukhramm says:

      Either spambots are getting good or you copy-pasted the wrong thing.

  18. Kyte says:

    I feel you're being unfair to DX. For one, DX was made almost concurrently with OGL (MS had a hand in its development, even), and it came about from the simple business need of "We want to have games on Windows". Its lack of support for other platforms is simply the lack of a business need for "We want games on platforms that are not Windows".

    But far more importantly: XP was left in the lurch because there are fundamental differences between graphics drivers written for XP and down and graphics drivers written for Vista and up. This is the so-called WDDM, which was necessary for about a dozen good reasons and probably twice as many not-so-good ones. This is, incidentally, why Vista was such a painful upgrade: Suddenly manufacturers couldn't be sloppy with their drivers, and then they failed to deliver those drivers on time.

    It’s extremely unfair to characterize Microsoft as one of the drivers of the upgrade treadmill (that’d be Apple) when there’s documented proof you can still run Windows 1.0 apps in Windows 8. They are, and have always been, concerned first and foremost with compatibility. And the reason for this, straight from the mouth of Windows developers, is the exact opposite reason you state: If an app stops running, customers won’t buy an upgrade. “Windows broke my app!”. Users blame the OS, not the app.

    Also, MS puts no effort on Linux but it doesn’t block it either. Wine has DirectX.

    (As an aside, maybe it’s because I’m a whippersnapper, but it’s always a bit perplexing when you talk about ‘learning DX’ or ‘learning a new language’ as this massive painful undertaking. Especially learning a new language, when both Java and C# are extremely easy (and pleasant!) to learn for a C++ veteran.)

    1. Shamus says:

      “Also, MS puts no effort into Linux, but it doesn’t block it either.”

      The point stands that if you target DX then you can’t make your game Linux native.

      “As an aside, maybe it’s because I’m a whippersnapper, but it’s always a bit perplexing when you talk about ‘learning DX’ or ‘learning a new language’ as this massive painful undertaking. Especially learning a new language, when both Java and C# are extremely easy (and pleasant!) to learn for a C++ veteran.”

      I’ve spent years learning things. Learning things well takes time. I have deep knowledge in a couple of areas, and it’s pretty hard to give that up so I can do something else half-assedly. Learning is great when you want to learn things, but really tedious when you want to accomplish things. Maybe someday I’ll suddenly decide I want to know DX, and that would be a good time to learn it. But learning DX when I want to work on Oculus is a terrible idea. Oculus work is very demanding, and not appropriate as a learning project.

      Also, learning things makes for frustrating blogging. Instead of sharing my knowledge, I end up documenting all my mistakes and having dozens of people critique my work, give me conflicting advice, and tell me how “obvious” my mistakes are.

      Yes, I could sink a bunch of time learning DX well enough to do Oculus development. But that would take time, it would make for unsatisfying articles, and once the Oculus SDK supports OpenGL I’m going to go back to it because I know it better. And thus all the time spent on DX will have been wasted.

      Time is precious. Headspace is precious. Specialization is not something you can easily ignore.

      1. Kyte says:

        I guess we just have different approaches. Almost every technology I’ve used I’ve learned on the fly, either out of preference or out of necessity. My belief is that most knowledge can be ported and Google can cover the gaps (so far it’s done so admirably). It seems to be the expectation both from my classmates and my teachers, so to me it’s the norm.

        (That said, blogging a Newbie’s Guide to X would probably be quite valuable. Like your guide to music. It’s fascinating to see things from the perspective of someone still learning, and you have a particularly didactic way of blogging.)

        1. silver Harloe says:

          Learning languages is trivial. Learning *libraries* takes time. What one can do very rapidly with one library might take hours of constantly consulting references to do in another. So, yes, you technically can do all your coding with Google open, but that doesn’t always give acceptable productivity.

          Plus, libraries always have gotchas. Before an earlier reply thread on this very post, I would not have known that GL had two approaches to everything, and I might’ve spent hours trying to debug why I can’t use function X from immediate mode with function Y from non-immediate mode (something like the sketch below). And the more complex the library, the more likely Google won’t lead you to answers, but to thread after thread of people asking the same question with no useful reply.
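
          To make that concrete, here is a made-up but typical example of the kind of silent failure meant above. It is only a sketch; it assumes a core-profile context is already current and a loader such as GLEW or GLAD has been initialized:

            // Old-style immediate-mode drawing pasted into a core-profile program.
            // The core profile removed these entry points, so depending on the driver
            // and loader this either raises GL_INVALID_OPERATION or the calls simply
            // do nothing: no triangle, no crash, no error message.
            glBegin(GL_TRIANGLES);
            glVertex3f(-0.5f, -0.5f, 0.0f);
            glVertex3f( 0.5f, -0.5f, 0.0f);
            glVertex3f( 0.0f,  0.5f, 0.0f);
            glEnd();

            // The only clue is the error flag, which nothing forces you to check:
            GLenum err = glGetError();   // typically GL_INVALID_OPERATION here

          Nothing about the resulting black screen tells you which call (or which missing piece of setup) is the problem, and that is where the hours go.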

          And languages and libraries usually have good ways and bad ways to do things. You’ll usually find one, assume the problem is solved, and move on, even if the first solution you found was the least performant and exists only for some edge condition, or because it’s older and should be deprecated but isn’t, for backwards-compatibility.

          It’s easy to learn the syntax of a language and do a few simple programs, but it takes real experience to program *well*. I cannot count how many times I’ve seen my code from a year ago and screamed at past-me for being such an idiot, but past-me was usually just using the wrong method to do things – often because past-me coded up a quick function for something that was already in the library.

          1. Kyte says:

            Fair enough. You reminded me of that time I had to learn Rails on the fly.

    2. Purple Library Guy says:

      “Also, MS puts no effort on Linux but it doesn't block it either. Wine has DirectX.”

      Wine doesn’t have DirectX out of the goodness of MS’ heart, you know. Wine has DirectX because Wine developers worked damned hard. And I don’t know about DirectX in specific, but Wine, Samba, and other interoperability efforts (document formats, e.g.) were able to get as far as they have largely via strenuous lobbying of governments to have laws enforced (competition laws etc.), leading to government agencies dragging MS kicking and screaming into sharing various information needed for interoperability. The results, and in many cases the information, remain far from perfect.

      “MS doesn’t block Linux” – I’m sure you’re sincere about that, but it’s because you are unaware of the history involved. I was using Linux through much of that history, so I am fairly aware of it. MS has recently become much more ambivalent about blocking Linux, but that is solely because in many significant spaces (server, cloud, supercomputing, etc.) Linux became as big as Windows or bigger, so they’ve had no choice: they need interoperability more than Linux does. If they stayed stubborn they’d be the ones perceived as an obstacle and they’d be the platform that got dropped. The desktop, however, is MS’ last bastion, and I doubt they will completely give up trying to nobble Linux there; take, for instance, UEFI Secure Boot, a much more recent effort. MS has made many, many efforts to block Linux, and with a good deal of success (on the desktop if nowhere else).

      In the case of DirectX, it may be that any given design decision that in point of fact blocked Linux had other reasons driving it and even would have been decided the same if blocking Linux were not a consideration. But we can be pretty sure that “helps block Linux” was one of the checkboxes totted up when such design decisions were made, whether it happened to be a decisive one there or not.

  19. Norman Ramsey says:

    Sadly, Microsoft is not a benefactor here, but rather a beneficiary.

    (At least you know that someone is reading the cool new footnotes.)

    1. Shamus says:

      Whoops. Thanks, fixed.

  20. Frank G says:

    I’ve been working on an OpenGL game engine since I learned OpenGL in 2001. I originally did it all using immediate mode (glBegin/glEnd) and the fixed function pipeline, because back then that was all that existed. In the past few years I added shaders and painfully ported everything to work in the core context, which required removing all the immediate mode and fixed function pipeline calls.

    I guess I started using OpenGL because that was what was taught in the course I took. I stick with it because I wanted to port my engine to Linux, and have. That took much less work than porting to an OpenGL core context.

    I agree with all the other posters that OpenGL documentation sucks, and there are too many ways to do the same thing, and writing a modern OpenGL program using a core context would require a thousand lines of boilerplate code. Almost all the tutorials are out of date, and most don’t work on modern hardware. The graphics driver quality often sucks, especially AMD drivers, where driver updates tend to break my engine every time.
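
    To give a sense of what that boilerplate looks like, here is a rough sketch of the minimum needed to put a single triangle on screen with a core context. It assumes a context is already current (via GLFW, SDL, or similar) and that "program" is an already-compiled, linked shader program; compiling and linking the shaders is another few dozen lines on top of this.

      // One-time setup: upload three vertices and describe their layout.
      float verts[] = { -0.5f, -0.5f, 0.0f,
                         0.5f, -0.5f, 0.0f,
                         0.0f,  0.5f, 0.0f };

      GLuint vao = 0, vbo = 0;
      glGenVertexArrays(1, &vao);            // container for vertex-format state
      glBindVertexArray(vao);
      glGenBuffers(1, &vbo);                 // buffer object holding the vertex data
      glBindBuffer(GL_ARRAY_BUFFER, vbo);
      glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
      glEnableVertexAttribArray(0);          // attribute 0 = position in the vertex shader
      glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);

      // Every frame:
      glUseProgram(program);
      glBindVertexArray(vao);
      glDrawArrays(GL_TRIANGLES, 0, 3);

    And that is the short version: error checking, shader compilation, uniform plumbing, and viewport handling are all still missing.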

    OpenGL (the Khronos Group) goes out of their way to try to make things backwards compatible, and that’s why OpenGL has gotten so complex. But even though the spec allows mixing old and new style calls, most drivers don’t handle that well. At some point they’re going to drop support for immediate mode/fixed function, and that will be good. Sure, it will generate a lot of work for everyone to port their old code, but it’s required if OpenGL is going to survive.

    What we really need is wrapper functionality, or an open source library that wraps all the low-level boilerplate code so that you *can* write something simple that draws some triangles (inefficiently). I wrote my own, and many projects did the same. So if you want to see how to do this, find a good open source modern OpenGL program and look at the code – it’s much easier than finding a good tutorial. (But be careful about copyright issues if you just copy the code).

    If you’re worried about the transform matrices, that’s relatively easy to do yourself. Just use GLM, which was written for the purpose of replacing the fixed function OpenGL matrix stack. It comes with all the vector math stuff as well. Eventually I believe more libraries like this will come out. They just need to extend GLM to handle more of the flow – something that does shader management, texture management, etc.
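
    As a rough sketch of what using GLM in place of the old matrix stack looks like in practice (assuming "program" is an existing shader program; the "mvp" uniform name is just an example):

      #include <glm/glm.hpp>
      #include <glm/gtc/matrix_transform.hpp>   // glm::perspective, glm::lookAt, glm::translate
      #include <glm/gtc/type_ptr.hpp>           // glm::value_ptr

      // Build the matrices the fixed-function stack used to manage for you.
      // (Recent GLM versions expect radians; older ones defaulted to degrees.)
      glm::mat4 projection = glm::perspective(glm::radians(60.0f),   // vertical field of view
                                              16.0f / 9.0f,          // aspect ratio
                                              0.1f, 100.0f);         // near/far planes
      glm::mat4 view  = glm::lookAt(glm::vec3(0, 2, 5),    // camera position
                                    glm::vec3(0, 0, 0),    // point it looks at
                                    glm::vec3(0, 1, 0));   // up direction
      glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(1, 0, 0));
      glm::mat4 mvp   = projection * view * model;

      // Hand the result to the shader instead of relying on glMatrixMode and friends.
      GLint loc = glGetUniformLocation(program, "mvp");
      glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp));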

    If anyone is interested, maybe I could put parts of my wrapper code up somewhere. It was written to be high performance and take advantage of all the power available in new cards. I’ve gotten close to the max triangle throughput spec of my GPU, so you can’t say that OpenGL is always slower than DirectX.

    1. Richard says:

      Take a look at the latest Qt – well, 5.4 which is ‘real soon now’.

      It handles most of the boilerplate while letting you get down and dirty when you need to.

      Unfortunately the examples remain few and far between – hopefully that’ll change, but who knows.
