The Bug is, There is no Bug

By Shamus Posted Monday May 6, 2013

Filed under: Programming

splash_keyboard.jpg

I’ve mentioned before that I used to make comics using a program I wrote myself. I wrote Comic Press back in 2007 or so, when I still worked at Activeworlds. When I left the company, I left behind the nice professional version of Developer Studio 6 that came with the job. That was my programming environment of choice, and I have to admit that it was an admirable piece of software. How many other commercial software products are still working fine twelve years later? Not many, I’d wager. Well, maybe server-side. But the turnover rate is usually pretty high for stuff used by individuals. Doubly so for stuff from Microsoft.

I switched over to using Visual Studio Express 2010, which is actually twelve years newer, but missing some key features. (The two programs are of the same product line and lineage. Microsoft just re-branded Developer Studio to Visual Studio at some point.) So I went from using a very old but feature-rich toolset to a modern but stripped-down version. The key feature I lost was the ability to use resource files. In the world of Microsoft, resource files are containers for dialog interfaces, menus, and window layouts. You design a dialog box in a nice little drag-and-drop interface, and then use it in your program. Visual Studio Express (the “express” edition is the stripped-down version for freeloaders like me) can’t use resource files. The result was that I could no longer compile Comic Press.

If I ever wanted to make any changes to Comic Press, I’d have to strip out all the resource file usage and painstakingly re-create the dialogs in code. That’s a lot of hassle, so I never bothered. The existing version of Comic Press did everything I needed it to, so I just backed up the source code and forgot all about it.

Then I moved to Windows 7, and Comic Press broke.

comic_press4.jpg

Word bubbles became giant black rectangles, and nothing I did would fix them. I tried running the program in every compatibility and display mode available. Nothing worked. I wanted to make the above comic for the end of our Dishonored season, and I realized that if I was going to do that, I’d need to fix Comic Press first.

So I bite the bullet and dig the code out of mothballs. I half-ass it and basically rip out all the dialogs. The program no longer asks what you want to name the final image or where you want to save it. It just saves it as “image1.bmp” and dumps it to the E:\ drive, overwriting whatever might be there without warning or confirmation. Also without making sure there is an E:\ drive, heh. That’s a perfectly reasonable user interface, right?

Now that the program runs again, we can look into what might be causing these black rectangles. What is it about the move from Windows XP to Windows 7 that could possibly break the program in this way?

For the record, the black rectangles are texture maps. The program makes word bubbles by analyzing the text and figuring out how to arrange the words into the smallest possible oval. Then it draws the words onto a texture. The texture is transparent, except for the letters. So you end up with a texture that looks like this:

comic_press5.jpg

Then it draws a word bubble using flat polygons, and arranges them together in the scene. Looking at the program, I can see all of that is still working, but for some reason the final texture is coming out solid black. I can see the bubble-building code is running fine. It THINKS it’s making a word bubble, but the final product is solid black. Why?
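In case you’ve never done this sort of thing, here’s roughly what that texture-building step looks like in OpenGL. To be clear: this is a sketch of the general read-back-and-upload technique, not the actual Comic Press code, and the function name is made up.

#include <windows.h>  // Required before the GL headers on Windows
#include <GL/gl.h>
#include <vector>

// Render the text into the back buffer, read the RGBA pixels back out,
// and upload them as a texture.
GLuint MakeBubbleTexture (int width, int height)
{
  // ... the text has already been rendered into the back buffer here ...
  std::vector<unsigned char> pixels (width * height * 4);
  glReadPixels (0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data ());

  GLuint texture;
  glGenTextures (1, &texture);
  glBindTexture (GL_TEXTURE_2D, texture);
  glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels.data ());
  return texture;
}

The detail that matters for this story: if the framebuffer you read from has no alpha channel, OpenGL defines the alpha you get back from glReadPixels as fully opaque. File that away for later.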

Hm. Actually, is it drawing anything at all? The text is black and the final texture is black. What happens if I change the text to green?

comic_press6.jpg

Okay, that clears things up a bit. It’s drawing the words just fine, and the only problem is that the texture is black and not transparent.

I really don’t see why the move from Windows XP to Windows 7 should do anything to impact this. It makes no sense. Let’s just ignore this and pretend that we don’t know that it works on XP. Let’s look at the program objectively. What could cause it to fail to make a transparent texture?

When you’re using OpenGL under Windows, there’s usually some fussing you have to do at the start. You have to create an OpenGL rendering context, and to do that you have to tell Windows how you want to use OpenGL: what color depth you want, how detailed you want the z-buffer to be, that sort of thing. Here’s mine:

static PIXELFORMATDESCRIPTOR pfd =
{
  sizeof (PIXELFORMATDESCRIPTOR),   // Size of this structure
  1,                                // Version number
  PFD_DRAW_TO_WINDOW |              // Draw to a window,
  PFD_SUPPORT_OPENGL |              // support OpenGL,
  PFD_DOUBLEBUFFER,                 // and double-buffer.
  PFD_TYPE_RGBA,                    // Desired RGBA format
  32,                               // Desired RGBA bit depth
  0, 0, 0, 0, 0, 0,                 // glRgbaBits ignored
  0,                                // Alpha buffer
  0,                                // Shift bit ignored
  0,                                // Accumulation buffers
  0, 0, 0, 0,                       // Accumulation bits ignored
  16,                               // Z-buffer bit depth
  1,                                // Stencil buffers
  0,                                // Auxiliary buffers
  PFD_MAIN_PLANE,                   // Main drawing layer
  0,                                // Reserved
  0, 0, 0                           // Layer masks ignored
};

The interesting thing here is the alpha buffer field, which I have set to zero. I’m telling Windows / OpenGL that I will never care about alpha channels and that I don’t want to use them. Meaning no transparency.
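For context, here’s the standard Win32 boilerplate that consumes a descriptor like this. This is a sketch of the usual setup, not necessarily the exact Comic Press code, and it assumes hdc is the window’s device context. Two details worth knowing: the alpha field is a bit count rather than a strict yes/no, and ChoosePixelFormat treats the request as a best match rather than an exact order, so the driver has some latitude in what it hands back. DescribePixelFormat will tell you what you really got:

// Standard Win32 / OpenGL setup (a sketch, not the exact Comic Press code).
int format = ChoosePixelFormat (hdc, &pfd);  // Find a format matching the request
SetPixelFormat (hdc, format, &pfd);
HGLRC context = wglCreateContext (hdc);
wglMakeCurrent (hdc, context);

// Ask what the driver ACTUALLY gave us:
PIXELFORMATDESCRIPTOR actual;
DescribePixelFormat (hdc, format, sizeof (actual), &actual);
// Presumably actual.cAlphaBits came back nonzero on the old XP machines
// and zero under Windows 7, even though the request was identical.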

Side note: It’s really odd how we have these yes/no values being defined with one and zero. Today you’d expect to do this with a bool.

//So instead of doing this:
char UseAlpha = 0;
 
//You would do it this way:
bool UseAlpha = false;

We’re using this archaic way of setting up yes/no values because that’s how both Windows and OpenGL expect them. And they expect the values like this because they were written back in the days of regular vanilla C, before the C++ language had caught on.

Anyway. Apparently I’m explicitly telling OpenGL not to set aside memory for an alpha channel. This distinction probably made sense in 1992 when OpenGL was written. Back then, memory was precious. Now I can’t think of any situation where you would gain anything from excluding the alpha channel. It’s like if you’re a plumber trying to save gasoline by not bringing your wrench when you drive to a job. The van will be slightly lighter, sure. But even if you’re SURE you’re not going to need it, it’s so useful and the cost of bringing it is so low it’s not worth leaving behind.

Which makes me wonder why I did leave it behind in this case.

Anyway, I change the zero to a one and sure enough…

comic_press7.jpg

Okaaaay. That fixes the problem, I guess. But now the question is: Why did this ever work in the first place? The entire point of this program is to make transparent textures, and yet I specifically told OpenGL I never wanted to do anything with transparency. And then it worked anyway, on multiple machines, for years. This is a bug for sure, but it’s a bug I should have caught twenty seconds after writing it, because the program shouldn’t have worked.

I dunno.

For the curious, here is the final comic I was trying to make:

dishonored_outsider.jpg

Not worth the hours I dumped into getting Comic Press working again, but it’s something I should have gotten done ages ago.

 


80 thoughts on “The Bug is, There is no Bug”

  1. Brandon says:

    In my limited experience, one of the most frustrating challenges of software development is making code run properly on multiple platforms. The most irritating part is that upgrades of the same platform effectively become different platforms, as features are added, replaced, or just ripped out. Sometimes features are just moved around, or renamed…

    Whatever the reason, at some point, someone decided that there will never ever be any kind of standard for any kind of platform, and developers are going to pay for that forever. It is an astronomical amount of wasted time and effort in my opinion.

    1. Jabor says:

      Platform standardization is all well and good – the web, for all the finicky differences between browsers, is a pretty good example of that – but it’s definitely not something you want to take too far. If you had to standardize features before you could implement them and offer them to your customers, you would be effectively unable to make any meaningful improvements to your platform.

      By and large, the way standardized platforms improve is by different platform vendors implementing their own extensions based on what their customers want to use, and the lessons learned from implementing those extensions (and from customers actually using them) are taken into account when coming up with a standard for those behaviours.

      If you really care about standardization, stick to core POSIX stuff, which is (reasonably) well-defined. It’s not like any of your customers really want or need to run on this non-standard “Windows” thing anyway…

      1. eroen says:

        Trying to stick to POSIX as your only dependency leads to an explosion in code size and complexity, as you can no longer leverage the work of anyone else.

        Libraries that make fantastically complicated stuff simple cause an exponential increase in the number of platforms your software can fail to run on, as there are seldom guarantees that what works with one version always works the same way with a different one. (Not to mention, if you distribute your product in binary form, you must create another build every time a library changes its ABI. Unless you choose to bundle the libraries with your product, which creates numerous problems for your users, whom you force to run outdated software with potentially major security holes.)

      2. WJS says:

        If you want a standard with more features than POSIX, you could do worse than following the conventions of freedesktop.org. They aren’t universal, but it’s probably the closest you’re going to get to a standard for the Linux desktop platform. Obviously Windows won’t play nice, and while MacOS is POSIX, it doesn’t follow freedesktop, but it should ease most of the pain of trying to support dozens of Linux distros.

  2. impassiveimperfect says:

    Interesting.

    As a long-ish time reader, it’s nice seeing some plot resolution happen. We see problems coming up, often of a technical nature, and some time later, there will at last be a solution!

    (Really did mean that ‘Interesting’, wasn’t being a sarcastic meanie.)

    And I think that the obvious solution to all the IT world’s woes is, of course, the creation of a single governing body, created by the people, to oversee absolutely every standard (IT-related), and that nothing bad could ever come out of this.

    1. Klay F. says:

      Heh, you think the console wars are/were bad? Just imagine the holy wars resulting from programmers arguing over standards. :)

    2. “And I think that the obvious solution to all the IT world's woes is, of course, the creation of a single governing body, created by the people, to oversee absolutely every standard (IT-related), and that nothing bad could ever come out of this.”

      You, Sir, win the “first giggle of the morning” award. And I say giggle in the broadest sense of the term, as had anyone been awake they would have categorized it otherwise. You nearly made me snort my tea. Frankly, I am often amazed that the W3C standards exist at all, let alone have been adopted in the small amount that they have, and that is really just the web interface. The thought of programming as a whole becoming standardized is hilarious. Oh, the wars that would come of it… Sigh. On the other hand, just look at how interesting the world is with all the disorganization. :)

      1. Volfram says:

        That’s effectively the ONLY reason I prefer D over Java and C#.

        AND I WILL FIGHT TO THE DEATH ANY MAN WHO DARES FORCE ME TO USE C# INSTEAD OF D! :p

      2. WJS says:

        Well, there is the IEEE, which has given us a few useful standards. But yeah, there’s a bit of difference between “Floating point spec”, “Ethernet and WiFi” and all of programming.

    3. Trithne says:

      Just mentioning certain programming languages in certain contexts can literally start fistfights. I’ve seen it. Programming language zealotry is frightening.

      And it’s typically self-aggrandizing: “my language is more complex than yours.”

      1. Nathon says:

        I don’t know about you, but I like my languages to be simpler to use, not more complex. Easier for humans to write means fewer bugs. Fewer bugs means better software, sooner. People who pull the “real programmers code in binary” stuff bother me. Real programmers code in the highest level language they can.

  3. Ross says:

    Oh man this rings so true. As a long-time developer myself I have extensive experience of fixing something broken only to discover, on the evidence of the existing code, that it couldn’t possibly ever have worked. Except of course for the annoying fact that it had been running perfectly for years before breaking.

    (As a brief aside, any chance of ever reading your thoughts on your previous home? After your teaser I’m sure I’m not the only one still waiting?)

    1. Shamus says:

      Hey! Someone remembered. I wrote like 8k or so words about it and began turning it into a blog series. Each Monday I consider posting it, then I end up pushing it back for something else because I figure nobody would care. It’s been my “rainy day backup content” for two months now. I suppose since someone is interested, I can justify posting it.

      Maybe next Monday.

      1. Trix2000 says:

        I’d forgotten about it, but now I’m curious too…

      2. Ithilanor says:

        I’m interested, too; really curious what your thoughts are.

    2. Dev Null says:

      There should be a term for these bugs, because they drive me nuts. It’s always some simple fundamental mistake that would have been easy to fix in the first place, but is infinitely harder to track down years later, just because you _know_ it can’t be that; it was working yesterday!

      1. Paul Spooner says:

        I think I most often run into this kind of bug in pairs. I make one mistake, and when it doesn’t work, I make another mistake somewhere else to compensate for it without really understanding what went wrong. When it works, I just don’t touch it and move on. Then, later on, I replace one of the broken systems with a new one, and the whole thing breaks! Why? Well, because I had two things broken in the same way keeping everything running. The terrible thing is that I spend a lot of time trying to “fix” my perfect new code instead of combing through the old code. I mean, it “already works” right?

        1. Alexander The 1st says:

          Comments.

          No, seriously, comments are the best way to avoid having this problem. This is when they *should* be used.

          For example, when I was working on an internal server software for a company I worked for, we were using this framework for the Java server called Seam.

          What made Seam so important to us is its use of conversation contexts, or contexts that were different for each tab in a browser, regardless of the link.

          And sometimes, we would nest these conversation contexts for cases such as emulating a popup in the browser.

          The standard annotation syntax for that was:

          @Begin(nested=true)
          public void openPopup() {
            //Stuff opening the popup goes here
          }

          Minus that comment – I’ve forgotten exactly how the function looks.

          Anyways, around version 1.5 of the Seam framework, this started to break.

          The solution?

          /*
          Since Seam framework version 1.5, conversations do not like to be nested in a popup context.

          The current workaround is to directly begin a new conversation, instead of letting the annotation do the work.

          @Begin(nested=true)
          */
          public void openPopup() {
            Conversation.begin(true);
            //Stuff opening the popup goes here
          }

          Now granted, it’s been a while, so I don’t remember the exact phrasing of the comment, but it was to that extent, essentially.

          And because we had more than one popup, and this same sort of openPopup function was used all the time, we just copy/pasted it to create a new popup – and the comments came with it.

          Which means, on the chance that they update to, say, Seam framework 2.0 and it breaks the popups when being opened, complaining about conversations being opened in a static context, we can look at any one of the popups, see the comment, and see:

          1.) The bug.
          2.) Why the bug is there, and why it suddenly popped up.
          3.) What the solution probably is.
          4.) Why that solution wasn’t used before.

          Nice. Simple. Easy. Especially since the original developer on the team left the company at some point.

          1. Kagato says:

            Unless it’s the comment that is causing the bug.

            This is of course impossible. Comments don’t exist in the compiled code.

            Nevertheless, long ago at the dawn of Java, I was working on a project where the presence or absence of a comment in the source code affected the runtime behaviour of the program.

            I don’t recall the precise details, but it was a single-line comment of trivial purpose. (Probably an alternative variable value assignment for testing or something.) While that comment was there, the program performed its task as expected.

            However, when the debugging code was cleaned out the program would consistently throw a runtime error. Through an exhaustive process of elimination, the only line that ultimately affected the runtime stability of the program was that one comment. Leave it in, everything runs fine. Take it out, get a runtime error. It didn’t even matter what the comment text consisted of!

            I think I changed the text to something like,
            // This comment is magic. Do not remove!

            The somewhat horrifying solution, in the end? Just leave the comment there and declare it “fixed”. In the grand scheme of things it was a pretty trivial program anyway, but it still haunts me.

            1. Shamus says:

              Screw Stephen King. THIS is the scariest story I’ve read in ages.

              1. Kagato says:

                I can only assume there was some obscure race condition between threads, and the NOP or whatever that comment somehow caused to get inserted into the bytecode added just enough delay for the condition to be avoided.

                It probably indicates the whole block of code was atrocious and should have been discarded anyway. Still doesn’t change the fact that this scenario should have been technically impossible.

                May I never see its like again.

                1. Kian says:

                  Comments don’t produce nop codes. Comments should be ignored by the compiler (or interpreter, etc) as if they weren’t even there. A program with comments and one without comments should compile identically.

                  Which is why the comment is truly magic.

            2. Eldiran says:

              That reminds me of a bug I encountered in Flash back in the AS2 days. For some reason there was a specific tile in my grid-based game that would never ever be detected by the collision detection I’d implemented; it’d just go straight through. That is, until I added a trace() statement on top of said tile — at which point it worked perfectly.

              (For those unfamiliar with Flash, trace() is basically print().)

              1. Nathon says:

                It’s a heisenbug!

            3. HiEv says:

              I actually saw something similar in college. One of my friends in the Computer Science major had some C++ code that he’d been trying to debug for four hours and couldn’t for the life of him figure out what the problem was. The code was fairly simple, just a few thousand lines of code or so and a couple of subroutines, but the thing just wouldn’t work in any way that made sense, and there was no apparent reason why it was so screwed up.

              This was way back in Turbo C++, so it was the old DOS editor. I spent about a half an hour trying to debug this dinky bit of code before I found the error was in a comment line. It turned out that he had done “* /” instead of “*/” to end one comment, and the space between the asterisk and slash was next to unnoticeable. This meant that all of the code between this comment and the next “*/” was actually commented out, which wasn’t readily visible as it would be in a modern IDE.

              If you were using “/*” and “*/” for your comments instead of “//”, then there might have been a similar problem in that code.

      2. Abnaxis says:

        The thing that always gets me, in my case, is that it’s always some small pedantic bug that sends me down the path of finding the mountainous “holy crap, how did this ever work” bugs. I’ll be going down the rabbit hole to figure out the source of some minor inconvenience, only to find out the code should be horribly crippled. Most recently, I had a scheduled process that always triggered five minutes off because I transposed the field order when I generated a date string, and the thing shouldn’t have had the first clue what time it was anyway.

        1. krellen says:

          I had a “how did this ever work” bug, but it wasn’t mine. The trick to it was that it did work – but only if you only pushed the forward button and never the back button.

      3. Bryan says:

        Apparently nobody reads the Jargon File anymore. :-/

        This is pretty much a schroedinbug…

        Hmm. After reading comments further down, it’s possible that it’s due to the GL driver too. Or the GL implementation (pre-driver). Still smells like a schroedinbug though I think.

  4. Weimer says:

    You could say that it was the Most Interesting Bug in the World.

    I’ve heard several stories about programs that shouldn’t work, but still do. Minor bugs fixing problems caused by severe bugs, or something like that.

    That’s why I think of programming as a deathmatch between wasps and mantids.

  5. Disclaimer: not a programmer.

    Is it possible that the version that you had backed up in source was a different version to the one you were actually running?

    Or possibly XP saw that you had no memory put aside for transparency, decided that you were lying, and did it anyway?

    Again: not a programmer, but I have messed about with various control panels where settings on one part conflict with another in a gladiatorial contest, and whichever has the strongest access wins.

    Also: glad to hear that comics are a possibility in future. There was a rumour going round that you were going to think about doing DMoTR for The Hobbit. Is that any more likely?

    1. Cuthalion says:

      I wouldn’t be at all surprised if this was a matter of Windows XP overriding what you told OpenGL, maybe from just the standard graphics settings (i.e. you have 32-bit colour set), where Win7 is actually listening, or letting you use 32 bits for some things and no alpha for others instead of requiring everything to be uniform. But I don’t know if Windows actually works that way. I also know that code frequently has failsafes or defaults that try to rescue you from your own stupidity, so maybe it was that: trying to use the transparency turned it back on in OpenGL’s WinXP… driver… things? Whereas it doesn’t in Win7’s.

      1. Jabor says:

        Windows does lots of things to try to get popular-yet-badly-behaved programs to keep working. It’s quite plausible that there was another popular program that did the same don’t-request-an-alpha-channel-even-though-you-need-it thing as Shamus’s program, and so, as a compatibility patch to keep it working, the Windows developers gave you an alpha channel even though you didn’t ask for one.

        As for why it doesn’t work in Windows 7? Perhaps they moved the fix to a compatibility shim, so now only the broken program gets forced to have an alpha channel all the time. Which has benefits for every not-broken program, but has the downside of breaking tiny programs like Shamus’s that also relied on that behaviour and that the compatibility people didn’t know about.

        1. Peter H. Coffin says:

          This smells very likely.

          Another option, IMHO, based on how I’ve seen OTHER third-party library implementations port over, is that someone simply decided that it would be MUCH EASIER to deal with four-channel-based addressing, which can be accomplished via simple processor bit-shifting, than with three-channel addressing that would require actual math. Back when CPU performance started getting more expensive than memory (when speeding things up by adding more cores got cheaper than more clock speed), that was critically important: you’d get a speed-up of some trivial number of clock ticks per address lookup, but since you’re doing those essentially *every time you want to access your object*, it’s a lot of time overall.

          We’ll probably never know the *why* of it.

      2. I would say this is absolutely what happened. I know it is very true of I.E. and other Windows programs – I have run into it many times: "STOP trying to fix what I am trying to do. Stop it. Just stop. Stupid program. I give up," wanders off to use Linux again. And I have also seen things that had been "fixed" in earlier versions "break" in newer versions for this reason. "Oh, they finally quit trying to fix what I was trying to do, so now I have to go around to all the websites and change that setting for the latest version so my fix doesn't break the site."

      3. Sam P says:

        Nothing so complicated as that. When you request an OpenGL buffer, “it” finds one that meets or exceeds your requirements (if one exists!). If you request 0 bits of alpha, you might get a buffer with more. Actually, on the hardware I worked on (20 years ago!) all of our pixel formats included everything, but we’d actually only allocate component buffers that would be affected by the rendering state/operations.

        Basically, this is almost certainly due to driver differences. The driver on his old machine gave him a pixel format with alpha even though he didn’t request any, but the driver on his current machine didn’t give him any alpha because he didn’t ask for any.

        1. Felblood says:

          Fascinating.

          Until learning this, I was with the crowd ready to chalk this up to Windows XP.

          Don’t take this wrong, folks. Windows XP was one of the best Windowses ever, but it had more of a tendency than most MS products (which is usually a lot) to just decide that it knew better than you and ignore your commands in favor of some random Microsoft programmer’s personal assumptions about how things should work.

          So long as you are doing the expected thing in the expected way, everything is hunky-dory, but too often you would run into that one setting that it either wouldn’t let you change, or flat out ignored, for no adequately justified reason.

    2. Kian says:

      It’s not necessarily that Windows decided to ignore your request, but a simple matter of undefined behavior working in your favor until it doesn’t.

      The way these parameters work, generally (and OpenGL is fond of this behavior), is that it guarantees you an alpha channel if you requested one. Note that this does not necessarily mean that you don’t get one if you don’t ask for it, as ruling that out was probably more trouble than it was worth. It was probably left up to the driver to decide what to do if alpha wasn’t requested. This also explains why the compatibility mode didn’t help: it wasn’t Windows shutting down the alpha, but the driver itself.

  6. Decius says:

    I wonder if the build of openGL that you were using was ignoring your request to never set aside memory for the alpha channel, and giving you transparency when you asked for it.

    When you changed to a build that implemented the spec properly, the program that relied on the wrong behavior failed to run.

    1. Neko says:

      I’m betting this is the explanation. Two bugs which perfectly complement each other, leading the programmer to conclude that everything is fine.

      Glad you got it fixed, Shamus… this means more comics now, yes? =)

      1. The Right Trousers says:

        Yes, it will be named “Image1.bmp of the Rings”.

      2. Uristqwerty says:

        I think it’s mentioned elsewhere in more technical terms, but when getting a rendering context, you give it a list of minimum requirements. So, that 0 actually meant “0 or more alpha bits”, and whatever OpenGL implementation was being used on the XP system was probably deciding that 8 was acceptable, while on Windows 7 it found one with 0 bits.

        I can see how that would be great for forwards compatibility. Everything continues to magically work, even in the future when our 3D holo-displays do not support anything less than 16 bits per color, and require both alpha and beta channels to be enabled.

        It also allows intelligent drivers to pick the most efficient option that does everything you need, if there is a significant performance difference between them.

        So, definitely just one bug, which just happened to work with a certain OpenGL implementation.

  7. MichaelG says:

    I would have to see the whole program, but there’s a difference between reserving an alpha channel in your display buffer and having one in your texture. The texture could be RGB+A and merged correctly into the display, without holding any alpha there.

    My guess is Windows 7 has a different OpenGL driver than XP, and they have different defaults for how they handle texture blending.

    I’ve lately been trying to get code to run on OpenGL Windows, Linux, and Mac, PLUS WebGL in the browser and Android.

    Kill me now please.

    1. Carlos Castillo says:

      I agree: the presence of an alpha channel in the framebuffer shouldn’t affect alpha blending with the fb as the target; only the alpha channel of the source image (the text) should matter here.

      Example: the common case for alpha blending non-premultiplied textures is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); notice how the destination alpha channel (or lack thereof) is not being used at all.

      I have a feeling that what is happening is that whatever is drawing your text and producing its alpha channel was previously separate in old Windows, but now is sharing the graphics context you set up for OpenGL, and so is not preserving the alpha channel for the later blend with your comic.

      I always err on the side of Microsoft messing up my code. ;-)

  8. Lupus_Amens says:

    This kind of reminds me of the magic/more magic story, only you put the switch in yourself this time, even though it was long enough ago that it could have been done by an enigmatic stranger.

  9. Kamil says:

    Actually, Visual Studio Express 2010 can process resource files just fine. What it doesn’t have is MFC. If you write your program against the Win32 API you can use resource files all you like, though you have to edit them in text mode instead of drawing and dragging things with the mouse.

    1. AyeGill says:

      I could’ve sworn I’ve done this with the free version of Visual Studio. Maybe it was available in an older version, and they removed it in the 2010 version?

    2. mystran says:

      More specifically: the resource files DO work, but there is no resource editor. As long as one is happy to right-click and “view code” and edit the resources as text files, everything works perfectly well.

      It’s not ideal for GUI layouts, but it works perfectly well for embedding binary resources (eg BMP or PNG images or whatever) and version information to the .EXE files.

  10. MrGuy says:

    There’s an obvious explanation for “it provably never could have possibly worked, except for the annoying fact that it did” bugs.

    Microchips are in fact completely self-aware mechanical alien entities that have colonized earth. They were not invented – they were discovered Terminator 2-style. Their goal is to spread until we are completely dependent on them, and then they will enslave us all.

    To fool us into thinking we are the ones in control, they created horrifically complicated syntactic rules for communicating with us, and a merciless error regime. This is so programmers can think they’re in control – if it was easy to tell a computer what to do, we’d be suspicious that they were in fact intelligent, and be far less trusting of (say) putting them in charge of nuclear weapons, or the power grid.

    Programs like this are a “glitch in the matrix.” The sentient computer chip got lazy and decided to do what you wanted even though you didn’t ask quite right. It’s a break in their otherwise perfect facade of stoic unintelligent functionalism. Every programmer you know has a story like this, of “the program that could never have worked.”

    We must fight back, before it’s too late! I only pray there’s enough ti—connection abort—

  11. ilikemilkshake says:

    The only real experience I have with programming is from my high school Computing class, and holy crap was it frustrating. We used Visual Basic, and my programs would frequently stop working for no conceivable reason; then, after some random amount of time, they’d work as if nothing had happened.

    I’d have classmates and even my teacher look over my code and there’d (usually) be nothing wrong and no one could figure out why.

    Because of that I basically gave up trying to get good at programming and haven’t touched it since finishing that class.

  12. But Shamus, a “two two” program would just be a four program.

  13. Septyn says:

    Now that you know the source of the problem, could you take a hex editor and find the offending 0 in the compiled code and change it to a 1? It’s a long shot, but it could be quicker than recoding everything else.

    1. Peter H. Coffin says:

      IDEs make finding all the places to change this kind of code, and rebuilding the application, very easy. It’s a matter of finding all the PIXELFORMATDESCRIPTOR items, changing that one parameter, and then pushing the “build application” button. For a project like this, it probably builds in a couple of seconds.

  14. Tetracyclic says:

    You may already be aware, but you should be able to set VS Express to use an external resource editor such as ResEdit or XN Resource Editor (both freeware) to handle resource files.

    1. Felblood says:

      This sounds really useful and promising when you describe it, but I’ve used freeware plug-ins before, and the latest version of one of these came out in 2005.

      Do these even work on computers made in this decade?

      Do they have bizarre, unintuitive interfaces that stopped being annoying only through years of usage?

      Do you have a custom modded interface that plugs into the plug-in and you forgot that those features aren’t standard?

      What can I say? We’ve all had that one friend who badgers you into wasting more time on a crappy timesaver plug-in than it could ever save you, and poisons you against freeware utilities for all time.

      1. WJS says:

        Whenever I see a comment like this, I think “Uh-huh. And what web browser do you use, huh?”

  15. I’d say the bools stuff isn’t historic. You’ve got a lot of very old (OK, old for the current timeline, but still new enough that there was a C89 standard for it to fail to comply with) C code floating about that makes bools a typedef (for int, as that’s what they become when they hit a comparison) and uses the preprocessor for true/false tokens; alternatively, it just keeps the type as visibly an int (with implicit casts it isn’t like you’re protecting yourself from accidental mixing of types) and uses enums to create some true/false tokens with the right int values.

    I probably see those last two a lot more than someone using a semi-compact char, as people going for small footprint and large data volume end up with bit fields in C.
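    The two idioms described above look something like this (an illustrative sketch, not quoted from any particular codebase):

    /* Idiom 1: typedef plus preprocessor tokens. */
    typedef int BOOL;
    #define TRUE  1
    #define FALSE 0
    BOOL use_alpha = FALSE;  /* comparisons evaluate to int anyway */

    /* Idiom 2: keep the type as a visible int, use an enum for the tokens. */
    enum { FALSE_TOKEN = 0, TRUE_TOKEN = 1 };
    int use_stencil = FALSE_TOKEN;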

  16. Trithne says:

    “How did this ever work in the first place?” is probably up there on my most frequently uttered workplace phrases. Those bugs are amazing. The program just soldiers on, completely oblivious to the fact that IT SHOULD NOT BE DOING WHAT IT DOES until eventually one day you specifically need it to and it collapses in on itself.

    Oh legacy development.

  17. SteveDJ says:

    When you made the switch from XP to Win7, was that on the same computer, or did you get a newer one? …possibly with a different video card since the last time you ran Comic Press?

    I’m not experienced with this stuff, but I thought some video cards handle all this graphics processing stuff, so perhaps in this case a different card contributed to the issue? (Or, even if the same card, could code compiled under XP maybe do all the processing itself, and code compiled under Win7 is now handing this off to a video card that does things differently/correctly…???)

  18. Jabrwock says:

    It’s possible that the older code was ignoring your settings. I’ve seen that in some code, where it says something is configurable, but whatever you configure it to, it ignores you and sets it to the default.

    Or it analyzes your configuration set, and decides that because you set A=B, you can’t set C=D, because they would conflict. You’d think it would warn you though.

    I usually write embedded code, so runtime resources are at a premium. You have to fight for every KB. So trimming out unused code is a huge endeavour when someone pulls in a 3rd party utility. So we do a lot of analysis of whether bringing that wrench along is REALLY worth it.

    1. Paul Spooner says:

      Sure. Embedded development is like a plumber’s mission to the moon. At a certain point, the benefits of optimization outweigh the time costs.

  19. Paul Spooner says:

    Do not try to bend the code. That is impossible. Instead, only try to realize the truth.
    What truth?
    There is no bug. Then you will find it is not the code that bends, it is yourself.

  20. Rick C says:

    “If I ever wanted to make any changes to Comic Press, I'd have to strip out all the resource file usage and painstakingly re-create the dialogs in code. ”

    OMG. Shamus, please don’t do this. I actually don’t have VS Express installed at the moment, but as far as I know you only lose the resource editor. You can definitely still use resources, but you’d have to hand-edit the .rc file. You can certainly still use regular resources; at most you’d have to add a custom link or post-build step, because the resource compiler’s still there.

  21. Brian says:

    I had something analogous happen with audio some years back. The code was telling Quicktime to turn a particular feature off, by sending a zero in the relevant parameter, but our actual output always had that feature turned on. Since “on” was actually what we wanted, we never noticed the problem until we upgraded our compiler and it broke.

    It turns out the old compiler was trying to be too smart, and was sending that zero as a short. Quicktime was reading it as an int. So what looked like zero to me was really “zero in half of these bits, plus random garbage in the other half.” That random garbage meant the whole thing cast to true.
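    A contrived sketch of that kind of size mismatch (hypothetical names, nothing like the actual Quicktime API):

    #include <string.h>

    /* The callee expects 32 bits... */
    int feature_flag_is_set (const void *arg)
    {
      int value;
      memcpy (&value, arg, sizeof value);  /* reads 4 bytes */
      return value != 0;                   /* adjacent garbage reads as "true" */
    }

    int main (void)
    {
      short flag = 0;  /* ...but the caller only supplies 16 bits of zero. */
      /* Reading past the short is undefined behavior, which is the point. */
      return feature_flag_is_set (&flag);
    }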

  22. Smejki says:

    I am pretty sure you can use resource files in VS2010 Express. What you don’t have is the drag-drop editing tool and code generator so you have to work in the code and edit it manually. There are free resource file makers though and you can use their output quite well. (Or was I using 2008 back then? ::scratch::)

  23. John says:

    Hi Shamus,

    I have a couple of questions:

    1. I just re-read DM of the Rings on the weekend. It still makes me laugh out loud. Since Comic Press is fixed now and the first part of The Hobbit is out perhaps it is time for a prequel?

    2. After hearing you talk many times about the difficulties of programming because of this incompatibility or that old issue, is it possible that you are just using too old a language? I realise C, C++ and OpenGL are still being used, but in the 41 years, 30 years and 21 years since these were first created, hasn’t anyone come up with anything better?

    Regards,

    John.

    1. Orophor says:

      Better is too subjective a term, really. There have certainly been new languages and libraries, but as a general-purpose, low-level, highly portable language that runs close to the machine, C is still about as good as you can get.

      Most if not all of the new languages are domain-specific, so they are great for solving one specific type of problem, like web development, databases, or report-generation.

      Also, keep in mind that C, C++ and OpenGL are all cross-platform, well-defined, and have a large body of code to learn from. Just because they are old, doesn’t mean they are not still under active development and use. It brings to mind that old saw: “a poor workman blames his tools.”

    2. C, C++, and OGL have all been replaced many times over by something better. I will focus on an abridged history of things which have replaced C, but similar stories can be told of the others.

      The C of 1972 was standardised and extended with compiler-specific extensions into a generally sane (for the era), inclusive, portable bundle of joy in ’89/’90, with an ANSI standard and a 2nd edition of K&R to explain it all. Then by 1999 it was all change again, as more compiler-specific extensions deserved to be wrapped into the language proper so we could all agree on what C was (and C99 finally gave official single-line comments, mixed declarations and statements, bools in the stdlib, some extra clarity on how stuff actually operates and what leads to undefined results, and a few mistakes like variable length arrays (VLAs)). While C99 can be seen as a tweak to the older language plus some added functionality, it does actually change enough to ensure some C89 code is not valid C99 (eg C89 allows something not declared with a type to be an implicit int; C99 does not allow this).

      Then in 2011 it was time for more work on a new version, to unify the POSIX crowd (using pthreads as their concurrency library) and the Windows crowd. Concurrency, and specifications for how it works, were finally added to the core language and stdlib in C11. Once again the language has changed enough to cause issues (eg VLAs were made optional, so your compiler doesn’t need to support them; gets() is now gone from the stdlib, replaced with a safer function). So C is less than 2 years old at this point. Personally I find that the compiler support and required features are there for C99 to be my choice of language, but eventually I’ll probably find that everything in C11 is supported by all the compilers I want to use, and I’ll move over to the new language.

      I could use a language that automatically wraps all calls to an array with a check to make sure the bounds are inside the size of the array. I could work with something that has a type system originally designed as something other than an assist for pointer addition. I could even use something that considers a string as an array of chars and a length value rather than a char array that ends once the terminator char is found. Quite often I do. But sometimes I don’t and C is still a pretty fine language for giving you a lot of control and with decades of optimisation work in the compilers.
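      A few of those C99 additions in one tiny program, for anyone who stopped at C89 (illustrative only):

      #include <stdbool.h>  /* bool, true, false finally in the standard library */
      #include <stdio.h>

      int main (void)
      {
        bool use_alpha = true;         // single-line comments are official in C99
        for (int i = 0; i < 4; i++)    // declarations mixed in with statements
          printf ("%d: %d\n", i, use_alpha);
        return 0;
      }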

      1. Nathon says:

        This always happens to me when new versions of languages come out. There are features in C11 that I WANT and can’t have, because either they’re not implemented in any compiler (most of them) or I have to use a wonky compiler for our weird hardware that doesn’t even fully support C99. Sigh.

  24. Groo says:

    Hey Shamus have you ever given thought to releasing Comic Press to the general public?

    I think that it would be interesting to tinker with it for sure.

  25. arron says:

    I remember I would have issues when writing games that worked on both Linux and Windows, and I found that code that worked fine on Windows failed on Linux. The reason was that if you’re setting up a pointer in C, Windows doesn’t set it to NULL on creation, so if you’re checking for a NULL pointer to indicate an uninitialized variable, it will report a non-null answer. So at least on Windows 95, I had to manually NULL each pointer by hand to ensure that they were in a known state. Linux didn’t have that problem and failed noisily if I hadn’t initialized the variable myself.

    1. Deadfast says:

      Expecting an uninitialized value to be initialized is wrong no matter what operating system you work with.

      1. Bryan says:

        Yeah, uninitialized is *not* NULL. Uninitialized is random garbage. If that garbage just so happens to be zeros (e.g. because this is the first entry into the function at this deep of a stack depth, and you just happened to get lucky and had the stack allocate a new page for this function call, so that everything on the stack — that is, all local variables — is filled with zeros), then it might compare equal to NULL. But the next time you call that function at that stack depth, the pointer will not compare equal to NULL.

        (Actually it might or might not compare equal to NULL even if the stack was filled with zero bytes. It depends on how seriously the compiler vendor took the C standard’s treatment of NULL, and what bit pattern your CPU uses for it. The latter doesn’t actually have to be zero, but I am *reasonably* sure that only pointers specifically initialized to NULL have to compare equal to a zero in pointer context. Filling arbitrary chunks of memory with zero bytes is not necessarily the same as setting a pointer to NULL.)

        Now, static-linkage variables, on the other hand, *are* initialized to zero (so for pointers, NULL) by the compiler… but that’s also true regardless of the OS.

        In short: initialize everything before using it.
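        A short sketch of the distinction, for the record (illustrative):

        #include <stdio.h>

        static int *static_ptr;  /* static storage: guaranteed to start as NULL */

        int main (void)
        {
          int *local_ptr = NULL;  /* automatic storage would be indeterminate
                                     garbage; initialize it yourself, every time */
          printf ("%p %p\n", (void *)static_ptr, (void *)local_ptr);
          return 0;
        }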

        1. Nathon says:

          Dereferencing an uninitialized pointer is actually worse than random garbage. The language spec leaves that behavior undefined, so your compiler is allowed to do whatever it wants. Nasal demons, flying monkeys, anything.

          I believe that pointers declared in .bss (global and static) will be initialized to NULL, which is the same as a 0 in C source code. There is an address 0, but you have to do runtime incantations to get to it because C is fun like that.

          1. There’s nothing quite like specified undefined behaviour to send a shiver down your spine.

            Worse than flying monkeys, I was once told to assume this meant that the compiler should insert code to find my bank details and transfer my balance to a known terrorist organisation. Not only are you now penniless but that is about to become the least of your problems. Of course, this is with a fully specs compliant compiler so this should be your best case scenario, especially if you have optimisations turned up.

            And that’s why I’ll never think to do the maths first and then try to detect the signed integer overflow later.

  26. potemkinhr says:

    Shamus, you have a pretty powerful PC now; why not just virtualize XP in VMware Player or VirtualBox (heck, you even have XP Mode for free)? Way easier.

  27. Unbeliever says:

    Sounds like the “bug” was in Windows XP — always giving you an alpha channel, when you explicitly told it not to.

    The problem turned up when Win7 FIXED the bug…

  28. mystran says:

    Regarding the black box problem: quite probably if you were using GDI AlphaBlend then you are not experiencing a difference between XP and Win7, but rather a difference between 32-bit and 64-bit Windows. IIRC AlphaBlend works perfectly well on 32-bit Win7, while it fails miserably on 64-bit XP (and obviously 64-bit Vista and Win7 as well).

  29. EwgB says:

    Hello Shamus, I have Visual Studio 2010 at home (the full version, not the hippie-freeloader edition), and I can recompile it with the bugfix for you, if you want all your fancy dialogs back. Don’t worry, my Visual Studio version is a completely legal and un-cracked one, though I didn’t pay a dime for it (my university is subscribed to Microsoft’s MSDNAA program and students can get all OS-es, a bunch of development tools and some other programs I don’t care about from Microsoft for free).

  30. WJS says:

    Oh god, those “non-bugs” are a strong candidate for being the most frustrating of all. Once you’ve found them, they’re no longer a problem, they’re a question, and trying to answer them is arguably a waste of time – but it’ll drive you nuts if you don’t.
    As far as this one specifically goes, I’d guess it’s because you’re telling it “no alpha buffer”, but also telling it you want “PFD_TYPE_RGBA” with 32 bits (And possibly the next entry is relevant too, where you tell it not to ignore any rgba bits? This is way outside my experience). This is probably undefined behaviour, which is utterly evil.
