Lingua Programmatica

By Shamus Posted Monday Mar 29, 2010

Filed under: Programming 187 comments

Typical non-programmer question: Why are there so many programming languages? Why doesn’t everyone just pick the best one and use that?

Fair enough.

The definition of the term “computer language” can be really nebulous if you encounter someone who is in the mood to engage in social griefing through pedantry. Instructions for a machine to follow? Does that include 19th century player pianos? Machine code? Flipping switches and wiring on those first-gen electric computer-type contraptions? Considering over half the “language” consists of dragging and dropping icons, does Scratch count?

Let’s just sweep that all aside and assume languages began with the idea of stuff like COBOL and FORTRAN and we don’t care to refine the definition further. Humor me.

It’s pretty much standard practice to do the “Hello World” program when you’re learning a new language. The goal is to simply print the words “Hello World!”, and that’s it. It’s basically the simplest possible program that can still do something observable and meaningful for someone new to the language.

Here is the program in assembly language:

section	.text
	global	_start

_start:
 
	mov	edx,len
	mov	ecx,msg
	mov	ebx,1
	mov	eax,4	
	int	0x80

	mov	eax,1
	int	0x80
 
section	.data
 
msg	db	'Hello, world!',0xa
len	equ	$ - msg

Here is a functionally identical program, written in standard C:

				
#include <stdio.h>
  
int main(void)
{
    printf("hello, world\n");
    return 0;
}

And in BASIC:

				
10 PRINT "Hello, world!"

The first is incomprehensible to anyone who doesn’t understand assembler. The second is tricky, but you might be able to intuit what it does. The last one is simple and obvious. So why would you ever use anything other than BASIC?

Here is how the trade-off works: On one end you have a powerful, flexible language that makes highly efficient code. It can be used to make anything and the code will always be extremely speedy and have the lowest possible memory overhead. (Setting aside the issue of individual programmer skill.) On the other end you have a language that’s easy to use and understand. Some languages are optimized for specific tasks. If you happen to be doing one of those tasks, then your work as a coder will be easier.

Let’s say you want to write a program to take a given number and perform two tasks:

1) Print the number normally in base ten. So, ten and a half would look like: 10.5
2) Print the number in a base-6 number system and use an @ symbol instead of a decimal point. So, ten and a half would look like: 14@3. I don’t know why you would want a program that does this, but I promise this isn’t really any more arbitrary or goofy than a lot of crazy stuff a boss might assign the hapless coder.

The first task is conventional and almost all languages will have a shortcut for making that happen. The second task is unconventional and thus we’re not likely to have a lot of built-in language tools for doing it.

In assembler, these two tasks will be of a similar level of difficulty. You’ll have to write your own number-printing code from scratch, but when you’re done the two bits of code will be about the same level of complexity (very complex) and the same level of efficiency. (Highly optimized. (Again, this is assuming you know what you’re doing.))

In C, the first task will be trivial, and the second will take some extra effort. Printing the base ten number will be much, much faster than printing in base 6 with @ symbols. (Although both will be so fast on modern computers you’d have trouble measuring them. Still, if you had to print out a LOT of numbers, the differences between base 10 and base 6 would become apparent.)

In BASIC, the first task would be super-trivial. One line of code. The second task would require pages of code. 99% of your programming time would be spent on the second task, and it would be much, much slower than the first task.
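
To give a rough idea of what that second task involves, here is one way it might look in C. This is just a sketch: it ignores negative numbers and rounding, and the function name and the single fractional digit are arbitrary choices.

#include <stdio.h>

/* Print x in base 6, using '@' instead of a decimal point. */
void print_base6(double x, int frac_digits)
{
    unsigned long whole = (unsigned long)x;
    double frac = x - (double)whole;
    char digits[32];
    int n = 0;

    /* Integer part: peel off base-6 digits, least significant first... */
    do {
        digits[n++] = '0' + (int)(whole % 6);
        whole /= 6;
    } while (whole > 0);

    while (n > 0)               /* ...then print them in order. */
        putchar(digits[--n]);

    putchar('@');               /* stands in for the decimal point */

    /* Fractional part: multiply by 6 and peel off digits. */
    while (frac_digits-- > 0) {
        frac *= 6.0;
        putchar('0' + (int)frac);
        frac -= (int)frac;
    }
    putchar('\n');
}

int main(void)
{
    printf("%g\n", 10.5);     /* task 1: one line */
    print_base6(10.5, 1);     /* task 2: prints 14@3 */
    return 0;
}

Notice that all of the work is in the second function; the first task really is just the one printf call.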

Assembly is referred to as a “low level” language. You’re down there interfacing with the machine on a pure, fundamental level. Every line of code is basically an instruction carried out by the processor. BASIC is a very high level language. You’re writing things in abstracted, human-friendly terms, and a single line of code might represent hundreds or even thousands of processor instructions. Generally, the higher the level of the language, the more tasks become either trivially easy, or impossible.

The C / C++ language seems to be the “sweet spot” in this particular tradeoff. Most software on your computer was written in that language. But despite its dominance, there are still a tremendous number of situations where other languages are better for specific tasks. Some examples:

  • Java is totally cross platform. Write one bit of code and it’ll run the same everywhere. Downside: It’s slower. Really slow.*
  • Visual Basic is dynamite if you need lots of small programs with simple functionality but complicated interfaces. If you need a lot of complex dialog boxes with sliders and buttons and drop-downs, it will be easier to set them up in Visual Basic than C++.
				
10 PRINT "My chosen computer language is better than yours!"
20 GOTO 10

* There. Now we can be friends again.

 



187 thoughts on “Lingua Programmatica”

  1. wtrmute says:

    In my country, we say that programming languages are like soccer teams. Everyone has their favourite, and not necessarily for any identifiable reason.

    Still, the choice of language (at least for general-purpose languages like C or Basic, and not specific like SQL) is a very personal choice for a coder. Sometimes it’s as simple as “what I’m used to”, sometimes they can spout a list of twenty items mentioning such exotics as Hindley-Milner type inference.

    For the record, my favourite languages are, in no particular order, C, Scala and Lua.

    1. Vegedus says:

      A programming metaphor that involves sports? Heresy!

    2. Ingvar M says:

      Hm, I’d have to say “Common Lisp”, “Python” and would have a hard choice picking “C” or “go” as my third choice.

    3. MrWhales says:

      (this is very late for a reply, but!..)

      sounds like a reasonable metaphor to me, what country though? interested..

      My favorites are very close to yours, actually. Mine (also in no particular order) are C(++), Lua, and Python.

      Python is really easy, and I usually think about it more than others.

  2. somebodys_kid says:

    Great post…haven’t seen assembly like that in at least five years.
    Different tools for different tasks for people with different skill levels…that’s what it basically boils down to.

  3. Eric says:

    If you don’t mind me asking, what are the differences between PERL and Python? What are they used for?

    1. Factoid says:

      PERL’s primary domain is in performing common tasks very well. Use PERL if you want to scan file directories, use regular expressions, etc… Application and human interface type stuff.

      Python has a lot of overlap with PERL, but what it’s best at is stuff that goes beyond PERL’s intended purpose…something like complex data structures, OOD, etc…

      They’re sort of like first cousins as far as languages go. There are a million programs that you could do just as well in either language, but some that will be much much easier in one or the other.

      Personally I prefer PERL, because I use those kinds of features a lot more. I will often write a program that scans a data set and performs regular expressions on it, but I rarely need to do anything with a complex data structure.

      A lot of languages are like this. They’re about 90% similar, and the other 10% is the stuff that the original designer felt was lacking in other languages.

      1. karln says:

        Python tends to be more readable; Perl can certainly be written readably, but it’s also much easier to end up with a visual disaster if you’re not careful. The tradeoff is that Python doesn’t give you as many ways to do the same thing; in Perl you have many options with their own upsides and downsides, so you can do either much better or much worse than Python, depending on your experience, attention levels etc.

        Perl is also very nice to use for people with an old-school UNIX background, as it uses many conventions from that environment.

        1. halka says:

          They say that good Perl code is indistinguishable (?) from line noise :)

          My favorites would probably be Python, bash (er.) and awk (um.); and Delphi for Windows platforms – precisely for the reason Shamus mentioned VB. Programming basic applications in Delphi is child’s play, really.

          — include standard disclaimer about english not being my first language

      2. W says:

        Aw, man. Don’t write Perl as “PERL”. It was never an acronym and all the definitive texts write it ‘Perl’ for the language and ‘perl’ for the interpreter. ‘PERL’ makes it seem like FORTRAN or COBOL, not the lovely, thriving language that it is.

        1. Kayle says:

          Perl is the Practical Extraction and Report Language. I’m fairly sure the full text was devised mostly after Wall had a name for it.

          1. silver Harloe says:

            There are many backronyms for Perl, but they’re just that: backronyms.

            1. Kayle says:

              I’ll point to the Perl 1.0 man page

              perl – Practical Extraction and Report Language

              (side note: I never used Perl 1.0, but I have a vague recollection of seeing Perl 2.0 come across Usenet).

              Though there’s also this note, at the end of the same man page:

              Perl actually stands for Pathologically Eclectic Rubbish Lister, but don’t tell anyone I said that.

        2. Neil says:

          Are you saying FORTRAN isn’t a lovely, thriving language?
          Cuz I’d probably have to agree.

    2. wtrmute says:

      PERL is a comparatively older language, evolved (I think) from a set of macros to manipulate text, so it is very good at that. Its development has always been very haphazard, with features added as time went. As a result, it’s a very heterogeneous language, and a bit tough to learn. It is, however, a very good language to write things like batch files and initialisation scripts.

      Python is comparatively newer, and tries to make its syntax be as clear as possible. It’s a language that can be used for embedded scripting (for example, in games), but it’s rather large and generally is used on its own for all sorts of small to medium-size projects.

      I unfortunately don’t have snippets of code to show you the difference, but suffice to say they have very different feels.

      1. Kayle says:

        More specifically, Perl was devised as a way to pull together the functionality of a bunch of Unix text tools that traditionally were pieced together using shell scripts: awk, grep, sed, tr (and others, but these are pretty clearly the most important and most influential). Perl was intended to help automate Unix system administration tasks.

        Aha! here is the original public release (on the Usenet group, comp.sources.unix).

        Python is a much more elegant and modern language with a robust library that supplies the functionality that’s embedded into Perl’s language core. As such, Python tends to be rather more verbose but more understandable than Perl code often tends to be.

    3. Alan De Smet says:

      From a “what are they good at” point of view, they’re close enough that they’re basically interchangeable. Both are quite good at text processing, acting as glue between other programs, and are powerful enough to do Real Work, if you’re willing to accept that they’re slower than, say, C. Both provide a rich set of primitives and vast libraries of supporting tools allowing you to get on with your actual task instead of spending time building infrastructure. Both are great for the sort of small one-off programs that programmers frequently find themselves needing; tasks like, “We need a program to convert all of our data from the old database system to the new one.”

      The difference is primarily mindset.

      “The Zen of Python” includes this key element: “There should be one -- and preferably only one -- obvious way to do it.” Python says that uniformity is best; part of the payoff is that if I need to work on another person’s Python code, it will likely look very similar to how I would have written it. I shouldn’t have to learn the unique idioms of a particular project or developer when a common set of idioms could have solved the problem.

      The Perl motto is: “There’s more than one way to do it.” Perl says that a programming language shouldn’t tell you how to do your work, that you’re a smart human being and know the best way to approach your problem. Sure, you could solve the problem given more limited tools, but the resulting solution won’t be quite as succinct, quite as clear, or quite as idiomatic to the problem. Larry Wall, the creator of Perl, once pointed out that the real world is a messy place with all sorts of unusual problems, and that Perl provides a messy language that frequently lines up well with the unusual problems programmers have to solve.

      For most programmers, one of those two mindsets will better fit how you approach problems. The other language will look stupid.

      Interestingly, this argument plays out in other sets of languages as well. Scheme versus Lisp, Java versus C++.

      1. Python is also really, really good at graphical processing. Perl, not so much.

        1. scragar says:

          I don’t see why you say that; perl has SDL and OpenGL just like python does, and both have access to a huge range of GUI toolkits like GTK or Qt.

          I’ll admit that perl wasn’t written that way, it’s got some quirks as a result, but the functionality is there, and it works perfectly.

      2. Tizzy says:

        The downside of: “There's more than one way to do it.” is that reading other people’s code can be a real challenge. Hence this remark lifted from Wikipedia:

        Some languages may be more prone to obfuscation than others. C, C++, and Perl are most often cited as easy to obfuscate.

    4. Eric says:

      Wow. I wasn’t expecting that many replies, but thanks all: that helped a lot.

  4. Zyzzyva says:

    Then there are also languages like LISP and Prolog, which are very good at doing things quite unlike the things BASIC and C are good at, and really terrible at doing other things (including, sadly, a lot of practical tasks); and then there’s the really fun stuff like Scheme and the simply-typed lambda calculus, which are terrible at pretty much everything but are much beloved by computer scientists because, while doing things with them is pretty close to impossible (or at least, pretty mind-numbing), proving things about them is amazingly easy. God help you if you want to prove something about C++.

    1. Factoid says:

      I never felt like Scheme was difficult to use…but the only thing I’d ever want to use it for would be AI programming. Maybe that’s because that’s what I was taught to use it for.

      I can’t even wrap my head around writing a genetic algorithm in C++, but in Scheme it’s no problem.

      It’s a niche language, for sure, though.

      1. Sauron says:

        Having done genetic algs in C++ let me tell you: they’re really not that bad at all.

        1. Abnaxis says:

          I second this. I took a genetic algorithms course with a bunch of mechanical engineers, where I was the only one who knew C/C++. While they wrote stuff in a higher-level language that took forever in their various complex solution-spaces, I was able to tailor my code to make it efficient, without much coding time added. All it takes is some carefully planned object manipulations.

      2. Morat20 says:

        The GA’s I used for my Master’s Thesis were originally encoded in C (written as a single-semester project), ported to C# when I went independent study (I was learning C# and coding new stuff for it in C# was a good way of doing it), and continued in C# when I ripped it apart and rewrote it.

        The final project involved complex data processing, database creation and storage, a really complex GA, automated run/load/train/test/re-start processes, and worked quite well. Sadly, 2 of my three “Big ideas” utterly failed to pan out.

        Still, got my degree, learned C#, and patted myself on the back for being rigorous with my OO-design when I found myself rewriting the low-level stuff for the third time and realized how much time I’d saved by properly black-boxing things.

    2. In college I got to learn ML (which is a LISP-type language) and Prolog. Prolog is a ridiculous language. It’s sort of fun from an AI perspective, but it seems kind of useless if you actually want to make it do something.

      ML, on the other hand, was a lot of fun. It’s a hard language to learn, but once you figure it out, it’s a pretty solid language. The trick with LISP-type languages is learning how to do everything recursively. It’s counter-intuitive at first, but once you get the hang of it, it’s pretty useful. I think it’s worth learning a LISP-type language, if only so you can better understand how recursion works.

      Of course, given that recursion tends to take up a lot of memory, LISP type languages probably aren’t the most efficient for everyday tasks.

      1. Garden Ninja says:

        Of course, given that recursion tends to take up a lot of memory, LISP type languages probably aren't the most efficient for everyday tasks.

        I don’t think that’s true in most cases. Tail Call Optimization is extremely common in functional languages (though perhaps not in every implementation). I believe it’s required by the spec of Common Lisp. Essentially, if the recursive call is the last thing in the function (i.e. a Tail Call), then it can be converted into a loop by the compiler. You certainly can write recursive programs that don’t take advantage of this, and therefore have memory issues, but it isn’t as much an issue as you might think.
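
        To see the shape of it in C terms (gcc and clang will typically do this conversion at -O2, though nothing in the C standard requires it), here is a minimal sketch; the function names are made up:

        /* Two ways to sum 1..n.  In the first, nothing happens after the
           recursive call returns (a tail call), so a compiler that performs
           tail call optimization can turn it into a loop and use constant
           stack space.  The second adds *after* the call returns, so every
           level needs its own stack frame. */
        long sum_tail(long n, long acc)
        {
            if (n == 0)
                return acc;
            return sum_tail(n - 1, acc + n);  /* tail position */
        }

        long sum_plain(long n)
        {
            if (n == 0)
                return 0;
            return n + sum_plain(n - 1);      /* not a tail call */
        }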

        1. Ingvar M says:

          Required by the Scheme spec(s), not required by the Common Lisp standard, but frequently available at the right optimization settings.

          1. Nathon says:

            surprisingly not in CLISP’s default settings

      2. Sauron says:

        I’m assuming when you say “LISP-type” you mean “functional”, since that is the only way this post really makes sense. In that case, it’s full of wrongness, as Haskell is actually one of the more efficient languages out there right now. Scarily efficient, actually.

        1. kentona says:

          I was wondering how long it was going to take to get Haskell mentioned!

      3. swimon says:

        “These are your father’s parentheses. Elegant weapons for a more… Civilized age.”

  5. krellen says:

    Some languages even exist in a weird limbo zone whose tasks are so highly specialised they’re practically useless outside that task – RPG, for instance, creates reports. It will never do anything else. But fifty years ago it was a tremendous boon to the industry because it was much simpler to make a nicely formatted report in RPG than in COBOL.

    And that’s another issue – programming has been around for seventy years. We have seventy years’ worth of programming tools lying around, and like most tools, programming languages don’t just go away because a better tool comes along. Those old tools may be better for working on older programs, and the newer tools might be lacking a key, but obscure, feature an older one had. So all these programming languages just linger around, still being somewhat useful and not quite obsolete.

    One thing to keep in mind is this: BASIC got us to the moon. Even a high-level, low-powered language can be highly useful.

    1. bbot says:

      Beg pardon? Are you saying the Apollo Guidance Computer was programmed in BASIC?

      1. krellen says:

        Systems inside the lander itself, if I remember correctly.

        I doubt the BASIC we use now would be recognisable as the version used by NASA, however.

      2. SKalsi says:

        Taken from http://en.wikipedia.org/wiki/Apollo_Guidance_Computer – “AGC software was written in AGC assembly language…” and “The AGC also had a sophisticated software interpreter, developed by MIT, that implemented a virtual machine with more complex and capable pseudo-instructions”.
        However, no mention of BASIC.

    2. Peter H. Coffin says:

      RPG has changed radically in the past 10 years or so, and is a much more general-purpose language these days. It does have a whole boatload of things that make handling formatted files really easy, including the ability to embed SQL into the program and run that on files…

  6. Ross Bearman says:

    Whilst I’ve been having to suffer Java recently, I do feel the need to point out that these days most operations have no noticeable speed difference between native code and managed code. The real speed hit with Java is on startup, but once it’s running most differences are negligible.

    1. Jock says:

      http://en.wikipedia.org/wiki/Java_performance For a fuller discussion, but it basically boils down to ‘It used to be RE-HE-HE-HEALLY slow but each new version has brought optimizations such that it’s to the point where the winner will depend on context’

      1. Moon Monster says:

        Yeah, nowadays Java is more of a server-side language in my experience (my experience being a server-side programmer, so…)

        Ironically this utterly ignores Java’s so-called portability (which is much less functional in reality once you start writing complicated programs). What it gets you is fast-enough performance, ease of coding (nice libraries, esp with network code), and free memory management. Even that is possible to defeat (believe me, there are still plenty of out of memory issues) but it’s much less likely you’ll write a program that will choke and die after a few hours of non-stop use. Which, you know, is good for servers.

        1. silver Harloe says:

          The motto of Java is supposed to be “write once, run anywhere.” But, really, it’s “write once, test everywhere.” Each Java VM is slightly different in annoying ways. Oh, how I do not miss my 5 years as a Java programmer.

  7. AGrey says:

    I learned to code with pascal back in high school

    might just be nostalgia, but it’s still my favorite language.

    I really like java, too

    1. krellen says:

      Unsurprising. The whole reason Pascal exists is to create a good language for teaching general programming structure, while being robust enough to create complex-enough programs to show students why they would want to learn programming in the first place.

      I, too, learned Pascal in high school and I, too, hold a special place in my heart for it. But I also understand why it doesn’t have the hold that C does.

  8. BlckDv says:

    Ah, the joy of being so distant from a problem that it becomes difficult to see. You didn’t even have to get into libraries, the nature of variables, definitions, and the thousand other things that usually mean you consider whatever you got trained in first / most thoroughly to be “superior”: the way the language behaves is the same as the way you “think” a solution to a coding problem, so it becomes natural to express your solution in that language, while trying to express it in another language can be very difficult.

    For these reasons I still have a soft spot for PASCAL. Despite not actually using it on a computer in over a decade, I still write my to-do/task lists in a shorthand version of Ruscal. (dating myself there)

    1. Zyzzyva says:

      Heh, you reminded me of a story on The Daily WTF: there was a bit of C code that opened with

      #define BEGIN {
      #define END }

      and went on from there.

      “A true programmer can write FORTRAN code in any language.”

  9. Factoid says:

    Full text in RSS now? When did that start happening?

    1. Henebry says:

      Yeah! we want just a teaser, or otherwise we feel like it’s a waste to click into the actual site!

      1. ClearWater says:

        Wait, so I didn’t actually have to come here? I clicked the link before even seeing the RSS contained the whole text.

        Regardless, this is one of the few sites I like to visit rather than read the posts in RSS.

    2. Teldurn says:

      I noticed that too.

  10. I think the thing people need to understand is that there isn’t really any such thing as a “best” programming language. In principle, all programming languages are equally powerful. If you can write a program in Assembly, you can write it in C or in BASIC or in Python or in LISP.

    The difference between languages is usually a matter of tradeoffs. Lower level languages are more efficient, but harder to use and much more platform dependent. Higher level languages tend to be less efficient, but they’re much easier to use and are less platform dependent.

    Then there’s the fact that learning a new programming language isn’t an easy task and most people tend to stick with what they know. As a result most programmers favor C or C-like languages (C++, C#, Java). C was the first good, high-level programming language that worked on most platforms, so everybody learned it.

    When I was in college my professors all hated C++, even though out in the working world that was the language everybody used. Even if other languages are better, when you get a job, you’re probably going to work with C++ (or these days, C#). Knowing a better programming language is worthless if no one else is using it (unless you’re a one man development team). No point in learning the awesome powers of LISP if you’re the only one who knows it.

    1. Sauron says:

      No point in learning the awesome powers of LISP if you’re the only one who knows it.

      Whoa, whoa, whoa. Hold it right there! We learn other languages for a variety of reasons, and actually using said language in practice is one of the small reasons. In my mind, the two major reasons to learn other languages are to get used to thinking about programming in a different light (for example, LISP is a great way to get comfortable with recursion!) and for helping us better understand the strengths of the language we do use.

      1. Jabor says:

        Definitely agreed with this.

        In my opinion, every serious programmer needs to learn to program in at least one assembly language and at least one functional language, even if they never use them.

        1. silver Harloe says:

          Plus, learning Lisp makes you really comfortable with operating on lists, and is a great boon in learning how to do nifty magical things in languages with native list types (Perl and PHP, for example).

  11. SolkaTruesilver says:

    What about Python? I heard a lot of things about that language..

    1. The nice thing about python is that it’s easy to learn and easy to use. It has the gentlest learning curve of any programming language I know. The result is you spend less time struggling to learn how to accomplish even the most basic tasks and more time writing programs that do cool things.

      Like flying: http://www.xkcd.com/353/

    2. Primogenitor says:

      Python is “the best” to me – easy to learn & write, vast built in library, cross-platform, and if you need speed can be extended with C.

      1. Heron says:

        I’ve been tinkering with Python on and off over the last two years, and I’ve come to one inescapable conclusion: it’s great if you’re writing a processing script for one specific task, but I wouldn’t ever try to use Python to write anything with a GUI. That’s what C# and Java are for.

        I might consider using Python to do a quick functional demo of a concept (even a GUI), but I don’t think I’d go any further than that.

        The reason is this: Python tries to pretend it’s dynamically typed, but it’s actually strongly typed — but even worse, it sometimes swallows type errors silently, producing invalid behavior. I once spent three hours debugging something that turned out to be Python choking on an int when it was expecting a string, only it never actually gave me an error message about it. As a result I have become extremely wary of anything more complicated than Project Euler when it comes to Python.

        As for speed, well, depending on what you’re doing it’s not really that slow… but if you really do need the speed difference, and you’re going to write Python extensions in C, why not do the whole program in C/C++ in the first place?

        1. Garden Ninja says:

          Python tries to pretend it's dynamically typed, but it's actually strongly typed

          I’m not sure what you mean here. Those are not incompatible concepts. (Edit: May be a terminology problem. I found this article very useful: What To Know Before Debating Type Systems. )

          Static vs. Dynamic typing has to do with when types are checked. Static languages require you to specify, ahead of time, the types used in your program. If you call an invalid function for a type, the compiler will yell at you. In a dynamic language, the error doesn’t show up until runtime.

          Weak vs. Strong typing has to do with whether types are coerced by the runtime. With weak typing, if you pass an int to a function that expects a double, the language will convert the int to a double for you. With strong typing, you get a type error, and have to cast it yourself.
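
          A tiny C-flavoured illustration of the coercion side (the function name is made up; a strongly typed language would demand an explicit conversion or raise a type error here):

          #include <stdio.h>

          double halve(double x) { return x / 2.0; }

          int main(void)
          {
              int n = 7;
              /* The int is quietly converted to a double at the call site. */
              printf("%f\n", halve(n));   /* prints 3.500000 */
              return 0;
          }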

          I don’t use Python much, so I can’t comment on the other issues you brought up.

          1. Heron says:

            Sorry, I misspoke. Python (at least when you learn it) appears to be weakly typed, but it’s actually strongly typed.

            By “pretends to be” I don’t necessarily mean from the language’s point of view, I really just meant from the programmer’s point of view (at least if you’re not already familiar with how it works).

            But maybe the real lesson to take from this is that I suck at Python.

          2. I read that article you linked. It’s pretty interesting.

            Towards the end the author talked about using static typing to prove program correctness. I’d never thought about typing that way before. Also, I was always told that proving program correctness was something that only mathematicians and academics cared about. It seems like this guy’s making the case that we should use statically typed languages to prove program correctness, rather than testing our programs until we’re pretty sure that we’ve removed most of the bugs, which is an interesting thought.

        2. Nathon says:

          The reason why we don’t program in C or (shudder) C++ instead of Python is because Python has automatic garbage collection and high level abstractions that lower level languages lack. For example, try writing the following Python code in C:

          from random import randint
          foo = [randint(1,100000) for i in xrange(10000)]
          foo.sort()
          # code tags don't seem to like indentation...
          for item in foo: print item
          del foo

          It would be huge. First you’d have to write your own linked list implementation, then your own sorting algorithm for said linked list implementation. You’d have to allocate and deallocate memory, probably including some temporary storage for your sort. If I had to write that program in C, I would bet that it would take longer to run (albeit with less memory) than the Python snippet above the first time it worked. And it would definitely not work the first time. It would take hours to write. I didn’t run that example code in an interpreter and I’m fairly confident it will print out a sorted list of 10,000 pseudorandom numbers.
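
          For comparison, a C version that leans on the standard library’s qsort (rather than a hand-rolled list) might look something like this; it’s still several times the code, and every allocation is your problem:

          #include <stdio.h>
          #include <stdlib.h>
          #include <time.h>

          /* Ascending comparator for qsort. */
          int cmp_int(const void *a, const void *b)
          {
              int x = *(const int *)a, y = *(const int *)b;
              return (x > y) - (x < y);
          }

          int main(void)
          {
              enum { COUNT = 10000 };
              int i;
              int *foo = malloc(COUNT * sizeof *foo);
              if (foo == NULL)
                  return 1;

              srand((unsigned)time(NULL));
              for (i = 0; i < COUNT; i++)
                  foo[i] = rand() % 100000 + 1;   /* pseudorandom 1..100000 */

              qsort(foo, COUNT, sizeof *foo, cmp_int);

              for (i = 0; i < COUNT; i++)
                  printf("%d\n", foo[i]);

              free(foo);
              return 0;
          }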

          I’m not saying C doesn’t have its place; I write C code for a living. I’m just saying that if you have the option and the compute time and resources, there’s no reason whatsoever to write in C these days.

          By the way, PyLint will help a lot with finding those strongly dynamically typed errors. I use it heavily on bigger Python programs and it’s found many bugs.

          Now this is getting long, but another benefit is that while the same C code will run on many architectures, that Python code will run on any major and many minor operating systems. I’m fairly sure porting the C stuff would be a nontrivial amount of work.

  12. ima420r says:

    I used to program BASIC games from books as a kid. In high school we used BASIC on the Apple IIs. My favorite though was AMOS, a graphical programming tool for the Amiga. I used to make games like Asteroids and Pacman on it. Was lots of fun! I haven’t programmed in years now, really want to get back to it and learn some more C but I don’t really have the time (I PLAY too many video games to be making them).

    1. Garden Ninja says:

      I PLAY too many video games to be making them

      I’m in the same boat, with a couple of additional caveats. My job is developing web applications, and often, when I get home, I don’t have the interest to work on one of my personal projects. Plus, I feel like if a game is going to go anywhere, I would need to write it in C++, which at this point I would have to relearn, especially the memory management stuff. So when it comes down to either learning C++, so that several months down the line, I can write some interesting code that actually does something, or playing video games, video games always wins. I do work on other personal projects sometimes, but they are all things that are fun/interesting on their own, and that let me see interesting results right away.

      1. Chris Ciupka says:

        If you don’t want to deal with memory management and C++, check out the XNA Framework which will let you write games using C#.

        The creators site also has a ton of useful samples to help you get something interesting going quickly.

        1. Garden Ninja says:

          Which brings up the other issue. I run Linux (Ubuntu, specifically) on my primary laptop. I was booting to Windows for a while, and working through a C# DirectX book (not XNA specific), until I found out that my video card is so out of date that some of the examples just didn’t work. The one I remember had to do with putting a three color gradient on a triangle. It worked like it was supposed to on my work machine, but on my laptop, the triangle was a single color.

          I have another laptop with Vista on it, and a slightly newer video card, but I just haven’t gotten around to trying it there.

        2. Nathon says:

          obligatory SDL bump. Don’t tie yourself to Windows, says the fanatic.

        3. Megabyte says:

          XNA is a good tool to learn about development. You can master concepts like drawing graphics and the update loop quickly. Its memory management isn’t foolproof. Once, I needed an array of about 500 sprites. For some reason, XNA wouldn’t initialize one that large, even though I remade my project. It didn’t work until I rebooted my system.

  13. Robert says:

    I write all my code in Superbase Data Management Language, a 16-bit Basic-like with an integrated non-SQL database engine.

    Because I like it, that’s why.

  14. HarveyNick says:

    Yeah… Java is comparable in speed to C++ in most situations these days, after the JVM has fully started up. It’s not interpreted any more, it’s just-in-time compiled. That was actually true five years ago.

    Often times a Java program winds up being faster and more efficient than something comparable in C++, as well, plus faster to code. Depending on the ability of the programmer, of course.

    I think the example you wanted for a very high level, but slow language was probably Python or Ruby. Not sure I’d ever use the word “dynamite” to describe VB, unless the connotation you were looking for was “will leave your computer a pile of smoking rubble”.

    1. Jabor says:

      Though the thing to remember is that the biggest JVM around today doesn’t JIT code until it needs to, to cut down on startup time.

      Java starts out running pretty slowly, then once it gets a feel for what code paths are often-utilized it gets much faster.

  15. Primogenitor says:

    Huh. I always think of a different trade off – time to write vs time to run.

    Some languages are easy to write (python), some are hard (assembler). Easy to write generally means slow to run. Great if you’re writing something that will only ever be run once (e.g. research), terrible if it will be run many times (e.g. web page generator).

    Factored alongside this is “time to learn” – your first program in any language will take a while (though this overhead decreases as you know more languages) but as you do more, you get quicker. Hence why some use FORTRAN; they don’t want the overhead of learning another one.

    And then there is portability – if you only need it to run on one machine vs customers need to install it on Windows / Mac / Linux / mobile / etc.

  16. Alan De Smet says:

    Why are there so many programming languages? Why doesn't everyone just pick the best one and use that?

    Why are there so many types of vehicle on the road? Subcompacts, compacts, coupes, full size, minivans, vans, pickups, trucks, semis, motorcycles, scooters. Why doesn’t everyone just pick the best one and use that?

    1. Davin Valkri says:

      Why are there so many types of aircraft? Dedicated cargo haulers, dedicated fighters, fighter-interceptors, fighter bombers, dedicated bombers, jetliners, electronics platforms, single engine personals? Why are there so many types of firearms?
      *Channeling my inner McNamara–the ****

  17. skeeto says:

    Paul Graham has a detailed essay on this topic: Beating the Averages. He says that, in general, you should be using the most powerful programming language available. And he’s right.

    I think deciding which language is the most powerful is a complicated task. The power of a language depends on the task at hand; a domain-specific language can be more powerful for its particular task than a general-purpose language that is more powerful overall. There’s also the matter of support. Some languages in wide use are crappier than other languages no longer in use, but it’s better to go with the lesser language as it has much better support and a larger community.

    As for the definition of programming language, Turing-completeness is a good rule-of-thumb, but is by no means a requirement. All Turing-complete languages are equal in computation ability.

  18. someguy says:

    Since my brain refuses to memorize anything as arbitrary as programming/script-languages’ syntax, my ..uh.. language of choice goes like this: Hello World

  19. Drakey says:

    It’s a treat sometimes to see you explain these programming concepts.

    When I was finishing High School, I had taken an advanced placement computer science course that was equivalent at the time (over ten years ago) to a first year university course on computer programming. The year I took this course was the last year that Turbo Pascal was being taught and the curriculum was moving over to C++. It was frustrating for me at the time because it was evident that the languages used were going to change often. I did not continue due to this, as I wasn’t prepared to re-learn a new language each time. I guess C++ held on, but they still seem to carry obvious similarities to what I had learned.

    It’s fun to see what things still remain the same in the languages. And here’s to those who ‘sucked it up’ and continued the learning process in this field of study. I admire it and enjoy reading about it.

    Thanks again Shamus, and to your pals that add their own knowledge on this subject. It’s fun reading for me :)

  20. Sydney says:

    Time for the Stupidest Comment of the Day: “How does the computer know what the code means?” Surely the microchip doesn’t understand the word “print”.

    Sorry.

    1. Gnagn says:

      Assuming you’re serious, and not just making a funny, see that bit of Assembly code up top? There’s a very complicated program called a compiler that translates the BASIC code (or the C++ code, or the Python code, etc.) into that code. It does the heavy lifting so you don’t have to. As Shamus mentioned, the Assembly code is the language the processor speaks.

      1. bbot says:

        Further pedantry: Assembly still has to be assembled, by an assembler, before an actual binary executable is produced.

      2. Sydney says:

        Well, same question then. Before there was assembly code, how did they program the computer such that it understood assembly code?

        1. krellen says:

          The assembler is the compiler for Assembly. Someone wrote in direct machine code a program that took Assembly commands and translated them back to machine code.

          Early programmers, the very first ones, wrote directly in machine code – binary codes, simple commands that manipulated bits in memory and produced results.

        2. silver Harloe says:

          To understand why assembly “works” (or, rather, why the 1s and 0s (the machine language) it so closely corresponds to work), you have to understand how the processor on the machine works. You don’t program a computer to know assembly (or, rather, the machine language), the computer understands it by construction.

          Like the line “mov edx,len” might really be (by the way, this binary is made up, not even close to real – for example, in reality, these would be at least whole bytes, not nibbles as I’ve shown. And, really, the first two codes might be combined in some clever way):
          0001 0001 1010

          the chip on the machine gets fed ‘0001’, which puts it in a ‘state’ where it next expects to see a reference to one of the tiny memory buffers on the chip. Then it sees ‘0001’ which (in my silly example machine language) corresponds to the ‘edx’ register. Now it’s in a state where it next expects to find a memory address. ‘1010’ is fed in, so it looks in the 10th byte of memory (this computer only has 16 bytes of memory. yipes!) and copies that memory into the edx register. All of this because the chip itself changes state with each input of bits. It isn’t programmed to have these states, it’s built to have them – you could, given a diagram of the chip, follow how the ‘0001’ toggles a couple flip flops as the voltage goes down the ‘wires’ (traces in silicon. or something more advanced than silicon these days).

        3. Chris Ciupka says:

          Essentially, assembly instructions correspond to bit strings (for instance, on a 32-bit processor, a series of 32 ones and zeroes in a row).

          This is what the CPU actually acts on. As a gross simplification, consider that the CPU understands that a string of bits is an Add command because the first 4 bits are all ones, whereas a Subtract command would be 4 zeroes, then the remaining 28 bits contain data like the two numbers to be added and the location in memory where the result should be stored.
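
          As a toy sketch of that idea in C (the opcodes and field widths here are invented, nothing like a real instruction set):

          #include <stdio.h>

          /* Pretend instruction: the top 4 bits are the opcode (all ones = add,
             all zeroes = subtract) and the low 28 bits hold two 14-bit operands.
             A real CPU does this decoding in hardware, not in software. */
          void execute(unsigned int instr)
          {
              unsigned int opcode = instr >> 28;
              unsigned int a = (instr >> 14) & 0x3FFF;
              unsigned int b = instr & 0x3FFF;

              if (opcode == 0xF)
                  printf("ADD %u + %u = %u\n", a, b, a + b);
              else if (opcode == 0x0)
                  printf("SUB %u - %u = %u\n", a, b, a - b);
              else
                  printf("unknown opcode %u\n", opcode);
          }

          int main(void)
          {
              execute(0xF0000000u | (40u << 14) | 2u);   /* "add 40 and 2"       */
              execute(0x00000000u | (40u << 14) | 2u);   /* "subtract 2 from 40" */
              return 0;
          }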

          Now, before assembly languages existed, a programmer would have to somehow manually create these strings of bits, and have the CPU execute them (I have no idea how they did this back then ;P). I assume that this is how an assembler was written.

          1. silver Harloe says:

            They literally toggled switches for the next byte, and pushed a button to process the “next clock tick”. Then someone made punch card readers to automate that…

            Yah. These days we measure millions or billions of cycles per second, back then it was seconds per cycle. Fun stuff. Glad I started after there were keyboards and disk files to store the code in :)

            1. Chris Ciupka says:

              I was going to guess punch cards. ;)

              1. Jabor says:

                “Real programmers use bare copper wires connected to a serial port” :P

            2. Bryan says:

              Oh yes, the days of the Altair 8800 with its switch interface and 7-segment LED display come back to mind. Yes, in the old days people actually worked like that. Keyboards and CRT monitors were unheard of except for the big multi-million dollar corporations. If you were a hobbyist you were lucky to have switches, and a display like a cheap four-function calculator.

          2. Erik says:

            How did you manually create the strings of bits? Well, back in the day (you kids get off my lawn!), there were coding sheets.

            They were exactly one instruction wide, marked into bits, and had columns for each field in the assembly instruction. For example, the first 5 bits may be the opcode, next the first operand type, then the details of the first operand, then the second operand type, and so on. Those were filled out by hand in binary. Repeat for every instruction in the program.

            Then they had to be entered into the machine, at first by toggle switches. Set 8 switches, press the enter button, go to the next byte, repeat. As you may guess, one of the very first programs written for most machines was some way to read the program from a paper tape, punch cards, or other storage, so you could stop flipping switches. :)

            Once you had a working system, you could program it to read text and output binary – that was an assembler. The difference between an assembler and a compiler is that the assembler ONLY converts from text to binary and back. The text version of the assembler has to have all the same fields that the coding sheets did. They were just written in something that a human could learn to read.

            1. Mertseger says:

              Cue horrible memories of coding a Z80 in machine language to produce saw-toothed waves back in 1982 for my Physics 111 lab at Cal. (Shudder.) The worst thing about keying in the hex code was that if you missed an entry, suddenly a memory reference would become a command or vice versa, and off the pointer would go to gibberish land. Debugging was non-existent, and you just had to keep trying to re-enter the code until you got the chip to do something. Needless to say, the experience scared me away from low-level coding forever.

          3. Jabor says:

            Minor nitpick: An n-bit processor doesn’t mean that each instruction is n bits – for example, Itanium was a 64-bit architecture, and contained three instructions in 128 bits.

            1. Chris Ciupka says:

              Good point, thanks for the correction!

    2. krellen says:

      That’s what the Compiler does, actually. The compiler’s job is to translate the language you understand – “print” – into the machine code that creates the effect you desire. Compiled code generally only runs on a specific subset of machines – those that understand the particular machine code the compiler created.

      You can take your “Print” program to any machine, compile it, and get the same result, but if you took the compiled code from a PC to a Mac, it wouldn’t work at all, because Mac doesn’t speak PC.

    3. Erik says:

      That’s not a stupid question at all. Asking that question, over and over at each level of explanation, is how I ended up with a degree in hardware instead of software.

      Gnagn has the basics right, so let me just add one followup and get even more pedantic. Assembler is not just one language. There are as many different types of assembler as types of processor. The main CPU in your computer speaks one version, which is different from the computer in your car engine, which is different from the one in your microwave, which is different from the one in your modem/router, which is different from… You get the idea.

      But for all of those different assembler languages, the Basic/C++/Java code is always the same, so the programmer doesn’t have to care what the assembler looks like. This was the original reason for high-level languages, before the Intel processors got so common. The differences can be hidden by the compiler, which reads the common high-level code and translates it to the specific binary assembler code for the kind of processor you actually need.

      The compiler is what allows someone to be a “programmer” and not have to get separately certified for every unique processor type in existence. Not to mention, it saves us all from having to write in assembler. :)

      1. BlckDv says:

        NO! I was promised I would never ever have to remember PAL-Asm existed again.

  21. bbot says:

    The post right before this one in the RSS reader was from Lambda the Ultimate, and I managed to mix them up. When you described Java as “slow, really slow” I thought, “Holy shit, the comments section will be insane.” Just mentioning Java on LtU is enough to spark a flame war, let alone calling it slow.

    But no, it was d20, so the comment thread was mostly sycophantry.

    (Also: heck yes full text RSS feed)

    1. Brandon says:

      Yeah, I’m not seeing sycophantry here on the topic of Java. I’m seeing polite disagreement on the issue of whether Java is ass slow, which the majority of commenters seem to have decided it is not.

  22. Sheer_Falacy says:

    Sidney: That’s why the assembly code exists – so the computer doesn’t need to know. The C gets compiled into basically that exact code by something that understands print, and the BASIC gets interpreted (or sometimes compiled) by something else that knows what print means.

    Shamus: I disagree on your Base 6 example. I think the code for printing that out would be considerably easier in BASIC than in C, because you’re basically going to be writing the same things (take the number modulo 6, put it in the next slot to the left, and recurse on the number divided by 6), but of course BASIC makes things like that much easier than C (how much space do you malloc for the string? In BASIC you don’t care).

    Actually, it’d be pretty easy to do in either if you just wanted to print it and ignored the fraction part.

    void printbase6(int num) {
        if (0 == num)
            return;
        printbase6(num / 6);
        printf("%d", num % 6);
    }

    I bet BASIC would be easier for writing the fraction part just because C makes a lot of things harder than they should be.

  23. neothoron says:

    Not a bad explanation. I have some things to add:

    You say that C/C++ seems to be the sweet spot for complexity vs simplicity.

    I believe that C/C++ don’t so much hit “the” sweet spot in complexity vs simplicity as hit “every” required sweet spot, from kernel to high-level GUI application programming. In fact, it illustrates most magnificently the drawbacks that a unique lingua programmatica has: integrating every feature for everyone results in something that is only really understandable by seasoned programmers, who will only use a small subset of functionalities in any given program.

    You could conclude that, in the end, there is no “best” programming language, because the criteria for determining best depend on the design goals, time constraints, programmers’ experience, etc.

    1. krellen says:

      C is the English of programming. Like English, it’s flexible enough to cover most concepts, and if there’s a concept it can’t cover, it will simply steal the concept from some other language and pretend it was there all along.

      Most people can grasp the basics of both languages with a bit of schooling, but it takes years being immersed in the language to truly grasp all its nuances and call yourself fluent.

  24. Echoing comments above – Java is really not slow once the JVM is up and running. C/C++ can be faster, especially on low-level stuff, but Java is in general a lot more useful with its libraries nowadays.

    @Sydney:
    The code goes through a program called a compiler, which turns the instructions into 1s and 0s that the processor can execute. The binary file is what is executed, not the text file.

  25. Kdansky says:

    I want to add that C is actually not that much faster than Java any more. Years ago, it was. By now, not so much:

    http://shootout.alioth.debian.org/u64q/which-programming-languages-are-fastest.php
    (The only thing that could be called “slow” is Python3, everything else is pretty much similar)

    On average, it is less than a factor of two. To someone in computer science, a fixed factor is nothing; we are usually interested in orders of magnitude. You could essentially just use a machine that is three times faster, and program in Java. This is often way cheaper than writing code in C, because C is more complicated and therefore more error-prone, which means that it takes that much longer to finish any given task. And since even a single programmer-month is more expensive than a dozen computers (and the computers stay with you for more than a month to boot) this more often than not makes the “easiest” language a good choice for most tasks. Instead of throwing money at a problem, we beat it to death with processing power. :)

    Additionally, writing the same code in a higher-level language (such as C# compared to C++) is not just “easier”. In C++, you have to write your own memory management system. In C#, you do not have to do that, but instead you can spend the same amount of time on optimizing your code. Assuming infinite time, the C++ code will (nearly) always end up faster. But assuming limited time (and realistically, your deadline was last week), you will often end up with optimized C# code compared to unoptimized C++ code, because the C++ guy spent all his time writing stuff the C# guy got for free. I dare you to take a single hour and implement and tune a simple application once in Java, and once in C++. Your Java version will most likely run faster, because it can use sophisticated algorithms and optimizations and be properly tested, while your C++ version will be pretty much bare-bones, if you can even finish it in this very short time frame. And it probably crashes ;)

    But most people do not choose language by listening to reason, but rather by “I’ve always written my code in [antique monster]! There’s no reason why we cannot use it for this project!”

    1. Shamus says:

      I’m 50% graphics programmer. I’m pretty sure I’m not the victim of dogma when I insist that the speed advantages of C++ are worth the cost. Sometimes those “trivial” speed differences are really, really important. And you need access to libraries / SDK’s that won’t be available to Java.

      1. Chris Ciupka says:

        You should check out the XNA Framework, Shamus.

        C# is of course slower than C/C++, but I find writing in it really satisfying, for some reason, and ironically enough I hated Java back in school…

        Edit: Btw, I don’t mean this in a “C# will change your mind” way at all, I simply mean you might be interested to check out XNA out of curiosity or to experiment, since I know from some of the programming projects you’ve written about that you like to do that sometimes.

      2. Garden Ninja says:

        (Edit: I worded this poorly, and I think it may come off as an insult. Please don’t take it that way.)

        speed advantages of C++ are worth the cost

        Then say that. It’s a trade off either way, and what constitutes “Really slow” depends on the problem space. Using something besides C for graphics is probably a bad idea, but if you are working on a web app, then Java, or something like it, is great. Heck, even Ruby is fine unless you are planning to become the next Twitter.

        1. Shamus says:

          Uh. I think it’s better for Java programmers to not be so thin-skinned than to add a bunch of qualifiers that the target audience of this post will not care about in the least. (This is for non-programmers, remember.)

          I spend a lot of time in the graphics area, so to me, Java IS really slow. :)

          (And to be fair, I didn’t hear about how much the speed gap had closed until this thread. I tried Java about 4 years ago and concluded the differences were too extreme for me to spend time getting to know the language better.)

          1. Brandon says:

            Shamus,

            I think part of this is that Java is still very much a language that is growing, developing, and becoming more and more optimized. While C and C++ constantly see the addition of new libraries, they are older and more mature languages for which speed improvements are likely harder won (in terms of compiler optimizations and whatnot). Because Java is JIT compiled code and Java compilers are comparatively young, there is, or at least was, apparently a good deal of performance yet to be wrung. At the actual language level it’s a fine, high quality language. At the compiler level it has seen much improvement.

            So while it may not be the preferred language for an area as twitchily performance-dependent as 3D graphics, it performs quite well in many other areas. I think Java has established its longevity and isn’t going anywhere. And it’s certainly not the whipping boy it was when I was learning it in 1999. If you do any work that’s not 3D graphics dependent you may wish to give it another chance. You may be pleasantly surprised.

          2. Garden Ninja says:

            I’m a C#/ASP.NET programmer, actually, but I get your point. However, if the goal is to educate non-programmers, in a broad way, then mentioning performance at all doesn’t seem all that useful, especially next to your good VB example. Maybe something like “Often used for server applications”, or something like that, would be better? Not sure.

            Regarding your parenthetical note, I haven’t used Java since college (except as an end user, and even then only sparingly). At the time, it seemed to offer some benefit over C++ in that “everything” (but not really) was OO, and it had built-in memory management. These days, I’ve seen and used more cohesive and, in general, better-designed languages. Java seems to suffer from a lack of identity. (C# does too, but it seems to have learned, to some degree, from Java’s mistakes.) My information about Java is out of date at this point, but I do read things about it occasionally, and I never see enough interesting stuff there to bother trying it out again. I wouldn’t turn down a job just because it required Java, but I’m not itching to use it (or C# for that matter) for a personal project.

          3. The reason we Java programmers get a little twitchy is that you were so dismissive of Java that new programmers would be completely put off learning it. It’s as if I described programming, used Java as my natural language of choice, and then mentioned C at the end as ‘unnecessarily low-level and non-object oriented’. That sentence may have some elements of truth to it, but it’s more than a little unfair to C and would discourage people from learning it.

            1. BritishDan says:

              Let me just say though, as a guy who professionally programs in both Java and C++, that no student should ever study Java until their last year. It’s too much like BASIC in that it makes it too easy to take shortcuts without actually understanding what you are asking the processor to do.

              http://www.joelonsoftware.com/articles/ThePerilsofJavaSchools.html

              1. Blake says:

                At my university we did a whole year of Java before touching C++. I found it to be a perfect way to go about it, because we learned OO concepts and how to structure and write programs without needing to worry about memory management, and with far more useful compiler error messages than you’d get in C, which got us up and working faster.

                The rest of the course was in C++ where we could really dig deep learning to write memory systems, optimize code, deal with preprocessor macros etc.

                I’ve been working in the games industry for nearly 2 years now and firmly believe learning languages in the order I did made it a very easy transition.

              2. Kdansky says:

                Don’t you think it is a lot easier to first learn how to use something, and then learn how it works on the inside?

        2. Octal says:

          but if you are working on a web app, then Java, or something like it, is great

          Please, nobody do this. Maybe it’s great for you, but having to deal with some huge Java-laden webpage on a lower-end machine sucks. (Especially when I have to use different browsers for different tasks because the site is differently buggy in each one. This is what I’m talking about. That system is great… for hurling invective at.)

          Unless you’re making something just for yourself, I mean. Then I guess you can do whatever you want.

          1. Garden Ninja says:

            Perhaps I didn’t phrase my point well, but I wasn’t referring to Java applets, or client apps. I meant using Java on the backend, and rendering HTML. Javascript is a different issue, but if that’s causing performance problems, then you have done something seriously wrong. When I work on a personal project, I prefer minimalist Javascript, if I use it at all.

            1. Mephane says:

              I agree. Generally, for web development I think the language of choice should be no lower level than Java or C#, i.e. managed code with a level of hard abstraction from the bare machine (no pointers or malloc).

      3. HarveyNick says:

        Assuming that you’re talking about OpenGL, then it’s all made available by the JOGL and LWJGL libraries. It’s not an ideal solution (and can be quite slow), but the Java Native Interface can give you access to just about anything. Automatically generating Java, Python and C# bindings for a C++ library is what I’m working on at work at the moment.

        And another thing…

        If you natively compile Java (such as with Excelsior JET, which my company uses for security reasons) then you get some additional speed increase as well, and completely lose any penalty for calling native code.

        One last thing…

        I’m actually starting to get interested in Objective-C, which seems to have a lot of the benefits of C++ and Java and few of the downfalls. It’s fully native, object oriented, can directly call C and C++ code, and has optional garbage collection.

        1. Miral says:

          Why would you need to work on your own language bindings when there is SWIG?

          1. HarveyNick says:

            I’m using SWIG, but there is a non-trivial amount of work to be done to actually implement the input to SWIG. Especially when your build system is CMake…

    2. silver Harloe says:

      A fair analysis if (and only if) you actually need the features C# provides that C++ doesn’t (such as memory management). I wrote a compiler (in college) in C++, and didn’t malloc a thing – I had a few fixed-size buffers to contain the next symbol and wrote my output directly to disk. The memory management in C# wouldn’t have bought me anything, so I had the same amount of time to optimize as the C# programmer would have had. :)
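
      Something along these lines, roughly (a sketch for illustration only, not the actual compiler; scan and MAX_SYMBOL are made up):

      #include <cstdio>

      const int MAX_SYMBOL = 256;

      // Read one whitespace-delimited token at a time into a fixed buffer
      // and write the output straight to disk: no malloc or new anywhere.
      void scan(FILE* input, FILE* output) {
          char symbol[MAX_SYMBOL];
          while (fscanf(input, "%255s", symbol) == 1) {
              fprintf(output, "token: %s\n", symbol);
          }
      }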

    3. Daimbert says:

      What, what?!?

      For my job, I’ve had to program in Java, C, and C++ (mostly the latter) so …

      “This is often way cheaper than writing code in C, because C is more complicated and therefore more error-prone, which means that it takes that much longer to finish any given task.”

      Um, syntax-wise, I don’t see all that much difference, except for pure C not having objects. Objects — and everything being such — are often more complicated than just having a function to call, and Java has its own peculiarities (passing objects only by reference, for example) that can complicate certain things. I don’t see how C, therefore, is inherently more error-prone or more complicated than Java is (we’ll get to memory management later).

      “Instead of throwing money at a problem, we beat it to death with processing power. :) ”

      This only works if you can convince your USERS to buy new machines. Sometimes, that isn’t possible. And with really, really large software suites, if all of them require that you might have a real problem in that your inefficiencies add up process-by-process to an utterly ridiculous level.

      “In C++, you have to write your own memory management system. In C#, you do not have to do that, but instead you can spend the same amount of time on optimizing your code. ”

      Huh?!? Using “new” and “delete” doesn’t work for you?

      SOMETIMES, a new memory management system is required. In some cases, that might not have been required with something like Java. But in a lot of cases, that new memory management system is written BECAUSE the default system simply doesn’t work. Thus, C++ would win in those cases because it LETS you re-write that memory management system to do what you want, and lets you take the consequences.

      Otherwise “x = new ClassA()” and “delete x” work fine. There are issues if you forget to delete the object you created, but then again, at least it definitely gets deleted the moment you decide you don’t want it anymore.
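
      To make that concrete, a minimal sketch of the pattern (reusing the hypothetical ClassA; demo is made up for the example):

      class ClassA {
      public:
          int value;
      };

      void demo() {
          ClassA* x = new ClassA();   // allocated on the heap
          x->value = 42;
          delete x;                   // skip this line and the memory leaks
          x = 0;                      // optional (or NULL), but avoids a dangling pointer
      }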

      “Your Java version will most likely run faster, because it can use sophisticated algorithms and optimizations and be properly tested, while your C++ version will be pretty much bare-bones, if you can even finish it in this very short time frame. And it probably crashes ;)”

      And you aren’t using libraries in your C++ version to get pretty much what Java gives you because … why, exactly? Using libraries lets you use those same algorithms and optimizations, with the same amount of testing. I have no idea what C++ code you have that doesn’t allow for this. Well, Java MIGHT come with more standard jars, but that’s about it as far as advantage goes.

      I’m not saying that C++ is the best language to use. My view on it is this: Java does everything for you, C++ gives you control. In many cases, for very basic functionality, standard libs and jars give you what you need for both. Java might go a little further in that, but because a lot of those are so tightly intertwined it’s often VERY hard to change the behaviour if it doesn’t suit you. C++’s advantage is that it is, in general, pretty easy to do that if you need to. Now, that also sometimes makes it easier to screw up, since C++ won’t tell you that you are screwing up until you run it. I prefer more control, so I prefer C++. Python’s also cool for quick work where I don’t need graphics. But I can see why people like Java; it’s the same people that like Microsoft Word [grin].

      I will say that the standardization effort of C++ and the invention of STL really did help the language.

      1. Garden Ninja says:

        I only used C++ in high school (I keep meaning to relearn it), so take this with a grain of salt, but I consistently hear several complaints about C++.

        1. Manual memory management. The problem isn’t with new and delete. It’s with having to always call them yourself. Yes, it’s deterministic, which is good, but forgetting to delete your variable when you’re done leads to a memory leak. A garbage collector is non-deterministic, but it means that memory management is one less thing to worry about. How big of a deal this is probably depends on the project.

        2. The surface area of C++ is so large that everyone ends up using a subset of the language features, and it is often a different subset. This is probably true for other languages as well, but I get the impression that it is really bad for C++.

        3. The syntax itself is needlessly complicated, so e.g. it’s hard to write parsers for it.

        Like I said, I don’t use C++, so I can’t judge whether these issues are accurate, but I do get the impression that a lot of the complaints amount to whining that a hammer makes a really bad screwdriver.

        1. Jabor says:

          Nondeterminism isn’t what makes a garbage collector :)

          There are C++ libraries that somewhat relieve you of having to do manual garbage collection – Boost’s smart pointers, for example.

          The problem is that reference counting doesn’t catch cyclic dependencies (so there is still stuff you need to watch out for and manually free), and that a lot of the smart pointer syntax is incredibly arcane and it’s easy to mistakenly add an extra addref or release somewhere.
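
          A minimal sketch of the cycle problem (using Boost’s shared_ptr; Node and demo here are made up for illustration, not anyone’s real code):

          #include <boost/shared_ptr.hpp>

          struct Node {
              boost::shared_ptr<Node> next;  // strong reference; a boost::weak_ptr here would break the cycle
          };

          void demo() {
              boost::shared_ptr<Node> a(new Node);
              boost::shared_ptr<Node> b(new Node);
              a->next = b;
              b->next = a;  // a and b now point at each other
          }   // both reference counts only drop to 1 here, so neither Node is ever freed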

        2. Daimbert says:

          1) Yep, that’s the problem I mentioned: having to remember to delete it (and preferably nullify it) or else it’s a memory leak. Mostly my reply was aimed at needing to write your own memory management system, which I took as something far stronger than “remember to delete after your news”.

          2) Same problem happens with Java for certain. A lot of people on the project before me used GridBagLayout for the GUI work. I had a hard time getting that to work out, since it resizes its grid according to what I ask it to stick in. After discovering that other people struggled with it, I used SpringLayout. The more variety in your libs you have, the more likely it is that everyone will pick their favourite and use it. And that’s not even including things like window builders …

          3) Comparing it to Java or even Python, the syntax is remarkably similar. I knew pretty much what everything did before learning them because their syntax matched C’s and C++’s so well. Other languages might be better.

          1. Kdansky says:

            I was really not going for syntax, which is really the same in nearly all OO-languages. C++ is much more difficult to get right because of things such as these:

            – No garbage collection. Seriously, freeing all your memory perfectly and correctly is a ton of work when projects get big and does not improve performance by much, compared to a GC.
            – Segmentation faults. In Java, you do not have to bother with arrays; you can just use templated Lists (though I hear the C++ people have a library for that too; see the sketch after this list), which will only go into pointer-nirvana when you make grave mistakes and which offer very practical accessors such as for-each loops. And even if stuff goes wrong, it is easy to debug or handle.
            – Strict typing and no pointer arithmetic. It is very easy to mess up a construct along the lines of *pointer& in C. Since you are not allowed to do such things in Java, you cannot make those mistakes, which means you do not have to debug such problems.
            – It is incredibly easy to write incredibly ugly code which nobody else can understand, because there are twenty billion ways to do every simple thing and another twenty billion shorthands which are hard to read. Sure, once in a blue moon you actually need that functionality, but at the same time, it would be a lot easier if your everyday code was written and understood twice as fast. And if you write twice as fast, you get to do ugly hacks twice as often, and we all love to do those, right? :D
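
            On the array point, a rough sketch of the library the C++ people have for that (std::vector from the STL; demo is just an illustrative function, not code from anyone here):

            #include <iostream>
            #include <vector>

            void demo() {
                std::vector<int> numbers;   // grows as needed, no new[]/delete[] anywhere
                numbers.push_back(4);
                numbers.push_back(8);
                numbers.push_back(15);

                for (std::vector<int>::const_iterator it = numbers.begin();
                     it != numbers.end(); ++it) {
                    std::cout << *it << std::endl;
                }

                // numbers.at(99) would throw an exception instead of quietly
                // corrupting memory the way numbers[99] might.
            }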

            A programmer spends the majority of his time debugging, and that means that code that can be read and understood quickly is incredibly important. C++ is really bad at that.

            That said, I am not much fond of Java either, with its horrible Swing-layouts and sometimes rather complicated ways of doing things: Creating a new, empty XML-document takes four instructions instead of one, and basically requires you to look them up every time you have to use them. ;)

            That said, the language I am currently learning in my free time is Objective-C.

            1. Mephane says:

              “I was really not going for syntax, which is really the same in nearly all OO-languages.”

              You should try F# (yes, it’s OO, because it’s just as multi-paradigm as C++, with the “functional” paradigm added on top). You pretty much have to learn 80-90% totally new syntax. The good thing is, the syntax is still pretty easy to understand when reading.

              Example:

              type Person(_name: string) =
                let mutable _name = _name
                member this.Name
                  with get() =
                    _name
                  and set(name) =
                    _name <- name
                member this.Greet someone =
                  printfn "Hello %s, my name is %s!" someone _name

              (Yes, F# reintroduced the good old printf family of C-functions, just with real built-in strings instead of pointers)

          2. Garden Ninja says:

            For 2), I actually meant language features specifically, not API, but I suppose they amount to the same thing.

            For 3), yes, it is fairly similar from a programmer’s perspective, but what I was referring to, poorly (I couldn’t think of the right word), was that C++ has a context-sensitive grammar, which leads to undecidability, and really nasty errors. (The guy who wrote that site seems to really hate C++, but it comes up a lot in discussions. I haven’t seen a rebuttal of it, but that doesn’t imply that it’s correct.)

            Then again, you know the saying: There are two types of programming languages: those that people complain about, and those that no one uses.

      2. wererogue says:

        I was trying to work out where to jump in, and here seems as good a point as anywhere.

        I don’t have time to make a proper post, and most people have done great already, so have some bullet points:

        – C still cheaper than C++ (barely)
        – Java pretty fast nowadays, esp. with JOGL and JNA
        – STL not great for portability (much worse than Java)
        – Neither Java nor the STL is good enough for high-performance apps (i.e. games). You often need to write your own containers for specific problems, avoiding cache misses etc. (rough sketch after this list)
        – Garbage collector is a pain (Java, C#), but can be worked around
        – C# as a scripting language is lovely (interop)
        – C#/XNA woefully inadequate on the XBox 360 (Cache misses for managed memory, mark-and-sweep garbage collection churns on an in-order processor)
        – I’ve used C++, Java, C# and Python (and BASIC) a *lot*. I love them all, and assembly too. But not VB (shudder).
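
        On the custom-container point, a rough sketch of the kind of thing people roll for games: a fixed-capacity, contiguous container with no heap allocation (FixedVector is made up for illustration):

        #include <cassert>
        #include <cstddef>

        // Fixed capacity, contiguous storage, no heap allocation: a predictable
        // memory layout keeps iteration cache-friendly.
        template <typename T, std::size_t Capacity>
        class FixedVector {
        public:
            FixedVector() : size_(0) {}
            void push_back(const T& value) {
                assert(size_ < Capacity);   // no resizing, ever
                data_[size_++] = value;
            }
            T& operator[](std::size_t i) { return data_[i]; }
            std::size_t size() const { return size_; }
        private:
            T data_[Capacity];
            std::size_t size_;
        };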

    4. Isaac Gouy says:

      >> The only thing that could be called “slow” is Python3 <<

      That just depends on how many languages are measured :-)

      You only looked at measurements made on x64 allowing all 4 cores to be used. More languages were measured on x86 forced onto just one core.

      (And ye olde out-of-date measurements from years ago included even more language implementations.)

      >> less than a factor of two <<

      Compare Java and C programs side-by-side.

      >> In C++, you have to write your own memory management system. <<

      Really? Why wouldn't you re-use a memory management library written by someone else?

  26. Carra says:

    Why don’t we all speak English?

    As you mention, languages have different purposes. It’s a trade-off between quickly programming something and speed. Speed is becoming less and less of a concern these days. Our GUI is written in C#, our server backend in C++. It really doesn’t matter if a language is “slow” for the GUI side.

    Nonetheless, I’d say speed doesn’t matter much these days unless you’re working on embedded devices or kernels. Otherwise? Go for a higher-level language; you’ll program a lot faster. Graham’s story is indeed nice. Since they were competent in a higher-level language, they could implement features faster and produce easier, more maintainable code.

    1. Tizzy says:

      And that’s why English is spoken so much worldwide, isn’t it? It’s so easy to pick up, not too many prescriptive rules, little in the way of conjugation. The trade-off there is that it’s a lot easier to speak it than to understand it, precisely because of the “fast and loose” feel of the language.

      1. Eidolon says:

        No, it’s not.

        In fact, English is one of the more difficult languages to learn. It’s got a lot of rules and, perhaps more importantly, a lot of exceptions to those rules. It’s not the hardest language — some of the American Indian languages are harder, and the major Chinese dialects are both tonal and ideographic, making both speaking and reading them a grueling process to learn. But it’s up there.

        The reason it’s so widely spoken is because of geopolitical, economic, and cultural influence. It was the language of the British Empire, which came to dominate much of the globe over the course of the 19th century, and it’s the language of the US, which has been reasonably influential since World War II or so.

        For similar reasons, in ancient Rome, the dominant language among the educated upper classes was… Greek.

  27. Ell Jay says:

    OT, more or less: this is one of the most reasonable and even-handed discussions I’ve ever seen on the Internet.

  28. Heather B says:

    Thanks so much for posting this! Now I want to flood your questions page with dumb non-programmer questions about programming. I’ll restrain myself.

    For now.

    1. Chris Ciupka says:

      I kind of like “dumb non-programmer questions about programming” personally, it’s always an interesting challenge to explain something about programming in a straightforward and clear manner.

      A lot of the time I think we as programmers underestimate how arcane some of the stuff we say can sound.

  29. Peter H. Coffin says:

    Bear in mind that what makes C able to easily do the magic thing with printing specially-formatted output is in the

    #include <stdio.h>

    not C itself. BASIC is similarly extensible, even in some pretty primitive versions, via the DEF FN functionality.

  30. Vegedus says:

    As a budding programmer, I’m desperately trying to glean some sense of what languages I should learn from all this. I’m getting the impression that programmers are a completely conservative lot, and I better learn some powerful languages soon, or I’ll be jaded and forever stuck with an inferior language if I go with the popular stuff. I really don’t know, I’m not too into the industry, history and platform-dependence, I just want to code stuff.

    1. HarveyNick says:

      Python is a good place to start. It’s a bit of a toy programming language, and it tends to encourage really bad programming practice in some cases, but it’s very easy to pick up and very fast to code in. It’s also of some use in web programming, and an increasing number of actual applications are being written in it.

      If you want to program for Windows you probably want to be looking at C# and/or C++. I’d start with C#; it’s a blatant knockoff of Java and it maintains a few of C++’s more dodgy design decisions, but it’s an easier language to work with.

      If you want to program for Mac learn Objective-C. Here, it’s the best option by far.

      If you want to program for the web then Java and Ruby are your best bets, though Python and Groovy* have some utility as well.

      If you’ve got games on your mind… that’s a tricky one. C / C++ will give you the performance, but will also make your life a lot harder. Java has pretty good support for 3D graphics (see http://www.jMonkeyEngine.com) and is a good language for writing AI (I did my PhD using it, and it needed 3D graphics and a lot of AI). Likewise, C# has pretty good support for building games; XNA is a good place to start.

      *Groovy is my personal favourite scripting language.

    2. Anaphyis says:

      You will be jaded and stuck with an inferior language no matter what language you actually choose.

      As demonstrated here: http://wurstball.de/static/ircview/pictures/749cd15bf9d0254286148f468567b29e.jpg

      It really doesn’t matter what language you pick as long as it gets the job done. The height of the hurdles for different tasks depends on the language you use, but with an affinity for programming and some experience you can overcome those hurdles in a reasonable time, or pick up another language that can handle them. As a programmer, you will sooner or later have more than one language under your belt anyway.

    3. Alan De Smet says:

      The first few languages you learn are the hardest. Everything after that is easy. I’m suspicious of any professional programmer who isn’t comfortable in at least 3 programming languages.

      So where to start? A common concern for new programmers, but it turns out that it largely doesn’t matter. What’s more important is to just start. My recommendation would be to look at what you want to accomplish, and look at what languages people in that area tend to use. There is probably a good reason: typically either the language is innately well suited to the task, or it has good libraries for the task.

      In the long run, you’ll want to learn enough languages to cover the range of techniques. My really quick and dirty list:

      Object Oriented: Java or C++. C# or Objective-C are also fine, but tend to cluster more strongly around specific companies’ offerings (Microsoft and Apple). Object-oriented imperative languages are where most work is being done these days.
      Scripting/glue language: Python. Perl or Ruby are also good options. Most programmers end up doing lots of glue work or one-off work, and a scripting language really speeds it up.
      Imperative (notably, sans-objects): C. Maybe Pascal or Fortran. Non-object oriented imperative languages are disappearing, which is a shame because working in one sheds a lot of light onto object-oriented programming.
      Functional: Lisp or Scheme. When you start to really “get” functional programming, it opens your mind up to lots of useful techniques that apply even outside of functional languages.

      1. Jabor says:

        I would add that it’s important to learn an assembly language at some point. Knowing exactly what the computer is capable of, and what’s just being abstracted away by your choice of high-level language, stops you doing silly things like

        for (int i = 0; i < arraylen; i++)
            strcat(mystring, array[i]);   // each call re-scans mystring from the start, so the loop is O(n^2)
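
        The usual fix is to remember where the string ends instead of letting strcat re-scan it every time. A sketch, assuming mystring has enough room and <cstring> is included:

        char* end = mystring + strlen(mystring);
        for (int i = 0; i < arraylen; i++) {
            size_t len = strlen(array[i]);
            memcpy(end, array[i], len);   // append without walking the whole string again
            end += len;
        }
        *end = '\0';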

        1. Tizzy says:

          I think we can add to that a good grasp of algorithmic principles. This way you get all your bases covered: what is theoretically possible and how it works in practice.

      2. silver Harloe says:

        “If all you have is a hammer, everything looks like a nail.”

        A programmer that can’t learn a new language in a couple of weeks is like a carpenter that can’t learn a new tool in a couple of minutes. Hopefully, the programmer (or carpenter) in question is merely inexperienced, because that is something which fixes itself with practice. If you’re looking at a programmer’s resume and it only includes one language, don’t hire them. Even if it’s the language you’re hiring programmers for (unless you purposefully want a newb so you can mold everything they know, muhuwahaha. er. ahem)

        And, really, learning a new syntax is trivial (except maybe APL). The reason it takes time to learn a new language is you have to pick up on idioms and explore the libraries. Although, these days, Google makes that a lot easier than it was when I started :)

  31. Matt says:

    I’m not sure why, but I feel like this article should have a reference to Lingua::Romana::Perligata.

    1. silver Harloe says:

      Damian Conway is awe inspiring, isn’t he? :)

  32. SatansBestBuddy says:

    This reminds me…

    How does the optimization of your website fare, Shamus?

    Been hit with another wave of traffic off of Reddit or something that would have crashed the old site yet?

    1. Shamus says:

      Sadly, no traffic spikes in the last month or so.

      1. Qwertopew says:

        Just get Stephen Fry to twitter about this place :P

  33. Galad says:

    Where does Prolog stand on the complexity vs. simplicity scale?

    1. silver Harloe says:

      Prolog is a different mindset. The closest thing it resembles that most (*n*x) programmers deal with is writing a “Makefile” – you write rules, and ask the database questions and it tries to deduce the answers from the rules. You don’t use it to “write programs” – it’s not something you’d make a web page with. It’s a deduction engine and has limited use outside of that field.

  34. RPharazon says:

    The best trade-off I’ve found is Cython, a variant of Python that’s closely tied to C. You can do anything you’d normally do in Python, but it integrates amazingly well with C, in case you run into tricky things that need C, or would be easier to write in C. It’s very versatile, and generally very user-friendly.

    To anyone that likes mathematics, logic puzzles, and/or programming: Go to Project Euler. Awesome problems that you solve by programming. Very awesome.

    Finally, realize this: Chris Sawyer wrote RCT1, RCT2, Transport Tycoon, and Locomotion entirely in Assembly. Worship the man. WORSHIP.

  35. Galenor says:

    Being an amateur programmer, I don’t judge programming languages by adaptability or efficiency. My code can devour memory and only work on Windows 7 64-bit in Bulgarian, and I’d still be over the moon; my choice is based on what I’m doing.

    At the moment, I’m not very diverse with my languages. For networking, I like Java, although I will never use that damn language for anything outside of sockets and packets. ‘Number crunching’, databases, and other such business practices are lovingly prepared in C++. Go into the indie gaming side of me, however, and it takes a twist towards C# coupled with XNA.

    Now that I’ve put it on paper(?!), I can see that my choices are mainly based on the ease of the language to do the task I want it to do. I don’t have much pride in crowbarring a fully-working packet sender into C++, but I do have the desire to keep head-desking to a minimum. :)

  36. Samael says:

    A reasonable programming discussion? This is still the Internet, isn’t it? oO

    But this may be an opportunity to get help with a problem I have. I’ve tried to learn one or two new languages recently (since C and Perl are the only things I can work with and I’m interested in new things). However, every single book I stumble upon for whatever language is utter garbage.

    I’m a learning-by-doing guy, so the best way for me to learn a new language is getting a task and a programming reference to accomplish that task. As a hobbyist I have no running projects, and all books have either copy & paste examples explaining what they do (which I don’t need, because I can handle a reference book and I’m not a typist) or outlandish tasks with no foundation in reality (let’s program a car, or other stupid projects; it should be a program that not only runs but accomplishes something useful).

    Does someone have an idea or a recommendation on how to tackle this problem? Because I’ve been trying to broaden my horizons for over 3 years now and I’m always falling back to my status quo.

    1. Brandon says:

      Well, if you want to branch into Java, think of a nice web-based app you’d like people to have access to. Only think of something that would work better client-side than server-side. Since Java is very cross-platform you could offer this app or functionality to almost anyone with a robust web browser and Java installed.

    2. Jabor says:

      Structure and Interpretation of Computer Programs is a decent book for learning Scheme. Starts off fairly simply (it is an introductory text after all), but ends up getting you to write a Lisp interpreter in Scheme.

      http://mitpress.mit.edu/sicp/

    3. Goggalor says:

      You could always look on open-source websites such as SourceForge.net or github.com.
      Their search engines let you filter by project language and release status (e.g. coding, alpha testing, production, etc).

      So you could browse the non-production projects in a language you want to learn, find a project that sounds interesting, download/checkout the source, and see if they have a TODO or BUGS list. That way you can use the existing source as an example of the language and you can extend it for practice.

  37. Luke Maciak says:

    Btw, does anyone still use BASIC for anything serious? I think a better example of a very high-level language would be Python or Ruby.

    Also you forget that the abstraction level is not the only factor. Different languages offer different features. Some languages are compiled, some are interpreted and some offer both. Some languages are procedural, some are functional, others are object oriented. There is static typing and dynamic typing. Then there are logical languages like prolog.

    At the end of the day the speed and optimization is only a fraction of the argument. For example, if you compare Java, Jython and Clojure they will all likely have similar performance. They all compile into Java bytecode and run on the JVM. But picking one language over the other is a matter of features and aesthetics. If you want static typing and OO, use Java. If you want dynamic typing and more flexibility use Jython. If you want functional language with lisp style macros use Clojure.

    It is a very complex question – nevertheless this is a decent crack at explaining it to a non-programmer. Still, you should put a disclaimer there that it is an oversimplification. :)

    1. silver Harloe says:

      Yes, people use Basic — especially Visual Basic. Note that these days Basic has OO capabilities and the occasional label instead of omnipresent line numbers. You wouldn’t recognize it :)

    2. HarveyNick says:

      Not sure about Clojure, but Jython has significantly slower performance than Java, slower than CPython, even. My understanding is that one of the main reasons for this is that the JVM doesn’t natively support dynamic typing, and so this basically has to be emulated. Plans for Java 7 include an update to the JVM to make this work correctly.

      Groovy, which I mentioned earlier, is capable of both static and dynamic typing, which is a nice feature.

  38. Allen says:

    I think computer languages are to programmers as tools are to carpenters – you need a few, for different situations. (And any decent programmer – not elite, just “normal” – should be able to adapt to new languages fairly easily).

    I started with BASIC, because that was pretty much all you could get your hands on at an ’80s elementary school ;)
    High school was Macs, so you learn HyperCard.
    In University I was subjected to Modula-2 before getting the true power of C and C++ (and Unix in general).
    I started Web programming with C++, but once I was out of school webhosting companies had this odd dislike of compiled programs on their servers. (Can’t imagine why…). So it was time to learn Perl (which I think gets a bad rap, to be honest).
    My current job is VBA with some JScript and batch files. (Mainly because the intended users pretend it’s just an Excel/Access file ;)
    And just for kicks I’m trying to learn some Java (so I can fix some stupidity Corporate wants us to implement).

    Never learned assembly, though…

  39. Unconvention says:

    Why are there so many programming languages? Why doesn't everyone just pick the best one and use that?

    Why are there so many tools? Why doesn’t everyone just pick the best one and use that?

    You can hammer in a nail with a saw, but it’ll take a while and be a pain to do it. Equally, you can ‘cut’ that piece of wood in half with a hammer, but it’ll take a while and produce a really ugly end product.

    Programming languages are tools. They’re (mostly) designed with a particular purpose in mind, so there is no ‘best’ language in the same way that there is no ‘best’ tool.

  40. Kyle says:

    The goal is to simply print the worlds “Hello World!”, and that's it.

    Print the “word” or “world”.

    Funny typo, just letting you know.

  41. BlackBloc says:

    There is exactly ONE thing I like better about C++ compared to Java, and it’s template programming. Which ironically is the one thing most C++ programmers (who are actually mostly C programmers with objects thrown in) don’t use much.
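
    For anyone wondering what that buys you, a minimal example (smaller is a made-up name for illustration): one definition works for any type that supports operator<, and each use gets compiled as if it were written by hand for that type.

    #include <iostream>

    template <typename T>
    const T& smaller(const T& a, const T& b) {
        return (b < a) ? b : a;
    }

    int main() {
        std::cout << smaller(3, 7) << std::endl;      // instantiated for int
        std::cout << smaller(2.5, 1.5) << std::endl;  // instantiated for double
        return 0;
    }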

    Java is probably not worth it for triple-A games programming, but IMO it’s definitely worth using for more indie/retro games. The only disadvantage of Java is that if you want to leverage its cross-platform capabilities you need to stick to OpenGL instead of DirectX. At my workplace we work on a graphics-heavy program (though not speed-critical like an action game) and we’re busy switching our code from C++ to Java where it makes sense (we have a client/server architecture, so this happens whenever we move functionality to the client or want to remove cross-platform headaches).

  42. Deoxy says:

    OK, don’t have time to read it all today (sadly), but I do have to say one thing about C/C++.

    Generally, I agree that many languages have their strengths and purposes – your general point in the article is spot on.

    HOWEVER.

    C is just plain stupid. It drives me crazy that it continues to be used, almost exclusively out of habit. SURE, it gives you a lot more power/flexibility, and occasionally you need some of that… but there are other ways to get it than a language that allows bloody POINTER ARITHMETIC (among other crazy things).

    It’s like saying that you really need that 30 inch, 80 horsepower handsaw with no guard – sure, it’s powerful… but there’s a reason none of those (if they even exist) actually see any use.

    Even if you made a successful case that C hits some kind of sweet spot for certain things (a big if, but I’ll grant it for this argument), why oh why oh WHY is it used for SO MUCH OTHER BLOODY STUFF?!?

    GAH!!!

    Edit: I’ve done actual paid work in VB, C++, VB.net, C#.net, COBOL (and Cobol), and bits and pieces in a few other things, and C# is really not much like C other than syntax, but C++ is. Python is on my hobby list, but I haven’t touched much on my hobby list in a while, really. /Edit

    Just to end this post on a little nicer note, I will point out that there’s really nothing special about the Java language itself (it’s primarily a rip-off of C) – the ability to run the same code in multiple places could easily be (and in a few other cases that aren’t nearly as widespread, actually has been) implemented for any language. The big bonus to Java is simply that it’s already been done and the resulting “virtual machine” widely distributed.

  43. Deoxy says:

    Hmm… I commented, then edited the comment to add a list of languages I’ve worked in, and “Your edited comment was marked as spam”. ???

    I wonder how I did that – honestly, the only thing I could see about my edit that seemed much different than the rest of my post was a few all caps words and words for a couple of the programming languages. Weird.

    1. Shamus says:

      Sometimes my spam filter is just a sadistic imbecile. It will let pass some obvious bit of spam and then eat a perfectly innocent post with no “danger” words and no links. I dunno.

      In any case: Comment restored.

      1. Deoxy says:

        Thanks!

        Not just imbecile, but sadistic imbecile, eh? Sounds like you should check that code for useful video-game AI teammate code – sadistic imbecile would be an improvement for most of them.

        1. Garden Ninja says:

          I’ve already seen that implemented in at least one game: Army of Two the 40th Day. There are spots where you are supposed to save civilians from the bad guys. When you come up to one of these spots, rather than getting to cover, or shooting back at the guys actually firing on them, the bad guys kill the civilians. Uh… what?

  44. UtopiaV1 says:

    Wow, a lot of programmers sure do read this blog!

  45. Haskell!
    That is the language to which my vote goes.

  46. Lalaland says:

    One more point on Java: I was taught it in first year in CS&SE, and then they changed course directors and from year 2 on it was C all the way. More relevant, maybe, but when you’ve done your ‘Coding 101’ courses ignoring memory management, arriving in year 2 with lecturers expecting you to be up to speed with C memory management is a real issue. I never really got over this hurdle personally (my nightmares are haunted by segmentation faults and pointers) and struggled through the degree. Ironically, most of my friends who stayed in SE work with Java, but the whole experience soured me and now I sell PCs for a living!

    To bring this back on point: I think a lot of talented programmers can underestimate how much the core conceptual differences between languages can throw novices for a loop. I still love the core logic of code, which is why I find discussions of programming topics so fascinating.

    1. Psithief says:

      I had a similar experience at Curtin University.

  47. Tizzy says:

    I enjoy reading all this material about general-purpose languages. One thing most people would also be amazed at is the crazy variety of languages that exist for very specialized fields, especially academic fields. The Wikipedia comparison pages are fairly instructive in that regard.

  48. vede says:

    I can kinda understand why, seeing how this discussion is primarily focused on languages meant to complete some specific task, but I think this comment thread could stand to be a bit more esoteric. A lot of programmers (in my experience) like to toy with esoteric languages just as fun pastimes, and they’re fun to show non-programmers because they reinforce all those movie-based stereotypes about programming (that it’s just a bunch of seemingly meaningless symbols).

    Some examples:

    My, and probably most people’s, first esoteric language: Brainfuck
    Only eight instructions, no graphical capabilities (though you could theoretically make a roguelike of some kind, as far as I know), and pretty much everything you ever write in it will be a blast, because you actually made the language do something recognizable.

    “Hello, world!” as seen by Brainfuck:

    ++++++++++[>+++++++>++++++++++>+++>+<<<<-]>++.>+.+++++++..+++.>++.<<+++++++++++++++.>.+++.------.--------.>+.>.

    A more interesting one: Piet

    A language where every piece of code is literally a work of art. The code is an image, with a cursor that gets its instructions and where it needs to go next from colors in the image.

    “Hello, world!” as seen by Piet: http://en.wikipedia.org/wiki/File:Piet_Program_Hello_World(1).gif

    One that takes the term “esoteric” to levels of pure insanity: Malbolge

    Named after the eighth level of Hell, the language is so contrived that it took two full years after it was made public in 1997 for any functional program at all to be written, and even then, the program wasn’t even written by a human.

    “Hello, World!” as seen by Malbolge:

    ('&%:9]!~}|z2Vxwv-,POqponl$Hjig%eB@@>}=<M:9wv6WsU2T|nm-,jcL(I&%$#"
    `CB]V?Tx

    A slightly more hilarious one: LOLCODE

    IM IN YR CODE MAKIN U LOL

    “HAI WORLD” LIKE SEEN N LOLCODE:

    HAI
    CAN HAS STDIO?
    VISIBLE "HAI WORLD!"
    KTHXBYE

    (LOLCODE has actually been used for web page scripting, so it is definitely a functional language. It’s also a funny language.)

    EDIT: I will be moderately amazed if this doesn’t get caught in a spam filter of some kind. Take the link to a .gif, a link containing the word “fuck”, the loltalk, the lines of crap that amount to essentially gibberish… Impressive.

    1. Deoxy says:

      No, marking as spam is only for those that appear innocent – as Shamus mentioned above, the spam filter is a sadistic imbecile.

    2. Garden Ninja says:

      I’ll see your Malbolge, and raise you Whitespace.

  49. Steve C says:

    I’ve never quite understood why programming languages don’t program other languages.

    Program in the easiest language for the task then output that into assembly. Then compile the assembly and have that language instruct the computer. In theory it would be efficient both for the computer and efficient for the human programmer.

    But I’m a non-programmer and understand only the very basics of coding. I’m sure there’s got to be a reason why that isn’t done.

    1. silver Harloe says:

      That’s just what compilers do, actually: turn your high level (easy to write in) language into assembly, and then turn the assembly into machine code…

      It’s become trendy of late, though, to make programming languages write their assembly/machine code for “virtual machines” instead of the actual chip in the computer, because you can make the virtual machine the same for every computer – meaning you get to compile it once and run it on any machine.

      But, let us set that aside and address the core of your question, the unasked next question: well, if we compile programs just like that, why aren’t all these languages at the peak of potential efficiency, then?

      The answer is because the assembly code they generate comes from templates and patterns which naturally express what you wrote in the high level language, and which do not necessarily represent the best possible machine language program you could write for the task.

      For example, to make a string in assembly (like what Mr Young did at the end of his example):


      section .data
      msg db 'Hello, world!',0xa

      That string contains exactly the number of bytes it needs and no more.
      However, in _even the most efficient compiled language there is_, the code it generates for a string will be


      msg db 'Hello, world!',0xa,0x0

      The extra byte is so that it knows where the string ends. C is, however, blazingly efficient, and the compiler code is so old that people have taught it all kinds of tricks to speed things up here and there (they call it “optimization” of the compiled code)…

      It gets worse in the next logical jump from C to C++, because in addition to allocating the string, it’s going to allocate memory to contain data about the “string object” that it’s a part of (probably a couple bytes representing the object ID, several more so it can map object IDs to code, a couple bytes representing a reference count on the string, and so on).

      And so far I’ve just covered the memory footprint.

      Even in C, when you call printf("Hello, world!\n"); you don’t get the 9 assembly instructions Mr Young wrote to perform the print. You get a bunch of assembly to define the function of “printf()” (which has quite a few features, and so generates a lot of code), code to set up the string in memory so that it can call printf, code to call printf…
      When you jump to C++, you get even more ‘free code’ to go with the ‘extra features’ of the language. If you compile Java straight to assembly, you get even more free code to do the “memory management” stuff people keep bringing up. If you compile a really high level language like Perl straight to assembly (though, to my knowledge, there isn’t such a compiler), then you get Yet Even More ‘free code’ so that it can treat strings as potential containers of numbers and all kinds of spiffy features. And you get all that code even though in each of those languages the program is very, very simple indeed:

      In C:


      #include <stdio.h>
      int main(void) {
          printf("Hello, World\n");
          return 0;
      }

      in C++, the C code will work. I don’t remember it in Java, but I think it’s


      class Main {
          public static void main(String[] args) {
              System.out.println("Hello, World");
          }
      }

      and in Perl it’s


      print "Hello, World\n";

      Each language (except Java :) ) gets simpler as you move “up” the hierarchy, but each generates more assembly language, as well, to handle all the neat features you aren’t even using in this example program. Whereas a human writing assembly would just generate the 9 instructions needed, because the human knows there aren’t any more features needed.


      1. Deoxy says:

        In theory, the compiler could determine which stuff to include – that is to say, only include the “extra” code if the code being compiled makes use of features that require that extra code. Assembly created by such a system would be, in many cases, perfectly efficient (in other cases, the person writing the high-level code would be using features in unnecessary ways, but the compiler wouldn’t know that).

        Unfortunately, no one has written a compiler like that… it would be insanely complicated to write.

        1. silver Harloe says:

          That may be true for some languages, but many these days have reflective features that let you modify the code itself, and thus have to be prepared for any eventualities. I say “may” be true, because there’s a lot of past thought that has gone into this sort of thing, with phrases like “NP Complete Problems” which may apply here, but I’m not current on the topic, so I can’t say for sure (I do know that “determining whether or not an arbitrary program terminates in a finite time” is a literally unsolvable problem)

  50. Jattenalle says:

    In BASIC, the first task would be super-trivial. One line of code. The second task would require pages of code. 99% of your programing time would be spent on the second task, and it would be much, much slower than the first task.
    You might want to read up on your BASIC, Shamus.
    Check out a language called FreeBASIC while you’re at it ;)

    For a concrete example of what puny slow BASIC can do nowadays check my website link, it’s even in OpenGL ;)

  51. saj14saj says:

    Over the last (and I cannot believe I am writing this) 30 years or so, I have written production real world code in a significant plurality of the languages mentioned in this thread, including all of the BIG ones, and several others not mentioned.

    One can only really understand the value of a programming language in context of its place in history and the understanding of the craft (and I use that word carefully, not a science, and not an art, but elements of both) of programming.

    The one trend that I see absolutely governing my choice of tools as it has changed over the years is this: human time is more valuable than computer time, in most circumstances (I will not argue that there are exceptions). Therefore, the more the development environment (including language features like garbage collection and my latest love, dynamic typing, as well as the libraries, and even the IDE in which the work is done) takes care of, the more human time is expended on the actual business (or scientific, or gaming) project at hand, and not the mechanics.

    In many programming domains (with the notable exception of Shamus’ one), optimization is too expensive to do except when it is really needed–because humans are more expensive than machines, over the lifetime of an application. I teach my team to optimize -maintainability- over everything else. Again, not appropriate in all domains, but I believe it shows the value and trend of the evolution of tools.

    The lowest-level tools haven’t evolved much since the macro assembler, but the highest-level ones continue to be a hotbed of innovation.

  52. Martin Annadale says:

    I code often in C++, C# and Delphi.

    But Delphi is my favorite. It strikes a balance between awesome ease of use (setting up an interface) and power. I haven’t found anything that I can do in C++ that I can’t do in Delphi. It even has the ability to embed assembly code on the fly. It compiles super fast (seriously, anyone used to any other language will wet themselves at the speed). On the Win32 versions the .exes it produces are ridiculously small (easily 30 to 40% smaller than other languages).

    So. Basically. I love Delphi to death.

  53. Yoshida says:

    One thing I haven’t seen anyone mention is the HUGE size of a compiled C program. I had been using Java for quite some time and tried to learn C. I wrote a simple “Hello World” program and it compiled into a 153 kB executable. By comparison, the very first Legend of Zelda came on a 128 kB double-sided floppy disk.

    I don’t know why anyone would use C when it takes a program literally larger than Hyrule in order to display a string on the screen. (In Java, a “Hello World” program was only 550 bytes).
