Archive for the ‘Concept programming’ Category

Animation and 3D: the web is doing it wrong

In Animation and 3D: the web is doing it wrong, I argue that the way the web does animation and 3D is completely bogus and deserves to die. With Tao Presentations, we offer a dynamic document description language that lets us write shorter code that is much closer to storytelling. We’d like to bring this to the web.

Have you ever created a dynamic animation on a web site, or displayed a 3D object? Really, why is it so complicated? Why should I learn how to use half a dozen libraries, write dozens of lines of boilerplate HTML, WebGL and CSS code, just to rotate some text on the screen or display a 3D object? Why do I need three (or four, or five) languages, libraries or frameworks to design a single animated web page?
In the realm of business presentations, Tao Presentations solved this problem with a 3D dynamic document description language, letting you easily create sophisticated interactive 3D animations and presentations. What if we brought this very innovative platform to the web? What kind of applications would become possible if we improved web browsers in the areas of storytelling, interactivity, 3D or multimedia?


Is it normal to wait for your computer? Why should I wait 5 seconds when I click on a menu? Why does it sometimes take half a minute to open a new document? Developers, optimize your code, if only as a matter of public service! What about making it a New Year’s resolution?

Why is my Mac laptop slower than my iPad?

Apple cares about iPad performance

I have a serious issue with the fact that on a laptop with 8GB of RAM, a 1TB hard disk and a quad-core 2GHz i7, I spend my time waiting. All the time. For long, horribly annoying pauses.

Just typing these few paragraphs had Safari go into “pause” twice. I type something and it takes ten seconds or so with nothing showing up on screen, and then it catches up. Whaaaaat? How did programmers manage to write code so horribly that a computer with a quad-core 2.6GHz i7 can’t even keep up with my typing? Seriously? The Apple II, with its glorious 1MHz 8-bit 6502 never had trouble keeping up, no matter how fast I typed. Nor did Snow Leopard, for that matter…

Even today, why is it that I always find myself waiting for my Mac as soon as I have 5 to 10 applications open, when a poor iPad always feels responsive even with 20 or 30 applications open at the same time? Aren’t we talking about the same company (Apple)? About the same core operating system (Darwin being the core of both iOS and OSX)? So what’s the difference?

The difference, folks, is optimizations. Code for iOS is tuned, tight, fit. Applications are programmed with severe hardware limitations in mind. The iPad, for instance, is very good at “pausing” applications that you are not using and recalling them quickly when you switch to them. Also, most applications are very careful in their use of resources, in particular memory and storage. Apple definitely cares about the performance of the iPad. There was a time the performance of the Mac mattered as well, but that was a long time ago.

Boiled frog syndrome: we slowly got used to desktops or laptops being slower than tablets, but it’s just plain stupid.

Lion and Mountain Lion are Dog Slow

It’s obvious why they called it Lion…

I’ve been running every single version of MacOSX since the Rhapsody days. Up until Snow Leopard, each release was a definite improvement over the previous version. Lion and Mountain Lion, on the other hand, were a severe step backwards…

Lion and Mountain Lion were not just loaded with features I didn’t care about (like crippling my address book with Facebook email addresses); they didn’t just break features I relied on on a daily basis (like full-screen applications that work with multiple monitors, or RSS feeds). They were slow.

We are not talking about small-scale slowness here. We are talking molasses-fed slugs caught in a tar pit, lag raised to an art form, junk code piling up at an industrial scale, inefficiency that makes Soviet car design look good in comparison.

And it’s not just me. My wife and my kids keep complaining that “the machine lags”. And it’s been the case with every single machine I “upgraded” to Lion or Mountain Lion. To the point where I’m not upgrading my other machines anymore.

In my experience, the core issue is memory management. OSX Lion and Mountain Lion are much worse than their predecessors at handling multiple programs. On OSX, the primary rule of optimization seems to be “grab 1GB of memory first, ask questions later.” That makes sense if you are alone: RAM is faster than disk, by orders of magnitude, so copying stuff there is a good idea if you use it frequently.

But if you share the RAM with other apps, you may push those other apps out of memory, a process called “paging”. Paging depends largely on heuristics, and has a major impact on performance. Because, you see, RAM is faster than disk, by orders of magnitude. And now, this plays against you.

Here is an example of a heuristic that I believe was introduced in Lion: the OS apparently puts aside programs that you have not been using for a long while. A bit like an iPad, I guess. On the surface, this seems like a good idea. If you are not using them, free some memory for other programs. But this means that if I go away from my laptop and the screen saver kicks in, it will eat all available RAM and push other programs out. When I log back in… I have 3GB of free RAM and a spinning beach ball. Every time. And even if the screensaver does not run, other things like backupd (the backup daemon) or Spotlight surely will use a few gigabytes for, you know, copying files, indexing them, stuff.

Boiled frog syndrome: we slowly got used to programs using thousands of Mac 128Ks’ worth of memory to do simple things like running a screensaver. It’s preposterous.

Tuning memory management is very hard

Virtual Memory is complicated

The VM subsystem, responsible for memory management, was never particularly good in OSX. I remember a meeting with an Apple executive back when OSX was still called Rhapsody. Apple engineers were all excited about the new memory management, which was admittedly an improvement over MacOS9.

I told the Apple person I met that I could crash his Mac within 2 minutes at the keyboard, doing only things a normal user could do (i.e. no Terminal…) He laughed at me, gave me his keyboard and refused to even save documents. Foolish, that.

I opened a document in one of the standard applications of the time, and clicked on “Zoom” repeatedly until the zoom factor was about 6400% or so. See, at the time, the application was apparently allocating a rendering buffer that grew as you zoomed. The machine crawled to a halt, as it started paging these gigabytes in and out just to draw the preview on the screen. “It’s dead, Jim“, time to reboot with a long, hard and somewhat angry press on the Power button.

That particular problem was fixed, but not the underlying issue, which is a philosophical decision to take control away from users in the name of “simplicity”. OS9 allowed me to say that an App was supposed to use 8MB of RAM. OSX does not. I wish I could say: “Screen Saver can use 256MB of RAM. If it wants more, have it page to disk, not the other apps.” If there is a way to do that, I have not found it.

Boiled frog syndrome: we have slowly been accustomed by software vendors to giving away control. But lack of control is not a feature.

Faster machines are not faster

A 1986 Mac beats a 2007 PC

One issue with poor optimizations is that faster machines, with much faster CPUs, GPUs and hard disks, are not actually faster to perform the tasks the user expects from them, because they are burdened with much higher loads. It’s as if developers always stopped at the limit of what the machine can do.

It actually makes business sense, because you get the most out of your machine. But it also means it’s easy to push the machine right over the edge. And more to the point, an original 1986 Mac Plus will execute programs designed for it faster than a 2007 machine. I bet this would still hold in 2013.

So if you have been brainwashed by “Premature optimization is the root of all evil”, you should forget that. Optimizing is good. Optimize where it matters. Always. Or, as a colleague of mine once put it, “belated pessimization is the leaf of no good.”

Boiled frog syndrome: we have slowly been accustomed to our machines running inefficient code. But inefficiency is not a law of nature. Actually, in the natural world, inefficiency gets you killed. So…


Designing a new programming language

I’ve been working for years on a programming language called XL. XL stands for “extensible language”. The idea is to create a programming language that starts simple and grows with you.

All the work of Taodyne, including our fancy 3D graphics, is based on XL. If you think that Impress.js is cool (and it definitely is), you should check out what Tao Presentations can do. It’s an incredible environment for quickly putting together stunning 3D presentations. And we are talking real 3D here, with or without glasses depending on your display technology. We can even play YouTube videos directly.

But the paradox is that the work for Taodyne keeps me so busy I’m no longer spending as much time on the core XL technology as I’d like. Fortunately, vacation periods are an ideal time to relax, and most importantly, think. So I guess it’s no accident that Christmas is one of the periods when XL leaps forward. Last year, for example, I implemented most of the type inference I wanted in XL, and as a result, got it to perform close to C on simple examples.

This year was no exception. I finally came up with a type system that I’m happy with, that is simple to explain, and that will address the demands of Tao Presentations (notably to implement easy-to-tweak styles). The real design insight was how to properly unify types, functions and data structures. Yep, in XL, types are “functions” now (or, to be precise, will be very soon, I need to code that stuff now).

Another interesting idea that came up is how to get rid of a lot of the underlying C++ infrastructure. Basically, symbol management in XLR is currently done with C++ data structures called “Symbols” and “Rewrites”. I found a way to do exactly the same thing with only standard XLR data structures, which will allow XLR programs to tweak their own symbol tables if they choose to do so.

An interesting rule of programming: you know you have done something right when it allows you to get rid of a lot of code while improving the capabilities of the software.

Back to coding.

Using XL for high-performance computing

There is a little bit of activity on the XLR-Talk mailing list, dedicated to discussions about the XL programming language and runtime. XL is a high-level general purpose programming language I designed and implemented over the years. It exists in two primary flavors: XL2, an imperative statically-typed language, and XLR, a functional one. Both share a very similar syntax and basic concepts.

One of the interesting threads is about using XL for high-performance computing. I like it when someone writes:

Thank you for releasing Xl and in particular Xl2, this is a most
interesting and exciting development.  I am a very long-time C++ user
and appreciate the power of generic programming through templates but
the syntax is clunky and I often find myself going off the end of what
is currently possible and end up messing around with the C pre-
processor which is frustrating.  I am hoping that Xl2 will prove to be
an effective alternative to C++ templates and provide the programming
flexibility I crave.

Now, XL2 is not ready for prime-time yet. Its library is significantly lacking. But the core compiler is already quite advanced, and can compile very interesting pieces of code. For instance, XL was as far as I know the first language to introduce variadic templates, for code like this:

generic type ordered where
    A, B : ordered
    Test : boolean := A < B
function Max (X : ordered) return ordered is
    return X
function Max (X : ordered; ...) return ordered is
    result := Max(...)
    if result < X then
        result := X

What happens in this little piece of code is interesting. It introduces two key features of XL: true generic types and type-safe variadics.

True generic types

The ordered type is an example of “true generic type”, meaning that it can be used as a type in function declarations, but it implicitly makes the corresponding function generic (C++ programmers would say “template”). In other words, ordered can represent types such as integer or real, and you can use Max with all these types.

In that specific case, ordered is a validated generic type, meaning that there are some conditions on its use. Specifically, ordered only represents types that have a less-than operator, because that operator is necessary to implement Max. Note that a compile-time failure will occur if you attempt to use Max with a type that doesn’t have a less-than, even if no less-than operation is used in the instantiation.

Type-safe variadics

The second interesting feature demonstrated on this small example is the use of ... to represent arbitrary lists of arguments. This is used here to implement type-safe variable argument lists. You can for example write Max(1, 3, 5, 6, 9), and the compiler will recursively instantiate all the intermediate Max functions until it can compute the result.
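The recursive pattern is easy to sketch outside XL. Here is a hypothetical Python analog, where `*rest` plays the role of XL’s `...` (with the important caveat that Python resolves this at run time, whereas XL2 instantiates it at compile time):

```python
def vmax(first, *rest):
    # Base case: one argument is its own maximum,
    # mirroring "function Max (X : ordered) return ordered".
    if not rest:
        return first
    # Recursive case: reduce the tail, then compare with the head,
    # mirroring "function Max (X : ordered; ...) return ordered".
    result = vmax(*rest)
    return first if result < first else result

print(vmax(1, 3, 5, 6, 9))  # 9
```

The structure is the same as the XL2 version above: one rule ends the recursion, the other peels off a single argument.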

These same features are also used for functions that have lists of arguments with differing types, such as WriteLn. The XL2 implementation of WriteLn looks like this:

to WriteLn(F : file; ...) is
    any.Write F, ...
    PutNewLineInFile F

The Write function itself is implemented with a similar recursion that ends on functions that write a single argument, e.g. an integer value.
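The same head-and-tail recursion handles heterogeneous argument lists. A minimal Python sketch (the function names here are hypothetical, not XL2’s actual library):

```python
import io

def write(out, first, *rest):
    # Each step consumes one argument of any type; the recursion ends
    # on the final single-argument call, as with XL2's Write family.
    out.write(str(first))
    if rest:
        write(out, *rest)

def write_ln(out, *args):
    if args:
        write(out, *args)
    out.write("\n")

buf = io.StringIO()
write_ln(buf, "x = ", 42)
print(repr(buf.getvalue()))  # 'x = 42\n'
```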

How does it help HPC

So how do these features help high-performance computing? They allow you to easily write highly generic code, covering a large range of uses, without paying a run-time penalty for it. No objects are constructed. No garbage collector is required to clean up memory allocations: there are no memory allocations. Everything can easily be inlined.

There are other features in XL that also help with HPC.

Where do we go from here?

XL2 is currently a little bit on hold, because I’m focusing a lot of my energy on the functional variant, XLR, used by Taodyne in its products.

However, I believe that it has reached a stage where other people can contribute relatively easily. For example, it would be useful to implement the various “fundamental” data structures in the library, i.e. go a little bit beyond arrays. If you want to contribute to XL2, nothing would please me more than to give you pointers. Simply join xlr-talk.

Trends in programming languages…

September 29, 2010

A colleague sent me an interview with Kalani Thielen about trends in programming languages.

I’m fascinated by this interview. Kalani is obviously an expert. But what should we think of the following?

The function, or “implication connective” (aka “->”), is an important tool and ought to feature in any modern language.

I have a problem with this kind of jargon. Why? Because if concept programming teaches us anything, it’s that a computer function is anything but a mathematical “implication connective”. Programming languages are not mathematics.

Programming languages are not mathematics

Computer science courses often spend a lot of time teaching us about various aspects of mathematics such as lambda calculus. So we are drawn to believe that programming languages are a sub-branch of mathematics.

And indeed, this is the view that Kalani expresses:

The lambda calculus (and its myriad derivatives) exemplifies this progression at the level of programming languages. In the broadest terms, you have the untyped lambda calculus at the least-defined end (which closely fits languages like Lisp, Scheme, Clojure, Ruby and Python), and the calculus of constructions at the most-defined end (which closely fits languages like Cayenne and Epigram). With the least imposed structure, you can’t solve the halting problem, and with the most imposed structure you (trivially) can solve it. With the language of the untyped lambda calculus, you get a blocky, imprecise image of what your program does, and in the calculus of constructions you get a crisp, precise image.

The statement that I strongly dispute is the last one: in the calculus of constructions, you get a crisp, precise image. The Calculus of Constructions (CoC) is the theoretical basis for tools such as Coq. These tools are intended to assist with computer-generated proofs. So the very thing they talk about is represented crisply by the CoC. But if what I want is to represent the behavior of malloc() (a function whose primary role is its side effects) or how CPUs in a system communicate with one another (physical interactions), then CoC is of no use. It doesn’t give me a crisp, precise image.

In other words, high-level languages with first-class functions, implicit garbage collection or dynamic typing are really cool, but they give me a blurry, imprecise picture of how the computer actually works. The reason is that a computer is not a mathematical object, it’s a physical object. An “integer” in a computer is only an approximation of the mathematical entity with the same name.

Trying to hide the differences only makes the picture more blurry, not more crisp. For example, you can use arbitrary-precision arithmetic instead of integers with a fixed number of bits. And now, your “integers” start consuming memory and have other side effects. In any case, these arbitrary-precision numbers are not “native” to the computer, so they are “blurry”.
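Python’s built-in integers, which are arbitrary-precision, illustrate the cost: the same conceptual “integer” occupies more memory as the value grows, something a native machine word never does.

```python
import sys

# A machine integer has a fixed size; an arbitrary-precision integer
# grows with its value, so "integer" no longer maps to one native word.
small = 1
big = 10 ** 100
print(sys.getsizeof(small), sys.getsizeof(big))
```

On a typical CPython build the second number is noticeably larger than the first.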

Programming languages are languages

With concept programming, I’ve consistently argued that programming languages are, first and foremost, languages. Trends in programming languages are a problem of linguistics, not mathematics. The goal should not be to make the language more precise, but to make it more expressive. Nobody cares if you can solve the halting problem regarding programs in your language, if to achieve that objective you have to give up expressiveness.

Let’s make an analogy with real-world languages. Imagine that we decided that it’s important to make English “precise”. We’d set the goal that any sentence in Precisenglish could be provably true or false. First, you’d end up with a language where it’s impossible to write “This sentence is false”, since it’s impossible to prove this sentence true or false. Now, this particular sentence may not seem indispensable. But what about questions? “What time is it?” wouldn’t be a valid Precisenglish sentence… Isn’t that a hefty price to pay for precision?

The same is true for programming languages. You can impose constraints on the language that make it easier to prove things. And then, simple things like side effects become really complex. In Haskell, the simple task of writing something to the console requires complex constructions such as monads.

Mathematics and programming both derive from languages

It’s interesting to observe that mathematics is also a form of language. It’s a precise, if restricted, language that helps us reason about symbols, build theorems, prove things. So it makes sense that mathematics and programming languages are related: they both are forms of languages. But it’s not because programming languages derive from mathematics. It’s because programming languages and mathematics both derive from languages.

In my opinion, progress in programming languages will happen if we decide to give up mathematics and embrace linguistics, and start focusing on how we translate the concepts in our heads into computer approximations.

A good language by that definition can adapt to mathematics as well as to other domains of knowledge. The ideal programming language should be capable of representing mathematical properties. It should be possible to write a subset of the language that is precise, just like mathematics can be seen as a precise subset of human languages. But it should also be possible to have a subset that is not necessarily as precise because it addresses other domains. There should be a subset that deals with I/Os not using monads or other abstractions, but representing what’s going on in the machine as precisely as possible. There should be a subset that deals with parallelism, or computations on a GPU, or floating-point arithmetic.

And you basically have a definition of what I’m trying to achieve with XL.

Created the XLR-Talk Google Group

Practically since its creation, the XL programming language project had relied on SourceForge mailing lists. Unfortunately, these mailing lists attracted spam like flies, and after a while, they became unreadable.

So today, I created a Google group, XLR-Talk, dedicated to discussing XL-related matters. I hope that this will make discussions much easier than they were before. The Google Groups interface is quite handy.

To view the group or subscribe, visit XLR-Talk on Google Groups.

The XL axioms: reconciling Lisp and C++ hackers

Re-reading Paul Graham, specifically The Hundred-Year Language, helped me understand that I need to write down the axioms of XL. I need to describe, for XL, the equivalent of what lambda-calculus is for Lisp. If there is such a thing.

Lisp inventions

In Revenge of the Nerds, Paul Graham makes a list of the 9 things that Lisp invented:

  1. Conditionals
  2. Function types (i.e. functions as first-class items)
  3. Recursion
  4. Dynamic typing
  5. Garbage collection
  6. Programs composed of expressions (no statements)
  7. A symbol type
  8. A notation for code using trees of symbols and constants (programs as data).
  9. The whole language there all the time (no distinction between load time, compile time and run time).

Of these, many became part of mainstream languages over time. Some remain the exception rather than the norm.

What makes XL unique

Later in that same essay, however, Paul Graham unknowingly explains what, in my opinion, makes XL unique (emphasis mine):

Macros (in the Lisp sense) are still, as far as I know, unique to Lisp. This is partly because in order to have macros you probably have to make your language look as strange as Lisp. It may also be because if you do add that final increment of power, you can no longer claim to have invented a new language, but only a new dialect of Lisp.

What XL demonstrates is precisely that you can have Lisp-style macros (and in fact build a whole language on macros) without the hairy syntax. Here is how you use ‘if then else’ in XL (whether the imperative variant XL1 or the functionalish variant XLR):

    if A < 0 then
        write "A is negative"
    else
        write "A is positive or null"

This code should look really familiar to users of other languages, I would hope. Yet under the hood, it corresponds exactly to a simple parse tree similar to what Paul Graham refers to in his point #6. Here is what that parse tree will look like if you expand it:

% ./xl -parse ifthenelse.xl -style debug -show
(infix else
 (infix then
  (prefix if
   (infix < A 0))
  (block indent
   (prefix write "A is negative")))
 (block indent
  (prefix write "A is positive or null")))

The big difference with Lisp is that Lisp, as Paul Graham points out, has no parser. XL has a simple parser, but a parser nonetheless.

Axiom #1: A simple parse tree improves code readability

This leads to axiom #1 of XL: you can parse text with 8 node types or fewer. The 8 node types currently found in XLR are:

  1. Integer: Numbers like 1234
  2. Real: Numbers like 3.1415
  3. Text: Literals such as "ABC" or 'X'
  4. Names and symbols: Names contain letters and digits, like HelloWorld or X0, symbols contain other characters such as ; or ->
  5. Prefix: Adjacent nodes where the first one defines the operation, such as sin x or -3.
  6. Postfix: Adjacent nodes where the second one defines the operation, such as 3! (factorial of 3) or 24in (24 inches)
  7. Infix: Adjacent nodes separated by a name or symbol which defines the operation, such as 3+4 or X and Y
  8. Block: A node surrounded by separators, such as (24-3) or the second half of the prefix A[3].

This can be reduced to less than 8 relatively easily: the real and integer types could be merged into a “number” type, the prefix and postfix could be merged into a “positional” operator, which could be made more general, for instance allowing “above” and “below” notations.

In practice however, the choice currently implemented in XL is a practical trade-off. For instance, the natural computer representation of integers and real numbers are different, so it makes sense to treat them as separate from the beginning.

Of note, in XL, indentation is represented by a block, and line separators are just another kind of infix.

The benefit of this representation compared to the Lisp representation is that it is only marginally more complex (4 constructed node types instead of one, the list), but allows us to parse things much closer to the way humans do. For instance, XL will parse 2+3*6 the right way.
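For illustration, here is a toy precedence-climbing parser in Python (not XL’s actual parser; the token and node shapes are hypothetical) that builds infix nodes the way a human reads `2+3*6`:

```python
import re

# Toy operator precedence table (XL's real table is user-extensible).
PRECEDENCE = {'+': 10, '-': 10, '*': 20, '/': 20}

def tokenize(text):
    return re.findall(r'\d+|[+\-*/]', text)

def parse(tokens, min_prec=0):
    left = int(tokens.pop(0))  # leaf node: an integer
    while tokens and PRECEDENCE[tokens[0]] >= min_prec:
        op = tokens.pop(0)
        # Higher-precedence operators bind the right-hand side first.
        right = parse(tokens, PRECEDENCE[op] + 1)
        left = ('infix', op, left, right)
    return left

print(parse(tokenize('2+3*6')))  # ('infix', '+', 2, ('infix', '*', 3, 6))
```

Multiplication ends up deeper in the tree than addition, which is exactly what a Lisp-style reader without a parser cannot give you for free.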

Axiom #2: Programs can be evaluated with macros

Despite what Paul Graham says, Lisp really has two “modes” of operation: macros/compile-time and evaluation. The distinction is subtle: Macros (and compilers) evaluate one program to transform another program, whereas evaluation evaluates the program itself.

In the XL1 compiler (the XL imperative language), the distinction is more marked than in Lisp, whereas in XLR (the XL runtime language), it is less clearly visible. Let me illustrate with an example of each.

In XL1, you would define a recursive factorial as follows:

function Factorial(N : integer) return integer written N! is
    if N = 0 then
        return 1
    else
        return N * (N-1)!

That example looks really similar to what you would write in C or Ada or whatever other imperative language you can think of. But the parse tree is there, not too far. One example is the notation written N!. This allows the compiler to recognize that (N-1)! is really a call to Factorial. This is called expression reduction. This is already quite useful, but basically it’s a compile-time only thing.

Under the hood, however, the XL1 compiler uses the exact same mechanism to recognize (N-1)! and to recognize if-then-else. The exact same macro rewrite mechanism is at play. In fact, you can extend the compiler easily with your own “macros”, so that for example you could write d/dx(sin x) and have the compiler rewrite it for you as cos x. This particular differentiation plug-in has been in the compiler for several years now.

In short, the language looks imperative, but it’s implemented as a big set of macros.

By contrast, in XLR, you would define the same function as:

0! -> 1
N! -> N * (N-1)! 

The code is much shorter, because XLR basically has only one built-in operator, which is the macro rewrite operator ->. XLR is little more than a big macro pre-processor. Now, if XLR happens to dynamically generate machine code using LLVM to evaluate this quickly enough, it’s just an optimization, an evaluation strategy.
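The semantics of those two rewrite rules can be mimicked in a few lines of Python (a toy interpreter, nothing like the real XLR machinery): encode `N!` as a tuple and apply the first rule that matches.

```python
def evaluate(expr):
    # expr is a number, or ('!', sub) encoding "sub!".
    if isinstance(expr, tuple) and expr[0] == '!':
        n = evaluate(expr[1])
        if n == 0:                         # rule: 0! -> 1
            return 1
        return n * evaluate(('!', n - 1))  # rule: N! -> N * (N-1)!
    return expr

print(evaluate(('!', 5)))  # 120
```

Evaluation here is nothing more than repeatedly rewriting the tree until no rule applies, which is the point: in XLR, that rewriting is the whole language.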

As a result, in XLR, evaluation and macros are practically the same thing, even more so than in Lisp in my opinion.

Axiom #3: C++ hackers and Lisp hackers are both right

Depending on whether your favorite language is C or Lisp, Ada or Haskell, Java or Python, you may prefer the first or the second approach. It’s really a matter of taste, and after years of healthy competition, the two camps are not any closer to being reconciled.

What XL brings to the table is a single technology that demonstrably can do both.

Axiom #4: It works, almost

Well, today, XL does neither particularly well, mostly because the technology is not finished. It has never stopped making progress in the years since the project began, though. If you want to contribute, go to the XL web site.

Categories: Concept programming, Lisp, XL

XL: Advancing the state of programming languages

This is the 200th post on this blog!

The WGP2010 workshop was a good occasion for me to write a short summary of where we stand with XL. With the creation of Taodyne, I’m spending much more time working on XL and with XL. This may not be immediately visible, though, because XL is only a tool to achieve Taodyne’s objectives, it is not a goal in itself.

The article’s title is “Eliminating Newspeak in Programming Languages”. Here is the abstract:

Programming languages provide us with numerous tools. But like the
fictional language Newspeak in George Orwell’s 1984, they also restrict
what we can say, and in doing so, they shackle our minds. As a result,
historical programming languages all became an economic dead weight as
soon as the hardware, fueled by Moore’s law, passed them by.

XL is a programming language designed to get rid of this Newspeak in
programming. Its primary focus is to help programmers add their own
concepts and their own notations to the language. To validate this
approach, many of the traditional features that XL provides are
constructed, not built in the compiler. The associated methodology is
called “concept programming”. It focuses on the transformation of ideas
into programs.

Writing an article about XL made me realize two and a half things:

  1. XL remains novel and relevant today. Ten years after I first shared code, the problems that XL addresses are still there, and the solutions are still nowhere to be seen in other languages.
  2. On the other hand, the language never caught on. But then, the article, if accepted, will be the first one I ever wrote about XL for academic circles. From that point of view, it’s good to have left HP.
  3. XL is still evolving, and the compilers are still far from being finished… Shame on me.

So I think that XL is really capable of advancing the state of programming languages, but I need to put some additional muscle into explaining what it is. Or as a friend reminded me yesterday: “It goes without saying, but it goes better saying it.”

From concept to code

The first key idea behind XL is that the role of a programming language is to help us transform ideas into code. The idea seems simple enough… until you dig deeper and realize just how hard this is.

What does it mean to you, to transform ideas into code? How well do the existing languages do that for you? Do you sometimes feel impaired in the way to write your code? Are there things that you simply can’t say with your favorite programming language? Do you have funny horror stories about saying one thing and having the compiler understand an entirely different thing?

Why Go isn’t my favourite programming language

In case you are wondering, the title of this post is a nod to Brian Kernighan’s Why Pascal isn’t my favourite programming language. I’m only showing my age here, not trying to compare myself to Kernighan :-)


There has been a lot of buzz recently about the Go programming language. There was an article on Slashdot, videos of Google talks on YouTube, the works. And of course, there is the Go web site itself. The main YouTube talk has been viewed 183,000 times as of this writing, so the roll-out campaign had some success…

Of particular interest are the minds and forces behind the language. The initial contributors are Robert Griesemer, Ken Thompson, and Rob Pike, and the project was developed at Google over a period of two years. With such a pedigree, expectations are high. And that’s probably why I’m a little disappointed with what was presented so far. It’s not that it’s a bad language, it’s more of a so-what language. There’s nothing I find really compelling about it. In particular, I didn’t find anything that would make me significantly revisit my own pet programming language, XL.

The points of view below are largely based on my own experience designing and implementing a programming language. So I would say that they are both educated and biased. Keep this in mind as you read on. Also, I tend to point out the negative aspects because the campaign discussed above did a rather good job at highlighting the strong points of Go.

First impression

My initial impression of the Go programming language, based on the material above, is that it has a few serious flaws:

  • No generics: Go is yet another language where generics will be bolted-on as an afterthought. As a direct consequence, Go includes many constructs (including maps or goroutines) that in my opinion rightfully belong to a library. This is important for programmers: the choices of the Go designers might not match the needs of the Go programmers. Worse yet, if you have some reason to use some kind of fancy container, Go is not the right language to implement it.
  • Not suitable for what I call “Systems”: The Go designers seem to consider that “Systems” equates writing applications such as web browsers. To me, “systems” implies that you can write programs that may talk directly to the hardware (as in operating systems or real-time systems). At first glance, Go seems ill-suited for such tasks. In particular, it does not just provide a concurrency or memory-management model, but actively enforces it at the language level. In other words, if you want to write a real-time system in Go, you first need to implement the runtime support for goroutines or Go garbage collection. I ran into this problem in the early days of HPVM: implementing a C++ runtime on bare metal proved complicated enough that I had to rewrite my C++ binary translator in plain old C.
  • Orwellian Newspeak: The two examples above illustrate how, all too often, Go “knows what is good for you” and won’t let you deviate from the party line. This goes as far as imposing where the braces go for rather bogus syntactic reasons, deciding when you can use upper-case letters, or formatting the code automatically when you submit it!
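To make the generics point concrete, here is a sketch of what a user-written container looks like without type parameters: it has to fall back on `interface{}` and run-time type assertions. The `Stack` type below is my own hypothetical illustration, not code from the Go library.

```go
package main

import "fmt"

// Stack is a hypothetical user-written container. Without generics,
// it can only store interface{} values, losing static type safety.
type Stack struct {
	items []interface{}
}

// Push appends a value of any type.
func (s *Stack) Push(v interface{}) {
	s.items = append(s.items, v)
}

// Pop removes and returns the last value; callers must assert its type.
func (s *Stack) Pop() interface{} {
	last := len(s.items) - 1
	v := s.items[last]
	s.items = s.items[:last]
	return v
}

func main() {
	var s Stack
	s.Push(42)
	n := s.Pop().(int) // run-time assertion; a wrong type panics here
	fmt.Println(n + 1)
}
```

Note how every caller pays twice: once in verbosity (the assertion) and once in safety (a mistaken type is only caught at run time, not at compile time).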

These three aspects of the design mean that I probably won’t use the language seriously any time soon. I can’t do meta-programming well because of the first issue. I can’t do system software because of the second issue. And if I want to explore, I will be constrained in what I can do because of the third one. These issues may not mean as much to the majority of programmers, though. Just because Go is not a language for me doesn’t mean that it won’t work for you. Your mileage may vary.
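The brace complaint is easy to illustrate. Go’s automatic semicolon insertion means an opening brace cannot go on its own line; the following sketch compiles as written, while the commented-out variant with the brace moved down is rejected by the compiler:

```go
package main

import "fmt"

func main() {
	x := 1
	// The opening brace must share the line with "if": automatic
	// semicolon insertion would otherwise terminate the statement
	// after the condition and the program would not compile.
	if x > 0 {
		fmt.Println("positive")
	}
	// if x > 0
	// {   // syntax error: the compiler rejects this placement
	// }
}
```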

In any case, there are a number of aspects to the Go language worth discussing, if only because of the publicity it received.

Stated Objectives

The objectives of the team in creating the language are, according to the slides:

  • New
  • Experimental
  • Concurrent
  • Garbage-collected
  • Systems
  • Language

In my humble opinion, these objectives lack a clear sense of direction and purpose. They include mere facts (“New”), low-level implementation details (“Garbage-collected”), usage models (“Systems”) and actual design objectives (“Concurrent” and maybe “Experimental”). Specifically, “New” might be intended to mean innovative, but while there’s little debate that Go is new, I didn’t find much in it that hadn’t already been hammered out elsewhere. “Experimental” should probably mean that it explores a few radical ideas, but it could also simply reveal that it’s not fully cooked yet. Unfortunately, radical ideas are exactly what I didn’t find.

Concurrency is the most interesting of these objectives. It is not exactly a novel idea, but there is still a lot of progress to be made in how languages support concurrency. Still, it’s hard for me to see what Go adds compared to industrial languages like Erlang. Sure, Rob Pike demonstrated 100,000 goroutines completing in 1s. At first sight, that might seem impressive. But according to this source, someone ran 20 million processes in Erlang as early as 2005. And apparently, run-time performance at the time was similar: (5.3μs + 6.3μs) × 100,000 comes to 1.16 seconds, very comparable to what Rob Pike demonstrated on presumably faster hardware.

The language prominently features garbage collection. Garbage collection for memory today is practically a given. You can’t appeal to Java programmers without it. But collecting garbage is not just about memory objects. In real programs, there are many other forms of garbage to recycle: temporary files, open files, locks, threads, open network connections, … Furthermore, the requirements placed on the garbage collector may vary from application to application. Providing a garbage collector by default is good; providing one that is implemented in the library, that you can tailor to your needs and [gasp] apply to non-memory objects would be so much better… In short, is Go’s garbage collection worth the prominent position that its designers gave to it in the presentations? I don’t think so personally.

Implicit Objectives

It’s even more interesting to analyze the implicit objectives of the language. I could find at least three such objectives that, while not explicitly stated, seemed quite important to the design team.

  • Compilation speed: There were many demos of how fast Go compiles, and it’s featured prominently on the front page of the web site, so this seems to be a rather important objective to the Go design team.
  • Simplicity: Although it is being described as “slightly bigger than C”, Go is clearly intended to remain simple, with a simplified syntax, no parsing ambiguities, …
  • Programmer comfort: Rob Pike stressed how the language is designed to make it easy to write tools, to compose software (there’s a rather nice dependency checking mechanism), to reduce typing, and so on. And of course, not waiting for the compiler is comfortable too.

All these are rather noble objectives. But they also stress the kind of trade-offs that were made in the design of the language. For example, compilation speed does matter, nobody can dispute that. But in exchange for that speed, we pay 10% in execution speed, and more importantly we lose a number of features I consider essential for productivity, like templates. This may not matter much to software consumers, however. It is true that more and more, we simply compile someone else’s code. In that case, compilation speed is practically the only metric that matters.

Tipping the balance the other way, why not push the reasoning to the limit, and ditch compilation entirely? A lot of recent work has been in the field of just-in-time compilers. LLVM has shown that it’s possible to dynamically generate high-quality code in a portable way. The XLR runtime component of XL is now using LLVM, so that you can execute XLR programs (that is, the run-time language) without any explicit compilation. Compilation does happen, but entirely transparently, on the fly, as you execute the program. In that scenario, some heavier compilation remains possible once the program runs, to get better optimizations, faster execution or tighter verification of the code.


In conclusion, Go shows how difficult it is to design an innovative programming language for today’s programming world. Many of the choices made by the design team seem rather old-fashioned to me. Go didn’t rattle my brain the way Haskell, Erlang or Ada did in their time.

But ultimately, the bottom line for me is this: Go seems to be a solution in search of a problem.

Goodbye HP, Hello Taodyne

After almost 16 years working for HP, almost half of which on HPVM, I have decided to do something else. Please welcome my new little company, Taodyne, into the fierce world of software development startups. Taodyne is supported and funded by Incubateur PACA-Est.

For me, this is a pretty major change. So it was not an easy decision to make. Leaving behind the HPVM team, in particular, was tough. Still, the decision was the right one for a couple of reasons.

  • First, I had to face the fact that HP is not trying too hard to retain its employees. To be honest, in my case, they actually offered something at the last minute. But it was way too late. The decision had been made more than a year earlier. There was no going back, as the creation of the new company commits other people whom I could not simply leave behind.
  • However, the primary reason for leaving HP was my desire to invent new stuff, something that I felt was less and less possible at HP. Don’t get me wrong, it’s still possible to “invent” at HP, but you have to do it on demand, it has to fit in the corporate agenda, to belong to the corporate portfolio. Nobody could invent a totally new business like the Inkjet or the calculator at HP nowadays. Well, at least I could not (and I tried).

The tipping point for me was probably when one of my past managers told me that I should feel content with my contributions to HPVM, that I “had been lucky” to have this as the “high point” of my career. That manager probably meant his remark as a compliment. But I personally think sweat had more to do with the success of HPVM than luck, and I’m not sure how to take the “high point” part (“from there, you can only go down”?)

Ultimately, the choice was relatively obvious, if hard to make. Simply put, I didn’t have a dream job at HP anymore, nor any hope that things would improve. I was working long hours (meetings from 5PM to 1AM 5 times a week, anyone?), more or less officially had two jobs for only one pay, only to be told year after year that the only way forward was down… I owed it to my family to look for a way out.

So really, for me, HPVM was not the high point of a career; hopefully it was just one step along the way. Taodyne is the next one. Let’s hope I still have something to offer. When we deliver our first product, I hope that you will agree that I took the right decision. In the meantime, please check out our progress on the XLR web site.

Update: This entry was shortened compared to the original to remove useless material.
