Archive for the ‘Programming techniques’ Category

Should your build systems be up to date?

A recent post on the mailing list for Tao Presentations beta got me thinking: should build systems be up to date?

The poster explains:

Regarding the ubuntu version, was it build with a g++ 4.6 ?
It seems to require a GLIBCXX_3.4.15 which is just above the one I have
on Natty.

I am going to install a g++ 4.6 from source in order to get the right
libstdc++ but I wanted to let you know just in case someone else get the
same issue.

So here is the problem: if we upgrade our build systems to the latest and greatest patch level, we also pull in a number of recent dependencies that our customers may not have yet.

Some development environments provide options to select which build tools you use, allowing you to revert to older build configurations. For example, my MacOSX system shows the following variants of G++:

g++            g++-4.0        g++-apple-4.2  g++2
g++-3.3        g++-4.2        g++-mp-4.4     g++3

Unfortunately, that facility doesn’t automatically extend to all the libraries you depend on. For example, we use Qt to develop cross-platform GUI code, and versions before the still-unreleased 4.8 emit compilation warnings when building on MacOSX Lion. So if I want to build on Lion, my options are either to build in an unsupported configuration (and stop using the -Wall option in our builds), or to build with a not-yet-released library.

So is the best solution to keep your build systems behind, with all the implied security risks? Is there a better overall strategy I didn’t think of?

Branches are evil

You may have heard of the Joel Test to evaluate startups:

  1. Do you use source control?
  2. Can you make a build in one step?
  3. Do you make daily builds?
  4. Do you have a bug database?
  5. Do you fix bugs before writing new code?
  6. Do you have an up-to-date schedule?
  7. Do you have a spec?
  8. Do programmers have quiet working conditions?
  9. Do you use the best tools money can buy?
  10. Do you have testers?
  11. Do new candidates write code during their interview?
  12. Do you do hallway usability testing?

At Taodyne, we pass all these, yet our project took a major blow because there is at least one other key rule that goes above all these. I will call it rule zero, and it impacts how you write code.

  1. Do you go in a single direction?

The code-oriented formulation of rule zero is the following:

  1. Do you avoid branches like the plague?

Do not ever fork your codebase

With tools like Git, Mercurial or Bazaar, branches are much easier to manage than they were in the past. They actually became part of the process. With these distributed source control systems, you keep creating and merging branches for practically everything you do.

So why the title, “Branches are evil” (a favorite of über-geek Dennis Handly)? Here, by “branch” I don’t mean the kind of minor branching where two variants of the codebase differ by a few lines of code. This has to happen as you develop code. The problem is when the fork affects the design itself, or the direction the project is following.

My biggest mistake at Taodyne was ignoring this key rule. It all started with a simple thought experiment about an alternative design, quickly followed by a little bit of experimental code. And I forked our code. But then, others started building useful stuff on top of that branch.

When we decided to drop that alternate design, we had a big problem on our hands: we needed to salvage all the good stuff on that branch, which had been written against a completely different design. We spent several months trying to reconcile two conflicting designs with conflicting objectives. As a result, I spent countless hours fixing bugs that came from merging code with different assumptions.

When a single mistake delays the release of your product by a few months, you know you did something seriously wrong. I thought I’d share the experience so you won’t make the same mistake.

Startup Week-End Nice Sophia-Antipolis : Big Success

I spent the last four days at a rather exciting entrepreneurial event on the French Riviera, which really combined three distinct events under the umbrella of the brand new RivieraCube association:

  • An Open Coffee with the Sophia-Antipolis team. Open Coffee is an informal gathering of (mostly Java) geeks around a coffee (or, more often in our case, a beer, since we do that in the evening). This was so successful that a new Open Coffee group for Nice spontaneously emerged.
  • A BarCamp the next day, with a small (and cramped) Startup Corner where Taodyne presented its flagship product, Tao Presentations. We had some exceptional unconferences from well-known French serial-entrepreneurs, including Kelkoo founder Pierre Chappaz or Kipost founder Pierre-Olivier Carles.
  • A Startup Week-end which gathered about 100 enthusiasts with the intent to create a startup in 54 hours. And some of them actually managed to pull it off, which is pretty amazing when you think about it. But the talent and energy in that room were simply amazing, and reminded me of some of the best moments I had in the Silicon Valley.

Reports on the web

There are already a large number of blogs reporting on this event, but I believe the best indicator of how lively it was is its Twitter hashtag, #swnsa. There was actually a friendly contest with another Startup Week-end held the same day in Lausanne, Switzerland.

And the winner is…

There were a number of exciting projects, but generally little surprise as to who the winners were. The first three projects got a lot of help from local consulting companies, and the leader of the winning team got a free Geek Trip to the Silicon Valley.

The winner was “Mamy Story” (@MamyStory), which I believe surprised no one in the audience. The concept is simple (tell the story of your grandparents), has an interesting innovation (which I won’t disclose here) and a catchy name (“papy” and “mamy” are common French nicknames for grandparents), but more importantly, it appeals to our emotions, something they exploited to great effect during their pitch.

As a matter of fact, they managed to get a member of the jury to tell them they could reach a larger market than what they presented in the plan. Here is another example of why they have a market.

The runners-up were:

  • Dynoo (@dynoo_com), a project to “spread the word” (the French pronunciation of Dynoo sounds like “Dis nous”, or “tell us”, although they sometimes said it the English way, which I think weakens it; they should consider renaming it to deenoo),
  • Qwinti (@qwinti), a web site to save your social activity, who had a really good designer on the team,
  • JustMyTrends (@JustMyTrends), a web site offering a personalized shopping experience for hard-to-find items (the founder has a hard time finding shoes fitting his über-large feet).

And the winner is (redux)…

There was also an iPad2 to win, offered by Everything4Mobile (a very cool web site created by Virgile Cuba, a regular at the Sophia Open Coffee).

The winner was Matthieu Loutre, who was a member of our team. He lives in Montpellier, but he will happily drive on the 25th of March to Nice just to get his new gadget from the friendly team at the Apple Store (and when I say “friendly”, I don’t say that lightly – The user experience in that store is remarkable, doubly so by French standards).

First use of Tao Presentations in a conference

On Friday evening, I joined a project that I won’t talk about, because I believe the project leader needs a bit of time to flesh his idea out, and even more time to turn it into a real product.

That being said, it was an occasion to try our prototype of Tao Presentations in a real, competitive environment. I learned a number of things:

  • It’s a really competitive way to tell a story: you think about the story first, and the way to tell it follows, something that is often harder with other presentation software.
  • The presentation part just works. It didn’t crash once during the two days of rather heavy use, and the worst misbehavior was transient lighting glitches on the screen when using OpenGL lights.
  • One of the challenges was to test whether creating live mock-ups of software to explain an idea was possible. It worked, it was easy, it really added to the presentation, but then we couldn’t really use that part because the question we expected didn’t come up :-)

Some aspects were less positive:

  • Editing slides triggers an elusive bug on a relatively regular basis; I hit it about half a dozen times in two days. The program crashes, which is not a real problem given the way the workflow is organized (I never lost a single bit of work), but it is still annoying.
  • The software doesn’t automatically reload images when they change on disk, which means you sometimes need to restart it just to load a new version of the pictures. To be fixed.

Overall, I got some rather good feedback on the presentation. I showed a talk about XL to half a dozen true geeks and talked about programming techniques.

Young programmers and compilers…

These discussions made me realize something: many talented young programmers don’t even seem to know what a compiler is or how it works. They know about languages like Python, XML and Javascript, and just don’t care much how their code runs on the machine.

I think it’s a good thing overall, but someone still needs to be interested enough in system software. I’m afraid system software programmers are getting old. We need to train the new generation, to get them interested in languages that can run fast.

The good news, then, is that XL got rather positive comments. No “why invent a new language” in this crowd.

Using XL for high-performance computing

There is a little bit of activity on the XLR-Talk mailing list, dedicated to discussions about the XL programming language and runtime. XL is a high-level general purpose programming language I designed and implemented over the years. It exists in two primary flavors, XL2, an imperative statically-typed language, and XLR, a functional flavor. Both share a very similar syntax and basic concepts.

One of the interesting threads is about using XL for high-performance computing. I like it when someone writes:

Thank you for releasing Xl and in particular Xl2, this is a most
interesting and exciting development.  I am a very long-time C++ user
and appreciate the power of generic programming through templates but
the syntax is clunky and I often find myself going off the end of what
is currently possible and end up messing around with the C pre-
processor which is frustrating.  I am hoping that Xl2 will prove to be
an effective alternative to C++ templates and provide the programming
flexibility I crave.

Now, XL2 is not ready for prime-time yet. Its library is significantly lacking. But the core compiler is already quite advanced, and can compile very interesting pieces of code. For instance, XL was as far as I know the first language to introduce variadic templates, for code like this:

generic type ordered where
    A, B : ordered
    Test : boolean := A < B
function Max (X : ordered) return ordered is
    return X
function Max (X : ordered; ...) return ordered is
    result := Max(...)
    if result < X then
        result := X

What happens in this little piece of code is interesting. It introduces two key features of XL: true generic types and type-safe variadics.

True generic types

The ordered type is an example of “true generic type”, meaning that it can be used as a type in function declarations, but it implicitly makes the corresponding function generic (C++ programmers would say “template”). In other words, ordered can represent types such as integer or real, and you can use Max with all these types.

In that specific case, ordered is a validated generic type, meaning that there are conditions on its use. Specifically, ordered only represents types that have a less-than operator, because that operator is necessary to implement Max. Note that a compile-time failure will occur if you attempt to use Max with a type that doesn’t have a less-than operator, even if no less-than operation is used in the instantiation.

Type-safe variadics

The second interesting feature demonstrated on this small example is the use of ... to represent arbitrary lists of arguments. This is used here to implement type-safe variable argument lists. You can for example write Max(1, 3, 5, 6, 9), and the compiler will recursively instantiate all the intermediate Max functions until it can compute the result.

These same features are also used for functions that have lists of argument with differing types, such as WriteLn. The XL2 implementation of WriteLn is found here:

to WriteLn(F : file; ...) is
       any.Write F, ...
       PutNewLineInFile F

The Write function itself is implemented with a similar recursion that ends on functions that write a single argument, e.g. an integer value.

How does it help HPC?

So how do these features help high-performance computing? They allow you to easily write highly generic code, covering a large range of uses, without paying a run-time penalty for it. No objects are constructed. No garbage collector is required to clean up memory allocations: there are no memory allocations. Everything can easily be inlined.

There are other features in XL that also help with HPC.

Where do we go from here?

XL2 is a little bit on hold at the moment because I’m focusing most of my energy on the functional variant, XLR, used by Taodyne in its products.

However, I believe it has reached a stage where other people can contribute relatively easily. For example, it would be useful to implement the various “fundamental” data structures in the library, i.e. go a little bit beyond arrays. If you want to contribute to XL2, nothing would please me more than to give you pointers. Simply join xlr-talk.

Trends in programming languages…

September 29, 2010

A colleague sent me an interview with Kalani Thielen about trends in programming languages.

I’m fascinated by this interview. Kalani is obviously an expert. But what should we think of the following?

The function, or “implication connective” (aka “->”), is an important tool and ought to feature in any modern language.

I have a problem with this kind of jargon. Why? Because if concept programming teaches us anything, it’s that a computer function is anything but a mathematical “implication connective”. Programming languages are not mathematics.

Programming languages are not mathematics

Computer science courses often spend a lot of time teaching us about various aspects of mathematics such as lambda calculus. So we are drawn to believe that programming languages are a sub-branch of mathematics.

And indeed, this is the view that Kalani expresses:

The lambda calculus (and its myriad derivatives) exemplifies this progression at the level of programming languages. In the broadest terms, you have the untyped lambda calculus at the least-defined end (which closely fits languages like Lisp, Scheme, Clojure, Ruby and Python), and the calculus of constructions at the most-defined end (which closely fits languages like Cayenne and Epigram). With the least imposed structure, you can’t solve the halting problem, and with the most imposed structure you (trivially) can solve it. With the language of the untyped lambda calculus, you get a blocky, imprecise image of what your program does, and in the calculus of constructions you get a crisp, precise image.

The statement that I strongly dispute is the last one: in the calculus of constructions, you get a crisp, precise image. The Calculus of Constructions (CoC) is the theoretical basis for tools such as Coq. These tools are intended to assist with computer-generated proofs. So the very thing they talk about is represented crisply by the CoC. But if what I want is to represent the behavior of malloc() (a function whose primary role is its side effects) or how the CPUs in a system communicate with one another (physical interactions), then the CoC is of no use. It doesn’t give me a crisp, precise image.

In other words, high-level languages with first-class functions, implicit garbage collection or dynamic typing are really cool, but they give me a blurry, imprecise picture of how the computer actually works. The reason is that a computer is not a mathematical object, it’s a physical object. An “integer” in a computer is only an approximation of the mathematical entity with the same name.

Trying to hide the differences only makes the picture more blurry, not more crisp. For example, you can use arbitrary-precision arithmetic instead of integers with a fixed number of bits. But now your “integers” start consuming memory and have other side effects. In any case, these arbitrary-precision numbers are not “native” to the computer, so they remain “blurry”.

Programming languages are languages

With concept programming, I’ve consistently argued that programming languages are, first and foremost, languages. Trends in programming languages are a problem of linguistics, not mathematics. The goal should not be to make the language more precise, but to make it more expressive. Nobody cares if you can solve the halting problem regarding programs in your language, if to achieve that objective you have to give up expressiveness.

Let’s make an analogy with real-world languages. Imagine that we decided that it’s important to make English “precise”. We’d set the goal that any sentence in Precisenglish could be provably true or false. First, you’d end up with a language where it’s impossible to write “This sentence is false”, since it’s impossible to prove this sentence true or false. Now, this particular sentence may not seem indispensable. But what about questions? “What time is it?” wouldn’t be a valid Precisenglish sentence… Isn’t that a hefty price to pay for precision?

The same is true for programming languages. You can impose constraints on the language that make it easier to prove things. And then, simple things like side effects become really complex. In Haskell, the simple task of writing something to the console requires complex constructions such as monads.

Mathematics and programming both derive from languages

It’s interesting to observe that mathematics is also a form of language. It’s a precise, if restricted, language that helps us reason about symbols, build theorems, prove things. So it makes sense that mathematics and programming languages are related: they both are forms of languages. But it’s not because programming languages derive from mathematics. It’s because programming languages and mathematics both derive from languages.

In my opinion, progress in programming languages will happen when we decide to give up mathematics and embrace linguistics, and start focusing on how we translate the concepts in our heads into computer approximations.

A good language by that definition can adapt to mathematics as well as to other domains of knowledge. The ideal programming language should be capable of representing mathematical properties. It should be possible to write a subset of the language that is precise, just like mathematics can be seen as a precise subset of human languages. But it should also be possible to have a subset that is not necessarily as precise because it addresses other domains. There should be a subset that deals with I/Os not using monads or other abstractions, but representing what’s going on in the machine as precisely as possible. There should be a subset that deals with parallelism, or computations on a GPU, or floating-point arithmetic.

And you basically have a definition of what I’m trying to achieve with XL.

XL: Advancing the state of programming languages

This is the 200th post on this blog!

The WGP2010 workshop was a good occasion for me to write a short summary of where we stand with XL. With the creation of Taodyne, I’m spending much more time working on XL and with XL. This may not be immediately visible, though, because XL is only a tool to achieve Taodyne’s objectives, not a goal in itself.

The article’s title is “Eliminating Newspeak in Programming Language”. Here is the abstract:

Programming languages provide us with numerous tools. But like the
fictional language Newspeak in George Orwell’s 1984, they also restrict
what we can say, and in doing so, they shackle our minds. As a result,
historical programming language all became an economic dead weight as
soon as the hardware, fueled by Moore’s law, passed them by.

XL is a programming language designed to get rid of this Newspeak in
programming. Its primary focus is to help programmers add their own
concepts and their own notations to the language. To validate this
approach, many of the traditional features that XL provides are
constructed, not built in the compiler. The associated methodology is
called “concept programming”. It focuses on the transformation of ideas
into programs.

Writing an article about XL made me realize two things and a half:

  1. XL remains novel and relevant today. Ten years after I first shared code, the problems that XL addresses are still there, and the solutions are still nowhere to be seen in other languages.
  2. On the other hand, the language never caught on. But then, the article, if accepted, will be the first one I ever wrote about XL for academic circles. From that point of view, it’s good to have left HP.
  3. XL is still evolving, and the compilers are still far from being finished… Shame on me.

So I think that XL is really capable of advancing the state of programming languages, but I need to put in some additional muscle to explain what it is. Or as a friend reminded me yesterday: “It goes without saying, but it goes better saying it.”

From concept to code

The first key idea behind XL is that the role of a programming language is to help us transform ideas into code. The idea seems simple enough… until you dig deeper and realize just how hard this is. This presentation is a good starting point…

What does it mean to you, to transform ideas into code? How well do the existing languages do that for you? Do you sometimes feel impaired in the way to write your code? Are there things that you simply can’t say with your favorite programming language? Do you have funny horror stories about saying one thing and having the compiler understand an entirely different thing?

Eating my own dog food

This post was edited from within Tao, for the first time. This is looking really cool, but I can’t really tell why just yet. It just makes me happy to see that we are making very significant progress.

More interesting general news: code bubbles are an interesting new paradigm for IDEs. It’s a really nice looking user interface, both fancy and apparently usable. Worth taking a look at, IMHO.

Now, I wonder why the Java programmers get all the fun and interesting tools… I have a theory that maybe they need so many tools because programming in Java is more difficult than in most other languages…

The Reinvigorated Programmer

Thanks to Slashdot, I just came across a blog I did not know, The Reinvigorated Programmer. The last few posts I read there were all quite good:

  • The value of BASIC as a first programming language delves into interesting educational aspects of a much-criticized language. To his argument, I’d add that anybody who learned BASIC had to learn how to write interesting programs in 48K at best, and sometimes in as little as 1K (like on the ZX-81).
  • Whatever happened to programming, quickly followed by a redux, discusses how programming has changed into a job where you mostly assemble software components. I’d say it’s about time, but it’s true that assembling components is sometimes less fun than creating your own.

In any case, all the posts I read interested me, despite being illustrated by strange pictures of cold dead fish or cold dead humans. Well, the writing is good, and at least the guy understood that a few pictures here and there help.

Added to the side links.

Why Go isn’t my favourite programming language

In case you are wondering, the title of this post is a nod to Brian Kernighan’s Why Pascal isn’t my favourite programming language. I’m only showing my age here, not trying to compare myself to Kernighan :-)


There has been a lot of buzz recently about the Go programming language. There was an article on Slashdot recently, videos of Google talks on YouTube (slides), the works. And of course, there is the Go web site itself. The YouTube video below has been viewed 183,000 times as of this writing, so the roll-out campaign had some success…

Of particular interest are the minds and forces behind the language. The initial contributors are Robert Griesemer, Ken Thompson, and Rob Pike, and the project was developed at Google over a period of two years. With such a pedigree, expectations are high. And that’s probably why I’m a little disappointed with what has been presented so far. It’s not that it’s a bad language; it’s more of a so-what language. There’s nothing I find really compelling about it. In particular, I didn’t find anything that would make me significantly revisit my own pet programming language, XL.

The points of view below are largely based on my own experience designing and implementing a programming language, so I would say that they are both educated and biased. Keep this in mind as you read on. Also, I tend to point out the negative aspects, because the campaign discussed above did a rather good job of highlighting the strong points of Go.

First impression

My initial impression of the Go programming language, based on the material above, is that it has a few serious flaws:

  • No generics: Go is yet another language where generics will be bolted on as an afterthought. As a direct consequence, Go includes many constructs (including maps and goroutines) that in my opinion rightfully belong in a library. This matters for programmers: the choices of the Go designers might not match the needs of Go programmers. Worse yet, if you have some reason to use some kind of fancy container, Go is not the right language to implement it.
  • Not suitable for what I call “systems”: The Go designers seem to consider that “systems” equates to writing applications such as web browsers. To me, “systems” implies that you can write programs that talk directly to the hardware (as in operating systems or real-time systems). At first glance, Go seems ill-suited for such tasks. In particular, it does not just provide a concurrency and memory management model, it actively enforces them at the language level. In other words, if you want to write a real-time system in Go, you first need to implement the runtime support for goroutines and Go garbage collection. I ran into this problem in the early days of HPVM: implementing a C++ runtime on bare metal proved complicated enough that I had to rewrite my C++ binary translator in plain old C.
  • Orwellian Newspeak: The two examples above illustrate how, all too often, Go “knows what is good for you” and won’t let you deviate from the party line. This goes as far as imposing where the braces go for rather bogus syntactic reasons, deciding when you can use upper-case letters, or formatting the code automatically when you submit it!

These three aspects of the design mean that I probably won’t use the language seriously any time soon. I can’t do meta-programming well because of the first issue. I can’t do system software because of the second issue. And if I want to explore, I will be constrained in what I can do because of the third one. These issues may not mean as much to the majority of programmers, though. Just because Go is not a language for me doesn’t mean that it won’t work for you. Your mileage may vary.

In any case, there are a number of aspects to the Go language worth discussing, if only because of the publicity it received.

Stated Objectives

The objectives of the team in creating the language are, according to the slides:

  • New
  • Experimental
  • Concurrent
  • Garbage-collected
  • Systems
  • Language

In my humble opinion, these objectives lack a clear sense of direction and purpose. They include mere facts (“New”), low-level implementation details (“Garbage-collected”), usage models (“Systems”) and actual design objectives (“Concurrent” and maybe “Experimental”). Specifically, “New” might be intended to mean innovative, but while there’s little debate that Go is new, I didn’t find much in it that hadn’t already been hammered out elsewhere. “Experimental” should probably mean that it explores a few radical ideas, but it could also simply reveal that it’s not fully cooked yet. Unfortunately, radical ideas are exactly what I didn’t find.

Concurrency is the most interesting of these objectives. It is not exactly a novel idea, but there is still a lot of progress to be made in how languages support concurrency. Still, it’s hard for me to see what Go adds compared to industrial languages like Erlang. Sure, Rob Pike demonstrated 100,000 goroutines completing in 1s. At first sight, that might seem impressive. But according to this source, someone ran 20 million processes in Erlang as early as 2005. And apparently, run-time performance at the time was similar: 5.3μs + 6.3μs times 100,000 would be 1.16 seconds, very comparable to what Rob Pike demonstrated on presumably faster hardware.

The language prominently features garbage collection. Garbage collection for memory is practically a given today: you can’t appeal to Java programmers without it. But garbage is not just memory objects. In real programs, there are many other forms of garbage to recycle: temporary files, open files, locks, threads, open network connections, … Furthermore, the requirements placed on the garbage collector may vary from application to application. Providing a garbage collector by default is good; providing one that is implemented in the library, that you can tailor to your needs and [gasp] apply to non-memory objects, would be so much better. In short, is Go’s garbage collection worth the prominent position its designers gave it in the presentations? Personally, I don’t think so.

Implicit Objectives

It’s even more interesting to analyze the implicit objectives of the language. I could find at least three such objectives that, while not explicitly stated, seemed quite important to the design team.

  • Compilation speed: There were many demos of how fast Go compiles, and it’s featured prominently on the front page of the web site, so this seems to be a rather important objective to the Go design team.
  • Simplicity: Although it is being described as “slightly bigger than C”, Go is clearly intended to remain simple, with a simplified syntax, no parsing ambiguities, …
  • Programmer comfort: Rob Pike stressed how the language is designed to make it easy to write tools, to compose software (there’s a rather nice dependency checking mechanism), to reduce typing, and so on. And of course, not waiting for the compiler is comfortable too.

All these are rather noble objectives. But they also stress the kind of trade-offs that were made in the design of the language. For example, compilation speed does matter, nobody can dispute that. But in exchange for that speed, we pay 10% in execution speed, and more importantly we lose a number of features I consider essential for productivity, like templates. This may not matter much to software consumers, however. It is true that more and more, we simply compile someone else’s code. In that case, compilation speed is practically the only metric that matters.

Tipping the balance the other way, why not push the reasoning to the limit, and ditch compilation entirely? A lot of recent work has been in the field of just-in-time compilers. LLVM has shown that it’s possible to dynamically generate high-quality code in a portable way. The XLR runtime component of XL is now using LLVM, so that you can execute XLR programs (that is, the run-time language) without any explicit compilation. Compilation does happen, but entirely transparently, on the fly, as you execute the program. In that scenario, some heavier compilation remains possible once the program runs, to get better optimizations, faster execution or tighter verification of the code.


In conclusion, Go shows how difficult it is to design an innovative programming language for today’s programming world. Many of the choices made by the design team seem rather old-fashioned to me. Go didn’t rattle my brain the way Haskell, Erlang or Ada did in their time.

But ultimately, the bottom line for me is this: Go seems to be a solution in search of a problem.

ACM Sigplan Workshop on Generic Programming

I received a call for papers for the 6th ACM SIGPLAN Workshop on Generic Programming.

Goals of the workshop

Generic programming is about making programs more adaptable by making
them more general. Generic programs often embody non-traditional kinds
of polymorphism; ordinary programs are obtained from them by suitably
instantiating their parameters. In contrast with normal programs, the
parameters of a generic program are often quite rich in structure; for
example they may be other programs, types or type constructors, class
hierarchies, or even programming paradigms.

Generic programming techniques have always been of interest, both to
practitioners and to theoreticians, and, for at least 20 years,
generic programming techniques have been a specific focus of research
in the functional and object-oriented programming communities. Generic
programming has gradually spread to more and more mainstream
languages, and today is widely used in industry. This workshop brings
together leading researchers and practitioners in generic programming
from around the world, and features papers capturing the state of the
art in this important area.

We welcome contributions on all aspects, theoretical as well as
practical, of

  • polytypic programming,
  • programming with dependent types,
  • programming with type classes,
  • programming with (C++) concepts,
  • generic programming,
  • programming with modules,
  • meta-programming,
  • adaptive object-oriented programming,
  • component-based programming,
  • strategic programming,
  • aspect-oriented programming,
  • family polymorphism,
  • object-oriented generic programming,
  • and so on.

More information can be found at the original web site.

