Archive for the ‘Programming techniques’ Category

Taodyne’s best wishes – Source code version

I have released the source code for the “Season’s Greetings” video from Taodyne. It can give you an idea of what XL can do today. In that video, which is rendered in real time and in 3D, the XL program executes 60 times per second to synchronize the animations on the screen with the background movie.

In other XL-related news, I’ve been working on an updated language reference document. It describes where I want to take XL, notably the type system and the library. A lot of what is in this document is not implemented yet, but it gives the directions that I intend to follow in the coming months.

Designing a new programming language

I’ve been working for years on a programming language called XL. XL stands for “extensible language”. The idea is to create a programming language that starts simple and grows with you.

All the work of Taodyne, including our fancy 3D graphics, is based on XL. If you think that Impress.js is cool (and it definitely is), you should check out what Tao Presentations can do. It’s an incredible environment for quickly putting together stunning 3D presentations. And we are talking real 3D here, with or without glasses depending on your display technology. We can even play YouTube videos directly, as the demo below shows.

But the paradox is that the work for Taodyne keeps me so busy I’m no longer spending as much time on the core XL technology as I’d like. Fortunately, vacation periods are an ideal time to relax, and most importantly, think. So I guess it’s no coincidence that Christmas is one of the periods when XL leaps forward. Last year, for example, I implemented most of the type inference I wanted in XL and, as a result, got it to perform close to C on simple examples.

This year was no exception. I finally came up with a type system that I’m happy with, that is simple to explain, and that will address the demands of Tao Presentations (notably to implement easy-to-tweak styles). The real design insight was how to properly unify types, functions and data structures. Yep, in XL, types are “functions” now (or, to be precise, will be very soon; I still need to write that code).

Another interesting idea that came up is how to get rid of a lot of the underlying C++ infrastructure. Basically, symbol management for XLR was until now done with C++ data structures called “Symbols” and “Rewrites”. I found a way to do exactly the same thing with standard XLR data structures, which will allow XLR programs to tweak their own symbol tables if they choose to do so.

An interesting rule of programming: you know you have done something right when it allows you to get rid of a lot of code while improving the capabilities of the software.

Back to coding.

In defense of Euclideon’s Unlimited Details claims

There’s been a bit of a controversy recently regarding Euclideon’s “Unlimited Details” 3D rendering technology:

The video that caused all this stir is here:

This looks pretty good. Yet the video shows a number of limitations, such as repeated patterns at large scale. These limitations may explain why John Carmack, Notch and others say that Euclideon is only implementing well-known algorithms and scamming people for funding.

Is it new? Is it real?

As for me, I wouldn’t call it a scam, and I don’t think the technology is that far out. Euclideon’s technology is, in my opinion, a little more than just sparse voxel octrees (SVOs), though the general principles are most likely similar. That being said, I must agree with Notch that Euclideon is anything but above hyperlatives, something I discussed recently.

To more directly address the criticisms being made by Notch, what they have done looks like a real technical innovation worth funding. For example, they deal with shadows and illumination, something that is hard to do with SVOs. They demonstrated some basic animation. They don’t have to tell all of us the gory technical details and limitations, as long as they tell their investors. And from the interview, it seems that they had people do some due diligence.

Regarding Carmack’s tweet, it does not dismiss the technology. It’s public knowledge that Carmack is himself working on similar stuff. He said this:

In our current game title we are looking at shipping on two DVDs, and we are generating hundreds of gigs of data in our development before we work on compressing it down.  It’s interesting that if you look at representing this data in this particular sparse voxel octree format it winds up even being a more efficient way to store the 2D data as well as the 3D geometry data, because you don’t have packing and bordering issues.  So we have incredibly high numbers; billions of triangles of data that you store in a very efficient manner.  Now what is different about this versus a conventional ray tracing architecture is that it is a specialized data structure that you can ray trace into quite efficiently and that data structure brings you some significant benefits that you wouldn’t get from a triangular structure.  It would be 50 or 100 times more data if you stored it out in a triangular mesh, which you couldn’t actually do in practice.

Carmack and Notch have high stakes in the game. To them, a technology like Euclideon’s is threatening, even if mildly. Dismissing something new because we are experts in the old ways is easy to do. Let’s not fall into that habit.

3D rendering before GPUs

What legitimacy do I have to give my opinion on this topic, you may ask? I wrote a game called Alpha Waves a couple of years before John Carmack’s contender for the title of “the first 3D PC game ever!”. That game is in the Guinness Book as “The first 3D platform game” (not the first 3D game, since there were several much simpler 3D games before).

More importantly, Alpha Waves was, as far as I know, the first 3D game offering interaction with all 3D objects in the environment: you could touch them, bounce off them, etc. Also new for the time, the game was 6-axis (all rotations were possible), fullscreen (most games back then showed 3D in a quarter screen or half screen), and showed 20 to 50 3D objects simultaneously (the standard was 1 to 5). Games like Carmack’s Hovertank 3D had none of these capabilities. This blog post explains how this was achieved.

In short, I have been thinking about 3D rendering for a long time. I have written 3D rendering engines in assembly language at a time when there were no graphics cards and no floating point on the CPU. Back then, drawing a line or a circle was something you had to code by yourself. Not having GPUs or 3D APIs whatsoever meant that you attacked the problem from all angles, not just as “a polygon mesh with texture mapping”.

(As an aside, I’m still doing 3D these days, but even though we are pushing the boundaries of what today’s graphics cards can do (e.g. they are not optimized for rendering text in 3D on auto-stereoscopic displays), that’s much less relevant to the discussion.)

Exploring 3D rendering algorithms

In the late 1980s, when many modern 3D rendering algorithms had yet to be invented, I kept exploring ways to render more realistic 3D, as did many programmers at the time. I tried a number of ideas, both on paper and as prototypes. I tried software texture mapping (a la Wolfenstein), which looked positively ugly with only 16 simultaneous colors on screen. I reinvented Bresenham’s line drawing algorithm, generalized it to 2D shapes like circles or ellipses, then tried to generalize it further to 3D planar surfaces and shapes, or to 3D rays of light.

This was how you proceeded back then. There was no Internet accessible to normal folks, no on-line catalog of standard algorithms or code. We had books, magazines, person-to-person chat, computer clubs. As a result, we spent more time inventing from scratch. I tried a number of outlandish algorithms that I have never read anything about, which is a good thing because they didn’t work at all.

More to the point, I also designed something that, on paper, looked a bit like what Unlimited Details is supposed to be today. At the time, it went nowhere because it exceeded the available memory capacity by at least three orders of magnitude. Of course, we are talking about a time when having 512K of RAM in your machine was pretty exciting, so three orders of magnitude is only 512M, not a big deal today. It is because of this old idea that I believe Euclideon’s claims have some merit.

What do we know about Unlimited Details?

Here are some facts you can gather about the Unlimited Details technology. I will discuss below why I believe these observations teach us something about it.

  • The base resolution is 64 atoms per cubic millimeter. Like Notch, I thought: that’s 4×4×4. Please note that I write it’s the resolution, not the storage. I agree with Notch that they can’t be storing data about individual atoms, nor do they claim they do. It’s pretty obvious we see the same 3D objects again and again in their “world”.
  • They claim that their island is 1km x 1km. That means 4 million units along each direction, which is well within what a 32-bit integer coordinate can hold. So it’s not outlandish.
  • They claim repeatedly that they compute per pixel on screen. This is unlike typical 3D game rendering today, which computes per polygon, then renders polygons on screen. There are many per-pixel technologies, though: SVOs are one, but raytracing and others also work primarily per screen pixel.
  • There are a lot of repetitive patterns in their video, particularly at large scale. For example, the land-water boundaries are very blockish. But I think this is not a limitation of the technology, just a result of them building the island quickly out of a logo picture. Below is an example that shows no such effect:

  • Notch points out that objects tend to all have the same orientation, which he interprets as evidence that they use SVOs, where rotations are hard to do.
  • On the contrary, close-ups of the ground or the picture above do not show such obvious pattern repetitions, and objects seem to have various orientations and, even more telling, to “intersect” or “touch” without obvious “cubic” boundaries. My impression is that, at a high level, they organized objects in cubic blocks to create the demo, but that we shouldn’t infer too much from that.
  • Euclideon shows dynamic reflections on water surfaces. However, what we see are only perfectly planar water surfaces (no turbulence, …).
  • They have some form of dynamic shadow computation, but it’s still in progress. They talk about it, and it’s also quite visible around 1:30 in the video above, or even better, in the picture below from their web site:

  • They do software rendering at 1024×768 at around 20fps, “unoptimized” and single-threaded, on a modern i7 CPU. They claim that they can improve this by a factor of 4.
  • When they zoom out of the island, the colors fade. I think that this is a rather telling clue about their renderer. I didn’t see any transparency or fog, however.
  • On a similar note, there is no visible “flicker” of individual pixels as they zoom in or out, as you’d expect if they really picked up an individual atom. Since nearby atoms differ in color, either their algorithm is very stable with respect to the atom it picks (i.e. it is not solely based on the direction of an incoming ray), or they average things out with neighboring atoms.
  • Last, and probably the most important point: they know quite well what others are doing, they are quoting them, and they are pointing out their difference.

Voxels and Sparse Voxel Octrees?

There are a number of people working with voxels and sparse voxel octrees. You can search for videos on YouTube. They all share a number of common traits:

  • They generally look blocky. The Euclideon videos look blocky at large scale, but the blocks themselves remain highly detailed. You never see uniform squares or cubes appear as in most SVO demos, e.g. Voxelstein 3D below. This may be due to the 4 atoms-per-mm resolution. But see the related memory problem below.

  • It’s difficult to do nice-looking shadows and lighting with SVOs. Look on the web to see what I mean. The counter-examples I saw used some form of raytracing, non-real-time rendering, or were GPU demos. The shadows and lighting in Euclideon’s demos look quite smooth.

But there’s one big issue with SVOs, which is traversal time (see this article for a good background). Consider the following numbers: Euclideon claims to draw 1024×768 pixels, 20 times per second. That’s roughly 16M pixels per second. On a 2GHz CPU, that leaves on the order of 100 cycles per pixel to find the object and compute its color. In the best case, that’s roughly 500 instructions per pixel, but don’t forget that an ILP level of 5 is quite hard to sustain. A single cache miss can easily cost you 100 cycles.
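
To make that cycle budget concrete, here is the same back-of-the-envelope arithmetic as a tiny C++ program (the 2GHz clock and the ILP of 5 are the assumptions from the paragraph above):

#include <cstdio>

int main() {
    const double pixels  = 1024.0 * 768.0; // claimed resolution
    const double fps     = 20.0;           // claimed frame rate
    const double cpu_hz  = 2e9;            // assumed 2GHz CPU
    const double ilp     = 5.0;            // optimistic instructions per cycle

    double pixels_per_second = pixels * fps;               // ~15.7M pixels/s
    double cycles_per_pixel  = cpu_hz / pixels_per_second; // ~127 cycles
    double insns_per_pixel   = cycles_per_pixel * ilp;     // ~500-600 at best

    printf("%.1fM pixels/s, %.0f cycles/pixel, %.0f instructions/pixel at best\n",
           pixels_per_second / 1e6, cycles_per_pixel, insns_per_pixel);
    return 0;
}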

Memory considerations

And that’s really the big question I have with Euclideon’s claims. How do they minimize cache misses on a data set that is necessarily quite big? Consider the minimum amount of data you need for a sphere of radius r: the area of the sphere is 4πr².

Even if we have atom data only for the surface, a sphere of radius 1000 units would use roughly 12 million atom storage elements. It’s not a big sphere either: a unit being reportedly 0.25mm, that would be a sphere with a 25cm radius, 10 inches for imperialists. An object the size of the elephant (1m) takes roughly 200M atoms, and the big statues must be around 500M atoms. This explains why “memory compression” is something that Euclideon repeatedly talks about in their video. My guess here is that there aren’t that many kinds of atoms in the elephant, for example. I’m tempted to think they have “elements” like in the physics world.
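
A quick sanity check on these figures, as a hedged C++ snippet (assuming 0.25mm units, and loosely treating an object’s “size” as its radius):

#include <cstdio>

// Surface atoms for a sphere of radius r expressed in 0.25mm units:
// area = 4*pi*r^2, one atom per unit of surface.
static double surface_atoms(double radius_units) {
    const double pi = 3.14159265358979;
    return 4.0 * pi * radius_units * radius_units;
}

int main() {
    printf("25cm sphere (r = 1000 units): %.0fM atoms\n", surface_atoms(1000) / 1e6); // ~13M
    printf("1m elephant (r = 4000 units): %.0fM atoms\n", surface_atoms(4000) / 1e6); // ~201M
    return 0;
}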

But memory compression doesn’t solve cache latency issues. That, combined with the lack of spatial or temporal flicker, tells me that they found a really smart way for their “atom search” to be stable and to preserve good memory locality. My guess is that they do this based on a “radius” of the incoming ray. If the camera is far from the object being looked at, you only pick “large” atoms considered representative of the whole. Identifying a good candidate “average” atom is something you can do at object encoding time.
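
If that guess is correct, the “atom size” selection could look like the sketch below: stop descending the octree as soon as a node’s projected size drops below one pixel. This is purely my speculation, not anything Euclideon published, and all the names and parameters are made up for illustration:

// Speculative sketch: choose the octree level at which to stop,
// based on viewing distance. Level 0 is a 0.25mm atom, and node
// size doubles at each level up.
int stop_level(double distance_mm, double pixels_per_radian) {
    double node_size = 0.25;                            // mm at level 0
    double one_pixel = distance_mm / pixels_per_radian; // size that projects to one pixel
    int level = 0;
    while (node_size < one_pixel) {
        node_size *= 2; // parent nodes are twice as large
        level++;
    }
    return level;
}

Because the level depends only on distance, neighboring frames pick the same node for the same surface, which would explain the absence of flicker.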

Number of instructions per pixel

Assume Euclideon solved memory latency so that, on average, they get, say, a good 200 instructions per pixel. We start with a resolution of 4 million units in the X, Y and Z directions, and we need to find the target voxel. So we start with (roughly) 22 bits along each axis. My guess is they may have fewer along the Z axis, but that doesn’t make a big difference. With an octree, they need roughly 22 steps through the octree to get down to the individual “atom”. That leaves 200/22, or roughly 10 machine instructions per octree traversal step. That’s really not a lot.
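
To see why 10 instructions per step is tight, here is a minimal sketch of a single descent step, using a hypothetical node layout of my own (not Euclideon’s):

#include <cstdint>

// One octree descent step: select the octant from one bit of each
// coordinate. The shifts, masks, ORs, the load and the branch already
// consume most of a 10-instruction budget.
struct Node {
    Node* child[8]; // null where the octant is empty
};

inline Node* descend(Node* n, uint32_t x, uint32_t y, uint32_t z, int level) {
    unsigned octant = ((x >> level) & 1u)
                    | (((y >> level) & 1u) << 1)
                    | (((z >> level) & 1u) << 2);
    return n->child[octant];
}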

Of course, in general, you don’t need to get down to the individual atoms. The whole viability of the octree scheme is largely based on the fact that you can stop at a much coarser granularity. In reality, the size of the atoms you need to find is a function of the screen size, not the individual atom resolution. We are therefore looking at more like 10 iterations instead of 22. But still, that won’t leave us with more than about 20 instructions per step.

Ah, but wait: an SVO algorithm also requires a small amount of ray tracing, i.e. for each pixel, you need to check along the way whether a ray starting at that pixel hits something, and repeat along the path of the ray until it does. Ray displacement computations also take time. The amount of time depends on the particular algorithm, but to my knowledge, the best scenario costs roughly the log of the number of possible hits along the way (i.e. you do another binary search). So if we keep the same 22 binary searches along the way, that leaves us with less than one instruction per step on average.

Clearly, these numbers explain why there is some skepticism about Euclideon’s claims. The CPUs of today are just short of being able to do this, which is why the SVO demos we see require smaller worlds and lower resolutions. All this reduces the number of bits significantly, making it much easier to achieve correct performance. But this analysis shows why a simple, standard SVO algorithm is unlikely to be what Euclideon uses.

Just short = Optimizations are useful

In general, I tend to stick with the First Rule of Optimization: Don’t. But being “just short” is exactly the time when tiny optimizations can make the whole difference between “it works” and “it doesn’t”.

At the time of Alpha Waves, we were “just short” of being able to do 3D on a PC, because the mathematics of 3D required multiplications, which cost tens of cycles on the machines of the time. The tiny optimization was to replace 9 multiplications costing 14 to 54 cycles each with three additions costing 4 cycles each (on the 68K CPU of my Atari ST). I often needed multiple additions, but even so, it was less expensive. And that single optimization made the whole scheme viable: it made it possible for Alpha Waves to render 30 objects on screen when other programs of the time slowed down with 3.
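
For the curious, one classic way to rotate with additions alone is the small-angle trick (sometimes called the Minsky circle algorithm): for an angle e small enough that sin(e) ≈ 2^-k and cos(e) ≈ 1, a rotation step reduces to shifts and adds. I give it below as a sketch of the general idea, not as the exact code used in Alpha Waves:

// Incremental rotation by a small angle e with sin(e) ~ 2^-k:
//   x' = x - y * sin(e)
//   y' = y + x' * sin(e)
// Using x' (not x) in the second line keeps the orbit from spiraling
// out. Assumes arithmetic right shift, true on common platforms.
inline void rotate_step(int& x, int& y, int k) {
    x -= y >> k;
    y += x >> k;
}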

So what kind of optimizations can we think about in the present case? Here are some semi-random ideas:

  • Assuming 21 bits instead of 22, or making the Z axis smaller (e.g. you could have 24+24+16, where the Z axis would be “only” 16 meters deep), you can make all three coordinates fit in a 64-bit value. So basically, each “atom” in the world is uniquely addressed by a 64-bit “address”. You can then rephrase the problem as “map a 64-bit value to a sparse data structure”.
  • You don’t need the order of bits in the “address” to be fixed. There are plenty of multimedia instructions for reshuffling bits these days. Among other things, you can interleave X:Y:Z triples at whatever granularity you like: 20 bits, 2 bits or 1 bit at a time. For example, if X, Y and Z are interleaved bit by bit, each sequence of 3 bits directly serves as the index that resolves one step of the octree traversal (see the sketch after this list).
  • If there’s no transparency, you can draw a ray from the camera to the atom and know that you will stop exactly once, all atoms before it being transparent. If atoms have variable sizes, you can have very large transparent atoms, so that advancing the ray across transparent regions is pretty fast. You can practically get back a factor of 20 in the instructions per iteration.
  • Computing the ray displacement using a variation on Bresenham’s algorithm, you can basically do the “hard” computation once for 8 rays using symmetries, keeping only the “simple” computations in the main loop.
  • At the first level, you can use memory mapping to let the hardware do the work for you. You can’t do this on the full 64 bits, mind you, since there are various per-OS limits on the largest range that you can map. But taking advantage of the TLB means a single instruction will do a 4-level table lookup with hardware caching. I’m pretty sure Euclideon people don’t do that (I’m suggesting it only because I did operating systems design and implementation for a little too long), but it would be cool if they did.
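
As promised in the list above, here is a sketch of the bit-interleaving idea: pack X, Y and Z into a single 64-bit Morton key, so that each group of 3 bits is directly the octant index for one octree level. This is illustrative only; a real implementation would use lookup tables or bit-manipulation instructions rather than a bit-by-bit loop:

#include <cstdint>

// Interleave three 21-bit coordinates into one 63-bit Morton key.
uint64_t morton3d(uint32_t x, uint32_t y, uint32_t z) {
    uint64_t key = 0;
    for (int i = 0; i < 21; i++) {
        key |= ((uint64_t)((x >> i) & 1) << (3 * i))
             | ((uint64_t)((y >> i) & 1) << (3 * i + 1))
             | ((uint64_t)((z >> i) & 1) << (3 * i + 2));
    }
    return key;
}

// The octree descent step then shrinks to a shift and a mask:
//     unsigned octant = (key >> (3 * level)) & 7;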

There are obviously many more tricks required to achieve the objective. But I did a quick back-of-the-envelope calculation of the average cost using some of the tricks above and a couple more, and I ended up with cycles to spare. So I believe it can be done on today’s hardware.

Conclusion

John Carmack is spot on when he says that productization is the real issue. Prototypes are easy, products are hard. But I wouldn’t rule out seeing a game using that technology on current generation systems.

Just my two bits.

Should your build systems be up to date?

A recent post on the mailing list for Tao Presentations beta got me thinking: should build systems be up to date?

The poster explains:

Regarding the ubuntu version, was it build with a g++ 4.6 ?
It seems to require a GLIBCXX_3.4.15 which is just above the one I have
on Natty.

I am going to install a g++ 4.6 from source in order to get the right
libstdc++ but I wanted to let you know just in case someone else get the
same issue.

So here is the problem: if we upgrade our build systems to the latest and greatest patch level, we also pick up a number of recent dependencies that our customers may not have yet.

Some development environments provide options to select which build tools you use, allowing you to revert to older build configurations. For example, my MacOSX system shows the following variants of G++:

g++            g++-4.0        g++-apple-4.2  g++2
g++-3.3        g++-4.2        g++-mp-4.4     g++3

Unfortunately, that facility doesn’t automatically extend to all the libraries you depend on. For example, we use Qt to develop cross-platform GUI code, and versions before the still-unreleased 4.8 emit compilation warnings when compiling on MacOSX Lion. So if I want to build on Lion, my options are either to build in an unsupported configuration (and stop using the -Wall option in our builds), or to build with a not-yet-released library.

So is the best solution to keep your build systems behind, with all the implied security risks? Is there a better overall strategy I didn’t think of?

Branches are evil

You may have heard of the Joel Test to evaluate startups:

  1. Do you use source control?
  2. Can you make a build in one step?
  3. Do you make daily builds?
  4. Do you have a bug database?
  5. Do you fix bugs before writing new code?
  6. Do you have an up-to-date schedule?
  7. Do you have a spec?
  8. Do programmers have quiet working conditions?
  9. Do you use the best tools money can buy?
  10. Do you have testers?
  11. Do new candidates write code during their interview?
  12. Do you do hallway usability testing?

At Taodyne, we pass all these tests, yet our project took a major blow because there is at least one key rule that goes above them all. I will call it rule zero, and it impacts how you write code.

  0. Do you go in a single direction?

The code-oriented formulation of rule zero is the following:

  0. Do you avoid branches like the plague?

Do not ever fork your codebase

With tools like Git, Mercurial or Bazaar, branches are much easier to manage than they were in the past. They actually became part of the process. With these distributed source control systems, you keep creating and merging branches for practically everything you do.

So why the title, “Branches are evil” (a favorite of über-geek Dennis Handly)? Here, by “branch” I don’t mean the kind of minor branching where two variants of the codebase differ by a few lines of code; that has to happen as you develop code. The problem is when the fork affects the design itself, or the direction the project is following.

My biggest mistake at Taodyne was ignoring this key rule. It all started with a simple thought experiment about an alternative design, quickly followed by a little bit of experimental code. And I forked our code. But then, others started building useful stuff on top of that branch.

When we decided to drop that alternate design, we had a big problem on our hands. We needed to salvage all the good stuff on that branch, which had been written against a completely different design. We spent several months trying to reconcile these two conflicting designs with conflicting objectives. As a result, I spent countless hours fixing bugs that resulted from merging code with different assumptions.

When a single mistake delays the release of your product by a few months, you know you did something solidly wrong. I thought I’d share the experience so you won’t make the same mistake.

Startup Week-End Nice Sophia-Antipolis: Big Success

I spent the last four days at a rather exciting entrepreneurial event on the French Riviera, which really combined three distinct events under the umbrella of the brand new RivieraCube association:

  • An Open Coffee with the Sophia-Antipolis team. Open Coffee is an informal gathering of (mostly Java) geeks around a coffee (or, more often in our case, a beer, since we do that in the evening). This was so successful that a new Open Coffee group for Nice spontaneously emerged.
  • A BarCamp the next day, with a small (and cramped) Startup Corner where Taodyne presented its flagship product, Tao Presentations. We had some exceptional unconferences from well-known French serial entrepreneurs, including Kelkoo founder Pierre Chappaz and Kipost founder Pierre-Olivier Carles.
  • A Startup Week-end which gathered about 100 enthusiasts with the intent to create a startup in 54 hours. And some of them actually managed to pull it off, which is pretty impressive when you think about it. The talent and energy in that room were simply amazing, and reminded me of some of the best moments I had in Silicon Valley.

Reports on the web

There are already a large number of blogs reporting on this event, but I believe the best indicator of how lively it was is its twitter hashtag, #swnsa. There was actually a friendly contest with another Startup Week-end held the same day in Lausanne, Switzerland:

And the winner is…

There were a number of exciting projects, but there was generally little surprise as to who the winners were. The first three projects get a lot of help from local consulting companies, and the leader of the winning team gets a free Geek Trip to Silicon Valley.

The winner was “Mamy Story” (@MamyStory), which I believe surprised no one in the audience. The concept is simple (tell the story of your grandparents), has an interesting innovation (which I won’t disclose here), a catchy name (“papy” or “mamy” in French is a common nickname for grandparents), but more importantly, it appeals to our emotions, something they exploited heavily during their pitch.

As a matter of fact, they managed to get a member of the jury to tell them they could reach a larger market than what they presented in the plan. Here is another example of why they have a market.

The runners-up were:

  • Dynoo (@dynoo_com), a project to “spread the word” (the French pronunciation of Dynoo sounds like “Dis nous”, or “tell us”, although they sometimes said it the English way, which I think weakens it. They should consider renaming it to deenoo),
  • Qwinti (@qwinti), a web site to save your social activity, whose team had a really good designer,
  • JustMyTrends (@JustMyTrends), a web site offering a personalized shopping experience for hard-to-find items (the founder has a hard time finding shoes fitting his über-large feet).

And the winner is (redux)…

There was also an iPad 2 to be won, offered by Everything4Mobile (a very cool web site created by Virgile Cuba, a regular at the Sophia Open Coffee).

The winner was Matthieu Loutre, who was a member of our team. He lives in Montpellier, but he will happily drive to Nice on the 25th of March just to get his new gadget from the friendly team at the Apple Store (and when I say “friendly”, I don’t say that lightly: the user experience in that store is remarkable, doubly so by French standards).

First use of Tao Presentations in a conference

On Friday evening, I joined a project that I won’t talk about, because I believe the project leader needs a bit of time to flesh his idea out, and even more time to turn it into a real product.

That being said, it was an occasion to try our prototype of Tao Presentations in a real, competitive environment. I learned a number of things:

  • It’s a really competitive way to tell a story. You think about the story first, and the way to tell it follows, something that is often harder with other presentation software.
  • The presentation part just works. It didn’t crash once during the two days of rather heavy use, and the worst misbehavior was transient lighting glitches on the screen when using OpenGL lights.
  • One of the challenges was to test whether creating live mock-ups of software to explain an idea was possible. It worked, it was easy, it really added to the presentation, but then we couldn’t really use that part because the question we expected didn’t come up :-)

Some aspects were less positive:

  • Editing slides triggers an elusive bug on a fairly regular basis; I hit it about half a dozen times in two days. The program crashes, which is not a real problem given the way the workflow is organized (I never lost a single bit of what I had done), but it is still annoying.
  • The software doesn’t automatically reload images when they change on disk, which means you sometimes need to restart it just to load a new version of the pictures. To be fixed.

Overall, I had some rather good feedback on the presentation. I showed a talk about XL to half a dozen true geeks, and talked about programming techniques.

Young programmers and compilers…

These discussions made me realize something: many talented young programmers don’t even seem to know what a compiler is or how it works. They know about languages like Python, XML or JavaScript, and just don’t care much how things run on the machine.

I think it’s a good thing overall, but someone still needs to be interested enough in system software. I’m afraid system software programmers are getting old. We need to train the new generation, to get them interested in languages that can run fast.

The good news, then, is that XL got rather positive comments. No “why invent a new language” in this crowd.

Using XL for high-performance computing

There is a little bit of activity on the XLR-Talk mailing list, dedicated to discussions about the XL programming language and runtime. XL is a high-level, general-purpose programming language I designed and implemented over the years. It exists in two primary flavors: XL2, an imperative, statically-typed language, and XLR, a functional flavor. Both share a very similar syntax and basic concepts.

One of the interesting threads is about using XL for high-performance computing. I like it when someone writes:

Thank you for releasing Xl and in particular Xl2, this is a most
interesting and exciting development.  I am a very long-time C++ user
and appreciate the power of generic programming through templates but
the syntax is clunky and I often find myself going off the end of what
is currently possible and end up messing around with the C pre-
processor which is frustrating.  I am hoping that Xl2 will prove to be
an effective alternative to C++ templates and provide the programming
flexibility I crave.

Now, XL2 is not ready for prime time yet. Its library is significantly lacking. But the core compiler is already quite advanced and can compile very interesting pieces of code. For instance, XL was, as far as I know, the first language to introduce variadic templates, with code like this:

// A validated generic type: any type with a less-than operator
generic type ordered where
    A, B : ordered
    Test : boolean := A < B

// Base case: the maximum of a single item is that item
function Max (X : ordered) return ordered is
    return X

// General case: compare X with the maximum of the remaining arguments
function Max (X : ordered; ...) return ordered is
    result := Max(...)
    if result < X then
        result := X

What happens in this little piece of code is interesting. It introduces two key features of XL: true generic types and type-safe variadics.

True generic types

The ordered type is an example of a “true generic type”, meaning that it can be used as a type in function declarations, but it implicitly makes the corresponding function generic (C++ programmers would say “template”). In other words, ordered can represent types such as integer or real, and you can use Max with all these types.

In that specific case, ordered is a validated generic type, meaning that there are conditions on its use. Specifically, ordered only represents types that have a less-than operator, because that operator is necessary to implement Max. Note that a compile-time failure will occur if you attempt to use Max with a type that doesn’t have a less-than operator, even if no less-than operation is used in the instantiation.
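
For readers coming from C++, a rough modern analogue of a validated generic type is a C++20 concept. I give it here purely for comparison (XL2’s design predates concepts by many years):

#include <concepts>

// 'ordered' admits any type with a less-than operator, like the XL
// declaration above.
template <typename T>
concept ordered = requires(T a, T b) {
    { a < b } -> std::convertible_to<bool>;
};

// Using 'ordered' in a signature makes the function generic, and
// calling Max with a type lacking '<' fails at compile time.
template <ordered T>
T Max(T x) { return x; }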

Type-safe variadics

The second interesting feature demonstrated in this small example is the use of ... to represent arbitrary lists of arguments. This is used here to implement type-safe variable argument lists. You can for example write Max(1, 3, 5, 6, 9), and the compiler will recursively instantiate all the intermediate Max functions until it can compute the result.
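
To make the instantiation pattern explicit, here is the same recursion spelled out with C++ variadic templates (which, as noted above, appeared only later, in C++11); the base case is repeated so the snippet stands alone:

// Base case: the maximum of one argument is that argument.
template <typename T>
T Max(T x) { return x; }

// Recursive case: peel off one argument, recurse on the rest.
template <typename T, typename... Rest>
T Max(T x, Rest... rest) {
    T result = Max(rest...);
    return result < x ? x : result;
}

// Max(1, 3, 5, 6, 9) instantiates Max for 5, 4, 3, 2 and 1 arguments,
// and everything inlines down to plain comparisons.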

These same features are also used for functions that take lists of arguments with differing types, such as WriteLn. The XL2 implementation of WriteLn is found here:

to WriteLn(F : file; ...) is
    any.Write F, ...
    PutNewLineInFile F

The Write function itself is implemented with a similar recursion that ends on functions that write a single argument, e.g. an integer value.

How does this help HPC?

So how do these features help high-performance computing? They allow you to easily write highly generic code, covering a large range of uses, without paying a run-time penalty for it. No objects are constructed. No garbage collector is required to clean up memory allocations: there are no memory allocations. Everything can easily be inlined.

There are other features in XL that also help with HPC.

Where do we go from here?

XL2 is a little bit on hold at the moment because I’m focusing most of my energy on the functional variant, XLR, which Taodyne uses in its products.

However, I believe that XL2 has reached a stage where other people can contribute relatively easily. For example, it would be useful to implement the various “fundamental” data structures in the library, i.e. go a little bit beyond arrays. If you want to contribute to XL2, nothing would please me more than to give you pointers. Simply join xlr-talk.
