I’ve been working for years on a programming language called XL. XL stands for “extensible language”. The idea is to create a programming language that starts simple and grows with you.
All the work of Taodyne, including our fancy 3D graphics, is based on XL. If you think that Impress.js is cool (and it definitely is), you should check out what Tao Presentations can do. It's an incredible environment for quickly putting together stunning 3D presentations. And we are talking real 3D here, with or without glasses depending on your display technology. We can even play YouTube videos directly, as the demo below shows.
But the paradox is that the work for Taodyne keeps me so busy that I'm no longer spending as much time on the core XL technology as I'd like. Fortunately, vacation periods are an ideal time to relax and, most importantly, think. So I guess it's no coincidence that Christmas is one of the periods when XL leaps forward. Last year, for example, I implemented most of the type inference I wanted in XL and, as a result, got it to perform close to C on simple examples.
This year was no exception. I finally came up with a type system that I'm happy with, that is simple to explain, and that will address the demands of Tao Presentations (notably to implement easy-to-tweak styles). The real design insight was how to properly unify types, functions and data structures. Yep, in XL, types are "functions" now (or, to be precise, they will be very soon; I need to code that stuff now).
Another interesting idea that came up is how to get rid of a lot of the underlying C++ infrastructure. Basically, symbol management for XLR has so far been done with C++ data structures called "Symbols" and "Rewrites". I found a way to do exactly the same thing with standard XLR data structures only, which will allow XLR programs to tweak their own symbol tables if they choose to do so.
An interesting rule of programming: you know you have done something right when it allows you to get rid of a lot of code while improving the capabilities of the software.
Back to coding.
- The Word of Notch, the blog of Markus Persson, the author of Minecraft, featured a post titled "It's a scam!", suggesting essentially that Euclideon's technology is just a sparse voxel octree, and that the company is over-hyping well-known techniques to get some funding.
- That post was quickly followed by another one called, well, "But Notch, it's NOT a scam!", where the author says that Euclideon's marketing department is lying by not talking about the well-known drawbacks of sparse voxel octree technologies, and by inventing new terminology like "search engine" for "octree traversal".
- Even John Carmack chimed in with a Twitter post stating: "no chance of a game on current gen systems, but maybe several years from now. Production issues will be challenging".
- Many others are claiming that it’s a hoax.
This looks pretty good. Yet the video shows a number of limitations, like repeated patterns at high scale. These limitations may explain why John Carmack, Notch and others say that Euclideon is only implementing well-known algorithms and scamming people for funding.
Is it new? Is it real?
As for me, I wouldn't call it a scam, and I don't think the technology is that far out. Euclideon's technology is, in my opinion, a little more than just sparse voxel octrees (SVOs), though the general principles are most likely similar. That being said, I must agree with Notch that Euclideon is not above hyperlatives, something I discussed recently.
To more directly address the criticisms being made by Notch, what they have done looks like a real technical innovation worth funding. For example, they deal with shadows and illumination, something that is hard to do with SVOs. They demonstrated some basic animation. They don’t have to tell all of us the gory technical details and limitations, as long as they tell their investors. And from the interview, it seems that they had people do some due diligence.
Regarding Carmack’s tweet, it does not dismiss the technology. It’s public knowledge that Carmack is himself working on similar stuff. He said this:
In our current game title we are looking at shipping on two DVDs, and we are generating hundreds of gigs of data in our development before we work on compressing it down. It’s interesting that if you look at representing this data in this particular sparse voxel octree format it winds up even being a more efficient way to store the 2D data as well as the 3D geometry data, because you don’t have packing and bordering issues. So we have incredibly high numbers; billions of triangles of data that you store in a very efficient manner. Now what is different about this versus a conventional ray tracing architecture is that it is a specialized data structure that you can ray trace into quite efficiently and that data structure brings you some significant benefits that you wouldn’t get from a triangular structure. It would be 50 or 100 times more data if you stored it out in a triangular mesh, which you couldn’t actually do in practice.
Carmack and Notch have high stakes in the game. To them, a technology like Euclideon’s is threatening, even if mildly. Dismissing something new because we are experts in the old ways is so easy to do. But let’s not get used to it.
3D rendering before GPUs
What legitimacy do I have to give my opinion on this topic, you may ask? I wrote a game called Alpha Waves a couple of years before John Carmack's claim to the title of "the first 3D PC game ever!". That game is in the Guinness Book as "The first 3D platform game" (not the first 3D game, since there were several much simpler 3D games before).
More importantly, Alpha Waves was as far as I know the first 3D game offering interaction with all 3D objects in the environment: you could touch them, bounce off them, etc. Also new for the time, the game was 6-axis (all rotations were possible), fullscreen (most games at the time showed 3D in a quarter screen or half screen), and showed 20 to 50 3D objects simultaneously (the standard back then was 1 to 5). Games like Carmack’s Hovertank 3D had none of these capabilities. This blog post explains how this was achieved.
(As an aside, I'm still doing 3D these days, but even if we are pushing the boundaries of what today's graphics cards can do (e.g. they are not optimized for rendering text in 3D on auto-stereoscopic displays), that's much less relevant to the discussion.)
Exploring 3D rendering algorithms
In the late 1980s, when many modern 3D rendering algorithms had yet to be invented, I kept exploring ways to render more realistic 3D, as did many programmers at the time. I tried a number of ideas, both on paper and as prototypes. I tried software texture mapping (a la Wolfenstein), which looked positively ugly with only 16 simultaneous colors on screen. I reinvented Bresenham's line drawing algorithm, then generalized it to 2D shapes like circles or ellipses, and then tried to generalize it further to 3D planar surfaces and shapes, or to 3D rays of light.
This was how you proceeded back then. There was no Internet accessible to normal folks, no on-line catalog of standard algorithms or code. We had books, magazines, person-to-person chat, computer clubs. As a result, we spent more time inventing from scratch. I tried a number of outlandish algorithms that I have never read anything about, which is a good thing because they didn’t work at all.
More to the point, I also designed something that, on paper, looked a bit like what Unlimited Detail is supposed to be today. At the time, it went nowhere because it exceeded the available memory capacity by at least three orders of magnitude. Of course, we are talking about a time when having 512K of RAM in your machine was pretty exciting, so three orders of magnitude is only 512M, not a big deal today. It is because of this old idea that I believe Euclideon's claims have some merit.
What do we know about Unlimited Detail?
Here are some facts you can gather about the Unlimited Detail technology. I will discuss below why I believe these observations teach us something about their technology.
- The base resolution is 64 atoms per cubic millimeter. Like Notch, I thought: that's "4x4x4". Please note that I write it's the resolution, not the storage. I agree with Notch that they can't be storing data about individual atoms, nor do they claim they do. It's pretty obvious we see the same 3D objects again and again in their "world".
- They claim that their island is 1km x 1km. That means 4 million units along each direction, which is well within what a 32-bit integer coordinate can hold. So it’s not outlandish.
- They claim repeatedly that they compute per pixel on screen. This is unlike typical 3D game rendering today, which computes per polygon, then renders polygons on screen. There are many per-pixel technologies, though: SVOs are one, but raytracing and others are also primarily per screen pixel.
- There’s a lot of repetitive patterns in their video, particularly at large scale. For example, the land-water boundaries are very blockish. But I think this is not a limitation of the technology, just a result of them building the island quickly out of a logo picture. Below is an example that shows no such effect:
- Notch points out that objects tend to all have the same direction, which he interprets as demonstrating that they use SVO, where rotations are hard to do.
- On the contrary, close-ups of the ground or the picture above do not show such obvious pattern repetitions, and objects seem to have various orientations and, even more telling, to “intersect” or “touch” without obvious “cubic” boundaries. My impression is that at high-level, they organized objects in cubic blocks to create the demo, but that we shouldn’t infer too much from that.
- Euclideon shows dynamic reflections on water surfaces. However, what we see are only perfectly planar water surfaces (no turbulence, …).
- They have some form of dynamic shadow computation, but it’s still in progress. They talk about it, but it’s also quite visible around 1:30 in the video above, or even better, in the picture below from their web site:
- They do software rendering at 1024×768 at around 20fps “unoptimized”, single-threaded on a modern i7 CPU. They claim that they can improve this by a factor of 4.
- When they zoom out of the island, the colors fade. I think that this is a rather telling clue about their renderer. I didn’t see any transparency or fog, however.
- On a similar note, there is no visible “flicker” of individual pixels as they zoom in or out, as you’d expect if they really picked up an individual atom. The color of nearby atoms being different, either their algorithm is very stable with respect to the atom it picks (i.e. it is not solely based on the direction of an incoming ray), or they average things out with neighboring atoms.
- Last, and probably the most important point: they know quite well what others are doing, they are quoting them, and they are pointing out their difference.
Voxels and Sparse Voxel Octrees?
There are a number of people working with voxels and sparse voxel octrees. You can search videos on YouTube. They all share a number of common traits:
- They generally look blocky. The Euclideon videos look blocky at large scale, but the blocks themselves remain highly detailed. You never see uniform squares or cubes appear as in most SVO demos, e.g. Voxelstein 3D below. This may be due to the 4 atoms-per-mm resolution. But see the related problem below.
- It’s difficult to do nice looking shadows and lighting with SVOs. Look on the web to see what I mean. The counter examples I saw used some form of raytracing, non real-time rendering, or were GPU demos. The shadows and lighting in Euclideon’s demos look quite smooth.
But there’s one big issue with SVOs, which is traversal time (see this article for a good background). Consider the following numbers: Euclideon claims to draw 1024×768 pixels, 20 times per second. That’s roughly 16M pixels per second. On a 2GHz CPU, that leaves on the order of 100 cycles per pixel to find the object and compute the color of the pixel. In the best case, that’s roughly 500 instructions per pixel, but don’t forget that an ILP level of 5 is quite hard to sustain. A single cache miss can easily cost you 100 cycles.
And that's really the big question I have with Euclideon's claims. How do they minimize cache misses on a data set that is necessarily quite big? Consider the minimum amount of data you need for a sphere of radius r: the area of the sphere is 4πr².
Even if we have atom data only for the surface, a sphere with a radius of 1000 units would use roughly 12 million atom storage elements. It's not a big sphere either: a unit being reportedly 0.25mm, that would be a sphere with a 25cm radius, 10 inches for imperialists. An object the size of the elephant (1m) takes roughly 200M atoms, and the big statues must be around 500M atoms. This explains why "memory compression" is something that Euclideon repeatedly talks about in their video. My guess here is that there aren't that many kinds of atoms in the elephant, for example. I'm tempted to think they have "elements" like in the physics world.
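As a sanity check, here are the two back-of-the-envelope calculations above in code form; the 2GHz clock, the screen size and the one-atom-per-surface-unit density come from the text, the rest is plain arithmetic:

    // Back-of-the-envelope check of the two estimates above:
    // the per-pixel cycle budget and the atom count of a sphere.
    #include <cstdio>

    int main()
    {
        // 1024x768 pixels, 20 times per second, on a 2GHz CPU
        double pixels_per_second = 1024.0 * 768.0 * 20.0;     // ~15.7M
        double cycles_per_pixel  = 2.0e9 / pixels_per_second;

        // Surface-only storage for a sphere of radius 1000 units
        // (one atom per square unit, 1 unit = 0.25mm)
        double pi    = 3.141592653589793;
        double atoms = 4.0 * pi * 1000.0 * 1000.0;

        printf("~%.0f cycles per pixel\n", cycles_per_pixel); // ~127
        printf("~%.1fM atoms for the sphere\n", atoms / 1e6); // ~12.6M
        return 0;
    }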
But memory compression doesn’t solve cache latency issues. That, combined with the lack of spatial or temporal flicker, tells me that they found a really smart way for their “atom search” to be stable and to preserve good memory locality. My guess is that they do this based on a “radius” of the incoming ray. If the camera is far from the object being looked at, you only pick “large” atoms considered representative of the whole. Identifying a good candidate “average” atom is something you can do at object encoding time.
Number of instructions per pixel
Assume Euclideon solved memory latency so that, on average, they get, say, a good 200 instructions per pixel. We start with a resolution of 4 million units in the X, Y and Z directions, and we need to find the target voxel. So we start with (roughly) 22 bits along each axis. My guess is they may have fewer along the Z axis, but that doesn't make a big difference. With an octree, they need, roughly, 22 steps through the octree to get down to the individual "atom". That leaves 200/22, or roughly 10 machine instructions per octree traversal step. That's really not a lot.
Of course, in general, you don't need to get down to the individual atoms. The whole viability of the octree scheme is largely based on the fact that you can stop at a much higher granularity. In reality, the size of the atoms you need to find is a function of the screen size, not the individual atom resolution. We are therefore looking more at 10 iterations instead of 22. But still, that won't leave us with more than about 20 instructions.
Ah, but wait, an SVO algorithm also requires a small amount of ray tracing, i.e. for each pixel, you need to check along the way if a ray starting at that pixel is hitting things. You need to repeat along the path of the ray until you hit something. And ray displacement computations also take time. The amount of time depends on the particular algorithm, but to my knowledge, the best scenario is roughly a log of the number of possible hits along the way (i.e. you do another binary search). So if we keep the same 22 binary searches along the way, that leaves us with less than one instruction per step on average.
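To make that budget concrete, here is a minimal sketch of the descent loop of a generic SVO traversal. This is my own illustration, not Euclideon's code, and the Node layout is an assumption:

    // Minimal sketch of an octree descent (not Euclideon's code).
    // Coordinates have 22 bits per axis, one bit consumed per level.
    struct Node
    {
        Node *children[8];          // null where the subtree is empty
    };

    const Node *descend(const Node *node,
                        unsigned x, unsigned y, unsigned z)
    {
        for (int level = 21; node && level >= 0; level--)
        {
            // Pick the octant from the current bit of each coordinate
            unsigned octant = (((x >> level) & 1) << 2)
                            | (((y >> level) & 1) << 1)
                            |  ((z >> level) & 1);
            node = node->children[octant];
        }
        return node;
    }

Even this stripped-down step costs a few shifts and masks, a dependent load and a branch, close to the 10-instruction budget already; and the dependent load is exactly where a cache miss would hurt.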
Clearly, these numbers explain why there is some skepticism of Euclideon’s claims. The CPUs today are just short of being able to do this. Which is why the SVO demos we see require smaller worlds, lower resolution. All this reduces the number of bits significantly, making it much easier to achieve correct performance. But this analysis shows why a simple standard SVO algorithm is unlikely to be what Euclideon uses.
Just short = Optimizations are useful
In general, I tend to stick with the First Rule of Optimization: Don't. But being "just short" is the exact time when tiny optimizations can make a whole lot of difference between "it works" and "it doesn't".
At the time of Alpha Waves, we were "just short" of being able to do 3D on a PC, because the mathematics of 3D required multiplications, which took tens of cycles on the machines of the time. The tiny optimization was to replace 9 multiplications costing 14 to 54 cycles each with three additions costing 4 cycles each (on the 68K CPU of my Atari ST). I often did multiple additions, but even so, that was less expensive. And that single optimization made the whole scheme viable: it made it possible for Alpha Waves to render 30 objects on screen when other programs of the time slowed down with 3.
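To illustrate the multiplication-to-addition idea, here is the classic "Minsky circle" rotation, which rotates a point using only additions and shifts. It illustrates the principle, but is not necessarily the exact trick Alpha Waves used:

    // Incremental rotation with additions and shifts only, no
    // multiplication: the classic "Minsky circle" trick. Epsilon is
    // 1/64 of a radian, implemented as a right shift by 6.
    #include <cstdio>

    int main()
    {
        int x = 1000 << 8, y = 0;                  // 24.8 fixed point
        for (int frame = 0; frame < 402; frame++)  // ~2*pi*64 steps
        {
            x -= y >> 6;            // x' = x - epsilon*y
            y += x >> 6;            // y' = y + epsilon*x' (using the
                                    // updated x keeps the orbit stable
                                    // instead of spiraling outward)
        }
        printf("%d %d\n", x >> 8, y >> 8);  // back near (1000, 0)
        return 0;
    }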
So what kind of optimizations can we think about in the present case? Here are some semi-random ideas:
- Assuming 21 bits instead of 22, or making the Z axis smaller (i.e. you could have 24+24+16, where the Z axis would be "only" some 16 meters deep), you can make all three coordinates fit in a 64-bit value. So basically, each "atom" in the world is uniquely addressed by a 64-bit "address". You can then rephrase the problem as "map a 64-bit value to a sparse data structure".
- You don't need the order of bits in the "address" to be fixed. There are plenty of multimedia instructions for reshuffling bits these days. Among other things, you can interleave the X:Y:Z triples in groups of 20 bits, 2 bits or 1 bit. For example, if X:Y:Z are packed bit-by-bit, a sequence of 3 bits can serve as an index to resolve one step of the octree traversal (see the sketch right after this list).
- If there’s no transparency, you can draw a ray from the camera to the atom, and know that you will stop exactly once, all atoms before it being transparent. If you have a variable size for atoms, you can have very large transparent atoms so that advancing the ray will be pretty fast across transparent regions. You can practically get back a factor of 20 in the instructions per iteration.
- Computing the ray displacement using a variation on Bresenham's algorithm, you can basically do the "hard" computation once for 8 rays using symmetries, keeping only the "simple" computations in the main loop.
- At the first level, you can use memory mapping to let the hardware do the work for you. You can't do this on all 64 bits, mind you, since there are various per-OS limits on the largest range that you can map. But taking advantage of the TLB means a single instruction will do a 4-level table lookup with hardware caching. I'm pretty sure Euclideon people don't do that (I'm suggesting it only because I did operating systems design and implementation for a little too long), but it would be cool if they did.
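Here is the bit-interleaving sketch mentioned in the list above: packing X, Y and Z bit-by-bit yields a Morton code where every group of three bits, read from the top, is the octant index for one level of the octree. The technique is standard; whether Euclideon uses anything like it is pure speculation on my part.

    // Interleave the low 21 bits of x, y and z into one 63-bit
    // Morton code, using the standard bit-spreading constants.
    #include <cstdint>

    static uint64_t spread3(uint64_t v)    // put bits 3 apart
    {
        v &= 0x1FFFFF;                              // keep 21 bits
        v = (v | v << 32) & 0x001F00000000FFFF;
        v = (v | v << 16) & 0x001F0000FF0000FF;
        v = (v | v <<  8) & 0x100F00F00F00F00F;
        v = (v | v <<  4) & 0x10C30C30C30C30C3;
        v = (v | v <<  2) & 0x1249249249249249;
        return v;
    }

    static uint64_t morton3(uint32_t x, uint32_t y, uint32_t z)
    {
        return spread3(x) << 2 | spread3(y) << 1 | spread3(z);
    }

With that encoding, the descent loop shown earlier can obtain its octant index with a single shift and mask of the 64-bit address.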
There are obviously many more tricks required to achieve the objective. But I did a quick back of the envelope calculation on the average cost using some of the tricks above and a couple more, and I ended up with cycles to spare. So I believe it can be done on today’s hardware.
John Carmack is right on the spot when he says that productization is the real issue. Prototypes are easy, products are hard. But I wouldn’t rule out seeing a game using that technology on current generation systems.
Just my two bits.
The poster explains:
Regarding the ubuntu version, was it build with a g++ 4.6 ?
It seems to require a GLIBCXX_3.4.15 which is just above the one I have
I am going to install a g++ 4.6 from source in order to get the right
libstdc++ but I wanted to let you know just in case someone else gets the same issue.
So here is the problem: if we upgrade our build systems to the latest and greatest patch level, we will also include a number of recent dependencies that our customers may not have yet.
Some development environments provide options to select which build tools you use, allowing you to revert to older build configurations. For example, my MacOSX system shows the following variants of G++:
g++ g++-4.0 g++-apple-4.2 g++2 g++-3.3 g++-4.2 g++-mp-4.4 g++3
Unfortunately, that facility doesn't automatically extend to all the libraries you depend on. For example, we use Qt to develop cross-platform GUI code, and versions before the still unreleased 4.8 version emit compilation warnings while compiling on MacOSX Lion. So if I want to build with Lion, my options are either to build in an unsupported configuration (and stop using the -Wall option in our builds), or to build with a not-yet-released library.
So is the best solution to keep your build systems behind, with all the implied security risks? Is there a better overall strategy I didn't think of?
Reviewing my daily feed of tweets this morning, I ran across a presentation called "It's time to fix HTTPS". The topic itself is of interest to all of us, since it concerns the security of e-commerce transactions, among other things.
Yet the slide deck lacks basic appeal:
- Only text (or busy screen snapshots)
- No obvious organization or story
- Three boring slides of disclaimers and acknowledgments at the beginning,
- Acronyms, jargon,
- Long sentences, broken apparently at random
This kind of presentation is not an infrequent occurrence, unfortunately. For some reason, many scientists and computer scientists seem to take pride in showing horrible slides. I resisted the urge to make a catalog.
So let me state something that should be obvious, but obviously is not: Just because you are smart doesn't mean your presentations have to suck. Or to put it another way: Your time is not so precious that you shouldn't help your readers get your point.
Sharing an idea
The whole point of a presentation is to share an idea, to convince someone. This requires some work at two distinct levels, form and content. Let's assume that you have the content: what can you do about the form?
Here are three simple things to keep in mind to build a presentation that is useful for you and for your readers:
- Tell a story
- Keep their attention
- Be a guide
Remember above everything else that your objective is to share your idea, not rehash it to yourself. Therefore, if the idea does not contaminate your audience, the presentation failed its objective.
The “storytelling” word has been used and abused. The gist of storytelling is that sharing an idea is not just about sharing facts, it’s about making your audience take ownership of your idea, make it their own.
This is often hard to accept for scientific minds. Aren't facts enough? In reality, all facts can be disputed. All opinions have to be defended, explained, elaborated. Even if the idea is obvious to you, it may still be wrong, or dangerous, or you may need to explain the basics to avoid losing half of your audience.
Storytelling is not about what you say, but about how you say it. Don't write "It's time to fix HTTPS". Prefer "Do you know it's really your bank talking to your browser?" Instead of "Global PKI, as currently implemented in browsers, does not work", what about "The browser chain of trust hangs on weak links"? (assuming I understood the core argument correctly)
What am I doing with this simple rephrasing? I’m trying to deliver the same facts, the same core idea, but in a way that the audience may relate to. Not everybody knows what HTTPS means, but anybody (reading Google Docs) knows what a browser is or that security matters when it talks to a bank.
Even the best facts need a good story for people to get interested or remember them.
Keep their attention
The slide deck about HTTPS is on a topic that interests me, but I had some trouble following it to the end.
In these days of soundbites, when wisdom has to fit in 140 characters, sometimes you need all the help you can get from fancy visuals, animations or speaker charisma simply to keep the audience awake. And if you don't have a speaker (e.g. for an on-line presentation), you may need other tricks.
Google Docs is clearly not the best tool when it comes to delivering fancy presentations. It's not a limitation of on-line tools, though. Actually, some of the most convincing innovation in that space comes from on-line tools. SlideRocket delivers really nice presentations, arguably much better than the average PowerPoint. And what's the best reason to use Prezi, if not fancy visuals?
Still, do not go overboard. Beware that a movie does not replace a presentation. Who has not seen one too many on-line videos like this one?
It certainly took a lot of work. But in my opinion, it's the exact opposite of the HTTPS slides, i.e. it's all about showing off effect after effect. It doesn't keep my attention either, it smites it to bits.
So how could the HTTPS slide deck retain my attention better? There needs to be some level of organization, some key message, some way for me to understand “Ah, that’s what they are talking about now”. We don’t want raw data, we are already over-fed with data. So whatever we pay attention to needs to be structured.
Fancy visuals do not replace the presentation. But, utilized well, they make it live.
Be a guide
Sharing innovation is even more difficult. To paraphrase A.C. Clarke, any sufficiently advanced idea is indistinguishable from gibberish.
It takes a fair amount of marketing and communication to correctly explain the value and benefits of some new technology. I remember being very happy that VMware was doing all the work of educating our customers about the value of server virtualization, which meant we didn’t have to do that work when talking about HPVM.
Innovation is about telling others what to do. And nobody wants to follow directions, so you need to do it not with brute force, but by getting the audience to actually follow you. One way to do that is by showing a better way. Another is by inducing fear of the current situation.
The HTTPS presentation tries both approaches, but without much conviction. The fear is too implicit, you really need to understand the technology. The better way, the greener pasture just over the fence is a little bit too vague. So it’s not entirely convincing. The technical arguments could be made into a much more appealing proposal, however.
To be a guide, you need to already know where you are going.
You may have heard of the Joel Test to evaluate startups:
- Do you use source control?
- Can you make a build in one step?
- Do you make daily builds?
- Do you have a bug database?
- Do you fix bugs before writing new code?
- Do you have an up-to-date schedule?
- Do you have a spec?
- Do programmers have quiet working conditions?
- Do you use the best tools money can buy?
- Do you have testers?
- Do new candidates write code during their interview?
- Do you do hallway usability testing?
At Taodyne, we pass all these, yet our project took a major blow because there is at least one other key rule that goes above all these. I will call it rule zero, and it impacts how you write code.
- Do you go in a single direction?
The code-oriented formulation of rule zero is the following:
- Do you avoid branches like the plague?
Do not ever fork your codebase
With tools like Git, Mercurial or Bazaar, branches are much easier to manage than they were in the past. They actually became part of the process. With these distributed source control systems, you keep creating and merging branches for practically everything you do.
So why the title, “Branches are evil” (a favorite of über-geek Dennis Handly)? Here, by “branch” I don’t mean the kind of minor branching where two variants of the codebase differ by a few lines of code. This has to happen as you develop code. The problem is when the fork affects the design itself, or the direction the project is following.
My biggest mistake at Taodyne was ignoring this key rule. It all started with a simple thought experiment about an alternative design, quickly followed by a little bit of experimental code. And so I forked our code. But then, others started building useful stuff on top of that branch.
When we decided to drop that alternate design, we had a big problem on our hands. We needed to salvage all the good stuff on that branch, which had been written against a completely different design. We spent several months trying to reconcile these two designs with conflicting objectives. As a result, I spent countless hours trying to fix bugs that were a result of merging code with different assumptions.
When a single mistake delays the release of your product by a few months, you know you did something solidly wrong. I thought I'd share the experience so you won't make the same mistake.
I spent the last four days at a rather exciting entrepreneurial event on the French Riviera, which really combined three distinct events under the umbrella of the brand new RivieraCube association:
- An Open Coffee with the Sophia-Antipolis team. Open Coffee is an informal gathering of (mostly Java) geeks around a coffee (or, more often in our case, a beer, since we do that in the evening). This was so successful that a new Open Coffee group for Nice spontaneously emerged.
- A BarCamp the next day, with a small (and cramped) Startup Corner where Taodyne presented its flagship product, Tao Presentations. We had some exceptional unconferences from well-known French serial-entrepreneurs, including Kelkoo founder Pierre Chappaz or Kipost founder Pierre-Olivier Carles.
- A Startup Week-end, which gathered about 100 enthusiasts with the intent to create a startup in 54 hours. And some of them actually managed to pull it off, which is pretty amazing when you think about it. The talent and energy in that room were simply incredible, and reminded me of some of the best moments I had in Silicon Valley.
Reports on the web
There are already a large number of blogs reporting on this event, but I believe the best indicator of how lively it was is its twitter hashtag, #swnsa. There was actually a friendly contest with another Startup Week-end held the same day in Lausanne, Switzerland:
And the winner is…
There were a number of exciting projects, but there was generally little surprise as to who the winners were. The first three projects got a lot of help from local consulting companies, and the leader of the winning team got a free Geek Trip to Silicon Valley.
The winner was "Mamy Story" (@MamyStory), which I believe surprised no one in the audience. The concept is simple (tell the story of your grandparents), has an interesting innovation (which I won't disclose here), and a catchy name ("papy" or "mamy" in French is a common nickname for grandparents), but more importantly, it appeals to our emotions, something they largely exploited during their pitch.
As a matter of fact, they managed to get a member of the jury to tell them they could reach a larger market than what they presented in the plan. Here is another example of why they have a market.
The runners-up were:
- Dynoo (@dynoo_com), a project to "spread the word" (the French pronunciation of Dynoo sounds like "Dis nous", or "tell us", although they sometimes said it the English way, which I think weakens it; they should consider renaming it to deenoo),
- Qwinti (@qwinti), a web site to save your social activity, who had a really good designer on the team,
- JustMyTrends (@JustMyTrends), a web site offering a personalized shopping experience for hard-to-find items (the founder has a hard time finding shoes fitting his über-large feet).
And the winner is (redux)…
There was also an iPad 2 to be won, offered by Everything4Mobile (a very cool web site created by Virgile Cuba, a regular at the Sophia Open Coffee).
The winner was Matthieu Loutre, who was a member of our team. He lives in Montpellier, but he will happily drive on the 25th of March to Nice just to get his new gadget from the friendly team at the Apple Store (and when I say “friendly”, I don’t say that lightly – The user experience in that store is remarkable, doubly so by French standards).
First use of Tao Presentations in a conference
On Friday evening, I joined a project that I won't talk about, because I believe the project leader needs a bit of time to flesh his idea out, and even more time to turn it into a real product.
That being said, it was an occasion to try our prototype of Tao Presentations in a real, competitive environment. I learned a number of things:
- It’s a really competitive way to tell a story. You think about the story first, the way to tell it follows, something which is often harder with other presentation software.
- The presentation part just works. It didn’t crash once during the two days of rather heavy use, and the worst misbehavior was transient lighting glitches on the screen when using OpenGL lights.
- One of the challenges was to test whether creating live mock-ups of software to explain an idea was possible. It worked, it was easy, it really added to the presentation, but then we couldn’t really use that part because the question we expected didn’t come up :-)
Some aspects were less positive:
- Editing slides triggers an elusive bug on a relatively regular basis. I had the issue about half a dozen times in two days. The program crashes, which is not a real issue because of the way the workflow is organized (I never lost a single bit of what I had done), but still is annoying.
- The software doesn’t automatically reload images when they change on disk, which means you sometimes need to restart it just to load a new version of the pictures. To be fixed.
Overall, I had some rather good feedback on the presentation. I gave a talk about XL to half a dozen true geeks and talked about programming techniques.
Young programmers and compilers…
I think it's a good thing overall, but then someone still needs to get interested enough in system software. I'm afraid system software programmers are getting old. We need to train the new generation, to get them interested in languages that can run fast.
The good news, then, is that XL got rather positive comments. No “why invent a new language” in this crowd.
There is a little bit of activity on the XLR-Talk mailing list, dedicated to discussions about the XL programming language and runtime. XL is a high-level general purpose programming language I designed and implemented over the years. It exists in two primary flavors, XL2, an imperative statically-typed language, and XLR, a functional flavor. Both share a very similar syntax and basic concepts.
One of the interesting threads is about using XL for high-performance computing. I like it when someone writes:
Thank you for releasing Xl and in particular Xl2, this is a most
interesting and exciting development. I am a very long-time C++ user
and appreciate the power of generic programming through templates but
the syntax is clunky and I often find myself going off the end of what
is currently possible and end up messing around with the C pre-
processor which is frustrating. I am hoping that Xl2 will prove to be
an effective alternative to C++ templates and provide the programming
flexibility I crave.
Now, XL2 is not ready for prime-time yet. Its library is significantly lacking. But the core compiler is already quite advanced, and can compile very interesting pieces of code. For instance, XL was as far as I know the first language to introduce variadic templates, for code like this:
    generic type ordered where
        A, B : ordered
        Test : boolean := A < B

    function Max (X : ordered) return ordered is
        return X

    function Max (X : ordered; ...) return ordered is
        result := Max(...)
        if result < X then
            result := X
What happens in this little piece of code is interesting. It introduces two key features of XL: true generic types and type-safe variadics.
True generic types
The ordered type is an example of a "true generic type", meaning that it can be used as a type in function declarations, but it implicitly makes the corresponding function generic (C++ programmers would say "template"). In other words, ordered can represent types such as real, and you can use Max with all these types.
In that specific case, ordered is a validated generic type, meaning that there are some conditions on its use. Specifically, ordered only represents types that have a less-than operator, because that operator is necessary to implement Max. Note that a compile-time failure will occur if you attempt to use Max with a type that doesn't have a less-than, even if no less-than operation is used in the instantiation.
The second interesting feature demonstrated in this small example is the use of ... to represent arbitrary lists of arguments. This is used here to implement type-safe variable argument lists. You can for example write Max(1, 3, 5, 6, 9), and the compiler will recursively instantiate all the intermediate Max functions until it can compute the result.
These same features are also used for functions that have lists of arguments with differing types, such as WriteLn. The XL2 implementation of WriteLn is found here:
    to WriteLn(F : file; ...) is
        any.Write F, ...
        PutNewLineInFile F
The Write function itself is implemented with a similar recursion that ends on functions that write a single argument, e.g. an integer value.
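For comparison, here is a rough C++11 transliteration of the Max example above; C++ only gained variadic templates years later, and this sketch is mine, not part of the XL sources:

    // C++11 equivalent of the XL Max: one base case, plus a
    // recursive case that peels off a single argument.
    template <typename T>
    T Max(T x)
    {
        return x;
    }

    template <typename T, typename... Rest>
    T Max(T x, Rest... rest)
    {
        T result = Max(rest...);   // recurse on remaining arguments
        if (result < x)
            result = x;
        return result;
    }

    // Max(1, 3, 5, 6, 9) recursively instantiates the intermediate
    // Max functions, exactly as described for XL above.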
How does it help HPC?
So how do these features help high-performance computing? They allow you to easily write highly generic code, covering a large range of uses, without paying a run-time penalty for it. No objects are constructed. No garbage collector is required to clean up memory allocations: there are no memory allocations. Everything can easily be inlined.
There are other features in XL that also help with HPC:
- Expression reduction allows you to combine operations for performance or logical reasons, e.g. mapping a whole expression such as A*B+C to a single routine.
- Configurable code generation allows you to take advantage of specific hardware and integrate it directly into your code.
Where do we go from here?
XL2 is currently a little bit on hold because I'm focusing a lot of my energy on the functional variant, XLR, used by Taodyne in its products.
However, I believe that it has reached a stage where other people can contribute relatively easily. For example, it would be useful to implement the various "fundamental" data structures in the library, i.e. go a little bit beyond arrays. If you want to contribute to XL2, nothing would please me more than to give pointers. Simply join xlr-talk.
A colleague sent me an interview with Kalani Thielen about trends in programming languages.
I’m fascinated by this interview. Kalani is obviously an expert. But what should we think of the following?
The function, or “implication connective” (aka “->”), is an important tool and ought to feature in any modern language.
I have a problem with this kind of jargon. Why? Because if concept programming teaches us anything, it’s that a computer function is anything but a mathematical “implication connective”. Programming languages are not mathematics.
Programming languages are not mathematics
Computer science courses often spend a lot of time teaching us about various aspects of mathematics such as lambda calculus. So we are drawn to believe that programming languages are a sub-branch of mathematics.
And indeed, this is the view that Kalani expresses:
The lambda calculus (and its myriad derivatives) exemplifies this progression at the level of programming languages. In the broadest terms, you have the untyped lambda calculus at the least-defined end (which closely fits languages like Lisp, Scheme, Clojure, Ruby and Python), and the calculus of constructions at the most-defined end (which closely fits languages like Cayenne and Epigram). With the least imposed structure, you can’t solve the halting problem, and with the most imposed structure you (trivially) can solve it. With the language of the untyped lambda calculus, you get a blocky, imprecise image of what your program does, and in the calculus of constructions you get a crisp, precise image.
The statement that I strongly dispute is the last one: in the calculus of constructions, you get a crisp, precise image. The Calculus of Constructions (CoC) is the theoretical basis for tools such as Coq. These tools are intended to assist with computer-generated proofs. So the very thing they talk about is represented crisply by the CoC. But if what I want is to represent the behavior of malloc() (a function whose primary role is its side effects) or how CPUs in a system communicate with one another (physical interactions), then the CoC is of no use. It doesn't give me a crisp, precise image.
In other words, high-level languages with first-class functions, implicit garbage collection or dynamic typing are really cool, but they give me a blurry, imprecise picture of how the computer actually works. The reason is that a computer is not a mathematical object, it's a physical object. An "integer" in a computer is only an approximation of the mathematical entity with the same name.
Trying to hide the differences only makes the picture more blurry, not more crisp. For example, you can use arbitrary-precision arithmetic instead of integers with a fixed number of bits. And now, your "integers" start consuming memory and have other side effects. In any case, these arbitrary-precision numbers are not "native" to the computer, so they are "blurry".
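A two-line illustration of that approximation at work:

    // A machine "integer" only approximates the mathematical one:
    // the largest 32-bit unsigned value wraps around to zero.
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        uint32_t n = 4294967295u;   // 2^32 - 1
        printf("%u\n", n + 1u);     // prints 0, not 4294967296
        return 0;
    }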
Programming languages are languages
With concept programming, I've consistently argued that programming languages are, first and foremost, languages. Trends in programming languages are a problem of linguistics, not mathematics. The goal should not be to make the language more precise, but to make it more expressive. Nobody cares if you can solve the halting problem regarding programs in your language, if to achieve that objective you have to give up expressiveness.
Let's make an analogy with real-world languages. Imagine that we decided that it's important to make English "precise". We'd set the goal that any sentence in Precisenglish could be provably true or false. First, you'd end up with a language where it's impossible to write "This sentence is false", since it's impossible to prove this sentence true or false. Now, this particular sentence may not seem indispensable. But what about questions? "What time is it?" wouldn't be a valid Precisenglish sentence... Isn't that a hefty price to pay for precision?
The same is true for programming languages. You can impose constraints on the language that make it easier to prove things. And then, simple things like side effects become really complex. In Haskell, the simple task of writing something to the console requires complex constructions such as monads...
Mathematics and programming both derive from languages
It's interesting to observe that mathematics is also a form of language. It's a precise, if restricted, language that helps us reason about symbols, build theorems, prove things. So it makes sense that mathematics and programming languages are related: they both are forms of languages. But it's not because programming languages derive from mathematics. It's because programming languages and mathematics both derive from languages.
In my opinion, progress in programming languages will happen if we decide to give up mathematics and embrace linguistics, and start focusing on how we translate the concepts in our heads into computer approximations.
A good language by that definition can adapt to mathematics as well as to other domains of knowledge. The ideal programming language should be capable of representing mathematical properties. It should be possible to write a subset of the language that is precise, just like mathematics can be seen as a precise subset of human languages. But it should also be possible to have a subset that is not necessarily as precise because it addresses other domains. There should be a subset that deals with I/Os not using monads or other abstractions, but representing what's going on in the machine as precisely as possible. There should be a subset that deals with parallelism, or computations on a GPU, or floating-point arithmetic.
And you basically have a definition of what I'm trying to achieve with XL.
Someone asked on the Go language mailing list about the placement of curly braces. The thread currently has 101 posts. And my guess is that this is just the beginning.
Programmers are familiar with holy wars. This thread reinforces my belief that Go should put a little more emphasis on flexibility or extensibility, and a little less on compile time.