Archive for the ‘Science’ Category

SNCF caught red-handed doing IP tracking

SNCF raises its price by 41% in two minutes

SNCF claims it does not practice IP tracking. I find that hard to believe.

A few minutes ago, my wife went to the Voyages SNCF web site and searched for a Paris-Antibes ticket. Price: €80. "Warning, last seats at this price", of course. At some point, she made a mistake and decided to redo the search on the same site. The same ticket suddenly jumped to €113.

I tried from another browser, then from another machine in the same house (hence the same IP address as seen from outside). The ticket remained stuck at €113.

But to check whether the €80 tickets really had sold out (the theory from the SNCF blog above), I decided to go through my smartphone over 3G, which of course means a different IP address. And there, surprise (not really, in fact), the €80 ticket was back. So I bought it.

The ticket I bought: only €80!

So if SNCF does not do IP tracking, why does what I just described happen every single time? This cannot be explained by tickets selling out at a given fare level, because the prices displayed on one and the same computer depend on the IP address being used!

Paul Graham recommends doing things that don’t scale

As usual, Paul Graham has written an interesting piece about startups. He recommends doing things that don't scale: thinking like a big company is a sure way to fail. It's a reassuring piece for a startup founder like me, because at Taodyne, we are indeed in that phase where you do everything yourself and would need 48-hour days just to cover the basics. Good to know that the solution to this problem is to keep working.

Connect this to survivorship bias, a very serious cognitive bias that makes us look only at the survivors: at the planes that return from combat, at the successful entrepreneurs. Because we don't look at the dead startups or the planes that were shot down, we build our statistics on a biased sample, and as a result we draw incorrect conclusions. For example, if the planes that return have mostly been hit in the tail and wings, you might deduce that these are the parts being shot at, hence the parts you need to protect, when in reality all it proves is that hits to those parts don't prevent a plane from returning. Very useful.

Last interesting link of the day is the discussion about bullying on the Linux Kernel Mailing List (LKML). Sarah Sharp, a female Intel engineer, stands up to Linus Torvalds and asks him to stop verbal abuse. It’s an interesting conflict between very smart people. To me, there’s a lot of cultural difference at play here (one of the main topics of Grenouille Bouillie). For example, I learned from Torvalds what Management by Perkele means. On one side, it’s legitimate for Sarah to explain that she is offended by Linus’ behavior. On the other hand, it’s legitimate for Linus to keep doing what works.

Sarah reminds me of a very good friend of mine and former colleague, Karen Noel, a very sharp engineer who joined me on the HPVM project and taught me everything I forgot about VMS. Like Sarah, Karen was willing to stand her ground while remaining very polite.

Everything is broken and no one cares

February 10, 2013

This post from Dear Apple is just so true, and so clearly on topic for Grenouille Bouillie!

Have we reached the point in complexity where we can’t make good quality products anymore? Or is that some kind of strategic choice?

The original post is mostly about Apple products, but the same is true with Linux, with Android, with Windows.

Here is my own list of additional bugs, focusing on those that can easily be reproduced:

  1. Open a file named X in any of the new Apple applications, those without Save As. Open another file named Y. Save Y as X. Beachball, in every application, and a force quit is the only way out. Worse yet, since applications often remember which windows were open, you get the beachball again when you reopen the application. It takes another force quit for the application to (fortunately) offer not to reopen the windows.
  2. A relatively well known one now: Type F i l e : / / / in practically any OSX application. Without the spaces. Hang or assert depending on your luck.
  3. Use a stereoscopic application like Tao Presentations. Activate stereoscopy. Switch spaces or unplug an external monitor. Kernel panic or hang to be expected. Then go tell your customers that the kernel panic is Apple's fault, not ours…
  4. If you backup over the network, set your computer to sleep after say 1 hour while on power. Change your disk enough that the backup takes more than one hour. Backup disk will come up as corrupt after a couple of days, and OSX will suggest you start a new one (and the cycle will repeat).
  5. Use the “Share” button. It takes forever for the window to show up (2-3 seconds in general on my 2.6GHz quad-core i7 with 8GB of RAM). Since what I type generally begins with an uppercase letter, I usually get ready by holding the shift key. But to that stupid animation framework, “shift” means “slow the animation down so that Steve can demo it”. Steve is dead, but the “shift” behavior is still there.

I’ll keep updating this list as more come to mind. Add your own favorite bugs in the comments.

First update (Feb 13, 2013):

  1. Safari often fails to refresh various portions of the screen. Visible in particular when used in combination with Redmine. This used to be very annoying, but it has gotten much better in more recent updates of Safari.
  2. iTunes 11 no longer has Coverflow. It was a neat way to navigate in your music, which wasn’t even the default, why remove it?
  3. Valgrind on OSX 10.8 is completely broken. I have no idea what’s wrong, but it’s a pretty useful tool for developers, and Apple has nothing in its own development tools that is even remotely close.
  4. “Detect displays” is gone, both from the Monitors control panel and from the Monitors menu icon. Combine that with the fact that OSX 10.8, unlike its predecessors, sometimes totally fails to detect that you unplug a monitor. And you find yourself with windows stuck on a screen that is no longer there…
  5. That little Monitor menu icon used to be quite handy, e.g. to select the right resolution when connecting to an external projector for the first time. Now, it’s entirely useless. It only offers mirroring, fails to show up 90% of the time when there is a possibility to do mirroring, shows up when mirroring is impossible (e.g. after you disconnected the projector). It used to be working and useful, it’s now broken and useless. What’s not to love?
  6. Contacts used to have a way for me to format phone numbers the way I like. That’s gone. Now I have to accept the (broken) way it formats all phone numbers for me.
  7. I used to be able to sync between iPhone and Contacts relatively reliably. Now, if there’s a way to remove a phone number, I haven’t found it. Old numbers I removed keep reappearing at the next sync, ensuring that I never know which of the 2, 3 or 4 phone numbers I have is the live one.
  8. Still in Contacts, putting Facebook e-mail addresses as the first choice for my contacts? No thanks, it was heinous enough that Facebook replaced all genuine email addresses with aliases. But having that as the first one that pops up is really annoying.
  9. Now fixed, but in the early 10.8, connecting a wired network when I also had Wifi on the same network would not give me higher speed. It would just drop all network connectivity.

Updated February 28th after restoring a machine following a serious problem:

  1. Time Machine restores are only good if your target disk is at least as big as the original. With Apple’s recent move to SSDs, that may no longer be affordable: in my case, I’d like to squeeze 1TB of data into 512GB. Time Machine does not give me the fine-grained control I’d need to restore only what I really need. So I have to try to do it manually, which is a real pain.
  2. Calendar sync is a real mess. Restoring calendars from a backup is worse.
  3. Spaces? Where are my good old spaces? Why is it that I had spaces on the original machine, no longer have them, and find myself unable to say “I want 6 spaces” or to set up keyboard shortcuts for them the way they used to be?

When Google oversteps its authority

Recently, a user of Tao Presentations informed us that Google Chrome displayed a dire warning after he downloaded our software: “Tao Presentations may be malicious software”. Uh oh, for the average Joe, that’s a big no-no.

Google locks out “unapproved” programs

It’s not just us. Recently, I tried to download some of the amazing demos created by Iñigo Quilez. Same thing. Seriously, a 4K exe that manages to display a complete mountain? And Google Chrome would have me believe that there’s room in there for “malicious software”? Get real.

Now, it took me quite a while to find a solution to this problem. Apparently, you just need to register your site in Google’s Webmaster Tools, and after Google scans your site and verifies (I assume) that there is no known virus signature in the files, things should improve.

I still find this really annoying that a browser vendor would, by default, tag unknown files as “malicious”. Who are they to make this judgment call?

Why didn’t Google implement a real solution?

Shouldn’t they instead have something a little more sophisticated, that actually detects malicious signatures? You know, like a real anti-virus? Don’t tell me that Google doesn’t have smart enough engineers to write an in-browser anti-virus that doesn’t completely suck.

Nah, instead they went the easy route: anything that we don’t know is malicious. And we tell your users so.

I used to be a big fan of Chrome. Not anymore, because of this single issue. I think it demonstrates an incredibly stupid arrogance and a lack of technical diligence on Google’s part.

Google overstepped its authority and took advantage of their weight. Let’s not get used to it.

Me, President of an SME

What kind of “president” would I like to be?

An SME president who, first of all, respects France and loves it. I am the president of a small company; I can only be president of practically nothing, chief of nothing, yet ultimately responsible for everything.

Me, president of an SME, I am not even the leader of a minority. I have no time to receive anyone, because I work evenings and weekends.

Me, president of an SME, I have to deal with investors, not to quibble over whether we should call them associates or collaborators.

Me, president of an SME, I take part in all sorts of fundraising, because I do not have the option of running a budget deficit.

Me, president of an SME, I run the company independently, but I am accountable if I act against the advice of my “board of directors” (or whatever serves that role in an SAS).

Me, president of an SME, I do not presume to appoint directors; I know very well that it is through their independence of mind and their initiative that they earned that title.

Me, president of an SME, I make sure my behavior is exemplary at every moment, while being more keenly aware than ever of my own limits.

Me, president of an SME, it is not with a light heart that I accept a status with so little protection, knowing full well that if my actions were ever challenged, no magistrate would ever hesitate to summon me.

Me, president of an SME, I have put together a gender-balanced team, with as many women as men, insofar as that is possible in a team of five. So what?

Me, president of an SME, I am subject, just like my investors, to a code of ethics that forbids any conflict of interest. Again, so what?

Me, president of an SME, I note that my associates accumulate nothing, except underpaid working hours, since one may consider that beyond 70 hours a week, one is more than fully devoted to one’s task.

Me, president of an SME, I would love to see a bit of decentralization; I would love for SMEs, the driving forces of local economies, to be given a second wind, for their skills to be put to use, for them to be granted a little freedom.

Me, president of an SME, I would love to grow big enough to have labor representatives, or to devote time to professional associations. I would prefer a few regular discussions to laws imposed without negotiation.

Me, president of an SME, I would settle for a small debate. The taxation of capital has been brought up, and it is legitimate to have a citizens’ debate on such questions.

Me, president of an SME, I face proportional representation with my shareholders, and not starting in 2017 but right now: the full range of their sensibilities is already represented.

Me, president of an SME, I keep my head down on the handlebars, with all the lofty perspective that position affords. I would love to set grand orientations and give grand impulses, but at the same time I must take care of everything and always stay close to my customers.

I would have liked a somewhat more normal life, but nothing is normal when you are the president of an SME. Being president is not that easy. Our world is going through a major crisis, or at least France is. Yet we can still manage to fall out with Europe. We can still create plenty of conflicts by pitting people against one another, or by quarreling over the environment. Of course, a president must have a ready-made answer that talks down to his subjects: “I don’t like the rich” is more than enough to show that you are close to the people, that you are capable of grasping the full complexity of France’s economic and social reality.

That said, me, president of an SME, I would love for our investors to be left alone. That alone would already be a nice change, effective immediately.

And if you don’t understand why I am saying all this:

(Updated to use the term SME, which is more general than SAS)

Marc Simoncini’s explanation on BFM Business

A very interesting appearance by Marc Simoncini, founder of Meetic among other companies.

Ten reasons not to tax capital like salaries

As entrepreneurs mobilize against the new 2013 French finance law, it may be worth recalling why aligning the taxation of capital with that of salaries is, from the start, a false good idea. Read more…

Ten reasons capital and salary should not be taxed the same

At a time when French entrepreneurs are actively fighting the new French finance law for 2013, it might be good to remind our politicians of a few reasons why it is a not-so-good idea to tax a salary and revenue from capital the same way.

Read more…

Explaining #geonpi to non-French entrepreneurs

There’s been a recent flurry of activity on twitter around the #geonpi hashtag. What is going on?

The short version is that French entrepreneurs are up in arms against the French budget law for 2013. On the surface, one aspect of the law is intended to align the taxation of capital with the taxation of work, to use the words of the French government. But the reason entrepreneurs are reacting is that, in practice, the new taxation may well make the creation of startups in France completely untenable.

Etymology of #geonpi

A first question you may have is: what the hell does “geonpi” mean? Well, it’s simply verlan (French back-slang) for pigeon. And in French, pigeon is a slang word for a dupe, a scapegoat, someone who is easily taken advantage of. In short, a sucker. There are many other expressions and word associations around pigeons, like “tir aux pigeons” or “roucouler”, but the Les Pigeons movement is clearly about the “easily abused” meaning.

Entrepreneurs in France feel that they are the “pigeons” of the whole social system.

What caused the wrath?

Like in any other country, entrepreneurs in France take risks. They create new wealth. They create jobs. They usually reach the legal 35 hours per week on Tuesdays. They often put a good fraction of their own savings into the enterprise (I talk from experience here). They don’t sleep well at night.

Yet, in France, entrepreneurs get very little recognition. This may be hard to comprehend for someone who is more accustomed to Silicon Valley, where being an entrepreneur is a Good Thing™. But in France, “entrepreneur” is almost a dirty word. In the mind of the general public, entrepreneurs and greedy corporate executives earning millions per year are one and the same thing.

In reality, entrepreneurs in France earn very little if anything, just like in any other country. No minimum wage here. No job protection. Entrepreneurs, unlike other workers in France, have practically no retirement benefit, and certainly no “golden parachute”. More to the point, they don’t get anything from the state if they fail.

The new regime

So what changed? What bothers French entrepreneurs is how they would get treated under the new budget law in the unlikely case they succeed. In France like in any other country, nine out of ten entrepreneurs fail. They are prepared for that. But what happens if they succeed is the problem here.

The new law is supposed to double the taxation of the gains you might derive from a successful investment in a small startup, from an already meaty 30% or so to about 60%. Yep, you read that right: six zero. I guess you have the same reaction reading that number that I had reading the gas mileage of a Humvee. [Update: ddabdul commented that I should talk about capital gains tax, but it's actually a combination of various taxes, one of them being the capital gains tax. To muddy things further, the French government actually moves that into the income tax. So I decided to stick to my previous wording, with this clarification.]

If, after 5 to 10 years of an extraordinarily uncomfortable life, one lucky entrepreneur happens to have any kind of success, which by the way means he created a sustainable company and presumably quite a few jobs, then under the new regime, he immediately loses a little over 60% to the Benevolent State in taxes.

But don’t worry for the Poor State of France, there’s more. The State gets to collect a few extra percent here and there in value-added tax and other taxes on goods (e.g. on gas). Then 1% to 2% per year through the “tax on fortune”, another beautiful French invention. And finally, 45% of whatever is left goes to the State again when the exhausted entrepreneur dies.

Le Gendre was right

Sorry to put it that way, but so much stupidity really hurts.

The debate in France about why the State needs to be so greedy is not exactly new. Colbert once asked a merchant named Le Gendre what the State could do to help. Le Gendre reportedly answered “Laissez-nous faire“. Centuries later, that wisdom remains ignored by the French government.

In defense of Euclideon’s Unlimited Details claims

There’s been a bit of a controversy recently regarding Euclideon’s “Unlimited Details” 3D rendering technology:

The video that caused all this stir is here:

This looks pretty good. Yet the video shows a number of limitations, like repeated patterns at high scale. These limitations may explain why John Carmack, Notch and others say that Euclideon is only implementing well-known algorithms and scamming people for funding.

Is it new? Is it real?

As for me, I wouldn’t call it a scam, and I don’t think the technology is that far out. In my opinion, Euclideon’s technology is a little more than just sparse voxel octrees (SVOs), though the general principles are most likely similar. That being said, I must agree with Notch that Euclideon is not above hyperlatives, something I discussed recently.

To more directly address the criticisms being made by Notch, what they have done looks like a real technical innovation worth funding. For example, they deal with shadows and illumination, something that is hard to do with SVOs. They demonstrated some basic animation. They don’t have to tell all of us the gory technical details and limitations, as long as they tell their investors. And from the interview, it seems that they had people do some due diligence.

Regarding Carmack’s tweet, it does not dismiss the technology. It’s public knowledge that Carmack is himself working on similar stuff. He said this:

In our current game title we are looking at shipping on two DVDs, and we are generating hundreds of gigs of data in our development before we work on compressing it down.  It’s interesting that if you look at representing this data in this particular sparse voxel octree format it winds up even being a more efficient way to store the 2D data as well as the 3D geometry data, because you don’t have packing and bordering issues.  So we have incredibly high numbers; billions of triangles of data that you store in a very efficient manner.  Now what is different about this versus a conventional ray tracing architecture is that it is a specialized data structure that you can ray trace into quite efficiently and that data structure brings you some significant benefits that you wouldn’t get from a triangular structure.  It would be 50 or 100 times more data if you stored it out in a triangular mesh, which you couldn’t actually do in practice.

Carmack and Notch have high stakes in the game. To them, a technology like Euclideon’s is threatening, even if mildly. Dismissing something new because we are experts in the old ways is so easy to do. But let’s not get used to it.

3D rendering before GPUs

What legitimacy do I have to give my opinion on this topic, you may ask? I wrote a game called Alpha Waves a couple of years before the game John Carmack claims was “the first 3D PC game ever!“. Alpha Waves is in the Guinness Book as “The first 3D platform game” (not the first 3D game, since there were several much simpler 3D games before).

More importantly, Alpha Waves was as far as I know the first 3D game offering interaction with all 3D objects in the environment: you could touch them, bounce off them, etc. Also new for the time, the game was 6-axis (all rotations were possible), fullscreen (most games at the time showed 3D in a quarter screen or half screen), and showed 20 to 50 3D objects simultaneously (the standard back then was 1 to 5). Games like Carmack’s Hovertank 3D had none of these capabilities. This blog post explains how this was achieved.

In short, I have been thinking about 3D rendering for a long time. I have written 3D rendering engines in assembly language at a time when there was no graphic card and no floating-point on the CPU. Back then, drawing a line or a circle was something you had to code by yourself. Not having GPUs or 3D APIs whatsoever meant that you attacked the problem from all angles, not just as “a polygon mesh with texture mapping”.

(As an aside, I’m still doing 3D these days, but even if we are pushing the boundaries of what today’s graphics cards can do (e.g. they are not optimized for rendering text in 3D on auto-stereoscopic displays), that’s much less relevant to the discussion.)

Exploring 3D rendering algorithms

In the late 1980s, when many modern 3D rendering algorithms were still to invent, I continued exploring ways to render more realistic 3D, as did many programmers at the time. I tried a number of ideas, both on paper and as prototypes. I tried software texture mapping (a la Wolfenstein), which looked positively ugly with only 16 simultaneous colors on screen. I reinvented Bresenham’s line drawing algorithm and then generalized it to 2D shapes like circles or ellipses, and then tried to further generalize for 3D planar surfaces and shapes, or to 3D rays of light.
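
For readers who never had to draw a line without an API: here is a minimal Python sketch of the kind of integer-only line drawing Bresenham’s algorithm provides (the real thing was hand-written assembly back then, but the idea is the same):

```python
def bresenham(x0, y0, x1, y1):
    """Rasterize a line using only integer additions and comparisons:
    no multiplication, no division, no floating point."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # running error term deciding when to step on each axis
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points
```

Generalizing that error-term idea to circles and ellipses, and then to planar 3D shapes, is exactly the kind of exploration described above.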

This was how you proceeded back then. There was no Internet accessible to normal folks, no on-line catalog of standard algorithms or code. We had books, magazines, person-to-person chat, computer clubs. As a result, we spent more time inventing from scratch. I tried a number of outlandish algorithms that I have never read anything about, which is a good thing because they didn’t work at all.

More to the point, I also designed something that, on paper, looked a bit like what Unlimited Details is supposed to be today. At the time, it went nowhere because it exceeded the available memory capacity by at least three orders of magnitude. Of course, we are talking about a time where having 512K of RAM in your machine was pretty exciting, so three orders of magnitude is only 512M, not a big deal today. It’s based on this old idea that I believe Euclideon’s claims have some merit.

What do we know about Unlimited Details?

Here are some facts you can gather about Unlimited Details technology. I will discuss below why I believe these observations teach us something about their technology.

  • The base resolution is 64 atoms per cubic millimeter. Like Notch, I thought: that’s 4×4×4. Please note that I write it’s the resolution, not the storage. I agree with Notch that they can’t be storing data about individual atoms, nor do they claim they do. It’s pretty obvious we see the same 3D objects again and again in their “world”.
  • They claim that their island is 1km x 1km. That means 4 million units along each direction, which is well within what a 32-bit integer coordinate can hold. So it’s not outlandish.
  • They claim repeatedly that they compute per pixel on screen. This is unlike typical 3D game rendering today, which computes per polygon, then renders polygons on screen. There are many per-pixel technologies, though: SVOs are one, but raytracing and others are also primarily per screen pixel.
  • There’s a lot of repetitive patterns in their video, particularly at large scale. For example, the land-water boundaries are very blockish. But I think this is not a limitation of the technology, just a result of them building the island quickly out of a logo picture. Below is an example that shows no such effect:

  • Notch points out that objects tend to all have the same direction, which he interprets as demonstrating that they use SVO, where rotations are hard to do.
  • On the contrary, close-ups of the ground or the picture above do not show such obvious pattern repetitions, and objects seem to have various orientations and, even more telling, to “intersect” or “touch” without obvious “cubic” boundaries. My impression is that at high-level, they organized objects in cubic blocks to create the demo, but that we shouldn’t infer too much from that.
  • Euclideon shows dynamic reflections on water surfaces. However, what we see are only perfectly planar water surfaces (no turbulence, …).
  • They have some form of dynamic shadow computation, but it’s still in progress. They talk about it, but it’s also quite visible around 1:30 in the video above, or even better, in the picture below from their web site:

  • They do software rendering at 1024×768 at around 20fps “unoptimized”, single-threaded on a modern i7 CPU. They claim that they can improve this by a factor of 4.
  • When they zoom out of the island, the colors fade. I think that this is a rather telling clue about their renderer. I didn’t see any transparency or fog, however.
  • On a similar note, there is no visible “flicker” of individual pixels as they zoom in or out, as you’d expect if they really picked up an individual atom. The color of nearby atoms being different, either their algorithm is very stable with respect to the atom it picks (i.e. it is not solely based on the direction of an incoming ray), or they average things out with neighboring atoms.
  • Last, and probably the most important point: they know quite well what others are doing, they are quoting them, and they are pointing out their difference.

Voxels and Sparse Voxel Octrees?

There are a number of people working with voxels and sparse voxel octrees. You can search for videos on YouTube. They all share a number of common traits:

  • They generally look blocky. The Euclideon videos look blocky at large scale, but the blocks themselves remain highly detailed. You never see uniform squares or cubes appear as in most SVO demos, e.g. Voxelstein 3D below. This may be due to the 4 atoms-per-mm resolution. But see the related memory problem below.

  • It’s difficult to do nice looking shadows and lighting with SVOs. Look on the web to see what I mean. The counter examples I saw used some form of raytracing, non real-time rendering, or were GPU demos. The shadows and lighting in Euclideon’s demos look quite smooth.

But there’s one big issue with SVOs, which is traversal time (see this article for a good background). Consider the following numbers: Euclideon claims to draw 1024×768 pixels, 20 times per second. That’s roughly 16M pixels per second. On a 2GHz CPU, that leaves on the order of 100 cycles per pixel to find the object and compute the color of the pixel. In the best case, that’s roughly 500 instructions per pixel, but don’t forget that an ILP level of 5 is quite hard to sustain. A single cache miss can easily cost you 100 cycles.
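
That budget arithmetic is easy to reproduce:

```python
def cycle_budget(width, height, fps, cpu_hz):
    """Cycles available per pixel for a single-threaded software renderer."""
    pixels_per_second = width * height * fps
    return cpu_hz / pixels_per_second

# Euclideon's claimed numbers on a 2GHz core:
budget = cycle_budget(1024, 768, 20, 2_000_000_000)
```

That comes out to about 127 cycles per pixel, the “order of 100 cycles” mentioned above.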

Memory considerations

And that’s really the big question I have with Euclideon’s claims: how do they minimize cache misses on a data set that is necessarily quite big? Consider the minimum amount of data you need for a sphere of radius r: its surface area is 4πr².

Even if we have atom data only for the surface, a sphere of 1000 units would use roughly 12 million atom storage elements. It’s not a big sphere either: a unit being reportedly 0.25mm, so that would be a sphere with a 25cm radius, 10 inches for imperialists. An object of the size of the elephant (1m) takes roughly 200M atoms, and the big statues must be around 500M atoms. This explains why “memory compression” is something that Euclideon repeatedly talks about in their video. My guess here is that there aren’t that many kinds of atoms in the elephant, for example. I’m tempted to think they have “elements” like in the physics world.
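
A quick check of those surface numbers (one “atom” per square unit of surface, with a unit being the reported 0.25mm):

```python
import math

def surface_atoms(radius_units):
    """Atoms needed to cover a sphere's surface, one per unit of area."""
    return 4 * math.pi * radius_units ** 2

small_sphere = surface_atoms(1000)  # 25cm radius: ~12.6 million atoms
elephant = surface_atoms(4000)      # elephant-sized: ~200 million atoms
```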

But memory compression doesn’t solve cache latency issues. That, combined with the lack of spatial or temporal flicker, tells me that they found a really smart way for their “atom search” to be stable and to preserve good memory locality. My guess is that they do this based on a “radius” of the incoming ray. If the camera is far from the object being looked at, you only pick “large” atoms considered representative of the whole. Identifying a good candidate “average” atom is something you can do at object encoding time.

Number of instructions per pixel

Assuming Euclideon solved memory latency so that, on average, they get, say, a good 200 instructions per pixel. We start with a resolution of 4 million units in the X, Y and Z direction, and we need to find the target voxel. So we start with (roughly) 22 bits along each axis. My guess is they may have less along the Z axis, but that doesn’t make a big difference. With an octree, they need, roughly, 22 steps through the octree to get down to the individual “atom”. That leaves 200/22, or roughly 10 machine instructions per octree traversal step. That’s really not a lot.

Of course, in general, you don’t need to get down to the individual atoms. The whole viability of the octree scheme is largely based on the fact that you can stop at a much higher granularity. In reality, the size of the atoms you need to find is a function of the screen resolution, not the individual atom resolution. We are therefore looking at more like 10 iterations instead of 22. But still, that won’t leave us with more than about 20 instructions.
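
The depth arithmetic in the last two paragraphs can be sketched as follows (assuming, as above, 4 million units per axis and a traversal that stops as soon as a node covers about one pixel):

```python
import math

def octree_levels(world_units):
    """Levels needed to descend from the root to a single unit."""
    return math.ceil(math.log2(world_units))

def effective_levels(world_units, units_per_pixel):
    """Levels actually traversed if we stop when a node spans ~1 pixel."""
    return math.ceil(math.log2(world_units / units_per_pixel))

full = octree_levels(4_000_000)              # 22 levels down to single atoms
# If one pixel covers ~4000 world units at the current viewing distance:
partial = effective_levels(4_000_000, 4000)  # ~10 levels
```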

Ah, but wait, an SVO algorithm also requires a small amount of ray tracing, i.e. for each pixel, you need to check along the way if a ray starting at that pixel is hitting things. You need to repeat along the path of the ray until you hit something. And ray displacement computations also take time. The amount of time depends on the particular algorithm, but to my knowledge, the best scenario is roughly a log of the number of possible hits along the way (i.e. you do another binary search). So if we keep the same 22 binary searches along the way, that leaves us with less than one instruction per step on average.

Clearly, these numbers explain why there is some skepticism about Euclideon's claims. Today's CPUs are just short of being able to do this, which is why the SVO demos we see require smaller worlds and lower resolutions. All this reduces the number of bits significantly, making it much easier to achieve acceptable performance. But this analysis also shows why a simple, standard SVO algorithm is unlikely to be what Euclideon uses.

Just short = Optimizations are useful

In general, I tend to stick with the First Rule of Optimization: Don't. But being "just short" is exactly when tiny optimizations can make the difference between "it works" and "it doesn't".

At the time of Alpha Waves, we were "just short" of being able to do 3D on a PC, because the mathematics of 3D required multiplications, which took tens of cycles on the machines of the time. The tiny optimization was to replace 9 multiplications, each costing 14 to 54 cycles, with three additions costing 4 cycles each (on the 68K CPU of my Atari ST). I often did multiple additions, but even so, that was less expensive. And that single optimization made the whole scheme viable: it made it possible for Alpha Waves to render 30 objects on screen when other programs of the time slowed down with three.
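To illustrate the flavor of that era's multiplication-free tricks (this is not the actual Alpha Waves code, just a well-known example of the genre), here is the Minsky-style incremental rotation, which rotates a point by a small fixed angle using only shifts and adds:

```c
#include <stdint.h>

/* Minsky-style incremental rotation: each call rotates (x, y)
   by roughly 2^-k radians using one shift and one add per axis,
   instead of the four multiplications of a textbook 2-D rotation.
   The deliberate "bug" of reusing the updated x keeps the point
   on a near-circle instead of spiraling out. */
static void rotate_step(int32_t *x, int32_t *y, int k) {
    *x -= *y >> k;
    *y += *x >> k;   /* uses the already-updated x on purpose */
}
```

On a 68000, where MULS cost up to 70 cycles and a shift-and-add pair only a handful, replacing per-vertex multiplications with incremental updates of this kind is exactly the sort of change that turned "just short" into "fast enough".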

So what kind of optimizations can we think about in the present case? Here are some semi-random ideas:

  • Assuming 21 bits instead of 22, or making the Z axis smaller (i.e. you could have 24+24+16, where the Z axis would be “only” 65 meters deep), you can make all three coordinates fit in a 64 bit value. So basically, each “atom” in the world is uniquely addressed by a 64-bit “address”. You can then rephrase the problem as “map a 64-bit value to a sparse data structure”.
  • You don’t need the order of bits in the “address” to be fixed. There are plenty of multimedia instructions for reshuffling bits these days, so you can regroup the coordinates so that each axis contributes 20 bits, 2 bits or a single bit at a time. In particular, if the X, Y and Z bits are interleaved bit by bit (a Morton order), each sequence of 3 consecutive bits serves directly as the child index for one step of the octree traversal.
  • If there’s no transparency, you can draw a ray from the camera to the atom, and know that you will stop exactly once, all atoms before it being transparent. If you have a variable size for atoms, you can have very large transparent atoms so that advancing the ray will be pretty fast across transparent regions. You can practically get back a factor of 20 in the instructions per iteration.
  • Computing the ray displacement using a variation on Bresenham’s algorithm, you can basically do the “hard” computation once for 8 rays using symmetries, keeping only the “simple” computations in the main loop.
  • At the first level, you can use memory mapping to let the hardware do the work for you. You can’t do this over the full 64 bits, mind you, since there are various per-OS limits on the largest range you can map. But taking advantage of the TLB means a single instruction does a 4-level table lookup with hardware caching. I’m pretty sure the Euclideon people don’t do that (I’m suggesting it only because I did operating system design and implementation for a little too long), but it would be cool if they did.
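The first two ideas above can be sketched concretely. This is only an illustration under the stated assumptions (21 bits per axis, Morton interleave); the naive bit-spreading loop stands in for the multimedia instructions (BMI2's PDEP, SSE shuffles) that would do it in a few cycles:

```c
#include <stdint.h>

/* Pack 21-bit X, Y, Z into one 64-bit "atom address"
   (21 * 3 = 63 bits), as a plain field split. */
static uint64_t pack(uint64_t x, uint64_t y, uint64_t z) {
    return (x << 42) | (y << 21) | z;
}

/* Spread the low 21 bits of v to every third bit position.
   A naive loop for clarity; PDEP would do this in one instruction. */
static uint64_t spread3(uint64_t v) {
    uint64_t r = 0;
    for (int i = 0; i < 21; ++i)
        r |= ((v >> i) & 1) << (3 * i);
    return r;
}

/* Morton (bit-interleaved) address: X, Y, Z bits alternate. */
static uint64_t morton(uint64_t x, uint64_t y, uint64_t z) {
    return (spread3(x) << 2) | (spread3(y) << 1) | spread3(z);
}

/* With a Morton key, the octree child index at a given level is
   just 3 consecutive bits of the key - no per-axis shifting. */
static unsigned child_index(uint64_t key, int level) {
    return (unsigned)((key >> (3 * level)) & 7);
}
```

The payoff is that the Morton layout turns each traversal step's index computation into a single shift-and-mask, which is how you claw back some of the per-step instruction budget computed earlier.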

There are obviously many more tricks required to achieve the objective. But I did a quick back of the envelope calculation on the average cost using some of the tricks above and a couple more, and I ended up with cycles to spare. So I believe it can be done on today’s hardware.


John Carmack is spot on when he says that productization is the real issue. Prototypes are easy, products are hard. But I wouldn't rule out seeing a game using that technology on current-generation systems.

Just my two bits.

