In defense of Euclideon’s Unlimited Details claims

There’s been a bit of a controversy recently regarding Euclideon’s “Unlimited Details” 3D rendering technology:

The video that caused all this stir is here:

This looks pretty good. Yet the video shows a number of limitations, like repeated patterns at large scale. These limitations may explain why John Carmack, Notch and others say that Euclideon is only implementing well-known algorithms and scamming people for funding.

Is it new? Is it real?

As for me, I wouldn’t call it a scam, and I don’t think the technology is that far out. Euclideon’s technology is, in my opinion, a little more than just sparse voxel octrees (SVOs), though the general principles are most likely similar. That being said, I must agree with Notch that Euclideon is anything but above hyperbole, something I discussed recently.

To more directly address the criticisms being made by Notch, what they have done looks like a real technical innovation worth funding. For example, they deal with shadows and illumination, something that is hard to do with SVOs. They demonstrated some basic animation. They don’t have to tell all of us the gory technical details and limitations, as long as they tell their investors. And from the interview, it seems that they had people do some due diligence.

Regarding Carmack’s tweet, it does not dismiss the technology. It’s public knowledge that Carmack is himself working on similar stuff. He said this:

In our current game title we are looking at shipping on two DVDs, and we are generating hundreds of gigs of data in our development before we work on compressing it down.  It’s interesting that if you look at representing this data in this particular sparse voxel octree format it winds up even being a more efficient way to store the 2D data as well as the 3D geometry data, because you don’t have packing and bordering issues.  So we have incredibly high numbers; billions of triangles of data that you store in a very efficient manner.  Now what is different about this versus a conventional ray tracing architecture is that it is a specialized data structure that you can ray trace into quite efficiently and that data structure brings you some significant benefits that you wouldn’t get from a triangular structure.  It would be 50 or 100 times more data if you stored it out in a triangular mesh, which you couldn’t actually do in practice.

Carmack and Notch have high stakes in this game. To them, a technology like Euclideon’s is threatening, even if only mildly. Dismissing something new because we are experts in the old ways is easy to do. But let’s not make a habit of it.

3D rendering before GPUs

What legitimacy do I have to give my opinion on this topic, you may ask? I wrote a game called Alpha Waves a couple of years before John Carmack’s claim to the title of “the first 3D PC game ever!“. That game is in the Guinness Book as “The first 3D platform game” (not the first 3D game, since there were several much simpler 3D games before it).

More importantly, Alpha Waves was as far as I know the first 3D game offering interaction with all 3D objects in the environment: you could touch them, bounce off them, etc. Also new for the time, the game was 6-axis (all rotations were possible), fullscreen (most games at the time showed 3D in a quarter screen or half screen), and showed 20 to 50 3D objects simultaneously (the standard back then was 1 to 5). Games like Carmack’s Hovertank 3D had none of these capabilities. This blog post explains how this was achieved.

In short, I have been thinking about 3D rendering for a long time. I wrote 3D rendering engines in assembly language at a time when there was no graphics card and no floating-point unit on the CPU. Back then, drawing a line or a circle was something you had to code yourself. Not having GPUs or 3D APIs whatsoever meant that you attacked the problem from all angles, not just as “a polygon mesh with texture mapping”.

(As an aside, I’m still doing 3D these days, but even if we are pushing the boundaries of what today’s graphics cards can do (e.g. they are not optimized for rendering text in 3D on auto-stereoscopic displays), that’s much less relevant to this discussion.)

Exploring 3D rendering algorithms

In the late 1980s, when many modern 3D rendering algorithms were still to be invented, I kept exploring ways to render more realistic 3D, as did many programmers at the time. I tried a number of ideas, both on paper and as prototypes. I tried software texture mapping (à la Wolfenstein), which looked positively ugly with only 16 simultaneous colors on screen. I reinvented Bresenham’s line-drawing algorithm, generalized it to 2D shapes like circles and ellipses, and then tried to generalize it further to 3D planar surfaces and shapes, or to 3D rays of light.

This was how you proceeded back then. There was no Internet accessible to normal folks, no on-line catalog of standard algorithms or code. We had books, magazines, person-to-person chat, computer clubs. As a result, we spent more time inventing from scratch. I tried a number of outlandish algorithms that I have never read anything about, which is a good thing because they didn’t work at all.

More to the point, I also designed something that, on paper, looked a bit like what Unlimited Details is supposed to be today. At the time, it went nowhere because it exceeded the available memory capacity by at least three orders of magnitude. Of course, we are talking about a time when having 512K of RAM in your machine was pretty exciting, so three orders of magnitude is only 512M, not a big deal today. It is because of this old idea that I believe Euclideon’s claims have some merit.

What do we know about Unlimited Details?

Here are some facts you can gather about Unlimited Details technology. I will discuss below why I believe these observations teach us something about their technology.

  • The base resolution is 64 atoms per cubic millimeter. Like Notch, I thought: that’s “4x4x4”. Please note that I write it’s the resolution, not the storage. I agree with Notch that they can’t be storing data about individual atoms, nor do they claim they do. It’s pretty obvious we see the same 3D objects again and again in their “world”.
  • They claim that their island is 1km x 1km. That means 4 million units along each direction, which is well within what a 32-bit integer coordinate can hold. So it’s not outlandish.
  • They claim repeatedly that they compute per pixel on screen. This is unlike typical 3D game rendering today, which computes per polygon, then renders polygons on screen. There are many per-pixel technologies, though: SVOs are one, but raytracing and others also work primarily per screen pixel.
  • There’s a lot of repetitive patterns in their video, particularly at large scale. For example, the land-water boundaries are very blockish. But I think this is not a limitation of the technology, just a result of them building the island quickly out of a logo picture. Below is an example that shows no such effect:

  • Notch points out that objects tend to all have the same direction, which he interprets as demonstrating that they use SVO, where rotations are hard to do.
  • On the contrary, close-ups of the ground or the picture above do not show such obvious pattern repetitions, and objects seem to have various orientations and, even more telling, to “intersect” or “touch” without obvious “cubic” boundaries. My impression is that, at a high level, they organized objects in cubic blocks to create the demo, but that we shouldn’t infer too much from that.
  • Euclideon shows dynamic reflections on water surfaces. However, what we see are only perfectly planar water surfaces (no turbulence, …).
  • They have some form of dynamic shadow computation, but it’s still in progress. They talk about it, but it’s also quite visible around 1:30 in the video above, or even better, in the picture below from their web site:

  • They do software rendering at 1024×768 at around 20fps “unoptimized”, single-threaded on a modern i7 CPU. They claim that they can improve this by a factor of 4.
  • When they zoom out of the island, the colors fade. I think that this is a rather telling clue about their renderer. I didn’t see any transparency or fog, however.
  • On a similar note, there is no visible “flicker” of individual pixels as they zoom in or out, as you’d expect if they really picked an individual atom each time. Since the colors of nearby atoms differ, either their algorithm is very stable with respect to the atom it picks (i.e. it is not solely based on the direction of an incoming ray), or they average things out with neighboring atoms.
  • Last, and probably the most important point: they know quite well what others are doing, they quote them, and they point out their differences.

Voxels and Sparse Voxel Octrees?

There are a number of people working with voxels and sparse voxel octrees. You can search for videos on YouTube. They all share a number of common traits:

  • They generally look blocky. The Euclideon videos look blocky at large scale, but the blocks themselves remain highly detailed. You never see uniform squares or cubes appear as in most SVO demos, e.g. Voxelstein 3D below. This may be due to the 4-atoms-per-mm resolution. But see the related memory problem below.

  • It’s difficult to do nice-looking shadows and lighting with SVOs. Look on the web to see what I mean. The counter-examples I saw used some form of raytracing, non-real-time rendering, or were GPU demos. The shadows and lighting in Euclideon’s demos look quite smooth.

But there’s one big issue with SVOs, which is traversal time (see this article for a good background). Consider the following numbers: Euclideon claims to draw 1024×768 pixels, 20 times per second. That’s roughly 16M pixels per second. On a 2GHz CPU, that leaves on the order of 100 cycles per pixel to find the object and compute the color of the pixel. In the best case, that’s roughly 500 instructions per pixel, but don’t forget that an ILP level of 5 is quite hard to sustain, and a single cache miss can easily cost you 100 cycles.
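For reference, here is that budget spelled out in a few lines of C; the 2GHz clock is my assumption:

```c
/* Back-of-the-envelope cycle budget for Euclideon's claimed numbers. */
#include <stdio.h>

int main(void)
{
    double pixels_per_sec = 1024.0 * 768.0 * 20.0;   /* ~15.7M pixels/s */
    double cycles_per_sec = 2.0e9;                   /* assumed 2GHz    */
    printf("%.1fM pixels/s, %.0f cycles/pixel\n",
           pixels_per_sec / 1e6,
           cycles_per_sec / pixels_per_sec);         /* ~127 cycles     */
    return 0;
}
```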

Memory considerations

And that’s really the big question I have with Euclideon’s claims: how do they minimize cache misses on a data set that is necessarily quite big? Consider the minimum amount of data you need for a sphere of radius R. The area of the sphere is 4πR².

Even if we have atom data only for the surface, a sphere with a radius of 1000 units would use roughly 12 million atom storage elements. It’s not a big sphere either: a unit being reportedly 0.25mm, that would be a sphere with a 25cm radius, 10 inches for imperialists. An object the size of the elephant (roughly a 1m radius) takes about 200M atoms, and the big statues must be around 500M atoms. This explains why “memory compression” is something that Euclideon repeatedly talks about in their video. My guess here is that there aren’t that many kinds of atoms in the elephant, for example. I’m tempted to think they have “elements”, like in the physics world.
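Here is that arithmetic as a small C program, using the reported 0.25mm unit and assuming one atom per surface unit squared:

```c
/* Surface-atom counts at 4 atoms per millimeter (0.25mm units). */
#include <stdio.h>

static double surface_atoms(double radius_mm)
{
    const double PI = 3.141592653589793;
    double r = radius_mm * 4.0;           /* millimeters -> units    */
    return 4.0 * PI * r * r;              /* one atom per unit^2     */
}

int main(void)
{
    printf("25cm-radius sphere: %.0fM atoms\n",
           surface_atoms(250.0) / 1e6);   /* ~13M atoms              */
    printf("1m-radius sphere:   %.0fM atoms\n",
           surface_atoms(1000.0) / 1e6);  /* ~201M atoms             */
    return 0;
}
```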

But memory compression doesn’t solve cache latency issues. That, combined with the lack of spatial or temporal flicker, tells me that they found a really smart way for their “atom search” to be stable and to preserve good memory locality. My guess is that they do this based on a “radius” of the incoming ray: if the camera is far from the object being looked at, you only pick “large” atoms considered representative of the whole, as sketched below. Identifying a good candidate “average” atom is something you can do at object encoding time.
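If that guess is right, one simple way to make the search stable, and this is pure speculation on my part with hypothetical parameters, is to derive the stopping depth of the search directly from the distance and the angular size of a pixel, so that neighboring rays pick the same “large atom”:

```c
/* Speculative level-of-detail rule: stop the octree descent at the
 * level whose nodes cover about one pixel at the current distance.
 * Both parameters are hypothetical, nothing Euclideon published. */
#include <math.h>

int stop_bits(double distance_units, double radians_per_pixel)
{
    double node_size = distance_units * radians_per_pixel;
    if (node_size < 1.0)
        node_size = 1.0;                /* never go below atom size  */
    return (int)ceil(log2(node_size));  /* node covers 2^bits units  */
}
```

Because nearby rays compute the same stopping level, they touch the same nodes, which is exactly what you want for memory locality.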

Number of instructions per pixel

Assume Euclideon solved memory latency so that, on average, they get, say, a good 200 instructions per pixel. We start with a resolution of 4 million units along the X, Y and Z directions, and we need to find the target voxel. So we start with (roughly) 22 bits along each axis. My guess is they may have fewer along the Z axis, but that doesn’t make a big difference. With an octree, they need roughly 22 steps through the octree to get down to an individual “atom”. That leaves 200/22, or roughly 10 machine instructions per octree traversal step. That’s really not a lot.

Of course, in general, you don’t need to get down to individual atoms. The whole viability of the octree scheme is largely based on the fact that you can stop at a much coarser granularity. In reality, the size of the atoms you need to find is a function of the screen resolution, not the individual atom resolution. We are therefore looking at more like 10 iterations instead of 22. But still, that won’t leave us with more than about 20 instructions per step.
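To make that budget concrete, here is a minimal sketch of such a descent loop. The node layout and child indexing are entirely my own assumptions; Euclideon has published nothing of the sort:

```c
/* Hypothetical octree descent toward the atom containing (x,y,z).
 * 'bits' is the size of the current node as a power of two (22 at
 * the root); 'pixel_bits' is the level at which a node projects to
 * about one pixel (see stop_bits above). */
typedef struct Node {
    struct Node *child[8];      /* NULL where the octant is empty    */
    unsigned     color;         /* representative "average atom"     */
} Node;

unsigned trace(const Node *node, unsigned x, unsigned y, unsigned z,
               int bits, int pixel_bits)
{
    while (bits > pixel_bits) { /* stop once nodes are pixel-sized   */
        bits--;
        int octant = (((x >> bits) & 1) << 2)
                   | (((y >> bits) & 1) << 1)
                   |  ((z >> bits) & 1);
        const Node *next = node->child[octant];
        if (!next)
            break;              /* empty octant: settle for parent   */
        node = next;
    }
    return node->color;
}
```

Each level costs a decrement, a few shifts and masks, one load and one test: close to the 10-instruction budget, and the load is exactly where cache misses bite.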

Ah, but wait: an SVO algorithm also requires a small amount of ray marching, i.e. for each pixel, you need to check along the way whether a ray cast through that pixel hits something, and repeat along the path of the ray until you do. Ray displacement computations also take time. How much depends on the particular algorithm, but to my knowledge, the best scenario is roughly a log of the number of possible hits along the way (i.e. you do another binary search). So if we keep the same 22 binary search steps along the way, that leaves us with less than one instruction per step on average.

Clearly, these numbers explain why there is some skepticism about Euclideon’s claims. Today’s CPUs are just short of being able to do this, which is why the SVO demos we see require smaller worlds and lower resolutions. All this reduces the number of bits significantly, making it much easier to achieve decent performance. But this analysis shows why a simple, standard SVO algorithm is unlikely to be what Euclideon uses.

Just short = Optimizations are useful

In general, I tend to stick with the First Rule of Optimization: Don’t. But being “just short” is exactly when tiny optimizations can make the difference between “it works” and “it doesn’t”.

At the time of Alpha Waves, we were “just short” of being able to do 3D on a PC, because the mathematics of 3D required multiplications, which took tens of cycles on the machines of the time. The tiny optimization was to replace 9 multiplications costing 14 to 54 cycles each with three additions costing 4 cycles each (on the 68K CPU of my Atari ST). I often needed multiple additions, but even so, it was less expensive. That single optimization made the whole scheme viable: it made it possible for Alpha Waves to render 30 objects on screen when other programs of the time slowed down with 3.
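From memory, the trick looked something like the following sketch. This is a reconstruction, not the original code, and the fixed-point scaling and per-row setup are omitted:

```c
/* Speculative reconstruction of the "additions instead of
 * multiplications" trick: vertices sit on a regular grid, so instead
 * of multiplying each vertex by the 3x3 rotation matrix (9 multiplies),
 * you walk the grid adding the rotated basis vectors (3 additions). */
typedef struct { int x, y, z; } Vec3;   /* fixed-point coordinates   */

/* 'origin' already includes the j*ey + k*ez contributions; 'ex' is
 * the rotated X basis vector, computed once per object per frame. */
void transform_row(Vec3 origin, Vec3 ex, int count, Vec3 *out)
{
    Vec3 p = origin;
    for (int i = 0; i < count; i++) {
        out[i] = p;
        p.x += ex.x;                    /* three additions per vertex */
        p.y += ex.y;                    /* instead of nine multiplies */
        p.z += ex.z;
    }
}
```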

So what kind of optimizations can we think about in the present case? Here are some semi-random ideas:

  • Assuming 21 bits instead of 22, or making the Z axis smaller (i.e. you could have 24+24+16, where the Z axis would be “only” about 16 meters deep), you can make all three coordinates fit in a 64-bit value. So basically, each “atom” in the world is uniquely addressed by a 64-bit “address”. You can then rephrase the problem as “map a 64-bit value to a sparse data structure”.
  • You don’t need the order of bits in the “address” to be fixed. There are plenty of multimedia instructions for reshuffling bits these days. Among other things, you can interleave the X, Y and Z bits in groups of 20, 2 or 1 bit. For example, if X:Y:Z are interleaved bit by bit, each 3-bit group directly serves as the child index for one step of the octree traversal (see the sketch after this list).
  • If there’s no transparency, a ray from the camera stops exactly once, all atoms before the hit being transparent. If atoms have variable sizes, you can use very large transparent atoms so that advancing the ray across empty regions is fast. You can practically get back a factor of 20 in the instructions per iteration.
  • By computing the ray displacement using a variation on Bresenham’s algorithm, you can basically do the “hard” computation once for 8 rays using symmetries, keeping only the “simple” computations in the main loop.
  • At the first level, you can use memory mapping to let the hardware do the work for you. You can’t do this on all 64 bits, mind you, since there are various per-OS limits on the largest range you can map. But taking advantage of the TLB means a single instruction will do a 4-level table lookup with hardware caching. I’m pretty sure the Euclideon people don’t do that (I’m suggesting it only because I did operating systems design and implementation for a little too long), but it would be cool if they did.

There are obviously many more tricks required to achieve the objective. But I did a quick back-of-the-envelope calculation of the average cost using some of the tricks above and a couple more, and I ended up with cycles to spare. So I believe it can be done on today’s hardware.

Conclusion

John Carmack is spot on when he says that productization is the real issue. Prototypes are easy, products are hard. But I wouldn’t rule out seeing a game using that technology on current-generation systems.

Just my two bits.

7 thoughts on “In defense of Euclideon’s Unlimited Details claims”

  1. Interesting post, and in fact your Alpha Waves is what led me to your blog in the first place (though I’m from the enemy camp – an Amiga fan). You should post more on these topics.

    I think all of this could be in production today. With GPGPU it would be feasible to move the rendering over to the GPU, where this technique would render all the static geometry in amazing detail and produce a high-quality depth and stencil map, which would let the standard 3D pipeline do all the animated meshes in a second rendering phase.

  2. Hi Christophe,

    Thank you for this post. What a refreshment! Honestly, I think you’re among the rare people making a genuine effort to understand how such a technology could work. So many “experienced” people in the industry are dismissing it right away, just like scientists would dismiss yet another perpetual motion machine – except that scientists back their statements with scientific arguments. I mean, nobody has proven that Euclideon’s algorithm cannot exist, or that it couldn’t work without violating the laws of thermodynamics.

    I think since 3D chips (GPUs) became successful – which is admittedly a great advance for games and computing – almost nobody is brave enough to “reinvent the wheel” anymore, like demomakers did at the time, because it’s not worth it anymore. The argument would go: “Just use the latest DirectX hardware and enjoy great graphics, which are still improving X% every year – so what are you complaining about?”

    I wrote a few graphical demos in assembly language at the time, and – although I never wrote a raytracer, to be honest – I agree with many of your statements. What makes me laugh is how many people claim that Euclideon’s scene is “easy to render” because it’s just a bunch of objects replicated all over. In reality, as far as I know, *nobody* today is able to accomplish that, yet everyone claims it’s easy to do. As you said, in practice the replication of objects (oriented differently) only helps with the storage problem – not much with the rendering side of things. Funny how people claim it’s “yet another voxel octree raytracer” even though it blows away any GPU-based voxel octree raytracer implementation I know of, by at least 3 orders of magnitude.

    I also have a few ideas/remarks:
    – AFAIK nobody ever mentioned that they’re dealing with atom *volumes*. What about atom *surfaces*? It could be more efficient than volumes, and I suppose they might be doing that.
    – During all these years, could he (they) have found a clever way to “solve” the visibility problem (in the general sense) without the use of octrees? I mean, hash functions can be used to find an element in a hash table in only O(1) time, because they turn an input into N output bits. So what about an M-ary 3D tree that would divide space into N^3 compartments at each step? Maybe this is crazy but… who knows? It could be insanely efficient if it existed…

    Cheers

  3. I wondered what it would take to replicate the results that Euclideon is showing (and whether I might be able to do that myself, even if I used the GPU and ran at 1 FPS). But a simple calculation led me to approximately 1 microsecond per pixel, which is too little to do anything meaningful.

    You seem to be the only one who has posted an analysis of the videos without assuming it’s a scam. You collected quite a few interesting facts and ideas, some of which I did not know yet. Thanks!

    One thing you seem to have missed is that Euclideon explained that the water reflections are rendered by duplicating the world upside down and tinting it blue. So actually, their engine does not do reflections.

    A lot of their movies (they seem to have been removing them from the Internet) show an early version of their engine which uses a very limited number of colors (some only one). I believe that with their approach, colors are quite hard to do. So this is another hint. (Maybe this is also why the island is in grayscale from a far distance.)

    1. Your remark about the colors is intriguing; I hadn’t noticed that. But in any case, I think the greatest achievement is (or would be) to solve the “visibility problem” in a meaningful way, i.e. determining which object (and which location within that object) each pixel belongs to. There are countless ways to represent an object, each with its own tradeoffs. I personally think they’re dealing with surfaces only, not with volumes.

      1 microsecond is plenty of time. It’s 1000 clock cycles at 1GHz, where you could execute around 2000 assembly instructions per core. 3D is typically easy to parallelize, so if you have a 4-core CPU you could execute around 8000 instructions within 1 microsecond. If you could run it on a modern GPU, you’d get almost two orders of magnitude more available power. In any case, 80-90% of the instructions you execute within that microsecond would be part of an inner loop, and thus carefully chosen for maximum efficiency.

      Maybe someone could create a Facebook group (or something similar) for the purpose of discussing this? Just in case, I’m at https://www.facebook.com/joel.bourquard

  4. Now, 4 years later, we can say for certain that Euclideon wasn’t a scam, though their algorithm is nowhere near usable for games (rendering quality and FPS are too low) and they lost interest in that market.

    Based on the little information that was available 2 years ago, I tried to reconstruct and implement the rendering algorithm. I managed to get a ~10fps SVO renderer. When they finally released their patent ( https://www.google.com/patents/WO2014043735A1 ), I found out that I wasn’t too far off. Ironically, the algorithm is particularly well suited for Minecraft-like games.

    If you’re interested, my implementation is free and open-source ( https://github.com/bcmpinc/voxel-engine ). I also keep a blog about it ( https://bcmpinc.wordpress.com ), which I rarely update. Currently I’m working on porting it to the GPU, as the CPU (single core) seems limited to 10fps at 1024×768, which simply isn’t enough for a game. Porting isn’t easy, because the algorithm is strongly recursive, but I recently found out that it doesn’t need to be. Also, here is an (already quite old) movie:
