Archive for the ‘Mathematics’ Category

In defense of Euclideon’s Unlimited Details claims

There’s been a bit of a controversy recently regarding Euclideon’s “Unlimited Details” 3D rendering technology:

The video that caused all this stir is here:

This looks pretty good. Yet the video shows a number of limitations, like repeated patterns at high scale. These limitations may explain why John Carmack, Notch and others say that Euclideon is only implementing well-known algorithms and scamming people for funding.

Is it new? Is it real?

As for me, I wouldn't call it a scam, and I don't think the technology is that far out. Euclideon's technology is, in my opinion, a little more than just sparse voxel octrees (SVOs), though the general principles are most likely similar. That being said, I must agree with Notch that Euclideon is anything but above hyperlatives, something I discussed recently.

To address Notch's criticisms more directly: what they have done looks like real technical innovation worth funding. For example, they deal with shadows and illumination, something that is hard to do with SVOs. They demonstrated some basic animation. They don't have to tell all of us the gory technical details and limitations, as long as they tell their investors. And from the interview, it seems that they had people do some due diligence.

Regarding Carmack’s tweet, it does not dismiss the technology. It’s public knowledge that Carmack is himself working on similar stuff. He said this:

In our current game title we are looking at shipping on two DVDs, and we are generating hundreds of gigs of data in our development before we work on compressing it down.  It’s interesting that if you look at representing this data in this particular sparse voxel octree format it winds up even being a more efficient way to store the 2D data as well as the 3D geometry data, because you don’t have packing and bordering issues.  So we have incredibly high numbers; billions of triangles of data that you store in a very efficient manner.  Now what is different about this versus a conventional ray tracing architecture is that it is a specialized data structure that you can ray trace into quite efficiently and that data structure brings you some significant benefits that you wouldn’t get from a triangular structure.  It would be 50 or 100 times more data if you stored it out in a triangular mesh, which you couldn’t actually do in practice.

Carmack and Notch have high stakes in the game. To them, a technology like Euclideon’s is threatening, even if mildly. Dismissing something new because we are experts in the old ways is so easy to do. But let’s not get used to it.

3D rendering before GPUs

What legitimacy do I have myself to give my opinion on this topic, you may ask? I wrote a game called Alpha Waves a couple of years before John Carmack's claim to the title of "the first 3D PC game ever!". That game is in the Guinness Book as "The first 3D platform game" (not the first 3D game, since there were several much simpler 3D games before).

More importantly, Alpha Waves was as far as I know the first 3D game offering interaction with all 3D objects in the environment: you could touch them, bounce off them, etc. Also new for the time, the game was 6-axis (all rotations were possible), fullscreen (most games at the time showed 3D in a quarter screen or half screen), and showed 20 to 50 3D objects simultaneously (the standard back then was 1 to 5). Games like Carmack’s Hovertank 3D had none of these capabilities. This blog post explains how this was achieved.

In short, I have been thinking about 3D rendering for a long time. I have written 3D rendering engines in assembly language at a time when there was no graphics card and no floating-point unit on the CPU. Back then, drawing a line or a circle was something you had to code by yourself. Not having GPUs or 3D APIs whatsoever meant that you attacked the problem from all angles, not just as "a polygon mesh with texture mapping".

(As an aside, I'm still doing 3D these days, but even if we are pushing the boundaries of what today's graphics cards can do (e.g. they are not optimized for rendering text in 3D on auto-stereoscopic displays), that's much less relevant to the discussion.)

Exploring 3D rendering algorithms

In the late 1980s, when many modern 3D rendering algorithms had yet to be invented, I continued exploring ways to render more realistic 3D, as did many programmers at the time. I tried a number of ideas, both on paper and as prototypes. I tried software texture mapping (a la Wolfenstein), which looked positively ugly with only 16 simultaneous colors on screen. I reinvented Bresenham's line drawing algorithm, generalized it to 2D shapes like circles or ellipses, and then tried to generalize it further to 3D planar surfaces and shapes, or to 3D rays of light.

This was how you proceeded back then. There was no Internet accessible to normal folks, no on-line catalog of standard algorithms or code. We had books, magazines, person-to-person chat, computer clubs. As a result, we spent more time inventing from scratch. I tried a number of outlandish algorithms that I have never read anything about, which is a good thing because they didn’t work at all.

More to the point, I also designed something that, on paper, looked a bit like what Unlimited Details is supposed to be today. At the time, it went nowhere because it exceeded the available memory capacity by at least three orders of magnitude. Of course, we are talking about a time when having 512K of RAM in your machine was pretty exciting, so three orders of magnitude is only 512M, not a big deal today. It is because of this old idea that I believe Euclideon's claims have some merit.

What do we know about Unlimited Details?

Here are some facts you can gather about Unlimited Details technology. I will discuss below why I believe these observations teach us something about their technology.

  • The base resolution is 64 atoms per cubic millimeter. Like Notch, I thought: that's 4×4×4. Please note that I write it's the resolution, not the storage. I agree with Notch that they can't be storing data about individual atoms, nor do they claim they do. It's pretty obvious we see the same 3D objects again and again in their "world".
  • They claim that their island is 1km x 1km. That means 4 million units along each direction, which is well within what a 32-bit integer coordinate can hold. So it’s not outlandish.
  • They claim repeatedly that they compute per pixel on screen. This is unlike typical 3D game rendering today, which computes per polygon, then renders polygons on screen. There are many per-pixel technologies, though: SVOs are one, but raytracing and others are also primarily per screen pixel.
  • There’s a lot of repetitive patterns in their video, particularly at large scale. For example, the land-water boundaries are very blockish. But I think this is not a limitation of the technology, just a result of them building the island quickly out of a logo picture. Below is an example that shows no such effect:

  • Notch points out that objects tend to all have the same direction, which he interprets as demonstrating that they use SVO, where rotations are hard to do.
  • On the contrary, close-ups of the ground or the picture above do not show such obvious pattern repetitions, and objects seem to have various orientations and, even more telling, to "intersect" or "touch" without obvious "cubic" boundaries. My impression is that, at a high level, they organized objects in cubic blocks to create the demo, but that we shouldn't infer too much from that.
  • Euclideon shows dynamic reflections on water surfaces. However, what we see are only perfectly planar water surfaces (no turbulence, …).
  • They have some form of dynamic shadow computation, but it’s still in progress. They talk about it, but it’s also quite visible around 1:30 in the video above, or even better, in the picture below from their web site:

  • They do software rendering at 1024×768 at around 20fps “unoptimized”, single-threaded on a modern i7 CPU. They claim that they can improve this by a factor of 4.
  • When they zoom out of the island, the colors fade. I think that this is a rather telling clue about their renderer. I didn’t see any transparency or fog, however.
  • On a similar note, there is no visible “flicker” of individual pixels as they zoom in or out, as you’d expect if they really picked up an individual atom. The color of nearby atoms being different, either their algorithm is very stable with respect to the atom it picks (i.e. it is not solely based on the direction of an incoming ray), or they average things out with neighboring atoms.
  • Last, and probably the most important point: they know quite well what others are doing, they are quoting them, and they are pointing out their difference.

Voxels and Sparse Voxel Octrees?

There are a number of people working with voxels and sparse voxel octrees. You can search videos on YouTube. They all share a number of common traits:

  • They generally look blocky. The Euclideon videos look blocky at large scale, but all the blocks themselves remain highly detailed. You never see uniform squares or cubes appear as in most SVO demos, e.g. Voxelstein 3D below. This may be due to the 4 atoms-per-mm resolution. But see the related memory problem below.

  • It’s difficult to do nice looking shadows and lighting with SVOs. Look on the web to see what I mean. The counter examples I saw used some form of raytracing, non real-time rendering, or were GPU demos. The shadows and lighting in Euclideon’s demos look quite smooth.

But there’s one big issue with SVOs, which is traversal time (see this article for a good background). Consider the following numbers: Euclideon claims to draw 1024×768 pixels, 20 times per second. That’s roughly 16M pixels per second. On a 2GHz CPU, that leaves on the order of 100 cycles per pixel to find the object and compute the color of the pixel. In the best case, that’s roughly 500 instructions per pixel, but don’t forget that an ILP level of 5 is quite hard to sustain. A single cache miss can easily cost you 100 cycles.

Memory considerations

And that’s really the big question I have with Euclideon’s claims. How do they minimize cache misses on a data set that is necessarily quite big? Consider the minimum amount of data you need for a sphere of radius R, The area of the sphere is 4πr2.

Even if we have atom data only for the surface, a sphere of 1000 units would use roughly 12 million atom storage elements. It's not a big sphere either: a unit reportedly being 0.25mm, that would be a sphere with a 25cm radius, 10 inches for imperialists. An object the size of an elephant (1m) takes roughly 200M atoms, and the big statues must be around 500M atoms. This explains why "memory compression" is something that Euclideon repeatedly talks about in their video. My guess here is that there aren't that many kinds of atoms in the elephant, for example. I'm tempted to think they have "elements" like in the physics world.
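
For what it's worth, the same estimate in code form; a sketch only, taking the 4 units/mm figure from the claims above and applying the surface-area formula to a couple of example radii (the 1 m radius roughly reproduces the elephant estimate):

```cpp
#include <cstdio>

int main() {
    // Surface "atoms" needed for a sphere, at 4 units per millimeter.
    const double pi = 3.14159265358979;
    const double units_per_mm = 4.0;
    const double radii_mm[] = { 250.0, 1000.0 };            // 25 cm and 1 m radius

    for (double r_mm : radii_mm) {
        double r_units = r_mm * units_per_mm;
        double atoms   = 4.0 * pi * r_units * r_units;      // sphere area, in unit^2
        printf("radius %4.0f mm -> ~%.0f million surface atoms\n", r_mm, atoms / 1e6);
    }
    return 0;   // ~12.6 million and ~201 million respectively
}
```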

But memory compression doesn’t solve cache latency issues. That, combined with the lack of spatial or temporal flicker, tells me that they found a really smart way for their “atom search” to be stable and to preserve good memory locality. My guess is that they do this based on a “radius” of the incoming ray. If the camera is far from the object being looked at, you only pick “large” atoms considered representative of the whole. Identifying a good candidate “average” atom is something you can do at object encoding time.

Number of instructions per pixel

Assume Euclideon solved memory latency so that, on average, they get, say, a good 200 instructions per pixel. We start with a resolution of 4 million units in the X, Y and Z directions, and we need to find the target voxel. So we start with (roughly) 22 bits along each axis. My guess is they may have less along the Z axis, but that doesn't make a big difference. With an octree, they need, roughly, 22 steps through the octree to get down to the individual "atom". That leaves 200/22, or roughly 10 machine instructions per octree traversal step. That's really not a lot.

Of course, in general, you don't need to get down to the individual atoms. The whole viability of the octree scheme is largely based on the fact that you can stop at a much higher granularity. In reality, the size of the atoms you need to find is a function of the screen size, not the individual atom resolution. We are therefore looking at more like 10 iterations instead of 22. But still, that won't leave us with more than about 20 instructions.
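
Again in code form, here is how those depth numbers come about. This is a sketch of my own reasoning, assuming the 1 km world at 4 units/mm, a 1024-pixel-wide screen, and the optimistic 200 instructions per pixel from the previous paragraph:

```cpp
#include <cstdio>
#include <cmath>

int main() {
    // Octree depth needed for a 1 km world at 4 units per millimeter.
    const double world_units    = 1000.0 * 1000.0 * 4.0;          // 4,000,000 units across
    const int    levels_to_atom = (int)ceil(log2(world_units));   // ~22 levels to one atom

    // In practice you stop when a node projects to about one pixel,
    // so the useful depth is closer to log2(screen width in pixels).
    const int practical_levels = (int)ceil(log2(1024.0));         // ~10 levels

    const double instructions_per_pixel = 200.0;                  // optimistic assumption
    printf("levels down to one atom  : %d\n", levels_to_atom);
    printf("practical levels         : %d\n", practical_levels);
    printf("instructions, full depth : ~%.0f per step\n",
           instructions_per_pixel / levels_to_atom);               // ~9
    printf("instructions, practical  : ~%.0f per step\n",
           instructions_per_pixel / practical_levels);             // ~20
    return 0;
}
```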

Ah, but wait, an SVO algorithm also requires a small amount of ray tracing, i.e. for each pixel, you need to check along the way if a ray starting at that pixel is hitting things. You need to repeat along the path of the ray until you hit something. And ray displacement computations also take time. The amount of time depends on the particular algorithm, but to my knowledge, the best scenario is roughly a log of the number of possible hits along the way (i.e. you do another binary search). So if we keep the same 22 binary searches along the way, that leaves us with less than one instruction per step on average.

Clearly, these numbers explain why there is some skepticism of Euclideon’s claims. The CPUs today are just short of being able to do this. Which is why the SVO demos we see require smaller worlds, lower resolution. All this reduces the number of bits significantly, making it much easier to achieve correct performance. But this analysis shows why a simple standard SVO algorithm is unlikely to be what Euclideon uses.

Just short = Optimizations are useful

In general, I tend to stick with the First Rule of Optimization: Don't. But being "just short" is exactly when tiny optimizations can make a whole lot of difference between "it works" and "it doesn't".

At the time of Alpha Waves, we were "just short" of being able to do 3D on a PC, because the mathematics of 3D required multiplications, which took tens of cycles on the machines of the time. The tiny optimization was to replace 9 multiplications costing 14 to 54 cycles each with three additions costing 4 cycles each (on the 68K CPU of my Atari ST). I often did multiple additions, but even so, that was less expensive. And that single optimization made the whole scheme viable: it made it possible for Alpha Waves to render 30 objects on screen when other programs of the time slowed down with 3.
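
I no longer have that original code at hand, but the flavor of this kind of trick is easy to illustrate: incremental (forward-difference) evaluation replaces a per-step multiplication with an addition. The snippet below is a made-up, minimal example of the idea, not the actual Alpha Waves optimization:

```cpp
#include <cstdio>

int main() {
    // Evaluate y = a * x for x = 0, 1, 2, ... without multiplying in the loop:
    // consecutive values differ by the constant a, so one addition per step suffices.
    const int a = 37;
    int y = 0;                                   // y = a * 0
    for (int x = 0; x < 8; ++x) {
        printf("a * %d = %3d (incremental) vs %3d (multiplied)\n", x, y, a * x);
        y += a;                                  // a * (x + 1) = a * x + a
    }
    return 0;
}
```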

So what kind of optimizations can we think about in the present case? Here are some semi-random ideas:

  • Assuming 21 bits instead of 22, or making the Z axis smaller (i.e. you could have 24+24+16, where the Z axis would be "only" about 16 meters deep), you can make all three coordinates fit in a 64-bit value. So basically, each "atom" in the world is uniquely addressed by a 64-bit "address". You can then rephrase the problem as "map a 64-bit value to a sparse data structure".
  • You don’t need the order of bits in the “address” to be fixed. There are plenty of multimedia instructions for reshuffling bits these days. Among other things, you can work with X:Y:Z triples where X, Y and Z have 20 bits, 2 bits or 1 bit. For example, if X:Y:Z are packed bit-by-bit, a sequence of 3 bits can serve as an index to resolve a step of the octree traversal.
  • If there’s no transparency, you can draw a ray from the camera to the atom, and know that you will stop exactly once, all atoms before it being transparent. If you have a variable size for atoms, you can have very large transparent atoms so that advancing the ray will be pretty fast across transparent regions. You can practically get back a factor of 20 in the instructions per iteration.
  • Computing the ray displacement using a variation on Bresenham's algorithm, you can basically do the "hard" computation once for 8 rays using symmetries, keeping only the "simple" computations in the main loop.
  • At the first level, you can use memory mapping to let the hardware do the work for you. You can't do this on 64 bits, mind you, since there are various per-OS limits on the largest range that you can map. But taking advantage of the TLB means a single instruction will do a 4-level table lookup with hardware caching. I'm pretty sure Euclideon people don't do that (I'm suggesting it only because I did operating systems design and implementation for a little too long), but it would be cool if they did.
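
To make the 64-bit "address" and bit-shuffling ideas above a little more concrete, here is a minimal sketch, my own illustration under those assumptions and certainly not Euclideon's code, of interleaving three 21-bit coordinates into a single 64-bit key (classic Morton/Z-order encoding), so that each group of 3 bits directly gives the child index for one octree traversal step:

```cpp
#include <cstdint>
#include <cstdio>

// Spread the low 21 bits of v so that they occupy every third bit
// (standard "magic bits" Morton encoding; 3 x 21 bits fit in 64 bits).
static uint64_t spread3(uint64_t v) {
    v &= 0x1FFFFF;                                   // keep 21 bits
    v = (v | (v << 32)) & 0x001F00000000FFFFULL;
    v = (v | (v << 16)) & 0x001F0000FF0000FFULL;
    v = (v | (v << 8))  & 0x100F00F00F00F00FULL;
    v = (v | (v << 4))  & 0x10C30C30C30C30C3ULL;
    v = (v | (v << 2))  & 0x1249249249249249ULL;
    return v;
}

// Pack three 21-bit coordinates into one interleaved 64-bit "atom address".
static uint64_t atom_address(uint32_t x, uint32_t y, uint32_t z) {
    return spread3(x) | (spread3(y) << 1) | (spread3(z) << 2);
}

int main() {
    uint64_t addr = atom_address(1234567, 54321, 777);

    // Each traversal step consumes 3 bits, from the most significant level down:
    // the 3-bit group is directly the child index (0..7) at that level.
    for (int level = 20; level >= 0; --level) {
        unsigned child = (unsigned)((addr >> (3 * level)) & 7);
        printf("depth %2d -> child %u\n", 20 - level, child);
    }
    return 0;
}
```

A nice property of such a key is that atoms close to each other in space tend to be close in memory, which is exactly the kind of locality the cache-miss discussion above calls for.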

There are obviously many more tricks required to achieve the objective. But I did a quick back-of-the-envelope calculation of the average cost using some of the tricks above and a couple more, and I ended up with cycles to spare. So I believe it can be done on today's hardware.

Conclusion

John Carmack is spot on when he says that productization is the real issue. Prototypes are easy, products are hard. But I wouldn't rule out seeing a game using that technology on current generation systems.

Just my two bits.

P versus NP

A researcher from HP Labs named Vinay Deolalikar announced a new proof that complexity classes P and NP are different. The paper made it to the Slashdot front-page (more on this below).

What constitutes a “proof”?

This is far from being the first claimed proof. There are about as many proofs that P is the same as NP as there are proofs of the opposite. With, for good measure, a few papers claiming that this is really undecidable… This just shows that the problem is not solved just because of one more paper. Indeed, the new “proof” takes more than 60 pages to explain, and it references a number of equally complex theorems and proofs.

This is interesting, because it means that very few people, except some of the most advanced specialists in the field, will be able to understand the proof by themselves. Instead, the vast majority (including the vast majority of scientists) will accept the conclusion of a very small number of people regarding the validity of the proof. And since understanding the proof is so difficult, it may very well be that even the most experienced mathematicians will be reluctant to draw very clear-cut conclusions.

Sometimes, clear-cut conclusions can be drawn. When I was a student, another student made the local news by announcing he had a proof of Fermat’s last theorem. We managed to get a copy of the paper, and shared that with our math teacher. He looked at it for about five minutes, and commented: “This is somewhat ridiculously wrong”.

However, in most cases, reaching such a definite conclusion is difficult. This puts us in the uncomfortable position of having to trust someone else with a better understanding than ours.

Understanding things by yourself

That being said, it’s always interesting to try and understand things by yourself. So I tried to read the summary of the proof. I don’t understand a tenth of it. However, the little I understood seemed really interesting.

If I can venture into totally bogus analogies, it looks to me like what Deolalikar did is build the mathematical equivalent of water freezing into ice, and drew conclusions from it. Specifically, when water freezes, the phase change happens not globally, but in local clusters. You can infer some things about the cluster configuration (e.g. crystal structure) that were not there in the liquid configuration. In other words, the ice cube is “simpler” than water.

Now, replace atoms with mathematical variables, forces between atoms with some well-chosen Markov properties that happen to be local (like forces between atoms). The frozen cube corresponds to a P-class problem where you have some kind of strong proximity binding, so that you can deduce things locally. By contrast, liquid water corresponds to NP-class problems where you can’t deduce anything about a remote atom from what you learn about any number of atoms. Roughly speaking, Deolalikar’s proof is that if you can tell water from ice, then P and NP classes are distinct.

Of course, this is only an analogy, and it is very limited like any analogy, and I apologize in advance for totally misrepresenting Deolalikar’s subtle work. Nevertheless, I found the approach fascinating.

Crowds are stupid

Now, an alternative to personal understanding is to trust the crowd. Democracy. Unfortunately, if Slashdot is any indication, this doesn’t work at all.

Slashdot has a moderation system, where people vote for comments they find “Interesting” or “Insightful” or “Funny”. You’d think that this would let good comments rise to the top. But what really happens is that people with “moderation points” apparently feel an urge to moderate as quickly as they can. So the very first comments get moderated up very quickly, and drown any later comments in the noise.

Here are some of the comments on the P vs. NP announcement that Slashdot thought were “good”:

  • P is not NP when N is not 1 (“Funny”)
  • A random series of totally uninformed opinions on the cryptography impact (“Interesting” and “Funny”)

There are a few relevant gems in there, like the opinion of an MIT professor that the proof is not valid, which leads to another, more serious analysis. So there are some redeeming comments even on Slashdot. Still, it’s too bad you have to sift through mud to find them.

Really bad math in a sci-fi book…

September 8, 2008 1 comment

I have just been reading Alien Embassy by Ian Watson (in the French translation published by Presses Pocket). This is a somewhat esoteric book, and I can’t say that I’m really fond of that genre. But it is an interesting story, it’s well written, and I enjoyed it.

However, something really puzzled me, and it has to do with what seems to be a deep misunderstanding of relatively fundamental mathematics, namely irrational and imaginary numbers, somewhere in the first third of the book.

Irrational numbers

Unfortunately, I can’t quote the original text, only back-translate from French, which may be hard to map on the original text. It reads something like that (I’d appreciate if some reader could give me the actual original text):

- He’s talking about mathematics, whispered the voice of Klimt. Irrational numbers are numbers like pi, the constant ratio between the circumference and the diameter of a circle, you must know that.

- That’s about twenty two seventh, I added; I knew at least that!

- It’s a very important number. Without it, geometry could not exist, Klimt commented. It represents a true geometrical relation. Pi appears as soon as you draw a circle. Yet it is totally irrational. There is no explanation to the sum “tweny two seventh”. You can divide twenty two by seventh as long as you want, you will never get a really definitive answer. [...]

This is doubly bogus.

  • First, the author reasons about 22/7 instead of reasoning about pi. The two numbers are really not that close: to the fourth decimal, 22/7 is 3.1428 whereas pi is 3.1415. It’s really bizarre to discuss the irrational nature of pi using a different number, 22/7, that just happens to share its first three digits with it…
  • But second, and more importantly, irrational in mathematics means precisely that the number cannot be expressed as the ratio of any two integers. That seems to have eluded the author entirely: he seems to think that 22/7 itself is irrational, and that “irrational” denotes a number with an infinite number of decimals, like 1/3 = 0.33333…

Imaginary numbers

There is a similar issue a couple of pages later:

- From what I know, our scientists consider mathematics as imaginary dimensions, said Klimt.
- Imaginary dimensions? the Azuran fluttered. Imaginary? Ah, but that’s where you got it wrong. These other dimensions are anything but imaginary. They really exist. [...]

Here, multiple layers of a very strange understanding of mathematics and physics mix up. I don’t think any scientist ever considered mathematics as a whole to be imaginary dimensions. Mathematics uses symbols and relations between symbols, and I think that most mathematicians would consider the mathematics we can ever talk about as a countable set. There are imaginary numbers, a terminology that refers to a class of complex numbers, like i, that have a negative square (e.g. i² = −1).

Where imaginary dimensions show up is for example in some explanations of Einstein’s special relativity, where time is considered as an imaginary dimension, as opposed to three real spatial dimensions. This is a mathematical trick to account for the form of distance in space-time, which has three squares with the same sign and one with a different sign (ds² = −dt² + dx² + dy² + dz²).

Why does it matter?

Why bother about two little errors in what is, after all, simply intended to be entertainment? Well, it shows the limits of terminology when we cross domain boundaries. What seems to have confused the author in both cases is that terminology in mathematics has a very precise meaning that happens to be really far from the everyday meaning of the same word.

The problem is that this gave the author, and possibly his readers, a false sense of understanding. Curiously, this echoes in my mind a number of issues I had with popular science books, some of which were written by well known names in physics. And it is also a problem with the operating systems people use on computers all the time: we tend to forget that a “desktop” or a “window” has nothing to do with the common acceptation of the term, which may confuse beginners quite a bit.

I’m not sure there is much to do about it, though, but to gently correct the mistakes when you see them. After all, short of inventing new terminology all the time (like “quarks” or “widgets”), we just have to proceed by analogy (e.g. “mouse”, “complex numbers”) and then stick with that wording. We consciously forget that the word is the same, the concepts end up being quite different in our brain.

But this has implications in another area I am interested in, concept programming. We give names to concepts, but these names overlap. Our brains know very well how to sort it out, but only after we have been trained. As a result, the big simplification I was expecting from concept programming might not happen, after all…

The number you can’t compute…

One of my respected colleagues at Hewlett-Packard is a guy named Patrick Demichel. He’s one of the “titans“, having held various world records on the number of digits of π and things like that.

But a couple of weeks ago, he talked to me about something really intriguing. During a conference on high-performance computing, he told me about a number he had computed, and that even if he was not sure that the value was right, he considered it highly unlikely that anybody could prove him wrong, because there weren’t enough atoms in the universe to do so. How does that sound?

Well, it turns out that he was talking about the Skewes numbers. More specifically, he has computed the first Skewes number to be most probably equal to 1.397162914×10^316. See another reference in Wolfram’s MathWorld.

You can find the complete presentation of that research here. The general idea is that we have something that is extremely difficult to compute, so we compute it at reduced precision, skipping through large ranges of numbers. In doing so, we make a probabilistic estimate of the risk that we miss the right values. That risk is really low, to the point that finding a counterexample would take more computation than could have been done in the entire universe since the Big Bang. However, we clearly have not tested all possible values (which would take even more computing power).

OK, in reality, this depends on the behavior of the Zeta function, more specifically on the Riemann hypothesis. So there is still an infinitesimal chance that all this huge computing power went to waste, and that 1.397162914×10^316 is actually not the first Skewes number. Time will tell…

Categories: Mathematics

Are real numbers real?

In the past months, I have pointed out a few times that real numbers have no reason to exist in physics, if only because our measurement instruments don’t have infinite precision or range. In general, this point, which I believe should make any physicist nervous, does not get much traction. Maybe most people simply believe that it does not matter, but I believe on the contrary that one can show how it has a significant impact on physics.

In mathematics, Skolem’s paradox for instance can be used to invalidate some far-fetched claims. But I was not considering that real numbers in mathematics were otherwise a problem.

A Mathematician’s objection to Real Numbers

Well, it looks like at least one mathematician is also making similar points, but for mathematics instead of physics. Prof. Wildberger writes:

Why Real Numbers are a Joke
According to the status quo, the continuum is properly modelled by the `real numbers’.[...]
But here is a very important point: we are not obliged, in modern mathematics, to actually have a rule or algorithm that specifies the sequence. In other words, `arbitrary’ sequences are allowed, as long as they have the Cauchy convergence property. This removes the obligation to specify concretely the objects which you are talking about. Sequences generated by algorithms can be specified by those algorithms, but what possibly could it mean to discuss a `sequence’ which is not generated by such a finite rule? Such an object would contain an `infinite amount’ of information, and there are no concrete examples of such things in the known universe. This is metaphysics masquerading as mathematics.

Contrarian view

I reached this post through one of MarkCC’s posts, where he criticizes Prof. Wildberger’s arguments. Both points of view are worth a read (although both are a bit long).

In general, MarkCC is a no-nonsense guy, and I tend to side with him. But just this once, I disagree with his criticism. As I understand it, Prof. Wildberger’s main point is that real numbers are defined in a way that is rather fuzzy. If the definition does not allow us to prove uniqueness, maybe something is flawed in the logic leading to how we build them? Again, it all boils down to the problem of defining specific, non-trivial real numbers. There are too many of them, so the set of mathematical symbols is not sufficient: it only allows us to make a countable number of mathematical propositions, and the set of real numbers is provably not countable.

And real numbers are only the beginning…

Opinions?

Categories: Mathematics, Physics

Embedding mathematical formulas in a blog

September 11, 2007 1 comment

The folks at Noncommutative geometry have given me a useful hint to embed mathematical formulas directly in a blog or web page: the use of TeXify. Here is an illustration with the collapse of the wave function written using the formalism of the theory of incomplete measurements: M M \mathcal{F}=M \mathcal{F}.

There are just three issues with that tool:

  • The text they give you to copy for web pages is, strictly speaking, not very good: no quotes, no self-closed img tag.
  • The resulting image is not anti-aliased, so it does not look very good in the middle of anti-aliased text on any modern display.
  • On this blog, it insists on putting a blue border around the images. I assume it has something to do with the style sheet; I need to look into it. But it’s annoying.

Anyway, it’s better than nothing.

Uh… the collapse of the wave function? Are you sure?

Maybe I should explain why the above equation is the collapse of the wave function. This is probably not obvious to anybody who did not read the TIM article. In the above equation, M is the measurement being considered, and \mathcal{F} is the system the measurement applies to.

What the equation translates is the choice we make for our measurement instruments to give repeatable or stable results. If I connect a voltmeter to a battery and read “9V”, then without changing anything, read again, I expect to still read about 9V, not -32V. The output of the voltmeter should be stable when measuring a system that does not change. It would not be very useful as a measurement instrument without that property.

What the equation M M \mathcal{F}=M \mathcal{F} expresses is that what is learned by performing the measurement (M \mathcal{F}) is the same thing as what is learned by applying M again to the result (M M \mathcal{F}). It’s really very simple.

Why does it lead to the collapse of the wave-function? The more formal explanation is in the article, but the intuitive explanation is quite simple. We deduce that the state after the first measurement is an eigenvector of the measurement observable from the fact that another measurement will always give the original eigenvalue. In other words, if a measurement instrument was not giving a stable result, we would be unable to deduce that the wave function after the first measurement is an eigenvector at all. There would be no collapse of the wave function.

3D simulation of the 9/11 attacks

A finite element simulation of the 9/11 World Trade Center attacks was carried out and visualized using 3D rendering.

Skolem’s paradox

Today, I learned about Skolem’s paradox, which I find pretty interesting. Here is a rough overview:

  • Georg Cantor demonstrated in 1874 that there are sets that are not countable. An example is the set of real numbers. Such sets are also said to be uncountably infinite.
  • But mathematics can be represented as a countable language. Such a technique was used by Kurt Gödel to prove his famous incompleteness theorem.
  • This leads to the Löwenheim-Skolem theorem, which essentially shows that because the propositions of such a mathematical system can be enumerated (in other words, they are countable), the system also admits a countable model. For example, there must exist a countable set that obeys all the relationships defined on the uncountable set of real numbers.

This leads to Skolem’s paradox. You cannot count real numbers, but you can count the mathematical propositions that define them… The Wikipedia page indicates that some do not see that as a paradox because even if there is no bijection within the mathematical model, there may be a bijection outside the model. My “perception” of the paradox is that this means that our mathematical model cannot define all real numbers. There are real numbers that are not the subject of any theorem, that are not the limit of any “suite” (i.e. some expression written with a finite number of symbols, like “limit of 1/n”).

And this is only the beginning. We can keep building sets, such as surreal numbers, which are even larger than the set of real numbers.

For those interested, Paul Budnik created a video discussing these topics. I contacted him today regarding his Quantum Mechanics Measurements FAQ, since I honestly believe that I have answered several of these questions with my theory of incomplete measurements.

Bad math, bad physics, getting published…

I came across the Good Math, Bad Math blog. It is probably not as deep mathematically speaking as This Week’s Finds in Mathematical Physics, a semi-regular column by John Baez, which sends my head spinning every time I try to read it.

But the latest entry, What happens if you don’t understand math, caught my attention. It’s very critical of A New Theory of the Universe, a rather silly defense of something called “biocentrism”… Sure enough, that kind of pointless and far-fetched pseudo-philosophical speculation tends to irritate me as well. And I’m only an amateur.

On the other hand, good math does not necessarily mean good physics. There is this other equally irritating trend in physics to replace any simple concept with jargon. In the end, you write stuff that makes mathematical sense, but where the physical meaning is elusive at best.

Categories: Mathematics, Physics