How to unify general relativity and quantum mechanics

Unifying quantum mechanics and general relativity has been a problem for decades. I believe that I have cracked that nut.

Special relativity:

Philosophical principle: Laws of physics should not depend on the observer’s speed.

Math: Lorentz transform, new way to “add” speeds.

Issues it solved: Maxwell’s equations predict a value for the speed of light that does not depend on your own speed.

Physical observations: The speed of light is indeed independent of the observer’s speed (the Michelson-Morley experiment).

Counter-intuitive aspects: There is no absolute simultaneity and no absolute time. There’s an absolute speed limit for physical objects in the universe.

New requirements: Physicists must now pay attention to the “observer” or “reference frame”.

Thought experiment: Alice is in a train, while Bob is on the ground watching the train pass him by. What happens if Bob sees light flashes hit the train “simultaneously” at both ends? Hint: what happens “at the same time” for Bob is not happening “at the same time” for Alice. That explains why we cannot consider simultaneity as absolute.

General relativity:

Philosophical principle: Laws of physics should not depend on the observer’s state of motion, including acceleration.

Math: Non-Euclidean geometry, tensors and metrics.

Issues it solved: Discrepancies in the trajectory of Mercury (the anomalous precession of its perihelion).

Physical observations: Gravitation has an impact on light rays and clocks.

Counter-intuitive aspects: Light has no mass, but is still subject to gravity. The presence of a mass “bends” space-time.

New requirement: Physicists must pay attention to the metric (including curvature) of a given region of space-time.

Typical thought experiment: Alice is in a box on Earth, Bob is in a similar box accelerated by a rocket at 1 g. The similarity between their experiences explains why we can treat gravitation as a curvature of space-time.

Quantum mechanics:

Philosophical principle: Several, “Shut up and calculate” being the top dog today (meaning: if math flies against your intuition, trust the math).

Math: Hilbert spaces, Hamiltonians.

Issues it solved: Black body radiation, structure of matter.

Physical observations: Quantization of light, wave-particle duality, Young’s double-slit experiment.

Counter-intuitive aspects: Observing something changes it. There are quantities we can’t know at the same time with arbitrary precision, e.g. the position and momentum of a particle.

New requirement: Physicists must pay attention to what they observe and in which order, as observation may change the outcome of the experiment.

Typical thought experiment: Schrödinger puts his cat in a box where a system built on radioactive decays can kill it at an unknown time in the future. From a quantum mechanical point of view, before you open the box, the cat is in a superposition of two states, alive and dead.

Theory of incomplete measurements:

Philosophical principle: Everything we know about the world, we know from measurements. Laws of physics should be independent of the measurements we choose.

Math: “Meta-math” notation to describe physical experiments independently of the mathematical or symbolic representation of the measurement results. The math of quantum mechanics and general relativity applies only to measurement results; the “meta-math” describes the experiments, including what you measure and what physical section of the universe you use to measure it.

Issues it solved: Unifying quantum mechanics and general relativity. The quantum measurement problem. Why the wave function is complex-valued. Why quantum mechanics doesn’t seem to apply at macroscopic scales (the answer being that it does). Why infinities appear during renormalization, and why it is correct to replace them with observed values.

Physical observations: Room-scale experiments with quantum-like properties. The transition of the definition of the “meter” from a solid rod of matter to a laser beam. Physically different clocks and space measurements that diverge at infinity. The question of how we can talk about the probability of a photon being “in the Andromeda galaxy” during a lab experiment. Every measurement of space and time is related to properties of photons, so space-time can be interpreted as “echolocation with photons”.

Counter-intuitive aspects: Quantum mechanics is the necessary form of physics when we deal with probabilistic knowledge of the world, and in most cases, our knowledge of the world is probabilistic. Not all measurements are equivalent, and a “better” measurement (i.e. higher resolution) is not universally better (it may not correctly extend a lower-resolution but wider-scale measurement). Space-time (and all measurements) are quantized. There is no pre-existing “continuum”; the continuum is a mathematical simplification we introduce to unify physically different measurements of the same thing (e.g. distance measurements by our eye and by pocket rulers).

New requirement: Physicists must specify which measurement they use and how two measurements of the “same thing” (e.g. mass) are calibrated to match one another.

Typical thought experiment: Measure the earth’s surface with the reference platinum-iridium rod, and then with a laser. Both methods were at some point used to define the “meter” (i.e. distance). Why don’t they bend the same way under gravitational influence? In that case, the Einstein tensors and metrics would differ based on which measurement “technology” you used.

More details: Introduction, Short paper.

So how does the unification happen?

To illustrate how the unification happens without too much math, imagine a biologist trying to describe the movement of ants on the floor.

The “quantum mechanical” way to do it is to compute the probability of having an ant at each location. The further away from the ants’ nest, the lower the probability. Also, the probability to find an ant somewhere is related to the probability of finding it someplace near a short time before. When you try to set up the “boundary conditions” for these probabilities, you will say something like: the ant has to be somewhere, so the probability summed over all of space is one; and the probability becomes vanishingly small “at infinity”.

The general-relativistic way to do it will consider the trajectories of the ants on the 2D surface. But to be very precise, it will need to take into account the fact that the ants walk on a very large sphere, and deduce that the 2D surface they walk on is not flat (Euclidean) but curved. For example, if an ant travelled along the edges of a 1000 km square (from its point of view), it would not return exactly to where it started, proving that the 2D surface is not flat.

At a relatively small scale, the two approaches can be made to coincide almost exactly. But they diverge in their interpretation of “at infinity”. Actually, assuming observed ants stay within a radius R of the nest, there are an infinite number of coordinate systems that are equal within that radius R but diverge beyond it. Of course, the probabilities you compute depend on the coordinate system.

In particular, if you take a “curved” coordinate system that loops around the earth to match the “general relativistic” view of the world, the physically observed probability does not match the original idea that the probability becomes vanishingly small at infinity and that the sum is one. In that physical coordinate system, the probability to see ants is periodically non-zero (every earth circumference, you see the same ant “again”). So your integral and probability computation is no longer valid: it shows false infinities that are not observed in the physical world. You need to “renormalize” it.
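To make the divergence concrete, here is a minimal numerical sketch of the ant analogy in Python (all the numbers are made up: a Gaussian spread of ants around the nest, and a rough earth circumference):

    import numpy as np

    SIGMA = 100.0        # spread of ants around the nest, in meters (illustrative)
    CIRC = 40_000_000.0  # roughly the earth's circumference, in meters

    def p_flat(x):
        """Probability density in the 'flat' coordinate: a Gaussian around the nest."""
        return np.exp(-x**2 / (2 * SIGMA**2)) / (SIGMA * np.sqrt(2 * np.pi))

    # Summed over the flat coordinate, the probability of finding the ant is 1.
    dx = 0.01
    x = np.arange(-10 * SIGMA, 10 * SIGMA, dx)
    one_turn = (p_flat(x) * dx).sum()
    print(f"one turn: {one_turn:.6f}")   # ~1.0: the ant is somewhere

    # In a coordinate that loops around the earth, x and x + n*CIRC label the
    # same physical point: the same ant is "seen again" every circumference.
    # Naively integrating the unrolled coordinate over n turns counts each ant
    # n times, so the "total probability" grows without bound:
    for n_turns in (1, 10, 100):
        print(f"{n_turns} turns: naive total ~ {n_turns * one_turn:.1f}")

The finite sum over a single turn is the physically meaningful one; the divergence only appears when the coordinate system counts the same physical ants again and again.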

In the theory of incomplete measurements, you focus on probabilities like in quantum mechanics, but only on the possible measurement results of your specific physical measurement system. If your measurement system follows the curvature of the earth (e.g. you use solid rods of matter), then the probabilities will be formally different from those of a measurement system that does not follow it (e.g. you use laser beams). Key topological or metric properties therefore depend on the measurement apparatus being chosen. There is no “x” in the equations that assumes some underlying space-time with a specific topology or metric. Instead, there is an “x as measured by this apparatus”, with the topology and metric that derive from the given apparatus.

Furthermore, all the probabilities will be computed using finite sums, because all known measurement instruments give only finite measurement results. There may be a “measurement not valid” probability bin. But if you are measuring the position of a photon in a lab, there cannot be a “photon was found in the Andromeda galaxy” probability bin (unlike in quantum mechanics), because your measurement apparatus simply cannot detect your photon in the Andromeda galaxy. Such a probability is nonsensical from a physical point of view, so we build the math to exclude it.

So in the theory of incomplete measurements, you only have finite sums that cannot diverge, and renormalization is the mathematical equivalent of calibrating physically different measurement instruments to match one another.

The analogy is not perfect, but in my opinion, it explains relatively well what happens with as little math as possible.


No little thing is too small for grandiose words chiseled by some marketing war machine.

Seen on a Lampe Berger anti-mosquito product this morning:

Parfum “Absolu de vanille”

Vanilla Gourmet Scent

Not only is this ridiculously hyperbolic, but they also have a different “tint” for the English and French versions. English readers will notice that the French version sounds more like “Absolute Vanilla”, because that’s basically what it means. Who on Earth paid people to tell their customers that their anti-mosquito product had a “Vanilla Gourmet scent”?

Let’s not get used to this kind of marketing hyperbole…

Hyperbole in science

In despair, I turned to a slightly more serious text, the first page of this month’s issue of Science et Vie. And here is what I read there about faster than light neutrinos:

Incroyable? Alors là oui, totalement! Et même pis. Que la vitesse de la lumière puisse être dépassée, ne serait-ce que de très peu, n’est pas seulement incroyable, mais totalement impensable. Absolument inconcevable. […] c’en serait fini d’un siècle de physique. Mais, et ce serait infiniment plus grave, c’en serait aussi fini avec l’idée selon laquelle la matière qui compose notre univers possède des propriétés, obéit à des lois. Autant dire que la quête de connaissance de notre monde deviendrait totalement vaine.

Incredible? Absolutely! And even worse. That the speed of light can be exceeded, even a little, is not only unbelievable, but totally unthinkable. Absolutely inconceivable. […] This would end a century of physics. Even more serious, we would be done with the idea that the matter making up our universe has properties, obeys laws. This would mean that the quest for knowledge of our world would become totally hopeless.

Whaaaaat? I really don’t like this kind of pseudo-science wrapped in dogma so pungent as to be the envy of the most religious zealots. How can anybody who understood anything about Einstein’s work write something like that? Let’s backpedal a little bit and remember where the speed of light limit comes from.

Where does the speed of light limit come from?

In the beginning was Maxwell’s work on the propagation of electromagnetic waves, light being such a wave. These equations predicted a propagation of light at a constant speed, c, that could be computed from other values that were believed at the time to be physical constants (the “epsilon-0” and “mu-0” values in the equations). The problem is that we now had a physical speed constant, in other words a speed that did not obey the usual law of speed composition. If you walk at 5 km/h in a train that runs at 200 km/h, your speed relative to the ground is 205 km/h or 195 km/h, depending on whether you walk in the same direction as the train or in the opposite direction. We talk about an additive composition rule for speeds. That doesn’t work with a constant speed: if I measure the speed of light from my train, I won’t see c-200 km/h, since c is constant. The Michelson-Morley experiment proved that this was indeed the case. Uh oh, trouble.

For one particular speed to be constant, we need to change the law of composition. Instead of adding speeds, we need a composition law that preserves the value of c: the Lorentz transformation. What Einstein acknowledged with his special relativity theory is that this also implied a change in how we consider space and time. Basically, the Lorentz transformation can be understood as a rotation between space and time, and in this kind of rotation, the speed of light becomes a limit in a way similar to 90 degrees being the “most perpendicular direction you can take”. Nothing more, nothing less. Of note, that “c” value can also be interpreted as the speed at which we travel along time when we don’t move along any spatial dimension.
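To see this composition law at work, here is a small Python sketch reusing the train example above (nothing here is specific to any new theory; it is just the standard special-relativistic formula for collinear speeds):

    C = 299_792_458.0  # speed of light in m/s

    def compose(u, v):
        """Relativistic composition of two collinear speeds (replaces u + v)."""
        return (u + v) / (1 + u * v / C**2)

    train = 200 / 3.6   # 200 km/h in m/s
    walker = 5 / 3.6    # 5 km/h in m/s

    # At everyday speeds, the correction is invisible: ~205 km/h, as expected.
    print(compose(train, walker) * 3.6)

    # But composing c with any speed still gives c (up to floating-point
    # rounding): the constant speed is preserved by construction.
    print(compose(C, train))
    print(compose(C, -0.9 * C))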

There are limits to limits

Once you understand that, you realize how hyperbolic what Science et Vie wrote is.

First, the value of c was computed as a speed of light, from equations designed for electromagnetism. It was never intended to say anything about neutrinos. We don’t know how to measure space and time without electromagnetic interactions somewhere. So the speed of light limit is a bit like a speed of sound limit for bats who would measure their world using only echolocation. It doesn’t necessarily mean nothing can travel faster than light; it only means that no measurement or interaction based on electromagnetic interactions can ever measure it. I have tried to elaborate a bit on this in the past.

Second, Einstein revised his initial view to include gravity, and this made the world much more complex. Now space-time could be seen as modified locally by gravity. Imagine how solid your “90 degrees is the most perpendicular direction” argument is if you look at a crumpled sheet of paper: the reasoning doesn’t mean much beyond very small surfaces. Remember that in the neutrino experiments, we are in a very complex gravitational environment (mountains, …), and you’ll see that this “crumpled sheet of paper” analogy may not be so far off.

In short, if we find conditions where something appears to travel faster than light, it is exciting, it is interesting, it is worth investigating, but it’s certainly not the End of Science as Science et Vie claimed. Let’s not get used to this kind of crap.

Wolfram’s thoughts on physics

I just came across Stephen Wolfram’s latest post. I suppose that everybody knows who this guy is. In my opinion, he definitely qualifies as a genius. I do not say that lightly: on the contrary, you might actually think from reading this blog that I tend to be worried, skeptical or even angry at physics, physicists or science in general. But Wolfram embodies this wild spirit of folks who boldly go where no man has gone before.

A new kind of science?

I certainly was aware of his A New Kind of Science (who wouldn’t be?). But I didn’t really feel a need to read it, being under the impression that it was just a book about cellular automata. Having read the blog, which contains a number of links to the on-line version, I discovered that it was much richer than that. As a matter of fact, after reading just a few pages on line, I ordered the book right away.

I think, for instance, that there is something really deep in the following:

I’ve built a whole science out of studying the universe of possible programs–and have discovered that even very simple ones can generate all sorts of rich and complex behavior.

Why is studying the universe of possible programs interesting? Because mathematics is the manipulation of symbols using specific rules, so in that sense, mathematics as a whole is a subset of what Wolfram just described, the universe of possible programs. Even if for him it is only a “hobby”, I find the approach much less amateurish than more “professional” work on the same topic.


This does not mean that I immediately agree with the notion that everything can be described using an ultimate reductionist representation like network graphs or cellular automata. Instead, it means that like fractals, these tools look like an original way to explore physics. Not the way, mind you, one more way. And I like a rich vocabulary to express ideas. I see Wolfram’s “new science” as a useful tool in exploring the relationships we observe between our measurements.

On the other hand, I do not entirely subscribe (yet) to the idea that you can recover quantum mechanics from a deterministic set of causal relations. I am not even convinced at this point that there would be a single such network, in the sense that the network we detect might depend on what physical process we use to probe it. This dependency on the measurement is one of the core ideas in my own theory of incomplete measurements, and I gave enough examples in the article to explain why I lost my belief in “the ultimate spacetime metric”, which Wolfram is looking for as far as I understand. Time will tell.

Finally, the approach is fraught with difficulties, something that Wolfram is very aware of:

OK, but what is the rule for our universe? I don’t know yet. Searching for it isn’t easy. One tries a sequence of different possibilities. Then one runs each one. Then the question is: has one found our universe?

Clearly, the problem of exploring all possible programs is not really different from exploring the landscape in string theory. Where do you start? The difference I see with string theory is that the search could be largely automated. This is why I see this as a brilliant approach: if the rule is simple enough that it can be generated by enumeration, instead of probing slowly using the human mind, let’s probe quickly using computers. It will not necessarily work, but it will tell us something.
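To give a flavor of what “probing quickly using computers” can look like, here is a Python sketch that enumerates the 256 elementary cellular automaton rules, one of the simplest program universes Wolfram studies, and runs each one. The “no repeated row” test is only a crude proxy for interesting behavior, not Wolfram’s actual methodology:

    def step(cells, rule):
        """One update of an elementary cellular automaton (periodic boundary)."""
        n = len(cells)
        return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)]

    def run(rule, width=64, steps=32):
        cells = [0] * width
        cells[width // 2] = 1              # start from a single seed cell
        history = [tuple(cells)]
        for _ in range(steps):
            cells = step(cells, rule)
            history.append(tuple(cells))
        return history

    # Enumerate the whole universe of 256 rules and "run each one". As a crude
    # complexity proxy, flag rules whose evolution never repeats a row.
    for rule in range(256):
        history = run(rule)
        if len(set(history)) == len(history):
            print(f"rule {rule:3d}: no repeated row in {len(history)} rows")

Rule 30, Wolfram’s favorite example of rich behavior emerging from a trivial rule, is among the rules flagged.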

Anyhow: Recommended reading.

Why don’t you tell us what you found?

One comment I received was:

Dear Sir, Your posts are exactly like the hundreds of other crackpots on the web. A theory of physics must predict something, you are just babbling. Why don’t you tell us what your big theory is instead of incoherent ramblings about the state of physics?

Well, one way to share ideas is by writing papers, which I did. But maybe it would be a good idea to summarize the general ideas here.

Physics is about measurements

The starting point is the following: mathematical entities in physics are not arbitrary, they are intended to model or predict the result of measurements. Therefore, it is interesting to define what a measurement is in physics. I suggest a definition in 6 postulates.

  1. A physical process
  2. with known input and output
  3. repeatable (in other words, with stable results)
  4. gathering information about its input
  5. represented by a change in its output
  6. that we can give a symbolic interpretation

Eliminate any of these postulates, and you have something that is not a measurement.

We can reason about these postulates

The second idea is that these postulates are strong. As long as you add the observation that there are measurements, you can deduce something about their behavior. For example, the fact that a measurement must be repeatable means that if I measure the length of a solid and find 5 cm, and then measure again, I must find 5 cm again. This in turn means that if I use a quantum-mechanical formalism based on Hilbert spaces, the state representing the system immediately after the first measurement must be an eigenvector (otherwise, I might measure a different value). Therefore, the third postulate implies the collapse of the wave function, one of the axioms of quantum mechanics generally considered unintuitive.
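Here is a toy Python sketch of that argument (the two-outcome “length” observable and the initial state are made up; the mechanics is just standard projective measurement):

    import numpy as np

    rng = np.random.default_rng(0)

    # A toy observable with two outcomes, say "5 cm" and "6 cm", represented
    # by orthogonal projectors on a 2-dimensional state space.
    P5 = np.array([[1.0, 0.0], [0.0, 0.0]])
    P6 = np.eye(2) - P5

    def measure(state):
        """Projective measurement: pick an outcome, then collapse onto it."""
        p5 = float(state @ P5 @ state)          # Born rule (real state)
        if rng.random() < p5:
            return 5.0, P5 @ state / np.sqrt(p5)
        return 6.0, P6 @ state / np.sqrt(1 - p5)

    state = np.array([0.6, 0.8])   # superposition: 36% for 5 cm, 64% for 6 cm
    first, state = measure(state)
    second, state = measure(state)
    print(first, second)           # always the same value twice

Run it as many times as you like: the second result always equals the first, and that repeatability is only possible because the first measurement left the system in an eigenvector.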

But if this axiom of quantum mechanics is seen as a consequence of the postulates instead of as an ad-hoc statement, this has consequences as well, some of them very measurable and testable by experiment. Notably, in traditional quantum mechanics, the collapse of the wave function is often said to be instantaneous (more recent work is more nuanced in that respect). By contrast, in the theory of incomplete measurements (TIM), the collapse happens gradually as the measurement instrument converges, and the fully collapsed state is, in most cases, an idealized limit.

So some of the axioms of quantum mechanics may turn out to be weaker than others. The kind of reasoning above may lead us to tweak them, to make minor adjustments.

Discrete versus continuous

Another remark is that a physical measurement apparatus has a finite resolution. Therefore, we may build nice continuous mathematical models of things, and in quantum mechanics, use for example infinite-dimensional Fock spaces. But in physical reality, when we get back to experiment, we are only predicting the probabilities of the outcome among a finite set. So the question of the relationship between the continuous model and the discrete experiments has to be addressed.

One reason this is important is that going to the continuous limit is often where divergences arise. A well-known example is the traditional law of gravitation: it is an inverse-square law, so there is a singularity at distance zero. However, a physical measurement instrument cannot reach this distance zero continuously. You can split a metering rod in two, and then in two again, but after a few iterations, you cannot split it anymore without losing the physical properties that make it a valid measuring instrument.
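A tiny Python sketch makes the point (the units and the resolution cutoff are arbitrary):

    def force(r):
        """Inverse-square attraction (unit masses, G = 1): diverges as r -> 0."""
        return 1.0 / r**2

    # A metering rod can only be split so many times before it stops being a
    # valid instrument, so there is a smallest distance R_MIN it can report.
    R_MIN = 1e-9   # illustrative resolution limit, in meters

    def measured_force(r):
        return force(max(r, R_MIN))   # the apparatus never reports r < R_MIN

    print(force(1e-15))            # 1e+30: the continuous model blows up
    print(measured_force(1e-15))   # capped at 1e+18: nothing diverges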

So I think that if we can understand the relationship between continuous and discrete better, we stand a good chance of understanding why some laws diverge in our mathematical models when they apparently don’t in the real world. A large fraction of my paper is dedicated to understanding what the continuous models actually mean relative to their discrete physical counterparts. In particular, I suggest that a discrete and finite (as opposed to continuous and infinite) normalization condition for the wave function would allow us to build a better approximation of the real world.

One reason this would be a better approximation is that quantum mechanics, as traditionally formulated, does not incorporate the limits of the measurement instruments. If you detect a particle using a 10x10 cm detector, quantum mechanics gives precise predictions of the probability of finding the particle at coordinates (x,y), irrespective of the values of x and y, including whether (x,y) is in the detector or not. The theory of incomplete measurements, by contrast, requires a normalization condition which is practically identical to quantum mechanics inside the detector, but has only a single probability for anything outside the detector. In layman’s terms: if the particle missed the detector, there is little point making statistical predictions about where it will be found.
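In code, the difference between the two normalization conditions might look like the following sketch (the pixel weights are arbitrary placeholders, and this is only the shape of the idea, not the actual TIM formalism):

    import numpy as np

    # A 10x10 cm detector with 1 cm pixels: 100 possible measurement results,
    # plus one catch-all bin for "the particle missed the detector".
    pixels = np.random.default_rng(1).random((10, 10))  # unnormalized weights
    p_missed = 0.25                                     # one lumped probability

    # TIM-style normalization: a finite sum over what the apparatus can report.
    pixels *= (1 - p_missed) / pixels.sum()

    print(pixels.sum() + p_missed)   # ~1.0 over 101 bins; no prediction about
                                     # *where* a missed particle went, only
                                     # *that* it missed the detector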

Quantum mechanics seen as probabilistic predictions

Provided a few “convenience” ingredients are added to the recipe along the way (e.g. we tend to pick linear graduations rather than random ones because it makes mathematics on the results of measurements simpler), it turns out that practically all the axioms of quantum mechanics can be reconstructed from the six postulates above. The missing one is the “fundamental equation”, something equivalent to specifying a Lagrangian, an action, or a Schrödinger equation.

One result that I personally like a lot is explaining why the wave-function can be represented as a normalized complex function of the spatial coordinates. This can be explained relatively well, and it also clarifies what a “particle” is in my opinion. Here is a sketch of the construction.

The predictions you can make about future measurements are, by construction, probabilistic, i.e. a 30% chance you will get A and a 70% chance you will get B. What you already know about the system with respect to any particular measurement can be entirely summarized by these probabilities. Since the sum of the probabilities for all possible outcomes must be 1, and since each probability must be greater than 0, we can write individual probabilities as squares, p_i=u_i^2, and write the condition that the sum of probabilities be one as \sum{u_i^2}=1.

If you try to detect a particle, an individual detector will give you two results: found or not-found. So the representation of the probabilities is a pair of numbers verifying u_{found}^2+u_{not-found}^2=1. We can also represent such a pair using a unit complex number e^{i\theta}.

But if you want to detect the trajectory of a particle, you now need a grid of detectors. Each detector has its own probability represented as above, but the probabilities are not independent, because assuming there is a single particle, there is an additional condition that at any given time, only one detector will find it. That’s not magic, that’s just how we know that there is one particle. I will leave it as an exercise to the reader to imagine what this “field of probabilities” would look like…
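For readers who don’t want to do the exercise, here is one way it could come out, as a Python sketch with random illustrative amplitudes:

    import numpy as np

    rng = np.random.default_rng(2)

    # A line of 8 detectors watching a single particle. Detector i fires with
    # probability u_i^2 (and stays silent with probability 1 - u_i^2). The
    # single-particle condition "exactly one detector fires" ties them all
    # together: the found-probabilities must sum to 1 across the grid.
    u = rng.random(8)              # one amplitude per detector (illustrative)
    u /= np.sqrt((u**2).sum())     # joint constraint: sum of u_i^2 == 1

    print(u**2)           # probability of finding the particle at each detector
    print((u**2).sum())   # 1.0: the particle is at exactly one of them

A normalized array of amplitudes indexed by position is precisely the shape of a wave function, which is where this construction is heading.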


I hope that this short exposition demonstrates that I can explain my “theory” to 15-year-old kids and have a chance of being understood. So this is what I think I found: an explanation of quantum mechanics that I can teach to my kids without having them frown at me like “dad, are you insane?“.

How do you recognize a "time"?

Trying to reformulate the question I had asked Lubos became a bit long-winded, so I made it a separate post.

Let’s consider the much simpler problem of a photon travelling in a vacuum. It is legitimate to write an equation like x=ct to predict where the photon is, meaning that the distance measured from the starting point is proportional to the elapsed time, the speed being the proportionality constant. We can write a very similar equation describing the rotation of the earth, something like \alpha=\omega t. Finally, we can also consider a car travelling at constant speed, where the equation would be something like x=vt.

Are the time t and position x written in these equations the same? That is really the meaning of the question I asked. The fifth-grader answer would be something like “yes, we just use a clock and a ruler in both cases”. Apparently, this is also Lubos’ answer. And, indeed, for short enough durations, this works quite well. So, conversely, we may be tempted to define time or duration using one of these phenomena. For instance, we currently define distance using the first equation (check the definition of the metre). And we historically defined time based on cosmic events like the earth’s rotation.

Scale tends to be fatal to simple linear laws

But obviously, for large enough values of t, a number of things will happen. For example, the earth’s rotation is now known not to be exactly regular compared to, say, cesium clocks. So “earth time” will not remain very well aligned with the kind of tool we now use to measure time. Furthermore, because the earth’s surface is not flat, the x we used for the car will very soon turn out to be quite different from the x for the photon, as is obvious if you consider where the photon and the car would be after 10,000 km (cars, unlike photons, don’t fly, after all).

It is very tempting to say that the earth’s rotation is irregular and leave it at that. But a time defined based on this particular physical process is still the only one that allows us to keep the simplest form for the second relation. For anything related to the earth, it’s basically “a better time” than a definition based on cesium, in the sense that it keeps the laws of physics simpler. This remark is just another formulation of one of the key insights of general relativity: you can really pick the time or space coordinate you want, there is no preferred one, and you can still write physics laws with that somewhat arbitrary system of coordinates.

In my examples, some of the coordinates I used are not obviously definitions of time until you relate them to time with some law. Cosmic periodic events, such as the earth’s rotation, may be chosen as a definition of time (day, month, year, and so on). I can relate this definition to another, for example counts of individual “ticks” in a cesium clock. At small scales, the relation will be approximately linear. However, at larger scales, the law connecting one definition of time to another becomes essentially arbitrary, so I need some calibration, something like t_1=f(t_2), to relate any two definitions.
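A small Python sketch of such a calibration (the quadratic “drift” between the two times is invented purely for illustration):

    import numpy as np

    # Two "times" recorded for the same sequence of events: cesium-clock ticks
    # and an earth-rotation count. At small scales they track each other almost
    # linearly; at large scales they drift apart, so relating them requires an
    # empirical calibration t1 = f(t2) rather than a universal constant.
    t_cesium = np.linspace(0, 100, 11)          # arbitrary units
    t_earth = t_cesium + 0.001 * t_cesium**2    # invented drift, for the sketch

    def earth_to_cesium(t):
        """Calibration f: interpolate between paired readings of both clocks."""
        return np.interp(t, t_earth, t_cesium)

    print(earth_to_cesium(t_earth[5]), t_cesium[5])   # calibration recovers t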

How do you recognize that a physical measurement measures time?

Now I can ask the question backwards. How do I recognize that a particular physical measurement can be used as a definition of time? Or mass? Or position? To better understand what the question means, imagine that you are given an experiment, with measurement results (e.g. graphs, tabular results, …), and you need to determine which one is time, which one is mass, which one is speed, which one is position, and so on. How would you do that?

If there is any difficulty defining time or space coordinates for something as simple as a constant-speed equation, can we blissfully ignore the problem for the kind of equations you find in Lubos’ papers? Can we say: “I just pick up a scale and put my black hole on top of it”, as Lubos asked me to do?

Why do I believe that Laurent Nottale is a genius?

Laurent Nottale, a French astrophysicist, may not be widely recognized, and may even cause knee-jerk reactions from some. I personally believe that he qualifies as a genius in physics, and in light of some recent reactions, I believe that it is necessary to explain why.

It’s not because he’s been nice to me…

First, I must explain that my admiration for Nottale is not due to some personal relationship with him. As a matter of fact, all my interactions with him were rather unhappy.

Years ago, in 1994, Science & Vie published an article entitled “Le big bang en question” (questioning the Big Bang theory). This was about a guy named Laurent Nottale, totally unknown to the general public at the time, and to me in particular. I found his phone number and called him to mention that I had found the article extraordinary and that I had ideas I’d like to share (it took years for these ideas to materialize in a usable form). He was extremely cold, only asking how on earth I had gotten his phone number, and basically hung up on me.

Things did not improve much after that initial contact. In the years since then, I believe that I sent about 3 or 4 e-mails, mostly questions about his work, and I do not recall ever receiving an answer. So my personal relationship with Laurent Nottale is certainly not the reason I find him to be a genius.

Criteria for scientific genius

To me, there are 3 factors that are key to “genius class” scientific advances, which I will illustrate with Einstein:

  • Intuition about the foundations and principles. Einstein’s intuition was that it was possible for the speed of light to actually be invariant while preserving the fundamental Galilean property that all speeds are defined relative to an observer.
  • Technical prowess. Einstein’s use of tensors and non-Euclidean geometry in general relativity was masterful for the time. While he disagreed with quantum mechanics in general, it was certainly not for lack of understanding or inability to grasp its mathematics.
  • Experimental validation, i.e. retrodictions and predictions. Retrodictions “predict” already known phenomena, whether explained correctly by present theories (“classical retrodictions”) or at odds with them (“distinguishing retrodictions”). True predictions are even more impressive, since they announce before the experiment is carried out what the result should be. Einstein’s retrodictions included the standard law of gravitation (a classical retrodiction) with a slightly different formulation for the perihelion of Mercury (a distinguishing retrodiction). His predictions included the bending of light under gravity.

The null test: Example of non-genius

Let us check that these criteria are discriminating with a null test. Let’s take a non-genius, for example myself, and apply them.

  • Intuition about the foundations and principles: I hope that there is some good intuition in my theory of incomplete measurements, but this may be over-evaluating myself.
  • Technical prowess: As the same article should prove, my technical abilities are relatively low. I’m certainly not advancing the world of mathematics with the few demonstrations I made in that article.
  • Experimental validation: There are some retrodictions in my article, including distinguishing retrodictions with respect to quantum mechanics (e.g. that you cannot find a particle on the moon without putting a detector there, unlike what the most commonly held interpretation of quantum mechanics tells you). There are practically no predictions. Now, when I wrote the article, I thought that non-instantaneous or non-simultaneous collapse of the wave function would lead me to genuine predictions, but I have since found that experiments had already been done that turned these into boring retrodictions [Note to self: add links]…

OK, so depending on how you evaluate my intuition and retrodictions/predictions, I get at most 2 out of 3, so I’m not a genius. Well, we knew that already, not a big deal…

Are my genius criteria selective enough?

But I’m just chaff. If the criteria are to be selective enough, they have to be really difficult to meet. So we can try them on someone like Brian Greene, according to Wikipedia one of the best-known string theorists. Is he a genius according to my genius test?

  • Intuition: he has plenty of it, and as a matter of fact, I recommend the first half of a book like The Fabric of the Cosmos to give you a pretty exhaustive laundry list of all possible intuitions about physics today.
  • Technical prowess: Brian Greene has published many articles which are in general pretty impressive (at least to me). He knows how to write equations I would be totally unable to write.
  • Experimental validation: That’s where most string physicists fall short today. At this point, string theory is lacking in the predictions department, largely because it is a little bit under-constrained. There are too many free parameters to tweak, so much so that you need an “Anthropic Landscape” to explain why our own universe would even be the one that exists among the infinite set of possibilities in the multiverse.

So Brian Greene gets 2 out of 3. That is not bad, but he still fails (for the moment at least) the genius test. He still stands a chance, though, as soon as string theory starts making verifiable predictions.

Does Laurent Nottale pass the genius test?

So now we have a genius test that identifies Einstein as a genius, clearly identifies me as a non-genius, and for the moment puts Brian Greene only in the potential genius category. Based on these basic sanity checks, the test looks good to me.

Now back to our primary topic, Laurent Nottale. Does he pass my test?

  • Intuition: The original intuition of his scale relativity is beautifully simple, and closely mimics Einstein’s for special relativity: it is possible to write a theory where there is an invariant length while preserving the daily experience that we can only measure lengths in relationship to one another.
  • Technical prowess (This is where I may diverge from some of his critics): The use of fractal mathematics to model what happens beyond the horizon of predictability for a system is, I believe, masterful today, just like Einstein’s use of non-Euclidean geometry at the beginning of the twentieth century. Granted, this makes his work rather difficult to follow (though he writes in a way that I personally find rather easy to read compared to many scientists). I usually need 5 or 6 steps to really understand what he does in a single step. But with very few exceptions, every time I really dug into it, I ultimately figured out that his reasoning appeared well founded (if not necessarily well elaborated). Now, it’s possible that it’s simply my own limits that prevent me from seeing what is obviously wrong, but I don’t think so because of the third criterion.
  • Experimental validation: Laurent Nottale did not make one, two or three small predictions or retrodictions. With him, they come by the truckload. Actually, there are so many that it’s really easy to believe “it’s just not possible, it must be fake”. I think on the contrary that this indicates that a grand new principle has been found that lets us explore a variety of domains that were closed to us before.


So here you have it. I do believe that Laurent Nottale is a scientific genius. I may be wrong. Just give me your opinion in the comments area, it is open…

XL programming language

As I’m sure any of my readers knows, XL is the best programming language in the world. Unfortunately, due to distractions like the theory of incomplete measurements (TIM), I had not been able to find time to work on it in a while. I finally got back to compiler development; the progress report is in today’s blog entry on the XL web site.

Some might wonder what XL and the TIM have in common. I believe that in both cases, I have been trying to get to the core of what we know about a specific subject matter.

  • In the case of XL, the core question is: what is programming about? My answer in XL is: it’s about turning concepts in the programmer’s mind into code in the computer. It is not about functions or objects; these are only techniques. XL is an attempt to facilitate that conversion between concepts and code.
  • In the case of the TIM, the core question is: what is physics about? My answer in the TIM is: it’s about writing mathematical laws relating measurement results. It is not about Fock spaces or manifolds or branes; these are only techniques. The TIM is an attempt to formulate that conversion between measurements and mathematics.

So, even if the link between the two is not obvious, I see the TIM and XL as pretty much “the same method” applied to two very different fields.