The bogus “interpretations” of quantum mechanics

I’ve not written on this blog for a long time. A talk in Mouans-Sartoux yesterday prompted me to write this rant about what I will (demonstrably) call bogus interpretations of quantum mechanics. Specifically the “dead and alive cat” myth.

Schrödinger’s cat

One of the most iconic thought experiments used to explain quantum mechanics is called Schrödinger’s cat. And it is usually illustrated the way Wikipedia illustrates it, with a superposition of cats, one dead and one alive:


The Wikipedia article on the topic is quite clear that the cat may be simultaneously both alive and dead (emphasis mine):

The scenario presents a cat that may be simultaneously both alive and dead,[2][3][4][5][6][7][8] a state known as a quantum superposition, as a result of being linked to a random subatomic event that may or may not occur.

In other words, in this way of presenting the experiment, the superposed state of the cat is ontological. It is reality. In that interpretation, the cat is both alive and dead before you open the box.

This is wrong. And I can prove it.

Schrödinger’s cat experiment doesn’t change if the box is made of glass

I can’t possibly be the first person to notice that Schrödinger’s cat experiment does not change a bit if the box in which the cat resides is made of glass.

Let me illustrate. Let’s say that the radioactive particle killing the cat has a half-life of one hour. In other words, in one hour, half of the particles disintegrate, the other half does not.

Let’s start by doing the original experiment, with a sealed metal box. After one hour, we don’t know if the cat is dead. It has a 50% chance of being dead and a 50% chance of being alive. This is the now famous superposed state of the cat, the cat being “simultaneously both alive and dead”. When we open the box, the traditional phraseology is that the wave function “collapses” and we have a cat that is either dead or alive.

But if we instead use a glass box, we can observe the cat along the way. We see a dead cat or a live cat, never a superposed state. Yet the outcome of the experiment is exactly the same. After one hour, we have a 50% chance of the cat being dead and a 50% chance of the cat being alive.

If you don’t trust me, simply imagine that you have 1000 boxes with a cat inside. After one hour, you will have roughly 500 dead cats, and 500 cats that are still alive. Yet you can observe any cat at any time in this experiment, and I am pretty positive that it will never be a “cat cloud”, a bizarro superposition of a live cat and a dead one. The “simultaneously both alive and dead” cat is a myth.
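This thousand-box version is easy to simulate. Here is a minimal sketch in Python, under the assumptions stated in the text: a one-hour half-life, independent boxes, and every look at a box returning a definite outcome. Nothing in it ever needs a “cat cloud”:

```python
import random

random.seed(1)  # make the run reproducible

N_BOXES = 1000
HALF_LIFE = 1.0  # hours

def observe_cat(elapsed_hours: float) -> str:
    """One look at one box. The particle has decayed with probability
    1 - (1/2)**(t / half-life); a decay kills the cat. Every observation
    yields a definite 'alive' or 'dead', never a blend of the two."""
    p_decayed = 1.0 - 0.5 ** (elapsed_hours / HALF_LIFE)
    return "dead" if random.random() < p_decayed else "alive"

# Open all 1000 boxes after one hour: roughly 500 of each.
outcomes = [observe_cat(1.0) for _ in range(N_BOXES)]
alive = outcomes.count("alive")
print(f"alive: {alive}, dead: {N_BOXES - alive}")
```

What the simulation predicts is exactly the statistical distribution, not which individual cat lives or dies.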

Quantum mechanics is what physics becomes when you build it on statistics

What this tells us is that quantum mechanics does not describe what is. It describes what we know. Since you don’t know when individual particles will disintegrate, you cannot predict ahead of time which cats will be alive, which ones will be dead. What you can predict however is the statistical distribution.

And that’s what quantum mechanics does. It helps us rephrase all of physics with statistical distributions. It is a better way to model a world where everything is not as predictable as the trajectory of planets, but where we can still observe and count events.

The collapse of the wave function is nothing mysterious. It is simply the way our knowledge evolves, the way statistical distributions change as we perform experiments and get results. Before you open the box, you have a 50% chance of a dead cat and a 50% chance of a live cat. That’s the “state” not of the universe, but of your knowledge. After you open the box, you have either a dead cat or a live cat, and your knowledge of the world has “collapsed” onto one of these two statistical distributions.
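Seen this way, the “collapse” is just the ordinary rule for updating probabilities on new evidence. A minimal sketch, where the meow-detection numbers are invented for illustration:

```python
def bayes_update(prior, likelihood):
    """P(state | evidence) is proportional to P(evidence | state) * P(state)."""
    unnormalized = {s: prior[s] * likelihood[s] for s in prior}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

prior = {"alive": 0.5, "dead": 0.5}  # your knowledge before opening the box

# Full observation: you open the box and see a live cat.
certainty = bayes_update(prior, {"alive": 1.0, "dead": 0.0})
print(certainty)  # {'alive': 1.0, 'dead': 0.0} -- knowledge has "collapsed"

# Partial observation: you hear a muffled meow, which (say) a live cat
# produces 90% of the time and background noise mimics 5% of the time.
partial = bayes_update(prior, {"alive": 0.9, "dead": 0.05})
```

A partial observation only shifts the distribution; a complete one collapses it. The math is the same in both cases.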

There is a large number of widespread quantum myths

Presenting quantum mechanics as mysterious, even bizarre, is appealing since it makes the story interesting to tell. It attracts attention. And it also puts physicists who understand these things above mere mortals who can’t.

But the result is the multiplication of widespread quantum myths. Like the idea that quantum mechanics only applies at a small scale (emphasis mine):

Atoms on a small scale behave like nothing on a large scale, for they satisfy the laws of quantum mechanics.

Another example is the question “why is the wave function complex?” Clearly, this seems problematic to many. But if you see quantum mechanics as a statistical description of what we know, the problem goes away.

Going to deep space, now theoretically possible?

I have already discussed earlier why we need to go into deep space if we want to have a future as a species. But the problem is: how do we do it? Until now, this was considered impossible.

A first theoretical step had been made with the Alcubierre drive, an interesting solution to the equations of general relativity that would allow a section of space to move faster than light, although whatever is inside would not feel any acceleration. The solution can be interpreted as moving space rather than matter. But until recently, it was considered impossible in practice because of the amount of energy required.

This is apparently changing, and work in that field seems to have advanced a lot faster than I anticipated. Here is a video from people working on it, complete with artistic renderings of what the ships might look like:

Now I only need to reconcile that work with my own pet theory and see where that leads me 🙂

By the way, I am only talking about this now because I saw it through an article about another interesting step in space engine technology, an electromagnetic drive that appears to work in a vacuum, something which was until now considered hard to believe.

How to unify general relativity and quantum mechanics

Unifying quantum mechanics and general relativity has been a problem for decades. I believe that I have cracked that nut.

Special relativity:

Philosophical principle: Laws of physics should not depend on observer’s speed.

Math: Lorentz transform, a new way to “add” speeds.

Issues it solved: Maxwell’s equations predict a value for the speed of light that does not depend on your own speed.

Physical observations: The speed of light is indeed independent of the observer’s speed (the Michelson-Morley experiment).

Counter-intuitive aspects: There is no absolute simultaneity and no absolute time. There’s an absolute speed limit for physical objects in the universe.

New requirements: Physicists must now pay attention to the “observer” or “frame of reference”.

Thought experiment: Alice is in a train, while Bob is on the ground watching the train pass him by. What happens if Bob sees a flash hit the train “simultaneously” at both ends? Hint: what happens “at the same time” for Bob is not happening “at the same time” for Alice. That explains why we cannot consider simultaneity as absolute.

General relativity:

Philosophical principle: Laws of physics should not depend on observer’s state of motion, including acceleration.

Math: Non-Euclidean geometry, tensors and metrics.

Issues it solved: Discrepancies in the trajectory of Mercury.

Physical observations: Gravitation has an impact on light rays and clocks.

Counter-intuitive aspects: Light has no mass, but is still subject to gravity. The presence of a mass “bends” space-time.

New requirement: Physicists must pay attention to the metric (including curvature) of a given region of space-time.

Typical thought experiment: Alice is in a box on Earth, Bob is in a similar box accelerated at 1 g by a rocket. The similarity between their experiences explains why we can treat gravitation as a curvature of space-time.

Quantum mechanics:

Philosophical principle: Several, “Shut up and calculate” being the top dog today (meaning: if math flies against your intuition, trust the math).

Math: Hilbert spaces, Hamiltonian.

Issues it solved: Black body radiation, structure of matter.

Physical observations: Quantization of light, wave-particle duality, Young’s double-slit experiment.

Counter-intuitive aspects: Observing something changes it. There are quantities we can’t know at the same time with arbitrary precision, e.g. speed and position of a particle.

New requirement: Physicists must pay attention to what they observe and in which order, as observation may change the outcome of the experiment.

Typical thought experiment: Schrödinger puts his cat in a box where a system built on radioactive decays can kill it at an unknown time in the future. From a quantum mechanical point of view, before you open the box, the cat is in a superposition of two states, alive and dead.

Theory of incomplete measurements:

Philosophical principle: Everything we know about the world, we know from measurements. Laws of physics should be independent of the measurements we choose.

Math: “Meta-math” notation to describe physical experiments independently from the mathematical or symbolic representation of the measurement results. The math of quantum mechanics and general relativity applies only to measurement results; the “meta-math” describes the experiments, including what you measure and what physical section of the universe you use to measure it.

Issues it solved: Unifying quantum mechanics and general relativity. The quantum measurement problem. Why the wave function is complex-valued. Why quantum mechanics does not seem to apply at macroscopic scale (the answer being that it does). Why infinities appear during renormalization, and why it is correct to replace them with observed values.

Physical observations: Room-scale experiments with quantum-like properties. How to transition the definition of the “meter” from a solid rod of matter to a laser beam. Physically different clocks and space measurements diverge at infinity. How can we talk about the probability of a photon being “in the Andromeda galaxy” during a lab experiment? Every measurement of space and time is related to properties of photons. Space-time interpreted as “echolocation with photons”.

Counter-intuitive aspects: Quantum mechanics is the necessary form of physics when we deal with probabilistic knowledge of the world. In most cases, our knowledge of the world is probabilistic. Not all measurements are equivalent, and a “better” measurement (i.e. higher resolution) is not universally better (i.e. it may not correctly extend a lower-resolution but wider-scale measurement). Space-time (and all measurements) are quantized. There is no pre-existing “continuum”; the continuum is a mathematical simplification we introduce to unify physically different measurements of the same thing (e.g. distance measurements by our eye and by pocket rulers).

New requirement: Physicists must specify which measurement they use and how two measurements of the “same thing” (e.g. mass) are calibrated to match one another.

Typical thought experiment: Measure the Earth’s surface with the reference platinum-iridium rod, and then with a laser. Both methods were at some point used to define the “meter” (i.e. distance). Why don’t they bend the same way under gravitational influence? In that case, the Einstein tensors and metrics would be different depending on which measurement “technology” you used.

More details: Introduction. Short paper.

So how does the unification happen?

To illustrate how the unification happens without too much math, imagine a biologist trying to describe the movement of ants on the floor.

The “quantum mechanical” way to do it is to compute the probability of having an ant at each location. The further away from the ants’ nest, the lower the probability. Also, the probability of finding an ant somewhere is related to the probability of finding it someplace nearby a short time before. When you try to set up the “boundary conditions” for these probabilities, you will say something like: the ant has to be somewhere, so the probability summed over all of space is one; and the probability becomes vanishingly small “at infinity”.
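A toy version of that bookkeeping fits in a few lines. The exponential fall-off around the nest is an assumed model, chosen only to make the normalization step concrete:

```python
import math

SIZE = 21           # a 21x21 grid of floor tiles
NEST = (10, 10)     # the ants' nest sits in the middle

# Assumed model: the weight of a tile decays exponentially with its
# distance from the nest.
weights = {}
for x in range(SIZE):
    for y in range(SIZE):
        weights[(x, y)] = math.exp(-math.hypot(x - NEST[0], y - NEST[1]))

# Boundary condition: the ant has to be somewhere, so normalize the
# distribution to sum to one over the whole floor.
total = sum(weights.values())
prob = {tile: w / total for tile, w in weights.items()}

assert abs(sum(prob.values()) - 1.0) < 1e-9
assert prob[NEST] == max(prob.values())  # most likely tile: the nest itself
```

The interesting part is the normalization: it silently assumes we know what “the whole floor” is, which is exactly where the trouble starts below.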

The general-relativistic way to do it will consider the trajectories of the ants on the 2D surface. But to be very precise, it will need to take into account the fact that ants live on a large-scale sphere, and deduce that the 2D surface they walk on is not flat (Euclidean) but curved. For example, if an ant travelled along the edges of a 1000 km square (from its point of view), it would not return exactly to where it left off, thereby proving that the 2D surface is not flat.

At a relatively small scale, the two approaches can be made to coincide almost exactly. But they diverge in their interpretation of “at infinity”. Actually, assuming observed ants stay within a radius R of the nest, there are an infinite number of coordinate systems that are equal on that radius R, but diverge beyond R. Of course, the probabilities you compute depend on the coordinate system.
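The small-scale agreement and large-scale divergence are easy to see with two rival distance measures on a sphere, say the distance along the surface versus the straight-line chord through it (Earth’s radius assumed at 6371 km):

```python
import math

R = 6371.0  # Earth's radius in km (assumed value)

def along_surface(theta: float) -> float:
    """Distance between two points separated by angle theta (radians),
    measured along the curved surface."""
    return R * theta

def straight_chord(theta: float) -> float:
    """The same two points, measured along a straight line cutting
    through 'flat' space."""
    return 2 * R * math.sin(theta / 2)

for theta in (0.001, 0.5, math.pi):
    print(theta, along_surface(theta), straight_chord(theta))
```

At a separation of 0.001 radians (about 6 km) the two measures agree to a fraction of a millimeter; at antipodal separation they disagree by more than 7000 km. Two coordinate systems that are indistinguishable within a region can diverge wildly beyond it.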

In particular, if you take a “curved” coordinate system that loops around the Earth to match the “general relativistic” view of the world, the physically observed probability does not match our original idea that probability becomes vanishingly small at infinity and that the sum is one. In that physical coordinate system, the probability of seeing ants is periodically non-zero (every Earth circumference, you see the same ant “again”). So your integral and probability computation are no longer valid. They show false infinities that are not observed in the physical world. You need to “renormalize” them.

In the theory of incomplete measurements, you focus on probabilities as in quantum mechanics, but only on the possible measurement results of your specific physical measurement system. If your measurement system follows the curvature of the Earth (e.g. you use solid rods of matter), then the probabilities will be formally different from a measurement system that does not follow it (e.g. you use laser beams). Key topological or metric properties therefore depend on the chosen measurement apparatus. There is no “x” in the equations that assumes some underlying space-time with a specific topology or metric. Instead, there is an “x as measured by this apparatus”, with the topology and metric that derive from the given apparatus.

Furthermore, all the probabilities will be computed using finite sums, because all known measurement instruments give only finite measurement results. There may be a “measurement not valid” probability bin. But if you are measuring the position of a photon in a lab, there cannot be a “photon was found in the Andromeda galaxy” probability bin (unlike in quantum mechanics), because your measurement apparatus simply cannot detect your photon in the Andromeda galaxy. Such a probability is nonsensical from a physical point of view, so we build the math to exclude it.
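In code, that restriction is simply a finite list of outcome bins, including a “measurement not valid” bin but no bin for outcomes the apparatus cannot produce. The counts below are invented for illustration:

```python
# A position detector with eight slots plus a "measurement not valid" bin.
# There is no "photon seen in the Andromeda galaxy" bin: the apparatus
# cannot produce that result, so no probability is ever assigned to it.
BINS = [f"slot {i}" for i in range(8)] + ["not valid"]

counts = dict.fromkeys(BINS, 0)
for outcome in ["slot 3"] * 40 + ["slot 4"] * 55 + ["not valid"] * 5:
    counts[outcome] += 1

total = sum(counts.values())
prob = {b: n / total for b, n in counts.items()}

assert abs(sum(prob.values()) - 1.0) < 1e-9  # a finite sum cannot diverge
```

The probability space is defined by what the instrument can actually report, nothing more.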

So in the theory of incomplete measurements, you only have finite sums that cannot diverge, and renormalization is the mathematical equivalent of calibrating physically different measurement instruments to match one another.
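Calibration here means nothing more exotic than finding the conversion that makes two instruments agree where both can measure. The readings below are made-up numbers chosen to keep the arithmetic obvious:

```python
# Readings of the same three distances by two different instruments.
rod_readings   = [1.00, 2.00, 3.00]   # solid rod of matter
laser_readings = [1.02, 2.04, 3.06]   # laser-based measure, slightly "off"

# Find the scale factor on the overlap region where both instruments work...
k = sum(l / r for l, r in zip(laser_readings, rod_readings)) / len(rod_readings)

# ...then use it to translate laser readings into the rod's scale.
calibrated = [l / k for l in laser_readings]
print(k, calibrated)  # k is about 1.02; calibrated values match the rod's
```

Outside the overlap region, the two scales are free to disagree, which is precisely the point made above.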

The analogy is not perfect, but in my opinion, it explains relatively well what happens with as little math as possible.


No little thing is too small for grandiose words chiseled by some marketing war machine.

Seen on a Lampe Berger anti-mosquito product this morning:

Parfum “Absolu de vanille”

Vanilla Gourmet Scent

Not only is this ridiculously hyperbolic, but they also have a different “tint” for the English and French versions. English readers will notice that the French version sounds more like “Absolute Vanilla”, because that’s basically what it means. Who on Earth paid people to tell their customers that their anti-mosquito product had a “Vanilla Gourmet Scent”?

Let’s not get used to this kind of marketing hyperbole…

Hyperbole in science

In despair, I turned to a slightly more serious text, the first page of this month’s issue of Science et Vie. And here is what I read there about faster-than-light neutrinos:

Incroyable? Alors là oui, totalement! Et même pis. Que la vitesse de la lumière puisse être dépassée, ne serait-ce que de très peu, n’est pas seulement incroyable, mais totalement impensable. Absolument inconcevable. […] c’en serait fini d’un siècle de physique. Mais, et ce serait infiniment plus grave, c’en serait aussi fini avec l’idée selon laquelle la matière qui compose notre univers possède des propriétés, obéit à des lois. Autant dire que la quête de connaissance de notre monde deviendrait totalement vaine.

Incredible? Absolutely! And even worse. That the speed of light can be exceeded, even a little, is not only unbelievable, but totally unthinkable. Absolutely inconceivable. […] This would end a century of physics. Even more serious, we would be done with the idea that the matter making up our universe has properties, obeys laws. This would mean that the quest for knowledge of our world would become totally hopeless.

Whaaaaat? I really don’t like this kind of pseudo-science wrapped in dogma so pungent as to be the envy of the most religious zealots. How can anybody who understood anything about Einstein’s work write something like that? Let’s backpedal a little and remember where the speed of light limit comes from.

Where does the speed of light limit come from?

At the beginning was Maxwell’s work on the propagation of electromagnetic waves, light being such a wave. These equations predicted a propagation of light at a constant speed, c, that could be computed from other values that were believed at the time to be physical constants (the “epsilon-0” and “mu-0” values in the equations). The problem is that we now had a physical speed constant, in other words a speed that did not obey the usual law of speed composition. If you walk at 5 km/h in a train that runs at 200 km/h, your speed relative to the ground is 205 km/h or 195 km/h, depending on whether you walk in the same direction as the train or in the opposite direction. We talk about an additive composition rule for speeds. That doesn’t work with a constant speed: if I measure the speed of light from my train, I won’t see c-200 km/h, since c is constant. The Michelson-Morley experiment proved that this was indeed the case. Uh oh, trouble.

For one particular speed to be constant, we need to change the law of composition. Instead of adding speeds, we need a composition law that preserves the value of c: the Lorentz transformation. What Einstein acknowledged with his special relativity theory is that this also implied a change in how we consider space and time. Basically, the Lorentz transformation can be understood as a rotation between space and time. And in this kind of rotation, the speed of light becomes a limit in a way similar to 90 degrees being the “most perpendicular direction you can take”. Nothing more, nothing less. Of note, that c value can also be interpreted as the speed at which we travel along time when we don’t move along any spatial dimension.
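The train numbers above make the two composition laws easy to compare side by side. A quick sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def galilean(u, v):
    """Everyday additive rule: speeds simply add up."""
    return u + v

def lorentz(u, v):
    """Relativistic composition law; it can never produce more than c."""
    return (u + v) / (1 + u * v / C**2)

walk  = 5 / 3.6     # 5 km/h in m/s
train = 200 / 3.6   # 200 km/h in m/s

print(galilean(walk, train))   # ~56.9 m/s: the 205 km/h of everyday life
print(lorentz(walk, train))    # indistinguishable at these speeds
print(lorentz(C, -train))      # light measured from the moving train: still c
```

At everyday speeds the correction term u·v/c² is vanishingly small, which is why the additive rule works so well; plug in c itself and the composition returns exactly c, whatever the other speed.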

There are limits to limits

Once you understand that, you realize how hyperbolic what Science et Vie wrote is.

First, the value of c was computed as a speed of light, from equations designed for electromagnetism. It was never intended to say anything about neutrinos. We don’t know how to measure space and time without electromagnetic interactions somewhere. So the speed of light limit is a bit like a speed of sound limit would be for bats who measured their world using only echolocation. It doesn’t necessarily mean that nothing can travel faster than light; it only means that no measurement or interaction based on electromagnetic interactions can ever measure it. I have tried to elaborate a bit on this in the past.

Second, Einstein revised his initial view to include gravity, and this made the world much more complex. Now space-time could be seen as modified locally by gravity. Imagine how solid your “90 degrees is the most perpendicular direction” argument is if you look at a crumpled sheet of paper: the reasoning doesn’t mean much beyond very small surfaces. Remember that in the neutrino experiments, we are in a very complex gravitational environment (mountains, …), and you’ll see that this “crumpled sheet of paper” analogy may not be so far off.

In short, if we find conditions where something appears to travel faster than light, it is exciting, it is interesting, it is worth investigating, but it’s certainly not the End of Science as Science et Vie claimed. Let’s not get used to this kind of crap.

Dark matter as a proof of E.T. intelligence?

Here is a random thought… What if dark matter was a sign of intelligent extra-terrestrial life?

The idea is simply that a Type III civilization on the Kardashev scale would control the flows of energy escaping their galaxy. Many solar systems (the habitable or useful ones) would end up with mechanisms such as Dyson spheres, therefore lowering the amount of energy escaping these systems, to the point where we would no longer be able to identify them as stars.

I don’t know if the idea has any merit, but a quick Google search shows that I’m not the first one to have it.

Dark matter, the modern aether

Today, my 16-year-old son asked me what dark matter was. I was surprised that he would even have heard about dark matter, but it turns out that even junior science magazines talk about the search for dark matter these days. I must say that I’m not too happy about that. The junior science article, like many others, presents dark matter practically as a fact.

This makes me rather nervous because of the obvious parallel with aether. Just like the luminiferous aether, dark matter is something that was postulated when no physical evidence justified it, in order to preserve existing theory.

Those of you who were already dabbling in physics during the 1850s [1] may recall that luminiferous aether was hardly a ridiculous idea at the time. Aether was very simply the medium carrying light waves, much like air or water carry sound waves. It was initially postulated by Isaac Newton to explain things like refraction. According to Wikipedia, Augustin Fresnel proposed in 1818 a theory of light as a transverse wave in aether. To quote Wikipedia, from this point on, no one even seems to question its existence. In other words, the existence of aether was postulated in order to preserve the existing theory of waves. All known waves required a medium, such as air or water, so it was natural to assume that light waves also needed a medium to carry them.

The key point to remember here is that the brightest minds of the time did not question aether at all. Some of them, like Newton or Fresnel, invented it. Later, the vast majority of scientists were busy trying to refine the concept to make it work. Yet today, luminiferous aether is seen as the canonical example of an obsolete physics theory. Einstein’s relativity made the very notion of aether not just useless, but actually wrong. Relativity simplified things by removing the need for a special system of coordinates, and this simplification meant that aether could not exist, because aether itself would have defined a unique, privileged system of coordinates.

Back to dark matter. We find ourselves in a similar situation today. There’s something about the universe that we very plainly, very visibly do not understand. The original problem, identified by Fritz Zwicky for galaxy clusters and later sharpened by Vera Rubin’s galaxy rotation curves, is that galaxies and clusters do not move the way they should according to our best theory of gravitation, general relativity. They behave as if there were more matter in them than we can see.

The operational keyword here is as if. At the moment, we really have no idea whether it’s the theory of gravitation that is flawed, or whether roughly 85% of the universe’s matter really is something we can’t detect. Talking about “dark matter” is choosing one option over the other. It’s pretending that we know, when in reality we still lack a model that really explains all the evidence. In my humble opinion, the jury is still out on what this model will look like.

In short, I’m unhappy about references to dark matter made as if it were a settled topic, a known, validated scientific fact on a par with photons or Pluto. Maybe the problem is with the terminology. Talking about dark matter rather than, say, “gravitational anomaly in galaxies” (GAG) is a good way to preserve the illusion that we know what we are talking about. It makes it sound real. But just because we gave it a fancy name doesn’t make it more real than aether or the tooth fairy.

Let’s be humble and honestly face the simple fact that our model of mass and gravitation breaks down in the face of quite a bit of physical evidence. We find ourselves in the situation of physicists in 1850, whose aether-based theories predicted phenomena like aether drag and aether wind that experiments repeatedly failed to find. It’s exciting, it’s fun. It’s a good thing for physics, because it means there is something new to be found.

Note 1: My editor tells me it’s considered bad taste to live past 150 on this planet. My apologies to those of my readers I might have offended…

It’s alive!

Every single blog or web site that vaguely talks about physics is going to tell you about the LHC today… And nobody else has found the Higgs Boson yet… As Bee noticed, this has some people frightened.

There is no reason to worry, most physicists would tell you: the LHC is orders of magnitude too puny to be really dangerous… To create a really dangerous black-hole machine, physicists would need a few trillion dollars more… Or whatever the unit above trillion is…

Still, if we were going into a singularity, would we even see it?