The bogus “interpretations” of quantum mechanics

I’ve not written on this blog for a long time. A talk in Mouans-Sartoux yesterday prompted me to write this rant about what I will (demonstrably) call bogus interpretations of quantum mechanics. Specifically, the “dead and alive cat” myth.

Schrödinger’s cat

One of the most iconic thought experiments used to explain quantum mechanics is called Schrödinger’s cat. And it is usually illustrated the way Wikipedia illustrates it, with a superposition of cats, one dead and one alive:

[Wikipedia illustration: Schrödinger’s cat, shown simultaneously dead and alive]

The Wikipedia article on the topic states quite clearly that the cat may be simultaneously both alive and dead (emphasis mine):

The scenario presents a cat that may be simultaneously both alive and dead,[2][3][4][5][6][7][8] a state known as a quantum superposition, as a result of being linked to a random subatomic event that may or may not occur.

In other words, in this way of presenting the experiment, the entangled state of the cat is ontological. It is reality. In that interpretation, the cat is both alive and dead before you open the box.

This is wrong. And I can prove it.

Schrödinger’s cat experiment doesn’t change if the box is made of glass

I can’t possibly be the first person to notice that Schrödinger’s cat experiment does not change a bit if the box in which the cat resides is made of glass.

Let me illustrate. Let’s say that the radioactive particle that can kill the cat has a half-life of one hour. In other words, over one hour, half of such particles disintegrate and the other half do not.

Let’s start by doing the original experiment, with a sealed metal box. After one hour, we don’t know if the cat is dead. It has a 50% chance of being dead and a 50% chance of being alive. This is the now famous entangled state of the cat, the cat being “simultaneously both alive and dead”. When we open the box, the traditional phraseology is that the wave function “collapses” and we have a cat that is either dead or alive.

But if we instead use a glass box, we can then observe the cat along the way. We see a dead cat or a live cat, never an entangled state. Yet the outcome of the experiment is exactly the same. After one hour, there is a 50% chance that the cat is dead and a 50% chance that it is alive.

If you don’t trust me, simply imagine that you have 1000 boxes, each with a cat inside. After one hour, you will have roughly 500 dead cats and 500 cats that are still alive. Yet you can observe any cat at any time in this experiment, and I am pretty positive that it will never be a “cat cloud”, a bizarro superposition of a live cat and a dead one. The “simultaneously both alive and dead” cat is a myth.
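
If you want to convince yourself with numbers, here is a minimal simulation sketch of that 1000-box experiment (my own illustration; the one-hour half-life and the 1000 boxes come from the text, the rest is assumed):

```python
# Simulate 1000 independent cat-in-a-box experiments with a one-hour half-life.
import random

HALF_LIFE_HOURS = 1.0
DURATION_HOURS = 1.0
N_BOXES = 1000

# Probability that the triggering particle has decayed after the experiment:
# p = 1 - (1/2)^(t / half-life); with t equal to the half-life this is exactly 0.5.
p_decay = 1.0 - 0.5 ** (DURATION_HOURS / HALF_LIFE_HOURS)

dead = sum(1 for _ in range(N_BOXES) if random.random() < p_decay)
print(f"dead cats: {dead}, live cats: {N_BOXES - dead}")  # roughly 500 / 500
```

Every individual box holds a cat that is plainly dead or plainly alive whenever you look; only the counts follow the 50/50 statistics.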

Quantum mechanics is what physics becomes when you build it on statistics

What this tells us is that quantum mechanics does not describe what is. It describes what we know. Since you don’t know when individual particles will disintegrate, you cannot predict ahead of time which cats will be alive and which will be dead. What you can predict, however, is the statistical distribution.

And that’s what quantum mechanics does. It helps us rephrase all of physics with statistical distributions. It is a better way to model a world where everything is not as predictable as the trajectory of planets, but where we can still observe and count events.

The collapse of the wave function is nothing mysterious. It is simply the way our knowledge evolves, the way statistical distributions change as we perform experiments and get results. Before you open the box, you have a 50% chance of a dead cat and a 50% chance of a live cat. That’s the “state” not of the universe, but of your knowledge. After you open the box, you have either a dead cat or a live cat, and your knowledge of the world has “collapsed” onto one of these two statistical distributions.
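
To make the “knowledge, not reality” reading concrete, here is a tiny sketch (my own illustration, not something from the original post) where the state is just a distribution over outcomes and the “collapse” is an update of that distribution:

```python
# The "state" as a distribution over outcomes; "collapse" as a knowledge update.
knowledge = {"dead": 0.5, "alive": 0.5}   # before opening the box

def observe(outcome):
    """Condition on the observed outcome: all probability mass moves to it."""
    return {k: (1.0 if k == outcome else 0.0) for k in knowledge}

print(knowledge)          # {'dead': 0.5, 'alive': 0.5}
print(observe("alive"))   # {'dead': 0.0, 'alive': 1.0} -- the "collapse"
```

Nothing physical happened to the cat at the moment of the update; only the description changed.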

There are many widespread quantum myths

Presenting quantum mechanics as mysterious, even bizarre, is appealing since it makes the story interesting to tell. It attracts attention. And it also puts physicists who understand these things above mere mortals who can’t.

But the result is the multiplication of widespread quantum myths. Like the idea that quantum mechanics only applies at a small scale (emphasis mine):

Atoms on a small scale behave like nothing on a large scale, for they satisfy the laws of quantum mechanics.

Another example is the question “why is the wave function complex?” Clearly, this seems problematic to many. But if you see quantum mechanics as a statistical description of what we know, the problem goes away.

How to unify general relativity and quantum mechanics

Unifying quantum mechanics and general relativity has been a problem for decades. I believe that I have cracked that nut.

Special relativity:

Philosophical principle: Laws of physics should not depend on the observer’s speed.

Math: Lorentz transformation, a new way to “add” speeds.

Issues it solved: Maxwell’s equations predict a value for the speed of light that does not depend on your own speed.

Physical observations: The speed of light is indeed independent of the observer’s speed (the Michelson-Morley experiment).

Counter-intuitive aspects: There is no absolute simultaneity and no absolute time. There’s an absolute speed limit for physical objects in the universe.

New requirement: Physicists must now pay attention to the “observer” or “frame of reference”.

Thought experiment: Alice is in a train, while Bob is on the ground watching the train pass him by. What happens if Bob sees two flashes of light hit the train “simultaneously” at both ends? Hint: what happens “at the same time” for Bob is not happening “at the same time” for Alice. That explains why we cannot consider simultaneity as absolute.
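
For readers who want the one-line math behind that hint, here is the standard calculation (my addition, using the Lorentz transformation listed above):

```latex
% Bob (ground frame) assigns the two flashes the same time t, at positions
% x_1 and x_2 (the two ends of the train). Alice (train frame, speed v)
% assigns times given by the Lorentz transformation:
\[
  t' = \gamma\left(t - \frac{v\,x}{c^{2}}\right),
  \qquad
  \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
\]
% The time difference between the two flashes for Alice is therefore
\[
  \Delta t' = t'_{2} - t'_{1} = -\,\gamma\,\frac{v\,(x_{2} - x_{1})}{c^{2}} \neq 0
  \quad \text{as soon as } v \neq 0 \text{ and } x_{1} \neq x_{2}.
\]
% Events simultaneous for Bob are not simultaneous for Alice.
```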

General relativity:

Philosophical principle: Laws of physics should not depend on the observer’s state of motion, including acceleration.

Math: Non-Euclidean geometry, tensors and metrics.

Issues it solved: Discrepancies in the trajectory of Mercury.

Physical observations: Gravitation has an impact on light rays and clocks.

Counter-intuitive aspects: Light has no mass, but is still subject to gravity. The presence of a mass “bends” space-time.

New requirement: Physicists must pay attention to the metric (including curvature) of a given region of space-time.

Typical thought experiment: Alice is in a box on Earth, Bob is in a similar box accelerated by a rocket at 1 g. The similarity of their experiences (the equivalence principle) explains why we can treat gravitation as a curvature of space-time.
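
As one concrete instance of “gravitation has an impact on clocks”, here is the standard formula for a clock held at rest at distance r from a mass M (my addition, a textbook result, not part of the original list):

```latex
% Gravitational time dilation for a static clock (Schwarzschild geometry):
\[
  \frac{d\tau}{dt} = \sqrt{1 - \frac{2GM}{r c^{2}}}
  \;\approx\; 1 - \frac{GM}{r c^{2}} \quad \text{(weak field)},
\]
% so a clock deeper in the gravitational potential (smaller r) ticks more
% slowly than an identical clock far away.
```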

Quantum mechanics:

Philosophical principle: Several, “Shut up and calculate” being the top dog today (meaning: if the math flies in the face of your intuition, trust the math).

Math: Hilbert spaces, Hamiltonian.

Issues it solved: Black body radiation, structure of matter.

Physical observations: Quantization of light, wave-particle duality, Young’s double-slit experiment.

Counter-intuitive aspects: Observing something changes it. There are quantities we can’t know at the same time with arbitrary precision, e.g. the position and momentum of a particle.

New requirement: Physicists must pay attention to what they observe and in which order, as observation may change the outcome of the experiment.

Typical thought experiment: Schrödinger puts his cat in a box where a system built on radioactive decays can kill it at an unknown time in the future. From a quantum mechanical point of view, before you open the box, the cat is in a superposition of two states, alive and dead.
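
For reference, here is how that superposed cat state is usually written in the textbook formalism (my addition, standard notation; the section above argues this state describes our knowledge rather than the cat itself):

```latex
% The "both alive and dead" state in Hilbert-space notation:
\[
  \lvert \psi \rangle
  = \frac{1}{\sqrt{2}}\bigl(\lvert \text{alive} \rangle + \lvert \text{dead} \rangle\bigr),
\]
% with the Born rule giving the probability of each measurement result:
\[
  P(\text{alive}) = \bigl|\langle \text{alive} \mid \psi \rangle\bigr|^{2} = \tfrac{1}{2},
  \qquad
  P(\text{dead}) = \tfrac{1}{2}.
\]
```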

Theory of incomplete measurements:

Philosophical principle: Everything we know about the world, we know from measurements. Laws of physics should be independent of the measurements we choose.

Math: “Meta-math” notation to describe physical experiments independently of the mathematical or symbolic representation of the measurement results. The math of quantum mechanics and general relativity applies only to measurement results; the “meta-math” describes the experiments, including what you measure and what physical section of the universe you use to measure it.

Issues it solved: Unifying quantum mechanics and general relativity. The quantum measurement problem. Why the wave function is complex-valued. Why quantum mechanics does not seem to apply at macroscopic scale (the answer being that it does). Why infinities appear during renormalization, and why it is correct to replace them with observed values.

Physical observations: Room-scale experiments with quantum-like properties. How to transition the definition of the “meter” from a solid rod of matter to a laser beam. Physically different clocks and space measurements diverge at infinity. How can we talk about the probability of a photon being “in the Andromeda galaxy” during a lab experiment? Every measurement of space and time is related to properties of photons. Space-time interpreted as “echolocation with photons”.

Counter-intuitive aspects: Quantum mechanics is the necessary form of physics when we deal with probabilistic knowledge of the world. In most cases, our knowledge of the world is probabilistic. Not all measurements are equivalent, and a “better” measurement (i.e. higher resolution) is not universally better (i.e. it may not correctly extend a lower-resolution but wider-scale measurement). Space-time (and all measurements) are quantized. There is no pre-existing “continuum”; the continuum is a mathematical simplification we introduce to unify physically different measurements of the same thing (e.g. distance measurements by our eye and by pocket rulers).

New requirement: Physicists must specify which measurement they use and how two measurements of the “same thing” (e.g. mass) are calibrated to match one another.

Typical thought experiment: Measure the Earth’s surface with the reference platinum-iridium rod, and then with a laser. Both methods were at some point used to define the “meter” (i.e. distance). Do they bend the same way under gravitational influence? If they don’t, the Einstein tensors and metrics would differ depending on which measurement “technology” you used.

More details: Introduction; Short paper.

So how does the unification happen?

To illustrate how the unification happens without too much math, imagine a biologist trying to describe the movement of ants on the floor.

The “quantum mechanical” way to do it is to compute the probability of having an ant at each location. The further away from the ants’ nest, the lower the probability. Also, the probability of finding an ant somewhere is related to the probability of finding it somewhere nearby a short time earlier. When you try to set up the “boundary conditions” for these probabilities, you will say something like: the ant has to be somewhere, so the probability summed over all of space is one; and the probability becomes vanishingly small “at infinity”.
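
Here is a minimal sketch of what that description looks like in practice (my own illustration; the grid size, cell size and spread are all assumptions):

```python
# "Quantum mechanical" description of the ants: a probability distribution over
# a 2D grid, normalized so that "the ant has to be somewhere".
import math

GRID = 201          # 201 x 201 cells centred on the nest
CELL_KM = 1.0       # assumed cell size
SPREAD_KM = 20.0    # assumed spread of the ants around the nest

def weight(i, j):
    """Unnormalized weight: lower the further the cell is from the nest at (0, 0)."""
    r = math.hypot(i * CELL_KM, j * CELL_KM)
    return math.exp(-r / SPREAD_KM)

half = GRID // 2
total = sum(weight(i, j) for i in range(-half, half + 1)
                         for j in range(-half, half + 1))

def p_ant(i, j):
    """Probability of finding an ant in cell (i, j); sums to 1 over the grid."""
    return weight(i, j) / total

print(p_ant(0, 0), p_ant(half, half))  # largest near the nest, vanishingly small far away
```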

The general-relativistic way to do it will consider the trajectories of the ants on the 2D surface. But to be very precise, it will need to take into account the fact that the ants are on a large-scale sphere, and deduce that the 2D surface they walk on is not flat (Euclidean) but curved. For example, if an ant travelled along the edges of a 1000 km square (from its point of view), it would not return exactly where it left off, thereby proving that the 2D surface is not flat.
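
You can check the spirit of that claim with a few lines of spherical geometry (my own illustration, using a slightly simpler walk that follows parallels east and west; the Earth radius and starting latitude are assumptions):

```python
# Walk 1000 km east, north, west, south on a sphere and measure how far from
# the starting point you end up. On a flat plane this path would close exactly.
import math

R = 6371.0                  # km, mean Earth radius (assumed)
d = 1000.0                  # km, side of the "square"
lat0 = math.radians(45.0)   # assumed starting latitude
lon0 = 0.0

# East along the starting parallel, then north, then west along the new
# parallel, then south back down to the starting latitude.
lon1 = lon0 + d / (R * math.cos(lat0))
lat1 = lat0 + d / R
lon2 = lon1 - d / (R * math.cos(lat1))

# Gap between start and end, measured along the starting parallel.
gap_km = abs(lon2 - lon0) * R * math.cos(lat0)
print(f"closure gap: {gap_km:.0f} km")   # roughly 200 km, not zero: the surface is curved
```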

At a relatively small scale, the two approaches can be made to coincide almost exactly. But they diverge in their interpretation of “at infinity”. Actually, assuming the observed ants stay within a radius R of the nest, there are an infinite number of coordinate systems that agree within that radius R but diverge beyond it. Of course, the probabilities you compute depend on the coordinate system.

In particular, if you take a “curved” coordinate system that loops around the Earth to match the “general relativistic” view of the world, the physically observed probability does not match our original idea that the probability becomes vanishingly small at infinity and that the sum is one. In that physical coordinate system, the probability of seeing ants is periodically non-zero (every Earth circumference, you see the same ant “again”). So your integral and probability computation is no longer valid. It shows false infinities that are not observed in the physical world. You need to “renormalize” it.
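
A toy version of that “false infinity” (my own illustration, with assumed numbers): a density that repeats every circumference integrates to one over a single trip around the Earth, but the naive integral over the whole looping coordinate grows without bound, because the same physical ants get counted once per loop.

```python
# A density that is periodic with the circumference C: the nest "reappears"
# at x = 0, +/-C, +/-2C, ... Integrating over one period gives ~1; integrating
# over more and more of the looping coordinate just re-counts the same ants.
import math

C = 40_000.0    # km, assumed circumference of the looping coordinate
SPREAD = 100.0  # km, assumed spread of the ants around the nest

def density(x, copies=10):
    return sum(math.exp(-abs(x - n * C) / SPREAD) / (2 * SPREAD)
               for n in range(-copies, copies + 1))

def integrate(a, b, steps):
    dx = (b - a) / steps
    return sum(density(a + (k + 0.5) * dx) for k in range(steps)) * dx

print(integrate(-C / 2, C / 2, 4_000))     # ~1: one physical trip around the Earth
print(integrate(-5 * C, 5 * C, 40_000))    # ~10: each extra loop re-counts the ants
```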

In the theory of incomplete measurements, you focus on probabilities like in quantum mechanics, but only on the possible measurement results of your specific physical measurement system. If your measurement system follows the curvature of the Earth (e.g. you use solid rods of matter), then the probabilities will be formally different from those of a measurement system that does not follow it (e.g. you use laser beams). Key topological or metric properties therefore depend on the chosen measurement apparatus. There is no “x” in the equations that assumes some underlying space-time with a specific topology or metric. Instead, there is an “x as measured by this apparatus”, with the topology and metric that derive from the given apparatus.

Furthermore, all the probabilities will be computed using finite sums, because all known measurement instruments give only finite measurement results. There may be a “measurement not valid” probability bin. But if you are measuring the position of a photon in a lab, there cannot be a “photon was found in the Andromeda galaxy” probability bin (unlike in quantum mechanics), because your measurement apparatus simply cannot detect your photon in the Andromeda galaxy. Such a probability is nonsensical from a physical point of view, so we build the math to exclude it.
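
Here is a minimal sketch of that idea (my own illustration of the point, not the paper’s formalism; the detector range and the numbers are assumptions):

```python
# Probabilities are assigned only to the outcomes a given apparatus can actually
# produce, plus a "measurement not valid" bin -- never to undetectable results.
detector_positions_mm = list(range(0, 100))       # positions this detector can report
outcomes = [f"{x} mm" for x in detector_positions_mm] + ["measurement not valid"]

p_invalid = 0.02   # assumed probability that the detector fails to register anything
probs = {o: (1 - p_invalid) / len(detector_positions_mm) for o in outcomes[:-1]}
probs["measurement not valid"] = p_invalid

assert abs(sum(probs.values()) - 1.0) < 1e-12     # a finite sum, exactly normalized
assert "photon found in Andromeda" not in probs   # no bin for physically impossible results
```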

So in the theory of incomplete measurements, you only have finite sums that cannot diverge, and renormalization is the mathematical equivalent of calibrating physically different measurement instruments to match one another.

The analogy is not perfect, but in my opinion, it explains relatively well what happens with as little math as possible.

New draft of the TIM

I’ve just posted a new draft (draft 25) of the theory of incomplete measurements. I’m working on clarifying the text more than on the actual contents. There is one change in contents, however, which is to add a reference to Dr Charles Francis’ Relational quantum mechanics.

Of particular interest to me is his observation that the space-time curvature normally attributed to mass can also be seen as a proper-time delay between absorption and emission of a photon. This seems to work well for one particle. I’m still struggling to understand how this would work for multiple masses. I’m going to ask. 😉

Skolem’s paradox

Today, I learned about Skolem’s paradox, which I find pretty interesting. Here is a rough overview:

  • Georg Cantor demonstrated in 1874 that there are sets that are not countable. An example is the set of real numbers. Such sets are also said to be uncountably infinite.
  • But mathematics can be represented as a countable language. Such a technique was used by Kurt Gödel to prove his famous incompleteness theorem.
  • This leads to the Löwenheim-Skolem theorem. Because the propositions of such a mathematical system can be enumerated (in other words, they are countable), any consistent theory expressed in it also has a countable model. In particular, there must exist a countable set that satisfies all the relationships we define on the uncountable set of real numbers.

This leads to Skolem’s paradox. You cannot count the real numbers, but you can count the mathematical propositions that define them… The Wikipedia page indicates that some do not see this as a paradox, because even if there is no bijection within the mathematical model, there may be a bijection outside the model. My “perception” of the paradox is that it means our mathematical model cannot define all real numbers. There are real numbers that are not the subject of any theorem, that are not the limit of any sequence we can write down (i.e. any expression using a finite number of symbols, like “the limit of 1/n”).
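
The countable side of the paradox is easy to make concrete (my own illustration): every finite definition is a finite string over a finite alphabet, and those strings can be listed one after the other.

```python
# Enumerate every finite string over a finite alphabet in shortlex order.
# This is why the set of all finite mathematical definitions is countable,
# while Cantor showed the set of real numbers is not.
from itertools import count, product

ALPHABET = "01"   # any finite symbol set works; a real formal language just has more symbols

def definitions():
    """Yield every finite string over ALPHABET, shortest first."""
    for length in count(1):
        for symbols in product(ALPHABET, repeat=length):
            yield "".join(symbols)

gen = definitions()
print([next(gen) for _ in range(10)])   # ['0', '1', '00', '01', '10', '11', '000', ...]
```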

And this is only the beginning. We can keep building number systems, such as the surreal numbers, which form a collection even larger than the set of real numbers.

For those interested, Paul Budnik created a video discussing these topics. I contacted him today regarding his Quantum Mechanics Measurements FAQ, since I honestly believe that I have answered several of these questions with my theory of incomplete measurements.

Life and Death of a photon

There was recently an article in Nature about how a team of French physicists managed to observe a single photon. The challenge, of course, is to observe the photon without destroying it. Apparently, the trick is to use an interaction between rubidium atoms and photons that makes the atoms “tick” a little late. There are a few more details in this article (in French), but not many.

Now, I wonder how one can talk about “a single photon” for a particle that has been bouncing around and interacting with all sorts of particles. If a photon is absorbed, and then re-emitted, is this the “same” photon? At least, that’s the meaning I believe was given to the term “the same”. In any event, a photon with identical properties.