Saw this on Twitter a few days ago.
I haven’t written on this blog for a long time. A talk in Mouans-Sartoux yesterday prompted me to write this rant about what I will (demonstrably) call bogus interpretations of quantum mechanics. Specifically, the “dead and alive cat” myth.
One of the most iconic thought experiments used to explain quantum mechanics is called Schrödinger’s cat. And it is usually illustrated the way Wikipedia illustrates it, with a superposition of cats, one dead and one alive:
The Wikipedia article on the topic is quite clear that the cat may be simultaneously both alive and dead (emphasis mine):
The scenario presents a cat that may be simultaneously both alive and dead, a state known as a quantum superposition, as a result of being linked to a random subatomic event that may or may not occur.
In other words, in this way of presenting the experiment, the superposed state of the cat is ontological. It is reality. In that interpretation, the cat is both alive and dead before you open the box.
This is wrong. And I can prove it.
I can’t possibly be the first person to notice that Schrödinger’s cat experiment does not change a bit if the box in which the cat resides is made of glass.
Let me illustrate. Let’s say that the radioactive particle killing the cat has a half-life of one hour. In other words, over one hour, any such particle has a 50% chance of disintegrating: in a large sample, half of the particles decay and the other half do not.
Let’s start by doing the original experiment, with a sealed metal box. After one hour, we don’t know if the cat is dead. It has a 50% chance of being dead and a 50% chance of being alive. This is the now famous superposed state of the cat, the cat being “simultaneously both alive and dead”. When we open the box, the traditional phraseology is that the wave function “collapses” and we have a cat that is either dead or alive.
But if we instead use a glass box, we can then observe the cat along the way. We see a dead cat, or a live cat, never a superposed state. Yet the outcome of the experiment is exactly the same. After one hour, we have a 50% chance of the cat being dead and a 50% chance of the cat being alive.
If you don’t trust me, simply imagine that you have 1000 boxes, each with a cat inside. After one hour, you will have roughly 500 dead cats and 500 cats that are still alive. Yet you can observe any cat at any time in this experiment, and I am quite positive that it will never be a “cat cloud”, a bizarro superposition of a live cat and a dead one. The “simultaneously both alive and dead” cat is a myth.
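If you want to make the counting concrete, here is a minimal Monte Carlo sketch of the 1000-box version (my own illustrative code; the one-minute observation interval and all names are arbitrary choices):

```python
import random

N_BOXES = 1000
OBSERVATIONS = 60  # look through the glass every minute for one hour

# Per-observation survival probability chosen so that the chance of
# surviving the full hour is exactly 0.5 (one half-life of one hour).
p_survive = 0.5 ** (1 / OBSERVATIONS)

alive = [True] * N_BOXES
for _ in range(OBSERVATIONS):
    alive = [a and random.random() < p_survive for a in alive]
    # At every observation, each cat is definitely alive or dead;
    # no "cat cloud" ever shows up in the tally.

print(sum(alive), "cats alive after one hour")  # roughly 500
```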
What this tells us is that quantum mechanics does not describe what is. It describes what we know. Since you don’t know when individual particles will disintegrate, you cannot predict ahead of time which cats will be alive and which will be dead. What you can predict, however, is the statistical distribution.
And that’s what quantum mechanics does. It helps us rephrase all of physics with statistical distributions. It is a better way to model a world where everything is not as predictable as the trajectory of planets, but where we can still observe and count events.
The collapse of the wave function is nothing mysterious. It is simply the way our knowledge evolves, the way statistical distributions change as we perform experiments and get results. Before you open the box, you have a 50% chance of a dead cat and a 50% chance of a live cat. That’s the “state” not of the universe, but of your knowledge. After you open the box, you have either a dead cat or a live cat, and your knowledge of the world has “collapsed” onto one of these two statistical distributions.
Presenting quantum mechanics as mysterious, even bizarre, is appealing since it makes the story interesting to tell. It attracts attention. And it also puts physicists who understand these things above mere mortals who can’t.
But the result is a proliferation of widespread quantum myths. Like the idea that quantum mechanics only applies at a small scale (emphasis mine):
Atoms on a small scale behave like nothing on a large scale, for they satisfy the laws of quantum mechanics.
Another example is the question “why is the wave function complex?” Clearly, this seems problematic to many. But if you see quantum mechanics as a statistical description of what we know, the problem goes away.
I have already discussed earlier why we need to go into deep space if we want to have a future as a species. But the problem is how to do it. Until now, this was considered impossible.
A first theoretical step was made with the Alcubierre drive, an interesting solution to the equations of general relativity that would allow a section of space to move faster than light, although whatever is inside would not feel any acceleration. The solution can be interpreted as moving space rather than matter. But until recently, it was considered impossible in practice because of the amount of energy required.
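For the curious, the line element Alcubierre proposed has a remarkably simple form (written here from memory in units where c = 1; x_s(t) is the trajectory of the bubble’s center, v_s = dx_s/dt its speed, r_s the distance to that center, and f a smooth function equal to 1 inside the bubble and 0 far away):

$$ds^2 = -dt^2 + \bigl(dx - v_s(t)\, f(r_s)\, dt\bigr)^2 + dy^2 + dz^2$$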
This is apparently changing, and work in that field seems to have advanced a lot faster than I anticipated. Here is a video from people working on it, complete with artistic renderings of what the ships might look like:
Now I only need to reconcile that work with my own pet theory and see where that leads me 🙂
By the way, I’m only talking about this now because I came across it through an article about another interesting step in space engine technology: an electromagnetic drive that appears to work in a vacuum, something that was until now considered hard to believe.
Unifying quantum mechanics and general relativity has been a problem for decades. I believe that I have cracked that nut.
Special relativity
Philosophical principle: Laws of physics should not depend on the observer’s speed.
Math: The Lorentz transform, and a new way to “add” speeds (see the sketch after this list).
Issues it solved: Maxwell’s equations predict a value for the speed of light that does not depend on your own speed.
Physical observations: The speed of light is indeed independent of the observer’s speed (the Michelson-Morley experiment).
Counter-intuitive aspects: There is no absolute simultaneity and no absolute time. There’s an absolute speed limit for physical objects in the universe.
New requirements: Physicists must now pay attention to the “observer” or “reference frame”.
Thought experiment: Alice is in a train, while Bob is on the ground watching the train pass him by. What happens if Bob sees lightning strike “simultaneously” at both ends of the train? Hint: what happens “at the same time” for Bob is not happening “at the same time” for Alice. That explains why we cannot consider simultaneity as absolute.
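To see the “new way to add speeds” in action, here is a small sketch of the standard relativistic velocity-addition formula (my own illustrative code):

```python
C = 299_792_458.0  # speed of light in m/s

def add_velocities(u: float, v: float) -> float:
    """Combine two collinear velocities relativistically:
    (u + v) / (1 + u*v / c^2)."""
    return (u + v) / (1 + u * v / C**2)

# Two velocities of 0.8c do not add up to 1.6c:
print(add_velocities(0.8 * C, 0.8 * C) / C)  # ~0.976, still below c
```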
General relativity
Philosophical principle: Laws of physics should not depend on the observer’s state of motion, including acceleration.
Math: Non-Euclidean geometry, tensors and metrics.
Issues it solved: The anomalous precession of Mercury’s orbit.
Physical observations: Gravitation has an impact on light rays and clocks (see the sketch after this list).
Counter-intuitive aspects: Light has no mass, but is still subject to gravity. The presence of a mass “bends” space-time.
New requirement: Physicists must pay attention to the metric (including curvature) of a given region of space-time.
Typical thought experiment: Alice is in a box on Earth, Bob is in a similar box accelerated by a rocket at 1 g. The similarity between their experiences explains why we can treat gravitation as a curvature of space-time.
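To put a number on “gravitation has an impact on clocks”, here is a back-of-the-envelope sketch using the standard Schwarzschild approximation (my own illustrative code and constants):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # radius of the Earth, m
C = 299_792_458.0    # speed of light, m/s

def clock_rate(r: float, m: float = M_EARTH) -> float:
    """Ticking rate of a clock at radius r from mass m, relative to
    a clock very far away (Schwarzschild approximation)."""
    return math.sqrt(1 - 2 * G * m / (r * C**2))

lag = 1 - clock_rate(R_EARTH)
print(f"{lag * 86400 * 1e6:.0f} microseconds lost per day")  # ~60
```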
Quantum mechanics
Philosophical principle: Several, “Shut up and calculate” being the top dog today (meaning: if the math flies against your intuition, trust the math).
Math: Hilbert spaces, Hamiltonians.
Issues it solved: Black body radiation, structure of matter.
Physical observations: Quantization of light, wave-particle duality, Young’s double-slit experiment.
Counter-intuitive aspects: Observing something changes it. There are quantities we can’t know at the same time with arbitrary precision, e.g. the position and momentum of a particle.
New requirement: Physicists must pay attention to what they observe and in which order, as observation may change the outcome of the experiment.
Typical thought experiment: Schrödinger puts his cat in a box where a system built on radioactive decays can kill it at an unknown time in the future. From a quantum mechanical point of view, before you open the box, the cat is in a superposition of two states, alive and dead.
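In the standard formalism, that superposition is just a state vector, and the Born rule turns its amplitudes into the 50/50 odds discussed earlier. A minimal sketch (illustrative only):

```python
import math

# The cat's state before the box opens, in the {alive, dead} basis.
# Amplitudes are complex in general; here they happen to be real.
amp_alive = 1 / math.sqrt(2)
amp_dead = 1 / math.sqrt(2)

# Born rule: the probability of an outcome is |amplitude| squared.
p_alive = abs(amp_alive) ** 2  # 0.5
p_dead = abs(amp_dead) ** 2    # 0.5
assert math.isclose(p_alive + p_dead, 1.0)
```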
The theory of incomplete measurements
Philosophical principle: Everything we know about the world, we know from measurements. Laws of physics should be independent of the measurements we choose.
Math: A “meta-math” notation to describe physical experiments independently of the mathematical or symbolic representation of the measurement results. The math of quantum mechanics and general relativity applies only to measurement results; the “meta-math” describes the experiments, including what you measure and what physical section of the universe you use to measure it.
Issues it solved: Unifying quantum mechanics and general relativity. The quantum measurement problem. Why the wave function is complex-valued. Why quantum mechanics does not seem to apply at macroscopic scale (the answer being that it does). Why infinities appear during renormalization, and why it is correct to replace them with observed values.
Physical observations: Room-scale experiments with quantum-like properties. How to transition the definition of the “meter” from a solid rod of matter to a laser beam. Physically different clocks and space measurements diverge at infinity. How can we talk about the probability of a photon being “in the Andromeda galaxy” during a lab experiment? Every measurement of space and time is related to properties of photons. Space-time interpreted as “echolocation with photons”.
Counter-intuitive aspects: Quantum mechanics is the necessary form of physics when we deal with probabilistic knowledge of the world. In most cases, our knowledge of the world is probabilistic. Not all measurements are equivalent, and a “better” measurement (i.e. higher resolution) is not universally better (i.e. it may not correctly extend a lower-resolution but wider-scale measurement). Space-time (and all measurements) are quantized. There is no pre-existing “continuum”; the continuum is a mathematical simplification we introduce to unify physically different measurements of the same thing (e.g. distance measurements by our eye and by pocket rulers).
New requirement: Physicists must specify which measurement they use and how two measurements of the “same thing” (e.g. mass) are calibrated to match one another.
Typical thought experiment: Measure the Earth’s surface with the reference platinum-iridium rod, and then with a laser. Both methods were at some point used to define the “meter” (i.e. distance). Why don’t they bend the same way under gravitational influence? In that case, the Einstein tensors and metrics would differ depending on which measurement “technology” you used.
To illustrate how the unification happens without too much math, imagine a biologist trying to describe the movement of ants on the floor.
The “quantum mechanical” way to do it is to compute the probability of having an ant at each location. The further away from the ants’ nest, the lower the probability. Also, the probability of finding an ant somewhere is related to the probability of finding it someplace nearby a short time before. When you try to set up the “boundary conditions” for these probabilities, you will say something like: the ant has to be somewhere, so the probability summed over all of space is one; and the probability becomes vanishingly small “at infinity”.
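A toy version of that probabilistic description, with an arbitrary exponential fall-off and a grid size of my own choosing:

```python
import math

# Probability of finding an ant on an N x N grid of floor tiles,
# decreasing with distance from the nest (a toy distribution).
N = 101
nest = (N // 2, N // 2)

weights = [[math.exp(-math.dist((x, y), nest) / 5.0) for y in range(N)]
           for x in range(N)]
total = sum(sum(row) for row in weights)

# Boundary condition: the ant has to be somewhere, so the
# probabilities over the whole grid must sum to one.
prob = [[w / total for w in row] for row in weights]
assert math.isclose(sum(sum(row) for row in prob), 1.0)
```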
The general-relativistic way to do it will consider the trajectories of the ants on the 2D surface. But to be very precise, it will need to take into account the fact that ants are on a large-scale sphere, and deduce that the 2D surface they walk on is not flat (Euclidean) but curved. For example, if an ant travelled along the edges of a 1000 km square (from its point of view), it would not return exactly where it started, thereby proving that the 2D surface is not flat.
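To get a feel for the size of that effect, a quick back-of-the-envelope computation: for a geodesic polygon on a sphere, the interior angles exceed their flat-space sum by the enclosed area divided by R² (my own illustrative numbers):

```python
import math

R_EARTH = 6_371_000.0  # radius of the Earth, m
side = 1_000_000.0     # the ant's "1000 km square", m

# Spherical excess: enclosed area / R^2 (Gauss-Bonnet), so an ant
# turning exactly 90 degrees at each corner cannot close the loop.
excess = side**2 / R_EARTH**2
print(f"angular excess: {math.degrees(excess):.2f} degrees")  # ~1.41
```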
At a relatively small scale, the two approaches can be made to coincide almost exactly. But they diverge in their interpretation of “at infinity”. Actually, assuming observed ants stay within a radius R of the nest, there is an infinite number of coordinate systems that agree within that radius R but diverge beyond it. Of course, the probabilities you compute depend on the coordinate system.
In particular, if you take a “curved” coordinate system that loops around the Earth to match the “general relativistic” view of the world, the physically observed probability does not match our original assumptions that the probability becomes vanishingly small at infinity and that the sum is one. In that physical coordinate system, the probability of seeing ants is periodically non-zero (every Earth circumference, you see the same ant “again”). So your integral and probability computation are no longer valid: they show false infinities that are not observed in the physical world. You need to “renormalize” them.
In the theory of incomplete measurements, you focus on probabilities like in quantum mechanics, but only on the possible measurement results of your specific physical measurement system. If your measurement system follows the curvature of the Earth (e.g. you use solid rods of matter), then the probabilities will be formally different from a measurement system that does not follow it (e.g. you use laser beams). Key topological or metric properties therefore depend on the chosen measurement apparatus. There is no “x” in the equations that assumes some underlying space-time with a specific topology or metric. Instead, there is an “x as measured by this apparatus”, with the topology and metric that derive from the given apparatus.
Furthermore, all the probabilities will be computed using finite sums, because all known measurement instruments give only finite measurement results. There may be a “measurement not valid” probability bin. But if you are measuring the position of a photon in a lab, there cannot be a “photon was found in the Andromeda galaxy” probability bin (unlike in quantum mechanics), because your measurement apparatus simply cannot detect your photon in the Andromeda galaxy. Such a probability is nonsensical from a physical point of view, so we build the math to exclude it.
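To make the idea of finite outcome bins concrete, here is a deliberately trivial sketch. It is only my illustration of the finite-sum point, not the actual formalism of the theory; the bin labels and uniform probabilities are invented:

```python
# A lab detector with a finite set of position bins, plus a catch-all
# "measurement not valid" bin. No "photon in Andromeda" bin exists.
bins = [f"x = {i} mm" for i in range(10)] + ["measurement not valid"]

# Whatever probabilities the theory assigns, they live on these finite
# outcomes only, so their sum is a finite sum that cannot diverge.
probs = {b: 1.0 / len(bins) for b in bins}
assert abs(sum(probs.values()) - 1.0) < 1e-12
```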
So in the theory of incomplete measurements, you only have finite sums that cannot diverge, and renormalization is the mathematical equivalent of calibrating physically different measurement instruments to match one another.
The analogy is not perfect, but in my opinion, it explains relatively well what happens with as little math as possible.
SNCF claims it does not practice IP tracking. I find that hard to believe.
A few minutes ago, my wife went to the Voyages SNCF website and searched for a Paris-Antibes ticket. Ticket price: €80. “Warning, last seats at this price”, of course. But at some point she made a mistake and decided to redo the search on the same site. The same ticket suddenly jumped to €113.
I tried from another browser, then from another machine in the same house (hence the same IP address as seen from the outside). The ticket stayed stuck at €113.
But, to check whether the €80 tickets had really sold out (the theory of the SNCF blog mentioned above), I decided to go through my smartphone on 3G, which of course means a different IP address. And there, surprise (not really, in fact), the €80 ticket was back. I bought it.
So, if SNCF does not do IP tracking, why does what I just described happen every single time? This phenomenon cannot be explained by tickets at a given fare level selling out, because the prices shown on the same computer depend on the IP address being used!
As usual, Paul Graham writes an interesting piece about startups. He recommends doing things that don’t scale: thinking like a big company is a sure way to fail. It’s a reassuring piece for a startup founder like me, because at Taodyne we are indeed in that phase where you do everything yourself and would need 48 hours a day just to cover the basics. Good to know that the solution to this problem is to keep working.
Connect this to survivorship bias. This is a very serious cognitive bias, which makes us look only at the survivors: at the planes that return from combat, at the successful entrepreneurs. Because we don’t look at the dead startups or the planes that were shot down, we build our statistics on a biased sample, and as a result we draw incorrect conclusions. For example, if the planes that return have mostly been shot in the tail and wings, you might deduce that this is where planes get hit, so those are the parts you need to protect. In reality, what it proves is that hits to those parts don’t prevent a plane from returning. Very useful.
Last interesting link of the day is the discussion about bullying on the Linux Kernel Mailing List (LKML). Sarah Sharp, a female Intel engineer, stands up to Linus Torvalds and asks him to stop verbal abuse. It’s an interesting conflict between very smart people. To me, there’s a lot of cultural difference at play here (one of the main topics of Grenouille Bouillie). For example, I learned from Torvalds what Management by Perkele means. On one side, it’s legitimate for Sarah to explain that she is offended by Linus’ behavior. On the other, it’s legitimate for Linus to keep doing what works.
Sarah reminds me of a very good friend of mine and former colleague, Karen Noel, a very sharp engineer who joined me on the HPVM project and taught me everything I forgot about VMS. Like Sarah, Karen was willing to stand her ground while remaining very polite.
This post from Dear Apple is just so true, and so clearly on topic for Grenouille Bouillie!
Have we reached the point in complexity where we can’t make good quality products anymore? Or is that some kind of strategic choice?
The original post is mostly about Apple products, but the same is true with Linux, with Android, with Windows.
Here is my own list of additional bugs, focusing on those that can easily be reproduced:
I’ll keep updating this list as more come to mind. Add your own favorite bugs in the comments.
First update (Feb 13, 2013):
Updated February 28th after restoring a machine following a serious problem: