The bogus “interpretations” of quantum mechanics

I’ve not written on this blog for a long time. A talk in Mouans-Sartoux yesterday prompted me to write this rant about what I will (demonstrably) call bogus interpretations of quantum mechanics. Specifically, the “dead and alive cat” myth.

Schrödinger’s cat

One of the most iconic thought experiments used to explain quantum mechanics is called Schrödinger’s cat. And it is usually illustrated the way Wikipedia illustrates it, with a superposition of cats, one dead and one alive:


The Wikipedia article on the topic is quite clear that the cat may be simultaneously both alive and dead (emphasis mine):

The scenario presents a cat that may be simultaneously both alive and dead,[2][3][4][5][6][7][8] a state known as a quantum superposition, as a result of being linked to a random subatomic event that may or may not occur.

In other words, in this way of presenting the experiment, the superposed state of the cat is ontological. It is reality. In that interpretation, the cat is both alive and dead before you open the box.

This is wrong. And I can prove it.

Schrödinger’s cat experiment doesn’t change if the box is made of glass

I can’t possibly be the first person to notice that Schrödinger’s cat experiment does not change a bit if the box in which the cat resides is made of glass.

Let me illustrate. Let’s say that the radioactive particle killing the cat has a half-life of one hour. In other words, in one hour, half of the particles disintegrate, the other half does not.

Let’s start by doing the original experiment, with a sealed metal box. After one hour, we don’t know if the cat is dead. It has a 50% chance of being dead and a 50% chance of being alive. This is the now famous superposed state of the cat, the cat being “simultaneously both alive and dead”. When we open the box, the traditional phraseology is that the wave function “collapses” and we have a cat that is either dead or alive.

But if we instead use a glass box, we can observe the cat along the way. We see a dead cat or a live cat, never a superposed state. Yet the outcome of the experiment is exactly the same: after one hour, the cat has a 50% chance of being dead and a 50% chance of being alive.

If you don’t trust me, simply imagine that you have 1000 boxes with a cat inside. After one hour, you will have roughly 500 dead cats, and 500 cats that are still alive. Yet you can observe any cat at any time in this experiment, and I am pretty positive that it will never be a “cat cloud”, a bizarro superposition of a live cat and a dead one. The “simultaneously both alive and dead” cat is a myth.
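This 1000-box version of the experiment is easy to simulate. A minimal sketch in Python, assuming the standard decay law: the probability that a particle has disintegrated after time t is 1 − 2^(−t/T) for half-life T:

```python
import random

def p_decay(t, half_life=1.0):
    """Probability that one particle has disintegrated after time t.
    When t equals the half-life, this is exactly 0.5."""
    return 1.0 - 2.0 ** (-t / half_life)

def run_boxes(n_boxes, t, half_life=1.0, seed=1):
    """Observe n_boxes independent cat boxes at time t.
    Every single cat is found either dead or alive, never in between;
    only the overall counts follow the 50/50 statistics."""
    rng = random.Random(seed)
    p = p_decay(t, half_life)
    dead = sum(1 for _ in range(n_boxes) if rng.random() < p)
    return dead, n_boxes - dead

dead, alive = run_boxes(1000, t=1.0)  # one hour, with a one-hour half-life
print(dead, alive)                    # roughly 500 and 500
```

At no point does the simulation (or the lab bench) need a cat that is both dead and alive: the 50/50 split lives in the statistical description, not in any individual box.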

Quantum mechanics is what physics becomes when you build it on statistics

What this tells us is that quantum mechanics does not describe what is. It describes what we know. Since you don’t know when individual particles will disintegrate, you cannot predict ahead of time which cats will be alive, which ones will be dead. What you can predict however is the statistical distribution.

And that’s what quantum mechanics does. It helps us rephrase all of physics with statistical distributions. It is a better way to model a world where everything is not as predictable as the trajectory of planets, but where we can still observe and count events.

The collapse of the wave function is nothing mysterious. It is simply the way our knowledge evolves, the way statistical distributions change as we perform experiments and get results. Before you open the box, you have a 50% chance of a dead cat and a 50% chance of a live cat. That’s the “state”, not of the universe, but of your knowledge. After you open the box, you have either a dead cat or a live cat, and your knowledge of the world has “collapsed” onto one of these two statistical distributions.
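Seen this way, the collapse is just ordinary conditioning on new information. A toy sketch with classical probabilities only (not the full quantum formalism with complex amplitudes):

```python
def collapse(knowledge, observation):
    """Update a probability distribution after an observation:
    discard outcomes inconsistent with what was seen, then renormalize."""
    kept = {k: p for k, p in knowledge.items() if k == observation}
    total = sum(kept.values())
    return {k: p / total for k, p in kept.items()}

before = {"dead": 0.5, "alive": 0.5}  # box closed: the state of our knowledge
after = collapse(before, "alive")     # box opened: we see a live cat
print(before, after)                  # {'dead': 0.5, 'alive': 0.5} {'alive': 1.0}
```

Nothing about the universe changed discontinuously when we opened the box; only the distribution describing what we know did.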


There is a large number of widespread quantum myths

Presenting quantum mechanics as mysterious, even bizarre, is appealing since it makes the story interesting to tell. It attracts attention. And it also puts physicists who understand these things above mere mortals who can’t.

But the result is the multiplication of widespread quantum myths. Like the idea that quantum mechanics only applies at a small scale (emphasis mine):

Atoms on a small scale behave like nothing on a large scale, for they satisfy the laws of quantum mechanics.

Another example is the question “why is the wave function complex?” Clearly, this seems problematic to many. But if you see quantum mechanics as a statistical description of what we know, the problem goes away.

Restarting a Blogmax “private” blog

Back when I was working for HP, I was using Blogmax to build a daily blog of my activities. That was quite useful as a self-reference, but also helped my team members follow what I was doing (I was the only one working from France, most of the team being in the US).

When I started Taodyne, I stopped doing that because a) we were all in the same room, b) we did not necessarily want to publicise everything we were doing, and c) I didn’t have the time. I now really regret it, as this would have been very interesting to me as a searchable archive.

So I’ve decided to restart a private blog. Not private in the sense that it’s hidden or that you can’t read it, but in the sense that it’s really notes for myself. If they happen to be useful for someone else, good. But be warned, it’s unlikely my private blog will be of any interest to you. I insert a reference here so that Google starts indexing it🙂

5 ways Tao3D is so much better than HTML5

It’s the ultimate showdown. On the left side, the current contender, the web. Thousands of developers building exciting new technologies using JavaScript, HTML5 and mashed potatoes. On the right side, a tiny French startup with Tao3D, a brand new way to describe documents. Guess who wins? (TL;DR: You know the web has to win. But why?)




Why are there new JavaScript frameworks every day?

At Google I/O 2015, Google announced Polymer 1.0, a “modern web API” according to the web site. To me, it looks a bit like AngularJS mixed with Bootstrap.js, except it’s entirely different. Google also recently bought Firebase, which looks to me a bit like Ionic, except of course it’s entirely different. And just now, I discovered Famous, which seems a bit similar to Reveal.js combined with Three.js, except of course it’s entirely different.

Don’t get me wrong, I’m all in favor of competition, and there is something unique about all these frameworks. But this proliferation also demonstrates that there’s something seriously wrong with the web today. And I’d like to explain what it is.


Reason #5: HTML5 is way too verbose

Consider the Hello Famous example on the Famous front page. It’s a whole 38 lines of JavaScript just to make a single picture spin. What the Hulk? In Tao3D, it only takes 4 lines. Four tiny miserable lines on one side, vs. 38 with today’s leading web technologies? We are not talking about a mere 2x or 3x factor, but practically 10x. And it’s not an exception either: on a slightly more complex clock animation, Tao3D is 33x shorter than JavaScript. Don’t you want to save your carpal tunnel from premature death?

Due to limitations with WordPress and code colorization, I have to ask you to read the rest on the Taodyne web site.

Shader-based text animations

With shaders, it is possible to create interesting text animations.

The code

The following code lets you create a dancing script by using a vertex shader to move the y coordinate up and down over time, and a fragment shader to create colorization effects:

import BlackAndWhiteThemes

theme "WhiteOnBlack"
base_slide "Dancing script",

    contents 0,
        // Create the shaders animating the text (shader sources elided)
        vertex_shader <>
        fragment_shader <>
        shader_set time := page_time mod 60

        text_box 0, 0, 0.6*slide_width, slide_height,
            align 0.5
            vertical_align 0.5
            color "#F4B63C"
            font "Tangerine", 120, bold
            shader_set parms := 0.8, 0.03
            paragraph "Fun text effects, 50 lines of code"
            color "white"
            font "Arial", 60
            shader_set parms := 0.3, 0.01
            paragraph "Animate text easily with Tao3D!!!"

            color "lightblue"
            font "Courier", 20
            align 1.0
            shader_set parms := 0.2, 0.7
            paragraph ""
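Note that the `vertex_shader <>` and `fragment_shader <>` bodies in the listing above lost their shader sources. As a purely hypothetical sketch of what the vertex shader described here might look like (moving the y coordinate up and down over time, with `time` and `parms` matching the `shader_set` names in the listing, and the constants picked arbitrarily):

```glsl
uniform float time;   // set from Tao3D with: shader_set time := page_time mod 60
uniform vec2  parms;  // assumed meaning: x = amplitude scale, y = wave frequency

void main()
{
    vec4 vertex = gl_Vertex;
    // Displace y with a sine wave travelling along x as time passes
    vertex.y += 40.0 * parms.x * sin(parms.y * vertex.x + 6.0 * time);
    gl_Position = gl_ModelViewProjectionMatrix * vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor  = gl_Color;
}
```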

Going further

This technique is extremely powerful. By adjusting the shader, you can easily get effects such as fuzzy text, text with punched holes inside, flaming text, glowing text, and so on.

Christophe de Dinechin

Detecting video changes

Tao3D can capture live video, and makes it quite easy to perform image analysis using shaders. Here is a simple example that shows how to highlight moving parts in a video by colorizing them in red and blue.


Capturing live video

Tao3D can capture live video in a texture by using the VLCAudioVideo module and using the qtcapture:// URL on MacOSX or dshow:// on Windows (I am not sure what the right URL would be on Linux).

So below is a simple program that would capture a live video stream and display it at the center of the screen:

import VLCAudioVideo
WEBCAM -> if is_available mac then "qtcapture://" else "dshow://"

movie_texture WEBCAM
rectangle texture_width, texture_height

Saving a texture

If we want to detect movement, we need to save a reference frame to compare to. This can be done by using a dynamic texture in which we play the movie, but that we only update at specific times. Here is what it looks like to capture a snapshot every second:

lastTex -> 0
if time mod 1 < 0.1 then
    frame_texture 640, 480,
        color "white"
        movie_texture WEBCAM
        rectangle texture_width, texture_height
    lastTex := texture

The solution above is a bit crude, since it hard-codes the frame texture size. Normally, we should capture the actual frame size from the input video:

lastTex -> 0
frameWidth  -> 640
frameHeight -> 480
if time mod 1 < 0.1 then
    frame_texture frameWidth, frameHeight,
        color "white"
        movie_texture WEBCAM
        rectangle texture_width, texture_height
        frameWidth  := texture_width
        frameHeight := texture_height
    lastTex := texture

Comparing textures

A simple shader program that compares two textures and highlights the difference in red/blue will look like this:

uniform sampler2D last;
uniform sampler2D cur;

void main()
{
    vec4 old = texture2D(last, gl_TexCoord[0].st);
    // "new" is a reserved word in GLSL, so call the current frame "now"
    vec4 now = texture2D(cur, gl_TexCoord[1].st);
    gl_FragColor = vec4(0.3 * vec3(old.r + old.g + old.b), 1)
                 + vec4(1, 0, 0, 0) * (now - old);
}

The last line adds a grey image and a red channel that contains the difference between the new and old picture. Apparently, when this difference becomes negative, GLSL will push up the blue and green channels of the resulting fragment color as a result of color normalization.
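The per-pixel arithmetic of this shader is easy to check outside GLSL. Here is a small Python mirror of the fragment computation: grey level from the old frame, the frame difference added on the red channel, then clamped to [0, 1] as OpenGL does with the final fragment color:

```python
def diff_color(old, new):
    """Mirror the fragment shader on one RGB pixel (components in [0, 1]):
    base grey = 0.3 * (r + g + b) of the old frame, plus the red-channel
    difference new.r - old.r, clamped like a final OpenGL fragment color."""
    clamp = lambda x: min(max(x, 0.0), 1.0)
    grey = 0.3 * (old[0] + old[1] + old[2])
    return (clamp(grey + (new[0] - old[0])), clamp(grey), clamp(grey))

print(diff_color((0.2, 0.2, 0.2), (0.2, 0.2, 0.2)))  # unchanged pixel: pure grey
print(diff_color((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # something new appeared: red
```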

I will save this program in a file called diff.fs for easier editing. Remember that Tao3D will reload the shader code when you change it.

Comparing the snapshot and the current image

We can now run the shader program to compare the snapshot and the current image from the camera:

import Slides
import VLCAudioVideo

WEBCAM -> if is_available mac then "qtcapture://" else "dshow://"

base_slide "Test",
    lastTex -> 0

    color "white"
    contents 0,
        if time mod 1 < 0.1 then
            frameWidth  -> 640
            frameHeight -> 480
            frame_texture frameWidth, frameHeight,
                color "white"
                movie_texture WEBCAM
                rectangle texture_width, texture_height
                frameWidth := texture_width
                frameHeight := texture_height
            lastTex := texture
            texture lastTex
        rectangle -400, 0, 640, 480

    contents 0,
        texture_unit 0
        texture lastTex
        texture_unit 1
        movie_texture WEBCAM
            fragment_shader_file "diff.fs"
        shader_set last := 0
        shader_set cur  := 1
        rectangle 400, 0, 640, 480

The result will look something like this:

It’s of course more interesting when it helps highlight small movements:

Going further

This is only a starting point. You can explore ideas such as:

  • Using other video sources, such as a movie file instead of the webcam.
  • Exploring the capabilities of shaders. The Filters module shows a few common algorithms in image processing that can serve as a starting point.
  • Storing more than one texture for finer analysis.

That was easy!

Christophe de Dinechin

Digital art in 3D

For one month starting today, Sophia-Antipolis is the first place in the world where you can see real-time digital art in glasses-free 3D. If you are in the area, come and see it at Espace Saint-Philippe, Avenue de Roumanille in Biot.

Today, Taodyne installed a new glasses-free 3D screen at Espace Saint-Philippe. This screen, powered by Taodyne’s unique Tao3D digital signage solution, offers an unprecedented visual experience, combining amazing visuals with constantly renewed, interesting and useful content.

3D interactive visual art (3DIVART): beautiful and always new

The 3D screen on display at Espace Saint-Philippe features a number of pieces from the 3DIVART collection. 3DIVART is a set of more than 100 digital art pieces specially crafted for glasses-free 3D displays (many of them thanks to our friends at ShaderToy). And the list keeps growing. Each piece is beautiful, intriguing, mind-blowing, or just funny.



Ray Manta



What’s even more amazing is that each of these pieces is computed in real time, for your eyes only. As a result, you will never see the same thing twice.

More artistic ways to present information

Art can be used to present information in a more entertaining way. For example, the collection includes a Dali-style clock (inspired by The Persistence of Memory), which shows the current time. Watch as the clock’s hands spin in a surrealistic movement around a floppy hourglass body.

Dali donne l'heure (Dali tells the time)

Showing real-time information

Using the NewsFeed module of Tao3D, the digital signage solution on display at Espace Saint-Philippe includes a ticker that shows real-time news from a variety of RSS sources, including Google News or Clubic.

Real-time News

This means that the screen is not just sitting there passively, showing the same movies over and over. Instead, the screen shows something new every single day: useful data from sources of information you know and trust.

Christophe de Dinechin

Going to deep space, now theoretically possible?

I have already discussed earlier why we need to go into deep space if we want to have a future as a species. But how do we do it? Until now, it was considered impossible.

A first theoretical step was made with the Alcubierre drive, an interesting solution to the equations of general relativity that would allow a section of space to move faster than light, although whatever is inside would not feel any acceleration. The solution can be interpreted as moving space rather than matter. But until recently, it was considered impossible in practice because of the amount of energy required.

This is apparently changing, and work in that field seems to have advanced a lot faster than I anticipated. Here is a video from people working on it, complete with artistic renderings of what the ships might look like:

Now I only need to reconcile that work with my own pet theory and see where that leads me🙂

By the way, I am only writing about this now because I found it through an article about another interesting step in space propulsion technology: an electromagnetic drive that appears to work in a vacuum, something that until now was considered hard to believe.