One comment I received was:

> Dear Sir, Your posts are exactly like the hundreds of other crackpots on the web. A theory of physics must predict something, you are just babbling. Why don’t you tell us what your big theory is instead of incoherent ramblings about the state of physics?

Well, one way to share ideas is by writing papers, which I did. But maybe it would be a good idea to summarize the general ideas here.

#### Physics is about measurements

The starting point is the following: *mathematical entities in physics are not arbitrary, they are intended to model or predict the result of measurements.* Therefore, it is interesting to define what a measurement is in physics. I suggest a definition in 6 postulates.

- A physical process
- with known input and output
- repeatable (in other words, with stable results)
- gathering information about its input
- represented by a change in its output
- to which we can give a symbolic interpretation

Eliminate any of these postulates, and you have something that is not a measurement.

#### We can reason about these postulates

The second idea is that these postulates are strong. Once you add the observation that measurements do happen, you can deduce something about their behavior. For example, the fact that a measurement must be repeatable means that if I measure the length of a solid and find 5 cm, and then measure again, I must find 5 cm again. This in turn means that if I use a quantum-mechanical formalism based on Hilbert spaces, the state representing the system immediately after the first measurement must be an eigenvector (otherwise, you might measure a different value). Therefore, the third postulate implies the collapse of the wave-function, one of the axioms of quantum mechanics generally considered unintuitive.
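The repeatability argument can be sketched numerically. This is a toy model, not the formalism from the paper: a two-outcome measurement under the standard Born rule, where collapsing onto an eigenvector is exactly what guarantees that a second measurement reproduces the first (the function and variable names are mine).

```python
import random

# Toy two-outcome measurement. After a measurement, the state must be an
# eigenvector; otherwise a repeated measurement could yield a different
# value, violating the repeatability postulate.
def measure(amplitudes, rng):
    """Born rule: pick outcome 0 with probability amplitudes[0]**2,
    then collapse the state onto the corresponding eigenvector."""
    p_first = amplitudes[0] ** 2
    outcome = 0 if rng.random() < p_first else 1
    eigenvectors = [(1.0, 0.0), (0.0, 1.0)]
    return outcome, eigenvectors[outcome]

rng = random.Random(42)
state = (0.6, 0.8)  # normalized: probabilities 0.36 and 0.64
first, collapsed = measure(state, rng)

# Measuring the collapsed state again reproduces the first result, always:
repeats = [measure(collapsed, rng)[0] for _ in range(100)]
assert all(r == first for r in repeats)
```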

But if this axiom of quantum mechanics is seen as a consequence of the postulates instead of as an ad-hoc statement, this has consequences as well, some of them very measurable and testable by experiment. Notably, in traditional quantum mechanics, the collapse of the wave-function is often said to be instantaneous (more recent work is more nuanced in that respect). By contrast, in the theory of incomplete measurements (TIM), the collapse happens gradually as the measurement instrument converges, and the fully collapsed state is, in most cases, an idealized limit.

So some of the axioms of quantum mechanics may turn out to be weaker than others. The kind of reasoning above may lead us to tweak them, to make minor adjustments.

#### Discrete versus continuous

Another remark is that a physical measurement apparatus has a finite resolution. Therefore, we may build nice continuous mathematical models of things, and in quantum mechanics, use for example infinite-dimensional Fock spaces. But in physical reality, when we get back to experiment, we are only predicting the probabilities of the outcome among a finite set. So the question of the relationship between the continuous model and the discrete experiments has to be addressed.

One reason this is important is that going to the continuous limit is often where divergences arise. A well-known example is the traditional law of gravitation: it is an inverse-square law, so there is a singularity at distance zero. However, a physical measurement instrument cannot reach this distance zero continuously. You can split a metering rod in two, and then in two again, but after a few iterations, you cannot split it anymore without losing the physical properties that make it a valid measuring instrument.
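As a back-of-the-envelope illustration (the atomic-scale cutoff of 10⁻¹⁰ m is my hypothetical number, not a claim from the paper): halving a one-meter rod only works a few dozen times, so an inverse-square law is only ever evaluated at finite distances.

```python
# Halve a 1 m rod until the next halving would drop below atomic scale
# (~1e-10 m, a hypothetical cutoff for "still a valid measuring rod").
length = 1.0  # meters
splits = 0
while length / 2 >= 1e-10:
    length /= 2
    splits += 1

# The inverse-square value at the smallest measurable distance is huge
# but finite; the divergence at r = 0 is never physically probed.
force = 1.0 / length ** 2
print(splits, length, force)  # about 33 halvings, force ≈ 7.4e19
```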

So I think that if we can understand the relationship between continuous and discrete better, we stand a good chance of understanding why some laws diverge in our mathematical models when they apparently don’t in the real world. A large fraction of my paper is dedicated to understanding what the continuous models actually mean relative to their discrete physical counterparts. In particular, I suggest that a discrete and finite normalization condition for the wave-function, as opposed to a continuous and infinite one, would allow us to build a better approximation of the real world.

One reason this would be a better approximation is that quantum mechanics, as traditionally formulated, does not incorporate the limits of the measurement instruments. If you detect a particle using a 10×10 cm detector, quantum mechanics gives precise predictions of the probability of finding the particle at coordinates (x, y), irrespective of the values of x and y, including whether (x, y) is in the detector or not. The theory of incomplete measurements, by contrast, requires a normalization condition which is practically identical to quantum mechanics inside the detector, but has only a single probability for anything outside the detector. In layman’s terms: if the particle missed the detector, there is little point making statistical predictions about where it will be found.
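Here is a minimal sketch of that finite normalization condition as I describe it; the grid size and weights are made up for illustration: probabilities over the detector’s cells, plus one single “missed” outcome, sum to one.

```python
# Hypothetical 10x10 detector read out as 100 discrete cells, plus a
# single catch-all outcome for "the particle missed the detector".
N = 10
inside = [[1.0 for _ in range(N)] for _ in range(N)]  # un-normalized weights
missed = 25.0                                         # un-normalized weight

# Discrete, finite normalization: everything sums to 1.
total = sum(sum(row) for row in inside) + missed
p_cell = [[w / total for w in row] for row in inside]
p_missed = missed / total

check = sum(sum(row) for row in p_cell) + p_missed
assert abs(check - 1.0) < 1e-12
print(p_missed)  # → 0.2: one probability for the whole outside world
```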

#### Quantum mechanics seen as probabilistic predictions

Provided a few “convenience” ingredients are added to the recipe along the way (e.g. we tend to pick linear graduations rather than random ones because it makes mathematics on the results of measurements simpler), it turns out that practically all the axioms of quantum mechanics can be reconstructed from the six postulates above. The missing one is the “fundamental equation”, something equivalent to specifying a Lagrangian or action or Schrödinger equation.

One result that I personally like a lot is explaining why the wave-function can be represented as a normalized complex function of the spatial coordinates. This can be explained relatively well, and it also clarifies what a “particle” is in my opinion. Here is a sketch of the construction.

The predictions you can make about future measurements are, by construction, probabilistic, e.g. a 30% chance that you will get A and a 70% chance that you will get B. What you already know about the system with respect to any particular measurement can be entirely summarized by these probabilities. Since the sum of the probabilities for all possible outcomes must be 1, and since each probability must be greater than 0, we can write individual probabilities as squares, $p_i = x_i^2$, and write the condition that the sum of probabilities be one as $\sum_i x_i^2 = 1$.

If you try to detect a particle, an individual detector will give you two results: `found` or `not-found`. So the representation of the probabilities is a pair of numbers $(x, y)$ verifying $x^2 + y^2 = 1$. We can also represent such a pair using a unit complex number $z = x + iy$, with $|z| = 1$.
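A minimal numerical check of this encoding (the variable names are mine): the two probabilities become the squared real and imaginary parts of one unit complex number, and the unit modulus is the normalization condition.

```python
import math

# Two-outcome detector: p_found + p_not_found = 1.
p_found = 0.36
x = math.sqrt(p_found)       # amplitude for "found"
y = math.sqrt(1 - p_found)   # amplitude for "not-found"
z = complex(x, y)            # pack the pair into one complex number

assert abs(abs(z) - 1.0) < 1e-12          # unit modulus = normalization
assert abs(z.real ** 2 - 0.36) < 1e-12    # recover p_found
assert abs(z.imag ** 2 - 0.64) < 1e-12    # recover p_not_found
```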

But if you want to detect the trajectory of a particle, you now need a grid of detectors. Each detector has its own probability represented as above, but the probabilities are not independent, because assuming there is a single particle, there is an additional condition that at any given time, only one detector will find it. That’s not magic, that’s just how we know that there is one particle. I will leave it as an exercise to the reader to imagine what this “field of probabilities” would look like…
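As a hint for that exercise, here is a one-dimensional sketch with made-up amplitudes: each detector in the grid carries an amplitude, and normalizing the amplitudes across the whole grid is exactly the condition that one, and only one, detector fires.

```python
import math

# Hypothetical un-normalized amplitudes over a 1-D grid of 5 detectors.
raw = [1.0, 2.0, 3.0, 2.0, 1.0]

# Normalize across the grid: the single-particle condition is that the
# "found" probabilities of all detectors together sum to 1.
norm = math.sqrt(sum(a * a for a in raw))
psi = [a / norm for a in raw]      # discrete "wave-function" over the grid
probs = [a * a for a in psi]       # probability that detector i fires

assert abs(sum(probs) - 1.0) < 1e-12
```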

#### Conclusion

I hope that this short exposition demonstrates that I can explain my “theory” to 15-year-old kids and have a chance of being understood. So this is what I think I found: an explanation of quantum mechanics that I can teach to my kids without having them frown at me like “*dad, are you insane?*“.