Archive for the ‘Peer reviews’ Category

Everything is broken and no one cares

February 10, 2013

This post from Dear Apple is just so true, and so clearly on topic for Grenouille Bouillie!

Have we reached the point in complexity where we can’t make good quality products anymore? Or is that some kind of strategic choice?

The original post is mostly about Apple products, but the same is true with Linux, with Android, with Windows.

Here is my own list of additional bugs, focusing on those that can easily be reproduced:

  1. Open a file named X in any of the new Apple applications (those without Save As). Open another file named Y. Save Y as X. Beachball, in every application. Worse yet, since applications often remember which windows were open, you get the beachball again when you reopen the application. It takes another force quit before the application (fortunately) offers not to reopen the windows.
  2. A relatively well-known one by now: type F i l e : / / / in practically any OSX application. Without the spaces. Hang or assert, depending on your luck.
  3. Use a stereoscopic application like Tao Presentations (http://www.taodyne.com). Activate stereoscopy. Switch spaces or unplug an external monitor. Expect a kernel panic or a hang. Then try telling your customers that the kernel panic is Apple’s fault, not ours…
  4. If you back up over the network, configure your computer to sleep after, say, 1 hour while on power. Change your disk enough that the backup takes more than one hour. The backup disk will come up as corrupt after a couple of days, and OSX will suggest you start a new one (and the cycle will repeat).
  5. Use the “Share” button. It takes forever for the window to show up (2-3 seconds in general on my 2.6GHz quad-core i7 with 8GB of RAM). Since what I type generally begins with an uppercase letter, I usually prepare myself by holding a finger on the shift key. But to that stupid animation framework, “shift” means “slow the animation down so that Steve can demo it”. Steve is dead, but the “shift” behavior is still there.

I’ll keep updating this list as more come to mind. Add your own favorite bugs in the comments.

First update (Feb 13, 2013):

  1. Safari often fails to refresh various portions of the screen. Visible in particular when used in combination with Redmine. This used to be very annoying, but it has gotten much better in recent updates of Safari.
  2. iTunes 11 no longer has Coverflow. It was a neat way to navigate your music, and it wasn’t even the default, so why remove it?
  3. Valgrind on OSX 10.8 is completely broken. I have no idea what’s wrong, but it’s a pretty useful tool for developers, and Apple has nothing in its own development tools that comes even remotely close.
  4. “Detect displays” is gone, both from the Monitors control panel and from the Monitors menu icon. Combine that with the fact that OSX 10.8, unlike its predecessors, sometimes totally fails to detect that you unplugged a monitor, and you find yourself with windows stuck on a screen that is no longer there…
  5. That little Monitor menu icon used to be quite handy, e.g. to select the right resolution when connecting to an external projector for the first time. Now, it’s entirely useless. It only offers mirroring, fails to show up 90% of the time when mirroring is actually possible, and shows up when mirroring is impossible (e.g. after you disconnect the projector). It used to be working and useful; it’s now broken and useless. What’s not to love?
  6. Contacts used to let me format phone numbers the way I like. That’s gone. Now I have to accept the (broken) way it formats all phone numbers for me.
  7. I used to be able to sync between iPhone and Contacts relatively reliably. Now, if there’s a way to remove a phone number, I haven’t found it. Old numbers I removed keep reappearing at the next sync, ensuring that I never know which of the 2, 3 or 4 phone numbers I have is the live one.
  8. Still in Contacts: putting Facebook e-mail addresses as the first choice for my contacts? No thanks. It was heinous enough that Facebook replaced all genuine email addresses with @facebook.com aliases, but having those pop up first is really annoying.
  9. Now fixed, but in early 10.8, connecting a wired network when I also had Wifi on the same network would not give me higher speed. It would just drop all network connectivity.

Updated February 28th after restoring a machine following a serious problem:

  1. Time Machine restores are only good if your target disk is at least as big as the original. But with Apple’s recent move to SSD, that may no longer be affordable. In my case, I’d like to squeeze 1TB of data into 512GB. Time Machine does not give me the fine-grained control I’d need to restore only what I really need, so I have to do it manually, which is a real pain.
  2. Calendar sync is a real mess. Restoring calendars from a backup is worse.
  3. Spaces? Where are my good old spaces? Why is it that I had spaces on the original machine, no longer have them, and find myself unable to say “I want 6 spaces” or to set up keyboard shortcuts for them as I used to?

P versus NP

A researcher from HP Labs named Vinay Deolalikar announced a new proof that complexity classes P and NP are different. The paper made it to the Slashdot front-page (more on this below).
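For readers who want the formal statement (these are the standard textbook definitions, not something specific to Deolalikar’s paper): P is the class of decision problems solvable in time polynomial in the size of the input, and NP is the class of problems whose positive answers can be verified in polynomial time, given a suitable certificate:

    \mathrm{NP} = \{ L \mid \exists \text{ polynomial-time } V : \; x \in L \iff \exists w,\ |w| \le \mathrm{poly}(|x|),\ V(x,w)=1 \}

Since P is trivially contained in NP, the open question is whether the inclusion is strict.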

What constitutes a “proof”?

This is far from being the first claimed proof. There are about as many claimed proofs that P equals NP as proofs of the opposite, with, for good measure, a few papers claiming that the question is actually undecidable… One more paper does not make the problem solved. Indeed, the new “proof” takes more than 60 pages to explain, and it references a number of equally complex theorems and proofs.

This is interesting, because it means that very few people, apart from the most advanced specialists in the field, will be able to understand the proof by themselves. Instead, the vast majority (including the vast majority of scientists) will accept the conclusion of a very small number of people regarding the validity of the proof. And since understanding the proof is so difficult, it may very well be that even the most experienced mathematicians will be reluctant to draw clear-cut conclusions.

Sometimes, clear-cut conclusions can be drawn. When I was a student, another student made the local news by announcing he had a proof of Fermat’s last theorem. We managed to get a copy of the paper, and shared that with our math teacher. He looked at it for about five minutes, and commented: “This is somewhat ridiculously wrong”.

However, in most cases, reaching such a definite conclusion is difficult. This puts us in the uncomfortable position of having to trust someone else with a better understanding than ours.

Understanding things by yourself

That being said, it’s always interesting to try and understand things by yourself. So I tried to read the summary of the proof. I don’t understand a tenth of it. However, the little I understood seemed really interesting.

If I can venture into totally bogus analogies, it looks to me like what Deolalikar did is build the mathematical equivalent of an ice cube freezing, and draw conclusions from it. Specifically, when ice freezes, the phase change happens not globally, but in local clusters. You can infer some things about the cluster configuration (e.g. crystal structure) that were not there in the liquid configuration. In other words, the ice cube is “simpler” than water.

Now, replace atoms with mathematical variables, and forces between atoms with some well-chosen Markov properties that, like those forces, happen to be local. The frozen cube corresponds to a P-class problem, where you have some kind of strong proximity binding, so that you can deduce things locally. By contrast, liquid water corresponds to NP-class problems, where you can’t deduce anything about a remote atom from what you learn about any number of other atoms. Roughly speaking, Deolalikar’s proof is that if you can tell water from ice, then the classes P and NP are distinct.
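For what it’s worth, the concrete setting behind the analogy, as far as I understand the paper, is random k-SAT: build a formula over n boolean variables by drawing m clauses of k literals at random. It is well established (numerically at least) that the probability of satisfiability drops sharply at a critical clause-to-variable ratio, and that shortly before that threshold the solution space shatters into well-separated clusters, the “frozen” phase the proof tries to exploit. For 3-SAT:

    \Pr[\text{satisfiable}] \to 1 \ \text{ if } m/n < \alpha_c, \qquad \Pr[\text{satisfiable}] \to 0 \ \text{ if } m/n > \alpha_c, \qquad \alpha_c \approx 4.27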

Of course, this is only an analogy, and it is very limited like any analogy, and I apologize in advance for totally misrepresenting Deolalikar’s subtle work. Nevertheless, I found the approach fascinating.

Crowds are stupid

Now, an alternative to personal understanding is to trust the crowd. Democracy. Unfortunately, if Slashdot is any indication, this doesn’t work at all.

Slashdot has a moderation system, where people vote for comments they find “Interesting” or “Insightful” or “Funny”. You’d think that this would let good comments rise to the top. But what really happens is that people with “moderation points” apparently feel an urge to moderate as quickly as they can. So the very first comments get moderated up very quickly, and drown any later comments in the noise.

Here are some of the comments on the P vs. NP announcement that Slashdot thought were “good”:

  • P is not NP when N is not 1 (“Funny”)
  • A random series of totally uninformed opinions on the cryptography impact (“Interesting” and “Funny”)

There are a few redeeming gems in there, like the opinion of a professor at MIT that the proof is not valid, which leads to another, more serious analysis. Still, it’s too bad you have to sift through mud to find them.

Science and the fear of innovation

Michael Nielsen writes:

The disincentives facing scientists have led to a ludicrous situation where popular culture is open enough that people feel comfortable writing Pokemon reviews, yet scientific culture is so closed that people will not publicly share their opinions of scientific papers. Some people find this contrast curious or amusing; I believe it signifies something seriously amiss with science, something we need to understand and change.

The whole article is worth reading. It suggests the same kind of innovation I am hoping for in science. Right now, scientists often congratulate themselves on how open science is. But as Michael and others point out, arXiv is not exactly a model of openness, at least relative to standards such as open-source software development.

Slashdot on the relevance of academic journals


I’ve written a few times about peer reviews, in particular advocating non-anonymous peer reviews. The idea is not popular.

That’s why I was so interested to see an article on Slashdot asking if academic journals are still relevant. Do they still ensure quality? Are they nimble enough? Can they compete with the information glut on the Internet? All these are really relevant questions. One reader expresses some of my own thoughts about anonymous peer reviews:

There are certainly parts of the peer review process that are less than ideal–reviewers don’t take the time to understand what they’re reviewing, or they have an emotional reaction to something that seems to undercut their fond hopes for how something will turn out and make stupid, picky attacks on a paper, or they realize that they’re about to get scooped and so ask for every pedantic little thing so they gain more time for their own work. But even with these flaws, the process does a pretty good job at rejecting junk; it just rejects a little too much non-junk, too, or at least makes the process more painful than necessary.

Interestingly, Slashdot itself has some of the properties one might want in a modern-age on-line academic journal: quick to respond to news, filtering data with the help of its readers, offering support for anonymous as well as non-anonymous comments (and slightly favoring non-anonymous ones). In that sense, it is similar to Wikipedia, or, in a different direction, to the arXiv, both of which share many of these properties.

Yet all these examples also show the limits of the system. They are not recognized as offering the highest-quality information (even if, in reality, they do most of the time), because bad information is shoved through them all the time and there is no guarantee that what you read will not be garbage. And, paradoxically, while they are openly democratic by construction, these outlets give enormous power (censorship, favoring certain views, or even rewriting content) to a small circle of individuals.

So the question is: how would you fix these flaws? I have ideas, but I’m interested in hearing yours.

Categories: Peer reviews

Peer reviews…

Bee has posted a number of comments on peer reviews lately, notably this one, which reports on a survey of scientists. The summary is that the majority feel that peer reviews are necessary, but the majority would also like to see them improve (notably regarding timing).

Another nice link I found on BackReaction was this presentation. Food for thought.

Categories: Peer reviews

An intelligent review of the TIM

An anonymous reader wrote an interesting comment about my own pet theory (which I called theory of incomplete measurements, or TIM for short). This anonymous comment exemplifies both what I like and what I dislike about physics discussions these days:

  • On one hand, it gives honest feedback, both positive and negative, and this hints at the kind of exchange of ideas that I enjoy when discussing with knowledgeable people. For instance, the anonymous reader cites a number of relevant sources, and makes a number of intelligent objections to my work.
  • On the other hand, I cannot imagine any good reason why this reader would choose to remain anonymous, only bad ones. Too much work? The risk of backlash if anybody knew this person dedicated any time to fringe physics? A fear of true, bi-directional discussion? Whatever his reasons were, I suspect that I would not find them admirable.

I first wrote a quick reply, but I think that this comment demands a more prominent answer.


Below are excerpts from the anonymous reader’s comment (I will refer to its author as “Dr. Anonymous” from here on), with my thoughts (not necessarily answers) inserted.

A theory, or just a descriptive framework?

1. The title would be better as “A general framework for describing physical measurements”, as this is all that is provided. Theories are rather more specific than what is given, and make predictions.

The obvious implication here is that my choice of title is a bit too grandiose and that the article delivers too little to merit such a title. But is it really true that the article only provides a framework for describing physical measurements? Is it true that it makes no predictions?

Granted, I never intended the TIM to be some theory of everything. What it is is a theory of measurements. What you need for this is to make predictions about measurements, not about gravitation or bananas. Otherwise, it would be called a theory of gravitation or a banana theory… However, with respect to measurements, the TIM makes a number of predictions, starting I believe with equation (9) in the article. Most of these predictions may be qualitative or physical rather than mathematical or numerical. But that does not make them any less predictive, and it does not make them any less falsifiable, which is the most important thing for progress in science.

In particular, I can think of at least three predictions that are apparently surprising, if not shocking, to most physicists who commented on my paper. Specifically, according to the TIM:

  • All our measurements of space-time can be reduced to properties of electromagnetic interactions (roughly, counting wave-fronts of electromagnetic waves).
  • It does not make sense to normalize the wave-function on a continuum ranging from “minus infinity” to “plus infinity”; the correct way is to normalize it on a discrete set of points that are in a time-like connected region. In particular, there is no such thing as an “instantaneous collapse” of the wave function.
  • The laws of general relativity and quantum mechanics are not absolute, but depend on specific choices of physical measurement. In particular, two measurements with distinct resolutions may give distinct metrics. This is a generalization of the “scale relativity principle” dear to Laurent Nottale.

theories can either be fitted or not be fitted into the particular framework given.

Isn’t it useful enough to have a relatively simple framework where both general relativity and quantum mechanics can be fitted?

Other frameworks are often less general, including, for example, ‘quantum’ logic (measurements represented by propositions), W*-algebras (measurements represented by algebras with a ‘complete’ set of idempotent elements), and convex set theory (states represented by convex linear measures) – see, eg, Hans Primas, “Chemistry, quantum mechanics and reductionism” (Springer), and Stanley Gudder’s “Stochastic quantum mechanics”, for overviews. However, more general approaches, at similar levels to your own, are the operational frameworks of Guenther Ludwig and of Charlie Randall and David Foulis – see, eg, the references at http://plato.stanford.edu/entries/qt-quantlog/ .

All these are interesting references. I’ve only scanned through those available on the web. But the main articles of Randall and Foulis for example require a subscription that I don’t have.

Personal gratification, or improvement of the collective understanding?

2. The paper is sincerely and honestly written, and often quite well written, and clearly the ideas have provided you with a personally satisfactory way of thinking about physical phenomena.

It is true that I was looking for an understanding of physics I would be satisfied with. But having made some progress, I hope to offer a way of thinking that is also satisfactory to others. That is a worthy goal too. Last time I checked, there were over 2 million Google hits for “quantum measurement problem”. That can only mean that I’m not alone in being dissatisfied with the “mainstream” way of thinking about physical phenomena.

However, there is often little more than this to be achieved by embedding known theories into a particular framework unless they (i) suggest better theories, and/or (ii) allow better ‘understanding’ of known theories.

The TIM does suggest better theories. It suggests a theory of relativity that would explicitly incorporate an additional parameter specifying how the measurement is done. This is pretty much what Nottale’s scale relativity or doubly-special relativity boil down to when looked at from a TIM point of view. The TIM also suggests an extended quantum mechanics where space-time coordinates do not commute and are discretized. Aren’t these suggestions enough for starters?

More importantly in my view, the TIM also allows a better understanding of known theories. Or, more precisely, it has so far provided me with a more satisfactory illusion of understanding, though as I wrote in my reply, it may just be that ignorance is bliss… Anyway, if I understood things correctly, the TIM shows for example that the “collapse of the wave function” is a necessary consequence of the fact that we demand repeatability from our measurements. One of the meanings of “theory of incomplete measurements” is that it suggests a theory for measurements that do not collapse the state instantaneously or perfectly. That is yet another “better theory” that the TIM suggests.

Point (i) is the important one, as point (ii) is quite subjective – for example, one person’s interpretation of QM is very often another person’s anathema!

I would argue that “one person’s interpretation of QM is very often another person’s anathema” is a bad thing. Isn’t it embarrassing for physics that QM is not defined sharply enough to leave room for multiple conflicting interpretations? After all, there is very little room for interpretation in Newton’s second law, or in Ohm’s law. So why do we need an interpretation of quantum mechanics at all?


Here, I apparently disagree somewhat strongly with Dr. Anonymous. Point (ii) is probably the most important one for quantum mechanics, because QM is not clearly understood yet, and that ought to be fixed. The TIM might be a step in the right direction, or maybe not. But at least, it’s an attempt to address this problem.

Dr. Anonymous is in very good company here. The problem of unambiguously interpreting quantum mechanics has resisted physicists for 80 years. So now it is more commonly dismissed as “philosophical” or “uninteresting”. In other words, it became what Douglas Adams called Somebody Else’s Problem… Well, maybe it does not interest mainstream physicists. Maybe, actually, you need to be able to ignore the problem to become a mainstream physicist. But it definitely interests me. And it is definitely relevant to modern physics.

Since no better theories are in fact suggested, the content of the paper has only subjective significance – this is not necessarily a bad thing, but it does make it more of interest to philosophers rather than to mainstream physicists such as myself. Similar comments apply to the operational frameworks mentioned above.

I explained why I disagree with the assertion that no better theories are suggested. But it is interesting that the final objective of all this discussion is to kick the TIM out of “mainstream physics” into the lower realm of “philosophical interest only”, with severely restricted “subjective significance only”. That is a pretty effective way to invalidate the whole thing.

But the TIM has a physical significance that I think is not so easily dismissed. It shows that practically any equation in quantum mechanics or general relativity is only an approximation. And it tells physicists how to fix that. That’s a pretty broad challenge, and you cannot address this challenge by saying that the problem is only subjective. It is not a subjective issue that our laws of physics were not rewritten when we changed the definition of the metre from a reference solid to some distance travelled by light during such and such time. Why not? What makes a 1/r^2 law relative to a solid rod remain a 1/r^2 law relative to a ray of light? Seriously, is this a subjective question?

Is it OK to disagree with Einstein on general relativity?

3. I have little to remark on the relativity sections, as my only disagreements are at the ‘philosophical’ level – eg, I would have given Einstein a little more credit for physical justification of GR (principle of equivalence, and the recognition of curvature effects in SR from a mass rotating about a fixed point);

Actually, I do give credit to Einstein for these “physical justifications”, only to explain why I believe they are not valid. So, despite a lot of admiration for Einstein, I simply point out a couple of logical flaws in the reasoning. The flaws are not in the mathematical results, fortunately, but in conclusions that do not follow from the initial statements.


The recognition of curvature effects in special relativity from a mass rotating about a fixed point is the easiest to explain. If a mass rotates about a fixed point, there is a Lorentz contraction along the path it follows, but not along the radius joining the mass to the central point. In other words, the ratio of circumference to radius, as measured with co-rotating rods, is no longer 2π. See this page for a more modern discussion than my reference [12].
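For reference, the standard computation goes like this (my summary, in modern notation): rods laid along the circumference move at speed v = ωr and are Lorentz-contracted, while rods laid along the radius are not, so the co-rotating observer measures

    C' = \frac{2\pi r}{\sqrt{1 - \omega^2 r^2/c^2}} > 2\pi r,

hence a circumference-to-radius ratio greater than 2π.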

My objection to this is that you have defined a curved 3D surface, namely the helicoid swept out in 4D space-time by the area that the radius covers over time. But we all know that you can find a curved 2D surface, like a helicoid or a sphere, in a 3D space, without inducing any “curvature effect” on the 3D space itself. In other words, the rotating mass in special relativity is not sufficient to imply a 4D non-Euclidean geometry.

Again, there is nothing “philosophical” about this. It’s a discussion of whether there is a flaw of logic in the original reasoning, as I think there is, or not.

writing down the local Minkowski metric or geodesic equation does not give them any physical significance per se;

That is true. In the context where they are written in the TIM, the only point is to connect the traditional notation to the TIM notation, and to show why they are compatible.

the two-slit experiment is a poor justification for curved spacetime

It is part of a series of observations showing that light rays are a poor foundation on which to build a Euclidean geometry. I think it is a better justification to say “we don’t know how to build a Euclidean geometry, because the closest thing we have is not Euclidean in this and that case” than to incorrectly infer that a curved 3D surface implies a curved 4D space-time. But that’s just the biased opinion of someone who saw a logical flaw in the generally accepted justification, and spent some serious time finding a justification he thought might be less flawed, to salvage the beautiful theory of general relativity.

and the existence of the Planck length could just as well be used to argue against instead of for scale invariance of physical laws.


This is how it is used in the TIM. If there is a scale-invariant constant in physics, then a scale-invariant physics requires some pretty serious rethinking, similar to what Laurent Nottale attempted with his scale relativity. Again, this is a serious issue that one cannot just paper over with a “philosophical question – dismissed” sticker… :-)

But the majority of “mainstream physicists” are just not ready to make their space-time fractal, as this kind of theory apparently demands. And they are probably right that a lot of assertions in the current presentation of scale relativity sound ad hoc, that the mathematics is still a bit inconsistent and immature, and so on. But that only means the solution is not completely satisfying yet, certainly not that the problem is not there. In my opinion, this issue is a thorn in twenty-first century physics about as painful as the constancy of the speed of light implied by Maxwell’s equations was for nineteenth-century physics. Which is a pretty long-winded way of saying it’s worth solving.

Linearity and complex Hilbert spaces

The objections Dr. Anonymous makes to the quantum mechanics part are by far the most interesting ones. They are also the most technical, and it’s possible that my answers will be unconvincing, if only for lack of space. Expect more iterations here, and possibly some serious changes in the paper to make things more precise or easier to understand.

4. I think that the quantum sections are a little misleading to a careless or non-expert reader (though not intentionally so!), both in terminology and presentation

It is definitely not my intention to mislead. I think one reason they may be misleading is that it might seem like I attempt to deduce all of quantum mechanics, when in reality there are things I know I deduced, and others I simply borrow because we know from quantum mechanics that they work. The line is a bit blurred, not least because I tried to push as much as I could into the “demonstrated” half, and sometimes failed (for the moment at least).

the arguments given to introduce linear operators and wavefunctions and the like have little substance,

This is pretty vague, but fortunately specific examples are given later, which I will address in turn. I acknowledge that the TIM only allows deriving certain properties of QM, not all of them. Something may be missing because I don’t know how to derive it, because it derives from other principles, or because it’s not necessary for a working physics. I found cases in each category.

  • Linearity or real-valuedness appear not to be necessary for a working physics, because the TIM highlights counter-examples. So they are treated as added hypotheses that make some cases easier to solve. For example, the “principle of superposition” is demoted to a simple “hypothesis of superposability” which, when applicable, makes more theorems and results applicable.
  • The evolution law, for example the Schrödinger equation, is something I have not entirely been able to derive. There are multiple derivations of the Schrödinger equation, but none of them seems to carry over well to the TIM. I have, however, been able to offer a quick qualitative “sanity test”, which requires an ad hoc, but not unreasonable, hypothesis. So that’s all there is in the TIM article, and I would certainly understand if Dr. Anonymous or anybody else said this section has “little substance”, because it does.
  • The collapse of the wave function, on the other hand, can entirely be deduced from the need to have repeatable measurements. I have several lines of reasoning that all lead to the same result, so I’m pretty convinced that this is solid.

and do not lead even to the usual Hilbert space description.

By this, I think Dr. Anonymous primarily means that only a real-valued Hilbert space is implied, not a complex-valued one. Since she makes the objection more explicit below, I’ll address it there. Another possible interpretation is that it’s always a finite-dimensional Hilbert space, even for space-time coordinates. But this is intentional.

Now, if the TIM is not deriving the “correct” Hilbert spaces, isn’t it easy to show that it makes incorrect predictions? In other words, instead of not being a theory, it would simply be a wrong theory. So these little differences are what makes things interesting and challenging.

The best one can say is that standard QM fits into your general framework, but that very little of it is implied by this framework.

If it were only the collapse of the wave function, it would already be an interesting result. But I think that more is implied, specifically practically all the axioms except the evolution law. If something is not implied by my reasoning, it is important to tell me why not. The specific remarks that follow are not entirely convincing to me.

Probabilities and evolution

4a. One can certainly, and trivially, rewrite measurement probability distributions p=(p1, p2, …, pn), which live on the n-simplex, as “unit vectors”, psi=(psi1, psi2, …,psin), which live on the surface of an n-ball (where psi1=sqrt[p1], etc).

This rewrite, made in section 4.1 of the TIM, is indeed mathematically “trivial”, but a lot is trivial in hindsight. The really difficult question is: if it’s so trivial, why aren’t we taught more often that the state vector in quantum mechanics is the only form a state describing predictions of future measurement results can take? It’s possible someone else wrote it before me, but I know this is not something I’ve been taught, and I tried to find references to such a result after discovering it. Honestly, it was so “trivial” that I thought it impossible no one else had made the observation. But it is still not a staple of QM teaching.
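For concreteness, here is the rewrite in question spelled out (standard notation, nothing specific to the TIM):

    p = (p_1, \ldots, p_n), \quad p_i \ge 0, \quad \sum_i p_i = 1 \qquad \text{(a point on the simplex)}

    \psi_i = \sqrt{p_i} \quad \Longrightarrow \quad \sum_i \psi_i^2 = 1 \qquad \text{(a unit vector in } \mathbb{R}^n)

The normalization of probabilities becomes the normalization of the state vector, which is exactly the condition quantum mechanics imposes on states.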

However, there is no reason whatsoever in the TIM framework to postulate that measurements and evolution are linear with respect to psi, rather than being linear with respect to p (or indeed with respect to any other function g=(g1, g2, …, gn) of the probabilities – eg, g1=exp[p1]).

Indeed. This is why it is not postulated in the TIM.

For measurements, it is only observed that if a measurement is real-valued, and if psi is the probability vector as defined in the TIM, then one can define an operator that 1) is linear in psi, 2) has the measurement results as eigenvalues with specific vectors as eigenvectors, and 3) can be used to compute the right expectation value. From these properties, one is naturally led to identify it with the “observable” of quantum mechanics.
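Spelled out (my summary of that construction): pick an orthonormal vector e_i for each possible measurement result m_i and define

    \hat{M} = \sum_i m_i\, e_i e_i^{\top}, \qquad \langle \psi | \hat{M} | \psi \rangle = \sum_i m_i \psi_i^2 = \sum_i m_i p_i,

which is the ordinary average of the results weighted by their probabilities, i.e. the right expectation value.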

For the evolution, I have to introduce it ad-hoc, and I think it is pretty clear in the article. I can’t say that I’m happy with it, and I honestly don’t have a satisfying answer to offer yet.

And it is merely ex post facto analysis in the light of QM to suggest such postulates are even reasonable!

It is normal to use “ex post facto analysis in the light of QM” to find the results of QM, just like it is reasonable to use “ex post facto analysis in the light of metrology” to find a definition of time that is based on the TIM, but fits our standard definition of time pretty accurately. So this is not cheating, this is simply verifying that QM “fits”.

One needs something else to ‘derive’ a physical significance for linearity in psi (eg, Kaehler manifolds).

Here, I’m really lost. I don’t see how Kaehler manifolds can give a physical significance to anything. The physical significance for linearity in psi is that if you construct a linear operator on psi with the right eigenvalues and eigenvectors, it trivially allows you to compute the right expectation value. What else do you need?

In the absence of this something else, it would seem far more reasonable, for example, to postulate that the evolution and operators should be linear with respect to the probability vector p !

I don’t think so. To get the eigenvectors, you need the n×n matrix. And then, to get the right expectation value (i.e. the average observed value), the operator cannot be linear in p.

Complex or real Hilbert spaces?

4b. Even when one formally writes down a vector psi as above, it only inhabits a real Hilbert space, rather than the complex Hilbert space needed for QM (eg, to explain the physical double-slit experiment, the Aharonov-Bohm effect, the singlet state, etc).


The argument that complex Hilbert spaces are needed for QM is a frequent one. If that is true, if QM probabilities are somehow special and dependent on complex numbers, how can we get a “double slit experiment”, i.e. interference, with classical waves over water? Only classical probabilities are at play in that case.

In reality, complex numbers are only a tool in the computation of probability, to sum probabilities that are not independent from one another. In the water interference scenario, like in the double slit experiments, the Aharonov-Bohm effect or the singlet state, you have probabilities that are “entangled”. It’s simpler to explain with the water waves, but it’s fundamentally the same mathematics in all cases. The waves can propagate through two slits in a wall, but they come from the same source. The height of water at one point (hence the probability of presence of water particles) can be computed by summing the contribution of the wave coming from one slit and from the other. But these contributions are not independent from one another. They are always at a constant phase from one another, which for water waves is a function of the difference in distance between the point where you compute the probability and each of the two slits.
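To make the bookkeeping explicit (standard wave physics, my own illustration): if the wave from slit j travels a distance d_j to the observation point, the height of the water can be written

    h = \mathrm{Re}\!\left[ A e^{-i\omega t} \left( e^{i k d_1} + e^{i k d_2} \right) \right],

and the time-averaged intensity is proportional to

    \left| e^{i k d_1} + e^{i k d_2} \right|^2 = 2 + 2\cos\big(k(d_1 - d_2)\big).

The complex exponentials do nothing more than track the relative phase k(d_1 - d_2); there is nothing quantum about them.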

I tried to illustrate how complex numbers emerge in the subsections “Trajectory measurement” and “Normalization of the wave function” in the TIM paper, but to be honest, these are some of the sections I consider the least satisfying of the whole article… They attempt to demonstrate that a field of “probability of presence” is the correct way to represent predictions about an experiment where the question is “where is a single particle”, but they do not make a very good case that this is the same thing as the wave function of traditional quantum mechanics.

Hence, while the formulas written down “look” quantum mechanical, much more is needed.

A little more may be needed in terms of explanations. But if the formulas look quantum mechanical and behave like quantum mechanical formulas, why can’t I legitimately use the quantum mechanical toolset to solve them? And if the predictions and the formulas are quantum mechanical, why is “much more” needed?

Can one use an operator notation for a non-injective and non-linear operator?

4c. The notation M^|psi> for general nonlinear operators, and M^|psi>=m|psi> for eigenvectors, is misleading. One does not have, for example, the property

(M^ + N^) |psi> = M^|psi> + N^|psi>

in general, even though the notation suggests it.

The TIM article is very careful not to use the property you cite, and goes to extraordinary lengths to explain that it does not hold in the general case. That’s probably one of the key points of the whole article. Isn’t it a bit disingenuous to say that the notation suggests something when the text explicitly says the opposite?

The “hat-M” notation is an intermediate step in reaching the “inverse-hat-M” or “check-M” notation, which does have the property you cite. It would be very confusing to change the notation meaning “apply an operator” just because some operators are linear and some are not, don’t you think? It’s a bit as if I insisted that the notation f(x) implies a smooth function because I’m used to sin(x) or cos(x), and then required that you use a different notation for any discontinuous functions…

It would be more accurate to represent nonlinear operators as functions mapping the sphere to itself, eg, M(psi), and to refer to vectors satisfying the property M(psi) = m psi as ‘fixed points up to scaling’ or similar.

It would not really be more “accurate”; it would be a change of notation that obscures the fact that hat-M and check-M are cousins, check-M being constructed as a linear operator from the eigenvectors (or fixed points) identified with hat-M. My notation would only be inaccurate if I did not carefully explain that “hat-M” denotes a non-linear operator, or if I used properties that implicitly require linearity. I do not think I do either.

As I point out in my short answer, in the general case, M(psi) would be an even worse choice of notation, because it implies that hat-M is a function, and in the most general case hat-M is not even injective, something the text also insists on. We have a mathematical object which is a “random function with fixed points”, and this strange thing will do strange things to your notation. The best I can do is to take the closest notation, and the closest and simplest notation for what I wanted to express is the operator notation.

Collapse limited by the speed of light, or normalization limited by the speed of light?

4d. The QM of entangled systems (more than one particle) is not easily explainable by postulating that collapse is limited to the speed of light, and does not appear to be compatible with recent experiments by Gisin et al (eg, http://prola.aps.org/abstract/PRA/v63/i2/e022111 and http://lists.paleopsych.org/pipermail/paleopsych/2006-April/005093.html).

The article does not postulate that collapse is limited to the speed of light. It only focuses on how to normalize the wave function, and it points out that collapse is not instantaneous, which is not the same thing. Our computation of probabilities must obey constraints set by the experiment. One of these constraints is that we cannot collect data or set up entanglement between particles faster than the speed of light. Consequently, the space-time points that are summed in a single probability normalization need not be simultaneous in a given reference frame. The experiments listed would tend to prove that this is the correct approach, not to invalidate it.

For example, suppose Alice and Bob live in spacelike separated regions and share a singlet state (or ensemble thereof), and each decides to measure spin in directions a or b, at random. The probability for Bob to measure spin-up is 1/2 in any run. Now, if collapse is at the speed of light, then his result cannot depend on whether Alice has found spin up or spin down for her measurement – it can only depend on local variables in his vicinity. But in fact Alice and Bob find perfect correlations when they happen to make the same choice of measurement direction – which cannot be explained via such local variables (Bell’s theorem). So, whatever entanglement is, its effects cannot easily be explained, and indeed are exacerbated in strangeness, by restricting collapse to the speed of light.

First, there is a somewhat ambiguous use of Bell’s theorem in your sentence. Bell’s theorem does not prove that there are no local hidden variables in quantum mechanics; it shows how to detect whether there are any. Only experiments applying Bell’s theorem, such as those of Alain Aspect, have led us to think that there are no local hidden variables. I think that this is what you meant…
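For reference, the detection criterion I have in mind is the CHSH form of Bell’s theorem (a standard result, not specific to the experiments cited): for measurement settings a, a′ on one side and b, b′ on the other, with correlations E(a, b), any local hidden variable model satisfies

    S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,

while quantum mechanics on the singlet state predicts, and experiments observe, values of |S| up to 2\sqrt{2}. It is the measured violation of the bound, not the theorem by itself, that argues against local hidden variables.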

Second, in your experiments, Alice and Bob’s particles can only be entangled if we made them entangled to start with. For example, they can be two particles generated by the same event. These particles then travel at most at the speed of light. So entanglement itself is carried at most at the speed of light.

More to the point, when the experiment is performed, no correlation can be established until the data of both Alice and Bob can be collected in a single point. All the TIM is suggesting is that the normalization condition, if seen as a sum of probabilities equal to one, can only be evaluated at that point, where both Alice and Bob are in the past light cone. What matters is that Alice and Bob’s events are in the past light cone. It does not matter whether they occurred at the “same time” or not, which is good, because the notion of “same time” is quite obsolete in relativity. I found one of the experiments funny, because it is done in such a way that both Alice and Bob think they made their measurement first. Well, it does not really matter: they both did it in the past of the observer collecting the data!


The final word…

I apologise for corresponding anonymously, but after a friend pointed me to your paper and said that you were keen for feedback, I thought this might be helpful as a once-off thing.

That makes me sad. It all started as a helpful and intelligent discussion. But all of a sudden, it’s no longer a discussion, it’s just a one-way, once-off “thing”, a quick attempt to bring the poor fool out of his erroneous ways before moving on to more important “things”.

Oh well… Better having anonymous discussions than no discussion at all… :-)

Categories: Peer reviews, Physics

Scientific writing style

Funny, a couple of days ago, I was wondering whether I was right to use the first person plural (“we”) in the TIM article when I am so obviously alone writing it. Not only did I find that it is the norm, I even found an explanation for it. So I guess I was lucky.

Advocating non-anonymous peer review

In a previous post, I mentioned that Einstein had been unhappy with the idea of anonymous peer review.

The key word here is anonymous. I am certainly not suggesting that peer review is bad, simply that making it anonymous degrades the efficiency of the review system. The reason, as I see it, is that the reviewer is not held to the same standard as the person being reviewed. Anonymity does not foster discussion, and it gives the reviewer little incentive to actually try to understand work that seems obscure. I know that in my industry, code review is the norm, but I’ve never heard of anonymous code review.

In the case of ideas and research, anonymous peer review also acts as a form of censorship. On average, it is much easier to get past an anonymous reviewer you know nothing about by presenting material that everybody in the field understands than by presenting material that is hard to understand and hard to explain. I took my examples from the field of computer science, but I’m pretty sure the same thing happens in obscure fields of physics.

So, how can we fix that? I think that we now have the technology, or at least we are getting pretty close to it. We know how to build a web application that displays mathematical formulas well. We know how to build moderated public discussion forums that can deal with very high levels of traffic. Most publications are now in electronic form. Why not combine these technologies? What about a public forum where you post pre-print articles in electronic form for everybody to review, where reviews can contain maths, diagrams and more, and can even annotate or modify the original document? The evolution and discussion of the article could then take place in a nicely laid-out discussion thread, as sketched below.
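To make this a little more concrete, here is a minimal sketch of what the data model of such a forum might look like. Everything in it is hypothetical, names included; it is a thought experiment, not a design:

    # Minimal, purely illustrative data model for an open peer-review forum.
    # All names are hypothetical; this does not describe any existing system.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Optional

    @dataclass
    class Review:
        author: str                   # reviewer identity: never anonymous
        body: str                     # may embed math, e.g. "$E = mc^2$"
        created: datetime
        parent: Optional["Review"] = None  # replies point to their parent
        # (paragraph index, remark) pairs anchoring comments to the text:
        annotations: list[tuple[int, str]] = field(default_factory=list)

    @dataclass
    class Preprint:
        title: str
        authors: list[str]
        pdf_url: str
        revisions: list[str] = field(default_factory=list)  # one URL per revision
        reviews: list[Review] = field(default_factory=list)

        def thread(self) -> list[Review]:
            """Order reviews so that each reply follows its parent."""
            ordered: list[Review] = []
            def walk(node: Review) -> None:
                ordered.append(node)
                for child in sorted((r for r in self.reviews if r.parent is node),
                                    key=lambda r: r.created):
                    walk(child)
            for root in sorted((r for r in self.reviews if r.parent is None),
                               key=lambda r: r.created):
                walk(root)
            return ordered

The point of the sketch is only that nothing in it is technically hard: threading, revisions and anchored annotations are all solved problems.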

I think we have the technology.

Physics: will anybody read the work of an amateur?

Another reason for being late with XL is that I recently spent a lot of free time trying to evangelize my ideas about incomplete measurements in physics. The main idea is not too complicated: I’m just asking whether there is any good reason why two physically distinct measurements of, let’s say, a coordinate we call x should behave identically at all scales. I have good arguments justifying that the answer is no. But then, what does it mean exactly when we use x in an equation? [If you have any interest, there is an article (PDF) developing these ideas.]

Advocating a serious change like this is pretty tough. Professional physicists tend to look down on the work of an amateur, and probably with good reason. After all, there is a lot of crackpot physics out there. But still, I wish that after 6 months, I could have gotten a single serious physicist to actually read the damn article! I mean, in my own domain of expertise, I do not routinely leave e-mails unanswered, or ignore a request just because someone has not published 25 papers…

This came as a surprise to me. I think there is now so much incentive to write articles that practically nobody bothers reading anymore; everybody just writes, writes, writes, hoping to be the next Einstein. The peer review system introduces further undesirable side effects, because it tends to favor articles that “fit the mold”. I recently came across an article suggesting that Einstein was taken aback by the idea of anonymous peer review.

At first, I thought that the newly created sci.physics.foundations newsgroup would be an interesting outlet for such ideas. After all, it had been created just for that purpose. But over time, I got disappointed by the slowly decaying quality of the discussion.
