In the comments following my lament on the state of physics, many physicists, including a Czech blogger with a Don Quichottesque flair for strawman arguments, pointed out that there are a lot of so-called crackpots who write to reputable physicists and waste their time. Funny, that: I don’t think many programmers have ever complained about computer science crackpots. Why not?
[Update: Actually, there are computer science crackpots, it seems…]
One difference I see is that in computer science, reputable authorities of the field do not spend their time explaining, in the same breath, that our stuff is almost perfect, but that we have no idea how it works.
Oh, it works, we just don’t know how…
But, as Peter Woit noticed recently, a respectable scientist like David Gross publicly stated: “It’s only in particle physics that a theory can be thrown out if the 10th decimal place of a prediction doesn’t agree with experiment.” This kind of bragging stance is common. I recently read in a French popular science publication that “no experiment has ever contradicted general relativity”, a claim which at the very least ignores the Pioneer anomaly and the galaxy rotation problem.
Our beloved Czech blogger takes a similar stance, claiming that “for virtually all practical purposes, the correct interpretation of quantum mechanics has been known since the late 1920s.” In reality, the interpretation of quantum mechanics very much remains an open issue, including for the physics community today. This basic data, namely 1.9M pages on “interpretation of quantum mechanics”, flies in the face of the Czech blogger’s uninformed opinion. His predictable argument that all these guys are crackpots just like me is, to put it charitably, laughable. I find that there is much more truth in basic humor. You know, the kind that states that string physicists don’t build bridges, but that they have a really nice proof that 26-dimensional spherical bridges can’t possibly break.
It’s not just a matter of complexity…
So maybe physics is exceptionally complex? Complex it is, but I don’t think it is exceptionally so. I know that the Czech blogger challenged my right to put my own industry on equal footing with Ed Witten’s, but here is the news: the humble hard disk in the computer of every physicist has an error rate of about one part in 10^11 to one part in 10^16, the same kind of precision David Gross claimed was unique to physics research. I mean no disrespect here, but David Gross is simply misguided if he really believes that one can build hard drives (or chips, or high-reliability software) without a highly accurate theory of how that stuff works.
To take another example, XL (computer science) and the TIM (physics) are both the result of my personal reflection over roughly the same amount of time (in reality, XL is quite a bit younger). When it comes to a technical description sufficient to allow someone else to build on my work, it takes about 40 pages to describe the TIM, whereas for XL it is closer to 300 pages. And XL is a small and negligible project by computer science standards, just like the TIM is on the physics side.
No comparison is ever completely fair, but I think that there is some ground to claim that the obviously condescending attitude I have been complaining about is not justified.
The problem may be on your side, folks…
Back to the question of why there are no computer science crackpots, I think that we would also get hundreds of messages trying to tell us how to do our job if we gave such a public display of intense puzzlement… To illustrate this, imagine for a moment what the typical computer sales pitch would look like if Brian Greene were giving it instead of Steve Jobs…
Hello! Today I am going to tell you about the latest and greatest of our operating systems research projects. This will be amazing! If we are right, it will allow you to edit your videos, store all your photos and your music, to fly into space or even to make coffee. Everything! So for the first time since the beginning of computer science, we have a serious prospect of figuring out how to build a Grand Unified operating system.
But first, I need to explain why it is difficult to bring such an amazing product to the market. See, we have this extremely precise theory of how the computer works at the lowest level, which we call Processor Theory. We began understanding how processors work with the discovery that computers are quantized. Everything is made of bits, really really tiny things, so small, in fact, that they can only contain a zero or a one. But there are millions of bits in any program, so at a macroscopic level, we don’t perceive bits at all. However, we know they exist, and that they are laid out in an infinitely long string of zeros and ones. [Insert flashy animation of dancing zeros and ones here]
You need to understand that bits don’t work at all like macroscopic programs. You must forget everything you know about macroscopic programs to understand bits. The fundamental reason for this is what we call the current/bit duality. You see, every bit can also be seen as a small electrical current. Sometimes, it behaves as a bit, a zero or a one, but sometimes it behaves as a current. We don’t really know why that is, but it’s been confirmed by so many experiments that it’s now known beyond any doubt. We have been able to develop a complete set of relations between bits based on highly advanced mathematics like Boolean Algebra, which we call the Standard Model.
Well, at a larger scale, we have this other theory that has also worked extremely well, and it’s called the General Theory of Operating Systems. It predicts how the Operating System executes programs, and we have been able to use it to understand how to run iPhoto or iTunes or all these extremely useful applications. When you start a program, for instance if you double-click here, the hard disk starts spinning and whirring, so we think that the program creates an attractive force dragging the bits from the hard disk into the computer. It’s like a vortex pulling the bits into that infinitely long string of bits I was talking about earlier, but now the string bends and warps and vibrates to adjust to all the new bits that are coming in. So this is what the General Theory of Operating Systems really boils down to: you double click on an icon, and boom, the program runs!
You must understand that no experiment has ever contradicted the General Theory of Operating Systems. [Right as he speaks, the screen turns blue with tons of hexadecimal digits, and the presenter looks at the screen with a slightly annoyed smile…] Oh, look, this is what we call the Blue Matter. It is an additional program the operating system must be running, but we have not been able to find where it is exactly on the hard disk. And we don’t know who double clicked to launch it, so we have called that mysterious entity that must be double-clicking somewhere the Blue Energy. Our experiments indicate that about 95% of all computer users must have some Blue Energy in them, because that’s the fraction of the user base who has seen the screen turn blue like this.
Anyway, it’s really difficult to unify the General Theory of Operating Systems and the Processor Theory, because Processor Theory predicts that there is an infinite number of bits. And as I mentioned, it is extremely precise, so the axioms of Processor Theory are not frequently challenged [Thin condescending smile for the poor slobs who try]. But the problem is, if Processor Theory is correct, since we have an infinite string of bits, this means that you should be able to run an infinite number of programs. Unfortunately, when we try that, we run into trouble. This is really technical, so I won’t get into the gory details, but a crude analogy is that you would need an infinite number of double clicks to run an infinite number of programs.
So we thought: maybe the problem is that we assume there are only zeros and ones; maybe the problem is the binary representation of the computer. Binary notation initially started as a way to model what we call the Strong Binary Or Operator. Someone noticed that it behaves a lot like the addition of real numbers: 0+0=0, 0+1=1, 1+0=1, 1+1=… well, OK, we still have to figure out how we can write 1+1=1, but maybe 1 and 2 are more or less the same thing. Anyhow, we thought: maybe we should use real numbers instead of binary.
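For readers who want the joke spelled out: the “Strong Binary Or Operator” the speaker describes is just logical OR, which behaves like addition that saturates at 1. This sketch (my illustration, not part of the original post) shows the truth table the speaker rattles off:

```python
def strong_binary_or(a: int, b: int) -> int:
    # Logical OR as "saturating addition": 0+0=0, 0+1=1, 1+0=1,
    # and the troublesome 1+1 case saturates to 1 instead of 2.
    return min(a + b, 1)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} OR {b} = {strong_binary_or(a, b)}")
```

The punchline, of course, is the last row: ordinary addition gives 1+1=2, which is exactly where the “maybe 1 and 2 are more or less the same thing” hand-waving comes in.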
This was a very exciting idea, because we can write something that is practically equivalent to boolean algebra using what we call dualities, but at the same time, we can vary how large things are, just like with currents. We do not understand everything yet, but it looks really promising. We have found very interesting mathematical results that indicate some really strong internal consistency to our model. For example, using something called a Knuth cosine operator, we can “wrap around” real numbers onto the small range needed for binary digits in a periodic way. There is an alternative non-periodic model, the Backus-Naur square exponential flattening, and the topic of which one models reality is hotly debated. The key result here is that we can use these to perform operations on the bits. So we think we have found something really profound here, there must be some unification between boolean operators and real numbers. It would be really great if we only needed one entity, the real number, instead of two, the bit and the boolean operator.
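The “Knuth cosine operator” is fictional, but the periodic “wrap-around” being parodied can be given a concrete, tongue-in-cheek form: the map (1 − cos(πn))/2 sends every even integer to 0 and every odd integer to 1, periodically folding the real line onto binary digits. A minimal sketch of that idea (my illustration only):

```python
import math

def cosine_bit(n: int) -> int:
    # (1 - cos(pi * n)) / 2 equals 0 for even n and 1 for odd n,
    # periodically "wrapping" integers onto the range of a binary digit.
    # round() absorbs floating-point error in the cosine.
    return round((1 - math.cos(math.pi * n)) / 2)

print([cosine_bit(n) for n in range(6)])  # → [0, 1, 0, 1, 0, 1]
```

This is the same flavor of trick as the real physics being lampooned: a periodic function compactifies an unbounded quantity onto a small range.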
Another great success with this approach is that it predicts for the first time the dimensionality of computer storage. You see, we know that there is this infinite sequence of bits, but why is it one-dimensional? This has always been a big open question. Our new approach is the first one to predict the number of dimensions, because it’s only consistent in two dimensions, one for the bit position in the computer, and one for the bit value. We don’t see these two dimensions in the computer, of course, so we think the extra dimension must be small and that’s why we don’t see it.
So this is really an exciting time to be a computer scientist. We are not completely there yet, however, because there are many ways to map bits onto real numbers. We have proven an extremely powerful theorem demonstrating that there are at least 10^500 different ways, so now we have started a very systematic research project to categorize these 10^500 ways, which we call the landscape. We think that the reason computers work is really that they happen to be built using the right sequence of bits to execute our programs, but this may just be random chance. In other words, they work because if they did not work, we wouldn’t be using them, would we? This extremely interesting idea is what we call the anthropic principle.
In order to get a better insight, we are building this two-billion-dollar facility called the Large Hopeful Contraption, where we will send enormous currents and provoke all sorts of short circuits in the hope of better understanding boolean algebra. We don’t know what we are going to see, but it will be very exciting!
Anyway, we thought it would take us two years to write MacOS X. It’s been 30 years, but it’s just as exciting as ever. I hope to have great news from the LHC to share with you at next year’s keynote!
I can guarantee you that if Steve Jobs were selling his stuff this way, it would not be long before he’d start getting tons of e-mails from various physicists saying: “Huh… Are you sure this is right? You know, Steve, I wonder if there isn’t a simpler explanation of how computers actually work…” Now, maybe Steve would look at the e-mail and just shrug: “Oh well, another crackpot, these guys are soooo annoying!”
Instead, Steve Jobs stated the key point I want to make here (a few seconds into the movie, when he talks about Pixar): it’s relatively easy to do something that appeals to specialists but is totally obscure to the layman. What is difficult is to make something that appeals to both.
That’s my opinion, and I share it…