The poster explains:
Regarding the ubuntu version, was it build with a g++ 4.6? It seems to require a GLIBCXX_3.4.15 which is just above the one I have. I am going to install a g++ 4.6 from source in order to get the right libstdc++ but I wanted to let you know just in case someone else get the…
So here is the problem: if we upgrade our build systems to the latest and greatest patch level, we will also include a number of recent dependencies that our customers may not have yet.
Some development environments provide options to select which build tools you use, allowing you to revert to older build configurations. For example, my MacOSX system shows the following variants of G++:
g++ g++-4.0 g++-apple-4.2 g++2 g++-3.3 g++-4.2 g++-mp-4.4 g++3
Unfortunately, that facility doesn’t automatically extend to all the libraries you depend on. For example, we use Qt to develop cross-platform GUI code, and versions before the still-unreleased 4.8 emit compilation warnings when compiling on MacOSX Lion. So if I want to build on Lion, my options are either to build in an unsupported configuration (and stop using the -Wall option in our builds), or to build with a not-yet-released library.
So is the best solution to keep your build systems behind, with all the implied security risks? Is there a better overall strategy I didn’t think of?
Reviewing my daily feed of tweets this morning, I ran across a presentation called “It’s time to fix HTTPS”. The topic itself is of interest to all of us, since it concerns the security of e-commerce transactions, among other things.
Yet the slide deck lacks basic appeal:
- Only text (or busy screen snapshots)
- No obvious organization or story
- Three boring slides of disclaimers and acknowledgments at the beginning,
- Acronyms, jargon,
- Long sentences, broken apparently at random
This kind of presentation is not an infrequent occurrence, unfortunately. For some reason, many scientists and computer scientists seem to take pride in showing horrible slides. I resisted the urge to make a catalog.
So let me state something that should be obvious, but obviously is not: just because you are smart doesn’t mean your presentations have to suck. Or, to put it another way: your time is not so precious that you shouldn’t help your readers get your point.
Sharing an idea
The whole point of a presentation is to share an idea, to convince someone. This requires work at two distinct levels, form and content. Let’s assume that you have the content; what can you do about the form?
Here are three simple things to keep in mind to build a presentation that is useful for you and for your readers:
- Tell a story
- Keep their attention
- Be a guide
Remember above everything else that your objective is to share your idea, not rehash it to yourself. Therefore, if the idea does not contaminate your audience, the presentation failed its objective.
The word “storytelling” has been used and abused. The gist of storytelling is that sharing an idea is not just about sharing facts: it’s about making your audience take ownership of your idea, making it their own.
This is often hard to accept for scientific minds. Aren’t facts enough? In reality, all facts can be disputed. All opinions have to be defended, explained, elaborated. Even if the idea is obvious to you, it may still be wrong, or dangerous, or you may need to explain the basics to avoid losing half of your audience.
Storytelling is not about what you say, but about how you say it. Don’t write “It’s time to fix HTTPS”. Prefer “Do you know it’s really your bank talking to your browser?” Instead of “Global PKI, as currently implemented in browsers, does not work”, what about “The browser’s chain of trust hangs by weak links”? (assuming I understood the core argument correctly)
What am I doing with this simple rephrasing? I’m trying to deliver the same facts, the same core idea, but in a way that the audience may relate to. Not everybody knows what HTTPS means, but anybody (reading Google Docs) knows what a browser is or that security matters when it talks to a bank.
Even the best facts need a good story for people to get interested or remember them.
Keep their attention
The slide deck about HTTPS is on a topic that interests me, but I had some trouble following it to the end.
In these days of soundbites, when wisdom has to fit in 140 characters, sometimes you need all the help you can get from fancy visuals, animations, or speaker charisma simply to keep the audience awake. And if you don’t have a speaker (e.g. for an on-line presentation), you may need other tricks.
Google Docs is clearly not the best tool when it comes to delivering fancy presentations. It’s not a limitation of on-line tools, though. Actually, some of the most convincing innovation in that space comes from on-line tools. SlideRocket delivers really nice presentations, arguably much better than the average PowerPoint. And what’s the best reason to use Prezi, if not fancy visuals?
Still, do not go overboard. Beware that a movie does not replace a presentation. Who has not seen one too many on-line videos like this one?
It certainly took a lot of work. But in my opinion, it’s the exact opposite of the HTTPS slide deck, i.e. it’s all about showing off effect after effect. It doesn’t keep my attention either; it smashes it to bits.
So how could the HTTPS slide deck retain my attention better? There needs to be some level of organization, some key message, some way for me to understand “Ah, that’s what they are talking about now”. We don’t want raw data, we are already over-fed with data. So whatever we pay attention to needs to be structured.
Fancy visuals do not replace the presentation. But, utilized well, they make it live.
Be a guide
Sharing innovation is even more difficult. To paraphrase A.C. Clarke, any sufficiently advanced idea is indistinguishable from gibberish.
It takes a fair amount of marketing and communication to correctly explain the value and benefits of some new technology. I remember being very happy that VMware was doing all the work of educating our customers about the value of server virtualization, which meant we didn’t have to do that work when talking about HPVM.
Innovation is about telling others what to do. And nobody wants to follow directions, so you need to do it not with brute force, but by getting the audience to actually follow you. One way to do that is by showing a better way. Another is by inducing fear of the current situation.
The HTTPS presentation tries both approaches, but without much conviction. The fear is too implicit: you really need to understand the technology to feel it. The better way, the greener pasture just over the fence, is a little bit too vague. So it’s not entirely convincing. The technical arguments could be made into a much more appealing proposal, however.
To be a guide, you need to already know where you are going.
You may have heard of the Joel Test to evaluate startups:
- Do you use source control?
- Can you make a build in one step?
- Do you make daily builds?
- Do you have a bug database?
- Do you fix bugs before writing new code?
- Do you have an up-to-date schedule?
- Do you have a spec?
- Do programmers have quiet working conditions?
- Do you use the best tools money can buy?
- Do you have testers?
- Do new candidates write code during their interview?
- Do you do hallway usability testing?
At Taodyne, we pass all these tests, yet our project took a major blow because there is at least one other key rule that goes above them all. I will call it rule zero, and it impacts how you write code.
- Do you go in a single direction?
The code-oriented formulation of rule zero is the following:
- Do you avoid branches like the plague?
Do not ever fork your codebase
With tools like Git, Mercurial or Bazaar, branches are much easier to manage than they were in the past. They actually became part of the process. With these distributed source control systems, you keep creating and merging branches for practically everything you do.
So why the title, “Branches are evil” (a favorite of über-geek Dennis Handly)? Here, by “branch” I don’t mean the kind of minor branching where two variants of the codebase differ by a few lines of code. This has to happen as you develop code. The problem is when the fork affects the design itself, or the direction the project is following.
My biggest mistake at Taodyne was ignoring this key rule. It all started with a simple thought experiment about an alternative design, quickly followed by a little bit of experimental code. And I forked our code. But then, others started building useful stuff on top of that branch.
When we decided to drop that alternate design, we had a big problem on our hands. We needed to salvage all the good stuff on that branch, which had been written against a completely different design. We spent several months trying to reconcile these two conflicting designs with conflicting objectives. As a result, I spent countless hours fixing bugs that resulted from merging code with different assumptions.
When a single mistake delays the release of your product by a few months, you know you did something seriously wrong. I thought I’d share the experience so you won’t make the same mistake.
I spent the last four days at a rather exciting entrepreneurial event on the French Riviera, which really combined three distinct events under the umbrella of the brand new RivieraCube association:
- An Open Coffee with the Sophia-Antipolis team. Open Coffee is an informal gathering of (mostly Java) geeks around a coffee (or, more often in our case, a beer, since we do that in the evening). This was so successful that a new Open Coffee group for Nice spontaneously emerged.
- A BarCamp the next day, with a small (and cramped) Startup Corner where Taodyne presented its flagship product, Tao Presentations. We had some exceptional unconferences from well-known French serial entrepreneurs, including Kelkoo founder Pierre Chappaz and Kipost founder Pierre-Olivier Carles.
- A Startup Week-end which gathered about 100 enthusiasts with the intent to create a startup in 54 hours. And some of them actually managed to pull it off, which is pretty amazing when you think about it. The talent and energy in that room were remarkable, and reminded me of some of the best moments I had in Silicon Valley.
Reports on the web
There are already a large number of blogs reporting on this event, but I believe the best indicator of how lively it was is its twitter hashtag, #swnsa. There was actually a friendly contest with another Startup Week-end held the same day in Lausanne, Switzerland:
And the winner is…
There were a number of exciting projects, but there was generally little surprise as to who the winners were. The first three projects get a lot of help from local consulting companies, and the leader of the winning team gets a free Geek Trip to Silicon Valley.
The winner was “Mamy Story” (@MamyStory), which I believe surprised no one in the audience. The concept is simple (tell the story of your grandparents), has an interesting innovation (which I won’t disclose here) and a catchy name (“papy” or “mamy” in French is a common nickname for grandparents), but more importantly, it appeals to our emotions, something they largely exploited during their pitch.
As a matter of fact, they managed to get a member of the jury to tell them they could reach a larger market than what they presented in the plan. Here is another example of why they have a market.
The runners-up were:
- Dynoo (@dynoo_com), a project to “spread the word” (the French pronunciation of Dynoo sounds like “Dis nous”, or “tell us”, although they sometimes said it the English way, which I think weakens it; they should consider renaming it to deenoo),
- Qwinti (@qwinti), a web site to save your social activity, who had a really good designer on the team,
- JustMyTrends (@JustMyTrends), a web site offering a personalized shopping experience for hard-to-find items (the founder has a hard time finding shoes fitting his über-large feet).
And the winner is (redux)…
There was also an iPad2 to win, offered by Everything4Mobile (a very cool web site created by Virgile Cuba, a regular at the Sophia Open Coffee).
The winner was Matthieu Loutre, who was a member of our team. He lives in Montpellier, but he will happily drive on the 25th of March to Nice just to get his new gadget from the friendly team at the Apple Store (and when I say “friendly”, I don’t say that lightly – The user experience in that store is remarkable, doubly so by French standards).
First use of Tao Presentations in a conference
On Friday evening, I joined a project that I won’t talk about, because I believe the project leader needs a bit of time to flesh his idea out, and even more time to turn it into a real product.
That being said, it was an occasion to try our prototype of Tao Presentations in a real, competitive environment. I learned a number of things:
- It’s a really compelling way to tell a story. You think about the story first, and the way to tell it follows, something which is often harder with other presentation software.
- The presentation part just works. It didn’t crash once during the two days of rather heavy use, and the worst misbehavior was transient lighting glitches on the screen when using OpenGL lights.
- One of the challenges was to test whether creating live mock-ups of software to explain an idea was possible. It worked, it was easy, and it really added to the presentation, but then we couldn’t really use that part because the question we expected didn’t come up.
Some aspects were less positive:
- Editing slides triggers an elusive bug on a relatively regular basis; I hit the issue about half a dozen times in two days. The program crashes, which is not a real issue because of the way the workflow is organized (I never lost a single bit of what I had done), but it is still annoying.
- The software doesn’t automatically reload images when they change on disk, which means you sometimes need to restart it just to load a new version of the pictures. To be fixed.
Overall, I got some rather good feedback on the presentation. I showed a talk about XL to half a dozen true geeks, and talked about programming techniques.
Young programmers and compilers…
I think it’s a good thing overall, but then someone still needs to get interested enough in system software. I’m afraid system software programmers are getting old. We need to train the new generation, to get them interested in languages that can run fast.
The good news, then, is that XL got rather positive comments. No “why invent a new language” in this crowd.
There is a little bit of activity on the XLR-Talk mailing list, dedicated to discussions about the XL programming language and runtime. XL is a high-level general-purpose programming language I designed and implemented over the years. It exists in two primary flavors: XL2, an imperative statically-typed language, and XLR, a functional flavor. Both share a very similar syntax and basic concepts.
One of the interesting threads is about using XL for high-performance computing. I like it when someone writes:
Thank you for releasing Xl and in particular Xl2, this is a most interesting and exciting development. I am a very long-time C++ user and appreciate the power of generic programming through templates but the syntax is clunky and I often find myself going off the end of what is currently possible and end up messing around with the C preprocessor which is frustrating. I am hoping that Xl2 will prove to be an effective alternative to C++ templates and provide the programming flexibility I crave.
Now, XL2 is not ready for prime-time yet. Its library is significantly lacking. But the core compiler is already quite advanced, and can compile very interesting pieces of code. For instance, XL was as far as I know the first language to introduce variadic templates, for code like this:
generic type ordered where
    A, B : ordered
    Test : boolean := A < B

function Max (X : ordered) return ordered is
    return X

function Max (X : ordered; ...) return ordered is
    result := Max(...)
    if result < X then
        result := X
What happens in this little piece of code is interesting. It introduces two key features of XL: true generic types and type-safe variadics.
True generic types
The ordered type is an example of a “true generic type”, meaning that it can be used as a type in function declarations, but it implicitly makes the corresponding function generic (C++ programmers would say “template”). In other words, ordered can represent types such as integer or real, and you can use Max with all these types.
In that specific case, ordered is a validated generic type, meaning that there are some conditions on its use. Specifically, ordered only represents types that have a less-than operator, because that operator is necessary to implement Max. Note that a compile-time failure will occur if you attempt to use Max with a type that doesn’t have a less-than operator, even if no less-than operation is used in the instantiation.
The second interesting feature demonstrated on this small example is the use of ... to represent arbitrary lists of arguments. This is used here to implement type-safe variable argument lists. You can for example write Max(1, 3, 5, 6, 9), and the compiler will recursively instantiate all the intermediate Max functions until it can compute the result.
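For readers who think in C++ terms, here is a rough sketch of how the same recursion might read using C++11 variadic templates. This is only my comparison sketch, not part of the XL2 code:

#include <iostream>

// Base case: the maximum of a single value is the value itself.
template <typename T>
T Max(T x)
{
    return x;
}

// Recursive case: compute the maximum of the remaining arguments,
// then compare it with the first argument.
template <typename T, typename... Rest>
T Max(T x, Rest... rest)
{
    T result = Max(rest...);
    if (result < x)
        result = x;
    return result;
}

int main()
{
    std::cout << Max(1, 3, 5, 6, 9) << std::endl;   // prints 9
}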
These same features are also used for functions that take lists of arguments with differing types, such as WriteLn. The XL2 implementation of WriteLn looks like this:
to WriteLn(F : file; ...) is
    any.Write F, ...
    PutNewLineInFile F
The Write function itself is implemented with a similar recursion that ends on functions writing a single argument, e.g. an integer value.
How does it help HPC
So how do these features help high-performance computing? They allow you to easily write highly generic code, covering a large range of uses, without paying a run-time penalty for it. No objects are constructed. No garbage collector is required to clean up memory allocations: there are no memory allocations. Everything can easily be inlined.
There are other features in XL that also help with HPC:
- Expression reduction allows you to combine operations for performance or logical reasons, e.g. recognizing an expression like X*Y+Z and implementing it as a single operation.
- Configurable code generation allows you to take advantage of specific hardware and integrate it directly into your code.
Where do we go from here?
XL2 is currently a little bit on hold because I’m focusing most of my energy on the functional variant, XLR, used by Taodyne in its products.
However, I believe that XL2 has reached a stage where other people can contribute relatively easily. For example, it would be useful to implement the various “fundamental” data structures in the library, i.e. go a little bit beyond arrays. If you want to contribute to XL2, nothing would please me more than to give you pointers. Simply join xlr-talk.
A colleague sent me an interview with Kalani Thielen about trends in programming languages.
I’m fascinated by this interview. Kalani is obviously an expert. But what should we think of the following?
The function, or “implication connective” (aka “->”), is an important tool and ought to feature in any modern language.
I have a problem with this kind of jargon. Why? Because if concept programming teaches us anything, it’s that a computer function is anything but a mathematical “implication connective”. Programming languages are not mathematics.
Programming languages are not mathematics
Computer science courses often spend a lot of time teaching us about various aspects of mathematics such as lambda calculus. So we are drawn to believe that programming languages are a sub-branch of mathematics.
And indeed, this is the view that Kalani expresses:
The lambda calculus (and its myriad derivatives) exemplifies this progression at the level of programming languages. In the broadest terms, you have the untyped lambda calculus at the least-defined end (which closely fits languages like Lisp, Scheme, Clojure, Ruby and Python), and the calculus of constructions at the most-defined end (which closely fits languages like Cayenne and Epigram). With the least imposed structure, you can’t solve the halting problem, and with the most imposed structure you (trivially) can solve it. With the language of the untyped lambda calculus, you get a blocky, imprecise image of what your program does, and in the calculus of constructions you get a crisp, precise image.
The statement that I strongly dispute is the last one: in the calculus of constructions, you get a crisp, precise image. The Calculus of Constructions (CoC) is the theoretical basis for tools such as Coq. These tools are intended to assist with computer-generated proofs. So the very thing they talk about is represented crisply by the CoC. But if what I want is to represent the behavior of malloc() (a function whose primary role is its side effects) or how CPUs in a system communicate with one another (physical interactions), then the CoC is of no use. It doesn’t give me a crisp, precise image.
In other words, high-level languages with first-class functions, implicit garbage collection or dynamic typing are really cool, but they give me a blurry, imprecise picture of how the computer actually works. The reason is that a computer is not a mathematical object, it’s a physical object. An “integer” in a computer is only an approximation of the mathematical entity with the same name.
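To make that concrete, here is a tiny C++ illustration of my own (using unsigned arithmetic, where the wraparound behavior is well defined):

#include <cstdint>
#include <iostream>

int main()
{
    // A 32-bit machine "integer" only approximates the mathematical integer:
    // adding 1 to its largest value wraps around to 0 instead of growing.
    std::uint32_t n = 4294967295u;   // largest 32-bit unsigned value
    n = n + 1;                       // well-defined wraparound for unsigned types
    std::cout << n << std::endl;     // prints 0, not 4294967296
}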
Trying to hide the differences only makes the picture blurrier, not crisper. For example, you can use arbitrary-precision arithmetic instead of integers with a fixed number of bits. And now, your “integers” start consuming memory and have other side effects. In any case, these arbitrary-precision numbers are not “native” to the computer, so they are “blurry”.
Programming languages are languages
With concept programming, I’ve consistently argued that programming languages are, first and foremost, languages. Trends in programming languages are a problem of linguistics, not mathematics. The goal should not be to make the language more precise, but to make it more expressive. Nobody cares if you can solve the halting problem for programs in your language, if to achieve that objective you have to give up expressiveness.
Let’s make an analogy with real-world languages. Imagine that we decided that it’s important to make English “precise”. We’d set the goal that any sentence in Precisenglish could be provably true or false. First, you’d end up with a language where it’s impossible to write “This sentence is false”, since it’s impossible to prove this sentence true or false. Now, this particular sentence may not seem indispensable. But what about questions? “What time is it?” wouldn’t be a valid Precisenglish sentence… Isn’t that a hefty price to pay for precision?
The same is true for programming languages. You can impose constraints on the language that make it easier to prove things. And then, simple things like side effects become really complex. In Haskell, the simple task of writing something to the console requires complex constructions such as monads…
Mathematics and programming both derive from languages
It’s interesting to observe that mathematics is also a form of language. It’s a precise, if restricted, language that helps us reason about symbols, build theorems, prove things. So it makes sense that mathematics and programming languages are related: they both are forms of languages. But it’s not because programming languages derive from mathematics. It’s because programming languages and mathematics both derive from languages.
In my opinion, progress in programming languages will happen if we decide to give up mathematics and embrace linguistics, when we start focusing on how we translate the concepts in our heads into computer approximations.
A good language, by that definition, can adapt to mathematics as well as to other domains of knowledge. The ideal programming language should be capable of representing mathematical properties. It should be possible to write a subset of the language that is precise, just like mathematics can be seen as a precise subset of human languages. But it should also be possible to have a subset that is not necessarily as precise, because it addresses other domains. There should be a subset that deals with I/O not through monads or other abstractions, but by representing what’s going on in the machine as precisely as possible. There should be a subset that deals with parallelism, or computations on a GPU, or floating-point arithmetic.
And you basically have a definition of what I’m trying to achieve with XL.
Someone asked on the Go language mailing list about the placement of curly braces. The thread currently has 101 posts. And my guess is that this is just the beginning.
Programmers are familiar with holy wars. This thread reinforces my belief that Go should put a little more emphasis on flexibility or extensibility, and a little less on compile time.
Practically since its creation, the XL programming language project had relied on SourceForge mailing lists. Unfortunately, these mailing lists attracted spam like flies, and after a while, they became unreadable.
So today, I created a Google group, XLR-Talk, dedicated to discussing XL-related matters. I hope that this will make discussions much easier than they were before. The Google Groups interface is quite handy.
If you want to view the group or subscribe, visit the XLR-Talk group on Google Groups.
As many of you may have read, Google Wave is no more. This is sad news. At the same time, I must admit that I had a Google Wave account early on, invited many of my friends there, and found myself totally unable to use it.
The majority of the many articles I read on the subject explain that this is a marketing failure on Google’s part. I take the opposite viewpoint. I believe that the marketing was more than adequate (remember all the hoopla when Wave launched?) In my view, this was really a technical failure. Interestingly, there are articles on the web that predicted Wave’s demise for technical reasons (focusing on developer adoption).
Google Wave vs. iPhone
Let’s compare Google Wave to another product presented as life-changing: the iPhone. There are a number of obvious similarities between both products:
- Redefining state-of-the-art in an existing, flourishing market (respectively email and cell phones),
- Helping people communicate (respectively in written and spoken form)
- Visibly enhancing interactivity (respectively live collaboration and fluid multitouch GUI)
- Designed for third-party value-add (respectively Wave extensions/protocols and the App Store)
Based on this list, it seems hard to understand why the iPhone would immediately conquer hearts and minds (mine included), while Wave languished. More specifically, Google Wave conquered my mind, but not my heart. I liked the idea, I wanted to use it, but I just couldn’t find a way to do it.
Keep it Simple, Stupid
In my opinion, the technical failure of Google Wave is more readily apparent if you go to the main Google page. What is truly remarkable about this page is that a kid can use it. Grandmothers can use it. Just like the iPhone, it is not intimidating. You just get it in seconds, and you feel at ease trying stuff with it.
It doesn’t mean that there isn’t a lot of power packed into the iPhone or the Google front page. The iPhone can host some of the best applications there are on any computing platform: programs that use your location, that detect movements, that know who your friends are, and so on. In short, stuff that explains why, in Steve Jobs’ mind, the PC is obsolete. Similarly, the Google front page is the average Joe’s entry point to the Ali Baba’s cave that is the Internet, with its thousands of applications, games, encyclopedias… In short, stuff that explains why, in Page and Brin’s mind, the PC is obsolete.
By contrast, if you look at Google Wave, this life-changing simplicity is just absent. It is not that the basic ideas are complicated. Actually, the user interface is relatively simple, because it’s based on somewhat familiar concepts. I know how to make text bigger, I know how to drag and drop an image into it. That aspect of the product was actually quite impressive.
Wave makes the workflow more complex
The real problem using Google Wave is the workflow. You don’t use Wave or Google or the iPhone in a vacuum, you do something that prompts you to need them.
If I want to place a phone call, I take my iPhone and I can place a phone call. Actually, this particular task is the one task that is more complicated than on my good old Treo650, but I digress. If I need to know about something, I just type my text in the Google search box, and Google does a really remarkable job figuring out what I want. Simplifying your workflow is how Google displaced Yahoo or AltaVista.
With Google Wave, keeping the simplest conversation somewhat organized proves difficult. If we are collaborating on a document, there is stuff happening all over the place. My head quickly spins just trying to figure out all the stuff that is going on. Even trying to sort my conversations is more complicated than with good old mail, even though one of the big selling points was that it was supposed to be simpler.
Lessons to learn
All this is immediately relevant to what we do at Taodyne. We are trying to change the world in a pretty big way. But then, our product today is way too intimidating. In that respect, it still looks a lot more like Wave than like Google Search. It takes a lot of work to polish things, make stuff “just work”, and remove icons and menu entries instead of adding more.
Google claims that they learned things from Wave’s failure. So should we all.