Why writing proprietary software is not fun

Eric Raymond explained why he hates proprietary software. The reason lies not in the software itself, but in how it is written:

In that world, the working programmer’s normal experience includes being forced to use broken tools for political reasons, insane specifications, and impossible deadlines. It means living in Dilbert-land, only without the irony. It means sweating blood through the forehead to do sound work only to have it trashed, mangled, or buried by people who couldn’t write a line of code to save their lives.

If you love programming, trying to do work you can be proud of in this situation is heartbreaking. You know you could do better work if they’d just goddamn give you room to breathe. But there’s never time to do it right, and always another brain-dead idea for a feature nobody will ever actually use coming at you from a marketing department who thinks it will look good on the checklist. Long days, long nights and at the end of it all some guy in a suit owns all that work, owns the children of your mind, owns a piece of you.

Raymond concludes with:

I will have no part of helping it do to the young, malleable, innocent programmers of today and tomorrow what was done to me and my peers.

Because two decades later, my scars still ache.

Unfortunately, based on my own experience, I must admit that he is right on the mark. Writing software in a corporate environment is often painful: artificial deadlines, artificial constraints, artificial priorities, artificial pressure. But it has gotten a lot worse in the past decade, at least in the company I work for.

Why it’s worse today than Raymond remembers

In today’s cost-cutting business environment, there is even more to it than what Raymond identified from an experience that is now over two decades old. These days, the tools are not just broken for political reasons; more and more, they are broken for cost reasons.

My team and I work at several locations, in the US and in Europe. Three of these locations host our servers. At these three locations, in the past month alone, we had no fewer than six electrical outages (!), five of which were unplanned, and no location was immune. You’d think that having backup power would be really basic. But in one case, the backup generator itself was so old and so broken that it prevented power from being restored when the grid came back up! We also lost networking or primary infrastructure tools (e.g. the bug-tracking system or e-mail) on at least five occasions. These are not accessories; they are critical tools that directly impact our work. If electricity, network, or mail is down, there isn’t much you can do to develop software.

Meanwhile, the release schedule keeps tightening: for us, it’s now two releases a year. The team shrinks despite the increasing amount of work: we lost key contributors and we don’t even try to replace them. Servers break down because they are too old: my most recent machine was declared obsolete in 2004, and it’s a leftover from another team. Travel is restricted and inconvenient: I once had to wait 12 hours in New York on my way back from California, which practically doubled the length of an already exhausting trip; the corporate travel booking system chose that flight to save $28, which was not very smart when you know how much two meals in New York cost…

And to top it off, there are all the small daily frustrations. Coffee machines are not repaired, or worse yet are removed, with explanations that a colleague called “soviet-style communication”, like “this coffee pot is not being used” (tell that to the guy who’s been filling it three to five times a day). Work PCs are replaced with the lowest-end model and are already obsolete when you get them: employees are given what customers wouldn’t buy anymore. Signs posted in the restrooms ask us to be “green” and limit water usage, but the nearby toilets have been leaking for months, not to mention that they are clogged too. Raises are harder and harder to get, and bonuses are recomputed with new formulas that make them smaller each year…

This is why I recently wrote to some of my colleagues that “developing software for this company is like being asked to win a marathon while wearing ski boots and carrying three people”… which my colleagues apparently did not think was even strong enough, since the first response was “… on one leg”.

Incompetent bosses were replaced by powerless bosses

Raymond talks about incompetent bosses, and this is still the picture painted by Dilbert. I was lucky enough not to have many incompetent bosses. Sure, they all made a mistake here or there, but who doesn’t. On the other hand, my bosses appeared less and less empowered, and more and more trapped in a system that dictates what they can and cannot do.

For example, “standard” applications and processes have been made mandatory in the name of cost savings. It has become harder and harder to avoid being punished for maintaining local applications that do what you need, something now called “shadow IT”. And too bad if the standard, centralized applications lack the capacity or features, if they don’t scale, if they are hosted on servers that are too small, if there is practically no redundancy. In the name of cost savings, you accept that there will be several outages per month.

Again, what is really frightening these days is that you can talk to your manager about that problem, and he will talk to his manager, and so on, yet nothing ever changes. You never get feedback like “we heard you, we will make this and that change”. Instead, what you get is a top-down, self-congratulatory message explaining that our IT is now so good we could practically run another company with it! In short, whatever you ask your boss, chances are he can’t do a thing about it, and chances are you won’t get it.


How can it be bad to reduce costs? Running a company is, after all, all about competing: competing for customers, competing for the lowest costs, competing for the highest revenues, competing for employees. So cutting costs seems like a good way to gain a competitive advantage.

But the key asset of a company is its employees. Everything else is really just support, tools to do the job. If you consider employees an asset, you ultimately win, because employees work together to deliver great products. If you start considering them a liability, a cost center, something to eliminate rather than something to optimize, you might get some short-term gains out of it. But I believe that in the long run, you can only lose. The only reason this strategy is so popular today is that most CxOs benefit more from boosting short-term profits than they would from building a sustainable business model. It is more profitable to “cash out” on the accumulated assets of an existing big company than to solidify those assets for the future.

This is why what I just described is, right now, too often the norm in the good old corporations, but not in companies still driven by their original founders. The founders of a company tend to take the same pride in their creation as Raymond takes in his software. It is not a matter of scale: even large companies like Google still get it. Some corporations used to get it, until their founders were replaced. But I think we have enough evidence by now to know that companies can deliver plenty of good products and shareholder value while treating their employees really well.

Open-source vs. Corporate?

There is still one point where I differ slightly from Raymond’s view. I am not sure there is anything inevitable about corporations crushing software developers. There is enough difference between one corporation and the next, enough difference even over the lifetime of a single corporation, to make me believe that how well you treat employees has little to do with how you distribute software.

On the contrary, I think that corporations that nurture their employees can provide the best possible environment for developing software, including FOSS. They can provide money, resources, the financial safety that makes it easier to concentrate on the work, and a sense of purpose or direction (as Mark Shuttleworth is trying to do with Canonical and Ubuntu). Linus Torvalds, the icon of free software developers, has worked for various corporations. Foundations are an alternative. In all cases, you can only work on free software if you have enough money to buy a computer.

Where Raymond is right is that open-source software gives developers a whole lot more freedom and control over what happens to the software. All the work I put into a product once called “TS/5470 – ECUTEST”, a world-class real-time measurement and control software, was lost when Agilent (HP at the time) decided to shut it down. Nowadays, you can barely find it on Google. It’s too bad; it was really useful. Before it was even released, it found bugs in every single piece of car electronics we tested with it, including production units that had been running car engines for years. Even today, I think there is still nothing like it. But as far as I know, it’s lost.

Where FOSS falls short

Still, FOSS is not the ultimate solution. It is generally very good at replicating infrastructure and commodity software, where cost becomes marginal. It is not so good at innovating. I can’t think of any FOSS innovation similar in scope and impact to the iPhone, Google or Mosaic (which was a proprietary program, even if the source was available).

Even Linux, the poster child of FOSS, a very innovative platform these days, started as a copycat of a proprietary product (Unix), and only started making real progress with the help of corporations. And as I wrote several years ago, the OS itself is a commodity:

The OS itself will probably fade into the background where it belongs. You don’t care much about the OS of a Palm Pilot or a network appliance or an ATM, and you shouldn’t. The OS would probably have disappeared from the public consciousness five years ago, were it not for Microsoft’s insistence on making it its main source of profit.

This is exactly where Linux stands today: it is most successful in applications where you don’t see it (e.g. cell phones or appliances).

What I’d like to see happen is genuine open-source innovation. But I’m afraid this cannot happen, because real innovation requires a lot of money, and corporations remain the best way to fund such innovation, generally in the hope of making even more money in return.

We all need to eat

This is personal experience too. In the past year, I have been contacted by three companies to develop open-source software. But it was always to work on their agenda. None of them offered to work on my personal open-source project. And that’s the real problem. If, as Raymond points out, the pride you take in the children of your mind matters that much (and it clearly matters to me), do I really win by leaving a product I invented for one I did not, even if it’s open-source?

Eric Raymond may have a second income that allows him to do what he wants. I personally don’t have that luxury. I would like nothing better than to work full time on XL, creating the most advanced programming language there is, but this is not going to happen, unless someone, maybe me, suddenly realizes how much money could be made out of it and decides to fund a company or to add it to an existing company’s projects.

My point is that, in the end, corporations fund innovation based on their own objectives. And in the end, we all need to eat, we all need someone to pay us. It’s not that different in the open-source world, except maybe for a few lucky stars that are about as representative of the open-source community as Bill Gates is representative of the corporate programmer.

Hyper-V: Linux or not Linux?

A recent column by Mitchell Ashley argues that Microsoft’s upcoming Hyper-V virtualization platform (formerly known as Viridian) “leaves Linux out in the cold”, because it only supports SuSE Linux and not bigger contenders like RedHat and Ubuntu.

I believe that Mitchell Ashley misses two important points in his analysis:

  • The US market, where RedHat and Ubuntu dominate, is not the market where Microsoft has the most trouble with Windows. In Europe and Asia, Windows’ dominance is not as clear. SuSE is a key player in Europe, particularly in Germany, and these are also the places where governments threaten to standardize on “open platforms”. So instead of focusing on markets where it has little to gain, Microsoft may be going after markets where Windows is threatened.
  • The relative cost of software has gone up tremendously and now makes up the majority of the purchase cost of any IT infrastructure. Long gone are the days when IBM simply gave away the software when you purchased its hardware. So Microsoft may be playing catch-up, but as long as they can offer deals you can’t refuse on Microsoft Windows licenses (e.g. making it much cheaper to run 4 Windows VMs under Hyper-V than under VMware), they have the means to tie rocks to the other guys’ ankles…

On a different topic, one of the comments suggests that SuSE only runs thanks to a binary-only kernel module. If that is indeed the case, it would be interesting. While binary kernel modules have been used for specific proprietary hardware such as 3D graphics cards, I don’t think one has ever been needed for the kernel itself before.

If it’s some kind of paravirtualization or acceleration, as I suspect (another comment about someone running other kernels tends to confirm that), then it’s a bit different. But if you need a proprietary binary simply to run Linux, I believe this will cause some backlash from the Free Software community.

Microsoft Surface

A friend pointed me to Microsoft Surface. Apart from the annoying fact that the site does not work with Apple’s Safari (though it does with Firefox), I find it pretty interesting. Surface is, in short, a new breed of user interface using the same kind of multitouch screen you find on the Apple iPhone. (Update: It turns out this is not a touchscreen.)

This is not the first time I have seen something like this: HP also had something similar in the works. There is even a blog dedicated to these topics. But Microsoft Surface is interesting for two reasons. The first is that it’s the first time I see something that might actually be a real user interface and not a mock-up or an impressive hardware demo. The second is… secret.

Anyway, if we one day get to the point where we have multitouch capabilities in foldable screens…

Bill Gates 1989 predictions

In 1989, Bill Gates gave a talk at the University of Waterloo. An audio recording recently became available.

Money money money

The talk is interesting from a historical perspective: you get Bill Gates’ personal take on the history, like “software is where we belong”, because once it works, it keeps working. The reasoning really goes: keep the easy money (software, which once written is pretty much like printing money) to ourselves, and leave the complicated businesses, like making the chips or the computers themselves, to other companies that Microsoft will merely “influence”…

… and Bill Gates invented the Internet

If you listen to him, Microsoft really designed what is now known as the “IBM PC”, and IBM’s contribution to the whole thing is barely mentioned. In this talk, he also says “I decided to reserve the top 384K, put video memory here, I/O there, and so on”… I wonder if this is really true; it might very well be, since he definitely had the technical ability to do it. What I wonder is whether IBM would just have taken Bill Gates’ blueprints as is, or whether this was really a dialogue. It’s hard to tell, but there are things in this talk that can be discounted based on what we know today, like “this MS/DOS software we had invented”.

32-bit will last us more than 10 years

Looking towards the future into what is now our past, Bill Gates makes a few questionable predictions, like “a 32-bit address space is going to last us more than 10 years”. In reality, the first 64-bit CPUs from MIPS appeared in 1991, less than 2 years later. But then, today, in 2007, we are only starting to hit the 4GB limit on personal computers, and the transition to 64-bit is well underway. This time, it was prepared ahead of time.
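As a quick back-of-the-envelope sketch (my arithmetic, not anything from the talk), here is why a flat 32-bit address space tops out at 4GB, and how much headroom 64 bits buys:

```python
# With n address bits, a flat address space can name at most 2**n
# distinct bytes. That is the hard ceiling Gates was betting on.

def addressable_bytes(bits: int) -> int:
    """Maximum bytes addressable with a flat n-bit address space."""
    return 2 ** bits

GIB = 1024 ** 3  # gibibyte
EIB = 1024 ** 6  # exbibyte

print(addressable_bytes(32) // GIB)  # 4  -> the famous 4GB limit
print(addressable_bytes(64) // EIB)  # 16 -> 16 exbibytes of headroom
```

In other words, 32 bits cap a process (or a machine, without tricks like PAE) at 4GB, while 64 bits push the theoretical ceiling so far out that the limit stops being the address space and becomes the hardware you can afford.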

Small teams are good

There are some remarkable insights as well. For example, at one point, Gates explains that the individual pieces of software are not valuable to Microsoft, because in a few years, “who cares”, there will be better software. So he sees the key asset of Microsoft as the ability to keep small teams focused and efficient, so that they remain nimble and ahead of the competition. That is very smart. I would bet that practically nobody understood it so clearly in 1989, and even today, big companies (including, ironically, Microsoft itself) often forget this key characteristic of our industry.

To use an example close to home, the core functionality of HP Integrity VM was developed by only three people, myself included, over the span of three years. Better yet, it was quite a pleasant experience, and each of us commented that this was the best team he had ever worked in. As a matter of fact, all the successful projects I participated in started as “small things”, something a single person can do by himself. At some point later in the project, the thing grows big enough that you can parallelize the work, and then it makes sense to build a larger team (and boy, did we ask, when that happened!). But even then, it works much better if the larger team is made of “loosely connected subteams”. I see this as a key strength of open-source software: individual contributors are not forced onto a schedule that does not match their own needs; they jump on the bandwagon of whatever release train suits them best. In my opinion, this makes a world of difference.