Archive for the ‘HP’ Category

Goodbye HP, Hello Taodyne

After almost 16 years working for HP, nearly half of them spent on HPVM, I have decided to do something else. Please welcome my new little company, Taodyne, into the fierce world of software development startups. Taodyne is supported and funded by Incubateur PACA-Est.

For me, this is a pretty major change. So it was not an easy decision to make. Leaving behind the HPVM team, in particular, was tough. Still, the decision was the right one for a couple of reasons.

  • First, I had to face the fact that HP is not trying too hard to retain its employees. To be honest, in my case, they actually offered something at the last minute. But it was way too late. The decision had already been made more than a year earlier. There was no going back, as the creation of the new company commits other people whom I could not simply leave behind.
  • However, the primary reason for leaving HP was my desire to invent new stuff, something that I felt was less and less possible at HP. Don't get me wrong, it's still possible to "invent" at HP, but you have to do it on demand, it has to fit the corporate agenda and belong to the corporate portfolio. Nobody could invent a totally new business like the inkjet printer or the calculator at HP nowadays. Well, at least I could not (and I tried).

The tipping point for me was probably when one of my past managers told me that I should feel content with my contributions to HPVM, that I "had been lucky" to have this as the "high point" of my career. That manager probably meant his remark as a compliment. But I personally think sweat had more to do with the success of HPVM than luck, and I'm not sure how to take the "high point" part ("from there, you can only go down"?).

Ultimately, the choice was relatively obvious, if hard to make. Simply put, I didn't have a dream job at HP anymore, nor any hope that things would improve. I was working long hours (meetings from 5PM to 1AM five times a week, anyone?), more or less officially holding two jobs for a single paycheck, only to be told year after year that the only way forward was down… I owed it to my family to look for a way out.

So really, for me, HPVM was not the high point of a career, just one step along the way. Taodyne is the next one. Let's hope I still have something to offer. When we deliver our first product, I hope that you will agree that I made the right decision. In the meantime, please check out our progress on the XLR web site.

Update: This entry was shortened compared to the original to remove useless material.

HP Pay Cuts – an unfair act of economic opportunism and greed.

Damian Saunders writes:

HP’s CEO, Mark Hurd announced, on the 20th February that he would be implementing a company wide cut in pay for all employees. Starting with a reduction in his own salary by 20%, followed by senior executives who would take a drop between ten and fifteen percent, regular employees 5 percent and exempt employees 2.5 percent. All this in reaction to a 13.5 percent fall in the company’s first quarter profit.

[...] First we need to put Mark Hurd’s 20% salary cut into perspective, remember he is only taking a cut to his base salary ($1,450,000) which amounts to a $290,000 drop. Seems quite reasonable until you examine the following, publicly available, information. Mark Hurd’s total compensation in 2008 was $42,514,524 [...] a 68% increase in the total package from 2007 to 2008. [...] So, the question is; what’s the significance of his stated 20% cut in base salary? I would suggest next to nothing. [...] Put bluntly, 6 people at the top of the HP pyramid accounted for $142,774,325 in compensation in 2008 alone. That is an obscene amount of money.

[...] Let’s look at the plight of the HP employee. The first thing we have to consider is that, unlike Mark Hurd, a 5% cut in salary is in fact a 5% cut in total compensation. Someone on a salary of $65,000 would be losing $3250 per year before tax, or $270.00 per month. Some would say this is a small price to pay for keeping your job but I think holding that gun to an employee’s head is outright exploitation and can not be condoned, especially when they have already been exploited enough for the sake of high profit margins and Mark Hurd’s stellar career performance. Ask a majority of HP employees about their current remuneration and you will be lifting a rock that you don’t want to look under.

I recommend you read the rest of the post, and spend some time browsing through the many comments. You will get a pretty good idea of the morale of HP employees these days.

Categories: HP

Why writing proprietary software is not fun

Eric Raymond explained why he hates proprietary software. And the reason is not the software itself, but how it is written:

In that world, the working programmer’s normal experience includes being forced to use broken tools for political reasons, insane specifications, and impossible deadlines. It means living in Dilbert-land, only without the irony. It means sweating blood through the forehead to do sound work only to have it trashed, mangled, or buried by people who couldn’t write a line of code to save their lives.

If you love programming, trying to do work you can be proud of in this situation is heartbreaking. You know you could do better work if they’d just goddamn give you room to breathe. But there’s never time to do it right, and always another brain-dead idea for a feature nobody will ever actually use coming at you from a marketing department who thinks it will look good on the checklist. Long days, long nights and at the end of it all some guy in a suit owns all that work, owns the children of your mind, owns a piece of you.

Raymond concludes with:

I will have no part of helping it do to the young, malleable, innocent programmers of today and tomorrow what was done to me and my peers.

Because two decades later, my scars still ache.

Unfortunately, based on my own experience, I must admit that he is right on the mark. Writing software in a corporate environment is often painful: artificial deadlines, artificial constraints, artificial priorities, artificial pressure. But it has gotten a lot worse in the past decade, at least in the company I work for.

Why it’s worse today than Raymond remembers

In today's cost-cutting business environment, there is even more to it than what Raymond identified from experience that is now over two decades old. These days, the tools are not just broken for political reasons; more and more, they are broken for cost reasons.

My team and I work at several locations, in the US and in Europe. Three of these locations host our servers. At these three locations, in the past month alone, we had no fewer than six electrical outages (!), five of which were unplanned, and no location was immune. You'd think that having backup power is really basic. But in one case, the backup generator itself was so old and so broken that it prevented power from being restored when the grid came back up! We also lost networking or primary infrastructure tools (e.g. the bug tracking system or e-mail) on at least five occasions. These are not accessories; they are all critical tools that directly impact our work. If electricity or network or mail is down, there isn't much you can do to develop software.

Meanwhile, the release schedule itself tightens: for us, it's now two releases a year. The team shrinks despite the increasing amount of work: we lost key contributors and we don't even try to replace them. Servers break down because they are too old: my most recent machine was obsoleted in 2004, and it's a leftover from some other team. Travel is restricted and inconvenient: I once had to wait 12 hours in New York on my way back from California, which practically doubled the length of an already exhausting trip; the corporate travel booking system chose that flight to save $28, which was not very smart when you know how much two meals in New York cost…

And to top it off, we have all the small daily frustrations. Coffee machines are not repaired, or worse yet are removed, with explanations that a colleague called "soviet-style communication", like "this coffee pot is not being used" (tell that to the guy who's been filling it three to five times a day). Work PCs are replaced with the lowest-end model, already obsolete when you get them: you give employees what customers wouldn't buy anymore. Signs are posted in the restrooms asking us to be "green" and limit water usage, but the nearby toilets have been leaking for months, not to mention that they are clogged too. Raises are harder and harder to get, and bonuses are recomputed with new formulas that make them smaller each year…

This is why I wrote recently to some of my colleagues that "developing software for this company is like being asked to win a marathon while wearing ski boots and carrying three people"… which my colleagues apparently did not think was even strong enough, since the first response was "… on one leg".

Incompetent bosses were replaced by powerless bosses

Raymond talks about incompetent bosses, and this is still the picture given by Dilbert. I was lucky enough that I did not have that many incompetent bosses. Sure, they all made a mistake here or there, but who doesn't? On the other hand, my bosses appeared to be less and less empowered, and more and more trapped in a system that dictates what they can and cannot do.

For example, "standard" applications and processes have been made mandatory in the name of cost savings. It has become harder and harder to maintain, without being punished for it, the local applications that do what you need, something now called "shadow IT". And too bad if the standard, centralized applications lack the capacity or features, if they don't scale, if they are hosted on servers that are too small, if there is practically no redundancy. In the name of cost savings, you accept that there will be several outages per month.

Again, what is really frightening these days is that you can talk to your manager about such a problem, and he will talk to his manager, and so on, yet nothing will ever change. You never get any feedback like "we heard you, we will make this and that change". Instead, what you get is a top-down, self-congratulatory message explaining that our IT is now so good we could practically run another company with it! In short, whatever you ask your boss, chances are he can't do a thing about it, and chances are you won't get it.

Competing

How can it be bad to reduce costs? Running a company is, after all, all about competing: competing for customers, competing on costs, competing for revenue, competing for employees. So cutting costs seems like a good way to gain a competitive advantage.

But the key asset of a company is its employees. Everything else is really just support, tools to do the job. If you consider employees as an asset, you ultimately win, because employees work together to deliver great products. If you start considering them as a liability, as a cost center, as something that you need to eliminate rather than as something you want to optimize, you might get some short-term gains out of it. But I believe that in the long run, you can only lose. The only reason this strategy is so popular today is because most CxOs get more benefit from boosting short-term profits than they could from building a sustainable business model. It is more profitable to “cash out” on the accumulated assets of an existing big company than to solidify these assets for the future.

This is why, right now, what I just described is too often the norm in good old corporations, but not in companies still driven by their original founders. The founders of a company tend to have the same pride in their creation as Raymond has in his software. It is not a matter of scale: even large companies like Google still get it. Some corporations used to get it, until their founders were replaced. But I think that we have enough evidence to know that companies can deliver a lot of good products and shareholder value while treating their employees really well.

Open-source vs. Corporate?

There is still one point where I differ slightly from Raymond's point of view. I am not sure that there is anything inevitable about corporations crushing software developers. There is enough difference between one corporation and the next, enough difference even over the lifetime of a single corporation, to believe that treating employees well has little to do with how you distribute software.

On the contrary, I think that corporations that nurture their employees can provide the best possible environment to develop software, including FOSS. They can provide money, resources, the financial safety that makes it easier to concentrate on the work, and a sense of purpose or direction (as Mark Shuttleworth is trying to do with Canonical and Ubuntu). Linus Torvalds, the icon of free software developers, has worked for various corporations. An alternative is foundations. In all cases, you can only work on free software if you have enough money to buy a computer.

Where Raymond is right is that open-source software gives developers a whole lot more freedom and control over what happens to the software. All the work that I put into a product once called "TS/5470 – ECUTEST", a world-class real-time measurement and control software, was lost when Agilent (HP at the time) decided to shut it down. Nowadays, you can barely find it on Google. It's too bad, because it was really useful. Before it was even released, it found bugs in every single piece of car electronics we tested with it, including production units that had been running car engines for years. Even today, I think there is still nothing like it. But as far as I know, it's lost.

Where FOSS falls short

Still, FOSS is not the ultimate solution. It is generally very good at replicating infrastructure and commodity software, where cost becomes marginal. It is not so good at innovating. I can’t think of any FOSS innovation similar in scope and impact to the iPhone, Google or Mosaic (which was a proprietary program, even if the source was available).

Even Linux, the poster child of FOSS, a very innovative platform these days, started as a clone of a proprietary product (Unix), and started making real progress only with the help of corporations. And as I wrote several years ago, the OS itself is a commodity:

The OS itself will probably fade into the background where it belongs. You don't care much about the OS of a Palm Pilot or a network appliance or an ATM, and you shouldn't. The OS would probably have disappeared from the public consciousness five years ago, were it not for Microsoft's insistence on making it its main source of profit.

This is exactly where Linux stands today: it is most successful in applications where you don’t see it (e.g. cell phones or appliances.)

What I’d like to see happen is genuine open-source innovation. But I’m afraid this cannot happen, because real innovation requires a lot of money, and corporations remain the best way to fund such innovation, in general with high hopes to make even more money in return.

We all need to eat

This is personal experience too. In the past year, I have been contacted by three companies to develop open-source software. But it was always to work on their own agenda. None of them offered to work on my personal open-source project. And that's the real problem. If, as Raymond points out, the pride you may have in the children of your mind matters that much (and clearly, it does matter to me), do I really win by leaving a product I invented for one I did not invent, even if it's an open-source one?

Eric Raymond may have a second income that allows him to do what he wants. I personally don't have this luxury. I would like nothing better than to work full time on XL, on creating the most advanced programming language there is, but this is not going to happen. Unless someone, maybe me, suddenly realizes how much money they could make out of it, and decides to fund a company or add this to an existing company's projects.

My point is that, in the end, corporations fund innovation based on their own objectives. And in the end, we all need to eat, we all need someone to pay us. It’s not that different in the open-source world, except maybe for a few lucky stars that are about as representative of the open-source community as Bill Gates is representative of the corporate programmer.

Virtual machines and scalability

I already pointed out many problems regarding the comparison of virtual machines recently posted by IBM. But there is one topic which I thought required a separate post, namely scalability.

What is scalability?

Simply put, scalability is the ability to take advantage of having more CPUs, more memory, more disk, more bandwidth. If I put two CPUs on a task, do I get twice the power? Not in general. As Fred Brooks said, no matter how many women you put on the task, it still takes nine months to make a baby. On the other hand, with nine women, you can make nine babies in nine months. In computer terminology, we would say that making babies is a task where throughput (the total amount of work per time unit) scales well with the number of baby-carrying women, whereas latency (the time it takes to complete a request) does not.



Computer scalability is very similar. Different applications care about throughput or latency in different ways. For example, if you connect to Google Maps, latency is the time it takes for Google to show the map, but in that case (unlike for pregnant women), it is presumably improved because Google Maps sends many small chunks of the map in parallel.

I have already written in an earlier post why I believe HP has a good track record with respect to partitioning and scalability.

Scalability of virtual machines

However, IBM has very harsh words against HP Integrity Virtual Machines (aka HPVM), and describes HPVM scalability as a “downside” of the product:

The downside here is scalability. With HP’s virtual machines, there is a 4 CPU limitation and RAM limitation of 64GB. Reboots are also required to add processors or memory. There is no support for features such as uncapped partitions or shared processor pools. Finally, it’s important to note that HP PA RISC servers are not supported; only Integrity servers are supported. Virtual storage adapters also cannot be moved, unless the virtual machines are shut down. You also cannot dedicate processing resources to single partitions.

I already pointed out factual errors in every sentence of this paragraph. But scalability is a more subtle problem, and it takes more explaining just to describe what the problems are, not to mention possible solutions… What matters is not just the performance of a single virtual machine when nothing else is running on the system. You also care about performance under competition, about fairness and balance between workloads, about response time to changes in demand.

The problem is that these are all contradictory goals. You cannot increase the performance of one virtual machine without taking something away from the others. Obviously, the CPU time that you give to one VM cannot be given to another one at the same time. Similarly, increasing the reactivity to fast-changing workloads also increases the risk of instability, as for any feedback loop. Finally, in a server, there is generally no privileged workload, which makes the selection of the “correct” answers harder to make than for workstation virtualization products.

Checkmark features vs. usefulness

Delivering good VM performance is a complex problem. It is not just a matter of lining up virtual CPUs. HPVM implements various safeguards to help ensure that a VM configuration will not just run, but run well. I don’t have as much experience with IBM micro-partitions, but it seems much easier to create configurations that are inefficient by construction. What IBM calls a “downside” of HPVM is, I believe, a feature.

Here is a very simple example. On a system with, say, 4 physical CPUs, HPVM will warn you if you try to configure more than 4 virtual CPUs:

bash-2.05b# hpvmmodify -P vm7 -c 8
HPVM guest vm7 configuration problems:
    Warning 1: The number of guest VCPUS exceeds server's physical cpus.
    Warning 2: Insufficient cpu resource for guest.
These problems may prevent HPVM guest vm7 from starting.
hpvmmodify: The modification process is continuing.


It seems like a sensible thing to do. After all, if you only have 4 physical CPUs, you will not get more power by adding more virtual CPUs. There is, however, a good chance that you will get less performance, in any case where one virtual CPU waits on another. Why? Because you increased the chances that the virtual CPU you are waiting on is not actually running at the time you request its help, regardless of the synchronization mechanism you are using. So instead of getting a response in a matter of microseconds (the typical wait time for, say, spinlocks), you will get it in a matter of milliseconds (the typical time slice on most modern systems).

Now, the virtual machine monitor might be able to do something smart about some of the synchronization mechanisms (notably kernel-level ones). But there are just too many ways to synchronize threads in user space. In other words, by configuring more virtual CPUs than physical CPUs, you simply increase the chances of performing sub-optimally. How is that a good idea?
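If you want to see the effect for yourself, here is a minimal sketch (plain C11 and pthreads, nothing HPVM-specific, and not taken from any product; the file name and constants are mine) that measures how quickly two spinning threads can hand a token back and forth. On a guest whose virtual CPUs are all backed by physical CPUs, a handoff costs on the order of microseconds; run it in a guest with more busy virtual CPUs than the host has physical ones, so the partner is frequently descheduled, and the cost jumps toward the millisecond time slice described above.

/* pingpong.c: measure the cost of handing a token between two spinning threads.
 * Illustrative sketch only. Build with: cc -O2 -pthread pingpong.c -o pingpong */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ROUNDS 1000000L

static _Atomic long turn = 0;               /* whose turn it is: 0 or 1 */

static void *player(void *arg)
{
    long me = (long) arg;
    for (long i = 0; i < ROUNDS; i++) {
        while (atomic_load(&turn) != me)    /* spin, as a spinlock would */
            ;
        atomic_store(&turn, 1 - me);        /* hand the token to the other thread */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    struct timespec a, b;

    clock_gettime(CLOCK_MONOTONIC, &a);
    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, player, (void *) i);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &b);

    double secs = (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    printf("%ld round trips in %.2f s: %.2f us per handoff\n",
           ROUNDS, secs, secs * 1e6 / (2.0 * ROUNDS));
    return 0;
}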

IBM doesn't seem to agree with me. First, in their article, they complain about HP vPars implementing a similar restriction: "The scalability is also restricted to the nPartition that the vPar is created on." Also, the IBM user interface lets you create micro-partitions that have too many virtual CPUs with nary a peep. You can create a micro-partition with 16 virtual CPUs on a 2-way host, as illustrated below. Actually, 16 virtual CPUs is almost the maximum on a 2-way host for another reason: there is a minimum of 0.1 physical CPU per virtual CPU in the IBM technology, and 16 * 0.1 is 1.6, which only leaves a meager 0.4 CPU for the virtual I/O server.

(Screenshots: the IBM host configuration, and a micro-partition configured with too many virtual CPUs.)

The problem is that no matter how I look at it, I can't imagine how it would be a good thing to run 16 virtual CPUs on a 2-CPU system. To me, this approach sounds a lot like the Fantasia school of scalability. If you remember, in that movie, Mickey Mouse plays a sorcerer's apprentice who casts a spell so that his broom will do his chores in his stead. But things rapidly get wildly out of control. When Mickey tries to ax the brooms to stop the whole thing, each fragment rapidly grows back into a full-grown broom, and things go from bad to worse. CPUs, unfortunately, are not magic brooms: cutting a CPU in half will not magically make two full-size CPUs.

Performing well in degraded configurations

Now, I don’t want you to believe that I went all defensive because IBM found a clever way to do something that HPVM can’t do. Actually, even if HPVM warns you by default, you can still force it to start a guest in such a “stupid” configuration, using the -F switch of hpvmstart. And it’s not like HPVM systematically performs really badly in this case either.

For example, below are the build times for a Linux 2.6.26.5 kernel in a variety of configurations.

4-way guest running on a 4-way host, 5 jobs

[linux-2.6.26.5]# time gmake -j5
real    5m25.544s
user    18m46.979s
sys     1m41.009s

8-way guest running on a 4-way host, 9 jobs

[linux-2.6.26.5]# time gmake -j9
real    5m38.680s
user    36m23.662s
sys     3m52.764s

8-way guest running on a 4-way host, 5 jobs

[linux-2.6.26.5]# time gmake -j5
real    5m35.500s
user    22m25.501s
sys     2m6.003s

As you can verify, the build time is almost exactly the same whether the guest has 4 or 8 virtual CPUs. As expected, the additional virtual CPUs do not bring any benefit. In this case, the degradation exists, but it is minor. It is, however, relatively easy to build cases where the degradation would be much larger. Another observation is that running only enough jobs to keep 4 virtual CPUs busy actually improves performance: less time is spent by virtual CPUs waiting on one another.

So, why do we even test such configurations or allow them to run, then? Well, there is always the possibility that a CPU goes bad, in which case the host operating system is most likely to disable it. When that happens, we may end up running with an insufficient number of CPUs. Even so, this is no reason to kill the guest. We still want to perform as well as we can, until the failed CPU is replaced with a good one.

In short, I think that HPVM is doing the right thing by telling you if you are about to do something that will not be efficient. However, in case you found yourself in that situation due to some unplanned event, such as a hardware failure, it still does the hard work to keep you up and running with the best possible performance.

Remaining balanced and fair

There is another important point to consider regarding the performance of virtual machines. You don't just want virtual machines to perform well; you also care a lot about maintaining balance between the various workloads, both inside the virtual machine itself and between virtual machines. This is actually very relevant to scalability, because multi-threaded or multi-processor software often scales worse when some CPUs run markedly slower than others.

Consider for example that you have 4 CPUs, and divide a task into four approximately equal-sized chunks. The task will only complete when all 4 sub-tasks are done. If one CPU is significantly slower, all other CPUs will have to wait for it. In some cases, such as ray-tracing, it may be easy enough for another CPU to pick up some of the extra work. For other more complicated algorithms, however, the cost of partitioning may be significant, and it may not pay off to re-partition the task in flight. And even when re-partitioning on the fly is possible, software is unlikely to have implemented it if it did not bring any benefit on non-virtual hardware.
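To make that reasoning concrete, here is a small fork-join sketch (ordinary pthreads code, added here purely for illustration and not part of any product; the file name and work size are mine). It splits a fixed amount of busy work into four equal chunks and reports both the per-chunk times and the wall-clock time. On bare metal the chunks finish together; inside a guest where one virtual CPU gets noticeably less physical time, the wall-clock time tracks the slowest chunk, not the average.

/* forkjoin.c: the wall time of a fork-join task is set by its slowest chunk.
 * Illustrative sketch only. Build with: cc -O2 -pthread forkjoin.c -o forkjoin */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NCHUNKS 4
#define WORK    (500 * 1000 * 1000L)         /* iterations per chunk */

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

static void *chunk(void *arg)
{
    double *elapsed = arg;
    double t0 = now();
    volatile long sum = 0;
    for (long i = 0; i < WORK; i++)          /* one quarter of the total work */
        sum += i;
    *elapsed = now() - t0;
    return NULL;
}

int main(void)
{
    pthread_t t[NCHUNKS];
    double per_chunk[NCHUNKS];
    double t0 = now();

    for (int i = 0; i < NCHUNKS; i++)
        pthread_create(&t[i], NULL, chunk, &per_chunk[i]);
    for (int i = 0; i < NCHUNKS; i++)        /* the join: wait for every chunk */
        pthread_join(t[i], NULL);

    for (int i = 0; i < NCHUNKS; i++)
        printf("chunk %d: %.2f s\n", i, per_chunk[i]);
    printf("wall time: %.2f s (tracks the slowest chunk)\n", now() - t0);
    return 0;
}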

Loading virtual machines little by little…

In order to get a better feeling for all these points, readers are invited to do the following very simple experiment with their favorite virtual machine technology. To maximize chances that you can run the experiment yourself, I will not base my experiment on some 128-way machine with 1TB of memory running 200 16-way virtual machines or anything über-expensive like that. Instead, I will consider the simplest of all cases involving SMP guests: two virtual machines VM1 and VM2, each with two virtual CPUs, running concurrently on a 2-CPU machine. What could possibly go wrong with that? Nowadays, this is something you can try on most laptops…

The experiment itself is equally simple: incrementally load the virtual machines with more and more jobs, and see what happens. When I ran the experiment, I used a simple CPU spinner program written in C that counts how many loops per second it can perform. The baseline, which I will refer to as "100%", is the number of iterations that the program makes on a virtual machine while the other virtual machine sits idle. This is illustrated below, with the process Process 1 running in VM1, colored in orange.

CPU 1     | CPU 2
Process 1 | Idle
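The spinner itself is not reproduced in the original post. For readers who want to try the experiment, here is a minimal sketch of such a spinner, written in C like the one described above; it is an illustrative reconstruction, not the actual program, and the file name is mine. Each iteration of the busy loop includes a clock read, so the absolute numbers are not meaningful; only the ratio to the idle-competitor baseline matters, which is all the experiment needs.

/* spinner.c: print how many iterations of a busy loop complete per second.
 * Minimal reconstruction of the CPU spinner described above, for illustration
 * only. Build with: cc -O2 spinner.c -o spinner   (add -lrt on older systems) */
#include <stdio.h>
#include <time.h>

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    for (;;) {
        double start = now();
        volatile unsigned long count = 0;   /* volatile keeps -O2 from removing the loop */
        while (now() - start < 1.0)         /* spin for about one second */
            count++;
        printf("%lu loops/s\n", (unsigned long) count);
        fflush(stdout);                     /* useful when output is redirected */
    }
}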

Now, let’s say that you start another identical process in VM2. The ideal situation is that one virtual CPU for each virtual machine gets loaded at 100%, so that each process gets a 100% score. In other words, each physical CPU is dedicated to running a virtual CPU, but the virtual CPUs belong to different virtual machines. The sum of the scores is 200%, which is the maximum you can get on the machine, and the average is 100%. This is both optimal and fair. As far as I can tell, both HPVM and IBM micro-partitions implement this behavior. This is illustrated below, with VM1 in orange and VM2 in green.

CPU 1     | CPU 2
Process 1 | Process 2

However, this behavior is not the only choice. Versions of VMware up to version 3 used a mechanism called co-scheduling, where all virtual CPUs must run together. As the document linked above shows, VMware was boasting about that technique, but the result was that as soon as one virtual CPU was busy, the other physical CPU had to be reserved as well. As a result, in our little experiment, each process would get 50% of its baseline, not 100%. This approach is fair, but hardly optimal, since you waste half of the available CPU power. VMware apparently chose that approach to avoid dealing with the more complicated cases where one virtual CPU would wait for another virtual CPU that was not running at the time.

CPU 1     | CPU 2
Process 1 | Idle
Idle      | Process 2

Now, let's fire up a second process in VM1. This is where things get interesting. In that situation, VM1 has both its virtual CPUs busy, but VM2 has only one virtual CPU busy, the other being idle. There are many choices here. One is to schedule the two CPUs of VM1, then the two CPUs of VM2 (even if one is idle). This method is fair between the two virtual machines, but it reserves a physical CPU for an idle virtual CPU half of the time. As a result, all processes will get a score of 50%. This is fair, but suboptimal, since you get a total score of 150% when you could get 200%.

CPU 1     | CPU 2
Process 1 | Process 3
Process 2 | Idle

In order to optimize things, you have to take advantage of that ‘idle’ spot, but that creates imbalance. For example, you may want to allocate CPU resources as follows:

CPU 1     | CPU 2
Process 1 | Process 2
Process 1 | Process 3

This scenario is optimal, since the total CPU bandwidth available is 200%, but it is not fair: process 1 now gets twice as much CPU bandwidth as processes 2 and 3. In the worst case, the guest operating system may end up being confused by what is going on. So one solution is to balance things out over longer periods of time:

CPU 1     | CPU 2
Process 1 | Process 2
Process 1 | Process 3
Process 3 | Process 2

This solution is again optimal and fair: processes 1, 2 and 3 each get 66% of a CPU, for a total of 200%. But other important performance considerations come into play. One is that we can no longer keep each process on a single CPU. Keeping processes bound to a given CPU improves cache and TLB usage. But in this example, at least one of the processes will have to jump from one CPU to the other, even if the guest operating system thinks that it's bound to a single CPU.
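For what it's worth, here is what "binding" looks like from inside a Linux guest, using the standard sched_setaffinity() call (a generic Linux sketch, not an HPVM interface; the file name is mine). The catch described above still applies: this pins the process to a virtual CPU, and the virtual machine monitor remains free to run that virtual CPU on whichever physical CPU it chooses.

/* pin.c: bind the current process to (virtual) CPU 0, then spin.
 * Illustrative sketch only; Linux-specific API. Inside a guest, this binds
 * to a *virtual* CPU; the monitor may still move that virtual CPU between
 * physical CPUs, so cache/TLB locality is not guaranteed.
 * Build with: cc -O2 pin.c -o pin */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(0, &set);                        /* ask to run only on CPU 0 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return EXIT_FAILURE;
    }

    volatile unsigned long count = 0;        /* same busy loop as the spinner */
    for (;;)
        count++;
}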

Another big downside as far as scalability is concerned relates to inter-process communication. If processes 1 and 3 want to talk to one another in VM1, they can do so without waiting only half of the time, since during the other half, the other CPU is actually running a process that belongs to another virtual machine. A consequence is that the latency of this inter-process communication increases very significantly. As far as I can tell, this particular problem is the primary issue with the scalability of virtual machines. VMware tried to address it with co-scheduling, but we have seen why that is not a perfect solution. Statistically speaking, adding virtual CPUs increases the chances that the CPU you need will not be running, in particular when other virtual machines are competing for CPU time.

Actual scalability vs. simple metrics

This class of problems is the primary reason why HPVM limits the size of virtual machines. It is not that it doesn't work. There are even workloads that scale really well, Linux kernel builds or ray-tracing being good examples. But even workloads that scale well within a single virtual machine will no longer scale as well under competition. Again, virtual machine scalability is nowhere near as simple as "add more virtual CPUs and it works".

This is not just theory. We have tested the theory. Consider the graph below, which shows the results of the same benchmark run in a variety of configurations. The top blue line, which is almost straight, is perfect scalability, which you practically get on native hardware. The red line is HPVM under light competition, i.e. with other virtual machines running but mostly idle. In that case, HPVM scales well up to 16-way. The blue line below is HPVM under heavy competition. If memory serves me right, the purple line is fully-virtualized Xen… under no competition.

In other words, if HPVM supports 8 virtual CPUs today, it is because we believe that we can maintain useful scalability on demanding workloads, even under competition. We know, having tested and measured it, that going beyond 8-way will usually not result in increased performance, only in increased CPU churn.

One picture is worth 2^10 words

As we have shown, making the right decisions for virtual machines is not simple. Interestingly, even the very simple experiment I just described highlights important differences between virtual machine technologies. After launching 10 processes on each guest, here is the performance of the various processes when running under HPVM. In that case, the guest is a Red Hat 4 Linux server competing against an HP-UX partition running the same kind of workload. You can see the Linux scheduler granting time to all processes almost perfectly fairly, although there is some noise. I suspect that this noise is the result of the feedback loop that Linux puts in place to ensure fairness between processes.

By contrast, here is how AIX 6.1 performs when running the same workload. As far as I can tell, IBM implements what looks like a much simpler algorithm, probably without any feedback loop. It's possible that there is an option to enable fair-share scheduling on AIX (I am much less familiar with that system, obviously). The clear benefit is that it is very stable over time. The downside is that it seems noticeably unfair compared to what we see in Linux. Some processes are getting a lot more CPU time than others, and this over an extended period of time (the graph spans about 5 minutes).

The result shown in the graphs is actually a combination of the effect of the operating system and virtual machine schedulers. In the case of IBM servers, I must say that I’m not entirely sure about how the partition and process schedulers interact. I’m not even sure that they are distinct: partitions seem to behave much like processes in AIX. In the case of HPVM, you can see the effect of the host HP-UX scheduler on the total amount allocated to the virtual machine.

Conclusion

Naturally, this is a very simple test, not even a realistic benchmark. It is not intended to demonstrate the superiority of one approach over the other. Instead, it demonstrates that virtual machine scalability and performance is not as simple as counting how many CPUs your virtual machine software can allocate. There are large numbers of complicated trade-offs, and what works for one class of workloads might not work so well for others.

I would not be surprised if someone shows benchmarks where IBM scales better than HPVM. Actually, it should be expected: after all, micro-partitions are almost built into the processor to start with; the operating system has been rewritten to delegate a number of tasks to the firmware; AIX really runs paravirtualized. HPVM, on the other hand, is full virtualization, which implies higher virtualization costs. It doesn’t get any help from Linux or Windows, and only very limited help from HP-UX. So if anything, I expect IBM micro-partitions to scale better than HPVM.

Yet I must say that my experience has not confirmed that expectation. In the few tests I made, differences, if any, were in HPVM’s favor. Therefore, my recommendation is to benchmark your application and see which solution performs best for you. Don’t let others tell you what the result should be, not even me…

IBM’s comparison of virtual machines…

IBM just posted a comparison of virtual machines that I find annoyingly flawed. Fortunately, discussion on OSnews showed that technical people don’t buy this kind of “data”. Still, I thought that there was some value in pointing out some of the problems with IBM’s paper.

nPartitions are tougher than logical partitions

What HP calls 'nPartitions' are electrically isolated partitions. IBM correctly indicates that nPartitions offer true electrical isolation. The key benefit, in IBM's own words, is that nPartitions allow you to service one partition while others are online. You can, for example, extract a cell, replace or upgrade CPUs or memory, and re-insert the cell without downtime.

The problem is when IBM states that this is similar to IBM logical partitioning. In reality, electrically-isolated partitions are tougher. Like on IBM systems, CPUs can be replaced in case of failure. But you can replace entire cells without shutting down the system, contrary to IBM’s claim that systems require a reboot when moving cells from one partition to another.

Finally, IBM remarks that "Another downside is that entry-level servers do not support this technology, only HP9000 and Integrity High End and Midrange servers." Highly redundant, electrically isolated partitions have a cost, so they don't make as much sense on entry-level systems. But it's not a downside of HP that entry-level systems only offer partitioning technologies similar to IBM's; it's a downside of IBM not to have anything similar to nPartitions on their high-end servers.

Integrity servers support more operating systems

IBM writes: "It's important to note that while nPartitions support HP-UX, Windows®, VMS, and Linux, they only do so on the Itanium processor, not on the HP9000 PA Risc architecture." It is true that HP (Microsoft, really) doesn't support Windows on PA-RISC, a long-obsolete architecture, but what is more relevant is that IBM doesn't support Windows on POWER even today. This is the same kind of spin as for electrically-isolated partitions: IBM tries to divert attention from a missing feature in their product line by pointing out that there was a time when HP didn't have the feature either.

IBM writes that "Partition scalability also depends on the operating system running in the nPartition". Again, this is true… on any platform. But the obvious innuendo here is that the scalability of Integrity servers is inconsistent across operating systems. A quick visit to the TPC-H and TPC-C result pages will demonstrate that this is false. HP posts world-class results with HP-UX and Windows, and other Itanium vendors demonstrate the scalability of Linux on Itanium.

By contrast, the Power systems from IBM or Bull all run AIX. And if you want to talk about scalability, IBM as of today has no 30TB TPC-H results, and it often takes several IBM machines to compete with a single HP system.

Regarding the scalability of other operating systems, HP engineers put a lot of effort into Linux scalability early on. I would go as far as saying that 64-way scalability of Linux is probably something that HP engineers tested seriously before anybody else, and they got some recognition for this work.

HP offers both isolation and flexibility

IBM also tries to minimize the flexibility of hard partitions. If it is electrically isolated, it can't be flexible, right? That's exactly what IBM wants you to believe: "They also do not support moving resources to and from other partitions without a reboot."

However, HP has found a clever way to provide both flexibility and electrical isolation. With HP’s Instant Capacity (iCap), you can move “right to use” CPUs around. On such a high-end machine, it is a good idea to have a few spare CPUs, either in case of CPU failure or in case of a sudden peak in demand. HP recognized that, so you can buy machines with CPUs installed, but disabled. You pay less for these CPUs, you do not pay software licenses, and so on.

The neat trick that iCap enables is to transfer a CPU from one nPartition to another without breaking electrical isolation: you simply shut it down in one partition, which gives you the right to use it in another partition without paying anything extra for it. All this can be done without any downtime, and again, it preserves electrical isolation between the partitions.

Shared vs. dedicated

Regarding virtual partitions, IBM is quick to point out the "drawbacks": "What you can't do with vPars is share resources, because there is no virtualized layer in which to manage the interface between the hardware and the operating systems." Of course, you cannot do that with dedicated resources on IBM systems either; that's the point of having a partitioning system with dedicated resources! If you want shared resources, HP offers virtual machines, see below. So when IBM claims that "This is one reason why performance overhead is limited, a feature that HP will market without discussing its clear limitations", the same also holds for dedicated resources on IBM systems.

What is true is that IBM has attempted to deliver in a single product what HP offers in two different products, virtual partitions (vPars) and virtual machines (HPVM). The apparent benefit is that you can configure systems that have a mix of dedicated and shared resources. For example, you can have in the same partition some disks directly attached to a partition like they would be in HP’s vPars, and other disks served through the VIO in a way similar to HPVM. In reality, though, the difference between the way IBM partitions and HPVM virtual machines work is not that big, and if anything, is going to diminish over time.

A more puzzling statement is that "The scalability is also restricted to the nPartition that the vPar is created on." Is IBM really presenting as a limitation the fact that you can't put more than 16 CPUs in a partition on a 16-CPU system? So I tested that idea. And to my surprise, IBM does indeed allow you to create, for example, a system with 4 virtual CPUs on a host with 2 physical CPUs. Interestingly, I know how to create such configurations with HPVM, but I also know that they invariably run like crap when put under heavy stress, so HPVM will refuse to run them if you don't twist its arm with a special "force" option. I believe that it's the correct thing to do. My quick experiments with IBM systems in such configurations confirm my opinion.

IBM also comments that "There is also limited workload support; resources cannot be added or removed." Resources can be added to or removed from a vPar, or moved between vPars, so I'm not sure what IBM refers to. Similarly, when IBM writes: "Finally, vPars don't allow you to share resources between partitions, nor can you dynamically allocate processing resources between partitions", I invite readers to check the transcript of a three-year-old demo to see for how long (at least) that statement has been false…

Virtual Machines

Finally, IBM also gives false or outdated information regarding HP Integrity Virtual Machines, aka HPVM (not “IVM”, which is an IBM product.)

HPVM now supports 8 virtual CPUs, not 4, and it has always been able to take advantage of the much larger number of physical CPUs that HP-UX supports (128 today). Please note that 8 virtual CPUs is the supported limit, not the limit of what works (the red line in the graph shows 16-way scalability of an HPVM virtual machine under moderate competition using a Linux scalability benchmark; the top blue line is the scalability on native hardware; the bottom curves are HPVM under maximal competition and Xen under no competition.)

IBM also states that "Reboots are also required to add processors or memory", but this is actually not very different from their own micro-partitions. In a Linux guest with a kernel supporting CPU hotplug, for example, you can disable CPU3 under HPVM by typing echo 0 > /sys/devices/system/cpu/cpu3/online, just like you would on a real system (on HP-UX, you would use hpvmmgmt -c 3 to keep only 3 CPUs). HPVM also features dynamic memory in a virtual machine, meaning that you can add or remove memory while the guest is running. You need to reboot a virtual machine only to change the maximum number of CPUs or the maximum amount of memory, but then that's also the case on IBM systems: to change the maximum number of CPUs, you need to modify a "profile", and to apply that profile, you need to reboot. However, IBM micro-partitions have a real advantage (probably not for long), which is that you can boot with less than the maximum, whereas HPVM requires that the maximum resources be available at boot time.

IBM wants you to believe that "There is no support for features such as uncapped partitions or shared processor pools." In reality, HPVM has always worked in uncapped mode by default. Until recently, you needed HP's Global Workload Manager to enable capping, because we thought that it was a little too cumbersome to configure manually. Our customers told us otherwise, so you can now directly create virtual machines with capped CPU allocation. As for shared processor pools, I think that this is an IBM configuration concept that is entirely unnecessary on HP systems, as HPVM computes CPU placement dynamically for you. Let me know if you think otherwise…

The statement IBM made that "Virtual storage adapters also cannot be moved, unless the virtual machines are shut down" is also untrue. All you need is to use hpvmmodify to add or remove virtual I/O devices. There are limits. For example, HPVM only provisions a limited number of spare PCI slots and buses, based on how many devices you started with. So if you start with one disk controller, you might have an issue going to the maximum supported configuration of 158 I/O devices without rebooting the partition. But if you know ahead of time that you will need that kind of growth, we even provide "null" backing stores that can be used for that purpose.

But the core argument IBM has against HP Virtual Machines is: "The downside here is scalability." Scalability is a complex enough topic that it deserves a separate post, with some actual data…

Preserving the investment matters

Let me conclude by pointing out an important fact: HP has a much better track record at preserving the investment of their customers. For example, on cell-based systems, HP allowed mixed PA-RISC and Itanium cells to facilitate the transition.

This is particularly true for virtual machines. HPVM is software that HP supports on any Integrity server running the required HP-UX (including systems from other vendors). By contrast, IBM micro-partitions are firmware, with a lot of hardware dependency. Why does it matter? Because firmware only applies to recent hardware, whereas software can be designed to run on older hardware. As an illustration, my development and test environment consists of a severely outdated zx2000 and rx5670 (the old 900MHz version). These machines were introduced in 2002 and discontinued in 2003 or 2004. In other words, I develop HPVM on hardware that was discontinued before HPVM was even introduced…

By contrast, you cannot run IBM micro-partitions on any POWER-4 system, and you need new POWER-6 systems in order to take advantage of new features such as Live Partition Mobility (the ability to transfer virtual machines from one host to another). HP customers who purchased hardware five years ago can use it to evaluate the equivalent HPVM feature today, without having to purchase brand new hardware.

Update: About history

The IBM paper boasts about IBM’s 40-plus year history of virtualization, trying to suggest that they have been in the virtualization market longer than anybody else. In reality, the PowerVM solution is really recent (less than 5 years old). Earlier partitioning solutions (LPARs) were comparable to HP’s vPars, and both technologies were introduced in 2001.

But a friend and colleague at HP pointed out that a lot of these more modern partitioning technologies actually originated at Digital Equipment, under the name OpenVMS Galaxy. And the Galaxy implementation actually offered features that have not entirely been matched yet, like APIs to share memory between partitions.

Where did the HP Way go?

Recently, I discussed the old "HP Way" with some HP colleagues. This happens a lot, actually. I'd say that this is a topic of discussion during lunch at HP maybe once a week, in one form or another.

Employees who were at Hewlett-Packard before the merger with Compaq, more specifically before Carly Fiorina decided to overhaul the corporate culture, will often comment on the "good old days". Employees from companies that HP acquired later, most notably Compaq or DEC, are obviously much less passionate about the HP Way, but they generally show some interest, if only because of the role it used to play in making HP employees so passionate about their company.

Oh, look, the HP Way is gone!

One thing I had noticed was that the "HP Way" was nowhere to be found on any HP web site that I know of. It is not on the corporate HP History web site, nor does a search for "HP Way" on that site return any meaningful result. It's possible that a better search string would find it, and it may even be somewhere I did not look, but my point is that it's not very easy to find. (Update: Since one reader got confused, I want to make clear that I'm looking for the text of the HP Way, the description of the values that used to be given to employees, which I quote below. I am not looking for the words "HP Way", which are present in a number of places.)

Contrast this with the About HP corporate page in 1996: what do you see there as the last link? Sure enough, the HP Way is prominently displayed as an essential component of the HP culture. Every HP employee was "brainwashed" with the HP Way from his or her first day in the company. No wonder that years later, they still ask where it's gone. Back then, the HP Way even had its own dedicated web page.

Did someone rewrite HP corporate history?

So the fact that there is no reference to it anywhere on today's corporate web site seems odd. It almost looks like history has been rewritten. I get the same feeling when I enter the HP building in Sophia-Antipolis. Why? Because there are two portraits in the lobby: Bill Hewlett and Dave Packard. That building was initially purchased and built by Digital Equipment Corporation. If any picture should be there, it should show Ken Olsen, not Hewlett or Packard, nor Rod Canion for that matter.

I would personally hate to have built a company that left the kind of imprint in computer history DEC left, only to see it vanish from corporate memory almost overnight… Erasing the pictures of the past sounds much more like Vae Victis or the work of George Orwell’s Minitrue than the kind of fair and balanced rendition of corporate history you would naively expect from a well-meaning corporate communication department.

Google can’t find the HP Way either…

But the truth is, I don't think there is any evil intent here. One reason is that Google sometimes has trouble finding the HP Way too. Actually, your mileage may vary. I once got a link to the HP alumni site as the second result. But usually, you are much more likely to find Lunch, the HP way, an extremely funny story for those who were at HP in those days (because it is sooo true).

As the saying goes, "never ascribe to malice that which is adequately explained by stupidity". As an aside, this is sometimes attributed to Napoleon Bonaparte, but I can't find any French source that would confirm it. I think that the closest confirmed equivalent from Bonaparte would be "N'interrompez jamais un ennemi qui est en train de faire une erreur" (never interrupt an enemy while he's making a mistake), but that is not even close.

Back on topic, I'm tempted to think that it's simply a case where the HP Way was no longer considered relevant, and nobody bothered to keep track of it on the HP corporate web site. As a result, when looking for "HP Way" on the web today, it's become much easier to find highly critical accounts of HP than a description of what the HP Way really represented. Obviously, that can't be too good for HP's image…

Where can we find the HP Way today?

To finally find a reference to the HP Way as I remembered it being described to HP employees, I had to search Google for a comparison with Carly Fiorina's "Rules of the Garage". And I finally found a tribute to Bill Hewlett that quotes both texts exactly as I remembered them.

As the link to the historical HP Way page on the HP web site shows, another option is to use the excellent Wayback Machine to look at the web the way it used to be at some point in the past. But that's something you will do only if you remember that there once was something to search for. Again, the point here is not that you cannot find it, it's that finding the original HP Way has become so much more difficult…

The original HP Way: It’s all about employees

The original HP Way was not so much about a company as it was about its employees:

The HP Way
We have trust and respect for individuals.
We focus on a high level of achievement and contribution.
We conduct our business with uncompromising integrity.
We achieve our common objectives through teamwork.
We encourage flexibility and innovation.

There are only five points and very few words. It's a highly condensed way to express the corporate policy, which trusts the employees to understand not just the rules, but most importantly their intent and spirit. There is no redundancy; each point is about a different topic. These rules were written by engineers for engineers. It's almost a computer program…

The rules of the garage: What was that all about?

By contrast, the so-called "Rules of the Garage" introduced by Carly Fiorina look really weak:

Rules of the Garage
Believe you can change the world.
Work quickly, keep the tools unlocked, work whenever.
Know when to work alone and when to work together.
Share – tools, ideas.
Trust your colleagues.
No politics. No bureaucracy. (These are ridiculous in a garage.)
The customer defines a job well done.
Radical ideas are not bad ideas.
Invent different ways of working.
Make a contribution every day.
If it doesn’t contribute, it doesn’t leave the garage.
Believe that together we can do anything.

To me, it sounds much more like the kind of routine you'd give to little kids in kindergarten… It's imprecise and highly redundant. More importantly, the relationship between the company and the employee is no longer bi-directional as it was in the HP Way. Notice how "we" changed into "you", except in the last rule. If I were cynical, I'd say that this new set of rules amounts to: "you do this, and we'll get the reward"… Isn't that exactly what happened in the years that followed?

The Rules of the Garage did not last long. They quietly went the way of the dodo, but I don’t think it was ever intended for them to last decades as the HP Way had. Instead, I believe that the problem was to evict the HP Way without giving the impression that nothing replaced it. But in reality, nothing replaced the HP Way: the Rules of the Garage were essentially empty, and after the Rules of the Garage, there was nothing…

How is that relevant today?

Despite its long absence from the HP corporate web site, the HP Way is still seen as the reference for a successful corporate culture. It's widely recognized that HP and the HP culture ignited Silicon Valley. There is a good reason for that: highly creative people are what makes this economy thrive. See Phil McKinney's ideas on the creative economy to see just how relevant this remains today. But guess what: creativity is motivated by the confident belief that there will be a reward.

The HP Way was about the various aspects of that reward: respect, achievement, integrity, teamwork, innovation. I can't think of much beyond that in terms of recognition. I can't think of better reasons to work hard. Now, I don't care much about calling it the HP Way, but I do care about these values being at the core of what my company does. It is no accident that the top technology companies in the world, including a large fraction of Silicon Valley, have applied the HP Way in one form or another. It's not charity, it's simply the most efficient way to do business when the majority of your employees are highly creative individuals.

With all the respect that I have for HP's current management, as far as corporate culture is concerned, they could still learn a thing or two from such history-certified business geniuses as Hewlett and Packard. About eight years after having been actively erased from corporate communication, the HP Way is still very much talked about; it is still regarded as a reference. Maybe that's a sign that there is something timeless about it…

Update: HP Way 2.0?

Today, Google “HP Way” and HP’s corporate values show up as the second entry. No keyword stuffing here (I checked), but Google apparently decided that the page was relevant to the topic somehow. It’s a good thing: the keywords in HP’s corporate objectives are much closer to the old HP Way than to the Rules of the Garage. The five original keywords, respect, achievement, integrity, teamwork, innovation, are all there. New keywords have been added, such as agility and passion. But the style is the same: very terse, very dense. The HP web site labels this as “our shared values”, but it wouldn’t be unfair to call it “HP Way 2.0”.

The next step is to make employees (and, from there, outside observers) really believe that these values are back. People outside HP may not realize just how hard it was to turn HP around. Reorganizations were frequent. A number of good people, colleagues and friends, lost their jobs. This left scars. Many employees don’t feel valued or safe anymore. Many will no longer believe that “trust and respect for individuals” applies to them. This can be fixed, and for HP’s long-term health, it has to be fixed. Not because HP should be a charity, but precisely because the HP Way is what made HP such a successful business.

Il est dans le caractère français d’exagérer,
de se plaindre et de tout défigurer
dès qu’on est mécontent.

(“It is in the French character to exaggerate, to complain, and to disfigure everything as soon as one is unhappy.”)

Napoléon Bonaparte

Categories: Computer history, HP

HP TechCon 2008

TechCon 2008 in Boston
I just came back from HP TechCon 2008, Hewlett-Packard’s internal conference for technologists, which was held in Boston this year.

HP TechCon is something serious. It works like a real conference: you have to submit papers to be invited, there is a formal selection process, and technologists end up presenting their work to a wide audience.

The two differences from other conferences are what make it so interesting:

  • It’s internal to HP, so we get to see and talk about stuff that is confidential or really far out.
  • It’s all about HP, which is a big company, so there are a lot of different topics. Most conferences are really monolithic by comparison.

Meeting great people

Obviously, there is a lot I can’t talk about. But I can at least say that it was the occasion for me to meet many amazing people. Below are some public links telling you more about some people I met there:

Phil McKinney is the VP and CTO of HP’s Personal Systems Group. He has a blog, a Facebook profile, a Wikipedia entry, and more than 500 LinkedIn connections; he publishes podcasts, and there is even a Google Phil entry on his web pages. Recommended reading.

Exploring the web around Phil’s blog and comments, I discovered a number of things, like mscape and the HP Game On series of ads. And for serious gamers, the Blackbird 002 game system (adding “002” to the name is, by itself, a really cool piece of serious geekery).

Duncan Stewart is a physics researcher at HP Labs. You need to add “memristor” when you Google Duncan, because there are other people with the same name. Occasional readers of my blog may know that physics is a topic I’m quite interested in. He’s one of the researchers behind the memristor, which I will talk about below.

There are a number of pages about Duncan Stewart, including this one on the HP Labs web site. I really enjoyed talking to him.

The memristor

In case you have not heard about it, the memristor is the fourth passive device, after the resistor, the capacitor and the inductor. But it would be surprising if you had not heard about it: the number of papers and articles on such a recent topic is really amazing. Here are a link at HP Labs, the article in Nature that I think started it all, and the Wikipedia entry.

To explain what the memristor is, a little hydraulic analogy is in order. As you know, a good way to think about electricity is to see “voltage” as the height of water (the pressure, really), and “current” as the flow of water.

  • A resistor is like a grid or something else that blocks the flow: to get more water to flow through (more current), you need a higher level of water (more voltage). This is expressed as Ohm’s law, $v = R\,i$.
  • A capacitor is like a tank, where inbound current raises the level of water, and outbound current depletes the contents of the tank and therefore lowers the level of water. It relates a change in voltage to a current, $i = C\,\frac{\mathrm{d}v}{\mathrm{d}t}$, which you can also write as $q = C\,v$, where $q$ is the accumulated current over time (the charge).
  • An inductor is like a heavy paddle wheel placed in the current, which prevents the flow from changing quickly. In that case, changes in current are related to the height of water: if you try to reverse the flow, for example, water accumulates until the paddle wheel changes direction. This is traditionally expressed as $v = L\,\frac{\mathrm{d}i}{\mathrm{d}t}$, but you can also write it as $\varphi = L\,i$, where $\varphi$ is the accumulated voltage over time (the flux).
  • Finally, a memristor is like a gravel-filled pipe near a constriction. If the flow pushes the gravel towards the constriction, the gravel blocks the pipe and the resistance to flow increases. If, on the other hand, the flow pulls the gravel away from the constriction, water can flow freely. This relates a change in the accumulated current (the charge) to a change in the accumulated voltage (the flux), which you can write as $\mathrm{d}\varphi = M\,\mathrm{d}q$.

The equations show why this is the “fourth passive element”: if you consider voltage and current, together with their accumulated values over time (flux and charge), there are exactly four ways to pair them.
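
To make that symmetry explicit, here is one way to line up the four relations in differential form, with charge $q = \int i\,\mathrm{d}t$ standing for the accumulated current and flux $\varphi = \int v\,\mathrm{d}t$ for the accumulated voltage (this is essentially how Chua’s original 1971 paper framed it, if memory serves):

\begin{aligned}
\text{resistor:}\quad  & \mathrm{d}v       = R\,\mathrm{d}i \\
\text{capacitor:}\quad & \mathrm{d}q       = C\,\mathrm{d}v \\
\text{inductor:}\quad  & \mathrm{d}\varphi = L\,\mathrm{d}i \\
\text{memristor:}\quad & \mathrm{d}\varphi = M\,\mathrm{d}q
\end{aligned}

Each line pairs two of the four quantities, and the memristor is the one pairing that no existing component covered.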

Fine, so… what do you do with it?

So… why does it matter? What’s the big deal? Well, very simply put, the memristor looks, among other things, like a very cheap and dense way to build memory devices. There are still a good number of questions to be addressed before it unseats Flash memory and other persistent storage, and to be honest, there is even a small chance that it never will (the hard disk is still with us, no matter how many times its death has been predicted).
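
To make the “memory device” point a bit more concrete, here is a minimal numerical sketch in Python, using a simplified linear ion-drift model similar in spirit to the one sketched in the Nature paper. The parameter values, voltages and pulse durations are illustrative assumptions, not measured device data, and the real physics is of course more complicated.

# A minimal sketch of why a memristor behaves like a memory cell, using a
# simplified linear ion-drift model. All numbers are illustrative assumptions.

R_ON, R_OFF = 100.0, 16000.0     # ohms: resistance when fully doped / fully undoped
D = 10e-9                        # m: thickness of the active film
MU_V = 1e-14                     # m^2/(V*s): dopant mobility
K = MU_V * R_ON / D              # drift coefficient for the internal state

def memristance(x):
    """Resistance for a normalized internal state x in [0, 1] (doped fraction)."""
    return R_ON * x + R_OFF * (1.0 - x)

def apply_voltage(x, volts, seconds, dt=1e-5):
    """Euler-integrate the internal state while a constant voltage is applied."""
    for _ in range(int(seconds / dt)):
        i = volts / memristance(x)                   # Ohm-like relation at this instant
        x = min(max(x + K * i / D * dt, 0.0), 1.0)   # the "gravel" drifts with the charge
    return x

x = 0.1                                # start mostly undoped: high resistance
print("before write: %6.0f ohms" % memristance(x))
x = apply_voltage(x, +2.0, 0.5)        # "write": a positive pulse lowers the resistance
print("after write:  %6.0f ohms" % memristance(x))
x = apply_voltage(x, 0.0, 0.5)         # no voltage applied: the state is retained
print("power off:    %6.0f ohms" % memristance(x))
x = apply_voltage(x, -2.0, 0.5)        # "erase": a negative pulse raises it again
print("after erase:  %6.0f ohms" % memristance(x))

Running it prints a high resistance before the write pulse, a low resistance after it, the same low resistance after half a second with no voltage applied (that is the non-volatile part), and a high resistance again after the erase pulse.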

There are many other applications as well. One of them, “artificial intelligence”, looks a bit overhyped to me at the moment. What seems true, however, is that memristors might enable cheap neural-like circuits to be mass-produced efficiently.

Conclusion

This was overall a great TechCon. Still, I’m glad to be back: it was exhausting.

Categories: HP, Physics, Sociology, Technology

CloudPrint


HP launched CloudPrint, a way to print documents using your cell phone. The New York Times writes:

The underlying idea is to unhook physical documents from a user’s computer and printer and make it simple for travelers to take their documents with them and use them with no more than a cellphone and access to a local printer.

This is pretty neat. You print to a special driver, send the document over the Internet, and then you can print it anywhere. No more carrying a laptop around just for a few slides.
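
For what it’s worth, here is a purely hypothetical sketch of that workflow in Python: upload an already-rendered document from the desktop, get back a short code, then fetch the document later from any device sitting next to a printer. The service URL, the response format and the code-based retrieval are invented for illustration; they are not CloudPrint’s actual API.

import json
import urllib.request

RELAY_URL = "https://example.com/document-relay"    # hypothetical endpoint

def upload(path):
    """Send an already-rendered document (e.g. a PDF) to the relay service."""
    with open(path, "rb") as f:
        req = urllib.request.Request(
            RELAY_URL,
            data=f.read(),
            headers={"Content-Type": "application/pdf"},
            method="POST",
        )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["code"]              # short code to type on the phone

def download(code, path):
    """Retrieve the document later, near whatever printer happens to be available."""
    with urllib.request.urlopen(RELAY_URL + "/" + code) as resp:
        with open(path, "wb") as out:
            out.write(resp.read())

# code = upload("slides.pdf")    # on the desktop, behind the "special driver"
# download(code, "slides.pdf")   # at the destination, then hand the file to the printer

That is all there is to the idea: the document lives on a server for a while, and the phone only needs to carry a short identifier.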

Printer Virtualization

Memory, terminals, then whole computers, storage: everything has been virtualized… Now, CloudPrint looks to me like the first step towards true printer virtualization.

Categories: HP, Printing, Virtualization