The article is a little low on details. Based on the fact that the article mentions Philips displays, my guess is that this is based on the 2D+Z technology. In other words, the picture is split between a color map and a depth map.
The benefit of this approach is that it is easy to transmit over a regular channel, since the picture is essentially the same size and format as a regular HDTV picture. The drawback is that the display has to reconstruct at least two images (and for auto-stereoscopic displays, several) from that information.
It may seem simple to reconstruct the required pictures from a color map and a depth map. In reality, it is problematic, because a single color map cannot encode both the color of the front objects and the color of the background they hide.
Imagine, for example, that you have a plane in the front and the sky in the background. The sky is blue, the plane is grey. At the border of the plane, the grey hides the blue. But the two eyes don't see this border at exactly the same location, a phenomenon called parallax.
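To make the parallax problem concrete, here is a minimal sketch (not Philips' actual algorithm) of synthesizing one eye's view from a color map and a depth map: each pixel is shifted horizontally by a disparity proportional to its depth, and positions where no pixel lands become holes. Everything here is a simplifying assumption: grayscale values instead of RGB, a linear depth-to-disparity mapping, and the function name `synthesize_view` is hypothetical.

```python
import numpy as np

def synthesize_view(color, depth, max_disparity=8):
    """Shift each pixel horizontally by a disparity proportional to its
    depth, keeping the nearest pixel on collisions.

    color: (H, W) array of pixel values (grayscale for simplicity)
    depth: (H, W) array in [0, 1]; 1 = nearest to the viewer
    Returns an (H, W) array where -1 marks a hole (disoccluded area).
    """
    h, w = color.shape
    out = np.full((h, w), -1.0)        # -1 marks a hole
    out_depth = np.full((h, w), -1.0)  # nearest pixel wins on collisions
    for y in range(h):
        for x in range(w):
            d = int(round(depth[y, x] * max_disparity))
            nx = x + d                 # shift right for this eye
            if 0 <= nx < w and depth[y, x] > out_depth[y, nx]:
                out[y, nx] = color[y, x]
                out_depth[y, nx] = depth[y, x]
    return out
```

Running this on a one-scanline version of the plane/sky example (sky = 0 at depth 0, plane = 1 at depth 1), the foreground shifts sideways and leaves a band of holes right where the background used to be hidden: those are the pixels whose color simply isn't in the transmitted picture.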
So what happens is that for at least one of the pictures, the system has to "invent" colors. Basically, it extrapolates the background color of the parts you can't see from the color of the parts you can see.
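The simplest form of this extrapolation can be sketched as follows: in a scanline where missing pixels are marked with -1, copy the nearest known color from the background side into each hole. This is a toy illustration of the idea, not any vendor's method; the function name `fill_holes` and the left-to-right propagation direction are assumptions.

```python
import numpy as np

def fill_holes(row):
    """Fill holes (marked -1) in one scanline by propagating the nearest
    known color from the left, i.e. from the background side when the
    foreground has been shifted to the right."""
    out = row.copy()
    for x in range(1, len(out)):
        if out[x] == -1:
            out[x] = out[x - 1]  # "invent" the hidden background color
    return out
```

Real renderers use smarter inpainting than this, but the principle is the same: the filled-in colors are guesses, which is exactly why sharp synthetic borders show artifacts.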
This is one of the drawbacks of the Philips approach compared to traditional stereoscopic and auto-stereoscopic systems that send all the pictures separately (and therefore don't need to invent colors).
This is barely visible in movies, if at all, because the borders of real objects tend to be a bit fuzzy. But for synthetic images such as those created by Tao Presentations, it tends to cause relatively visible artifacts.
I wonder if Dolby solved that specific problem…