It’s not exactly talented just yet, but it’s a start:
Here are a couple of things this robot also does:
It’s not exactly fast yet (notice the “15x” in the video?)
Another one where acceleration is more clearly visible because of humans in the background:
Clearly, robotics is making progress, but that also makes the gap between robots and animals more humbling. The other day, a dog was running alongside me while I was biking, and I couldn’t help but admire the agility of its run: 30 km/h in the bushes, downhill on slippery gravel, avoiding a multitude of obstacles with a large variety of strategies (run around, jump over, …), all the while checking where I was…
A truly amazing Google Talk presented by David Rock, author of the book Your Brain at Work. And if the title of the book has at least two meanings, it’s definitely intentional.
IEEE Spectrum has a special report about the Singularity, that point in our future where predictions fall apart because major technical changes make any extrapolation we may make based on today’s trends essentially obsolete. Even the New York Times has an article, entitled The Future Is Now? Pretty Soon, at Least, which quickly touches on some of the ideas.
The special issue in IEEE is more extensive. There are many interesting articles. In one of them, Ray Kurzweil, arguably the inventor of the concept of Singularity, debates with Neil Gershenfeld, and Vernor Vinge shares what he sees as the signs of the Singularity.
One important point, I believe, is that “there will be a singularity at time t” is a proposition whose truth might depend on the time at which it is stated. It seems very likely to me that when you are in the middle of a singularity, you have no idea that it’s there. That’s why I am a bit wary of the use of a singular noun, the singularity, when I think really that there have been many singularities over the course of history.
How could someone from the Middle Ages, for example, predict the structure of a society after motorized personal transportation became not only possible, but mainstream and relatively cheap (I know, I know, gas prices…)? In other words, seen from the Middle Ages, the invention of the automobile or, even more so, the airplane, were singularities that might be predicted (e.g. by Leonardo da Vinci), but whose impact on society was really difficult to grasp. The same is true for remote communication, from the telephone to television to the Internet.
Now, one singularity is somewhat special, and it’s when we started building enhancements to our intelligence, and not just our physical abilities. That’s the very definition Vernor Vinge uses when he writes:
I think it’s likely that with technology we can in the fairly near future create or become creatures of more than human intelligence. Such a technological singularity would revolutionize our world, ushering in a posthuman epoch.
But that already happened. The first modest pocket calculators enabled computations so complex that they completely changed the course of engineering. Any engineer with a calculator has “more than human intelligence”, for he can compute faster than any human being without a calculator can. It’s only recently that we redefined intelligence to exclude the ability to perform computations, and the only reason we did that is because computers were so much better at it than we are.
So that’s my personal view on that question: the most important singularity, the one that Ray Kurzweil sees sometime in the future, has already happened, and we are right in the middle of seeing its effects.
More progress has been made on brain-scanning technology. I predict that in a few years, interfaces to computers will probably no longer rely on keyboards as much. But programming such interfaces will take some innovative programming techniques.
The fact that this guy worked so much on the interaction between human nervous systems and machines is a reason to hope that this is almost real. A lot of progress seems to have been made since the last time I wrote about this topic.
There is an interesting article on the New Scientist web site about 13 things that do not make sense. One of them I particularly like:
Madeleine Ennis, a pharmacologist at Queen’s University, Belfast, was the scourge of homeopathy. She railed against its claims that a chemical remedy could be diluted to the point where a sample was unlikely to contain a single molecule of anything but water, and yet still have a healing effect. Until, that is, she set out to prove once and for all that homeopathy was bunkum.
The reason I like it is that, if this report is true, Madeleine Ennis is courageous twice over: once because she shows a readiness to admit, both privately and publicly, that she was wrong. This commitment to truth, even when you don’t like it, is the key ingredient of a real scientist. But Ennis is courageous a second time because homeopathy has such a bad name that she simply cannot avoid the flak she is going to receive, irrespective of the quality of her work. Kudos to her.
… and you might live longer. OK, I’m just slightly over-interpreting the contents of this article, which tells in substance that cells that were deprived of oxygen might not die from oxygen starvation (they apparently survive a few hours without oxygen), but kill themselves when oxygen comes back. According to the article, some mitochondrial mechanism meant to fight cancer cannot tell the difference between lack of oxygen and cancer, or something like that.
Interestingly, this was “the way I knew things”, and when I talked about this with colleagues here, they seemed to agree. Is it possible that this would be old news for European or French emergency response teams, and a recent discovery or something still under debate in the US? If you know the answer, please post a comment.
A lot of progress seems to have been made recently towards practical artificial limbs. Thirty years after the TV series, this is a big step towards the Six Million Dollar Man. Though, if it’s funded through DARPA, I seriously doubt it will have cost only six million dollars!
The French agency for UFOs just opened its database. Unfortunately, the CNES/GEIPAN web site has generally been down since then, presumably due to the heavy traffic. In these conditions, I’m not sure that the ad prominently featured at the bottom is really helpful:
Why should we show any interest in UFOs?
I don’t think opening up this archive will change anybody’s mind on the topic. For those who believe, there are already a large number of sites with tons of “evidence”. For those who don’t, there are similarly convincing arguments against. So this is one case where the human brain has trouble sorting things out, simply because the available evidence is not strong enough one way or another.
Is there any point talking about this, if we can’t prove anything? I believe there is. In many fields of science, evidence is statistical. It’s the accumulation of facts that, individually, mean very little, which together form evidence. Over time, we learned how to locate small solid bodies outside of the Earth’s atmosphere with sufficient precision that some predictions and useful observations can be made about meteors. For a very long time, this was not the case. Similarly, if one person sees little men in a flying saucer, it does not mean much, but if dozens of people report similar incidents over the span of a few decades, then there may be some truth to the observations.
Not scientific, but not necessarily “false” either
It remains very frustrating for scientists, of course, because they can’t reproduce the phenomenon at will. So there is a strong temptation to classify this as “non-science”. And, in the present state of knowledge, that’s really what it is. It is not science, not necessarily because it is not true, but because we do not know what to make of the little evidence we have. There is no theory that would allow us to predict when and where UFOs will land, for example.
But from “not science”, another step is often taken, which is: “it’s false”, or “it’s bogus”, or “there can be no extraterrestrial because Albert Einstein said that we can’t travel faster than light”. That step is irrational. We cannot deny evidence on the basis that we don’t know what to do with it. It would be like saying “Dear Mr. Sherlock Holmes, this person cannot be dead, because I am unable to explain how the murderer proceeded”. That would have made for much less interesting books, don’t you think?
Personal UFO experience
But in this “polluted” atmosphere surrounding UFOs, forming a personal opinion on the topic in today’s context is difficult, in particular when you have not seen a UFO yourself. Even when you have seen one, you still won’t know what to believe. There is a big gap between “unidentified” and “extraterrestrial”. I saw a “UFO” once, but the “U” here only means I could not identify it personally and found that it “misbehaved”. “Misbehaving flying object”, that might be a better acronym… What I know, however, is that I saw something I could not explain, and denying it would simply be lying to myself. Again, not a very logical attitude…
So, what did I see? Well, it was not that impressive: walking in the countryside one evening, I saw a light shoot rapidly skywards. It could have been an amateur rocket, though in my recollection, it was a bit fast for that, and I don’t remember hearing any noise or seeing any smoke. To this day, I still have no idea what it was, and I will probably die without knowing whether it was a weather balloon (unlikely), Martians (unlikely as well), some optical effect (why not), or a probe from a remote region of the galaxy (now, that is quite likely…).
Whether the archives of the GEIPAN will help solve that mystery, only time will tell.
Thought recognition is coming. New articles on this topic pop up regularly. But what would a thought-driven user interface look like?
The XL programming language had its inception in questions like this. I was thinking more of speech recognition at the time, trying to figure out how it would be possible to use object-oriented programming (which I had just discovered back then) to program a speech-centric user interface. It turns out that it’s probably quite difficult.
The reason is relatively easy to explain. In a graphical user interface (GUI), you have a finite (and relatively small) number of objects on screen. You pick one object, for example a menu, and then another, and so on. One of the key design features of the GUI is that it should be non-modal, i.e. at any given point in time, you should be able to pick this or that menu freely. This is very different from old text-based programs, where you would typically switch, for instance, between text editing mode, text formatting mode, page layout mode, printer selection mode, and so on. This basic tenet was a mindset revolution for programmers in the early days of GUIs. The original Macintosh Human Interface Guidelines insist on that point as early as page 12. Today, it’s much harder to find web pages explaining that fundamental aspect, because programmers only know about modeless programming.
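To make the modeless idea concrete, here is a minimal sketch (all names are illustrative, not from any real toolkit): commands live in one flat table, and handling a command never depends on what the user did before.

```python
# A minimal sketch of modeless dispatch, as in a GUI: every command is
# available at any time, and the handler for one command does not
# depend on the previous command. All names here are illustrative.

def open_file(state):
    state["open_files"] += 1

def print_page(state):
    state["printed"] += 1

# One flat command table: no modes, no memory of the previous command.
COMMANDS = {"Open": open_file, "Print": print_page}

def dispatch(state, command):
    # Any command is legal at any point; ordering does not matter.
    COMMANDS[command](state)

state = {"open_files": 0, "printed": 0}
for cmd in ["Open", "Print", "Open"]:   # any ordering works the same way
    dispatch(state, cmd)
print(state)   # {'open_files': 2, 'printed': 1}
```

The point of the flat table is that the interpretation of “Print” is fixed: it means the same thing whether it arrives first, last, or in between.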
But a speech-based user interface, on the contrary, is extremely modal. Everything depends on what was said before. For example, the word it in Find the Smith file and print it. I will often use the more general term vocabulary-based user interface (VUI), which covers all kinds of user interface where you “talk” to a machine. For example, with a voice mail system, the vocabulary can be digits you type on a keypad, like 1221 to get voice mail. The problem is that the vocabulary for speech can be thousands of words. So at any given point in time, you have thousands of possible modes.
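By contrast, a toy sketch of a speech interpreter shows why such an interface is inherently modal: a word like “it” only has a meaning inside the mode (the conversational context) that earlier sentences created. Everything below is a hypothetical illustration, not a real speech-recognition API.

```python
# A toy sketch of a modal, vocabulary-based interface: the meaning of
# "it" depends on everything said before. Illustrative names only.

class SpeechInterpreter:
    def __init__(self):
        self.last_object = None   # conversational context, i.e. the current mode
        self.log = []

    def handle(self, sentence):
        words = sentence.lower().rstrip(".").split()
        if words[0] == "find":
            # "Find the Smith file" establishes a referent...
            self.last_object = " ".join(words[1:])
            self.log.append(f"found {self.last_object}")
        elif words[0] == "print":
            # ..."print it" only makes sense inside the mode that
            # the previous sentence created.
            if words[1] == "it":
                if self.last_object is None:
                    raise ValueError("'it' has no referent yet")
                target = self.last_object
            else:
                target = " ".join(words[1:])
            self.log.append(f"printed {target}")

ui = SpeechInterpreter()
ui.handle("Find the Smith file")
ui.handle("Print it")   # "it" resolves through the current mode
print(ui.log)   # ['found the smith file', 'printed the smith file']
```

Even this tiny example has one mode per possible referent; with a vocabulary of thousands of words, the number of such modes explodes, which is the difficulty described above.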