The little fellow below is Sven, the budgerigar (grasparkiet). He's holding a toy, because he's playing fetch with his owner. A lot of dog owners would be pretty proud of such a feat; most pet dogs wouldn't do it as well.
Birds like this are sold starting at around $15. If you want the smartest bird though, $15 is a rip-off. You could catch yourself a raven, which has been observed using strategy while hunting in the wild, or a crow, which has been observed in lab tests precisely crafting its own tools to reach food. The smartest bird is arguably the European Magpie (ekster), which you could (though obviously, you shouldn't…) capture as well. Then you could harness the power of its 5.8-gram brain and teach it to speak. Most importantly though: it's the only non-mammal that scientists agree recognizes itself in a mirror, and notices when something is wrong with its appearance. In other words, magpies use mirrors like you might in the morning.
Now meet the PR2. You can get your own starting at $400,000.
Much like Sven, the PR2 is pretty cute and engaging. He has two computers for 'feet'. Good computers: a very high-end CPU and 24 GB of RAM each. But they're plain computers, pretty close to the desktops at GroupT, or the one I have under my desk (but closer to the one under my desk ;-)). They run Ubuntu, though you could install Windows or OS X on them, and they communicate with each other over a network cable. They have a bunch of cameras and sensors plugged into them, as well as drivers for some attached motors. All of that combined is the robot you're looking at. All you have to do to make it work is run a set of applications on the computers that form the interface to the hardware.
Suppose you have a desktop at GroupT (or your work, institute, whatever…), and we give it some distinctive visual features. We give it a webcam and write some software for it, so that when presented with its reflection, the desktop can detect itself. And it has to be able to do that with the mirror reflecting from any random location. Now a student drops a plate of spaghetti on the computer, covering it, and it's taken to the basement to be washed off. Would our program still enable the computer to recognize itself, covered in spaghetti in the basement? A magpie would.
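To make the difficulty concrete, here's a toy sketch in Python. Everything in it — the feature names, the matcher, the threshold — is my own invented illustration, not real recognition software: it models "recognizing yourself" as checking how many of your known distinctive features are visible, and shows why occlusion (the spaghetti) breaks such a naive approach even though nothing about the computer's identity has changed.

```python
# Toy illustration (hypothetical, not real vision code): self-recognition
# as naive feature matching. Real systems use far richer descriptors;
# this only shows why occlusion defeats a simple visibility check.

DESKTOP_FEATURES = {"red_sticker", "dent_left", "logo",
                    "blue_cable", "scratch_top", "vent_grille"}

def matches_self(observed, known=DESKTOP_FEATURES, threshold=0.8):
    """Naive matcher: fraction of known features visible in the scene."""
    visible = known & observed
    return len(visible) / len(known) >= threshold

# Clean view in the mirror: all six features visible -> recognized.
clean_view = set(DESKTOP_FEATURES)
print(matches_self(clean_view))       # True

# Spaghetti covers most features. The basement changes nothing the
# matcher cares about, but occlusion alone drops it below threshold.
spaghetti_view = {"dent_left", "blue_cable"}
print(matches_self(spaghetti_view))   # False
```

The magpie, of course, passes this test effortlessly; the sketch just shows that "detect yourself" stops being trivial the moment the scene stops cooperating.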
The PR2 doesn't have to recognize itself covered in spaghetti. Worse… My task is to get such a robot to autonomously bake pancakes; more specifically, to provide and analyze the visual feedback. I'm required to use only visual feedback, so I don't get to poke at things, use microphones, and so on.
He has to pour batter onto a surface he has determined is greasy enough, until he sees that it's enough. The surface and tools aren't known in advance. And he has to see whether the pancake is ready, or perhaps burning: "Should I turn up the heat, because the pancake isn't doing much?" He has to see and recognize whatever I think he needs to see to complete his task.
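The pouring step is really a closed loop on what the camera sees: act a little, look, compare against the goal, repeat. Here's a minimal sketch of that structure, assuming invented stubs (`estimate_dough_area`, `tilt_container`, and the 120 cm² target are all hypothetical stand-ins for real perception and motion code):

```python
# Minimal sketch of pouring as a visual feedback loop. The stubs stand
# in for real perception and actuation; the point is the control
# structure: never pour "blind" for a fixed time, always re-check vision.

TARGET_AREA_CM2 = 120.0  # desired pancake footprint (illustrative number)

def pour_until_enough(estimate_dough_area, tilt_container,
                      target=TARGET_AREA_CM2):
    steps = 0
    while estimate_dough_area() < target:
        tilt_container()        # pour a little more
        steps += 1
        if steps > 100:         # safety stop if perception misbehaves
            break
    return steps

# Fake perception/actuation for demonstration: each pour step makes the
# visible dough area grow by 15 cm^2.
state = {"area": 0.0}
def fake_estimate(): return state["area"]
def fake_tilt(): state["area"] += 15.0

steps = pour_until_enough(fake_estimate, fake_tilt)
print(steps, state["area"])     # 8 120.0
```

The same loop shape reappears in the rest of the task: "is the surface greasy enough?", "is the pancake done?" are all vision checks that gate the next action.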
My blog will deal with a single, two-fold question: how do you get a desktop computer to bake a pancake, and what are the implications and other uses of the tools you use to do that?