The field applications of modern robots are limited because they are largely unable to cope with damage and injuries. But a new robot called Starfish could change all that – it is programmed to learn about itself and adapt to new situations, with no pre-built contingency plans.
I am walking strangely. About a week ago, I pulled something in my left ankle, which now hurts during the part of each step just before the foot leaves the ground. As a result, my other muscles are compensating to minimise the pain, and my gait has shifted to something subtly different from the norm.
In similar ways, all animal brains can cope with injuries by computing new, often qualitatively different, movements to compensate. Because this isn’t a conscious process, we often take it for granted. But by looking at how difficult it is to program a robot to do the same thing, we can get a sense of how hard it actually is.
Robots have been used for years to perform structured, repetitive tasks. As engineering has advanced, their movements have become more and more stable and life-like.
But they still have severe limitations, not the least of which is inflexibility in the face of injury or changes to their body shape. To put it simply, if a robot’s leg falls off, it becomes as useful as so much scrap metal.
So for robots, adaptiveness is a desirable virtue. Modern bots can independently develop complex behaviours without any previous programming, but this requires trial and error and lots of time.
Josh Bongard and colleagues at Cornell University in Ithaca, New York, have solved this problem by developing a robot, described in Science magazine, that continuously assesses its body structure and develops new ways of moving if anything changes.
It differs from other models in that it has no built-in contingency plans, no strategies for dealing with anticipated problems. It’s simply programmed to examine itself and adapt accordingly.
The concept of a robot that can adapt to new situations is often the precursor to nightmare scenarios in many a science-fiction film. So it is fortunate that Bongard’s robot isn’t armed or threatening, but instead looks more like a four-armed starfish.
Each arm has two joints, along with sensors that record the angles of those joints and the tilt of the arms.
At first, Starfish performs some experiments. Humans have an instinctive understanding of how our body parts connect with each other, but this sense, called kinaesthesia, must be programmed into robots. Bongard’s robot doesn’t need that – it can work out its structure on its own.
It does this by performing random actions and using an array of sensors to see what these do to its body. It then creates several ‘self-models’ – representations of how its body is joined together – in the same way that a forensic scientist pieces together how a crime occurred based on the evidence.
Starfish then compares the models and performs actions designed to distinguish between them. After several rounds of this, the robot has a fairly accurate idea of how it’s built, what sorts of things it can do, and which parts it needs to move to do them. If it’s given an instruction, such as ‘move forward’, it can plan the best way of doing that.
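The self-modelling loop described above – act, compare candidate self-models, pick the next action to tell them apart – can be sketched in miniature. Bongard’s actual algorithm evolves physical simulations of the robot’s body; the toy Python below is only an illustration of the same idea, with a made-up linear "body" whose hidden gain parameters stand in for the robot’s structure. All names and numbers here are hypothetical, not taken from the paper.

```python
import itertools
import random

# Hypothetical toy body: each of 4 "legs" has a hidden gain, and the
# tilt sensor reports the weighted sum of the motor commands. The robot
# must discover TRUE_GAINS from a discrete space of candidate models.
GAIN_CHOICES = (-1.0, -0.5, 0.0, 0.5, 1.0)
TRUE_GAINS = (1.0, 0.5, -0.5, 0.0)          # hidden from the "robot"

def reality(action):
    """Sensor reading the physical body would actually report."""
    return sum(g * a for g, a in zip(TRUE_GAINS, action))

def predict(model, action):
    """What a candidate self-model expects the sensor to say."""
    return sum(g * a for g, a in zip(model, action))

def disagreement(models, action):
    """Spread of the models' predictions: big spread = informative action."""
    preds = [predict(m, action) for m in models]
    return max(preds) - min(preds)

# Start with every hypothesis about how the body might be built.
models = list(itertools.product(GAIN_CHOICES, repeat=4))   # 625 candidates

random.seed(1)
while len(models) > 1:
    # Choose the experiment the surviving models disagree about most.
    candidates = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(30)]
    action = max(candidates, key=lambda a: disagreement(models, a))
    observed = reality(action)
    # Discard every model whose prediction conflicts with the sensors.
    models = [m for m in models if abs(predict(m, action) - observed) < 1e-6]

print(models[0])   # -> (1.0, 0.5, -0.5, 0.0): the body has been recovered
```

The key design choice, mirroring the robot’s behaviour, is that actions are not random for their own sake: each one is selected because the current self-models make conflicting predictions about it, so its outcome eliminates the most hypotheses.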
However, if the robot detects something that goes against its self-model, it initiates the whole process again. If its leg falls off, it notices, re-creates its picture of itself, and plans new behaviours to cope with its new situation.
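The trigger for that re-modelling can be stated very simply: keep comparing what the current self-model predicts against what the sensors actually report, and restart self-modelling when they diverge. This is a hypothetical sketch of that check, not the authors' code; the threshold value is an illustrative assumption.

```python
def model_broken(predicted, observed, tolerance=0.2):
    """True when the self-model no longer matches the sensors,
    e.g. after damage - the cue to rebuild the self-model."""
    return abs(predicted - observed) > tolerance

# Intact body: prediction and sensor reading agree, no action needed.
print(model_broken(predicted=0.50, observed=0.45))   # -> False
# A leg falls off: the tilt sensor contradicts the model.
print(model_broken(predicted=0.50, observed=-0.30))  # -> True
```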
These abilities will be instrumental in the future of robotics. Robots will become far more useful if they can respond to new environments, or cope with the bodily changes that happen when they grasp a tool or suffer damage.
They could be deployed to unstable disaster sites to help with recovery, or to the depths of space for exploration. They may even give us an insight into how the human brain develops self-awareness and adapts to new situations.