A future full of helpful robots, quietly going about their business
and assisting humans in thousands of small ways, is one of technology's
most long-deferred promises. Only recently have robots started to
achieve the kind of sophistication and ubiquity that computing's
pioneers originally envisioned. The military has hundreds of UAVs
blanketing the skies above Iraq and Afghanistan, and Roombas are
vacuuming living rooms across the country. At the bleeding edge, there
was the 2005 DARPA Grand Challenge: a grueling, 140-mile,
no-humans-allowed race in which full-sized, completely autonomous robot
cars navigated rugged desert terrain, dodging rocks, cliffs, and cacti
in pursuit of a $2 million cash prize. The follow-on 2007 Urban Challenge went even
further, with the robotic competitors required to drive alongside humans
on crowded roads, recognizing and avoiding other cars and following the
rules of the road. Suddenly, the robotic future doesn't look so far
off.
In some ways, the remarkable thing is that it took so long to get
here. In the 1960s, researchers in artificial intelligence were boldly
declaring that we'd have thinking machines fully equivalent to humans
in 10 years. Instead, for most of the past half-century, the only
robots we saw outside of movies and labs were arms confined to factory
floors or machines remotely operated by humans. Building machines that
behaved intelligently in the real world was harder than anyone imagined.
The biggest challenge for robots, then and now, lies in making sense
of the world. With perfect information, many of the hardest problems in
robotics would be nearly trivial. We've gotten very good at building
and actuating robots, but in order for them to use their abilities to
the fullest they need to make sense of their surroundings. A robot car
has to know where the road is and where other cars and people are. A
robot servant needs to be able to recognize household items.
Today's robots are starting to be able to make these difficult
determinations. The question we're here to answer is: how? What
allowed robots to go from blind, dumb, immobile automatons to fully
autonomous entities able to operate in unstructured environments like
the streets of a city? The most obvious answer is Moore's Law, and it
has certainly been a huge factor. But raw processing power is useless
without the right algorithms. A revolution has taken place in the
robotics world. By embracing uncertainty and using the tools of
probability, robots are able to make sense of their surroundings like
never before.
In this article, we'll explore how robots use their sensors to make
sense of the world. This discussion applies mostly to robots that carry
an internal representation of the world and act according to that
representation. There are lots of successful robots that don't do such
"thinking": the military's UAVs are mostly remotely piloted, linked by
an electronic tether to human eyes and brains on the ground. The Roomba
does its job without building a map of your house; it just has a series
of simple behaviors that are triggered by timing or bumping into things.
These robots are very good at what they do, but to autonomously carry
out more complicated tasks like driving, a robot needs to have some
understanding of the world around it. The robot needs to know where it
is and where it can and can't go, and it needs to decide what to do next.
We'll be discussing how modern robots answer these questions.
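Before moving on, it's worth seeing just how little machinery the
Roomba's reactive style needs. Here's a minimal sketch, in Python, of a
behavior-triggered control loop in that spirit; the simulated robot, its
behaviors, and its event rates are all hypothetical simplifications, not
iRobot's actual code:

```python
import random

# A minimal sketch of purely reactive control in the spirit of the Roomba:
# no map, no model of the world, just simple behaviors fired by events.
# The SimulatedRobot stand-in and its event rates are made up.

class SimulatedRobot:
    def __init__(self, steps):
        self.steps = steps          # crude stand-in for battery life

    def battery_ok(self):
        self.steps -= 1
        return self.steps >= 0

    def bumper_pressed(self):
        return random.random() < 0.10   # occasionally bump into furniture

    def dirt_detected(self):
        return random.random() < 0.05   # occasionally find a dirty patch

    def act(self, behavior):
        print(behavior)             # real hardware would drive motors here

def reactive_vacuum(robot):
    # No state carries over between iterations: each step just reacts
    # to whatever the sensors report right now.
    while robot.battery_ok():
        if robot.bumper_pressed():
            robot.act("back up, turn %d degrees" % random.randint(90, 180))
        elif robot.dirt_detected():
            robot.act("spiral over the dirty spot")
        else:
            robot.act("drive forward")

reactive_vacuum(SimulatedRobot(steps=20))
```

Notice that nothing in the loop remembers where the robot has been. That
simplicity is exactly what the approach trades away when the task, like
driving, demands a model of the world.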
Sensing and Probability
As it turns out, the big challenge in many robotics applications is
the same: it's easy to do the right thing, but only if you know what the
right thing is. We've known how to steer a car automatically for a
long time. What's hard is knowing where the road is and whether that
shape by the road is a fire hydrant you can ignore or a child about to
run across the street. To operate in an unstructured environment, a
robot needs to use sensing to understand the state of the world relative
to itself. Sensing is the key to successful robots, and probability is
the key to successful sensing.
Sensing is difficult because the world is a complicated,
unpredictable place. Remember that the robot doesn't get to "see"
reality directly. It can only take measurements through its sensors,
which don't perfectly reflect the true state of the world. Just because
your sensor tells you something doesn't mean it's true. For example,
GPS position measurements can jump by several meters, even when the
receiver is stationary. Some things aren't even possible to measure
directly; if you're trying to distinguish between a person and a cactus,
there's no sensor that directly measures "humanness." You have to look
at measurable properties like shape and size, and from those infer
whether you're seeing a person.
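To make that concrete, here's a minimal sketch, in Python, of the kind
of inference involved. Every number below (the prior, the typical
widths, the sensor noise) is made up for illustration; a real robot
would learn its sensor models from data:

```python
from math import exp, pi, sqrt

# Inferring "humanness" from an indirect, noisy width measurement.
# All the numbers here are hypothetical.

def gaussian(x, mean, std):
    """Probability density of measuring x, given a true mean and noise std."""
    return exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * sqrt(2 * pi))

def posterior_person(measured_width):
    # Prior belief before the measurement: person and cactus equally likely.
    prior_person, prior_cactus = 0.5, 0.5

    # Sensor models (hypothetical): people average 0.5 m wide, cacti 0.3 m,
    # and the sensor is noisy either way.
    likelihood_person = gaussian(measured_width, mean=0.5, std=0.1)
    likelihood_cactus = gaussian(measured_width, mean=0.3, std=0.1)

    # Bayes' rule: weigh each hypothesis by how well it explains the data.
    p_person = likelihood_person * prior_person
    p_cactus = likelihood_cactus * prior_cactus
    return p_person / (p_person + p_cactus)

print(posterior_person(0.45))  # ~0.73: probably a person
print(posterior_person(0.30))  # ~0.12: probably a cactus
```

The same idea helps with the jumpy GPS receiver: rather than trusting
any single reading, the robot weighs each measurement by how plausible
it is given what it already believes.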