Casual Discussion Science Forum @scivillage

Full Version: How do you teach a car that a snowman won’t walk across the road?
https://aeon.co/ideas/how-do-you-teach-a...s-the-road

EXCERPT: . . . Human drivers aren’t the only ones who need common sense; its lack in artificial intelligence (AI) systems will likely be the major obstacle to the wide deployment of fully autonomous cars. Even the best of today’s self-driving cars are challenged by the object-in-the-road problem. Perceiving ‘obstacles’ that no human would ever stop for, these vehicles are liable to slam on the brakes unexpectedly, catching other motorists off-guard. Rear-ending by human drivers is the most common accident involving self-driving cars.

The challenges for autonomous vehicles probably won’t be solved by giving cars more training data or explicit rules for what to do in unusual situations. To be trustworthy, these cars need common sense: broad knowledge about the world and an ability to adapt that knowledge in novel circumstances. While today’s AI systems have made impressive strides [...] their lack of a robust foundation of common sense makes them susceptible to unpredictable and unhumanlike errors.

Common sense is multifaceted, but one essential aspect is the mostly tacit ‘core knowledge’ that humans share – knowledge we are born with or learn by living in the world. [...] You can predict, for example, that while a pile of glass on the road won’t fly away as you approach, a flock of birds likely will. If you see a ball bounce in front of your car, you know that it might be followed by a child or a dog running to retrieve it. From this perspective, the term ‘common sense’ seems to capture exactly what current AI cannot do: use general knowledge about the world to act outside prior training or pre-programmed rules.

Today’s most successful AI systems use deep neural networks. These are algorithms trained to spot patterns, based on statistics gleaned from extensive collections of human-labelled examples. This process is very different from how humans learn. We seem to come into the world equipped with innate knowledge of certain basic concepts that help to bootstrap our way to understanding – including the notions of discrete objects and events, the three-dimensional nature of space, and the very idea of causality itself. Humans also seem to be born with nascent concepts of sociality: babies can recognise simple facial expressions, they have inklings about language and its role in communication, and rudimentary strategies to entice adults into communication. Such knowledge is so elemental and immediate that we aren’t even conscious we have it, or that it forms the basis for all future learning. A big lesson from decades of AI research is how hard it is to teach such concepts to machines.
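To make the contrast concrete, here is a minimal illustrative sketch (not from the article) of what "spotting patterns from human-labelled examples" amounts to: a toy nearest-neighbour classifier that can only map a new input onto the labels it was given. The feature vectors and labels are hypothetical; the point is that a novel object (a snowman, say) is forced into one of the known categories rather than reasoned about.

```python
# A toy supervised pattern-spotter: a 1-nearest-neighbour classifier.
# All "knowledge" it has is the labelled training set below (hypothetical data).
import math

train = [  # (feature vector, human-supplied label)
    ((0.9, 0.1), "plastic bag"),
    ((0.8, 0.2), "plastic bag"),
    ((0.1, 0.9), "pedestrian"),
    ((0.2, 0.8), "pedestrian"),
]

def classify(x):
    """Label a new input with the label of its nearest training example."""
    return min(train, key=lambda ex: math.dist(x, ex[0]))[1]

print(classify((0.85, 0.15)))  # close to the "plastic bag" examples
print(classify((0.45, 0.55)))  # a snowman? it still gets a known label
```

The second call shows the failure mode the article describes: the model has no concept of "I have never seen this before", only distances to past labelled data.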

On top of their innate knowledge, children also exhibit innate drives to actively explore the world, figure out the causes and effects of events, make predictions, and enlist adults to teach them what they want to know. The formation of concepts is tightly linked to children developing motor skills and awareness of their own bodies [...] Today’s state-of-the-art machine-learning systems start out as blank slates and function as passive, bodiless learners of statistical patterns; by contrast, common sense in babies grows via innate knowledge combined with learning that’s embodied, social, active and geared towards creating and testing theories of the world.

The history of implanting common sense in AI systems has largely focused on cataloguing human knowledge: manually programming, crowdsourcing, or web-mining commonsense ‘assertions’ or computational representations of stereotyped situations. But all such attempts face a major, possibly fatal obstacle: much (perhaps most) of our core intuitive knowledge is unwritten, unspoken, and not even in our conscious awareness.

The US Defense Advanced Research Projects Agency (DARPA), a major funder of AI research, recently launched a four-year programme on ‘Foundations of Human Common Sense’ that takes a different approach. It challenges researchers to create an AI system that learns from ‘experience’ in order to attain the cognitive abilities of an 18-month-old baby. It might seem strange that matching a baby is considered a grand challenge for AI, but this reflects the gulf between AI’s success in specific, narrow domains and more general, robust intelligence... (MORE)