Does the future still need us? That was the question computer scientist Bill Joy famously posed a couple of decades ago. If the things we
read in the media about artificial intelligence can be believed, the answer's a
definite "no." Smart computers already grade our exams,
help us choose our next YouTube video, and decide what we can post on social media;
and they'll soon be doing much more, from
powering vast armies of robot soldiers to safely steering our cars
and trucks. But just because machines like these do seemingly
smart things, does it follow that they really are intelligent?
And, even if they can be described that way, is their growing ability
a sure signpost to a future of redundant human stupidity? What
exactly is artificial intelligence and why should we care?
Artwork: Can we imagine a world where computers can imagine a world without
us? Composite photo by explainthatstuff.com incorporating dummy photo by Conrad Johnson
courtesy of US Army and
DVIDS.
Some of the real (human) minds that have wrestled with the problem
of artificial (machine) intelligence have offered deceptively simple
definitions:
John McCarthy, the US computer scientist who coined the
term "artificial intelligence" back in the 1950s, said: "It
is the science and engineering of making intelligent machines,
especially intelligent computer programs." [1]
Marvin Minsky, another leading scholar in the field, suggested AI is "the
science of making machines do things that would require intelligence
if done by men." [2]
Alan Turing, one of the founding fathers of this field, essentially defined artificial
intelligence as the ability of a machine to pass itself off as a completely convincing
human. [3]
But what is intelligence?
Of course, to understand definitions like these, we need
to reflect a little bit on what we mean by intelligence. When
we say someone is "clever," typically we mean they're good
at brainy, book-type stuff, although you can certainly be a clever
football player, pastry chef, or
just about anything else. If we say someone is "talented,"
we mean they have some sort of natural ability—a born head-start
that makes them better than average.
So what does intelligence mean? Effectively, it's a talent for
cleverness that you can choose to deploy however you want. You're not
just clever at being a lawyer or a historian: you have a general
aptitude for stuff that you can apply in different ways at
different times. You're like an actor who can play many roles. Today
you might be a world-class lawyer; tomorrow, you might turn your hand
to learning classical piano or becoming a chess grand
master. Like an actor, you have an overarching talent and a
conscious, free-willed ability to play any part you choose. So
intelligence is a kind of general-purpose thinking talent underpinned
by qualities like a good memory, excellent language and reasoning
skills, an ability to communicate with other people, often
combined with creativity, emotional awareness, morality, self-awareness, and
so on. A clever person might be good at just one
thing (because they've been doing it a long time) and flounder when
they try something else; an intelligent person has the potential to
be good at anything and everything (or, at least, many different
things), even when they've not been explicitly trained in those
things.
Photo: Map-reading, fire-fighting, flying a plane, playing football—which of these things takes intelligence? They all do. We could build "clever" computers or robots to do all these things, but they still might not be what we consider "intelligent." Yet if we could build a machine that could do all these things, and many more, without any human intervention, would that be truly "intelligent"... or is there more to it than that? Photos by William Johnson, Christopher Quail, Trevor T. McBride, and Trevor Cokley, all courtesy of
US Air Force.
But "general problem-solving ability" might seem like a vague
and woolly definition of intelligence—and some people prefer to be
more specific. So, in the field of psychology (the science of human
behavior), you'll find more than a few people willing to define
intelligence as the ability to pass intelligence (IQ) tests. That's a
circular definition, but it's not as cynical as it sounds. An
intelligence test is something specific, well known, and well
studied. You could take a typical IQ test and break it up into all
the different puzzles it contains. You could figure out step-by-step
ways of solving each one and train a computer in how to do it. Give
your computer an IQ test and it ought to score pretty highly, in
theory making it an artificially intelligent machine, at least by
Marvin Minsky's definition. But would that really make it intelligent? Well, no. Because if you gave it a
slightly different test it hadn't been trained for, it would probably
have no idea what to do and fail, without any of the crushing
self-awareness that a human would have in the same situation. Still,
we might say it has some of the qualities of an intelligent machine
or an intelligent human.
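To make that concrete, here's a tiny sketch (in Python) of the kind of narrow, puzzle-specific cleverness described above. It's purely illustrative: a routine hard-wired to continue simple number sequences (one common kind of IQ-test puzzle) by testing a couple of built-in rules, and nothing more.

```python
# A minimal sketch of narrow, puzzle-specific "cleverness": a routine that
# continues simple number sequences by testing two hard-wired hypotheses.
# Ask it anything outside its tiny repertoire and it simply gives up.

def continue_sequence(seq):
    # Hypothesis 1: constant difference (arithmetic sequence).
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:
        return seq[-1] + diffs[0]
    # Hypothesis 2: constant ratio (geometric sequence).
    if all(x != 0 for x in seq):
        ratios = [b / a for a, b in zip(seq, seq[1:])]
        if len(set(ratios)) == 1:
            return seq[-1] * ratios[0]
    return None   # no idea what to do, and no embarrassment about it either

print(continue_sequence([2, 5, 8, 11]))    # 14
print(continue_sequence([3, 6, 12, 24]))   # 48.0
print(continue_sequence([1, 1, 2, 3, 5]))  # None: Fibonacci isn't in its rule book
```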
Strong AI
If we define intelligence this way, it means machines with artificial
intelligence would have similar general-purpose problem-solving
ability: they might behave as human-like brains in computer-like
boxes. This way of conceiving things is sometimes called strong AI
or artificial general intelligence (AGI) and it's what most
people assume we mean when we talk about intelligent machines: if not
machine replicas of humans, machines with most (or all) of the
intellectual qualities of a thinking human being—and sometimes their emotional
and creative qualities too.
Weak AI
The idea of making machines that can behave in clever ways to solve
specific, limited problems (like playing chess, driving a car,
operating a factory robot, or whatever) is called weak AI or
artificial narrow intelligence (ANI). Most of the vaguely
"intelligent" computers and robots that we come across at
the moment fall into this category. A chess-playing computer like
IBM's Deep Blue made world headlines when it defeated grand master
Garry Kasparov back in 1997, but much of its cleverness was down
to brute-force computation—and just imagine it trying to drive a car
down the freeway. By the same token, self-driving Google cars look
pretty impressive, but don't expect them to sit down across the chess
board anytime soon.
What's the point of AI?
Strong versus weak, general versus narrow, human-like versus machine-like,
intelligent versus merely clever—there are all sorts of ways of
considering "degrees" of artificial intelligence. We can
picture a scale ranging from simple, narrow artificial cleverness
(clanking, preprogrammed, machine-like ability to carry out specific
tasks, like a Roomba vacuum cleaner scrabbling round your living room
without any awareness of what it's doing); through chess-playing
computers, self-driving cars, and robots that can rescue people from
disaster zones; right up to a supremely, generally artificially
intelligent computer that's smart enough to reprogram itself,
recognize its flaws and design better versions of itself, and even
has some sort of consciousness of what it's doing.
Yet many would argue that this isn't a simple scale, and that AGI is entirely
different from ANI (qualitatively different), not just "turbocharged" ANI (quantitatively different).
And if that's true, it definitely doesn't follow that our ultimate goal should be to create the most
generally, artificially intelligent, humanlike computer we possibly
can; that's where much of the debate about AI gets bogged down.
Our first goal is to understand our goal. If we want a
rescue robot
that we can deploy in emergencies to pull people out of a
burning building, it needs a selected range of useful behaviors. It
probably doesn't need to play chess, drive cars, or speak
Spanish, though the more things it can do, obviously the better it
can improvise in unpredictable situations, just like humans.
One goal of AI is simply to do one particular thing that humans do, and do it
as well as they do it, without getting hung up on doing everything humans do,
doing it better than they do, or doing it exactly as a human would.
"Strong" is not automatically better than "weak,"
even if the use of those emotionally loaded words makes it sound so.
Weak is a misnomer. There's nothing "weak" about a
world-class human chess player if all you want is someone who can
play chess. A "weak" Roomba robot can make a very nice job of
cleaning your carpet.
You can take a very pragmatic approach to AI, in other words, without
worrying excessively about wider philosophical questions, however much
those questions interest philosophers. You can program a computer to play chess without sitting
down to define the word "intelligent" or worrying about
just how intelligent your computer will turn out to be. The important
thing is to be true to our own goals, whatever they might be.
One of the often-overlooked goals of AI—overlooked in the popular media, at least—is to
shed light on how our own brains work through the scientific study of how machines
could mimic them. Since the 1950s, psychology has undergone a "cognitive" revolution in which quite a bit of the mystery hidden in our heads has been unraveled by
research that assumes brains process information in similar ways to computers. AI research
sheds more light on cognitive psychology and neuropsychology in a similar way: by figuring out how machines could be intelligent, we learn more about how humans are already intelligent.
How will we know when we've finally created something intelligent?
If our goal is to make intelligent machines, how will we know when we've achieved it?
We could see if they pass something called the Turing test, but that doesn't guarantee
either "intelligence" or "understanding"...
The Turing test
This all-important question was what Alan Turing homed in on in a
scientific paper he penned in 1950 called "Computing Machinery
and Intelligence," which gave rise to a famous scientific
experiment now called the Turing test (which Turing himself called
the "imitation game"). [4]
The basic idea is to compare how well a
human and an "intelligent" computer can pass the same, real-life
test: having a conversation. The experimenter sits in a room chatting,
through a computer, with someone who is sitting outside and out
of sight. What the experimenter doesn't know is whether they're
chatting with another human or with a computer programmed to analyze
the conversation and respond like a human. Simply speaking, if
they're chatting with a computer and the experimenter thinks they're
chatting with a human, they might as well be chatting with a
human—and, in that case, we can consider the computer intelligent.
In effect, Turing replaced the abstract question "Can a machine
think?" with the much more practical question "Can a machine
imitate a human?"
Artwork: The Turing test. Suppose you're sitting at the red computer
in the red room, communicating (through on-screen chat) with either a computer
or another person in the blue room. If you're chatting to a computer, but it
can convince you you're chatting to another person, we can regard that computer
as intelligent.
Testing the test
Turing's test is ingenious. Rather than quibbling over the definition of
intelligence, it offers a simple comparison as an acceptable test:
can a computer pass itself off as a human? We could reasonably argue
that this isn't, in fact, a valid test of intelligence. It's a test
of what computer scientists call natural language processing (NLP)—and related cognitive abilities like logical
reasoning, judgment, and memory—but is it a useful test of
intelligence? Perhaps that's not the point. If our interest is in
developing "weaker" forms of AI, such as superbly safe and
dependable self-driving cars, it really doesn't matter whether they
can chat to humans—and maybe we could consider them intelligent
(usefully clever) all the same?
Why should human intelligence be the yardstick for machine intelligence?
Why should a human definition of human intelligence be the
yardstick? What if we compared an expertly designed and trained
self-driving car with a teenager who'd had a couple of driving
lessons? Would we consider the teenager unintelligent because they
couldn't drive as well? Who would you rather be driven by: a human
who could pass an intelligence test with a certain score or an expert
driver, human or machine, trained in the best possible way?
One objection to the Turing test is that it encourages us to develop
not genuinely intelligent machines but simply machines that can pass the Turing
test: machines that are as plausibly human as possible. Passing the Turing test
may not be the be-all and end-all of intelligence any more than
scoring highly on IQ tests is the be-all and end-all of academic
success—or a prediction for leading a happy and successful life.
There are plenty of things we need to do in our world that don't
necessarily need general-purpose human intelligence—and maybe
solving those problems in the best possible way, rather than the most
human way, should be our real focus?
The Chinese room
Even if a machine can pass the Turing test, that doesn't mean it has any
conscious awareness of what it's doing or that it's consciously
passing itself off as a human (in the way that an actor might play a
role). You can imagine a machine that's given a huge database of
every possible conversation anyone has ever had in the whole of
history so all it has to do is look up what the human says to it and,
having analyzed what's already been said to establish some context,
offer a plausible reply from its almost infinite repertoire.
This is a variation on another thought experiment called the Chinese room,
devised in 1980 by philosopher John Searle as an objection to the
whole idea of strong AI. [5]
He argued that a machine could do something
apparently very intelligent (such as holding a Turing test
conversation) just by following rules and without understanding what
it was doing in any way. He imagined the machine to be like an
English-speaking person sitting in a room being fed sentences in
Chinese that they didn't understand on pieces of paper posted under
the door. The person would look up the sentences in a huge book of
rules, find appropriate responses, and pass those back
under the same door. Just because the person can flawlessly
answer questions in Chinese, Searle argues, doesn't mean they
understand a word of what they're doing. A machine programmed to
respond in Chinese by rule is different from a human who understands Chinese
and responds as a byproduct of that understanding.
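To see how hollow that kind of rule-following can be, here's a toy "Chinese room" in miniature, sketched in Python: nothing but a lookup table of canned replies. The phrases are invented for illustration; the point is simply that the program produces plausible-looking Chinese without understanding a word of it.

```python
# A toy "Chinese room": the program matches incoming symbols against a
# rule book and returns a canned reply, understanding neither of them.
RULE_BOOK = {
    "你好吗": "我很好，谢谢",            # "How are you?" -> "I'm fine, thanks"
    "今天天气怎么样": "今天天气很好",     # "How's the weather?" -> "The weather is good"
}

def respond(symbols):
    # Look the input up in the rule book; fall back to a stock reply.
    return RULE_BOOK.get(symbols, "对不起，我不明白")  # "Sorry, I don't understand"

print(respond("你好吗"))  # prints a plausible answer without "knowing" any Chinese
```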
So what?
These sorts of tests and thought experiments rapidly bog down in semantic
arguments over definitions of things like intelligence,
knowledge, and understanding, which may be
intensely interesting to philosophers but aren't necessarily that
important for solving practical, real-world problems.
Searle's Chinese room continues to stir up philosophical debate about the meaning
of understanding, intelligence, syntax, semantics,
the relationship of the mind to the body, and very much more besides.
Some agree with Searle's position, highlighting what they see as
the absurdity of AI geeks claiming that computer models of human
behavior are essentially no different from the behavior they
simulate. Others, such as MIT robot scientist Rodney Brooks, believe
these sorts of arguments are based on an unshakable
(and often unscientific) belief that humans are somehow special. [6]
That makes the whole argument essentially circular: machines can't be intelligent because humans
are special, and humans are special because machines can't be
as intelligent as them—or so the argument goes.
Types of artificial intelligence
Since the dawn of the field in the 1950s, most real-world, AI computer
programs have fallen into several broad types, which (for the sake of
simplicity) I'm going to categorize into just three: heuristic search, expert
systems, and machine learning. There are also hybrid systems that combine two or more of them.
Let's consider these in turn.
Heuristic search
How do you build a computer that can play chess? If you play chess
yourself, your strategy is probably to arrive at each move by
considering every move you could possibly make, the moves those might
lead to, and so on, running your lithe, monkey mind along a tree of
possibilities until you figure out the move most likely to win the
game. If you have unlimited brainpower, you could theoretically
consider every possible move and rank them accordingly. But, do the math, and
you'll find the number of possible moves is well beyond the limits of
memory and time. According to Marvin Minsky, writing about this
problem back in 1966, we're talking something like 10¹²⁰ moves,
whereas even a simpler game like checkers can come in at 10⁴⁰
possible moves. [7]
Photo: Chess-playing computers typically use heuristic search.
Modern chess programs often work the same way as similar programs designed in the 1960s, but because today's
machines are much more powerful, they can consider far more moves in the same amount of time.
That's essentially why today's computers are better than yesterday's.
Our brains can't process so much stuff—or anything like it. So
instead of considering every possible move, you (or a computer) can
use basic rules of thumb to narrow down the searches you make,
turning an overwhelming problem into something your brain (or a
processor chip) can reasonably handle. This is called a heuristic
search. An obvious heuristic most of us apply in
game situations, if only out of consideration for the people we're
playing with, is to spend a reasonable amount of time
looking for a good move, but not too long. Or, getting more complex,
you might use strategies like trying to dominate the center of the
board, recognizing certain key board patterns, or preserving
important pieces like your bishops and queen at the expense of losing
less-valuable pieces. The key point about heuristic search is that it
settles on a good-enough solution in the time available rather than
trying to find the one and only perfect outcome.
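Here's a minimal sketch of that idea in Python. For brevity it plays tic-tac-toe rather than chess (my own substitution; a real chess program is vastly bigger), but the ingredients are the same: a depth-limited minimax search plus a crude rule-of-thumb evaluation that's used whenever the search is cut off.

```python
# A minimal sketch of heuristic search: depth-limited minimax on tic-tac-toe.
# The heuristic (counting the lines still open to each player) stands in for
# the positional rules of thumb a chess program would use.

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def heuristic(board):
    # Rule of thumb: lines X could still win minus lines O could still win.
    score = 0
    for a, b, c in LINES:
        cells = {board[a], board[b], board[c]}
        if "O" not in cells:
            score += 1
        if "X" not in cells:
            score -= 1
    return score

def minimax(board, depth, player):
    win = winner(board)
    if win:
        return (10, None) if win == "X" else (-10, None)
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if depth == 0 or not moves:
        return heuristic(board), None      # cut off: trust the rule of thumb
    best_score = float("-inf") if player == "X" else float("inf")
    best_move = None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, depth - 1, "O" if player == "X" else "X")
        if (player == "X" and score > best_score) or (player == "O" and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

# X to move can win immediately by completing the left-hand column.
print(minimax("X O  O X ", depth=3, player="X"))   # -> (10, 3)
```

The depth limit is what makes this a heuristic search: instead of exploring the game to the bitter end, the program stops after a few moves ahead and falls back on its rule of thumb.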
Heuristic search is great for logical board games like chess, checkers,
Scrabble, and so on, which involve considering lots of very similar
potential moves, but how useful is it in the real world? You can
imagine an artificially intelligent app that searches through
thousands of houses and flats for sale or rent to help you find the
best one using a heuristic approach to narrow things down. Instead of
showing you everything, it might show you homes like ones you've
tended to look at before within a certain radius or price bracket.
But what about more complex problems like offering legal advice, figuring out
what's wrong with a broken-down car, or diagnosing illnesses?
Expert systems
You can probably see straight away that medical
diagnosis doesn't necessarily lend itself to a simple heuristic search: if
you're a critically ill patient, you're not looking for a
good-enough diagnosis in the time available; you want the right
diagnosis, however long it takes. Your doctor doesn't think "Well
I'll consider the first 10 diseases that pop into my head and then
just pick the most likely one."
Expert systems (sometimes called knowledge-based systems or KBS)
are computer programs designed to go beyond simple search and
decision making using more detailed "if X then Y" analysis
and reasoning. Typically they have a database of knowledge gleaned from studying real, human
experts and a separate system that can reason by dipping into that
knowledge. The limitation of expert systems, and it's not
always a problem, is that a computer trained in one domain of
knowledge (like legal advice or medical diagnosis) isn't any use to
us in a different field. It's perfectly possible for humans to change
careers and switch from being brain surgeons to corporate lawyers,
but expert-system machines can't voluntarily do the same thing
without swapping their databases. Indeed, some medical knowledge is
so very complex, so expert, that even an expert system trained
in one medical field (say, cancer) might be of limited use in another
medical field (emergency medicine). While that's true of human
doctors, the key difference is that human experts tend to recognize
their own limitations and know when to ask for help; machine experts
don't know when they're making incompetent decisions.
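To give a flavor of that "if X then Y" style, here's a toy sketch in Python of a rule-based system. The rules (about a car that won't start) are invented for illustration and are nothing like the thousands of carefully gathered rules in a real expert system; the engine simply keeps firing any rule whose conditions are met until nothing new can be concluded.

```python
# A minimal sketch of a toy "expert system": a few hand-written
# IF...THEN rules and a simple forward-chaining engine.

RULES = [
    ({"engine cranks", "engine won't start"}, "suspect fuel or ignition fault"),
    ({"engine won't crank", "lights dim"}, "suspect flat battery"),
    ({"suspect flat battery"}, "recommend charging or replacing battery"),
]

def infer(facts):
    facts = set(facts)
    # Keep firing rules whose IF-part is satisfied until nothing new is added.
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"engine won't crank", "lights dim"}))
# -> includes "suspect flat battery" and "recommend charging or replacing battery"
```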
Machine learning
Machine learning is one of the buzzwords of cutting-edge AI, although it's actually a very old
term that dates back to the 1950s. In practice, it often means
training a neural network
(very loosely, a hugely simplified computer model of a brain-like structure, made from layers of interconnected cells called "units") on millions, billions, or trillions of examples of something so it can
quickly recognize or classify something it hasn't seen before. So,
trained to look at millions of pictures of tables and chairs, it can
tell you whether a photo it's never seen before shows a table or a
chair—and that's how the automatic photo classification
algorithms work on your phone. Machine learning explains how Google
can filter out explicit adult images from your search results if you
don't want your family to see them: algorithms trained on adult
photos can spot tell-tale signs in other photos as people upload
them. Machine learning also underpins the automated translation you
can find on Google, Bing, and Skype. Programs like these are now so
good that they can convert almost any language into a fairly decent
(at least understandable) translation of any other language without
"understanding" a word of either; they're great examples of
Searle's Chinese room, except they're capable of operating in any
tongue you like.
One of the characteristics of machine learning programs is—the clue is
in the name—that they learn as they go along. So unlike a
preprogrammed expert system, they get better and better at
what they do the more they do it, just like a real human, sometimes to the
point where they can do the job better than you could do it yourself.
A few more things are worth quickly noting. Although the terms "machine learning"
and "neural network" sound like they stem from psychology, they're much more to do with
complex math and statistics. And when we talk about "neural networks" being "brain-like," that's really
just an analogy. There's not necessarily anything brain-like in a neural
network (synapses that work like one-way streets and chemical neurotransmitters are two very obvious differences, to start with). What neural machine learning and human brains do have in common is that they both
process large amounts of data in parallel (or "pseudo-parallel," in the case of neural networks,
which are usually models of parallel, brain-like structures implemented on traditional, serial computers).
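Here's a minimal sketch of that learn-from-examples idea in Python: a single artificial "unit" (a perceptron, about the simplest neural network there is) learning to tell two made-up classes of object apart from a handful of labeled examples. The data and labels are invented for illustration; real systems stack millions of units and train on vastly more data, but the loop of predict, compare with the right answer, and nudge the weights is the same basic idea.

```python
# A minimal sketch of machine learning: a single perceptron "unit" that
# learns to separate two classes of toy feature vectors from examples.

import random

def predict(weights, bias, features):
    total = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if total > 0 else 0

def train(examples, epochs=20, rate=0.1):
    weights = [0.0, 0.0]      # one weight per feature (two features here)
    bias = 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for features, label in examples:
            error = label - predict(weights, bias, features)
            # Nudge the weights toward whatever reduces the error.
            weights = [w + rate * error * x for w, x in zip(weights, features)]
            bias += rate * error
    return weights, bias

# Toy data: (height, surface area) -> 1 for "table", 0 for "chair".
data = [([0.75, 1.2], 1), ([0.72, 1.5], 1), ([0.45, 0.3], 0), ([0.50, 0.25], 0)]
w, b = train(data)
print(predict(w, b, [0.78, 1.4]))   # expect 1: looks more like a table
```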
Artwork: How would an AI computer/robot go about climbing a tree? Left: It could use heuristic search to consider a number of the most likely paths along the branches; Middle: It could use an expert system database of knowledge about trees and climbing, and some IF... THEN... rules to arrive at the best route by reasoning; Right: Using machine learning, perhaps it could analyze thousands of photos of people climbing trees to figure out a good route when presented with a new photo of a tree?
Hybrid systems
I've adopted a fairly arbitrary, three-way classification here purely for the purposes of
an easy-to-understand explanation. But you can classify AI however you wish. Heuristic search and expert systems are examples of what some call symbolic AI or Good Old-Fashioned AI (GOFAI), and work in a classically cognitive fashion, like the "this-leads-to-that" flowchart diagrams we draw to explain simple computer programs. Machine learning, on the other hand, proceeds in a more parallel, brain-like, "connectionist" fashion, without
obvious serial logic. Symbolic AI systems work things out by serial, logical reasoning using a limited amount of data,
while connectionist systems use parallel processing on massive amounts of data.
Artwork: Serial versus parallel processing. In traditional symbolic AI (such as an expert system), one step proceeds logically after another; neural-network machine learning uses parallel processing (although, confusingly, it's usually modeled on traditional computers that work through
serial processing!)
If you're trying to build a self-driving car or a robot that can rescue
people from buildings, you're going to be using elements of the three
previous types of AI in different ways, at different times. So hybrid systems
are increasingly interesting to researchers who might once have worked exclusively
with either symbolic, serial AI or parallel, connectionist, machine-learning systems.
DeepMind's Atari-game playing system is one very recent example of a hybrid
system that works partly through symbolic AI and partly through a neural network.
Self-driving cars rely on machine learning to interpret images of the
streets they're driving down in real time. We've all seen those Google
captcha tests that get us to prove we're not robots by classifying
fire hydrants, bridges, bicycles, and taxis. That exercise is part of
Google's effort to train machine-learning systems to recognize
different objects, training that its self-driving cars will be able to tap into later. But
driving isn't just about recognizing objects; you also need to
recognize and analyze situations. For example, if you're driving
along and you see a round red object rolling out into the road, you
might think "Oh look, there's a ball", but what you should
really think is "That ball probably belongs to a child and if
the ball's rolled onto the road, a child may be right behind it, so I
need to slow down and be prepared to stop". This is more like
expert system decision making. A self-driving car will probably
always have human occupants and a backup human driver to get it out
of trouble, so it doesn't need to be completely autonomous in the
same way as a robot soldier, which might need to extricate itself
from a wider variety of unexpected situations. So, revisiting our
original idea of artificial intelligence as a spectrum between narrow
and general, we can see that the more autonomous and general purpose
a computer or machine, the wider the range of different AI tactics or
techniques it's likely to need to draw on. And it will also need the
ability to figure out which type of thinking to use in different
situations.
One key problem—for all types of AI—is representing some aspect of the world
in a way that a computer system can understand and process. At a simple level, if
you're using a neural network to recognize faces, how exactly do you "translate"
holistic faces into discrete bits of data that the network can work with? And
how do you convert the computer network's output into a form that makes sense to humans?
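Here's a tiny sketch in Python of what that "translation" can look like at the simplest possible level. The 4×4 "image" and the output scores are made up for illustration: on the input side, a grid of pixel brightnesses is flattened into one long list of numbers a network can digest; on the output side, a list of raw scores is turned back into a label a person can read.

```python
# A minimal sketch of the representation problem: image in, label out.

image = [  # grayscale pixel intensities, 0-255 (invented example)
    [ 12,  40,  38,  10],
    [ 45, 200, 198,  44],
    [ 47, 210, 205,  46],
    [ 13,  42,  40,  11],
]

# Input side: flatten the 2-D grid into one long vector and scale to 0-1.
input_vector = [pixel / 255.0 for row in image for pixel in row]
print(len(input_vector), input_vector[:4])

# Output side: suppose the network emits one score per class; we report
# whichever class scored highest, in words a human can understand.
labels = ["table", "chair", "face"]
scores = [0.2, 0.1, 0.9]          # hypothetical network output
print("best guess:", labels[scores.index(max(scores))])
```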
What is AI used for in the real world?
When John McCarthy died in 2011, an
obituary in the British Independent newspaper noted how he'd once observed that a major breakthrough in the field could come in anything from "five to 500 years."
McCarthy set his sights on the very distant horizon of strong AI and
despaired at researchers who satisfied themselves with narrower
goals. As he wrote in 2006:
"I have to admit dissatisfaction with the lack of ambition displayed by
most of my fellow AI researchers....For example, the language used by
the Deep Blue program that defeated world chess champion Garry
Kasparov cannot be used to express "I am a chess program, but
consider many more irrelevant moves than a human does." and draw
conclusions from it. The designers of the program did not see a need
for this capability." [8]
Just because we don't have computers that are smart enough to go
head-to-head with humans at anything and everything, it doesn't mean
we've made no progress with artificial intelligence because, as we've
seen already, strong AI was not always the goal. Look around the
modern world and you'll see endless applications of less spectacular
(but still very impressive) artificial intelligence, including things
like Alexa, Siri, and machine-driven customer agents, automated stock
trading, "Things you might like" recommendations on Amazon and
eBay, online advertising software that shows you ads based on who you
are and what you're likely to buy, vacuum cleaning Roomba robots that
use increasingly intelligent tactics to clean more efficiently, and
much more.
AI in action
Here are some diverse examples of AI in action I've pulled out from recent news stories:
GPT-3, a controversial text-generating AI system, is now being trialed for use in online customer service agents, mental health apps, and similar conversational applications.
Deep-speare, a Shakespearean poetry generator, has been using AI to compose sonnets.
Photo: Roomba vacuum cleaning robots use increasingly sophisticated models to clean your home. We'd hardly call them intelligent, but why would we want to?
In a world packed with intelligent humans, why
bother developing intelligent machines? Is it a sign of our own
intelligence that we recognize our potential stupidity as a
limitation we can overcome? Or yet another sign of human
arrogance—that we somehow always think we can do better than
nature? Is the argument between "supporters" and "opponents"
of machine intelligence two different sides of that arrogance: that
humans are special because we think we can develop machines better
than ourselves but can't... or special because we really can't? Is the quest
for artificial intelligence rather like the quest for the perfect
move in a long and difficult game of chess... something we feel that's
out there somewhere, always just beyond reach? Or is it more like a
heuristic search in a board game, where quick, practical,
good-enough solutions to limited problems are better than waiting
around for perfection?
AI raises all kinds of philosophical questions,
but it raises social problems too, like whether smart factory robots
and machine-learning computer systems will put "intelligent"
people out of work. There are ethical problems as well. For example,
people are already asking difficult questions about the legal
responsibilities of self-driving cars. If you get mown down by a car
like this on a crosswalk, is the car somehow to blame? Is the
passive "driver" sitting with their hands on a dummy steering wheel to blame for not intervening?
Or is the designer to blame even if they're on the other side of the world?
If robot soldiers kill civilians by accident, does the buck stop with
the general who gave the order or the engineers who built and
programmed the machines?
One of the big problems we're already seeing is
that algorithms using machine learning can come up with decisions
that we have no way to understand or challenge. We know the human
assumptions on which the algorithms are based, but there's no easy
way of seeing why a neural network trained on billions of disparate
items of data will, for example,
wrongly flag a world-famous war photo as pornography
or scandalously miscategorize a photo of three black teenagers.
In the second case, after Google's algorithms tagged people as gorillas, Google didn't
even bother trying to fix the problem: it simply stopped its algorithms labeling any
photo as a gorilla, chimpanzee, or monkey. The AI problem was too hard to solve, so
they solved an easier problem instead.
What next for artificial intelligence?
“This dispute is unresolved—and perhaps unresolvable... no-one
knows, for sure, whether an AGI could really be intelligent.”
While pragmatic computer scientists get on with building ever cleverer machines, philosophers continue to wrestle with endless variations on essentially the same stale question: whether ingenious (or brute-force) computational "cleverness" (the cake of AI) can truly replicate human "intelligence" if it can't replicate quintessential,
more subtle human qualities like understanding, empathy, morality, emotion, creativity, free-will, and consciousness (the all-important icing). Perhaps understanding the difference between computational "cleverness" and human "intelligence" is the real Turing test?
Arriving where we started, let's ask again: does the future really need us? Arguably, the biggest limitation of today's relatively weak AI systems is that they don't recognize their own limitations: for that, they
still need us. Until artificially intelligent machines are smart enough to understand how really
stupid they can be, perhaps they pose no ultimate threat to us humans.
In the longer term, is humankind at risk from ultra-intelligent AGI machines—or is that just science fiction nonsense, as skeptics like
Hubert Dreyfus were arguing over half a century ago? Are such ideas a dangerous distraction from a far more plausible threat: how many livelihoods are at risk as machine-learning-type algorithms become increasingly "clever" at doing jobs we once regarded as absolutely human? The jury is still out. Where pessimists like Bill Joy have warned that AI is a Pandora's box, optimists, such as AI "prophet" Ray Kurzweil, look to the singularity—effectively, where machines surpass human intelligence—and a bold, rosy future where "stupid," intractable human problems like war and poverty are deftly swept aside by brainy machines. Meanwhile, pragmatic robot scientists such as MIT's Rodney Brooks argue that machine intelligence is simply the latest human technology
that helps people overcome their all-too-human limitations: "We will become a merger between flesh and machines. We will have the best that machineness has to offer but we will also have our bioheritage to augment whatever level of machine technology we have so far developed." [10]
So the question isn't really whether the future still needs us; it's "What kind of future do we want?"—and how can we use technologies like AI to bring it about?
AI timeline: A brief history of artificial intelligence
Early days
1637: Wondering about the possibility of
machines that can imitate people, French philosopher René Descartes
feels certain we will always be able to tell the difference. He
anticipates the Turing test by over 300 years when he writes: "If
there were machines which bore a resemblance to our body and
imitated our actions as far as it was morally possible to do so, we
should always have two very certain tests by which to recognize
that, for all that, they were not real men."
Broadly speaking, he argued that 1) machines cannot respond to everything
you might say to them and 2) cannot anticipate and cope with every situation they might meet.
[11]
1737: Jacques de Vaucanson, a French artist and inventor, builds delightful automata, including a mechanical
flute player and a "digesting duck" (a realistic eating, drinking,
walking model of a bird).
1763: Thomas Bayes's work on "Bayesian"
inference (a type of probabilistic reasoning) is published; it will play an
important part in artificial intelligence programs and algorithms in
centuries to come.
~1770: Pierre Jaquet-Droz dazzles emperors and kings with intricate, mechanical, animated dolls that do
astonishing things using simple programmable memories.
1842: Charles Babbage and Ada Lovelace develop mechanical, gear-driven computers that can be programmed and
reprogrammed.
1910: Spanish inventor Leonardo Torres y Quevedo develops El Ajedrecista, an early chess-playing machine.
1943: Neurophysiologist Warren McCulloch and
logician Walter Pitts build simple, algorithmic models of brain cells that
can compute logical functions, so developing the first primitive neural networks.
1943: Cambridge psychologist Kenneth Craik
outlines the concept of internal "mental models" in a book
called The Nature of Explanation. The idea of machines that
can use models of the world to solve problems of various kinds
proves highly influential.
1950: Alan Turing sets out the "Imitation
Game" (now called the Turing test) in a hugely influential paper
called "Computing Machinery and Intelligence."
1950s: Sci-fi writer Isaac Asimov proposes
three
laws of robotics to help keep robots in check.
1950s: British scientist Oliver Selfridge develops a computational model of pattern recognition called
Pandemonium,
which influences neural networks and machine learning.
1955: Allen Newell explores heuristic-search approaches to playing chess in The Chess
Machine: An Example of Dealing with a Complex Task by Adaptation.
1956: John McCarthy coins the term
"artificial intelligence" during a groundbreaking workshop at Dartmouth College in Hanover, New Hampshire.
1958: Computer scientist John von Neumann coins the term "singularity."
1958: Frank Rosenblatt develops the perceptron (later built in hardware as the
Mark I Perceptron), a brain-like computer based on a neural network design.
1959: Arthur Samuel coins the expression "machine learning" to describe computer programs that gradually
get better at games than the people who program them.
1965: Joseph Weizenbaum develops ELIZA, the first "chatbot" program that can hold a vaguely human
conversation.
1965: Stanford's Edward Feigenbaum
develops the first expert system, DENDRAL, designed to identify
unknown molecules by reasoning from various input data using its
pre-learned, expert knowledge of chemistry.
1968: Tom Evans develops an AI program
called ANALOGY that can solve geometric problems commonly used in IQ
tests.
1968: Stanley Kubrick and Arthur C.
Clarke's dystopian 2001: A Space Odyssey offers a scary
vision of what happens when an artificially intelligent computer,
HAL 9000, goes out of control.
1969: Interest in neural networks stalls following publication of Perceptrons, a seminal book by Marvin Minsky and Seymour Papert, which attacks Rosenblatt's work.
1970: Terry Winograd's
SHRDLU program
learns to understand a simple world made of blocks using ordinary,
"natural language."
1970s: MYCIN expert system is developed to
diagnose and suggest treatments for bacterial blood infections.
1980: Philosopher John Searle outlines the
Chinese room, his classic objection to "strong AI."
Modern AI
1980s: Neural networks become hugely popular again
thanks largely to the "parallel distributed processing" or
"connectionist" models of James McClelland, David Rumelhart,
Ronald Williams, and Geoff Hinton.
1980s: John Laird, Allen Newell, and others develop SOAR, a set of "building blocks" that could be used to create AGI.
1987: Digital Equipment Corporation (DEC)
develops R1, an expert system to help design its VAX computers.
1988: IBM researcher Peter Brown pioneers automated,
"statistical" language translation using machine learning.
1989: Roger Penrose, a British
mathematician, argues that human intelligence cannot be
replicated by a conventional, Turing-style computer in a bestselling
book named
The Emperor's New Mind.
1990: US computer scientist Ray Kurzweil
popularizes the singularity—effectively the point at which
artificial intelligence overtakes the human kind.
1990: Robot pioneer Rodney Brooks rejects symbolic AI and embraces pragmatic, "bottom-up," "situated" AI in a provocative paper called Elephants Don't Play Chess.
1995: Inspired by ELIZA, a chatbot called
ALICE (Artificial Linguistic Internet Computer Entity) sets new
standards for machine conversation, though it still cannot pass the
Turing test. Some years later, it spawns the Spike Jonze film Her.
2012/3: A series of polls of AI researchers
suggests a 50 percent chance that strong AI (artificial general
intelligence) will be developed around 2040–2050.
2014: A chatbot named Eugene Goostman,
developed by Vladimir Veselov, (arguably) passes the Turing test.
2015: Convolutional neural networks make
the news. Baidu's Minwa uses the technique to classify images more accurately than people, while Google's DeepDream develops the
spooky human knack of seeing images that aren't really there—faces
in clouds and so on.
2016: Google DeepMind's AlphaGo program beats top player Lee Sedol at the board game Go. Sedol subsequently retires, arguing that AI "cannot be defeated."
2020: DeepMind's AlphaFold makes a major breakthrough on the problem of protein folding, potentially speeding the development of novel
medical drugs.
Scientific American publishes great articles about AI, usually written by leading lights in the field, roughly once a decade. Take a look at Artificial intelligence by Marvin Minsky. Scientific American, September 1966; Artificial intelligence by David L. Waltz. Scientific American, October 1982.
Artificial Intelligence: A Very Short Introduction by Margaret A. Boden. Oxford, 2018. I'd describe this as more summary than introduction, since it assumes quite a lot of knowledge on the part of the reader.
The Computer and the Mind by Philip Johnson-Laird. Fontana, 1993. A great introduction to cognitive science—the meeting point of computer science and experimental psychology. This wonderful book covers computational theories of the mind, including the key concepts of cognitive science, and also looks at the question of how robots could be taught to behave in human-like ways.
The Age of Intelligent Machines by Raymond Kurzweil. Viking, 1992. The past, present, and potential future of artificial intelligence as seen by one of its most provocative supporters.
Artificial intelligence by Patrick Henry Winston. Addison Wesley, 1984. The all-time classic introduction remains relevant today.
Searle, J. (1980), "Minds, Brains, and Programs," Behavioral and Brain Sciences, 3: pp. 417–457.
For a broader discussion, see The Chinese Room Argument by David Cole. Stanford Encyclopedia of Philosophy, 2004/2020.