I think that what we're doing now is, in a sense, creating our own successors. We have seen the first crude beginnings of artificial intelligence. It doesn't really exist yet at any level, because our most complex computers are still morons, high-speed morons, but still morons. Nevertheless, some of them are capable of learning, and we will one day be able to design systems that can go on improving themselves, so that at that stage we have the possibility of machines which can outpace their creators and therefore become more intelligent than us.

Artificial intelligence, a machine beyond the mind of man. It's science fiction, but is it also fact?

Chess. For centuries a test of the human intellect, but these men are merely observers. In August 1977, 16 computer programs competed against each other in the Second World Computer Chess Championship. The best programs here can defeat 95% of all serious human players.

The crowd tonight is absolutely phenomenal. I've never seen a crowd like this before at a computer chess tournament, and there's certainly about twice as many people here as ever go to watch the U.S. Open Championship or the United States Closed Championship, even when Fischer was playing. I think that probably most of you are having a good laugh tonight and you're here out of curiosity, but in a few years' time you'll be here because these programs are playing better than the Masters and Grandmasters in the U.S. Championship.

The favorite, Northwestern University's Chess 4.6, is playing a program from Bell Labs. Now we have a move, rook to king rook seven. Linked by telephone to a computer in Minneapolis, Chess 4.6 quickly examines a vast number of possible moves. Okay, mate in two. The style of play is distinctly not human. Okay, we have a move, rook to rook three. Rather than the judgment, intuition, and insight characteristic of human champions, speed is the secret of success. Okay, mate in one. We have a move, king to rook one.
Knight to bishop seven, mate. And Chess 4.6 has just given mate.

Will computers ever think like people? It is a question that goes beyond chess.

A central fact about computers is that computers are prodigious calculators. They can solve huge systems of differential equations, invert very large matrices, and other such mathematical things. In my view, there's an enormous difference between judgment and calculation. It is that gap, the difference between judgment and calculation, that computers can't cross.

From the beginning of science, there have always been people telling you that this or that boundary line is a sacred one and can never be crossed, that it would never be possible, for example, for scientists to simulate or synthesize the basis of life. Not very long ago, the first complete gene was synthesized in the laboratory. This vitalist attitude of uncrossable frontiers is behind the question of whether there are human mental abilities which could never be simulated. I personally see no more reason now to be discouraged by this vaguely expressed feeling than scientists have been in the past.

It is a computer scientist, Joseph Weizenbaum, who is now the most outspoken critic of artificial intelligence. His own program, called Doctor, gave rise to his first doubts. The program parodies a psychiatric interview. The patient, here Weizenbaum, types in a complaint, and the program, in the role of psychiatrist, responds. The program became a plaything at MIT. People told it the most intimate personal details, as if they believed the program could understand. But in fact, the program doesn't understand anything.

I would deny that there's any important sense, any non-negligible sense, in which the program understands. It certainly creates the illusion of understanding. There's no question about that. But we have to understand that that illusion is an attribution that the person conversing with the program contributes to the conversation.
It's not a function of the program itself. The program simply detects key phrases and makes routine transformations, such as turning the word I into you. When it doesn't recognize anything, it responds automatically, please go on.

Weizenbaum was shocked when people began to confide in the program, acting as if it really were a psychiatrist. But it was when colleagues began to suggest that programs like Doctor could be used as substitutes for human psychiatrists in treating real patients that disillusion set in.

There can be no question that the responses that I noticed with respect to the Doctor program, particularly the idea that this could be the dawn of automatic psychiatry, that machines could perform psychiatry at all, and so on, that these awakened in me questions that applied more generally to artificial intelligence than merely to these kinds of conversational programs.

Is the computer just a calculator, or is it capable of judgment? Is it only a number cruncher, or can it match the human mind? Like us, the computer has a memory where its knowledge is stored. Its tiny circuits hold billions of pieces of information. The circuits understand only one thing, the presence or absence of electrical signals. That means all information must be put in on-off terms. So the memory or knowledge base is like a maze of lights, some on, some off. Like the dots and dashes of Morse code, the arrangement of the lights can represent numbers, letters, even words. This particular knowledge base contains information about television programs. It shows, for example, that Sesame Street is an educational series for children. An index defines relationships among the separate pieces of information. It is by means of a computer program that the knowledge can be used. Written as a series of simple steps, the program can retrieve facts and answer questions about the information in the knowledge base.
It does this by issuing instructions to the part of the computer that performs calculations, circuits which move pieces of information around and compare them. These are simple, basic operations, and the program combines them in the most effective way. As an example, we ask it if there are any science documentaries on television. The question is translated into the machine's on-off code, then matched against the knowledge base. Step by step, the program directs the search for an answer. Step one, it scans the category list. It locates documentary, which reveals more than one documentary program as a possible solution. Step two, it looks for science, and a match is made. NOVA is the only program with arrows linking it to both documentary and science. Step three, it writes the answer. Even if the knowledge base were a million times larger, in reality this process would have taken only a fraction of a second. It is undeniably mechanical. But proponents of artificial intelligence argue that the fundamental processes of human brain cells are just as mechanical. As with a computer, it is their combination, the program, that counts.

In matters of memory and calculation, machines easily outpace the mind. But that's only a small part of human intelligence.

The results of the first few experiments in artificial intelligence surprised everyone, because it turned out that relatively small programs were able to do things that everybody had thought would require a lot more intelligence. For example, some of the early programs were able to play a fairly good game of chess or to solve pretty hard problems in college calculus. Well, everybody knows that those things require very advanced intelligence. But it was much harder to get the programs to answer simple questions in ordinary language, the kinds of things that any child can do, to solve simple everyday common sense problems.

We take common sense for granted, but there is knowledge and understanding behind even the simplest activity.
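The three-step lookup described earlier, scan the category list, intersect it with a second category, and write the answer, can be sketched in a few lines of modern code. This is an illustrative reconstruction, not the actual program; the knowledge base entries are invented for the example.

```python
# A toy knowledge base about television programs: each program is linked to
# its categories, mirroring the "arrows" of the index described above.
# The entries are invented for the example.
KNOWLEDGE_BASE = {
    "Sesame Street": {"educational", "children", "series"},
    "NOVA": {"documentary", "science", "series"},
    "Evening News": {"news"},
}

def answer(*wanted: str) -> list[str]:
    """Return every program linked to all of the requested categories."""
    # Steps one and two: scan the index, keeping only the programs that
    # match every category in the question.
    matches = [name for name, categories in KNOWLEDGE_BASE.items()
               if set(wanted) <= categories]
    # Step three: write the answer.
    return matches

print(answer("documentary", "science"))  # ['NOVA']
```

The whole search is set intersection, which is why, even scaled up a millionfold, it remains a mechanical operation rather than an act of judgment.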
Artificial intelligence is based on the faith that there are rules underlying every aspect of human life, rules which can be uncovered, turned into programs, and given to machines. But do such rules really exist?

That's the interesting question. Why don't we just tell a computer everything that there is to say about our everyday form of life? It's because our everyday form of life is so pervasive and so much something that we embody, not something that we know, that there wouldn't be any way of telling it. It's not a bunch of facts, any more than somebody who knows how to swim knows the rules for swimming. We don't know the rules for being a human being or the rules for how to move and stand up. We just embody those rules. Yeats said something very relevant to this. He said that we can embody the truth, but we cannot know it. We would have to know it to be able to tell a computer what it is to be a human being.

Well, the only difference between us and those critics is that they think it is impossible, that you can't understand it, and we think that you possibly can.

What makes up common sense intelligence? One important aspect is certainly language. Stanford University's Terry Winograd wrote a program to converse in everyday English.

This is a program that I wrote in order to experiment with language understanding by computer. What I wanted is a world which the computer could talk about, so that while it was understanding sentences, it would actually be doing something with what was being said. You can see there's a set of objects, simple toy blocks and pyramids in a box, and a kind of a hand that can move them around. Let me give it a simple command. I can type pick up a big red block, and the sentence appears, you can see, and it analyzes what it is that I'm asking and then plans a sequence of commands to carry it out.
The program has no intrinsic knowledge about the blocks world, so Terry Winograd has filled its knowledge base with facts about the objects it contains, their properties, and their relationships to each other. It's by correctly carrying out his commands that the program proves it understands English. But even in this limited world, the process of understanding a command is no simple matter.

When I type a command like this, it has to go through several different phases of analysis. First, it needs to look up the words in a dictionary it has and figure out the structure of the sentence, the kinds of things you learn in grammar school, the subject, the verb, the object. Then it needs to analyze the meaning of that sentence in this context, which involves converting from the specific words to a set of concepts that it has about the blocks world, what the objects are, what you can do with them, what the colors are, and so on, so that it can then use that to construct a program for carrying out the action. Finally, there has to be a kind of a reasoning system which reasons about the actions in order to know what has to be done to actually carry them out. In that first one, for example, it couldn't just go pick up the big red block. It needed to clear it off first. And there's a whole set of programs which deal with what you need to do in order to manipulate objects in this kind of a simple world.

In addition to knowing the rules of the blocks world, the program has to master the rules of language, which are not always so clear-cut.

The program really isn't focused on the moving of these blocks. It's basically concerned with the ways in which people use language to communicate.
So if, for example, I type a command like grasp the pyramid, even though that makes sense in terms of the basic ideas of what's in the blocks world, in this context it doesn't, because there are three different pyramids there on the screen, and I wouldn't use a phrase like the pyramid unless I had a particular one in mind. So the computer answers, I don't understand which pyramid you mean, since it has no way in this context of knowing which of those three I intended.

I can give it a much more complicated command, like find the block which is taller than the one you're holding and put it into the box. In this case it needs to do a whole set of things, one of which is figure out what is meant by words like one and it. We use those in normal everyday language in a way which has to be interpreted by looking at the context in which they appear. In this case it types back out, by it I assume you mean a block which is taller than the one I am holding, which is only one of several possible things I could have meant. It needed to use a set of rules of thumb about how people use words like that in order to decide, in this case, which one I intended.

As skillful as it is in handling the imprecision and ambiguity of English, if you talked to Terry Winograd's program about anything but blocks, it would be incapable of responding. So language cannot be understood in a vacuum. Like us, a computer must know what it's talking about.

By trying to program a computer to use language, we're forced into looking in a very clear way at what it is that people do when they use language in those same ways. We're forced to make very explicit things which seem so natural that people who don't look at language in this way don't even think they need explaining.
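The rule of thumb behind the pyramid exchange, a definite phrase like "the pyramid" is acceptable only when exactly one object in the scene fits it, can be sketched directly. This is a minimal illustration, not SHRDLU itself; the scene contents are invented, and the real program's scene model and dialogue were far richer.

```python
# A toy scene model: each object has a shape and a color. The contents are
# invented; SHRDLU's real scene description was far richer.
SCENE = [
    {"shape": "pyramid", "color": "red"},
    {"shape": "pyramid", "color": "green"},
    {"shape": "pyramid", "color": "blue"},
    {"shape": "block", "color": "red"},
]

def resolve_definite(shape, color=None):
    """Resolve a definite phrase like 'the pyramid': it is acceptable only
    when exactly one object in the scene fits the description."""
    candidates = [obj for obj in SCENE
                  if obj["shape"] == shape
                  and (color is None or obj["color"] == color)]
    if len(candidates) == 1:
        return f"OK: grasping the {candidates[0]['color']} {shape}."
    if not candidates:
        return f"There is no {shape} here."
    return f"I don't understand which {shape} you mean."

print(resolve_definite("pyramid"))  # I don't understand which pyramid you mean.
print(resolve_definite("block"))    # OK: grasping the red block.
```

Adding a color narrows the description until it picks out a unique object, which is exactly why "grasp the green pyramid" would succeed where "grasp the pyramid" fails.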
And one of the things that we've learned from writing programs like this is the complexity of the way people understand language, the kinds of connections there are between using your knowledge about what's being talked about and your knowledge of language, the fact that you can't study language in a kind of separated, isolated way in which you look at grammar and dictionary meanings and content, but that it really needs to be integrated into a much more coherent kind of theory.

If language and knowledge of the world cannot be separated, how do children acquire language? I would like a cheeseburger and a Coke. One theory is that even before they learn to talk, children accumulate a detailed knowledge of routine experiences called a script. Later, they draw on these scripts as a basis for language and conversation. Observing the children at Yale University is Roger Schank, who contends that computers can learn to communicate in much the same way. He writes computer programs that can understand stories like this one. It's simple even for a child, but surprisingly difficult for a computer.

Well, the main problem is that our computers don't have knowledge. They can do certain manipulations, but they don't know things. And if you want to tell a story to somebody and talk about something to somebody, and they don't have the same knowledge that you have, they can't understand what you're talking about. Essentially, it's like an expert talking to somebody very naive. He wouldn't be able to communicate very much. So what we have in this computer program is the problem of giving it knowledge. So if we want to tell it stories about what goes on in a restaurant, well, it had better know about restaurants and what they're for and what goes on in them, so it can sort of fill in the blanks of what I didn't say.
If people had to say every single piece of information that ever happened, that little three-line story about a restaurant would take hundreds and hundreds of lines, because there are assumptions that we share, because we as humans, having been in restaurants, know what goes on in them.

Schank believes that the programmer can compensate for the computer's lack of experience by spelling out exactly what goes on in a given situation. In other words, by providing the computer with a script, in this case for a restaurant.

A script is, in fact, knowledge about the world. It is an attempt to codify the kind of knowledge that humans have about situations in a precise form, such that we can give it to a machine. You can't just say restaurants and tell it about restaurants in some very vague fashion. We have to give it an explicit list, essentially, of this happens in a restaurant, and then this, and then this, and then this. So, for example, what we have here is a restaurant script, and it has, at the beginning, some preconditions, which say that the person who is eating has to have some money and has to be hungry in order for him to go into it. And then he enters, and it has an entering scene. The entering scene says he goes to the restaurant, he enters the restaurant, he looks around, he sees if he can go to a table, he goes to the table, and he sits down. This is followed by a scene where the waiter gives him a menu, and the customer reads the menu, and this enables ordering, where the customer tells the waiter what he wants. This enables the cook to prepare the meal, and eventually the waiter will give the meal to the person who has ordered it. He will then eat it, he will then get a check, and go give some money to the management and leave the restaurant.

The program analyzes the restaurant story and fits it into the script. The stars show that the first sentence, John went to a restaurant, has been matched.
The program then works on the second sentence of the story, he ordered lobster, until it, too, is matched. But it's the final sentence that holds the key to this simple story.

The last sentence is he paid the check and left, and by that I mean they'll say, okay, he paid the check and he left, and that's the end of my restaurant script. So everything in the middle must have happened. And so it goes back and traces between ordered, the last place where we saw the stars, the part we were explicitly told about, and the new part, the paying money to the management and leaving, which is where the stars are now, and it says, oh, well, what must have happened is the cook must have prepared this lobster, and he must have given it to the waiter, and the waiter must have given it to the person, and the person must have decided to eat it, and then the person must have eaten it. And so our program essentially is capable of making all those inferences, and in this case has made all those inferences, because it understood the important facts that surrounded the main event.

But in the story, the main event was never really stated. The program was told only that John ordered lobster. But what did he eat? Schank asks the program. And the program says lobster.

No, we didn't say that, actually, and the story never specifically said anything about eating at all, but the program has no trouble with it, any more than a person would have trouble with it, because it has, in fact, understood the story.

I think we have to be clear about the fact that language understanding involves very, very much more than the mere comprehension of a string of words. Silences, for example, are very, very important.

If people are understanding what is meant by emotional kinds of statements or pauses or metaphors or whatever these AI critics think is so difficult, I'd like to understand how they explain that people can understand them. People have some method of doing it.
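The gap-filling inference Schank describes, everything between the first and last events a story mentions must have happened, can be sketched with an ordered list of scenes. This is a minimal sketch under invented event names; the actual script-applying program also bound roles, so that the thing prepared, served, and eaten was the lobster that was ordered.

```python
# An abridged restaurant script: the canonical order of events.
# The event names and their granularity are invented for illustration.
RESTAURANT_SCRIPT = [
    "enter restaurant", "sit down", "read menu", "order",
    "cook prepares meal", "waiter serves meal", "eat",
    "get check", "pay management", "leave",
]

def fill_in(mentioned: list[str]) -> list[str]:
    """Infer the full sequence from the events a story actually states:
    everything between the first and last mentioned scene must have happened."""
    positions = [RESTAURANT_SCRIPT.index(event) for event in mentioned]
    return RESTAURANT_SCRIPT[min(positions):max(positions) + 1]

# "John went to a restaurant. He ordered lobster. He paid the check and left."
story_events = ["enter restaurant", "order", "pay management", "leave"]
inferred = fill_in(story_events)
print("eat" in inferred)  # True: the program concludes that John ate
```

The story never says John ate, but because eating lies between ordering and paying in the script, the inference falls out of a simple slice.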
Even the most ordinary linguistic intercourse among people involves shared experiences. And the fundamental difficulty with computer understanding of language is that there are human experiences, uniquely human experiences, which the computer, by its very nature, in virtue of its structure, in virtue of the difference between its structure and the biological structure and needs and so on of human beings, can simply not share. Communication involves sharing.

Well, but shared experience, you could make the same argument that a computer couldn't understand anything about a restaurant because it had never been in a restaurant. I think that whatever the shared experience is, you have some rule for accessing it. If you have a rule that says, well, I remember a feeling that when I was in love I felt this way, well, I can write that, and then I did this, I can write that same rule into a computer program. Whenever you see something about love, you can assume that the person talking might feel this way and might do this. It's just a question of understanding what people think they know and think they are understanding in a situation.

Forward, forward. If, as Schank believes, man and machine will have something to talk about, how will they actually converse? Michael Condon is paralyzed from the neck down. At NASA's Jet Propulsion Laboratory, he is testing a wheelchair that can understand human speech. Aided by Larry Tews, he first trains the device to recognize his voice. A minicomputer matches his particular speech patterns to a set of 35 commands. Right turn, left turn, end. Okay, it looks like it's trained all right now. Why don't we see if we can work with the cup? Close. Mounted on the wheelchair is an arm which can manipulate objects within a range of several feet. Clamp. Raise. Clamp. Up. Left, left. Halt. Flex. Up. Halt. Raise. Raise. Halt. Right.
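Matching an utterance against a fixed vocabulary of trained commands, as the wheelchair's minicomputer does, can be sketched as nearest-template classification. The numeric "feature vectors" below are invented placeholders; the real system extracted acoustic features from Condon's voice during the training session.

```python
import math

# Invented "speech feature" templates, one per trained command. A real
# recognizer derives acoustic features from the speech signal; these
# numbers only stand in for the idea.
TEMPLATES = {
    "right turn": [0.9, 0.1, 0.3],
    "left turn":  [0.1, 0.9, 0.3],
    "halt":       [0.5, 0.5, 0.9],
}

def recognize(features):
    """Classify an utterance as the command whose stored template is nearest."""
    return min(TEMPLATES, key=lambda cmd: math.dist(TEMPLATES[cmd], features))

print(recognize([0.85, 0.15, 0.25]))  # right turn
```

Because the vocabulary is closed, every utterance maps to some command, which is why the device must first be trained on its particular user's voice.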
This is one of the first practical applications in which an intelligent computer can imitate and even replace human functions. Back.

In 1986, this vehicle is to make its way through the barren landscape of Mars. As the representative of earthbound explorers, the rover embodies another aspect of common sense intelligence, the coordination of mobility and vision. It has a laser rangefinder and two television cameras. They detect a rock, shown in pink, and its shadow, in red. The rover has been told to cross the room. Its computer plots a path to get there which avoids the obstacles. Observed by NASA scientist Bob Cunningham, the rover moves. On Mars, it will cover 20 miles a day to collect samples of rock and soil. Here, too, it will pick up a sample from this group of five rocks. The visual information is conveyed through the television cameras to the computer, a slightly different view in each eye. The rock in the center is the target. The computer works out its rough size, shape, and distance. Then the arm goes into motion, with the computer guiding its every step, keeping track of where all of its seven joints are in relation to each other and the goal. At the end of the hand are tiny sensors that control the final approach, when the position must be computed exactly. The hand picks up the rock, but not just to store it away. People on Earth will want to know what the rover's discovered, so it will show off its samples.

But during its mission, the rover will be in contact with Earth less than an hour a day. Scientists now cannot prepare it for every contingency, so they hope to endow it with what is probably the most crucial aspect of common sense intelligence, the ability to learn.

Learning about simple shapes was the problem given to a computer program written at MIT by Patrick Winston.
I was trying to understand if it's possible for a computer to learn in some meaningful way, and by that I don't mean a kind of rote learning, in which I just tell the computer in a very straightforward way the facts that it needs to know. Rather, I wanted the computer to be more involved in the learning process. I wanted it to do some analysis, make some descriptions, perhaps compare descriptions and use those comparisons to develop a kind of model of what it is that it's supposed to learn.

Winston wanted the program to learn to recognize an arch. He began by giving it a model. The program itself labels and counts the parts. The diagram on the right shows the important features. It spells those out in detail and stores them in its knowledge base. The program was told only that this drawing is an arch. It had to figure out for itself the distinguishing features. In this example, the program sees something different. The two supports are touching. But it hasn't been told whether it's an arch, so it asks. Winston types back that the drawing is not an arch. From the response, the program can draw an important conclusion: the supports of an arch must not touch. This new information is added to its knowledge base. From examples like this, the program accumulates facts about arches. It acquires knowledge, but will it be able to apply it?

Now that I've given the computer some examples, I want to see if it's really learned anything from them. So I'm going to give it a little test. I'm going to type in a picture here, which looks like an arch, except that the object on top is a wedge now instead of a brick.

The program analyzes the drawing to see if it fulfills the minimum requirements for an arch without violating any of the conditions. This time, Winston asks the question. The program checks through its knowledge base before reaching a conclusion.
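Winston's procedure, generalize from positive examples and tighten the model from near misses such as the touching supports, can be sketched as follows. This is a minimal sketch: drawings are reduced here to sets of named facts, whereas the actual program derived those descriptions from the line drawings themselves.

```python
class ConceptLearner:
    """Learn a concept as the facts every positive example shares, plus the
    facts a near miss shows must NOT hold (Winston-style near-miss learning,
    sketched with invented fact names)."""

    def __init__(self, first_example: set[str]):
        self.required = set(first_example)  # facts common to positive examples
        self.forbidden: set[str] = set()    # facts ruled out by near misses

    def learn(self, drawing: set[str], is_arch: bool) -> None:
        if is_arch:
            self.required &= drawing        # keep only the shared facts
        else:
            # A near miss differs in some extra fact: that fact must not hold.
            self.forbidden |= drawing - self.required

    def classify(self, drawing: set[str]) -> bool:
        return self.required <= drawing and not (self.forbidden & drawing)

learner = ConceptLearner({"two supports", "top piece", "top rests on supports"})
# Near miss: the supports touch, and Winston says it is not an arch.
learner.learn({"two supports", "top piece", "top rests on supports",
               "supports touch"}, is_arch=False)
# The little test: a wedge on top instead of a brick, supports apart.
print(learner.classify({"two supports", "top piece", "top rests on supports",
                        "wedge on top"}))  # True
```

The wedge example passes because it satisfies every required fact and violates no forbidden one, which mirrors the program's answer in the test Winston describes.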
What will happen when this ability to learn goes beyond arches, when computers can learn not just by example, but from experience as well?

If and when computers have an ability to learn in very powerful ways, it might start a sort of chain reaction of intelligence. That is, the smart computer might be able to learn to make itself smarter, and that in fact would lead to a kind of intelligence that is very difficult for us to fathom.

How would such an intelligence compare with our own? At the Stanford Artificial Intelligence Laboratory, Professor John McCarthy.

I think it's possible to have artificial intelligence at human level or beyond, but it's very difficult to say how long it will take, because I believe that some major discoveries are necessary to achieve that level. One way of putting it is to say that it takes 1.7 Einsteins and 0.3 of a Manhattan Project, and it's important to have the Einstein be first and the Manhattan Project second.

I would say that on the basis of present knowledge, such claims are simply and utterly ridiculous. There is simply no basis for making them. It's not only on the basis of present knowledge, but certainly also on the basis of present achievement.

The quest to build intelligent machines began long before the computer age. Unfortunately, they have been cast in the image of humanity. The Green Lady was built by a 19th-century craftsman to entertain the royal courts of Europe. Her complicated and graceful moves are rigidly controlled by a hidden mechanism. Early computerized robots were equally inflexible. Alpha Newt's sole function is to seek light. A sensor conveys information to a small computer which controls the robot's direction. His successor, Beta Newt, has an onboard computer which directs it through a programmed sequence of actions. Without vision, it can't distinguish the letters, let alone spell its name. A human being has carefully prearranged the blocks.
What primitive robots have in common is that they can do only a single kind of task. But the essence of common sense intelligence is generality. Built ten years ago at the Stanford Research Institute, Shakey represents a more sophisticated class of robot. Dismantled now, the versatile Shakey could understand English commands and devise a way to carry them out, even in an unfamiliar environment. Here he uses his power of vision to find and retrieve a particular box. Is Shakey the forerunner of a truly general machine intelligence, or are the problems insurmountable?

Most people have been skeptical about all the developments in science and technology that have occurred. I mean, look at the history of space flight. I can remember when I was a boy and first became interested in space travel, back in the 1930s, that this was regarded as the most ridiculous thing you could possibly talk about. And before that, of course, the idea of heavier-than-air flight was ridiculed. So you've seen right down the ages this kind of skepticism. In this case, too, the concept of the intelligent computer, there's also an element of fear involved, because this challenges and threatens us, threatens our supremacy in the one area in which we consider ourselves superior to all the other inhabitants of this planet. So people are not only skeptical of computers, but they're fearful of them. And perhaps even if they think it may happen, because they're fearful, they'll try to pretend to themselves that it won't happen, a kind of whistling in the graveyard.

1927, a German film, Metropolis. The idea of the smart machine is designed to frighten. A mad scientist schemes to replace human workers. It is from fantasy like this that our image of the intelligent machine has come. And Hollywood has maintained this Frankenstein motif in an endless series of horror movies, starring robots and malevolent computers. But recently, there have been exceptions.

How did we get into this mess? I really don't know how.
We seem to be made to suffer. It's our lot in life. Where do you think you're going? Well, I'm not going that way. What makes you think there are settlements over there? Don't get technical with me. What mission? What are you talking about? I've just about had enough of you. Go that way. You'll be malfunctioning within a day, you nearsighted scrap pile.

What will intelligent machines of the future be like? What function will they serve? It is a question to ask the creator of HAL, Arthur C. Clarke.

Intelligent computers could take almost any conceivable form, and I'm sure they will, according to the duties they have to perform. The commonest idea in the mind of the general public is certainly the clanking humanoid robot, like the one in Star Wars, or the ones immortalized by my friend Dr. Isaac Asimov, which look like human beings and, in fact, sometimes might even be indistinguishable from human beings. But I think that although that type may arise, most of them will tend to be just gray metal boxes sitting around and thinking and communicating instructions to all sorts of specialized tools and devices and machines which are their servants, which do the jobs they're designed to perform.

For example, at the Stanford Research Institute, artificial intelligence is being applied to industrial problems. Mounted on this arm is a camera which sees an object on the conveyor belt. By analyzing the image, the computer enables the arm to pick it up. As in a real industrial situation, objects go down this conveyor belt at unpredictable angles. A worker picking them up would automatically adapt himself. The robot arm must do the same thing. The arm returns to get the next object, another electrical socket cover. But the hole means it's defective. It's picked up anyway, and the computer directs it to be placed in a different bin. The potential advantage of computerized automation lies in its flexibility. The arm is now assembling a water pump.
By changing the program, the same hardware can do many different jobs. Here, they contend that this could mean an end to the standardized products always associated with automation. In charge of robotics, Charles Rosen.

For the first time, it begins to be possible to customize goods. That is, to produce goods that suit the individual, to individualize what is produced for everybody. Now, I don't pretend that it means that everybody will have an absolutely individually styled car. But it is possible to have a much larger number of things to choose from, ones that you would prefer, from simple to complex and from gaudy to non-gaudy and so forth. In the field of clothes, for instance, it looks possible to customize every suit of clothes and every dress, using other forms of this kind of automation. You would need computer-aided design and computer-aided manufacturing and finally this programmable automation to accomplish this. And it might mean that people would go around in their own designed clothes, with some help from the computer, at a price that can be similar to the mass-produced prices that we now have.

Anybody here? I have a new citizen to be outfitted. Brother, you want jackets? We got jackets. You want trousers? We got trousers. This is a good time. Believe me, we're having a big sale. Tremendous. Positively the lowest prices. Maybe you need a nice double-breasted. Incidentally, I'm stuck with three pieces of corduroy. Something simple? We got simple. We got complicated. Why do you worry? Okay, step against the screen. This is terrible. Okay, okay. We'll take it in.

Even after the bugs are ironed out, intelligent computers will join the workforce gradually because of the great expense involved. They will be applied first to dangerous and monotonous jobs unpopular with human workers. But it's inconceivable that the trend will stop there. Today, around 30 to 40 percent of the workforce is engaged in manufacturing goods.
By the year 2000, I would think that about 5 to 10 percent of the present working force would be needed to manufacture the same amount of goods. And like it or not, this could mark the beginning of the much-heralded age of leisure. And in its attempt to make the impossible possible, Kawasaki Heavy Industries has been struggling for the development of an unmanned assembly system. This has resulted in the acquisition of a definite outlook for the development of desired robots and software to create a robot-operated assembly line. This is no more than the final goal toward which every effort in labor-saving technology is being directed. Your new mate, Kawasaki Unimate, will find broader application in manufacturing operations in the near future as we vigorously approach the final goal of an unmanned factory. The forerunner of technology which will bring more happiness to everyone. It's perfectly obvious that the development of such computers would restructure society completely. They would clearly remove much of the mechanical, if you can use that term, the routine work which of course has taken so much of the time of the human race. And they're already doing this in many ways because our society now would collapse instantly if the computers that run it were taken away. And these are very simple, low-grade computers. And this of course raises tremendous social and philosophical problems. Not just the question of displaced people, what will they do? What will the people who are only capable of low-grade computer type work, what will they do in the future? There's a much more profound question of what is the purpose of life, what do we want to live for? And that is a question which the intelligent computer will force us to pay attention to. It is a question that will confront people from all walks of life. The decision-making capability of intelligent computers makes them as appropriate to the professions as to the workplace.
Medical diagnosis is a test case. Now this discomfort that you have, is it the kind of discomfort that would grab and then let go and then it's more of a steady discomfort except it's influenced by meals? At the University of Pittsburgh Medical School, Dr. Jack Myers is one of the country's leading diagnosticians. By asking questions and making observations, he begins to assess what's wrong with his patient. Why don't you show us next where you feel this discomfort? Diagnosis is often regarded as something of an art, at the very least a skill requiring the human touch. But does it? Abdomen pain, periumbilical. Myers conveys the information he has just gathered to a computer, programmed by artificial intelligence expert Harry Pople. Abdomen pain, exacerbation with meals. These few observations allow the program to begin reducing the range of possibilities. The system, Harry, has now come back with the first stage of its analysis. It's of interest that the first two items being considered are choledocholithiasis, which is a gallstone in the common bile duct, and carcinoma or cancer of the head of the pancreas. But there are also several other possibilities. The program will seek evidence to support or refute them. Like a physician, it will do so by asking questions. It's asking now for findings concerning the abdominal pain. Let's see what details it wants to know. Is it a colicky pain? No, it's not a colicky pain. Some of the questions are identical to those Dr. Myers asked his patient. Not really, no. It's not coincidental. For the past seven years they've been tailoring the program to duplicate his methods. Is there a severe back pain? No. We have back another analysis, and the system is in a narrow mode, which means it has two leading contenders for the diagnosis. From many possibilities, it has narrowed the field to two.
The system will now try to distinguish between these two and almost certainly will go to more complicated studies than we've been using up to this point. Have you findings of upper GI barium meal x-rays? That has not been done as yet. The program calls for the least expensive and least painful tests first. It will draw tentative conclusions on limited information. And what about the cholangiography? That has not been done as yet. But probably not a final decision. With all of these omissions in the important findings, it's pretty unlikely that the program will be able to come to any kind of a conclusion. I would guess that we'll see it deferring on this. Yeah, deferring is in fact the judgment of the program at this point. The program recognizes 600 diseases and 2,500 symptoms. With trillions of combinations possible, isolating the significant factors that will lead to a diagnosis would seem to require something beyond knowledge, intuition perhaps. But years of thinking about what he does has convinced Jack Myers otherwise. My own observation is that what is called art and intuition in diagnosis is generally based on knowledge and experience. Sometimes these things are hard to analyze and understand, but I think this is predominantly the application of information, the organization of information, and the coming to a logical conclusion. So diagnosis can be expressed in rules, and Harry Pople had to learn what they are. Harry, here's a case I got out of the files. It's a very good one for analysis. This is an elderly... He would sit with Jack Myers for hours trying to unravel the diagnostic process. ...develop cirrhosis of the liver, and then there are many complications of this. In describing the components of the liver problem, you've skipped over some other items that are underlined. How is it that you know not to worry about those items when working on the liver problem? Well, that's a matter of medical knowledge and judgment.
At the beginning, Myers found it difficult to explain. The problem is if we're going to get a program to do this, I have to understand what that professional judgment is that you're talking about. And I need to know just exactly what it is that enables you to do what you do and call professional judgment. Well, I can explain to you each of these items and as to what the item means... The result of several years of work was a system that Dr. Myers believes reproduces his own decision-making process. ...the item, and then I think you'll see how they do form a pattern or a cluster. All right, let's do that. First, the generally known symptoms of a particular disease are compiled by medical students. Then the judgments have to be assigned. Now that we've agreed upon the data for diabetes insipidus, let's put the profile into the machine. Chuck, are you ready? Sure. Age 16 to 25. 02. It's expressed in numbers, which tell the likelihood of a disease if a symptom is present and the likelihood of a symptom if the disease is found. 03. This is intuition turned into numbers, judgments converted to calculation. Diabetes insipidus family history. This program is intended to help physicians, not to replace them. But won't we eventually have more faith in the computer's decisions than in our own? I suppose it's possible in the future that these systems will be considered infallible, but I certainly hope not. There is a tendency for man to believe machines more than other people, but I don't really believe this is appropriate, and I hope this won't happen. Even worse, will we abdicate responsibility for our decisions? This question becomes increasingly important as decision-making programs are implemented in medicine and other professions as well. In a world of growing complexity, governments will use computer programs to set public policy. Our cities are already cognitive cities, with many of their functions computerized. 
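The disease profiles described in the diagnosis segment, numbers expressing how strongly a finding evokes a disease and how often the disease produces the finding, can be sketched in miniature. Everything below (diseases, symptoms, weights, and the scoring rule) is invented for illustration; the real system's knowledge base and logic were far larger and more subtle.

```python
# Toy sketch of numeric disease profiles, loosely in the spirit of the
# diagnostic program described above. All data here is invented.

# For each disease: symptom -> (evoking strength, frequency).
# evoking strength ~ how strongly the symptom suggests the disease
# frequency       ~ how often the disease produces the symptom
PROFILES = {
    "choledocholithiasis": {
        "abdominal pain, periumbilical": (2, 4),
        "pain worse with meals": (3, 4),
        "jaundice": (2, 3),
    },
    "pancreatic carcinoma": {
        "abdominal pain, periumbilical": (1, 3),
        "weight loss": (3, 4),
        "jaundice": (3, 4),
    },
}

def score(disease, present, absent):
    """Add evoking strengths of findings that are present; subtract the
    frequency of expected findings that are reported absent."""
    profile = PROFILES[disease]
    s = sum(ev for sym, (ev, _) in profile.items() if sym in present)
    s -= sum(fr for sym, (_, fr) in profile.items() if sym in absent)
    return s

present = {"abdominal pain, periumbilical", "pain worse with meals"}
absent = {"weight loss"}

# Rank the candidate diseases; the top entry is the leading contender.
ranked = sorted(PROFILES, key=lambda d: score(d, present, absent), reverse=True)
print(ranked[0])  # -> choledocholithiasis
```

Asking the question whose answer would most change the ranking, then rescoring, is the loop the program runs until it can commit to a diagnosis or, as in the broadcast, defer.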
Artificial intelligence will further this trend by giving computers the ability to program themselves, and maybe to explain to us what they're doing. Perhaps we will run things better in partnership with smart machines. Perhaps we'll no longer run things at all. We're already seeing individual functions of a city's life, the medical function, the educational, the central administration, the garbage collection, and so forth, increasingly computerized. In due course, these computer networks will begin to exchange information with each other, and we will have centralized machine regulation of cities at a level of complexity which none of the inhabitants can any more explain, follow, correct, or control. And there is a risk of our species ultimately becoming parasites living in the interstices of intelligent cities of the future, which may be governing themselves according to certain criteria of efficiency, which may not always take into account in a sensitive way what we regard as vital human values. In this experiment, a computer is reading a woman's brain waves. The aim is for the computer to discern in what direction she is looking, not by watching her eyes, but by deciphering the electrical patterns of her brain. As the subject looks in each of four different directions, up, down, left, and right, a flashing checkerboard stimulates four corresponding brain wave patterns, which are recorded by the electroencephalograph. The differences among the patterns are so slight that no person could tell them apart. But the computer can, and it stores the results in memory. You can relax now. We have our training set, and for the next run you're going to see the maze in your field of vision. Remember you have to take that little mouse out of the maze step by step by fixating on the red dot that stands in the direction where you want the mouse to move. Are you ready? This is the test. The subject moves her eyes in the direction she wants an electronic mouse to move in a maze.
The computer picks up the corresponding brain wave and moves the mouse accordingly. It seldom makes a mistake. In effect, the computer is reading this woman's mind. Experiments like this might be the first steps toward a merger of mind and machine, a marriage of artificial and natural intelligence. But in such a relationship, who would be in control? Who would be the dominant partner? It is possible that we may become pets of the computers, leading pampered existences like lap dogs. But I hope that we will always retain the ability to pull the plug if we feel like it. And if we don't, if in fact we do hand over everything to the computers, that will just prove the thesis that I've sometimes suggested, that the computers are designed to be our successors. And that perhaps when they come along, it's our function to become obsolete, as our predecessors have become obsolete and been replaced by us. And I feel that if that happens, it will serve us right. Thank you.
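The brain-wave experiment amounts to a simple pattern classifier: store one average response pattern per gaze direction during the training run, then match each new reading against the stored patterns. A minimal sketch, with invented numbers (real EEG signals are far noisier and higher-dimensional):

```python
# Toy sketch of the gaze-direction experiment: classify a reading by
# finding the nearest stored template. All values here are invented.
import math

# "Training set": one stored average pattern per direction
# (in the experiment, averaged EEG responses to the flashing checkerboard).
TEMPLATES = {
    "up":    [0.9, 0.1, 0.2, 0.1],
    "down":  [0.1, 0.9, 0.2, 0.1],
    "left":  [0.2, 0.1, 0.9, 0.1],
    "right": [0.1, 0.2, 0.1, 0.9],
}

def classify(signal):
    """Return the direction whose template is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda d: dist(TEMPLATES[d], signal))

# A noisy reading that should still register as "left".
print(classify([0.25, 0.05, 0.8, 0.15]))  # -> left
```

Each classified reading then becomes one step of the electronic mouse through the maze, which is why an occasional misclassification shows up as a single wrong move rather than a failure of the whole run.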