The instrument which is more and more at the cutting edge of the arms race is the computer as it is applied to making more and more sophisticated calculations and making weapons more accurate. With what we know about artificial intelligence techniques today and the electronic technology that we have, we believe that we could create weapons that could strike targets within a meter or so of a precisely defined location at intercontinental ranges. The thought of creating a weapon system so autonomous, so out of the control of human ingenuity and inventiveness and responsiveness is something that I think is totally ludicrous. The battlefield of today is as dependent on the computer as yesterday's was on the sword and shield. Computers plan battles, keep aircraft aloft, fire weapons and guide missiles to their targets. Nuclear weapons controlled by sophisticated computer systems have become faster and more accurate. We have less time to respond if attacked, less time to decide if a warning of an attack is real. Flight times for various strategic systems are getting shorter. It first took several hours when we had bombers as the only means of delivery. Then when you had missiles, ICBMs, which went from our territory to their territory or theirs to ours, it took half an hour. Now you put missiles into submarines or you put missiles into Europe, and now you're talking about missiles that could hit the United States in as little as five or ten minutes. People can't operate that fast. The only thing that you can do there is to put computers in the loop, and computers can process lots of things very fast, but they have to anticipate all the contingencies. And anticipating all the contingencies, as any programmer knows, is a very hard thing to do.

In the next half hour, we will explore the issue of computer reliability and nuclear war. We begin by asking if computers are always reliable and whether almost always is good enough. Then we look at our computerized national defenses, examine how malfunctions escalate international crises, and consider the Strategic Defense Initiative, Star Wars, which its proponents claim could someday defend against even the swiftest nuclear attack.

Banks, the phone company, Social Security, and the IRS all require large computer systems. We know that these systems usually work well. One such system, the airline reservation network, has been refined through two decades of trial and error. Miss Denning, one way to Perth. That's right. I don't show you on the manifest this evening. My reservation doesn't show on the computer. Not at all, Miss Denning. Tracy, you're all set. I'd just appreciate it if you'd run on board. 24 is your seat number, gate 6. Great, thanks a lot. Thanks for flying with us today. Hello, how are you today? I'm okay. You have a standby to Perth. That's right. Well, my computer is showing all seats full.

Most computer errors are merely inconvenient. But under certain circumstances, they could have dangerous consequences. I've often felt that if we had never had a computer invented, we'd have had to invent one to try to handle the information-processing needs of patient care. Decisions about what is appropriate to do in terms of a particular patient cannot be taken just within the context of the physical information known about that patient. There are very few environments where one should make one's decision solely on the basis of computer-processed information.
The field that I work in is called computer-assisted radiation therapy planning. What this does is it enables the doctors and therapy technicians who are responsible for treating cancer with radiation to simulate a treatment before it is performed. The patient lies on a table, and the table is moved and also the radiation source is moved around the patient, so that the radiation beam passes through the patient and impacts on the tumor. The programs which I write allow them to easily set up these simulations and then calculate the radiation dose distribution that would be created if that treatment were performed. The programs that calculate these doses have to be accurate, because the consequences of an error could be very grave. The person checks the computer by estimating doses using a hand calculator and tables. In this way, you are checking the computer's results by using a different technique that does not depend on the reliability of the computer or the program. Computers can fail in several different ways. Hardware errors are evident when part of the machine breaks down. Software errors occur because the computer is given the wrong instructions. Design errors are discovered when the system encounters a situation it was never designed to handle. Even computers that seem to work may not be free of errors. A crippling error may emerge only after prolonged use. Field testing of the Aegis naval air defense system took place in 1983. Missiles were fired into its defense zone. In this first set of operational tests, the Aegis system was faced with a number of targets, not all of them at once, a total of 16 targets. It missed six of them due to system errors that were later traced to faulty software. Now, this is a very common way of debugging large computer programs. You test them in operational use, against real targets under real operating conditions rather than a computer simulation of these things. And you say, oh, I missed that target. How come? But controlled testing does not subject a system to the chaos of actual combat. Here, a system's design meets its ultimate test. If you look at the DIVAD Sergeant York air defense gun, which was recently canceled, which had the relatively simple job of shooting down helicopters, for goodness sakes; if you look at the main battle tank that can't go more than 40 miles without breaking down; if you look at even the helicopters we tried to use in the Iran rescue attempt; if you look at any fairly complex high-tech weaponry that we attempted to use in combat for the first time and see how well it performed, I think you could understand that if something goes wrong, its effectiveness is liable to be zero. One of our most critical military systems warns North America of nuclear attack. In 1958, we first put modern computers to work in defense against nuclear attack. The SAGE system was designed to detect approaching bombers. I entered the computer field just as the SAGE system was being put together. That was the first experiment at building a defense system which was largely controlled using modern-day computers. Each section had a separate button that shut off the power on the entire machine, but one night one of the janitors in the building, who was sweeping the floor, managed to bring down the entire system with his broom handle, simply by inadvertently banging that button. SAGE, designed to detect bombers, was ineffective against the fast, long-range missiles of the 60s. Better radars and better computers had to be built to track these missiles.
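The hand-calculation check described in the radiation-therapy example above is a simple but powerful idea: accept the computer's answer only when an independently derived estimate agrees with it. A minimal sketch of that cross-check in Python follows; the dose formula, the table values, and the 5 percent tolerance are illustrative assumptions, not the actual clinical software.

```python
import math

# Stand-in for the full treatment-planning calculation (the complex program being verified).
def planned_dose(source_gy, depth_cm, distance_cm):
    attenuation = math.exp(-0.07 * depth_cm)      # assumed tissue attenuation coefficient
    inverse_square = (100.0 / distance_cm) ** 2   # dose falls off with distance, normalized to 100 cm
    return source_gy * attenuation * inverse_square

# Independent rough estimate, of the kind done by hand with a calculator and tables.
# It uses the same physics but is derived separately, so it does not share the program's bugs.
DEPTH_TABLE = {0: 1.00, 5: 0.70, 10: 0.50, 15: 0.35}   # assumed dose factors by depth (cm)

def hand_estimate(source_gy, depth_cm, distance_cm):
    nearest = min(DEPTH_TABLE, key=lambda d: abs(d - depth_cm))
    return source_gy * DEPTH_TABLE[nearest] * (100.0 / distance_cm) ** 2

def cross_check(computed, estimate, tolerance=0.05):
    """Accept the computed dose only if the independent estimate agrees within tolerance."""
    return abs(computed - estimate) <= tolerance * estimate

computed = planned_dose(2.0, 5.0, 100.0)
estimate = hand_estimate(2.0, 5.0, 100.0)
print(computed, estimate, "accept" if cross_check(computed, estimate) else "recheck by hand")
```

The point is not the particular formula but the design choice: the checking path shares nothing with the program it is checking, so a single bug cannot fool both.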
In October of 1960, our warning system reported a massive Soviet launch. The system sounded a full-scale alarm. In fact, not one missile had left the ground. Our computerized defenses had been triggered by the rising moon. The system had not been programmed to ignore radar echoes bouncing off the moon. Until the day of the alert, no one had thought of that possibility. Today, our missile warning system recognizes the moon. Sophisticated satellites and radars send data to computers deep under Cheyenne Mountain in Colorado. This is NORAD, the North American Aerospace Defense Command. Here, officers analyze information to decide whether an alarm is valid. But some are concerned that NORAD's computers cannot handle the scope of a modern attack. For example, the five main computers in NORAD are Honeywell 6000 machines, which were designed in the 1960s as business data processing machines. They weren't even designed for military use or for processing signals as they come in. NORAD's computers are being upgraded, but slowly. Despite improvements, NORAD's commanders must still judge whether or not an alarm is real. If they aren't certain, they order a higher state of alert. Senior commanders, including the Joint Chiefs of Staff, have to be brought together to confirm or dismiss the alarm. In an attempt to guarantee the survival of our nuclear forces, SAC, the Strategic Air Command, readies its crews. It's fair to say that the early warning system routinely generates false alarms, which are reacted to in serious ways by the forces. In the case of launch control officers within the Minuteman complexes, this alert normally will involve the removal of keys and the insertion of those keys into launch switches, the locking in of chairs, and the strapping in of the crew members with seat belts to brace themselves for the possibility of an impending shockwave from a nuclear explosion in the vicinity. In June of 1980, displays at SAC showed submarine-launched missiles headed for the United States. But because the warning data seemed erratic, the SAC commanders began to question what they saw on the screen. After deliberation, the alert was cancelled. Later, the culprit was discovered. The error had been caused by a malfunctioning 46-cent computer chip. If a warning system said that there were 22,222 missiles on the way, a person might sort of scratch his head and say, wait a minute, that seems a little bit strange. It's more than the number of missiles they have. All the digits are twos. What's going on here? It might be a real attack, but, you know, you'd suspect it. A computer would have a hard time suspecting that kind of thing. If the attack were real, NORAD could be destroyed within minutes. So might the Pentagon, the White House, and much of our nuclear arsenal. The pressure to react to an alarm without full verification is intense. In such hair-trigger conditions, war might be started not by intent, but by accident. All of us know the story of King Arthur, but few of us know how it ends. Mordred, King Arthur's son, rebelled against his father and raised a huge host of knights to confront his father on the plains of Camlann. The two huge hosts of knights faced each other, but father and son decided to talk first. The talks were going along very well and had almost reached agreement, when a snake, slithering in the grass, bit one of the knights, and the knight pulled out his sword to kill the snake. The sword caught the sunlight and gave the signal for battle.
Both sides clashed, and by day's end all 100,000 knights were dead, including King Arthur and his son Mordred. And that was the end of Camelot. Today, the United States and the Soviet Union face a very similar dilemma. At any moment, an accident, a false warning, a false alarm, a computer error, could bring about the crisis that could escalate into the war that no one wants. During peacetime, the chance of a premeditated attack by one superpower against another is remote. Isolated signs of impending attack are treated with skepticism. But during a time of political crisis, the dominant mood is one of tense anticipation. There's no question that the atmosphere of a crisis makes it tougher to make good judgments. The time is very much compressed. People are uptight. They don't fully understand what is going on. There's a tremendous temptation to react too fast, lest you get caught with your pants down. I myself was a task group commander at sea during the Cuban Missile Crisis. We had our foot, if you will, on four Soviet submarines at one time. I think everybody was pretty well keyed up, ready to see torpedo wakes where they didn't exist, possibly. False alarms are more easily misinterpreted during a crisis, when some fear they could trigger war. War by mistake. In 1956 there was a tremendous East-West crisis: the Hungarian revolt in Budapest and the Suez Crisis, where the Soviet Union had threatened England, France and Israel with nuclear attack. Against that backdrop, into NATO headquarters came four pieces of information. Number one, the Turkish Air Force had detected a squadron of Soviet MiGs over Turkey and had gone on alert. Number two, a British Canberra bomber, which at that point could only be downed by a Soviet MiG, had actually been downed. Number three, Soviet MiGs over Syria. And number four, the Soviet fleet going through the Dardanelles in unscheduled numbers. All those things happening at that precise moment against the backdrop of crisis was almost enough to justify triggering off the NATO operations plan, which at that point called for all-out strikes against the Soviet Union. Yet within days it turned out that the Soviet MiGs over Turkey were actually a flock of geese. The Soviet MiGs over Syria were escorting the Soviet foreign minister back to Moscow. The British Canberra bomber had been downed by mechanical difficulty, and the Soviet fleet going through the Dardanelles was a long-scheduled exercise. Today, a warning of full-scale attack leaves only minutes for evaluation. In the rush to sort out incoming data, officers may overreact to an alarm. Although some fear that in a crisis a false alarm could lead to war, others discount that possibility. They assure us that the orderly procedures of peacetime will prevail. But if one superpower readies its forces in response to an alarm, its actions alert the other superpower. They in turn may prepare their own forces to attack. There are tremendous interactions between the early warning systems of both sides. For example, when a Soviet sub comes too close to the New England coast, bomber bases here on the East Coast go on a slight level of alert. Those alerts are picked up by Soviet satellites. The Soviet systems might go on a slight level of alert. In normal times, that interlock between both sides' systems may not be that dangerous. The short fuse of modern warfare has led to President Reagan's proposed Strategic Defense Initiative, or Star Wars: an armed early warning system.
It would respond instantly to attack, intercepting missiles in space. Lieutenant General James A. Abrahamson had to give up a tough job to take this one over. He was running the space shuttle program before President Reagan asked him to take over SDI. His official title is Director of the Strategic Defense Initiative Organization at the Department of Defense, and this makes him responsible for the nation's research and technology programs relating to defense against ballistic missiles. We call it the Strategic Defense Initiative. And, of course, that goes back to the President's speech of March 1983. This proposed anti-ballistic missile system would react immediately, virtually at the moment of launch. If you can destroy a ballistic missile then, while it's standing on a tail of fire that you can see from space, while it still has all ten or more of its warheads and hasn't deployed all of those, you're doing it the sensible, logical way. Later in flight, the lethal warheads separate from the main rocket. During a full-scale attack, thousands of missiles would release tens of thousands of warheads, perhaps hundreds of thousands of fake warheads as decoys. The real question is not whether you can hit a bullet with a bullet. It's whether you can stop a shotgun blast with a shotgun blast. We can build the individual gadgets that would be required for intercepting an individual warhead. But the real question is whether you can tie them together in a system that would be capable of intercepting tens of thousands of warheads, as well as dealing with all of the different types of countermeasures that the Soviets would deploy to reduce the effectiveness of the defense. The system-level question is the computer software that's going to tie all those gadgets together, and clearly the computers are the important part of the question. Computers would track missiles and warheads, activate defenses, and respond to countermeasures such as decoys. The decision to fire would be made at unprecedented speed, perhaps automatically. An interval of 90 seconds leaves almost no time for meaningful human intervention. An officer might be able to press a button saying yes or no, but one couldn't evaluate the data, look closely at the behavior of the computer and decide whether it was working or not in a reasonable way, nor could you call in the president or senior officials in that length of time. The reliability of the SDI control system is crucial. Writing the software will be a phenomenal task. A program for a typical word processor is 10,000 lines long. The space shuttle requires hundreds of thousands of lines of code, but the designers of the strategic defense system may need to write and test many millions of lines of code. Many computer experts question whether a computer program so large and so complex could ever be trusted. In late 1985, members of the SDI organization's panel on computing faced some of their critics in public for the first time. Ladies and gentlemen, welcome to what we all hope will be a stimulating panel discussion. We are here to answer a very simply phrased question: Star Wars, can the computing requirements be met? If we cannot persuade ourselves that such a system will work reliably, it certainly is not of any use, and it will certainly not persuade any opposition that it is to be depended upon. When asked how such a complex system could be tested, Professor Seitz responded that the system could be built out of many independent parts.
Each component could be tested separately to ensure reliability. The answer is, of course, that if these parts or these groups are reasonably independent from one another, and you measure the effectiveness of one by something closely resembling an operational test, except that you don't set off any nukes or anything like that, then you have some good ability to infer the performance of the whole system. Professor David Parnas resigned from the SDI panel, believing that the software could never be trusted. I don't want perfection, I just want a level of confidence comparable to the confidence you feel when you go out in the morning to start your car and you expect it to start. Now suppose that somebody came to you and said: we'll give you a new car, we just designed it, no car like this has ever been built before. We've tested the tires before we put them on the wheels, and they held air then. We've tested the steering wheel to make sure it's round. We've tested all the components. Now we've put it together, and you've got to depend on this; it's never been driven. Are you going to give your old car away? Well, with software it's much worse than with cars. I could imagine doing it with a car; with software I wouldn't even begin to try. Since it can't be readily tested in a real attack, critics believe that if it is ever used, the SDI may be defeated by something its designers never planned for. Supporters of the SDI state that the system could be adequately tested by simulating every imaginable missile attack and countermeasure. Critics and proponents agree that the SDI software would contain errors. But proponents claim that SDI could function in spite of these errors. You hear quite often about the need, the requirement for so many millions of lines of error-free code. Now, it's not error-free code, it's fault-tolerant code, and if another million lines have to be written to ensure fault tolerance, then so be it. Another SDIO officer made a statement that there could be a hundred thousand errors in the program and it would still work. Well, that's another one of those things that's really true. If you were very careful in picking your hundred thousand errors, you might be able to get a hundred thousand errors that don't matter. But it's misleading, because there could be a single error, such as the one that ruined the Venus probe. Due to a single error in its software, the Mariner 1 Venus probe was destroyed shortly after launch. What happens if a highly automatic system such as SDI doesn't work as planned? Will it buy us time to defuse crises in moments of international tension? Most proponents concede that the SDI's main function would be to help protect U.S. military sites, leaving civilian populations vulnerable. Would SDI's automated system move us beyond the dangers of nuclear war, or would it aggravate possible conflict? As with the early warning system, an SDI system will be subject to false alarms. If there were a breakdown somewhere in a system like this, so that a laser goes off and strikes somewhere in the air and doesn't do anything, how much power does one of these lasers have? It could be a very potent weapon against a booster, but it is not like the movies, and it doesn't blow up planets and battle stars or anything like that. Even if one of these lasers went through and struck the surface of the Earth, it would be an accident if it hurt anybody.
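One common reading of the "fault-tolerant code" idea mentioned above is redundancy: several independently written versions of the same routine run side by side, and the system acts on the answer the majority agrees on, so a bug in any one version is outvoted. A minimal sketch of that voting scheme in Python, with three hypothetical, independently developed versions of a simple calculation; the names and the majority rule are illustrative assumptions, not anything drawn from the SDI designs.

```python
from collections import Counter

# Three hypothetical, independently written versions of the same calculation.
# The hope in this style of fault tolerance is that they do not share the same bugs.
def version_a(x): return 2 * x + 1
def version_b(x): return x + x + 1
def version_c(x): return 2 * x          # deliberately buggy version, for illustration

def majority_vote(x, versions=(version_a, version_b, version_c)):
    """Return the result most versions agree on; refuse to answer if there is no majority."""
    results = [v(x) for v in versions]
    answer, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority, versions disagree: %r" % results)
    return answer

print(majority_vote(10))   # prints 21: the buggy version_c is outvoted
```

The catch, which is what the critics on the panel keep returning to, is that voting only masks faults the versions do not share; a design error common to all of them, or a flaw in the voter itself, defeats the redundancy, which is how a single error can still matter the way it did for Mariner 1.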
Some people would maintain that accidentally activating the Star Wars system would just create a snap, crackle, and pop in space, and not much more would happen. But the accidental activation of the system, particularly during a time of crisis, might lead to other responses. For example, on the American side, it would be taken as one confirmation that an attack was underway. It would be like another kind of false alarm. Consider the circumstances of Star Wars, where the systems would have only a matter of seconds to do their job, and where the possibility of an attack on one Star Wars system by another at the speed of light would mean that warning times would be measured in microseconds. Under those circumstances, of course, human intervention is impossible. The computers would have to be programmed so that they would make the final decision on war and peace. Many Americans are confident in our ability to engineer technical solutions to the threat of nuclear weapons. She will be a thoroughbred, the best of the breed, and she can do the job. At a time when some consider it stylish to ridicule modern weapons systems and sometimes the men and women who build them, these extraordinarily complex ships work, and they work well. And at a time when many weapons systems are said to cost too much, this ship is a bargain. In the name of the United States, I christen thee; may God bless her and all who sail in her. But the focus on technical solutions may aggravate the risks posed by faster, more accurate weapons. Now we've reached the point where weapons are so accurate and so fast on target that human decision-making systems are no longer valid today. There just isn't time. We are confronted with a great danger, the greatest danger since nuclear weapons were devised, and that danger is accidental nuclear war or war through miscalculation. Now, in an automated environment, the advantage of the computers is that they can respond very quickly to an emerging situation. The disadvantage is that they may start doing things that they weren't intended to do. When timelines are compressed, particularly when timelines are automated, the possibility that misperception or unintended actions could lead to a war, I think, is greatly increased. Since computers are supposed to be more reliable than human beings, and human beings are notoriously unreliable in time of crisis, you might think it's better to leave it to computers, but not so. The essence of a crisis is that it's unpredictable, and because it's a surprise, you can't program a computer to know how to react in a crisis. That's why we need human beings, human beings with judgment, with the ability, the intuitive ability to say, hey, this may be a mistake, let's hold off, let's buy some time, let's try and communicate. Ultimately, politics will determine our priorities. But even though we don't like the name Star Wars, because it is really aimed at preventing war, I have to take some heart from that marvelous movie. Remember, the good guys won, and they won because the force is with them. Well, let me tell you about that force that's out there today. That force is thousands and thousands of some of the most creative and dedicated technical people in this nation who are working to see if this really can be done, and can be done in a way that will yield the real objective.
The fact is that there is no defense against a nuclear attack; if one devises a defense against ICBMs, even if that turned out to be possible, one would also have to contend with bombers, with cruise missiles, with submarine-launched missiles launched on depressed trajectories, with floating nuclear weapons up the Potomac on a barge, with mailing them into the country by parcel post, flying them in on light aircraft, smuggling them in in any of the ways that people get marijuana and cocaine into the country today. There is no defense against a determined aggressor possessing nuclear weapons. You know, when President Reagan made his Star Wars speech back in March of '83, he didn't simply say, gee, wouldn't it be neat if we could do this. He very explicitly couched it as a challenge to the scientific and technical community, saying that it was the scientists and engineers who were going to have to solve the problem of nuclear war. Today, the Star Wars program is the single largest item in the defense budget. To begin with, you have an 8- or 10-year development period that's probably going to cost 70 to 90 billion dollars, and then when you start to deploy it, you're going to be talking hundreds of billions more. Most of these contracts have been won by major aerospace companies. In a very literal sense, Star Wars is talking about adding an additional Navy to the defense budget when we're trying to balance the deficit. Our technology is amazing. Our science is amazing. Our imagination is amazing. We Americans, especially we Americans, more so than any other nation in the world in the military area, have had a fascination with technological fixes. Well, that's not good enough. Technological fixes don't do it.