World War I could be considered an “unintentional” war: a rather minor incident (the assassination of Archduke Ferdinand) inexorably led to war as the result of many interlocking military treaties, which produced mobilizations and counter-mobilizations. Today, technological innovations are increasing the chances of unintentional nuclear war. Weapons are becoming quicker, more accurate, smaller, more mobile, and harder to detect. Each side finds it harder to keep track of the other side’s weapons and to have warning of an impending nuclear strike. This fact is especially dangerous because military doctrines call for use of nuclear weapons before the other side does.
Into this situation the Star Wars initiative was introduced. Its proposed weapons have been portrayed as a technological fix to the problem of war. In fact, however, they will accelerate present destabilizing trends. Given the speed of the weapons, the counter-weapons must be extremely fast and must function smoothly as a system, not just as separate parts. Computer systems will be central, and will probably be programmed with the “artificial intelligence” abilities that have recently caught the imagination of Pentagon research funders. DARPA, the Defense Advanced Research Projects Agency, has announced a $600 million “Strategic Computing Program” on artificial intelligence.
James Fletcher, who headed the original Star Wars study, estimates that a Star Wars system would have to be put through fifty million debugging runs to test it “adequately.” In a paper on “Computer System Reliability and Nuclear War,” Prof. Alan Borning writes: “To be at all confident of the reliability of complex systems, there must be a period of testing under conditions of actual use.” Short of having a nuclear war, neither the Star Wars computer systems nor those used for the command and control of nuclear forces can be adequately tested.
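Borning's point can be illustrated with a small, entirely hypothetical sketch (the function name, inputs, and bug below are invented for illustration, not drawn from any real warning system): a decision routine passes a large test campaign flawlessly, because the tests only cover anticipated inputs, while a fault condition that arises only in actual use (here, a negative sensor reading from an overflow) is never exercised.

```python
import random

# Hypothetical "threat assessment" routine (illustrative only).
# Latent bug: a negative blip count -- e.g. from a sensor counter
# wrapping around -- was never anticipated and is misclassified.
def threat_level(radar_blips: int) -> str:
    if radar_blips < 0:
        return "ATTACK"       # wrong: garbage input treated as an attack
    if radar_blips > 1_000_000:
        return "ATTACK"
    if radar_blips > 0:
        return "AMBIGUOUS"
    return "CLEAR"

# A large random test campaign over the *anticipated* input range.
random.seed(0)
failures = 0
for _ in range(100_000):
    blips = random.randint(0, 1_000_000)
    expected = "AMBIGUOUS" if blips > 0 else "CLEAR"
    if threat_level(blips) != expected:
        failures += 1

print(failures)               # 0: every test passes
print(threat_level(-1))       # "ATTACK": the bug, waiting for real-world input
```

A hundred thousand tests (or Fletcher's fifty million) establish nothing about inputs the testers did not think to generate; only "conditions of actual use" produce them.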
Common sense reminds us that fail-proof systems are unattainable in the best of circumstances. Paul Bracken reminds us that “the ability of governments and large organizations to screw up in breathtaking fashion is a demonstrated fact.”
Among the system failures cited in the literature on unintentional war are:
It is unfortunate that no such careful documentation of Soviet disasters is recorded. However, some examples such as the Korean Air Lines incident and the stray missile over Finland remind us that Soviet systems are also fallible, probably as much as Western ones.
Accidents become dangerous when they occur at a time of high tension between nuclear powers. One near-catastrophe occurred in 1956, during the Suez crisis and Hungarian uprising. As Borning relates, “on the night of November 5, the following four coincidental events occurred. First, U.S. military command headquarters in Europe received an urgent message that unidentified jet aircraft were flying over Turkey. Second, there were additional reports of 100 Soviet MiG-15 fighters over Syria. Third, there was a report that a British bomber had been shot down over Syria (presumably by the MiGs). Fourth, there were reports that a Russian naval fleet was moving through the Dardanelles, perhaps to leave the Black Sea in preparation for hostilities. General Andrew Goodpaster was reportedly afraid that the events might trigger off the NATO operations plan, which at the time called for a single massive nuclear attack on the Soviet Union… . As it turned out, all four reports were incorrect or misinterpretations of more innocent activities.”
What would be the consequences of a similar series of events in a world where computers make the decisions, a world that may be ushered in through the Strategic Defense Initiative?
The author is a computer consultant with Sunshine Software and the coordinator of INPUT (Initiative for the Peaceful Use of Technology). He may be contacted through: INPUT, Box 248, Station B, Ottawa, Ontario K1P 6C4. Phone 613-xxx xxxx.