Ghost in the machine
by Dwayne Day
While sitting in a booth overlooking historic Apollo Mission Control, our guide, a man in his 70s who had worked on Apollo, explained the various consoles and what happened during an Apollo landing. He added that the Lunar Module had far less computing power than people have in their cell phones. A young man, probably a college student in his early 20s, then asked how it was possible to land on the Moon with such a primitive computer. From his skeptical tone, it was clear that he believed the Moon landings had been faked. Our guide paused for dramatic effect, and replied, “We got lucky.”
That was the wrong answer, and it certainly was not the kind of answer that would convince somebody who believed the Moon landings never happened. What he should have said was that just below Mission Control there had been a couple of floors of computer equipment used for the mission. And in nearby rooms there had been dozens of engineers sitting at tables, able to provide immediate support to the astronauts. Yes, the Apollo Guidance Computer, which was quite sophisticated for its day, was primitive by today’s standards. But it was not the only calculating machine supporting the flight: there were many others, electronic and biological. The most important and powerful computer is the human brain, and there were two good ones on Apollo 11 that day in 1969, and thousands more that had developed the spacecraft and were supporting the flight. It wasn’t luck, it was brainpower.
What that little incident illustrated was the rather awkward and messy way that we understand our relationship to computers and their capabilities and what they mean for spaceflight. It perpetuated a belief that computers are somehow more important than people when it comes to exploring space, whether robotically or with astronauts. The reality is that the computers are calculating tools, but human brains possess ingenuity and insight that computers cannot rival.
But that incident got me thinking that there are two interrelated parts of this subject. The first is the fact that humans are more important and more powerful than computers, and that is relevant even when humans send “primitive” computers out into the void. The second issue is more difficult to conceptualize: what will happen when that is no longer the case? What will happen when the computers really are more intelligent than the humans that create them? Will artificial intelligence change spaceflight as we now understand it? Of course it will. How? We have no idea.
The Space Center Houston guide was perpetuating what has become modern mythology: a belief that the only way to accomplish sophisticated feats is with powerful computers. But that’s a fallacy that somehow skips past the fact that humans created those computers in the first place. Did the Egyptians use powerful computers to design the pyramids? (Well, maybe their space alien overlords did…) Did the Romans have computers to design their aqueducts? Were there computers available for the Panama Canal or Hoover Dam? What kind of computer power was available for designing the atomic bomb, or the hydrogen bomb or the X-15 or the Concorde or atomic reactors, all of which predate the iPhone by many decades? For some reason a few people believe that the primitive computers of the 1960s mean the Moon landings were faked, and yet they don’t apply the same irrational logic to other major engineering accomplishments that also lacked modern computers but nevertheless occurred.
Recently that mythological meme appeared again in the PBS documentary “The Farthest: Voyager in Space.” A scientist pointed out that the Voyager spacecraft, launched in 1977, had less computing power than the key fob you use to unlock your car doors. Voyager program manager John Casani scoffed at the idea. “What’s wrong with ’70s technology? I mean, you look at me, I’m a ’30s technology, right? I don’t apologize for limitations that we were working with at the time. We milked the technology for what we could get from it.”
Just a little more than a week ago, another robotic spacecraft was discussed in a similar manner. As the Cassini spacecraft was about to make its fatal dive into Saturn’s atmosphere, various writers and commentators touted its achievements, and some of them pointed out that Cassini’s electronic brain dated from the early 1990s and had less computing power than your cellphone. Yet it still managed to revolutionize our understanding of the ringed planet and its moons.
Moore’s Law—which isn’t really a physical law—describes the exponential growth of computing power over many decades. It indicates that today’s spacecraft, with computers far more powerful than those launched only a decade or two ago, will look incredibly primitive compared to the ones launched a decade or two from now. In other words, not too long from now people will refer to our “primitive” technology and wonder how we ever accomplished anything with it. Yet today, when NASA launches a spacecraft, nobody calls it incredibly primitive or unable to do the job, even though rapid obsolescence is inevitable. The belief that a technology is primitive comes only with hindsight, never with foresight.
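The scale of that obsolescence is easy to underestimate. A back-of-the-envelope sketch, assuming the common rule-of-thumb doubling period of two years (the function and constant names here are illustrative, not from any source):

```python
# Rough illustration of Moore's Law: computing power (transistor count
# as a proxy) doubling roughly every two years. This is a rule of thumb,
# not a physical law, as the essay notes.

DOUBLING_PERIOD_YEARS = 2

def relative_power(years_elapsed, doubling_period=DOUBLING_PERIOD_YEARS):
    """Computing power relative to a baseline, after years_elapsed years."""
    return 2 ** (years_elapsed / doubling_period)

# A spacecraft computer frozen at launch falls behind quickly:
print(relative_power(10))  # hardware a decade newer: ~32x more powerful
print(relative_power(20))  # two decades newer: ~1024x more powerful
```

By this simple measure, Cassini’s early-1990s brain would be outclassed by roughly a thousandfold two decades later, which is exactly why its hardware sounds so primitive in hindsight.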
As John Casani pointed out in the documentary, spacecraft are built by people, and the most important calculating machine is the human brain, with its ability to be fired by imagination and ingenuity, including the ability to overcome technological limitations. That is going to be constant far into the future. Fifty or a hundred years from now when humans send incredibly capable spacecraft throughout our solar system, the most important factor is still going to be the people who designed them. Humans will be part of the machine, figuratively, if not literally.
Another scientist who was quoted just before Cassini’s demise in Saturn’s clouds discussed how the people who worked on the mission referred to Cassini as “she,” and how they became emotionally attached to a machine that many of them had never even seen in person or knew primarily as streams of data that they interpreted. “She is us,” the scientist said. That has been true of our space machines since the flight of Sputnik—our machines carry our DNA in their designs, and it is no surprise that we anthropomorphize them.
In the past couple of decades there has been a lot of writing and speculation about the computer “singularity,” a theorized gestalt shift where computers will become smarter than humans and capable of improving themselves on their own, which they will do at a lightning pace. Some writers have speculated that the world beyond the singularity will be unrecognizable to us today. Some speculators believe that humanity will be doomed when this happens.
There are other writers and computer experts who think this is a lot of bunk.
But how will artificial intelligence change the spaceflight equation? Will humans at some point design a spacecraft that will be far smarter and more capable at thinking and reasoning than a human? What would that mean? One possibility is that at that point, sending humans out into space to “explore” will no longer be necessary, since they could only slow down the learning process.
Science fiction writers have toyed with the idea of intelligent spacecraft numerous times. Fred Saberhagen’s Berserkers were thinking machines built for war that at some point wiped out their creators and then set out on a quest to eliminate all sentient life in the galaxy. Star Trek: The Motion Picture had Vejur, a lost NASA spacecraft that had encountered thinking machines that enhanced it and made it far more capable than humans. Vejur’s mission was to seek knowledge and return it to its creator, which—do you sense a trend here?—it inadvertently planned to destroy. Vejur still had its limitations, however, and could not comprehend very basic things like love and friendship. Vejur’s lack of humanity was its weakness.
Today we explore the solar system with our machines, but what happens when we reach the point where the machines start to explore on their own, and the humans who created them and seek to control them are far less important to the mission? Will the machines care about the human questions and motivations that created them? Will those machines still explore? Will they want to travel through space or just hang around Earth orbit, pondering bigger questions we cannot comprehend?
Almost certainly, the machines we send into space will redefine the science questions humans give them, but in ways that we cannot predict. Perhaps rather than a search for biological life, they will be motivated by a search for machine life instead. When searching for biological life, so far scientists have adopted a strategy consisting first of “follow the water” and later of “follow the carbon.” A machine might decide to follow the silicon and the electrons. But while we humans might consider biological life a precursor to machine life, perhaps our own intelligent machines will be interested in other markers of machine life, and not concern themselves with biological life at all. Perhaps we will send an intelligent probe to Europa that decides its efforts are better spent staring at distant stars, looking and listening for signs of other machines out there in the black. We might not like that, but it is preferable to having those machines decide to come back to Earth to wipe us out. Of course, we will create intelligent machines on Earth before we build intelligent machines for spaceflight, and if they are malevolent, this entire thought experiment about intelligent space machines may be moot.
But perhaps a day will come when the machine we send out into space, she, is no longer us. And maybe that day is when we will have made our own luck.