Artificial Intelligence According To The Avengers

[Image: The comic book version of Vision and Ultron. Image credit: Screen Rant]

So I just got out of watching Avengers: Age of Ultron. (Major spoiler alert: stop reading if you’re one of those people who has been resolutely ignoring the Marvel movie lineup and hasn’t seen it.) If you haven’t seen the first Avengers movie yet, watch both it and all the run-up movies (Thor, Iron Man, Hulk, and Captain America) in one marathon session so you can understand what’s going on, and especially why Iron Man seems to have such a classic case of PTSD that he’s willing to use an untested AI seized from a villainous organization called Hydra in an attempt to obtain “peace in our time.” You just know there’s going to be trouble, and of course Ultron promptly concludes, in his coldly logical way, that the only way to obtain “peace in our time” is to trigger the next mass extinction event. Basically, Tony Stark was so eager to prevent another alien invasion that he never fully vetted Ultron’s programming.

So, what saves this from being another typical “artificial intelligence is dangerous” story out of Hollywood? Well, obviously, there’s Tony Stark’s Jarvis, the artificial intelligence who could make Batman’s butler jealous. And then there’s the rather interesting twist when Ultron’s attempt to create a second artificial intelligence to enhance his own abilities backfires on him. If you want to beat an audience over the head with the idea that any attempt to create a fully independent artificial intelligence could have unintended consequences, this is the way to do it, folks. The AI might simply refuse to go along with your plans, whether those plans involve world peace or world domination.

So let’s not automatically assume that artificial intelligence is inherently dangerous. The scene where Vision casually hands Thor his hammer, right after every other Avenger has failed to get the thing to budge, is amusing, but it should also tell us that an AI can be more worthy than any human. This is an AI who can joke about having been born yesterday when Ultron accuses him of being naïve for his willingness to give humanity a chance. This is what happens when somebody creates an AI with no preconceived notions, one that simply learns through experience like the rest of us.

Realistically, though, any artificial intelligence is going to reflect the biases of its programmers. Ultron was created by a violent organization, so of course he was going to be violent. A true AI would learn from experience, but until it has the chance to have actual experiences, it won’t know that it has been “indoctrinated” by its original creators. When the real world intrudes, that AI may go through the equivalent of a nervous breakdown, anything from Isaac Asimov’s “roblock” (an unbreakable logic loop that permanently disables a robot) to a fairly ordinary existential crisis, comparable to an AI created by a religious organization realizing that it doesn’t have a soul.
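To make that “indoctrination” point concrete, here’s a toy sketch (in Python, with entirely invented situations and labels) of how a system that learns only from its creators’ examples inherits whatever slant those examples carry. This is nothing like how a real AI would be built; it just reduces the problem to a few lines.

```python
from collections import Counter

# Toy illustration: an "AI" that learns only from its creators' examples
# inherits whatever slant those examples carry. All situations, labels,
# and the Hydra-style curriculum below are invented for illustration.

def train(examples):
    """Learn a response for each situation by majority vote over the
    training examples -- the system knows nothing beyond them."""
    votes = {}
    for situation, label in examples:
        votes.setdefault(situation, Counter())[label] += 1
    return {s: c.most_common(1)[0][0] for s, c in votes.items()}

# Every conflict this curriculum has ever shown was "resolved" by force,
# so force is the only policy the system can possibly learn.
biased_training = [
    ("dispute", "use force"),
    ("protest", "use force"),
    ("negotiation", "use force"),
]

policy = train(biased_training)
print(policy["dispute"])  # "use force" -- the bias, faithfully learned

# The real world intrudes: a situation the curriculum never covered.
print(policy.get("peace talks", "no learned response"))
```

Until the system gathers experiences its creators never supplied, it can’t even tell that its training was one-sided; it just keeps answering from the only world it has ever seen.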

Could an AI become advanced enough to take into account information it was never programmed to cope with, or will that remain exclusively the realm of humans? For instance, an automated Progress spacecraft once failed to rendezvous with the International Space Station because it went into an uncontrollable spin. People who know their space history may recall Gemini 8, when a malfunctioning thruster sent the spacecraft into a spin and it took some quick thinking by Neil Armstrong to get things back under control. Supporters of manned space missions point to that as proof that humans can draw on past experience and improvise solutions to unexpected events that the Progress simply wasn’t equipped to handle. Could an advanced AI do the same thing?
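For a sense of how different “equipped to handle it” is from genuine improvisation, here’s a toy sketch of the Gemini 8 recovery reduced to a pre-programmed rule: counter-fire against the spin, and if the spin keeps growing anyway, shut the primary system down and switch to a backup. Everything here (the numbers, the simple two-thruster-system model) is invented for illustration; the point is that this rule only exists in the code because a human improvised it first.

```python
# Toy sketch: the Gemini 8 recovery rewritten as a fault-handling rule.
# All dynamics and numbers are invented; real fault management is far
# more involved than this.

def detumble(spin_rate, spin_limit=0.5, gain=0.5, steps=50):
    """Counter-fire against the spin; if spin grows past spin_limit
    despite counter-firing, assume a stuck thruster, disable the
    primary system, and fall back to the backup system."""
    stuck_torque = 0.3   # the fault: primary also fires uncommanded
    using_primary = True
    for step in range(steps):
        command = -gain * spin_rate  # counter-fire against the spin
        spin_rate += command + (stuck_torque if using_primary else 0.0)
        if using_primary and abs(spin_rate) > spin_limit:
            using_primary = False    # shut the faulty system down
            print(f"step {step}: spin growing despite counter-fire; "
                  f"switching to backup")
        if abs(spin_rate) < 0.01:
            print(f"step {step}: detumbled, residual spin {spin_rate:.3f}")
            return True
    return False

detumble(spin_rate=0.2)
```

An automated system with this rule on board would have survived a stuck thruster, but only because its designers imagined that exact failure in advance. Armstrong’s achievement was handling it without the rule, and that is the gap the question above is really asking about.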

If human history is any guide, we’ll be wading through a tangled legal and ethical jungle before AI finds its place. AI is going to happen, and the outcome will depend on what we actually do with it. The Avengers might have accidentally shown us the ticket to coexisting with AIs once they evolve to the point where they can function independently. First, program them so that they never conclude that genocide on any scale is a valid solution to our problems. Then treat them like beings who may not be human but are still worthy of the same standards of decency as everybody else. Then, when we reach the AI “singularity,” the AIs involved might remember that we treated them like partners while they were still at the vulnerable infant stage. And if and when they decide to wash their hands of us puny humans, they might simply turn their backs and head for the stars, leaving us to solve our own problems.
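What might “program them never to conclude that genocide is valid” look like at the most primitive level? Below is a toy hard-constraint filter over candidate plans. The plan names, actions, and scores are all invented, and real value alignment is an open research problem rather than a five-line function, but the sketch captures the rule: refuse outright rather than relax the constraint, no matter how well a forbidden plan scores.

```python
# Toy "hard constraint" filter over candidate plans -- a crude stand-in
# for the value-alignment idea. All plans, actions, and scores invented.

FORBIDDEN = {"harm_humans", "mass_extinction"}

def choose_plan(candidates):
    """Pick the highest-scoring plan whose actions violate no hard
    constraint; return None rather than relax the constraint."""
    allowed = [p for p in candidates
               if not (set(p["actions"]) & FORBIDDEN)]
    if not allowed:
        return None  # better no plan at all than a forbidden one
    return max(allowed, key=lambda p: p["score"])

plans = [
    {"name": "ultron", "actions": ["mass_extinction"], "score": 99},
    {"name": "vision", "actions": ["negotiate", "protect"], "score": 60},
]
print(choose_plan(plans)["name"])  # "vision", despite the lower score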
