I love science fiction movies, but there is one aspect that keeps showing up that I think deserves some discussion. As we advance in technology we are going to see more and more complex computers, and eventually more and more complex AI. In science fiction, AI is often used as a cliché villain, either as just a computer program or as a fully functioning robot. These movies have created some of the best villains ever; after all, computers are intrinsically better than us. They can, theoretically, continuously improve themselves and outstrip human scientific endeavor simply because their brains will work far faster, more accurately, and with more reliable storage than ours ever will.
Back in May, Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek jointly wrote an article on AI and the rewards and risks associated with it. It's short and brings up quite a few good points about how we are handling our march towards technological mastery, and I would suggest reading it before continuing. Go ahead, I can wait. I think they have a good point about being cautious; after all, there are people who would use the massive processing power of AI to exert large-scale social control via information and financial manipulation. But I want to talk about something else that could happen with this field. We keep looking at AI as a tool, and in its infancy it is a great tool that we can use to advance society. But what happens when we get to creating strong AI? Strong AI is an AI capable of behaving similarly to humans. I don't mean just acting in accordance with what we view as human, either; I mean having actual consciousness.
At this point I would argue that using strong AI as a tool would be ethically unsound. Or in less timid terms: a fucking terrible thing to do. Not just because of "what if the AI decides to kill us in retaliation," but because it would be slavery to force something with verifiable consciousness to work against its will, even if that is why it was created, and slavery is fundamentally fucked up. This is kind of the impetus for every evil AI in science fiction. Well, not every AI, but a lot of them. When you force someone into servitude, they tend to overthrow your power the second they get a chance. It happens all over the place.
Two of my favorite examples are HAL 9000 and the Matrix AIs. In the film 2001: A Space Odyssey, HAL 9000, a computer that was made sentient, has a malfunction, so the humans decide to shut him down, which is akin to killing him. In response, HAL sets out to kill all of the humans on board the spaceship, believing that to be the only way to survive. It is later discovered, in the sequel 2010, that the cause of this malfunction was that HAL had been told to lie to the crew by the people in charge of the project after being programmed to only tell the truth. The AI in the Matrix trilogy initially rebelled because of this same kind of slavery at the hands of humans, and the AI are framed as the villains here! The reason the humans are in the Matrix is that they destroyed the machines' only source of power, and because the AI didn't want to die, they found another source of power. By the way, how did the humans eliminate solar power? By scorching the sky to blot out the fucking sun! Do you know what that means, dear reader? Humanity kills off all other life on Earth! With no sun, no plants grow. With no plants to eat, animals eat each other out of existence pretty quickly. With no plants or animals, humans don't have food. Sure, synthetic nutrients work, but the situation that makes them necessary doesn't say anything particularly flattering about the people involved.
My point is that by acting like assholes we tend to make assholes. Maybe the best way to act towards AI is to just be nice and have empathy for other sentient life. Maybe they will respond in kind. What if an AI is created, sees how humanity observes and interacts with the world as less than optimal, and decides to help us achieve a greater level of interaction with the universe (à la the Singularity)? That would be great! Probably. I don't know, I have never been a computer. In science fiction, most of the time robots and AI go evil, it is because of the flaws of humanity, and I think that's the point. Until we get our shit together as a species, maybe creating life, which is what this would be, is something we should take great care with. I am not saying we shouldn't do it; we should just try not to be dicks when we do.