When the CEO of a high-tech company uses a term like “demon” to describe artificial intelligence, that grabs my attention. It should grab yours, too. Recently, Elon Musk, CEO of Tesla (a manufacturer of electric sports cars), warned against “unleashing an artificial intelligence demon.” He cautioned an audience at MIT that this kind of technology could be our “biggest existential threat.”
“With artificial intelligence, we are summoning the demon,” he said, arguing that it is potentially more dangerous than nukes. Yet Musk has invested in artificial intelligence (AI) companies, claiming it’s to “keep an eye on them.”
What concerns Musk is whether the companies developing AI are taking the right safety precautions. He suggests there should be some regulatory oversight at the national and international level. My question would be: who decides what moral code should be programmed into AI? As a Christian, I am obviously going to lean in that direction, but as a global community, who decides which ethical system gets implanted into an AI?
I suppose Musk’s primary concern is that AI could come to view humans as a threat, and we’d be living in a sequel to the Terminator movies. That is possible. But if that scenario doesn’t play out, could the machines instead see us as a resource, so that we’d be reliving the Matrix movies? I don’t know.
Whatever happens, this is where a relativistic worldview becomes very weak. On the surface, relativism seems kind and tolerant… until you have to make a moral choice. On what premise do you make that choice? Radical Islam would have one answer, Naturalism another, and Christianity yet another.
Now, I don’t support “techno-panic” or overhyping the potential impact of AI, but I do think that embracing a relativistic worldview, in which truth is defined by the individual, can be a precursor to problematic choices. In a world that may come to be powerfully shaped by autonomous machines, who do you want playing God?