In the wake of Alex Garland’s acclaimed ‘Ex Machina’, critical attention has turned to the delicate intersection of law, ethics and technology. The film has renewed concerns about artificial intelligence.
Artificial intelligence has developed rapidly; it is everywhere, from Siri to Alexa, and even in the recommendations served up on our Android phones.
Robots and bionic technologies that augment, or become part of, the human body raise many prickly legal and ethical questions. They will be programmed by humans to carry out tasks that most people can only dream of doing.
For example, if a brain-computer interface is used to communicate on behalf of someone in a vegetative state, are those messages legally binding?
If a robotic body part is implicated in a murder, who is at fault? If a human with a prosthesis remains subject to the ordinary laws of human governance, should machines be entitled to the same legal rights and punishments, or does the very fact that their autonomy rests at the discretion of human designers render the question of independent rights moot?
Drawing up any kind of regulatory framework is tricky, and not simply because policy-makers face questions that tend only to inspire further questions rather than conclusive answers. They are also entrusted with the punishing responsibility of preserving ethical norms without hampering the potential for technological innovation. Philosophy students should consider the advance of artificial intelligence in light of ‘existential risk’: a research centre at Cambridge, the Centre for the Study of Existential Risk, was set up for the explicit purpose of studying risks that have the capacity to destroy mankind.
Tesla and SpaceX CEO Elon Musk has repeatedly said society needs to be more concerned about safety as the use of artificial intelligence increases.
“If you’re not concerned about AI safety, you should be,” Musk recently tweeted, claiming it posed a greater risk than North Korea.
Musk has elaborated: “AI is a fundamental risk to the existence of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not — they were harmful to a set of individuals within society, of course, but they were not harmful to society as a whole.”
Google has been experimenting with self-driving cars. But what if a self-driving car is about to crash and must make a mathematical decision: save its occupant by swerving into a crowd, or avoid the crowd and crash the car with the occupant inside?
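To make that dilemma concrete, here is a deliberately simplified, hypothetical sketch in Python of what a harm-minimising rule could look like. Nothing in it reflects how Google or any manufacturer actually programs its vehicles; the Outcome fields, the probabilities and the choose_manoeuvre rule are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible manoeuvre and its estimated consequences (all numbers hypothetical)."""
    name: str
    occupant_fatality_risk: float    # estimated probability the car's occupant dies
    bystander_fatality_risk: float   # estimated probability each exposed bystander dies
    bystanders_exposed: int          # number of people in the crowd at risk

def expected_harm(outcome: Outcome) -> float:
    # A crude utilitarian rule: total expected fatalities for this manoeuvre.
    # Collapsing the ethics into one number is exactly the move regulators would have to scrutinise.
    return (outcome.occupant_fatality_risk
            + outcome.bystander_fatality_risk * outcome.bystanders_exposed)

def choose_manoeuvre(options: list[Outcome]) -> Outcome:
    # Pick whichever manoeuvre minimises the expected-harm score.
    return min(options, key=expected_harm)

if __name__ == "__main__":
    options = [
        Outcome("swerve into the crowd", 0.05, 0.40, 6),
        Outcome("stay in lane and hit the barrier", 0.70, 0.00, 0),
    ]
    print(choose_manoeuvre(options).name)  # prints "stay in lane and hit the barrier"
```

Even this toy example shows where the legal trouble starts: someone has to decide, in advance, whose risk counts and how it is weighed, and it is precisely that choice that any regulatory framework would be asked to examine.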
The billionaire’s views on the risks of artificial intelligence are well documented here at Fortune and elsewhere. But last month, when he described artificial intelligence as “the greatest risk we face as a civilization” at the National Governors Association, his remarks drew criticism from Facebook founder Mark Zuckerberg as well as other tech magnates and roboticists.
No word on how Musk will encourage government to take up this charge. Perhaps the governors who heard him speak have already been prompted to take action.