The crucial fight for control between AI and its creators


The Chief Rabbi is among those who’ve expressed concerns about trusting artificial intelligence to make moral choices. Stephen Oryszczuk investigates

Stephen Oryszczuk

Stephen is the Jewish News' Foreign Editor

High tech hand with a classic light bulb. How much will AI control its creator?

Chief Rabbi Ephraim Mirvis has warned of “a desperate struggle for control between artificial intelligence (AI) and its creators”.

His concern follows the crashes of Lion Air Flight 610 in October, and Ethiopian Airlines Flight 302 last month, which together killed more than 340 people. Initial reports suggest the pilots lost their battle against the planes’ intelligent software. While no Luddite, Mirvis believes such events lend the issue urgency, and says we should all be having these conversations immediately and openly.

“I am troubled,” Mirvis told BBC Radio listeners. “What happens when soulless artificial intelligence, devoid of feeling or emotion, is called upon to make moral or ethical choices on our behalf?”

He posed questions to illustrate the point. Will a driverless car prioritise the life of a passenger or a pedestrian in an emergency? How will healthcare AIs decide who gets life-saving treatment first? Whom will military AIs choose to kill?

“The development of AI has the potential to be the source of enormous blessing for our world by augmenting human capacity, and not by replacing it,” said Mirvis, who reiterated his points at the European Parliament. “But it is imperative that this technology be harnessed to serve us, rather than the reverse.”

According to the Book of Genesis, says Mirvis, humankind is required to subdue the Earth and establish a dominion of morality over it. So can we ever justifiably abdicate that moral responsibility to computers or their programmers?

Chief Rabbi Ephraim Mirvis

Jewish computer scientists were among the first in the field of AI and ethics, long before intelligent robots were designed to work in nuclear reactors, fight wars or care for the elderly.

In 2000, Eliezer Yudkowsky founded what is now the Machine Intelligence Research Institute in California. Its initial purpose was to facilitate AI’s development, but by 2005 Yudkowsky was warning it could become “super-intelligent” and pose risks to humanity.

Around the same time, American author, inventor and “futurist” Ray Kurzweil said: “Our strategy should be to optimise the likelihood that future non-biological intelligence will reflect our values of liberty, tolerance, and respect for knowledge and diversity. The best way to accomplish this is to foster those values in our society.”

Ominously, he also warned “greater intelligence will always find a way to circumvent measures that are the product of lesser intelligence”. So, even if we hard-wire AIs to respect our cosmopolitan values, could they choose to “circumvent” them? To a large extent, we have already ceded control of an AI’s means to achieve its goals. The designers of IBM’s Deep Blue – which beat world champion Garry Kasparov at chess in 1997 – did not know the computer’s next move. They only knew it would try to win.

While Mirvis says all will be well if AI “is harnessed to serve us”, American philosopher and computer scientist Susan Schneider suggests this is too simple, presupposing AI will only out-think us while always lacking consciousness. But if AI does develop “conscious experience”, she says ethicists worry “it would be wrong to force AIs to serve us if they can suffer and feel emotions”.

Rabbi Jack Abramowitz of the US-based Orthodox Union agrees. “If they are determined to be intelligent and aware, and able to feel and suffer, then they should enjoy the same rights we would give to anybody.”

Schneider warns consciousness “could make AIs volatile or unpredictable” and is refining a test to measure this, but Yudkowsky thinks the only way AI would ever do us harm is if we somehow allowed it.

AIs may well “take actions undesirable to its programmers”, he says, but this “will be as a consequence of the programmers’ actions”. The code “will not want to circumvent its designed-in preferences, run amok and start rendering down humans for spare atoms, unless we write code that does so – or write a programme that writes a programme which does so”.

The mechanics of AI are moving faster than the ethics. In Israel, a striking example is the $100 million just spent on the Weizmann Institute’s new Artificial Intelligence Centre for Scientific Exploration.

As for the ethical parameters Mirvis calls for, there appear to be few boundaries within its walls at this point. Two of the centre’s top boffins told JTA last month: “We are not told what to investigate. It’s all driven by the curiosity and ingenuity of the researchers themselves.”

Who ultimately does the driving in the future – subduing the Earth to maintain dominion over it – remains to be seen. The final coding in the “desperate struggle for control between AI and its creators”, as described by Mirvis, remains unwritten.
