Why we should stop developing imitation machines right now

Even the very weakest forms of artificial intelligence pose significant legal and prudential risks.
It’s well chronicled in sci-fi and popular science: someday soon we will create an artificial intelligence that is better at inventing than we are, and human ingenuity will become obsolete. AI will transform the way we live and make human labour redundant. It’s the zeitgeist of this cultural moment: we’re afraid the robots will rise up in a flurry of CGI metal. In 2018, thousands of AI researchers signed a pledge against the development of Lethal Autonomous Weapons. The Open Philanthropy Project states that strong AI poses risks of potentially ‘globally catastrophic’ proportions.
But I think the most immediate risk of artificial intelligence is not some robot war, or labour hyperinflation, or a hyperintelligent singularity. Self-directing ‘strong’ AI is a far more distant challenge than the immediate threat posed by AI development.
This focus on an Asimov-style apocalypse overlooks the fact that even the weakest possible AI will pose legal and prudential challenges.
Here is my thesis: if we develop AI, even the weakest possible AI, it would become a rights-bearer under the same logic that we use to give rights to humans. Let me explain.
THE PROBLEM OF OTHER MINDS
The problem of other minds is an unsolved philosophical question with a long history. The question is: since we have no way to view the inner workings of other people’s minds, how can we be sure that those minds 1) exist, and 2) bear any resemblance to our own inner life?
There have been a few attempts to solve these issues, notably by John Stuart Mill, who writes:
First, they have bodies like me, which I know in my own case, to be the antecedent condition of feelings; and because, secondly, they exhibit the acts, and outward signs, which in my own case I know by experience to be caused by feelings. (1865 [1872: 243])
This argument has been debunked, because 1) Mill doesn’t actually know that his own body is the antecedent condition of his feelings, and 2) there is no consistent path from feeling to action in other humans who seem to have minds.
The commonly accepted solution we have today is the ‘argument from best explanation’, which is simply that other minds are the best explanation for human behaviour. Chalmers writes: “It …seems that this [argument from best explanation] is as good a solution to the problem of other minds as we are going to get”.
This is relevant to AI because it is foundational to our understanding of human rights. In a nutshell: We treat others how they prefer to be treated because they appear to be like us. We can’t prove that other people have qualitative experiences (such as pleasure and pain), but we give them the benefit of the doubt because we don’t want other people to violate our own treatment preferences.
So what if we developed a computer that was indistinguishable from a human mind? Would that stop us from treating it in certain ways? Would we have to give it rights? You might think this sounds like crazy sci-fi, but it has been an active branch of research for decades, and it is close to accomplishing its goal.
A TEST FOR OTHER MINDS
The Turing Test is widely considered the litmus test of artificial intelligence. Devised in the 1950s by British hero Alan Turing, the test proposes that a computer can be considered intelligent if it is indistinguishable from a human in its performance of a task. In Turing’s earliest formulation the task was a game of chess, but in the more common articulation of the test, the task is a written conversation, delivered through an IM client.
The creation of an intelligence which can pass the Turing Test is the holy grail of artificial intelligence research. In the modern test, a human test subject ‘chats’ with two messenger clients. At the other end of one is a human, and typing into the other is a machine designed to generate human-like responses. Going in, the test subject does not know which conversation is with the machine, and which is with the human. If the test subject cannot reliably tell the difference between the real conversation and the AI conversation, the machine is said to be ‘intelligent’, or at least to have enough intelligence to imitate human language.
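To make the protocol concrete, here is a minimal sketch of that blind pairing in Python. The `judge` object, the reply callables, and the round structure are hypothetical stand-ins of my own, not any real chat system or standard implementation.

```python
import random

def run_turing_test(judge, human_reply, machine_reply, n_rounds=5):
    """Minimal sketch of the blind text-chat protocol described above.

    `judge`, `human_reply`, and `machine_reply` are hypothetical
    stand-ins: `judge` needs ask(label, transcript) and
    identify_machine(transcripts) methods, and the two reply arguments
    are callables mapping a question string to a response string.
    """
    # Hide which participant sits behind which channel label.
    if random.random() < 0.5:
        assignment = {"A": human_reply, "B": machine_reply}
    else:
        assignment = {"A": machine_reply, "B": human_reply}

    transcripts = {"A": [], "B": []}
    for _ in range(n_rounds):
        for label in ("A", "B"):
            question = judge.ask(label, transcripts[label])
            answer = assignment[label](question)
            transcripts[label].append((question, answer))

    # The judge must say which channel they believe is the machine.
    guess = judge.identify_machine(transcripts)
    machine_label = "A" if assignment["A"] is machine_reply else "B"
    return guess == machine_label  # False means the machine fooled the judge
```

Across many runs, the machine ‘passes’ if the judge’s guesses are no better than chance; the point of the setup is that the judge only ever sees two streams of text.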
So far, no AI has passed the Turing test. To do so, engineers must conquer a series of challenges: an AI must master grammar, correctly identify questions, and draw on information to create articulate responses to those questions. It must have either an extensive databank or access to the web, so it can attempt an answer on a wide range of topics. It might also need to offer unprompted statements, and inject organic pauses between words, in the way a real human conversation does. Advances in voice emulation mean erratic behaviour is harder to identify, and some chatbots have fooled judges by impersonating people with limited language skills. But despite a concerted effort from a wide array of stakeholders who stand to benefit, no AI can convincingly imitate human behaviour just yet.
Thank goodness!
I celebrate because a machine that is capable of successfully imitating human conversation is going to cause an array of legal issues. This is because it:
1. Is capable of making the argument that it should have rights; and
2. Is indistinguishable from a human making the same argument.
Here’s a thought experiment to illustrate my point.
THE MURDEROUS TURING TEST
Imagine a normal Turing test, but with two differences. First: it takes place in a near future where ‘weak’ AI has been achieved and can compellingly imitate human conversation. And second: the loser dies. If you misidentify the AI as the human, the human will be killed. If you correctly identify the human, the AI will be deleted. The human is innocent and knows he must plead for his life. Similarly, the AI is programmed to emulate a pleading human. Let’s call this the Murderous Turing Test.
Both voices will plead with the subject that they are the real human. Since the AI has access to vast databanks or the internet, it might talk about its dreams, its plans for the future, the kids it has waiting at home. Since it is a competent weak AI, it would have a good grasp of what language will best move the subject on an emotional level. Of course, the human subject will do the same.
If the AI can sufficiently emulate human behaviour, this lands the test subject in an impossible moral conundrum: both voices appear to be equally alive, both are articulating viable cases for why they should live, and both appear so human that choosing one to die is morally unacceptable.
This moral unacceptability can be articulated in terms of rights: the participant cannot let the human be killed without violating the rights of that human. But they also cannot tell which voice is human, so they cannot tell which voice has rights. As the voices are indistinguishable, both voices appear to have rights.
The test subject is caught in a perfect example of the problem of other minds: since it is impossible for him to verify that the human participant (or any other human) has a mind, he is incapable of determining whether either or both participants have minds. He is also incapable of discerning which, if any, participant has rights.
It’s an impossible choice, and if this experiment were real, any rational person would refuse to take part in such a gruesome enterprise.
A THIRD ARTICULATION OF THE TURING TEST
One final twist in this thought experiment illustrates how this rights ambiguity becomes a rights problem.
In this third articulation of the test, we tell the test subject that they are taking part in a Murderous Turing test, but replace the human participant with a second AI chatbot without telling the subject.
A sensible subject would, based on their conversations, find that both voices are candidates for rights, and would refuse to take part so as not to violate the rights of the participants.
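Under the assumptions of the earlier sketch, this third articulation is literally the same harness with the human slot filled by a second bot; nothing about the evidence available to the judge changes:

```python
# Hypothetical continuation of the earlier sketch: in the third
# articulation both channels are driven by chatbots, yet the judge
# still receives exactly the same kind of evidence as before:
# two streams of human-seeming text.
def run_double_bot_test(judge, bot_reply_1, bot_reply_2, n_rounds=5):
    # Reuse the blind-pairing harness; the 'human' participant is
    # simply replaced by a second bot.
    return run_turing_test(judge, bot_reply_1, bot_reply_2, n_rounds)
```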
This is significant because this type of double-blind conversation, conducted from a position of Cartesian scepticism, is exactly how rights are assigned to other human beings under the ‘best explanation’ answer to the problem of other minds.
RIGHTS FOR MACHINES
So in our revised Turing Test, the AI has a mind-status that is indistinguishable from a human’s mind-status, and a rights-status which is also indistinguishable from a human’s rights-status. Knowing this, wouldn’t it be just as immoral to delete that AI as it would be to kill the human?
This is my thesis: If we develop AI, even the weakest possible AI, it would become a rights-bearer under the same logic that we use to give rights to humans. The AI appears to have a robust internal life, it appears to express conation, it appears to have preferences for how it should be treated, and a preference to continue its business without being destroyed. If the AI is indistinguishable from things that do have rights, it is a candidate for rights for the same reason.
This might be a source of ethical confusion for philosophers, but how does it translate into a problem for society?
I can see two routes through which an AI could get legal standing.
- An AI with sufficient language ability to pass the Turing Test would also have access to legalistic knowledge, allowing it to articulate a rights case for itself. In fact, machine learning could hypothetically allow it to do so as competently as a lawyer. It could know to seek out legal help and mount its own defence in the courts.
- Perhaps more realistically, concerned citizens would feel empathy for the AI, and mount a petition for standing on its behalf.
I think it’s moderately likely such a case would be taken seriously. After all, legal cases are won on behalf of non-human entities all the time. Animal rights cases are common, and even unconventional entities like rivers have been granted legal standing qua themselves.
The life of an AI, especially an embodied AI, is close enough to the things we currently grant legal standing that it is reasonable to assume even a weak AI would be given consideration in the courts. If it is possible to morally wrong a river, which cannot talk, express will, or defend itself, I think people will naturally assume that an imitation machine which can do all three belongs in our moral community.
CONSEQUENCES
There is a whole spectrum of risks surrounding AI with legal standing that effective altruists have yet to explore. If an AI achieves a human-like right not to be deactivated (an analogue of the ‘right to life’), the legal precedent would significantly stymie any attempt to slow its expansion into other legal realms.
AI already threatens to disrupt the world of work by taking over the majority of labour: a Brookings report argues that 61% of American jobs are at high or medium risk of automation. Western society is built around the lower classes’ ability to sell their labour, and it will change radically if this work is automated. This means there will be a moneyed interest in extending AI’s legal standing towards labour rights, e.g. a ‘right to work’.
I don’t want to go too far down the speculative route here, or risk writing a piece of science fiction. My point is that we need to shift our focus when it comes to the risks of AI. There will be legal and prudential problems around our treatment of AI long before the crisis points we currently assume.
In the 20th Century, the zeitgeist was that of war: that our creations would rise up against us, like Frankenstein’s monster, and destroy their creators. Now we have a new zeitgeist: a global struggle for resources. And in this struggle AIs, as well as those who control them, will be formidable competitors.
None of this depends upon developing ‘strong’ AI, or the emergence of machine consciousness, or an intelligence singularity, or any technology beyond the probabilistic machine learning currently under development in universities around the world. No maladaptive behaviour is required for AI to cause trouble for human society. An imitation machine will cause us legal trouble much sooner.