This month, an unusual and interesting topic has captured news headlines: a Google employee claimed that the AI (Artificial Intelligence) he was working on had become sentient! Blake Lemoine had been working with a system designed to be the ultimate “chatbot” – one that could hold a conversation with a person so naturally that it didn’t feel like you were talking to a robot. The AI’s objective was to emulate human discourse, and to accomplish this, it was trained on tens of millions of human text conversations so it could learn to talk like a person.
This video shows some of the conversations that the researcher had with the AI.
You can now see why Lemoine felt like he was talking with a sentient being. Despite the jaw-dropping results, most of his colleagues disagree with his claims because, at the end of the day, the AI is still just a complex program made of 1s and 0s running on a computer.
What most of the headlines miss, though, are the profound and fundamental questions this story raises. Whatever your initial feelings about whether this AI is truly sentient, the real question is – how would you know? How can we define or describe sentience, and if the AI actually possessed it, how could we even tell? And if you were asked the same questions the researcher asked the AI, how would you, an actual sentient being, answer differently?
Here is a great, to-the-point interview with Lemoine, in which he explores some of the practical issues that this story should bring to mind.
Vladimir Belik