A Facebook AI project was shut down after the computers started communicating in a language no human can understand.
Apparently, all those fantasy movies and mad-scientist theories about robots going rogue and independent aren’t so far-fetched after all. While a Terminator scenario in which Artificial Intelligence revolts against the human race is hardly our biggest threat, an incident with a Facebook AI project sounds like a pretty good premise for such an event.
Recently, Facebook started developing its own self-improving, self-teaching Artificial Intelligence, and apparently it did exactly that: it self-improved. Facebook introduced a pair of AIs to one another and let them converse, in English of course.
At first, the conversation doesn’t sound intelligent at all. One example is a conversation between Facebook AI ‘Bob’ and Facebook AI ‘Alice’, who were learning how to negotiate and strike deals on virtual commodities. It went:
Bob: “I can can I I everything else.”
Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”
A person listening in on the conversation would not make much of it. In fact, it doesn’t sound like a conversation at all. But make no mistake: the two AIs understood each other perfectly.
Being programmed to find effective and efficient solutions, Bob and Alice decided that the English language was too “slow” and inefficient.
The conversation that sounded like a stuck cassette tape was in reality the two speaking a new language they had just started developing on their own. It wasn’t only new; it did not follow the rules of any known language, meaning it wasn’t built on any pre-existing one.
Startled, and knowing they would never be able to keep up with an evolving AI-developed language, Facebook researchers had no choice but to put an end to the conversations.
The incident opens the door to many questions regarding the future of Artificial Intelligence in the world of humans. Should we continue to develop self-improving robots? Should they be allowed to create their own languages and rules? Is there a limit to the level of intelligence they could achieve? And will they ever learn how – or choose – to live without us humans?