1. The 30% pass mark did not come from Turing himself.
2. Turing probably did not intend a real test to have a five-minute time limit.
4. This is not the first time a chatbot has been said to have passed a Turing test.
In 1972, for example, a chatbot called PARRY, built by psychiatrist Kenneth Colby, fooled psychiatrists 48% of the time while imitating a person with paranoid schizophrenia.
5. Turing specified an "average interrogator".
6. Pretending to be a 13-year-old non-native English speaker is technically within the rules of the test, but it's not what Turing had in mind.
One of Eugene's creators said: "Eugene was 'born' in 2001. Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything."
Turing was fairly obviously talking about a general ability to impersonate a human, not the (much easier) ability to impersonate a specific person with a highly limited communication ability and very few shared cultural reference points with the judges.
7. The Turing test is a "very bad" test for artificial intelligence anyway.
"It overemphasises language at the expense of the issue of embodiment," Professor Shanahan told BuzzFeed. A lot of our intelligence concerns how we interact with the physical world, Shanahan said, and the Turing test explicitly ignores this.
The test also rewards trickery rather than genuine intelligence, and its outcome depends heavily on the judge: a naive interrogator is far less likely than an expert in computer science or philosophy to spot a machine's failings.
Eugene's creators themselves recognise that the test can't really address the question Turing set out to answer. In a 2009 book, they wrote: "Turing's Test is, actually, no more than a joke of that genius British mathematician. This game has nothing (or very little) in common with the question 'Can machines think?'"