    No, A Computer Did Not Just Pass The Turing Test

    Sorry everyone.

    Many media outlets are reporting that a computer program pretending to be a 13-year-old boy from Ukraine passed the Turing test for artificial intelligence on Saturday.

    But some are sceptical about whether this counts as a pass.

    Of course the Turing Test hasn't been passed. I think it's a great shame it has been reported that way, because it reduces the worth of serious AI research. We are still a very long way from achieving human-level AI, and it trivialises Turing's thought experiment (which is fraught with problems anyway) to suggest otherwise.

    1. The 30% pass mark did not come from Turing himself.

    I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

    But that was a prediction of how well machines would play the game by around 2000 (the 30% figure is simply the complement of his 70%), not a criterion for passing.
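    The arithmetic behind the reported threshold can be made explicit. A minimal sketch (the judge verdicts and the `fooled_rate` helper here are illustrative, not the event's actual scoring code):

    ```python
    # The "30% pass mark" cited in reports of the 2014 event is just the
    # complement of Turing's figure: he predicted an average interrogator
    # would have "not more than 70 per cent chance of making the right
    # identification", i.e. the machine would be misidentified as human
    # at least 30% of the time. The judge verdicts below are hypothetical.

    def fooled_rate(verdicts):
        """Fraction of judges who (wrongly) judged the machine to be human."""
        return sum(verdicts) / len(verdicts)

    # Hypothetical verdicts: True means the judge thought the machine was human.
    verdicts = [True, True, True, False, False, False, False, False, False, False]

    rate = fooled_rate(verdicts)  # 3 of 10 judges fooled -> 0.3
    meets_reported_bar = rate >= 0.30
    print(f"fooled {rate:.0%} of judges; meets the reported 30% bar: {meets_reported_bar}")
    ```

    Note that even on its own terms this is a statement about judges' error rates, not a pass/fail criterion Turing ever set.
    
    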

    2. Turing probably did not intend a real test to have a five-minute time limit.

    3. Eugene is not a supercomputer as has been widely reported (or even a computer): it's a chatbot, a piece of software that can run on ordinary hardware.

    4. This is not the first time a chatbot has been said to have passed a Turing test.

    In an example from 1972, a chatbot called PARRY made by psychiatrist Kenneth Colby fooled psychiatrists 48% of the time while pretending to be a person with paranoid schizophrenia.

    5. Turing specified an "average interrogator".

    6. Pretending to be a 13-year-old non-native English speaker is technically within the rules of the test, but it's not what Turing had in mind.

    One of Eugene's creators said: "Eugene was 'born' in 2001. Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything."

    Turing was fairly obviously talking about a general ability to impersonate a human, not the (much easier) ability to impersonate a specific person with a highly limited communication ability and very few shared cultural reference points with the judges.

    7. The Turing test is a "very bad" test for artificial intelligence anyway.

    "It overemphasises language at the expense of the issue of embodiment," Professor Shanahan told BuzzFeed. A lot of our intelligence concerns how we interact with the physical world, he said, and the Turing test explicitly ignores this.

    The test also rewards trickery rather than genuine intelligence, and the outcome depends heavily on the judge: a naive interrogator would not be as good as an expert in computer science or philosophy at spotting failures.

    Eugene's creators themselves even recognise that the test isn't really able to address the question Turing set out to answer. In a 2009 book, they said: "Turing's Test is, actually, no more than a joke of that genius British mathematician. This game has nothing (or very little) in common with the question 'Can machines think?'"

    So there's no need to start welcoming our robot overlords (yet).