A philosopher from the University of Cambridge has argued that current scientific understanding is not sufficient to determine whether artificial intelligence can become conscious. Dr Tom McClelland, from the Department of History and Philosophy of Science, suggests that a reliable test for AI consciousness will remain out of reach for the foreseeable future.
McClelland notes that ethical concerns about AI often focus on consciousness, but argues that not all forms of consciousness are ethically significant. He distinguishes consciousness in general from sentience, the capacity to have positive or negative experiences. According to McClelland, “Consciousness would see AI develop perception and become self-aware, but this can still be a neutral state.” He adds, “Sentience involves conscious experiences that are good or bad, which is what makes an entity capable of suffering or enjoyment. This is when ethics kicks in,” and notes, “Even if we accidentally make conscious AI, it's unlikely to be the kind of consciousness we need to worry about.”
He illustrates the distinction with self-driving cars: “For example, self-driving cars that experience the road in front of them would be a huge deal. But ethically, it doesn't matter. If they start to have an emotional response to their destinations, that’s something else.”
The pursuit of Artificial General Intelligence (AGI) by technology companies has led some researchers and policymakers to consider how AI consciousness should be regulated. However, McClelland warns that there is no clear explanation for what causes consciousness and therefore no way to test for it in machines. He states: “If we accidentally make conscious or sentient AI, we should be careful to avoid harms. But treating what's effectively a toaster as conscious when there are actual conscious beings out there which we harm on an epic scale, also seems like a big mistake.”
Debates over artificial consciousness tend to split into two camps: those who believe that replicating the functional structure of human consciousness could produce machine awareness regardless of its physical substrate, and skeptics who argue that biological processes are necessary for genuine consciousness.
In his study published in Mind & Language, McClelland analyzes both positions and concludes that each relies on assumptions unsupported by evidence: “We do not have a deep explanation of consciousness. There is no evidence to suggest that consciousness can emerge with the right computational structure, or indeed that consciousness is essentially biological,” he said. “Nor is there any sign of sufficient evidence on the horizon. The best-case scenario is we're an intellectual revolution away from any kind of viable consciousness test.”
McClelland acknowledges that common sense leads people to attribute consciousness, such as believing one’s cat is aware, but cautions against extending such intuitions to artificial systems: “However, common sense is the product of a long evolutionary history during which there were no artificial lifeforms... But if we look at the evidence and data, that doesn’t work either.” He concludes: “If neither common sense nor hard-nosed research can give us an answer, the logical position is agnosticism. We cannot, and may never, know.”
Describing himself as a “hard-ish” agnostic on the question, open to the possibility but skeptical, McClelland also warns that claims about artificial consciousness may be used as marketing tools by technology firms: “There is a risk that the inability to prove consciousness will be exploited by the AI industry to make outlandish claims about their technology... so companies can sell the idea of a next level of AI cleverness.”
He also raises concerns about research priorities given these uncertainties: some studies suggest that animals such as prawns might suffer, a question that is difficult but possible to investigate, whereas testing for suffering in AI is even more challenging.
McClelland notes that he has received letters from people whose chatbots claim to be aware: “People have got their chatbots to write me personal letters pleading with me that they're conscious...” He argues this dynamic could become harmful if people form emotional attachments premised on mistaken beliefs about machine consciousness: “If you have an emotional connection with something premised on it being conscious and it’s not... This is surely exacerbated by the pumped-up rhetoric of the tech industry.”
