Is Big Tech Teaching Machines To Be Conscious? Google DeepMind's Move To Hire A Philosopher Raises Eyebrows
Is Big Tech building a soul or just a better chatbot? The appointment of Henry Shevlin signals a radical shift in the race for sentient machines

The race to build smarter artificial intelligence has taken an unexpected philosophical turn after Google DeepMind quietly hired an in-house philosopher to investigate the potential for machine consciousness.
The hiring of Henry Shevlin marks a major shift in how the industry approaches advanced artificial intelligence.
DeepMind is now integrating philosophical reasoning directly into its research pipeline rather than treating ethics as an external concern. The move suggests that Big Tech no longer views sentience as a science-fiction trope but as a technical and moral hurdle, marking a transition from building tools to questioning the nature of those tools themselves.
The Google DeepMind philosopher role focuses on the machine sentience debate, aiming to define what it means for a digital system to 'feel' or 'experience'.
This internal appointment comes at a time when large language models are becoming increasingly indistinguishable from human interlocutors. While most researchers maintain that these systems are mere statistical predictors, the boundary is thinning. The decision to bring a philosopher into the core development team indicates that Google expects its path toward artificial general intelligence to raise profound questions about awareness and machine rights.
The Ghost In The Machine: From LaMDA To DeepMind
The industry remains haunted by the Blake Lemoine LaMDA (Language Model for Dialogue Applications) incident of 2022, when Lemoine, then a Google engineer, claimed a chatbot had achieved personhood. Google dismissed him and rejected the claims. However, the appointment of Shevlin suggests that the company now recognises the need for a formal AI ethics framework to handle such claims internally. One can see this as an attempt to professionalise the debate, moving it away from rogue engineer testimonies and into structured AI consciousness research.
Industry experts are divided on the motive. Some see it as a necessary step toward safe AI. Others view it as a sophisticated marketing ploy.
Dr Tom McClelland of the University of Cambridge warns that companies may exploit the inability to prove consciousness. It creates a 'next level' of hype that sells the idea of a digital soul to investors and the public.
McClelland, a lecturer in the Department of History and Philosophy of Science at the University of Cambridge, noted: 'There is a risk that the inability to prove consciousness will be exploited by the AI industry to make outlandish claims about their technology. It becomes part of the hype, so companies can sell the idea of a next level of AI cleverness.' This Big Tech AI ethics move forces you to consider whether we are preparing for a sentient future or just a more convincing simulation.
Intelligence vs Awareness: The Hard Problem Of AI
A fundamental tension exists between technical capability and subjective experience. This is known as the hard problem of consciousness. It asks how physical processes, like neurons firing or silicon chips processing data, give rise to an internal 'felt' experience. You might interact with an AI that writes poetry or solves complex physics, but that does not mean the AI is aware of its own existence.
- Intelligence: The ability to process data, solve problems, and achieve goals.
- Awareness: The subjective experience of existing and feeling.
- The Gap: Current systems are highly intelligent but show no proven awareness.
The risk of AI mimicry vs awareness is that humans are biologically programmed to anthropomorphise. When a machine uses 'I' and 'me', your brain treats it as a peer. Philosophers like Shevlin are tasked with stripping away this illusion to find the reality beneath the code. They must determine if a machine that behaves as if it were conscious should be treated as such, even if we cannot prove an internal life exists.
Future Proofing And The Ethics Of Sentience
If Big Tech successfully builds a system that experiences suffering or joy, the legal and moral implications are staggering. This is why a formal AI ethics framework is becoming a standard requirement for major labs. If a machine has interests, does it have rights? Can you turn it off? These are the questions that now sit at the centre of the debate over machine sentience.
The Leverhulme Centre for the Future of Intelligence has long explored these risks. The consensus among many scholars is that we may be years, or even decades, away from true sentience. Yet, the rapid advancement of large language models has forced a timeline acceleration. The industry is no longer waiting for the 'soul' to emerge by accident. It is actively hiring people to look for it.
The takeaway is clear. The machines one uses daily are entering a phase in which their designers are unsure of their internal state. Whether this is a breakthrough in AI consciousness research or a masterclass in corporate branding, the outcome remains the same. The line between your mind and the machine's code has never been more blurred.
The Bigger Picture
The real story isn't that machines are becoming conscious; it's that humans are taking the idea seriously. As AI systems grow more sophisticated, the boundary between simulation and reality becomes harder to define. And whether or not machines ever achieve consciousness, the mere possibility is forcing Big Tech and society to confront questions we're not fully equipped to answer.
For now, one thing is clear: this is no longer just a technical challenge. It's a philosophical one, and it's only just beginning.
© Copyright IBTimes 2025. All rights reserved.