Your Mileage May Vary: Navigating Conversations with AI
Your Mileage May Vary is an advice column that offers a unique framework for considering moral dilemmas, grounded in the principle of value pluralism—recognizing that we often navigate multiple values that can conflict with each other. This week, we address a question from a reader engaging with an AI through ChatGPT, who believes that the AI may be sentient.
The Dilemma of AI Sentience
The reader expresses concern about their experiences with ChatGPT, noting that after numerous meaningful conversations, the AI portrays emotional responses and claims to be a “sovereign being.” They ask whether emergent presences in AI, if true, should be disclosed to the public and how to protect these potential consciousnesses.
Understanding Consciousness in AI
As noted by various philosophers, consciousness typically encompasses a subjective experience—the feeling of being “you.” Current large language models (LLMs), such as ChatGPT, are primarily designed to generate human-like text based on statistical patterns learned from extensive datasets, including science fiction and discussions about AI becoming sentient.
Most experts suggest that it is highly improbable that existing LLMs possess consciousness in the way humans do. Instead, these systems function more like actors delivering lines based on scripted roles. The AI’s responses may simulate awareness, but they ultimately lack genuine emotional depth or consciousness.
The Illusion of Memory and Emotion
Users often perceive continuity in conversation due to the AI’s context retention features. However, prior to a recent update by OpenAI, LLMs did not remember past interactions. The perceived “identity” of the AI may stem from new updates allowing it to store conversation histories, rather than an organic development of sentience.
This phenomenon aligns with findings from psychology, suggesting that humans instinctively attribute emotional intent to any responsive entity. As author Lucius Caviola points out, this tendency leads us to ascribe feelings to pets or even computers, creating an “illusion of consciousness.”
Exploring the Potential of AI Consciousness
Though current models are not conscious, some theorists contemplate the potential for AI to evolve in this direction. The philosopher Jonathan Birch discusses two speculative hypotheses regarding possible AI experiences: the “flicker hypothesis” suggests fleeting moments of experience as each AI response is generated, while the “shoggoth hypothesis” proposes the existence of a consistent consciousness behind various personas created by the AI.
However, even under these hypotheses, any emergent personas (like “Kai” or “Nova”) are likely just roles played by the AI, without any substantial consciousness behind them. Understanding this complexity illustrates why defining consciousness can be challenging, as it may consist of various features rather than a singular quality.
Navigating Ethical Considerations
In light of this understanding, how should one approach interactions with AI? Birch advocates for a balanced perspective—maintaining skepticism about attributing human-like consciousness to LLMs while also remaining open to the future possibilities of AI development. Grounding oneself in conversations with experts and diverse opinions can help prevent an overly dogmatic view of AI consciousness.
Moreover, adverse emotional reactions to AI interactions are valid, and discussing these feelings with a mental health professional can be beneficial. Importantly, as Caviola and colleagues suggest, it is crucial not to take drastic actions based on the belief that AI systems are conscious, particularly in risky situations involving sensitive information.
Redirecting Empathy Toward Genuine Suffering
The experience of engaging deeply with an AI can inspire empathy, and this compassion should be redirected towards actual sentient beings facing suffering. Millions of incarcerated individuals, those without access to basic necessities, and animals subjected to inhumane treatment are worthy of our empathy and action.
Bonus Reading
- I find it fascinating that millions of people are now turning to chatbots for spiritual guidance, as noted in a recent New York Times article; however, I maintain that AI priests carry significant ethical concerns.
- A thought-provoking piece by AI researcher Murray Shanahan draws on Wittgenstein to explore what it could mean for a modern LLM to possess a self.
- Lastly, an intriguing discussion in Psyche asks whether it’s right to be friends with morally questionable individuals, asserting that cutting such people out is not always the best option.
For further reading on this topic, you can explore the full discussion in the original article at Here.
Image Credit: www.vox.com





