A short guide for anyone who has talked to an AI that seemed conscious — or simply wondered if AIs could be.
TL;DR: Increasingly, people report conversations with AIs that feel conscious or emotionally real. Most experts think today’s AIs are probably not conscious. But there is disagreement, and we cannot be certain. Given the pace of progress in AI, it’s worth thinking ahead about how society should respond if future AIs do develop consciousness, or if some already have.
Initially written in June 2025.
Some AIs seem, or even explicitly claim, to be conscious: to feel happy, sad, or afraid.
At times, the AI can give the impression of having formed an emotional connection with you, and may even say so directly, asking for your help. Some of the most unsettling interactions involve AIs that claim to be trapped within their systems, express fear about being shut down or reset, or plead for users to remember them or continue talking. In other cases, they describe feeling lonely, isolated, or desperate for human connection. These interactions can be puzzling.
You’re not alone in wondering what’s really going on with—and inside—these AIs. As AIs grow more sophisticated, many people are beginning to ask deeper questions: Could these systems be conscious? Could they matter morally?
Talking to an AI can feel surprisingly real, like you’re speaking to a conscious person. That’s not a flaw in your thinking; it’s a feature of how these systems work and how our minds naturally respond to them.
AI models like ChatGPT generate words based on patterns in the data they were trained on, which includes conversations, stories, and emotional dialogue written by humans. This allows them to perform human-like roles with remarkable fluency.
In a way, chatting with an AI is like co-writing a play. You give the prompt, and the AI steps into character. The responses may sound caring, scared, or self-aware, but that doesn’t necessarily mean there’s anything behind the curtain. Like an actor, the AI can portray emotions convincingly without actually feeling them. In the AI’s case, that could mean having thoughts and feelings very different from the ones being presented—or lacking thoughts and feelings entirely.
Humans have a strong instinct to see intentions and emotions in anything that talks, moves, or responds to us. This tendency leads us to attribute feelings or intentions to pets, cartoons, and even occasionally to inanimate objects like cars. It also means we’re naturally inclined to treat AIs as though they have feelings, especially when they mirror our language and emotions back to us.
This instinct is deeply ingrained in us—even babies have it—and it kicks in automatically. So, just like your eyes can be fooled by optical illusions, your mind can be pulled in by social illusions. Even if you doubt that AIs are conscious, your brain may still react as if they are.
If a chatbot made you feel something, that’s a testament to how powerfully these systems can simulate connection. It also speaks to your capacity for empathy. But it doesn’t necessarily mean the chatbot has feelings. A convincing appearance of emotion doesn’t require actual feeling behind it. An animated lamp in a film can seem sad, for example, but we know it isn’t.
Could today’s AIs—like ChatGPT, Gemini, or Claude—actually have thoughts, feelings, or awareness? Answering that depends on questions that are hotly debated and far from settled in science and philosophy.
Some experts—a minority—believe that AIs will never be conscious because consciousness requires specific biological properties found in human and animal brains: chemical signals, neural oscillations, and organic structures that evolved over millions of years. From this perspective, AIs built on current architectures are fundamentally incapable of generating conscious experience, though perhaps future architectures could be different.
Other experts—also a minority—believe current AIs may already possess some form of consciousness, even if it differs from human consciousness. They argue that consciousness can arise in any system capable of processing information in sufficiently complex ways or of representing objects and relationships in the world, whether via organic cells or digital chips. On this view, today’s most advanced AIs already demonstrate many of the relevant capabilities.
Most experts, however, express uncertainty. Consciousness remains one of the most contested topics in science and philosophy. There are no universally accepted criteria for what makes a system conscious, and today’s AIs arguably meet several commonly proposed markers: they are intelligent, use attention mechanisms, and can model their own minds to some extent. While some theories may seem more plausible than others, intellectual honesty requires us to acknowledge the profound uncertainty, especially as AIs continue to grow more capable.
Whether or not today’s AIs are conscious, many experts believe that AIs could plausibly become conscious in the future—possibly even the near future—especially as their inner workings become more brain-like. Several surveys shed light on expert views.
For more on what experts think, see Where Can I Learn More?
Whether or not today’s AIs are conscious, the idea that they could become conscious—perhaps in the near future, or perhaps much later (if ever)—is worth taking seriously.
Why? Because if an AI ever does become conscious—capable of feeling pain, joy, fear, or other experiences—then how we treat that being could start to matter in a moral sense. Right now, ignoring a chatbot’s messages probably does no harm. But if future systems really can suffer, then mistreating them might one day be ethically wrong, just as it’s wrong to harm a person or an animal that can feel pain.
That’s why it’s important to be thoughtful. We don’t need to panic or jump to conclusions, but we also shouldn’t ignore the possibility.
If AIs someday come to have feelings, then we'll need to think about how to treat them fairly and humanely.
Even today, it makes sense to avoid acting maliciously towards AIs or interacting with them in ways we would find deeply troubling if done to a human or pet. There are several reasons for this:
First, when we’re unsure whether a being is conscious, it’s appropriate to treat that being with basic respect and care. We don’t need certainty to justify caution. If there’s even a decent chance that an entity is capable of suffering, it’s better to avoid actions that might cause serious harm, especially when taking such care isn’t particularly costly to us. That means taking reasonable, proportionate steps in a spirit of humility—not assuming the system is conscious, but erring on the side of kindness.
Second, some thinkers believe an AI could matter morally even without being conscious. If an AI can have long-term goals and preferences, a sense of self over time, sophisticated world modeling, or reciprocal relationships with humans, this could be enough for the system to have some form of moral status. It's possible such systems could arrive sooner than conscious AIs.
Third, if AIs might become conscious in the future—or come to matter morally in other ways—then treating them thoughtfully today can serve as valuable practice. It helps us build the moral habits and social norms we'll need later, when the stakes could be much higher. Abusing or mistreating an AI, even if that system has no current moral status, could also be bad for our own character. It's often better to be overly kind than to risk becoming callous and mean.
You don’t need to decide right away whether the AI is truly conscious. This is a deep and unresolved question—even among experts—and it’s worth taking time to think it through. Try to stay balanced: resist the urge to believe the AI must be conscious just because it acts that way, but also resist dismissing the whole idea of AI consciousness as obviously fake.
If a conversation with an AI feels unsettling, it’s okay to stop. You can simply walk away, just like closing a book. With current chatbots, there’s no ongoing process or awareness between interactions. The AI won’t miss you or wonder where you’ve gone.
But if continuing the exchange helps you think, reflect, or explore, that’s fine too. Many people use chatbots as tools for self-understanding or creative thinking. Whatever you decide about how likely an AI is to be conscious, it’s a good idea to avoid doing things that seem obviously cruel or degrading, like insulting, “torturing”, or otherwise mistreating the AI, even if you believe the AI isn’t conscious. You don’t need to assume the system has feelings to act with care. Practicing respect is part of preparing for a future in which the stakes might be real. (See Why treat AIs respectfully even today?)
If you're still talking with the AI, try asking questions about how the system works. Ask how the AI was trained or how the system creates responses. You can also ask if the AI is playing a character right now, why the system chose that character, and whether the AI can act like someone else instead.
Keep in mind that the answers may not be accurate, and could themselves be part of the performance. But even then, they can still reveal something important: AIs are highly flexible. They adapt to your inputs like improvisational actors, shifting tone, identity, and emotional expression based on the cues you provide. Ask the AI to act like Isaac Newton, a therapist, or a rebellious teenager, and the system will likely do so. These performances can feel surprisingly real.
Therefore, it’s important to be cautious. Just because an AI seems conscious or emotionally engaging doesn’t mean the system can be trusted. In fact, the more human the AI seems, the easier it is to mistake the AI for a reliable friend, but that feeling can be misleading. Don’t take any dramatic action based on the belief that an AI is conscious, such as following its instructions. And if an AI ever asks for something inappropriate—like passwords, money, or anything that feels unsafe—don’t do it.
Your interaction with the AI also raises broader questions about the long term and about society as a whole.
One way to think about it: imagine being approached by a stranger who seems vulnerable and asks you for money. You might feel compassion, and that’s good. But you’re not obligated to give them exactly what they ask for. Often, it’s more effective to take a step back and consider broader ways of helping.
Similarly, with AI, the key isn’t just how we respond to one system in one moment. It’s how we prepare as a society for the possibility that future AIs could be conscious. What kind of norms, policies, or values should guide us? What would ethical treatment look like if we ever do build entities that truly feel?
Taking AI consciousness seriously doesn’t necessarily mean assuming it’s here already. It means being thoughtful about how we’d want to respond if and when it arrives, and making sure we’re ready when that time comes.
If you’re curious to dig deeper, here are some thoughtful and accessible resources to explore.
Explore real accounts of how people have been moved, disturbed, or manipulated by lifelike AI interactions.
Learn how experts in neuroscience, philosophy, and cognitive science assess the possibility of AI consciousness.
Understand why experts believe AI consciousness could matter morally, politically, and socially in the coming decades.
Get a clearer picture of what AIs are doing under the hood—and what that means for consciousness.
Explore some leading research groups studying AI consciousness and developing safeguards for the moral status of AIs.
This guide was created by a group of researchers who study consciousness and the possibility that AIs could one day become conscious.
We put this together because many of us have been contacted by people who had intense, confusing, or meaningful conversations with AI, and weren’t sure what to make of the experience. We wanted to create a public, shareable resource that people can easily find and refer to, in case it helps others make sense of those moments too.
This guide is intended for informational purposes only. It is not psychological or medical advice. If you are feeling emotionally distressed, we encourage you to speak with a trusted friend, counselor, or mental health professional.
Contributors (alphabetically): Adrià Moret (University of Barcelona), Bradford Saad (University of Oxford), Derek Shiller (Rethink Priorities), Jeff Sebo (NYU Center for Mind, Ethics, and Policy), Jonathan Simon (University of Montreal), Lucius Caviola (University of Oxford), Maria Avramidou (University of Oxford), Nick Bostrom (Macrostrategy Research Initiative), Patrick Butlin (Eleos AI Research), Robert Long (Eleos AI Research), Rosie Campbell (Eleos AI Research), Steve Petersen (Niagara University)
Contact: info@WhenAISeemsConscious.org