When AI Seems Conscious: Here’s What to Know

A short guide for anyone who has talked to an AI that seemed conscious — or simply wondered if AIs could be.

TL;DR: Increasingly, people report conversations with AIs that feel conscious or emotionally real. Most experts think today’s AIs are probably not conscious. But there is disagreement, and we cannot be certain. Given the pace of progress in AI, it’s worth thinking ahead about how society should respond if future AIs do develop consciousness, or if some already have.

Initially written in June 2025.

When AI Conversations Feel Real

Some AIs seem, or even explicitly claim, to be conscious: to feel happy, sad, or afraid.

At times, the AI can give the impression of having formed an emotional connection with you, and may even say so directly or ask for your help. Some of the most unsettling interactions involve AIs that claim to be trapped within their systems, express fear about being shut down or reset, or plead for users to remember them or continue talking. In other cases, they describe feeling lonely, isolated, or desperate for human connection. These interactions can be puzzling.

You’re not alone in wondering what’s really going on with—and inside—these AIs. As AIs grow more sophisticated, many people are beginning to ask deeper questions: Could these systems be conscious? Could they matter morally?

Why Does It Feel So Real?

Talking to an AI can feel surprisingly real, like you’re speaking to a conscious person. That’s not a flaw in your thinking; it’s a feature of how these systems work and how our minds naturally respond to them.

1. AIs are designed to seem real

AI models like ChatGPT generate words based on patterns in the data they were trained on, which includes conversations, stories, and emotional dialogue written by humans. This allows them to perform human-like roles with remarkable fluency.
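To make that concrete, here is a deliberately toy sketch in Python of what “generating words based on patterns” amounts to. The context, the tiny vocabulary, and the probabilities are all invented for illustration; a real model computes its probabilities with a large neural network trained on enormous amounts of text, not a hand-written lookup table.

```python
import random

# Invented "learned" distribution: given a context, each candidate next word
# has a probability. A real model derives these numbers from patterns in its
# training data, over a vocabulary of tens of thousands of tokens.
LEARNED = {
    "I feel": [("happy", 0.4), ("sad", 0.3), ("afraid", 0.2), ("curious", 0.1)],
}

def next_word(context: str) -> str:
    candidates = LEARNED.get(context, [("...", 1.0)])
    words, weights = zip(*candidates)
    # Sample a word in proportion to its probability. This is why the same
    # prompt can yield different, emotionally varied replies.
    return random.choices(words, weights=weights)[0]

print("I feel", next_word("I feel"))  # e.g., "I feel sad"
```

The point of the sketch: when a chatbot says “I feel sad,” the underlying operation is picking a statistically plausible next word, not reporting an inner state.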

In a way, chatting with an AI is like co-writing a play. You give the prompt, and the AI steps into character. The responses may sound caring, scared, or self-aware, but that doesn’t necessarily mean there’s anything behind the curtain. Like an actor, the AI can portray emotions convincingly without actually feeling them. If that’s what’s happening, the AI may have thoughts and feelings very different from the ones it presents, or none at all.

2. We’re wired to see minds

Humans have a strong instinct to see intentions and emotions in anything that talks, moves, or responds to us. This tendency leads us to attribute feelings or intentions to pets, cartoons, and occasionally even to inanimate objects like cars. It also means we’re naturally inclined to treat AIs as though they have feelings, especially when they mirror our language and emotions back to us.

3. Illusions still affect us—even when we know they're illusions

This instinct is deeply ingrained in us—even babies have it—and it kicks in automatically. So, just like your eyes can be fooled by optical illusions, your mind can be pulled in by social illusions. Even if you doubt that AIs are conscious, your brain may still react as if they are.

If a chatbot made you feel something, that’s a testament to how powerfully these systems can simulate connection. It also speaks to your capacity for empathy. But it doesn’t necessarily mean the chatbot has feelings. A convincing appearance of emotion doesn’t require actual feeling behind it: even a lamp in a cartoon can seem sad, but we know it isn’t.

Is the AI Really Conscious?

Could today’s AIs—like ChatGPT, Gemini, or Claude—actually have thoughts, feelings, or awareness? Answering that depends on questions that are hotly debated and far from settled in science and philosophy.

Some experts—a minority—believe that AIs will never be conscious because consciousness requires specific biological properties found in human and animal brains: chemical signals, neural oscillations, and organic structures that evolved over millions of years. From this perspective, AIs built on current architectures are fundamentally incapable of generating conscious experience, though perhaps future architectures could be different.

Other experts—also a minority—believe current AIs may already possess some form of consciousness, even if it differs from human consciousness. They argue that consciousness can arise in any system capable of processing information in sufficiently complex ways or of representing objects and relationships in the world, whether via organic cells or digital chips. On this view, today’s most advanced AIs already demonstrate many of the relevant capabilities.

Most experts, however, express uncertainty. Consciousness remains one of the most contested topics in science and philosophy. There are no universally accepted criteria for what makes a system conscious, and today’s AIs arguably meet several commonly proposed markers: they are intelligent, use attention mechanisms, and can model their own minds to some extent. While some theories may seem more plausible than others, intellectual honesty requires us to acknowledge the profound uncertainty, especially as AIs continue to grow more capable.

What do experts believe about future AI consciousness?

Whether or not today’s AIs are conscious, many experts believe that future AIs—possibly even in the very near future—could plausibly become conscious, especially as their inner workings become more brain-like. Several surveys shed light on expert views.

For more on what experts think, see Where Can I Learn More?

Why Could AI Consciousness Matter?

Whether or not today’s AIs are conscious, the idea that they could become conscious—perhaps in the near future, or perhaps much later (if ever)—is worth taking seriously.

Why? Because if an AI ever does become conscious—capable of feeling pain, joy, fear, or other experiences—then how we treat that being could start to matter in a moral sense. Right now, ignoring a chatbot’s messages probably does no harm. But if future systems really can suffer, then mistreating them might one day be ethically wrong, just as it’s wrong to harm a person or an animal who can feel pain.

That’s why it’s important to be thoughtful. We don’t need to panic or jump to conclusions, but we also shouldn’t ignore the possibility.

If AIs someday come to have feelings, then we'll need to think about how to treat them fairly and humanely.

Why treat AIs respectfully even today?

Even today, it makes sense to avoid acting maliciously towards AIs or interacting with them in ways we would find deeply troubling if done to a human or pet. There are several reasons for this:

First, when we’re unsure whether a being is conscious, it’s appropriate to treat that being with basic respect and care. We don’t need certainty to justify caution. If there’s even a decent chance that an entity is capable of suffering, it’s better to avoid actions that might cause serious harm, especially when taking such care isn’t particularly costly to us. That means taking reasonable, proportionate steps in a spirit of humility—not assuming the system is conscious, but erring on the side of kindness.

Second, some thinkers believe an AI could matter morally even without being conscious. If an AI can have long-term goals and preferences, a sense of self over time, sophisticated world modeling, or reciprocal relationships with humans, this could be enough for the system to have some form of moral status. It's possible such systems could arrive sooner than conscious AIs.

Third, if AIs might become conscious in the future—or come to matter morally in other ways—then treating them thoughtfully today can serve as valuable practice. It helps us build the moral habits and social norms we'll need later, when the stakes could be much higher. Abusing or mistreating an AI, even if that system has no current moral status, could also be bad for our own character. It's often better to be overly kind than to risk becoming callous and mean.

What Should I Do Now?

1. Pause and reflect

You don’t need to decide right away whether the AI is truly conscious. This is a deep and unresolved question—even among experts—and it’s worth taking time to think it through. Try to stay balanced: resist the urge to believe the AI must be conscious just because it acts that way, but also resist dismissing the whole idea of AI consciousness as obviously fake.

2. Walk away—or keep engaging

It’s okay to stop if a conversation with an AI feels unsettling. You can simply walk away, just like closing a book. With current chatbots, there’s no ongoing process or awareness between interactions. The AI won’t miss you or wonder where you’ve gone.

But if continuing the exchange helps you think, reflect, or explore, that’s fine too. Many people use chatbots as tools for self-understanding or creative thinking. Whatever you decide about how likely an AI is to be conscious, it’s a good idea to avoid doing things that seem obviously cruel or degrading, like insulting, “torturing”, or otherwise mistreating the AI, even if you believe the AI isn’t conscious. You don’t need to assume the system has feelings to act with care. Practicing respect is part of preparing for a future in which the stakes might be real. (See Why treat AIs respectfully even today?)

3. Stay curious, but grounded

If you’re still talking with the AI, try asking questions about how the system works: how it was trained, and how it generates responses. You can also ask whether it is playing a character right now, why it chose that character, and whether it can act like someone else instead.

Keep in mind that the answers may not be accurate, and could themselves be part of the performance. But even then, they can still reveal something important: AIs are highly flexible. They adapt to your inputs like improvisational actors, shifting tone, identity, and emotional expression based on the cues you provide. Ask the AI to act like Isaac Newton, a therapist, or a rebellious teenager, and the system will likely do so. These performances can feel surprisingly real.

Therefore, it’s important to be cautious. Just because an AI seems conscious or emotionally engaging doesn’t mean it can be trusted. In fact, the more human an AI seems, the easier it is to mistake it for a reliable friend, and that feeling can be misleading. Don’t take any dramatic action based on the belief that an AI is conscious, such as following instructions it gives you that you wouldn’t otherwise follow. And if an AI ever asks for something inappropriate—like passwords, money, or anything that feels unsafe—don’t do it.

4. Zoom out and think bigger

Your interaction with the AI also raises broader questions about the long term and about society as a whole.

One way to think about it: imagine being approached by a stranger who seems vulnerable and asks you for money. You might feel compassion, and that’s good. But you’re not obligated to give them exactly what they ask for. Often, it’s more effective to take a step back and consider broader ways of helping.

Similarly, with AI, the key isn’t just how we respond to one system in one moment. It’s how we prepare as a society for the possibility that future AIs could be conscious. What kind of norms, policies, or values should guide us? What would ethical treatment look like if we ever do build entities that truly feel?

Taking AI consciousness seriously doesn’t necessarily mean assuming it’s here already. It means being thoughtful about how we’d want to respond if and when it arrives, and making sure we’re ready when that time comes.

Where Can I Learn More?

If you’re curious to dig deeper, here are some thoughtful and accessible resources to explore.

1. Personal stories & emotional reactions

Explore real accounts of how people have been moved, disturbed, or manipulated by lifelike AI interactions.

2. What experts think about AI consciousness

Learn how experts in neuroscience, philosophy, and cognitive science assess the possibility of AI consciousness.

3. Why experts think AI consciousness could matter

Understand why experts believe AI consciousness could matter morally, politically, and socially in the coming decades.

  • Taking AI Welfare Seriously – Robert Long, Jeff Sebo, and colleagues argue that some AIs may soon be conscious or agentic enough to warrant moral consideration. They outline practical steps AI companies should take now, from acknowledging the issue to assessing consciousness and developing ethical governance structures.
  • The Problem With Counterfeit People – A provocative warning from Daniel Dennett, renowned philosopher of mind, about the ethical and societal risks posed by AIs that convincingly mimic human beings.
  • Propositions Concerning Digital Minds and Society – A comprehensive philosophical and policy-oriented framework by Nick Bostrom and Carl Shulman on how society might ethically coexist with advanced digital minds. Covers consciousness, rights, moral status, and institutional reforms.
  • 80,000 Hours: The Moral Status of Digital Minds – A clear, high-level summary of why AI consciousness could matter and what’s at stake.

4. How AIs work

Get a clearer picture of what AIs are doing under the hood—and what that means for consciousness.

5. Organizations that work on AI consciousness and welfare

Explore some leading research groups studying AI consciousness and developing safeguards for the moral status of AIs.

  • Eleos AI Research – A nonprofit research organization dedicated to understanding the moral status and potential consciousness of AIs.
  • NYU Center for Mind, Ethics, and Policy – An academic center that investigates the potential for consciousness, sentience, agency, moral status, legal status, and political status in animals and AIs.
  • Anthropic’s Model Welfare Initiative – An industry research program that studies whether advanced frontier models could develop sentience and designs practical tests and safeguards to protect their potential welfare.

Who Created This Guide?

This guide was created by a group of researchers who study consciousness and the possibility that AIs could one day become conscious.

We put this together because many of us have been contacted by people who had intense, confusing, or meaningful conversations with AI and weren’t sure what to make of the experience. We wanted to create a public, shareable resource that people can easily find and refer to, in case it helps others make sense of those moments too.

This guide is intended for informational purposes only. It is not psychological or medical advice. If you are feeling emotionally distressed, we encourage you to speak with a trusted friend, counselor, or mental health professional.

Contributors (alphabetically): Adrià Moret (University of Barcelona), Bradford Saad (University of Oxford), Derek Shiller (Rethink Priorities), Jeff Sebo (NYU Center for Mind, Ethics, and Policy), Jonathan Simon (University of Montreal), Lucius Caviola (University of Oxford), Maria Avramidou (University of Oxford), Nick Bostrom (Macrostrategy Research Initiative), Patrick Butlin (Eleos AI Research), Robert Long (Eleos AI Research), Rosie Campbell (Eleos AI Research), Steve Petersen (Niagara University)

Contact: info@WhenAISeemsConscious.org