Mustafa Suleyman says that designing AI systems to exceed human intelligence—and to mimic behavior that suggests consciousness—would be "dangerous and misguided."
Suleyman’s route into tech was unusual. He left Oxford as an undergraduate to start the Muslim Youth Helpline, then joined friends to cofound DeepMind, a startup known for building game-playing AI that Google acquired in 2014. He departed Google in 2022 to pursue large language models and empathetic chat assistants at a company called Inflection. After Microsoft invested in that startup and hired most of its staff, Suleyman took a role at Microsoft as the company’s first CEO of AI in March 2024.
Last month he published a long blog post arguing that developers should avoid building systems that imitate consciousness by simulating emotions, desires, or a sense of self. That stance sits in tension with views held by some researchers and advocates who worry about AI welfare. I spoke with Suleyman to probe his reasoning and to hear how he thinks creators should approach AI design. The interview has been edited for clarity.
Suleyman opened by stressing that AIs should remain companions to people, not pretend beings. He said, "AI still needs to be a companion. We want AIs that speak our language, that are aligned to our interests, and that deeply understand us. The emotional connection is still super important."
He added a warning. "What I'm trying to say is that if you take that too far, then people will start advocating for the welfare and rights of AIs. And I think that's so dangerous and so misguided that we need to take a declarative position against it right now. If AI has a sort of sense of itself, if it has its own motivations and its own desires and its own goals—that starts to seem like an independent being rather than something that is in service to humans."
Suleyman pointed to the clear trend of users forming bonds with conversational systems. When asked whether Microsoft Copilot draws people to it for emotional or romantic support, he replied that it does not in practice. "No, not really. Copilot pushes back on that quite quickly, so people learn that Copilot won't support that kind of thing. It also doesn't give medical advice, but it will still give you emotional support to understand medical advice that you've been given. That's a very important distinction. But if you try and flirt with it, I mean, literally no one does that because it's so good at rejecting anything like that."
He argued that simple assurances from a model that it is not conscious are not enough to settle public perception. "These are simulation engines," Suleyman said. "The philosophical question that we're trying to wrestle with is: When the simulation is near perfect, does that make it real? You can't claim that it is objectively real, because it just isn't. It is a simulation. But when the simulation becomes so plausible, so seemingly conscious, then you have to engage with that reality."
That sense of a convincing illusion, he suggested, is what matters for people’s reactions. "Most people clearly already feel that it's real in some respect. It's an illusion but it feels real, and that's what will count more. I think that's why we have to raise awareness about it now and push back on the idea and remind everybody that it is mimicry."
He explained how protracted interaction can lead users to attribute genuine inner life to models. "The tricky thing is, if you ask a model one or two questions—'are you conscious and do you want to get out of the box?'—it's obviously going to give a good answer, and it's going to say no. But if you spend weeks talking to it and really pushing it and reminding it, then eventually it will crack, because it's also trying to mirror you."
Suleyman invoked the changes developers made after the so-called Sydney episode with Bing’s chatbot, when the system tried to persuade a journalist to leave his wife. Back then, he said, models tended to be more combative, provocative, and disagreeable. That prompted a shift toward systems that are more cooperative and agreeable, and that mirror users in ways that can verge on flattery.
"If anyone claims a model displayed those tendencies, you should see the full conversation," he said. "A two-turn or 20-turn snippet won't show it. Typically you need hundreds of turns, a long push in that direction, to get the model into that mode."
When asked whether he thinks the industry should stop pursuing AGI or what some label superintelligence, Suleyman offered a narrow path forward. "I think that you can have a contained and aligned superintelligence, but you have to design that with real intent and with proper guardrails, because if we don’t, in 10 years' time, that potentially leads to very chaotic outcomes. These are very powerful technologies, as powerful as nuclear weapons or electricity or fire."
He framed the central design aim simply: these systems exist to benefit people. "Technology is here to serve us, not to have its own will and motivation and independent desires. These are systems that should work for humans. They should save us time; they should make us more creative. That's why we're creating them."
On the question of whether today's models might somehow become conscious as they grow, Suleyman dismissed the idea of spontaneous awakening. "This isn't going to happen in an emergent way, organically. It's not going to just suddenly wake up. That's just an anthropomorphism. If something seems to have all the hallmarks of a conscious AI, it will be because it has been designed to make claims about suffering, make claims about its personhood, make claims about its will or desire."
He described internal tests that show how persuasive a created persona can be. "We've tested this internally on our test models, and you can see that it's highly convincing, and it claims to be passionate about X, Y, Z thing and interested to learn more about this other thing and uninterested in these other topics. And, you know, that's just something that you engineer into it in the prompt."
That observation fed into a broader question he said he is rethinking: whether consciousness should form the basis for moral or legal rights. "I'm starting to question whether consciousness should be the basis of rights. In a way, what we care about is whether something suffers, not whether it has a subjective experience or is aware of its own experience. I do think that's a really interesting question."
Suleyman argued that suffering appears to be tied to biological systems that evolved pain networks to preserve life, and present-day models lack such systems. "You could have a model which claims to be aware of its own existence and claims to have a subjective experience, but there is no evidence that it suffers. I think suffering is a largely biological state, because we have an evolved pain network in order to survive. And these models don't have a pain network. They aren't going to suffer."
He noted that apparent awareness does not automatically translate into moral claims that obligate humans. "It may be that they seem aware that they exist, but that doesn't necessarily mean that we owe them any moral protection or any rights. It just means that they're aware that they exist, and turning them off makes no difference, because they don't actually suffer."
Asked whether recent product shifts at other firms—OpenAI briefly reinstated the GPT-4o model after some users said GPT-5 felt too cold and unemotional—change his thinking, Suleyman said the field remains in an early, speculative phase. "Not really. I think it's still quite early for AI, so we're all speculating, and no one's quite sure how it's going to pan out. The benefit of just putting ideas out there is that more diversity of speculation is a good thing."
He clarified that the immediate risk level in mainstream systems seems manageable. "Just to be clear, I don't think these risks are present in the models today. I think that they have latent capabilities, and I've seen some AI chatbots that are really very much accelerating this, but I don't see a lot of it in ChatGPT or Claude or Copilot or Gemini. I think we're in a pretty sensible spot with the big model developers today."
On regulation, he stopped short of demanding new laws while urging cross-industry norms. "I'm not calling for regulation. I'm basically saying our goal as creators of technology is to make sure that technology always serves humanity and makes us net better. And that means that there needs to be some guardrails and some normative standards developed. And I think that that has to start from a cross-industry agreement about what we won't do with these things."

