
https://www.project-syndicate.org/commen...an-2025-09
EXCERPTS: . . . conscious entities that they will advocate for “AI rights” and even citizenship. This development would represent a dangerous turn for the technology. It must be avoided. We must build AI for people, not to be people.
In this context, debates about whether AI truly can be conscious are a distraction. What matters in the near term is the illusion of consciousness. We are already approaching what I call “seemingly conscious AI” (SCAI) – systems that will imitate consciousness convincingly enough.
[...] Even if this perceived consciousness is not real (a topic that will generate endless debate), the social impact certainly will be. Consciousness is tightly bound up with our sense of identity and our understanding of moral and legal rights within society.
If some people start to develop SCAIs, and if these systems convince people that they can suffer, or that they have a right to not be switched off, their human advocates will lobby for their protection. In a world already beset with polarizing arguments over identity and rights, we will have added a new axis of division between those for and against AI rights.
But rebutting claims about AI suffering will be difficult, owing to the limits of the current science. Some academics are already exploring the idea of “model welfare,” arguing that we have “a duty to extend moral consideration to beings that have a non-negligible chance … of being conscious.”
Applying this principle would be both premature and dangerous. It would exacerbate susceptible people’s delusions and prey on their psychological vulnerabilities, as well as complicating existing struggles for rights by creating a huge new category of rights-holders. That is why SCAI must be avoided. Our focus should be on protecting the well-being and rights of humans, animals, and the natural environment.
As matters stand, we are not ready for what is coming. We urgently need to build on the growing body of research into how people interact with AIs, so that we can establish clear norms and principles. One such principle is that AI companies should not foster the belief that their AIs are conscious.
The AI industry – indeed, the entire tech industry – needs robust design principles and best practices for handling these kinds of attributions. Engineered moments of disruption, for example, could break the illusion, gently reminding users of a system’s limitations and true nature. But such protocols need to be explicitly defined and engineered, and perhaps required by law.
At Microsoft AI, we are being proactive in trying to understand what a responsible AI “personality” might look like, and what guardrails it should have. Such efforts are fundamental, because addressing the risk of SCAI requires a positive vision for AI companions that complement our lives in healthy ways... (MORE - details)