Here is an excerpt from an article written by Elizabeth M. Renieris, Steven Mills, and Anne Kleppe for MIT Sloan Management Review. To read the complete article, check out other articles, sign up for email alerts, or obtain subscription information, please click here.
Illustration Credit: Carolyn Geason-Beissel/MIT SMR | Getty Images
* * *
Experts debate whether agentic AI should act like humans.
For the fourth year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts that includes academics and practitioners to help us understand how responsible artificial intelligence (RAI) is being implemented across organizations worldwide. In spring 2025, we also fielded a global executive survey yielding 1,221 responses to learn the degree to which organizations are addressing responsible AI. In our most recent article, we explored whether accountability for agentic AI requires new management approaches.
As noted in our September article, although there is no agreed-upon definition, agentic AI generally refers to AI systems that are capable of pursuing goals autonomously by making decisions, taking actions, and adapting to dynamic environments without constant human oversight. In this, our final article of the year, we dive deeper into agentic AI systems that exhibit humanlike qualities or characteristics — especially in terms of their behavior, communication style, appearance of empathy, and persuasive capabilities — also known as anthropomorphic AI. Concerns over increasingly humanlike AI are drawing headlines and legal action, ranging from wrongful death suits against popular chatbot providers to Federal Trade Commission enforcement actions, such as a recent fine against a company that claimed its “robot lawyer” surpassed the expertise of a human lawyer.
Given the growing concerns over anthropomorphic AI, we asked our panel to react to the following provocation: Responsible AI governance requires questioning the necessity of overly humanlike agentic AI systems. Nearly 80% of our panelists agree or strongly agree with the statement, arguing that responsible AI governance is not just about how a technology is designed or deployed but also about whether it should be deployed at all. While they acknowledge there’s value in humanlike systems in certain contexts, they recommend proceeding with caution given the considerable risks and point to transparency as key to mitigation. Below, we share insights from our panelists and draw on our own experience to offer recommendations on how organizations can approach anthropomorphized AI through the lens of RAI governance.
Just because we can doesn’t mean we should. Our experts believe that questioning the necessity of overly humanlike agentic AI is core to responsible AI governance. Richard Benjamins, co-CEO of RAIght.ai, explains, “Responsible AI governance is about systematically asking questions about the potential negative impacts of AI use cases on people and society, and about mitigating or preventing those impacts.” Likewise, David R. Hardoon, AI enablement head at Standard Chartered Bank, contends, “It is the role and responsibility of AI governance to inherently question the ‘why of AI,’ whether humanlike or otherwise.” And Ben Dias, chief scientist at IAG, adds, “As with any emerging technology, the ability to deploy overly humanlike agentic AI should not be conflated with the imperative to do so.” Despite this consensus, observes Öykü Işık, professor of digital strategy and cybersecurity at IMD Business School, “organizations often focus on how to implement AI safely while avoiding the harder question of whether some forms of AI are worth pursuing” in the first place.
* * *
Here is a direct link to the complete article.