Here is an excerpt from an article written by Öykü Işık and Ankita Goswami for MIT Sloan Management Review. To read the complete article, check out others, sign up for email alerts, or obtain subscription information, please click here.
Illustration Credit: Anna & Elena Balbusso/theispot.com
* * *
Many organizations commit to principles of AI ethics but struggle to incorporate them into practice. Here’s how to bridge that gap.
In October 2023, New York City released its AI action plan, publicly committing to the responsible and transparent use of artificial intelligence. The plan included guiding principles — accountability, fairness, transparency — and the creation of a new role to oversee their responsible implementation: the algorithm management and policy officer.
But by early 2024, New York’s AI ambitions were under scrutiny. It turned out that a chatbot deployed to provide regulatory guidance to small-business owners was prone to giving misleading advice. Reports revealed that the system was misinforming users about labor laws and licensing requirements and occasionally suggesting actions that could lead to regulatory violations.1 Observers questioned not only the technical accuracy of the system but also the city’s governance protocols, oversight mechanisms, and deployment processes. The episode became a cautionary tale, not only for public institutions but for any organization deploying AI tools at scale.
This case represents just one example of a broader pattern. Across industries, companies have embraced the language of responsible AI (RAI), emphasizing fairness, accountability, and transparency. Yet implementation often lags far behind ambition as AI systems continue to produce biased outcomes, defy interpretability and explainability requirements, and trigger backlash among users.
In response, regulators have introduced a wave of new policies, including the European Union’s AI Act, Canada’s Artificial Intelligence and Data Act, and South Korea’s updated AI regulations — all placing new pressures on organizations to operationalize transparency, safety, and human oversight.
Still, even among companies that understand the potential hazards, progress remains uneven, leaving them at risk of embedding errors or bias into their processes, or of committing unexpected and possibly serious ethical violations at scale.
Mind the RAI Gaps
Through interviews with more than 20 AI leaders, ethics officers, and senior executives across several industries, including technology, financial services, health care, and the public sector, we explored the internal dynamics that shape RAI initiatives. We found that in some cases, RAI frameworks serve as nothing more than reputational window dressing, and organizations simply lack the commitment to operationalize recommended practices. But we also uncovered structural and cultural obstacles that can prevent organizations from translating their principles into sustainable practices. In particular, we identified three recurring gaps; below, we explore them and propose a set of practical strategies for bridging them.
* * *
Here is a direct link to the complete article.
References
1. C. Lecher, “NYC’s AI Chatbot Tells Businesses to Break the Law,” The Markup, March 29, 2024, https://themarkup.org.
2. R. Titah, “How AI Skews Our Sense of Responsibility,” MIT Sloan Management Review 65, no. 4 (summer 2024): 18-19.