
Ethics and AI: 3 Conversations Companies Need to Have

Here is an excerpt from an article written by Reid Blackman and Beena Ammanath for Harvard Business Review and the HBR Blog Network. To read the complete article, check out the wealth of free resources, obtain subscription information, and receive HBR email alerts, please click here.

Credit: piranka/Getty Images

* * *

Over the past several years, concerns around AI ethics have gone mainstream. The concerns, and the outcomes everyone wants to avoid, are largely agreed upon and well documented. No one wants to push out discriminatory or biased AI. No one wants to be the target of a lawsuit or regulatory investigation for violations of privacy. But once we’ve all agreed that biased, black-box, privacy-violating AI is bad, where do we go from here? The question almost every senior leader asks is: How do we take action to mitigate those ethical risks?

Acting quickly to address concerns is admirable, but given the complexities of machine learning, of ethics, and of their points of intersection, there are no quick fixes. To implement, scale, and maintain effective AI ethical risk mitigation strategies, companies should begin with a deep understanding of the problems they’re trying to solve. A challenge, however, is that conversations about AI ethics can feel nebulous. The first step, then, is learning how to talk about AI ethics in concrete, actionable ways. Here’s how you can set the table for AI ethics conversations that make the next steps clear.

Who Needs to Be Involved?

We recommend assembling a senior-level working group that is responsible for driving AI ethics in your organization. Its members should have the skills, experience, and knowledge to keep the conversations well informed about business needs, technical capacities, and operational know-how. At a minimum, we recommend involving four kinds of people: technologists, legal/compliance experts, ethicists, and business leaders who understand the problems you’re trying to solve with AI. Their collective goal is to understand the sources of ethical risk generally, for the industry of which they are members, and for their particular company. After all, there are no good solutions without a deep understanding of the problem itself and of the potential obstacles to proposed solutions.

You need the technologists to assess what is technologically feasible, not only at the level of an individual product but also at the level of the organization. That is in part because different ethical risk mitigation plans require different technical tools and skills. Knowing where your organization stands from a technological perspective is essential to mapping out how to identify and close the biggest gaps.

Legal and compliance experts are there to help ensure that any new risk mitigation plan is compatible with, and not redundant with, existing risk mitigation practices. Legal issues loom particularly large because it is not yet clear how existing laws and regulations bear on new technologies, nor what new regulations or laws are coming down the pipeline.

Ethicists are there to help ensure a systematic and thorough investigation of the ethical and reputational risks you should attend to: not only those that arise from developing and procuring AI generally, but also those that are particular to your industry and/or your organization. Their role is especially important because compliance with outdated regulations does not ensure the ethical and reputational safety of your organization.

Finally, business leaders should help ensure that all risk is mitigated in a way that is compatible with business necessities and goals. Zero risk is an impossibility so long as anyone does anything. But unnecessary risk is a threat to the bottom line, and risk mitigation strategies should also be chosen with an eye toward what is economically feasible.

* * *

Productive conversations on ethics should go deeper than the broad-stroke examples decried by specialists and non-specialists alike. Your organization needs the right people at the table so that its standards can be defined and deepened. It should fruitfully marry its quantitative and qualitative approaches to ethical risk mitigation so it can close the gaps between where it is now and where it wants to be. And it should appreciate the complexity of the sources of its AI ethical risks. At the end of the day, AI ethical risk isn’t nebulous or theoretical. It’s concrete. And it deserves and requires a level of attention that goes well beyond the repetition of scary headlines.

* * *

Here is a direct link to the complete article.

Reid Blackman, Ph.D., is the author of Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI (Harvard Business Review Press, July 2022) and Founder and CEO of Virtue, an ethical risk consultancy. He is also a Senior Advisor to the Deloitte AI Institute, previously served on Ernst & Young’s AI Advisory Board, and volunteers as the Chief Ethics Officer of the non-profit Government Blockchain Association. Previously, Reid was a professor of philosophy at Colgate University and the University of North Carolina, Chapel Hill.

Beena Ammanath is the Executive Director of the global Deloitte AI Institute, author of the book Trustworthy AI, founder of the non-profit Humans For AI, and leader of Trustworthy and Ethical Tech for Deloitte. She is an award-winning senior executive with extensive global experience in AI and digital transformation, spanning e-commerce, finance, marketing, telecom, retail, software products, services, and industrial domains, with companies such as HPE, GE, Thomson Reuters, British Telecom, Bank of America, and E*TRADE.

* * *

Dear Reader:

A number of readers have advised me to be more proactive in “marketing” this blog. Thus, if you are so inclined, please ask one colleague or friend to sign on by clicking here.

Thank you.

Bob Morris
