The Gen AI Playbook for Organizations

Here is an excerpt from an article written by Bharat N. Anand and Andy Wu for Harvard Business Review. To read the complete article, check out other resources, sign up for email alerts, and obtain subscription information, please click here.

Illustration Credit: Rhuby Dear

* * *

Leaders can’t afford to take a “wait and see” approach to adopting generative AI. They need a plan for applying it differently from others in the value chain, say the authors.

The questions about generative AI that we hear most often from business leaders include: When will gen AI match the intelligence of my best employees? Is it accurate enough to deliver business value? Is my CIO moving fast enough to lead our AI transformation? What are my rivals doing with gen AI? But those questions are misdirected. They focus on the intelligence of gen AI and its trajectory—how good gen AI is and how fast it’s improving—rather than on its implications for business strategy. What leaders should be asking is this: How can my organization use gen AI effectively today, regardless of its limitations? And how can we use it to create a competitive advantage?

This article—which draws on our experience working with hundreds of managers, leading gen AI initiatives ourselves, and researching digital transformation and strategy—proposes a framework for thinking about gen AI strategically and offers practical advice. We argue that a cautious “wait and see” approach—motivated by gen AI’s flaws, such as hallucinations—is potentially dangerous. But we don’t mean to imply that speed wins. Strategy does. Companies need to apply gen AI differently from their competitors and from others in their value chain. Here’s the argument for moving forward now:

Nontechie employees can use gen AI without support from experts. For decades AI usage was largely confined to the domain of engineers, computer programmers, and data scientists. But gen AI, led by OpenAI’s ChatGPT, changed that by enabling interactions using natural language. Its breakthrough wasn’t just an improvement in intelligence; it was also a dramatic increase in access. Today everyone in the organization can use gen AI tools, and they don’t need deep technical expertise, the support of a data science team, or central IT’s approval. What’s more, gen AI is increasingly being embedded into the tools people already use—email, videoconferencing, spreadsheets, CRM software, ERP systems—lowering the barriers to adoption even further.

This advancement in human-computer interaction resembles the transition from early command-line computing to the graphical user interface (GUI). In the 1980s, Windows radically transformed personal computing—not by making computers significantly more powerful but by allowing people to access that power without knowing MS-DOS commands. In much the same way, gen AI makes sophisticated machine-learning models available to anyone who can converse with it in writing or, eventually, by speaking.

Value-creation opportunities exist now. Waiting for a flawless, all-powerful, agentic AI is a mistake. Despite its flaws, gen AI can save time, reduce costs, and unlock new value. Holding off because the output isn’t perfect misunderstands the opportunity. Gen AI can already deliver meaningful improvements and efficiencies in many areas of your business. The benchmark shouldn’t be perfection; it should be relative efficiency compared with your current ways of working.

Competitive advantage comes from using gen AI more strategically than others, not just faster. Everyone has access to gen AI, so a lasting advantage can be achieved only by applying it differently. If you and your competitors use similar tools for similar tasks, most of the gains will ultimately flow to others in the value chain as new competition erodes margins. More perilously, your own customers and suppliers may disintermediate you by using gen AI to take care of the tasks you previously performed for them. This means that competitive advantage will hinge on how distinctively you use gen AI: which tasks you delegate to it and reimagine, how you use human expertise to complement it, and what new possibilities you unlock.

Where and When to Use Generative AI

Gen AI’s ubiquitous access and versatility create a new challenge: narrowing down the possibilities to find the best place to begin. Rather than asking whether gen AI performs as well as a human, start by breaking down jobs into their component tasks and ask: Which of these is gen AI well suited to handle today?

Consider the following activities: hiring critical employees, diagnosing cancer, and providing psychotherapy to at-risk individuals. These are often cited as areas where gen AI tools are beginning to approach human levels of intelligence and sophistication. Yet the idea of replacing humans in these roles typically meets strong resistance—and for good reason. The potential consequences of an error here are significant. Misdiagnosing cancer or mishandling a vulnerable patient can have life-altering effects. Choosing the wrong hire for a key leadership role can damage a company’s culture for years.

Now consider another set of tasks: summarizing student course evaluations, screening job applicants’ résumés, and assigning hospital beds. What distinguishes these examples from the first set isn’t necessarily the intelligence required but the cost of getting it wrong. A course evaluation summary that misses a nuance or a preliminary résumé screen that overlooks a marginal candidate creates only limited risk. Assigning hospital beds relies primarily on explicit, structured data (such as availability, patient needs, and expected discharge rates), which AI systems can process reliably.

This illustrates an important principle: The suitability of gen AI for a given task depends not just on the capabilities of gen AI but on two deeper factors. The first is the cost of errors: how serious the consequences would be if gen AI made a mistake. If an error in a task would lead to serious harm, financial loss, or reputational damage, then firms must be far more cautious about employing gen AI to perform it without human oversight. The second factor is the type of knowledge the task demands. Tasks that rely on explicit data (structured or unstructured information that can be captured and processed), such as screening résumés and summarizing course evaluations, are well suited for gen AI. Other tasks—such as psychotherapy, hiring for soft skills, and nuanced leadership decisions—require tacit knowledge: empathy, ethical reasoning, intuition, and contextual judgment built through human experience. These tasks are fundamentally harder for gen AI to perform because they involve not just retrieving information but also interpreting nuance, responding flexibly to context, and applying judgment in ambiguous situations.

These two dimensions—cost of errors and type of knowledge required—form the foundation of our framework for identifying where and how to use gen AI effectively. (See the exhibit “A Framework for Choosing Where and How to Use Gen AI.”)

* * *

Here is a direct link to the complete article.

Bharat N. Anand is the Richard R. West Dean and a professor of business administration at New York University’s Stern School of Business.
Andy Wu is the Arjun and Minoo Melwani Family Associate Professor of Business Administration in the Strategy Unit at Harvard Business School and a senior fellow at the Mack Institute for Innovation Management at the Wharton School.