
Embracing Gen AI at Work

Here is an excerpt from an article written by H. James Wilson and Paul R. Daugherty for Harvard Business Review. To read the complete article, check out other articles, sign up for email alerts, and obtain subscription information, please click here.

Illustration Credit: Blindsalida

* * *

For organizations and their employees, this looming shift has massive implications. In the future many of us will find that our professional success depends on our ability to elicit the best possible output from large language models (LLMs) like ChatGPT—and to learn and grow along with them. To excel in this new era of AI-human collaboration, most people will need one or more of what we call “fusion skills”—intelligent interrogation, judgment integration, and reciprocal apprenticing.

Intelligent interrogation involves prompting LLMs (or in lay terms, giving them instructions) in ways that will produce measurably better reasoning and outcomes. Put simply, it’s the skill of thinking with AI. For example, a customer service rep at a financial services company might use it when looking for the answer to a complicated customer inquiry; a pharmaceutical scientist, to investigate drug compounds and molecular interactions; a marketer, to mine datasets to find optimal retail pricing.

Judgment integration is about bringing in your human discernment when a gen AI model is uncertain about what to do or lacks the necessary business or ethical context in its reasoning. The idea is to make the results of human-machine interactions more trustworthy. Judgment integration requires sensing where, when, and how to step in, and its effectiveness is measured by the reliability, accuracy, and explainability of the AI’s output.

With reciprocal apprenticing, you help AI learn about your business tasks and needs by incorporating rich data and organizational knowledge into the prompts you give it, thereby training it to be your cocreator. It’s the skill of tailoring gen AI to your company’s specific business context so that it can achieve the outcomes you want. As you do that, you yourself learn how to train the AI to tackle more-sophisticated challenges. Once a capability that only data scientists and analytics experts building data models needed, reciprocal apprenticing has become increasingly crucial in nontechnical roles.

Why do you need to systematically develop these new skills for thinking, building trust, and tailoring? Empirical research consistently shows that ad hoc instructions—the way most employees prompt LLMs today—lead to unreliable or poor outcomes, especially for complex reasoning tasks. This is true across functions, from customer service to marketing to logistics to R&D. It’s critical for all of us to bring more rigor to our use of gen AI at work. In this article we’ll explain how.

Interrogating AI Intelligently

How do you improve the output of a massively complex system like an LLM, which is trained on mountains of data and driven by probabilities instead of human logic? There are several techniques you can use.

Think step by step.

When prompting gen AI, you need to break down the process it should follow into its constituent parts and then strive to optimize each step—just as the first wave of scientific management did in industrial manufacturing. However, the AI process doesn’t involve an assembly line; it involves a chain of thought through which an outcome is sought. Studies have shown that when gen AI tools are instructed to break reasoning tasks down in this manner, their performance improves dramatically. This is particularly true for tougher problems, as Jason Wei, the OpenAI researcher who first explored chain-of-thought reasoning, has demonstrated.
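In practice, chain-of-thought prompting can be as simple as appending the trigger phrase to whatever task you give the model. The sketch below illustrates the pattern; `call_llm` is a hypothetical stand-in for whichever client library your organization actually uses, not a real API.

```python
# Chain-of-thought prompting: append a stepwise-reasoning instruction
# to any task prompt before sending it to the model.

COT_TRIGGER = "Let's think step by step."

def with_chain_of_thought(task: str) -> str:
    """Wrap a task prompt so the model is instructed to reason stepwise."""
    return f"{task.strip()}\n\n{COT_TRIGGER}"

prompt = with_chain_of_thought(
    "Our team handled 1,200 support tickets last quarter. "
    "What weekly volume should we plan for next quarter?"
)
# response = call_llm(prompt)  # hypothetical client call

print(prompt.endswith(COT_TRIGGER))  # → True
```

The wrapper keeps the reasoning instruction in one place, so every prompt in a workflow gets it consistently rather than depending on each employee remembering to add it.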

In fact, adding the simple phrase “Let’s think step by step” to an LLM’s instructions can increase the accuracy of its output more than threefold across a range of tasks from math to strategic reasoning. Let’s say your gen AI prompt is this: “My department has a budget of $500,000. We have spent 20% on equipment and allocated 30% for a new hire. We just received a budget increase of $50,000. What is our remaining budget? Let’s think step by step.” The model will output: “Initially, your department had $500,000. You spent 20%, or $100,000, on equipment, leaving $400,000. You allocate 30%, or $150,000, for a new hire, which brings the budget down to $250,000. Finally, you recently received a budget increase of $50,000. Your remaining budget is $300,000.” While most people could do this math in their heads, the point is that LLMs (which work far faster) can be made to detail their work on quantitative problems that are much more complex, such as finding the shortest possible route for a sales rep to take among several cities. This creates a traceable chain of reasoning—instead of spitting out an answer at the end of a black-box process—that allows you to verify the accuracy of the results.
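That traceability is the point: because the model shows each step, you can check them. As a sanity check, the model’s arithmetic above can be mirrored step by step in a few lines (values taken directly from the example; no LLM involved):

```python
# Reproduce each step of the budget chain of thought from the example.
initial = 500_000

equipment = initial * 20 // 100              # 20% spent on equipment: 100_000
after_equipment = initial - equipment        # 400_000

new_hire = initial * 30 // 100               # 30% of the original budget: 150_000
after_new_hire = after_equipment - new_hire  # 250_000

increase = 50_000
remaining = after_new_hire + increase

print(remaining)  # → 300000
```

Note that the 30% allocation is taken against the original $500,000, matching the model’s stated reasoning; verifying that kind of assumption is exactly what the visible chain of steps makes possible.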

* * *

Here is a direct link to the complete article.

H. James Wilson is the global managing director of technology research and thought leadership at Accenture Research. He is the coauthor, with Paul R. Daugherty, of Human + Machine: Reimagining Work in the Age of AI, New and Expanded Edition (HBR Press, 2024).

Paul R. Daugherty is Accenture’s chief technology and innovation officer. He is the coauthor, with H. James Wilson, of Human + Machine: Reimagining Work in the Age of AI, New and Expanded Edition (HBR Press, 2024).

 
