What’s Your Edge? Rethinking Expertise in the Age of AI

A CEO recently posed a question to me that’s been keeping executives awake: “If my junior analyst can get the same AI-generated insights as my senior strategist, why am I paying for expertise?”

It’s not hyperbole to say that we’re witnessing an unprecedented democratization of knowledge. Information that was once locked in specialized databases, consulting reports, and expert minds is now instantly available to anyone with access to generative AI tools. A startup founder in Indonesia can access strategic frameworks that once required McKinsey consultants. A nurse practitioner in rural Kansas can synthesize medical research like a specialist at Mayo Clinic.

This isn’t simply another wave of automation; it’s a fundamental restructuring of knowledge itself. Organizations that misunderstand this shift face two risks: overpaying for outdated expertise and undervaluing the human capabilities that remain irreplaceable.

The Paradox of Abundant Knowledge

When knowledge becomes commoditized, its value paradoxically shifts from the content to the context. Consider three critical transformations.

  • From answers to questions: AI excels at providing comprehensive answers, but only to the questions that we know to ask. The most valuable human expertise increasingly lies in identifying unasked questions and recognizing that there are unknown unknowns. A seasoned strategist understands not only their industry’s current patterns but also its hidden assumptions and unexplored adjacencies — the white spaces that don’t yet exist in any AI model’s training data.
  • From information to judgment: While AI can instantly synthesize vast amounts of information, it cannot bear the weight of consequences. When an AI system recommends restructuring your organization’s supply chain or entering a new market, the accountability remains entirely human. This gap between intelligence and responsibility creates an irreplaceable role for human judgment. Leaders aren’t paid because they can access information; they’re paid to make decisions when the stakes are real and the outcomes are uncertain.
  • From static knowledge to liquid knowledge: Traditional knowledge management has treated information as a fixed asset to be stored and retrieved from knowledge repositories. But AI reveals knowledge dynamically, reshaping it based on the context, user, and moment. Each prompt generates a unique knowledge artifact tailored to specific needs. This shift from static knowledge to liquid knowledge fundamentally changes how organizations should think about subject matter expertise.
The Cognitive Outsourcing Trap

The accessibility of AI tools like ChatGPT creates a subtle but serious risk: cognitive atrophy. We’ve seen this pattern before. GPS navigation eroded our spatial memory. Calculators diminished our capacity to perform mental arithmetic. But those were specific skills. Now we risk outsourcing human thinking itself.

A study published in Science Advances found that while generative AI can boost individual creativity, it reduces the collective diversity of novel ideas, resulting in more homogeneous output.1 Other studies have shown that GenAI tools reduce the perceived effort required for critical-thinking tasks, with workers increasingly relying on AI for routine decisions. This raises concerns about long-term cognitive decline and diminished problem-solving capabilities.2

More concerning is the homogenization of thought. When millions of people pose similar questions and receive similar AI-generated answers, we risk intellectual convergence — a flattening of the diverse, chaotic thinking that drives innovation. Three students in my class recently submitted nearly identical AI-generated architecture proposals for their projects. Efficient? Yes. Creative? No.

The New Competitive Advantage: Meta-Expertise

Rather than making human expertise obsolete, AI is elevating what expertise means. An IESE Business School study that analyzed U.S. job postings between 2010 and 2022 found that for every percentage point increase in AI adoption at a company, there was a 2.5% to 7.5% increase in demand for management roles, with those positions emphasizing judgment, cognitive skills, and interpersonal skills. The most valuable professionals are developing what I call meta-expertise: the ability to orchestrate knowledge from multiple AI systems, validate outputs, and synthesize information across domains. This requires three distinct capabilities that AI cannot replicate.

1. Creative synthesis. While AI excels at pattern recognition within existing data, breakthrough innovation comes from connecting seemingly unrelated ideas. When a pharmaceutical researcher sees a connection between a butterfly’s wing structures and drug delivery mechanisms, or an architect applies jazz improvisation principles to planning smart buildings, the creative leaps represent uniquely human cognition.

2. Contextual wisdom. The intuitive understanding humans have built through years of experience remains difficult to codify and transfer to AI systems. The experienced plant manager who senses equipment problems before sensors detect them, or the sales director who discerns unspoken client concerns, possesses contextual wisdom that transcends data patterns.

3. Ethical navigation. As AI handles more analytical work, human expertise must increasingly focus on ethical judgment, cultural sensitivity, and stakeholder management. These aren’t edge-case skills; they are central to every significant business decision. The ability to navigate competing interests, understand unspoken cultural norms, and make principled decisions under pressure remains fundamentally human.

Talent and Learning Principles to Rethink

Organizations are beginning to make structural changes to capture value from AI, with larger companies leading the way in redesigning workflows and putting senior leaders in critical AI governance roles, McKinsey reports.

Leaders should rethink their talent strategies around three principles.

1. Redefine Role Hierarchies

Traditional hierarchies based on information access are becoming obsolete. There are increasing cases of companies redefining their role hierarchies as they incorporate AI, including professional services firms like Accenture, Cognizant, and EY, as well as tech giants. The shift, which some observers call “the great flattening,” involves eliminating layers of middle management and augmenting existing roles with AI.

The goal is to have AI automate routine tasks that used to be performed by lower-level employees and managers and enable senior staff members to focus on higher-value, strategic work. Your senior strategist’s value isn’t in knowing frameworks anymore. Their value lies in knowing which framework to apply when, how to adapt it to a particular context, and when to abandon frameworks entirely for out-of-the-box human thinking.

For example, EY has committed $1.4 billion to an AI transformation that it describes as “human-centered.” The company is redefining its internal functions and launching extensive upskilling programs for its 400,000 employees. The training provides foundational AI literacy to every employee and advanced master classes to leaders. By embedding AI into the core of its strategy and democratizing access to AI knowledge through its EY.ai platform, the firm aims to empower employees to move toward higher-value work, close the skills gap, and ultimately reshape roles.

On the tech side, Amazon is removing some middle-management layers from its structure. CEO Andy Jassy aims to flatten the organization, decrease bureaucracy, and drive decision-making closer to the front lines while using AI to automate tasks.

2. Invest in Cognitive Sovereignty

Organizations must deliberately preserve and strengthen human thinking capabilities. While documented cases of “AI-free zones” remain scarce in practice, research on cognitive decline from AI overuse suggests that it could be a valuable approach.3

Companies should consider forward-looking moves such as:

  • Mandating that strategic proposals include sections developed through human analysis.
  • Implementing “human thinking sprints,” where teams solve problems without AI assistance.
  • Inserting deliberate friction in certain organizational processes, like procurement, to test the cognitive fitness of employees.

Just as physical training builds muscle memory, these exercises could help employees maintain the cognitive capabilities that differentiate human intelligence.

3. Develop AI Orchestration Capabilities

Job postings for AI operations roles have increased 230% in recent months, with companies seeking professionals who can design entire workflows that integrate AI and human capabilities. Some of these emerging roles are called AI operations lead, AI orchestrator, or agent orchestration engineer. The people filling these roles are expected to act as bridges between human creativity and machine intelligence.

Yet hiring AI-savvy talent is only part of the solution. As any CIO will attest, the real challenge is in figuring out how to weave AI tools into human workflows. Successfully navigating this complexity will require seasoned practitioners with contextual expertise in technology and business domains.

After all, the key is understanding when to deploy AI, human judgment, or both, and recognizing that adding AI does not always add value.

* * *
Ravikiran Kalluri is an assistant teaching professor at Northeastern University.

References

1. A.R. Doshi and O.P. Hauser, “Generative AI Enhances Individual Creativity but Reduces the Collective Diversity of Novel Content,” Science Advances 10, no. 28 (July 12, 2024): 1-9, https://doi.org/10.1126/sciadv.adn5290.

2. H.-P. Lee, A. Sarkar, L. Tankelevitch, et al., “The Impact of GenAI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers,” in “CHI ’25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems” (Association for Computing Machinery, 2025): 1-22, https://doi.org/10.1145/3706598.3713778.

