Learning to Work with Intelligent Machines


Here is an excerpt from an article written by Matt Beane for Harvard Business Review and the HBR Blog Network. To read the complete article, explore a wealth of free resources, obtain subscription information, and sign up for HBR email alerts, please click here.

Credit: John W. Tomac

* * *

It’s 6:30 in the morning, and Kristen is wheeling her prostate patient into the OR. She’s a senior resident, a surgeon in training. Today she’s hoping to do some of the procedure’s delicate, nerve-sparing dissection herself. The attending physician is by her side, and their four hands are mostly in the patient, with Kristen leading the way under his watchful guidance. The work goes smoothly, the attending backs away, and Kristen closes the patient by 8:15, with a junior resident looking over her shoulder. She lets him do the final line of sutures. She feels great: The patient’s going to be fine, and she’s a better surgeon than she was at 6:30.

Fast-forward six months. It’s 6:30 AM again, and Kristen is wheeling another patient into the OR, but this time for robotic prostate surgery. The attending leads the setup of a thousand-pound robot, attaching each of its four arms to the patient. Then he and Kristen take their places at a control console 15 feet away. Their backs are to the patient, and Kristen just watches as the attending remotely manipulates the robot’s arms, delicately retracting and dissecting tissue. Using the robot, he can do the entire procedure himself, and he largely does. He knows Kristen needs practice, but he also knows she’d be slower and would make more mistakes. So she’ll be lucky if she operates more than 15 minutes during the four-hour surgery. And she knows that if she slips up, he’ll tap a touch screen and resume control, very publicly banishing her to watch from the sidelines.

Surgery may be extreme work, but until recently surgeons in training learned their profession the same way most of us learned how to do our jobs: We watched an expert, got involved in the easier work first, and then progressed to harder, often riskier tasks under close supervision until we became experts ourselves. This process goes by lots of names: apprenticeship, mentorship, on-the-job learning (OJL). In surgery it’s called “see one, do one, teach one.”

Critical as it is, on-the-job learning is something companies tend to take for granted; it’s almost never formally funded or managed, and little of the estimated $366 billion companies spent globally on formal training in 2018 directly addressed it. Yet decades of research show that although employer-provided training is important, the lion’s share of the skills needed to reliably perform a specific job can be learned only by doing it. Most organizations depend heavily on OJL: A 2011 Accenture survey, the most recent of its kind and scale, revealed that only one in five workers had learned any new job skills through formal training in the previous five years.

Today OJL is under threat. The headlong introduction of sophisticated analytics, AI, and robotics into many aspects of work is fundamentally disrupting this time-honored and effective approach. Tens of thousands of people will lose or gain jobs every year as those technologies automate work, and hundreds of millions will have to learn new skills and ways of working. Yet broad evidence demonstrates that companies’ deployment of intelligent machines often blocks this critical learning pathway: My colleagues and I have found that it moves trainees away from learning opportunities and experts away from the action, and overloads both with a mandate to master old and new methods simultaneously.

How, then, will employees learn to work alongside these machines? Early indications come from observing learners engaged in norm-challenging practices that are pursued out of the limelight and tolerated for the results they produce. I call this widespread and informal process shadow learning.

Obstacles to Learning

My discovery of shadow learning came from two years of watching surgeons and surgical residents at 18 top-rated teaching hospitals in the United States. I studied learning and training in two settings: traditional (“open”) surgery and robotic surgery. I gathered data on the challenges robotic surgery presented to senior surgeons, residents, nurses, and scrub technicians (who prep patients, help glove and gown surgeons, pass instruments, and so on), focusing particularly on the few residents who found new, rule-breaking ways to learn. Although this research concentrated on surgery, my broader purpose was to identify learning and training dynamics that would show up in many kinds of work with intelligent machines.

To this end, I connected with a small but growing group of field researchers who are studying how people work with smart machines in settings such as internet start-ups, policing organizations, investment banking, and online education. Their work reveals dynamics like those I observed in surgical training. Drawing on their disparate lines of research, I’ve identified four widespread obstacles to acquiring needed skills. Those obstacles drive shadow learning.

1. Trainees are being moved away from their “learning edge.”

Training people in any kind of work can incur costs and decrease quality, because novices move slowly and make mistakes. As organizations introduce intelligent machines, they often manage this by reducing trainees’ participation in the risky and complex portions of the work, as Kristen found. Thus trainees are being kept from situations in which they struggle near the boundaries of their capabilities and recover from mistakes with limited help—a requirement for learning new skills.

The same phenomenon can be seen in investment banking. New York University’s Callen Anthony found that junior analysts in one firm were increasingly being separated from senior partners as those partners interpreted algorithm-assisted company valuations in M&As. The junior analysts were tasked with simply pulling raw reports from systems that scraped the web for financial data on companies of interest and passing them to the senior partners for analysis. The implicit rationale for this division of labor? First, reduce the risk that junior people would make mistakes in doing sophisticated work close to the customer; and second, maximize senior partners’ efficiency: The less time they needed to explain the work to junior staffers, the more they could focus on their higher-level analysis. This provided some short-term gains in efficiency, but it moved junior analysts away from challenging, complex work, making it harder for them to learn the entire valuation process and diminishing the firm’s future capability.

2. Experts are being distanced from the work.

Sometimes intelligent machines get between trainees and the job, and other times they’re deployed in a way that prevents experts from doing important hands-on work. In robotic surgery, surgeons don’t see the patient’s body or the robot for most of the procedure, so they can’t directly assess and manage critical parts of it. For example, in traditional surgery, the surgeon would be acutely aware of how devices and instruments impinged on the patient’s body and would adjust accordingly; but in robotic surgery, if a robot’s arm hits a patient’s head or a scrub is about to swap a robotic instrument, the surgeon won’t know unless someone tells her. This has two learning implications: Surgeons can’t practice the skills needed to make holistic sense of the work on their own, and they must build new skills related to making sense of the work through others.

Benjamin Shestakofsky, now at the University of Pennsylvania, described a similar phenomenon at a pre-IPO start-up that used machine learning to match local laborers with jobs and that provided a platform for laborers and those hiring them to negotiate terms. At first the algorithms weren’t making good matches, so managers in San Francisco hired people in the Philippines to manually create each match. And when laborers had difficulty with the platform—for instance, in using it to issue price quotes to those hiring, or to structure payments—the start-up managers outsourced the needed support to yet another distributed group of employees, in Las Vegas. Given their limited resources, the managers threw bodies at these problems to buy time while they sought the money and additional engineers needed to perfect the product. Delegation allowed the managers and engineers to focus on business development and writing code, but it deprived them of critical learning opportunities: It separated them from direct, regular input from customers—the laborers and the hiring contractors—about the problems they were experiencing and the features they wanted.
3. Learners are expected to master both old and new methods.

Robotic surgery comprises a radically new set of techniques and technologies for accomplishing the same ends that traditional surgery seeks to achieve. Promising greater precision and ergonomics, it was simply added to the curriculum, and residents were expected to learn robotic as well as open approaches. But the curriculum didn’t include enough time to learn both thoroughly, which often led to a worst-case outcome: The residents mastered neither. I call this problem methodological overload.

Shreeharsh Kelkar, at UC Berkeley, found that something similar happened to many professors who were using a new technology platform called edX to develop massive open online courses (MOOCs). EdX provided them with a suite of course-design tools and instructional advice based on fine-grained algorithmic analysis of students’ interaction with the platform (clicks, posts, pauses in video replay, and so on). Those who wanted to develop and improve online courses had to learn a host of new skills—how to navigate the edX user interface, interpret analytics on learner behavior, compose and manage the course’s project team, and more—while keeping “old school” skills sharp for teaching their traditional classes. Dealing with this tension was difficult for everyone, especially because the approaches were in constant flux: New tools, metrics, and expectations arrived almost daily, and instructors had to quickly assess and master them. The only people who handled both old and new methods well were those who were already technically sophisticated and had significant organizational resources.

4. Standard learning methods are presumed to be effective.

Decades of research and tradition hold trainees in medicine to the “see one, do one, teach one” method, but as we’ve seen, it doesn’t adapt well to robotic surgery. Nonetheless, pressure to rely on approved learning methods is so strong that deviation is rare: Surgical-training research, standard routines, policy, and senior surgeons all continue to emphasize traditional approaches to learning, even though the method clearly needs updating for robotic surgery.

Sarah Brayne, at the University of Texas, found a similar mismatch between learning methods and needs among police chiefs and officers in Los Angeles as they tried to apply traditional policing approaches to beat assignments generated by an algorithm. Although the efficacy of such “predictive policing” is unclear, and its ethics are controversial, dozens of police forces are becoming deeply reliant on it. The LAPD’s PredPol system breaks the city up into 500-foot squares, or “boxes,” assigns a crime probability to each one, and directs officers to those boxes accordingly. Brayne found that it wasn’t always obvious to the officers—or to the police chiefs—when and how the former should follow their AI-driven assignments. In policing, the traditional and respected model for acquiring new techniques has been to combine a little formal instruction with lots of old-fashioned learning on the beat.

Many chiefs therefore presumed that officers would mostly learn how to incorporate crime forecasts on the job. This dependence on traditional OJL contributed to confusion and resistance to the tool and its guidance. Chiefs didn’t want to tell officers what to do once “in the box,” because they wanted them to rely on their experiential knowledge and discretion. Nor did they want to irritate the officers by overtly reducing their autonomy and coming across as micromanagers. But by relying on the traditional OJL approach, they inadvertently sabotaged learning: Many officers never understood how to use PredPol or its potential benefits, so they wholly dismissed it—yet they were still held accountable for following its assignments. This wasted time, decreased trust, and led to miscommunication and faulty data entry—all of which undermined their policing.

*      *      *

Here is a direct link to the complete article.

Matt Beane is an assistant professor of technology management at the University of California, Santa Barbara, and a research affiliate with MIT’s Initiative on the Digital Economy.
