
Illustration by Stephan Dybus
I started with ChatGPT’s “deep research” mode, asking it to compile a report on what new jobs for humans might be created by the rise of A.I. It asked a few follow-up questions and then set off, returning with a 6,000-word report, broken down by industry. I fed that report into ChatGPT 4o — along with the original assignment memo from my editor and a few other recent industry reports on the future of work — and asked for an article in the style of The New York Times Magazine.
It was done within 90 minutes. The article was lively and informative, and while some of its imagined future careers were a bit fanciful (a “synthetic relationship counselor” apparently will be someone who can step in when you’re in love with your A.I.), it also covered an interesting spectrum of plausible jobs and featured some delightful turns of phrase. To the average reader, it likely would have come across as a breezy Sunday read with just enough interesting points to warrant a bit of reflection.
So why aren’t you reading that version? Well, for starters, it would have gotten me fired: Almost all quotes and experts in the article were entirely made up. But I had a deeper, more philosophical concern. Even if the A.I.-written version of this piece was entirely factual, submitting it to my editors would have represented a fundamental misunderstanding of why they hired me. In freelance journalism, as in many fields where the work product is written text, you aren’t just being paid for the words you submit. You’re being paid to be responsible for them: the facts, the concepts, the fairness, the phrasing. This article is running with my byline, which means that I personally stand behind what you’re reading; by the same token, my editor is responsible for hiring me, and so on, a type of responsibility that inherently can’t be delegated to a machine.
Commentators have become increasingly bleak about the future of human work in an A.I. world. The venture capitalist Chris Sacca recently went on Tim Ferriss’s podcast and declared that “we are super [expletive].” He suggested that computer programmers, lawyers, accountants, marketing copywriters and most other white-collar workers were all doomed. In an email to his staff, Fiverr’s chief executive, Micha Kaufman, added designers and salespeople to the list of the soon-to-be-damned.
Such laments about A.I. have become common, but rarely do they explore how A.I. gets over the responsibility hurdle I’m describing. It’s already clear that A.I. is more than capable of handling many human tasks. But in the real world, our jobs are about much more than the sum of our tasks: They’re about contributing our labor to a group of other humans — our bosses and colleagues — who can understand us, interact with us and hold us accountable in ways that don’t easily transfer to algorithms.
This doesn’t mean the disruptions from A.I. won’t be profound. “Our data is showing that 70 percent of the skills in the average job will have changed by 2030,” said Aneesh Raman, LinkedIn’s chief economic opportunity officer. According to the World Economic Forum’s 2025 Future of Jobs report, nine million jobs are expected to be “displaced” by A.I. and other emergent technologies in the next five years. But A.I. will create jobs, too: The same report says that, by 2030, the technology will also lead to some 11 million new jobs. Among these will be many roles that have never existed before.
If we want to know what these new opportunities will be, we should start by looking at where new jobs can bridge the gap between A.I.’s phenomenal capabilities and our very human needs and desires. It’s not just a question of where humans want A.I., but also: Where does A.I. want humans? To my mind, there are three major areas where humans either are, or will soon be, more necessary than ever: trust, integration and taste.
Trust
Robert Seamans, a professor at New York University’s Stern School of Business who studies the economic consequences of A.I., envisions a new set of roles he calls A.I. auditors — people who dig down into the A.I. to understand what it is doing and why and can then document it for technical, explanatory or liability purposes. Within the next five years, he told me, he suspects that all big accounting firms will include “A.I. audits” among their offerings.
A related job he imagines is an A.I. translator: someone who understands A.I. well enough to explain its mechanics to others in the business, particularly to leaders and managers. “The A.I. translator helps to interface between something that’s super-technical and what a manager knows and understands — and what they need to know in order to make a decision,” Seamans said.
In a sense, both of Seamans’s visions fall into a broader category of “trust.” I didn’t submit my A.I.-generated article in part because that would have betrayed my editors’ trust, but also because I didn’t trust it — trust that it was true, trust that it got the facts right. Because I hadn’t done the work and the thinking myself, I couldn’t tell if it was being fair or reasonable. Everyone who tries to use A.I. professionally will face a version of this problem: The technology can provide astonishing amounts of output in an instant, but how much are we supposed to trust what it’s giving us? And how can we know?
As A.I. continues to become more influential in our jobs and organizations, we’re going to develop a lot of these trust issues. Solving them will require humans.
Under the “trust” umbrella will be a whole new breed of fact checkers and compliance officers. Legal documents, annual reports, product specifications, research reports, HVAC contracts — all of these will soon be written by A.I., and all will need humans to review and verify them with an eye toward the surprising and weird mistakes A.I. is prone to make.
This may give rise to a title that could be called trust authenticator or trust director. And such jobs will need to be adjacent to other new roles, which are essentially variations on an A.I. ethicist. It will be these ethicists’ jobs to build chains of defensible logic that can be used to support decisions made by A.I. (or by hybrid A.I.-and-human teams) to a wide variety of interested parties, including investors, managers, customers and perhaps even judges and juries. “Many companies have played around with the idea of an ‘ethics board,’” Seamans said. “I think that you could imagine a future where these A.I. ethics boards are empowered a lot more than they tend to be today.”
At its core, trust is about accountability — and this is where a human in the loop is most critical. In everything from contracts to nuclear-launch systems, we need humans to be accountable. “There should be a human who ultimately takes responsibility,” said Erik Brynjolfsson, director of the Digital Economy Lab at the Stanford Institute for Human-Centered Artificial Intelligence and a founder of the A.I. consulting company Workhelix. “Right now if a car crashes, you have to sort out: Is it the antilock brakes? Was it the driver? Was there something wrong in the road? If it’s the antilock brakes, who was it who made that part? And they trace it back to who ultimately is responsible for that thing. It may be a complex chain of causality, and it’s going to get that much more complicated with A.I., but ultimately you have to trace it back to somebody who takes responsibility.”
In a number of fields, from law to architecture, A.I. will be able to do much of the basic work customers need, from writing a contract to designing a house. But at some point, a human, perhaps even a certified one, needs to sign off on this work. You might call this new role a legal guarantor: someone who provides the culpability that the A.I. cannot. Ethan Mollick, a professor at the Wharton School and the author of “Co-Intelligence: Living and Working With A.I.,” refers to such jobs as the “sin eaters” for A.I. — the final stop in the responsibility chain.
* * *