From ChatGPT to HackGPT: Meeting the Cybersecurity Threat of Generative AI

Here is an excerpt from an article by Karen Renaud, Merrill Warkentin, and George Westerman for the MIT Sloan Management Review. To read the complete article, check out others, and obtain subscription information, please click here.

* * *

It’s time to replace traditional, rule-based approaches to cybersecurity with “smarter” technology and training.

For the past several years, cybercriminals have been using artificial intelligence to hack into corporate systems and disrupt business operations. But powerful new generative AI tools such as ChatGPT present business leaders with a new set of challenges.

Consider these entirely plausible scenarios:

A hacker uses ChatGPT to generate a personalized spear-phishing message based on your company’s marketing materials and phishing messages that have been successful in the past. It succeeds in fooling people who have been well trained in email awareness, because it doesn’t look like the messages they’ve been trained to detect. (A brief sketch after these scenarios shows why rule-based filters miss such messages.)

An AI bot calls an accounts payable employee and speaks using a (deepfake) voice that sounds like the boss’s. After exchanging some pleasantries, the “boss” asks the employee to transfer thousands of dollars to an account to “pay an invoice.” The employee knows they shouldn’t do this, but the boss is allowed to ask for exceptions, aren’t they?

Hackers use AI to realistically “poison” the information in a system, creating a valuable stock portfolio that they can cash out before the deceit is discovered.

In a very convincing fake email exchange created using generative AI, a company’s top executives appear to be discussing how to cover up a financial shortfall. The “leaked” message spreads wildly with the help of an army of social media bots, leading to a plunge in the company’s stock price and permanent reputational damage.
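The spear-phishing scenario above turns on a simple point: traditional, rule-based defenses look for the hallmarks of yesterday’s scams. The minimal Python sketch below (our illustration, not the authors’; the keyword list and both sample messages are invented) shows how a keyword filter flags a template phish yet waves through a polished, personalized message that contains no telltale phrases:

SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "click here immediately",
    "your password has expired",
]

def rule_based_filter(message: str) -> bool:
    """Flag a message that contains any known phishing phrase."""
    text = message.lower()
    return any(phrase in text for phrase in SUSPICIOUS_PHRASES)

# A classic template phish: caught by the keyword rules.
old_style_phish = (
    "URGENT ACTION REQUIRED: your password has expired. "
    "Click here immediately to verify your account."
)

# A hypothetical AI-crafted spear phish, written in the company's own
# marketing voice and personalized to the recipient: no keywords match.
ai_spear_phish = (
    "Hi Dana, great seeing your Q3 deck yesterday. Before tomorrow's "
    "board review, could you glance at the updated vendor summary I "
    "shared? The link is in my earlier note."
)

print(rule_based_filter(old_style_phish))  # True: flagged
print(rule_based_filter(ai_spear_phish))   # False: slips through

Nothing in the second message matches a known bad pattern, which is precisely why the authors argue for replacing rule-based approaches with “smarter” technology and training.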

These scenarios might sound all too familiar to those who have been paying attention to stories of deepfakes wreaking havoc on social media or painful breaches in corporate IT systems. But these new threats fall into a different, scarier category, because the underlying technology has become “smarter.”

Until now, most attacks have used relatively unsophisticated high-volume approaches. Imagine a horde of zombies — millions of persistent but brainless threats that succeed only when one or two happen upon a weak spot in a defensive barrier. In contrast, the most sophisticated threats — the major thefts and frauds we sometimes hear about in the press — have been lower-volume attacks that typically require actual human involvement to succeed. They are more like cat burglars, systematically examining every element of a building and its alarm systems until they can devise a way to sneak past the safeguards.

* * *

Here is a direct link to the complete article.

Karen Renaud is a computing scientist at the University of Strathclyde in Glasgow, working on all aspects of human-centered security and privacy. Merrill Warkentin, an ACM Distinguished Scientist, is a W.L. Giles Distinguished Professor and the Rouse Endowed Professor of Information Systems in the College of Business at Mississippi State University. George Westerman is a senior lecturer at the MIT Sloan School of Management and founder of the Global Opportunity Initiative in MIT’s Office of Open Learning. The authors are listed here in alphabetical order; all authors contributed equally to this article.

 
