How to Regulate Artificial Intelligence

Here is a brief excerpt from a superb article by Oren Etzioni for The New York Times. To read the complete article and others, and to obtain subscription information, please click here.

* * *

The technology entrepreneur Elon Musk recently urged the nation’s governors to regulate artificial intelligence “before it’s too late.” Mr. Musk insists that artificial intelligence represents an “existential threat to humanity,” an alarmist view that confuses A.I. science with science fiction. Nevertheless, even A.I. researchers like me recognize that there are valid concerns about its impact on weapons, jobs and privacy. It’s natural to ask whether we should develop A.I. at all.

I believe the answer is yes. But shouldn’t we take steps to at least slow down progress on A.I., in the interest of caution? The problem is that if we do so, then nations like China will overtake us. The A.I. horse has left the barn, and our best bet is to attempt to steer it. A.I. should not be weaponized, and any A.I. must have an impregnable “off switch.” Beyond that, we should regulate the tangible impact of A.I. systems (for example, the safety of autonomous vehicles) rather than trying to define and rein in the amorphous and rapidly developing field of A.I.

I propose three rules for artificial intelligence systems that are inspired by, yet develop further, the “three laws of robotics” that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.

These three laws are elegant but ambiguous: What, exactly, constitutes harm when it comes to A.I.? I suggest a more concrete basis for avoiding A.I. harm, based on three rules of my own.

[Here is the first.]

First, an A.I. system must be subject to the full gamut of laws that apply to its human operator. This rule would cover private, corporate and government systems. We don’t want A.I. to engage in cyberbullying, stock manipulation or terrorist threats; we don’t want the F.B.I. to release A.I. systems that entrap people into committing crimes. We don’t want autonomous vehicles that drive through red lights, or worse, A.I. weapons that violate international treaties.

* * *

My three A.I. rules are, I believe, sound but far from complete. I introduce them here as a starting point for discussion. Whether or not you agree with Mr. Musk’s view about A.I.’s rate of progress and its ultimate impact on humanity (I don’t), it is clear that A.I. is coming. Society needs to get ready.

* * *

Here is a direct link to the complete article.

Oren Etzioni is the chief executive of the Allen Institute for Artificial Intelligence.
