Our Final Invention: How the Human Race Goes and Gets Itself Killed

Here is a brief excerpt from an article by Greg Scoblete for RealClearTechnology, a sister site of RealClearPolitics that serves as a catch-all source for technology and gadget news and commentary. Every day, RealClearTechnology editors find and select the best technology news, opinion, reviews, and analysis from English-language sources in over 50 countries on six continents. In addition to providing links to the world’s best tech news and commentary, RealClearTechnology also maintains an active video log, collecting the Web’s best technology video.

Please click here to check it out.

Image: St. Martin’s Press

* * *

We worry about robots.

Hardly a day goes by when we’re not reminded that robots are taking our jobs and hollowing out the middle class. The worry is so acute that economists are busy devising new social contracts to cope with a potentially enormous class of obsolete humans.

Documentarian James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, is worried about robots too. Only he’s not worried about them taking our jobs. He’s worried about them exterminating the human race.

I’ll repeat that: In 267 brisk pages, Barrat lays out just how the artificial intelligence (AI) that companies like Google and governments like our own are racing to perfect could — indeed, likely will — advance to the point where it will literally destroy all human life on Earth. Not put it out of work. Not meld with it in a utopian fusion. Destroy it.

Wait, What?

I’ll grant you that this premise sounds a bit dramatic, the product of one too many Terminator screenings. But after approaching the topic with some skepticism, I found it increasingly clear that Barrat has written an extremely important book with a thesis that is worrisomely plausible. It deserves to be read widely. And to be clear, Barrat’s is not a lone voice: the book is full of interviews with computer scientists and AI researchers who share his concerns about the potentially devastating consequences of advanced AI. There are even think tanks devoted to exploring and mitigating the risks. But to date, this worry has remained obscure.

In Barrat’s telling, we are on the brink of creating machines that will be as intelligent as humans. Specific timelines vary, but the broad-brush estimates place the emergence of human-level AI at between 2020 and 2050. This human-level AI (referred to as “artificial general intelligence,” or AGI) is worrisome enough, given the damage human intelligence often produces, but it’s what happens next that really concerns Barrat: once we achieve AGI, the AGI will go on to achieve something called artificial superintelligence (ASI), an intelligence that vastly exceeds human-level intelligence.

Barrat devotes a substantial portion of the book to explaining how AI will advance to AGI and how AGI inevitably leads to ASI. Much of it hinges on how we are developing AGI itself. To reach AGI, we are teaching machines to learn. The techniques vary — some researchers approach it through something akin to the brute-force memorization of facts and images, others through a trial-and-error process that mimics genetic evolution, others by attempting to reverse engineer the human brain — but the common thread stitching these efforts together is the creation of machines that constantly learn and then use this knowledge to improve themselves.

The implications of this are obvious. Once a machine built this way reaches human-level intelligence, it won’t stop there. It will keep learning and improving. It will, Barrat claims, reach a point that other computer scientists have dubbed an “intelligence explosion” — an onrushing feedback loop in which an intelligence makes itself smarter, thereby getting even better at making itself smarter. This is, to be sure, a theoretical concept, but it is one that many AI researchers see as plausible, if not inevitable. Through a relentless process of debugging and rewriting its code, our self-learning, self-programming AGI experiences a “hard takeoff” and rockets past what mere flesh-and-blood brains are capable of.
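The feedback loop described above can be caricatured with a toy model. To be clear, this sketch is purely illustrative and not from Barrat’s book; the function name, growth rate, and cycle count are arbitrary assumptions. The point is only to show why proportional self-improvement compounds rather than accumulates linearly.

```python
# Toy caricature of an "intelligence explosion" feedback loop.
# Each cycle, the system improves itself in proportion to how capable
# it already is -- the smarter it gets, the faster it gets smarter.
# All numbers here are arbitrary and purely illustrative.

def self_improvement_trajectory(start=1.0, rate=0.10, cycles=50):
    """Return capability after each self-improvement cycle.

    start:  initial capability (1.0 = human level, by assumption)
    rate:   fraction of current capability gained per cycle
    cycles: number of rewrite/debug cycles to simulate
    """
    levels = [start]
    for _ in range(cycles):
        current = levels[-1]
        # Improvement is proportional to current capability,
        # so growth is geometric, not linear.
        levels.append(current + rate * current)
    return levels

trajectory = self_improvement_trajectory()
print(f"final capability: {trajectory[-1]:.1f}x starting level")
```

Under these made-up parameters, fifty cycles of 10% compounding yield roughly a 117-fold gain, versus a 6-fold gain if each cycle added a flat 10% of the starting level; that gap between geometric and linear growth is the intuition behind a “hard takeoff.”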

* * *

To read the complete article, please click here.

Greg Scoblete (@GregScoblete) is the editor of RealClearTechnology and an editor on RealClearWorld. He is the co-author of From Fleeting to Forever: A Guide to Enjoying and Preserving Your Digital Photos and Videos.
