Why we shouldn’t fear AI (Part 1: Radiation)

If you’re one of the 12 people who read my last blog post, this article is dedicated to you. If not, well, this article is still pretty cool. Previously I talked about why AI is scary; this one is about why we shouldn’t be scared. But first we have to go deeper. To understand why AI anxiety is so pervasive, we have to go… nuclear.

Radiation is pretty dang scary. Ever since the nuclear bomb hit the fan, entire generations have grown up in fear of fallout. Not like the game, but like, the atomic, radioactive killer gone wild. And modern media hasn’t failed to notice this collective fear. Of course they capitalized on it. Comic books alone use radiation as a central plot device almost to the point of eye-rolling predictability: the Fantastic Four got cosmic rays, Spider-Man a radioactive spider, Superman kryptonite. Dr. Manhattan, the Hulk, Daredevil, even Radioactive Man (they weren’t even trying!).

Radiation became every fantasy storyteller’s perfect plot device: it’s powerful, it’s mysterious, and it has all kinds of unintended consequences. Need something to take down a shield? Radiation. Need something to justify having a shield? Radiation. I honestly wonder whether the sun has exposed me to more radiation than a lifetime of movies, comics, and cartoons.

But this has had some serious, unintended consequences for the public understanding of radiation. Brain tumors caused by cell phones? Leukemia-inducing high-voltage power lines? Mass media’s take on radiation over the last several decades has mutated the public’s perception of its dangers. Story time: I remember a time when teenage Chase overheard some concerned folks discussing the radiation they’d be subjected to on a flight from Seattle to Albuquerque. I swooped into their conversation to rescue them from their worries and confidently reassured them that such a flight is hardly worse than a normal day’s worth of exposure to background radiation. They showered me with relieved gratitude, but teenage Chase was far more satisfied with having corrected their unscientific ways.

Point is, radio waves didn’t make the Incredible Hulk, but that same electromagnetic radiation sends those sweet tunes to your car radio on the reg. It was gamma rays that created the Hulk, actually; mysterious, plot-device-y ones.

And just like radiation in pop culture, AI is broadly recognized but broadly misunderstood, due largely to the public misconception that a suddenly self-aware AI will take over the world; a misconception fueled by the same 1950s-style hysteria that gave us the radiation bogeyman.

In the case of radiation, our understanding of the atom has lengthened lifespans, provided power in place of burning fossil fuels, and saved Matt Damon from getting stuck on Mars.


A couple of robots, they were up to no good; started making trouble in his neighborhood.

But therein lies the catch: all of these incredible benefits came as the result of responsible stewardship of that power. As real as the danger of nuclear fallout was in the 1950s, and even today, so too are a number of dangers surrounding AI. In my last post I referenced an article that very clearly outlines the clear and present dangers posed by AI. None of them, however, are existential threats to humanity in the form of a suddenly self-aware superintelligence. Part 2 of this article will argue why we’ll never see what some of the more philosophically minded among us like to call the intelligence singularity: basically, when the beginning of the end starts in Terminator or The Matrix.

No: the greatest threat from AI, especially today, comes from AI doing exactly what we tell it to do. Even the best engineers can and will make mistakes. When enormous autonomous systems governed by these AIs, like those trading stocks at lightning speed, suddenly crash because the instructions we gave them weren’t clear enough, or weren’t complete, there will be (and have been) significant consequences.

And when these AI interact with society in ways we didn’t necessarily foresee.

And when people with nefarious intent get their hands on working AI.

But in all cases where AI runs rampant in the foreseeable future, these are things engineers and policy makers can plan ahead for. The dangers AI poses are hardly different from the challenges we’ve already been facing. There won’t be some doomsday event where Google becomes smarter than every human being on the planet and decides to kill us all. (More on that in part 2.)

As a result, there are organizations making an honest attempt at safeguarding society from the negative effects of a broken AI, though not everyone agrees that full-blown AI should be open source to begin with (that is, available to anyone with access to a computer). Not coincidentally, similar organizations exist because of the dangers posed by our knowledge of the atom, for very similar reasons. In the atomic age’s heyday, some very smart people disagreed about who should and should not know about the potential dangers of nuclear fission; echoes of the same arguments scientists are having today.



