Why Applied Mathematics?

I read an essay called “A Mathematician’s Apology” by G.H. Hardy. I was an undergrad in pure mathematics, minoring in English literature; it was suggested to me by a professor who probably thought I was a hopeless excuse for a math student but who himself enjoyed reading.

G. H. Hardy was a famous British number theorist known for taking Ramanujan under his wing. His essay was a humble-brag treatise on the moral and intellectual superiority of pure mathematics. In my literary opinion, he did a miserable job of trying to communicate the beauty of pure mathematics that a privileged few are able to truly appreciate, but it was an interesting insight into the pure mathematician’s motivations.

The essay was written at an incredibly uncertain time for the United Kingdom—1940. He stated that his chosen field of interest, Number Theory, was absolutely useless to the scientists and engineers who debased themselves building bombs for the purpose of war. Although he is not alone in this apprehension about the application of knowledge (e.g. Oppenheimer’s reaction to the atomic bomb), he was able to claim moral superiority in that, at the time, Number Theory was so useless that it was one of the only true forms of pure mathematics.

One of the greatest ironies is that, long after his death, many important and fundamental theorems in Number Theory have become the basis for things like encryption and memory storage, central to computer engineering. While some mathematicians may revel in what many see as useless, history has borne out that what may be useless for the moment may prove incredibly useful–read: applicable–in the future. Sometimes that future date far exceeds the lifespan of any individual mathematician.

—————————————————————————————————————

Here’s the reason for my glaringly negative review of his essay, so don’t feel bad if you don’t read it: to a somewhat uncertain end, he ranks mathematicians in it by ability. I forget if he said Euler or Gauss was the top, but it was like arguing over who should take the greatest-of-all-time spot: Kobe Bryant or Michael Jordan. He even put himself on the list, albeit at a lowly, sub-Fermat rank. It was worse than reading Dante talking with Greek philosophers in the Inferno.

I’ll leave you with a tongue-in-cheek quote from his essay that seems relevant to the question:

“We have concluded that the trivial mathematics is, on the whole, useful, and that the real mathematics, on the whole, is not.”

Why we shouldn’t fear AI (Part 1: Radiation)

If you’re one of the 12 people that read my last blog post, this article is dedicated to you. If not, well, this article is still pretty cool. Previously I talked about why AI is scary. This article is about why we shouldn’t be scared. But first we have to go deeper. To understand why AI-anxiety is so pervasive, we have to go… nuclear.

Radiashun is pretty dang scary. Ever since the nuclear bomb hit the fan, entire generations of people grew up in fear of fallout. Not like the game, but like, the atomic, radioactive killer gone wild. And modern media since then hasn’t failed to notice this collective fear. Of course they capitalized on it. Comic books alone use radiation as a central plot device almost to the point of eye-rolling predictability: Fantastic Four: cosmic rays. Spider-Man: radioactive spider. Superman: kryptonite. Dr. Manhattan, The Hulk, Daredevil, even Radioactive Man (they weren’t even trying!).

Radiation formed every fantasy storyteller’s perfect plot device: it’s powerful, it’s mysterious, it has all kinds of unintended consequences. Need something to take down a shield? Radiation. Need something to justify having a shield? Radiation. I question whether the sun has exposed me to more radiation than a lifetime of movies and comics and cartoons.

But this has had some serious, unintended consequences for the public understanding of radiation–brain tumors caused by cell phones? Leukemia-inducing high voltage power lines? Mass media’s take on radiation for the last several decades has mutated the public’s perception of its dangers. Story time: I can remember a time when teenage Chase overheard some concerned folks discussing the danger of the radiation they’d be subjected to on a flight from Seattle to Albuquerque; I swooped into their conversation to rescue them from their worries and confidently reassured them that such a flight was hardly worse than a normal day’s worth of exposure to background radiation. They showered me with profuse expressions of relieved gratitude, but teenage Chase was far more satisfied with having corrected them of their unscientific ways.

Point is, radio waves didn’t make the Incredible Hulk, but that same electromagnetic radiation sends those sweet tunes to your car radio on the reg. It was gamma rays that created the Hulk, actually: mysterious, plot device-y ones.

And just like radiation and pop culture’s take on it, AI is broadly recognized but broadly misunderstood, due largely to public misconception about the dangers of suddenly self-aware AI taking over the world. This misconception is fueled by the same 1950s-style hysteria that gave us the radiation bogeyman.

In the case of radiation, our understanding of the atom has lengthened lifespans, provided power in place of burning fossil fuels, and saved Matt Damon from getting stuck on Mars.


A couple of robots, they were up to no good–started making trouble in his neighborhood.

But therein lies the catch–all of these incredible benefits came as the result of responsible stewardship of that power. As real as the danger of nuclear fallout was in the 1950s, and even today, so too are a number of dangers surrounding AI. In my last post I referenced an article that outlines the clear and present dangers posed by AI. None of them, however, are existential threats to humanity in the form of suddenly self-aware super intelligence. Part 2 of this article will argue why we’ll never see what some of the philosophically minded of us like to call the intelligence singularity; basically, when the beginning of the end starts in Terminator or The Matrix.

No: the greatest threat from AI, especially today, comes from AIs doing exactly what we tell them to do. Even the best engineers can and will make mistakes. When enormous autonomous systems governed by these AIs–like those trading stocks at lightning speed–suddenly crash because the instructions we gave them weren’t clear enough, or weren’t complete, there will be (and have been) significant consequences.

And when these AIs interact with society in ways we didn’t necessarily foresee.

And when people with nefarious intent get their hands on working AI.

But in all cases where AI runs rampant in the foreseeable future, these are things engineers and policy makers can plan ahead for. The dangers AI poses are hardly different from the challenges we’ve already been facing. There won’t be some doomsday event when Google becomes smarter than every human being on the planet and decides to kill us all. (More on that in part 2.)

As a result, there are organizations making an honest attempt at safeguarding society from the negative effects of a broken AI, though not everyone agrees that full blown AI should be open source to begin with (that is, available to anyone with access to a computer). Not coincidentally, similar organizations exist because of the dangers posed by our knowledge of the atom, for very similar reasons. In the atomic era’s heyday, some very smart people disagreed on who should and should not know about the potential danger posed by nuclear fission–echoes of the same arguments being had amongst scientists today.


That robot chick from Metropolis that inspired C-3PO

Why Artificial Intelligence scares us

The Terminator. Blade Runner. The Matrix. 2001: A Space Odyssey. With the exception of Bicentennial Man, AI hardly ever saves the day on the big screen. I wouldn’t say War Games was a turning point for AI as one of Hollywood’s favorite boogeymen, but since then, Arnold Schwarzenegger dressed up like a time-traveling, indestructible robot has pretty much served as every sensationalist news journalist’s favorite image of when AIs turn into rebellious teenagers and actually go through with emancipating themselves.

But since the 1980s, AI has been constantly working its way into our everyday lives: in some ways noticeable, some not. From computer chess opponents to the algorithm that finds the fastest route from your house to that new Amazon bookstore, AI has weaseled its way into just about everything. IBM’s Watson kicked the crap out of Ken Jennings at Jeopardy!. AI’s even the scapegoat for most rapid stock market crashes since, like, 2000, when two trading algorithms find a numerical black hole and start freaking out. (There’s another Terminator picture in that last linked article, of course.)

Rest easy for now: we’re pretty far from actual Terminator robots. Boston Dynamics has just about got the walking part figured out–but we’re still pretty far from the able-to-ride-a-motorcycle-down-the-spillways-in-LA part.

The part where Ahnold falls into the crucible of molten steel and gives a thumbs up.

My version of the “robot arm problem”

Real talk: like in the case of the stock market, though, the fear of AI isn’t unfounded. And despite hyperbolic “science” journalism, a lot of smart people, like Bill Gates and Stephen Hawking, agree we should be paying close attention. A couple of guys in particular–Eric Horvitz, the director of Microsoft Research’s main lab in Redmond, and the president of the AAAI, Thomas Dietterich–wrote an article about tangible, scary consequences of prolific AI use. It’s one of the best I’ve seen on the actual, technical risks posed by AI, and incredibly well written. They talk about criminals using it, the effects of unforeseen emergent properties, and just plain old programming bugs. They even talk about how AI is at least one of a number of significant forces behind shifts away from Pareto’s comfy spot in wealth distribution over the last two decades. Metropolis, anyone?

What they don’t do, though, is cut to why AI scares us. They lay out all the trouble AI is going to cause us in the future, but thanks to kid-geniuses like Matthew Broderick, I have faith none of these will cause the inevitable destruction of mankind. “But what about the singularity!? Super intelligence!? That basilisk thing!?” Stop philosophically picking lint out of your belly button and pick up a book on complexity theory or wait for my next trill article on the “artificial” part of Artificial Intelligence. You’ll find out why your philosophy is bad and subsequently why you should feel bad, but I won’t name names. Fortunately in that ACM article they wave that crap off by the fourth paragraph. Anyway…


How I feel about this blog most of the time.

AI has always been scary. It’s been scary since Frankenstein. It’s been scary since John Henry. It’s just that now Frankenstein’s monster got a cybernetic upgrade. Our fear of AI combines two of our greatest fears: our fear of the unknown and our fear of being replaced.

Anyone remember learning about “the uncanny” in English class? It comes from old school Gothic literature: when the author wanted to spook you, they’d omit describing the spooky thing. Mary Shelley never tells you what Frankenstein’s monster looks like. Oscar Wilde was pretty scant with Dorian Gray’s portrait–I mean, how ugly could his dead self have been? Dr. Jekyll and Mr. Hyde, 2spooky4me. The common thread, which carries into most modern scary storytelling today, is playing off of people’s fear of the unknown by purposefully obscuring tangible details–letting our imaginations subconsciously fill the gaps with paranoia and what-ifs.

Uncanny things: people who photoshop Nicolas Cage’s face onto everything.

Every day this creeps into our uncanny experiences with AI, when Watson holds a conversation with Alex Trebek and Amazon files a patent for delivering stuff to you before you knew you wanted it. But it’s not quite human. It doesn’t jibe with our mental model of the world. It causes us to step back and say “that’s weird”. One of my favorite examples of the uncanny in art today is how the animators of Attack on Titan draw the people-eating giants. Or Google DeepDream’s reconstructed images–the mental map of an image processing AI. It’s weird, dude. AI does that to us with ease, and Hollywood takes it to the bank with machines inhabited by Johnny Depp freed from Tim Burton’s obsession with him.


Johnny getting that uncanny feeling about his role in “Transcendence” where AI ruins everything of course.

But what AI does that Frankenstein’s monster never could, apart from monster mashing, is trivialize the human experience. It threatens us with the fear of being so easily replaced. Around the turn of the 20th century, the industrial revolution was in full stride. J.P. Morgan was putting on the ritz and machines were replacing craftsmen. Cameras were replacing painters. The iron horse was replacing… well, the horse. The tale of John Henry encapsulated this battle between man and machine.

John Henry was the Tiger Woods of drilling holes for explosive tunnel digging. One day a steam engine comes along to outshoot his 5 under par. Our boy John Henry beats the machine, but at the cost of working himself to death. The allegory has been hitting close to home for a long time: robots replace people on the factory floor. Not too long from now, automated cars will probably replace taxi drivers. People, for good reason, resent being replaced by a machine, but society eventually grows around the machine and benefits as a whole. This epic struggle between man and machine was pretty dope in The Matrix with all that kung-fu, but as the story unfolds, it’s clear the dependence on one another forms an endless cycle that Keanu Reeves will never escape.

AI easily combines these two fears considered fundamental to the human experience in ways that we never could have imagined 200 years ago. It trivializes us, and we don’t even really know how. But from Frankenstein’s monster came the Addams Family (*snap*, *snap*). And from John Henry battling the machine? Well, now we battle with machines in monster truck rallies, which is pretty sick. I wonder what the sitcom version of Skynet will be like.

Those robots from Interstellar were pretty funny.

Not your every day A squared plus B squared: Euler’s Theorem

This is the first post in a series of pieces on the sweetest formulae you’ve never heard of. These will be a selection of my favorite theorems and their simplified proofs; particularly those that make your head tilt because, honestly, we never learned any of those as kids. Also because I have a ton of other articles which I feel aren’t researched enough and I keep telling myself I’ll finish them. But really because this is pretty cool and it’ll make you more worldly. See? I did you a favor. Don’t say I never did anything for you.

If you’re not one for reading about math, and prefer it played out in front of you like a comical cabaret, Numberphile is an awesome web series featuring UK-based mathematicians (some of them real heavyweights in the field, as my old Hungarian, body-building, body-odorful calculus professor would have put it) whose videos skew more towards the theory of numbers–which happens to be the field from which we obtain Euler’s Theorem. The cool thing is it looks like the most useless thing ever, except you rely on it every day, in multiple ways, without even realizing it. No exercises were left to the reader in the making of this piece. You’ll be rewarded with better jokes as the math trudges on.

The man…

Euler was one of the greats. If G.H. Hardy (also a number theorist) were a sportscaster (he hated sports as far as I can tell), he’d probably make the analogy that Euler is to Gauss what Abdul-Jabbar is to Bryant. Hardy was a bit of a humble-brag in his essay “A Mathematician’s Apology”, where he actually ranks mathematicians, including himself, against the greats. Like saying Kobe Bryant is a 0.85 on the Michael Jordan scale. I digress. Euler was like the Chuck Norris of mathematics. He’s not known for being the absolute best; he’s known for being the most hardcore. He wrote hundreds of articles, fathered a literal litter of kids, and even when he developed cataracts and lost his eyesight, he continued to kick mathematical ass until the day he died. If someone could divide by zero, it would have been him. Actually, that was Riemann–sort of. Whatever.

…the myth…

But what’s the big deal about this theorem? Consider a salad bar. No. Consider a taco bar. All the ingredients are displayed before you: you got your tomatoes, cheese, sour cream, salsa, you name it. Now imagine all possible taco bars. Some taco bars might have guacamole, while others may have jalapeños, and still others fudge (dessert tacos?). Euler’s Theorem is like a guarantee that no matter how the ingredients are arranged, and regardless of how many you have, you’re always going to have a delicious taco. I suppose to the trained reader this is a terrible analogy (I just needed some pulp for “the myth” part), but the truth of taco construction is irrefutable and important beyond consequence; if tacos couldn’t exist in a near infinitude of combinations (the power set of taco ingredients, to be precise) and still be delicious, what would be the point of their existence? The taco would be robbed of its very taco quintessence. Uncountable tacos.

…the legend.

But real talk. This is Euler’s Theorem: if gcd(a, n) = 1 then a^{\phi(n)} \equiv 1 \, mod(n).

MFW presented with math.

If I’m reaching my intended audience, you’re like, “wat, it’s all Greek to me.” Basically it says: if the greatest common divisor of a and n is 1 (they share no factors besides 1), then a raised to a special power, divided by n, will always have 1 left over as a remainder. To understand this theorem and its proof, you only need to know two things: modular arithmetic and relative primality. They sound complicated, but you do the former every day, and the latter just sounds highfalutin.

Modular arithmetic simply refers to what’s left over after dividing: 5 modulo 3 is 2. 15 modulo 7 is 1. 6 modulo 3 is 0. It turns out to be pretty handy when working with big numbers, since remainders add, subtract, multiply, and divide just as easily as the numbers being divided themselves. Modular arithmetic was the pits when I first learned it, since I promptly forgot how to multiply and divide when they gave us calculators in high school, but it’s pretty simple given the example people use every day: the 12 hour clock versus the 24 hour clock.

The 24 hour clock works by counting hours all the way up to 24, where we know that 0300 means 3:00 AM and 1500 means 3:00 PM. Well, 3 is 15 modulo 12: 12 fits inside 15 once with 3 left over, and we arrive at 3:00 PM.
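
If you’d rather poke at it yourself, Python’s % operator computes exactly this remainder (a quick illustration, nothing fancy):

```python
# The % operator returns the remainder after division.
print(5 % 3)    # 2
print(15 % 7)   # 1
print(6 % 3)    # 0
print(15 % 12)  # 3 -- 1500 on the 24 hour clock is 3:00 PM
```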

Relative primality, on the other hand, is simply a question of whether two numbers share any divisors. Do 3 and 8 share a divisor bigger than 1? No. 3 and 8 are relatively prime. Do 4 and 8? Yes, 4 divides both. 4 and 8 are not relatively prime. To be more exact, two numbers are relatively prime if their greatest common divisor is 1. Now, to save time, Euler came up with this thing he called the totient function: \phi(n). The function counts the number of numbers, less than the number n, that are relatively prime to the number n. Sounds convoluted, but easy enough: the totient of 5, for example, is 4. The numbers 1, 2, 3, and 4 are relatively prime to 5, and there are 4 of them. How about 6? 1 and 5 are the only numbers less than 6 that are relatively prime to 6 (2, 3, and 4 all share a factor with 6). So the totient of 6 is 2. A nice little fact is that the totient of a prime number (like 5) is one less than the number. Pierre “amateur hour” de Fermat noticed this. We’ll get to that later.
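
If it helps, here’s the totient spelled out as a naive Python sketch (fine for small numbers; nobody should compute it this way for the giant numbers coming later):

```python
from math import gcd

def totient(n: int) -> int:
    # Count how many numbers below n are relatively prime to n,
    # i.e. share no divisors with n besides 1.
    return sum(1 for k in range(1, n) if gcd(k, n) == 1)

print(totient(5))  # 4 -- one less than the prime 5, like Fermat noticed
print(totient(6))  # 2 -- only 1 and 5 make the cut
```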

That’s all you need.

Proof

Shia LaBeouf SNL skit where he wiggles his fingers and says…

This one’s for you, Dr. Wang

Let’s try something, assuming I math correctly. We’ll count by 3’s and see what counting by 3’s looks like modulo 5.

3 \times 1 \equiv 3 \, mod(5)
3 \times 2 \equiv 6 \equiv 1 \, mod(5)
3 \times 3 \equiv 9 \equiv 4 \, mod(5)
3 \times 4 \equiv 12 \equiv 2 \, mod(5)
\vdots
3 \times 7 \equiv 21 \equiv 1 \, mod(5)
3 \times 8 \equiv 24 \equiv 4 \, mod(5)
3 \times 9 \equiv 27 \equiv 2 \, mod(5)

Notice that the first four remainders are just the numbers 1 through 4, shuffled. If we counted by 2, 4, or 7 instead, the remainders would be rearranged into a different shuffle, and the pattern repeats every few steps (with a 0 thrown in whenever we hit a multiple of 5). The important thing to note is that you’re guaranteed a complete shuffling of all 4 numbers, 1, 2, 3, and 4: it’ll never skip one or repeat one. If we take the first 4 and multiply them together, we get:

3^{4} (1 \times 2 \times 3 \times 4) \equiv (1 \times 2 \times 3 \times 4) \, mod(5)

and dividing both sides by 1 \times 2 \times 3 \times 4 (we’re allowed to cancel it because it shares no factors with 5):

3^{4} \equiv 1 \, mod(5)
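
You can watch the shuffle-and-cancel happen yourself (a quick sanity check, not a proof):

```python
# Multiplying 1 through 4 by 3, modulo 5, just shuffles 1 through 4.
remainders = [(3 * k) % 5 for k in range(1, 5)]
print(remainders)          # [3, 1, 4, 2]
print(sorted(remainders))  # [1, 2, 3, 4] -- all present, none skipped
print(3**4 % 5)            # 1 -- exactly what the algebra promised
```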

As long as I choose the modulus (5 in this case) to be a prime number, say p, I can use any number less than p, call it a, and I’ll always have that:

a^{p-1} \equiv 1 \, mod(p)

This is called Fermat’s Little Theorem. It means that I can say that 50 raised to the power of 100, divided by 101, will have 1 left over. 50 to the 100th power has 170 digits. It’s a humongous number. But I didn’t have to do any multiplication to know a little something about how the prime number 101 will interact with other numbers less than it.
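
Don’t take my word for it. Python’s three-argument pow does modular exponentiation without ever building the monster:

```python
print(len(str(50**100)))  # 170 -- the monster, if you insist on building it
print(pow(50, 100, 101))  # 1 -- Fermat's Little Theorem, no big multiplication needed
```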

The requirement that the modulus p is a prime number is pretty restrictive, though. Prime numbers get pretty far apart pretty fast, even though there are a whole bunch of fancy pairings of primes. I want to be able to do this with any modulus n that isn’t necessarily prime. Well, this is where Fermat got put on the bench and Euler stepped in. Following along the lines of the proof of Fermat’s Little Theorem, we notice that for a modulus like 4, counting by threes looks like:

3 \times 1 \equiv 3 \, mod(4)
3 \times 2 \equiv 2 \, mod(4)
3 \times 3 \equiv 1 \, mod(4)
3 \times 4 \equiv 0 \, mod(4)
3 \times 5 \equiv 3 \, mod(4)
3 \times 6 \equiv 2 \, mod(4)
3 \times 7 \equiv 1 \, mod(4)
3 \times 8 \equiv 0 \, mod(4)

Because 4 isn’t prime, a pesky 0 shows up (3 times 4 divided by 4 leaves no remainder), but we see the same pattern of counting over and over, every 4 steps. If we just toss out the remainders that aren’t relatively prime to 4, we’re left with just 1 and 3, since 2 divides 4. There are 2 remainders relatively prime to 4, which is exactly the totient of 4. Using the same idea as Fermat’s Little Theorem:

3^{\phi(4)} \equiv 3^2 \equiv 9 \equiv 1 \, mod(4)

The complete proof requires just a tiny bit of extra work, but in tossing out the remainders that share a factor with 4, we recover what’s called a reduced residue system. The size of that set of numbers is the totient of the modulus.
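
For the skeptics, here’s Euler’s Theorem checked by brute force over every small modulus (a sanity check using my naive totient from earlier, not a proof):

```python
from math import gcd

def totient(n: int) -> int:
    return sum(1 for k in range(1, n) if gcd(k, n) == 1)

# Euler's Theorem: whenever gcd(a, n) == 1,
# a^phi(n) leaves remainder 1 when divided by n.
for n in range(2, 50):
    for a in range(1, n):
        if gcd(a, n) == 1:
            assert pow(a, totient(n), n) == 1
print("Euler checks out for every modulus up to 49")
```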

Are you left with a complete sense of underwhelment?

The automotive embodiment of underwhelment.

Well, with this I can do even bigger math than in the previous example, even faster, in my head. But you must be thinking: Chase… who cares?

You do.

…at least those of you who have ever bought anything with a credit card. And those of you reading this on a computer. It’s not so much that Euler made mental math just a little bit easier; it’s that Euler figured out a way for us to use big prime numbers like keys. These formulas form the foundation of the RSA algorithm.

RSA (and more grown-up versions of it) is used everywhere. Buying an after-market spoiler for your beat-up Honda Del Sol on Amazon cuz you want to impress women with questionable taste in men? RSA. Long story short: if you give me a big giant number, and we both know some big giant number relatively prime to it… do some multiplication and, bam, 1 left over, no matter what. A giant, glaring confirmation that we are both exchanging the correct information.
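
Here’s the arithmetic at toy scale. The primes, exponent, and message below are all made up for illustration; real RSA uses primes hundreds of digits long and a lot more care:

```python
# Toy RSA: a sketch of the arithmetic, absolutely not real cryptography.
p, q = 61, 53             # two secret primes (comically small)
n = p * q                 # 3233, the public modulus
phi = (p - 1) * (q - 1)   # 3120, the totient of n
e = 17                    # public exponent, relatively prime to phi
d = pow(e, -1, phi)       # private exponent (Python 3.8+ modular inverse)

message = 42
ciphertext = pow(message, e, n)    # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n
assert recovered == message        # Euler's theorem guarantees the round trip
print(ciphertext, recovered)
```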

But here come the naysayers. Here come the “what a gross oversimplification” critics. Worst of all, here come the people who say RSA is mostly broken–and totally broken once quantum computers hit the market. Well, even after we enter the Willy Wonka’s Chocolate Factory of quantum computing where magical crap actually happens, lots of devices will still rely on classical transistors for the foreseeable future. It’s not exactly cheap or easy to trap a qubit. See, we also use Euler’s theorem in hash tables and maps–long story short, big prime numbers make it really easy for a computer to look things up in a table without having to search the entire table, much in the same way that we can use them to act like a key. It’s like when you get the corner piece of a puzzle: there are only four places it could possibly go, even though there are probably a thousand places total. Can you imagine watching someone check the middle of the puzzle with a corner piece between their fingers?
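
To make the hash table idea concrete, here’s a toy bucket function (my own illustration, not how any particular library actually hashes):

```python
# Toy hash bucketing: reduce a key modulo a prime-sized table so keys
# spread out across slots instead of piling into a few of them.
TABLE_SIZE = 101  # prime on purpose

def bucket(key: str) -> int:
    # Sum the character codes, then take the remainder.
    return sum(ord(c) for c in key) % TABLE_SIZE

for k in ["tacos", "qubits", "Euler"]:
    print(k, "->", bucket(k))
```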

Cryptocurrency: Fiat or Not?

You may have heard about cryptocurrencies in the news: Dogecoin, Litecoin, and most famously, Bitcoin. You may even own a few. You may even drive a stock car sponsored by a handful of Redditors donating Dogecoin. For those who missed the boat early on when the mining was good (yours truly included), cryptocurrencies are mediums of exchange that make use of the same kinds of cryptography that keep your credit card secure. People and businesses who trade cryptocurrencies take part in a distributed exchange network that validates transactions; the work of the validation, ostensibly, gives and ensures the value of the currency. Weird, right? Or at least different: Bitcoin started out as a proof-of-principle for alternative currencies in a 2008 research paper.


Sweet ride: 1970 Fiat 500

Techopedia logically asserts that cryptocurrencies are what is known as a fiat currency: money that is given and ensured value by a state authority without making use of a tangible commodity as the basis of the currency’s value. Long story short, most gold and silver backed currencies became fiat currencies in the early 20th century, including the US dollar in 1933. Techopedia also observes, however, that cryptocurrency behaves similarly to commodities like gold or silver, with its value, measured against fiat currencies, rising and falling with a high degree of volatility. So what is it? In the traditional sense, it certainly isn’t a fiat currency; it isn’t backed by a state or international authority, but neither is it backed by a tangible commodity. Supposedly its value is given by an implicit social contract between all participants ensuring the network of exchange remains secure. But even if I can be assured that everyone is working to keep virtual doppelgangers of Bonnie and Clyde from stealing all my coins, how do I sleep easy knowing they’ll be valuable enough to purchase the gas for the Dogecoin car?


MFW I’m confused: I become Nicolas Cage

So what gives it value? Joules. No, not the science fiction author. I’m referring to the unit of energy, the joule, named for James Prescott Joule, who famously worked out how mechanical work converts into heat, which means the so-called calories listed on my donut are measurable and probably an overestimate and… anyway, moving on. The act of ensuring that the transactions in a cryptocurrency network are true and secure is energy intensive. Many ‘miners’, the people who perform these calculations regarding the transactions, are already well aware of this fact. But in order to get a broader understanding, let’s follow the life of a single bitcoin. We’ll name him Xavier.

tl;dr Xavier goes on an adventure through the block chain.

Pretend I’m Alice, and I’d like to give Bob custody of Xavier, a bitcoin, in exchange for goods and services. I, Alice, announce to the network that I’d like to give Bob a coin. Alice and Bob have things called public verification keys (known to everyone) and private signing keys (known only to each individual). Other folks on the network see this transaction announced from one public verification key to another, along with a challenge value unique to this particular transaction of Xavier. Folks on the network, who we call ‘miners’, then perform a very difficult calculation (thanks, Euler) to find an answer, so to speak, for when this challenge value, corresponding to the two public verification keys of Alice and Bob, is fed into a special cryptographic hash function. If Charlie the miner finds the correct answer and validates the transaction (announced to the network), then Charlie is awarded a small fee taken from Alice’s transaction to Bob. Khan Academy does a great overview of how the cryptocurrency bitcoin works here, and you can take a look at Bitcoin’s specific documentation starting with a brief overview here.
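
To give a flavor of that “very difficult calculation”, here’s a toy proof-of-work loop (in the spirit of Bitcoin’s mining puzzle, but a made-up miniature, not the real protocol):

```python
import hashlib

def mine(transaction: str, difficulty: int = 4) -> int:
    # Hunt for a nonce that makes the hash of the transaction data
    # start with `difficulty` zero hex digits. The only way is guessing.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{transaction}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # Charlie announces this to the network
        nonce += 1

print(mine("Alice pays Bob 1 coin"))  # typically takes tens of thousands of guesses
```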

So if you don’t want to shell out some 300 USD for 1 Bitcoin (as of today) and risk the volatility of the exchange rate, then mining would seem the best option. Mining itself is lucrative if you have access to sufficiently powerful hardware that can perform these transaction validation calculations. But the hardware isn’t cheap either. As the network of users grows, so too does the pool of available currency (which is eventually capped for Bitcoin, actually), the difficulty of the transaction validation calculations, and the number of these calculations being performed per second across the network.


The Miami School of Floating Point Operation

These calculations can be measured in a cool unit called FLOPS (floating point operations per second), which is used to measure how fast a computer can math. Dividing the number of FLOPS a computer is capable of achieving at peak performance by the maximum number of watts drawn by the power supply gives us the peak FLOPS per joule (operations per watt-second) of the machine. Ultimately, when purchasing large, powerful machines (we’re not talking your home Netflix box), the operating cost of the watts required to run the machine quickly overruns the cost of the machine itself. It didn’t take long for cryptocurrency miners to notice that with exponential growth in the difficulty of the validation calculation (due to how the currency is algorithmically governed) came serious implications for the total energy consumption of the Bitcoin network. Bloomberg published this article a little over a year ago on the very subject. To cut to the Chase, the Bitcoin network alone could consume the energy equivalent of tens of thousands of households, even if it were run solely on lean, green adding machines.
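
For a sense of scale, here’s that arithmetic with numbers I invented purely for illustration:

```python
# Back-of-the-envelope FLOPS per joule (all numbers made up for illustration).
peak_flops = 1.0e15   # a 1 petaFLOPS machine at peak
power_watts = 5.0e5   # drawing 500 kW at peak
print(peak_flops / power_watts)  # 2e9 floating point ops per joule

# The electric bill dominates: at $0.10 per kWh, a year of
# full-tilt operation costs...
kwh_per_year = power_watts / 1000 * 24 * 365
print(f"${kwh_per_year * 0.10:,.0f} per year")  # about $438,000
```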

But this is precisely my point. Watts aren’t free. Energy is the commodity that forms the value of cryptocurrency. Economists define energy as a commodity (like oranges or gold) when it’s in the form of oil or gas, but here we take it in the explicit sense of joules–as in the joules that would be generated from burning one train car load of coal, for example. By this train of logic (all aboard!) cryptocurrencies are clearly not fiat; they’re backed by the intangible end state of tangible energy commodities and very precisely ensured by the total body of participants. (I wonder, then, if these currencies are a medium by which to trade energy and not the object of trade in and of themselves; the problem is that this energy has already been spent–an interesting conversation for another time.)

Nah, man, you’re totally wrong.

One of my classmates pointed out in conversation that one day energy will essentially be free–in other words, limitless. If cryptocurrencies were not fiat, and were backed by gold instead of energy, say, you could imagine that everyone would have an arbitrarily large pile of gold backing up a single Bitcoin in their pocket. I, on the other hand, argue macroscopically: on a societal scale, what seems like limitless energy now will not be limitless when we are actually able to produce it. From our point of view the sun seems like a limitless source of energy, but from the solar system’s point of view it’s just the right amount.

I Googled “anthropomorphized planets” for the purposes of my metaphor and found this page; my Google-fu is strong.

Additionally, different lines of research in the high performance computing community are leaning towards measuring efficiency in bits per joule rather than FLOPS per joule. As computing densities (in terms of watts/meter^3) increase, and with the (hopefully) looming dawn of quantum computation, we’ll face more limitations in managing the heat generated by the I/O of the classically (in the Newtonian sense) observed data (thanks, Heisenberg) coming out of a quantum computer. But this further supports my point–the management of this heat incurs additional energy costs, despite having more FLOPS than you could possibly measure.

I haven’t figured out how to work a “fix it again tomorrow” joke in. Something about quantum computers breaking RSA-derived cryptographic algorithms, maybe.