When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become.
- Bruno Latour
When one has weighed the sun in a balance, and measured the steps of the moon, and mapped out the seven heavens star by star, there still remains oneself. Who can calculate the orbit of his own soul?
- Oscar Wilde
In 2017, several notable milestones in artificial intelligence (powered by neural networks, machine learning, deep learning, genetic algorithms and reinforcement learning) went by, and we mostly just brushed them off, either because we were in America or Europe, where so much utter insanity was taking place that we couldn't spare the attention, or because we were in Africa, where day-to-day survival and our stupidly hot sun will not let us pay attention to the onrushing future. I can't speak for people in places other than those.
In the world of gaming, the AI created by the folks at OpenAI annihilated one of the best human DOTA players in one-on-one combat. I believe the next event will be a team game, five-against-five. I have no doubt the AI will triumph there as well.
In the military sphere, an Air Force colonel and highly skilled fighter pilot got annihilated by an AI pilot in a one-on-one simulator dogfight.
Wait, sorry. That was 2016. My bad.
In the world of strategy tabletop gaming, an AI system called AlphaGo finally beat the best human players at Go, a game so much more complex than chess that for decades after computers started beating us at chess, people were certain no computer could ever master it. Those people have now been proven wrong.

Self-driving cars were not left out of the advancements: Uber logged over 2 million miles of testing in their self-driving car program, and in November 2017 Waymo tested an all-self-driving fleet in Phoenix, Arizona, with plans to expand further in the coming years.
Of all these, however, the coolest and most revealing machine learning/neural network story I came across on the entire internet was a grad student's Twitter post about their final-year project. The assignment was to create a neural network and have it animate a stick figure to race across a finish line as fast as possible.
Do you know what the winning neural network did? Instead of making the stick figure run faster or go quadrupedal, it disassembled him into a tall tower with the head at the top and then just ... fell over, so that the head crossed the finish line. Why? (And by "why" I actually mean "why is this, in my opinion, the coolest machine learning story of the past year?")
To answer that, the word of the day is Algorithmic Opacity.
Do you want to sit in front of a fire -- or do you want to be warm?
- The Danimal, Usenet
What is algorithmic opacity, you might ask? It is the extent to which we cannot understand why a piece of software does what it does. What has become increasingly clear is that fewer and fewer people understand what is going on under the hood of the algorithms that, more and more, are running our world. There are three reasons why this is the case and, in order of increasing scariness, here they are:
1) Intentional: where the creators or owners of the algorithm don't want you to understand why and how their software operates. A dangerous state of affairs, but at least one we can resolve if we're paying attention.
2) Illiteracy: where it's not that they don't want you to know, it's that it's too hard to know if you're not a specialist. Stay in school, kids!
3) Intrinsic: where it's not that they don't want you to know but that nobody knows why the algorithm made its decisions. Nobody. Not even its creators, the coders and programmers and scientists.
For our purposes today, we'll be looking at 3).
Intrinsic Opacity (henceforth referred to simply as "opacity") means that we don't know and can't know why, because the software (or neural network or algorithm) has essentially trained itself past the starting point its creators gave it, mutated itself against the stated problem until it has become that which solveth the problem, that which knoweth the word.
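To make that concrete, here's a tiny sketch (mine, not from any of the references below) of the smallest self-trained thing I can think of: a little neural network that teaches itself XOR by gradient descent. It ends up solving the problem, and the "why" of its solution is nothing but a pile of floating-point weights:

```python
import numpy as np

# Intrinsic opacity in miniature: a 2-4-1 network teaches itself XOR by
# gradient descent. It solves the problem, but its "reasoning" is nothing
# more than the weight matrices printed at the end.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input  -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0  # learning rate

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backprop, squared-error loss
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # ~[0 1 1 0]: it learned XOR
# (XOR training can occasionally stall in a local minimum; rerun with
#  another seed if the outputs aren't close to [0 1 1 0].)
print(W1.round(2), W2.round(2), sep="\n")  # try reading "XOR" off these
```

Scale those two weight matrices up to millions or billions of parameters and you have the modern situation: the answer works, and the explanation is a number soup nobody can read.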
So what does all this mean?
Nanobot: I tell you what ... I won't worry you with that detail, if you'll promise not to worry me with the details of where we're hiding.
Kevyn: I didn't realize ignorance was a currency.
- Howard Tayler, Schlock Mercenary
I have said before that attention is the ultimate currency. The corollary I missed is that ignorance is the paper to its ink, the night to its car headlight, the phase space of all possible roads not taken. The fact that attention is limited by time and space means that every time it is pointed somewhere, it is explicitly not pointed somewhere else.
In this context, by the way, somewhere else is literally everywhere else. That everywhere else is what we call "ignorance."
Hence our stick figure above (seriously, check out Husky Henry's YouTube channel, the kid's amazing). It illustrates the kind of solution that a human mind would be very unlikely to come up with. We are biased in favour of familiarity, of habit, of doing things in the manner that our neurobiology, as limited by physics, has conditioned us to.
The algorithm is biased simply in favour of doing things. It has no real constraining biases (other than those we unconsciously include) because its own biases are absences rather than presences. Our type of ignorance is different from the algorithm's type of ignorance and, in fact, those respective ignorances define the kinds of solutions each of us, brain and algorithm, is congenitally capable of inventing. We, in short, choose to sit next to a fire while the algorithm seeks to be warm.
Being warm can be achieved by sitting next to a fire. It can be achieved by getting naked with the nearest fellow life-form in a sleeping bag. It can be achieved by killing an animal and turning its fur into clothing. It can be done via the Han Solo solution, i.e. by cutting open a Tauntaun and climbing inside its steaming guts. The algorithm turns ignorance inside out and explores the most abnormal solutions because it has no "normal" from which to deviate.

The behaviours of neural networks, and especially their fuckups, are fascinating to us because they so closely resemble the amusing cognitive and epistemological errors that cute animals and human children make, and for the same reason. It's a combined sensation of superiority (with or without smugness) and excitement. Our algorithms aren't mirrors but mimics, not reflections but remixes. They are the missing half of our imagination, venturing into the dark territories that human brains do not know how to visit, to bring back answers that we don't know how to think of.
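In fact, you don't need a physics engine or a real neural network to watch this happen. Here's a toy sketch (entirely mine; the numbers and the "body plan" model are made up, and a dumb random search stands in for the grad student's actual training loop). The fitness function rewards exactly one thing, the time it takes the head to cross the line, and the loophole gets found every single time:

```python
import random

# The finish line is 10 metres away, and the fitness function measures
# ONE thing: how fast the HEAD gets across. Nothing says "stay upright."
FINISH = 10.0      # metres
RUN_SPEED = 1.0    # metres/second while running upright
CRAWL_SPEED = 0.1  # metres/second after toppling over

def head_time(height, topples):
    """Seconds for the head to cross the line: all the fitness ever sees."""
    if topples:
        # Toppling flings the head `height` metres forward in about a
        # second; whatever distance is left gets covered at a crawl.
        return 1.0 + max(0.0, FINISH - height) / CRAWL_SPEED
    return FINISH / RUN_SPEED

def random_genome():
    """A 'body plan': how tall the figure is, and whether it falls over."""
    return (random.uniform(1.0, 12.0), random.random() < 0.5)

# Dumb random search over 5000 candidate body plans, keeping the fastest.
best = min((random_genome() for _ in range(5000)),
           key=lambda g: head_time(*g))
print(best, head_time(*best))
# Reliably a very tall genome with topples=True, finishing in ~1 second,
# while the honest upright runner needs 10. The loophole always wins.
```

Nothing in that code says "stay upright", so for the optimizer, upright does not exist. That absence is the whole point.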
... And yet it played the move anyway, because this machine has seen so many moves that no human ever has.
- Cade Metz, Wired.com
Case in point: consider AlphaGo, the machine that has beaten the best human Go players. The quote above refers to a move it played in the game that was so extraordinary that the human player had to get up and leave the room for a while just to contemplate it fully. It blew his mind because it was a move no human player had ever made, could ever have made.
We wonder what will happen when we create artificial intelligence. This is a mistake, because we're never going to create artificial intelligence; we are going to grow it. And no, not like a bonsai tree but like a child.
We have created things that are not yet minds, not yet vast but definitely cool and unsympathetic, and we rub off on them as they rub off on us. We train the algorithms with the accumulated corpus of our history, our behaviour, our physics, and they in turn train us to use them more and more, cause us to tailor our responses to match their limits (ever notice how clearly you have to enunciate to be understood by a voice prompt menu on the phone? Exactly).
While there are no eyes to look into, we have created masses of code, of stimulus-decision-response loops that intersect and interact with us: our very own created abyss that gazes back into us. Given that, in many ways, we don't even know our fellow human beings as well as we think we do, even our closest relatives, how much more these entities forming in our networks!
In conclusion, we find that the fastest advancements in AI come not from programming but from harnessing the fundamental laws of natural selection and evolution to our service. The funny thing is, this has already happened: it's where our brains came from too. So no, don't worry about the future of AI. Worry about what we're going to do with the ones we already have.
References
Out of the Loop - the article that contained the anecdote that inspired the above. Good stuff, you should totally, like, read it, yo.
The Shallowness of Google Translate - an equally fascinating article by linguist and cognitive scientist Douglas Hofstadter
Three Types of Algorithmic Opacity - exactly what it says on the tin, a primer on the subject.
Why Self-Taught Artificial Intelligence Has Trouble With the Real World - the difference between trained and self-taught machine learning algorithms
How Google's AI Viewed the Move that No One Could Understand - an analysis from Wired on just how AlphaGo used an unorthodox move to blow a Go champion's mind.
Elsewhere
You like this? Then behold (in descending order of preference) the means by which thou might supporteth my ministry:
Steem/SBD: @edumurphy
Dash: XdCpkRRxejck5tfXumUQKSVeWK8KwJ7YQn
ETH: 0x09fd9fb88f9e524fbc95c12bb612a934f3a37ada
LTC: LLeapGkFjT8BNb2JcwunXG3HVVWLY7SaJS
Doge: DNwUsAegdqULTArRdQ8n9mkoKLgs7HWCSX
BTC (sure, if you insist): 1L9foNHqbAbFvmBzfKc5Ut7tBGTqWHgrbi
Cash App: $edumurphy (because fiat works too, I'm not particular)
That failing, you might also/instead enjoy other #steemstem stuff I've written such as:
Assassin Bug: The Spider Slayer
Geospatial Big Data: The Magic of Maps and Money
Darker Than Black: Behold The Superblack Nanomaterial Vantablack