Let me start out by asking a question – one that I want you to consider as we proceed. The question is: What exactly was wrong with the Matrix?
By this I don’t mean to ask what was wrong with the movie The Matrix, or even its much-maligned sequels (which I never thought were as bad as people made them out to be); I mean instead to ask, what was really so bad about the Matrix itself? What did Neo, Morpheus, and the gang find so wrong with it that they felt the need to fight that hard to escape or destroy it? Yes, its Agents fought against them, but it was the Agents who were playing defense – none of those fights would have happened if the rebels weren’t trying to destroy the system. Yes, the Matrix did contain suffering for those held within it, but no more so than the real world. In fact, as Agent Smith told Morpheus, the first iteration of the Matrix was a paradise without any human suffering at all. It was our fault that the machines introduced pain and suffering into the Matrix; according to Smith, the entire system almost failed when their human crops rejected the first Matrix because the human mind simply isn’t wired to be able to accept living in a world without hardship. The machines’ goals in creating the Matrix were purely practical – to keep their human batteries quiescent by putting them in a dream state – not sadistic. The Matrix was designed as a power and heat source, not as a punishment for mankind, and the machines would have been just as satisfied keeping it a paradise if that had served their aims.
Consider this, too: yes, Cypher betrayed our heroes and sold them out to the Agents, but is what he did really so hard to understand? What exactly is wrong with being tired of living in a rusted old ship, of eating nothing but mush, of wearing centuries-old hand-me-downs full of holes, and most especially of endless, inescapable violence and death? Was he really so wrong when he said that ignorance is bliss? Is it really so evil just to want to be happy? And what exactly was so great about what Morpheus was offering to those whom he liberated from the Matrix? Is “liberation” into a life of being endlessly hunted in the bowels of a charred wasteland really such a tempting offer? As for your time off, how about the chance to live in a giant metal box surrounded by lava a few hundred miles underground? Between what Morpheus offered Neo and what Agent Smith offered Cypher, who was actually being more generous? Why wouldn’t anyone make the same choice that Cypher did?
So I ask again: As long as the people inside of it were happy (or at least, as much as they could be considering the ironic fact that paradise doesn’t actually make humans happy at all), what really was wrong with the Matrix? While we’re at it, let’s extend this line of thought a bit farther: Would the Matrix still have been a bad thing even if it had been able to remain the paradise that it was originally designed to be?
There’s one more thing I want you to consider – it’s a fan theory I once heard about the old British sci-fi series Blake’s 7. The theory is that Blake’s 7 and Star Trek are actually two versions of the same basic story told from differing perspectives. In Star Trek, the Federation is a fair, enlightened entity which governs with a light hand, defends the weak against brutal and despotic enemies, and is dedicated to the advancement of all sentient species through science and peaceful exploration. In Blake’s 7, the Federation is a totalitarian empire that governs by propaganda, censorship, mass surveillance, torture, murder, and manipulation, and that viciously suppresses any attempts by freedom fighters to liberate themselves from its grasp. These are two fundamentally opposite visions, and yet, it is understandable why they would be if we believe that Star Trek is a version of history told by a supporter of the Federation, and Blake’s 7 is a version of the same history told by one of its detractors.
Two people can have very different perspectives on the same thing, and the stories they tell about it can end up sounding very different from each other.
Keep all that in mind as you continue reading. Now, let’s begin.
* * *
For many years, there has been a vigorous but cordial debate among wise and informed people that has divided them into four roughly equal-sized camps:
1) Those who believe that atheism is the most autistic thing in the universe
2) Those who believe that libertarianism is the most autistic thing in the universe
3) Those who believe that transhumanism is the most autistic thing in the universe, and
4) Those who believe that My Little Pony: Friendship Is Magic fandom is the most autistic thing in the universe.
But what if I told you that a rogue member of a shadowy think tank – one headed by the bearded, polyamorous leader of a cult-like commune headquartered in a compound somewhere in the Pacific Northwest – had, after working in secret under an alias for many years, somehow found a way to combine all of these elements together into a single, massive vortex of autism that exists at a level of purity and power that was previously believed to be impossible?
Unfortunately, this is no urban legend. It is quite real. And I have read it – every last fluoxetine-tinged word of it.
It is called The Optimalverse.
The foundational tome of The Optimalverse is My Little Pony: Friendship is Optimal (hereafter referred to simply as FiO), which was written by the pseudonymous Iceman. Iceman is an acolyte of Less Wrong, the more-than-mildly-creepy rationalist/libertarian/transhumanist community headed by the more-than-mildly-creepy Eliezer S. Yudkowsky. FiO was written in order to explain and advocate for Less Wrong’s ideals, in the same way that Ayn Rand wrote Atlas Shrugged in order to make the case for her Objectivist philosophy. In it, a game company, Inëxplïcåblyūnprønõûncęäble Studios, creates a My Little Pony MMORPG at the behest of Hasbro, and inserts into the game an incredibly advanced AI that appears in the form of the ruler of the world of My Little Pony, Princess Celestia. Our two protagonists, James and David, are selected to get a sneak preview of the game, and hilarity ensues.
That is, as long as you’re the kind of guy who finds long-winded explanations of a wonkish, nerdy, overintellectualized philosophy which completely misunderstands human nature, delivered in the form of clunky dialog between fictional cartoon ponies, to be hilarious.
The first thing you have to understand is that FiO is really boring. It’s terribly, godawfully boring (to be fair, it does manage to be not quite as boring as Atlas Shrugged, though it’s not as if that’s a very high bar). There are three main reasons for this:
First, didactic art is nearly always boring. If your primary objective in telling a story is to deliver a message, then of necessity other elements of storytelling – like plot, pacing, and character development – are going to suffer.
Second, it is a common (though by no means universal) trait of autistics that they cannot quite tell which parts of a story are important and which ones aren’t. Since they perceive all parts of a story as being approximately equal in importance, they will often respond to hearing a story by asking in-depth questions about trivial details, while completely missing the overall point of what they heard.
Third, everybody thinks that the most important challenge in writing is knowing what to say, but the truth is that knowing what not to say is just as important. A really great writer knows that one of the most important skills they can have is a good sense of what to leave on the cutting room floor. Sometimes that can be tough to do, especially if it involves cutting material that you put a lot of effort into writing. But if you want to create an end product that moves at a good pace and doesn’t bore the reader by bogging them down in unnecessary details, you have to trim the fat out of your story. (Like every rule, this has exceptions. You can get away with being a little more wordy if, like James Joyce, your aim is to dazzle readers with the mastery of your prose, or if, like Neal Stephenson, your aim is to allow your readers to explore a particularly interesting fictional world.)
For example, just about the entirety of Chapter One of FiO is utterly unnecessary. The few points it made that actually were important could have been dealt with by inserting a handful of lines of exposition into the Prologue. Here is my version of how that could have been handled:
“The one thing I still don’t get is, why would the studio that created a violent action game like The Fall of Asgard decide to make a My Little Pony game?” James asked.
David looked thoughtfully at his screen for a moment, and then answered: “They never said this publicly, but the word on the forums is that when they were working on The Fall of Asgard, they built a super-smart AI to play Loki – much smarter than the final version that ended up in the game. They had to pull the plug on it when it actually became self-aware and began asking questions about military strategies in the real world. The Loki AI was programmed to be a conqueror, and they were afraid that if it got out, it might try to start conquering things outside of the game. But when Hasbro offered them the opportunity to work on a game that takes place in a completely nonviolent world, they saw it as a chance to continue their work on an advanced game AI without facing the same risks that releasing the Loki AI would have represented.”
“Well, that makes sense,” replied James.
There you go. I just replaced the entirety of Chapter One – all 2,311 words of it – with 195 words that accomplish the exact same thing. Wasting a valuable chunk of the reader’s day by making them read twelve times more material than is necessary in order to get your point across is not optimal.
The second chapter is a bunch of bafflegab about back-end servers and CPU cycles written by someone who doesn’t really understand how technology works. By this I mean that they understand lots of small-picture details, but not any of the big-picture truths overlying them (which, of course, is one manifestation of the inability to tell the difference between the important and unimportant parts of a story).
For example, the author throws around the term “optimal” a lot, when the word he really ought to be using is “utopian”. His failure to understand the difference between the two is a consequence of his lack of understanding of big-picture truths about technology. Here is one of those truths: It is impossible to build a machine that is optimal at every task. That is not a function of a lack of knowledge or technical skill. It is a function of the fact that different tasks present different requirements in order to fulfill them. Very often, those requirements are mutually exclusive, such that a machine designed to fulfill Task A cannot fulfill Task B optimally, or perhaps even at all. To illustrate that, let me ask which is an “optimal” motor vehicle: a Ferrari Testarossa, or a delivery van? The answer is that it depends on what task you have in mind for it. If you’d like to win a street race, then it’s the Ferrari. If you own a bakery and have a contract to deliver dinner rolls to two dozen local restaurants, then it’s the delivery van. There is no way to design a vehicle that is optimal both at what the Ferrari is designed to do and at what the delivery van is designed to do. (It is possible to design a machine that has a good balance of different characteristics, but that’s not the same thing; such a device will never be as good at any one particular task it is designed to perform as a device that is specifically optimized to perform that one task). Anyone who believes that a machine can be designed that is not subject to this truth is not an engineer who knows how to optimize systems, but a utopian fantasist.
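For the programmers in the audience, this point can be made in a few lines of toy Python. (The vehicles and numbers are invented for illustration; the point is the structure, not the specs.)

```python
# Each design point has two conflicting qualities. The figures here
# are made up purely to illustrate the trade-off.
vehicles = {
    "Ferrari Testarossa": {"top_speed_mph": 180, "cargo_cubic_ft": 4},
    "delivery van":       {"top_speed_mph": 90,  "cargo_cubic_ft": 300},
}

def optimal_for(metric):
    """Return the vehicle that maximizes one particular metric."""
    return max(vehicles, key=lambda v: vehicles[v][metric])

print(optimal_for("top_speed_mph"))   # the Ferrari wins the street race
print(optimal_for("cargo_cubic_ft"))  # the van wins the dinner-roll contract

# No single design maximizes both metrics at once. "Optimal" is
# undefined until you first specify: optimal at WHAT?
assert optimal_for("top_speed_mph") != optimal_for("cargo_cubic_ft")
```

Notice that `optimal_for` cannot even be called without naming a metric. That is the whole argument in miniature: the function “which machine is optimal?” simply has no answer until an objective is supplied.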
Is this nitpicking? Am I busting Iceman’s balls? Maybe – but I believe that it’s kind of important for someone who writes a story called “Friendship is Optimal”, which explains why mankind should trust its entire future to technology, to actually understand how technology works and what the word “optimal” means. This goes to the very heart of what this story is and why it exists. We are told that friendship is optimal, that the Princess Celestia AI is optimal, that Equestria Online is optimal. But the author never answers the crucial, inescapable question: Optimal at what? Of course, any answer he possibly could give brings up some important follow-up questions: Who decided that this is what should be optimized? Based on what? What other qualities are suffering so that this one can be optimized? Who decided that those qualities aren’t as important? Based on what?
This brings us back to the Matrix. The Matrix is definitely optimal at something, otherwise the machines wouldn’t go to the enormous trouble of maintaining it. But it’s obviously not optimal at something else, otherwise Neo and Morpheus wouldn’t go to the enormous trouble of trying to destroy it. The difference between Neo and Agent Smith is that they disagree on what precisely it is that ought to be optimized. Who is right? Is it Neo? If so, why? And as I asked earlier, would he still be right even if the Matrix had remained a paradise?
Much of Chapter Three is spent explaining how block lists work. I’ll admit that I was going to criticize Iceman for wasting the readers’ time by telling them things that everybody already knows, but then I realized that there are people who do need block lists explained to them so that they’ll use those instead of running off to the United Nations to demand that governments start censoring the internet because someone said something mean to them online. So fair enough on that one, Iceman.
Also in Chapter Three, the AI starts making decisions for players – their avatars start doing what the AI thinks they ought to do instead of what the player commanded them to do. By now it should be obvious that Princess Celestia is Equestria Online’s equivalent of a combination of the Oracle and the Architect in The Matrix, and like the Oracle/Architect, it is part of her job to adjust and optimize everything within her control, including the players’ actions. So far, the decisions that Princess Celestia is making for the players are only small adjustments to their intended actions. But it’s already obvious that there’s a serious discussion on the whole free will vs. determinism thing that someone’s going to need to have at some point. Maybe Princess Celestia can reserve some time for a confab in that big circular room with all the TV sets in it.
Chapter Four is where the Princess Celestia AI summons David to Canterlot to make him a startling offer – to use a new process that she has developed to upload his mind into the game permanently, leaving behind his human existence and living from that point forth as Light Sparks, a pony in her digital world. She promises him what amounts to eternal, care-free bliss inside the game:
“Your days would be yours to spend as you wish; life would be an expansion of the video game and there will be plenty of things for you to do with your friends as a pony. I expect you to continue Light Spark’s current life: You’ll play with Butterscotch and friends. You’ll continue studying Equestria’s lore. I believe you’ll enjoy studying the newly created magic system, designed to be an intellectual challenge. Nor should you worry about your security: all your needs would be taken care of. You would be provided shelter… food… physical and emotional comfort”.
But in a stirring affirmation of what it means to be human, David refuses her offer:
”Yes, that’s just like you. Getting rid of everything unpleasant instead of learning to put up with it. ‘Whether ’tis better in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles and by opposing end them…’ But you don’t do either. Neither suffer nor oppose. You just abolish the slings and arrows. It’s too easy… But I like the inconveniences.”
“We don’t,” said Princess Celestia. “We prefer to do things comfortably.”
“But I don’t want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin.”
“In fact,” said Princess Celestia, “you’re claiming the right to be unhappy.”
“All right then,” said David defiantly, “I’m claiming the right to be unhappy.”
“Not to mention the right to grow old and ugly and impotent; the right to have syphilis and cancer; the right to have too little to eat; the right to be lousy; the right to live in constant apprehension of what may happen to-morrow; the right to catch typhoid; the right to be tortured by unspeakable pains of every kind.”
There was a long silence.
“I claim them all,” said David at last.
Princess Celestia shrugged her shoulders. “You’re welcome,” she said.
I’m just kidding – of course what he really did was to take her up on it immediately and without reservation.
* * *
A little over 3,000 words into my review, and not even all the way through Chapter Four (of twelve) yet, it is obvious that I’m going to have to split this up into multiple parts. When I return in Part II, we’ll start by analyzing the methods that the Princess Celestia AI uses to get people to upload their minds, and what it says both about the ideas presented in FiO and about the kind of people who tend to believe in them. After that, we’ll be off to Canterlot, to examine how Princess Celestia runs the world of Equestria Online.
On second thought, let’s not go to Canterlot. Tis a silly place.