Contra Scott on Moloch


Scott Alexander's Meditations on Moloch is a very intellectually stimulating essay on co-ordination problems and what he calls 'multipolar traps'. It's mainly about the notion that "in a sufficiently intense competition, everyone who doesn't throw all their values under the bus dies out", i.e. that since the competition is optimising for some value X, all other values besides X will be sacrificed for even the smallest gain in X: as soon as one competitor sacrifices such a value, all the others must do so as well to remain competitive.

Scott correctly notes that choosing to submit to Moloch / free Cthulhu from his watery grave / sacrifice another value for an edge in X is the wrong choice, but then argues that this choice will be irresistible to someone, and once they've taken it the rest of us won't have a choice and everything ends in a race to the bottom and a cloud of sexless hydrogen something something. His solution? Create a benevolent superintelligence that can order us around for our own good.

At least he's avoided the usual trap, which is to say 'let's create an all-powerful government that can order us around for our own good'; he recognises that governments have their own internal logic: "as soon as there’s the slightest disconnect between good policymaking and electability, good policymaking has to get thrown under the bus." Democracy, too, is a system to harness Moloch and turn him to our ends, and the bonds are, in fact, far weaker than those of capitalism (since politics is a gigantic externalities-and-public-goods problem even when it's dealing with goods and costs that could easily be private).

But the benevolent-superintelligence solution also has problems, because a superintelligent AI is an optimiser too. Even if Eliezer Yudkowsky manages to solve all the hard problems and come up with a mathematical description of coherent extrapolated volition, and even if he manages to convince the AI researcher who creates the Singularity to build in CEV as its objective function… you now have a society with a single goal, and any attempts to pursue any other goals will be paperclipped with extreme prejudice. Unfortunately, individual humans have different goals to one another, and forcing them all to pursue only Society's One Goal is not conducive to maximising human utility (this is but one of the many reasons why the secular-messianic-rationalism strand of socialist totalitarianism is a dismal failure that generally ends in Stalinism).

'Aha!', you say, 'but the true Scotsman CEV would allow for multiplicity of goals.' Ok, so you have an optimisation process external to humans being used to satisfice human goals, and however it aggregates those goals, there will be some means of resolving competition between competing goals, and now X is that-which-the-Machine-promotes, and we are all forced to sacrifice all values for advantage in getting-the-Machine-to-prioritise-our-values (which is not even reflectively consistent, because once we've sacrificed our values there's nothing left to be prioritised, but who ever said human behaviour was consistent?) and hello Moloch.

You can't have multiple individuals free to pursue their own individual goals and values without giving Moloch a way in (what do you think 'multipolar' means?), and you can't not have that freedom without a priori sacrificing everyone's values to whatever the Central Co-ordinator (be that Stalin or an AI) chooses as X. There is no escape from Goodhart's Law.

But let's step back a bit. The whole "Moloch will destroy everything unless we kill him first" argument rests on the assumption that, fundamentally, all incentives are perverse: that it will always be possible to 'route around' our terminal values and just satisfy the instrumental values; that Goodhart's Law is a knife through the heart of multipolar co-ordination mechanisms. But not all co-ordination mechanisms are created equal: some have narrow technical measures (which, per Goodhart, make bad targets), while others have broad and fuzzy measures. Most people don't defect in the Prisoner's Dilemma, even though it takes a Yudkowsky to come up with a decision theory that doesn't say they should (and also doesn't say they should two-box on Newcomb's problem, which again most people don't). Can we harness, not just people's imaginative competitiveness, but their goodness, to ensure that they optimise in co-ordinated ways?
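For concreteness, here is the standard one-shot Prisoner's Dilemma in miniature (the payoff numbers are the textbook illustrative ones, not anything from Scott's essay). It shows exactly the gap in question: defection strictly dominates, yet mutual co-operation beats the mutual defection that dominance reasoning delivers.

```python
# One-shot Prisoner's Dilemma with the usual illustrative payoffs.
# PAYOFF[(mine, theirs)] = my payoff; 'C' = co-operate, 'D' = defect.
PAYOFF = {
    ('C', 'C'): 3,  # reward for mutual co-operation
    ('C', 'D'): 0,  # sucker's payoff
    ('D', 'C'): 5,  # temptation to defect
    ('D', 'D'): 1,  # punishment for mutual defection
}

# Whatever the other player does, defecting pays strictly more...
for theirs in ('C', 'D'):
    assert PAYOFF[('D', theirs)] > PAYOFF[('C', theirs)]

# ...yet both players end up better off under mutual co-operation than
# under the mutual defection that dominance reasoning produces.
assert PAYOFF[('C', 'C')] > PAYOFF[('D', 'D')]
print("defection dominates, but (C, C) beats (D, D)")
```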

I believe that we can, and that the critical enabling technology has already been discovered. I believe that Scott is wrong to identify Mammon as one of the faces of Moloch. People do not, in fact, throw their values under the bus for an increase in profits, because your values (I mean your real, revealed-preference values, not the ones you profess to hold) are precisely what you will pay money (i.e. accept opportunity costs elsewhere) to protect. If, to take one of Scott's examples, "the coffee plantations are on the habitat of a rare tropical bird", then if your market system is working properly, the plantations will only happen if the coffee is worth more than the bird. (This is, I think, the strongest of Scott's examples, because it's one that's a real externality under our current system, whereas his others are naturally internal — poisonous coffee only makes its customers sick, lower wages for coffee planters only affect workers who agree to work for the coffee company — up to pecuniary externalities, which (by definition) cancel, so can be ignored.)

Of course, our current system fails to internalise all externalities; neither the coffee plantation nor its customers have a legal obligation to compensate the bird-lovers, so they are not forced to account for that cost in making their decision. But the Coase Theorem tells us that externalities can be internalised by contracting, as long as property rights are clearly defined — indeed, to first order, it doesn't matter what the defined property rights are: if every mutually-beneficial contract gets made, then any initial definition of property rights leads to an efficient outcome; the only difference the initial rights make is who has to pay whom to sign the contract.
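To make that concrete, here's a toy sketch of the theorem at work on the coffee-and-bird example (the dollar figures are mine, purely illustrative):

```python
def coase_outcome(coffee_value, bird_value, rights_holder):
    """Land use under frictionless bargaining between a coffee company
    and organised bird-lovers.  With zero transaction costs the land
    goes to its highest-valued use regardless of who starts with the
    rights; the initial rights only decide who pays whom."""
    use = 'plantation' if coffee_value > bird_value else 'habitat'
    winner = 'company' if use == 'plantation' else 'bird-lovers'
    if winner == rights_holder:
        payment = 'no payment needed'
    else:
        # The higher-valuing side buys the rights-holder out, at some
        # price between the two valuations.
        lo, hi = sorted((coffee_value, bird_value))
        payment = f'{winner} pay(s) the rights-holder ${lo}..${hi}'
    return use, payment

# Illustrative numbers (mine, not Scott's): the bird is worth more.
for holder in ('company', 'bird-lovers'):
    print(coase_outcome(coffee_value=80_000, bird_value=100_000,
                        rights_holder=holder))
# Both runs choose 'habitat': the allocation is identical either way;
# only the direction of the side payment changes.
```

Flip the numbers so the coffee is worth more and the plantation happens under either assignment — which is exactly the 'worth more than the bird' test above.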

But note that 'to first order' caveat. In the real world, not every mutually-beneficial contract gets made, because of transaction costs. These include not just the obvious — lawyers' fees for drafting the contract, the cost of a sheet of legal-size paper to write it on, the parties' time spent signing it — but also bargaining costs: each party has an incentive to hold out for a better deal, a bigger slice of the pie (the surplus created by the deal), and this can prevent reaching an agreement when there are many parties and/or a small surplus, because if each party slightly underestimates how big a slice everyone else will settle for, they all demand slightly too much and the total overshoots the available surplus.
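A toy simulation of that hold-out failure mode (all modelling assumptions mine: parties split a fixed surplus, and each demands its equal share scaled by a noisy, slightly greedy estimate of what the others will accept):

```python
import random

def deal_closes(n_parties, surplus=1.0, bias=0.01, noise=0.05):
    """One bargaining round.  Each party demands its equal share of the
    surplus, scaled by a noisy estimate that errs slightly on the
    greedy side (i.e. it underestimates what the others will settle
    for).  The deal closes only if the demands fit inside the surplus."""
    share = surplus / n_parties
    demands = [share * (1 + random.gauss(bias, noise))
               for _ in range(n_parties)]
    return sum(demands) <= surplus

def closing_rate(n_parties, trials=10_000):
    return sum(deal_closes(n_parties) for _ in range(trials)) / trials

for n in (2, 10, 100, 1000):
    print(f"{n:>5} parties: {closing_rate(n):.1%} of deals close")
# A 1% average over-demand is nearly harmless between two parties but
# kills almost every deal once hundreds must agree: the noise averages
# out across parties while the bias accumulates.
```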

One solution to this is to try to define property rights in such a way that, while they might not always start out in the hands of the highest-valued user (because the highest-valued use can differ between situations that look identical to the law), they can be efficiently transferred to the highest-valued use. Part of that is the 'clearly defined' bit, which means that having a government that gets to override your property rights when it thinks you've made the wrong choice is a BAD IDEA, because now you've created uncertainty about who actually owns the right.

Another solution is for parties at the bargaining stage to agree on Schelling points. For instance, in Scott's favourite core-objection-to-libertarianism argument (the one with the fish farms), the Coasian solution is for an entrepreneur to buy at least three-tenths of the lake (possibly using capital raised by selling shares to the fish farmers), the transaction cost problem is each farmer holding out for a bigger share, and the Schelling point is for them to all pay/get the same amount because the problem setup is totally symmetrical. In reality this is usually harder because the conditions of the problem aren't known with exact certainty, which is why real-world Schelling points can generally be characterised as places where the cost of rights enforcement rises discontinuously.
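For the record, the arithmetic behind 'three-tenths of the lake', using the numbers from Scott's setup as I recall them (1,000 farms, a $300/month filter, and each unfiltered farm costing every farm on the lake $1/month in lost productivity):

```python
FARMS = 1000          # farms on the lake (Scott's setup, as I recall it)
FILTER_COST = 300     # $/month to run one farm's waste filter
DAMAGE_PER_FARM = 1   # $/month each unfiltered farm costs EVERY farm

def filtering_pays(farms_owned):
    """Is it privately rational for an owner of `farms_owned` farms to
    filter one of their own farms?  Each unfiltered farm they run costs
    them DAMAGE_PER_FARM on every farm in their holding."""
    private_damage = DAMAGE_PER_FARM * farms_owned
    return private_damage > FILTER_COST

print(filtering_pays(1))    # False: a lone farmer eats $1 of a $1000 harm
print(filtering_pays(300))  # False: exactly break-even, still not worth it
print(filtering_pays(301))  # True: just over three-tenths of the lake
# The full $1000/month of social damage always exceeds the $300 filter,
# but nobody internalises enough of it until a single owner (or a
# shareholders' agreement binding the same fraction) holds >300 farms.
```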

But the really important point to notice here is that technology reduces transaction costs. Indeed, the entire long boom of the Industrial Revolution and bourgeois capitalism can arguably be attributed, not to proximate causes like 'steam engines increasing both supply and demand of coal', but to improvements in the technology of trade (including both the material — container shipping — and the bargaining — clear property rights and free contract) that enabled better goal alignment between humans and Moloch.

Scott seems to think that increasing technology is problematic because it gives us new ways to sacrifice to Moloch. He does — I have to give him credit here — bring up 'technology can improve our co-ordination ability'. But then he dismisses it with "coordination only works when you have 51% or more of the force on the side of the people doing the coordinating", which is a classic statist's error. A majority of force is a tool for control (though if you're better co-ordinated than the masses, you can control them with a lot less force than they have in total, until the day they manage to co-ordinate and stick your head on a pike). But to co-ordinate — to co-operate — you only need enough people sane enough to co-operate on the Prisoner's Dilemma that their pool is valuable enough to give everyone else an incentive to co-operate too.

Given how much more wealth free markets have produced than the alternatives, I think by now there's strong evidence that getting into positive-sum games is in an individual's rational self-interest; the way to defeat Moloch is to make more people sane enough to realise that (which is why when you go around saying that 'rationally self-interested fish farmers will always pollute the lake' you are NOT HELPING).
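A toy Axelrod-style sketch of that 'pool' claim (the model and all its numbers are mine, not Scott's): in a repeated Prisoner's Dilemma, once the pool of conditional co-operators passes a modest threshold, joining it out-earns unconditional defection — no 51% of force required.

```python
# Per-match payoffs over a 10-round repeated PD with the standard
# one-shot payoffs R=3, S=0, T=5, P=1 (illustrative numbers, mine).
ROUNDS = 10
TFT_TFT   = 3 * ROUNDS            # two tit-for-tats co-operate throughout
TFT_ALLD  = 0 + 1 * (ROUNDS - 1)  # TFT is suckered once, then punishes
ALLD_TFT  = 5 + 1 * (ROUNDS - 1)  # defector exploits once, then gets P
ALLD_ALLD = 1 * ROUNDS            # mutual defection throughout

def expected_payoffs(coop_fraction):
    """Average per-match payoff for each strategy when a fraction
    `coop_fraction` of a large, randomly-matched population plays
    tit-for-tat and the rest always defect."""
    f = coop_fraction
    tft  = f * TFT_TFT  + (1 - f) * TFT_ALLD
    alld = f * ALLD_TFT + (1 - f) * ALLD_ALLD
    return tft, alld

for f in (0.0, 0.05, 0.10, 0.50, 0.90):
    tft, alld = expected_payoffs(f)
    print(f"co-operator pool {f:.0%}: TFT earns {tft:.1f}, "
          f"defector earns {alld:.1f}")
# With these numbers the co-operators out-earn the defectors once the
# pool passes about 6% of the population (f > 1/17) -- nowhere near
# '51% of the force'.
```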

So by all means, try to make sure that AI will be Friendly. But a necessary condition for Friendliness is respecting our sovereignty, and not using its overwhelming force or skill at manipulation to drive us down a particular path 'for our own good'. If you can square the circle of combining that with the AI being an optimising agent, good luck to you; if you can't, maybe you shouldn't build the AI just yet.

And in the meantime, stop claiming that Mammon is a face of Moloch so we have to hurry up and build the AI; Mammon is not a trick Moloch is playing on us, it is a trick we are playing on Moloch, and he is using all of his tricks (cognitive biases, Malthusian desire-bugs, government) to try and stop us. Yes, the free market isn't perfect: if we eliminate government we'll still have cognitive biases, so utopia is not an option. But since government is more susceptible to the biases than the market is, eliminating government is still worthwhile, and won't lead us to burn down all of civilisation in a Hobbesian warre of all against all. Trust me on that one.

Want more Moloch musings? Read If The Robots Stole All Our Jobs.
