Explaining the AI race vis-à-vis poorly understood concepts of game theory and world history
I’ve been watching recordings of a game theory class offered at Yale. In the first session, the professor swore up and down that you didn’t need to be good at math to take the class, and by the third session he was doing calculus on the blackboard. So it’s safe to say that I’m not fully understanding it. I think I get the gist, though.
The lectures are making me look at things that are happening around the world in a new way. For example, one of the earliest lessons was that you need to put yourself in the other party’s shoes to understand their possible incentives and payoffs. It occurred to me that our fearless leaders failed to do this when planning, or not planning, to attack Iran. Everything that has happened since the first bomb dropped has been a total surprise to them. Who would have ever thought Iran would blockade the Strait of Hormuz?
The main takeaway through five classes has been the explanation of two types of games: one in which there's always a single best strategy to play no matter what the other players do (a dominant strategy), and one in which there may be multiple strategies that are best for all parties, but only if all parties play the same. The latter is called a Nash equilibrium, discovered by Russell Crowe in 2001. One of the examples given in the class is an investing game in which the only really good outcomes are the ones where everyone invests and therefore everyone wins money, or no one invests and therefore no one loses money. Either one is ok, but if 50% of people invest and 50% don't, then someone gets boned.
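The investing game can be sketched in a few lines of code. The payoff numbers below are mine, not the lecture's; the point is only the shape of the game. A Nash equilibrium check just asks: could either player do better by changing their move alone?

```python
# A two-player sketch of the lecture's investing game, with illustrative payoffs:
# everyone invests -> everyone wins; no one invests -> no one loses;
# a lone investor gets boned.
from itertools import product

# payoffs[(my_move, other_move)] = my payoff (made-up numbers)
payoffs = {
    ("invest", "invest"): 1,
    ("invest", "stay_out"): -1,
    ("stay_out", "invest"): 0,
    ("stay_out", "stay_out"): 0,
}

def is_nash(a, b):
    """(a, b) is a Nash equilibrium if neither player gains by deviating alone."""
    moves = ["invest", "stay_out"]
    a_best = all(payoffs[(a, b)] >= payoffs[(alt, b)] for alt in moves)
    b_best = all(payoffs[(b, a)] >= payoffs[(alt, a)] for alt in moves)
    return a_best and b_best

equilibria = [p for p in product(["invest", "stay_out"], repeat=2) if is_nash(*p)]
print(equilibria)  # both "everyone invests" and "no one invests" qualify
```

Sure enough, there are two equilibria, and the mixed outcomes (one invests, one doesn't) are not among them.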
The interesting thing about the Nash equilibrium seems to be that it's still the right strategy even if the ultimate outcome is not so good. Here I started thinking about World War I. (Well, I'm always thinking about World War I.) Part of the reason the fighting broke out so quickly and with such ferocity could be explained by a Nash equilibrium. Limiting the discussion to Germany and France, each country had two choices: mobilize for war, or stand down. If both countries stood down, there would be peace. If one country stood down while the other mobilized, it would be catastrophic for the one that stood down. And if both countries mobilized, although war would be inevitable, at least neither would be caught off guard. It's therefore completely rational that both countries would mobilize, even though a general war was not a desirable outcome.
And now we come to what this post was supposed to be about: AI. I don’t need to tell you that the spending levels are truly looney tunes and they seem, on their face, to be irrational. How could it make sense to commit more spending to building data centers than currently exists in revenue? How could it make sense to subsidize token use to such a massive degree? How could it be a smarter business decision to lay off employees, who are at least providing a realized return, in favor of speculative investments on data centers, GPUs, and services to be provided later?
It only makes sense if these companies believe that there can be just one winner. At the end of the day, they're convinced there will be one AI compute company, one hyperscaler, one GPU manufacturer. This might even be true. And if it is, then it actually makes perfect sense to spend literally any amount to ensure that your company is the one that's left standing. If anyone else is outspending you, then your company will go out of business, and it doesn't matter that your own outspending may put you out of business anyway. It's the only play. That's a Nash equilibrium.