Contested Ground
Keynes, Centaurs, and the future of white-collar work
I once spent close to 48 hours in an office without going home. Our client was selling a copper mine in South America and by that point there were perhaps a hundred highly paid professionals working through successive nights to make it happen. Deal counsel, local counsel, company counsel, bankers, accountants, consultants, executive teams. Two sides, multiple jurisdictions, and so on.
We were not working all night because that was the optimal number of hours to spend on the problem. Quite the opposite. I was a corporate lawyer, and I would estimate that a third of my time during those 48 hours was spent waiting for comments from someone else. The rest was work that an engaged 16-year-old could have done with sufficient instruction. The documents were not complicated because they required rare genius. They were complicated because the transaction had no right answers, only wrong ones, and then below the wrong ones a vast contested space where every provision was a negotiation. We were not trying to finish. There was no finishing. We were trying to extract the maximum possible advantage from a situation that would remain contested until the moment it closed and immediately reconstitute itself in the next transaction.
The reason our client hired a hundred professionals was not that a hundred professionals were the minimum required to execute the task. It was that this was an important transaction and important transactions reward firepower. I am certain that if the professionals had been cheaper, they would have bought more of them, not fewer.
I thought about that transaction recently when Dario Amodei used the Centaur analogy from chess: human plus AI outperforming either alone, but only as a transitional phase before AI pulls ahead entirely. It’s a compelling frame but not one that I recognize from those hours on the copper mine.
Chess ends. There is a winner and the winner is verifiable. Deep Blue doesn’t need to persuade Kasparov that he’s lost because the board tells him. The Centaur phase was real: for a period, the best human with AI could beat the best AI alone. That period is over. The best AI now plays better chess than any combination of human and machine, because every position has a fact of the matter and the machine finds it. Dario’s contention is that software engineering is in the Centaur phase now, and that the rest of knowledge work will follow. He is probably right about software engineering, for the same reason he’s right about chess. Code compiles or it doesn’t. Tests pass or they don’t. The signal is outside the players.
The warranty schedule I was working on was not like that. It had been drafted weeks before closing, commented upon, and disclosed against. Standard work, the kind AI handles capably already, probably better than the junior associate who first drafted it. By the time we were on those late-night calls, all of that was done and we were arguing about the final points.
Warranties are risk apportionment. These final points weren’t wrong answers waiting to be corrected, they were positions. Each side wanted the other to take more risk. The stakes weren’t high enough to escalate, so it fell to junior lawyers to grind it out, call by call, until we were told to stop. There was no external signal to consult. The answer was being produced by the people on the call, and it would only exist once they’d made it.
The bots will agree the technical points faster and to a higher standard than the junior lawyers ever did. That is unambiguously good. It means getting to the warranty schedule’s final points sooner, with cleaner drafts, with fewer errors to unpick. And then the junior lawyers will be on the late-night call anyway. Because those final points aren’t verifiable. There is no signal outside the room to optimize against, only the signal the players produce in real time.
You could build a negotiator bot, one that haggles over warranty provisions faster and more strategically than any junior lawyer. But if both sides have one, the contested ground shifts to whoever instructed the bot and what mandate they were given. The haggling doesn’t disappear. It moves. And if the verifiable work gets cheaper, the clients don’t spend less. They spend the savings on more contested ground.
Keynes made the same mistake in 1930. Industrial inputs were getting cheaper, machines were replacing labour, and he extrapolated forward: within a generation, people would work fifteen hours a week and spend the rest in leisure. He was wrong by such a margin that the prediction reads as comedy now.
The obvious response is that this time is different, not because society won’t advance, but because all the advancing is verifiable and so will be done by AI. Fine, you might not be able to hand the haggling over copper mine warranties to a bot, but who cares, because it will be out here curing cancer, building space stations, finding supermaterials to replace copper. The contested work is a rounding error on the upside.
But that compounds the Keynes error rather than escaping it. There is no theoretical limit on the number of contestable tasks human beings can dream up. And their relative value won’t just hold steady as AI does the verifiable work, it will rise, because the productivity surplus from an AI boom has to go somewhere, and contested ground is where it goes. This is what happened to services in the industrial economy Keynes was imagining. He saw the substitution and missed the reconstitution.
And beyond that, technological progress tends to create problems that the same generation of technology cannot solve. Tell an agronomist in 1850 that by the 2010s overeating would be one of America’s greatest health crises, and that reengineering human metabolism would seem the answer, and he would have thought you mad. Caloric abundance didn’t solve the problem of scarcity and leave nothing in its place. It created a new category of problem, one that couldn’t have existed before the abundance arrived. AI-generated abundance will do the same. We don’t know what the cognitive equivalent of obesity is. We know it follows from the structure of what’s happening.
Dario has given us the appealing image of a nation of geniuses working in data centres on the hard problems: cancer, poverty, the scientific frontiers that have been stalled for decades. That future seems both attractive and likely. But industrial abundance gave us penicillin and also the McDouble. Both came from the same cost collapse. Neither was predictable from the other. The nation of geniuses will cure diseases and it will also be deployed on tasks that today seem trivial or absurd, because in a world of AI-generated abundance, contested ground is where relative advantage lives, and relative advantage is always worth paying for.
The Centaur analogy fails not because it’s wrong about capability but because it assumes a fixed game with a verifiable outcome. The copper mine wasn’t that: every time you make the verifiable work cheaper, more contested work appears. Keynes assumed static desires. We are assuming a fixed quantity of contested work. The actual picture: a nation of geniuses generating new contested games faster than any single game gets resolved. Not a threat to human economic activity. The definition of it.