Is the world getting harder to predict?
Considering macro volatility when everything micro can be modeled
I was recently in a room in which the great economist Tyler Cowen led a conversation around the fascinating question “Is the world getting harder to predict?” This provocation stuck in my head: When one looks at the unexpected Ukraine/Taiwan/EU global political dramas, Twitter’s cowboy takeover or FTX’s explosion into the biggest outright fraud since Enron, the world certainly appears more volatile. So a knee-jerk answer to Tyler’s question might be “yes, of course the world is getting harder to predict!” However, I’ll push the uncomfortable point that more and more of our world is getting shockingly predictable.
As citizens of the hyperconnected 21st century, we must keep in mind that advertising has been a core financial engine of growth. This means that a trillion-dollar effort1 tracks and predicts everything about you. We take for granted how good our Amazon suggested purchases are, how sloppy our Google queries can be, or how immediately interesting our Instagram/Twitter feeds are when we log on. We see OpenAI’s outpainting feature (shown below) demonstrate the expanding neighborhood of predictability2 around any human effort. Similarly, the eerily good AI-generated Joe Rogan and Steve Jobs podcast shows us that beyond predicting what you might say, software can now predict how you might say it. Put simply, if you are easily modeled, then you are easily predicted3, and this will have increasing implications in the future.
So there seems to be a paradox: how can more and more of the world be increasingly predictable while the biggest parts are increasingly unpredictable?
At first I thought there might be strange interactions across magnitudes. “Small-scale increasing predictability” combined with “large-scale decreasing predictability” has a flavor similar to the Turing Pattern: differential equations posed by Alan Turing that capture inverse short-range and long-range effects (see the pufferfish below). But this initial thought is unnecessarily complex and unlikely to be insightful. However, the framing did help uncover a critical assumption: such a mathematical model assumes that the big pieces of the world are proportionally driven by the predictable smaller parts. I don’t think that assumption holds, and dropping it points to a much simpler answer.
The argument
Question: “Is the world getting harder to predict?”
Answer: “Yes, because the hardest aspects to model are becoming more influential.”4
Here is my argument with foundational books to each idea:
Increasing technology means key decisions have increasing reach (David Deutsch's "Beginning of Infinity", 2011).
“Beginning of Infinity” is one of my top recommendations for anybody embarking on a scientific journey. Deutsch’s big idea is that Explanatory Knowledge (according to the glossary, an explanation is a “Statement about what is there, what it does, and how and why”) necessarily creates more Explanatory Knowledge. This positive feedback creates an exponential growth of explanatory (ie, useful) knowledge which can continue to infinity, provided certain criteria are met. We are on the steep end of the curve on many dimensions of technology today (eg, “AI” applicability and synthetic biology) in which decisions have global implications.
Key decisions are being made by individuals empowered by new technology (James Davidson and Lord William Rees-Mogg's "Sovereign Individual", 1997).
If Beginning of Infinity is the techno-optimist manifesto, then the Sovereign Individual might be the techno-pessimist Nostradamus’ prophecies. The core idea is that modern democracy emerged with the Industrial Revolution because new technology acted as an equalizer of workers' outputs: the assembly line ensured that each worker created a similar amount of value. Davidson and Rees-Mogg argue that the software revolution will have an undoing effect on social equality: software will accentuate the differences in individual ability, which will then decrease the validity of democratic governance. This was impressively prescient given that it was written five years before broadband wifi and a decade before the iPhone or Bitcoin. Davidson and Rees-Mogg would see the powerful founder+CEO+celebrity archetype of the 2000s as corroboration of their thesis: technology now makes some individuals as powerful as nation states.
The actions of large groups are easy to predict, and decisions made by representative members of large groups are easy to predict. But decisions made by high-agency, non-representative single individuals are very difficult to predict (Asimov's Foundation Series, 1951).
Asimov foresaw the world of Big Data in the 1950s when he built a SciFi universe around Psychohistory, a fictional branch of mathematics that sees the future by leveraging the fact that humans en masse are highly predictable. Such a concept comes from physics: the overall behavior of a gas follows a few physical laws, but a single molecule’s path is entirely unpredictable. The Foundation Series tells the story of a single individual who solves the equations that predict a thousand years of interstellar humanity5. Even more prescient, the Psychohistory prophecy has limits and foresees windows of time in which an individual will emerge who determines the outcome of a crisis. But who that person is, and whether their actions keep humanity on the foreseen path, is unknowable.
Therefore the world is getting harder to predict because decisions made by a few key individuals are of increasing consequence.
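As a toy aside on the molecules-versus-gas point above, here is a minimal sketch (purely illustrative, with made-up numbers, and nothing to do with Asimov's fictional equations): a million independent agents making noisy choices produce an aggregate that is boringly stable, while any single agent remains a coin flip.

```python
import random

# Toy illustration of the gas/crowd intuition: many independent "agents"
# each make a noisy yes/no choice. The aggregate is tightly predictable,
# while any single agent is essentially a coin flip.
random.seed(0)

def agent_choice(p_yes: float = 0.6) -> int:
    """One individual's decision: 1 with probability p_yes, else 0."""
    return 1 if random.random() < p_yes else 0

population = [agent_choice() for _ in range(1_000_000)]
print(f"population average: {sum(population) / len(population):.4f}")  # ~0.6000 every time
print(f"one individual:     {population[0]}")                          # 0 or 1, unpredictable
```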
Examples of modern unpredictability
History is punctuated by legends of single individuals. Hegel coined the term “world-historical individual” in the 1800s to describe people like Napoleon and Caesar who leave an outsized mark on humanity’s timeline. While the historical kings and generals that Hegel studied were certainly unpredictable, the modern world-historical individual feels different in kind, because the modern individual can build her own platform of global reach. A few pertinent examples of technology-powered individuals changing the world today:
A single 30-year-old individual, Sam Bankman-Fried, builds a $22 billion company that vaporizes overnight in spectacular fashion. Some say they saw the red flags in early 2022, but there were multiple years of operation so fraudulent that the top-tier bankruptcy team says FTX is the worst case of “mismanagement” they have ever seen. It’s more than just financial fraud that makes this case interesting: before the blowup, “SBF” very publicly aligned himself with the forward-focused Effective Altruism philanthropy movement, and regardless of SBF’s authenticity, he substantially accelerated the global growth of EA.
Elon Musk takes over Twitter in either an erratic or brilliant manner, depending on whom you ask. As a private company, Twitter is now free from public shareholders’ demands for conventional leadership. The outcome of this saga is unpredictable because it is entirely dependent on Musk himself. On one hand, scores of advertisers are leaving, with dozens of labor lawsuits following the 80+% reduction in force. On the other hand, it’s possible the remaining Twitter team performs excellently, Elon brings more activity to the site that is still the global town square, and the newfound agility positions Twitter uniquely well for a landmark event such as TikTok being banned in the US.
Donald Trump becomes President of the United States. Aside from the Bio-SciFi book RiboFunk (1996), I don’t think anybody could have predicted Donald Trump becoming the political force that he became. But that’s partly because very few people predicted how integral social media would become (maybe Clay Shirky and tech executives saw it coming). Regardless of how you personally feel about Trump, his mastery of social technology must be appreciated. Smart friends are now pitching me on Kim Kardashian and Jake Paul as future politicians: I don’t have an opinion here, but I do think it’s easy to underestimate the intellect of social media stars who can single-handedly build empires the size of cable TV channels in their heyday.
China is run by Xi, Russia is run by Putin. Two major countries on the global stage appear to be entirely controlled by single individuals, for whom technology-powered state control is an essential piece of strategy. There seem to be increasing tensions between the populations and the governments in these countries, but the outcomes are totally unpredictable.
I think some might try to ascribe a value judgment to this increasing unpredictability. My perspective is bluntly that we can’t go backwards and any attempts to do so will only backfire6: Macro unpredictability is here to stay as long as technology develops, and technology will surely continue to develop. If we appreciate the implications of an unpredictable world, I believe we can create much good in the future ahead.
How I learned to stop worrying and love the unpredictability
What are some ways to think about thriving in a world that is made of many easily predictable elements and a few main drivers of variance? We can take a quick look backward before looking forward, and the history of physical science is a great case study.
We take for granted that the science we use today is built on a totally unpredictable chain of unique individuals and chance events. There has been increasing study of this phenomenon: Nature posted a blog about the unpredictability of scientific success, and I love reading the Metascience/Progress Studies work7 such as New Things Under The Sun, New Science, or Ben Reinhardt’s and Nadia Asparouhova’s blogs. No NSF officer could predict the impact of an individual genius on the level of John von Neumann, although Warren Weaver’s ability to cultivate Nobel laureates may be the current record. We can’t manufacture genius (yet?), but we can prioritize the development of individual agency and aim to build the conditions that promote scenius (magical pinpoints of group creativity).
It is surprising that bicycle makers were the first to take flight, but the Wright Brothers’ moment of triumph is a tale of iteration and hyperfixation that reflected their agency. They chose the problem of tinkering with flight because they loved it, and they won the race to flight. But what about explorations that aren’t as clearly defined?
Much progress in science (aka, resolutions to Kuhnian Crises) was unpredictable and, as such, initially rejected by contemporaries. The story of the birth of thermodynamics is particularly illustrative8: Sadi Carnot was a French military engineer who self-published a failed book that printed 100 copies. He received little attention in his life for his ideas, and he died poor, alone, and insane. Two years after his death, a French professor found a copy of the book and wrote about it, which made its way to James Prescott Joule, a beer brewer’s son in England who tinkered to experimentally explore the world with his brother. In 1847, Joule applied to give a talk on his explorations of Carnot’s ideas to chemists at the Royal Society meeting but was rejected for being too strange; he was instead given just a small time slot to read a summary, with no discourse allowed. But 22-year-old William Thomson was in the audience and spoke out of turn because he was so impressed by Joule's ideas, and a lively discussion ensued in the session. Years of collaboration later, Thomson became Lord Kelvin, Joule is immortalized in our concept of energy, and Carnot is forever known as "the father of thermodynamics." Stories like the chance encounter between Joule and Thomson can be cherrypicked throughout the centuries, but the impacts of such events can now happen in weeks, not decades9.
In addition to celebrating the hidden gems of unpredictability, we must also appreciate the failure modes of forcing predictability. In an excellent Q&A, Tom Kalil references the book Seeing Like a State by James Scott: this book is an essential list of high-budget failures of governments forcing complex systems to become manageable. Examples include natural forests in post-feudal Europe that were converted into unstable monocrops which eventually collapsed, and cities planned from airplanes by foreign architects (eg, Le Corbusier and Brasilia) that resulted in barren spaces used in ways opposite to the best-laid plans. The highlight of Scott’s book was a narrative of Jane Jacobs’ successful fights against the top-down planners: while Robert Moses and Le Corbusier were flying in planes, Jane Jacobs was walking at street level observing how people lived. While every architecture student knows her for her alley-level view of city planning, I think her brilliance extends into technical progress too. Jacobs used the metaphor that rigid army formations only make sense in parades for managers, and an army’s functional form would look like chaos to anybody but the practitioner. If we’re trying to optimize for creative output in a positive direction in the decades ahead, we need to consider this idea of a functional form versus a manageable form. This will look like cultivating agency in individuals by giving them opportunities to explore, and a careful reimagining of which managerial metrics actually matter.
Concluding with some ideas in motion
If the world is to become more and more unpredictable, then the predictable parts are in danger of having less and less impact. If we appreciate the magnitude of how much technology has developed (go play with ChatGPT if you haven’t yet) then we must act with urgency to adapt our systems and institutions, especially for personal development (aka education) and scientific research.
Though I write with optimism now, I admittedly felt an initial unease and a desire to cling to the more predictable existence that I’ve always known. Everything is moving faster: we used to think of cultural generations as lasting a decade, but now I think it's closer to two years between cultural shifts. This journey from unease to optimism is similar to my realization of how much of biotech research is going to change with full-stack automation. There can be beauty and fantastic outcomes in the unknown ahead, but it requires good people being empowered and engaged.
I’ll conclude with some draft ideas for future exploration:
Self-agency might be the core prophylactic to predictability. It is more important than ever to develop ourselves and our youth into high-agency individuals. Sir Ken Robinson’s 2006 TED Talk (the most watched TED Talk of all time) was pushing this thesis using the language of enhancing creativity: his take was that schools have to get out of the business of selling facts and focus on developing creative freedom. Similar to the point in “The Sovereign Individual” that assembly lines equalized economic value per worker, some argue that modern factory schools were designed to produce workers for those assembly lines, which don’t exist anymore. These heavy critiques apply to all education levels from kindergarten through the PhD, and my guess is that we’ll see the most change at the college level first. If we train predictability and manageability, we may be training replaceability.
The question is *not* how to decelerate technology’s development, but what interventions we can make that accelerate technology’s growth and impact in areas that might organically be the last recipients. For example, online ads and pharmaceuticals will be the first recipients of any advancements because startup logic drives innovators to high-paying markets. But what about frontier fields like artificial enzymes, or historically high-walled gardens like nuclear energy?
Let’s actively study and experiment with how to create conditions that encourage more unpredictable results from individuals or groups in hyperproductive communities.
If you read Martin Scorsese’s classic 2019 roast of the Marvel franchise, you’ll see him plead for us to again turn toward unpredictability. The 77-year-old master filmmaker received a lot of attention for calling the Avengers movies “theme-park rides”, so he wrote a longer essay to distinguish what he called "audiovisual entertainment" from cinema. In his words, true cinema comes from a single artist's vision, whereas audiovisual entertainment is “market-researched, audience-tested, vetted, modified, revetted and remodified until they’re ready for consumption.” To make the financial return predictable, the content is made predictable. So instead of accepting the risk of an artist’s vision that can last forever upon success10, we get shiny but forgettable content of diminishing value. Now mix into this critique the technological aspect: Stability.ai is now generating photo-realistic images at effectively real-time speed, meaning that the audiovisual entertainment industry is about to get overhauled. One ironic upside is that we may get an Iron Man film made by a one-person team.
Might a one-person Iron Man production actually become cinema again in Scorsese’s eyes? That is unpredictable!
I hope to continue discussing these ideas online, offline and in future posts. I want to thank Rick for reading and shredding a draft of this essay :) I wish I had time to factor in all his great feedback, and specifically appreciate him pointing out important initial omissions such as Hegel and Shirky.
Credit to OpenAI’s chatGPT for helping me rewrite sentences and paragraphs that were clunky.
Also, the process of thinking and writing this essay motivated me to read Tyler Cowen’s and Daniel Gross’ book Talent.
Edit: Cool to see this linked at MarginalRevolution :)
“The best minds of my generation are thinking about how to make people click ads.” - Jeff Hammerbacher (link), who was the first data scientist at Facebook, then founded Cloudera
Admittedly, I’m going fast and loose with the way I hop between “modeling” and “predicting.” I hope those who would nitpick here would give me a pass for simplicity.
For fun, imagine a world in which you are assessed in real time on whether or not your actions are predictable. There is some machine that has all your data to build a model of you, and imagine a light over your head: it’s green every time you act as the model of you predicts, and it flashes red when you do something “out of distribution.” Small talk in the grocery store: green. Driving kids to sports: green. Listening to a trending pop playlist on Spotify: green. When in your life would you make the light flash red? What would you do or say on a daily basis that is *not* predictable?
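To make the thought experiment slightly more tangible, here is a hypothetical sketch in which the “model of you” is nothing fancier than a frequency count of your past actions, and the light flashes red for anything rare under that count (the action list and threshold are invented for illustration):

```python
from collections import Counter

# Hypothetical sketch of the "light over your head" thought experiment:
# a crude model of "you" is just the frequency of past actions, and the
# light flashes red whenever today's action is rare under that model.
history = (
    ["grocery small talk"] * 300
    + ["drive kids to sports"] * 250
    + ["listen to trending pop playlist"] * 400
    + ["start a company"] * 1
)
model = Counter(history)
total = sum(model.values())

def light(action: str, threshold: float = 0.01) -> str:
    """Green if the action is common for this person, red if 'out of distribution'."""
    probability = model[action] / total  # Counter returns 0 for never-seen actions
    return "green" if probability >= threshold else "RED"

for action in ["grocery small talk", "drive kids to sports", "start a company", "move to Mongolia"]:
    print(f"{action!r}: {light(action)}")
```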
Alternative phrasing in a more negative light: the parts that are easiest to model are having less influence.
This is not a spoiler! These elements of the story are given in the first ~20 pages of the first Foundation Series book.
Although I do love javascript-free webpages.
Choose your favorite term for engineering how we innovate
Thanks to Jake Feala’s suggestion to read Einstein’s Fridge! The story of Joule and Lord Kelvin comes from this book
Ian Goodfellow famously had the idea for Generative Adversarial Networks while at a bar in 2013, then coded the first prototype that night: 50,000+ citations later, that GANs paper represented an important leap in the incredible growth of AI that happened within a year or two.
How many readers know the premise of this 1960 Hitchcock masterpiece (written by Roald Dahl!) without even knowing how or why they know it?
I personally agree that the future is getting less predictable & AI in particular is going to falsify a huge overhang of extant long-term predictions which have misleadingly-good track records because they are predicated on AI never happening. But to bracket my personal opinions, how would one measure such things as 'people *think* the future is getting more/less predictable' or 'the future *has* been getting more/less predictable than people thought'? Some possibilities:
- prediction market forecasts being systematically overconfident: have PredictIt or Metaculus contracts been expiring with worse Brier or other proper scoring rule results when grouped by year (a toy sketch of this check follows this list)? Is GJP's accuracy decaying? Are the old historical predictions from Tetlock's expert surveys getting worse over time?
- prediction market prices being higher variance: prices reflect information & certainty, so if the future has been getting harder to predict, then price time series should be increasingly volatile
- similarly, futures markets: volatility and the risk premia of long-dated options can either be higher or lower than in earlier time periods. (The future may be richer or poorer, but any change in levels would be reflected in the base price.)
- insurance costs: ditto---the riskier the future, the higher the premiums that must be charged. (eg. can you still buy cheap longevity insurance or annuities? If so, then insurers apparently don't believe that anti-aging research is going to pay off.)
- interest rates: interest rates reflect the desire to trade less money now for more money later, and incorporate a lot of things like nominal inflation or demographic trends like retirement planning, but also include risk such as expropriation or disaster or just the opportunity cost of being locked into the wrong investment. So at a simple first look, higher interest rates imply a more unpredictable future and vice-versa.
- conversely, debt loads: debt is dangerously fixed, so one will prefer debt to equity if the future is predictable and you can count on the cash flows to service it. On an individual level, things like student loans or house mortgages are commitments to a predictable future.
- social/geographic mobility: people will prefer buying houses to renting or staying put the more predictable the future, because the less the optionality is worth. (If you know the future, you either have moved to the right place already or you know there's nothing better out there.)
- creative destruction and turnover in the economy, especially of large corporations
- increased contracting and regulation, redistributive politics, larger governments as % of GDPs etc
- demographics: the older and more female a population, the less you expect any radical revolutions or uprisings which would spoil your predictions; violence is a young man's game.
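A minimal sketch of the first check above, assuming resolved forecasts have already been exported as (resolution_year, forecast_probability, outcome) rows; the rows below are made up for illustration, and nothing here calls a real PredictIt or Metaculus API:

```python
from collections import defaultdict

# Hypothetical export of resolved forecasts: (resolution_year, forecast_probability, outcome).
resolved = [
    (2018, 0.80, 1), (2018, 0.30, 0), (2019, 0.60, 1),
    (2020, 0.70, 0), (2021, 0.55, 1), (2022, 0.40, 1),
]

def brier(prob: float, outcome: int) -> float:
    """Squared error of a probabilistic forecast; lower is better."""
    return (prob - outcome) ** 2

by_year = defaultdict(list)
for year, prob, outcome in resolved:
    by_year[year].append(brier(prob, outcome))

# If the future were getting harder to predict, these yearly averages
# should trend upward (forecasts resolving worse over time).
for year in sorted(by_year):
    scores = by_year[year]
    print(year, round(sum(scores) / len(scores), 3))
```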
Again bracketing AI out, my impression is that over the past 2 decades, pretty much all of these have trends consistent with believing a more rather than less predictable future. What we see is a world where aging populations and governments invest ever more into various kinds of 'insurance' and avoiding any consequences or major changes, spending however many trillions of dollars it takes to satisfy risk aversion. In the 1990s, people thought the future like 2022 would be a lot crazier than it is. In 1995, it was easy to imagine how IBM (or Microsoft) might not exist in 2005; in 2022, can you imagine Google (or the rest of FANG) not existing in 2032? I can't. Stuff like CRISPR is cool and benefiting a few people, but again, stagnant compared to the hopes & dreams of what would happen once the Human Genome Project would finish, and to things that happened back in the 1990s like Dolly or three-parent babies. People were looking forward to debunking Fukuyama with the rise of Russia (eg. Esther Dyson) or China, but he's having the last laugh as they prove to be hollow corrupt authoritarian states which are struggling to maintain middle-income status, and the only people they appeal to as 'a new post-liberal-democratic paradigm' are would-be authoritarian strongmen. Or consider the lockdown response to COVID. So it looks like people expect a more predictable future, have thus far been largely right, and have tended to do things that would cause that.
The counter-arguments here are mostly anecdotal and not even great ones. SBF incinerated $8b? Fine; but the consequences of FTX have thus far been mostly some embarrassment. Meanwhile, some dude over at AT&T incinerated up to $100b in a bad merger, which is a lot bigger, while 1MDB was a lot smaller ($1b) and the Guptas ('who?') stole a lot more (>$20b), and those had actual geopolitical consequences.
Some industrialist took over some relatively minor advertising company? Yeah, that's something that used to happen a lot - how nostalgic to see an instance in our latter days, reminds one of the youthful American economy, before all the poison pills (which Twitter had) were legalized and other measures put into place to stop takeovers (and did).
Trump was elected? Sure, that was surprising, but the signature feature of Trump's administration was that it incompetently passed the time for 4 years, so it affected few meaningful predictions, and the forecasting error in Trump's election was within the total historical forecasting error, so statistically, a predicted-loser candidate like Trump winning isn't even unusual. Let's remember how many shenanigans there have been around presidential elections historically, whether it's Watergate or Literary Digest or the Compromise of 1877 or JFK beating Nixon credited to his (non-social) media savvy or hey remember how crazy 2000 was?
Xi/Putin are awful? Yes, they sure are; are they more awful - and unpredictable - than Mao or Stalin or Pol Pot or Kim Il-Sung or Adolf Hitler, or countless other dictators and tyrants throughout history, and is the awfulness and unpredictability of the current crop of dictators like Modi or Erdogan "increasingly unpredictable"? No.
Your strong random individual thesis reminds me of the fractal nature of causality in biology. In most of the universe, a molecular scale event gets washed out in the random thermodynamic fluctuations. Best of luck to the ambitious hydrogen atom in the center of the sun, but it probably will just get crunched into helium like its neighbors. But in biology the system is at the edge of chaos, and so micro influences have macro effects. Even a quantum event can have societal effects if it swaps a nucleotide in a cancer driver gene that kills Steve Jobs, or adds 20 IQ points to the next John von Neumann. Similarly, our model of evolution is a single fit individual whose selfish gene quickly takes over the population.
The counterargument from biology might be emergence, multiple causality, or both. Things might have a distribution of causes and it's hard for people to fit all the causal factors into their mental models. But a bigger machine learning model might be able to eventually be predictive in theory, with the right data. Or Wolfram might say it's all perfectly predictable, just creates random chaos when you run the program out.
On a tangent I've been thinking a lot about org design and see an interesting connection between the Jane Jacobs:Robert Moses::chaos:planning story you tell and the top-down org chart vs. Team Topologies and agile services approach.
Awesome post, love all the great references and new reading material. Cool that you used ChatGPT but AI is a long way off from great writing like this!