The Expert Predicted It

You trust the forecast because the person giving it has credentials, experience, and confidence. Across 28,000 predictions and twenty years of data, confidence and accuracy moved in opposite directions.

Cedric Atkinson

In August 1990, Alan Greenspan told his colleagues at the Federal Reserve that those arguing the economy was already in a recession were "reasonably certain to be wrong."1 The recession had begun in July. One month earlier.

In March 2007, Ben Bernanke testified before Congress that the problems in the subprime mortgage market seemed "likely to be contained."2 The Great Recession began eight months later. It wiped out $11 trillion in household wealth, pushed unemployment past ten percent, and produced the deepest contraction since the 1930s.

In October 2007, three weeks before the recession officially started, the New York Federal Reserve staff forecast 2.6 percent GDP growth for 2008. They assessed roughly a 90 percent probability that no recession would occur.3

[Chart: NY Fed staff forecast for 2008, issued October 2007, of +2.6% growth vs. actual −3.3% (Bureau of Economic Analysis), a 5.9-percentage-point error. The recession had already started when the forecast was issued.]

The Federal Reserve controls the most powerful lever in the American economy. It sets the interest rate. It regulates the banks. It employs hundreds of PhD economists and has more real-time economic data than any institution on earth. It could not predict what would happen to the economy it manages. The economists were not careless. Prediction, at the scale and complexity of a national economy, does not work. The evidence for this is one of the most thoroughly documented findings in the social sciences.

The study

In 1984, a political psychologist at the University of Pennsylvania named Philip Tetlock began collecting predictions from experts. Not casual opinions. Formal, structured predictions with defined outcomes and time horizons. He asked political scientists, economists, national security analysts, and government advisors to assign probabilities to specific questions. Will the Soviet Union collapse? Will GDP grow above three percent? Will this leader hold power? Three possible outcomes per question. Better, same, or worse. The expert assigns a probability to each.4

He did this for twenty years. Between 1984 and 2003, 284 experts generated roughly 28,000 predictions across political, economic, and geopolitical domains. The experts averaged twelve years of experience in their fields. Most held doctorates. Their political philosophies ranged from Marxist to libertarian. They made their living commenting on or advising about these exact topics.5

The benchmark was simple. Three outcomes per question. If you assign equal probability to each, you get 33.3 percent on everything. A dart-throwing chimpanzee. That was Tetlock's phrase. Imagine a chimp throwing darts at a board labeled with the three outcomes. That is the baseline against which the experts were measured.6
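One way to see why a confident forecaster can lose to the chimp is to score the forecasts. Tetlock used calibration and discrimination measures; the sketch below uses a simple Brier score (mean squared error between the probability vector and the outcome), a standard and closely related scoring rule. The specific probabilities are illustrative, not from the study.

```python
# Minimal sketch: scoring three-outcome forecasts with a Brier score.
# The "chimp" puts 1/3 on each of better/same/worse; a confident
# hedgehog puts 0.9 on one outcome. Lower scores are better.

def brier(probs, outcome):
    """Squared error between forecast probs and the 0/1 outcome vector."""
    return sum((p - (1 if i == outcome else 0)) ** 2 for i, p in enumerate(probs))

chimp = [1/3, 1/3, 1/3]        # the equal-probability baseline
hedgehog = [0.9, 0.05, 0.05]   # high confidence on outcome 0

print(brier(chimp, 0))      # ~0.667: same score no matter what happens
print(brier(hedgehog, 0))   # 0.015: confidence pays off when right...
print(brier(hedgehog, 2))   # ~1.715: ...and is punished hard when wrong
```

The chimp's score is mediocre but constant; the hedgehog's score swings between excellent and terrible. Over many questions where surprises are common, the swings average out against the confident forecaster.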

The experts, on average, barely outperformed the chimp. They also usually lost to simple algorithms that did nothing more sophisticated than predict the status quo would continue. The algorithm that said "things will stay roughly the same" beat the people who studied these questions for a living.7

That alone would be notable. But Tetlock found something more specific. He divided the experts into two cognitive styles, borrowing from the philosopher Isaiah Berlin, who borrowed from the Greek poet Archilochus. "The fox knows many things. The hedgehog knows one big thing."8

Hedgehogs organize the world through a single powerful framework. They are ideological. They extend one big idea into every domain. They project certainty. They tell a compelling story. When the evidence contradicts their framework, they explain it away. Foxes are different. They draw from multiple models. They are skeptical of grand theories. They are comfortable saying "I don't know." They adjust when the evidence changes.

Foxes outperformed hedgehogs on every measure Tetlock tracked. Calibration. Discrimination. Accuracy within their domain and outside it. This was the single most consistent pattern in the entire study. Not political orientation. Not years of experience. Not whether they were predicting within their specialty. The only variable that reliably separated better forecasters from worse ones was cognitive style.9

The gap showed up most clearly in how each group responded to being wrong. When confronted with an outcome that contradicted their forecast, foxes updated their beliefs by approximately 59 percent of the amount prescribed by Bayesian probability theory. Hedgehogs updated by 19 percent. Some hedgehogs, when presented with disconfirming evidence, moved their beliefs in the wrong direction. They became more confident in the position the evidence had just undermined.10
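The 59 percent and 19 percent figures describe how far each group moved relative to what Bayes' rule prescribed. A minimal sketch, with a hypothetical prior and likelihood ratio chosen for illustration (only the update fractions come from Tetlock):

```python
# Hypothetical illustration of under-updating relative to Bayes' rule.
# A forecaster starts at 80% belief; evidence arrives that is four times
# likelier if the belief is wrong. Foxes move ~59% of the prescribed
# distance, hedgehogs ~19% (fractions from Tetlock 2005).

def bayes_posterior(prior, lr):
    """Posterior after evidence with likelihood ratio lr = P(E|H)/P(E|not H)."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

def partial_update(prior, posterior, fraction):
    """Move only `fraction` of the Bayesian-prescribed distance."""
    return prior + fraction * (posterior - prior)

prior = 0.8
posterior = bayes_posterior(prior, 0.25)        # Bayes says: drop to 0.50
fox = partial_update(prior, posterior, 0.59)    # ends near 0.62
hedgehog = partial_update(prior, posterior, 0.19)  # barely moves: ~0.74
print(round(posterior, 2), round(fox, 2), round(hedgehog, 2))
```

A hedgehog who moves in the wrong direction corresponds to a negative fraction: the disconfirming evidence leaves the belief higher than it started.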

Here is the part that matters for the rest of this piece. Hedgehogs are what the media wants. They speak in clean narratives. They project conviction. They are quotable. Tetlock found that forecasters with the biggest media profiles tended to lose to their lower-profile colleagues. He described a "rather perverse inverse relationship between fame and accuracy."11 The traits that make an expert attractive to a television producer are the exact traits that make them a worse forecaster. The confidence. The single narrative. The bold prediction. The refusal to hedge. Every one of those traits correlated with worse performance. Foxes, as Tetlock put it, "bore you with a cloud of howevers." Nobody books them.

The record

Tetlock's study measured individuals. The institutional record is worse.

In 2018, three researchers at the International Monetary Fund published the definitive analysis of recession forecasting. Zidong An, Joao Tovar Jalles, and Prakash Loungani examined GDP forecasts across 63 countries from 1992 to 2014. During that period, 153 recessions occurred. They asked a simple question: how many were predicted by April of the preceding year?12

Five. Out of 153. A 3.3 percent detection rate.

Study                   Period      Recessions   Predicted   Detection rate
Loungani                1989–1998           60           2             3.3%
Ahir & Loungani         2008–2009           62           0               0%
Ahir & Loungani         2012                15           0               0%
An, Jalles & Loungani   1992–2014          153           5             3.3%

Loungani himself, an economist at the IMF, had studied this earlier. In 2001, examining consensus forecasts from 1989 to 1998, he found that only 2 of 60 recessions were predicted a year in advance. His conclusion: "The record of failure to predict recessions is virtually unblemished."13

The 2009 recession was the clearest test. Forty-nine countries entered recession that year. By September 2008, after the credit crunch had been front-page news for over a year, after Northern Rock had been nationalized, after Bear Stearns had collapsed, the consensus forecast predicted zero of them.14 Not one. The average growth forecast in April before a recession was approximately positive three percent. The average actual growth was approximately negative three percent. A six-percentage-point swing, in the wrong direction, every time.12

Private sector forecasts were no better. Loungani found them "virtually identical" to official forecasts. It was, as he put it, "a statistical photo finish." The problem was not that the IMF was uniquely bad. The problem was that recession forecasting itself does not work, regardless of who does it.13

The 2016 United States presidential election produced the same pattern in a different domain. Two things "everyone knew." Hillary Clinton would be elected. If somehow Donald Trump won, markets would tank. Trump won. Markets soared. No forecast built on the conventional view of that election got the period that followed right.15

Paul Krugman, a Nobel laureate, published a column in The New York Times in July 2022 titled "I Was Wrong About Inflation." He had been on the side less concerned about the inflationary consequences of the American Rescue Plan. He admitted the call was wrong. He described the experience as "a lesson in humility." He did not mention abstaining from future forecasts.16

Tetlock's study ended in 2003. The record since then has not improved. In January 2020, the IMF projected 3.3 percent global growth for the year. By April, they revised it to negative 3.0 percent. Actual global GDP fell 3.5 percent. A 6.8 percentage point miss. No model predicted the pandemic. No institution predicted the economic contraction it would produce. The largest single-year decline in global output since the Second World War arrived without a forecast.

The following year produced a different kind of failure. Throughout 2021, Federal Reserve Chair Jerome Powell described rising inflation as "transitory." In April, the month the Fed adopted the word, CPI was 4.2 percent. By December it was 7.0 percent. By June 2022 it reached 9.1 percent. In November, Powell retired the word. The Fed did not raise rates until March 2022. By July 2023, the target had gone from near zero to 5.25 percent, the fastest tightening in four decades. The Fed's own projections in December 2021 had called for a rate of approximately 0.9 percent by the end of 2022. The actual rate was 4.5 percent. The institution that sets the interest rate could not predict where it would set the interest rate twelve months later.17

The earnings report was one visible number nobody checked against the register. The forecast is another.

The hedge fund industry, which employs thousands of macro forecasters managing $4.5 trillion, produced annualized returns of 2.8 percent over ten years. The S&P 500 returned 13.8 percent over the same period. An index fund with no forecasters, no analysts, and no opinions outperformed the entire industry by a factor of five.18

[Chart: $100 invested for ten years grows to $364 in the S&P 500 and $132 in macro hedge funds. HFRI Macro (Total) Index vs S&P 500, annualized returns through July 31, 2022. Approximately $4.5 trillion is entrusted to hedge funds.]
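The gap compounds. A short sketch of the arithmetic, using the annualized returns cited above (2.8 percent for macro funds, 13.8 percent for the S&P 500):

```python
# Compounding $100 for ten years at each annualized return
# (HFRI Macro 2.8%, S&P 500 13.8%, data through July 2022).

def grow(principal, annual_rate, years):
    """Future value under annual compounding."""
    return principal * (1 + annual_rate) ** years

macro = grow(100, 0.028, 10)
sp500 = grow(100, 0.138, 10)
print(round(macro), round(sp500))   # ~132 vs ~364
```

A 2.8-point annual edge sounds small; eleven points of annual difference, compounded for a decade, is nearly a tripling of the gap in final wealth.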

The arithmetic

The record is bad. But the record alone is not the argument. Forecasts could fail for fixable reasons: bad data, insufficient computing power, not enough expertise. If the problem were technical, better tools would solve it. The problem is mathematical.

Howard Marks, the co-chairman of Oaktree Capital Management and one of the most respected investors alive, laid out the arithmetic in a 2022 memo titled "The Illusion of Knowledge."19 Most forecasting follows a chain of dependencies. The forecast does not predict one thing. It predicts a sequence.

The chain of dependencies (67% accuracy per step):

Step 1 (economy): 67%
Steps 1–2 (+ interest rates): 45%
Steps 1–3 (+ stock market): 30%
Steps 1–4 (+ best sector): 20%
All five steps correct: 13%

Being right two-thirds of the time at each step leaves a 13% chance the final forecast materializes.

Being right two-thirds of the time at each individual step, which would be an extraordinary track record, leaves only a 13 percent probability that the final forecast materializes. The chain does not forgive. Every additional dependency multiplies the chance of failure. Five dependencies at 67 percent each leave roughly the same odds as calling three coin flips in a row.
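The arithmetic from the memo, worked out step by step:

```python
# Five sequential predictions, each correct 67% of the time.
# The probability that the whole chain holds is the product.

p = 0.67  # accuracy at each individual step
cumulative = [p ** n for n in range(1, 6)]
for n, prob in enumerate(cumulative, 1):
    print(f"{n} step(s) right: {prob:.1%}")
# 67.0%, 44.9%, 30.1%, 20.2%, 13.5% -- the memo rounds the last to 13%
```

Adding a sixth dependency at the same accuracy drops the chain to about 9 percent; the decay is exponential in the number of steps.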

Marks cited the historian Niall Ferguson, who posed a question that appears simple: Has inflation peaked? To answer it, Ferguson wrote, you must predict the supply and demand for 94,000 commodities, manufactures, and services. The future path of interest rates. How long the dollar remains strong. The duration and outcome of the war in Ukraine. Saudi Arabia's response to Western oil demands. China's lockdown policies. The impact of Covid variants on labor markets.20

Each of those is itself a prediction requiring further predictions. And each of those depends on decisions that have not been made by people who have not made them yet. Marks's observation was sharper: predicting inflation is not as impossible as predicting the war in Ukraine. It is more impossible, because it requires being right about the war, the pandemic, and a thousand other things, simultaneously.

The arithmetic guarantees it. The chain of dependencies is a mathematical feature of prediction in complex systems. You cannot fix it with better models, more data, or smarter people. The structure of the problem guarantees the failure of the forecast.

The mechanism

If the arithmetic explains why forecasts fail, the mechanism explains why they cannot improve.

Marks identified three structural problems. The first is complexity. The American economy has 330 million participants. Consumers, workers, producers, investors. Each interacts with suppliers, customers, regulators, and global markets. The number of nodes and interactions runs into the billions. Economic models simplify by assuming consumers behave rationally, producers optimize efficiently, and prices reflect information. Every one of those assumptions is false in specific, measurable ways that change over time.21

The second problem is inputs. A forecast requires inputs. Each input is itself a forecast. To predict where the economy will be in twelve months, you need to predict interest rate decisions, fiscal policy, commodity prices, consumer sentiment, geopolitical stability, and technological change. Each of those predictions requires further predictions. The model does not sit on solid ground. It sits on a stack of other models, each of which is guessing.21

The third problem is what makes economics permanently different from physics. Richard Feynman, the Nobel laureate in physics, put it this way:

"Imagine how much harder physics would be if electrons had feelings."

Electrons do not rebel, forget, innovate, or change their behavior because they read about what electrons are supposed to do next quarter. Economic participants do all of these things. They foresee, react, and adapt. They read the forecast and change the behavior that the forecast was predicting. A model of an economy is a model of a system that is aware of the model and responds to it.

Thomas Sowell observed the same problem from a different angle. People are not chess pieces.22 You cannot move them to a square and expect them to stay. They see the move. They adapt. And their adaptation produces the opposite of the intended outcome. Maryland raised taxes on millionaires expecting $100 million in additional revenue. Two thousand high earners left the state. Revenue dropped $200 million. The forecast assumed static behavior. The targets were not static.23

Sowell's observation about economic forecasting was specific. "Economists are often asked to predict what the economy is going to do," he wrote. "But economic predictions require predicting what politicians are going to do. And nothing is more unpredictable."24

Then there is the problem of stationarity: the assumption that patterns observed in the past will persist into the future. In most physical systems, this assumption holds. One hundred years of flood data is a reasonable guide for building a levee. In economics, the assumption fails because the relationships between variables change over time.

The Phillips Curve, the most influential relationship in macroeconomics for more than half a century, demonstrated this. For sixty years, the curve held: lower unemployment meant higher wage inflation. It was the bedrock of central bank policy. Unemployment fell below 5.5 percent in March 2015. It reached a fifty-year low of 3.5 percent in September 2019. No significant inflation appeared until 2021. The relationship that generations of economists relied on simply stopped working. The measurement was fine. The economy underneath it had changed in ways no model captured.25

Scott Sagan, the political scientist, offered the shortest summary of the stationarity problem: "Things that have never happened before happen all the time."26

The cost

Thomas Sowell asked three questions of every claim. Compared to what? At what cost? What are the hard facts?27

Applied to forecasting, the answers are precise. Compared to what? Compared to chance. Two hundred and eighty-four experts across twenty years, and the result was barely distinguishable from a random number generator. Compared to simple algorithms. An extrapolation that says "things stay the same" beat most of the people paid to say otherwise. The hard facts? Twenty-eight thousand data points. One hundred and fifty-three recessions. Five predicted. 3.3 percent.

The remaining question is the one that carries the weight: at what cost?

The cost is borne by the people who trust the forecast. The homeowner who bought at the peak because no recession was predicted. The retiree whose portfolio followed the macro call. The policymaker who designed a stimulus, an austerity program, or a budget based on a growth forecast that turned out to be six percentage points wrong. The IMF's own Independent Evaluation Office examined its surveillance in the years before the financial crisis and found "a high degree of groupthink, intellectual capture, a general mindset that a major financial crisis in large advanced economies was unlikely, and inadequate analytical approaches."28 The institution that exists to warn about crises was captured by the belief that crises of that scale could not occur.

John Kenneth Galbraith identified the structural reason the industry persists: "There are two kinds of forecasters: those who don't know, and those who don't know they don't know."29 The person who issues the forecast does not bear the cost of being wrong. They issue another forecast. Their career does not end. Their reputation, in many cases, grows. Tetlock found that the most famous forecasters were the least accurate. Fame did not correct the error. It rewarded it.

This is the same structural finding that operates in the Boeing story. Knowledge in one place. Power in another. Consequences in a third. The forecaster has the platform. The institution has the authority. The person who followed the forecast bears the loss. Three groups. The same separation. The feedback loop that would force correction does not exist because the cost of being wrong lands on someone who had no voice in the prediction.

Sowell's framework for institutional failure predicts this. Residual claimants, people who bear the consequences of their own decisions, self-correct. They must. Their errors cost them something, and the cost produces change. Insulated decision-makers do not self-correct because the feedback never reaches them in a form that forces change.30 The forecasting industry is structurally insulated. Its practitioners are not residual claimants. They are commentators. And commentators, as a class, face no penalty for being wrong. They face a penalty for being boring. Hedgehogs dominate because they are interesting, not because they are right.

The alternative

The constructive version of this argument comes from Marks. In a memo written in November 2001, weeks after the September 11 attacks and in the wreckage of the dot-com collapse, he drew a line between two questions that most people treat as one.31

The first question: Where are we going? This is prediction. What will GDP be next year? Where will the market be in December? When will the recession start? Marks argued, and the evidence supports him, that this question cannot be answered reliably by anyone. The chain of dependencies is too long, the inputs are unknowable, the participants adapt, and the math does not work.

The second question: Where are we now? This is assessment. Are we early or late in the economic cycle? Is credit expanding or contracting? Is investor sentiment euphoric or depressed? Are valuations above or below historical norms? These are questions about the present, not the future. They can be answered with observable data. They do not require a chain of predictions that multiplies toward zero.

"We may never know where we're going, or when the tide will turn, but we had better have a good idea where we are." (Howard Marks, "You Can't Predict. You Can Prepare," November 2001)

The distinction is operational. A prediction says: the market will fall 20 percent next year. An assessment says: credit is expanding rapidly, risk appetite is high, valuations are stretched, and these conditions have historically preceded corrections. The prediction requires being right about timing, magnitude, and trigger. The assessment requires being right about where you are in a pattern that has repeated for centuries.

Marks's phrase for the actionable version is cycle awareness. Every market, every economy, every business moves in cycles. The pendulum swings between euphoria and depression, overpriced and underpriced. The midpoint of the arc best describes the average position, but the pendulum spends almost no time there. It is always moving toward an extreme or away from one. You cannot predict when the pendulum will reverse. But you can observe, with reasonable precision, how far from the center it has traveled.32

The value is precision about which questions can be answered. Expertise, assessment, and diagnosis are real. The doctor who examines a patient and identifies a condition is exercising expertise. The epidemiologist who says a specific disease will kill a specific number of people next year is forecasting. The first has a track record. The second has a 3.3 percent detection rate.

 

The argument is not against expertise itself. Expertise operates in specific, bounded domains where feedback is immediate and repetitive. The surgeon who has performed a thousand procedures is better than the one who has performed ten. The pilot with twenty thousand hours reads an instrument panel differently than the one with two hundred. The chess master sees patterns the novice cannot. These are real skills developed through real feedback, where the expert acts, the outcome arrives, and the expert adjusts, repeatedly.

Forecasting does not have this structure. The prediction resolves months or years later, under conditions that have changed in ways nobody anticipated. The feedback is delayed, ambiguous, and contaminated by a thousand other variables. The forecaster can always explain why they were right in principle but wrong in timing. Or right in direction but wrong in magnitude. The escape routes are infinite. The feedback loop that would force improvement does not close.

Daniel Boorstin, the former Librarian of Congress, put it in twelve words: "The greatest enemy of knowledge is not ignorance. It is the illusion of knowledge."33

The expert predicted it. The prediction was the problem.


Sources

  1. Alan Greenspan, FOMC transcript, August 1990. "Those who argue that we are already in a recession, I think, are reasonably certain to be wrong." NBER dates the recession as beginning July 1990.
  2. Ben Bernanke, testimony before the Joint Economic Committee, U.S. Congress, March 28, 2007. "The impact on the broader economy and financial markets of the problems in the subprime market seems likely to be contained." NBER dates the Great Recession as beginning December 2007.
  3. Simon Potter, "The Failure to Forecast the Great Recession," Liberty Street Economics (Federal Reserve Bank of New York), November 25, 2011. NY Fed staff forecast for 2008 real GDP growth: +2.6%. FOMC central tendency, October 2007: 1.8–2.5%. Actual 2008 GDP (Q4/Q4): approximately −3.3% (BEA). NY Fed assessed roughly 90% probability of no recession. The unemployment forecast was similarly off: projected ~5–6% for Q4 2009, actual 10.0%. Under normal distribution assumptions based on Great Moderation-era errors, this was a 6-standard-deviation event.
  4. Tetlock, P. E. (2005). Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press. Chapter 1: "twenty years of research of soliciting and scoring experts' judgments."
  5. Tetlock (2005). 284 experts, approximately 28,000 predictions (27,451 verifiable per the book), 1984–2003. Tetlock's publications page, University of Pennsylvania: "Cumulatively they made 28,000 predictions bearing on a diverse array of geopolitical and economic outcomes." Experts averaged 12 years of experience. Domains: political, economic, and geopolitical/military. Menand, Louis, "Everybody's an Expert," The New Yorker, December 5, 2005.
  6. Tetlock (2005), Chapter 2. The "dart-throwing chimpanzee" is Tetlock's metaphor for the equal-probability baseline: assigning 33.3% to each of three possible outcomes. Kahneman, Daniel, Thinking, Fast and Slow (2011): "People who spend their time, and earn their living, studying a particular topic produce poorer predictions than dart-throwing monkeys who would have distributed their choices evenly over the options."
  7. Tetlock (2005), Chapters 2–3. Experts barely outperformed the equal-probability baseline and "usually lost to simple extrapolation algorithms" (predicting the status quo would continue). Tetlock's Penn publications page: "forecasters were often only slightly more accurate than chance."
  8. Berlin, Isaiah, The Hedgehog and the Fox: An Essay on Tolstoy's View of History (1953), drawing on the Greek poet Archilochus (c. 680–645 BC): "The fox knows many things; the hedgehog knows one big thing."
  9. Tetlock (2005), Chapter 3: "Knowing the Limits of One's Knowledge: Foxes Have Better Calibration and Discrimination Scores than Hedgehogs." Foxes outperformed hedgehogs on every measure. The hedgehog–fox distinction was the single most consistent predictor of accuracy, stronger than political orientation, years of experience, or domain expertise.
  10. Tetlock (2005), Chapter 4: "Honoring Reputational Bets: Foxes Are Better Bayesians than Hedgehogs." Foxes updated beliefs by approximately 59% of the Bayesian-prescribed amount. Hedgehogs updated by approximately 19%. Some hedgehogs moved their beliefs in the wrong direction when confronted with disconfirming evidence.
  11. Tetlock, Penn publications page: "Forecasters with the biggest news media profiles tended to lose to their lower profile colleagues, suggesting a rather perverse inverse relationship between fame and accuracy."
  12. An, Zidong, Joao Tovar Jalles, and Prakash Loungani, "How Well Do Economists Forecast Recessions?" IMF Working Paper WP/18/39, March 5, 2018. Also published in International Finance, Vol. 21, Issue 2, pp. 100–121, June 2018. 63 countries, 1992–2014. 153 recessions. 5 predicted by April of the preceding year (3.3%). Average growth forecast in April before a recession: approximately +3%. Average actual growth: approximately −3%. By October of the recession year itself, 35 recessions were still being missed.
  13. Loungani, Prakash (2001), "How accurate are private sector forecasts? Cross-country evidence from consensus forecasts of output growth," International Journal of Forecasting, Vol. 17, Issue 3, pp. 419–432. 1989–1998: only 2 of 60 recessions predicted a year in advance. "The record of failure to predict recessions is virtually unblemished." Private and official sector forecasts "virtually identical."
  14. Ahir, Hites and Prakash Loungani, "There Will Be Growth in the Spring: How Well Do Economists Predict Turning Points?" VoxEU/CEPR, April 2014. 49 countries in recession in 2009: zero predicted by September 2008. Harford, Tim, "An Astonishing Record, of Complete Failure," Financial Times, May 2014.
  15. Marks, Howard, "The Illusion of Knowledge," Oaktree Capital Management memo, September 8, 2022. Two things "everyone knew" about the 2016 election: Clinton would win, and if Trump won, markets would tank. Both wrong. Markets soared.
  16. Krugman, Paul, "I Was Wrong About Inflation," The New York Times, July 21, 2022. "In early 2021, there was an intense debate among economists about the likely consequences of the American Rescue Plan... I was on [the side less concerned about inflation]. As it turned out, of course, that was a very bad call... The whole experience has been a lesson in humility." Marks's observation: no mention of abstaining from future forecasting.
  17. IMF World Economic Outlook, January 2020: projected 3.3% global GDP growth for 2020. WEO Update, April 2020: revised to −3.0%. Actual 2020 global GDP: −3.5% (IMF WEO Update, January 2021). A 6.8 percentage point miss, the largest single-year forecast error in IMF history. Federal Reserve Chair Jerome Powell described inflation as "transitory" throughout 2021; retired the term November 30, 2021, Senate Banking Committee hearing: "I think it's probably a good time to retire that word." FOMC formally adopted "transitory" in April 28, 2021 statement. CPI year-over-year: 4.2% April 2021 (when word adopted), 7.0% December 2021, peak 9.1% June 2022. December 2021 SEP median projection for end-2022 federal funds rate: 0.875% (three 25bp hikes). Actual end-2022: 4.25–4.50%. Rate rose from 0.00–0.25% to 5.25–5.50% between March 2022 and July 2023, fastest tightening since the Volcker era. Bureau of Labor Statistics; Federal Reserve Board; IMF.org.
  18. HFRI Macro (Total) Index: 5-year annualized return 5.0%, 10-year annualized return 2.8%. S&P 500: 5-year annualized 12.8%, 10-year annualized 13.8%. Data through July 31, 2022. Approximately $4.5 trillion in hedge fund assets. Cited in Marks (2022).
  19. Marks, Howard, "The Illusion of Knowledge," Oaktree Capital Management memo, September 8, 2022. The chain of dependencies, Ferguson's cascade, and the arithmetic of sequential forecasting.
  20. Ferguson, Niall, cited in Marks (2022). To answer "Has inflation peaked?" you must predict supply and demand for 94,000 commodities, manufactures, and services; future interest rates; dollar strength; Ukraine war duration; Saudi oil policy; Chinese lockdown policy; Covid variant impact on labor markets.
  21. Marks (2022). The Three Forecasting Problems: (1) The Machine: 330 million U.S. participants, billions of interactions. (2) The Inputs: each forecast requires unknowable inputs that are themselves forecasts. (3) The Unpredictable Influences: random events (Covid-19, Ukraine invasion, elections) dominate outcomes. Feynman, Richard: "Imagine how much harder physics would be if electrons had feelings."
  22. Sowell, Thomas, Social Justice Fallacies, Basic Books, 2023. "People are not just inert chess pieces, carrying out someone else's grand design."
  23. Maryland millionaire tax: expected $100 million in additional revenue. Approximately one-third of millionaire tax filers (2,000+) left the state. Revenue decreased by $200 million. The Wall Street Journal, May 27, 2009, "Millionaires Go Missing." Maryland Comptroller data.
  24. Sowell, Thomas, quoted in Marks, "Uncertainty II," Oaktree Capital Management memo, May 28, 2020. "Economists are often asked to predict what the economy is going to do. But economic predictions require predicting what politicians are going to do. And nothing is more unpredictable."
  25. Marks (2022). Phillips Curve: for 60 years, lower unemployment correlated with higher wage inflation. Unemployment fell below 5.5% in March 2015. Reached 3.5% in September 2019 (50-year low). No significant inflation until 2021. BLS data.
  26. Sagan, Scott. "Things that have never happened before happen all the time." Cited in Marks (2022).
  27. Sowell, Thomas, A Conflict of Visions, William Morrow, 1987. The three questions (Compared to what? At what cost? What hard facts?) appear across multiple works, most directly in The Vision of the Anointed (1995) and Economic Facts and Fallacies (2008).
  28. IMF Independent Evaluation Office, "IMF Performance in the Run-Up to the Financial and Economic Crisis: IMF Surveillance in 2004–07," August 2011. Found "a high degree of groupthink, intellectual capture, a general mindset that a major financial crisis in large advanced economies was unlikely, and inadequate analytical approaches."
  29. Galbraith, John Kenneth. "There are two kinds of forecasters: those who don't know, and those who don't know they don't know."
  30. Sowell, Thomas, Knowledge and Decisions, Basic Books, 1980. "Residual claimants (people who bear the consequences of their own decisions) self-correct. Insulated decision makers do not, because feedback never reaches them in a form that forces change."
  31. Marks, Howard, "You Can't Predict. You Can Prepare," Oaktree Capital Management memo, November 20, 2001. "The key to dealing with the future lies in knowing where you are, even if you can't know precisely where you're going." Written weeks after September 11 and during the dot-com collapse.
  32. Marks (2001). The pendulum model: markets swing between euphoria and depression. "The midpoint of its arc best describes the position 'on average,' but it actually spends very little time there." Five interlocking cycles: economic, credit, corporate life, market/psychological, and business fads.
  33. Boorstin, Daniel J. "The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge." Cited in Marks (2022).