Books mentioned in this essay:

This Time Is Different: Eight Centuries of Financial Folly, by Carmen M. Reinhart and Kenneth S. Rogoff

Manias, Panics, and Crashes: A History of Financial Crises, by Charles P. Kindleberger and Robert Aliber

The Myth of the Rational Market: A History of Risk, Reward, and Delusion on Wall Street, by Justin Fox

Slapped by the Invisible Hand: The Panic of 2007, by Gary B. Gorton

In Fed We Trust: Ben Bernanke's War on the Great Panic, by David Wessel

It seems astonishing that only a few years ago—perhaps five, certainly ten—the possibility of a major financial crisis striking the American and world economies appeared nonsensical. Hardly anyone thought or worried about it. To be sure, investors, bankers, corporate executives, government officials, and economists knew that the financial system (basically: banking plus investing) could behave erratically. But few believed that the system's excesses were so dangerous that they might trigger a calamitous chain reaction of losses, panic, and paralysis that would bring the world to the brink of a second Great Depression: precisely what happened from late 2007 to mid-2009.

Almost everyone assumed that advances in economic knowledge and government regulation, along with the increased dominance of financial markets by professionals—bankers, investment bankers, and portfolio managers for pensions and other large investors—had created firewalls against catastrophe. Deposit insurance, enacted in 1933, precluded old-fashioned banking panics. People wouldn't withdraw their money at the first hint of trouble, because they knew that the government would protect their savings. Mistakes in the pricing of stocks, bonds, and other securities would be largely self-correcting, because market professionals would see them as money-making opportunities. If bubbles burst, the Federal Reserve could act, lowering interest rates and easing credit, to limit the damage to the "real economy" of production and jobs.

This faith in progress has been the biggest casualty of the crisis. At best, we overstated our understanding of the economic system and our ability to manipulate it. At worst, our loss of control is crippling. Some might argue just the opposite: that the crisis vindicated the belief in progress and showed that governments and economists had learned the lessons of the 1930s. After all, the next Great Depression didn't happen. The Federal Reserve and other central banks responded. Governments enacted "stimulus" packages of new spending and tax cuts. Large declines in production, employment, and international trade didn't feed on themselves. But it was a close call, and it is not clear that in a future crisis similar measures could be mechanically deployed with equal success.

The Paradox of Finance

Finance is at the heart of our vulnerability. Predictably, bankers, investment bankers, traders, and money managers have become standard villains in the conventional crisis narrative. They are no longer "masters of the universe," to use Tom Wolfe's phrase. Rather, they're overpaid, greedy, and short-sighted predators. Finance is disparaged as an anti-social, parasitic activity that exists mainly for the self-enrichment of Wall Street. "Financial engineers" created exotic securities designed to extract fees from others and to generate profits for traders. This caricature is understandable, and wrong.

Broadly defined, the purpose of finance is to provide ways for society to save and invest, to give people and firms a choice between spending now and spending later, to match borrowers and lenders. Without reliable ways to save, societies would remain mostly present-oriented, and investment in new productive capacity and technologies—the foundation of economic growth—would be hamstrung. People would bury their savings. Businesses could only invest from accumulated profits. Households would strain to save for future needs—retirement, college, unexpected illnesses—because their savings would earn no returns. They would also have to defer many desires (buying a car, taking a long vacation) until they had ample savings to do so.

Since World War II, numerous financial innovations have created large public benefits. Credit cards have made it easier for consumers to borrow. Venture capital funds have financed new technologies and industries, notably personal computers and many internet applications. Junk bonds have given non-blue-chip companies another source of borrowing aside from banks, as well as providing financing to buy and break up unwieldy conglomerates. The liberalization of mortgages helped millions of Americans to become homeowners. All good.

But there was a catch. In each of these cases, the process went to destructive excess. Venture capital firms financed too many dubious dotcom startups, contributing to the tech bubble of the late 1990s. Credit cards were marketed at excessive interest rates to many households that couldn't handle them. Junk bonds encouraged some companies to assume too much debt, leading to bankruptcy. And most menacingly, the liberalization of home mortgages spawned the brutal crisis of 2007-09. The paradox of modern finance thus comes into sharp relief: advanced economies require sophisticated financial systems, but these also threaten economic stability. Is it possible to escape the paradox? That is the long-term question posed by the crisis.

All Fall Down

History isn't reassuring. In their new book, This Time Is Different: Eight Centuries of Financial Folly, economists Carmen Reinhart of the University of Maryland and Kenneth Rogoff of Harvard have performed the most comprehensive review ever of financial crises. Examining 66 countries over eight centuries, they find regular instability. Since World War II, almost every major European country has suffered a major banking crisis, according to Reinhart and Rogoff's tabulation. The United Kingdom has endured four. For the United States, their count is two: the savings and loan collapse of the 1980s and the 2007-09 financial collapse.

A severe banking crisis damages the broader economy. Borrowing, lending, and investing contract or collapse, as financial institutions hoard cash and see many past investments go sour. Consumer confidence and purchasing power drop if, lacking government insurance, depositors lose money. Many financial institutions become either insolvent (meaning their assets are worth less than their liabilities) or "illiquid" (meaning that they cannot meet their depositors' demand for cash, because most of their cash is tied up in outstanding loans). In a crisis, the distinction between insolvency and illiquidity often blurs, because outstanding loans and investments (say, in bonds, mortgages, or stocks) can only be sold at fire-sale prices that result in devastating losses. With few eager buyers, trying to escape illiquidity often brings on insolvency.
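
The arithmetic behind that blurred line fits in a few lines of code. The following is a minimal sketch in Python, with purely hypothetical figures, of a bank that is solvent on paper but, forced to meet withdrawals by dumping loans at fire-sale prices, ends up insolvent:

    # A stylized bank balance sheet; all figures are hypothetical.
    loans = 90.0                              # long-term loans at book value
    cash = 10.0
    deposits = 95.0

    equity = loans + cash - deposits
    print("equity at book value:", equity)    # 5.0: solvent on paper

    # Depositors demand 30 in cash; only 10 is on hand, so the bank is illiquid.
    withdrawals = 30.0
    shortfall = withdrawals - cash            # 20.0 must be raised by selling loans

    # In a panic, the loans fetch only 70 cents on the dollar.
    fire_sale_price = 0.70
    loans_sold = shortfall / fire_sale_price  # about 28.6 of book value sold
    loss = loans_sold * (1 - fire_sale_price) # about 8.6 of equity destroyed

    print("equity after the fire sale:", round(equity - loss, 1))  # -3.6: insolvent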

One common cause of banking crises, Reinhart and Rogoff report, is inflated home values, because housing loans often comprise a big share of banks' lending portfolios. Banking crises are less "the trigger of recession" than "an amplification mechanism" that worsens the downturn, they say. The adverse consequences are severe, the authors find. In advanced countries, housing prices drop 30% to 50%. Stock prices also fall on average about 50% from their peaks. (In the recent U.S. crisis, the immense wealth losses flowed mainly from the lower real estate and stock prices, not lost bank deposits that were mainly insured.) On average, unemployment rises seven percentage points from its low. There is a massive increase in government debt, mostly from a sharp drop in government tax revenues and increases in spending to cushion the downturn. Three years after the crisis, the growth in debt among advanced countries averaged 86%. (All these comparisons involve countries that have experienced post-World War II banking crises, though not all countries are in every comparison.)

A review of defaults on government debt (also called "sovereign debt") yields a picture only slightly brighter. Few major countries have never defaulted. England and later the United Kingdom did so at least eight times, from 1340 to 1932. France did nine times, starting in 1558; Spain six times, from 1557 to 1647. Latin American countries have been serial defaulters in both the 19th and 20th centuries. From 1985 to 1990, Brazil defaulted repeatedly; Venezuela seems the champion defaulter, having done so ten times since 1830. The United States doesn't escape. In 1790, it deferred some interest payments for a decade. States defaulted on bonds in two waves: one in the 1830s and 1840s, the other in the 1870s and 1880s. All told, Reinhart and Rogoff count 250 instances of default on external debt (money owed to foreigners) and 70 on domestic debt (money owed to a country's citizens). Many were overlapping. True, as they note, defaults by advanced countries have subsided since 1800. But the frequency of banking crises and their negative effects on government debt create the potential for future crises.

Bloodletting

Studying the past does not make Reinhart and Rogoff optimistic: "the main message of this book [is that] we have been here before. The instruments of financial gain and loss have varied over the ages, as have the types of [financial] institutions…. Countries, institutions, and financial instruments may change across time, but human nature does not." Given the bleak historical record, questions arise: Why do financial systems function at all? What makes lenders and investors continue to stake their money when disaster may suddenly strike?

One reason is compulsion. Historically, the wealthy were often forced to lend to needy monarchs. French kings sometimes solved their inability to repay by beheading their creditors, a process popularly called "bloodletting." Compulsion hasn't disappeared. Economists call its modern version "financial repression," which means a systematic restricting of investment opportunities that forces people to put their money where government wants it. Reinhart and Rogoff write:

In China and India today, most citizens are extremely limited as to the range of financial assets they are allowed to hold, with very low-interest bank accounts and cash essentially the only choices. With cash and jewelry at high risk of loss and theft and very few options for accumulating wealth to pay for retirement, healthcare, and children's education, citizens still put large sums in banks despite the artificially suppressed returns. In India, banks end up lending large amounts of their assets directly to the government, which thereby enjoys a far lower interest rate than it probably would in a liberalized capital market. In China, the money goes via directed lending to state-owned enterprises and infrastructure projects, again at far lower interest rates than would otherwise obtain.

Another obvious reason that financial markets survive is that over time much lending and investing is profitable. Though crises persist, they still occur only intermittently. Since 1800, banking crises have afflicted the United Kingdom only 9% of the time, France 12%, Sweden 5%, and the United States 13%, according to Reinhart and Rogoff's estimates. Considering the awful consequences, countries do have powerful reasons to avoid crises. In a 1981 paper, economists Jonathan Eaton and Mark Gersovitz argued that what historically limited governments from defaulting was worry about their "reputation" as reliable borrowers. Loans, usually in gold and silver coin, were often needed to fight wars or buy food to avert famine. Lenders were often foreign. The English and Spanish borrowed from Italian bankers. Similar considerations exist today. Countries often need to borrow to finance trade and investment, or cushion recessions. Finally, finance is profitable because riskier loans and investments carry higher rates to compensate for possible losses.

Costly Delusions

The main reason that financial crises recur, Reinhart and Rogoff argue, is not so much greed as wishful thinking and delusion. They quote a savvy Wall Street trader: "More money has been lost because of four words than at the point of a gun. Those words are, ‘This time is different.'" Though that became their book's title and theme, Reinhart and Rogoff do not much illuminate why the delusion persists, except to attribute it to human nature. For a fuller explanation, it's worth revisiting the 1978 classic Manias, Panics, and Crashes: A History of Financial Crises by the late economic historian Charles Kindleberger. In it, he sets out a coherent framework for understanding financial booms and busts.

Relying heavily but not exclusively on the work of economist Hyman Minsky, Kindleberger described a financial cycle with three distinct phases. First came the "displacement": some new and arguably transformative development that excited economic spirits. It could be "the outbreak or end of a war, a bumper harvest or crop failure, the widespread adoption of an invention with pervasive effects—canals, railroads, the automobile—some political event or surprising financial success…." Regardless, it inspires confidence and creates new profit opportunities. Investment and production increase. The boom begins, usually fed by easier credit. In its early stages, all this may be well-grounded, a conventional response to new economic demands or sources of supply.

Next comes "euphoria," when people become intoxicated. "Positive feedback develops, as new investment leads to increases in income that stimulate further investment and further income increases," Kindleberger writes. At some point, these developments become detached from underlying economic realities and assume a life of their own. Speculation (what Adam Smith called "overtrading") gathers momentum. Investors increasingly buy for price appreciation of the asset, not the income it generates. The objects can be stocks, commodities (e.g., oil, gold, foodstuffs), real estate, tech start-up companies, currencies, or shopping malls. Prices spiral upwards as hardly anyone wants to be left behind in the rush to riches. Not surprisingly, the boom phase also invites fraud and criminality.

Finally comes what Kindleberger calls "revulsion," which can cause panic and a crash. Some event (a swindle revealed, a bank failure, a fall in prices, warnings by financial eminences) triggers a reappraisal. Prices begin to drop, and the declines accelerate as everyone sprints to the exit. The crash stops, Kindleberger contends, when

one or more of three things happen: (1) prices fall so low that people are again tempted to move back into less liquid assets [that is, away from cash to stocks, real estate, and other assets that cannot be sold so easily]; (2) trade is cut off by setting limits on price declines, shutting down exchanges, or otherwise closing trading; or (3) a lender of last resort [the government or a central bank like the Fed] succeeds in convincing the market that money will be made available in sufficient volume to meet the demand for cash.

If the Kindleberger-Minsky model is correct—a good bet—then the ultimate villain in financial crises is the initial "displacement." As often as not, the displacement involves "innovation." In this context, the word does not merely mean some technical advance, say, the computer chip or fiber optics. It means more broadly any apparently novel set of circumstances or developments that convince people that "this time is different," because the economic terrain or conventional rules have fundamentally altered. What, then, was the "innovation" that started the cycle culminating in the 2007-09 crisis? There is already one popular candidate: a misguided faith in the "efficiency" of free markets that lulled investors into foolish bets and blinded government regulators to the gathering dangers.

Efficient Markets

For years, the belief in "efficient markets" reigned supreme among academic finance economists. By "efficient," they meant that the market—buying and selling—automatically processed all known information about stocks, so that the resulting prices correctly reflected a firm's present and future prospects, as best they could be known. Future price changes were a "random walk": they could not be predicted from past price movements. The "efficient market" hypothesis, associated closely with economist Eugene Fama of the University of Chicago, seemed supported by studies of stock prices. Although the "efficient market hypothesis" didn't explicitly reject the "manias" and "panics" that concerned Kindleberger, it did so implicitly. If "the market" quickly digested new information, then prices wouldn't dramatically diverge from underlying values. Eager sellers would quickly profit if prices seemed too high; rational buyers would do the same if prices seemed too low. Momentum trading and crowd psychology were discounted.
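
The "random walk" claim has a concrete statistical meaning: if prices already incorporate all known information, successive price changes should be uncorrelated, so yesterday's move says nothing about tomorrow's. A minimal sketch in Python, using purely artificial price changes, illustrates what that looks like in practice:

    import random

    # Simulate purely random daily price changes and ask whether yesterday's
    # change helps predict today's. For a random walk it does not: the lag-one
    # autocorrelation of the changes is essentially zero.
    random.seed(1)
    changes = [random.gauss(0, 1) for _ in range(100_000)]

    mean = sum(changes) / len(changes)
    covariance = sum((changes[i] - mean) * (changes[i + 1] - mean)
                     for i in range(len(changes) - 1))
    variance = sum((c - mean) ** 2 for c in changes)
    print("lag-one autocorrelation:", round(covariance / variance, 4))  # about 0.0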

Although the efficient market hypothesis applied only to U.S. stocks, it conveyed a broader message: open financial markets are inherently rational. The implication was that "as more stocks, bonds, options, futures, and other financial instruments were created and traded, they would inevitably bring more rationality to economic activity. Financial markets possessed a wisdom that individuals, companies, and governments did not," as Justin Fox, a former columnist for Time, writes in The Myth of the Rational Market: A History of Risk, Reward, and Delusion on Wall Street. If this were so, then government regulation could be relaxed. Significantly, that comported with an anti-regulation ethos that began in the Reagan years. Regulators might police markets for fraud and compel more transparency in transactions. Aside from remedying abuses, these measures would improve market functioning by increasing the quality of information. But no one had to worry about titanic financial crises.

The triumph of these doctrines then set the stage for crisis, the argument goes. Motivated by compensation practices that rewarded short-term profits, Wall Street banks and investment banks created sophisticated and complex financial instruments—collateralized debt obligations (CDOs) and credit default swaps (CDSs), for instance—that could be sold and traded. To increase profits, these very same banks and investment banks used more and more borrowed money (i.e., "leverage") to conduct their trading and investing. With a few exceptions (Fed chairmen Alan Greenspan and Ben Bernanke not among them), government regulators were untroubled by these developments. So a mountain of risky securities grew atop an expanding foundation of short-term credit until the whole rickety structure collapsed in a manner that wouldn't have surprised Kindleberger.

The Great Reversal

This compact story transforms the financial crisis into a simple morality tale. Bad ideas ("efficient markets") led to bad policies (lax regulation) that led to bad outcomes. This seems persuasive, but it's exaggerated and misleading. For starters, it's hard to find top government policymakers who uncritically believed in "rational markets." Certainly, Greenspan and Bernanke didn't. Before the crisis, both gave speeches (Greenspan in August 2002 and Bernanke in October 2002) acknowledging the possibility of irrational bubbles. After the big "tech bubble" of the late 1990s, it was hard to argue otherwise. But both men cautioned against having the Fed "prick" the bubble before it popped on its own. It was hard to determine when rising asset prices constituted a bubble, they said. Trying to deflate it with higher interest rates and tighter credit might do more harm than good for employment and output. The Fed could limit the damage once the bubble had burst, they argued.

The truth was that the "efficient markets hypothesis" was mainly an academic preoccupation. As Fox writes: "For all the success that the new ideas about efficient markets achieved on campus and within certain precincts of the investing world, they had yet to penetrate the real centers of economic power in America by the early 1980s." Indeed, it's hard to tell when they did penetrate. By the late '80s, the theory itself was increasingly under assault from scholars and real world developments. "Behavioral" economists found quirks in investor psychology that confounded pure rationality. Many investors, for example, feared losses more than they prized gains; their risk-taking was skewed. Worse, the stock market crash of 1987, when prices dropped more than 20% in a single day, hardly seemed rational. Economist William Sharpe, a leading figure in the efficient market school, pronounced the collapse "pretty weird" to the Wall Street Journal and soon heard from his mother: "Fifteen years of education, three advanced degrees, and all you can say is ‘it's weird.'"

The actual origins of the financial crisis are more obscure and disturbing than efficient market ideas and their alleged policy offspring. The defining economic event of the 1980s and for many years thereafter was the decline of double-digit inflation (from about 13% in 1979 to 4% by 1983), which unleashed a quarter-century economic boom. Inflation's suppression resulted from an unspoken alliance between then-Fed chairman Paul Volcker and newly-elected Ronald Reagan, who supported Volcker's high interest rates and tolerated the resulting harsh recession (peak monthly unemployment: 10.8%). The purging of inflationary expectations, in Kindleberger's terminology, was the initial "displacement." It created new profit opportunities and changed the way people thought and acted.

Contrasts between "before" and "after" Volcker-Reagan were stark. Before, government had aggressively tried to manage the economy to eliminate recessions, with progressively worse results: not only rising inflation but also four recessions in a row (1969, 1973-75, 1980, and 1981-82). After, government was less activist, and the economy improved. There were only two relatively mild recessions in the next quarter-century (those of 1990-91 and 2001). Average unemployment was much lower; the monthly maximum was 7.8%. The drop in interest rates, crudely reflecting inflation's decline, boosted both stock and housing prices. Before, the stock market had stagnated: it was about the same in 1982 as in 1965. After, it soared: in 1999, it was almost 12 times higher than in 1982. Housing prices followed a similar, if less steep, upward trajectory. The median sales price of existing homes nearly tripled from 1982 to 2007.

People took note. The lessons of this great reversal slowly seeped into popular and expert consciousness. Assumptions shifted; beliefs changed. Government economic management, it seemed, succeeded more by doing less. The "activist" policies of the 1960s and '70s had been mistaken. Left largely alone, financial markets mostly trended upward, as did home prices. Markets seemed largely self-correcting. When they went to excess, the Fed could cushion the effects by alertly easing credit conditions. The 1987 stock market crash hadn't caused a deep recession. Neither had the wildly inflated stock and tech bubbles of the late '90s. The acclaim for Greenspan reflected his (and the Fed's) apparent ability to prevent savage instability.

It was these actual experiences that conditioned behavior and shaped thinking, not devotion to the academic belief in efficient markets. If the theory attained some popular prominence, it did so only as one way of explaining improved U.S. economic performance. But it was the performance, not the theory, that mattered most. If the economic instability of the 1970s had persisted, few would have paid attention to the theory, which would have seemed unrelated to everyday reality. Greenspan and Bernanke's ideas seemed at most a pragmatic version of efficient markets. Financial markets policed themselves most of the time. But they could get out of whack. They were good but not perfect.

Logic to the Madness

On balance, the economic world seemed less risky. Economists talked about the Great Moderation: benevolent business cycles characterized by long expansions and infrequent, brief recessions. Financial markets for stocks, bonds, and currencies had become less volatile by 2003 and 2004; routine market swings were smaller. Good. The trouble was that all these favorable developments ultimately backfired. They rationalized self-defeating behavior.

If the economy was less risky, then practices and government policies that had once seemed dangerous or imprudent were less so. Investors and households alike could take on more "leverage"—borrow more—because the threats to repayment (deep recessions, big unpredicted drops in financial markets) had lessened. Lending standards could be relaxed for the same reasons. With rising asset prices (stocks, homes), households could save less and spend more because their wealth was constantly ascending. Thus, the gains from a less inflationary economy were squandered by the false security they inspired.
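
The appeal of extra leverage is easiest to see with a worked example. The sketch below, in Python with hypothetical figures, shows how borrowing magnifies the effect of a given price move on the owner's equity, which is harmless so long as prices rise and ruinous when they fall:

    def return_on_equity(asset_move, leverage):
        """Return on equity when assets are financed leverage-to-one with debt."""
        equity = 1.0
        assets = equity * leverage
        debt = assets - equity
        assets_after = assets * (1 + asset_move)
        return (assets_after - debt - equity) / equity

    for leverage in (2, 10, 30):
        hit = return_on_equity(-0.05, leverage)
        print(f"{leverage:>2}-to-1 leverage: a 5% fall in asset prices is a "
              f"{hit:+.0%} hit to equity")
    # 2-to-1 -> -10%; 10-to-1 -> -50%; 30-to-1 -> -150% (the owner is wiped out)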

In its details, the financial crisis of 2007-09 was highly complicated, featuring many obscure financial instruments and institutions. But in its essentials, the crisis was fairly simple, as Yale University economist Gary Gorton shows in Slapped by the Invisible Hand: The Panic of 2007. It resembled an old-fashioned bank panic, with the names and identities of the players changed. In a traditional panic, something—a bank or corporate failure, news of loan losses—frightens depositors about the safety of their accounts. People then assault not just one bank but many banks, demanding their money, because no one knows which banks have suffered losses and which haven't. It was this typical "retail" panic that government deposit insurance had effectively outlawed by assuring ordinary savers that, even if a bank failed, they'd get their money back.

By contrast, the 2007-09 crisis was a "wholesale" panic, Gorton writes. The "depositors" were not individuals but insurance companies, pension funds, and other "institutional investors." They didn't make deposits but provided funds through the "repo market." "Repo" is shorthand for "repurchase agreement." Under a repurchase agreement, the lender advances money to the borrower, which provides collateral in the form of securities (U.S. Treasury securities or some other type); the borrower repays the loan in a short period, usually a day to several months, and thereby "repurchases" its collateral. By 2007, the "repo" market may have grown to $10 trillion, and many banks and investment banks depended heavily on it for funds. But then it imploded, as lenders began to doubt the value of mortgage-backed securities being posted as collateral. Panic ensued because no one knew which financial institutions held the most "toxic" securities, Gorton argues. So lenders withdrew credit from all.
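
The mechanics of a repo are worth seeing in miniature. The sketch below, in Python with hypothetical numbers, shows the basic exchange of cash for collateral and why a market-wide loss of confidence in the collateral works like a withdrawal of deposits, modeled here (loosely following Gorton's account) as a jump in the "haircut" lenders demand:

    def repo_funding(collateral_value, haircut):
        """Cash a borrower can raise by posting collateral in a repurchase agreement."""
        return collateral_value * (1 - haircut)

    collateral = 100.0                        # market value of securities posted
    print(repo_funding(collateral, 0.02))     # calm market, 2% haircut -> 98.0 of funding
    print(repo_funding(collateral, 0.30))     # doubted collateral, 30% haircut -> 70.0

    # At maturity (often the next day), the borrower repays the cash plus the
    # repo rate and "repurchases" its collateral.
    principal = repo_funding(collateral, 0.02)
    overnight_rate = 0.03 / 360               # an illustrative 3% annual rate
    print(round(principal * (1 + overnight_rate), 2))  # 98.01 repaid, collateral returned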

With hindsight, many home loans packaged into those mortgage-backed securities seem absurd. Borrowers had weak credit histories—documentation of their incomes was sometimes missing—and after low "teaser" rates for two or three years, loan rates would rise to clearly unaffordable levels. But there was a logic to this madness, and it derived from the heady, post-inflationary era. Home prices had consistently appreciated—and, it was argued, would continue appreciating. If so, borrowers could refinance their loans after two or three years at lower rates, because their homes would be worth more and loans would be safer for lenders. Even if borrowers defaulted, lenders could more easily recover their funds. This beguiling logic rationalized more lending, which sent housing prices up, which rationalized more lending—until the bubble burst. The whole cycle originated in the disinflationary decline of interest rates that propelled the initial surge in home prices.
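
The refinancing logic can be made concrete with a little arithmetic. The sketch below, in Python with hypothetical figures and ignoring principal paydown, shows how the loan-to-value ratio on a weak mortgage evolves under rising, flat, and falling home prices:

    loan = 190_000.0    # mortgage balance, written at a 95% loan-to-value ratio
    price = 200_000.0   # the home's value at origination

    for appreciation in (0.10, 0.00, -0.10):  # annual change in home prices
        value_in_two_years = price * (1 + appreciation) ** 2
        ltv = loan / value_in_two_years       # ignoring principal paydown
        print(f"{appreciation:+.0%} per year -> loan-to-value after two years: {ltv:.0%}")
    # +10% -> 79%: easy to refinance before the teaser rate resets
    #   0% -> 95%: refinancing is hard
    # -10% -> 117%: the borrower owes more than the house is worth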

We all know the denouement. Excessive lending had also occurred elsewhere: in commercial real estate, car loans, and "private equity." As doubts mounted about financial institutions' health, their lenders retreated and redirected funds into "safe" U.S. Treasury securities. The Federal Reserve and Treasury, having underestimated the crisis for months, rushed to fill the void after the bankruptcy of Lehman Brothers in September 2008. Again, the details of different loan programs were highly technical and baffling. But their basic purpose was plain: to substitute government credit, albeit temporary, for vanishing private credit and to do "whatever it takes"—in David Wessel's evocative phrase—to avoid another Great Depression. As Wessel shows in In Fed We Trust: Ben Bernanke's War on the Great Panic, How the Federal Reserve Became the Fourth Branch of Government, the response was big enough to restore confidence by demonstrating that money markets would continue to function.

So the central question remains: can we harness finance for good without suffering from its periodic crashes?

Systemic Risk

On paper, the answer would seem to be "yes." In the wake of the crisis, both Greenspan and Bernanke confessed that their trust in the self-regulation of financial markets, though qualified, was still overdone. The road to redemption, then, would seem clear: revert to a more tightly regulated financial system, one more closely resembling what existed from World War II to the early 1980s. The general idea is that if government ultimately stands behind the financial system, then it must police risk-taking to preempt "systemic failures." In this spirit, the Obama Administration has made a series of proposals: tougher capital requirements for banks, greater regulation of "derivatives" such as credit default swaps, tighter supervision of any financial institution deemed "too big to fail," a way to shut down such institutions gracefully without duplicating the chaos of the Lehman bankruptcy, and a consumer finance agency to prevent abusive practices.

If adopted, some of these proposals (versions of which Congress debated as this essay was written) might improve financial safeguards. But one should be cautious. It's impossible to restore the early post-World War II financial system, and it's easy to exaggerate its virtues. At the end of World War II, commercial banks and savings and loan associations ("thrifts")—both heavily regulated—performed about three-quarters of financial intermediation between savers and spenders. Now their role is about two-fifths. Other channels (commercial paper, bonds, private equity, hedge funds) have flourished. In addition, there are now gigantic international flows of money that didn't exist in earlier decades. Banks in Europe can imperil banks in the United States—and vice versa. Regulation is harder, because the geography of finance is more intricate and more global.

Nor was the earlier system as sturdy as it seems in hindsight. By the 1980s, it was beginning to wobble. Not only was there the savings-and-loan crisis, as high inflation made many older, low-yielding home mortgages unprofitable, but commercial banks also suffered large losses on loans to Latin American developing countries, on commercial real estate, and on oil and natural gas projects. Government regulators didn't prevent these problems, a conclusion that reinforces the limited role of "deregulation" in explaining the recent crisis. The problem was not deregulation, because banks and much of the housing finance system—at the heart of the crisis—remained heavily regulated. The problem was that the regulators, like the bankers and brokers they supervised, succumbed to over-optimism. They overlooked or even encouraged relaxed lending standards.

Here is the difficulty. Policing for mistakes or recklessness at individual financial institutions is one thing. Policing for "systemic risk"—trends that jeopardize the entire financial superstructure—is quite another. Most bubbles do not trigger full-blown crises. They merely deflate and inflict losses. If markets are to function, this process must occur. Otherwise, investors and traders, believing themselves increasingly protected from losses, will undertake ever greater risks. But when does normal market functioning become a systemic threat? It isn't easy to tell. Even in mid-2008, when the crisis was unfolding and had already taken several unexpected turns, government officials and private bankers underestimated its potential fury. They suffered a collective failure of imagination. They didn't envision the chain reaction that would nearly devastate the system.

What this crisis demonstrated is that government officials and private bankers, traders, and investors are likely to share similar beliefs about how the system operates. After all, they've lived through the same experiences. The capacity for delusion persists, because although financial crises repeat an age-old cycle, as Kindleberger observed, the details differ. The housing bubble and the preceding tech bubble were not identical twins. So it is always possible to find new reasons why unsustainable trends can be sustained. The next crisis won't be like the last and, conceivably, might involve government debt of wealthy countries. Since World War II, no major advanced nation has defaulted on its official debt, and the notion that one or more might do so has long been considered unthinkable. That faith has justified a permissive attitude towards borrowing, which has resulted in high debt levels that now are rising rapidly. But the unthinkable can't happen, because this time is different.