To understand the motivations for the Dodd-Frank Wall Street Reform and Consumer Protection Act, and the accompanying rules and regulations put in place after the financial crisis that affect many operating in fintech today, it helps to take a closer look at some of the dynamics that drove the financial system off a cliff in the first place.
It’s fair to say the root cause of the financial crisis was simple: too much leverage in the system. But beyond that, the story gets complicated fast. Here are comprehensive “plain English” answers to three fundamental questions that are important for understanding the financial system’s near-death experience.
What are derivatives and how did they wreck the financial system?
Derivatives, in their most basic form, are relatively uncontroversial financial agreements between two parties that are linked to—or “derived” from—the underlying price of a financial asset. They can be used to transfer risk or, less productively, to speculate on future price changes (e.g., either as insurance or as a bet). Commodities futures and shorting stocks, to name two common examples, are both types of derivatives.
The derivatives that contributed to the financial crisis are called credit default swaps, or CDS. CDS were actually first developed in the mid-1990s to help banks manage risks resulting from extremely large, high-quality corporate loans. Banks would pay investors to take on a portion of the loan’s risk; in this way, CDS acted as insurance for the bank to protect against losses if the loan defaulted. In the case of default, the investors (the parties on the other side of the CDS) were obligated to cover the bank’s losses.
The main benefit of CDS to banks, however, wasn’t the guaranteed payout if the loan defaulted. Defaults were generally considered highly unlikely. Instead, CDS allowed banks to hold smaller capital cushions to protect against potential losses—and to make more money. Financial regulators require banks to hold a set amount of capital commensurate with (or, as was sometimes the case in the 1990s, higher than) the risk they take on; paying investors to hold some of that risk through CDS freed up capital the banks would otherwise have had to hold on their books, letting them use it for more profitable activities.
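The capital-relief mechanics described above can be sketched with a few lines of arithmetic. The loan size, capital ratio, and premium rate below are purely illustrative assumptions, not figures from the text:

```python
# Illustrative sketch only: all numbers are hypothetical, chosen to show
# why CDS protection was attractive to banks.

LOAN = 100_000_000          # bank's exposure to a large corporate loan
CAPITAL_RATIO = 0.08        # assumed capital requirement against the loan
CDS_PREMIUM_RATE = 0.002    # assumed annual fee paid to the CDS seller

# Without CDS: the bank holds capital against the full exposure.
capital_without_cds = LOAN * CAPITAL_RATIO              # $8,000,000

# With CDS covering half the loan: the protected portion is treated as
# transferred risk, so (in this simplified model) it needs no capital.
protected = 0.5 * LOAN
capital_with_cds = (LOAN - protected) * CAPITAL_RATIO   # $4,000,000
annual_premium = protected * CDS_PREMIUM_RATE           # $100,000

# Capital freed for more profitable activities, at the cost of the premium.
capital_freed = capital_without_cds - capital_with_cds  # $4,000,000
```

In this simplified model, paying $100,000 a year in premiums frees up $4 million of capital, which is the trade that made CDS appealing even when default seemed remote.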
Over time, CDS became more complex and linked to different types of securitized loans—mortgages included—and the underlying quality of the assets was often unclear or extremely difficult to accurately determine. (We’ll discuss this in more detail with the next question.)
At the same time, CDS became more popular with investors beyond traditional banks—those in the so-called “shadow banking” sector—whose ability to pay the original lenders in the case of default was not adequately considered. And, distinct from the initial use of CDS, investors figured out how to make money from multi-layered bets on the likelihood of default or price changes for the same underlying product. Although regulators were aware of the rapidly growing CDS market, there were no rules in place to help discipline it.
When mortgages began to default on a massive scale in 2007 and 2008, holders of mortgage CDS—like AIG, to name the most infamous example—were required to pay out and make the original lenders whole. But unlike traditional banks, these types of non-bank institutions were not required to hold capital to cover their risks (and even if they had been, the requirements—as they were for traditional banks—would almost certainly have been too low). Plus, the fact that so many investors had used CDS to place tangled networks of bets on mortgages with so little transparency meant that no one was sure which institutions were creditworthy and which weren’t. This threw the credit markets into chaos.
As more and more homeowners defaulted on their mortgages, institutions exposed to large amounts of mortgage debt or mortgage CDS were wiped out, requiring cash infusions from the Federal Reserve to prevent outright collapse.
In this way, too much risk became distributed opaquely throughout the financial system, among banks and non-banks, while CDS helped obscure the fact that system-wide capital reserves failed to match the risk.
In the last few years, Republicans and some northeastern Democrats have backed industry-favored bills that would roll back the parts of Dodd-Frank that govern derivatives. Proposals include permitting derivative trades to be explicitly backed by taxpayer dollars, creating loopholes for derivatives that would weaken institutions’ risk management profiles, requiring extensive cost-benefit analyses on new rules, and weakening new government authority to regulate systemic risk. For the most part, these types of proposals haven’t succeeded, but they will almost certainly continue to be put forth in the coming years.
What are mortgage-backed securities and how did they destabilize the financial system?
A mortgage-backed security (MBS) is a large group of mortgage loans that are packaged together and sold to investors. First, mortgage lenders extend (or “originate”) individual mortgage loans to homebuyers; then they sell those mortgages to an entity (often Fannie Mae, Freddie Mac, or an investment bank) that groups them together and sells them to investors. Investors in mortgage-backed securities then own the loans within the MBS, making money from homebuyers’ principal and interest payments. (Fannie Mae and Freddie Mac provide guarantees to some MBS investors on principal and interest payments—meaning Fannie or Freddie is responsible for making up the difference to investors if a homebuyer defaults.) For mortgages that don’t meet Fannie or Freddie’s underwriting requirements, investment banks step in to securitize the loans and sell them on to other investors.
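The pass-through and guarantee mechanics above can be sketched with a toy pool of loans. The three mortgages and their payment amounts below are hypothetical, chosen only to illustrate how a guarantor makes investors whole:

```python
# Hypothetical three-loan pool; all figures are illustrative assumptions.
mortgages = [
    {"monthly_payment": 1500, "defaulted": False},
    {"monthly_payment": 1200, "defaulted": True},   # homebuyer has defaulted
    {"monthly_payment": 1800, "defaulted": False},
]

# Cash actually collected from homebuyers this month.
collected = sum(m["monthly_payment"] for m in mortgages if not m["defaulted"])

# Total principal-and-interest payments promised to MBS investors.
promised = sum(m["monthly_payment"] for m in mortgages)

# With a Fannie/Freddie-style guarantee, the guarantor covers the shortfall,
# so investors still receive the full promised amount.
guarantee_payment = promised - collected
investor_receipt = collected + guarantee_payment
```

Here investors collect the full $4,500 promised even though only $3,300 came in from homebuyers; the $1,200 gap is absorbed by the guarantor, which is precisely the exposure Fannie and Freddie take on.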
Historically, MBS have been very safe investments and, along with Fannie and Freddie’s guarantees, are critical to ensuring liquidity in the housing market, so that millions of Americans can continue to purchase homes. The primary problem with MBS prior to the onset of the financial crisis was that many included very risky subprime mortgages—mortgages that should never have been originated in the first place. Yet banks bought these mortgages anyway, without enough regard for their quality, then pooled them together in ways that obscured their risk, obtained AAA ratings from credit rating agencies—many of which were found to be asleep at the wheel—and either held them in their portfolios or sold them on to investors throughout the financial system.
When homebuyers with subprime mortgages began to default on their loans, financial institutions that owned MBS (and/or derivatives linked to MBS) began to experience large, unexpected losses. Investment banks did not have adequate capital cushions to absorb these losses, which directly caused Lehman Brothers to fail and other large firms to require government rescue.
What is the Volcker Rule and how does it relate to Glass-Steagall?
The Volcker Rule is sometimes confused with the Glass-Steagall Act, which was passed in 1933 and repealed in 1999. Glass-Steagall required commercial banks (banks that held people’s deposits) to sell or spin off the vast majority of their profit-seeking investment activities—essentially breaking up the banks after the Depression. For nearly 70 years, investment banks and commercial banks couldn’t be housed together.
The rationale for Glass-Steagall’s passage was that, prior to the Depression, banks had gambled on risky investments for their own profit using depositors’ money and, especially after federal deposit insurance was created, should not be allowed to risk either depositors’ or taxpayers’ funds again in the future. After Glass-Steagall was repealed, depository institutions, like Citigroup, were allowed to acquire non-commercial banks or build out their own investment banking activities.
The Volcker Rule (sometimes called the mini Glass-Steagall) has a similar aim, but takes a softer approach. Instead of breaking up banks’ commercial and investment businesses, the Volcker Rule places limits on their activities. Advocates for stronger Wall Street reform have criticized the Volcker Rule for being too complex, which some say will allow banks to find legal loopholes to continue to pursue risky investment practices.
The repeal of Glass-Steagall did not cause the financial crisis, but it may have exacerbated it. The initial failures or near-failures—Bear Stearns, Lehman Brothers, Merrill Lynch—were investment banks that did not hold deposits. But the contagion then spread to depository institutions like Citigroup and Bank of America, among others, which would have been less exposed to losses—and less in need of government rescue—had Glass-Steagall remained the law of the land. These institutions also wouldn’t have been allowed to contribute to the pre-crisis casino culture in the first place.
Nobel Prize-winning economist Joseph Stiglitz said of the law’s repeal:
“Commercial banks are not supposed to be high-risk ventures; they are supposed to manage other people’s money very conservatively…. It is with this understanding that the government agrees to pick up the tab should they fail. Investment banks, on the other hand, have traditionally managed rich people’s money — people who can take bigger risks in order to get bigger returns. When repeal of Glass-Steagall brought investment and commercial banks together, the investment-bank culture came out on top. There was a demand for the kind of high returns that could be obtained only through high leverage and big risk-taking.”
Even so, the question of whether reinstating Glass-Steagall is necessary—or even relevant—is still hotly debated among financial reform experts and political leaders. Even though federal action is tremendously unlikely, it’s a question that isn’t going away anytime soon.