Real Estate Research provided analysis of topical research and current issues in the fields of housing and real estate economics. Authors for the blog included the Atlanta Fed's Jessica Dill, Kristopher Gerardi, Carl Hudson, and analysts, as well as the Boston Fed's Christopher Foote and Paul Willen.
October 4, 2011
The uncertain case against mortgage securitization
The opinions, analysis, and conclusions set forth are those of the authors and do not indicate concurrence by members of the Board of Governors of the Federal Reserve System or by other members of the staff.
Did mortgage securitization cause the mortgage crisis? One popular story goes like this: banks that originated mortgage loans and then sold them to securitizers didn't care whether the loans would be repaid. After all, since they sold the loans, they weren't on the hook for the defaults. Without any "skin in the game," those banks felt free to make worse and worse loans until...kaboom! The story is an appealing one and, since the beginning of the crisis, it has gained popularity among academics, journalists, and policymakers. It has even influenced financial reform. The only problem? The story might be wrong.
In this post we report on the latest round in an ongoing academic debate over this issue. We recently released two papers, available here and here, in which we argue that the evidence against securitization that many have found most damning has in fact been misinterpreted. Rather than being a settled issue, we believe securitization's role in the crisis remains an open and pressing question.
The question is an empirical one
Before we dive into the weeds, let us point out why the logic of the above story need not hold. The problem posed by securitization—that selling risk leads to excessive risk-taking—is not new. It is an example of the age-old incentive problem of moral hazard. Economists usually believe that moral hazard causes otherwise-profitable trade to not occur, or that it leads to the development of monitoring and incentive mechanisms to overcome the problem.
In the case of mortgage securitization, such mechanisms had been in place, and a high level of trade had been achieved, for a long time. Mortgage securitization was not invented in 2004. To the contrary, it has been a feature of the housing finance landscape for decades, without apparent incident. As far back as 1993, nearly two-thirds (65.3 percent) of mortgage volume was securitized, about the same fraction as was securitized in
2006 (67.6 percent) on the eve of the crisis. In order to address potential moral hazard, securitizers such as Fannie Mae and Freddie Mac (the government sponsored enterprises, or GSEs) long ago instituted regular audits, "putback" clauses forcing lenders to repurchase nonperforming or improperly originated loans, and other procedures designed to force banks to lend responsibly. Were such mechanisms successful? Perhaps, perhaps not. It is an empirical question, and so our understanding will rest heavily on the evidence.
The case against securitization
Benjamin Keys, Tanmoy Mukherjee, Amit Seru, and Vikrant Vig released an empirical paper in 2008 (revised in 2010) titled "Did Securitization Lead to Lax Screening? Evidence from Subprime Loans" (henceforth, KMSV) that pointed the finger squarely at securitization. The paper won several awards and, when it was published in the Quarterly Journal of Economics in 2010, it became that journal's most-cited paper that year by more than a factor of two. In other words, it was a very well-received and influential paper.
And for good reason. KMSV employs a clever method to try to answer the question of securitization's role in the crisis. Banks often rely on borrowers' credit (FICO) scores to make lending decisions, using particular score thresholds to make determinations. Below 620, for example, it is hard to get a loan. KMSV argues that securitizers also use FICO score thresholds when deciding which loans to buy from banks. Loan applicants just to the left of the threshold (FICO of 619) are very similar to those just to the right (FICO of 621), but they differ in the chance that their bank will be able to sell their loan to securitizers. Will the bank treat them differently as a result? This seems to have the makings of an excellent natural experiment.
Figures 1 and 2, taken from KMSV, illustrate the heart of their findings. Using a data set of only private-label securitized loans, the top panel plots the number of loans at each FICO score. There is a large jump at 620, which, KMSV argues, is evidence that it was easier to securitize loans above 620. The bottom panel shows default rates for each FICO score. Though we would expect default to smoothly decrease as FICO increases, there is a significant jump up in default at exactly the same 620 threshold. It appears that because securitization is easier to the right of the 620 cutoff, banks made worse loans. This seems prima facie evidence in favor of the theory that mortgage securitization led to moral hazard and bad loans.
Reexamining the evidence
But what is really going on here? In September 2009, the Boston Fed published a paper we wrote (original version here, updated version here) arguing for a very different interpretation of this evidence. In fact, we argue that this evidence actually supports the opposing hypothesis that securitizers were to some extent able to regulate originators' lending practices.
The data set used in KMSV only tells part of the story because it contains only privately securitized loans. We see a jump in the number of these loans at 620, but we know nothing about what is happening to the number of nonsecuritized loans at this cutoff. The relevant measure of ease of securitization is not the number of securitized loans, but the chance that a given loan is securitized—in other words, the securitization rate.
We used a different data set that includes both securitized and nonsecuritized loans, allowing us to calculate the securitization rate. Figures 3 and 4 come from the latest version of our paper.
Like KMSV, we find a clear jump up in the default rate at 620, as shown in the bottom panel. However, the chance a loan is securitized actually goes down slightly at 620, as shown in the top panel. How can this be? It turns out that above the 620 cutoff banks make more of all loans, securitized and nonsecuritized alike. This general increase in the lending rate drives the increase in the number of securitized loans that was found in KMSV, even though the securitization rate itself does not increase. With no increase in the probability of securitization, it is hard to argue that the jump in defaults at 620 is occurring because easier securitization motivates banks to lend more freely.
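The distinction between the number of securitized loans and the securitization rate can be made concrete with a stylized calculation (the figures below are invented for illustration and are not drawn from either paper): if lenders simply approve more of all loans above 620 while securitizing a constant share, the count of securitized loans jumps at the cutoff even though the rate does not.

```python
# Stylized sketch with invented numbers: more lending of ALL kinds above 620,
# combined with a constant securitization rate, still produces a jump in the
# COUNT of securitized loans at the cutoff.

SEC_RATE = 0.65  # assumed securitization rate, the same on both sides of 620

def loans_originated(fico):
    """Hypothetical loan counts: a smooth trend, doubled above the 620 cutoff."""
    trend = 100 + (fico - 600)  # smooth underlying volume
    return trend * 2 if fico >= 620 else trend

for fico in (618, 619, 620, 621):
    total = loans_originated(fico)
    print(fico, total, round(SEC_RATE * total, 1))
```

In a data set containing only securitized loans, the count jumps from about 77 at FICO 619 to 156 at FICO 620, yet the securitization rate is 65 percent everywhere: the jump in counts alone cannot show that securitization became easier at the cutoff.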
The real story behind the jumps in default
So why are banks changing their behavior around certain FICO cutoffs? To answer this question, we must go back to the mid-1990s and the introduction of FICO into mortgage underwriting. In 1995, Freddie Mac began to require mortgage lenders to use FICO scores and, in doing so, established a set of FICO tiers that persists to this day. Freddie directed lenders to give greater scrutiny to loan applicants with scores in the lower tiers. The threshold separating worse-quality applicants from better applicants was 620. The next threshold was 660. Fannie Mae followed suit with similar directives, and these rules of thumb quickly spread throughout the mortgage market, in part aided by their inclusion in automated underwriting software.
Importantly, the GSEs did not establish these FICO cutoffs as rules about what loans they would or would not securitize—they continued to securitize loans on either side of the thresholds, as before. These cutoffs were recommendations to lenders about how to improve underwriting quality by focusing their energy on vetting the most risky applicants, and they became de facto industry standards for underwriting all loans. Far from being evidence that securitization led to bad loans, the cutoffs are evidence of the success securitizers like Fannie and Freddie have had in directing lenders how to lend.
With this in mind, the data begin to make sense. Lenders, following the industry standard originally promulgated by the GSEs, take greater care extending credit to borrowers below 620 (and 660). They scrutinize applicants with scores below 620 more carefully and are less likely to approve them than applicants above 620, resulting in a jump-up in the total number of loans at the threshold. However, because of the greater scrutiny, the loans that are made below 620 are of higher average quality than the loans that are made above 620. This causes the jump-up in the default rate at the threshold.
Figures 5 and 6 show that this pattern also exists among loans that are kept in portfolio and never securitized. The change in lending standards causes these loans, as well as securitized loans, to jump in number and drop in quality at 620 (and 660). However, as figure 3 shows, the securitization rate doesn't change because securitized and nonsecuritized loans increase proportionately. The FICO cutoffs are used by lenders because they are general industry standards, not because the securitization rate changes. This means the cutoffs cannot provide evidence that securitization led to loose lending.
But the debate does not end there. In April 2010, Keys, Mukherjee, Seru, and Vig released a working paper (KMSV2), currently forthcoming in the Review of Financial Studies, that responded to the issues we raised. According to the paper, the mortgage market is segmented into two completely separate markets: 1) a "prime" market, in which only the GSEs buy loans, and 2) a "subprime" market, in which only private-label securitizers buy loans. KMSV2 argues that only private-label securitizers follow the 620 rule and, by pooling these two types of loans in our analysis, we obscured the jump in the securitization rate in that market.
The latest round in the debate
We went back to the drawing board to investigate these claims. We detail our findings in a new paper, available here. In the paper, we demonstrate that the pattern of jumps in default—without jumps in securitization—is not simply an artifact of pooling, but rather exists for many subsamples that do not pool GSE and private-label securitized loans. For example, we find the pattern among jumbo loans (by law an exclusively private-label market), among loans bought by the GSEs, and among loans originated in the period 2008–9 after the private-label market shut down. Furthermore, as figure 7 shows, the private-label market boomed in 2004 and disappeared around 2008, while the size of the jump in the number of loans at 620 continued to grow through 2010, demonstrating that use of the threshold was not tied to the private market.
What's more, KMSV's response fails to address the fundamental problem we identified with their research design: following the mandate of the GSEs, lenders independently use a 620 FICO rule of thumb in screening borrowers. Even if some subset of securitizers had used 620 as a securitization cutoff, one would not be able to tell what part of the jump in defaults is caused by an increase in securitization, and what part is simply due to the lender rule of thumb. Consequently, the jump in defaults at 620 cannot tell us whether securitization led to a moral hazard problem in screening.
To put this in more technical jargon, KMSV use the 620 cutoff as an instrument for securitization to investigate the effect of securitization on lender screening. But the guidance from the GSEs that caused lenders to adopt the 620 rule of thumb in the first place means that the exclusion restriction for the instrument is not satisfied—the 620 cutoff affects lender screening through a channel other than any change in securitization.
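In this framing, the cutoff design amounts to a Wald (fuzzy regression discontinuity) estimator: the jump in defaults at 620 divided by the jump in the securitization rate. A toy calculation with made-up numbers (not taken from either paper) shows why a near-zero first-stage jump is fatal:

```python
# Toy Wald / fuzzy-RD calculation with invented numbers:
# implied effect = (jump in default rate) / (jump in securitization rate).

default_left, default_right = 0.050, 0.065  # assumed default rates just below/above 620
sec_left, sec_right = 0.650, 0.648          # securitization rate barely moves (slightly down)

wald = (default_right - default_left) / (sec_right - sec_left)
print(f"implied effect of securitization on default: {wald:.2f}")
# With essentially no first-stage jump, the estimate is huge and wrong-signed.
# And if lenders screen differently at 620 for reasons unrelated to
# securitization, the exclusion restriction fails regardless of the first stage.
```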
We also found that the GSE and private-label markets were not truly separate. In addition to qualitative sources describing them as actively competing for subprime loans, we find that 18 percent of the loans in our sample were at one time owned by a GSE and at another time owned by a private-label securitizer—a lower bound on the fraction of loans at risk of being sold to both. Because the markets were not separate, the data must be pooled.
Our findings, of course, do not settle the question of whether securitization caused the crisis. Rather, they show that the cutoff rule evidence does not resolve the question in the affirmative but instead points a bit in the opposite direction. Credit score cutoffs demonstrate that large securitizers like Fannie Mae and Freddie Mac were able to successfully impose their desired underwriting standards on banks. We hope our work causes researchers and policymakers to reevaluate their views on mortgage securitization and leads eventually to a conclusive answer.
By Ryan Bubb, assistant professor at the New York University School of Law, and Alex Kaufman, economist with the Board of Governors of the Federal Reserve System
April 18, 2011
What effect does negative equity have on mobility?
A debate has broken out in the housing literature over the effect of negative equity on geographic mobility. The key question is whether homeowners with negative equity—those who are "under water"—are more or less likely to move than homeowners with positive equity. In a paper published in the Journal of Urban Economics last year (available on the New York Fed website), Fernando Ferreira, Joseph Gyourko, and Joseph Tracy (hereafter, FGT) argue that underwater owners are far less mobile. Using data from 1985 to 2005, they find that negative equity reduces the two-year mobility rate of the average American household by approximately 50 percent. This is a very large effect and, if true, FGT's findings have important policy implications for both the housing market and the labor market today. For example, the economist and Nobel laureate Joseph Stiglitz, in testimony to the Joint Economic Committee of Congress on December 10, 2009, stated:
But the weak housing market will contribute to high unemployment and lower productivity in another way: a distinguishing feature of America's labor market is its high mobility. But if individuals' mortgages are underwater or if home equity is significantly eroded, they will be unable to reinvest in a new home.
The fear is that if people with negative equity can't move to new jobs, then the job-matching efficiency of the U.S. labor market will suffer, putting upward pressure on the unemployment rate. This type of "house lock" is exactly what the economy doesn't need as it emerges from the recent housing crisis and recession.
However, recent research by Sam Schulhofer-Wohl, an economist at the Minneapolis Fed, casts doubt on FGT's conclusions, as well as on the economic intuition in Stiglitz's testimony. Schulhofer-Wohl replicated the FGT analysis using the same data set (the American Housing Survey, or AHS) over the same sample period. But he found the exact opposite result: negative equity significantly increases geographic mobility.
What is the source of the discrepancy?
The difference in results stems from what at first blush seems like a small discrepancy in how the two papers identify household moves in the AHS. Here are the details: the AHS is conducted every two years by the U.S. Census Bureau as a panel survey of homes. That means that AHS interviewers go to the same homes every two years to record who lives there (among other pieces of information). For a home that is owner-occupied in one survey year, there are four possibilities regarding its status two years later. First, the home could still be owner-occupied by the same household as before. Second, the home could be owner-occupied by a different household. Third, it could be occupied by a different household that rents the home but doesn't own it. Finally, the home could be vacant.
In their paper, FGT treated the first category as a non-move and the second category as a move. FGT threw out of their analysis any observations that fell into the third and fourth categories.1 Dropping these last two categories, rather than coding them as moves, introduces significant bias into FGT's results. As Schulhofer-Wohl notes, it effectively assumes that households in negative equity positions are no more likely to rent out their homes, or leave them vacant when they move, than are households with positive equity. But it is relatively straightforward to show that this assumption is not borne out in the data. Specifically, Schulhofer-Wohl finds that positive-equity households who move sell their houses to new owner-occupiers two-thirds of the time. The other two possibilities (renting out the home or leaving it vacant) combine to occur only one-third of the time. In contrast, among negative-equity households who move, sales to new owner-occupant households occur half of the time, with the other two possibilities occurring the other half. Thus, by dropping the last two categories of transitions, FGT are artificially increasing the mobility rate of positive equity households relative to negative equity households.
Schulhofer-Wohl recodes the moving variable so that instances in which an owner-occupied home is rented or vacated also count as moves. He then re-estimates FGT's regressions. The coding change reverses the estimated relationship between negative equity and mobility. The new estimates show that negative equity raises the probability of moving by 10 to 18 percent, relative to the overall probability of moving in the AHS data. This of course is in marked contrast to FGT's results, where negative equity was found to significantly decrease the probability of moving.
What does theory tell us?
When thinking about what economic theory might say about the relationship between negative equity and mobility, it is important to distinguish how equity might affect selling versus how equity might affect moving. FGT write that their results suggest a role for what behavioral economists call "loss aversion." In this context, loss aversion can occur when owners are reluctant to turn paper losses into real ones by selling a home that has fallen in price. But, as Schulhofer-Wohl's analysis makes clear, it is possible and even common for households to move to different houses without selling their old ones. That means that loss aversion potentially affects the probability of selling a home without affecting the probability of moving.
Of course, while moving and selling are theoretically distinct, they often occur together in practice. One reason for the tight relationship between moving and selling involves liquidity constraints. Even short-distance moves entail nontrivial transaction costs, so households that do not have liquid wealth may not be able to move without selling their home. As a result, to the extent that negative equity decreases the probability of selling (via loss aversion), it may also decrease the probability of moving.
Besides loss aversion, there are at least two other channels through which liquidity constraints are relevant for the way that negative equity affects homeowner mobility. By definition, underwater households cannot retire their mortgages by selling their houses. Liquidity-constrained households that are also under water do not have the cash to make up the difference between the outstanding mortgage balance and sale price. As a result, negative equity could reduce selling (and, by extension, moving). On the other hand, liquidity-constrained households are more likely to simply default on their mortgages. Thus, negative equity might increase the probability of moving, though the moves that it facilitates are accompanied by foreclosures and not sales. Note that this "default channel" between negative equity and mobility depends importantly on expectations of future housing prices. Negative-equity households who do not think housing prices will rise any time soon are more likely to default on their mortgages, and thus move, than households who think that higher prices and restored housing equity are just around the corner.
The offsetting implications of liquidity constraints on mobility mean that theory doesn't provide a clean prediction for how negative equity should affect mobility. The question boils down to which implication is dominant in the data. The findings from the Schulhofer-Wohl paper suggest that the default channel may be relatively large, so concern about negative equity impeding homeowner mobility may be overblown.
Are these studies relevant to the current environment?
The sample period for both papers we have discussed ended in 2005. While we certainly believe that the issue addressed by both papers is very important, and that the Schulhofer-Wohl analysis corrects an important omission in the FGT study, we would offer a cautionary note to those who would extrapolate the findings of these studies to the current environment. The period 1985–2005 was a boom time in housing markets for most areas of the country. One way to see this is by noting the low number of negative equity observations in both the FGT and Schulhofer-Wohl papers. The majority of negative equity observations in the AHS data is likely from only a couple of areas in the country and from a narrow time period (most likely from the East and West coasts in the late 1980s and early 1990s). These places and time periods may be unrepresentative of the average negative-equity owner today.
Even more importantly, there were very few foreclosures from 1985 to 2005 relative to the past several years. This paucity of foreclosures was probably due not only to the low number of negative-equity households, but also to the low probability of foreclosure conditional on having negative equity. Recall that if housing prices are generally rising, households with negative equity will try hard to hang on to their homes and reap the benefits of future price appreciation, even if they are liquidity-constrained. It's probably safe to say that price expectations are lower today than they were in 1985–2005. Because low price expectations increase defaults, and because defaults and foreclosures increase the mobility of negative-equity owners through the default channel, it might be the case that the current effect of negative equity on mobility is not only positive, but also even larger than the positive estimates in Schulhofer-Wohl's paper.
Research economist and assistant policy adviser at the Federal Reserve Bank of Atlanta
1 This coding choice is not divulged in the FGT paper. The authors confirmed in private correspondence that it was a conscious decision to omit these categories and not a coding error, and that they are currently working on a revision of their original work that will address this issue.
March 9, 2011
The seductive but flawed logic of principal reduction
The idea that a program to reduce principal balances on mortgage loans will cure the nation’s housing ills at little or no cost has been kicking around since the very early stages of the foreclosure crisis and refuses to die. If news stories are true, the administration, in conjunction with the state attorneys general, will soon announce that lenders have agreed to write down borrower principal balances by a grand total of $20–$25 billion as part of a deal to address serious procedural problems in foreclosure filings. Policy wonks and housing experts will greet this announcement with glee, saying that policymakers have ignored principal reduction for too long but have seen the light and are finally going to cure the epidemic of foreclosures that has gripped the country since 2007. Are the wonks right? In short: we think not.
Why do so many wonks love principal reduction? Because they think principal reduction prevents foreclosures at no cost to anyone—not taxpayers, not banks, not shareholders, not borrowers. It is the quintessential win-win or even win-win-win solution. The logic of principal reduction is that in a foreclosure, a lender recovers at most the value of the house in question and typically far less. This is because of the protracted foreclosure process during which the house deteriorates and the lender collects no interest but has to pay lawyers and other staff to navigate 50 different byzantine state bureaucracies to get a clean title to the house, which it then has to sell in an extremely weak market. In contrast, reducing the principal balance to equal the value of the house guarantees the lender at least the value of the house because the borrower now has positive equity and research shows that borrowers with positive equity don’t default. To put numbers on this story, suppose the borrower owes $150,000 on a $100,000 house. If the lender forecloses, let's assume it collects, after paying the lawyers and the damage on the house, etc., $50,000. However, if it writes principal down to $95,000, it will collect $95,000 because the borrower now has positive equity and won't default on the mortgage. Lenders could reduce principal and increase profits!
The problem with the principal reduction argument is that it hinges on a crucial assumption: that all borrowers with negative equity will default on their mortgages. To understand why this assumption is crucial to the argument, suppose there are two borrowers who owe $150,000 but one prefers not to default (perhaps because she has a particularly strong preference for her current home, or because she does not want to destroy her
credit, or because she thinks there's a chance that house prices will recover) and eventually repays the whole amount while the other defaults. If the lender writes down both loans, it will collect $190,000 ($95,000 from each borrower). If the lender does nothing, it will eventually foreclose on one and collect $50,000, but it will recover the full $150,000 from the other borrower, thus collecting $200,000 overall. Hence, in this simple example, the lender will obtain more money by choosing to forgo principal reduction.
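The arithmetic of the two-borrower example can be laid out explicitly:

```python
# The post's two-borrower example: each owes $150,000 on a $100,000 house;
# one would repay in full without help, the other would default.

OWED = 150_000
WRITEDOWN_BALANCE = 95_000      # principal written down to below the house value
FORECLOSURE_RECOVERY = 50_000   # assumed net recovery in foreclosure

# Write down both loans: both borrowers now have positive equity and repay.
collect_if_writedown = 2 * WRITEDOWN_BALANCE

# Do nothing: one borrower repays in full, the other is foreclosed on.
collect_if_nothing = OWED + FORECLOSURE_RECOVERY

print(collect_if_writedown, collect_if_nothing)  # 190000 200000
```

Because the lender cannot tell the two borrowers apart in advance, the blanket write-down collects $190,000 while doing nothing collects $200,000.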
The obvious response is that the optimal policy should be to offer principal reduction to one borrower and not the other. However, this logic presumes that the lender can perfectly identify the borrower who will pay and the borrower who won't. Given that there is a $55,000 principal reduction at stake here, the borrower who intends to repay has a strong incentive to make him- or herself look like the borrower who won't!
This is an oft-encountered problem in the arena of public policy. Planners often have a preventative remedy that they must implement before they know who will actually need the assistance. This inability to identify the individuals in need always raises the cost of the remedy, sometimes dramatically so. A nice illustration of this problem can be seen in the National Highway Traffic Safety Administration's (NHTSA) proposed regulation to require all cars to have backup cameras to prevent drivers from running over people when they drive in reverse. High-tech electronics mean that such cameras cost comparatively little: $159 to $203 for cars without a pre-existing navigation screen, and $53 to $88 for cars with a screen, according to the NHTSA. $200 seems like an awfully small price to pay to prevent gruesome accidents that are often fatal and typically involve small children and senior citizens. But the NHTSA says that the cameras are actually extremely expensive, and arguably prohibitively so. What gives? How can $200 be considered a lot of money in this context? The problem is that backup fatalities are extremely rare—something on the order of 300 per year—so the vast majority of backup cameras never prevent a fatality. To assess the true cost, one has to take into account the fact that for every one camera that prevents a fatality, hundreds of thousands will not. Done right, the NHTSA estimates the cost of the cameras at between $11.3 million and $72.2 million per life saved.
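The cost-per-life arithmetic works roughly as follows (the per-camera cost comes from the post; the fleet size and lives-saved figures are our own illustrative assumptions, not NHTSA's):

```python
# Back-of-the-envelope cost-per-life sketch. The unit cost is from the post;
# the fleet size and lives saved are assumptions, chosen only to show the
# shape of the calculation.

unit_cost = 200.0           # rough per-car cost of a backup camera
new_cars_per_year = 16e6    # assumed annual U.S. new-vehicle sales
lives_saved_per_year = 100  # assumed fatalities prevented once cameras are universal

cost_per_life = unit_cost * new_cars_per_year / lives_saved_per_year
print(f"${cost_per_life / 1e6:.0f} million per life saved")
```

Under these assumptions the figure lands at $32 million per life saved, inside the range NHTSA reports: a small unit cost multiplied over an entire fleet is enormous relative to the rare events it prevents.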
The idea of principal reduction starts with a correct premise: borrowers with positive equity—that is, houses worth more than the unpaid principal balance on their mortgages—rarely ever lose their homes to foreclosure. In the event of an unexpected problem (like an unemployment spell) that makes the mortgage unaffordable, borrowers with positive equity can profitably sell their house rather than default. The reason that foreclosures are rare in normal times is that house prices usually increase over time (inflation alone keeps them growing even if they are flat in real terms) so almost everyone has positive equity. What happened in 2006 is that house prices collapsed and millions of homeowners found themselves with negative equity. Many who got sick or lost their jobs were thus unable to sell profitably.
With this idea in mind, it then follows that if we could somehow get everyone back into positive-equity territory, then we could end the foreclosure crisis. To do that, we either need to inflate house prices, which is difficult to do and probably a bad idea anyway, or reduce the principal mortgage balances for negative-equity borrowers. So we have a cure for the foreclosure crisis: if we can get lenders to write down principal to give all Americans positive equity in their homes, the housing crisis would be over. Of course, the question becomes, who will pay? Estimates suggest that borrowers with negative equity owe almost a trillion dollars more than their homes are worth, and a trillion dollars, even now, is real money. The principal reduction idea might stop here—an effective but unaffordable plan—but people then realized that counting the entire balance reduction as a cost was wrong. In fact, not only was the cost far less than a trillion dollars but, as we noted above, many principal reduction proponents argue that it might not cost anything at all.
The logic that principal reduction can prevent foreclosures at no cost is compelling and seductive, and proposals to encourage principal reduction were common early in the foreclosure crisis. In a March 2008 speech, one of our bosses, Eric Rosengren, noted that "shared appreciation" arrangements had been offered as a way to reduce foreclosures; these arrangements had the lender reduce principal in return for a portion of future price gains realized on the house. In July 2008, Congress passed the Housing and Economic Recovery Act of 2008, which created Hope for Homeowners, a program that offered government support for new loans to borrowers if the lender was willing to write down principal.
While we were initially supportive of principal-reduction plans, we began to have doubts over the course of 2008, for two reasons. First, we could find no evidence that any lender was actually reducing principal. Commentators blamed the lack of reductions on legal issues related to mortgage securitization, but we became skeptical of this argument: the incidence of principal reduction was so low that securitization alone could not have been the only problem, or even a major one. (Subsequent research has shown this to be largely right: the effect of securitization on renegotiation was between nil and small in this crisis, and lenders did not reduce principal much even during the Depression, when securitization did not exist.) Second, of course, we came to recognize the logical flaw described above.
Negative equity and foreclosure
But aren't we being pessimistic here? Aren't we ignoring research that shows that negative equity is the best predictor of foreclosure? No, we aren't. On the contrary, we have authored some of that research and have long argued for the central importance of negative equity in forecasting foreclosures. But that research shows not that all or most people with negative equity will lose their homes, but rather that people with negative equity are much more likely to lose their homes; most of them nonetheless eventually pay off their mortgages. The relationship of negative equity to foreclosure is akin to that of cholesterol and heart attacks: high cholesterol dramatically increases the odds of a heart attack, but the vast majority of people with high cholesterol do not have heart attacks any time in the near or even not-so-near future.
To be sure, there are some mortgages out there with very high foreclosure likelihood: loans made to borrowers with problematic credit and no equity to begin with, located in places where prices have fallen 60 percent or more. However, such loans are quite rare now—most of those defaulted soon after prices started to fall in 2007—and they make up a small fraction of the pool of troubled loans currently at risk. Worse, the principal reductions required to give such borrowers positive equity are so large that the $20–25 billion figure discussed for the new program would prevent at most tens of thousands of foreclosures and make only a small dent in the national problem.
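A back-of-envelope calculation shows why a $20–25 billion budget prevents at most tens of thousands of foreclosures when the required write-downs are large. The per-loan write-down amounts below are our own illustrative assumptions, not figures from any actual program:

```python
# Back-of-envelope: how many foreclosures could a fixed budget prevent?
# All per-loan figures are hypothetical, for illustration only.
program_budget = 22.5e9  # midpoint of the $20-25 billion range

# Deeply underwater loans (prices down 60 percent or more) need very large
# write-downs to restore positive equity; try a few assumed averages.
for avg_writedown in (200_000, 300_000, 400_000):
    prevented = program_budget / avg_writedown
    print(f"${avg_writedown:,} per loan -> ~{prevented:,.0f} loans")
```

With write-downs in the low hundreds of thousands of dollars per loan, the budget covers well under 150,000 loans, a small share of the millions of underwater mortgages.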
Millions of borrowers with negative equity will default, but there are many millions more who will continue to make payments on their mortgages, behavior that is not, contrary to popular belief, a violation of economic theory. Economic theory only says that borrowers with positive equity won't default (read it carefully). It is logically false to infer from this prediction that all borrowers with negative equity will default. "A implies B" does not mean that "not A" implies "not B," as any high school math student can explain. And in fact, standard models show that the optimal default threshold occurs at a price level below, and often significantly below, the unpaid principal balance on the mortgage.
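The threshold point can be made concrete with a stylized decision rule. The sketch below is our own illustration, not a model from the literature, and the 0.8 threshold is an arbitrary assumed value:

```python
# Stylized rule from option-based default models: a borrower optimally
# defaults only when the house price falls sufficiently *below* the unpaid
# balance, not the moment equity turns negative. The 0.8 threshold is an
# illustrative assumption, not an estimate.
def defaults(house_price: float, balance: float, threshold: float = 0.8) -> bool:
    """Default only if price < threshold * balance, with threshold < 1."""
    return house_price < threshold * balance

balance = 200_000
print(defaults(190_000, balance))  # negative equity, but no default -> False
print(defaults(150_000, balance))  # deeply underwater -> True
```

In this sketch a borrower who is $10,000 underwater keeps paying, consistent with the observation that most negative-equity borrowers eventually pay off their mortgages.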
The problem of asymmetric information
Ultimately the reason principal reduction doesn't work is what economists call asymmetric information: only the borrowers have all the information about whether they really can or want to repay their mortgages, information that lenders don’t have access to. If lenders weren't faced with this asymmetric information problem—if they really knew exactly who was going to default and who wasn't—all foreclosures could be profitably prevented using principal reduction. In that sense, foreclosure is always inefficient—with perfect information, we could make everyone better off. But that sort of inefficiency is exactly what theory predicts with asymmetric information.
And, in all this discussion, we have ignored the fact that borrowers can often control the variables that lenders use to try to narrow down the pool of borrowers who are likely to default. For example, most of the current mortgage modification programs (like the Home Affordable Modification Program, or HAMP) require borrowers to have missed a certain number of mortgage payments (usually two) in order to qualify. This is a reasonable requirement, since we would like to focus assistance on troubled borrowers who need help. But it is quite easy to purposefully miss a couple of mortgage payments, and it might be a very desirable thing to do if it means qualifying for a generous concession from the lender such as a reduction in the principal balance of the mortgage.
Economists are usually ridiculed for spinning theories based on unrealistic assumptions about the world, but in this case, it is the economists (us) who are trying to be realistic. The argument for principal reduction depends on superhuman levels of foresight among lenders as well as honest behavior by the borrowers who are not in need of assistance. Thus far, the minimal success of broad-based modification programs like HAMP should make us think twice about the validity of these assumptions. There are likely good reasons for the lack of principal reduction efforts on the part of lenders thus far in this crisis that are related to the above discussion, so the claim that such efforts constitute a win-win solution should, at the very least, be met with a healthy dose of skepticism by policymakers.
Senior economist and policy adviser at the Boston Fed
Research economist and assistant policy adviser at the Federal Reserve Bank of Atlanta
Research economist and policy adviser at the Boston Fed
December 21, 2010
Revisiting real estate revisionism: Concessionary mortgage modifications during the Depression
During the current foreclosure crisis, lenders have seemed far more willing to foreclose on delinquent borrowers than to offer them loan modifications. Some commentators have argued that this was not always the case. They claim that loan modifications are infrequent today because so many loans have been securitized, and thus are not owned by any one person or firm. They also say that the modern securitization process reduces loan modifications because securitization separates the entity that makes the modification decision—that is, the mortgage servicer—from the entities that gain the most if a foreclosure is avoided—that is, the mortgage investors. As we pointed out in our last post, Yale economist John Geanakoplos and Boston University law professor Susan Koniak argued in a March 2008 New York Times op-ed that the uncomplicated relationship between banks and borrowers in the good old days allowed the banks to work out modifications when their borrowers ran into trouble.
The Congressional Oversight Panel, created by Congress in October 2008 to "review the current state of financial markets and the regulatory system," expressed a similar belief in a March 2009 report on the state of the U.S. housing market:
For decades, lenders in this circumstance [that is, with troubled borrowers] could negotiate with can-pay borrowers to maximize the value of the loan for the lender (100 percent of the market value) and for the homeowner (a sustainable mortgage that lets the family stay in the home). Because the lender held the mortgage and bore all the loss if the family couldn't pay, it had every incentive to work something out if a repayment was possible.
Even in the good old days, lenders reluctant to restructure
Such claims, however, have usually been made with little or no reference to supporting research. Fortunately, a recent paper by Andra Ghent of Baruch College exploits a new data set to shed considerable light on this topic. Her findings argue against the idea that lender reluctance to modify is a recent phenomenon.
Ghent uses a data set from the National Bureau of Economic Research (NBER) that covers mortgages from 1920 to 1939, a period that encompasses the massive housing turmoil of the Great Depression. The data set consists of "mortgage experience cards," which the NBER collected in the 1940s from mortgage lenders in the New York metropolitan area. On the cards are the answers to short questionnaires about the characteristics of individual mortgage loans (see page 5 of Ghent's paper for an example). The cards also contain explicit information about any loan modifications, including the date of the modification and whether it was principal reduction, interest-rate reduction, change to the amortization schedule, or something else. The cards include loans from three types of mortgage lenders: life insurance companies, savings and loans, and commercial banks.1
Ghent finds few modifications in these cards, and the few she finds were not particularly generous. Using a fairly conservative definition of what constitutes a concessionary modification, Ghent finds that approximately 5 percent of loans originated between 1920 and 1939 were modified, while 14 percent were terminated by foreclosure or a deed-in-lieu of foreclosure (the latter occurs when the owner surrenders the house to the lender without going through the foreclosure process). Of the loans that received a concessionary modification, about 40 percent received an interest rate reduction, which Ghent defines as an interest rate cut of at least 25 basis points (relative to origination) resulting in a new rate that is at least two standard deviations below the average interest rate on newly originated loans. On average, the reduced rate was only 78 basis points below the prevailing rate on new originations, suggesting that the interest rate cuts were not particularly generous.
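Ghent's two-part definition of a concessionary rate reduction can be written down directly. The sketch below encodes the criteria as described in this post; the rates in the example are hypothetical, not from her data:

```python
import statistics

# Ghent's definition, as summarized above: a concessionary rate modification
# (1) cuts the rate by at least 25 basis points relative to origination, AND
# (2) leaves the new rate at least two standard deviations below the average
# rate on newly originated loans. Rates are in percent; data are hypothetical.
def is_concessionary(orig_rate, new_rate, new_origination_rates):
    mean = statistics.mean(new_origination_rates)
    sd = statistics.stdev(new_origination_rates)
    cut_enough = (orig_rate - new_rate) >= 0.25
    well_below_market = new_rate <= mean - 2 * sd
    return cut_enough and well_below_market

market = [5.9, 6.0, 6.1, 6.0, 5.95, 6.05]  # hypothetical new-loan rates
print(is_concessionary(6.0, 5.0, market))   # 100 bp cut, far below market -> True
print(is_concessionary(6.0, 5.9, market))   # only a 10 bp cut -> False
```

Requiring the new rate to sit well below the market rate, not just below the loan's original rate, is what makes the definition conservative: a small trim to an above-market rate does not count.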
Another 40 percent of the modified loans received extensions of their amortization schedules, which would have likely decreased the required mortgage payments. However, Ghent points out that most of these extended amortizations occurred before 1930. In the period 1930–32, when house prices fell and unemployment rose the most, this type of modification was rare.
Principal balance reductions—and increases
Ghent also finds that less than 2 percent of all loans received principal balance increases. She argues that such increases may correspond to instances of forbearance. Forbearance occurs when a lender reduces the required mortgage payment for a short period; at the end of the period, the lender adds the arrears back to the loan balance. We have a minor quibble on this point: today, forbearance is not considered a permanent concessionary modification, because the lender does not ultimately write down any debt.
What about principal reductions? Perhaps the most surprising finding is that the data set shows no instances of principal reduction in the New York City metropolitan area and only a handful of instances in a broader sample that includes the entire states of Connecticut, New Jersey, and New York over a similar period. To us, this low number of principal reductions is compelling evidence that even Depression-era lenders were averse to renegotiating with troubled borrowers, just as lenders are today.
Balloon mortgages sank some borrowers
Another interesting finding concerns the refinancing decisions of lenders. Short-term balloon mortgages were more common in the 1920s and 1930s than they are today, and various scholars have linked the high foreclosure rate of the Depression to the unwillingness of lenders to refinance these mortgages when they came due. In fact, lender reluctance to refinance maturing mortgages is often used to explain the existence of the Home Owners' Loan Corporation (HOLC), a government organization set up in the early 1930s to refinance troubled mortgages. Ghent revisits this hypothesis with her data, measuring the frequency at which short-term balloon mortgages ended in foreclosure. She finds that balloon mortgages that were about to expire did indeed experience increased rates of foreclosure (see Ghent's table 4). However, this relationship only exists during the years when HOLC was purchasing a great many loans (1933–35). In other years, balloon mortgages were no more likely to end in foreclosure than other loans.
To us, this finding suggests a "HOLC effect." While HOLC was actively buying loans, private lenders may have refused to roll them over so that the borrowers would qualify for a HOLC refinance. If they did, then the lenders would be paid close to par for the loans by the government (see our previous post about the generosity of the HOLC program). In particular, the lenders received what were effectively government bonds in return for their mortgage. While these bonds carried lower interest rates, they carried vastly less credit risk as well.
To explain her findings, Ghent points to information problems between borrowers and lenders. In particular, lenders may not have known which borrowers were likely to truly need modifications, nor did they know with certainty which borrowers were likely to re-default if a modification were offered. Note that these information problems must have been quite severe. The national unemployment rate hit 10.8 percent in November 1930 and stayed in double digits for more than a decade. In this environment, a borrower asking for a modification was quite likely to really need one. The fact that lenders made few modifications suggests some strong intrinsic hurdles to renegotiation when information between borrowers and lenders is less than perfect.
Old problems, new analysis
The crucial policy question is what the Depression-era reluctance of lenders to renegotiate teaches us about today's foreclosure crisis. Ghent surmises that the information problems are less of an issue in the current environment, but we disagree. Even with better data and screening technology, today's lenders face significant information problems when deciding on modifications. Moreover, Ghent's paper is also informative on the role of securitization in reducing modifications. Even when individual lenders owned entire loans, modifications were rare.
All told, Ghent's paper is full of solid analysis on a topical subject. And while she doesn't quite go this far, we believe that her findings not only confirm the importance of information problems but may also bury the notion that securitization is the primary obstacle to renegotiation in the current foreclosure crisis.
Research economist and assistant policy adviser at the Federal Reserve Bank of Atlanta
Chris Foote, senior economist and policy adviser at the Federal Reserve Bank of Boston
1 Ghent argues that the data set probably provides a representative sample of loans held by life insurance companies and commercial banks in the 1920s and 1930s, but is less likely to be representative of loans held by savings and loans due to a survivorship bias. Unlike life insurance companies and commercial banks, savings and loans were not able to reliably report data on their inactive loans at the time of the survey.