Notes from the Vault

Larry D. Wall
January 2014

The Volcker Rule went from a three-page memorandum by Paul Volcker to 11 pages in Section 619 of the Dodd-Frank Act, and then to over 900 pages of regulation and commentary in 2013. This Notes from the Vault uses the concept of the regulatory dialectic to explain why regulations that implement seemingly simple concepts eventually become so long and complicated. From this, I draw some inferences on the extent to which we can rely on regulation to solve the problem of excessive risk taking in the financial system.

Bright lines for fuzzy concepts
Short, effective regulations may be possible when they pursue a single unambiguous goal and compliance is easily measured—for example, a regulation that prohibits a bank from acquiring control of a manufacturing firm (outside of satisfaction of its claim in bankruptcy proceedings). However, short, effective regulation becomes more difficult as regulators seek to balance potentially conflicting goals whose attainment is not easily measured. The difficulty of writing a simple regulation is illustrated by the Volcker Rule as adopted in Dodd-Frank: the act bans "proprietary trading" but permits banks to engage in market making. Not only are these goals potentially in conflict, but the line between them is often fuzzy in practice.

Although the Volcker Rule is the most recent example of a lengthy regulation, the history of capital regulation in the United States since the early 1980s offers an even better illustration of how a seemingly simple concept evolves into a complex set of regulations.

Capital regulations seek, on the one hand, to reduce the risk of bank failure and safety net losses in the event of failure. On the other hand, regulators do not want capital rules to raise costs by so much that banks are no longer competitive in providing a full range of traditional banking services, including making loans to borrowers who are creditworthy but not entirely default risk-free. Moreover, supervisors must strike this balance using unavoidably flawed measures of bank-level risk.1

The combination of conflicting goals and measurement difficulty might suggest that flexible bank supervision would be more effective than rigid rules with predefined measures and minimum standards. Supervisors could adjust the standards over time in response to new information and to better measures of banks' risk exposures, along the lines of the dog-catching-a-Frisbee analogy offered by the Bank of England's Andrew Haldane. However, the modern era of bank capital regulation in the United States began in December 1981, when the bank regulatory agencies determined that such flexible capital supervision had not proven effective and adopted numeric capital guidelines.

In the period immediately prior to the 1981 regulations, the bank regulatory agencies had supervised bank capital levels but had not set minimum capital levels in regulation (see this FDIC update). Capital supervision had proven generally effective in preventing individual banks from reducing their capital ratios below those of their peers, but not in preventing a drop in overall capitalization levels among the largest banks. The supervisors decided that if this trend were to be reversed, banks needed to be given specific targets for capital adequacy. The approach taken in 1981 was the adoption of a regulation that specified minimum leverage ratios, with total assets serving as the measure of risk exposure.

As banks and the regulators gained experience with the 1981 standards, problems emerged. One problem was that banks could meet the 1981 standards by reducing their holdings of low-risk assets such as government securities. Another was that the numeric regulations required U.S. banking groups to maintain levels of accounting capital above those of some of their international competitors (especially Japanese banks). Thus, U.S. supervisors sought, and in 1988 obtained, an international agreement on capital adequacy from the Basel Committee on Banking Supervision: Basel I. The Basel I agreement was more risk sensitive, in that the number of risk categories was expanded from one (all assets count equally) to five.
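A stylized calculation (not from the original article; the portfolio amounts are hypothetical and the 0, 20, 50, and 100 percent weights only roughly track Basel I's buckets) may make the change in incentives concrete. Under a total-assets leverage ratio, shedding Treasuries flatters the ratio without adding capital; under risk weighting, it does not:

```python
# Stylized comparison of a 1981-style leverage ratio with a Basel I-style
# risk-weighted ratio.  All figures are hypothetical; the weights only
# roughly follow Basel I's buckets.

def leverage_ratio(capital, assets):
    """Capital divided by total (unweighted) assets, as in the 1981 standards."""
    return capital / sum(assets.values())

def risk_weighted_ratio(capital, assets, weights):
    """Capital divided by risk-weighted assets, as under Basel I."""
    rwa = sum(amount * weights[name] for name, amount in assets.items())
    return capital / rwa

WEIGHTS = {"treasuries": 0.00, "interbank": 0.20,
           "mortgages": 0.50, "corporate_loans": 1.00}
CAPITAL = 6.0

balanced = {"treasuries": 40.0, "interbank": 10.0,
            "mortgages": 20.0, "corporate_loans": 30.0}
# Same book after selling most of the Treasuries: total assets shrink, so the
# leverage ratio rises, but the credit risk of the remaining book does not fall.
shed_treasuries = {"treasuries": 10.0, "interbank": 10.0,
                   "mortgages": 20.0, "corporate_loans": 30.0}

for label, book in [("balanced book", balanced), ("Treasuries sold", shed_treasuries)]:
    print(f"{label}: leverage {leverage_ratio(CAPITAL, book):.1%}, "
          f"risk-weighted {risk_weighted_ratio(CAPITAL, book, WEIGHTS):.1%}")
```

In this sketch the leverage ratio improves from 6.0 percent to 8.6 percent simply by selling Treasuries, while the risk-weighted ratio stays at about 14.3 percent—the incentive problem the 1981 standards created and Basel I's risk buckets were meant to remove.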

Although Basel I was more risk sensitive than the 1981 standards, it nevertheless was not very sensitive to differences in credit risk, as all loans to private parties carried the same risk weight. This led some banks to complain that they were no longer competitive in the high-grade corporate loan market. Moreover, banks were engaging in various forms of "regulatory capital arbitrage" to lower their required amounts of capital, according to Federal Reserve economist David Jones (2000). For example, he notes that banks were "cherry picking" their best loans for sale in the securitization market while retaining their lower-quality loans, with the net effect that loan books were riskier than was contemplated by Basel I.

The concerns about Basel I led to the 2004 international adoption of the Basel II capital accords, which provided for far more sophisticated measurement of risk based on banks' internal risk models. However, quantitative evaluations of Basel II suggested that its adoption would significantly reduce banks' capital requirements. In large part because of these findings, the U.S. commercial bank regulators did not adopt Basel II. However, Basel II was widely adopted by other developed countries and by the U.S. Securities and Exchange Commission for those investment banks subject to its consolidated supervision.
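To give a sense of what "far more sophisticated" means in practice, the sketch below implements a simplified version of Basel II's internal ratings-based (IRB) risk-weight formula for corporate exposures. The functional form and constants follow the published accord, but the probability-of-default and loss-given-default inputs are illustrative, and many pieces of the framework (retail and SME treatments, floors, expected-loss deductions) are omitted, so treat it as a sketch rather than a definitive implementation:

```python
# Simplified Basel II IRB risk-weight formula for corporate exposures.
# Contrast with Basel I, which put every one of these loans in a flat
# 100 percent bucket regardless of the borrower's creditworthiness.
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def irb_risk_weight(pd, lgd, maturity=2.5):
    """Risk weight per dollar of exposure for a corporate loan."""
    # Supervisory asset correlation declines as the probability of default rises.
    decay = (1 - exp(-50 * pd)) / (1 - exp(-50))
    r = 0.12 * decay + 0.24 * (1 - decay)
    # Maturity adjustment.
    b = (0.11852 - 0.05478 * log(pd)) ** 2
    # Capital requirement K per unit of exposure (unexpected loss only).
    k = (lgd * N.cdf((N.inv_cdf(pd) + sqrt(r) * N.inv_cdf(0.999)) / sqrt(1 - r))
         - pd * lgd)
    k *= (1 + (maturity - 2.5) * b) / (1 - 1.5 * b)
    return 12.5 * k  # convert the capital charge into a risk weight

# Illustrative borrowers, from near-riskless to speculative grade.
for pd in (0.0003, 0.01, 0.05):
    print(f"PD {pd:.2%}: risk weight {irb_risk_weight(pd, lgd=0.45):.0%}")
```

With an assumed 45 percent loss given default and 2.5-year maturity, the formula produces risk weights of roughly 14 percent, 92 percent, and 150 percent for default probabilities of 0.03 percent, 1 percent, and 5 percent, so required capital now depends on a bank's own estimated inputs rather than on a handful of fixed buckets.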

The subsequent financial crisis revealed a number of flaws in Basel II, which the Basel Committee has sought to remedy with a set of amendments called Basel III. These changes generally raise capital requirements for banks and reduce the extent to which some risks are underweighted. In part because of these changes, the United States is currently in the process of implementing Basel III (see the Federal Reserve Board's web page for the Basel Regulatory Framework).

But even these capital regulations are recognized to be insufficient to achieve all of the regulatory objectives. Thus, U.S. bank regulators have also adopted a second complex approach to measuring capital adequacy: stress testing (see my previous Notes from the Vault). Stress testing was initially adopted during the crisis as a way of addressing weaknesses in the Basel approach to measuring capital. Since then, it has become embedded under the Dodd-Frank Act as a standing tool for measuring capital adequacy.

The end result of over 30 years of formal capital regulation is a set of rules that almost makes the Volcker Rule look short and concise. Focusing solely on the increased complexity of the Basel regulations, Andrew Haldane notes that Basel I required only 30 pages of regulation whereas Basel III requires 616 pages. He estimates that Basel III requires the estimation of over 200,000 risk weights.

Some lessons from capital regulations
The modern experience with capital regulation suggests three lessons on regulatory complexity. First, clearly stated rules are a requirement for obtaining consistency across firms and over time. The current capital regulation cycle was started because supervisors were finding it difficult to hold the line, let alone raise the bar, on minimum capital requirements.

Second, banks will respond to measurement errors in ways that are likely to reduce the effectiveness of the regulation. Banks responded to the lack of risk weighting in the 1981 standards by reducing their holdings of low-risk assets. They responded to the simplistic weighting scheme in Basel I by continuing to shed assets, such as high-grade corporate loans, whose risk weights overstated their actual risk. Both Basel I and Basel II assigned low risk weights to sovereign debt issued by developed countries, which encouraged banks to hold larger proportions of what in some cases turned out to be rather risky sovereign exposures.

Third, over time banks will find and exploit opportunities to reduce the risk measured by the regulation. Some banks responded to Basel I by securitizing their better assets. Some banks responded to Basel I and Basel II by shifting assets from their banking books to their trading books, where some types of assets effectively carried lower risk weights. Many large consulting firms now promote "risk-weighted assets (RWA) optimization" services, which help banks reduce their capital requirements both by developing methods that assign lower risk weights to assets and by restructuring asset portfolios to reduce risk weightings (for example, see here, here, and here).
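The arithmetic behind such optimization is straightforward. In the hypothetical calculation below (an 8 percent minimum ratio is assumed for simplicity, and the exposure amounts are illustrative), reassigning the same exposure from a 100 percent to a 20 percent risk weight cuts the associated capital requirement by four-fifths:

```python
# Illustrative arithmetic for "RWA optimization": assigning the same
# exposures a lower risk-weight treatment reduces the capital the bank
# must hold against an unchanged book.  All figures are hypothetical.

MIN_RATIO = 0.08  # assumed minimum capital-to-RWA ratio

def required_capital(positions):
    """positions: list of (exposure, risk_weight) pairs."""
    rwa = sum(exposure * weight for exposure, weight in positions)
    return MIN_RATIO * rwa

before = [(100.0, 1.00)]   # exposure held at a 100 percent risk weight
after = [(100.0, 0.20)]    # same exposure re-modeled or restructured to 20 percent

print("required capital before:", required_capital(before))  # 8.0
print("required capital after: ", required_capital(after))   # 1.6
```

The exposure and its economic risk are unchanged; only the measured risk, and hence the required capital, has fallen.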

Regulatory dialectic
Who could have predicted the evolution from the simple capital standards of 1981 to the complex regulations in Basel III and the stress tests? It turns out that Edward Kane's (1977) paper on the regulatory dialectic not only predicts such increased complexity but also offers a prediction as to how it will all end.

Kane begins with a demand for regulation. His focus is on selective credit allocation, but the demand could as easily arise from a desire to enhance financial stability and reduce the expected cost of bank failures to the taxpayers. The government satisfies this demand with regulations that constrain banks' (or other firms') ability to create value for their shareholders (and management). For example, capital regulations force banks to substitute equity for debt, reducing both the marginal implicit subsidy the bank receives from the safety net for taking more risk and the tax advantages of debt that arise because interest payments but not dividends are tax deductible.
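A back-of-the-envelope calculation illustrates the tax piece of this cost. The interest rate, required return on equity, and tax rate below are hypothetical, and the sketch holds the required return on equity fixed, ignoring any offsetting decline in that return as leverage falls:

```python
# Illustrative after-tax funding cost of a bank financed with more or
# less equity.  Rates and the tax rate are hypothetical; the required
# return on equity is held fixed for simplicity.

TAX_RATE = 0.35
DEBT_RATE = 0.05        # pre-tax interest rate on debt
EQUITY_RETURN = 0.10    # return shareholders are assumed to require

def after_tax_funding_cost(equity_share):
    """Blended after-tax cost per dollar of funding."""
    debt_share = 1.0 - equity_share
    after_tax_debt = DEBT_RATE * (1.0 - TAX_RATE)  # interest is deductible
    return debt_share * after_tax_debt + equity_share * EQUITY_RETURN

for equity_share in (0.04, 0.08, 0.12):
    print(f"equity share {equity_share:.0%}: "
          f"funding cost {after_tax_funding_cost(equity_share):.2%}")
```

Under these assumptions, raising the equity share from 4 percent to 12 percent of the balance sheet raises the blended funding cost from about 3.5 percent to about 4.1 percent, which is the shareholder cost that creates the incentive to avoid the requirement.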

These constraints create a profit opportunity for those banks that find ways of avoiding the intent of the regulation while complying with its form. In the case of capital regulation, this comes in the form of a lower cost of funding, which allows banks that reduce their capital requirements to be more competitive and to earn higher rates of return. Given this strong motivation, bank employees and managers have an incentive to devote their relatively greater resources to finding ways around the regulation. The result is that regulations become increasingly less effective as successively better ways around them are developed.

Kane notes that those demanding effective regulation eventually notice that the regulation is becoming less effective, and they demand changes that would close the loopholes. Stricter regulation will then be forthcoming, provided those favoring the change have sufficient political clout. Yet these changes represent merely one round of a repeating process. The new regulations still constrain banks from operating in the way most beneficial to shareholders. Hence, bank employees and managers have an incentive to find new ways of complying with the new regulations in form while reducing their effectiveness in substance. As these avoidance efforts become more effective, the demand for a new round of reregulation arises.

So how does the process end? In the case of selective credit controls, Kane states, "Customarily a network of controls continues to expand unless and until the budgetary cost, social inconvenience, economic waste, and distributional inequity associated with the system become painfully obvious even to the ordinary citizen." In the case of capital regulation, one could interpret recent calls to return to greater reliance on simple leverage (capital to total assets) ratios as recognition of the limits to increasingly complex efforts to measure bank risk. However, by itself a simple leverage ratio will only invite a restart of the regulatory dialectic from a simpler starting point.

Implications
The above discussion uses the regulatory dialectic to explain why lengthy, complex rules are the almost inevitable result of any regulation that would significantly constrain bank profitability. The theory predicts that the next iteration of the Volcker Rule will be even longer as regulators try to limit banks' ability to avoid the intent of the rule. Of course, the theory leaves open the possibility that better-written regulations can slow the process, and, indeed, well-written regulation tries to anticipate the more obvious avoidance tactics. However, so long as the regulation constrains profitable activity, powerful incentives exist to find ways around it.

If there is one hopeful note in this discussion, it is that the dialectic cycle may be slowed or stopped by reducing the foregone profit associated with prudential regulation and, hence, the incentive to avoid the regulation. Two obvious possibilities in the case of capital regulation are reducing the tax benefits of debt financing and reducing the safety net subsidy to risk taking. Viewed from this perspective, more effective prudential regulation is not and cannot be a complete solution to the subsidies inherent in too-big-to-fail (TBTF) policies. Rather, credibly ending TBTF policies is essential to making prudential regulation more effective.2

Larry D. Wall is the executive director of the Center for Financial Innovation and Stability at the Atlanta Fed. The author thanks Paula Tkac for helpful comments on the paper. The views expressed here are the author's and not necessarily those of the Federal Reserve Bank of Atlanta or the Federal Reserve System. If you wish to comment on this post, please e-mail atl.nftv.mailbox@atl.frb.org.

_______________________________________

1 See David M. Rowe for an excellent summary of the difficulties in measuring risk in a bank's trading book. Further note that measuring risk in the banking book poses even greater difficulties in a variety of ways.

2 I have expressed skepticism about how close we are to ending too-big-to-fail. That skepticism is not intended to discourage efforts to end TBTF. Rather, the comments were intended to discourage premature claims of victory over what I view as a difficult-to-solve problem.

References
Jones, David. 2000. "Emerging Problems with the Basel Capital Accord: Regulatory Capital Arbitrage and Related Issues." Journal of Banking & Finance 24, no. 1: 35–58.

Kane, Edward J. 1977. "Good Intentions and Unintended Evil: The Case against Selective Credit Allocation." Journal of Money, Credit and Banking 9, no. 1: 55–69.