About


The Atlanta Fed's macroblog provides commentary and analysis on economic topics including monetary policy, macroeconomic developments, inflation, labor economics, and financial issues.

Authors for macroblog are Dave Altig, John Robertson, and other Atlanta Fed economists and researchers.


November 21, 2019

Private and Central Bank Digital Currencies

The Atlanta Fed recently hosted a workshop, "Financial System of the Future," which was cosponsored by the Center for the Economic Analysis of Risk at Georgia State University. This macroblog post discusses the workshop's discussion of digital currency, including Bitcoin, Libra, and central bank digital currency (CBDC). A companion Notes from the Vault post provides some highlights from the rest of the workshop.

Bitcoin has sparked considerable interest in cryptocurrencies since its introduction in the 2008 paper "Bitcoin: A Peer-to-Peer Electronic Cash System" by Satoshi Nakamoto. However, for all its success, Bitcoin is not close to becoming a widely accepted electronic cash system. Why it has yet to achieve its original goals is the topic of a paper by New York University professors Franz Hinzen and Kose John, along with McGill University professor Fahad Saleh, titled "Bitcoin's Fatal Flaw: The Limited Adoption Problem."

Their paper suggests that the inability of Bitcoin to achieve wider adoption is the result of the interaction of three features: the need for agreement on ledger contents (in blockchain terminology, "consensus"), free entry for creating new blocks (permissionless or decentralized), and an artificial supply constraint. The supply constraint means that an increase in demand leads to higher Bitcoin prices. Such a valuation increase expands the network seeking to create new blocks (that is, increases the number of Bitcoin "miners"). But an increase in the network size slows the consensus process as it takes time for newly created blocks to reach all of the miners across the internet. The end result is an increase in the time needed to make a payment, reducing the value of Bitcoin as a means of payment—a significant consideration, obviously, for any type of currency.

As an alternative to the Bitcoin consensus protocol, they suggest a public, permissioned blockchain that produces faster transactions because it limits who can create new blocks. In their system, new blocks would be selected by a vote weighted by the amount of the blockchain's cryptocurrency held by validators (in other words, approved block creators). If validators approved malicious blocks, they would erode the value of their own cryptocurrency holdings, which gives them an incentive to behave honestly.
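The stake-weighted voting idea can be sketched in a few lines of code. This is a minimal illustration of the mechanism described above, not the paper's actual protocol: the validator names and the two-thirds approval threshold are assumptions chosen for the example.

```python
def block_approved(votes, stakes, threshold=2 / 3):
    """Approve a block if validators holding enough stake vote for it.

    votes:  dict mapping validator id -> True/False (approve or reject)
    stakes: dict mapping validator id -> cryptocurrency holdings (vote weight)
    threshold: fraction of total stake that must approve (assumed 2/3 here)
    """
    total_stake = sum(stakes.values())
    approving_stake = sum(stakes[v] for v, vote in votes.items() if vote)
    return approving_stake / total_stake >= threshold


# Hypothetical validators: "a" holds half the stake, so no block passes
# the 2/3 threshold without broad agreement.
stakes = {"a": 50, "b": 30, "c": 20}
print(block_approved({"a": True, "b": True, "c": False}, stakes))   # 80% of stake approves -> True
print(block_approved({"a": True, "b": False, "c": False}, stakes))  # 50% of stake approves -> False
```

Because a validator's voting weight is its own cryptocurrency holdings, approving a block that undermines the chain directly devalues the stake that gave the validator its vote.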

Federal Reserve Bank of Atlanta visiting economist Warren Weber presented some work with me on Libra, the new digital coin proposed by Facebook. Weber began by pointing to another problem with using Bitcoin in payments: the cryptocurrency's volatile value. Libra proposes to solve this problem by holding a portfolio of assets denominated in sovereign currencies, such as the U.S. dollar, that will provide one-for-one backing of the value of Libra. This approach is similar to that taken by some other "stablecoins," except that Libra proposes to be stable relative to an index of several currencies, whereas other stablecoins are designed to be stable with respect to a single sovereign currency.

Drawing on his background in economic history, Weber observed that introducing a new private currency is hard but not impossible. For example, he pointed to the Stockholm Bank notes issued in Sweden in the 1660s. These notes worked because they were more convenient than the alternatives used in that country. The fact that existing U.S. payments systems are heavily bank-based might similarly afford an advantage to Libra.

Although no one is certain of the public's interest in using Libra, policymakers around the world have taken considerable interest in the potential implications of Libra for monetary policy and financial regulation. Could Libra significantly reduce the use of the domestic sovereign currencies in some countries, thus reducing the effectiveness of monetary policy? How might financial institutions providing Libra-based services be regulated?

One of the other possible policy responses to Libra is central banks' introduction of digital currency. Economists Itai Agur, Anil Ari, and Giovanni Dell'Ariccia from the International Monetary Fund consider some of the issues in developing a CBDC in their paper "Designing Central Bank Digital Currencies." They start by observing some important differences between cash and bank deposits. Cash is completely anonymous in that it reveals nothing about the identity of the payer. However, lost or stolen cash can't be recovered, so it lacks security. Deposits have the opposite properties—they are not anonymous, but there is a mechanism to recover lost or stolen funds.

The paper develops a model in which CBDC can be designed to operate at multiple points on a continuum between deposits and cash. The key concern from a public policy perspective is that the more a CBDC operates like bank deposits, the more it will depress bank credit and output. However, if the CBDC operates too much like paper currency, it could supplant paper currency and eliminate a payments method that some individuals prefer. The paper proposes that CBDC be designed to look more like currency, to minimize the extent to which it replaces bank deposits. The problem then becomes how to keep CBDC from reducing the use of cash to the point where cash is no longer viable. (For example, merchants could stop accepting cash if the few transactions that use it do not justify the costs of accepting it.) The paper proposes to keep CBDC from being too attractive relative to cash by applying a negative interest rate to the CBDC. The result would be that those who most highly value CBDC will use it, but the negative rate will likely deter enough people that cash remains a viable payments mechanism.
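The effect of a negative rate on a CBDC balance is simple compound-interest arithmetic. The sketch below is illustrative only; the minus 1 percent rate is an assumed figure for the example, not a rate proposed in the paper.

```python
def balance_after(initial, annual_rate, years):
    """Value of a balance after compounding at annual_rate for `years` years.

    A negative annual_rate models the paper's proposed "tax" on holding CBDC:
    the balance shrinks each year, making CBDC less attractive than cash for
    holders who only weakly prefer it.
    """
    return initial * (1 + annual_rate) ** years


# A 100-unit CBDC balance under an assumed -1% annual rate:
print(round(balance_after(100.0, -0.01, 1), 2))  # 99.0 after one year
print(round(balance_after(100.0, -0.01, 5), 2))  # 95.1 after five years
```

Even a small negative rate compounds into a visible cost over time, which is the lever the paper uses to sort households between CBDC and cash.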

January 4, 2018

Financial Regulation: Fit for New Technologies?

In a recent interview, the computer scientist Andrew Ng said, "Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don't think AI [artificial intelligence] will transform in the next several years." Whether AI effects such widespread change so soon remains to be seen, but the financial services industry is clearly in the early stages of being transformed—with implications not only for market participants but also for financial supervision.

Some of the implications of this transformation were discussed in a panel at a recent workshop titled "Financial Regulation: Fit for the Future?" The event was hosted by the Atlanta Fed and cosponsored by the Center for the Economic Analysis of Risk at Georgia State University (you can see more on the workshop here and here). The presentations included an overview of some of AI's implications for financial supervision and regulation, a discussion of some AI-related issues from a supervisory perspective, and some discussion of the application of AI to loan evaluation.

As a part of the panel titled "Financial Regulation: Fit for New Technologies?," I gave a presentation based on a paper I wrote that explains AI and discusses some of its implications for bank supervision and regulation. In the paper, I point out that very good pattern recognition is one of AI's major strengths. The ability to recognize patterns has a variety of applications, including credit risk measurement, fraud detection, investment decisions and order execution, and regulatory compliance.

Conversely, I observed that machine learning (ML), the more popular part of AI, has some important weaknesses. In particular, ML can be considered a form of statistics and thus suffers from the same limitations as statistics. For example, ML can provide information only about phenomena already present in the data. Another limitation is that although machine learning can identify correlations in the data, it cannot prove the existence of causality.

This combination of strengths and weaknesses implies that ML might provide new insights about the working of the financial system to supervisors, who can use other information to evaluate these insights. However, ML's inability to attribute causality suggests that machine learning cannot be naively applied to the writing of binding regulations.

John O'Keefe from the Federal Deposit Insurance Corporation (FDIC) focused on some particular challenges and opportunities raised by AI for banking supervision. Among the challenges O'Keefe discussed is how supervisors should give guidance on and evaluate the application of ML models by banks, given the speed of developments in this area.

On the other hand, O'Keefe observed that ML could assist supervisors in performing certain tasks, such as off-site identification of insider abuse and bank fraud, a topic he explores in a paper with Chiwon Yom, also at the FDIC. The paper explores two ML techniques: neural networks and Benford's Digit Analysis. The premise underlying Benford's Digit Analysis is that digits in naturally occurring financial data follow predictable frequency distributions, so numbers produced by nonrandom selection may deviate from those expected frequencies. Thus, if a bank is committing fraud, the accounting numbers it reports may differ significantly from what would otherwise be expected. Their preliminary analysis found that Benford's Digit Analysis could help bank supervisors identify fraudulent banks.
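A minimal version of this screening idea can be coded directly from Benford's law, under which a leading digit d occurs with probability log10(1 + 1/d). The sketch below compares observed leading-digit frequencies against that distribution with a chi-square statistic; this is a common choice for such screens, and the paper's exact test may differ.

```python
import math
from collections import Counter

# Benford's law: expected frequency of each leading digit 1-9.
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}


def leading_digit(x):
    """First significant digit of a positive number (via scientific notation)."""
    return int(f"{abs(x):e}"[0])


def benford_chi_square(amounts):
    """Chi-square distance between observed leading digits and Benford's law.

    Larger values mean the reported figures look less like naturally
    occurring data, flagging them for closer review.
    """
    n = len(amounts)
    counts = Counter(leading_digit(a) for a in amounts)
    return sum(
        (counts.get(d, 0) - n * p) ** 2 / (n * p) for d, p in BENFORD.items()
    )


fabricated = [500 + i for i in range(49)]     # every amount leads with 5
conforming = [2 ** k for k in range(1, 50)]   # powers of 2 roughly follow Benford
print(benford_chi_square(fabricated) > benford_chi_square(conforming))  # True
```

The fabricated sample, with every figure starting in 5, produces a far larger statistic than the Benford-conforming one, which is the kind of deviation a supervisor's screen would flag.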

Financial firms have been increasingly employing ML in their business areas, including consumer lending, according to the third participant in the panel, Julapa Jagtiani from the Philadelphia Fed. One consequence of this use of ML is that it has allowed both traditional banks and nonbank fintech firms to become important providers of loans to both consumers and small businesses in markets in which they do not have a physical presence.

Potentially, ML can also measure a borrower's credit risk more effectively than a consumer credit rating (such as a FICO score) alone allows. In a paper with Catharine Lemieux from the Chicago Fed, Jagtiani explores the credit ratings produced by the Lending Club, an online lender that has become the largest lender for personal unsecured installment loans in the United States. They find that the correlation between FICO scores and Lending Club rating grades has steadily declined from around 80 percent in 2007 to a little over 35 percent in 2015.

It appears that the Lending Club is increasingly taking advantage of alternative data sources and ML algorithms to evaluate credit risk. As a result, the Lending Club can price a loan's risk more accurately than a simple FICO score-based model would allow. Taken together, the presentations made clear that AI is likely to transform many aspects of the financial sector as well.

January 3, 2018

Is Macroprudential Supervision Ready for the Future?

Virtually everyone agrees that systemic financial crises are bad not only for the financial system but even more importantly for the real economy. Where the disagreements arise is how best to reduce the risk and costliness of future crises. One important area of disagreement is whether macroprudential supervision alone is sufficient to maintain financial stability or whether monetary policy should also play an important role.

In an earlier Notes from the Vault post, I discussed some of the reasons why many monetary policymakers would rather not take on the added responsibility. For example, policymakers would have to determine the appropriate measure of the risk of financial instability and how a change in monetary policy would affect that risk. However, I also noted that many of the same problems also plague the implementation of macroprudential policies.

Since that September 2014 post, additional work has been done on macroprudential supervision. Some of that work was the topic of a recent workshop, "Financial Regulation: Fit for the Future?," hosted by the Atlanta Fed and cosponsored by the Center for the Economic Analysis of Risk at Georgia State University. In particular, the workshop looked at three important issues related to macroprudential supervision: governance of macroprudential tools, measures of when to deploy macroprudential tools, and the effectiveness of macroprudential supervision. This macroblog post discusses some of the contributions of three presentations at the conference.

The question of how to determine when to deploy a macroprudential tool is the subject of a paper by economists Scott Brave (from the Chicago Fed) and José A. Lopez (from the San Francisco Fed). The tool they consider is countercyclical capital buffers, which are supplements to normal capital requirements that are put into place during boom periods to dampen excessive credit growth and provide banks with larger buffers to absorb losses during a downturn.

Brave and Lopez start with existing financial conditions indices and use them to estimate the probability that the economy will transition from growth to falling gross domestic product (GDP), and vice versa. Their model predicted a very high probability of transitioning to a path of falling GDP in the fourth quarter of 2007, a low probability of such a transition in the fourth quarter of 2011, and a low but slightly higher probability in the fourth quarter of 2015.

Brave and Lopez then put these probabilities into a model of the costs and benefits associated with countercyclical capital buffers. Looking back at the fourth quarter of 2007, their results suggest that supervisors should immediately adopt an increase in capital requirements of 25 basis points. In contrast, in the fourth quarters of both 2011 and 2015, their results indicated that no immediate change was needed but that an increase in capital requirements of 25 basis points might need to be adopted within the next six or seven quarters.

The related question—who should determine when to deploy countercyclical capital buffers—was the subject of a paper by Nellie Liang, an economist at the Brookings Institution and former head of the Federal Reserve Board's Division of Financial Stability, and Federal Reserve Board economist Rochelle M. Edge. They find that most countries have a financial stability committee, which on average has four or more members and is primarily responsible for developing macroprudential policies. Moreover, these committees rarely have the ability to adopt countercyclical macroprudential policies on their own. Indeed, in most cases, all the financial stability committee can do is recommend policies. The committee cannot even compel the competent regulatory authority in its country to either take action or explain why it chose not to act.

Implicit in the two aforementioned papers is the belief that countercyclical macroprudential tools will effectively reduce risks. Federal Reserve Board economist Matteo Crosignani presented a paper he coauthored examining the recent effectiveness of two such tools in Ireland.

In February 2015, the Irish government watched as housing prices climbed from their postcrisis lows at a potentially unsafe rate. In an attempt to limit the flow of funds into risky mortgage loans, the government imposed limits on the maximum permissible loan-to-value (LTV) ratio and loan-to-income ratio (LTI) for new mortgages. These regulations became effective immediately upon their announcement and prevented the Irish banks from making loans that violated either the LTV or LTI requirements.
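The screening the Irish rules impose on a new mortgage can be sketched as two simple ratio checks. The caps below (80 percent LTV, 3.5 times income LTI) are assumptions for illustration; the actual 2015 regulations set different limits for different borrower categories.

```python
def mortgage_conforms(loan, property_value, gross_income,
                      max_ltv=0.80, max_lti=3.5):
    """Check a new mortgage against assumed LTV and LTI caps.

    LTV = loan / property value; LTI = loan / gross annual income.
    A loan conforms only if it passes BOTH limits, as under the
    Irish rules described above.
    """
    ltv = loan / property_value
    lti = loan / gross_income
    return ltv <= max_ltv and lti <= max_lti


# Hypothetical borrower: EUR 70,000 income, EUR 300,000 property.
print(mortgage_conforms(240_000, 300_000, 70_000))  # LTV 0.80, LTI ~3.43 -> True
print(mortgage_conforms(280_000, 300_000, 70_000))  # LTV ~0.93 breaches the cap -> False
```

Because the rules bind loan by loan at origination, banks cannot make nonconforming mortgages at all, which is why the paper looks for the displaced risk appearing elsewhere on bank balance sheets.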

Crosignani and his coauthors measured a large decline in loans that did not conform to the new requirements. However, they also find that a sharp increase in mortgage loans that conformed to the requirements largely offset this drop. Additionally, Crosignani and his coauthors find that the banks most exposed to the LTV and LTI requirements sought to recoup the lost income by making riskier commercial loans and buying greater quantities of risky securities. Their findings suggest that the regulations may have stopped higher-risk mortgage lending but that other changes in the banks' portfolios at least partially undid the effect on their risk exposure.

May 11, 2017

Are Small Loans Hard to Find? Evidence from the Federal Reserve Banks' Small Business Survey

The Federal Reserve Banks recently released results from the nationwide 2016 Small Business Survey, which asks firms with 500 or fewer employees about business and financing conditions. One key finding is just how small the financing needs of many businesses are. One-fifth of small businesses that applied for financing in the prior 12 months were seeking $25,000 or less. A further 35 percent were seeking between $25,001 and $100,000.

The data also show that firms seeking relatively small amounts of financing (up to $100,000) received a significantly smaller fraction of the funding they sought than firms that applied for more than $250,000. Chart 1 shows the weighted average share of financing received, by the amount the firm was seeking.

So what explains this variation in financing attainment across the amount requested? We've heard reports from small business owners that smaller loans are relatively more difficult to obtain, especially from traditional banks. One often-cited rationale is that the administrative burden of originating and managing a small loan is simply not worth the bank's time. However, this notion is not entirely consistent with data on the current holdings of small business loans on the balance sheets of banks. As of June 2015, loans of less than $100,000 made up about 92 percent of the number of business loans under $1 million.

It seems, then, that originating a loan for less than $100,000 is not uncommon for a bank after all. Why, then, do business owners say that smaller loans are more difficult to get? Using data from the 2016 Small Business Survey, we can investigate this apparent disconnect.

Much can be explained by looking at the characteristics of those who borrow small amounts versus large amounts. Firms seeking $25,000 or less are more likely to be high credit risk and younger, have fewer employees, and have smaller revenues than firms applying for more than $250,000. The table below summarizes the differences:

Of particular importance is the credit risk associated with the firm. Controlling for differences in this factor, it turns out that smaller amounts of financing are not more difficult to obtain. Charts 2 and 3 show the weighted average share of financing received by amount sought for low credit risk firms and for middle to high credit risk firms separately.

As charts 2 and 3 demonstrate, low credit risk firms are able to obtain a similar share of the amount requested, regardless of how much they applied for. The same is true for higher risk firms. We also see that medium and high risk firms get less of their financing needs met than low credit risk firms that apply for similar amounts.

From this evidence, it seems that credit approval has more to do with the attributes of the firm than the amount of financing for which the firm applied. These results also highlight the potential importance of alternatives to traditional bank financing so that riskier entrepreneurs—including important contributors to the dynamism of the economy such as startups—have somewhere to turn. A later macroblog post will explore how low and high credit risk firms use financing differently, including where they apply and where they receive funding.