
About


The Atlanta Fed's macroblog provides commentary and analysis on economic topics including monetary policy, macroeconomic developments, inflation, labor economics, and financial issues.

Authors for macroblog are Dave Altig, John Robertson, and other Atlanta Fed economists and researchers.


May 28, 2020

Firms Expect Working from Home to Triple

The coronavirus and efforts to mitigate its impact are having a transformative impact on many aspects of economic life, intensifying trends like shopping online rather than visiting brick-and-mortar stores and increasing the incidence of working from home. Indeed, many tech giants have already made working from home a permanent option for employees.

Working from home, or telecommuting, is not a new phenomenon. According to a survey by the U.S. Bureau of Labor Statistics (BLS), around 8 percent of all employees worked from home at least one day a week before the arrival of COVID-19. However, only 2.5 percent worked from home full-time in the 2017–18 survey period.

Working from home has surged in the wake of social distancing and other efforts to contain the virus, and this surge brings up a good question: How many jobs can be done at home? Some careful research by Jonathan Dingel and Brent Neiman indicates that nearly 40 percent of U.S. jobs can be done at home.

While this provides an upper bound, can does not mean will, so a natural follow-up question is: How many jobs will be done at home? To get a sense of how many jobs and how many working days will be performed at home after the pandemic recedes, we turn to our Survey of Business Uncertainty (SBU). To preview our conclusion, the share of working days spent at home is expected to triple after the COVID-19 crisis ends compared to before the pandemic hit, but with considerable variation across industries.

In the May SBU, we asked two questions to gauge how firms anticipate working from home to change. To get a pre-pandemic starting point, we asked panelists, "What percentage of your full-time employees worked from home in 2019?" And to gauge how that's likely to change after the crisis ends, we asked, "What percentage of your full-time employees will work from home after the coronavirus pandemic?" We asked firms to sort the fraction of their full-time workforce into four categories, ranging from those employees working from home five full days per week to those who rarely or never work from home.

Chart 1 summarizes firms' responses to these two questions. It also summarizes the responses by workers to questions about working from home in the BLS's 2017–18 American Time Use Survey. For the period preceding COVID-19, SBU results and the Time Use Survey results are remarkably similar. Both surveys say 90 percent of employees rarely or never worked from home, and a very small fraction worked from home five full days per week. As reported in the chart's rightmost column, about 5 to 6 percent of all working days happened at home before the pandemic hit.

Chart 1: Working From Home, Pre- and Post-COVID

According to the SBU results, the anticipated share of working days at home is set to triple after the pandemic ends—rising from 5.5 percent to 16.6 percent of all working days. Perhaps even more striking, firms anticipate that 10 percent of their full-time workforce will be working from home five days a week.
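The arithmetic behind a "share of working days" figure like this can be sketched in a few lines. The category cutoffs and the days-per-week weights below are illustrative assumptions, not the SBU's actual definitions:

```python
# Illustrative sketch: converting the share of employees in each
# working-from-home category into a share of all working days spent at
# home, assuming a five-day working week. Category weights are made up.

def share_of_days_at_home(category_shares, days_per_week):
    """category_shares: fraction of full-time employees in each category.
    days_per_week: assumed work-from-home days for that category."""
    assert abs(sum(category_shares) - 1.0) < 1e-9
    home_days = sum(s * d for s, d in zip(category_shares, days_per_week))
    return home_days / 5.0  # fraction of a 5-day working week

# Hypothetical post-pandemic distribution: 10% fully remote (5 days),
# 10% at 2.5 days, 10% at 1 day, 70% rarely or never at home (0 days).
shares = [0.10, 0.10, 0.10, 0.70]
days = [5, 2.5, 1, 0]
print(round(share_of_days_at_home(shares, days), 3))  # 0.17
```

Under these made-up weights the share comes out near 17 percent, in the same ballpark as the 16.6 percent figure above, which illustrates how a modest fully remote share can move the days-based measure substantially.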

Overall, firms say that about 10 percent of their full-time employees worked from home at least one day a week in 2019. That fraction is expected to jump to nearly 30 percent after the crisis ends (well below the upper bound estimated by Dingel and Neiman). Chart 2 gives a look at firms' working-from-home expectations for major industry groups.

Chart 2: Working From Home at Least One Full Day Per Week, Pre- and Post-COVID, by Industry

The share of people working from home at least one day a week is expected to jump markedly in the construction, real estate, and mining and utilities sectors, presumably by granting front-office staff working-from-home status. It is also expected to jump markedly in health care, education, leisure and hospitality, and other services, possibly by relying more heavily on remote-delivery options (for example, online education and virtual doctor's visits). Firms in the business services sector anticipate that working from home will rise to nearly 45 percent.

For the industries we can match directly to American Time Use Survey statistics, the two data sources imply a similar incidence of working from home before COVID-19. For manufacturing, SBU data indicate that 9 percent of employees worked at home at least one day a week prior to COVID-19, and the American Time Use Survey indicates that 7.3 percent did so. For retail and wholesale trade, the corresponding figures are 4.1 percent and 4.0 percent, respectively.

To summarize, our survey indicates that, compared to before the pandemic, the share of working days spent at home by full-time workers will triple after the pandemic. Our results also say that this shift will happen across major industry sectors. These changes in the location of work are also likely to exert powerful effects on the future of cities and the demand for high-rise office space (more on that next month).

Regarding the long-run impact of the shift to working from home, there are grounds for optimism, including a potential boost to productivity—although if you're juggling kids at home and working from your couch or bedroom, we can understand if it's hard to imagine right now.

 

November 29, 2018

Cryptocurrency and Central Bank E-Money

The Atlanta Fed recently hosted a workshop, "Financial Stability Implications of New Technology," which was cosponsored by the Center for the Economic Analysis of Risk at Georgia State University. This macroblog post discusses the workshop's panel on cryptocurrency and central bank e-money. A companion Notes from the Vault post provides some highlights from the rest of the workshop.

The panel began with Douglas Elliot, a partner at Oliver Wyman, discussing some of the public policy issues associated with cryptoassets. Drawing on a recent paper he cowrote, Elliot observed that there are "at least four substantial market segments" that provide long-term support for cryptoassets:

  • libertarians and techno-anarchists who, for ideological reasons, want a currency without a government;
  • people who deeply distrust their government's economic management;
  • seekers of anonymity, who don't want their names associated with transactions and investments; and
  • technical users who find cryptoassets useful for some blockchain applications.

Besides these groups are the speculators and investors who hope to benefit from price appreciation of these assets.

Given the strong interest of these four groups, Elliot argues that cryptoassets are here to stay, but he also asserts that these assets raise public policy issues that regulation should address. Some issues, such as anti–money laundering, are being addressed, but all would benefit from a coordinated global approach. However, he observes that of the four long-term support groups, only the technical users are likely to favor such regulations.

Another paper, by University of Chicago professor Gina C. Pieters, analyzed the extent to which the cryptocurrency market is global by examining purchases of cryptocurrency with state-issued currencies. She finds that more than 90 percent of all cryptocurrency transactions occur using one of three currencies: the U.S. dollar, the South Korean won, and the Japanese yen. She further finds that the dominance of these three currencies cannot be explained by economic size, financial openness, or internet access. Pieters also observed that transactions involving bitcoin, the largest cryptocurrency by market value, do not necessarily represent a country's cryptomarket share.

Warren Weber, former Minneapolis Fed economist and a visiting scholar at the Atlanta Fed, discussed so-called "stable coins," one type of cryptocurrency. The value of many cryptocurrencies has fluctuated widely in recent years, with the price of one bitcoin soaring from under $6,000 to more than $19,000 and then plunging to just over $6,000—all within the period from October 2017 to October 2018. This extreme price volatility creates a significant impediment to Elliot's technical users who would like some method of buying blockchain services with a currency controlled by a blockchain. In an attempt to meet this demand, a number of "stable coins" have been issued or are under development.

Drawing on a preliminary paper, Weber discussed three types of stable coins. One type backs all of the currency it issues with holdings of a state-issued currency, such as the U.S. dollar. A potential weakness of these coins is that they incur operational costs that require payment. Weber observed that interest earnings might cover part of these expenses if the stable coin issuer holds the dollars in an interest-bearing asset. Additionally, charging redemption fees might offset some or all of the expense.
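Weber's point about covering costs can be put in back-of-the-envelope form. Every number below is an assumption for illustration, not a figure from his paper:

```python
# Back-of-the-envelope sketch of the reserve economics described above,
# with made-up numbers: can interest on dollar reserves plus redemption
# fees cover a fully backed stable coin's operating costs?
reserves = 100_000_000          # dollars backing coins outstanding (assumed)
interest_rate = 0.02            # yield on the reserve asset (assumed)
operating_cost = 2_500_000      # annual cost of running the issuer (assumed)
redemptions = 30_000_000        # dollars redeemed per year (assumed)
fee_rate = 0.002                # redemption fee (assumed)

income = reserves * interest_rate + redemptions * fee_rate
print(income >= operating_cost)  # do earnings cover costs under these assumptions?
```

With these particular assumptions income falls short of costs, which is why Weber frames interest earnings and redemption fees as covering "part" or "some or all" of the expense rather than guaranteeing viability.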

The other two alternatives involve the creation of cryptofinancial entities or crypto "central banks." Both of these approaches seek to adjust the quantity of the cryptocurrency outstanding to stabilize its price in another currency. However, Weber observed that both of these approaches are subject to the problem that the cryptocurrency could take on many values depending upon people's expectations. If people come to expect that a coin will lose its value, neither of these approaches can prevent the coin from becoming worthless.

The question of whether existing central banks should issue e-money was the topic of a presentation by Francisco Rivadeneyra of the Bank of Canada. Summarizing the results of his paper, Rivadeneyra observed that central banks could provide e-money that looks like a token or a more traditional account. The potential for central banks to offer widely available account-based services has long existed. However, after considering the tradeoffs, central banks have elected not to provide these accounts, and recent technological developments have not changed this calculus. However, new technologies may have changed the tradeoff for token-based systems. Many issues will need to be addressed first, though.

April 28, 2014

New Data Sources: A Conversation with Google's Hal Varian

In recent years, there has been an explosion of new data coming from places like Google, Facebook, and Twitter. Economists and central bankers have begun to realize that these data may provide valuable insights into the economy that inform and improve the decisions made by policy makers.

As chief economist at Google and emeritus professor at UC Berkeley, Hal Varian is uniquely qualified to discuss the issues surrounding these new data sources. Last week he was kind enough to take some time out of his schedule to answer a few questions about these data, the benefits of using them, and their limitations.

Mark Curtis: You've argued that new data sources from Google can improve our ability to "nowcast." Can you describe what this means and how the vast amount of data that Google collects can be used to better understand the present?
Hal Varian: The simplest definition of "nowcasting" is "contemporaneous forecasting," though I do agree with David Hendry that this definition is probably too simple. Over the past decade or so, firms have spent billions of dollars to set up real-time data warehouses that track business metrics on a daily level. These metrics could include retail sales (like Wal-Mart and Target), package delivery (UPS and FedEx), credit card expenditure (MasterCard's SpendingPulse), employment (Intuit's small business employment index), and many other economically relevant measures. We have worked primarily with Google data, because it's what we have available, but there are lots of other sources.

Curtis: The ability to "nowcast" is also crucially important to the Fed. In his December press conference, former Fed Chairman Ben Bernanke stated that the Fed may have been slow to acknowledge the crisis in part due to deficient real-time information. Do you believe that new data sources such as Google search data might be able to improve the Fed's understanding of where the economy is and where it is going?
Varian: Yes, I think that this is definitely a possibility. The real-time data sources mentioned above are a good starting point. Google data seems to be helpful in getting real-time estimates of initial claims for unemployment benefits, housing sales, and loan modification, among other things.
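The nowcasting idea Varian describes can be sketched as a simple regression: explain this period's target series with its own lag plus a same-period search-volume index, which is available in real time, and check whether the search data improve the fit over a lag-only baseline. The data below are synthetic and the coefficients invented; a real application would use, say, weekly initial claims and a Google Trends index:

```python
# Minimal nowcasting sketch with synthetic data: compare an AR(1)-only
# model against one augmented with a contemporaneous search index.
import numpy as np

rng = np.random.default_rng(0)
n = 120
search = rng.normal(size=n)            # stand-in for a real-time search index
claims = np.empty(n)
claims[0] = 0.0
for t in range(1, n):                  # target depends on its lag and on search
    claims[t] = 0.6 * claims[t - 1] + 0.8 * search[t] + 0.1 * rng.normal()

y = claims[1:]
X_ar = np.column_stack([np.ones(n - 1), claims[:-1]])   # baseline: constant + lag
X_aug = np.column_stack([X_ar, search[1:]])             # add same-period search

def in_sample_rmse(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sqrt(np.mean((y - X @ beta) ** 2)))

print(in_sample_rmse(X_ar, y) > in_sample_rmse(X_aug, y))  # True: search data help
```

In practice the comparison would be done out of sample, since the in-sample fit of the larger model can never be worse; the sketch only shows the mechanics.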

Curtis: Janet Yellen stated in her first press conference as Fed Chair that the Fed should use other labor market indicators beyond the unemployment rate when measuring the health of labor markets. (The Atlanta Fed publishes a labor market spider chart incorporating a variety of indicators.) Are there particular indicators that Google produces that could be useful in this regard?
Varian: Absolutely. Queries related to job search seem to be indicative of labor market activity. Interestingly, queries having to do with killing time also seem to be correlated with unemployment measures!

Curtis: What are the downsides or potential pitfalls of using these types of new data sources?
Varian: First, the real measures—like credit card spending—are probably more indicative of actual outcomes than search data. Search is about intention, and spending is about transactions. Second, there can be feedback from news media and the like that may distort the intention measures. A headline story about a jump in unemployment can stimulate a lot of "unemployment rate" searches, so you have to be careful about how you interpret the data. Third, we've only had one recession since Google has been available, and it was pretty clearly a financially driven recession. But there are other kinds of recessions having to do with supply shocks, like energy prices, or monetary policy, as in the early 1980s. So we need to be careful about generalizing too broadly from this one example.

Curtis: Given the predominance of new data coming from Google, Twitter, and Facebook, do you think that this will limit, or even make obsolete, the role of traditional government statistical agencies such as the Census Bureau and the Bureau of Labor Statistics in the future? If not, do you believe there is the potential for collaboration between these agencies and companies such as Google?
Varian: The government statistical agencies are the gold standard for data collection. It is likely that real-time data can be helpful in providing leading indicators for the standard metrics, and supplementing them in various ways, but I think it is highly unlikely that they will replace them. I hope that the private and public sector can work together in fruitful ways to exploit new sources of real-time data in ways that are mutually beneficial.

Curtis: A few years ago, former Fed Chairman Bernanke challenged researchers when he said, "Do we need new measures of expectations or new surveys? Information on the price expectations of businesses—who are, after all, the price setters in the first instance—as well as information on nominal wage expectations is particularly scarce." Do data from Google have the potential to fill this need?
Varian: We have a new product called Google Consumer Surveys that can be used to survey a broad audience of consumers. We don't have ways to go after specific audiences such as business managers or workers looking for jobs. But I wouldn't rule that out in the future.

Curtis: MIT recently introduced a big-data measure of inflation called the Billion Prices Project. Can you see a big future in big data as a measure of inflation?
Varian: Yes, I think so. I know there are also projects looking at supermarket scanner data and the like. One difficulty with online data is that it leaves out gasoline, electricity, housing, large consumer durables, and other categories of consumption. On the other hand, it is quite good for discretionary consumer spending. So I think that online price surveys will enable inexpensive ways to gather certain sorts of price data, but it certainly won't replace existing methods.
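The mechanics behind an online price index like the one Varian mentions can be sketched with a Jevons (geometric-mean) index over matched items, a common elementary price index; the items and prices below are made up for illustration:

```python
# Hedged sketch: turning scraped online prices into a short-horizon
# inflation measure via a Jevons index over items seen in both periods.
import math

def jevons_index(prices_t0, prices_t1):
    """Geometric mean of price relatives for items present in both periods."""
    common = prices_t0.keys() & prices_t1.keys()
    log_rel = sum(math.log(prices_t1[k] / prices_t0[k]) for k in common)
    return math.exp(log_rel / len(common))

day0 = {"milk": 3.50, "bread": 2.00, "coffee": 8.00}
day1 = {"milk": 3.57, "bread": 2.02, "coffee": 8.00, "tea": 4.00}  # tea is new, so excluded
print(round((jevons_index(day0, day1) - 1) * 100, 2))  # price change in percent
```

Matching items across periods and handling entry and exit of products is exactly where the gaps Varian notes (gasoline, housing, durables) and the strengths (discretionary online spending) come from.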

By Mark Curtis, a visiting scholar in the Atlanta Fed's research department


February 1, 2013

Half-Full Glasses

Just in case you were inclined to drop the "dismal" from the "dismal science," Northwestern University professor Robert Gordon has been doing his best to talk you out of it. His most recent dose of glumness was offered up in a recent Wall Street Journal article that repeats an argument he has been making for a while now:

The growth of the past century wasn't built on manna from heaven. It resulted in large part from a remarkable set of inventions between 1875 and 1900...

This narrow time frame saw the introduction of running water and indoor plumbing, the greatest event in the history of female liberation, as women were freed from carrying literally tons of water each year. The telephone, phonograph, motion picture and radio also sprang into existence. The period after World War II saw another great spurt of invention, with the development of television, air conditioning, the jet plane and the interstate highway system…

Innovation continues apace today, and many of those developing and funding new technologies recoil with disbelief at my suggestion that we have left behind the era of truly important changes in our standard of living…

Gordon goes on to explain why he thinks potential growth-enhancing developments such as advances in healthcare, leaps in energy-production technologies, and 3-D printing are just not up to late-19th-century snuff in their capacity to better the lot of the average citizen. To paraphrase, your great-granddaddy's inventions beat the stuffing out of yours.

There has been a lot of commentary about Professor Gordon's body of work—just a few examples from the blogosphere include Paul Krugman, John Cochrane, Free Exchange (at The Economist), Gary Becker, and Thomas Edsall (who includes commentary from a collection of first-rate economists). Most of these posts note the current-day maladies that Gordon offers up to furrow the brow of the growth optimists. Among these are the following:

And inequality in America will continue to grow, driven by poor educational outcomes at the bottom and the rewards of globalization at the top, as American CEOs reap the benefits of multinational sales to emerging markets. From 1993 to 2008, income growth among the bottom 99% of earners was 0.5 points slower than the economy's overall growth rate.

Serious considerations, to be sure, but there is actually a chance that some of the "headwinds" that Gordon emphasizes are signs that something really big is afoot. In fact, Gordon's headwinds remind me of this passage, from a paper by economists Jeremy Greenwood and Mehmet Yorukoglu published about 15 years ago:

A simple story is told here that connects the rate of technological progress to the level of income inequality and productivity growth. The idea is this. Imagine that a leap in the state of technology occurs and that this jump is incarnated in new machines, such as information technologies. Suppose that the adoption of new technologies involves a significant cost in terms of learning and that skilled labor has an advantage at learning. Then the advance in technology will be associated with an increase in the demand for skill needed to implement it. Hence the skill premium will rise and income inequality will widen. In the early phases the new technologies may not be operated efficiently due to a dearth of experience. Productivity growth may appear to stall as the economy undertakes the (unmeasured) investment in knowledge needed to get the new technologies running closer to their full potential. The coincidence of rapid technological change, widening inequality, and a slowdown in productivity growth is not without precedence in economic history.

Greenwood and Yorukoglu go on to assess, in detail, how durable-goods prices, inequality, and productivity actually behaved in the first and second industrial revolutions. They conclude that game-changing technologies have, in history, been initially associated with falling capital prices, rising inequality, and falling productivity. Here is a representative chart, depicting the period (which was rich with technological advance) leading up to Gordon's (undeniably) golden age:

Chart source: "1974," by Jeremy Greenwood and Mehmet Yorukoglu, Carnegie-Rochester Conference Series on Public Policy, 46, 1997


Greenwood and Yorukoglu conclude their study with this pointed question:

Plunging prices for new technologies, a surge in wage inequality, and a slump in the advance of labor productivity - could all this be the hallmark of the dawn of an industrial revolution? Just as the steam engine shook 18th-century England, and electricity rattled 19th-century America, are information technologies now rocking the 20th-century economy?

I don't know (and nobody knows) if the dark-before-the-dawn possibility described by Greenwood and Yorukoglu is the apt analogy for where the U.S. (and global) economy sits today. (Update: Clark Nardinelli also discussed this notion.) But I will bet you there was some commentator writing in 1870 who sounded an awful lot like Professor Gordon.

By Dave Altig, executive vice president and research director of the Atlanta Fed