Artificial intelligence systems routinely match or surpass human performance, leveraging rapid advances in other technologies and sending stock prices soaring. However, productivity growth has declined in recent years, and real income has stagnated for many Americans. This session discusses why expectations and statistics have seemingly clashed.

Transcript

Pat Harker: All right, welcome back everyone from lunch. I'm Pat Harker from the Federal Reserve Bank of Philadelphia...

Siri: I'm sorry Pat, this is my job now. Allow me to introduce this session's speaker and discussant.

Harker: Wait a minute, wait a minute. Whoa, whoa, whoa, whoa, Siri. I speak for all human moderators: we are not going down without a fight. If we let this happen, think what's next. Rule-based monetary policy? We can't let this happen. Thanks, thanks for coming. I figured I'd liven you up after lunch and get you out of that flan coma you just had. So, let me take this session back from Siri, introduce our panelists, and get right to it. Of course, to my far right is Dave Altig, who's the executive vice president and director of research at the Atlanta Fed, and he has a host of many, many other credentials that I won't get into right now; that's all in his bio in the program. I have to say, though, one thing that sticks out to me is that he is an adjunct professor of economics at the Chicago Booth School of Business, which means that this Whartonite up on the stage today is outnumbered by Boothites, because the other Boothite would be, of course, Chad Syverson, whose paper we're discussing today. He is the Eli and Harriet Williams professor of economics at Booth, and his research spans a host of topics from market structures to productivity. Again, his bio is in the program, so I think we'll get right to the discussion. I'm glad both of you are here today, and Chad, take it away.

Chad Syverson: Thank you, Pat. This session was set up by one of the conversations that came toward the end of the first session this morning. It involved a question from the audience about the relationship between machine learning, AI, and productivity. That's what this paper is about. In fact, one of the panelists said, "You really need to talk to an expert about this," so unfortunately for all of you, I'm the expert today. But maybe that means I have some comparative advantage, at least in knowing what we don't know, which I'll try to convey to you even if my advantage in knowing what we do know is more modest. I'm going to talk about this topic in the context of work I've been doing with Erik Brynjolfsson and Daniel Rock, who are at MIT.

In this paper, we are looking at a particular question: the juxtaposition between optimism, in some quarters jubilation, about the potential of technology today and, on the other hand, really crummy productivity performance, which we usually think is the economic metric that technological progress will show up in. Erik, Dan, and I are trying to figure out how those two things can be true simultaneously. That's what this paper is about, and I'm going to walk you through our analysis.

Let's start with the technological optimism side. Here are some quotes from people you've probably heard of, speaking to the great potential of machine learning and AI more generally. That is not just puffery; as we talked about this morning, there are real technological advances in these two areas. If you talk about vision recognition, image recognition like we did this morning, the computer can now do a better job than you at telling which of those pictures on the left are blueberry muffins and which are chihuahuas. Of course, the human performance threshold is an important one, because once you pass that point, we're talking about substitution of capital for labor, as an economist would say.

Voice recognition, too, has now in certain quarters progressed past the human level. This chart is actually a little bit older, so Google Voice or Google Home speech recognition performance is even better than shown here. By the way, human performance is roughly a 5 percent error rate. You can do better if you train the human, but that's the benchmark you want to keep in mind. That's the technology side. Lots of excitement, real progress.

On the productivity measurement side, not so exciting. A lot more disappointment. We are in the midst of a worldwide productivity slowdown. Just to give you some specific numbers for the U.S.: labor productivity growth, output per hour, averaged 2.9 percent per year between 1995 and 2004. Since the end of that decade-long period, productivity growth has been less than half of that, 1.3 percent per year. And indeed, the last few years have been slower than that still. It's decelerating. It's not just the U.S. In the OECD [Organisation for Economic Co-operation and Development], 29 of the 30 countries the OECD collects similar numbers for also saw a productivity growth slowdown over that same comparison: the '95 to 2004 decade versus the time that's followed.

If you're curious what the exception country is, it's Spain, and it's only because so many Spaniards were fired that productivity growth actually rose slightly even though output was falling like crazy. Productivity growth has not been good, and it's not just the wealthier, more developed economies of the world. Emerging economies have seen a slowdown in productivity growth too. It started a little bit later; rather than the mid-2000s, it seemed to start around the time of the Great Recession or shortly thereafter. If you look at the numbers in time series, you get this. This is a highly smoothed figure, using Conference Board data; we've taken out a lot of the blips and bumps. The green line shows the U.S.; the red line shows mature economies, basically the OECD; emerging markets are the yellowish line; and the world, the composite of all that, is the blue line. You can see again that since about the mid-2000s, labor productivity growth has been decelerating.

Now why does this matter? Because, as you probably know, productivity growth is the speed limit on economic growth. You cannot get sustained growth in GDP [gross domestic product] per capita without productivity growth. If productivity growth falls by, say, 1 percent a year for a sustained period, potential GDP per capita growth is going to fall similarly. Even a small change, which 1 percent per year might sound like to some of you, adds up to something pretty big quickly. For example, had the U.S. not experienced the productivity slowdown we've had since 2004, had growth stayed at that 2.9 percent per year rate, GDP today would be conservatively $3 trillion higher per year. That's just under $10,000 per capita, $24,000 per household. That is how much per year we are poorer because productivity growth has slowed, and that's after only a bit more than a decade of slowdown. If it continues for another decade, vis-à-vis where we would've been had productivity grown at its '95–2004 rate, we'd be missing "a third of our GDP."
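As a rough check of how those numbers compound, here is a minimal sketch in Python. The roughly $12 trillion 2004 starting level and the 13-year horizon are illustrative assumptions of mine, not figures from the talk; the gap it prints lands near the $3 trillion the speaker cites.

```python
# A rough check of the compounding claim above, under assumptions that are
# mine, not the speaker's: a 2004 U.S. GDP of about $12 trillion and the two
# growth paths quoted (2.9% vs. 1.3% per year), holding hours worked fixed.

BASE_GDP_TRILLIONS = 12.0   # assumed 2004 starting level (illustrative)
YEARS = 13                  # 2004 through 2017

fast_path = BASE_GDP_TRILLIONS * 1.029 ** YEARS   # no-slowdown counterfactual
slow_path = BASE_GDP_TRILLIONS * 1.013 ** YEARS   # roughly the actual path

print(f"counterfactual: ${fast_path:.1f}T, actual path: ${slow_path:.1f}T")
print(f"gap: ~${fast_path - slow_path:.1f} trillion per year")  # ~$3 trillion
```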

This stuff adds up, and if it's sustained for decades on end, it really starts to matter. By the way, these figures don't assume away the Great Recession or anything like that. This takes employment as given and just applies the output-per-hour growth that would've happened had productivity growth not slowed. Okay, so we have this paradox: on one side, technology with real potential and actual stuff happening; on the other, productivity growth, which we think is where that stuff shows up, going nowhere. How do you explain this paradox? We consider four explanations in the paper. The first is false hopes: the optimism about the potential of machine learning, AI, and similar technologies is simply unwarranted, and we're in a new normal of slow productivity growth.

The second is mismeasurement, which is in some sense the opposite: the technological optimism is warranted, and indeed the gains are here; it's our ability to measure those gains that has fallen off since the mid-2000s. The third possible explanation is a little more subtle. We call it the distribution and dissipation story. Here the idea is that the technology is real and its benefits are here, but they've fallen on a few concentrated sectors, or even a few companies within sectors, and you might be able to name some of the companies likely to be benefiting. Because the gains from the technology are concentrated, and the technology is potentially rival, those who have it spend a lot of resources trying to keep others from getting it, and those who don't have it spend a lot of resources trying to get it. That process burns up a lot of the gains. That's the distribution and dissipation explanation.

The fourth explanation we consider we call the implementation and restructuring lags story. Here the technology is real, but there is going to be a period where, even though the technology exists and one can see its potential, it's not showing up in the productivity statistics because of the implementation lags required to put that technology into practice and get its productivity gain. Those are the four explanations. Let me quickly walk through a few of them. First, false hopes. It is not hard to think of past examples of potentially promising technologies that haven't panned out: fusion energy's been 20 years away for 60 years, flying cars still aren't around, and fission energy never got too cheap to meter.

But it is not hard to construct what I think are plausible estimates of real productivity growth that are achievable with current technology levels, and I'll demonstrate that to you in a little bit. If you put that together, you could say maybe this isn't just hope; there really is potential here for regaining the productivity growth that's been lost.

Second, the mismeasurement story. This is actually something I've thought about a lot; I've got a whole separate paper on it. There are multiple things that would be implied by the mismeasurement story if it were true. Just to summarize without spending much time on it: none of those hold in the data. The mismeasurement story might be plausible on its face. Our ability to measure the gains from the kinds of goods we consume now, which don't involve a lot of transactions, has arguably fallen off. If I take a picture and send it to all my friends, I can do that without paying a dime. If I had wanted to do that 20 years ago, it would have cost me lots of money, and all of that would've gone into GDP. The argument is that this doesn't, and so we aren't capturing the gains from new goods. It turns out that story just doesn't hold when you press it further. I'll leave it at that, and we can talk in more detail during Q&A if you have questions.

The distribution and dissipation story is consistent with some patterns: there's a general increase in skewness within industries. Skewness in the size of firms: the biggest firms are getting bigger, the highest-income workers are getting paid more relative to the median earner, and the most productive firms are getting more productive. That's all consistent with this story. At the same time, it's hard to imagine that we are actually burning up in the neighborhood of $3 trillion a year of output in the U.S. and just somehow missing it. Thirty billion dollars a year of dissipated gains I can find plausible. Three trillion dollars a year is a really tall order.

We're left, then, with the implementation and restructuring lag story. This is the one we're going to focus on. It's not just proof by residual, that we don't think the other three are happening so it must be this one; I'm going to make an affirmative case for the implementation and restructuring lag story being what we think is the most plausible explanation for what's going on. If it is correct, then this paradox, a redux of the Solow paradox in some sense, this paradox between technological optimism and poor current and recent productivity growth, is not a contradiction. In fact, they are two parts of the same story. One actually implies the other: the fact that there is this great new potential technology actually implies a period of slow productivity growth preceding its payoff. I'm going to show you examples of that from history, and I think you'll see the echo and the potential that we're talking about.

Okay, let me make the case for the implementation lag story. There are three parts. First, I'm going to demonstrate to you that, as a statistical matter, current productivity growth tells you nothing about future productivity growth. The fact that productivity growth is slow today does not mean it's going to be slow tomorrow. Second, I'm going to do, as I mentioned, some back-of-the-envelope calculations about the potential productivity gains from machine learning and associated technologies. They'll be case studies, some very simple examples, but then we'll see whether we can think those would extrapolate to close the percentage point and a half or so productivity slowdown that we've seen in the U.S. Then third, I'm going to talk about AI as a new general purpose technology and why, if that is the case, we're going to see the echo of these past patterns, where you have a lag period in which the potential of the technology is apparent but the productivity gains do not show up contemporaneously with that potential.

Okay, first point. This is a plot using 60 years of aggregate data for the U.S. labor productivity growth. The vertical axis shows what productivity growth will be looking forward 10 years; the horizontal axis shows what productivity growth was over the prior 10 years. If productivity growth were highly persistent, there should be a strong positive relationship between those two numbers. You look, and there is not. There is a very tiny positive relationship. It is not statistically distinguishable from zero, and it's not economically large either. If you span the entire range of the data over those six decades, going from the lowest-growth period to the highest-growth period would predict a differential in expected productivity growth of only 0.15 percent per year, so really tiny.
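Here is a sketch of that persistence check in Python: regress the forward 10-year average of labor productivity growth on the trailing 10-year average. The random series is a stand-in for the roughly 60 years of actual aggregate U.S. data, which are not reproduced here.

```python
# Persistence check: does trailing 10-year average productivity growth
# predict the forward 10-year average? Random stand-in data for illustration.
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(0.02, 0.01, 60)   # stand-in: 60 years of annual growth rates

W = 10
ts = range(W, len(g) - W)
past = np.array([g[t - W:t].mean() for t in ts])    # trailing 10-year average
future = np.array([g[t:t + W].mean() for t in ts])  # forward 10-year average

slope, intercept = np.polyfit(past, future, 1)
print(f"slope = {slope:.3f}, corr = {np.corrcoef(past, future)[0, 1]:.3f}")
# For a memoryless series like this stand-in (and, per the talk, for the real
# data), the slope is statistically indistinguishable from zero.
```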

What this says, again, is that if productivity growth is slow now, that doesn't give you any ability to look forward and say what productivity growth will be. It doesn't mean it's going to speed up, either. It just means there is no predictive power. Second, let me do some quick examples of the quantitative potential of these productivity gains from AI technology. Autonomous vehicles: that's a technology we hear about a lot tied to machine learning and AI. The Bureau of Labor Statistics reports that 3.5 million people work as motor vehicle operators of one type or another. We think, based on our own introspection plus folks we've talked to, that autonomous vehicles could reduce that over some period of time to 1.5 million drivers. In other words, 2 million of the drivers could be replaced by machine learning algorithms. Private employment is 122 million people, so that means we could get the same output we get now with 2 million fewer workers. That's just 122 divided by 120, a 1.7 percent increase in labor productivity from this one technology. This is not going to happen in one year; it's going to be rolled out over the course of time. If that's a decade, then it's about 0.17 percent per year; if it's 15 years, it's a little bit less. That gives you an idea of the potential, and again, we think these are plausible estimates of the potential gains.
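That back-of-the-envelope arithmetic is simple enough to write out. Here is a minimal sketch using the talk's own figures; the helper is just the identity that producing the same output with fewer workers raises labor productivity by L / (L − cut) − 1.

```python
# Back-of-the-envelope labor productivity gain from replacing workers while
# holding output constant. Figures are the ones quoted in the talk.

def productivity_gain(employment_m: float, replaced_m: float, years: float):
    """Level gain and annualized gain from replacing workers at constant output."""
    level = employment_m / (employment_m - replaced_m) - 1
    return level, level / years

# Autonomous vehicles: 2 million of 122 million private workers over a decade.
level, per_year = productivity_gain(122, 2, 10)
print(f"AVs: {level:.1%} level gain, ~{per_year:.2%} per year")  # 1.7%, 0.17%
```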

Let me do another one: call centers. This is something Erik has worked on a lot. There are 2.2 million people who work in large call centers, and folks think you could probably reduce that by 60 percent. Again, you'd have the same amount of output we have today with 60 percent fewer of those workers. That's going to be about a 1 percent increase in productivity. Again, that's not going to happen overnight; it's going to take a decade or so to roll out, so that would be additional productivity growth of about 0.1 percent per year. Here are just two technologies in two really specific industries, and we've already got a quarter percent of productivity growth back from that loss. If you can come up with six, seven, or eight of these, you've closed the productivity slowdown gap. That's our plausibility argument.
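The same identity covers the call-center case; a self-contained snippet, again with the talk's figures and its roughly-a-decade rollout guess:

```python
# Call centers: 60 percent of 2.2 million workers, out of 122 million total
# private employment, rolled out over roughly a decade (the talk's guess).
total_m, replaced_m, years = 122.0, 0.6 * 2.2, 10
level = total_m / (total_m - replaced_m) - 1
print(f"call centers: {level:.1%} level gain, ~{level / years:.2%} per year")
# -> roughly a 1.1% level gain, about 0.1 percentage point per year
```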

Moreover, these are just the direct replacement gains in labor productivity. There's nothing here about how, once you have new capabilities in, say, delivery with autonomous vehicles, you might invent new and better ways to do things that involve delivery, which come with their own productivity gains. In fact, those complementarity stories are going to be a big part of the AI-as-a-general-purpose-technology bit, which I'm going to get to in just a second. This isn't counting that at all. This is just the direct effect: we can do the same stuff we're doing now with fewer people, so labor productivity would rise.

It's not just labor productivity that could go up because of AI, either. Here's an example from Google using DeepMind to cut energy use in their data centers; this is an increase in capital productivity. During the test where they let the algorithm run their data centers, energy consumption fell substantially, so the cost of operating the data centers fell substantially when the algorithm was run. Okay, so the third bit of this case for the implementation lags story is the potential for AI to be a general purpose technology. Tim Bresnahan and Manuel Trajtenberg wrote a paper a couple of decades ago describing and defining what a general purpose technology [GPT] is. They said a GPT has three characteristics. One, it's pervasive: it can be applied in many, many different places. Two, the core technology is able to be improved upon over time. Three, the core technology is able to spawn complementary innovations. There's the complement part that I promised I would get back to.

Let's talk about each of these three for a second. Pervasiveness: as we talked about this morning, one big thing that machine learning is, at its heart, is prediction. It's a tool to do prediction. Well, you can imagine many, many potential applications for prediction; when you can predict things better, there are a lot of things that can be improved. You heard about that this morning, so I'm not going to go into detail, but it's not too difficult to imagine. I think the pervasiveness box gets checked. Second, is it able to be improved upon over time? Well, that word "learning" in machine learning implies it's getting better. You heard this morning about the difference between supervised and unsupervised learning, and with unsupervised learning it's possible that not just people make the machines better; the machines make the machines better. Either way, there's definitely scope for improvement. Whether it's directed improvement or self-improvement, what that mix will be remains to be seen, but improvement is plausible nonetheless.

Third, the ability to spawn complementary innovation. Think about two particular uses of machine learning and AI: perception, which is vision and voice recognition, and cognition, which is solving problems. Those are building blocks that you can imagine being put into all sorts of possible situations and having things added to them. I take the way I do things now, and then I get this cognition or perception engine and apply it to what I'm doing, or I create new ways of doing what I'm doing now that I have this perception and cognition engine to import into my way of doing business, and I get the gains from the complementarities between my current or invented technologies and the general purpose technology, AI.

If this is right, AI is the next general purpose technology. So why is productivity growth slow? There are two reasons, and this is true for any general purpose technology; again, I'm going to show you it was true in the past as well. First, you need to accumulate enough of the new technology so that its aggregate size is actually big enough to move macroeconomic numbers. The actual stock of AI capital now is de minimis in macroeconomic terms. It's just not big enough to move the needle, and it's going to take a while for it to be accumulated to the point where it is.

The second element again goes back to this complementarity story. One needs to recognize the complementarities, in some cases invent the complementary technologies, and then install the complementary technologies to get the full benefit of the general purpose technology. That process, just like the accumulation process, takes time. This was mentioned this morning as well: the lag can take a considerable amount of time even after the potential is recognized. That's really the key thing to understand here. Let me give you some examples. Look at computers, which were the source of the original Solow paradox quip: I see computers everywhere except in the productivity statistics. He said that in the late 1980s; it was '86 or '88, I've forgotten which, but it's one of those two years. Just that very year, computer capital had finally gotten to the long-run level it's been at since then, which is about 5 percent of total equipment capital in the U.S.

It took 25 years after commercialization of the integrated circuit for computers to be accumulated to the point where they were at their long-run level. That's two and a half decades, even though the potential of computers to move and change technologies was recognized well before that. Indeed, just 10 years prior to Solow's quip, the computer capital stock was only half of its long-run level. This process can take time, and it often accelerates near the end of the period, which is when you really start seeing the productivity gains.
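For intuition on how accumulation can crawl for years and then accelerate late, here is an illustrative curve anchored to the two facts just cited: half the long-run level about 15 years after commercialization, and essentially the full level by year 25. The logistic functional form is my assumption, chosen only to illustrate the shape; it is not estimated from data in the talk.

```python
# Illustrative logistic diffusion curve for computer capital's share of
# equipment capital. Anchors are from the talk; the logistic form is assumed.
import math

LONG_RUN = 0.05   # long-run share of equipment capital (from the talk)
T_HALF = 15       # years after commercialization at half the long-run share
k = math.log(19) / (25 - T_HALF)   # calibrated so year 25 hits 95% of long run

def share(t: float) -> float:
    """Computer capital share of equipment capital, t years in (illustrative)."""
    return LONG_RUN / (1 + math.exp(-k * (t - T_HALF)))

for t in (5, 10, 15, 20, 25):
    print(f"year {t:2d}: {share(t):.2%} of equipment capital")
# Slow early accumulation, then a late acceleration toward the 5% plateau.
```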

Second of all, going back further in history: in 1919, some 30 years after AC electric motor technology became commercially available, over half of the manufacturing establishments in the U.S. were still not electrified, even though electrification was clearly the superior technology. How do we know that? Because no manufacturer today runs on steam power or a water wheel. Very clearly it was the superior technology, and it was recognized as such, yet only half or so of manufacturers had actually installed it some 30 years after it became commercially available. This stuff can take time.

Let me show you that in some statistics. What I've got plotted here is the level of labor productivity, output per hour in levels, from 1890 to the early 1930s, 1933; the years run along the horizontal axis at the bottom. If you look, you can see there are a few inflection points in this series. There's about 25 years of pretty slow labor productivity growth from 1890 to 1915. By the way, this series has been indexed to its value in 1915; that's why it's 100 in 1915, and everything is a percentage relative to that. Then 1915 was an inflection point, and labor productivity growth accelerates. It accelerates for about 10 years, to the mid-'20s, and then it slows down again into the early 1930s. So there's a slow-growth period of 25 years, even though the technology, both the electric motor and the internal combustion engine, was commercially available. Twenty-five years of slow productivity growth, then something hits, productivity growth accelerates for a decade, and then it slows again.

Now what I'm going to do is juxtapose on top of this the labor productivity series for the computer era. I'm going to assume that 1970 is the IT analog to 1890. When you do that, here's what happens: very much an echo. You see 25 years or so of modest productivity growth, from 1970 to 1995. That was the productivity slowdown that launched 1,000 dissertations. Then there was a decade-long acceleration, as I mentioned, from 1995 to 2004, and then productivity slowed down again. That's where we are now, at the end of that line; that's 2017. The question is, all right, what happened after that second slowdown in the portable power era? Can we learn anything from that?
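The overlay itself is a simple data transformation. Here is a sketch with random stand-in series (the real inputs would be annual output-per-hour levels): index each era to 100 at its 25th year, 1915 and 1995 respectively, then shift the IT-era years back by 80 so 1970 lines up with 1890.

```python
# Building the two-era overlay described above, with random stand-in data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
power = pd.Series((1 + rng.normal(0.02, 0.01, 44)).cumprod(),
                  index=range(1890, 1934))   # stand-in for 1890-1933 levels
it = pd.Series((1 + rng.normal(0.02, 0.01, 48)).cumprod(),
               index=range(1970, 2018))      # stand-in for 1970-2017 levels

power_idx = 100 * power / power.loc[1915]    # 1915 = 100
it_idx = 100 * it / it.loc[1995]             # 1995 = 100
it_idx.index = it_idx.index - 80             # 1970 -> 1890, 1995 -> 1915

overlay = pd.concat([power_idx.rename("portable power era"),
                     it_idx.rename("IT era")], axis=1)
print(overlay.loc[[1890, 1915, 1933]].round(1))   # aligned, ready to plot
```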

Here's what happened. Productivity growth accelerated again for about a decade, and then World War II comes along and messes up the statistics for a bunch of reasons. You saw a second wave of productivity growth happen during the portable power era. Does that mean that the second wave starts today in our era? No, of course not. This is not a sunspot theory of technological progress. What it does say is that gains from technologies can come in waves. If you think of AI as the second element of the IT era, it's possible that it will create its own wave as well. It's not that a technology has to come along, give what it's got over the course of a decade, and go away and never be heard from again. That's not what happened with portable power. That might be, although there's no guarantee, what will happen with AI as the second coming of IT.

One more example of the lags, from something we're all familiar with. The blue line here is the online fraction of GAFO [general merchandise, apparel and accessories, furniture and other] retail sales. GAFO is a particular set of retail sales: general merchandise, apparel, accessories, furniture, and other sales. Department store stuff, though it doesn't have to be sold at a department store per se. That fraction has been growing since the advent of online commerce in the mid-1990s, of course. But it's still not that big: only 30 percent of GAFO and, as a fraction of all retail sales, only just over 10 percent. Now, I'm old enough to remember that in the 1990s, even though Amazon was only selling books, people understood the revolutionary potential of ecommerce, at least for retail. But it's arguable that it really didn't show up, and I've got other work on this with my colleague Ali Hortacsu, until the last few years. The biggest thing that happened in retail between '95 and 2010 was not ecommerce; it was the rise of the supercenter. Only in the last couple of years, where we've seen a lot of bankruptcies that are plausibly connected to ecommerce, do we really see this new technology making itself felt in the industry, again some 20 years after the potential for revolutionizing the industry came to be recognized.

All right, so to wrap up: we don't dismiss those three stories completely, but we think the sum of them, the false hopes, mismeasurement, and distribution and dissipation stories, just doesn't add up to explain this paradox. We think the most likely explanation is the implementation and restructuring lag story, and we find it plausible for the reasons I've already talked about. If it is correct, then again, this paradox is not a contradiction. We can have a period where the technology exists and its potential is recognized, but its actual effect on productivity is not yet being felt.

Again, I think that's the core of the case. All that said, I don't want to give the impression that any of this is mechanistic, that it's going to happen. This is our best guess of what's going on, and of course none of these technological gains are actually realized until workers, organizations, and institutions reshape themselves to harness the potential of these new technologies and again we know from past history that that often does not happen particularly quickly. All right, thank you.

Harker: All right, and now for the ever-productive Dave Altig from the Atlanta Fed.

Dave Altig: All right. If you thought you were going to hear a discussant who was going to fundamentally disagree with, or attempt to pick apart in any way, what the primary presenter had to say, that ain't going to happen. If I didn't find the case compelling beforehand, the case that this is really about diffusion lags and the process of growth that comes out of a general purpose technology, I'm pretty convinced now.

I'm going to talk about the following things, just briefly. First, I'll reiterate the point that AI should be thought of as a general purpose technology in its pre-prime-time phase. We're not quite there yet, and by historical standards, in any event, that means a period where the promise of the technology doesn't yet come to fruition is to be expected. There is a question floating out there about whether we're in a new world where the speed of technology adoption is speeding up; I want to throw my note of skepticism on that. Then I want to talk about the role of what I'll call transformative consumption, a slightly different spin on some of the stuff that is fundamentally in Chad's story, and I'll conclude with a few thoughts on the future of work.

You heard a lot of very thoughtful conversation today about the limitations of AI. You're not going to get anything thoughtful from me; you're going to get this. Here are some examples of AI fail. This is Tay. Tay, you may know, was a Microsoft Twitter chatbot that was supposed to go on Twitter, have conversations with people, and learn how to have ever-more-intelligent conversations. That's what we ended up with. By the way, that's the most presentable thing Tay eventually said. Basically, the subpopulation of miscreants in the Twitter universe understood immediately that this was a major trolling opportunity, and they made Tay into a racist, homophobic, sexist, horrible thing that had to be shut down. Here's another example: Little Fatty goes berserk. That name is highly offensive, by the way, Little Fatty; I don't care for it. But Little Fatty was a robot intended to interact with children.

What happened is, in the course of this interaction (this was in China), Little Fatty goes berserk, starts crashing into a glass display case, and sends a kid to the hospital. I think the kid is okay, because this would not be very funny if he wasn't, but the robot obviously went rogue in a way it was not programmed to do. Although it was something of a success, because one of the key things Little Fatty was supposed to do was react to the children and express the right emotion on the screen of his face, and he does look pretty distressed about the whole episode.

Finally, there's the case where a kid asked Alexa to play "Digger Digger." Now, I don't know what Alexa heard, but I have a suspicion it was Dirk Diggler, the semifictional character in Boogie Nights, because this is what Alexa came back with: "You want to hear a station for porn detected," and then a bunch of stuff I don't dare print after that. This is on YouTube, by the way. You should look it up and hear the father frantically saying, "Alexa, off!" Those aren't the most serious examples of our not being in the prime time phase; I'm going to return to a more serious one in a bit. But it gets at this notion that early on, these things just don't work very well. The more important message in Chad's paper, though, is that even if there were no glitches whatsoever (and those were all just glitches), the infrastructure to turn an idea into a productive enterprise, and a productive sequence of enterprises, takes a really, really long time.

You've got to spur complementary innovations; you need co-inventions that involve the reorganization of labor and of capital. And the point here that I think is really important to recognize is that many of these co-inventions, many of these innovations that bring the full promise of the technology to fruition, are seemingly unrelated. They don't even seem to be connected in any way. Does it take time? Well, in history, that's for sure the case. Chad talked about electrification. These are pictures from the first Industrial Revolution, Great Britain on the left, the United States on the right, and the pinkish bars are the levels of productivity over these periods of time. This comes from a paper by Jeremy Greenwood from back in the '90s, when Ben Craig and I were off haunting the halls of the Cleveland Fed; it's a Cleveland Fed Economic Review piece. It's based on other research he did called "1974," which was about computer technology and its diffusion and adoption.

You can see that in both cases, first in the wave in Great Britain and then later in the United States (even though Great Britain had gone through it first), there's a drop in productivity before the pickup in productivity. There are a couple of other really interesting time series superimposed here. What goes along with disruptive technology arriving and then being absorbed into the economy? Well, we see in Great Britain rising inequality. We see in the United States a rising skill premium to labor. By the way, it's noted in Chad's paper and other work he's done that when you think about introductions of technology, not only do you get these declines in productivity, you get increases in employment. You see all of these additional elements that are exactly the elements we think we are seeing in the real world today: low productivity but high employment growth, rising inequality, rising wage premiums, which are the hallmarks of industrial revolutions.

Well, so this is the question I often get when I bring that up: yeah, but that was then, this is now. That was back in the horse-and-buggy days; this is a much more sophisticated world, and everything is faster. Here's an example of it: are smartphones spreading faster than any technology in human history? Well, first of all, there's a really hard problem: how do you measure the spread of a technology? Is it from the time of invention to the time of commercial application? Is it about penetration, going from 10 percent of people using a technology to 80 percent? What exactly you are talking about is a difficult question. I think it is important to make this distinction between a general purpose technology and the application of a general purpose technology, which, even in history (and I'll show you this in a second), actually diffuses rapidly through the economy.

Smartphones. Look, here's how I think about smartphones. Here's a little history in pictures of mobile telephone service; it goes back to the 1940s. And here's the history of personal computing in some sense. I don't know where you want to start the invention of the computer; the 1940s seems fairly reasonable, up through personal computing in the '60s and '70s and then to the PDA. Really, if you think about it, smartphones are just the combination of those two technologies, which took a very long time to reach maturity. The first thing I would note is that the whole smartphone thing isn't all that fast. It's 15 to 20 years, depending on which version of the iPhone you think is the one where we reached the full iPhone-y goodness that Apple had to give us. It was 15 to 20 years. Again, I'll show you in a second that that seems normal even in historical context.

The second thing is, if you think of this as a mashup of communications and computing GPTs, I want to say: big deal. It's the Reese's peanut butter cup of technology. It's delicious, and my world is better because it exists, but guys like Kellogg and Nestle did all the heavy lifting back in the 19th century; somebody just figured out how to put it together into something good. I'm half serious about that. The part that makes me not so serious is that I think there is something important about the iPhone, but it's not the iPhone, it's the App Store. The real innovation of the Apple world was the App Store, because think about what it did: it connected developers, producers, directly to consumers on really a one-by-one basis. That delivery of a consumption good that did not exist before is why we're thinking ahead to some of the things we're thinking ahead to now.

For example, Uber and Lyft: you couldn't even think of these things. They're incomprehensible without the App Store or Google Play or whatever it is. We know what that is now doing to our thinking about the way the world is going to look. It's challenging our notions of capital, it's challenging our notions of employment, it's challenging our notions of how fast we'll get to something like autonomous cars. This is interesting, I think, and there's a tendency to think of these things in terms of what these technologies do to reorganize production. But consumption can't be left out of the equation. This is far from my insight. This is Thomas Edison, in an interview in Good Housekeeping Magazine in 1912, and here's what he said:

"The housewife of the future will rather be a domestic engineer than a domestic laborer with the greatest of all handmaidens—electricity—at her service. This and other mechanical forces will so revolutionize the woman's world that a large portion of the aggregate of women's energy will be conserved for use in broader, more constructive fields."

If there's anybody who saw into the future, it was that guy. He's describing a world where technology, by delivering new modes of consumption, opens up new modes of production from the act of creating a consumer good. Here's what he was talking about: female labor force participation. We think of it as increasing maybe in the '60s, with the introduction of oral contraceptives, but you can see that it starts well before that, and coinciding with the beginning of the inexorable march of female labor force participation is the penetration of consumer goods made possible by portable energy.

And that's about a 20-year period for refrigerators to go from 2 percent to 85 percent penetration among the population, and for vacuum cleaners to go from 16 percent to 54 percent. You don't have to work too hard (actually, I did have to work hard, I guess), but you can certainly imagine the story that gets you from this to the pill. Electricity opens up a new world of consumption, which changes the capacity for women to participate in the labor force, and that participation in the labor force creates a demand for something else: help me jump the biological hurdle. Then comes the invention of something seemingly unrelated but which is part and parcel of exactly the same thing. It's this interaction of technology, consumption, and production that I think is the story behind this very long process of diffusion of these general purpose technologies.

Let me conclude with a comment about the labor market part of this; it's a little bit disconnected. This is where I'm going to be optimistic. As we were discussing, this is one of the places where economists are optimistic, and I'm going to join that list in thinking about the future of work. In part, you can think of the story I just told: a technology that everyone, or many people, thought was going to be destructive of employment ended up opening channels of employment that no one had presumed possible. We're in the same place with AI. Here's an AI fail, one that's more serious than the ones I showed you at the beginning. It has to do with this notion of the Winograd schema, named after Terry Winograd, a professor emeritus at Stanford who runs programs on the interaction of human beings and computers. There's a competition, and it says, here's when we'll think we have the strong form of AI: when an AI program can correctly interpret pairs of questions that (a) are easily disambiguated by human readers.

Here's an example. The city councilmen refused the demonstrators a permit because they feared violence. The city councilmen refused the demonstrators a permit because they advocated violence. Who is "they" in each of these sentences? Human beings can figure that out pretty easily. AI, not so easily. There are other requirements of the schema: (b) the questions can't be solved by simple techniques such as selectional restrictions. You can't write down a bunch of if-then statements and solve it. Here's an example. The women stopped taking the pills because they were pregnant. The women stopped taking the pills because they were carcinogenic. Again, who does "they" refer to? But here the computer can figure it out, because pills can't get pregnant and it can discover that, so a pair like this doesn't qualify.

The third requirement is basically that you can't Google the answer. You pass the Winograd schema challenge if you get 90 percent accuracy on 60 questions in each of two rounds. The best score when this was run in 2016 was 58 percent. Remember, this is binary. It's A or B. It's a coin toss, and they couldn't get much past 58 percent. They were going to hold another one this year; it was scheduled for this year. I looked like crazy to find out if AI had gotten better at this, to find out the results of the new challenge. I couldn't find them anywhere, so I called up the organizers; I emailed them. It turns out there was lots of interest, but in the end everybody dropped out because they realized they couldn't do much better than a coin toss.
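To see why entrants gave up, consider the chance of clearing that bar by guessing. A quick sketch: the 90 percent bar, 60 questions, and two rounds are as described in the talk; modeling entrants as guessers on binary questions is my simplification.

```python
# Probability of clearing the Winograd challenge bar (90% of 60 binary
# questions, in each of two rounds) under simple guessing models.
from math import comb

def p_at_least(m: int, n: int, p: float = 0.5) -> float:
    """P(X >= m) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

bar = 54                                  # 90% of 60 questions
one_round = p_at_least(bar, 60)           # pure coin-tossing
print(f"one round by chance:  {one_round:.1e}")
print(f"two rounds by chance: {one_round ** 2:.1e}")
# Even at the observed 58% accuracy, clearing 90% twice stays vanishingly rare:
print(f"two rounds at 58%:    {p_at_least(bar, 60, 0.58) ** 2:.1e}")
```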

The prize is only $25,000; maybe they need to go to that million that we saw today. But here's the essential point. The essential point is in these next two slides, and then I'll finish up. What can't AI do? It really can't, as we heard this morning, be a human being. We're used to looking at these pictures of labor markets from David Autor and his coauthors that divide the world into routine and nonroutine, cognitive and manual jobs, and we're used to seeing that all the employment growth is in the categories that involve nonroutine, cognitive activities. There's another way to cut exactly the same data, and it's this: divide the world into high-math and low-math requirements within jobs, and high-social-skill and low-social-skill requirements within jobs. All of the growth (this is employment, and the same is true of wages) is in the high-social-skill category, independent of whether it's high math or low math. It's an optimistic story. It's also an incredible challenge. The one thing we can do is not be Tay, if we choose not to be Tay. I'll finish there.

Harker: All right. We have a few minutes for questions; please vote on Pigeonhole. As you're voting, I have one quick question. I don't want to let you off the hook completely when it comes to some of the other explanations, because there could be more than one thing going on. If you think about the economy today versus back in the Industrial Revolution, we have health care bumping up on 20 percent of the economy, and we don't know how to measure the outcomes of that health care at all. In addition, we're having improvements in quality which, one could argue, by extending life (and I'm all for extending life, by the way) are not doing a whole lot to increase measured productivity. How do you think about that?

Syverson: I don't disagree with any of that. One key thing to remember about the mismeasurement story is that it doesn't just require the existence of mismeasurement in GDP, or however you want to measure total output or total activity. It requires a change in the amount of mismeasurement at the time productivity growth slowed, and that's one of the tests that is hard to pass. You give an example of the growth of hard-to-measure sectors, health care being the foremost.

Harker: Right.

Syverson: That has happened. However, it's happened smoothly over time. There wasn't a real inflection point in the mid-2000s that would make us point to it and say, "Oh, hold on, that was probably it." I think all of that is going on underneath, and I think we need to spend more time thinking about how to better measure the gains in economic welfare tied to health and things like that. I'm involved in projects on that, but I don't think that's the key explanation for the slowdown itself.

Harker: One question that's come up goes back to this issue of measurement and mismeasurement. What about the synchronized slowdown in productivity, really across the world?

Syverson: I think that's actually one further bit of evidence that makes the mismeasurement story hard to tell, because it would mean all these places got worse at measuring things simultaneously. Moreover, there's no connection between the size of the productivity slowdown in a country and how important digital and IT-related goods are in that country, whereas the mismeasurement story would imply there should be a relationship between those. The obvious question people then ask is, okay, so why did productivity slow down? One piece is this lag period. The other is that we did have, in some sense, a coordinated wave of productivity gains in the late '90s and early 2000s; the first wave of IT ("renaissance" is too big a word) came in many different places, and I think the simultaneous petering out of that in the developed economies, plus the Great Recession and some of the knock-on effects it had in the emerging economies, created this worldwide downturn that we've seen in the last 10 years or so.

Harker: Dave, anything to add?

Altig: No.

Harker: So, the winner in terms of votes: if labor productivity gains are driven by job cuts, and society refocuses on retraining workers in obviated functions, won't they reenter the workforce and somewhat dilute the expected gains? Or are they just going to disappear?

Syverson: That's a great question. The economist's answer is that in the past we've always figured out something for workers to do after they've been displaced. That's too simple, because the reality is that a lot of displaced workers never get back to where they were before they were displaced. It's not that the typical 55-year-old who lost their manufacturing job because of automation goes and finds a job at the hospital or whatever. It's often that the 55-year-old drops out of the workforce, and then a 25-year-old who would've gone into manufacturing goes into health care instead. That's how the economy does it. We still have that 55-year-old who is displaced, and there are social problems that come with that. I don't mean to minimize those at all, but on the other hand, the economy has an immense amount of churn built into it. We see net growth of 200,000 jobs a month, but the gross flows that underlie that are in the millions. We turn over millions of jobs every month, both losses and gains. We can accommodate productivity growth just by moving workers in and out of jobs. The stickier issue is what you do with the ones who persistently lose a good situation, and there's no simple answer for that that I know of.

Harker: Dave, I know down in Austin at the workforce conference you had some thoughts about this.

Altig: Yeah, my thoughts there were multifaceted. First of all, there are obviously a whole bunch of policies you have to think about putting in place to create a structure that deals with the changing landscape, and you've got to understand what that changing landscape is to begin with. That includes tax policies and transfer policies. It includes having a workforce development infrastructure that isn't purely local, that doesn't just think about how to take guys who are laid off in Pittsburgh and employ them in Pittsburgh. That's probably not the right way to think about things. Pittsburgh may be a bad example, but more generally, I think it is true that the biggest part of this problem is the transition generation. If you think down the road, this is where the economists' usual story comes in, and the evidence is pretty clear; agriculture is the obvious example that we find ways to fill those markets, and fill them in appropriate ways. The real challenge is what you do for people caught in the middle of that transition, and the middle of that transition is where we are now. That's what the real issue is.

Harker: That raises what is probably our last question, which I think is a very good one. What happens to the truckers who lose their jobs? There are fewer substitute jobs. If we raise productivity but lower labor force participation, have we raised the standard of living? Which raises, I think, an even deeper question that many economists have brought up over time: is GDP the sole measure we should focus on? What happens to these people? Is the standard of living, in some way, shape, or form, a better measure? A complicated measure, but better?

Syverson: A good question, similar in some ways to the one we were just talking about. If you gave me a choice, could we have everything we have now and have no one work an hour to get it? I would take that in a second. Work has benefits, but work is a "bad" to an economist. We might have other pursuits we would rather do with our time if we could have everything we have now without actually having to work. In some sense, that kind of productivity gain is good. That is not to say, though, that the trucker who loses their job, and who doesn't have a lot of skills that transfer to other areas where people are looking for employees, isn't going to suffer economically or socially because of it. That gets back to what Dave was saying: you have to have programs to handle that. But I sure wouldn't say let's not have productivity growth, let's not have the same stuff we have now with less work, or more stuff with the same amount of work, because we've got to worry about those other things. That's a problem that's solvable, and I'd rather solve that problem with a bigger pie than not have the problem and have a smaller pie.

Altig: Yeah, just quickly: the premise of the question is exactly right. It is not useful for policy makers to say things like, "Well, yeah, but we're multiple times better off than the richest king in the 16th century." Look, you can have lots of consumer surplus, but if everyone else has got more than you, and you don't have a job, and you're struggling relative to everyone around you, you're going to feel bad. The consequences of that, whether it's trade or technology, come down to the whole business of how you can manage a civil society and deliver what you need to deliver to your citizenry in a world of change. That one is a struggle.

Harker: Yeah, and I think that is a discussion we're going to continue to have: what is the goal? What's the goal of the economy? If you take the strict interpretation from economics, yeah, zero work is ideal, but some people would argue that as human beings we're designed for work in some way, shape, or form, whether it's compensated or not. I think there's a lot more to come on this topic. Thanks, Chad and Dave, for a very provocative session.