
March 23, 2020

Fast Cars and Fast Payments

My son and I recently attended the Daytona 500. As an 11-year-old, he is fascinated with fast cars and speeds that routinely exceed 200 mph. The cars were certainly fast at Daytona International Speedway this year. While he is blown away by the speed of the cars, I remain amazed at their overall safety record. There were numerous wrecks at this year’s race, but only one driver was seriously injured, in a horrifying last-lap wreck. The speed of the cars definitely makes for an exciting event, but at the end of the day, safety is vital. Nobody wants to see a driver injured, and racing organizations have gone to great lengths to make safety the top priority, even if it means compromising the cars’ speed. Could safety be as important as speed in payments, too?

In my 13 years in the payments industry, faster and safer have always been two (of several) buzzwords associated with payments. But faster, being much cooler to discuss, seems to get the attention all too often. (Don’t talk to my son about the safety of cars—he wants to talk speed!) I joke with my colleague about the surveys we often come across claiming that an extremely high percentage of people want faster payments. As a standalone question—yes, I can absolutely see that. Faster is better than the status quo or slower, right?

But we rarely get a glimpse into how important faster payments really are and whether people actually want them. Are respondents just giving the obvious answer to a leading question? How would people answer a question about faster payments that weighs speed against other attributes, such as safety?

In a recent survey, approximately 1,000 respondents in the United States between the ages of 16 and 75 were asked to choose the two most important characteristics of a payment instrument from among five: safety from fraud and theft, privacy protection, ease of use, wide acceptance, and speed. Only 12 percent chose speed as one of their top two. Interestingly, respondents most often chose safety (62 percent) and privacy (37 percent).
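Because every respondent picks two attributes, the percentages sum to more than 100. A minimal sketch of how such a top-two tally works, using made-up responses rather than the survey's actual data:

```python
from collections import Counter

# Made-up responses for illustration: each respondent picks the two
# attributes that matter most from the five in the survey.
responses = [
    ("safety", "privacy"),
    ("safety", "ease_of_use"),
    ("safety", "speed"),
    ("privacy", "wide_acceptance"),
]

counts = Counter(attr for pair in responses for attr in pair)
n = len(responses)
for attr, c in counts.most_common():
    # Every respondent contributes two picks, so the shares sum to 200%.
    print(f"{attr}: {100 * c / n:.0f}% named it a top-two attribute")
```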

Coming home from Daytona, my son and I talked about the race and just how amazing it was to watch in person. I asked him if he would like to see the cars go faster, in light of some of the awful crashes we saw. In his 11-year-old wisdom, he said the cars are probably fast enough because he didn’t like watching that final wreck. I could debate whether payments are fast enough, but much as with the cars racing in the Daytona 500, safety remains paramount for payment instruments and must remain at the forefront of any discussion on payments.

March 16, 2020

Are Emerging Payments More Vulnerable to Fraud?

Whenever I am in a conversation about new or emerging payment products or services, I invariably get asked whether I think they will attract heightened attention from criminals. My personal opinion is, "YES, at least initially!" Why do I have that opinion? The conventional wisdom is that criminals recognize that new payment systems are likely to have some security gaps in the beginning that can be exploited. There are a number of examples I can cite to support this position.

Consider the payment card enrollment process that accompanied the introduction of the Apple Pay wallet in late 2014. Whether it was a rush to get cardholders enrolled or loopholes in the identification and verification (ID&V) process, a number of the banks offering the service fell victim to fraud early on. Criminals enrolled stolen credit and debit cards in the service and then were able to make high-dollar purchases because of weak verification controls. Some industry observers cited initial fraud losses in the 600-to-800-basis-point range at some of the early issuers. That rate compares to an overall in-person payment card fraud rate of 12.2 basis points in 2015, cited in the Federal Reserve’s payments study supplement Changes in U.S. Payments Fraud from 2012 to 2016. Fortunately, the affected banks reacted quickly and shored up their payment card enrollment processes.
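For a sense of scale, a fraud rate quoted in basis points is simply dollars lost to fraud per $10,000 of transaction value. A quick sketch of the arithmetic, where the dollar amounts are illustrative and only the basis-point figures come from the sources above:

```python
def fraud_rate_bps(fraud_dollars: float, total_dollars: float) -> float:
    """Fraud losses expressed in basis points of total transaction value."""
    return 10_000 * fraud_dollars / total_dollars

# Illustrative volumes: $70 of fraud on $1,000 of early wallet purchases
# works out to 700 basis points, inside the reported 600-800 range.
print(fraud_rate_bps(70, 1_000))    # 700.0

# By contrast, the 2015 in-person card fraud rate of 12.2 basis points
# is about $1.22 of fraud per $1,000 of payments.
print(fraud_rate_bps(1.22, 1_000))  # 12.2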

Also consider the implementation of faster payments in the United Kingdom in 2008. As did other countries implementing faster payments, the United Kingdom tried to limit fraud by taking a measured approach. In the beginning, only credit push transactions with a maximum value of £10,000 (approximately $15,000) were eligible. (Most of the initial participating banks had lower limits.) In 2010, the maximum amount was raised to £100,000. Now the maximum limit is £250,000, although financial institutions may still set lower limits and differentiate between consumer and commercial account payments. My colleague Julius Weyman highlighted some of the fraud risks in faster payments in his 2016 working paper reviewing overall risks in faster payments schemes around the globe. He pointed to the 132 percent increase in online banking fraud the United Kingdom experienced in the year following implementation.
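The UK's measured approach boils down to a per-transaction cap that individual participants may tighten further. A minimal sketch of such a limit check follows; the bank-level limits and names are hypothetical, with only the £250,000 scheme cap taken from the text above:

```python
SCHEME_MAX_GBP = 250_000  # the current UK Faster Payments cap noted above

# Hypothetical participant limits: a bank may set limits below the
# scheme's cap and differentiate consumer from commercial accounts.
BANK_LIMIT_GBP = {"consumer": 25_000, "commercial": 100_000}

def within_limits(amount_gbp: int, account_type: str) -> bool:
    """Accept a credit push only if it is under both the scheme cap
    and the participant's own (possibly lower) limit."""
    return amount_gbp <= min(SCHEME_MAX_GBP, BANK_LIMIT_GBP[account_type])

print(within_limits(9_500, "consumer"))      # True
print(within_limits(150_000, "commercial"))  # False: over the bank's limit
```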

There is growing concern among consumers in the United States and the United Kingdom about the liability for authorized push payments—such as P2P payments—because of their near-real-time nature and their finality. In a future post, I'll examine this issue with authorized push payments and look at how the United Kingdom is dealing with it.

So, circling back to my initial question: do you believe that fraud rates for new and emerging payment products are likely to be higher than those for more established payment products? Let us know what you think.

April 1, 2019

Contactless Cards: The Future King of Payments?

Just over two years ago, my colleague Doug King penned a post lamenting the lack of dual interface, or "contactless," chip payment cards in the United States. In addition to having the familiar embedded chip, a dual interface card contains a hidden antenna that allows the holder to tap the card on, or wave it near, the POS terminal. This is the same technology—near field communication (NFC)—that the various pay wallets on mobile devices use.

Doug is now doing his daily fitness runs with a bigger smile on his face as the indicators appear more and more promising that 2019 will be the year of the contactless card. Large issuers have been announcing plans to distribute dual interface cards either in mass reissues or as a cardholder's current card expires. Earlier this year, some of the global brand networks launched advertising campaigns to make customers aware of the convenience that contactless cards offer.

So why have U.S. issuers not moved on this idea before now? I think there have been several reasons. First, for the last several years, financial institutions have focused much of their resources on the chip card migration. Second, contactless cards create an additional expense for issuers, and many of them wanted to let the market mature, as it has in a number of other countries. Finally, they were concerned about the failure of the contactless card programs that some of the large FIs introduced in the early 2000s, when most merchants lacked terminals capable of handling the technology.

The EMV chip migration solved much of the merchant terminal acceptance problem as the vast majority of POS terminals upgraded to support EMV chips can also support contactless cards. (While a terminal may have the ability to support the technology, the merchant has to enable that support.) Visa claims that as of mid-2018, half of POS transactions in the United States were occurring at terminals that were contactless-enabled. Another factor favoring contactless transactions is the plan by major U.S. mass transit agencies to begin accepting contactless payment cards. According to the American Public Transportation Association's 2017 Ridership Report, there were 41 transit agencies in the United States with annual passenger trip volumes of over 20 million trips.

Given that consumer payments is largely a zero-sum environment (growth in one payment form generally comes at the expense of another), these developments have led me to ask myself and others what effect contactless cards will have on consumers’ use of other payment forms—in particular, mobile payments. As my colleagues and I have written numerous times in this blog, mobile payments continue to struggle to gain consumer adoption, despite earlier predictions that they would catch on quickly. Some believe that the convenience of ubiquity and fast transaction speed will favor the dual interface card. Others think that the increased merchant acceptance of contactless will help push the mobile phone into becoming the primary payment form.

My personal perspective is that contactless cards will hinder the growth of in-person mobile payments. There are those who claim to leave their wallet at home and never their phone, and they will continue to be strong users of mobile payments. But the reality is that mobile payments are not accepted at all merchant locations, whereas payment cards are practically ubiquitous. While I am a frequent user of mobile payments, simply waving or tapping a card appeals to me. It's much more convenient than having to open the pay application on my phone, sign on, and then authorize the transaction.

Do you believe the adoption of contactless cards by consumers and merchants will be as successful as it was for EMV chip cards? And do you think that contactless cards will help or hinder the growth of mobile payments? Let us hear from you.

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

 

March 25, 2019

Safeguarding Privacy and Ethics in AI

In a recent post, I referred to the privacy and ethical guidelines that the nonprofit advocacy group EPIC (Electronic Privacy Information Center) is promoting. According to the group, these guidelines are based on existing regulatory and legal guidance in the United States and Europe regarding data protection, human rights doctrine, and general ethical principles. Given the continued attention to advancements in machine learning and other computing technologies falling under the marketing term “artificial intelligence” (AI), I thought it would be helpful to review these guidelines so readers can assess their validity and completeness. The headings and the italicized text in these guidelines are EPIC’s specific wording; the additional text is my commentary. It is important to point out that neither the Federal Reserve System nor the Board of Governors has endorsed these guidelines.

  • Right to Transparency. All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and techniques that produced the outcome. EPIC says the main elements of this principle can be found in the U.S. Privacy Act and a number of directives from the European Union. It is unlikely that the average person would be able to fully understand the complex computations generating a decision, but everyone still has the right to an explanation of and validation for the decision.
  • Right to Human Determination. All individuals have the right to a final determination made by a person. This ensures that a person, not a machine, is ultimately accountable for a final decision.
  • Identification Obligation. The institution responsible for an AI system must be made known to the public. There may be many different parties that contribute to an AI system, so it is important that anyone be able to determine which party has overall responsibility and accountability.
  • Fairness Obligation. Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions. I understand the intent of this principle—any program developed by a person will have some level of inherent bias—but how is it determined that the level of bias has reached an “unfair” level, and who makes such a determination?
  • Assessment and Accountability Obligation. An AI system should be deployed only after an adequate evaluation of its purpose and objectives, its benefits, as well as its risks. Institutions must be responsible for decisions made by an AI system. An AI system that presents significant risks, especially in the areas of public safety and cybersecurity, should be evaluated carefully before a deployment decision is made.
  • Accuracy, Reliability, and Validity Obligations. Institutions must ensure the accuracy, reliability, and validity of decisions. This basic principle will be monitored by the institution as well as independent organizations.
  • Data Quality Obligation. Institutions must establish data provenance, and assure quality and relevance for the data input into algorithms. As an extension of the accuracy obligation above, detailed documentation and secure retention of the input data help other parties replicate the decision-making process to validate the final decision. (A minimal sketch of such a decision record follows this list.)
  • Public Safety Obligation. Institutions must assess the public safety risks that arise from the deployment of AI systems that direct or control physical devices, and implement safety controls. As more Internet-of-Things applications are deployed, this principle will increase in importance.
  • Cybersecurity Obligation. Institutions must secure AI systems against cybersecurity threats. AI systems, especially those that could have a significant impact on public safety, are potential targets for criminals and terrorist groups and must be made secure.
  • Prohibition on Secret Profiling. No institution shall establish or maintain a secret profiling system. This principle supports the possibility of independent accountability by ensuring that an institution does not establish or maintain a separate, clandestine profiling system.
  • Prohibition on Unitary Scoring. No national government shall establish or maintain a general-purpose score on its citizens or residents. The concern this principle addresses is that such a score could be used to establish predetermined outcomes across a number of activities. For example, in the private sector, a credit rating score can be a factor not only in credit decisions but also in other types of decisions, such as for vehicle, life, and medical insurance underwriting.
  • Termination Obligation. An institution that has established an AI system has an affirmative obligation to terminate the system if human control of the system is no longer possible. I refer to this final principle as the “HAL principle,” from 2001: A Space Odyssey, in which the crew tries to shut down HAL (a Heuristically programmed ALgorithmic computer) after it starts making faulty decisions. A crew member finally succeeds in shutting HAL down, but only after it has killed all the other crew members. HAL is an extreme example, but the principle ensures that an AI system’s actions do not override or contradict the actions and decisions of the people responsible for the system.
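Several of these obligations (transparency, identification, human determination, and data provenance) amount in practice to keeping an auditable record alongside each automated decision. Here is a minimal sketch of what such a record might capture; the class and field names are hypothetical, my own invention rather than part of EPIC's guidelines:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List, Optional

@dataclass
class AIDecisionRecord:
    """An auditable record of one automated decision (illustrative only)."""
    responsible_institution: str   # Identification Obligation
    model_version: str             # Right to Transparency
    data_sources: List[str]        # Data Quality Obligation (provenance)
    factors: Dict[str, float]      # the factors and logic behind the outcome
    outcome: str
    human_reviewer: Optional[str] = None  # Right to Human Determination
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

record = AIDecisionRecord(
    responsible_institution="Example Bank",
    model_version="credit-model-2.3",
    data_sources=["bureau-file-2019-02", "application-form"],
    factors={"debt_to_income": 0.41, "credit_history_months": 84.0},
    outcome="declined",
    human_reviewer="analyst@example.com",
)
print(record.outcome, "- reviewed by", record.human_reviewer)
```

A record like this would let an individual, or an independent auditor, see the basis of a decision and trace accountability to a named institution and a named person.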

On February 11, 2019, the president signed an executive order promoting the United States as a leader in the use of AI. In addition to addressing technical standards and workforce training, the order called for the protection of “civil liberties, privacy, and American values” in the application of AI systems. As the pace of AI development increases, it seems important that an ethical framework be put in place. Do you think these are reasonable and realistic guidelines that should be adopted? Do you think some of them will hinder the pace of AI application development? Are any principles missing?

Let us know what you think.

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

 
