Take On Payments, a blog sponsored by the Retail Payments Risk Forum of the Federal Reserve Bank of Atlanta, is intended to foster dialogue on emerging risks in retail payment systems and enhance collaborative efforts to improve risk detection and mitigation. We encourage your active participation in Take on Payments and look forward to collaborating with you.
October 19, 2020
All Things Biometrics
Since 2014, I have written a number of posts in our Take on Payments blog on biometrics technology—the automated capture of an individual's unique physical or behavioral characteristics—and related issues. In fact, the Retail Payments Risk Forum (RPRF) hosted a conference on biometrics in November 2015 that brought experts in the field from all over the world to discuss the present and future state of biometrics used in consumer applications. Since that time, we have seen some smartphones move from using fingerprint readers to using facial recognition to authenticate users, with some applications even using voice recognition.
But as developers and users are discovering, not all biometric methodologies are equally suited for all applications. We have to consider factors such as risk level, cost, operating environment, and targeted population to determine if a particular biometric modality is better suited than another for an intended application. And along with the technology, a host of policy issues such as privacy, consent, and trust have emerged.
We had hoped to convene another comprehensive biometrics conference this fall, but due to the COVID-19 restrictions on group gatherings, we have postponed the event and hope to convene it in the fall of 2021. We continue to seek ways to fulfill the RPRF's mission of research and education on payment risk issues, so we will focus on biometrics in our next Talk About Payments webinar, which is scheduled for the afternoon of October 29.
We are excited to have James "Jim" Loudermilk as our guest in discussing the current state of biometrics in authentication as well as related policy issues. Jim was a technology executive with the Federal Bureau of Investigation for 21 years, where he represented the bureau nationally and internationally on identification and innovation issues. He was a member of the FBI Biometric Steering Committee and represented the FBI with the National Science Foundation Center for Identification Technology Research. Jim is highly regarded by his peers for his knowledge of biometrics and their applications.
I hope you will join Jim and me as we discuss all things biometrics on October 29 from 3 to 4 p.m. (ET). The webinar is open to the public and free of charge, but you must register in advance to participate. Once you've registered, you will receive a confirmation email with login and call-in information. You can register here or through our Talk About Payments web page. If you have any questions concerning the webinar, please direct them to me at David.email@example.com. Jim and I look forward to seeing you on the 29th.
October 13, 2020
Digital Payments and the Path to Financial Inclusion
When Dr. Raphael Bostic joined the Atlanta Fed as president and CEO in 2017, the Bank embarked on a reevaluation of its strategies. One important outcome was the creation of new areas of focus—high-priority initiatives—for the Atlanta Fed that include promoting safer payments innovation and advancing economic mobility and resilience. On the surface, these goals may appear to be unrelated. They are not. A new white paper by Raphael Bostic, Shari Bower, Oz Shy, Larry Wall, and me reflects the Bank's view that safer payments innovation can help advance economic mobility and resilience.
With our new focus, the Atlanta Fed refreshed the way we engage with our industry partners. As part of that refresh, the Risk Forum met with the PeachPay community last year to begin a discussion of how payments innovation can help lead to better financial inclusion for a vulnerable population. PeachPay is a group of key players in the U.S. payments processing industry who meet to share best practices, provide education, explore new product innovation, and discuss regulatory and policy issues that may affect the payments industry. Thanks to everyone who helped with this work.
As with most new technology, digital payments and fintech have pros and cons. All in all, however, we think digital payment services have the potential to create opportunities for people who mostly use cash to better manage their financial lives. That's why we focused the white paper on addressing the needs of cash-based users.
Of course, when we started writing, we had no idea that a global pandemic was in our future. We knew the trends toward electronic and digital payments were growing, but we could not have predicted that these trends would accelerate as they have. That acceleration is troublesome because, before the pandemic, an estimated 4 percent of consumers, approximately 5.14 million households, had no card or bank account. Having one or the other is required for anyone to participate in the digital economy. On top of that, the lack of a card or bank account is most common among people with the lowest incomes.
The white paper doesn't provide a definitive solution to the exclusion of cash-based users, but it does offer some ideas:
- Preservation of cash: Mandate, by law or regulation, indefinite support of the cash economy. Options include requiring that banks continue to support the cash infrastructure or prohibiting merchants from refusing cash.
- Bridge the gap: Create on-ramps to digital payments for cash users by offering cash-in/cash-out networks or public bank accounts in convenient locations.
- Focus on cashless solutions: These solutions, including digital dollars or central-bank-backed digital currency, would keep characteristics of cash payments.
We hope you will read the paper.
Ultimately, solutions that keep the economy open to cash-based consumers will require collaboration among people in the payments industry, fintech providers, and the Fed, and it will require many ideas. We are forming a Special Committee on Payments Inclusion with many stakeholders to help generate these ideas. As we say in the paper: "Together, we will work to ensure that all individuals have ubiquitous access to safe, efficient, and inclusive payments."
October 5, 2020
Facial Recognition Bias: Reality or Myth?
In an August post, I wrote about some academic reports that had alleged ethnic and gender bias in facial recognition algorithms. These reports resulted in some major technology vendors withholding the sales of their facial recognition software to law enforcement agencies in the United States. Fortunately, we have an objective organization to help answer the question of whether facial recognition algorithms are biased.
That organization is the nonregulatory government agency, the National Institute of Standards and Technology (NIST). NIST, under the umbrella of the U.S. Department of Commerce, was founded in 1901 and operates one of the country's oldest physical science laboratories, providing measurements and standards for a wide range of technologies including biometrics. Its mission is to "promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life."
Since 2000, NIST has been evaluating the performance of facial recognition algorithms submitted by vendors as part of an ongoing objective measurement effort called the Face Recognition Vendor Test. Testing results are updated and published annually. While vendor participation is voluntary, NIST believes the participants are representative of a substantial part of the facial recognition industry.
The overall testing cycle consisted of three types of facial recognition algorithm testing: one-to-one matching, one-to-many matching, and, most recently, testing of demographic effects. This testing used a database of approximately 18 million quality facial images representing 8.5 million individuals. The testing included 189 commercial algorithms submitted by 99 developers from companies and academic institutions from all over the world.
The measurements that NIST made were categorized into false negatives (where two images of the same individual are not associated) and false positives (where images of two different individuals are erroneously identified as the same person). The latter error has far greater consequences, including the risk of giving an unauthorized person access to a secure location or of possibly falsely arresting an individual. The overall results of the testing are too detailed and numerous to list in this post. As one would expect with such a wide set of submissions, the results of the various algorithms ranged from what I would categorize as highly accurate to substandard. I recommend you watch a YouTube video in which Mei Ngan of NIST covers the test results. (The Women In Identity organization produced the video.) I think that, after you see the results, you'll agree with my assessment of whether there is bias in facial recognition: "It depends." Some of the algorithms show no bias and others do, indicating a need for additional improvement in their development.
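In performance testing of this kind, these two error types are typically expressed as rates measured against a decision threshold on the matcher's similarity score. The sketch below, which is not drawn from NIST's test code and uses made-up scores, shows how the two rates can be computed from labeled comparisons:

```python
# Illustrative sketch (not NIST's code): computing the two error rates for
# one-to-one face matching from labeled similarity scores and a threshold.

def error_rates(genuine_scores, impostor_scores, threshold):
    """Return (false_negative_rate, false_positive_rate).

    genuine_scores:  scores for pairs of images of the SAME person
    impostor_scores: scores for pairs of images of DIFFERENT people
    A pair is declared a match when its score meets the threshold.
    """
    # False negative: same person, but the score falls below the threshold.
    fnr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    # False positive: different people, but the score reaches the threshold.
    fpr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fnr, fpr

# Made-up scores in [0, 1] for illustration only.
genuine = [0.91, 0.85, 0.78, 0.62, 0.95]
impostor = [0.10, 0.33, 0.71, 0.25, 0.05]
print(error_rates(genuine, impostor, threshold=0.70))  # -> (0.2, 0.2)
```

Lowering the threshold trades false negatives for false positives, which is why the costlier error, the false positive, usually drives where a system sets its threshold.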
In my August post, I also raised the issue of how face coverings will affect the performance of facial recognition programs such as those run by the Transportation Security Administration and Customs and Border Protection. NIST has recently tested the algorithms on images of masked faces and generally found that accuracy was substantially lower, although the developers are making modifications to the algorithms to improve their performance. Ms. Ngan covers this subject in her presentation as well.
Stay tuned for more biometrics information and discussion in our posts, and check out our October 29 Talk About Payments webinar that will feature one of the foremost biometrics experts in the country.
September 21, 2020
Personal Responsibility for Irrevocable Payment Scams
Those who have experience with parenting know that with many joys come challenges. For me, one of those challenges is teaching my children the importance of personal responsibility. Picking up after themselves, making sure their chores are finished before running out the door to play, and owning up to mistakes are just some of the personal responsibilities that they struggle with daily. And while there is a light at the end of the tunnel for this struggle, I firmly believe it is their having to experience the consequences that is getting us there. In this parent's opinion, knowing there are consequences for their actions helps children become responsible.
You might be thinking, "What does this notion of teaching personal responsibility have to do with payments?" Earlier this year, my colleague Dave Lott started the dialogue among those of us at the Risk Forum, and perhaps within some of our readers' circles, when in a post he posed the question "What is the likelihood that similar protections will be extended to consumers here (United States)?" The post was related to the extension of consumer protections in the United Kingdom to combat its growing problem of authorized push payment (APP) fraud.
In August, a UK-based consumer advocate organization called Which? released a research report based on the experiences of 150 consumers related to the Contingent Reimbursement Model (CRM) Code adopted by many financial institutions in the United Kingdom in 2019. The CRM Code has two primary goals: to reduce the occurrence of APP fraud and, for the fraud that occurs, to reduce the impact. Many of these scam payments are occurring on the United Kingdom's faster payments rail, which was designed to make payments immediate and irrevocable. The report concluded that consumers' experiences with reimbursement for APP scams were mixed. Some consumers were reimbursed by their financial institution after authorizing payments to scammers while others were unable to receive any reimbursements.
The primary payment instrument in the United States today for large-scale corporate APP scams is the wire transfer. For consumers, person-to-person (P2P) services such as Cash App, Venmo, and Zelle are being used to scam individuals out of money. All these payments, both business and consumer, are irrevocable. Once the payments leave their accounts, neither the financial institution nor the service provider has liability. But should individuals in the United States, like those in the United Kingdom, be afforded protections for these wire and P2P payments if they're scammed? And should these protections also apply to newer real-time payment schemes here in the United States?
My personal belief is that financial institutions and P2P services should not be responsible for people who fall victim to APP scams. Their responsibility should be limited to educating their customers on the rules around these payments and their finality when executed. APP scams are often the result of social engineering campaigns, and I believe that, just as I expect my children to accept personal responsibility for their mistakes, it's fair for consumers to accept responsibility for making sure they do not become the next social engineering victim. Do you think this is a reasonable approach to these scams and payments? Or should the U.S. banking industry and regulators move toward a model like the one the United Kingdom has in place?