Take On Payments, a blog sponsored by the Retail Payments Risk Forum of the Federal Reserve Bank of Atlanta, is intended to foster dialogue on emerging risks in retail payment systems and enhance collaborative efforts to improve risk detection and mitigation. We encourage your active participation in Take on Payments and look forward to collaborating with you.
Comments are moderated and will not appear until the moderator has approved them.
Please submit appropriate comments. Inappropriate comments include content that is abusive, harassing, or threatening; obscene, vulgar, or profane; an attack of a personal nature; or overtly political.
In addition, off-topic remarks and spam are not permitted.
August 24, 2020
Facial Recognition Biometrics: Bruised but Still Standing
So far, 2020 has been a rocky year for facial recognition biometrics. In June, Amazon, Microsoft, and IBM delivered a body blow, announcing they would not sell their facial recognition software to law enforcement agencies. They cited a lack of accuracy, a potential for misuse or abuse, and the absence of federal privacy legislation to safeguard individual rights. Widespread mask wearing during the COVID-19 pandemic dealt another punch. Masks have generally rendered facial recognition inoperable for many mobile phone applications, and they have hobbled the Transportation Security Administration's plans to further automate passenger authentication and check-in processes. Will the technology be able to recover and go another round?
Unfortunately, there is a great deal of misinformation and misinterpretation of studies about the technology behind facial recognition and its use, particularly with regard to claims of racial and gender bias. Critics often point to a 2018 study by MIT and Microsoft researchers in which three facial classification algorithms misclassified the gender of light-skinned males at a rate of less than 1 percent but misclassified darker-skinned females at rates as high as 34 percent. Critics of facial biometrics technology have pointed to the research as evidence of bias against various minority groups.
It is important to note that "gender classification" is very different from "facial recognition," although they are often lumped together in the media. In a gender classification process, a digital facial image of an individual is captured and processed through an algorithm that determines whether the image is that of a male or female. Numerous studies have shown that the accuracy of such classification systems is largely based on the database of images being used to "train" the algorithm—that is, to teach it to properly classify an image. The smaller the database, the less accurate the classification.
In a facial recognition process, the digital image captured by the camera is compared using a recognition algorithm to see if it matches the individual's image in a database or on their identification document. While the top performing algorithms are highly accurate, studies have found that results can vary based on lighting, camera definition, viewing angle, and other factors. While most people think facial recognition is new technology, the casino industry has used it to identify banned players since the 1990s.
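The matching step described above can be sketched at a high level. This is a minimal illustration, not any vendor's actual algorithm: it assumes a face-encoder model has already converted each facial image into a numeric embedding vector (the function names and the 0.8 threshold are hypothetical), and it simply compares the captured image's embedding against an enrolled one.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(probe_embedding, enrolled_embedding, threshold=0.8):
    """Declare a match when the probe is sufficiently similar to the enrolled template.

    Real systems tune the threshold to balance false accepts against
    false rejects; lighting, camera quality, and viewing angle all shift
    where the probe embedding lands.
    """
    return cosine_similarity(probe_embedding, enrolled_embedding) >= threshold
```

The accuracy variations the studies found show up here as noise in the probe embedding: a poorly lit or off-angle capture produces a vector farther from the enrolled template, pushing the similarity score toward the threshold.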
In a future post, I will discuss the findings of the National Institute of Standards and Technology in its 2020 evaluation of more than 200 facial recognition algorithms. The promising news is that the top performing algorithms showed no discernible bias.
While there are certainly privacy and other issues connected to facial recognition and other biometric technologies, I believe objective education and discussions can address these issues. So I think the technology is not on the ropes but is ready to go another couple of rounds.
April 27, 2020
My Internet Journey of Self-Discovery
I don't know how many times my Social Security number has been compromised, much less any other personally identifiable information (PII). Knock on wood, so far I have avoided identity theft, synthetic or otherwise. I have taken all of the recommended steps to protect myself—I get fraud alerts on my credit reports, I've implemented identity monitoring, and so forth. However, given that hackers frequently sell stolen data online, I fear my Social Security number lingers on the dark web in perpetuity, waiting to be exploited at any time. My curiosity being what it is, I set off on the interwebs to see what I could find.
An internet search string asking "How many times has my personal data been breached?" returned some interesting results. According to the website Have I Been Pwned?, a searchable repository of data breaches, my personal email address has been breached at least a dozen times going back to 2008. Not all these instances were known to me—I do not recall having a MySpace page! I have also been notified of other breaches that were not listed here, including from financial services companies and medical providers, so the number is surely higher.
I was surprised to learn that my email address was discovered in multiple credential stuffing lists, including "Collection #1," a large collection of credential stuffing lists discovered in January 2019. According to Have I Been Pwned, it contained 773 million unique email addresses and passwords. Credential stuffing is an automated cyberattack in which criminals use such collections of user names and passwords to attempt to gain fraudulent access to user accounts. On the bright side, if there is one, the website indicated that none of my information had been "pasted," meaning posted on public content-sharing websites frequented by hackers. For over a decade, I have used a password vault to generate and store all of my user profiles and account logins, and I currently have over 200 different records. I do not reuse passwords, especially for profiles that have payment instruments tied to them, and I believe this practice has provided some measure of protection from this type of activity.
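For readers curious how a service like Have I Been Pwned can check a password against breach corpora without ever receiving the password, its Pwned Passwords range API uses a k-anonymity scheme: the client hashes the password with SHA-1 and sends only the first five hex characters of the hash. A sketch of the client-side step (the function name is mine, not the API's):

```python
import hashlib

def hibp_range_query_parts(password):
    """Split a password's SHA-1 hash into the 5-character prefix sent to the
    Pwned Passwords range API and the suffix that is checked locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

# In practice, the client GETs https://api.pwnedpasswords.com/range/<prefix>,
# which returns every breached-hash suffix sharing that prefix, and scans the
# response locally for its own suffix. The full password, and even its full
# hash, never leaves the client's machine.
```

This is why checking your credentials against such a service does not itself expose them: hundreds of unrelated hashes share any given five-character prefix, so the server cannot tell which one you were asking about.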
The next stop on my journey was the credit bureau to see what else I could learn about the state of my PII. Experian offers consumers a free "Dark Web Internet Surveillance Report." According to this source, five associated records were located, but my Social Security number is currently not on the dark web.
My identity protection monitoring service was the final stop to review my digital exposure report on information about me found on the internet. Relief! My exposure is consistent with the reports from the other sources.
I would rate myself as average in terms of my digital footprint and doubt my internet habits differ from most people's. I doubt my breach experience differs much, either, but from this journey, I've discovered that the safeguards I have in place to protect my personal information seem to be working. Have you taken an internet journey to discover where your personal information may reside? What steps have you taken to ensure your identity remains safe?
May 20, 2019
Could Federal Privacy Law Happen in 2019?
Some payments people have suggested that this could be the year for mobile payments to take off. My take? Nah. I gave up on that thought several years ago, as I've made clear in some of my previous posts. I'm actually wondering if this will be the year that federal privacy legislation is enacted in the United States. The effects of the European Union's General Data Protection Regulation (GDPR) that took effect a year ago (see this Take on Payments post) are being felt in the United States and across the globe. The GDPR has essentially created a global standard for how companies should protect citizens' personal data, along with the right of everyone to understand what data is being collected and how to opt out of that collection. While the GDPR technically protects only EU citizens, including when they travel outside the European Union, most businesses have taken a cautious approach and treat every transaction they process, financial or informational, as potentially covered by the GDPR.
A tangible impact of the GDPR in the United States is that the state of California has passed a data privacy law known as the California Consumer Privacy Act of 2018 (CCPA) that is partly patterned after the GDPR. The CCPA gives California residents five basic rights related to data privacy:
- The right to know what personal information a business has collected about them, where it was obtained, how it is being used, and whether it is being disclosed or sold to other parties and, if so, to whom
- The right to access that personal information free of charge up to two times within a 12-month period
- The right to opt out of allowing a business to sell their personal information to third parties
- The right to have a business delete their personal information, except for information required to effect a transaction or comply with other regulatory requirements
- The right to receive equal service and pricing from a business, even if they have exercised their privacy rights under the CCPA
According to the National Conference of State Legislatures (NCSL), 17 states have mandated that their governmental websites and access portals state their privacy policies and procedures. Additionally, other states have laws addressing specific privacy concerns, such as children's online privacy, the monitoring of employee email, and e-reader policies.
Take On Payments has previously discussed the numerous efforts to introduce federal legislation regarding privacy and data breach notification, efforts that have gained little traction. So why do I think change is in the air? The growing trend of states implementing privacy legislation is putting pressure on Congress to take action in order to have a consistent national policy and process that businesses operating across state lines can understand and follow.
What do you think?
By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed
February 11, 2019
AI and Privacy: Achieving Coexistence
In a post early last year, I raised the issue of privacy rights in the use of big data. After attending the AI (artificial intelligence) Summit in New York City in December, I believe it is necessary to expand that call to the wider spectrum of technology under the banner of AI, including machine learning. There is no question that increased computing power, reduced costs, and improved developer skills have made machine learning programs more affordable and powerful. As discussed at the conference, the various facets of AI technology have reached far past financial services and fraud detection into numerous aspects of our lives, including product marketing, health care, and public safety.
In May 2018, the White House announced the creation of the Select Committee on Artificial Intelligence. The main mission of the committee is "to improve the coordination of Federal efforts related to AI to ensure continued U.S. leadership in this field." It will operate under the National Science and Technology Committee and will have senior research and development officials from key governmental agencies. The White House's Office of Science and Technology Policy will oversee the committee.
Soon after, Congress established the National Security Commission on Artificial Intelligence in Title II, Section 238 of the 2019 John McCain National Defense Authorization Act. While the commission is independent, it operates within the executive branch. Composed of 15 members appointed by Congress and the Secretaries of Defense and Commerce—including representatives from Silicon Valley, academia, and NASA—the commission's aim is to "review advances in artificial intelligence, related machine learning developments, and associated technologies." It is also charged with looking at technologies that keep the United States competitive and considering the legal and ethical risks.
While the United States wants to retain its leadership position in AI, it cannot overlook AI's privacy and ethical implications. A national privacy advocacy group, EPIC (or the Electronic Privacy Information Center), has been lobbying hard to ensure that both the Select Committee on Artificial Intelligence and the National Security Commission on Artificial Intelligence obtain public input. EPIC has asked these groups to adopt the 12 Universal Guidelines for Artificial Intelligence released in October 2018 at the International Data Protection and Privacy Commissioners Conference in Brussels.
These guidelines, which I will discuss in more detail in a future post, are based on existing regulatory guidelines in the United States and Europe regarding data protection, human rights doctrine, and general ethical principles. They call out that any AI system with the potential to impact an individual's rights should have accountability and transparency and that humans should retain control over such systems.
As the strict privacy and data protection elements of the European Union's General Data Protection Regulation take hold in Europe and spread to other parts of the world, I believe that privacy and ethical elements will gain a brighter spotlight and AI will be a major topic of discussion in 2019. What do you think?
By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed