EFA Statement on AI

Introduction

While still in its early stages in Europe, the use of AI and machine-learning technologies is becoming increasingly important in the global fight against money laundering and terrorist financing. In recent years, fraudsters and criminals have become smarter, with enhanced technological capabilities and a growing capacity to dupe the current system using deepfakes and other means. The fight against money laundering is vital to consumer protection. With the right framework, the EU can regulate innovative AI in a way that facilitates further digitalisation, allows wide-scale uptake of AI across the EU, enables new technologies to effectively tackle fraudulent activity within the bloc and, most importantly, protects its citizens.

However, the EFA is concerned that there is currently a lack of clarity around some of the key terms and definitions in the proposals, as well as a lack of consistency with the supporting documents that underpin the rationale for the AI Act. If left unresolved, these ambiguities could have a significant impact on AI investment across the EU. It is therefore important that all stakeholders share the same understanding of these terms and of the scope of the definitions used.

The EFA would like to highlight the following issues in particular:

  • Biometric identification vs. authentication and verification

We encourage the co-legislators to reiterate a clear differentiation in terminology. The first distinction is between biometric authentication and verification (used, for example, by fintechs to scale up and onboard customers) and remote biometric identification (used, for example, by authorities for mass surveillance in public spaces), as was done in the European Commission’s White Paper on AI.

Secondly, a further distinction should be drawn between verification (the process of confirming that individuals are who they claim to be) and authentication (the process of matching an identifier against a specific stored identifier in order to grant access to a device or service). This differentiation has been clearly outlined in a recent study by the European Parliament’s Directorate-General for Internal Policies.
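To make these distinctions concrete, the sketch below contrasts the two operations in simplified Python. It is an illustrative assumption only: the similarity() helper, the 0.9 threshold and all names are hypothetical stand-ins, not references to any real system or to the text of the Act.

```python
import math

def similarity(a: list[float], b: list[float]) -> float:
    """Toy stand-in for a biometric matcher: cosine similarity of
    feature vectors. Real systems use dedicated biometric models."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def verify(live_sample, claimed_template, threshold=0.9) -> bool:
    # Verification / authentication: a one-to-one comparison against
    # the single template the individual claims to match.
    return similarity(live_sample, claimed_template) >= threshold

def identify(live_sample, database, threshold=0.9) -> list:
    # Remote identification: a one-to-many search across a database of
    # many individuals, with no prior identity claim.
    return [person for person, template in database.items()
            if similarity(live_sample, template) >= threshold]
```

Under this framing, verify() corresponds to the one-to-one checks fintechs perform when onboarding customers, while identify() corresponds to the one-to-many searches associated with mass surveillance in public spaces.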

In the proposal for the AI Act, these important nuances in terminology have been lost. In particular, the definition of “remote biometric identification system” does not clearly exclude verification and authentication from its scope. As a result, products designed exclusively to combat fraud, which rely on an individual’s unique biological characteristics to verify that they are who they claim to be, could fall within the scope of “remote biometric identification systems” and be subject to high-risk requirements or even prohibited.

Consequently, the EFA urges the co-legislators to clarify the definitions and explicitly exempt identity verification and authentication from the scope of the AI Act. This would create legal certainty and prevent both processes from falling under the “high-risk” classification.

It is also essential that the “high-risk” classification takes use cases into account, for example by exempting AI systems that combat fraud from the scope and constraints of “high-risk” AI. Such exemptions for fraud prevention have precedent in the framework of the Payment Services Directive 2 (PSD2), which shows that EU legislation can provide the necessary flexibility for such socially beneficial activities.

  • Human oversight requirements

The EFA is concerned that the European Commission’s proposal to require all decisions made through (high-risk) biometric identification to be overseen by two natural persons will not contribute to the desired outcome of more accurate decisions.

FinTechs often apply human oversight where decisions are uncertain. However, the EFA encourages the co-legislators to review the horizontal human-oversight requirement: it would be more beneficial to limit mandatory human oversight to sectors where it is genuinely needed (e.g. law enforcement), as illustrated in the sketch below.
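The minimal sketch below shows the kind of risk-based approach the EFA has in mind: only uncertain automated decisions are routed to a human reviewer. The 0.90 threshold, the Decision class and the review queue are hypothetical examples, not part of the Commission’s proposal.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    approved: bool
    confidence: float  # model confidence in [0, 1]

REVIEW_THRESHOLD = 0.90  # hypothetical cut-off below which a human reviews

def route_decision(decision: Decision, review_queue: list) -> str:
    """Auto-apply confident decisions; escalate uncertain ones."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return "approved" if decision.approved else "rejected"
    review_queue.append(decision)  # hand the case over to a human reviewer
    return "pending_human_review"
```

The design point is that a blanket oversight requirement would force the first branch (confident, routine decisions) through human review as well, without evidence that this improves accuracy.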

  • Ban on social scoring for private entities

The EFA agrees that social scoring should not be used to discriminate against people on the basis of vulnerabilities related to age, disability, or social or economic situation. However, any new prohibition on social scoring should be proportionate and evidence-based, and should not prevent the use of AI for fraud prevention and financial risk management. Given their social benefits in fraud prevention and the global fight against financial crime, the EFA believes such uses of AI should be exempted from the prohibition.

  • The private power to sue

In the EU, the enforcement powers of public authorities are complemented by private enforcement mechanisms. The benefits of private enforcement include stronger incentives, because private actors benefit from success in litigation, and better information, because private plaintiffs are personally involved in the case at hand. The AI Act, however, does not provide private actors with a right to sue. The EFA believes that including such a right in the Act could be highly beneficial in this respect.
