Where there is much light, there is also much shadow: this famous line from Goethe aptly describes the dilemma banks face at the moment, as they come increasingly into the focus of cybercriminals. Worse, with the rapid progress of AI, these attacks are becoming more frequent, more sophisticated, and harder to detect. This challenges traditional security systems and calls regulators such as the German Federal Office for Information Security (BSI) and the financial regulator BaFin into action.
Nico Leidecker, a renowned IT security expert at NVISO in Frankfurt am Main, sheds light on the role of AI in modern cybercrime. With the help of artificial intelligence tools such as ChatGPT, hackers can identify vulnerabilities in banking systems in a targeted way. These tools use machine learning to recognise patterns in security systems and to generate tailored phishing emails. Such an attack can unfold in seconds, and a single click by a careless employee can be enough to compromise the entire system.
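The same pattern-recognition techniques can of course be turned around for defence. As a rough illustration of the underlying idea, here is a minimal sketch of a text classifier that learns to flag phishing-like emails; the training data is invented for illustration, and real systems are trained on many thousands of labelled messages:

```python
# Minimal sketch: the same machine-learning pattern recognition used by
# attackers, applied defensively to flag phishing-like emails.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data, for illustration only.
emails = [
    "Your account has been locked. Verify your credentials immediately: http://bank-secure.example",
    "Urgent: wire transfer pending approval, click here to confirm",
    "Meeting moved to 3 pm, see updated agenda attached",
    "Quarterly report draft attached, comments welcome by Friday",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new, unseen message.
suspect = "Your credentials expire today - verify your account immediately"
print(model.predict_proba([suspect])[0][1])  # estimated phishing probability
```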
Another alarming example comes from a team of researchers at Sensity, a security firm specialising in deepfake detection. They showed how biometric checks used by both traditional banks and cryptocurrency exchanges can be fooled by deepfake technology. Specifically, they bypassed an automated liveness test using AI-generated faces. These verification processes, often called “know your customer” or KYC checks, are critical to the security of financial transactions. They require users to provide photos of their IDs and faces, which are matched in real time against the face captured by the camera.
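To make the attack surface concrete, the matching step in such a KYC check can be sketched roughly as follows. The embedding vectors here are invented stand-ins for the output of a real face-recognition model; only the comparison logic is shown:

```python
# Minimal sketch of the matching step in a KYC check: compare a face
# embedding extracted from the ID photo with one from the live camera image.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def faces_match(id_embedding: np.ndarray, live_embedding: np.ndarray,
                threshold: float = 0.8) -> bool:
    # Deepfakes attack exactly this step: an AI-generated face can yield a
    # live_embedding close enough to pass the threshold, which is why a
    # separate liveness test is needed -- and why defeating that test, as
    # the Sensity researchers did, is so serious.
    return cosine_similarity(id_embedding, live_embedding) >= threshold

# Illustrative vectors standing in for model output.
id_emb = np.array([0.12, 0.85, 0.33, 0.41])
live_emb = np.array([0.10, 0.80, 0.35, 0.45])
print(faces_match(id_emb, live_emb))
```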
The regulatory challenge and new threats
The financial industry, which processes billions of transactions and vast amounts of confidential data every day, is a rewarding target for AI-based attacks. As Leidecker explains, attackers use AI technologies to harvest public data and build detailed profiles of potential targets, covering personal information, business relationships, networks, and even patterns of banking behaviour.
Yet despite advanced technology, many of these attacks go entirely unnoticed. Désirée Sacher-Boldewin, a cyber defence expert, stresses that the current threat environment requires an urgent reassessment of security protocols. After all, many banks still rely on outdated systems and are unprepared for the sophisticated attacks that AI enables.
Regulation presents a further hurdle: while AI technology has the potential to increase security, regulators demand transparency and traceability – qualities that AI decisions do not always offer.
David Lawler’s LinkedIn article offers a compelling account of how banks today are using AI applications in loan approval processes. However, these systems often cannot clearly explain their decisions, which raises questions of transparency and fairness.
He also illustrates that while AI can improve many processes in the banking world, challenges around transparency and ethics remain to be addressed.
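The transparency gap is easiest to see in code. The sketch below, with invented feature names and data, shows that even a deliberately simple, interpretable credit model needs extra work to explain an individual decision; for the complex models banks actually deploy, no equally direct read-out exists:

```python
# Minimal sketch of the transparency problem in AI-based loan approval.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: income (EUR thousands), debt ratio, years employed.
features = ["income", "debt_ratio", "years_employed"]
X = np.array([[55, 0.30, 4], [22, 0.65, 1], [70, 0.20, 9], [30, 0.55, 2]])
y = np.array([1, 0, 1, 0])  # 1 = loan approved in the historical data

model = LogisticRegression().fit(X, y)

applicant = np.array([40, 0.45, 3])
prob = model.predict_proba([applicant])[0][1]
print(f"approval probability: {prob:.2f}")

# Per-feature contribution (coefficient x value) is one simple explanation
# for a linear model; gradient-boosted trees or neural networks offer no
# equally direct answer to "why was this applicant rejected?".
for name, coef, value in zip(features, model.coef_[0], applicant):
    print(f"{name}: contribution {coef * value:+.2f}")
```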
Federal Office for Information Security (BSI) warns against deepfakes
The German Federal Office for Information Security (BSI) explicitly warns of deepfake attack methods enabled by AI. This technology can manipulate voices, images, and even videos so that they look and sound deceptively like real people.
“The high performance of off-the-shelf hardware means that any interested person can create videos or images that can hardly be distinguished from real recordings. A prominent example is the video of former U.S. President Obama criticising his successor, Donald Trump. Today it is still largely possible to tell from details that it is a fake. But it is only a matter of time before this will no longer be possible manually.”
(Matthias Neu, Melanie Müller, Biju Pothen, and Moritz Zingel in the book chapter “Digital Ethics,” page 97)
Measures against the Deepfake Threat
In the face of this growing threat, banks have taken proactive measures. They invest in advanced detection systems, implement multi-step authentication processes, and regularly train their employees to deal with potential security threats. One example of such advanced technology is ZAK Veriscore, software that streamlines the legitimation process in banks and recognises the authenticity features of over 1,300 different ID cards. This allows banks to verify the identity of their customers accurately and to prevent fraudulent activity effectively.

Collaboration with cybersecurity firms and artificial intelligence experts is also being intensified to keep abreast of the latest deepfake developments, and feedback loops are being set up so that suspicious activity can be reported quickly and effectively. Some banks are experimenting with blockchain technology to verify the authenticity of documents and communications. One thing is clear, however: despite these efforts, the threat of deepfakes remains an ongoing challenge that requires constant vigilance and innovation.
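The blockchain idea mentioned above boils down to a simple mechanism: record a cryptographic fingerprint of a document in a tamper-evident ledger, then recompute and compare it later. A minimal sketch, with a plain dictionary standing in for the actual distributed ledger:

```python
# Minimal sketch of ledger-based document verification: the recorded
# SHA-256 fingerprint reveals any later alteration of the document.
import hashlib

ledger: dict[str, str] = {}  # doc_id -> recorded SHA-256 fingerprint

def register_document(doc_id: str, content: bytes) -> None:
    ledger[doc_id] = hashlib.sha256(content).hexdigest()

def verify_document(doc_id: str, content: bytes) -> bool:
    return ledger.get(doc_id) == hashlib.sha256(content).hexdigest()

original = b"Loan contract no. 4711, amount EUR 250,000"
register_document("contract-4711", original)

print(verify_document("contract-4711", original))                  # True
print(verify_document("contract-4711", original + b" (altered)"))  # False
```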
IT expert Leidecker likewise stresses the need for a proactive approach to security. He recommends that banks invest in AI-powered defences while training their employees on the latest security protocols. In an era in which technology is both a threat and a safeguard, staying one step ahead is critical.
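One common form such an AI-powered defence takes is anomaly detection on transaction streams. As a rough, hedged sketch with invented data and features, not a description of any particular bank's system:

```python
# Minimal sketch of an anomaly detector for transactions: it learns what
# "normal" looks like and flags deviations from that pattern.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: amount in EUR, hour of day. Mostly routine daytime payments.
rng = np.random.default_rng(0)
normal = np.column_stack([rng.normal(200, 50, 500), rng.integers(8, 18, 500)])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A large transfer at 3 a.m. stands out from the learned behaviour.
candidates = np.array([[180, 11], [25000, 3]])
print(detector.predict(candidates))  # 1 = normal, -1 = flagged as anomalous
```

Real deployments combine many such signals with human review; the point is that the same machine learning that powers the attacks described above can also watch the defender's perimeter around the clock.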