AI Creates New Challenges in the Fight Against Fraud
As consumers, we are all familiar with the concept of identity theft. But there is a similar crime with consequences that are potentially more serious. That crime is identity fraud. And it is getting progressively harder to detect and combat thanks to artificial intelligence (AI).
Generative AI took the world by storm when ChatGPT was released to the public a couple of years ago. Ever since, tech companies have been locked in a virtual arms race to determine who the dominant players will be. Meanwhile, cybercriminals have been working largely in the shadows to leverage AI for their own purposes.
Stopping identity fraud is considered part of modern fraud protection. Leading organizations like DarkOwl rely on a variety of tools, including specialized threat intelligence platforms, to identify and mitigate identity fraud problems before they blow up.
A Very Serious Problem
Fraud protection is always challenging because cybercriminals continue to come up with new ways of doing what they do. With the introduction of AI, their abilities exploded. Now we are looking at a very serious identity fraud problem. How bad is it?
According to a recent Bank Info Security report, 54% of organizations participating in its annual IT Pro Survey say they are “very concerned about AI increasing identity fraud.” That’s not all. Some 48% also say they lack confidence in their ability to defend against AI cyberattacks.
The main issue here is AI’s ability to generate convincing deepfakes via still images, video, and even audio. Cybercriminals can hijack a person’s identity and then use AI to present themselves as that person for whatever reason.
Numbers cited by Forbes indicate that the volume of deepfake videos posted online exploded by 900% between 2019 and 2020. Forbes also cites one example in which identity fraud was pulled off to the tune of $25 million. The scam was perpetrated by way of deepfake technology and a video phone call.
How They Do It
Given the sophistication of darknet intelligence and monitoring, you might be wondering how cybercriminals are getting away with identity fraud. First and foremost, they are intimately familiar with the fraud protection strategies enterprises rely on. They take that knowledge and combine it with emerging technologies capable of doing some amazing things.
Here are some examples of how they manage to pull off ID fraud:
- Synthetic identities – AI makes it possible to create synthetic identities by combining real and fictional information. An estimated 85% of all identity fraud cases nowadays involve synthetic identities.
- Password cracking – Although password cracking lost its luster some 20 years ago, AI has ensured its comeback. New AI tools make even the most hardened passwords susceptible, with the toughest often falling within minutes to a few hours (a back-of-the-envelope sketch follows this list).
- AI defenses – Cybercriminals are even leveraging AI to build better defenses against experts like DarkOwl. As for victims’ defenses, AI can help threat actors get around those, too.
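To put the password-cracking bullet in perspective, here is a minimal, back-of-the-envelope Python sketch that estimates worst-case brute-force time for passwords of various lengths at different guess rates. The guess rates and the assumed speedup from AI-guided guessing are hypothetical round numbers for illustration, not measured benchmarks; in practice, AI’s real contribution is shrinking the effective search space by prioritizing human-like password patterns, which the higher rate here only crudely models.

```python
# Back-of-the-envelope estimate of worst-case brute-force cracking time.
# All guess rates below are hypothetical assumptions, not benchmarks.

CHARSET_SIZE = 94  # printable ASCII characters
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# Hypothetical guesses-per-second figures for two attacker profiles.
GUESS_RATES = {
    "commodity GPU rig": 1e11,
    "AI-guided guessing (effective rate)": 1e13,
}


def time_to_crack(length: int, rate: float) -> float:
    """Worst-case seconds to exhaust every password of the given length."""
    keyspace = CHARSET_SIZE ** length
    return keyspace / rate


def humanize(seconds: float) -> str:
    """Render a duration in the largest sensible unit."""
    if seconds < 60:
        return f"{seconds:.1f} seconds"
    if seconds < 3600:
        return f"{seconds / 60:.1f} minutes"
    if seconds < 86400:
        return f"{seconds / 3600:.1f} hours"
    if seconds < SECONDS_PER_YEAR:
        return f"{seconds / 86400:.1f} days"
    return f"{seconds / SECONDS_PER_YEAR:.1e} years"


if __name__ == "__main__":
    for length in (8, 10, 12):
        for profile, rate in GUESS_RATES.items():
            print(f"{length}-char password vs. {profile}: "
                  f"{humanize(time_to_crack(length, rate))}")
```

Under these assumed rates, an eight-character password falls in minutes to hours, consistent with the claim above, while each added character multiplies the keyspace by 94. The takeaway for fraud protection is that password length and multi-factor authentication matter far more than character complexity alone.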
AI-enabled identity fraud goes above and beyond simple identity theft. It is a serious problem capable of impacting even the largest and most heavily defended organizations. Any organization with a commitment to fraud protection needs to be aware of this emerging threat.
Artificial intelligence allows us to do some amazing things in the generative arena. But what is seen largely as a beneficial technology also has its dark side. A threat actor can create an impressive and convincing deepfake as easily as the rest of us can modify family photos with AI. The reality of the situation raises compelling questions for fraud protection.