The Rise of AI in Fraud: New Tactics and Tools for Both Sides
AI tools such as ChatGPT, Google Gemini, Claude and Microsoft Copilot are known as generative AI because they generate new content. Generative AI can create photos, artwork, voice content, music and documents. In fact, all of the artwork in this edition of the newsletter is AI-produced.
A product called FraudGPT is now sold on the dark web, allowing criminals to create content that facilitates a range of frauds, from phishing emails to custom-made scam web pages designed to steal personal information.
The use of voice cloning is also a growing issue: it can be used to convince a relative that a loved one needs financial help, or even, in some cases, that the individual has been kidnapped and a ransom must be paid.
Reports of AI tools being used to try to fool banks’ systems have increased significantly, according to anti-fraud organisation CIFAS.
AI as a Fraud Fighting Tool
It must be remembered that AI is also used to combat fraud. Many industries use AI for fraud prevention and detection purposes. This can be done by:
- Analysing vast amounts of data to spot patterns and anomalies.
- Monitoring behaviour and flagging anything that deviates from established patterns.
- Verifying identity, for example through facial recognition.
- Analysing text and speech, which can help identify phishing emails.
- Taking automatic action when something suspicious is spotted.
- Predicting future fraud risks by analysing historical data and trends.
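To illustrate the first of these ideas, anomaly detection can be as simple as flagging transactions that sit far from the typical amount. The sketch below is purely illustrative: the transaction amounts, the z-score approach and the threshold are assumptions for demonstration, not how any real bank's system works (real systems use far richer models and many more signals).

```python
# Minimal sketch: flag transaction amounts that deviate sharply from the
# batch average, using a simple z-score. Purely illustrative.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return amounts more than `threshold` standard deviations
    from the mean of the batch."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Five everyday payments and one unusually large one.
transactions = [12.50, 8.99, 15.00, 9.75, 11.20, 950.00]
print(flag_anomalies(transactions))  # the 950.00 payment is flagged
```

In practice a fraud system would score behaviour across many dimensions (location, time, merchant, device) rather than a single amount, but the principle — learn what "normal" looks like and flag deviations — is the same.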
As we highlighted in our May edition of this newsletter, the UK government uses AI-enabled analysis of public sector data to prevent and detect fraud.
AI Checkers
If you want to check if something is likely to have been AI-generated, there are plenty of online AI checkers available. Simply search “check if text / picture / video is AI generated” and you’ll find lots of free tools. Be mindful that these services are still developing and they may sometimes get it wrong. It is likely that their accuracy will increase over time - the Counter Fraud Team has noticed an improvement in these services in the past 12 months.
Using Generative AI Safely
Having generative AI tools such as ChatGPT within easy reach can be extremely helpful. AI has exciting potential in terms of solving problems, increasing efficiency, and improving lives. However, there are also pitfalls if used without considering the risks.
Privacy – anything shared with openly available AI platforms can be stored and become part of the system's knowledge base. This includes the requests you make and any data you share. Sharing other people's information with AI could constitute a data breach, and uploading business-sensitive data could represent a breach of confidentiality.
Hallucinations – generative AI can hallucinate. A lawyer in New York fell foul of this when he asked AI to help him find case citations for a court case. AI created imaginary cases which the lawyer then presented in court as case law. As you can imagine, the courts took a very dim view of this incident and the law firm were fined $5,000.
Inaccuracy and Bias – AI tools such as ChatGPT are trained on data from multiple sources. These sources may be biased or inaccurate, resulting in AI parroting ill-founded information. There are concerns that AI tools may exclude less obvious information, and that it tends to be poor at weighing up strengths and weaknesses of source data.
Trust – the ability to detect AI-generated content is a hot topic at the moment, as AI-generated false information is a highly dangerous tool for criminals looking to manipulate people. Improving online safety and preventing the spread of false information is a top priority for governments and lawmakers. As detection tools improve, AI-produced content is more likely to be identified, and if you have not been transparent about how you have used AI, this may undermine trust.
It is important to be aware of the limitations of AI tools. If you are considering using AI at work, you must ensure that you follow your organisation's AI policy. If your organisation does not have an AI policy, please seek guidance from its Information Governance and IT teams before using AI.
This article is in the 'AI Special' June edition of our Counter Fraud Newsletter, which can be found on our Publications page.