Artificial intelligence (AI) is reshaping industries, but it is also providing criminals with powerful new tools. Fraud in the UK has risen sharply, with a 12 per cent increase in reported cases in 2024, costing businesses and individuals £1.17 billion (Reuters, 2025). This trend shows that organisations must adapt quickly to AI-enabled fraud threats.

AI-Powered Phishing and Social Engineering

Phishing emails and social engineering have long been staple tactics for fraudsters, but AI has made them significantly more convincing. Criminals now use AI-driven tools to generate personalised emails and messages that mimic the language, tone, and style of trusted organisations, making these communications far harder to detect than traditional scams. AI also enables attackers to analyse publicly available data, such as information from social media, to tailor attacks to individual employees and increase the likelihood of success (CrowdStrike, 2024).

Deepfakes and Impersonation Fraud

Deepfake technology is increasingly being exploited to impersonate executives and deceive employees into transferring funds. In a notable case, UK engineering firm Arup lost £20 million after an employee was tricked by an AI-generated video call impersonating senior staff (The Guardian, 2024).

Consumer advocate Martin Lewis, a vocal critic of deepfake scams, including those using his own image, has called for tech companies to face multi-billion-pound fines. He emphasised the daily prevalence of scams and their financial and emotional impact on UK victims, particularly the elderly, and criticised firms for prioritising profit over removing fraudulent content, warning that without significant penalties, the advertising practices that enable fraud are unlikely to change (Financial Times).

Experts predict a 1,500 per cent increase in deepfake content by 2025, underlining the scale of the threat (Reality Defender, 2025).

Mobile and Text-Based AI Scams

Fraudsters are also targeting mobile devices. In 2025, O2 blocked over 600 million scam texts, many of which used AI-driven tools to avoid detection and trick recipients into clicking malicious links (The Scottish Sun, 2025).

Financial Sector Risks

The UK Finance Annual Fraud Report 2025 highlights a surge in card-based fraud and remote purchase scams. While Authorised Push Payment (APP) fraud showed a slight decline, AI-enabled techniques such as identity manipulation and deepfake scams are driving new risks for businesses (UK Finance, 2025).

AI as a Defence

AI is not only a weapon for fraudsters. Financial institutions are increasingly using AI to analyse behaviour, detect anomalies, and stop fraud in real time. However, these systems must be continually updated to stay effective against rapidly evolving threats (Keyrus, 2025).
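To illustrate the behavioural analysis described above in highly simplified form, the sketch below flags transactions that deviate sharply from a customer's usual spending pattern using a basic z-score rule. This is a hypothetical toy example, not a real fraud-detection system: production tools use far richer behavioural models, and the function name, threshold, and sample figures here are all illustrative assumptions.

```python
# Illustrative sketch: flag transactions whose amount deviates sharply
# from a customer's historical spending, using a simple z-score rule.
from statistics import mean, stdev

def flag_anomalies(history, new_amounts, threshold=3.0):
    """Return the new amounts whose z-score against the customer's
    transaction history exceeds the threshold."""
    mu = mean(history)
    sigma = stdev(history)
    return [a for a in new_amounts if sigma and abs(a - mu) / sigma > threshold]

# Typical spending of £25-£55, followed by a sudden £5,000 transfer.
history = [25.0, 40.0, 32.5, 55.0, 28.0, 47.5, 38.0]
print(flag_anomalies(history, [45.0, 5000.0]))  # only the £5,000 transfer is flagged
```

Real systems layer many such signals (device, location, payee history) and score them with trained models, but the underlying idea is the same: learn what normal looks like, then intervene on outliers in real time.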

How AJC Can Help

At AJC, we understand that AI has changed the fraud landscape. Traditional defences are no longer enough. Our team helps organisations by:

  • Assessing fraud risks with a focus on AI-enabled threats.
  • Designing a tailored mitigation strategy, including advice on fraud detection tools and layered security controls.
  • Updating due diligence procedures for account applications, employee interviews, and third-party onboarding.
  • Training staff to identify phishing, deepfakes, and impersonation attempts.
  • Embedding fraud prevention into wider business continuity and resilience strategies.

AI-driven fraud is evolving rapidly, but your organisation does not need to face it alone. By working with AJC, you can build robust defences, stay ahead of emerging risks, and protect your business for the future.

If your organisation is ready to step up its fraud response, contact us at AJC to find out how we can support your journey.

Click here to find out more about our Fraud Risk Management services.

Contact us on 020 7101 4861 or email us at info@ajollyconsulting.co.uk if you think we can help.

References

  • Financial Times. Martin Lewis on deepfake scams. Link
  • Reuters. (2025). Britain sees 12% spike in fraud cases as banks battle £1.6 billion epidemic. Link, Link 2
  • CrowdStrike. (2024). AI in Social Engineering Attacks. Link 
  • The Guardian. (2024). UK engineering firm Arup falls victim to £20m deepfake scam. Link
  • Reality Defender. (2025). UK Government Deepfake Report. Link
  • The Scottish Sun. (2025). O2 blocks over 600 million scam texts this year. Link
  • UK Finance. (2025). Annual Fraud Report 2025. Link
  • Keyrus. (2025). AI and Fraud Prevention in UK Financial Services. Link

Image accreditation: Grisha Petrosyan Dec 2022 from Unsplash.com. Last accessed on 25th September 2025. Available at: https://unsplash.com/photos/a-person-taking-a-picture-with-a-cell-phone-CfNU0SpCVN4
