Phishing emails and social engineering have long been popular tactics for fraudsters, but AI has made them significantly more convincing. Criminals are now using AI-driven tools to generate personalised emails and messages that mimic the language, tone, and style of trusted organisations. Unlike traditional scams, these communications are far more sophisticated and therefore harder to detect. AI also enables attackers to analyse publicly available data, such as information from social media, to tailor attacks to individual employees and increase the likelihood of success (CrowdStrike, 2024).
Deepfake technology is increasingly being exploited to impersonate executives and deceive employees into transferring funds. In a notable case, UK engineering firm Arup lost £20 million after an employee was tricked by an AI-generated video call impersonating senior staff (The Guardian, 2024).
Consumer advocate Martin Lewis, a vocal critic of deepfake scams, including those using his own image, has called for tech companies to face multi-billion-pound fines. He has emphasised the daily prevalence of scams and their financial and emotional impact on UK victims, particularly the elderly, and has criticised firms for prioritising profit over removing fraudulent content, warning that without significant penalties, advertising practices that enable fraud are unlikely to change (Financial Times).
Experts predict a 1,500 per cent increase in deepfake content by 2025, underlining the scale of the threat (Reality Defender, 2025).
Fraudsters are also targeting mobile devices. In 2025, O2 blocked over 600 million scam texts, many of which used AI-driven tools to avoid detection and trick recipients into clicking malicious links (The Scottish Sun, 2025).
The UK Finance Annual Fraud Report 2025 highlights a surge in card-based fraud and remote purchase scams. While Authorised Push Payment (APP) fraud showed a slight decline, AI-enabled techniques such as identity manipulation and deepfake scams are driving new risks for businesses (UK Finance, 2025).
AI is not only a weapon for fraudsters. Financial institutions are increasingly using AI to analyse behaviour, detect anomalies, and stop fraud in real time. However, these systems must be continually updated to stay effective against rapidly evolving threats (Keyrus, 2025).
At AJC, we understand that AI has changed the fraud landscape, and traditional defences are no longer enough. Our team helps organisations build the defences needed to counter these evolving threats.
AI-driven fraud is evolving rapidly, but your organisation does not need to face it alone. By working with AJC, you can build robust defences, stay ahead of emerging risks, and protect your business for the future.
If your organisation is ready to step up its fraud response, contact us at AJC to find out how we can support your journey.
Click here to find out more about our Fraud Risk Management services.
Contact us on 020 7101 4861 or email us at info@ajollyconsulting.co.uk if you think we can help.
References
Image accreditation: Grisha Petrosyan Dec 2022 from Unsplash.com. Last accessed on 25th September 2025. Available at: https://unsplash.com/photos/a-person-taking-a-picture-with-a-cell-phone-CfNU0SpCVN4