Generative AI (GenAI) promises enormous productivity gains, but it also introduces security risks that organisations cannot afford to ignore. One of the most pressing concerns is the inadvertent leakage of sensitive data. In May 2023, Samsung experienced a significant data breach when employees used ChatGPT to review internal code and documents, inadvertently exposing confidential information. The incident led Samsung to ban the use of generative AI tools across the company to prevent future breaches. Research suggests that nearly 10% of employee interactions with GenAI tools involve the input of sensitive data, highlighting how easily confidential information can be mishandled in a corporate environment.
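One practical control is to screen prompts before they leave the organisation. The sketch below is a minimal illustration of that idea, assuming a small set of hand-written patterns; a real deployment would rely on a maintained data loss prevention tool and organisation-specific rules, not three regexes.

```python
import re

# Illustrative patterns only; real DLP rule sets are far larger and
# tuned to the organisation's own data types.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"(?i)\b(?:api[_-]?key|secret)\b\s*[:=]\s*\S+"),
    "UK_NI_NUMBER": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings before a prompt is sent to an
    external GenAI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("api_key = sk-live-123 was issued to alice@example.com"))
# -> [REDACTED API_KEY] was issued to [REDACTED EMAIL]
```

Filters like this do not catch everything (Samsung's leaked source code would need code-aware rules, for instance), which is why they belong alongside policy and staff training rather than in place of them.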
Prompt injection attacks represent a sophisticated threat where malicious actors craft inputs to manipulate AI models into executing unintended actions. A notable example is the “Policy Puppetry” technique, which circumvents safety protocols in leading GenAI models, causing them to generate prohibited or dangerous outputs. These attacks pose serious risks, especially in high-trust environments like banking, healthcare, and law.
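No filter reliably stops prompt injection, and techniques such as Policy Puppetry are designed precisely to evade them, but a common first layer is to keep untrusted text out of the privileged system role and flag obvious injection phrasing for review. The sketch below illustrates that pattern; the hint list and system prompt are assumptions made for the example, not a complete defence.

```python
import re

# A few well-known injection phrasings. Real attacks are far more varied,
# so treat this check as one layer among several.
INJECTION_HINTS = [
    r"(?i)ignore (?:all )?(?:previous|prior) instructions",
    r"(?i)disregard (?:the )?system prompt",
    r"(?i)you are now ",
]

def flagged_as_injection(user_input: str) -> bool:
    return any(re.search(p, user_input) for p in INJECTION_HINTS)

def build_messages(user_input: str) -> list[dict]:
    """Keep untrusted text strictly in the user role, never the system role."""
    if flagged_as_injection(user_input):
        raise ValueError("Input flagged for human review")
    return [
        {"role": "system",
         "content": "Answer questions about our product documentation only."},
        {"role": "user", "content": user_input},
    ]
```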
The rise of accessible AI tools has also created new weapons for adversaries. North Korean hackers, for instance, have reportedly used GenAI to generate convincing CVs and prepare for job interviews, enabling them to infiltrate companies under false pretences. These roles provide both insider access and an income stream for state-sponsored operations. It is a stark reminder that the same tools boosting legitimate productivity are equally available to attackers, and that GenAI must be deployed with careful guardrails in place.
Another lesser-known but equally significant risk is AI hallucination, when a GenAI model generates responses that sound accurate but are factually incorrect or entirely fabricated. This can have serious consequences in sectors where accuracy is non-negotiable, such as healthcare, legal, or financial services. In 2023, a New York lawyer was fined after submitting a legal brief written by an AI tool that included fictitious case law. Hallucinations like this erode trust and can lead to legal liability or reputational damage if left unchecked. Guardrails, verification tools, and human oversight are essential to mitigate the risk of false or misleading outputs.
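Human oversight can be wired into a workflow quite cheaply. As a deliberately naive sketch (the citation pattern is a toy, and real verification means checking authorities against a trusted legal database), any AI-drafted text that cites case law can be held for a qualified reviewer:

```python
import re

# Toy pattern for "X v. Y" style case citations; real citation formats
# are more varied and need a proper legal-research check.
CITATION_RE = re.compile(r"\b[A-Z][a-z]+ v\.? [A-Z][a-z]+")

def needs_human_review(ai_output: str) -> bool:
    """Route any draft that cites case law to a qualified reviewer,
    since GenAI models can fabricate plausible-looking authorities."""
    return bool(CITATION_RE.search(ai_output))

draft = "As held in Mata v. Avianca, the carrier's duty extends to..."
if needs_human_review(draft):
    print("Hold for human verification before filing")
```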
Compliance is another major challenge for GenAI deployment. AI models frequently process personal or sensitive data, making alignment with data protection laws such as the GDPR, the CCPA, and the UK's DPA 2018 critical. Regulators are beginning to scrutinise GenAI systems more closely, especially when they are used for automated decision-making, and companies that cannot demonstrate transparency, explainability, or consent mechanisms could face steep penalties. The EU's AI Act, which entered into force in August 2024 and phases in its obligations over the following years, requires businesses to assess and mitigate the risks associated with high-risk AI systems. This signals a shift toward stricter global accountability, with compliance becoming not just a legal requirement but a strategic necessity for effective GenAI security.
To address these risks, organisations should implement strict data governance policies, establish robust verification mechanisms, and conduct ongoing monitoring. Training employees on the secure use of AI tools and building layered defences, such as AI model audits, zero-trust architectures, and threat detection, can help organisations stay ahead of evolving threats.
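To make ongoing monitoring concrete, the sketch below logs who sent what to a GenAI tool and when. It is an assumption-heavy illustration rather than a reference design: prompt and response bodies are stored as hashes, so the audit trail can support investigations without itself becoming a second copy of the sensitive data.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("genai.audit")

def log_interaction(user_id: str, prompt: str, response: str) -> None:
    """Record an audit entry; hashes let investigators match retained
    copies later without keeping sensitive text in the log itself."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
    }))

log_interaction("j.smith", "Summarise the Q3 board pack", "The pack covers...")
```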
As GenAI continues to evolve, so too must our approach to securing it. With thoughtful planning, responsible deployment, and proactive mitigation strategies, organisations can harness the power of GenAI while upholding safety, compliance, and trust.
AJC can play a vital role in helping organisations navigate the evolving risks associated with GenAI. Through tailored advisory services, AJC supports clients in establishing clear governance structures, ensuring regulatory compliance, and implementing robust risk mitigation strategies. We also support organisations in achieving ISO/IEC 42001 certification, a newly established international standard that sets the benchmark for businesses implementing AI management systems.
From aligning AI initiatives with data protection laws like GDPR to building frameworks that promote transparency, accountability, and ethical use, AJC helps businesses deploy GenAI responsibly. Whether it's developing usage policies or supporting compliance with the EU AI Act as its obligations take effect, AJC ensures that organisations not only harness the benefits of AI, but do so safely, legally, and sustainably.
Contact us on 020 7101 4861 or email us at info@ajollyconsulting.co.uk if you think we can help.
Image accreditation: Philip Oroni (2024) from Unsplash.com. Last accessed on 7th May 2025. Available at: https://unsplash.com/photos/a-laptop-computer-sitting-on-top-of-a-table-pruUoNXfRDM