Artificial intelligence tools are rapidly becoming part of everyday work. Generative AI platforms can summarise documents, write reports, analyse data, and assist with coding, often saving significant time. As a result, employees across many organisations are beginning to experiment with these tools independently.
However, this rapid adoption has created a new challenge for security and risk teams. Increasingly, staff are using AI platforms without formal approval, governance, or oversight. This phenomenon is commonly referred to as “shadow AI”.
Much like shadow IT before it, shadow AI occurs when employees adopt technology outside official organisational controls. The difference is that many AI tools require users to submit large volumes of data for processing. In some cases, that data may include confidential documents, intellectual property, or personal information. For organisations responsible for protecting sensitive information, the rise of shadow AI introduces a complex and evolving cybersecurity risk.
Shadow AI refers to the use of artificial intelligence tools that have not been formally approved, assessed, or monitored by an organisation’s IT, security, or governance teams. Employees may use these tools to draft emails, produce reports, write code, analyse spreadsheets, or summarise research. In many cases, the intention is positive: staff are simply trying to work more efficiently. However, when AI services are used without proper oversight, organisations may lose visibility over how sensitive information is handled.
Unlike traditional software deployments, many AI tools can be accessed instantly through web browsers or personal accounts. This ease of access means employees can begin using them immediately, often without realising the potential implications for data protection or security. Research has already highlighted the scale of the issue. A global survey by Salesforce found that more than half of employees using generative AI at work were doing so without formal approval from their organisation.
The risks associated with shadow AI are broader than simple policy violations. When employees upload internal information to external AI services, organisations may lose control over how that information is stored, processed, or reused.
One of the most immediate concerns is data exposure. Generative AI platforms often require users to submit text, documents, or code to generate responses. If employees upload confidential information, that data may be stored by the service provider or used to improve the model. In some cases, organisations may not fully understand where the data is processed or which jurisdictions it passes through.
There are also concerns around intellectual property protection. Employees using AI tools to assist with coding, product development, or strategic analysis may inadvertently share proprietary information with third-party systems. Once that information has been submitted to an external platform, organisations may have limited control over how it is handled. The National Cyber Security Centre (NCSC) has warned that organisations should carefully assess the security implications of AI tools before allowing them to be used with sensitive information.
For many organisations, shadow AI also creates potential regulatory exposure. Data protection laws such as GDPR place strict requirements on how personal information is processed and transferred. If employees submit personal or customer data to external AI tools without appropriate safeguards, organisations may struggle to demonstrate compliance with these obligations. Issues such as international data transfers, data retention, and transparency may all arise.
Regulators have already begun to highlight these concerns. The UK Information Commissioner’s Office (ICO) has emphasised that organisations must consider data protection risks when adopting AI technologies and ensure that appropriate governance frameworks are in place.
Similar concerns are being raised internationally. The Organisation for Economic Co-operation and Development (OECD) has highlighted the need for organisations to implement responsible AI governance structures to manage risks associated with AI deployment and data handling.
Several factors explain why shadow AI is emerging so quickly across organisations.
First, generative AI tools are widely accessible. Many platforms offer free or low-cost access through simple web interfaces, meaning employees can experiment without needing software installation or internal approval.
Second, there is strong pressure on organisations to improve productivity. AI tools can assist with drafting documents, analysing data, and automating routine tasks. For many employees, the benefits are immediate and tangible, which encourages adoption even in the absence of formal policies.
Third, governance frameworks have not yet caught up with the pace of AI innovation. While most organisations have mature processes for approving software or managing cloud services, many are still developing policies that specifically address AI usage.
Technology analysts are already highlighting this gap. Gartner has warned that organisations deploying generative AI without appropriate governance risk exposing sensitive data and creating new attack surfaces.
Attempting to ban AI tools outright is unlikely to be effective. In many cases, employees will continue to experiment with these technologies, particularly when they deliver clear productivity benefits. Instead, organisations should focus on building governance frameworks that enable responsible and secure AI usage.
A practical starting point is improving visibility. Security teams need to understand which AI platforms are already being used across the organisation. Network monitoring tools, cloud access controls, and employee engagement can help identify unauthorised services.
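To illustrate what that first step might look like, the short Python sketch below scans an exported web proxy log for requests to well-known generative AI domains. It is a minimal sketch built on assumptions not taken from this article: a CSV export with timestamp, user, and domain columns, and an illustrative (far from exhaustive) watchlist of AI service domains. In practice, a secure web gateway, CASB, or SIEM query would do this job more robustly.

```python
# Minimal sketch: flag proxy log entries that point at known generative AI
# services. The CSV column names and the domain watchlist are assumptions
# for illustration, not an authoritative inventory.
import csv
from collections import Counter

AI_DOMAIN_WATCHLIST = {
    "chatgpt.com",
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def find_shadow_ai_usage(log_path: str) -> Counter:
    """Count log rows per (user, domain) pair for watchlisted AI domains."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects "timestamp,user,domain" headers
            domain = row["domain"].strip().lower()
            if domain in AI_DOMAIN_WATCHLIST:
                hits[(row["user"], domain)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in find_shadow_ai_usage("proxy_log.csv").most_common():
        print(f"{user} accessed {domain} {count} times")
```

Even a rough report like this gives security teams a factual starting point for conversations with staff, which tends to be more productive than assuming no unauthorised usage exists.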
Organisations should also establish clear guidance for staff. Employees need to understand which AI tools are approved and what types of information can safely be shared with them. In many cases, the risk arises not from the use of AI itself, but from the uncontrolled sharing of sensitive data.
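As a simple illustration of that principle, the sketch below redacts a few obvious identifiers (email addresses, UK phone numbers, National Insurance numbers) before text leaves the organisation. The patterns are deliberately minimal assumptions for demonstration; a production deployment would rely on dedicated data loss prevention tooling rather than ad hoc regular expressions.

```python
# Minimal sketch: redact obvious identifiers before text is pasted into an
# external AI tool. The patterns below are illustrative assumptions only;
# real DLP tooling covers far more cases and should be preferred.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_PHONE": re.compile(r"(?:\+44\s?|\b0)\d{10}\b"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # UK National Insurance
}

def redact(text: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact Jo on 07123456789 or jo.bloggs@example.co.uk"))
# -> Contact Jo on [UK_PHONE REDACTED] or [EMAIL REDACTED]
```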
Finally, organisations may wish to provide secure, approved AI tools internally. By offering governed alternatives, organisations can enable innovation while maintaining control over how information is handled.
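One minimal way to picture such a governed alternative is a thin internal wrapper that records an audit trail before a prompt reaches the approved provider. In the sketch below the provider call is a stub, since the actual API, authentication, and any redaction step depend entirely on the tools an organisation chooses; only the governance pattern itself is shown.

```python
# Minimal sketch of a "governed alternative": log who used the approved AI
# service, and when, before forwarding the prompt. The provider call is a
# stub standing in for whichever service the organisation approves.
import json
import time

AUDIT_LOG = "ai_usage_audit.jsonl"

def call_approved_provider(prompt: str) -> str:
    """Stub for the organisation's approved AI service."""
    return f"(response to a {len(prompt)}-character prompt)"

def governed_completion(user: str, prompt: str) -> str:
    """Append an audit record, then forward the prompt."""
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps({
            "ts": time.time(),
            "user": user,
            "prompt_chars": len(prompt),  # log size, not content, to limit exposure
        }) + "\n")
    return call_approved_provider(prompt)

print(governed_completion("jo.bloggs", "Summarise this quarter's board pack."))
```

Logging metadata rather than prompt content, as above, is one design choice that balances auditability against creating a new store of sensitive data.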
Artificial intelligence will increasingly become embedded in everyday business operations. As the technology becomes more powerful and more widely available, organisations will need to adapt their cybersecurity and governance strategies accordingly.
Shadow AI represents an early indication that the adoption of AI is moving faster than many organisations anticipated. Those that respond proactively by implementing clear governance, improving visibility, and educating employees will be better positioned to manage the risks.
Ultimately, the goal is not to prevent the use of AI, but to ensure it is used in a way that protects sensitive data, maintains regulatory compliance, and supports long-term organisational resilience.
Shadow AI creates governance challenges that organisations cannot address through policy alone. Clear oversight, defined accountability, and proportionate controls are essential if AI use is to be managed responsibly.
AJC supports organisations in reviewing whether their AI governance frameworks are keeping pace with actual usage across the business. We help assess policy coverage, approval processes, data handling expectations, accountability structures, and assurance mechanisms so that AI-related risks are understood and managed appropriately.
By strengthening oversight, clarifying responsibilities, and aligning governance arrangements with regulatory expectations, AJC helps organisations enable responsible AI adoption while protecting sensitive information and maintaining confidence in their control environment.
Contact us on 020 7101 4861 or email us at info@ajollyconsulting.co.uk if you think we can help.
Sources:
https://www.salesforce.com/news/stories/ai-at-work-research/
https://www.ncsc.gov.uk/guidance/ai-and-cyber-security-what-you-need-to-know
https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/
https://www.oecd.org/en/publications/governing-with-artificial-intelligence_795de142-en.html