UK’s New AI Cyber Security Standard: What It Means for Resilience Professionals

  • 07 Feb 2025
  • Rebecca

In January 2025, the UK government published a world-first AI cyber security standard[1] aimed at protecting the digital economy from cyber security risks to artificial intelligence (AI) systems.

This voluntary code of practice and implementation guide follows feedback from global stakeholders, including the National Cyber Security Centre, and will form the basis of a new global standard for secure AI through the European Telecommunications Standards Institute.

Last year, the UK AI sector generated £14.2 billion in revenue, and the standard’s 13 principles aim to maintain this vital growth while protecting critical infrastructure from cyber-attacks. The principles emphasise protecting AI systems from hacking and sabotage and guide developers in creating secure and resilient products.

The Minister for Cyber Security, Feryal Clark MP, said:

“This will not only create the opportunities for businesses to thrive, secure in the knowledge that they can be better protected than ever before but support them in delivering cutting-edge AI products that drive growth, improve public services, and put Britain at the forefront of the global AI economy.”[2]

BCI research[3] indicates that nearly 75% of organizations experienced a cyber-attack in 2024, and that cyber-attack is ranked as the top future risk for organizations[4]. Although the adoption of AI tools has been relatively slow, with many organizations preferring a ‘wait and see’ approach, progress is being made to integrate AI into the sector. Research shows that AI is currently used primarily for day-to-day time-saving tasks such as online meeting transcription, but some practitioners report using AI to create realistic and emotive training scenarios far faster than manual methods allow. Others have used AI to align their organizational policies with relevant legislation, significantly reducing both human error and the time spent on repetitive cross-checking tasks.

Findings from the BCI A Year in the World of Resilience Report 2024 reflect growing interest in AI adoption across organizations. Over one in three organizations anticipate AI playing a significant role in their operations throughout 2025, albeit to varying degrees. This is an increase on the previous year and indicates rising interest across the sector.

Tackling the dangers of AI

AI has the potential to enhance resilience, but there are also risks that resilience professionals must address. For example, entering sensitive company data into public AI systems can lead to data leaks and to attackers acquiring information that enables a breach. To tackle this, practitioners could consider establishing an AI management group to assess risk appetite and develop organizational policies covering safe use and limitations. They could also implement AI awareness training for staff, so that efficiencies are achieved while data is kept secure.
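As a simple illustration of what such a policy could mandate in practice, the sketch below shows a hypothetical pre-submission check that flags and redacts obviously sensitive fields before a prompt is sent to a public AI tool. The patterns and names here are invented for illustration; a real control would rely on a maintained data-loss-prevention rule set.

```python
import re

# Hypothetical patterns an AI management group might flag; illustrative only.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_phone": re.compile(r"(?:\+44\s?|0)\d{9,10}\b"),
    "api_key": re.compile(r"\bsk[-_][A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace matches with placeholders; return cleaned text and the labels hit."""
    hits = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, hits

prompt = "Email jane.doe@example.com; the service key is sk-abc123def456ghi789."
clean, findings = redact(prompt)
if findings:
    print(f"Sensitive fields blocked: {findings}")  # could feed awareness-training metrics
print(clean)
```

A check like this is no substitute for governance, but it turns an abstract ‘safe use’ policy into an enforceable step that staff encounter every time they use the tool.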

In addition, one of the code’s principles is to identify, track, and protect assets, including their interdependencies and connectivity. Practitioners can work towards this by fostering close collaboration with third-party suppliers and breaking down silos between organizational departments.
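For illustration only, the sketch below shows one lightweight way a register of AI assets and their interdependencies could be queried to see which systems would be affected if a single component were compromised. The asset names and the dependency map are invented for this example; a real register would be drawn from the organization’s own inventory and supplier records.

```python
from collections import deque

# Invented asset register: each asset lists the assets it depends on.
DEPENDS_ON = {
    "customer-portal": ["auth-service", "ml-recommender"],
    "ml-recommender": ["feature-store", "model-registry"],
    "auth-service": ["identity-provider"],
    "feature-store": [],
    "model-registry": [],
    "identity-provider": [],
}

def impacted_by(asset: str) -> set[str]:
    """Return every asset that directly or transitively depends on `asset`."""
    # Invert the edges so we can walk from a failed asset to its dependents.
    dependents = {name: [] for name in DEPENDS_ON}
    for name, deps in DEPENDS_ON.items():
        for dep in deps:
            dependents[dep].append(name)
    seen, queue = set(), deque([asset])
    while queue:
        for parent in dependents[queue.popleft()]:
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

# If the model registry is compromised, the recommender and the portal inherit the risk.
print(impacted_by("model-registry"))  # {'ml-recommender', 'customer-portal'}
```

Mapping dependencies in this way makes the principle concrete: a breach in one supplier-hosted component can be traced immediately to the business services that rely on it.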

The AI Cyber Security Code of Practice has been released to help developers and end users implement AI safely without stifling innovation. Although this is UK guidance, it can serve as global good practice for boosting resilience alongside other frameworks such as the European Union AI Act, Canada’s voluntary code of conduct, and further regulation in the pipeline, including Brazil’s first AI regulation, which is currently under review.[5]

