Rosamund Powell
Rosamund Powell from the Alan Turing Institute, with microphone, speaks at the joint U.K.-U.S. release of AI security guidelines in London, November 27, 2023. Image: U.K. NCSC

AI systems ‘subject to new types of vulnerabilities,’ British and US cyber agencies warn

British and U.S. cybersecurity authorities published guidance on Monday about how to develop artificial intelligence systems in a way that will minimize the risks they face from mischief-makers through to state-sponsored hackers.

“AI systems are subject to new types of vulnerabilities,” the 20-page document warns — specifically referring to machine-learning tools. The new guidelines have been agreed upon by 18 countries, including the members of the G7, a group that does not include China or Russia.

The guidance classifies these vulnerabilities within three categories: those “affecting the model’s classification or regression performance,” those “allowing users to perform unauthorized actions” and those involving users “extracting sensitive model information.”

It sets out practical steps to “design, develop, deploy and operate” AI systems while minimizing the cybersecurity risk.

“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up,” said Lindy Cameron, chief executive of the U.K.’s National Cyber Security Centre (NCSC).

Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), described the release of the guidelines as “a key milestone in our collective commitment — by governments across the world — to ensure the development and deployment of artificial intelligence capabilities that are secure by design.”

The NCSC in August warned about “prompt injection attacks” as an apparently fundamental security flaw affecting large language models (LLMs) — the type of machine learning used by ChatGPT to conduct human-like conversations.

“Research is suggesting that an LLM inherently cannot distinguish between an instruction and data provided to help complete the instruction,” the agency’s previous paper stated.

Monday’s guidance sets out how developers can secure their systems by considering the cybersecurity risks specific to the technologies that make up AI, including by providing effective guardrails around the outputs these models generate.

The most pressing issue among a panel of experts at the launch event in London was around threats such as model inversion attacks — when potentially sensitive training data can be retrieved from the trained model — rather than generative AI being manipulated to produce media that is later used to deceive people online.

Simon Llewellyn, the director of security architecture at the Canadian Centre for Cybersecurity, said he found himself having separate conversations with different audiences.

When speaking to security professionals, Llewellyn urged them to “secure the inputs, secure the model, secure the data, secure the outputs” but he worried that for the public at large it was “becoming increasingly difficult for the common consumer out there to recognise when they’re potential getting spear-phished.”

Composed on the heels of the AI Safety Summit, the guidance was developed with input from the NCSC and CISA’s sister agencies in 17 other countries — from New Zealand to Norway and Nigeria — as well as over a dozen organizations currently developing the technology, including Microsoft, Google and OpenAI.

The NCSC wrote in a press release that “agencies from 17 other countries have confirmed they will endorse and co-seal the new guidelines” as a “testament to the UK’s leadership in AI safety.”

Jonathan Berry, the Viscount Camrose — who inherited his seat in Britain’s unelected House of Lords before being appointed as the Minister for AI and Intellectual Property by Prime Minister Rishi Sunak — described the guidance as “only the start of the journey to secure AI” during a launch event at NCSC’s headquarters on Monday.

Berry said the British government did not immediately plan to legislate to improve AI security. He said the Department for Science, Innovation and Technology (DSIT) was currently developing a “voluntary code of practice” regarding AI development that would subsequently be scrutinized by a public consultation, with the hope of one day establishing an international standard.

Editor's Note, Dec. 5, 2023: Story updated to correct spelling of Simon Llewellyn's name.

Get more insights with the
Recorded Future
Intelligence Cloud.
Learn more.
No previous article
No new articles

Alexander Martin

Alexander Martin

is the UK Editor for Recorded Future News. He was previously a technology reporter for Sky News and is also a fellow at the European Cyber Conflict Research Initiative.