Misuse & The European Union

by Paul Bricman, CEO

  • Policy
  • Governance
  • Compliance

Policymakers around the world are rushing to catch up with the breakneck pace of AI development. The regulatory landscape is evolving rapidly, with new policies, guidelines, codes of practice, bills, and standards constantly being proposed to address the emerging challenges posed by AI systems. In this article, we explore language dealing with misuse and loss of control in the European Union.

The main piece of EU legislation concerned with AI systems is the AI Act. Like other efforts aspiring towards a comprehensive, unifying framework for regulating AI, the AI Act attempts to deal with countless distinct concerns, ranging from the copyright of training data to socio-demographic fairness, and from privacy in the age of AI-enabled biometrics to the security of training infrastructure. The several hundred pages of the AI Act address dozens of contentious issues, and have been in the works for years.

Embrace The Patchwork

Before zooming in, it's worth highlighting a key distinction in the AI Act's taxonomy. Chapter III introduces the notion of "high-risk systems." AI systems are deemed high-risk not based on their inherent properties, but based on the context in which they're deployed. For instance, health, law enforcement, and critical infrastructure are all deemed high-risk domains, and so any AI system targeting these is likely to face more stringent scrutiny of its performance, explainability, etc. Crucially, high-risk systems also include narrow, one-trick-pony systems which are only capable of carrying out the particular task they've been trained for (e.g. providing a diagnosis, surfacing promising candidates, etc.). In a certain sense, the extensive emphasis on domain-specific high-risk systems is a remnant of the pre-LLM era, with critics arguing that it's now shaping discourse in counterproductive ways.1

Complementary to high-risk systems, Chapter V introduces "general-purpose AI (GPAI) systems." Unsurprisingly, these refer to more general systems, such as LLMs, which can be harnessed to address a broad range of tasks. In contrast to high-risk systems, GPAI systems are regulated based on their inherent structure and properties, rather than the specific use case they're deployed in. In addition, Recital 110 enumerates an array of "systemic risks" which GPAI systems could pose:

Key idea

"In particular, international approaches have so far identified the need to pay attention to risks from potential intentional misuse or unintended issues of control relating to alignment with human intent; chemical, biological, radiological, and nuclear risks, such as the ways in which barriers to entry can be lowered, including for weapons development, design acquisition, or use; offensive cyber capabilities, such as the ways in vulnerability discovery, exploitation, or operational use can be enabled; the effects of interaction and tool use, including for example the capacity to control physical systems and interfere with critical infrastructure; risks from models of making copies of themselves or ‘self-replicating’ or training other models [...]"

On the factors that could catalyze these risks, the document further elaborates:

Key idea

"Systemic risks should be understood to increase with model capabilities and model reach, can arise along the entire lifecycle of the model, and are influenced by conditions of misuse, model reliability, model fairness and model security, the level of autonomy of the model, its access to tools, novel or combined modalities, release and distribution strategies, the potential to remove guardrails and other factors."

On what constitutes a GPAI system with systemic risks, the AI Act provides the following classification in Article 51:

Key idea

"A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets any of the following conditions: (a) it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; (b) based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel [...]"

In terms of actual obligations for providers of GPAI systems with systemic risks, Article 55 enumerates the following:

Key idea

"[...] providers of general-purpose AI models with systemic risk shall: (a) perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks; (b) assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk; (c) keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them; (d) ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model."

Closing Thoughts

In case it wasn't clear already, the AI Act targets a high-level, agenda-setting position in the emerging constellation of policy projects. Although it spans hundreds of pages, the sheer number of considerations it covers limits the amount of detail afforded to any particular one. This is where codes of practice and standards come into play. Codes of practice drawn up by industry and civil society are interim compilations of more concrete best practices which authorities can greenlight for general use (i.e. "regulated self-regulation"). Pushing further, standards are even more concrete guidelines spearheaded by European standardization bodies (e.g. CEN/CENELEC), as contextualized by Article 55:

Key idea

"Providers of general-purpose AI models with systemic risk may rely on codes of practice [...] to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published."

There's a gradual process of moving from the high-level direction of the AI Act, via codes of practice, all the way to concrete guidelines established through standards. The entire process is expected to take several years, with the later stages striving to accommodate the inevitable new developments in AI. For all their massive overhead, though, these legislative leviathans are in motion, and have so far gained significant momentum. Given also the pressures for achieving international harmonization, and the Brussels effect more broadly, the AI Act is likely to have a significant impact on the global AI regulatory landscape, bringing misuse and loss of control to the forefront of the conversation.

Footnotes

  1. Max Tegmark, president of the Future of Life Institute, has previously argued that the push "to exempt the future of AI (LLMs) would make the EU AI Act the laughing-stock of the world, not worth the paper it’s printed on. After years of hard work, the EU has the opportunity to lead a world waking up to the need to regulate these increasingly powerful and dangerous systems."
