
Generative AI and international standardization

Published online by Cambridge University Press:  30 January 2025

Sebastian Hallensleben*
Affiliation:
Chair, JTC21, CEN-CENELEC, Independent Scholar, Aachen, Germany

Abstract

Standards complement regulation as frameworks for Artificial Intelligence governance. Within the European Union, this complementarity is laid down in the New Legislative Framework. Standards can be harmonised to provide a presumption of conformity with regulation. They draw their legitimacy from the inclusion of all relevant stakeholders as well as from the consensus principle, although there are limitations in practice. At both European and international levels, standardisation for generative AI is still in its infancy, since standardisation relies on a level of technical maturity that generative AI has not yet reached. Most activity is therefore currently seen in the policy domain. Potential directions for future generative AI standards are suggested. Generative AI also drives the need for non-AI standards, especially in the areas of digital trust and digital identity.

Type
Response Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-ShareAlike licence (http://creativecommons.org/licenses/by-sa/4.0), which permits re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is used to distribute the re-used or adapted article and the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press.

It is hard for regulation to keep up with the rapid development of new technologies. This is partly due to the lack of specialist technical expertise among lawmakers, and partly due to the multi-year timescales for developing, proposing and negotiating complex regulation that lag behind technological advances. Generative AI has been a particularly egregious example of this situation but is by no means the first.

On the other hand, technical standardisation in global fora such as ISO and IEC generally does not suffer from a lack of specialist technical expertise. In many cases, it is also able to work on somewhat faster timescales than regulation.

Therefore, many jurisdictions have developed synergistic approaches that combine the respective strengths of regulation and standardisation so that they complement each other. The most prominent example is the European Union, where this synergistic approach has been in use as the “New Legislative Framework” since 2008, with predecessor approaches going back to the 1980s. Most recently, it is being applied to the EU AI Act.

The EU New Legislative Framework is unique among supranational synergistic approaches in that it is codified in considerable depth. It is a combination of several instruments summarised in the “Blue Guide” (European Commission, 2022) on the implementation of EU product rules.

It is out of scope for this article to provide an overview of standardisation structures and processes. This has recently been done in depth by Micklitz (2023a, 2023b) in a way that mirrors the author’s own experience as the chair of CEN-CENELEC JTC21, the committee tasked with developing the harmonised standards for the EU AI Act. However, taking Europe as the example, a few key points on the interplay between regulation and standardisation are worth outlining here:

  • Harmonised standards:

According to the New Legislative Framework, regulation provides only high-level objectives and requirements, while standardisation provides technical solutions as a path towards achieving those objectives and implementing those requirements. The EU can “harmonise” standards that the European Commission deems sufficiently substantial and stringent to underpin a piece of regulation in this way. References to such harmonised standards are published in the Official Journal of the European Union, i.e. in the same publication that also disseminates regulation.

  • Presumption of conformity:

Contrary to what might be expected from the publication route, harmonised standards are not mandatory, i.e. organisations are free to ignore them. However, adhering to harmonised standards leads to a “presumption of conformity.” This means that organisations that follow harmonised standards are automatically presumed to be compliant with the relevant part of regulation. This removes legal uncertainties for organisations. Harmonised standards are therefore effectively a preferred path to compliance.

  • Legitimacy through stakeholders and consensus:

European regulation draws its legitimacy from the democratic institutions and processes through which it is drafted and decided. By contrast, standardisation is not democratic, even though voting does play a role in its processes. Instead, standardisation draws its legitimacy from the combination of two factors: (i) Standards development is consensus-based, i.e. work on a draft standard continues until most stakeholders can agree on it. There is no exact threshold for “most stakeholders” but it is generally accepted to be close to unanimity. (ii) Standards development committees are required to include the full spectrum of stakeholders affected by a proposed standard. It is the duty of the officers of a committee (in particular chair and secretary) to reach out to unrepresented stakeholder groups and try to pull them into the work of the committee.

  • Legitimacy in practice:

The theoretical legitimacy of standardisation is tempered to some extent by unequal access to resources. While some standardisation experts are in dedicated full-time roles at large companies that see standardisation as a strategic need and advantage, other experts, especially those affiliated with academia or civil society, often have to find time next to other responsibilities and might have to scramble to cover travel expenses. Familiarity with making effective use of the intricate and sometimes unintuitive processes and structures of standardisation is similarly skewed towards full-time standardisation experts, who therefore find it easier to advance the agenda of their employer.

  • Top down vs. bottom up:

Standardisation activities are typically initiated and driven in a bottom-up manner: experts propose standardisation items through their national standardisation committees whenever a critical mass of organisations – usually companies – see value in a new standard. By contrast, the development of harmonised standards is typically initiated top down through a “standardisation request” or “mandate” issued by the European Commission and addressed to the European standardisation organisations, in the case of the AI Act to CEN and CENELEC. Such a request contains a list of specific items as well as a delivery deadline. This creates a certain tension among standardisation experts used to working at their own pace on topics of their own preference.

At the time of writing, standardisation work to underpin the EU AI Act is in progress, with intense work in CEN-CENELEC JTC21 in five working groups structured into more than 40 distinct work items. With more than 140 experts in JTC21, and an estimated 1000–2000 experts contributing in approximately 24 national mirror committees, this is the biggest standardisation effort in Europe to date.

At the international level, AI standardisation has been concentrated in ISO/IEC JTC1/SC42 since 2018. Given the global scope, this effort has drawn an even higher number of experts and national mirror committees than the European AI standardisation work, and includes major players such as the US and China. International standardisation does not automatically trump European standardisation, but JTC21 aims to reuse as much SC42 output as possible in order to save time and to keep divergence between European and international AI standardisation as small as possible. In general, the formal relationship between international and European standardisation is governed by the Frankfurt Agreement (between IEC and CENELEC) and the Vienna Agreement (between ISO and CEN).

Given the lack of equivalent regulatory institutions at the international level, there can be no direct international equivalent to the EU New Legislative Framework. However, current work at the OECD, specifically in the OECD ONE.AI expert group on AI Risk & Accountability, takes an approach that is similar to the New Legislative Framework. The objective at the OECD is to extend the existing Due Diligence Guidance for Responsible Business Conduct (OECD, 2023) with high-level requirements for AI governance, while pointing to existing standards and frameworks at international and regional levels that could be used to meet those high-level requirements. The results of this work are expected to be agreed at the political level and published in 2025.

The observations provided so far apply to AI in general and show the considerable breadth, intensity and relevance of AI standardisation. By contrast, zooming in on generative AI as a subset and as the focus of this handbook, a rather different picture emerges:

  • In Europe, the standardisation mandate of the European Commission under the New Legislative Framework was made available as a draft in May 2022 and formally issued in May 2023, and does not include any reference to generative AI. This reflects the fact that, at the time the standardisation mandate was developed, generative AI was not yet part of the draft AI Act either. Working on generative AI standards is therefore not a priority in CEN-CENELEC JTC21, whose agenda is dominated by the need to develop the harmonised standards requested in the current standardisation mandate, with a tight deadline of autumn 2025. At the time of writing, there is not a single work item in the JTC21 work programme addressing the specifics of generative AI.

  • Generative AI standardisation in Europe is also held back by the fact that the newly established AI Office of the EU has a mandate to create a Code of Practice on General Purpose AI within one year. This will be a multi-stakeholder process that resembles standardisation to some extent, but is designed with a top-down approach, within a much shorter time frame, and led directly by a quasi-governmental institution. It is therefore unclear what role CEN-CENELEC JTC21 standardisation would play with respect to generative AI.

  • At the international level, the situation is similar to that in Europe: discussions in ISO/IEC on generative AI standards are still at an early, mostly informal stage. There are not yet any draft standards that could indicate the potential direction of generative AI standardisation at the international level. Both the structure and the work programme of ISO/IEC JTC1/SC42 were established mostly in 2018–2021, i.e. before the generative AI hype. The breadth and ambition of the work programme, much of which is not yet concluded, are saturating the expert community; newer concerns, including standards for generative AI, have therefore not yet received broad attention.

  • At national level, standards development organisations in more than 50 countries have mirror committees that follow, and vote on, standards development in ISO/IEC and/or CEN-CENELEC. However, their focus tends to be aligned with international or European work, and therefore attention on generative AI is also still scant.

It is not unusual for standardisation to lag behind technical progress. In fact, standardisation in a new field usually only starts when research and development activities have reached a certain level of maturity and stability. Such stability has not been reached in generative AI – rather, the cutting edge keeps shifting at enormous speed, although the capabilities, at least of foundation models including large language models, might have started to level off.

There is another factor that hampers generative AI standards development: the industry experts who would be essential contributors to any standardisation effort are mainly employed by large players such as OpenAI, Google, Mistral, Anthropic or Meta. These companies are currently singularly focused on competition and on building a dominant position in a rapidly and chaotically growing market. They therefore do not prioritise having their generative AI engineers spend time on standardisation work.

There are, however, initiatives to create frameworks for generative AI governance outside of standardisation. A very recent example is the Singaporean Model AI Governance Framework for Generative AI (AI Verify Foundation, 2024). There is also ongoing work in the OECD and other fora to elaborate on, and to operationalise, the code of conduct agreed in the G7 Hiroshima process (European Commission, 2023).

While it is far too soon to provide any meaningful analysis of generative AI standardisation, it is possible to list potential topics for standardisation. These might include:

  • test methods and performance metrics for generative AI models

  • resource consumption metrics for generative AI, both with regards to training and with regards to use

  • standardised licences for conditioning the ingestion of content made available on the public internet

  • terminology and taxonomy standards

  • similarity metrics for multiple similar generative AI models (a minimal sketch follows this list)

  • standards, including thresholds, for classifying and marking content as AI-generated

  • watermarking and provenance standards for AI-generated content, e.g. building on work by the Content Authenticity Initiative (contentauthenticity.org).
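To make one of these topics more concrete, the following is a minimal sketch, in Python, of what a behavioural similarity metric for generative AI models (see the list item above) could look like: the mean token-overlap of two models’ outputs on a shared set of probe prompts. Everything here is an illustrative assumption – the metric, the probe prompts and the model outputs are hypothetical, and an actual standard would need to define probe sets, decoding settings and far more robust metrics.

```python
# Hypothetical sketch of a behavioural similarity metric between two
# generative AI models: mean Jaccard overlap of output token sets on a
# shared probe set. Not a real or proposed standard metric.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two token sets (1.0 = identical, 0.0 = disjoint)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def model_similarity(outputs_a: list, outputs_b: list) -> float:
    """Mean pairwise Jaccard overlap over aligned outputs for the same prompts."""
    scores = [
        jaccard(set(x.lower().split()), set(y.lower().split()))
        for x, y in zip(outputs_a, outputs_b)
    ]
    return sum(scores) / len(scores)

# Hypothetical outputs of two models on the same two probe prompts:
outputs_model_a = ["The Eiffel Tower is in Paris.",
                   "Water boils at 100 degrees Celsius."]
outputs_model_b = ["The Eiffel Tower stands in Paris.",
                   "At sea level, water boils at 100 degrees Celsius."]

print(f"similarity: {model_similarity(outputs_model_a, outputs_model_b):.2f}")
```

A real standard would also have to specify how tokenisation, sampling temperature and prompt selection are controlled, since all three strongly affect any such score.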

It should be noted that generative AI also drives the need for standards in non-AI areas. A pertinent example stems from the capability of generative AI to generate content at a scale that was previously impossible, or at least uneconomical. Generative AI also allows the automated generation of “humans” in the form of deceptively real bots that can, e.g., act as influencers, news reporters or product reviewers. In the hands of bad actors, generative AI is likely to lead to a flood of such undesirable content that cannot be reliably detected. Frameworks and standards will therefore be needed that help to make the digital space resilient and ensure that trust in both information and people remains possible. A detailed analysis was already conducted in 2021 by a European working group within the StandICT programme and published in early 2022, i.e. before the current generative AI hype. The report of this group contains detailed recommendations on standards in the areas of trust as well as digital identities (Hallensleben et al., 2022).
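As a toy illustration of the kind of trust primitive such frameworks might build on, the following sketch uses only Python’s standard library to bind a content hash to an issuer identity with an HMAC tag that a verifier holding the shared key can check. The key, the issuer identity and the content are hypothetical assumptions; real provenance standards rely on public-key signatures, certificate chains and richer metadata.

```python
# Hypothetical sketch of a minimal content-provenance tag. A trusted issuer
# binds a content hash to its identity with an HMAC; a verifier that holds
# the shared key can detect tampering. Illustrative only - real standards
# use public-key signatures rather than a shared secret.
import hashlib
import hmac

ISSUER_KEY = b"hypothetical-shared-secret"  # stand-in for real key material

def tag_content(content: bytes, issuer_id: str) -> str:
    """Return a hex tag binding the content hash to the issuer identity."""
    digest = hashlib.sha256(content).hexdigest()
    message = f"{issuer_id}:{digest}".encode()
    return hmac.new(ISSUER_KEY, message, hashlib.sha256).hexdigest()

def verify_tag(content: bytes, issuer_id: str, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(tag_content(content, issuer_id), tag)

article = b"A news item, possibly AI-generated."
tag = tag_content(article, issuer_id="newsroom-42")
print(verify_tag(article, "newsroom-42", tag))         # True
print(verify_tag(article + b"!", "newsroom-42", tag))  # False: content altered
```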

Generative AI can be expected to lead to a plethora of standards over the coming years. The established mechanisms for regulation and standardisation to complement each other will be a suitable approach for the new technology just like for many earlier technologies. It remains to be seen if the formalised New Legislative Framework that has been governing this complementarity for several decades in Europe will inspire similar approaches at the international level.

Funding statement

No funding was received.

Competing interests

The author is the Chair of CEN-CENELEC JTC21, the committee mandated by the European Commission to develop harmonised standards to underpin the EU AI Act.

There are no competing interests to declare.

Dr Sebastian Hallensleben is the Chair of CEN-CENELEC JTC 21, where European AI standards to underpin EU regulation are being developed, and co-chairs the AI risk and accountability work at the OECD. He is the initiator and Programme Chair of the Digital Trust Convention and Principal Advisor Digital Trust at KI Park. As Chief Trust Officer at Resaro, he works towards establishing ground truths about the capabilities of AI systems. Previously, he headed Digitalisation and Artificial Intelligence at the VDE Association for Electrical, Electronic and Information Technologies. He focuses in particular on operationalising AI ethics, on characterising AI quality and on building privacy-preserving trust infrastructures for a more resilient digital space.

References

AI Verify Foundation (2024). Model AI Governance Framework for Generative AI. Retrieved June 6, 2024, from https://aiverifyfoundation.sg/wp-content/uploads/2024/06/Model-AI-Governance-Framework-for-Generative-AI-6-June-2024.pdf
European Commission (2022). The ‘Blue Guide’ on the implementation of EU product rules 2022 (Text with EEA relevance), 2022/C 247/01. Retrieved June 6, 2024, from https://eur-lex.europa.eu/legal-content/EN/TXT/?toc=OJ%3AC%3A2022%3A247%3ATOC&uri=uriserv%3AOJ.C_.2022.247.01.0001.01.ENG
European Commission (2023). Hiroshima Process International Code of Conduct for Advanced AI Systems. Retrieved June 6, 2024, from digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-code-conduct-advanced-ai-systems
Hallensleben, S., Adamski, I., Ferris, P., Fritz, J., Grün, K., Hosszu, R., Clement-Jones, T., Kaminski, A., Thomas, C., Frost, L., Walshe, R. & Muscella, S. (2022). Trust in the European digital space in the age of automated bots and fakes. Retrieved June 6, 2024, from https://zenodo.org/record/5926395
Micklitz, H.-W. (2023a). The Role of Standards in Future EU Digital Policy Legislation. Retrieved June 6, 2024, from www.beuc.eu/sites/default/files/publications/BEUC-X-2023-096_The_Role_of_Standards_in_Future_EU_Digital_Policy_Legislation.pdf
Micklitz, H.-W. (2023b). The Role of Standards in Future EU Digital Policy Legislation (Executive Summary). Retrieved June 6, 2024, from https://anec.eu/images/Publications/position-papers/Digital/ANEC-DIGITAL-2023-G-263.pdf
OECD (2023). Updated Guidelines for Multinational Enterprises on Responsible Business Conduct. Retrieved June 6, 2024, from https://one.oecd.org/document/DAF/INV/ICD(2023)2/FINAL/en/pdf