
Governance by indicators and the re-politicisation of expertise

Published online by Cambridge University Press:  26 January 2026

Paul Beaumont
Affiliation:
Norwegian Institute of International Affairs, Oslo, Norway
Ole Jacob Sending*
Affiliation:
Norwegian Institute of International Affairs, Oslo, Norway
*
Corresponding author: Ole Jacob Sending; Email: ojs@nupi.no

Abstract

States are measured and ranked on an ever-expanding array of country performance indicators (CPIs). Such indicators are seductive because they provide actionable, accessible, and ostensibly objective information on complex phenomena to time-pressed officials and enable citizens to hold governments to account. At the same time, a sizeable body of research has explored how CPIs entail the ‘black boxing’ and depoliticisation of political phenomena. This article advances our understanding of the consequences of governance by indicators by examining how CPIs generate specific forms of politicisation that can undermine a given CPI’s authority over time. We contend that CPIs rely upon two different claims to authority that operate in tension with one another: i) the claim to provide expert, objective knowledge and ii) the claim to render the world more transparent and to secure democratic accountability. Analysing CPIs in the fields of education, economic governance, and health and development, we theorise and empirically document how this tension leads to three distinct forms of politicisation: scrutiny from experts that politicises the value judgements embodied in a CPI; competition, whereby rival CPIs contest the objectivity of the knowledge produced by leading CPIs; and corruption, where gaming of a CPI challenges its claim to secure transparent access to social reality. While the analysis identifies multiple paths to the politicisation and undermining of specific CPIs’ authority, the article elaborates why these processes tend to leave intact and even reproduce the legitimacy of CPIs as a governance technology.

Information

Type
Special Issue Article
Creative Commons
CC BY-NC-SA 4.0
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike licence (http://creativecommons.org/licenses/by-nc-sa/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is used to distribute the re-used or adapted article and the original article is properly cited. The written permission of Cambridge University Press or the rights holder(s) must be obtained prior to any commercial use.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of The British International Studies Association.

Introduction

States are measured, ranked, and rated on an ever-expanding array of country performance indicators (CPIs): from ease of doing business to human trafficking, from corruption to climate policy.Footnote 1 Indeed, CPIs have become ubiquitous and influential across virtually all international policy fields.Footnote 2 CPIs are ‘seductive’ because they transform an ambiguous, complex world into simple, accessible knowledge, offering an ostensibly scientific means of evaluating state performance and enabling citizens to hold governments to account.Footnote 3 Whether democracy is rising or declining, whether states are meeting their Paris Agreement targets and Sustainable Development Goals – when global policymakers are determining whom to blame, shame, or acclaim and, crucially, allocating resources to improve performance, they increasingly turn to CPIs.Footnote 4, Footnote 5 In this way, CPIs constitute part of the ‘epistemic infrastructure’Footnote 6 of global governance by helping to frame and define governance objects and attendant policy tools. Parallel to the rise of CPIs in global governance, a growing body of critical scholarship has identified a host of concerns regarding CPIs as a governance technology. Critics have highlighted the dubiousness of the underlying data of leading indicatorsFootnote 7 and how, by reducing complex phenomena to quantitative measures, CPIs generate blind spots and produce unintended consequences.Footnote 8 Critics have also highlighted how their iterative, numerical, and objective form hides the interpretation, and thus politics, involved in their construction.Footnote 9

In this way, critics have argued that CPIs ‘black box’ the contestation over value judgements that is inherent to the process of developing a CPI, thereby reducing politics to technocracy.Footnote 10 As Sally Merry argues, because ‘the distance between the underlying data and the final result is so great’, indicators are ‘in practice hard to contest’.Footnote 11 At the same time, an overlapping but more optimistic literature explores how CPIs exert influence through generating more transparency and accountability in global governance.Footnote 12 On this reading, CPIs introduce more comparable data in a particular aesthetic form that allows a wider range of actors – beyond experts – to participate in policy debates and influence policy outcomes. In so doing, CPIs have a democratising function by increasing transparency and accountability of governments.

While critics zoom in on the production and input factors of CPIs that may result in depoliticisation, the more optimistic literature zooms in on the output and uses of CPIs by a wider range of actors in public debates. Bringing these insights into dialogue with this special issue’s focus on the pluralisation of expertise, we argue that CPIs embody a claim to both epistemic authority and objectivity, suggesting depoliticisation, and to transparency and accountability, suggesting democratisation. In this way, what constitutes valid knowledge is not solely based on the idea of expertise but is also embedded in ideas of participation by broader publics in debates about governance. These two sources of authority are in tension with one another, however, because the former implies that expertise is in the hands of a few while the latter implies that it is available to all. This gives rise to analytically distinct forms of politicisation. In the following, we specify these three forms of politicisation, which are logically connected to how CPIs establish epistemic and democratic authority. By politicisation we mean public criticism and debate pertaining to the value judgements, validity, potential biases, and ultimately public utility of a particular CPI.Footnote 13 The first concerns public scrutiny: when a CPI becomes widely used in policy-making, it begets additional scrutiny that highlights and thereby politicises the accuracy, value judgements, and blind spots embodied in a CPI. This scrutiny is made possible by the claim to transparency, since CPIs are deployed by a wider range of actors with stakes in how they are used. The second is competition: successful CPIs are likely to stimulate competing CPIs that directly critique the substance of an established CPI or indirectly challenge its position. The third is corruption: influential CPIs often foster gaming of the measures and indicators embedded in a CPI, thereby corrupting the indicator. When this consequence becomes publicly known, it threatens a CPI’s claim to epistemic authority and its role in securing transparency.

We do not consider just any political debate stimulated by a CPI to be a sign of politicisation. In fact, a clear symptom of successful depoliticisation is when political opponents coalesce around a CPI and use it to respond to a problem without reflecting upon the political consequences of relying upon that particular CPI. We empirically showcase these mechanisms via analysis of how three major CPIs have been politicised. Our analysis suggests that although (re)politicisation of individual CPIs is likely to be quite frequent, these mechanisms may very well leave intact and even reproduce the prominence of CPIs as a generic governance technology. Taken together, our analysis suggests that to go beyond identifying the depoliticising effects of CPIs, scholars should explore under what conditions and to what extent politicisation of CPIs occurs, and with what consequences for a policy field. Below, we first chart the rise of CPIs and the claim to increased transparency and accountability associated with it, then move on to discuss research that shows how CPIs obscure politics under the cloak of claims to objectivity. We then present our analytical framework, which aims to integrate both perspectives, developing and then empirically illustrating the three mechanisms of politicisation of CPIs.

The rise and promise of CPIs

Since the 1990s, the number of CPIs has grown almost exponentially, quadrupling in the 1990s and tripling over the following fifteen years.Footnote 14 Consequently, most policy fields in global governance are populated by several CPIs (Figure 1). CPIs are conceived as a ‘governance technology’ because they can generate pressures to comply with standards without requiring formalised relations of authority.Footnote 15 Notwithstanding other technological and political drivers, at least part of the appeal of ‘governance by numbers’ stems from addressing long-standing concerns about democratic deficits within global governance and its association with technocracy and rule by experts.Footnote 16 While CPIs certainly rely upon experts, they tend not to issue direct recommendations but instead establish rules of the game that states and other actors are encouraged to play.Footnote 17 Notwithstanding the social pressures that a CPI may catalyse, in theory CPIs respect the sovereignty of states and their citizens to determine whether or how to act upon the information conveyed by CPIs.Footnote 18

Figure 1. Rise of CPIs across policy fields.Footnote 19

The growth in CPIs forms part of a broader trend that the editors discuss in their introduction to this special issue, in terms of changes in how knowledge is produced and presented within global governance, which also ‘pluralises’ what counts as relevant knowledge and as ‘expertise’.Footnote 20 CPIs form part of a more general ‘data revolution’ that is said to be transforming the parameters of global governance,Footnote 21 making it easier to evaluate policy performance and thus increasing accountability of public bodies.Footnote 22

The transparency offered by CPIs is partly based on an accessible aesthetic, where charts and visual effects are used to present multifaceted phenomena in a way that appears quickly understandable. For instance, PISA circulates its results in the form of rankings that resemble Olympic-style league tables,Footnote 23 while the World Bank’s Ease of Doing Business Index published rankings of ‘fastest risers’. Other common aesthetic techniques include blacklists and letter grading, while colour-coded maps are ubiquitous in the CPI world. Moreover, the use of interactive dashboards conveys an aspiration towards inclusion and becomes a source of legitimacy and appeal independent of the content itself.Footnote 24 What these ‘knowledge packaging’ techniques have in common is their reliance upon visual scripts that are familiar and legible to most non-experts, facilitating quick comprehension.Footnote 25 As we discuss in more detail below, CPIs’ success in shaping public debate and policy-making depends not only upon claims to objective expertise (input) but also upon what we call their output: the ability to (ostensibly) make the phenomenon (more) transparent to a broader constituency beyond governments and experts.

CPIs’ democratising potential

Concurrent with the rise of CPIs within global governance, an interdisciplinary research agenda on the CPI phenomenon has burgeoned over the last decade. Major works examine the causes and consequences of this rise of indicators in global governance, including research from anthropologists,Footnote 26 sociologists,Footnote 27 international legal scholars,Footnote 28 and international relations scholars.Footnote 29 A central theme of research on CPIs is how their influence relies not necessarily upon providing valuable information to target governments but rather upon exerting social and economic pressure upon governments via domestic and transnational audiences that use CPIs in their advocacy.Footnote 30 Pioneering here has been Kelley and Simmons’s sustained theoretical and empirical workFootnote 31 exploring how, why, and when CPIs exert influence. Their findings suggest that CPIs democratise expertise by providing credible information that is accessible and usable by citizens and civil society. In this way, CPIs render government performance transparent and internationally comparable, thus potentially channelling domestic and international actors’ responses in ways that encourage governments to strive to improve their scores. In other words, the emphasis on transparent, legible information means that CPIs can better enable governments and civil society to monitor performance, identify best practices, mobilise public opinion, and ultimately make governance more data-driven.Footnote 32 Rather than black-boxing expertise, then, the quick and accessible nature of the data presented in CPIs is seen as enabling a far broader cross-section of society to engage in policy issues that might otherwise remain opaque and the preserve of experts.Footnote 33

The significance of non-experts’ role in translating CPIs into pressure and influence is well captured by Kelley and Simmons’s model (Figure 2). Two of the three mechanisms they identify for how CPIs exert power – via domestic politics or transnational pressure – rely upon intervening action by non-experts, using the CPI to directly pressure a government to take measures to improve its ranking. A major strength of this model is that it emphasises how self-consciously political agents in the public sphere respond to and act upon CPIs. Hence, Kelley and Simmons argue that ‘in responsive regimes’ such demands ‘might elicit policy change’ or at least provide incentives for the government to ‘claim they are addressing the issue’.Footnote 34 Conversely, they suggest ‘where institutions repress public input and suppress political demands, governments may respond not with reform, but by denigrating the GPI [CPI] or its creator’.Footnote 35 In other words, in liberal polities with well-functioning public spheres, CPIs should work as an effective conduit for new information, thereby influencing policy-making.

Figure 2. Kelley and Simmons’s mechanisms of indicators and policy change.Footnote 36

However, this model also uncritically reproduces CPIs’ claim to impartiality and their aspiration to remain ‘above politics’:Footnote 37 for example, ‘denigrating’ the ranking becomes a response associated with repressive regimes, while responding to rankings implies a ‘responsive regime’. It therefore overlooks how domestic actors (parties, NGOs, trade unions, etc.) within a liberal polity may also have good reasons to denigrate a ranking or oppose its use. It arguably also understates the degree to which authoritarian states may contest and politicise CPIs by seeking to undermine those they do not like and by constructing alternatives to advance their own agendas.Footnote 38 In other words, the arrows in Figure 2 may be reversed: rather than ‘information’ and pressure flowing one way from CPIs via users within transnational and domestic public spheres, actors in both responsive and repressive regimes may challenge and contest a CPI in good faith, thereby politicising it. As we discuss below, critical scholarship provides solid grounds for why actors would wish to contest a CPI and the use of CPIs in general, though limited empirical or theoretical research has been undertaken on whether and how this takes place in practice.Footnote 39

CPIs and the politics of expertise

That CPIs necessarily reduce multifaceted phenomena to de-contextualised and potentially oversimplified attributes is not lost on the already sizeable literature on this topic.Footnote 40 Critical scholarship points to how constructing a CPI is an inherently political process and that, despite the appearance of objectivity, human interpretation and thus politics are inescapable in determining what factors to count and how to weigh them.Footnote 41 From this perspective, CPIs necessarily embody a theory of what they claim to simply measure, and all theories rely on priors that have normative import.Footnote 42 What is rendered as mere ‘information’ in Kelley and Simmons’s model, then, has interpretation and thus politics baked in – even as its numerical presentation conceals this politics. The trick of CPIs is – in part – to present as universal what are in fact particularistic interests.Footnote 43

From this perspective, CPIs generate an illusion of expertise where the numerical and ‘science-y’ aesthetic of CPIs provides a false sense of certainty and objectivity when grappling with complex phenomena. Scholars have highlighted how indicators’ numerical form ‘hides and elides the politics of the construction’ and how the ‘development of CPIs often takes place behind closed doors by unaccountable experts insulated from domestic political processes’.Footnote 44 As Scott and Light argue, such indicators constitute a kind of ‘anti-politics machine’ that sweeps ‘vast realms of legitimate public debate out of the public sphere’, burying ‘vital politics in a series of conventions, measures, and assumptions that escape public scrutiny and dispute’.Footnote 45 Rather than democratising expertise, these scholars see CPIs as generating a ‘democratic deficit’, which ‘becomes acute when CPIs influence or are used as the basis to allocate resources and/or legitimate domestic political reforms’.Footnote 46 Rather than fostering transparency, then, this line of reasoning considers CPIs as a means of mystifying the political judgements that underpin them, which is said to enable a subtle form of control over the rules of the game. The so-called soft power of indicators thus sits in tension with democracy to the extent that actors being ranked by CPIs lack the agency to identify those rules and judgements, yet remain subject to the pressure of examination – a form of what Löwenheim calls ‘panoptic surveillance’.Footnote 47

Besides illuminating the politics of CPI construction, scholars have also documented major methodological shortcomings and perverse side effects of many of the most influential CPIs.Footnote 48 Taken together, this critical scholarship provides strong grounds to expect that politically engaged actors may oppose a CPI’s influence out of good-faith concerns about its utility for policy-making and its pernicious consequences for the public sphere.

The relationship between epistemic and democratic authority

Curiously, however, for all the research identifying the many shortcomings of CPIs, there has been little exploration of how CPIs may be contested and politicised on these very grounds in the public sphere. For example, existing research grasps how particular interests can be passed off as universal ones and thereby embed a form of political power cloaked in epistemic credibility. It has, however, tended to bracket how the authority and credibility of CPIs may also become subject to politicisation by those who use, and are targeted by, CPIs. Conversely, the idea that CPIs can have transparency- and democracy-enhancing features does highlight the uses of CPIs by reflexive political agents but overlooks the politics embedded in the very construction and use of CPIs. Indeed, this limitation is integral to the analytical scaffolding that each literature builds to analyse CPIs. Research that captures the politics of knowledge production has hitherto had limited interest in capturing how public debates about and uses of CPIs may enhance transparency and accountability: the identification of political choices being cloaked in technical language runs against the very premise that CPIs increase transparency.

Here, we take seriously the critical scholarship on CPIs that demonstrates the politics that go into their production, but we broaden the remit of these insights by exploring how these critiques enter the public debate and inform the use of CPIs. We show how actors are more reflexive about CPIs than the critical literature tends to assume. To identify the dynamics of this politicisation, we theorise how CPIs draw on two distinct sources of authority. The first is epistemic authority, which hails from the objectivity and precision that is claimed to flow from the use of ostensibly scientific methods and the disinterested expert who sits ‘above politics’. This is the ‘input’ part of the equation, which tracks closely to standard claims to expert authority.Footnote 49 The second is a form of ‘democratising’ authority stemming from a claim to render the world more transparent, thereby anchoring the authority of CPIs in the ‘output’ or value for their users, who can now more readily call upon evidence (from the CPI) to track, assess, and politically mobilise for governance measures. In sum, a CPI that is widely regarded as authoritative represents a successful claim not solely to epistemic authority but also to democratic authority.

However, these two sources of authority – epistemic and democratic – run in tension with one another: while the claim to expertise in terms of scientific methods is necessary to produce ‘objective’ knowledge, it simultaneously undercuts the claim to transparency and accountability because it places expertise in the hands of a few. The focus on these two sources of authority and the tension between them is important because claims to authority are never settled once and for all, and actors have to mobilise resources not only to establish authority but also to keep and defend it in the face of criticism.Footnote 50 Here, we draw on works that have highlighted the myopia and corruption of some indicators but go beyond these works in theorising how these effects of CPIs can – over time – prompt counter-responses from the international and domestic societies under a CPI’s purview.

Further, authority describes a relationship between actors, not between things, and so to the degree that a CPI is considered ‘authoritative’ it serves as a stand-in that shapes the relations between distinct actors.Footnote 51 This is important because a key reason why CPIs become politicised is that they become battlefields between actors with conflicting views and interests. We therefore submit that this tension between the claim to epistemic and democratising authority makes CPIs prone to distinct forms of politicisation and thus inherently unstable. As we elaborate below, this implies that the depoliticising effects of CPIs may very well prove temporary, and it is thus instructive to theorise how CPIs may be politicised over time. To this end, we specify three such forms of politicisation: scrutiny, competition, and corruption.

Three mechanisms of politicisation: Scrutiny, competition, and corruption

The first and most basic mechanism is public scrutiny: by making data legible and putting it into the public sphere, CPIs not only encourage their use but also facilitate critique. As the critical literature has well established, all CPIs suffer from blind spots and entail contestable value judgements, while many suffer from major issues with the validity of their data.Footnote 52 While the knowledge packaging may initially succeed in putting an objective gloss upon the data, the more a CPI is utilised by governments and used as a basis for allocating resources, the more likely it is to attract critics who highlight its shortcomings and value judgements. Thus, a CPI’s politicisation may stem from its popularity; little-used CPIs will not beget the same scrutiny. While this mechanism may often be expert-led (and critical CPI research forms a part of itFootnote 53), as our analysis below shows, it is quite possible for non-experts to contest the value judgements and even the validity of a CPI. Ultimately, such critiques can undermine a CPI’s claim to epistemic authority, which in turn reduces its value in providing transparency and accountability (democratic authority).

Competition: Given the authority premium associated with producing CPIs, other actors may produce competing CPIs that constitute politicisation by establishing an alternative with different value judgements and implied policy prescriptions. Broadly speaking, the competitive aspect can take two forms, direct and indirect. The first refers to CPIs that explicitly challenge an established indicator and present themselves as an alternative to it. This form of competition can be seen as one type of ‘statactivism’, whereby statistics form ‘part of the repertoire of contention’ for activists or those seeking to contest the status quo.Footnote 54 It typically takes the form of a claim to offer superior and more relevant data for publics and policymakers.Footnote 55 The second, more indirect form is when CPIs do not directly challenge other CPIs’ key concept but rather seek to measure an underappreciated but related dimension of a phenomenon, thus making tangible and actionable dimensions hitherto marginalised due to a lack of representation through a CPI.

Corruption: Success in generating social and economic incentives through a CPI begets incentives for gaming and the corruption of the CPI’s validity.Footnote 56 This association has long been established and was famously formulated by Campbell as the following law: ‘The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor’.Footnote 57 Campbell’s law leads to activities that include both efforts to game indicators – e.g., ‘teaching to the test’ – that are permitted but nonetheless undermine the validity of the measure, and more direct forms of cheating and corruption involving manipulation of data or false reporting. The first outcome is by far the better documented in the literature. As Pollack and colleagues show, orienting activities to optimise indicator scores and lobbying the producers of an indicator are rational responses to the rise of influential CPIs, ones that have even led to new forms of expertise in ‘analyst relations’ or ‘influencer relations’.Footnote 58

While numerous works have highlighted how leading indicators have indeed been gamed,Footnote 59 this becomes a mechanism of politicisation when knowledge of such gaming becomes public and threatens the authority of the CPI. As with the other mechanisms, the more successful the indicator in terms of acquiring users and thus influence, the more it will invite Campbell’s law, the corruption of the measure, and the risk that this corruption is revealed in the public sphere. Needless to say, if a CPI becomes publicly known to be corrupted, this will undermine its claims to foster transparency through objective, credible information. While any CPI used to govern will generate incentives to game the indicator,Footnote 60 the more widely they are circulated in the public sphere, the higher the likelihood that such gaming will be identified.

In what follows, we illustrate each of the three analytically distinct but not mutually exclusive mechanisms by analysing how different CPIs have evolved and become politicised over time via scrutiny, competition, or corruption. Our goal is to demonstrate the usefulness of specifying distinct forms of politicisation and thereby provide a framework for exploring politicisation and depoliticisation dynamics of CPIs within global governance policy fields.

Scrutiny: The technocratic illusion meets domestic politics

In 1995, at the first meeting of OECD ministers I attended, every country boasted of its own success and its own brilliant reforms. Now international comparisons make it clear who is failing. There is no place to hide.Footnote 61

This observation from the OECD’s head of education research captures nicely a key insight from works by scholars such as Kelley and Simmons:Footnote 62 domestic and international groups can use CPIs to hold their government to account by providing them with credible and comparable information pertaining to a state’s performance. Among the most famous examples of this dynamic was the round of ‘PISA shocks’ that swept across the OECD in 2001.Footnote 63 In the first survey, many countries finished lower than expected on the new OECD ‘Programme for International Student Assessment’ (PISA), which measures the literacy, maths, and science skills of fifteen-year-olds.Footnote 64 The PISA shocks were closely followed by a wave of policy reforms, as governments strove to fix or at least be seen to address the shortcomings ostensibly illuminated by PISA.Footnote 65 Besides transforming the global education policy field over the next decade, the PISA shocks also illustrate the critical literature’s key point about how CPIs may depoliticise political issues: rather than critically examining the assumptions underpinning PISA’s measure of education quality, national actors tended to take the results at face value as an objective measure of educational quality.Footnote 66

Yet, with twenty-five years and eight PISA surveys behind us, it is possible to take stock of the extent to which PISA has been able to remain ‘above politics’. At a minimum, sufficient evidence exists to show that, in some countries, PISA initially succeeded in establishing itself as a source of objective information – one that cut across party political divides – but its authority became steadily more contested through politicisation in the form of public scrutiny. We focus on the political dynamic surrounding PISA in one country, Norway, where the initial success of PISA as a measure of educational quality led to increased public scrutiny, which over time transformed the position and role of this CPI from an authoritative measure of educational quality to a contested one.

Certainly, during the first decade of PISA scores in Norway, parties on both the political right and left coalesced around the need to improve Norway’s PISA scores and fix the problem that PISA defined and identified. PISA scores revealed that Norway’s performance was ‘only’ average in the OECD, which was lower than expected and seen as a problem in need of a remedy across the political spectrum.Footnote 67 For example, when a new left government took over in 2005, it followed through with ‘Knowledge Elevation’ reforms that had been initiated under the previous conservative coalition government quite explicitly to address Norway’s performance in PISA.Footnote 68 As Norway’s PISA performance worsened through the decade, both sides made reversing the trend central to their education policies. For instance, Labour Prime Minister Jens Stoltenberg made PISA central to his New Year’s speech in 2008, telling voters that he had ‘got the message’ that Norway’s PISA scores must improve.Footnote 69 Not to be outdone, and indicating the cross-party consensus about the significance of PISA, the leader of the opposition (and future prime minister) Erna Solberg in 2009 offered a ‘guarantee’ that if elected her party would improve Norway’s PISA performance.Footnote 70 Indeed, political parties both accepted, and faced powerful political incentives to accept, Norway’s mediocre PISA performance as a problem in need of urgent remedial action.Footnote 71 In this way, PISA generated a political demand and leeway to undertake education reforms that would have proven far more difficult to introduce (e.g., national testing) in the absence of PISA. Ultimately, PISA became the gold standard for assessing education policy performance, standing seemingly above party politics.

Yet PISA’s position as an objective measure of the quality of Norwegian schools was steadily eroded as a growing number of critics contested PISA’s outsized role in Norwegian education, charging that PISA did not adequately capture educational quality and contained an underacknowledged ‘hidden’ normative or political agenda. These critical voices in the public sphere were initially academics but grew to include teachers and unions; by 2013, three of the seven major political parties were not only sceptical of PISA but had written into their party programmes that Norway should leave PISA altogether.Footnote 72 Notably, the substance of the criticisms reflects a politicisation of PISA’s underlying theory of education quality: contesting the normative assumptions that PISA used to define ‘good’ education. For instance, two academicsFootnote 73 reflected what would become a common view among Norwegian educators: that the Norwegian education system prioritises other qualities that are ‘hardly measured’ in PISA, such as encouraging students to ‘develop tolerance for other ethnic groups’ and ‘create an understanding of democracy and individual rights’. Thus, they suggested Norwegian schools likely performed better in a ‘number of places that are not reflected in the PISA survey’s ranking list’.Footnote 74

This scepticism towards PISA eventually spread to the press and political parties. Indeed, the ‘PISA shock’ itself has become reconstituted over the past decade as a lesson from history about the folly of paying too much heed to the PISA results.Footnote 75 For instance, reflecting on Norway’s best results since PISA began, Aftenposten columnist Helene Skjeggestad was openly sceptical. In an article titled ‘15 Years with the Test That Shows What You Want It to Show’, Skjeggestad looked back on Norway’s ‘PISA shock’ and how PISA had primarily been used to score cheap political points and as a blunt rhetorical instrument for legitimating reforms.Footnote 76 Reflecting upon the 2015 PISA scores, a leader in the same newspaperFootnote 77 prefaced its positive response by saying that Norwegians are ‘allowed to enjoy [the results], even if neither PISA or the TIMSS survey, which were presented last week, tells the whole truth about Norwegian schools’. Notably, where the government’s education policy in the 2000s let ‘PISA be the basis for almost every measure in their school policy’,Footnote 78 the government’s justification of its major education reforms in 2016 cited PISA just once, and even then was careful to add the caveat that PISA does ‘not cover all subjects in the school or the full breadth of the subjects’.Footnote 79

This vignette illustrates how PISA’s position was transformed by the very processes through which its influence operates, namely domestic politics. Its initial success in defining the problem of Norwegian education, which consecutive governments scrambled to fix with their education reforms, not only gave PISA influence over Norwegian education but also made it a focal point for public scrutiny. Being used so routinely to legitimate major policy reforms made PISA a target for the critical gaze of academics, the media, and ultimately political parties. The form of the politicisation flowed from its claim to authority anchored in ‘objective’ measures and data: the critique focused on PISA as a narrow measure that does not fully or adequately do justice to the qualities of Norwegian education. With time, the political left came to argue that PISA focused upon ‘skills’ at the expense of values. In other words, the Norwegian experience with PISA illustrates how the very success of a CPI in becoming influential within domestic politics leads to extra scrutiny, and to an emerging reflexivity within the target audience around the methodology of the CPI and the interests underpinning it. Notably, had Norway ignored or even downplayed PISA as simply one of many means of evaluating its schools, it would have been unlikely to receive much scrutiny or become politicised.

While we have focused on the Norwegian experience here, a review of the literature suggests that PISA has gradually been politicised in other countries as well. For example, a transnational movement of educational experts has mobilised against PISA and published open letters calling for a moratorium.Footnote 80 While this movement has not succeeded in getting its wish, it has become sufficiently significant that the Director of PISA has participated in public debates and responded in the media to counter this oppositional movement.Footnote 81

We argue that PISA’s re-politicisation is likely not an anomaly but predictable. When CPIs succeed in influencing policy, they will receive additional public scrutiny that will usually poke and prod at the methodology and thus the value judgements underpinning them.Footnote 82 Within pluralistic public spheres and competitive party systems, this is liable to produce a counter-movement that pushes back against the CPI’s influence. In other words, the very mechanism through which some major CPIs work – through domestic politics – eventually leads to a CPI’s sheen of objectivity being washed off.

Competition: The emergence of alternative CPI providers

There are multiple CPIs in virtually all policy fields: in health, there are the Global Health Security Index (GHSI), organised by Johns Hopkins and the Economist Intelligence Unit, and the Joint External Evaluation (JEE), organised by the World Health Organisation; on climate governance, there are the Climate Change Performance Index (CCPI) and the Transitions Performance Index; and on digital performance, there are the ICT Development Index (IDI) and the Productive Capacities Index (PCI). This growth in CPIs reflects, in part, how CPIs have become a key marker of an organisation’s perceived competence and importance in debates about how to govern a particular issue. For example, Kentikelenis and Seabrooke find that in the field of health preparedness, three different CPIs, advanced by three different constellations of actors, form part of a competitive dynamic in which each CPI represents its producers’ claim to authority to shape health governance in particular directions.Footnote 83

This competitive dynamic can take the form of explicit and direct challenges to other CPIs: the GHSI, for example, was developed with explicit criticism of the reliance on self-reporting that undergirds the WHO’s health preparedness indicator.Footnote 84 It also forms part of a broader trend whereby private actors can produce and use CPIs to establish authority on matters even as politically salient as pandemic preparedness.Footnote 85 This competition can also take a more indirect form, where the production and use of a CPI represent a deliberate effort to capture and make legible aspects of a phenomenon or problem that have hitherto been marginalised. Developing and using a CPI thus forms part of organisations’ position-taking and effort to establish authority by drawing attention to specific aspects of a phenomenon for which they have a formal mandate, particular experience, and resources to address.

Perhaps the oldest and most notable CPI – Gross Domestic Product (GDP) – is not recognised as such but functions as a naturalised and ‘objective’ anchor for policy debates. Developed as a measure for US economic policy-making by Simon Kuznets in the 1930s, it became integral to the work of the IMF and the World Bank following their establishment, and so emerged as a de facto global CPI in the post-WWII period.Footnote 86 Notably, the UNDP developed its own CPI in the early 1990s when it introduced the Human Development Report (HDR) and, along with it, the Human Development Index (HDI). The HDI was explicitly aimed at foregrounding aspects of development that other international organisations, including the World Bank, were perceived to ignore because of their primary focus on GDP as a measure of ‘development’.

As the economist Mahbub ul Haq and colleagues at the UNDP explained, the HDI aimed to make visible and measurable the ‘simple truth’ that the key goal of development is to ‘create an enabling environment for people to enjoy long, healthy and creative lives’.Footnote 87 The concept of ‘human development’ thus aimed at making visible and legible a broader concept of development, beyond GDP, to include health, education, and a more encompassing focus on capabilities as conceived by Amartya Sen. It represented, then, an attempt to advance a new concept of poverty and development, with the HDI as the vehicle for making it an ‘object’ to act upon and for assessing development with new parameters. Commenting on CPIs in general, Streeten notes that ‘Such indexes are useful in focusing attention and simplifying problems. They are eye-catching. They have considerable political appeal. They have a stronger impact on the mind and draw public attention more powerfully than a long list of indicators combined with a qualitative discussion’.Footnote 88 He then reflects on the HDI in particular, arguing that ‘The strongest argument in their favour is that they show up the inadequacies of other indexes, such as GNP, contributing to an intellectual muscle therapy that helps us to avoid analytical cramps… They redirect our attention from one set of items to others – in the case of the HDI, to the social sectors: nutrition, education and health’.Footnote 89
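To make concrete how such a composite translates a broad concept into a single number, consider a simplified sketch of the HDI’s construction (our abridged gloss on the UNDP’s post-2010 methodology, not a full specification): each dimension is first rescaled against fixed goalposts, and the resulting indices are then combined by a geometric mean.

```latex
% Abridged sketch of HDI aggregation (post-2010 UNDP methodology, simplified):
% each dimension d (health, education, income, the latter taken in logs) is
% rescaled against fixed minimum and maximum goalposts, then aggregated
% geometrically into a single index.
I_{d} = \frac{x_{d} - x_{d}^{\min}}{x_{d}^{\max} - x_{d}^{\min}},
\qquad
\mathrm{HDI} = \left( I_{\text{health}} \cdot I_{\text{education}} \cdot I_{\text{income}} \right)^{1/3}
```

Even in this abridged form it is visible where the value judgements sit: in the choice of dimensions, the goalposts, and the decision to aggregate geometrically rather than, say, by a weighted sum.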

At the same time, however, the HDI also represented an effort to position the UNDP vis-à-vis other international organisations by identifying an aspect of development over which the UNDP could claim authority. This formed part of a larger strategy on the part of the UNDP to move beyond technical assistance to development projects and become a ‘post-project agency’ with a broader remit on governance more generally.Footnote 90 In this way, the development of the HDI corresponds to what Cooley and Snyder call ‘flag planting’,Footnote 91 where actors develop and use CPIs as a tool to claim authority and secure their position vis-à-vis other actors in global governance. The same dynamic is reflected in the OECD’s Better Life Index (BLI), created in 2011 with the aim of going beyond GDP and the HDI to capture multiple dimensions of economic and social progress; it was conceived as a tool to capture ‘well-being’ in OECD countries and introduced sustainability as an integral part of the measure.Footnote 92 It was explicit in its criticism of GDP as a measure and was developed as part of the Commission on the Measurement of Economic Performance and Social Progress (CMEPSP).

The OECD and the UNDP, as well as the WHO and other international organisations, all compete for authority within specific policy fields and strategically select and produce expert knowledge to that end.Footnote 93 The production and use of CPIs are integral to this competition in that CPIs represent and serve as stand-ins for the epistemic authority that these organisations seek to convey.Footnote 94 At the same time, CPIs are used to mobilise and enrol other actors (states, civil society, firms) to compare, assess, navigate, and debate a particular issue. The politicisation of CPIs in the form of competition, then, leads to a proliferation of CPIs, as different organisations deploy them to mark territory and plant flags over particular issues. The result is, perhaps paradoxically, a peculiar form of politicisation in the sense that the ‘political’ aspect concerns what is measured and ranked – GDP v HDI v BLI – but the sum total is also that a broader array of dimensions of the same underlying issue – development, health, etc. – is captured and made available for others to use in public debates about these issues.

This competitive dynamic is in part a result of a broader feature of global governance, where different actors – international organisations, universities, foundations, firms, and civil society organisations – all seek to shape how different phenomena are defined and acted upon. As CPIs have become a key governance technology, such actors invest in them to signal not only their epistemic proficiency but also their authority over specific issues.Footnote 95 The specific form that this politicisation takes thus hails from how CPIs draw on a claim to both epistemic and democratic authority, where the latter offers an incentive for a wide range of actors to advance particular agendas by presenting them in the language of the former. We expect this dynamic to be more prevalent when several different international organisations have official mandates to act on the same issue (such as ‘development’ and ‘climate’). The broader trend in global governance towards so-called ‘low-cost institutions’Footnote 96 (informal clubs and partnerships between public and private actors) points in the same direction, as a ‘market’ for governance emerges in which authority over a particular issue is not determined by official mandates but is a result of competing claims from different actors. Conversely, we might expect this competitive dynamic to be less prevalent when a single international organisation is mandated by states to monitor and assess a particular issue (e.g., balance of payments and financial stability for the IMF), and when a CPI is based on outcome variables (e.g., life expectancy, mortality rates) rather than on the policies that may help produce these outcomes (e.g., governance quality, democracy, etc.).

Corruption: Incentivising gaming, undermining authority

A number of CPIs have direct, material consequences: Transparency International’s Corruption Perceptions Index has been used by governments as a basis for allocating aid.Footnote 97 Credit ratings are used by investors to assess risk and thus affect the cost of borrowing for countries. These CPIs have political effects by virtue of the pressure they exert, often independently of any domestic political actors’ assessment of their methodology.

A case in point is the Ease of Doing Business (EDB) index, which was highly influential in shaping developing countries’ efforts to attract foreign investment. Established in 2002, the EDB ranked states’ performance along ten equally weighted categories intended to assess the business environment in a country. The explicit goal was to provide a framework and incentive for states to improve the business environment and thereby stimulate growth. Aggregated and disaggregated results were published annually, thereby constructing a public competition among states. As Doshi and colleagues note, it had a 60 per cent market share in 2017, more than five times that of its nearest competitor, while the number of users of its website more than quadrupled between 2005 and 2015.Footnote 98 Moreover, research documented a positive empirical link between ranking on the index and foreign direct investment flows, as well as its influence upon investor sentiment and bureaucratic reputation.Footnote 99
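To illustrate what an equal-weight composite of this kind involves, the sketch below computes a toy ranking in Python. The category names, scores, and economies are hypothetical, and the World Bank’s actual ‘distance to frontier’ normalisation and underlying data were considerably more involved; the point is simply that the weighting and normalisation choices are where the value judgements enter.

```python
# Toy equal-weight composite, in the spirit of the EDB's ten equally weighted
# categories. All category names, scores, and economies are hypothetical;
# the World Bank's actual normalisation and data collection were more involved.

# Scores are assumed to be pre-normalised to a 0-100 scale (100 = best observed).
scores = {
    "Economy A": {"starting_a_business": 92, "getting_credit": 75, "paying_taxes": 60},
    "Economy B": {"starting_a_business": 70, "getting_credit": 88, "paying_taxes": 81},
    "Economy C": {"starting_a_business": 55, "getting_credit": 64, "paying_taxes": 95},
}

def composite(category_scores: dict) -> float:
    """Equal-weight average: every category counts the same towards the total."""
    return sum(category_scores.values()) / len(category_scores)

# Rank economies by composite score, highest first.
ranking = sorted(scores, key=lambda economy: composite(scores[economy]), reverse=True)

for rank, economy in enumerate(ranking, start=1):
    print(f"{rank}. {economy}: {composite(scores[economy]):.1f}")
```

Changing any of these choices – dropping a category, weighting categories unequally, altering the normalisation – would reshuffle the ranking without any change in the underlying business environment, which is precisely what makes such composites both contestable and gameable.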

The incentives and pressure that the EDB orchestrated prompted governments of various regime types to seek advice, and even hire consultants, from the World Bank to increase their rank. For instance, both Russia and India established explicit targets to improve their EDB rating,Footnote 100 while the rapid ascent of some countries indicates a conscious and successful effort to rise in the EDB.Footnote 101 In other words, the EDB appeared to be a roaring success. Yet despite this, the World Bank announced in 2021 that ‘after reviewing all the information available to date on Doing Business, including the findings of past reviews, audits, and the report the Bank released today on behalf of the Board of Executive Directors, World Bank Group management has taken the decision to discontinue the Doing Business report’.Footnote 102 Notably, the World Bank opened this press release by asserting that ‘Trust in the research of the World Bank Group is vital’.

To understand how and why the World Bank terminated its flagship CPI requires understanding the dynamic by which CPIs are subject to the potential corruptive effects of Campbell’s law and to subsequent politicisation. Crucial for our purposes here is that the World Bank’s efforts to encourage states to use, and be incentivised by, the EDB in their policy-making turned the EDB rankings into a game of their own, thereby producing a context susceptible to Campbell’s law. In other words, the EDB’s dominance as the world leader in its policy field led states to expend special effort competing in the ranking while not always playing by the spirit of the game.Footnote 103 Indeed, two types of corruption brought about the demise of the EDB: i) gaming, where targets of CPIs strategically change behaviour to score better but without addressing the substantive issue at hand, and ii) manipulation, where targets of CPIs seek to pressure the producers of a CPI to change or tweak data and indicators to improve their score. It was the revelation of both gaming and manipulation of the EDB that led to its dismantling.

Governments realised – including with the help of World Bank consultantsFootnote 104 – that it was possible to exploit the metric by which the EDB sought to measure the ease of doing business without necessarily improving that ease in practice. This has been well documented by scholars, who have drawn attention to several states that managed to improve their EDB ranking without a comparable rise in similar indicators. For instance, Schueth shows how Georgia’s record-breaking eighty-eight-place rise in the EDB was paralleled by a curiously static trajectory in the Global Competitiveness Index, which measures perceptions of business ease.Footnote 105 Notably, the problem of gaming the EDB was alluded to in the World Bank’s 2021 external review, which recommended that any overhauled EDB should:

Measure the de facto reality, not just de jure rules, facing a representative cross-section of firms. A long-standing concern with Doing Business shared by many stakeholders is that the focus on de jure regulation fails to capture the de facto reality of many businesses. The exclusive use of hypothetical case studies also fails to capture the diversity of firms and sectors within countries, and obscures critical cross-country differences. We recommend a substantial methodological shift in favour of more data collection from representative samples of actual business owners and operators on their de facto experiences of doing business.Footnote 106

While gaming was a recognised problem in the EDB, evidence of direct manipulation and pandering to political pressure from powerful donor states was the immediate trigger of the World Bank’s decision to terminate the EDB. A US-based law firm, WilmerHale, was hired to investigate data irregularities in the 2018 and 2020 editions of the EDB and found that ‘then-World Bank President Jim Yong Kim and then-World Bank CEO Kristalina Georgieva pressured staff to modify their methodology in the 2018 Doing Business Report to bolster China’s scores’. The report also noted that ‘senior Bank staff likely interfered with data pertaining to Saudi Arabia, the United Arab Emirates, and Azerbaijan to influence the 2020 edition of the report’.Footnote 107 In other words, the stakes involved in ranking high in the EDB led to pressures to manipulate the scores. It is a textbook example of Campbell’s law, where gaming and manipulation of the measure take priority over change in actual practice. Again, this shows just how CPIs are always of and in the world that they seek to stand outside of or ‘above’.Footnote 108

Unlike the first mechanism, where success begets scrutiny that identifies pre-existing methodological limitations and normative assumptions, here success begets the corruption of the measure over time, rendering the relationship between the estimator and the estimand increasingly obscure. Put differently, the success of the EDB in shaping behaviour raised the question of whether the indicator had replaced the goal it was meant to track, which in turn raised questions about the impartiality, and authority, of the World Bank.Footnote 109
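The kind of cross-indicator comparison Schueth performs for Georgia can be stated in simple terms: a country whose rank improves sharply on the incentivised index while barely moving on a related measure is a candidate for this estimator-estimand drift. A minimal sketch of that comparison, using hypothetical rank data and an arbitrary threshold, might look as follows.

```python
# Toy cross-indicator comparison of the kind described above: flag countries
# whose rank improves sharply on an incentivised index (a stand-in for the EDB)
# while barely moving on a related index (a stand-in for a perception-based
# measure such as the Global Competitiveness Index). All values are hypothetical.

edb_rank = {            # (rank in year t0, rank in year t1); lower rank = better
    "Country X": (112, 24),
    "Country Y": (60, 55),
    "Country Z": (90, 85),
}
related_rank = {
    "Country X": (95, 92),
    "Country Y": (58, 50),
    "Country Z": (88, 80),
}

def places_climbed(ranks) -> int:
    """Improvement between the two observation years (positive = climbed)."""
    start, end = ranks
    return start - end

DIVERGENCE_THRESHOLD = 30  # arbitrary cut-off, chosen for illustration only

for country in edb_rank:
    gap = places_climbed(edb_rank[country]) - places_climbed(related_rank[country])
    if gap > DIVERGENCE_THRESHOLD:
        print(f"{country}: EDB rise outpaces the related index by {gap} places - worth scrutiny")
```

Such a heuristic cannot prove gaming, but it illustrates how the very comparability that CPIs create also supplies the raw material for detecting, and publicly contesting, a corrupted measure.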

We recognise that a CPI may become corrupted without being politicised in the way we conceptualise it here, through public critique and contestation. We think, however, that this is unlikely, because while the chance of corruption increases with higher stakes, so too do public interest and coverage, including of suspicious results and of attempts by target countries to apply pressure to the producers of CPIs. Given that such corruption involves numerous people and may even be advertised by consultants, the chances of it remaining fully hidden appear slim. The question is rather: to what extent and with what consequences do such processes lead to the CPI’s politicisation?

Conclusion: Pluralisation of expertise and CPIs as a technology of governance

The era of indicators in global governance may herald the depoliticisation of policy fields and thereby narrow the scope for politics within national and international arenas. As argued above, however, there are countervailing forces in the form of the politicisation of CPIs through public scrutiny, competition, and corruption. PISA was initially depoliticised: it structured political decisions and political debate about education. It was subsequently politicised, however, by public scrutiny of the biases and limits of PISA as a measure of education quality, with the result that it lost its status as an impartial and authoritative measure around which to organise educational policy. CPIs in both development and health have also been politicised in the form of competition, where actors advance different CPIs that seek to measure and rank countries on similar issues but focus on different dimensions and use different methods. Finally, we saw how the World Bank’s Ease of Doing Business was politicised and ultimately abandoned following a scandal around its corruption. Here, the success of the EDB and the incentives associated with improved scores led actors to focus on the estimator rather than the estimand, causing a backlash and ultimately a reform of the EDB to secure the authority and credibility of the World Bank.

Across scrutiny, competition, and corruption, our discussion shows how CPIs’ dual claim to epistemic and democratic authority stands in a productive but unstable tension. Scrutiny is where the two often meet: public and expert critique is intrinsic to both science and democracy, yet it also exposes the instability of epistemic authority because the assumptions and measures embedded in CPIs remain open to revision. As in established scientific practice, there is continuous debate and critique, and so no fact, data point, or theory is ever stable and secure. But scrutiny may also slide into efforts to impute mere bias, where a hermeneutics of suspicion kicks in and all knowledge claims are reduced to the (hidden) interests that drive the production of CPIs. Competition also combines the two: rival CPIs can improve knowledge by offering alternative operationalisations and widen democratic choice, but they can also be used strategically, including by authoritarian states, to displace epistemic contestation with rivalry over ideology expressed via CPIs.Footnote 110 Corruption is something akin to a success trap: CPIs that are authoritative steer decisions and invite gaming (as per Campbell’s law) which, when it becomes visible, undermines both the claim to objectivity and the promise of transparency. Taken together, we see how CPIs – like other governance technologies – are double-edged swords: they can stabilise and authoritatively define the parameters for governance, but the very thing that allows them to do so – the combination of epistemic and democratic authority – can also be flipped around to destabilise them.

Despite these political dynamics – CPIs being challenged, changed, and/or replaced over time – we see little indication that CPI use in general is becoming discredited. What can account for this seeming paradox? We suggest that it can be explained by differentiating between CPI-specific politicisation, on the one hand, and politicisation of CPIs as a generic governance technology, on the other. Politicisation of a given CPI (CPI-specific politicisation) can leave unscathed the use of CPIs as a governance technology in general and even reproduce it by encouraging the refinement of existing CPIs and the development of new ones. It is thus similar to the dynamic characteristic of scientific knowledge production, where authority is reproduced, precisely, via critique. By contrast, charges that a specific CPI, as a mode of governance, closes off political debate and hides political choices under the cloak of ‘methodology’ constitute a more fundamental politicisation. Yet the claim that CPIs make knowledge available to all is a political trump card: whatever biases or hidden agendas may be embedded in a CPI as a result of the expertise that goes into its production, these can be discussed and thus rendered visible and contestable through public debate. In this way, even this kind of politicisation of specific CPIs helps reproduce the legitimacy of CPIs as a governance technology. Nonetheless, and as we discuss below, the agency to politicise is unlikely to be evenly distributed across stakeholders or policy fields, and thus taking a laissez-faire approach to the era of indicators would prove unwise. Moreover, it remains an open question whether the competition mechanism will lead to a deepening of the knowledge base and greater reflexivity around CPIs, or generate fragmentation and forum shopping that ultimately work against collective action.

Besides contributing to the special issue’s focus on the consequences of the pluralisation of expertise, this article also aims to spur CPI research to move beyond critique of governance indicators. As we have seen, over recent decades this literature has advanced our empirical understanding of the problems associated with individual CPIs and theorised CPIs’ generic shortcomings, laying the basis for a well-grounded scepticism about their utility within global governance. Yet this research has for the most part stopped at identifying pathologies – depoliticisation, gaming, and so on – rather than inquiring into whether and how they persist. One consequence is that this research becomes fatalistic about the possibility of escaping or mitigating the worst consequences of governance by numbers. Problematising this assumption, this article has shown that the empirical record provides sufficient grounds to question this potentially self-fulfilling gloominess.

While we have focused here on theorising and empirically illustrating how the depoliticising effects of CPIs can be undone, the logical next step is for research to explore the conditions under which CPIs are likely to be either de- or re-politicised. We get the ball rolling below by suggesting three conditions that appear likely to affect the degree of CPI politicisation: the extent to which a CPI is used to allocate resources, regime type and the nature of the relevant public sphere, and the barriers to entry for creating a CPI in a policy field.

Our analysis suggests that the more successfully a CPI establishes its authority, the more prone it will be to politicisation. Most obviously, CPIs that go unused and exert little influence on resource allocation will not attract much scrutiny or incentivise competition or corruption. Beyond these scope conditions – which set aside a number of largely inconsequential CPIs – regime type seems likely to matter: the availability of information about the very existence, and use, of CPIs is surely correlated with pluralistic public spheres. However, a number of CPIs operate in a highly circumscribed ‘international public sphere’ dominated by professionals who represent governments, international organisations, firms, and civil society organisations. Citizens of the countries being ranked are often not themselves represented in debates about the validity, potential biases, or perverse consequences of particular CPIs. In this way, the idea that CPIs produce transparency and have a ‘democratising’ effect, or that they represent a pluralisation of expertise, may be limited to a universe of professionals who claim to speak on behalf of others. Lastly, barriers to entry – the costs of developing similarly credible indicators – affect the mechanisms we have discussed here. This, of course, raises the spectre of CPIs reproducing hierarchies in knowledge production if the requisite resources are primarily located in the global north.Footnote 111

Finally, this article has explicitly worked with critical scholars’ normative assumption that politicisation is desirable and depoliticisation a problem. This assumption is certainly defensible, but it warrants some caveats, and it is useful to spell out the downsides. At one extreme, any CPI that becomes recognised as the gold standard for measuring policy quality will marginalise some issues and thus generate bias and depoliticising effects. Were PISA, for example, to be accepted indefinitely as the default measure of education quality, it would impoverish the international understanding of what successful education entails. At the other extreme, were all CPIs reduced to a reflection of the political interests that informed their production, this too would prove unhelpful, generating a hermeneutics of suspicion and leaving any effort at data collection and analysis to be discounted as ‘pure politics’.

In our view, there is a plausible middle way that avoids throwing the baby out with the bathwater: the conscious development of CPI literacy among policymakers, civil society, citizens, and business, and the institutionalisation of processes that demand reflexive engagement with CPIs. CPI literacy entails understanding the generic limitations of CPIs (myopia, perverse consequences, democratic deficits) and using this knowledge to inform a reflexive, judicious use of them. To facilitate this, CPI users could take inspiration from the emerging ‘data humanism’ agenda.Footnote 112 This agenda promotes using visual design to present complex data in ways that acknowledge and embrace its complexity and imperfections. While data humanism would help foster critical reflexivity around CPI use, it is no panacea. Actors that use CPIs in decision-making should also build periodic reviews into their practices to check for gaming and other perverse consequences, while routinely surveying the policy field for alternative measures that can complement and contextualise their preferred CPI. Crucially, such processes should be transparent, go beyond acknowledging the potential limits of CPIs, and find ways to extend the possibility of politicisation to those whom CPIs claim to serve: making these agents of accountability more accountable in more contexts. This appears urgent in order to address the likely unequal opportunity to politicise a CPI through scrutiny and competition.

Acknowledgements

First and foremost, we would like to thank Annabelle Littoz-Monnet, Juanita Uribe, and Leandro Montes Ruiz for giving us the opportunity to join and engage with their pluralisation of expertise project and for their thorough and insightful feedback on our paper throughout the process. We would also like to thank the special issue workshop participants for their detailed and incisive comments. The article has also benefited greatly from feedback from John de Bahl and Lucas de Oliveira Paes as well as the research assistance of Margrete Seiersnes. Last but not least, we would like to thank the anonymous reviewers for their consistently helpful feedback, as well as the editorial team at RIS for their professional and efficient guidance.

Funding

The article was supported by the Norwegian Research Council, Norwegian Center for Geopolitics, project number 345131, and the European Union (ERC, Navigator, Grant number 101164505). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.

References

1 This influence includes the direct and immediately observable ‘shocks’ associated with PISA as well as the broader and diffuse structuring of policy fields and problem-definitions. See A. Cooley and J. Snyder (eds), Ranking the World: Grading States as a Tool of Global Governance (Cambridge University Press, 2015).

2 Judith G. Kelley and Beth A. Simmons, ‘Introduction: The power of global performance indicators’, International Organization, 73:3 (2019), pp. 491–510; Kevin E. Davis, Benedict Kingsbury, and Sally E. Merry, ‘Indicators as a technology of global governance’, Law & Society Review, 46:1 (2012), pp. 71–104.

3 Sally E. Merry, The Seductions of Quantification: Measuring Human Rights, Gender Violence, and Sex Trafficking (University of Chicago Press, 2016).

4 Judith G. Kelley and Beth A. Simmons (eds), The Power of Global Performance Indicators (Cambridge University Press, 2020).

5 For example, the World Bank’s Country Policy and Institutional Assessment (CPIA) is a CPI that is used to allocate funding to low- and middle-income countries, where a better score yields a larger allocation of development assistance. Ole Jacob Sending and Jon Harald Sande Lie, ‘The limits of global authority: World Bank benchmarks in Ethiopia and Malawi’, Review of International Studies, 41:5 (2015), pp. 993–1010.

6 Marlee Tichenor, Sally E. Merry, Sotiria Grek, and Justyna Bandola-Gill, ‘Global public policy in a quantified world: Sustainable Development Goals as epistemic infrastructures’, Policy and Society, 41:4 (2022), pp. 431–44.

7 E.g., Morten Jerven, Poor Numbers: How We Are Misled by African Development Statistics and What to Do about It (Cornell University Press, 2013); Hana Attia and Julia Grauvogel, ‘Monitoring the monitor? Selective responses to human rights transgressions’, International Studies Quarterly, 67:2 (2023), sqad014; Svein Sjøberg, ‘PISA-syndromet. Hvordan norsk skolepolitikk blir styrt av OECD’, Nytt Norsk Tidsskrift 31:1 (2014), pp. 30–43; Staffan Andersson and Paul M. Heywood, ‘The politics of perception: Use and abuse of Transparency International’s approach to measuring corruption’, Political Studies, 57:4 (2009), pp. 746–67.

8 See for example Cooley and Snyder (eds), Ranking the World; André Broome and Joel Quirk, ‘The politics of numbers: The normative agendas of global benchmarking’, Review of International Studies, 41:5 (2015), pp. 813–18; André Broome and Joel Quirk, ‘Governing the world at a distance: The practice of global benchmarking’. Review of International Studies, 41:5 (2015), pp. 819–41; Davis et al., ‘Indicators as a technology of global governance’; Tero Erkkilä, ‘Global indicators and AI policy: Metrics, policy scripts, and narratives’, Review of Policy Research, 40:5 (2023), pp. 811–39; John Berten and Matthias Kranke, ‘Governing global challenges through quantified futures’, The British Journal of Politics and International Relations, 26:3 (2024), pp. 599–621.

9 Sally Engle Merry and John M. Conley, ‘Measuring the world: Indicators, human rights, and global governance’, Current Anthropology, 52:S3 (2011), pp. 83–95; Broome and Quirk, ‘The politics of numbers’; Broome and Quirk, ‘Governing the world at a distance’; André Broome, Alexandra Homolar, and Matthias Kranke, ‘Bad science: International organizations and the indirect power of global benchmarking’, European Journal of International Relations, 24:3 (2018), pp. 514–39.

10 See Merry, The Seductions of Quantification, p. 31; Paul Beaumont and Ann E. Towns, ‘A relational approach to the rankings game’, International Studies Review, 23:4 (2021), p. 1432.

11 Merry, The Seductions of Quantification, p. 31.

12 E.g., Andrea Mennicken and Robert Salais (eds), The New Politics of Numbers: Utopia, Evidence and Democracy (Springer, 2022); Kelley and Simmons, ‘Introduction: The power of global performance indicators’; Kelley and Simmons, The Power of Global Performance Indicators.

13 Our definition here is inspired by inverting what Scott and Light describe as the depoliticising effects of quantified performance indicators. James C. Scott and Matthew A. Light, ‘The misuse of numbers: Audits, quantification, and the obfuscation of politics’, in J. Purdy, A. Kronman, and C. Farrar (eds), Democratic Vistas: Reflections on the Life of American Democracy (Yale University Press, 2004).

14 Kelley and Simmons, ‘Introduction: The power of global performance indicators’.

15 Cooley and Snyder (eds), Ranking the World. See also Nikolas Rose and Peter Miller, ‘Political power beyond the state: Problematics of government’, British Journal of Sociology, 43:2 (1992), pp. 173–205.

16 Editors’ introduction; Judith G. Kelley and Beth A. Simmons, ‘Governance by other means: Rankings as regulatory systems’, International Theory, 13:1 (2021), pp. 169–78.

17 Kelley and Simmons, ‘Governance by other means’, pp. 174–6.

18 See Oded Löwenheim, ‘Examining the state: A Foucauldian perspective on international “governance indicators”’, Third World Quarterly, 29:2 (2008), pp. 255–74.

19 Kelley and Simmons, ‘Introduction: The power of global performance indicators’, p. 494.

20 Annabelle Littoz-Monnet, Leandro Montes Ruiz, Juanita Uribe, and Astrid Skjold, ‘The politics of the pluralization of expertise: Introduction’, Review of International Studies (forthcoming).

21 Laura Mann, ‘Left to other peoples’ devices? A political economy perspective on the big data revolution in development’, Development and Change, 49:1 (2018), pp. 3–36.

22 Kelley and Simmons, ‘Introduction: The power of global performance indicators’.

23 Paul D. Beaumont, The Grammar of Status Competition: International Hierarchies and Domestic Politics (Oxford University Press, 2024).

24 Justyna Bandola-Gill, Sotiria Grek, and Matteo Ronzani, ‘Beyond winners and losers: Ranking visualizations as alignment devices in global public policy’, Research in the Sociology of Organizations, 74 (2021), pp. 27–52.

25 Beaumont, The Grammar of Status Competition, p. 130; Littoz-Monnet et al., ‘Introduction’.

26 E.g., Merry, The Seductions of Quantification.

27 E.g., Andrea Mennicken and Wendy N. Espeland, ‘What’s new with numbers? Sociological approaches to the study of quantification’, Annual Review of Sociology, 45 (2019), pp. 223–45.

28 E.g. Davis et al., ‘Indicators as a technology of global governance’.

29 Cooley and Snyder (eds), Ranking the World; Alexander Cooley and Jack Snyder, ‘Rank has its privileges: How international ratings dumb down global governance’, Foreign Affairs, 94 (2015), pp. 101–8; Broome et al., ‘Bad science’; Judith G. Kelley and Beth A. Simmons, ‘Politics by number: Indicators as social pressure in international relations’, American Journal of Political Science, 59:1 (2015), pp. 55–70; Kelley and Simmons, ‘Introduction: The power of global performance indicators’; Kelley and Simmons, The Power of Global Performance Indicators.

30 Kelley and Simmons, ‘Politics by number’; Kelley and Simmons, The Power of Global Performance Indicators; Cooley and Snyder (eds), Ranking the World; Cooley and Snyder, ‘Rank has its privileges’.

31 Kelley and Simmons, ‘Politics by number’; Kelley and Simmons, ‘Introduction: The power of global performance indicators’; Kelley and Simmons, ‘Governance by other means’.

32 E.g., Børge Brende and Bent Høie, ‘Towards evidence-based, quantitative Sustainable Development Goals for 2030’, The Lancet, 385:9964 (2015), pp. 206–8.

33 Kelley and Simmons, ‘Politics by number’. See Leonard Seabrooke and Duncan Wigan, ‘How activists use benchmarks: Reformist and revolutionary benchmarks for global economic justice’, Review of International Studies, 41:5 (2015), pp. 887–904.

34 Kelley and Simmons, ‘Introduction: The power of global performance indicators’, p. 500.

35 Kelley and Simmons, ‘Introduction: The power of global performance indicators’, p. 500.

36 Kelley and Simmons, ‘Politics by number’, p. 38.

37 Littoz-Monnet et al., ‘Introduction’.

38 Alexander Cooley and Alexander Dukalskis, Dictating the Agenda: The Authoritarian Resurgence in World Politics (Oxford University Press, 2025); Tetyana Lokot and Mariëlle Wijermars, ‘The politics of internet freedom rankings’, Internet Policy Review, 12:2 (2023), pp. 1–35.

39 In the same ballpark but from the opposite angle, Ringel has explored how CPIs navigate contestation; meanwhile, Steffek and Wegmann identify and critically discuss reflexivity around global governance standards in general. Leopold Ringel, ‘Challenging valuations: How rankings navigate contestation’, Zeitschrift für Soziologie, 50:5 (2021), pp. 289–305; Jens Steffek and Philip Wegmann, ‘The standardization of “good governance” in the age of reflexive modernity’, Global Studies Quarterly, 1:4 (2021), ksab029.

40 Davis et al., ‘Indicators as a technology of global governance’, p. 9; Nehal Bhuta, ‘Measuring stateness, ranking political orders: Indices of state fragility and state failure’, in A. Cooley and J. Snyder (eds), Ranking the World: Grading States as a Tool of Global Governance (Cambridge University Press, 2015), pp. 85–111.

41 Merry, The Seductions of Quantification; Sakiko Fukuda‐Parr and Desmond McNeill, ‘Knowledge and politics in setting and measuring the SDGs: Introduction to special issue’, Global Policy, 10:S1 (2019), pp. 5–15.

42 Beaumont and Towns, ‘A relational approach’; cf. Robert W. Cox, ‘Social forces, states and world orders: Beyond international relations theory’, Millennium, 10:2 (1981), pp. 126–55.

43 Pierre Bourdieu, Language and Symbolic Power (Polity Press, 1991), p. 167.

44 Beaumont and Towns, ‘A relational approach’, p. 1472; see also Merry and Conley, ‘Measuring the world’; Cooley and Snyder, ‘Rank has its privileges’.

45 Scott and Light, ‘The misuse of numbers’, p. 119.

46 Beaumont and Towns, ‘A relational approach’, p. 1472; Broome et al., ‘Bad science’, pp. 17–20.

47 Löwenheim, ‘Examining the state’, p. 258.

48 See, for example, Yong Zhao, ‘Two decades of havoc: A synthesis of criticism against PISA’, Journal of Educational Change, 21 (2020), pp. 245–66; Jerven, Poor Numbers; Attia and Grauvogel, ‘Monitoring the monitor’.

49 Cf. Sheila Jasanoff, ‘The practices of objectivity in regulatory science’, in Charles Camic, Neil Gross, and Michèle Lamont (eds), Social Knowledge in the Making (University of Chicago Press, 2011), pp. 307, 312–17; Ole Jacob Sending, The Politics of Expertise: Competing for Authority in Global Governance (University of Michigan Press, 2015); Annabelle Littoz-Monnet, The Politics of Expertise in International Organizations (Routledge, 2017).

50 Sending, The Politics of Expertise.

51 Cf. R. B. Friedman, ‘On the concept of authority in political philosophy’, in Joseph Raz (ed.), Authority (New York University Press, 1990).

52 Cooley and Snyder (eds), Ranking the World; Broome et al., ‘Bad science’; Merry, The Seductions of Quantification.

53 E.g. Sjøberg, ‘PISA-syndromet’.

54 Isabelle Bruno, Emanuel Didier, and Tommaso Vitale, ‘Statactivism: Forms of action between disclosure and affirmation’, Partecipazione e conflitto-Participation and Conflict, 7:2 (2014), pp. 198–220 (p. 200).

55 E.g. Seabrooke and Wigan, ‘How activists use benchmarks’.

56 To be clear, by corruption of the CPI’s validity we mean the process by which activities undertaken by the agents being assessed undermine the relationship between the indicator and the underlying quality that it seeks to measure. This need not involve unlawful or nefarious activities carried by the term corruption in the legal sense.

57 Donald T. Campbell, ‘Assessing the impact of planned social change’, Evaluation and Program Planning, 2:1 (1979), pp. 67–90 (p. 79).

58 Neil Pollock, Luciana D’Adderio, Robin A. Williams, and Ludovic Leforestier, ‘Conforming or transforming? How organizations respond to multiple rankings’, Accounting, Organizations and Society, 64:2 (2018), pp. 55–68 (p. 56).

59 E.g., Sam Schueth, ‘Assembling international competitiveness: The Republic of Georgia, USAID, and the Doing Business project’, Economic Geography, 87:1 (2011), pp. 51–77; Wendy Nelson Espeland and Michael Sauder, Engines of Anxiety: Academic Rankings, Reputation, and Accountability (Russell Sage Foundation, 2016).

60 E.g., Jon Harald Sande Lie, ‘Performing compliance with development indicators: Brokerage and transnational governance in aid partnerships’, Social Anthropology, 28:4 (2020), pp. 929–43.

61 Economist, ‘Top of the Class’, The Economist (26 June 2008), available at: {https://www.economist.com/international/2008/06/26/top-of-the-class}.

62 Kelley and Simmons, ‘Introduction: The power of global performance indicators’.

63 Xavier Pons, ‘Fifteen years of research on PISA effects on education governance: A critical review’, European Journal of Education, 52:2 (2017), pp. 131–44.

64 Tonia Bieber and Kerstin Martens, ‘The OECD PISA study as a soft power in education? Lessons from Switzerland and the US’, European Journal of Education, 46:1 (2011), pp. 101–16.

65 Pons, ‘Fifteen years of research on PISA effects’.

66 See Sjøberg, ‘PISA-syndromet’; Pons, ‘Fifteen years of research on PISA effects’.

67 Beaumont, The Grammar of Status Competition.

68 Sjøberg, ‘PISA-syndromet’.

69 Quoted in Helene Skjeggestad, ‘15 år med testen som viser det du vil den skal vise’, Aftenposten (4 December 2016), available at: {https://www.aftenposten.no/meninger/kommentar/i/kVBmB/15-aar-med-testen-som-viser-det-du-vil-den-skal-vise-heleneskjeggestad}.

70 Skjeggestad, ‘15 år med testen’.

71 Beaumont, The Grammar of Status Competition.

72 Beaumont, The Grammar of Status Competition.

73 J. Johnson and P. Østerud, ‘Medienes fordreide skolebilde’, Aftenposten (3 April 2002), available at: {https://retriever.no/no/product-mediearkivet-atekst}.

74 See also, e.g., S. Sjøberg, ‘Hva tester Pisa?’, Aftenposten (17 December 2007), available at: {http://folk.uio.no/sveinsj/PISA-kronikker-Sjoberg-des2007.pdf}.

75 Beaumont, The Grammar of Status Competition.

76 Skjeggestad, ‘15 år med testen’.

77 Aftenposten, ‘Aftenposten mener: Naturfaget må styrkes’ (6 December 2016), available at: {https://www.aftenposten.no/meninger/leder/i/Vvey1/aftenposten-mener-naturfaget-maa-styrkes}.

78 Sjøberg, ‘PISA-syndromet’.

79 Kunnskapsdepartementet, ‘Melding til Stortinget: Fag – Fordypning – Forståelse. En fornyelse av Kunnskapsløftet’, Meld. St. 28 (2016), p. 13.

80 E.g., Guardian [open letter], ‘OECD and Pisa Tests Are Damaging Education Worldwide – Academics’, Guardian (6 May 2014), available at: {https://www.theguardian.com/education/2014/may/06/oecd-pisa-tests-damaging-education-academics}.

81 E.g., Yong Zhao and Andreas Schleicher, ‘Panel Debate | Schools & Academies Show Birmingham 2018’, YouTube. Schools and Academies Show (2018), available at: {https://www.youtube.com/watch?v=Im2N-9KIO90}.

82 See Ringel, ‘Challenging valuations’.

83 Alexander E. Kentikelenis and Leonard Seabrooke, ‘Governing and measuring health security: The global push for pandemic preparedness indicators’, Global Policy, 13:4 (2022), pp. 571–8.

84 Kentikelenis and Seabrooke, ‘Governing and measuring health security’.

85 Kentikelenis and Seabrooke, ‘Governing and measuring health security’; Julian Eckl and Tine Hanrieder, ‘The political economy of consulting firms in reform processes: The case of the World Health Organization’, Review of International Political Economy, 30:6 (2023), pp. 2309–32.

86 Roya Wolverson, ‘GDP and economic policy’, Council on Foreign Relations (7 August 2013), available at: {https://www.cfr.org/article/gdp-and-economic-policy}.

87 UNDP, Human Development Report: Concept and Measurement of Human Development (New York, 1990).

88 P. Streeten, ‘Human development: Means and ends’, The Bangladesh Development Studies, 21:4 (1993), pp. 65–76 (p. 69).

89 Streeten, ‘Human development’, p. 69.

90 Morten Bøås and Desmond McNeill, Global Institutions and Development: Framing the World (Routledge, 2004).

91 Cooley and Snyder (eds), Ranking the World, pp. 21–2.

92 Romina Boarini, Alexandre Kolev, and Allister McGregor, ‘Measuring well-being and progress in countries at different stages of development: Towards a more universal conceptual framework’, OECD Development Centre Working Papers, No. 325 (2014), p. 1.

93 Fabrizio De Francesco and Edoardo Guaschino, ‘Reframing knowledge: A comparison of OECD and World Bank discourse on public governance reform’, Policy and Society, 39:1 (2020), pp. 113–28.

94 Mike Zapp, ‘The authority of science and the legitimacy of international organisations: OECD, UNESCO and World Bank in global education governance’, Compare: A Journal of Comparative and International Education, 51:7 (2021), pp. 1022–41.

95 Leonard Seabrooke and Lasse F. Henriksen (eds), Professional Networks in Transnational Governance (Cambridge University Press, 2017).

96 Kenneth W. Abbott and Benjamin Faude, ‘Choosing low-cost institutions in global governance’, International Theory, 13:3 (2021), pp. 397–426.

97 Andersson and Heywood, ‘The politics of perception’.

98 Rush Doshi, Judith G. Kelley, and Beth A. Simmons, ‘The power of ranking: The ease of doing business indicator and global regulatory behavior’, International Organization, 73:3 (2019), pp. 611–43.

99 Adrian Corcoran and Robert Gillanders, ‘Foreign direct investment and the ease of doing business’, Review of World Economics, 151 (2015), pp. 103–26; Doshi et al., ‘The power of ranking’.

100 Lisa Shmulyan, ‘Manipulation of the World Bank’s Ease of Doing Business Index’, LSE International Development Blog (5 October 2021), available at: {https://blogs.lse.ac.uk/internationaldevelopment/2021/10/05/manipulation-of-the-world-banks-ease-of-doing-business-index/?shared=email&msg=fail}.

101 Doshi et al., ‘The power of ranking’.

102 World Bank Group, ‘World Bank Group to Discontinue Doing Business Report’, Statement (16 September 2021), available at: {https://www.worldbank.org/en/news/statement/2021/09/16/world-bank-group-to-discontinue-doing-business-report}.

103 See Schueth, ‘Assembling international competitiveness’.

104 Schueth, ‘Assembling international competitiveness’.

105 Schueth, ‘Assembling international competitiveness’.

106 Laura Alfaro et al., ‘Doing Business: External Panel Review. Final Report. September 1st’ (World Bank Group, 2021).

107 Rebecca Nelson and Martin A. Weiss, ‘The World Bank’s Doing Business Report’ (Congressional Research Service, 2021), p. 2.

108 See Beaumont and Towns, ‘A relational approach’.

109 Illustrating how the mechanisms are not mutually exclusive, the Ease of Doing Business also endured considerable scrutiny from the International Trade Union Confederation, which contested the EDB’s early ‘Employing Workers’ indicator for its treatment of various kinds of labour rights as impediments to business. This would eventually lead to a revision of the metric to follow International Labour Organization standards. ITUC, ‘World Bank Takes Major Step on Labour Standards’, International Trade Union Confederation website (14 December 2006), available at: {https://www.ituc-csi.org/world-bank-takes-major-step-on}.

110 See Cooley and Dukalskis, Dictating the Agenda.

111 Merry, The Seductions of Quantification; Fukuda‐Parr and McNeill, ‘Knowledge and politics in setting and measuring the SDGs’.

112 Giorgia Lupi, ‘Data Humanism: The Revolutionary Future of Data Visualization’, PrintMag (30 January 2017), available at: {https://www.printmag.com/article/data-humanism-future-of-data-visualization/}.

Figure 1. Rise of CPIs across policy fields.19

Figure 2. Kelley and Simmons’s mechanisms of indicators and policy change.36