Which country was best prepared in 2019 to handle a pandemic? According to the Johns Hopkins Global Health Security Index, the United States showed the highest level of “preparedness for epidemics and pandemics” out of 195 countries and territories.
The UK came second. Singapore, which handled the COVID-19 crisis better than either the UK or the USA, did not make the top 20 on the Johns Hopkins list, and currently COVID-free New Zealand ranked 35th. In short, recent events have made a mockery of the Global Health Security Index.
Nor is this the only example of an index gone wrong. Transparency International’s Corruption Perceptions Index ranked the US and the UK amongst the least corrupt countries in 2006, 2007 and 2008, that is, in the run-up to the corruption-induced financial crisis of 2008-09. No doubt, various indices of political freedom and the quality of governance will continue to rank those two countries highly even as hundreds of thousands of their citizens demonstrate against the systemic racism of their criminal justice systems. In apparent seriousness, the Electoral Integrity Project rated the fairness of Rwanda’s 2013 parliamentary election as “high”.
More important than the failure of any individual index, however, is the failure of the way of thinking it embodies. Ever since the UN Development Program launched its famous Human Development Index in 1990, indices have grown like weeds in the world of policy analysis. There are indices of freedom, state fragility and environmental sustainability, of the quality of governance, universities and urban living, and much else besides.
These indices tend to have four things in common. First, they reduce complex reality to a single index number and then rank countries (or cities, universities, or other organizations) in a list according to their score. These league tables are popular, since they are easy to understand and those near the top can boast about their high ranking. Lower down the list, reformers can use their organization’s poor ranking to demand change.
Second, such indices employ a similar methodology: they combine multiple indicators, assign each indicator a more or less arbitrary weight, and then add up all the weighted indicators to get an index number. For example, the QS University Rankings use academic reputation (weighted at 40%), employer reputation (10%), faculty/student ratio (20%), citations per faculty (20%), international faculty ratio (5%) and international student ratio (5%) to measure the quality of universities worldwide. Some of these indicators are objective facts, like the faculty/student ratio, while the most important one, academic reputation, is largely subjective.
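To make the arithmetic concrete, here is a minimal sketch (in Python) of how such a composite score is assembled. The weights are the QS ones quoted above; the function name and the indicator scores for the hypothetical university are our own inventions, purely for illustration.

```python
# Sketch of a QS-style composite score: each indicator is scored on a
# 0-100 scale, multiplied by its weight, and summed. The weights are the
# published QS ones; the indicator values further down are invented.
QS_WEIGHTS = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "international_faculty_ratio": 0.05,
    "international_student_ratio": 0.05,
}

def composite_score(indicators, weights):
    """Weighted sum of indicator scores (each assumed to be on a 0-100 scale)."""
    return sum(weights[name] * indicators[name] for name in weights)

# Indicator scores for a hypothetical university (purely illustrative).
example_university = {
    "academic_reputation": 85.0,        # subjective: survey of academics
    "employer_reputation": 70.0,        # subjective: survey of employers
    "faculty_student_ratio": 60.0,      # objective fact, rescaled to 0-100
    "citations_per_faculty": 75.0,      # objective fact, rescaled to 0-100
    "international_faculty_ratio": 50.0,
    "international_student_ratio": 40.0,
}

print(f"Composite score: {composite_score(example_university, QS_WEIGHTS):.1f}")
# -> Composite score: 72.5
```

The result looks precise to one decimal place, yet fully half the weight behind it rests on subjective reputation surveys.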
Third, these indices attach precise quantitative measures to things that are by their very nature fuzzy concepts: governance, freedom, fragility, preparedness, quality. Using several indicators in an index is supposed to capture the multiple dimensions of each of these fuzzwords.
Fourth, these indices are frequently used for naming and shaming. Combined with their cousin, best practices, they implicitly endorse a list of ‘villains’ and ‘heroes’ that is often taken as absolute truth.
But putting a precise quantitative value on something that is inherently vague and amorphous is intellectually dubious, to begin with. This faith in the precise statistical measurement of the vague and amorphous reaches absurd levels when the Corruption Perceptions Index and the Electoral Integrity Index both come with standard deviations calculated for each country! To this problem, add the less-than-scientific ways of weighting each indicator within an index. For instance, the major university ranking indices use both different variables and different weightings to measure the same thing.
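How much do those arbitrary weights matter? A short sketch, again with invented numbers, shows the same two hypothetical universities swapping places simply because the weighting scheme changes:

```python
# Two hypothetical universities scored on the same two indicators
# (all numbers invented for illustration).
scores = {
    "University A": {"reputation": 90.0, "citations": 50.0},
    "University B": {"reputation": 60.0, "citations": 95.0},
}

# Two equally defensible weighting schemes over the same indicators.
schemes = {
    "reputation-heavy": {"reputation": 0.7, "citations": 0.3},
    "citation-heavy": {"reputation": 0.3, "citations": 0.7},
}

for label, weights in schemes.items():
    # Rank by weighted sum, highest composite score first.
    ranking = sorted(
        scores,
        key=lambda uni: sum(weights[k] * scores[uni][k] for k in weights),
        reverse=True,
    )
    print(f"{label} weights rank: {ranking}")
# -> reputation-heavy weights rank: ['University A', 'University B']
# -> citation-heavy weights rank: ['University B', 'University A']
```

Neither scheme is more scientific than the other, yet each produces an equally confident-looking league table.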

So, what do these indices measure? The famous Corruption Perceptions Index does not even purport to measure how much corruption there is in a country, but how much corruption certain experts perceive there to be. Many indices rely in whole or in part on such subjective expert assessments to “measure” things like institutional quality, the fairness of elections, or the reputation of a university. These subjective measures often tell us more about the implicit biases and the cultural and ideological blind spots of the experts who are consulted (and who agree to take part!) than about what is being “measured”.
The purpose of this blog is not to argue against measurement. We believe in measurement, both qualitative and quantitative, but only where the measures are appropriate. One can measure many social, economic and health variables and put them into meaningful league tables: think about maternal and under-five mortality rates, national income, its growth and distribution, and rates of poverty, literacy and social exclusion.
Sometimes, it makes sense to group multiple indicators into a composite index, as economists have long done with indices of agricultural or industrial production. Sometimes the point of an index, like UNDP’s Human Development Index, is to change the conversation by showing the inadequacy of a single measure, in this case national income per capita. These are legitimate things to do. But too many of today’s policy-related indices have pretensions to scientific accuracy – and an obsession with statistical razzle-dazzle – that logic and the facts cannot support. Many are freighted with heavy – and frequently unexamined – ideological and methodological assumptions related not only to the selection of indicators and their weighting, but also to what they leave out of the index. Nobody at the Global Health Security Index factored in Donald Trump.
Many composite indices are exercises in pseudo-scientific thinking: at best, they are simply misleading; at worst, they are ideological constructions. It’s one thing to rank countries by GDP or net primary school enrollment, but ranking them by their level of pandemic preparedness is another thing entirely.
Scholars and practitioners need to think critically about the plethora of indices that increasingly frame our thinking and debates about important international and public policy issues. Reader beware!