Evaluating national AI readiness with the Government AI Readiness Index

On any given day, a brief look at the news will likely turn up a story about the potential pitfalls of AI, particularly in the public sector. But as we write this from the UK, we are witnessing a lower-tech scandal: hundreds of wrongful prosecutions of Post Office workers for false accounting and theft are due to be overturned, after a faulty computer system was found to be to blame for the accounting shortfalls.

Author: Richard Stirling, CEO, Oxford Insights 

It seems that, for years, those in power chose to trust the word of a computer over humans, with devastating consequences. If even such a straightforward-seeming technology as a computerised accounting system could generate such a disaster, it doesn’t feel like doom-mongering to wonder whether the public sector is truly ready for the much more complex and unintelligible systems that AI promises to give us.  

AI has clear potential to improve public services, from saving taxpayer money through automation to creating more personalised services for users. However, if used unthinkingly, AI also has the potential to entrench biases and lead to disastrous outcomes for the most vulnerable users. The critical question becomes: how can the public sector get ready for AI?  

At Oxford Insights, we have been considering this question for the past six years through our annual Government AI Readiness Index. The index has gone through several iterations, but all have sought to answer the same core question: how ready is a given government to implement AI in the delivery of public services to its citizens? Our approach uses a framework of three pillars — Government, Technology Sector, and Data & Infrastructure — to assess AI readiness in the public sector (summarised in the figure below). We use 39 indicators, from whether a country has a published national AI strategy, to how many AI research papers its researchers have published in the past year, to what percentage of households have internet access at home.  

We take this approach because we think that public sector AI readiness is a multi-faceted issue. An AI-ready government not only needs a strategic vision for how it develops and governs AI in an ethical way, but also strong internal digital capacity and adaptability in the face of new technologies. An AI-ready government depends on a good supply of AI tools, meaning the country's technology sector needs to be mature, innovative, and rich in human capital. Any AI tools a government uses or develops require lots of high-quality data which, to avoid bias and error, should be representative of the country’s citizens. Finally, this data’s potential cannot be realised without the infrastructure necessary to power AI tools and deliver them to citizens. We’ve tried to include indicators reflecting all of these ideas in our index; these indicators are then normalised and averaged by dimension and pillar. All this number-crunching allows us to give each country a numerical score out of 100, with 100 being the most AI-ready a country can be. And this year, we’ve also released the Trustworthy AI Self Assessment, a companion tool to the index that helps public servants assess how prepared their government is to advance trustworthy AI in the public sector.  
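To make the number-crunching concrete, here is a minimal sketch of an index-style scoring pipeline: min-max normalise each indicator across countries, average indicators within each pillar, then average the pillars into an overall score out of 100. The equal weighting, the three-pillar grouping of the toy indicators, and the data values are all illustrative assumptions, not the index's published methodology.

```python
def min_max_normalise(values):
    """Scale raw indicator values to 0-100 across all countries."""
    lo, hi = min(values.values()), max(values.values())
    if hi == lo:  # all countries identical on this indicator
        return {c: 100.0 for c in values}
    return {c: 100.0 * (v - lo) / (hi - lo) for c, v in values.items()}

def readiness_scores(indicators, pillars):
    """Compute an overall 0-100 score per country.

    indicators: {indicator_name: {country: raw_value}}
    pillars:    {pillar_name: [indicator_name, ...]}
    """
    normalised = {name: min_max_normalise(vals) for name, vals in indicators.items()}
    countries = next(iter(indicators.values())).keys()
    scores = {}
    for country in countries:
        # average indicators within each pillar, then average the pillars
        pillar_scores = [
            sum(normalised[i][country] for i in inds) / len(inds)
            for inds in pillars.values()
        ]
        scores[country] = sum(pillar_scores) / len(pillar_scores)
    return scores

# Toy example with made-up data for two hypothetical countries:
indicators = {
    "national_ai_strategy": {"A": 1, "B": 0},
    "ai_papers_per_year": {"A": 500, "B": 2000},
    "household_internet_pct": {"A": 90, "B": 60},
}
pillars = {
    "Government": ["national_ai_strategy"],
    "Technology Sector": ["ai_papers_per_year"],
    "Data & Infrastructure": ["household_internet_pct"],
}
print(readiness_scores(indicators, pillars))
```

Averaging within pillars before averaging across them means each pillar carries equal weight regardless of how many indicators it contains; a real composite index would also have to decide how to handle missing data and whether some pillars deserve more weight than others.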

A country’s scores on the index and the self assessment can then serve as benchmarks for governments, allowing them to track their progress over time and compare themselves to other countries. The details of a country’s scores can be a starting point for taking stock, showing where it is excelling and where it is falling short. We also hope that comparing countries and regions can be an exercise not necessarily in competition, but in collaboration. AI presents new challenges to all governments, and there should be no shame in looking to other countries to exchange ideas and lessons.  

But all data has its limitations. With respect to public sector AI readiness, these limitations come not only from the usual suspects of low country coverage or out-of-date numbers, but also from a lack of data altogether. The first national AI strategy was published less than ten years ago, meaning data collection and evaluation in this field is still in its infancy. While some efforts, like the OECD’s AI Policy Observatory and UNIDIR’s AI Policy Portal, have sought to collect AI policy documents in one place, the burden is still on researchers to read through these documents and attempt to digest them. Not only is this labour-intensive, but the varied nature of country policies also makes cross-country comparison difficult. And even beyond the publication of strategies and principles, how can we tell whether a country is following through with implementation?  

At Oxford Insights, we are hopeful that UNESCO’s Global AI Ethics and Governance Observatory will take the first steps toward filling this gap. UNESCO’s Readiness Assessment Methodology (RAM) collects a wealth of cross-country comparative data that is not available to the public anywhere else. The RAM not only asks whether countries have published a national AI strategy, but also includes qualitative questions like whether the strategy references human rights or was created by a diverse team. Creating a space where every country’s RAM is readily available would be a step change in accessibility for AI readiness data and enable both the internal reflection and external collaboration that is so needed if governments around the world are to be truly ready to adopt AI. 

AI’s promise and pitfalls for the public sector are clear, and it is critical that governments around the world take steps to prepare for this new technology. We hope that benchmarking and evaluation tools like our Government AI Readiness Index and Trustworthy AI Self Assessment, together with UNESCO’s Global AI Ethics and Governance Observatory, can help governments take stock of where they are and find ways to improve their AI readiness.  


The ideas and opinions expressed in this article are those of the author and do not necessarily represent the views of UNESCO. The designations employed and the presentation of material throughout the publication do not imply the expression of any opinion whatsoever on the part of UNESCO concerning the legal status of any country, city or area or of its authorities, or concerning its frontiers or boundaries.