Rankings and Accountability in Higher Education: Uses and Misuses
International university rankings affect public policy and the choices of students and their families. Rightly or wrongly, they are perceived as a measure of quality and so create intense competition between universities all over the world. But are they doing more harm than good?
A new UNESCO publication, Rankings and Accountability in Higher Education: Uses and Misuses, debates the pros and cons of classifying universities. It brings together the people behind university rankings and their critics to examine how existing rankings are used and misused. Featuring voices from five continents, the publication aims to help the ultimate readers of rankings and league tables – be they students, parents, governments or institutional leaders – become better, more discerning users of these tools. It provides a comprehensive overview of current thinking on the subject and sets out alternative approaches and complementary tools for a new era of transparent and informed use of higher education ranking tables.
Of the world’s 17,000+ universities, only about 1% feature in the “world university rankings” published by the three most prominent “ranking houses”. Although varied in many respects, the top 200 ranked institutions tend to be older establishments (200+ years), focused mostly on scientific research, with around 25,000 students, 2,500 faculty and annual budgets exceeding 2 billion USD. Contributing authors from well-known ranking organizations open the debate in Rankings and Accountability, offering a detailed look at the methodological approaches they use, their strengths and shortcomings, and their evolution over time. For Nian Cai Liu of Shanghai Jiao Tong University, who helped launch the first global university rankings in 2003, rankings are not and should not be the sole source of information guiding decisions about the quality of universities. Yet for Phil Baty of Times Higher Education and Ben Sowter of QS University Rankings, there is no doubt that rankings are “set to stay”, and can help improve transparency and accountability in an increasingly global higher education market.
Rankings should evolve to provide information that is more pertinent to the needs of universities, students and policy-makers, matches local contexts and contributes to the growth of world-class higher education systems rather than a limited number of world-class institutions. The book’s authors reveal many drawbacks of the prevailing ranking systems, one being an excessive focus on increasing research output at the expense of social impact in local communities. The Universidad Nacional Autónoma de México (UNAM) is a prime example of what could be called a “nation-building university”: it is home to the National Seismological System, the National Astronomical Observatory, the national library and periodicals repository, as well as a respected symphony orchestra. If 63% of universities have based strategic decisions on improving their rankings, as one study shows, could this “race to the top” stifle diversity and innovation in university models?
Some see diversity as lacking in the dominant approach. For Imanol Ordorika and Marion Lloyd of UNAM, Spanish-language research is clearly ignored in favour of English-language work – a fact that ranking organizations tend to downplay. “Even at universities of the size and weight of UNAM and the Universidade de São Paulo, articles published in English still represent a minority of the research production of the universities, but they comprise the majority of articles registered in ISI and Scopus”, the main bibliographic databases used in the rankings. Furthermore, the primacy of scientific research in current ranking systems marginalizes the social sciences and humanities, areas where Latin America has a long and respected tradition.
In Africa, higher education is growing at one of the highest rates in the world, yet “increased access has failed to be matched by improvement in quality,” explains Peter Okebukola, who heads the Global University Network in Africa. The African Quality Rating Mechanism (AQRM) was instituted as a regional alternative to global rankings. Rather than compare the performance of one university to another, it evaluates a university’s performance against a “set of criteria that accounts for the unique context and challenges of higher education delivery on the continent.” Would benchmarking performance in achieving educational outcomes, rather than ranking institutions, be a more useful and effective approach? The World Bank is exploring this possibility through the design of a new assessment system that privileges factors relevant to developing regions, thereby allowing more reliable comparisons to be made among them.
Amid all the mania over university rankings, what students can actually do with the knowledge they acquire upon graduation is easily forgotten. Job prospects are, after all, what many parents and students look for when choosing a school. From an OECD perspective, Richard Yelland and Rodrigo Castañeda Valle advocate focusing on learning outcomes, noting that “Rankings are very frequently used nowadays as proxies for – or evidence of – the quality of teaching and learning in higher education institutes across the globe”. Testing recent graduates’ generic skills, such as written communication and analytic reasoning, as well as their technical knowledge in fields like economics and engineering, could provide a better gauge of how well their education has equipped them to lead lives of their own choosing and to participate in today’s knowledge economy.