Education
27 January 2020

The Human Capital Index: A Flawed Methodology for Measuring School Learning

The tragedy of the millions of children who are out of school (258 million) and the severe shortfall of funds needed to finance universal education (USD 1.8 trillion) are recurring themes. In the past few years, however, a disaster of a different kind has made headlines: the huge gap between the expected years of school and the learning-adjusted years of school in a country.

Going to school does not necessarily imply learning. 

In Chad, for example, a child who starts school at age four can expect to complete five years of school by their 18th birthday. But when the expected years of school are adjusted for actual learning, girls in Chad receive, on average, only 2.2 years of schooling, boys three. This means that roughly half of their time in school is wasted, for a variety of reasons: the school infrastructure is not conducive to learning, the average teacher-to-student ratio is 1:57, the language of instruction differs from the child’s home language, the majority of teachers have had only a 45-day crash course on how to teach children, or, for girls in particular, the journey to school is too long and considered unsafe.
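To make the “half of their time” claim concrete, here is a minimal back-of-the-envelope check in Python; averaging the girls’ and boys’ figures is my own illustration, not a calculation from the HCI itself.

```python
# Back-of-the-envelope check of the Chad example: compare the
# learning-adjusted years with the five expected years of school.
expected_years = 5.0
learning_adjusted = {"girls": 2.2, "boys": 3.0}

for group, years in learning_adjusted.items():
    wasted_share = 1 - years / expected_years
    print(f"{group}: {wasted_share:.0%} of school time yields no measured learning")

# Averaged across girls and boys: (2.2 + 3.0) / 2 = 2.6 years,
# i.e. 1 - 2.6 / 5 = 48 percent -- roughly half of the time spent in school.
```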

The differentiation between education (expected years of school) and learning (learning-adjusted years of school) is the centrepiece of the Human Capital Index (HCI), the brainchild of economists at the World Economic Forum and the World Bank. Unsurprisingly, the HCI’s main purpose is to predict the productivity of the next generation of workers against a benchmark of complete education and full health.

Despite this narrow economistic view of what education is good for, the merit of the HCI is its differentiation between enrolment and learning. In contrast, the widely used Human Development Index measures only the mean and expected years of schooling, without any consideration of the quality of education or of learning.

Low-income countries are not the only ones that have years of schooling subtracted for ineffective learning. Switzerland is a good case in point for the differentiation between education and learning. According to the HCI, the expected years of schooling in Switzerland average 13.3 years, yet this figure drops to 11.1 years when the actual learning outcomes of students are taken into consideration. How is that possible?

In a recently published study, Ji Liu (Professor at Shaanxi Normal University’s Tin Ka Ping School of Education) and I scrutinised the formula used by the World Bank to calculate the difference between the expected years and the learning-adjusted years of schooling. The HCI uses the test scores that a country attained in international large-scale student assessments to calculate the so-called Harmonized Test Score.

The study demonstrated that the methodology used to calculate the Harmonized Test Score is seriously flawed for several reasons. Various interpolations and extrapolations to compensate for missing data, in effect, introduce large score penalties for education systems that either did not participate or only partially participated in international large-scale student assessments. For these countries, including Switzerland, which discontinued its participation in the Trends in International Mathematics and Science Study (TIMSS) but continues to take part in every three-yearly round of the Programme for International Student Assessment (PISA), the Harmonized Test Score is lowered by 37 points, the equivalent of one full year of learning.
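For readers curious about the mechanics, here is a minimal sketch in Python. It assumes the World Bank’s published adjustment, in which learning-adjusted years equal expected years multiplied by the Harmonized Test Score over an advanced-achievement anchor of 625 points; the Swiss test score below is back-calculated from the figures cited above for illustration and is not taken from the study.

```python
# Assumed World Bank adjustment: LAYS = EYS * (HTS / 625), where 625 is
# the TIMSS-linked "advanced achievement" anchor treated as full learning.
ADVANCED_BENCHMARK = 625

def learning_adjusted_years(expected_years: float, harmonized_score: float) -> float:
    """Discount expected years of schooling by relative test performance."""
    return expected_years * harmonized_score / ADVANCED_BENCHMARK

# Switzerland, per the article: 13.3 expected years vs. 11.1 learning-adjusted years.
eys, lays = 13.3, 11.1
implied_hts = lays / eys * ADVANCED_BENCHMARK  # roughly 522 points (back-calculated)

# Effect of removing the 37-point penalty identified in the study:
print(f"Implied Harmonized Test Score: {implied_hts:.1f}")
print(f"Learning-adjusted years without the penalty: "
      f"{learning_adjusted_years(eys, implied_hts + 37):.2f}")  # roughly 11.9
```

Under this simplified formula, removing the 37-point penalty would lift Switzerland’s learning-adjusted years of schooling from 11.1 to roughly 11.9.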

The HCI relies on, and exacerbates, standardised testing in schools, very much to the chagrin of teachers and parents. At the global level, it is considered the new “soft power”, or governance-by-numbers, tool of international organisations, because it rewards countries that follow the global agenda of test-based accountability by submitting a complete set of international student assessment data. At the same time, it penalises those that have chosen their own path, such as countries that participate only selectively in large-scale student assessments and therefore end up with “missing data”, becoming the objects of problematic interpolations and extrapolations.