If the NIRF ranking was not an exercise in chest-thumping, it would not attempt to condense all the information available on a university into just one number.
Rankings have never carried much weight in India. The National Institutional Ranking Framework (NIRF) formulated by the Ministry of Human Resource Development (MHRD) in 2015 was supposed to change all that. The first rankings came out in 2016 and, despite criticism, looked like an optimistic exercise by the government. The rankings for 2017 came out on April 3, 2017, but the space for optimism has since shrunk. The new list seems incomplete, incoherent and bordering on the random. Some of these outcomes stem from the structure of the Indian education system; others are simply the result of ignorance on the part of the National Board of Accreditation, which drafted the list.
High levels of variation
This chart shows the suspiciously huge variations between the 2016 and 2017 rankings. Did Jamia Millia really improve by 71 places, from #83 to #12? Did Guru Gobind Singh University really fall by 60 places (#22 to #82)? Apart from this, the standout pattern is that a large fraction of universities in the 2017 rankings are new entrants (marked in green) and, as a corollary, the same number of universities from the 2016 rankings have dropped off the list (marked in red). To be precise, 47 of the top 100 are new entrants. Is quality so transient that from year to year we will have to radically redefine our idea of which universities meet the bar?
Presidency University, which was #41 in 2016, does not feature in this year’s list. Similarly, 35 of the universities that were ranked between 50 and 100 last year disappeared from the 2017 rankings. The most charitable interpretation is that these were universities that didn’t deserve to be in the top 100 and have been rightly dislodged as more deserving institutes took their place. Alternatively, they didn’t like their previous scores and decided not to participate at all. This points to a big problem with not publishing the ranks of all universities that submitted data: right now, students can’t tell whether a university didn’t make the cut or simply didn’t participate.
This is still only the second year, and hopefully such teething troubles will disappear in future iterations. On the other hand, if the government were to allocate funds based on this list any time soon, it would be disastrous. Institutions would receive and lose funding so rapidly that they would have to operate in a state of semi-emergency all the time.
Comparing apple trees to oranges
The fourth-best university in India seems to be the Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR), a highly specialised research institution in Bengaluru. The JNCASR has around 200-300 PhD students and faculty at any given time. How does one compare it to the University of Hyderabad (UoH), which came fourth last year? UoH has about 4,500 students and 400 faculty. JNCASR isn’t even a university as the term is typically understood.
This is a structural issue. Many universities in India have an area of focus: engineering (like BITS-Pilani) or social sciences (like the Tata Institute of Social Sciences). Others offer courses across the arts and sciences, like JNU, UoH, Delhi University, etc. It is the latter that fits the general definition of a university. This means that comparing these two kinds of institutions is not very useful – both from the institutions’ and from the students’ points of view. The MHRD seems to understand this and publishes rankings of institutions according to subject-wise categories like pharmacy, engineering and management. It also admits that rankings for medicine, architecture and law couldn’t be formulated because of a lack of meaningful participation by the respective institutes. However, this understanding seems to be very partial. Many of the highly-ranked institutions are primarily engaged in teaching engineering. Their presence in the university list is what makes Tamil Nadu such a clear winner.
This partial understanding of the need for subject-wise rankings is short-sighted. The ideal way to handle subject-wise variation would be to implement rankings on a department basis, i.e., comparing the economics department of St. Stephen’s with the economics department of JNU. This is the unit that makes the most sense for students, who typically know which subject they wish to pursue. Any ranking that isn’t primarily an excuse for institutional chest-thumping would be organised in this manner. (To be fair, the NIRF allows for university departments to be ranked – but it doesn’t promote it as the default way of operating.)
One rank to rule them all
Finally, the fundamental issue with coming up with weighted rankings is the question of weights. Weights are an indication of priority – but whose priority? Currently, the NIRF has decided that research performance counts towards 30% of a university’s rank and 20% of a college’s rank. Does the average undergraduate student really value research to that extent? No; this is an indication of the government’s priorities. Again, if the ranking was not an exercise in chest-thumping, it would not attempt to condense all the information available on a university into just one number.
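To see how much the choice of weights matters, consider a minimal sketch of how a weighted composite score works. Only the 30% research weight for universities comes from the NIRF itself; every other weight, parameter name and score below is hypothetical, purely for illustration:

```python
# Illustrative only: hypothetical per-parameter scores (0-100) for two
# institutions. The 0.30 research weight for universities is from the NIRF;
# the remaining weights and all the scores are made up for demonstration.

def composite_score(scores, weights):
    """Collapse per-parameter scores into one weighted number."""
    return sum(weights[param] * scores[param] for param in weights)

teaching_focused = {"teaching": 90, "research": 40, "outcomes": 80}
research_focused = {"teaching": 60, "research": 95, "outcomes": 70}

# The government's priorities: research counts for 30% of a university's rank.
govt_weights = {"teaching": 0.40, "research": 0.30, "outcomes": 0.30}
# A hypothetical undergraduate's priorities: research barely matters.
student_weights = {"teaching": 0.70, "research": 0.05, "outcomes": 0.25}

for name, w in [("government", govt_weights), ("student", student_weights)]:
    print(name,
          round(composite_score(teaching_focused, w), 1),
          round(composite_score(research_focused, w), 1))
# With the government's weights the research-focused institution comes out
# ahead (73.5 vs 72.0); with the student's weights the teaching-focused one
# wins comfortably (85.0 vs 64.2). Same data, different "best" university.
```

The single number, in other words, is only as neutral as the weights behind it.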
Providing a number of variables like the number of male and female students, student-faculty ratio, infrastructure available, spending per student, etc., and letting students sort the list of institutions based on their own idea of a good institution would genuinely empower them. This is the system followed in Germany, and it allows for a less cut-throat, more diverse ecosystem for higher education.
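A sketch of what that alternative looks like in practice: publish the raw variables and let each student sort by whatever they care about. All institution names, field names and figures here are hypothetical:

```python
# Hypothetical data: publish the variables, let the student choose the order.
institutions = [
    {"name": "University A", "student_faculty_ratio": 18, "spend_per_student": 120000},
    {"name": "University B", "student_faculty_ratio": 11, "spend_per_student": 90000},
    {"name": "University C", "student_faculty_ratio": 25, "spend_per_student": 150000},
]

# One student may want the smallest classes...
by_ratio = sorted(institutions, key=lambda i: i["student_faculty_ratio"])
# ...another may care most about per-student spending.
by_spend = sorted(institutions, key=lambda i: i["spend_per_student"], reverse=True)

print([i["name"] for i in by_ratio])  # ['University B', 'University A', 'University C']
print([i["name"] for i in by_spend])  # ['University C', 'University A', 'University B']
```

No single "rank" is ever computed; each student's priorities produce their own ordering.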
Thomas Manuel is the winner of The Hindu Playwright Award 2016.