Using Citations to Rank Universities Isn’t Always as Simple or Sensible as You’d Think

Citations alone won’t be able to tell the full story: a good scientist needn’t be cited heavily – but not all those who are cited heavily are good scientists.

Credit: Pexels/pixabay


Fun fact: Chennai’s Veltech University has the highest citation score in Asia, according to the latest Times Higher Education rankings. However, the data that propelled it to the top is not available. What, then, does this tell us about Veltech’s research quality – or about the rankings themselves? It’s your call. There have been several other examples of inconsistencies in how rankings have evaluated their subjects, and some outcomes have been plainly dubious. It also pays to remember – even when the underlying data is accurate – that rankings are subjective and that bibliometric measures like citations have their own nuances.

Gautam Desiraju, an eminent scientist at the Indian Institute of Science (IISc) known for his work on understanding hydrogen bonds, thinks self-motivation drives excellence in research. “Science is an ego-driven profession, and I mean this in a positive way [because] it drives us to become better,” he told The Wire. “Each scientist feels that she has put in her best effort when she communicates her findings. In a sense, a citation can be viewed as letting go of one’s ego and acknowledging the various efforts that one found important in framing and undertaking one’s own work.”

The number of times a paper is cited by other papers has often been used as a measure of its influence. The underlying principle is that the most influential and novel efforts are cited very frequently (as are some of the more controversial ones). A similar principle is at the heart of Google’s algorithm to identify the most relevant set of webpages for a particular combination of keywords.
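That shared principle can be made concrete with a toy sketch of the PageRank idea applied to a citation graph. The graph, damping factor and paper names below are illustrative assumptions, not data from this article – papers that are cited by other well-cited papers accumulate a higher score:

```python
# Toy illustration: rank papers by incoming citations, PageRank-style.
# The citation graph below is invented for illustration only.
citations = {
    "A": ["B", "C"],   # paper A cites B and C
    "B": ["C"],        # paper B cites C
    "C": [],           # C cites nothing; its score comes from being cited
}

def pagerank(graph, damping=0.85, iterations=50):
    papers = list(graph)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in papers}
        for p, cited in graph.items():
            if cited:  # distribute p's rank among the papers it cites
                share = rank[p] / len(cited)
                for q in cited:
                    new[q] += damping * share
            else:      # a paper citing nothing spreads its rank evenly
                for q in papers:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

ranks = pagerank(citations)
# C, cited by both A and B, ends up with the highest score.
```

The same mechanism underlies both webpage ranking and influence-weighted citation metrics: a citation from an influential source counts for more than one from the periphery.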

Tracking citations was difficult just two decades ago – but the dawn of this millennium witnessed the birth of Web of Science, Google Scholar and other bibliometric and indexing services.

Humans of citations

“Till about 12 years ago, I had no idea about how my work was being utilised apart from some sporadic emails from other researchers,” Desiraju said. “Only after 2005 or so, I found out that my work was quite popular and I discovered various bibliometric measures.”

And what has discovering these measures taught a prolific scientist about what kind of research is cited often? “I feel that the papers that get highly cited tend to bridge two pre-existing major ideas in the scientific literature in a way that was hitherto unknown,” he said. “These novel contributions tend to be highly influential.”

However, he cautioned that citations alone wouldn’t be able to tell the full story. A good scientist needn’t be cited heavily – but not all those who are cited heavily are good scientists.

“The number of citations depends on the number of practitioners in the field. Even in the vast field of chemistry, there are some areas which have only a handful of knowledgeable researchers. Certainly, the number of citations will be bound by this,” Desiraju explained. “Conversely, there are many fields with thousands of researchers, and even if you’re cited by a small fraction of them, it will overshadow the former case. So the raw number of citations has to be taken in context, even within a field.”

The internet has also started to play an important role in how and how often scientific literature is discovered by different groups of people.

“Most researchers today don’t read journals from cover to cover anymore. Today, if I’m interested in a topic, I’d type in the keywords in an online search [engine] and take a detailed look at the top n relevant papers in my field,” said Giridhar Madras, of the IISc’s department of chemical engineering. “Today’s citations are less affected by which journal I publish them in for this reason. A ‘useful’ research paper is more likely to be cited more today than earlier.”

There are also other factors that affect the number of citations. The items in a scientific journal aren’t just papers; they are also classified as letters, articles, comments, reviews, etc. The length of each item, the publishing language, various statistical analyses of a journal’s citations, etc. – many factors have to be accounted for to gauge a journal’s worth, and different ranking systems weight them differently. For example, more than a few eyebrows were raised when Chandigarh-based Panjab University topped Nature’s citations survey in 2015. In fact, depending on the context, it may not even be clear whether a paper with 200 authors and 10,000 citations is somehow better than a three-author paper with 500 citations.

One level of correction has already been applied, according to Madras: “Engineering fields typically work on areas that have applications. Therefore, researchers in applied fields tend to have fewer citations compared to the fundamental sciences. Also, many rankings use field-normalised values.”
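Field normalisation of the kind Madras describes typically divides a paper’s citation count by the average for papers in its own field. The sketch below shows the arithmetic; all field names and figures are invented for illustration:

```python
# Illustrative field-normalised citation scores: a raw count is divided by
# the field's average, so papers are compared against their own field's norm.
# All field names and numbers below are invented for illustration.
field_average = {
    "applied engineering": 8.0,      # average citations per paper
    "fundamental chemistry": 40.0,
}

def normalised_score(citations, field):
    """Raw citations divided by the field's average citation count."""
    return citations / field_average[field]

# P1 (16 citations) scores 2.0 - twice its field's average -
# while P2 (40 citations) scores only 1.0, despite more raw citations.
p1 = normalised_score(16, "applied engineering")
p2 = normalised_score(40, "fundamental chemistry")
```

Under this correction, an applied-field paper with fewer raw citations can outrank a fundamental-science paper with more, which is exactly the distortion the normalisation is meant to remove.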

There are also cultural and interpersonal issues that affect citations.

“Two researchers who don’t like each other may not cite each other’s work,” Madras said. “Also, I’ve felt that multicultural research setups are more likely to cite similar work from around the world as compared to a culturally monolithic setup.”

But then what about the many Indian scientists who feel that their work in prestigious journals is not being cited as frequently? Would it be fair to claim that they’re part of a larger ‘cultural’ conspiracy that works against them? Desiraju thinks it’s complicated: “Many Indian researchers produce derivative work that provides support to an idea that was already prevalent. Therefore, even if it is published in a prestigious journal, it may not get cited as often as the foundational work.”

Citations, wheat and chaff

Other factors like professional jealousy can also play a part. “I wrote a series of review articles in my field after a definitive paper by other scientists in the 1980s,” Desiraju recalled. “My collaborator noticed that the citations to the earlier paper dropped progressively with each review of mine – except for a group of Indian researchers who kept citing the earlier work and didn’t see it fit to acknowledge my contribution when the whole world did so.”

Can researchers or institutions game the system to boost their overall citation count by underhand means – for example, by canvassing for their own papers to be cited while wearing the reviewer’s hat? According to Madras, who also serves as an editor for many journals published by Springer, Elsevier and others, “Many citation metrics are not given to journals who excessively cite themselves. A researcher may be able to ‘push’ for her paper to get cited, but editors have been given a mandate to crack down on these attempts and blacklist them.”

The arms race for higher numbers has led to some deleterious after-effects as well, Desiraju said. “There are certain areas of chemistry where thousands of practitioners are pursuing research. Since this happens to be an area of high interest, many papers are accepted in standard journals and are naturally cited well. In these cases, journals are scared of turning away submissions from these fields since they might miss out on the huge citation numbers.”

Governments play a significant role in identifying thrust areas for funding – which then allows them to define the research agenda. Researchers from the same country and from the same field are likely to collaborate and cite each other’s work. And interdisciplinary collaborations tend to produce more novel ideas and ambitious research due to synergistic combinations of expertise.

But setting all these intricacies aside, we’re still confronted by the problem of separating the wheat from the chaff – and citations could still play a useful, if not critical, role in solving it.

As Desiraju said, “Some ranking is better than no ranking, and I say this with respect to what was happening in the Indian context earlier. Earlier, many sub-par researchers would be rewarded on the basis of personal equations in the absence of data. Nowadays, a student can obtain all the information with a click of a few buttons, and this has increased scrutiny. Certainly, a low cutoff can be used as an elimination criterion but a high value can never be used for selection.”

Vyasa Shastry is a materials engineer and a consultant who writes about science, technology and society in his spare time. He has contributed to Mint, The Hindu and Scroll.
