Initiatives to check research fraud usually fall short because they are never fully implemented or because the techniques of fraud have advanced, rendering older countermeasures irrelevant.
This is the second part of a two-part essay on the UGC’s attempts to improve the research output of Indian institutions. The first part is here.
A new UGC committee is consulting experts and organisations from across disciplines – including the Indian Council of Medical Research (ICMR), the Council of Scientific and Industrial Research (CSIR) and others – to prepare an exhaustive list of legitimate journals across disciplines. The draft list is expected to be ready in about six weeks. The selected journals will be classified into different categories based on their impact and relevance. The rationale behind this initiative is to prod Indian academics to publish in legitimate journals rather than in fake or substandard ones. If successful, it may help improve the quality of research and publications at India’s universities.
One of the challenges with this initiative is that the UGC’s master-list of legitimate journals will need to be sufficiently comprehensive and inclusive across disciplines. It will also need to be updated at regular intervals to include new journals. It is also very important, especially for the humanities and social sciences, that the list is prepared and categorised in a way that discriminates favourably toward journals that are more relevant to India.
The other important question is whether the UGC’s master-list of legitimate journals will curb the larger problem of research fraud. Academics constantly find new techniques to cheat and the UGC initiative will therefore likely have only a limited impact.
The journals master-list
Preparing a comprehensive list of legitimate journals requires the labour of competent people trained in library science. Academics need to be consulted, of course, but I do not think they have the proper training or the interest to do this kind of job properly. One has to wonder whether we have enough professionally-trained librarians at our universities and other academic institutions; it is hard to know. This is the first problem with preparing an exhaustive list of legitimate journals.
A related issue is how frequently the master-list will be updated. New journals, including legitimate ones, are now launched quite frequently (though certainly not as often as fake ones), and the master-list will need to be updated at least annually. If it is not, academics who publish in new journals may lose out until those journals are included.
Second, the task of preparing a master-list of journals is best handled by organisations in the business of ranking journals. There are no such Indian organisations, so one must depend on the products of Thomson Reuters – which include the prestigious Science Citation Index (SCI) – and Elsevier’s Scopus database. Both are highly rated and used by academics and higher-education institutions worldwide.
However, for the reasons discussed below, the journal rankings and impact factors from Thomson Reuters and Elsevier are not quite suitable for faculty in the humanities and social sciences who work on India (and this is also true, to a lesser degree, of other areas of study, including the sciences). For those who may not know, the impact factor was devised by Eugene Garfield, a librarian, in the 1950s and measures how often a journal’s articles are cited.
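As a rough illustration of the standard two-year version of the metric: a journal’s 2016 impact factor is the number of citations received in 2016 by the items it published in 2014 and 2015, divided by the number of citable items it published in those two years. A journal whose 250 articles from 2014-15 drew 500 citations in 2016 would thus have an impact factor of 500/250 = 2.0.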
Apropos research in humanities and the social sciences
The research carried out by Indian academics, as also by academics in other non-Western countries, tends to be country-specific and at best region-specific. The reasons for this are manifold but lack of resources and interest are both important factors. For example, in India, most historians research and write on Indian/South Asian history. Now, at the global level, the major humanities and social science journals – those with the highest impact factor – have a strong pro-Western bias.
This is clearly reflected in their content. An overwhelming majority of articles are focused on Western countries. For example, if one looks at the top-rated philosophy, politics or history journals, there is very little content on Indian philosophy, politics or history. In effect, Indian academics are nearly shut out from publishing in top-ranked journals because most of them write on India or South Asia. The top-ranked journals have no interest in such content unless it is on issues that affect Western countries in a very direct way (for example, Islamic terrorism).
India-specialists the world over publish in specific journals which they know academics working on India consult on a regular basis. Most of these journals tend to be ranked relatively low in terms of impact factor. Two good examples, both from India, are the Economic and Political Weekly and Contributions to Indian Sociology. Many other journals that are both popular and relevant for Indian scholars, even though published in Europe or North America, also have relatively low impact factors.
Consider the Scopus list of top-ranked humanities and social science journals: very few grade-A or even grade-B journals in anthropology, philosophy, sociology, history, politics, English and other subject areas publish India-related content more than occasionally. Indeed, many of these top-ranked journals do not carry even an article or two devoted to Indian politics, history or anything else in a given year. At the same time, journals with substantial India content – say, 20% of all articles over a period of two years – are nearly all region-specific; they rarely figure in grade-B and typically find a place in grade-C or lower. Interestingly, several region-specific journals which focus on Europe and other Western countries are ranked grade-A or grade-B.
What this means is that India’s social scientists rarely publish in Elsevier’s grade-A journals. And this is not just about India-educated or India-based social scientists; over the last two decades, leading India-experts based at the world’s top 50-100 universities have published in grade-A journals only on occasion (the one exception is economics).
In sum, if the UGC’s master list of top journals in the humanities and social sciences uses or ‘borrows’ from the Elsevier or Thomson Reuters lists, it will need to be amended for relevance as well as impact factor. The issue is not the competence or research abilities of India’s social scientists but the very limited opportunities they have to publish in top-ranked journals.
There is a different kind of bias against faculty in the sciences, engineering and other areas. Unlike the humanities and social sciences, the sciences and engineering are region-neutral, so academics in these areas do not face the same challenges that social scientists do. However, it is widely believed that where journal submissions reveal an academic’s institution or country of origin, there is discrimination on the basis of language and, to a lesser extent, content. Of course, there are also studies which show that “linguistic bias” is a “myth” and find that the issue is more about the poor quality of the research itself.
Based on my discussions with some very capable and widely-published colleagues, the truth may be closer to something in between. There are cases where poor quality research papers are submitted and rightly rejected; in other instances, however, rejections are handed out for the flimsiest reasons.
Overall, it is extremely important that the UGC’s master list of legitimate journals is suitably adapted for India. However, knowing how things work in the higher education sector, there are good reasons to worry that the list may not be up to the mark. It will likely remain a work in progress for a very long time, and it would certainly be too optimistic to hope that it will be updated at regular intervals.
What will the UGC’s master-list achieve?
The rationale behind preparing a master list of legit academic journals is to ensure that India’s academics do not publish in fake or substandard journals and benefit from doing so, whether to clinch entry-level positions or promotions. The question is whether the master list will achieve its objective of reining in such practices. Here it becomes necessary to recognise that publishing in fake journals is not the only kind of research fraud taking place; the UGC master list will not be adequate to address the other kinds of academic malpractice.
Even before the master list of legit journals is prepared, however, the UGC itself seems to be undermining the rationale behind preparing such a list. First, the UGC may agree to the proposal that universities can recommend additional journals for inclusion in the master list (see Part I). By itself, this is not a bad thing: despite good intentions, the committee preparing the list may exclude good-enough journals that are relevant to Indian scholars. At the same time, there are no guarantees that universities will not recommend substandard journals or that the committee will not accept them.
Second, there are reports that the modified API (Academic Performance Indicators) scheme will provide greater flexibility under the “research” category, so that academics do not need to publish journal articles to score high points. As I argued in Part I, the revised API scheme, if implemented, will allow faculty to accumulate points even without journal publications. It is hard to understand the logic of doing the tedious work of preparing a list of legitimate journals if the importance of publishing in those journals is substantially diminished.
The arrival of SCIgen
In addition to the above, there are two larger and significant issues pertaining to research fraud that the UGC master list will not be able to address. Research fraud has evolved and taken new forms in the 21st century. Plagiarism, fabricated research and other related practices have always existed in academia, in India as elsewhere. Over the past two decades or so, however, new technologies have allowed fraudulent research practices to diversify.
One of the new forms of fraudulent research and publishing is the use of software to generate academic papers. In 2005, Jeremy Stribling, Dan Aguayo and Max Krohn at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a computer science paper generator that could stitch together nonsense papers complete with impressive-looking graphs. The papers produced by their SCIgen software seemed genuine enough to be accepted at big conferences and by established journals. SCIgen has been used by scores of academics to publish journal articles and conference proceedings with reputed publishers such as Springer (Germany) and the Institute of Electrical and Electronics Engineers (IEEE, US). It took some chasing by Cyril Labbé, a French computer scientist at Joseph Fourier University, to bring down 120 SCIgen papers.
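SCIgen works by recursively expanding a hand-written context-free grammar stuffed with computer-science jargon. The short Python sketch below illustrates the general technique; the grammar rules here are invented for this illustration and are far cruder than SCIgen’s actual rule set, which produces entire papers with figures and citations.

```python
import random

# Toy grammar for illustration only; not SCIgen's real grammar.
GRAMMAR = {
    "SENTENCE": [
        "We present NOUN, SYSTEM for TASK.",
        "Our evaluation of NOUN proves that TASK is feasible.",
    ],
    "NOUN": ["Fluff", "TinCan", "Zeta"],
    "SYSTEM": ["a scalable framework", "a novel methodology"],
    "TASK": ["the refinement of ADJ heuristics", "emulating ADJ archetypes"],
    "ADJ": ["probabilistic", "replicated", "homogeneous"],
}

def expand(token: str) -> str:
    """Recursively replace non-terminal tokens with random productions."""
    core = token.strip(".,")  # punctuation stays attached to tokens
    if core not in GRAMMAR:
        return token
    production = random.choice(GRAMMAR[core])
    body = " ".join(expand(t) for t in production.split())
    return body + token[len(core):]  # re-attach stripped punctuation

print(expand("SENTENCE"))
# e.g. "We present TinCan, a novel methodology for emulating replicated archetypes."
```

A few dozen such rules, applied recursively, are enough to produce prose that skims like a real abstract even though no sentence carries any meaning.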
Incidentally, Labbé maintains a website where one can screen papers for SCIgen-generated text. SCIgen’s inventors have also released a companion program, SCIpher, which hides messages inside SCIgen-style calls for papers. These days, Springer uses a program called SciDetect to catch fake papers.
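Detection exploits the same property that makes generation easy: the grammar’s vocabulary is fixed, so generated papers keep reusing the same telltale word sequences. The sketch below is a hypothetical, much-simplified version of that idea; tools like SciDetect reportedly rely on more robust inter-textual distance measures over whole documents.

```python
# Hypothetical phrase-overlap screen; the phrases below are invented
# examples, not an actual list drawn from SCIgen's grammar.
KNOWN_GENERATED_PHRASES = {
    "we present a novel methodology",
    "emulating replicated archetypes",
    "the refinement of probabilistic heuristics",
}

def suspicion_score(text: str) -> float:
    """Fraction of known telltale phrases that appear verbatim in the text."""
    lowered = " ".join(text.lower().split())  # normalise whitespace
    hits = sum(1 for phrase in KNOWN_GENERATED_PHRASES if phrase in lowered)
    return hits / len(KNOWN_GENERATED_PHRASES)

print(suspicion_score(
    "We present a novel methodology for emulating replicated archetypes."
))
# -> 0.67 (two of the three telltale phrases appear)
```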
It is not known exactly how widely SCIgen has been used. If legitimate India-based computer science journals have not been running anti-SCIgen programmes, it is very likely that some of our computer scientists have published SCIgen-generated papers in them.
Computer-generated papers are not limited to computer science and related disciplines. Les Perelman, a former director of undergraduate writing at MIT, and his students at Harvard and MIT developed the Basic Automatic B.S. Essay Language Generator, or Babel. Perelman was at war with the essay-grading automatons that are increasingly being used to grade humanities and social science papers. The Babel generator is primarily designed to fool the machines, but such programmes can perhaps create authentic-enough papers that dupe humans as well.
Fake peer reviews
The second new and popular method involves cheating the peer-review process. When academics submit research articles for publication in journals, the articles are reviewed by a few experts with competence in that specific area of study. Though not without faults, this method has been in use for decades and has helped ensure that articles selected for publication are of good quality. But academics are under increasing pressure to publish – in the current era of world university rankings (which use research output as a key measure, prompting universities to push their faculty to publish more), competitive research grants, promotions and such – and some have started to turn to new business firms devoted to profiting from ‘helping’ academics publish in prestigious journals.
In August 2015, Springer retracted 64 published articles for false peer reviews. These articles had appeared in journals on neurobiology, cancer research, biochemistry and other scientific topics. It announced that editorial checks had “spotted fake email addresses, and subsequent internal investigations uncovered fabricated peer review reports.” Nearly all the articles were authored by China-based academics. In March of the same year, BioMed Central retracted 43 articles for false peer reviews. Again, most authors were China-based.
Part of the problem stems from the fact that many scientists, especially non-native English speakers, seek outside help, usually from third-party firms, to publish their papers. In many cases these firms offer services that go beyond language polishing and may include creating false identities for peer review. Most publishers are now aware of such systematic and organised attempts to manipulate the peer-review process. As a result, the whole system is under greater scrutiny for a broad range of flaws and irregularities, from lackadaisical reviewing to cronyism to outright fraud.
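The fake-email tactic Springer describes also suggests why simple editorial checks can catch it. The sketch below is a hypothetical illustration of one such check, not any publisher’s actual workflow: it flags author-suggested reviewers who use easily-fabricated free webmail addresses or who share a domain with the submitting author.

```python
# Hypothetical editorial screen on author-suggested reviewers.
# The domain list and example addresses are invented for illustration.
FREE_WEBMAIL = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def flag_suspicious_reviewers(author_email, reviewer_emails):
    """Return suggested-reviewer addresses that warrant manual scrutiny."""
    author_domain = author_email.rsplit("@", 1)[-1].lower()
    flagged = []
    for address in reviewer_emails:
        domain = address.rsplit("@", 1)[-1].lower()
        # Free webmail is trivial to fabricate; a shared domain may
        # indicate a colleague reviewing (or being impersonated by) the author.
        if domain in FREE_WEBMAIL or domain == author_domain:
            flagged.append(address)
    return flagged

print(flag_suspicious_reviewers(
    "author@univ.example.edu",
    ["reviewer1@gmail.com", "expert@lab.example.ac.uk", "friend@univ.example.edu"],
))
# -> ['reviewer1@gmail.com', 'friend@univ.example.edu']
```

Checks like this are crude, of course; they catch the laziest fabrications while determined fraudsters register plausible institutional-looking addresses, which is why publishers have had to investigate review reports themselves.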
Research fraud will remain a problem
We have to accept that research fraud of one kind or another will continue to take place. Further, it is not an Indian or a Chinese problem; Western academics also indulge in such practices. What is perhaps more important is that the concerned stakeholders, in particular governments, universities and publishers, take steps to minimise such practices. Journals have already started to address the problem seriously since publishing fraudulent articles gives them a bad name. Reputed universities too tend to take action against erring academics because research fraud hurts their image.
In many countries around the world, however, including India, even the most obvious and egregious kinds of research fraud are often ignored and/or go unpenalised even when the culprits are exposed, especially when it involves those in office. It does not help that in these countries, governments tend to maintain tight control over the higher education sector and dictate policies to academic institutions. Initiatives to check research fraud, such as preparing a master list of legit journals, come from the government, and they usually fall short either because they are never fully implemented or because, as in this case, the techniques of research fraud have advanced further, making older measures dated and irrelevant.
Pushkar is an assistant professor at the Department of Humanities and Social Sciences, BITS Pilani-Goa.