
Let’s do away with Institutional rankings


It would be sufficient to divide universities into different tiers and let the clientele who need to decide delve deeper to find what suits them.

By Nazarul Islam

The Quacquarelli Symonds (QS) ranking of world universities has just been published for the year 2022. There is much jubilation in Pakistan at the Lahore University of Management Sciences (LUMS) being ranked the top research university in the world. I am delighted it is ranked at the top, as one can then look at the rankings critically without being subjected to the accusation that this is just a case of sour grapes.

Therefore, we need to take a critical look at the ranking process and ask ourselves how justified LUMS's rank is vis-à-vis institutions ranked at par with it or below it in research. Other universities that find a mention include well-known centers of education and research such as MIT, Harvard University, Caltech, Stanford University, the University of California, Berkeley, and the University of Cambridge.

Surprisingly, BRAC University of Bangladesh, founded by the philanthropist Sir Fazle Hasan Abed, enjoyed a unique ranking within South Asia.

Therefore, it is vital to analyze the validity of research rankings and not place so much importance on LUMS's rank that we become complacent about our research performance.

It is quite ludicrous that the performance of entire institutes can be quantified and reduced to a single number, much like an individual's abilities are quantified by his or her IQ. After all, education and research involve the interplay of creativity, innovation and mentoring, and are much too complex to be reduced to a single performance metric.

Our (human) nature is to compare and quantify, and as long as people are willing to value rankings, there will be companies to do the task. But it is for people in decision-making capacities to view these critically and not make major decisions based only on the rankings.

The QS ranking system works by considering six criteria. These are: 1) academic reputation, 2) reputation as perceived by employers, 3) faculty/student ratio, 4) citations per faculty, 5) the ratio of international faculty to national faculty, and 6) the ratio of international students to national students.

The first criterion – academic reputation – is decided by a survey conducted among about 100,000 experts in teaching and research. The subjective perception of these experts governs the score for academic standing, yet this criterion carries the greatest weight, 40 per cent, in determining rankings.

A similar survey, drawing about 50,000 responses from employers, covers the second criterion and contributes 10 per cent of the weight to the ranking system. Thus, these intangibles decide half of the ranking points.

The remaining four criteria can be quantified based on the data provided by universities. The faculty-to-student ratio of a university carries a weight of 20 per cent. However, a larger faculty-to-student ratio does not always translate to better instruction for students.

In many research institutes, support faculty such as scientific and technical officers are counted as teaching faculty, although they are only marginally involved in teaching. In many universities and institutes, well-established senior faculty are less accessible to students; they are often less involved in the institution's academic activities as they tend to be busy serving on various committees, nationally and internationally.

Another criterion is the citations per faculty, and this parameter constitutes 20 per cent of the ranking points. Citations are the number of times other publications cite a published paper. However, in scientific circles, to boost citations for one’s papers, one has to circulate among scientists by attending conferences, inviting other scientists for seminars and personally socializing one’s ideas.

So, the number of citations may scale more with the visibility of a paper than with the true scientific impact of the work. It is also not uncommon for a paper to be cited in order to discredit it; this counts as a citation nevertheless. Notwithstanding these well-known drawbacks, the number of citations remains one of the leading metrics used by the research community and funding agencies to judge research performance.

For the QS rankings, the citations received per faculty member over the five years starting from seven years before the assessment are considered. The citations that count towards the ranking exclude self-citations, namely citations of the authors' own papers. The citation count is also normalized against the total citations in the field, as not all fields have the same number of researchers or total publications.
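To make the arithmetic concrete, here is a minimal sketch of how a citations-per-faculty indicator along these lines might be computed. The data structures, the field totals and the exact normalization are illustrative assumptions for clarity, not QS's actual implementation.

```python
# Illustrative sketch of a normalized citations-per-faculty indicator.
# The paper records, field totals and normalization are assumptions,
# not the actual QS computation.

from dataclasses import dataclass

@dataclass
class Paper:
    field: str            # subject area of the paper
    citations: int        # citations received in the assessment window
    self_citations: int   # citations coming from the authors' own papers

def citations_per_faculty(papers, faculty_count, field_totals):
    """Sum non-self citations, normalize each paper by its field's
    total citations, and divide by the number of faculty."""
    score = 0.0
    for p in papers:
        external = p.citations - p.self_citations    # exclude self-citations
        field_total = field_totals.get(p.field, 1)   # guard against unknown fields
        score += external / field_total              # field normalization
    return score / faculty_count

# Example: two papers in different fields, 500 faculty members
papers = [
    Paper(field="physics", citations=120, self_citations=20),
    Paper(field="economics", citations=40, self_citations=5),
]
field_totals = {"physics": 1_000_000, "economics": 200_000}
print(citations_per_faculty(papers, faculty_count=500, field_totals=field_totals))
```

Even in this toy form, the sketch shows how sensitive the indicator is to a handful of heavily cited papers, which is exactly the weakness the following example illustrates.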

Citations can sometimes lead to incorrect conclusions. An example is India's Panjab University, which was ranked the top Indian university/institute in the Times Higher Education ranking of Asian institutes in 2013, upstaging institutes that had continuously outperformed Panjab University on almost all other metrics.

The anomaly was attributed to many citations received by a few faculty and their students who were joint authors on papers on the discovery of the Higgs boson at the Large Hadron Collider (LHC) run by CERN in Switzerland. A few thousand authors from a few hundred universities and institutes authored these papers.

Besides such anomalies, the impact of a paper is usually not realized within a few years of its publication. The impact of a paper is better judged by the longevity of its citations. Thus, restricting the citation count to publications from just a few years is flawed. Instead, the total citations received over the previous five years by all papers published since the founding of an institute or university may be a better metric.

Honestly, I am generally opposed to the idea of ranking. This way of reckoning the citation score would give due weight to pioneering publications that have remained important over long periods. In another ranking system, the number of papers published by an institute was used as a parameter in deciding rankings. This led to an anomaly in the 2010 rankings involving the University of Alexandria, which was surprisingly placed very high that year.

This anomaly was traced to a single professor’s practice of misusing his position as an editor of a journal to publish a large number of articles in that journal.

The remaining two metrics, namely the fractions of international faculty and international students, are driven by the sociological, financial and geographical aspects of where an institute is located.

Although these two criteria carry the least weight, they can unfairly favor one institute over another for mostly non-academic reasons.
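Pulling the six criteria together, the composite is essentially a weighted sum. The sketch below assumes the percentages quoted above for the first four criteria and assumes 5 per cent each for the two international ratios (the text only says they carry the least weight); it is not QS's code, only an illustration of how one number swallows six very different measurements.

```python
# Sketch of the weighted composite implied by the six criteria above.
# The two 5% weights for the international ratios are assumed; the rest
# follow the percentages quoted in the text.

WEIGHTS = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "international_faculty": 0.05,   # assumed weight
    "international_students": 0.05,  # assumed weight
}

def composite_score(indicator_scores):
    """Weighted sum of per-indicator scores, each assumed to be on a 0-100 scale."""
    return sum(WEIGHTS[name] * indicator_scores.get(name, 0.0) for name in WEIGHTS)

# Example: a hypothetical university's indicator scores (0-100)
example = {
    "academic_reputation": 85,
    "employer_reputation": 70,
    "faculty_student_ratio": 60,
    "citations_per_faculty": 90,
    "international_faculty": 40,
    "international_students": 35,
}
print(composite_score(example))  # prints the single composite score
```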

The question of whether it is necessary and desirable to rank universities and institutes is a moot point. Many universities often resort to window-dressing the data to improve their rankings.

Often, they have personnel entrusted with finding ways to improve the rankings and embellish the achievements of students and faculty. Many a time, scarce resources are frittered away in showcasing the institute.

Critical evaluation of the ranking system has become important as funding agencies find it an easy metric to base their decisions on. The ranking process is further encouraged by the scientific publishing machinery (which is already a money-spinning business).

It stands to profit even more when researchers increase their publication numbers to improve their institute's ranking, besides boosting their own individual metrics.

I believe that just as we need to do away with marks and ranks for students in exams, we also need to rethink the ranking of universities. Should we not do away with the tradition? Instead, it is sufficient to divide universities into different tiers and let the clientele who need to decide delve deeper to find what suits them. Such a tier-based division can also be done for specific categories such as best faculty, best campus, best peer group and so forth.

It would certainly help prospective students and their parents make decisions based on the categories they care most about, instead of bestowing bragging rights on institutes and universities to advertise themselves.

The Bengal-born writer Nazarul Islam is a senior educationist based in the USA. He writes for Sindh Courier and newspapers of Bangladesh, India and America. He is the author of a recently published book, 'Chasing Hope', a compilation of his 119 articles.