Global University Rankings
The European University Association (EUA) recently released a report it commissioned entitled Global University Rankings and Their Impact. The report was written by Andrejs Rauhvargers.
According to the EUA, one of its major motivations in commissioning the report was that its member universities are “often under pressure to appear in the rankings, or to improve their position in one way or another.”
Some of the report’s findings include the following:
- Shanghai Jiao Tong University published the first “global university ranking” in 2003. Just eight years later, there are now more than a dozen such rankings.
- International rankings typically include between 200 and 500 universities, meaning that they only cover between 1% and 3% of the world’s 17,000 universities.
- Most of the rankings “focus predominantly” on research, as opposed to teaching. Likewise, “the importance of links to external stakeholders and environments” is “largely ignored.”
- “Bibliometric indicators” are often used as a gauge for measuring research outcomes. The report argues that this advantages the natural sciences and medicine and disadvantages the social sciences and humanities. Likewise, these indicators tend to disadvantage the publication of books and anthologies.
- The report finds that the indicators advantage English-language universities, as “non-English language work is both published and cited less.”
- The rankings disadvantage universities with specialized mandates, such as those that serve a specific region or that strive to be accessible to older students.
- The report finds that, “[i]n an attempt to improve their positions in the rankings, universities are strongly tempted to improve their performance specifically in those areas which are measured by ranking indicators.”
- Sometimes universities manipulate data in order to improve their standing (e.g. by merging with other universities or by manipulating student-staff ratios).
I’m not opposed to universities being held accountable to outside bodies. Nor am I opposed to the use of measured outcomes. But given that higher education is about so much more than research, wouldn’t it be wise to arrive at the methodology through a consultative process that includes student federations, labour groups and faculty associations? And wouldn’t it be good if outcomes included teaching quality, knowledge translation, community engagement and accessibility to vulnerable groups?
I would like to see one of Ontario’s political parties adopt just such a proposal in their platform for this October’s provincial election campaign. The party in question could propose to spearhead a process that would aim to hold all Ontario universities accountable on outcomes that are agreed upon with key stakeholders, including the Canadian Federation of Students, the Ontario Confederation of University Faculty Associations, the Ontario Federation of Labour and the Council of Ontario Universities.
Ontario should set an example for the rest of the world to follow.
Russell Jacoby, in The Last Intellectuals and other works, argues that the decline of public intellectuals is a function of, among other things, the very measures used to rate universities that you point out in your comment, Nick.
He also argues that society itself has ceased to be an environment that supports the role of the public intellectual, as indicated by the decline of the general-interest newspaper (I have just heard a ‘senior’ journalist on CPAC argue that it is a good thing for newspapers to find specialist niches), the decline of ‘bohemias’, and the decline of what some, certainly when I was an undergraduate the first time, denied even existed: consciousness.
The external measures you propose are certainly a step in the right direction.
Yet I remember when I was at the University of Toronto in the 1970s and was active in the Political Economy Course Union: I wrote many course evaluations based upon surveys.
I was surprised, when I returned to Carleton, to find that course evaluations are run by the university and, according to my school administrator, are important in the advancement of instructors: in other words, that teaching is important.
We struggled for the recognition of teaching as an independent measure of the value of university faculty. Maybe we were victorious.
Yet I can only wonder, as the specialization of the university continues, as Jacoby and also Ben Agger describe, and as the general-interest public addressed by public intellectuals such as John Kenneth Galbraith, Jane Jacobs, Lewis Mumford and so many more continues to decline, whether even the outside agencies you point to, Nick, can themselves understand what they must do.
Obviously, those who undertake to rank universities base themselves on factors that can be easily measured.
The most important things — teaching quality and students’ accomplishment and happiness — cannot be measured easily.