The impact of ranking journals on the discipline of history remains fraught, particularly in Australia, where the Australian Historical Association has recently opened a consultation on producing a ranked list of history journals. Robert Cribb responds to Katie Barclay's argument that journal rankings harm the discipline.

 

Robert Cribb

No-one who advocates the ranking of History journals thinks that this is an objectively good thing to do. Rather, we understand that historians in Australia inhabit a university environment where rankings have become pervasive and where they work directly to the disadvantage of historians. A ranking of History journals is an essential defensive measure for our discipline in a university environment that is often predatory and unsympathetic towards the Humanities.

Journal ranking has become a lightning rod for deeper discontents about academic life. (Image: Flickr Creative Commons)

The best research brews over years and has an impact stretching over decades or even generations. The contemporary Australian university, however, demands instant judgments of quality. To judge an article which they have not read and will never read, our managers now commonly refer to the ranked quality of the journal in which it is published.

The ranking field is dominated, however, by providers who serve the History discipline poorly.

Thomson Reuters determines an ‘impact factor’ for each journal on its master list, while Scimago uses Scopus data to produce the ‘Scimago Journal Rank’. Each of them uses citations – mentions of a scholarly work in other scholarly works – as a proxy for quality.

Neither of them covers History journals more than superficially. For the purposes of ERA (Excellence in Research for Australia) evaluations, the Australian Research Council identifies 561 journals in the field of History. Thomson Reuters ranks just 87 History journals. Scimago has better coverage, but it does not use a tight ‘History’ category: its highest-ranking ‘History’ journal is Public Opinion Quarterly, and it fails to rank History Australia and many other regionally focussed History journals.

A further shortcoming of these ranking systems is that they pay no attention to monographs and edited volumes, despite their importance to our discipline. As a result, and also because History’s sub-fields tend to be smaller, most citation scores in History are derisory. Thomson Reuters’ highest-ranked History journal, the American Historical Review, has an impact factor of 1.333. By contrast, the Geographical Journal, a premier journal in one of our sister disciplines, scores 3.206. History journals have a median impact factor of 0.286, making History the lowest ranked of Thomson Reuters’ 234 discipline categories.

This poor outcome matters because historians in Australian universities are in competition with the citation-heavy disciplines for appointments (when the position allows for applicants from different disciplines), tenure, promotion and grants. History discipline leaders across the university sector complain that these journal rankings have been held against historians in academic competition.

One relatively small step that we can take to counter this unfavourable circumstance is to develop our own ranking of History journals. The value of such a ranking is twofold: it provides a separate, authoritative standard for judging journal quality in History, and it allows us to include the full range of History journals, rather than depending on the vagaries of selection and classification by commercial rankers.

These considerations would not count if journal ranking had truly evil consequences for the discipline. The reality, however, is that journal ranking lists simply formalize the informal ranking that has been routine amongst historians for generations. Selection committees made up entirely of historians regularly make judgments about the outlets in which competing candidates have placed their articles. A formal ranking system does not elevate Comparative Studies in Society and History over the Journal of Australasian Mining History. That elevation has already been set in place by the consensus of working historians.

In her thoughtful essay, ‘Journal “Quality” Rankings Harm the Discipline of History’, Katie Barclay warns that journal rankings are conservative, working to preserve power hierarchies and to block innovation. Yet peer review, which is often held up as the gold standard for quality evaluation, is marked by exactly the same problem. Peer reviewers exercise their own subjective, generally unaccountable, judgements about quality. They shape the field by favouring one candidate over another, one approach over another. The conservative effect of journal rankings, too, is easy to overstate. Every journal editor is keen to identify future trends and to publish the articles that will become future decades’ key texts.

None of this is to suggest, however, that journal ranking is unproblematic. Previous ranking exercises were sometimes marked by a lack of transparent criteria for quality and by the arbitrary and partisan decisions of small ranking committees. We need well-developed criteria and a system of consultation with the profession that ensures the fairness and credibility of rankings. Criterion-based marking is standard in Australian universities and it should be standard in journal ranking. We thus need to revisit the arbitrary notion, imposed by the ARC in its first journal ranking exercise in 2010, that 50% of all journals should be ranked ‘C’. Any ranking system needs a clear commitment, too, to giving proper support to smaller sub-fields, so that all areas of History have similar opportunity to publish in A*, A, B and C journals.

It is not surprising that the 2010 ERA journal ranking was flawed in several respects, because it was a first, ambitious attempt. The fact that this now-outdated ranking continues to be used in much of the Australian system, sometimes with superficial amendments, shows how deeply entrenched ranking has become.

Another significant objection to journal ranking is that it devalues social and political engagement. In rankings, quality is judged in academic terms, not in terms of engagement with broader communities. There can be a real quandary in choosing between publishing an article in a local outlet where it will be read by people who are directly affected and publishing it in a national or international journal where it will come to the attention of other scholars. Journal ranking, however, does nothing to exacerbate this problem.

In many respects, the issue of journal ranking has become a lightning rod for deeper discontents about academic life. Hardly any of us are happy with the managerialism that infects modern universities or with the threats to academic freedom that it implies. But managerialism will not go away simply because we assert values that seemed unassailable in the 1950s.

Australian universities already use citation-based journal rankings to judge the quality of their academics. But they use the wrong ones. They also rely upon even blunter tools such as grant success and the volume of output. Peer review is time-consuming and cannot possibly meet all the demands of the system for quality evaluation. Our choice now is between allowing others to impose ranking systems that unfairly disadvantage History as a discipline, and grasping the responsibility to produce a fair and responsible ranking ourselves.

Robert Cribb is Professor of Asian History at the Australian National University. His research focuses on Indonesia, especially mass violence and national identity.
