Education

Journal ranking for History: necessary and not even evil

The impact of ranking journals on the discipline of History remains fraught, particularly in Australia, where the Australian Historical Association has recently opened a consultation on producing a ranked list of History journals. Robert Cribb responds to Katie Barclay’s essay, ‘Journal “Quality” Rankings harm the Discipline of History’.

No-one who advocates the ranking of History journals thinks that this is an objectively good thing to do. Rather, we understand that historians in Australia inhabit a university environment where rankings have become pervasive and where they work directly to the disadvantage of historians. A ranking of History journals is an essential defensive measure for our discipline in a university environment that is often predatory and unsympathetic towards the Humanities.

Journal ranking has become a lightning rod for deeper discontents about academic life. (Image: Flickr Creative Commons)

The best research brews over years and has an impact stretching over decades or even generations. The contemporary Australian university, however, demands instant judgments of quality. To judge the quality of an article which they have not read and will never read, our managers now commonly refer to the ranked quality of the journal in which it is published.

The ranking field is dominated, however, by providers who serve the History discipline poorly.

Thomson-Reuters determines ‘impact factor’ for each journal in its master-list, while Scopus produces a ‘Scimago Journal Rank’. Each of them uses citations – mentions of a scholarly work in other scholarly works – as a proxy for quality.
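
To make concrete what these citation metrics actually measure, the standard two-year impact factor can be sketched as follows (the notation here is ours, and the fine print about what counts as a ‘citable item’ varies between providers):

\[
\mathrm{IF}_{y} \;=\; \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}}
\]

where \(C_{y}(t)\) is the number of citations received in year \(y\) to items the journal published in year \(t\), and \(N_{t}\) is the number of citable items it published in year \(t\). On this arithmetic, an impact factor of 1.333 – the figure quoted below for the American Historical Review – means that articles from the two preceding years attracted, on average, only about 1.3 citations each in the census year: a very short window for a discipline whose influence stretches over decades.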

Neither of them covers History journals more than superficially. For the purposes of ERA (Excellence in Research for Australia) evaluations, the Australian Research Council identifies 561 journals in the field of History. Thomson-Reuters ranks just 87 History journals. Scimago has better coverage, but it does not use a tight ‘History’ category: its highest-ranking ‘History’ journal is Public Opinion Quarterly, and it fails to rank History Australia and many other regionally focussed History journals.

A further shortcoming of these ranking systems is that they pay no attention to monographs and edited volumes, despite their importance to our discipline. As a result, and also because History’s sub-fields tend to be smaller, most of the citation scores in History are derisory. Thomson-Reuters’ highest-ranked History journal, the American Historical Review, has an impact factor of 1.333. By contrast, the Geographical Journal, a premier journal in one of our sister disciplines, scores 3.206. History journals have a median impact factor of 0.286, making History the lowest-ranked of Thomson-Reuters’ 234 discipline categories.

This poor outcome matters because historians in Australian universities are in competition with the citation-heavy disciplines for appointments (when the position allows for applicants from different disciplines), tenure, promotion and grants. History discipline leaders across the university sector complain that these journal rankings have been held against historians in academic competition.

One relatively small step that we can take to counter this unfavourable circumstance is to develop our own ranking of History journals. The value of such a ranking is twofold: it provides a separate, authoritative standard for judging journal quality in History, and it allows us to include the full range of History journals, rather than depending on the vagaries of selection and classification by commercial rankers.

These considerations would not count if journal ranking had truly evil consequences for the discipline. The reality, however, is that journal ranking lists simply formalize the informal ranking that has been routine amongst historians for generations. Selection committees made up entirely of historians regularly make judgments about the outlets in which competing candidates have placed their articles. A formal ranking system does not elevate Comparative Studies in Society and History over the Journal of Australasian Mining History. That elevation has already been set in place by the consensus of working historians.

In her thoughtful essay, ‘Journal “Quality” Rankings harm the Discipline of History’, Katie Barclay warns that journal rankings are conservative, working to preserve power hierarchies and to block innovation. Yet peer review, which is often held up as the gold standard for quality evaluation, is marked by exactly the same problem. Peer reviewers exercise their own subjective, generally unaccountable, judgements about quality. They shape the field by favouring one candidate over another, one approach over another. The conservative effect of journal rankings, too, is easy to overstate. Every journal editor is keen to identify future trends and to publish the articles that will become future decades’ key texts.

None of this is to suggest, however, that journal ranking is unproblematic. Previous ranking exercises were sometimes marked by the lack of transparent criteria for quality and by the arbitrary and partisan decisions of small ranking committees. We need well-developed criteria and a system of consultation with the profession that ensures the fairness and credibility of rankings. Criterion-based marking is standard in Australian universities and it should be standard in journal ranking. We thus need to revisit the arbitrary notion, imposed by the ARC in its first journal ranking exercise in 2010, that 50% of all journals should be ranked ‘C’. Any ranking system also needs a clear commitment to giving proper support to smaller sub-fields, so that all areas of History have a similar opportunity to publish in A*, A, B and C journals.

It is not surprising that the 2010 ERA journal ranking was flawed in several respects, because it was a first, ambitious attempt. The fact that this now-outdated ranking continues to be used in much of the Australian system, sometimes with superficial amendments, shows how deeply entrenched ranking has become.

Another significant objection to journal ranking is that it devalues social and political engagement. In rankings, quality is judged in academic terms, not in terms of engagement with broader communities. There can be a real quandary in choosing between publishing an article in a local outlet where it will be read by people who are directly affected and publishing it in a national or international journal where it will come to the attention of other scholars. Journal ranking, however, does nothing to exacerbate this problem.

In many respects, the issue of journal ranking has become a lightning rod for deeper discontents about academic life. Hardly any of us are happy with the managerialism that infects modern universities or with the threats to academic freedom that it implies. But managerialism will not go away simply because we assert values that seemed unassailable in the 1950s.

Australian universities already use citation-based journal rankings to judge the quality of their academics. But they use the wrong ones. They also rely upon even blunter tools such as grant success and the volume of output. Peer review is time-consuming and cannot possibly meet all the demands of the system for quality evaluation. Our choice now is between allowing others to impose ranking systems that unfairly disadvantage History as a discipline, and grasping the responsibility to produce a fair and responsible ranking ourselves.

7 Comments

  1. What seems odd about this debate is that in bibliometrics, journal impact factors have become almost a dirty word – for many of the reasons outlined above. The better metric is seen not as improved journal rankings but as article-level citations. I wonder whether Australian historians might be better off arguing along these lines instead.

  2. William Farrell makes an important point. The debate here, though, is only part of a broader discussion of bibliometrics by Australian academics. Whereas article-level citations are a tool for assessing the standing of established academics, they are less helpful (and less threatening) to early-career researchers, who would like to be able to say ‘I have just published in an A* journal’.

  3. This argument doesn’t make much sense to me. We’re told managerialism is inevitable and we need to play the game for the sake of the field. But managerialism can be resisted, and even if it cannot, it’s not clear that quality lists would help us win.

    Universities are run by humans and indeed humanists. These lists are not being imposed from outside but by us. Given this, we can resist; we can do academia differently. Why shouldn’t we fight for this? And what is the payoff for this infringement on our academic freedom?

    The argument that it helps us win is far from apparent. In Australia, universities feel pressured to perform in world rankings and in ERA. Some world rankings use metrics; none use a quality list. Using a quality list therefore does nothing to help us perform in them. Given this, why would using them give us more voice in a dispute over university resources? ERA uses peer review in humanities fields. A quality list is not peer review, and there is no evidence that quality lists would improve our standing at ERA if we’re already producing quality research. Indeed, ERA explicitly rejected these lists as problematic. Why isn’t ERA itself enough of a measure of our quality, with individuals managed through professional development reviews and guidance that take account of their goals and expertise?

    Universities that are using citations to measure the humanities should also be resisted, as these aren’t just an ‘imperfect’ measure. Most indexers don’t count citations or impact factors for humanities journals (history is complicated, as some journals sit in social science but not all). They originally claimed this was because they didn’t cover books and chapters, and they didn’t count citations for articles older than five years, so it was a partial count. Google was enlightening because they told us the technology can’t read footnotes, only reference lists. This excluded many, maybe most, humanities journals. It is likely other indexes also have this problem. Citation counting in the humanities is therefore so unreliable as to be a farce. If you occasionally get a citation for a humanities article, it’s usually because it appeared in a social science journal. Saying quality lists are a better measure is then disingenuous, as there is no other measure. The question is why we are measuring, what this measure actually proves, and whether the payoff is worth it. I don’t think so.

  4. There are a few historians in the Australian system who are unaffected by rankings – the highest flyers and those whose careers have nowhere to go – but most of us are being held to account for the quality of our work, year in and year out, by managers and by members of promotion and grant committees. These people are mostly not familiar with our specific fields and they therefore look, amongst other things, for externally validated judgment of the quality of the journals in which we publish.
    Those judgments are easy to find in the existing journal rankings, and we cannot stop our managers and assessors from using them.
    The choice is not whether we have journal rankings, but which ones are to prevail. We can have the big commercial rankers who treat History badly (not out of malice but because we don’t fit neatly). We can have the local rankings that many universities have put in place, most of them patched-up amendments of the outdated 2010 ERA ranking. Or we can have our own list, prepared and endorsed by our own community of historians.
    The ERA, retrospectively assessing the quality of disciplines across each university, is only part of the picture. The local university level where decisions are made about the individuals is where the future of our discipline in Australia is decided.

    1. @R Cribb “The local university level where decisions are made about the individuals is where the future of our discipline in Australia is decided.” Therein lies part of the problem of giving local universities the arbitrary power to assign value for a discipline, or indeed a faculty. Firstly, we operate in a global world and academics are, indeed, a global commodity, so any value assigned within a faculty or department for a ranking must be inclusive of global influences; as Katie points out, in some faculties – as was the case for my undergraduate degree some 30 years ago – history was a social science, not a humanities subject. Also, let’s take the case where a local bias may occur towards local publishing houses, and the case where the person engaged in the decision-making for this ranking may not have the requisite acumen to assess all publishers (books or journals) fairly, and may unknowingly – and, it might be construed, unjustly – exclude a specific publisher. If one is to employ metrics and use them as a KPI for performance, which they are, then the standard in a global climate must be governed beyond the locality and administered by a body that has a broader framework and is unlikely to adopt postures that exclude notable publishers. There is far too much of a closed-shop mentality, and furthermore, the precedent of having one standard for all disciplines is inappropriate; any standard must be tailored to each individual discipline, such that, for example, the AHA might provide that listing, having worked in conjunction with its American and British equivalents (the AHA and RHS).

  5. Journal rankings are very helpful for scholars new to a field who are deciding where to read and publish. Without the tacit knowledge of established scholars, and not always able to consult a senior scholar, they have to resort to the journal impact factor.

  6. The notion of using citation counts as a proxy for “quality” is a ruse perpetrated by citation providers. If something can be given a number then it suddenly becomes important. Citation counts, if anything, are simply a measure of popularity – driven mostly by exposure to social and research networks – which is in no way akin to “quality” in the context of scholarly endeavours. Ranking systems, particularly university and journal ranking systems, are so flawed and inconsistent as to be a pointless distraction that sucks up resources. Paying attention to these rankings, which are essentially driven by profit-makers, is to dig a hole for ourselves and leads to chasing the wind.
