The Australian Historical Association has recently opened a consultation on producing a ranked list of history journals. The ranking would assign each journal a ‘quality’ rating to enable easy quantification of research. The idea will be familiar to many academics who remember similar lists designed to aid the Research Assessment Exercises in the UK and Excellence in Research for Australia. It is driven by demands from management at a number of Australian institutions to quantify and measure the research of their staff at minimal cost in time and effort.
This is a conservative and retrograde step that professional historians have fought hard to resist. Rankings will inevitably push scholars to publish in particular outlets; as a result, ranking journals impinges on our academic freedom, our ability to innovate, and the long-term development of our fields of study. The decision of the committee to engage in this process could do irreparable damage to the historical profession.
Damage to the field
Journal rankings lists are innately conservative. Studies have demonstrated that the higher the journal’s ranking, the narrower its disciplinary focus. Not only are inter- and multi-disciplinary outlets generally poorly ranked, but so are journals focused on sub-disciplines that are not central to the disciplinary focus of the ranking committee, or that contribute to local, rather than international, scholarship. Given that history is a particularly broad church, the sidelining of sub-disciplines is almost inevitable. Fields that engage in systems of ranking therefore have areas of research with no ‘high-quality’ journals at all. As journal rankings lists are relatively static, they also fail to take into account new fields and emerging areas of research, whose journals have not had time to develop or be ranked.
This is a process that punishes small fields, interdisciplinary work and new research areas. It may cause the field of history to stagnate, as new perspectives from the fringes will not exist to challenge and disrupt core narratives in the field. Voices from the margin have been vital – perhaps particularly in history – to disrupting academic hegemonies and rejuvenating scholarship. Women’s history provides a key example of a once marginal field that transformed the study of history, not only through including the voices of women, but by challenging the boundaries of what mattered, what should be counted, and what shaped our social, economic and political worlds.
Damage to intellectual freedom
Journal quality rankings are never created in a neutral or apolitical environment. They reflect current disciplinary power hierarchies as much as they may reflect quality. Journals are also run by human beings who often bring their own biases about research quality to the table, placing some high-ranking journals off limits for particular types of scholarship.
Given the messy, political and human processes involved in determining journal quality, ranking journals can become an act of intellectual hegemony (as defined by UNESCO), asserting that particular types of knowledge and ways of practising scholarship are more valuable than others. This challenges basic principles of pluralism and tolerance, and it undermines academic solidarity by divisively fracturing knowledge communities into ‘quality’ and ‘not quality’, restricting the possibilities of new forms of engagement and practice. It acts to exclude scholars working to rethink the nature of knowledge and the bases of academic practices, and here we can point in particular to the work of Indigenous scholars who seek to challenge the cultural hegemony of Western knowledge systems. The academy provides a key protection and support for such scholarship; yet journal ranking systems destroy such exciting and democratically vital possibilities for knowledge practices.
Damage to new ways of publishing
The focus of ranking lists on well-established journals also undermines scholars working to rethink the nature of academic publishing, particularly open access. There is increasing evidence that open access publications are more likely to be read, downloaded and cited. Yet most humanities and many social science journals are not open access by default, and require significant payments from authors to make their research freely available. There is also increasing concern from many scholars about the high cost of academic publishing, particularly of books. As a result, a number of academics are exploring alternative – but nonetheless rigorous and peer-reviewed – approaches to mainstream academic publishing. These decisions are sometimes underpinned by important commitments to open access academic research, to spreading knowledge beyond the academy, and to the democratisation of knowledge.
Publication ranking lists make this sort of social and political engagement impossible. This is anti-innovation, because it prevents scholars from engaging in this important academic movement with its potential to revolutionise how we make and enable access to knowledge.
Damage to those of us resisting the quantification of knowledge
I understand that there is a demand from institutions to quantify and evaluate the quality of our research. However, it is not clear that this ranking system for journals is an effective or impartial way to achieve that goal. Indeed, I would argue that when ranking lists are used within universities they restrict our academic freedom. The principle of academic freedom is widely recognised as vital to the functioning of the modern university, necessary to the production and democratisation of knowledge. Ranking lists that are tied to promotion, funding or other rewards undermine the basic principles outlined by UNESCO that academics “should be free to publish the results of research and scholarship in books, journals and databases of their own choice” (see especially paragraphs 12, 20 and 29 of the UNESCO resolution).
Rather than supporting the creation of journal ranking lists and contributing to the reduction of our academic freedoms, the AHA and similar historical bodies should make a clear statement explaining how ranking systems damage the field. Nothing less than the vitality, innovation and future of historical research is at stake.