How would you describe your overall experience with the peer review process in your field — both as an author and as a reviewer?
In a word, mixed. As an author, I don’t recall very many helpful reviews. Some were prejudiced, but most were anodyne. History is a fragmented discipline with no set or stable canon; editors may often not know a field very well, or may be unable to find suitable reviewers.
As an editor of a field journal working as part of an editorial collective, I did not face the same problem. Yet from time to time there were reviews that fell short in some way. In those instances, the editor has a particular responsibility for keeping trust with the author(s). When it comes to accepting papers for review, I tend to be pretty discriminating, and while I have learnt a lot from reading fresh work, the quality of the work we are sent to review can still sometimes surprise. Here again, editors have a role in separating wheat from chaff.
In your view, what are the key elements of a high-quality peer review process, and how often do you think those are met in practice?
This is hard to summarise in the abstract, especially for a discipline like history. The quality of the submission also has some bearing on the quality of the process. In general, however, double-blind reviews by two or more reviewers are a must. Funding agencies are right to supply extensive guidelines and ranking criteria to reviewers of grant proposals, though perhaps with some avoidable redundancy. Journals generally have looser guidelines; hence the quality of the review depends greatly on the quality of the engagement the reviewer brings to the submission. Unlike in the case of grant proposals, where reports are codified into rankings and review committees may not have much time to pore over the reports, journal editors have an important role in ensuring the quality of the review process and, where necessary, mediating and communicating reviewers’ reports to authors in a constructive way, even more so where the authors happen to be early-career scholars.
However, all of this may become harder to realise with editorial management migrating to (potentially AI-assisted) online platforms. As the relationship between editors and reviewers becomes more impersonal, I wonder whether journals will start to encounter the same problem as some grant agencies, namely of AI-generated reviews.
What do you think could most improve the fairness and effectiveness (or quality) of the peer review process?
Here, I confine myself to journals. First and foremost is journal and editorial integrity, which cannot always be taken for granted, especially for journals owned by corporate publishers. This problem of integrity may seem less serious at present in the humanities and social sciences, but creeping citation biases may already be a sign of the times. As I said above, editors (ideally assisted by active and vibrant editorial boards) are a key pivot of the peer review process, which, to be truly fair and capable of nurturing quality scholarship, also has to make room for diversity, in terms of outlook, methods, personnel, and sensibilities, of both authors and scholarship.