
Experimenting with Workflows for Open Peer Review

Published on Apr 28, 2023

This is the seventh blogpost in a series documenting the COPIM/OHP Pilot Project Combinatorial Books: Gathering Flowers. You can find the previous blogposts here, here, here, here, here, and here.

COPIM’s Combinatorial Books: Gathering Flowers Pilot Project and book series has been looking at reusing and rewriting the books in the Open Humanities Press back catalogue. As we have documented over the last few years, the first book in this series and pilot has been produced by a group of writers, early career researchers, and technologists from Mexico, who have rewritten, responded to, and in many ways extended the arguments made in Marder and Tondeur’s The Chernobyl Herbarium, published by Open Humanities Press in 2016. Their response has been produced with the aid of annotation software tools and collaborative writing platforms, from initially drafting their first responses through a group annotation (in hypothes.is) of The Chernobyl Herbarium, to extending these responses into larger reflections on a Hedgedoc pad. In the last stages of this project we moved their final draft chapters (all in Spanish), together with an introductory essay in English (which includes fragments from the Spanish chapters translated into English, mirroring the fragmentary nature of The Chernobyl Herbarium), to the PubPub publication platform. At that point in the publishing process several things still needed to be put in place: the publication had to be reviewed, copy-edited, and proofread; its look and feel on the PubPub platform needed to be designed; and we needed to think about potential preservation strategies. This post reflects on how we set up an open peer review workflow for the publication, and with that for the book series as a whole. A forthcoming blogpost will look more closely at preservation aspects.

Before starting to write up our reflections on designing the review workflow and our experiences with the open peer review process, we would like to thank all the authors and reviewers who took part in this experiment with us, for their willingness, their engagement, and their time. Due to the experimental nature of this project, this included additional time spent preparing for the review and familiarising themselves with the tools and processes we set up and provided, on top of reading and reviewing their allocated chapter. Everyone went above and beyond and has been incredibly understanding and accommodating, even at times when, as part of our role as editors, we have had to chase some reviewers in order to move the process along. We really appreciate it.

Designing guidelines and instructions

We took inspiration from several other open peer review processes and guides while designing our own, and adapted some of their processes and instructions. We drew on our earlier desk research, including some of the guidelines we developed for the COPIM report Promoting and Nurturing Interactions with Open Access Books: Strategies for Publishers and Authors (2021). We were also inspired by projects and publishers that have been promoting a more nurturing review culture, such as the Public Philosophy Journal, whose review process ‘involves supporting both the publications as they go through their development and the people involved in the formative peer review process.’1

As we were aware that many of the authors involved in this book-rewriting project were early career researchers who had perhaps not had much experience with peer review, we opted to design a semi-open peer review workflow, in which authors and reviewers are known to each other and can respond to each other, but can do so in a closed group (with anonymity provided on request). As PubPub, our publication platform, did not at that point allow closed annotation groups, we decided to use hypothes.is (with the added bonus that the authors were already familiar with this tool from their annotating of The Chernobyl Herbarium).

To guide the review process, we wrote both contextual and technical instructions for authors and reviewers (which are available here and here), adopted a code of conduct (available here), and divided the work of moderating amongst ourselves as editors. As the chapters were all written in Spanish, and often in a non-traditional, more poetic and experimental way, we opted to approach reviewers who we knew (or, on some occasions, mistakenly thought!) understood both Spanish and English and were familiar with more experimental publications or writing styles. About a third to half of the reviewers we invited accepted our invitation, and all except one completed their review on hypothes.is on time or with a slight delay (one of the series editors stepped in to review in this one case). We thought this was quite a good result, especially as, as we write in our guidelines, ‘it’s even harder to find people who are willing and able to undertake rigorous peer-review that is out-of-the-ordinary and is explicitly designed NOT to simply go along with the tacit knowledge that is hidden in the traditional peer review system.’ It was all the more impressive because, given the nature of our project, several reviewers opted to read both Marder and Tondeur’s ‘original’ volume, The Chernobyl Herbarium, and the response chapter it generated.
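As an aside for readers interested in replicating this kind of setup: because hypothes.is exposes a public REST API, moderating a private review group can be partially scripted, for instance to check which chapters have received annotations. The sketch below is ours, not part of the project's actual workflow; the group ID, API token, and chapter URL are placeholders, and only the query construction is shown unguarded (the network call requires a real token).

```python
import json
import urllib.parse
import urllib.request

# hypothes.is's public search endpoint for annotations.
SEARCH_URL = "https://api.hypothes.is/api/search"

def build_search_url(group_id: str, uri: str, limit: int = 50) -> str:
    """Compose a search query for annotations on one chapter in one group."""
    params = urllib.parse.urlencode({"group": group_id, "uri": uri, "limit": limit})
    return f"{SEARCH_URL}?{params}"

def fetch_review_annotations(token: str, group_id: str, uri: str) -> list:
    """Fetch annotations from a private group (needs a token with group access)."""
    req = urllib.request.Request(
        build_search_url(group_id, uri),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("rows", [])

if __name__ == "__main__":
    # Placeholder values; a real run needs an actual group ID and token.
    print(build_search_url("GROUP_ID", "https://example.pubpub.org/pub/chapter-1"))
```

An editor could run this per chapter near a review deadline to see at a glance which reviews have come in, rather than opening each pub by hand.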

Some of the conduct guidelines we developed for our open peer review process

All in all, as we also write in our guidelines, our aim in designing and testing this process was to start nurturing a culture that will enable us to move Open Humanities Press more and more toward a dynamic, meaning-producing, and community-based process of open peer review for experimental books.

Reflections on the open review process

In the end, the process as we set it up worked to a large extent, and as outlined above we were able to get peer review comments in on all of the chapters and the introduction. As is common with non-open peer review processes too, many reviewers left the review to the last moment (even though we encouraged them to submit their review comments a bit earlier, to allow some time for conversation with the author(s)). In retrospect it might have worked better to set separate timeframes for reviewers to leave their comments and for authors to leave their responses, which is something we might implement going forward. We also need to take into consideration that learning how to use a new piece of software takes time (the majority of the reviewers had not used hypothes.is before), which might account for many of the reviews coming in at the end of the provided timeline. The time pressures and the unfamiliarity with this new process led to many last-minute questions and one-to-one calls between the editors and the reviewers. We were happy to provide these, but they also led to some duplication of labour, as many of the questions reviewers had were fairly straightforward and often already answered in the instructions we provided (which we tried to keep as short and helpful as possible). One option we would probably explore in the future (which wasn’t possible this time around, as we were working towards tight end-of-project deadlines) is to organise a dedicated workshop for reviewers before the review period starts, to introduce them to the basics of the software used and to the process and timelines. Something similar could be done for authors, though in our case this was not necessary, as the group of rewriters already had extensive experience with hypothes.is.

A page from our technical guidelines document

There were also several technical difficulties that we did not anticipate (well enough) in advance, including having to navigate two annotation options: PubPub offers its own inbuilt annotation function alongside hypothes.is, which (although we outlined this issue in our guidelines and provided instructions on how to select the right annotation service) led to several reviewers leaving their comments on PubPub itself as public comments, instead of in the dedicated private peer review group on hypothes.is. There were also some further glitches in the interaction between hypothes.is and PubPub that could trigger pop-up error messages; these were confusing, even though they could easily be clicked away without further consequence. We also had issues with some of the links to chapters or to hypothes.is either not reaching our authors and reviewers (Gmail accounts in particular would send our messages directly to spam) or not working for them. In the end, we were able to resolve most of these issues for the reviewers, and where we weren’t able to or ran out of time, we uploaded their review comments to hypothes.is for them.
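The last step, uploading review comments to hypothes.is on a reviewer's behalf, can also be done through the API rather than by pasting comments in by hand. A minimal sketch of how that might look, assuming an editor's API token with access to the private group; the group ID, token, and chapter URL below are all placeholders, and the actual POST is left commented out:

```python
import json
import urllib.request

# hypothes.is's annotation-creation endpoint.
API_URL = "https://api.hypothes.is/api/annotations"

def build_annotation(uri: str, group_id: str, text: str) -> dict:
    """Assemble the JSON payload for a page-level annotation on one chapter."""
    return {
        "uri": uri,        # the PubPub chapter being reviewed
        "group": group_id, # the private review group's ID
        "text": text,      # the review comment posted on the reviewer's behalf
    }

def post_annotation(token: str, payload: dict) -> bytes:
    """POST the annotation using the editor's API token."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    payload = build_annotation(
        "https://example.pubpub.org/pub/chapter-1",  # placeholder chapter URL
        "GROUP_ID",
        "Review comment uploaded on behalf of the reviewer.",
    )
    # post_annotation("API_TOKEN", payload)  # requires a real token
    print(json.dumps(payload, indent=2))
```

A comment posted this way appears under the editor's account, so in practice the annotation text should name the reviewer it originates from.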

The majority of the authors decided not to respond directly to the reviewers in hypothes.is. Although some did, and ended up having interesting exchanges with their reviewers, most of the authors opted simply to make the corrections the reviewers required or to incorporate their suggestions and comments directly into their updated chapters. It has to be noted that, as this was an experiment and we were predominantly interested in testing out the systems and processes we designed, we told the authors that interacting directly with the reviewers’ feedback was optional. In a next project, we might build in a more reflexive process, and might also develop clearer instructions for the reviewers (and the authors) where this is appropriate (given the experimental and non-traditionally academic nature of this rewriting project, and the fact that all the content in this book had already gone through extensive community and editorial review as part of its production process, we wanted to give the reviewers the space to respond in a way that they deemed fit). We did ask some of the authors whether they were interested in having (some of) their reviewers’ comments published alongside their chapters (as many reviewers added (extensive) further reflections and suggestions), and several authors answered positively. We are therefore currently writing to the reviewers to ask for their permission, and will add their comments to the published version of the book where they are happy for us to do so.

There was also a fairly steep learning curve for us as editors, as we tried to manage and shape the workflow partly in an ad-hoc manner, tweaking it here and there during testing and responding to glitches that came up along the way. As explained above, we suffered from several (minor, yet annoying) technical issues. As not everyone in our editorial team understands Spanish, there was a further complication with respect to the moderation process, where most reviewers opted to do their review in Spanish (we suggested they could review in either English or Spanish). This complicated our moderation efforts slightly, but with a little help from automatic translation services we were still able to guide everything as best we could.

All in all, the process worked, and with some tweaks it is something we can continue to run (with adaptations made according to the needs of a specific project) as part of the Combinatorial Books series. The main challenge remains the extra work this kind of process creates for everyone involved, authors, editors, and reviewers alike, mostly to do with the unfamiliarity of reviewers and authors (and editors) with these kinds of systems, and hence the added learning curve. What was notable for us was that the review process seemed to go much more smoothly for those reviewers who said they already had experience with hypothes.is, or who had been involved in open peer review before. We therefore hope that experiments such as ours help to build upon and widen this experience amongst academics and publishers, so that in the future conducting open peer review can become more common and to some extent more standardised, and everyone involved can have clearer expectations of the process.
