COPIM’s governance workshops and the Exploring Community Governance Models report established both the situatedness of governance, and the fact that in any given case that situation is constantly shifting. Taken in tandem, these two factors mean that any project’s systems of governance need to be continuously—or at least periodically—self-correcting.
Regular self-assessment of governance structures, models, and practices is the first and most obvious step towards achieving this. In most cases this self-assessment is likely to be informal and ongoing, and any issues identified can be dealt with through minor self-adjustments by the governance mechanisms already in place. Establishing a permanent governance working group can facilitate this process.
It is advisable, however, to consider complementing ongoing informal reviews by building into an organisation’s governance a more formalised self-assessment exercise, to be conducted at regular intervals (e.g., annually). As Katherine Skinner argues, such a process does not have to be punitive: instead it ‘can help us identify what elements in a program or organisation need additional investments and improvements’ (Skinner, 2019a: 8). A more formalised exercise can provide an opportunity to address larger governance issues, and avoid the risk of perpetually patching up a system that may have become fundamentally dysfunctional, however fitted to task it once was. It can also be a chance for a project to benefit from an external perspective.
There are several ways to run a governance self-assessment, and as always the specificities of any given project should dictate the forms it takes. These might include anonymised questionnaires circulated to all project members, workshops, and/or the establishment of a dedicated temporary working group consisting of members from across the project (in the absence of a standing governance working group).
In designing a system of self-assessment, one question worth particular attention is that of whether to evaluate a project’s performance against externally developed criteria. One recent example of a self-assessment exercise that uses such criteria is that of OA Switchboard, which published its results in October 2021. Like several other organisations, OA Switchboard has publicly committed to the ‘Principles of Open Scholarly Infrastructure’ (or ‘POSI’).1 These sixteen principles are divided into three sections: Governance (to which seven of the sixteen principles appertain), Sustainability, and Insurance. In a blog post OA Switchboard’s Executive Director, Yvonne Campfens, explained that they had adopted the principles ‘not just because of the guidance POSI gives on areas to address, but because POSI is also a leading example of garnering the trust of the broader scholarly community’ (Campfens, 2021).2
In many ways, OA Switchboard’s self-assessment presents an exemplary model to follow. Its report measures progress against POSI’s good governance criteria using the well-established traffic-light system (green = good job, but not perfect; amber = making progress; red = failing, or significant work needed).3 This provides a useful at-a-glance evaluation, which is supplemented by a longer self-assessment report, offering a short (2-3 paragraph) discussion of OA Switchboard’s progress against each of POSI’s principles. Each discussion is prefaced by a short description of the principle, taken directly from the original (2015) POSI paper.4
This kind of self-assessment has several virtues. Firstly, in being openly published and foregrounded on the organisation’s website, it demonstrates to the broader community, as well as to the participants themselves, that the project is taking governance seriously, and models a transparency that brings with it a measure of public accountability. The inclusion of a blog post introducing and commenting upon the report also demonstrates (and makes public) a willingness to reflect upon its outcomes, and to consider the organisation’s direction of travel and any measures that might need to be taken to correct it. It also serves to further foreground the self-assessment. While a crude tool in isolation, the traffic-light system gives an immediate snapshot of progress and clearly identifies areas of concern. When deployed alongside a nuanced narrative or commentary, it also functions as a useful reminder that governance can both improve and degrade, as structures that functioned well under a certain set of circumstances become ill-suited to new situations. A self-assessment exercise such as this also serves as an opportunity for introspection for a project and its members, providing twin prompts to reflect upon how governance is working: first as the report is being put together, and secondly upon its publication. Without such a process, such occasions can be hard to manufacture under the everyday pressures of getting and keeping an open source infrastructure project up and running.
There are also obvious advantages to adopting a widely recognised and respected external set of criteria such as POSI. First, it can save a good deal of time, effort and debate, which in turn can make the whole exercise much more likely to happen, especially at organisations with slender resources. But adopting external criteria can also serve as a guard against unconsciously self-serving formulations or omissions. As the Executive Director’s blog post made clear, in the case of OA Switchboard’s adoption of POSI, the fact that these criteria are widely respected by the community within which OA Switchboard is situated was also important, and helped to lend a broader credibility to the exercise. This is likely to have a wider importance when it comes to attracting partners and reassuring existing or potential funders.
Whether an organisation decides to evaluate its governance structures against its own internally developed criteria or against a more widely recognised collection of principles such as POSI should depend upon the specificities of its own situation. Of course, the choice of criteria matters, and flaws in the principles against which any assessment is carried out will likely lead to shortcomings in the assessment itself.5 Given the issues discussed in chapter 2, criteria that also draw explicit ‘red lines’ and set out in clear and concrete terms what ‘open’ and ‘community-led’ do not mean may be especially important. There may be some signs of a shift in the direction of such an approach already: see, for example, the shift in both tone and content from LIBER’s ‘Five Principles for Open Access Negotiations’ (2017) to its recently published ‘Four Urgent Recommendations for Open Access Negotiations with Publishers’ (2022), with its insistence on ‘disallowing non-disclosure clauses’, and its call to ‘divest from subscription agreements and repurpose funds to support alternatives’.
Paradoxically perhaps, one of the most valuable effects of engaging with external sets of principles is that it helps an organisation identify and articulate the points where it differs or dissents from widely held values within its broader community of communities. Both OA Switchboard and Crossref stick closely and explicitly to POSI in their self-evaluations, but also identify specific points where it does not seem to apply to their own circumstances or values. This highlights what makes each organisation distinctive, original and innovative, both for the organisation itself and for those outside it. It should also make real mutual understanding and the forming of genuine coalitions more likely.
Looking forward, it will be interesting to see whether signatories to POSI such as OA Switchboard and Crossref continue to assess their own performance solely or even principally against its sixteen principles, or move towards a more individuated set of criteria, reflecting the particularities of their own values and circumstances, and their own more established status. It seems probable and desirable that just as the governance of emergent infrastructures will be moulded by their desire to adhere to external principles such as POSI, so pressure from such projects will lead to modifications to those principles, or their ultimate replacement by alternatives. This is one way in which an infrastructural organisation or network can develop a symbiotic, mutually beneficial relationship with the wider open source community.
One further set of external criteria (which are also highly adaptable) that should be mentioned in this context has been developed by the Next Generation Library Publishing project. They present their Values and Principles Framework and Assessment Checklist as an assessment tool that can help scholarly communications funders, organisations, and communities ‘demonstrate their adherence to and support for agreed-upon academic values and principles’. The framework consists of six top-level values (diversity, equity, and inclusion; transparency; openness and interoperability; access to knowledge; financial and organizational stability; and representative governance), each followed by several principles, or ways in which these values might be manifested. Finally, each principle includes several measurable indicators, together with suggestions for possible evidence that might show adherence to that indicator. The authors also include use cases for the framework, for example as part of self-assessments, reviews by trained peers, and ongoing reviews. One of their further takeaways is that ‘those that demonstrate high alignment with academic values and principles tend to seek out a close engagement with a community of practice to whom they intentionally offer both control and responsibility’ (Skinner & Lippincott, 2020). This is an important point, and once again highlights the importance of community outreach: communities need to be able to engage with both governance and its ongoing assessment, and to have a say in how this assessment happens and what it is based upon.
Asura Enkhbayar from Invest in Open Infrastructure similarly highlights how their assessment criteria are designed to centre community (alongside reliability and transformative influence). For them this is reflected in two principles. First, they argue for the importance of prioritising stakeholders (and their interests and needs) over shareholders. In community-governed organisations this would be reflected in a focus on community responsiveness, which in the context of governance assessment includes not only ensuring that the community is involved in governance, but also that an organisation’s operations are transparent and have clear mechanisms of accountability (which for them includes a community-driven mission). Ensuring that the community is involved in these kinds of governance assessment exercises (and hence that they do not simply function as something that needs doing because of funder requirements, for example) is something that we would therefore recommend, for example by making sure that there are clear community oversight mechanisms built into your governance. Finally, as Enkhbayar highlights, in particular in the context of DEI and ensuring equitable access to infrastructures (and, we would add, to the design of governance structures), ‘community engagement should not be a one-way street of knowledge extraction but instead constitute an exchange of knowledge and power’ (Enkhbayar, 2021). In this respect, a focus on the labour put into community governance by its community and stakeholders, and on good governance as an essential aspect of the maintenance of open infrastructures, seems especially pertinent.
Skinner, K., & Lippincott, S. (2020). NGLP Values and Principles Framework and Assessment Checklist. Commonplace. https://doi.org/10.21428/6ffd8432.5175bab1
Skinner, K., & Lippincott, S. (2020). NGLP Assessment Checklist. Commonplace. https://doi.org/10.21428/6ffd8432.5175bab1/00710d8a
FAIR Data Principles and their process of ‘FAIRification’: https://www.go-fair.org/fair-principles/fairification-process/
CARE Principles of Indigenous Data Governance. Global Indigenous Data Alliance. https://www.gida-global.org/care. See also Ruckstuhl, 2022.
National Open Science Plan. (2019, April 17). Exemplarity criteria for funding from the National Open Science Fund through platforms, infrastructures and editorial content. Ouvrir La Science. https://www.ouvrirlascience.fr/examplarity-criteria-for-funding-from-the-national-open-science-fund