Part 3: Technical Workflows, Tools, and Platforms for Experimental Publishing, Interaction, and Reuse of Books

This Pub is a Version of:
Books Contain Multitudes: Exploring Experimental Publishing
Description

A COPIM WP6 Research and Scoping Report

Books contain multitudes: Exploring Experimental Publishing is a three-part research and scoping report created to support the Experimental Publishing and Reuse Work Package (WP 6) of the COPIM project. It also serves as a resource for the scholarly community, especially for authors and publishers interested in pursuing more experimental forms of book publishing.

COPIM (Community-led Open Publication Infrastructures for Monographs) is a 3-year project led by Coventry University as part of an international partnership of researchers, universities, librarians, open access (OA) book publishers and infrastructure providers and is funded by The Research England Development Fund and Arcadia—a charitable fund of Lisbet Rausing and Peter Baldwin. COPIM is building community-owned, open systems and infrastructures to enable OA book publishing to flourish, delivering major improvements in the infrastructures used by OA book publishers and those publishers making a transition to OA. The project addresses the key technological, structural, and organisational hurdles—around funding, production, dissemination, discovery, reuse, and archiving—that are standing in the way of the wider adoption and impact of OA books. COPIM will realign OA book publishing away from competing commercial service providers to a more horizontal and cooperative knowledge-sharing approach. As part of seven connected Work Packages, COPIM will work on 1) integrated capacity-building amongst presses; 2) access to and development of consortial, institutional, and other funding channels; 3) development and piloting of appropriate business models; 4) cost reductions achieved by economies of scale; 5) mutually supportive governance models; 6) integration into library, repository, and digital learning environments; 7) the re-use of and experimentation with OA books; 8) the effective and robust archiving of OA content; and 9) knowledge transfer to stakeholders through various pilots.

The Experimental Publishing and Reuse Work Package looks at ways to more closely align existing software, tools and technologies, workflows and infrastructures for experimental publishing with the workflows of OA book publishers. To do so, it will produce a set of pilot cases of experimental books, which will be developed with the aid of these new tools and workflows and integrated into COPIM’s infrastructure. As part of these pilot cases, relationships will be established with open source publishing platforms, software providers, and projects focused on experimental long-form publications, and outreach activities will be conducted with OA book publishers and authors to further promote experimental publishing opportunities. This Work Package will also explore how non-experimental OA books are (re)used by the scholarly community. As such, it will examine those technologies and cultural strategies that are most effective in promoting OA book content interaction and reuse. This includes building communities around content and collections via annotations, comments, and post-publication review (e.g., via the social annotation platform hypothes.is) to enable more collaborative forms of knowledge production. To achieve this, this Work Package will map both existing technological solutions as well as cultural barriers and best practices with respect to reuse. This Work Package will also produce an online resource to promote and support the publication of experimental books.
This report has been produced to support both the development of this online resource and the pilot cases we are developing together with partner presses (including Open Humanities Press and Mattering Press). In parts one and two of this report, we situate experimental books in the context of academic research and map current experiments in book publishing in order to create a typology, accompanied by a selection of examples of experimental book publishing projects. In part three, we review existing resources on tools, platforms, and software used in the production of experimental books, and we sketch a roadmap and methodology towards the creation of the online resource mentioned previously. To support the pilot cases, we have started to explore two key practices within experimental publishing and the creation of experimental books that feature within this online resource: collaborative writing and annotation. As such, we outline tools, platforms, software, and workflows that support and enable these practices, alongside describing the desired aspects we argue this technical infrastructure should cover.

Our thanks go out to our COPIM colleagues for feedback on earlier drafts of this report (with special thanks to Gary Hall, Julien McHardy, Samuel Moore, and Agata Morka), as well as to the participants of COPIM’s Experimental Publishing Workshop, who read and engaged with the first part of this report (Mapping and Situating Experimental Books). Our appreciation also goes out to the Next Generation Library Publishing project for sharing an early catalogue-in-progress version of SComCat with us, and to members of the Radical Open Access Collective for suggesting examples for the Typology of Experimental Books (part 2 of this report), especially Nicolás Arata, Dominique Babini, Maria Fernanda Pampin, Sebastian Nordhoff, Abel Packer, and Armanda Ramalho.

Note: The Living Document version of this report will evolve further along with the progress made in the Work Package, with iterative updates subsequently published on PubPub: https://copim.pubpub.org/books-contain-multitudes-exploring-experimental-publishing

For this third part of the scoping report, we will be looking at the technical developments around experimental book publishing. We will do so in three sections. First, instead of conducting a landscape study ourselves, we will review a number of recently released studies and resources that have tried to categorise, analyse, and map the open source publishing tools and platforms currently available to support open access (book) publishing. Our focus in this analysis will predominantly be on those tools and technologies that can support the kinds of experimental publications that we have identified in the first two parts of this scoping report.

Secondly, in section 2, we will outline a proposed methodology to analyse and categorise the currently available tools and technologies to support the creation of an online resource for publishers and authors in year 3 of the COPIM project. This online resource will include the technological support and workflows available to enable more experimental forms of book publishing, whilst showcasing examples and best practices for different levels of technical know-how.

Thirdly, in section 3, we will make an initial attempt at categorising a selection of tools following this proposed methodology, focusing in the first instance on collaborative writing tools and on annotation tools, and on the software, platforms, and workflows that support these. The choice of these tools is driven by the first Pilot Case we are supporting as part of the COPIM Experimental Publishing and Reuse Work Package, which is run by Open Humanities Press and tentatively titled Combinatorial Books: Gathering Flowers. This Pilot Case incorporates elements of annotation and collaborative writing in its research and publishing process; hence we will at the same time be supporting this Pilot Case through this scoping work.

Review and Analysis of Key Studies and Resources

Maxwell, J. W., Hanson, E., Desai, L., Tiampo, C., O’Donnell, K., Ketheeswaran, A., Sun, M., Walter, E., & Michelle, E. (2019). Mind the Gap: A Landscape Analysis of Open Source Publishing Tools and Platforms. PubPub. https://doi.org/10.21428/6bc8b38c.2e2f6c3f

The first resource or environmental scan we looked at was the Mind the Gap report, conducted by John Maxwell et al. at Simon Fraser University in Vancouver on behalf of the MIT Press, after the latter secured a grant from the Andrew W. Mellon Foundation in 2018. As they state in the report, the award was to

‘conduct a landscape analysis of open source publishing systems, suggest sustainability models that can be adopted to ensure that these systems fully support research communication and provide durable alternatives to complex and costly proprietary services.’ (Maxwell et al., 2019)

As they note, the last few years have seen an increase in the number of open source publishing platforms (many of them well-developed, stable, and supported), or, in other words, production and hosting platforms for both scholarly books and journals. The report argues that this is evidence of an emerging infrastructure ‘ecology’ of complementary, non-competitive service technologies, as opposed to proprietary and often bespoke software systems. This is of particular relevance for our work with COPIM, as

‘at a more ambitious level, they may even form a layer of community infrastructure that rivals—or at least provides a functional alternative—to the commercial infrastructure run by a small number of for-profit entities’ (p. 1).

Mind the Gap provides a guidebook through this proliferating yet noisy landscape, as its authors work to help ‘the university press community and other mission-focused enterprises’ (p. 1) with decision-making and project planning. As well as being a catalogue of open source publishing tools, the report also examines the ecosystem in which these tools and projects exist. The element of community infrastructure and interoperability is key here: the aim is a ‘system in which these components can be mobilized to serve larger goals’ (p. 2).

Part II of the report serves as a catalogue of open source publishing projects. For each open source project, Maxwell et al. provide a summary description plus details on the host organisation, the project's principal investigator or leadership, funders, partners (both strategic and development), date of original release, and current version, plus some basic data drawn from the projects’ GitHub/GitLab repositories, including development language, license, and number of contributors. As part of their methodology, they looked at tools and projects that were ‘available, documented open source software relevant to scholarly publishing’ and that ‘were “still alive”—that is, with evidence of active development’ (p. 2). They emphasise, however, that this is a dynamic space, and that their cataloguing is a snapshot of a specific moment in time. As such, Maxwell et al.’s analysis is based not only on individual tools but also on a consideration of the dynamic landscape as a whole. Their selection is mainly based on exclusion: they did not include tools and projects that were closed-source, cloud-based services, research (instead of publishing) tools, library infrastructure, DIY ad-hoc toolchains, or dormant projects.

The key themes that informed their research were sustainability, scale, collaboration, and ecosystem integration. One key research question was ‘who will care about these projects?’ In other words, ‘care enough to fund, contribute, promote, use, and ultimately further their useful life? What are the values and mechanisms that cause people—especially external stakeholders—to care enough about these projects to keep them alive, and even thriving, going forward?’ (p. 3). The gap that they have noticed as part of their research is one of co-ordination and integration between and among projects. In other words, there is a lack of interoperability and incentives for collaboration between projects.

In Maxwell et al.’s mapping of the tools and projects they emphasise a few main characteristics:

  • Difference between journal publishing and book publishing

  • Centralised vs distributed models

  • Old projects and new projects

  • Functional scope (i.e., development across hypothetical workflow stages)

  • Operational details (development features, languages and frameworks, licenses, and funding)

  • Traditional functions vs. new capacities (i.e., interactive scholarly works)

  • Technological approaches and trends (approaches to XML, conversion and ingestion strategies)

  • Workflow modeling and management

  • Innovating new possibilities

Key findings were issues of:

  • Siloed development, with the recommendation that ‘where possible, collaboration, standardization, and even common code layers can provide considerable benefit to project ambitions, functionality, and sustainability’ (“Prospects,” p. 21).

  • The organisation of the community-owned ecosystem itself, where the recommendation is that ‘neither a chaotic plurality of disparate projects nor an efficiency-driven, enforced standard is itself desirable, but mediating between these two will require broad agreement about high-level goals, governance, and funding priorities—and perhaps some agency for integration/mediation’ (“Prospects,” pp. 20-1).

  • Funding, where the question was ‘what would project funding look like if it prioritized community governance, collaboration, and integration across a wider ecosystem?’ (“Prospects,” p. 22).

  • Longevity and maintenance, with the recommendation that ‘if the care and upkeep of projects could be extended to multiple groups, multiple institutions, then not only is there a larger and more diverse set of people who care, but opportunities for resourcing increase, and also, when one group’s priorities inevitably shift, it is less likely that a project is simply abandoned’ (“Prospects,” p. 23).

  • Ecosystem integration, with the reminder that ‘if the goal of community-owned infrastructure is to succeed, then structural attention needs to be paid to the integration of projects, goals, and development efforts across the ecosystem’ (“Prospects,” p. 24).

  • Whether we need centralised or distributed options, or a tertiary service provider? With the recommendation that ‘if longer-term funding for sustainability is needed, then a mediating layer might productively function as a broker of such funding, assuming overhead costs remain low’ (“Prospects,” p. 28).

  • Scale, where almost all of the projects they examined are too small, niche or specialised to be sustainable on their own. Additional funding will be needed.

  • The importance of trust in open scholarly communication, which presents challenges for scalability. Recommendation that ‘community coordination may go some distance towards addressing this [issue]’ (“Prospects,” p. 28).

Lewis, D. W. (2020). A Bibliographic Scan of Digital Scholarly Communication Infrastructure. Educopia Institute. https://educopia.org/mapping-the-scholarly-communication-landscape-bibliographic-scan/

The second resource we looked at is a Bibliographic Scan by David W. Lewis on behalf of the Educopia Institute. The blurb accompanying this resource summarises its aims quite well:

This Bibliographic Scan by David W. Lewis provides an extensive literature review and overview of today’s digital scholarly communications ecosystem, including information about 206 tools, services, and systems that are instrumental to the publishing and distribution of the scholarly record. The Bibliographic Scan includes 67 commercial and 139 non-profit scholarly communication organizations, programs, and projects that support researchers, repositories, publishing, discovery, preservation, and assessment. 

The review includes three sections: 1) Scholarly citations of works that discuss various functional areas of the digital scholarly communication ecosystem (e.g., Repositories, Research Data, Discovery, Evaluation and Assessment, and Preservation); 2) Charts that record the major players active in each functional area; and 3) Descriptions of each organization/program/project included in the Bibliographic Scan. This work has been produced as part of the “Mapping the Scholarly Communication Infrastructure” project (Andrew W. Mellon Foundation; Middlebury College, 2018-20).

The second and third part of the report list and describe projects, programs, and products (as well as listing some key literature on these), and categorises them according to Researcher Tools (Reading, Writing, Annotation, and Collaboration), Repositories, Publishing, Discovery, Evaluation and Assessment, Preservation, and General Services. This categorisation also indicates whether the organisation hosting the project or product is non-profit (NP) or for-profit (P).

Confederation of Open Access Repositories (COAR), & Next Generation Library Publishing. (2021). SComCat: Scholarly Communication Technology Catalogue. https://www.scomcat.net/

The third resource we looked at is the Scholarly Communication Technology Catalogue (SComCat), a catalogue or database of open tools, platforms, and technologies that identifies relationships and dependencies between them. Developed by Antleaf for the Confederation of Open Access Repositories (COAR) as part of the Next Generation Library Publishing project, the catalogue maps these technologies according to adoption levels, functions, categories, governance, and readiness. The catalogue has been openly available since January 2021. Our thanks go out to the Next Generation Library Publishing project for sharing the early catalogue-in-progress version with us. From the catalogue’s home page:

SComCat comprises a catalogue (knowledge base) of scholarly communication open technologies where the term "technologies" is defined to include software and some essential running services. The aim is to assist potential users in making decisions about which technologies they will adopt by providing an overview of the functionality, organizational models, dependencies, use of standards, and levels of adoption of each technology.

The scan includes tools, platforms, and standards that can be locally adopted to support one or more of functions of the lifecycle of scholarly communication, which is conceptualized as including the following activities: creation, evaluation, publication, dissemination, preservation, and reuse. (COAR & NGLP, 2021)

Radical Open Access Collective. (n.d.). Information Portal: OA Publishing Tools. https://radicaloa.disruptivemedia.org.uk/resources/publishing-tools/

The fourth resource we looked at is the Radical Open Access Collective’s Information Portal, which includes a list of open access publishing tools. This page contains a list of open source tools, software, and platforms for scholar-led approaches to open access publishing. It lists all-in-one platforms and services as well as more targeted solutions, and provides descriptions of the tools along with links to their home pages and to other resources related to the tools or platforms.

Kramer, B., & Bosman, J. (n.d.). 400+ Tools and innovations in scholarly communication. Google Docs. https://bit.ly/innoscholcomm-list

The fifth resource is a shared, crowd-sourced database of tools and technologies in scholarly communication that grew out of the "101 innovations in scholarly communication" project led by Bianca Kramer and Jeroen Bosman at Utrecht University in the Netherlands. As they explain:

When we published the 101 list of selected innovations our database already contained some 200 innovations/tools. The 101 selection was strictly on innovativeness and thus did not contain recent tools if they where not innovative compared to older ones with the same functionality, even if the more recent ones were more popular or well-known. The database shared here has dropped that strict innovativeness criterion and thus contains multiple tools offering basically the same functionality. (Kramer & Bosman, n.d.)

Tools are identified by workflow phase (preparation, discovery, analysis, writing, publication, outreach, assessment) and short descriptions of each tool are provided.

Tennant, J. P., Bielczyk, N., Tzovaras, B. G., Masuzzo, P., & Steiner, T. (2020). Introducing Massively Open Online Papers (MOOPs). KULA: Knowledge Creation, Dissemination, and Preservation Studies, 4(1), 1. https://doi.org/10.5334/kula.63

This sixth resource is included here for its approach to identifying and discussing common traits of collaborative writing tools. While the main focus of “Introducing Massively Open Online Papers (MOOPs)” is on ‘collaboratively author[ing] research articles in an openly participatory and dynamic format’ (Tennant et al., 2020), the paper also explores concrete workflows and evaluates a variety of tools against a set of predefined criteria (see the paper’s Table 2) that are posited as user requirements for collaborative writing platforms. This concise evaluation framework warrants further adoption and expansion to fit the needs of experimental book publishing.

Categories introduced by this paper that might also inform our discussion of experimental publishing tools (Authorea, CryptPad, Google Docs, Overleaf, HackMD) include:

  • Sustainability model: FLOSS (open source, self-hostable); freemium (basic functionality for free, premium add-ons); or proprietary but free-to-use (via user account/login).

  • Based on an open source platform (yes/no; open repository of software code available).

  • Option to export to open formats (if yes, which kind of output format: Markdown, git, Word, Open Document Text, HTML).

  • Interactive multi-user collaboration (commenting, editing, etc.).

  • Integration of reference management solutions (i.e., using Zotero and other reference manager tools with your collaborative writing tool).

  • Predefined Formatting / Layout styles to fit journal house styles where possible.

Proposed Methodology for an Online Resource to Support Experimental Publishing

In year 3 of the COPIM project, we will be delivering an online resource to support authors and publishers in publishing more experimental long-form works. As part of this research and scoping report, we want to propose a methodology or a set of methodologies to support the development of this resource, which we hope will become community-maintained in the future. By publishing this report and updates to it, we hope to receive further feedback from publishers, authors, technologists, and platform providers on this proposed methodology and on the set-up and usefulness of the online resource. We then hope to be able to incorporate this feedback to further develop and fine-tune the ideas presented in this report over the next couple of years (as part of various updated versions of this report).

The first aspect we will be focusing on is identifying those open source tools, platforms, and technologies that are particularly useful for more experimental forms of publishing (because they support the creation of experimental books, for example). We will in the first instance use the resources listed in the previous section to identify those tools that are currently available. As part of our subsequent analysis of these tools we propose the following methodology or set-up for the online resource:

  • An introductory part/glossary that defines what we mean when we refer to open source tools, and how, within the category of open source tools, one can differentiate between software packages and hosted solutions, and between the commercial, not-for-profit, and other underlying business models (e.g., institutional support) that support these services or platforms.

  • A review of those tools we deem most useful to support the publication of experimental books. As well as providing a basic description of each tool and its purpose and usage, this review will consider its collaborative capabilities and features (e.g., synchronous editing, in-document change-tracking and versioning) and its availability as a stand-alone tool and/or platform, while also considering the skills level of both publishers and authors, i.e., the technical knowledge required to install and use the tool, software, or platform discussed. In addition, the review will focus on the longevity and stability (sustainability) of the tools under review: for example, we will explore who maintains them, under which conditions and in what way, and how many times they have been successfully implemented.

  • A categorisation/tagging of tools according to the main experimental publishing functionalities we will identify (i.e., annotation, collaborative writing, open peer review, multimodal publishing, versioning, enhancing existing documents). Our aim with this categorisation is to provide authors and publishers with a range of tools to choose from if they are interested in experimenting with, for example, open peer review or multimodal publishing. But we also want to outline the differences in functionality between tools and the skills level required to implement a specific tool in the research or publishing workflow, and to show what you can do with the tools based on your skills level. (From a developer’s perspective, for example: how easy is it to install and run the tool locally or on a VPS?)

  • An identification of relations between tools: i.e., which ones work well together and/or are interoperable, and can evolve into a service ‘stack’ of related, complementary service technologies, or into a workflow for publishers and authors to experiment with and adapt as part of their own research and publishing workflows. The other side of this coin would be to identify specific workflows for publishers and authors and to map available tools and technologies on them.

  • An analysis that works backwards from a few key examples of previously published experimental books to identify which tools and workflows were used to produce them (while linking back to potential alternative tools, or to new tools or updates released after the example book was published). This would include user experiences or stories/narratives (where available) about the research and publishing process involved in their creation. In other words, our aim is to map tools and technologies onto real examples of OA experimental books to showcase what you can do with these tools and to provide proof of concept.

This proposed methodology comes with certain risks and unknowns that we hope to more clearly map and identify when we request community feedback on this scoping report. These are some of the risks we have identified up to now:

  • How can we involve the community of technologists, software, and platform providers in the set-up of this online resource (again, as a community-led endeavour) while at the same time providing an assessment/review of the tools discussed as part of the online resource? One way to resolve this is to base our assessment on clear categories, which can be devised with the aid of the technologists involved.

  • How can we make sure we adequately capture researchers’ and publishers’ workflows, and are able to suggest software stacks that can be implemented in publishing or research workflows? We hope to achieve this first by requesting feedback from the ScholarLed presses involved in the COPIM project, and second by requesting feedback from other presses (for example, via workshops and interviews).

  • How to ensure the online resource will be maintained after the project ends? As we are keen to develop this online resource from the start as a community-led project, we hope to involve the community of authors and publishers interested in the publishing of experimental books in the set-up of this online resource. We imagine that in the future it can be maintained by a community of volunteers (led by an Advisory Board, for example), or can be integrated in the wider COPIM infrastructural provision. As the tools and resources we will be describing and analysing as part of this online resource will be highly dynamic, it is crucial that we design this online resource as a processual endeavour that can easily be updated and maintained by the scholarly and publishing community. As part of the research for this online resource (and in collaboration with the COPIM Governance Work Package) we will be studying the governance of similar projects and resources (such as the Electronic Literature Directory) that have been able to achieve a certain level of longevity.

Categorising Tools

On ‘Open Source’ Tools

To make a head start on the proposed methodology for an online resource around experimental book publishing described in the previous section, we want to outline, both for this report and for any future work based on our research, some of the principles and concepts that underlie our work, as well as the aspects we feel technical workflows should ideally have in the context of experimental book publishing. Similar to Maxwell et al. (2019), our approach to ‘open source’ is informed by the understanding encapsulated in the (F/L)OSS acronym, i.e., the notion of Free/Libre and Open Source Software that is ‘developed in such a way that its source code is open and available online, and explicitly licensed as such’ (“Setting Context,” 2019). Hence, we limit our selection to those tools that have been made available as self-hostable packages under an open license (e.g., GPL, Apache 2.0). We also highlight the underlying value system and modus operandi of each of the tools, so as to make visible the features that may prove conducive for inclusion in a curated selection of such tools, as we seek to create in the COPIM project.

From a historical perspective, it seems pertinent to keep in mind the factions underlying the struggle to define open software: while the Free/Libre Open Source Software (FLOSS/FOSS) camp has postulated four fundamental freedoms as its value-based proposition, this is not necessarily true for the open source approach to software, which is more concerned with the practical means of software production/development, following a ‘bazaar’ model of collaboration (Raymond, 1998), and which does not explicitly enshrine the Free Software movement’s fundamental freedoms.

Graphical User Interfaces vs Command Line Interfaces

Many interesting experiments happen (both in digital scholarship and publishing) when different tools are used and combined in new ways. If these attempts are successful, there is a significant chance the newly introduced (combined) technique will become a feature of existing tools or even a tool in its own right. To encourage scholars and publishers to start experimenting with new digital tools and technologies as part of their research and publishing practices, we want to argue that it is productive, from a technical perspective, to understand and capture this process as a sequence of steps, performed by orchestrated human labour and/or software tools, moving from the beginning to the end of a specific work (or research or publishing) process. This is what is commonly called a workflow. A workflow’s sequence consists of distinctive repeatable patterns, and those patterns might overlap across authoring and publishing workflows.

Most distinctive operations in the sequence of a workflow are exposed to the user through a user interface. The most popular and widespread one is the so-called 'point & click' graphical user interface (GUI), with its iconic drop-down menus from which one can choose the operation to be performed by the tool. In general, people know how to point & click in the drop-down menus of MS Word, LibreOffice, or Google Docs, for example, to open a file, select text, apply italic or bold font styling, and save the file in one of the available file formats the tool offers. If we had to express the level of user expertise needed to work with these kinds of tools, we could classify it as that of ‘a regular user.’

Authoring tools such as MS Word, LibreOffice, or Google Docs expect a user to open one of a certain number of supported input file formats, such as .ods, .doc, or .md, and to export or save files in, again, a certain number of supported output file formats. Almost everything a user can do in these kinds of tools is meant to be done manually, by pointing & clicking on drop-down or contextual (i.e., right-click on one’s mouse/pointing device) menus. If, for example, a user needs to process digital photos, she can use a similar GUI tool such as Photoshop. Following the suggested workflow sequence, she would open a photo, point & click on menus in Photoshop, and save the graphic in a file format (e.g., .jpg, .png) that text authoring tools such as MS Word are able to import.

These tools can be used in a sequence of steps, following distinctive patterns of use, but due to the design principles that many of these GUI-based tools follow, their role in an open workflow potentially involving a set of interchangeable tools/applications is doubtful.

While there is nothing in a graphical user interface as such that makes a tool less interoperable with others, the evolution of proprietary file format standards, and the corresponding efforts of commercial software companies to make their GUIs uniquely fit their distinguished user groups, have led to substantial problems with regard to interoperability and, through years of use, to a profound silo-isation of GUI tools.

However, an alternative culture does exist, one mostly built around the so-called ‘command line interface’ (CLI), which preceded the GUI era. This culture derives from and is based on decades of development of the Unix operating systems ecosystem. In summary, this culture’s underlying philosophy states: ‘Write programs that do one thing and do it well. Write programs to work together. Write programs that handle text streams, because that is a universal interface’ (Salus, 1994, p. 52). In Unix, interoperability is key: it is expected that the output of one tool (for example, a format converter such as Pandoc) can be used as the input for another tool. That tool’s output can then, in turn, become the input for yet another tool, and so on, for as many tools as one wants to link together in a pipeline of interoperable tools, forming what is generally called a toolchain.
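
To make this concrete, here is a minimal sketch of such a pipeline, assuming a hypothetical Markdown manuscript called chapter.md and a system with Pandoc and the standard Unix utilities installed:

```bash
# Render the Markdown chapter to plain text with Pandoc, then pipe that
# text stream into wc to count the words: two single-purpose tools chained
# through the 'universal interface' of plain text.
pandoc chapter.md -t plain | wc -w
```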

This flexibility comes at a price, however. Not all users are happy or familiar with typing commands into a terminal (aka the ‘command line’), especially when their usual interactions with a computer have been solely mediated through GUI-based desktop applications.

However, for those who want to explore experimental research or publishing pipelines, forms of automation such as batch processing—including the automated generation of different output formats from one source format, automated and streamlined layout along a predefined set of rules, and/or mass conversion of files (such as transforming image files to one web-compatible format)—really benefit from command line tools/utilities, which are often developed years before such features are implemented in mainstream GUI authoring tools. As such, research teams or publishing operations that are open to typing commands into the terminal will most likely get things done much quicker. Command line tools such as Pandoc, PDFtk, and Xpdf-utils, or Sphinx, Jekyll, and Hugo, are able to manipulate, extract, convert, and process PDF, plain text, LaTeX, HTML, or Markdown files into all kinds of documents, websites, or publications, ready to be served to end users or passed further down the tools pipeline. To be able to really explore the many possibilities experimental publishing and experimental books offer, we would therefore always recommend that research teams and publishing projects familiarise themselves with the basics of the command line interface.
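
As a hedged illustration of such batch processing (all file and directory names here are hypothetical), a few lines of shell are enough to convert a whole directory of chapters and images in one pass:

```bash
# Generate EPUB and standalone HTML versions of every Markdown chapter
for f in chapters/*.md; do
  name="$(basename "${f%.md}")"
  pandoc "$f" -o "build/$name.epub"
  pandoc "$f" -s -o "build/$name.html"
done

# Mass-convert TIFF scans to web-friendly PNGs with ImageMagick's mogrify
# (the target directory web-images/ must already exist)
mogrify -format png -path web-images/ scans/*.tif
```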

Desired Aspects of Technical Workflows

From a technical perspective, we at COPIM are committed to open source solutions. To accommodate the creation of experimental books in the best way possible, we recommend that any technical research or publishing workflow takes into consideration the following desired aspects:

  • The code used within the workflow should be open source and available in a version control system.

  • The workflow should be user friendly (ideally when working with both command line and graphical user interfaces).

  • The workflow should be easily installable/deployable in a cross-platform environment (available for a variety of computer operating systems, including Linux, Apple’s macOS, Microsoft Windows, Google Android, and Apple’s iOS, and taking into account different types of platforms such as desktop computers/laptops, mobile phones, tablets, web servers, and cloud services).

  • The workflow should be modular, so that any work done as part of one phase/step of the workflow can be re-used further down the pipeline of another compatible workflow. This translates to an operationalisation of steps that can be actioned by (sets of) commands in the CLI and combined in a modular way (a minimal sketch of such a pipeline follows after this list).

  • The workflow should be interoperable and support established standards such as XML-based document formats (.ods, .odt, .xml, .epub) or plain-text markup such as HTML and Markdown, both for its inputs and its outputs. This enables the workflow to follow up on what has already been done in another compatible workflow, or its output(s) to be used as input(s) for yet another compatible workflow.

  • It should be possible to build distributed services around/on top of a given workflow, meaning that it:

    • can be installed and run on your own computer/server,

    • can be installed and run as a node in a federated network (such as email infrastructure, the Mastodon social network, PeerTube video delivery, or the XMPP instant messaging protocol),

    • can be installed and run as a node in a peer2peer/mesh network (such as BitTorrent content delivery, the Tor anonymity network, or the Freifunk wireless community network).

  • A workflow’s sources should remain human-readable and should not require idiosyncratic (versions of) software in order to use the workflow (i.e., this would be an argument for using Markdown documents over rich text formats, which tend to bury information relevant for text output in the depths of their XML-based document structure). This would also make source materials easier to archive.

  • The workflow should be collaborative in either an asynchronous or a synchronous way.

  • The workflow should track edits/versions in a (collaborative) document: who changed what, and when.

  • The workflow should allow for (interoperable) annotations and/or comments. This means that, ideally, annotations and/or comments are available as human readable, versioned source materials that include contextual information/metadata about e.g., their relation to the annotated text.

  • The workflow should render/transform user input into results/output(s) that manifest in an online and/or offline-ready website, EPUB, PDF or other formats ready to be read, edited, annotated, commented, widely distributed, preserved, archived, and used by other compatible workflows.
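
As a minimal sketch of how several of these aspects (human-readable sources, version control, open output formats, modular steps) can combine in practice, assuming a hypothetical Markdown source file book.md and a system with Git and Pandoc installed:

```bash
# Keep the human-readable source under version control
git init
git add book.md && git commit -m "First draft"

# Render the same source into multiple open output formats; each step is
# a separate, swappable module whose output another workflow can pick up.
pandoc book.md -s -o book.html   # for the web
pandoc book.md -o book.epub      # for e-readers
```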

We are aware that it will be difficult for any technical workflow to cover or include all of the aspects listed here. In most research and publishing contexts, workflows are chosen based on criteria of speed, ease of use, and availability. Familiar user interfaces therefore have a better chance of being picked up in the first instance (which also explains the continued preference for print-based interfaces and workflows in digital scholarship and publishing). Similarly, through our institutional settings, we have grown accustomed to working with commercial software solutions (e.g., those provided by Microsoft, Apple, or Google). This is why, for example, interfaces similar to Google Docs (often used to support collaborative writing projects) will be the starting point for many collaborative research projects. Yet as a piece of software, Google Docs is proprietary, cloud-based, not installable/deployable, and hardly modular or interoperable. Still, even the option of exporting a given document via "Save as" into different formats can present a first step and an entry point to opening up publishing to experiments, as this output can then be used as a starting point for follow-up workflows that cover more of the desired aspects listed here.

Plenty of alternatives to Google Docs exist in the free and open source world. Within the COPIM project, for example, we use ONLYOFFICE integrated with our own instance of the file hosting service Nextcloud. Both projects are open source, interoperable, support established standards, are well integrated with each other, and are relatively easy to set up and run on a server. Nextcloud has a fairly modular architecture that has attracted a whole ecosystem of plugins addressing different tasks; among these sits ONLYOFFICE, which follows the familiar paradigm of the Microsoft Office suite. Experimental books or publishing projects that involve elements of (collaborative) writing and editing will, just as with proprietary office suites, most likely benefit from the possibility of saving their outputs in a variety of formats, giving them the flexibility to incorporate that output into another (follow-up) workflow.
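
As a hedged sketch of how such a self-hosted set-up can begin, the official nextcloud Docker image gets a test instance running in one line (a production deployment would add a proper database and persistent storage; the ONLYOFFICE integration is then added as an app from Nextcloud's app store):

```bash
# Run a disposable Nextcloud test instance, reachable at http://localhost:8080
docker run -d --name nextcloud -p 8080:80 nextcloud
```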

Some of the desired workflow aspects listed previously are only achievable if they are set up, run, and maintained by publishers or researchers who have a certain (minimal) level of computer literacy and skills (which is often lacking, as Adema and Stone (2017) have shown). But for some of these steps, only a few basic tweaks to software settings are needed to achieve the desired set-up or results. In some cases, as explained, this involves being familiar with a command line interface (including reading the documentation about the option flags that need to be added to a command to make the software do something specific).

If publishers or researchers are able to connect to a server via SSH, edit (configuration) text files in the server's shell, or run command line tools, a lot more options for experimental work open up. We feel that these basic skills, together with the openly available documentation that accompanies many of the tools and technologies we discuss in this report, should be enough for authors and publishers to experiment with these tools and adapt them to their needs. One of the things we want to start to explore with this research and scoping report is how we can aid this process of enabling researchers and publishers to use and adapt the tools needed to create experimental books.

The more expert knowledge of system administrators and programmers is primarily needed when experiments fail or get stuck. Recent trends in software deployment, introduced by the use of virtual machines in the cloud and followed by the acceptance of lightweight virtualisation (aka containerisation), have greatly improved the testing and usage of software tools, however. These days, any software tool developed to be run on a server should come with decent accompanying documentation and should in most cases only require a few lines pasted into the command line to be used according to one's needs. To support the uptake of tools and software that can help publishers and authors create and publish experimental books, we will, where appropriate, try to describe the basic competencies needed (as a basic or regular user, an advanced user, or an expert user) to successfully test the different types of software discussed in this report.
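
For instance, a containerised tool such as Pandoc can be tried out without any local installation beyond Docker itself; a sketch using the pandoc/core image (the file name is hypothetical):

```bash
# Mount the current directory into the container and convert a file; the
# image's entrypoint is pandoc, so arguments are passed straight through.
docker run --rm -v "$PWD:/data" pandoc/core chapter.md -s -o chapter.html
```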

Collaborative Writing Tools

Within COPIM we are running a series of pilot cases focused on creating experimental books together with a selection of authors and publishers. In this section we will focus on two types of tools that support two kinds of practices or modes of research that accompany or form the basis of various experimental publishing projects, namely collaborative writing and annotation tools.

Collaborative real-time writing/editing as an idea was introduced in 1968 by Douglas Engelbart in The Mother of All Demos, but it took another forty years for it to be implemented in such a way that people could work collaboratively from their personal computers and rely on the service to keep their documents in place. Google played an important role in making this happen, first by acquiring Writely in 2006. In 2009, the engineers of AppJet, the team that had created the at that time very impressive EtherPad application (mostly as a demo for their underlying technology), joined the Google Wave team, and EtherPad was subsequently made available by Google as open source software.

Pads

In the following decade, we witnessed the development of a new culture of collaborative writing/editing around so-called ‘pads’. The common denominator of pads is that their source text is always available in some simple, human-readable form (most recently Markdown), and that their features have mostly been developed to support the communities using the tool.

EtherPad Lite was a rewrite of EtherPad aiming to make it less resource-hungry. It was written in a popular programming language (JavaScript), making it easy to install on one’s own server: EtherPad Lite can be installed via Linux distribution package managers or via Docker. Many activist organisations have chosen to use EtherPad.
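
A minimal sketch of such a self-hosted deployment, using the official etherpad/etherpad Docker image:

```bash
# Start an Etherpad instance on port 9001; pads are then created simply by
# visiting http://localhost:9001/p/<pad-name> in a browser.
docker run -d --name etherpad -p 9001:9001 etherpad/etherpad
```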

One notable project that follows the pad paradigm is CodiMD. In CodiMD’s software-as-a-service rendition HackMD, the platform focuses on providing an online space for collaborative text editing, integrating an account login system with popular online services (Google, Facebook, Twitter, Dropbox, GitHub...) as well as integration with GitHub for easier development of documentation. This wide range of log-ins makes the platform an interesting exemplar for experiments in the field of publishing, as it facilitates potential participation across a wide range of stakeholders. Next to the hosted platform, and similar to EtherPad, self-hosted instances of CodiMD have grown popular in and beyond the higher education context.

Another example of a collaborative writing pad is CryptPad, developed by the employee-owned French company XWiki SAS. CryptPad is a suite of tools with a focus on cryptography, following a ‘zero knowledge’ approach in which every web browser encrypts its own pad content, so that even the owners of the server serving the web app to the browser cannot decipher the encrypted content. This whole ecosystem of apps can also be installed on one’s own server.

The following (linked) table displays a list of current tool examples that can be used to facilitate collaborative writing in a variety of ways. The list is limited to collaborative writing solutions that are under active maintenance (i.e., updated in the recent past). This spreadsheet, and the spreadsheet listing annotation tools in the next section of this report, are works-in-progress and will continue to be updated after the first release of this report.

Figure 1: Overview of Collaborative Writing Tools considered in this study. View this spreadsheet on CryptPad.

Git-based Collaboration

The world of collaborative software development was revolutionised by Git, which was developed by Linus Torvalds in 2005, primarily for his own needs in maintaining one of the largest software collaborations ever: the Linux kernel. Git’s approach and architecture are also described as a distributed version-control system for tracking changes in source code during software development. The history of changes keeps its consistency and reproducibility by generating cryptographic hashes for every change to the content, and the whole repository, with its history of changes, is cloned for every user of the system. Future synchronisations of a code repository can thus happen between any of the instances, which allows for a true so-called ‘peer-to-peer topology’. With Git’s internal architecture and its forking/branching mechanism, Torvalds also addressed another well-known problem in software collaboration: the issue of experimenting with and introducing new features, or even rewriting code. Creating new forks and branches of code, while staying in sync with the others, became much easier with the introduction of Git, resulting in drastic changes in the world of software development.
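
A brief sketch of this branching mechanism in practice (the repository URL and branch names are hypothetical; the default branch may be called main or master depending on the set-up):

```bash
git clone https://example.org/press/book-manuscript.git
cd book-manuscript
git checkout -b annotation-experiment    # branch off to experiment safely
# ...edit files...
git commit -am "Try an annotated chapter structure"
git log --oneline                        # every change is identified by a hash
git checkout main
git merge annotation-experiment          # fold the experiment back in
```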

But this change did not spread more widely until GitHub (2008) built a proprietary web frontend for Git, enabling software developers to use it through a user-friendly web interface. GitHub also wrote extensive documentation and recorded a series of screencasts explaining how to actually use Git (both on the command line and through its own web user interface).

Now in its 12th year of existence, GitHub has become an essential part of the infrastructure for storing open source software and the history of its changes. While GitHub itself is now a commercial entity owned by Microsoft (since 2018), throughout its history it has introduced a number of important and influential open source projects, namely Atom (a text editor), Electron (a web browser engine as desktop application framework), and Jekyll (a static site generator).

Many powerful and popular text editors, such as Emacs and Vim, which have been used for decades in software development, are also known to have a steep learning curve. However, thanks to those same decades of customisation, these editors are often the first to provide support for new technologies, including technologies needed for scholarly research and writing. Many scientists in particular started to use Emacs or Vim because they wanted support for LaTeX, BibTeX, and/or other bibliographic and citation management options.

The popularity of Atom, together with the ever-growing popularity of web technologies, fuelled the development of text editing components for the web (and for the desktop via Electron). Some of the most powerful and elegant among these, such as CodeMirror and ProseMirror by Marijn Haverbeke, have supported a new generation of web-based text editors. These editors share their underlying technology with ProseMirror and/or CodeMirror and, based on feedback from their users, usually grow iteratively into specific niche contexts.

Due to the latest developments of the CSS standard, web browser engines are increasingly becoming an environment in which well-structured content can be processed into a PDF publication, with user control over the required layout (headers, footers, margins) and pagination (links to specific pages, etc.). Free software libraries that help developers integrate these features include paged.js, developed by Cabbage Tree Labs in their endeavour to provide the underlying technology for Editoria. Editoria is a full-stack web-based publishing workflow, supported by its own set of underlying technologies, including Wax, an online rich text editor (component) based on ProseMirror that uses paged.js for its typesetting, and the XSweet converter, which converts Microsoft Word documents to HTML (and vice versa).

Also relying on ProseMirror, and combining it with Vivliostyle, another established open source library for typesetting/rendering PDFs, is Fidus Writer, ‘an online collaborative editor especially made for academics who need to use citations and/or formulas’ (n.d.). It proposes semantic editing, which focuses on the structure of the document rather than its look and feel. If a document is written following this semantic approach, Fidus Writer is able to render and export its output in different formats (HTML, EPUB, LaTeX, Journal Article Tag Suite (JATS), .docx, .odt, and PDF). It supports citations via drag-and-drop or copy-paste of BibLaTeX entries, which can easily be exported from a reference manager such as Zotero, straight into the text editing area. Fidus Writer can be easily installed locally (or on a server) as a Docker container.

GitHub not only took care of educating people about Git and simplifying its use, it also changed the way tutorials and documentation look. GitHub encouraged developers to add basic documentation to their projects in a README.md file, so that the repository page opens as a nicely designed HTML page listing the directories and files with, below that, the content of the Markdown-formatted README.md file, processed automatically on GitHub's server. A well-designed front page, functioning as basic documentation, made software projects distinctive and more comprehensible compared to other web frontends for version control systems.

In 2008, in its early days, GitHub introduced GitHub Pages, based on the Jekyll static website generator. This allowed—predominantly software developers at that time—a simple way to create a website. The existence of themes helped people choose the design and layout of their website, in a similar way as they would in WordPress. Content creation in GitHub Pages was based on Markdown markup, a human-readable syntax to structure the content of a given web page, and the hierarchy of documents followed the hierarchy of the directory structure. With a simple configuration file inside a repository, Jekyll would know how to make a menu for the website and render the rest of it. The website would be rendered as simple HTML, CSS, and maybe some basic JavaScript, easily served by GitHub's servers, with no hassle for developers of maintaining their project's website or any web server.
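
As a short illustration (assuming Ruby, the jekyll gem, and bundler are installed), scaffolding and previewing such a site takes only a few commands:

```bash
gem install jekyll bundler
jekyll new my-book-site        # scaffolds _config.yml, a theme, and sample content
cd my-book-site
bundle exec jekyll serve       # live-preview at http://localhost:4000
```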

In 2011, GitLab started as a project that aimed to provide the efficiency of code management introduced by GitHub while allowing more control over where a project's code is stored. Today, GitLab is available in two distinct flavours: the Enterprise Edition (GitLab EE) is the commercial branch that also powers GitLab's software-as-a-service (SaaS) offering, while the Community Edition (GitLab CE) follows the open source route of making its codebase available to others, so that everyone has the ability to run their own self-hosted GitLab server. And similar to the earlier-described publishing interface of GitHub Pages, such a set-up is also possible with GitLab Pages.

Next to the static site generators mentioned above—Jekyll, GitHub Pages, and GitLab Pages—the Jamstack approach has led to the rise of a plethora of static site generator variants, including Hugo, which the COPIM project uses for its website. Many of these generators have eventually found their way into open publishing workflows, for journals and books as well as for fully-digital, experimental modes of publishing.
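
By way of comparison, the equivalent first steps with Hugo (assuming the hugo binary is installed) are similarly brief:

```bash
hugo new site press-site    # scaffold a new site with a configuration file
cd press-site
hugo server                 # live-preview at http://localhost:1313
```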

Annotation Tools

From its early days, the World Wide Web has been perceived as a medium enabling everyone and anyone to participate. It seemed that the limitation Brecht found unacceptable for radio—that, as a public medium, it was only unidirectional—and which prompted his call for a transformation ‘from a distribution apparatus into a communication apparatus’ (Brecht & Silberman, 2020), could now finally be cured with the World Wide Web.

Following this perception, it was easy to imagine that anyone could write their prose in HTML and have it published online; that one could share a URL to a comment or threaded discussion; that one could do everything we are used to doing in textual and/or literary criticism, with the promise of endless possibilities to expand even further. In other words, the idea that anyone, not just experts, could edit any web page was, at the time, inseparable from the idea of the World Wide Web. It was reflected in everything from WikiWikiWeb, created in 1995 by Ward Cunningham as a user-editable website, to the ‘View source’ button, a prominent menu item in the original web browser written by Tim Berners-Lee and a feature that has since been inherited by all other web browsers.

The history of annotation tools proved once again that many simple and elegant ideas become difficult to implement and sustain once they meet the myriad competing standards and technical specifications of the real world.30 A fully successful implementation of a standalone (open source) annotation layer on top of regular web standards has yet to be developed.31 Among the challenges undermining its promise is that web pages keep changing—or disappear altogether—which in turn requires a permanent online service to consistently provide the annotated version of a page. Archiving web pages for longer periods has also become a non-trivial problem: the actual content of a web page no longer consists only of static HTML served by a web server, which would lend itself more readily to referencing due to its static nature. Today, content is in many cases assembled dynamically by JavaScript at the very last moment before the page is displayed to the end user. And although JavaScript engines are notoriously demanding on CPU and RAM (even in the standard scenario of a single user’s day-to-day browsing on a powerful personal computer), JavaScript remains one of the most widespread technologies in web development.
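To make this fragility concrete, the following sketch shows how the W3C Web Annotation Data Model anchors an annotation to a page using a TextQuoteSelector (an exact quote plus its surrounding context), and how a naive re-anchoring check fails once the page text changes. The page texts and the `anchors()` helper are our illustrative assumptions; real tools use considerably more robust fuzzy matching.

```python
# An annotation in the W3C Web Annotation Data Model, anchored to a page
# by a TextQuoteSelector. The target URL and quoted text are hypothetical.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "bodyValue": "This claim needs a citation.",
    "target": {
        "source": "https://example.org/article",
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "annotations are fragile",
            "prefix": "why ",
            "suffix": " on the web",
        },
    },
}

def anchors(selector: dict, page_text: str) -> bool:
    """Naive re-anchoring: is the quoted passage still present verbatim?"""
    return (selector["prefix"] + selector["exact"] + selector["suffix"]) in page_text

old_page = "here is why annotations are fragile on the web today"
new_page = "here is why annotating is fragile on today's web"  # page was edited

print(anchors(annotation["target"]["selector"], old_page))  # True
print(anchors(annotation["target"]["selector"], new_page))  # False: anchor lost
```

Once the underlying page drifts even slightly, the annotation becomes an orphan, which is why sustained annotation services need both archival copies of pages and smarter anchoring strategies.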

The above-mentioned obstacles probably played an important role in the rise and subsequent demise of a number of annotation projects (both open source and proprietary). Familiar with this history, many recent projects have—unfortunately—chosen to develop annotation as a feature scoped only to their own platform, with most dedicating little attention to questions of interoperability. To provide one recent example, the PubPub platform developed at the MIT Media Lab offers a very good implementation of annotations, with the limitation that those annotations only work within that platform.

Still, one project keeps up our collective hopes: Hypothes.is, an open source project following the open standard developed by the W3C Web Annotation Working Group. The project has gathered a scholarly coalition, Annotating All Knowledge (AAK), a group of more than seventy scholarly publishers and platforms whose mission is to ‘deploy annotations across much of scholarship.’ Many other promising technologies have been abandoned in the past for lack of widespread adoption (see, for example, RSS32 or the above-mentioned ‘View source’ button), so Hypothes.is’s focus on this specific segment of scholarly engagement seems reasonable and, hopefully, sustainable.

Hypothes.is has a special partnership program with publishers and educational institutions which often results in new features and spin-off projects, including a collaboration with the ReadiumJS team to bring annotations to EPUBs, initiated by NYU Press.
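Annotations made with Hypothes.is are also retrievable programmatically through its public REST API, which makes them reusable beyond the annotation client itself and is part of what makes such publisher integrations feasible. A minimal sketch in Python, assuming the third-party `requests` package is installed and using a hypothetical target URL (without authentication, only public annotations are returned):

```python
# Query the public Hypothes.is search API for annotations on a given page.
import requests

resp = requests.get(
    "https://api.hypothes.is/api/search",
    params={"uri": "https://example.org/article", "limit": 20},
)
resp.raise_for_status()

for row in resp.json().get("rows", []):
    # Each result closely follows the W3C Web Annotation model:
    # 'text' holds the annotation body, 'target' the anchoring selectors.
    print(row.get("user"), "->", row.get("text"))
```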

A particularly interesting project worth mentioning is dokieli, a client-side tool for decentralised article publishing, annotations, and social interactions based on open Web standards and best practices (Capadisli et al., 2017). It is part of the ecosystem around the Solid project, initiated by Tim Berners-Lee in 2016 with the aim ‘to radically change the way Web applications work today, resulting in true data ownership as well as improved privacy.’33

As a project, dokieli is still at an early stage of development, and it is a promising candidate for experiments with annotation as part of a future, (more) decentralised web. That said, for experimental publishing projects that rely on a robust implementation and an easy-to-use annotation system, our recommendation is Hypothes.is.

Overview of available tools

The following (linked) table lists current examples of tools that can be used to facilitate annotation in one way or another. The list is limited to annotation tools under active maintenance (i.e., updated in the recent past) and thus does not feature earlier implementations such as those listed on the AnnotatorJS page: the ideas behind AnnotatorJS have since been incorporated into the W3C Web Annotation standard, and many of the tools created between roughly 2012 and 2015 have either ceased to exist or are no longer actively maintained or developed.

Figure 2: Overview of Annotation Tools considered in this study. View this spreadsheet on CryptPad.

Conclusion

This research and scoping report will be developed further in instalments, to incorporate both community feedback from the COPIM partners and other stakeholders (publishers, authors, technology developers) and updates in a rapidly changing technological landscape. We will also continue to update the examples listed in the experimental books typology section to include more non-English-language examples from a wider geographical region. We will release new versions of this report periodically and very much welcome comments and feedback, which we hope to incorporate into subsequent versions. The idea is that this report, in a different set-up and form of course, will morph into the online resource we will be creating in year 3, while also serving as documentation of the process behind the establishment of that resource and the thinking and decision-making informing it.

Works Cited

Adema, J., & Stone, G. (2017). Changing publishing ecologies: A landscape study of new university presses and academic-led publishing (p. 102). Jisc. http://doi.org/10.5281/zenodo.4420993

Andreessen, M. (1993). WWW-Talk Apr-Jun 1993: Group annotation server guinea pigs? http://1997.webhistory.org/www.lists/www-talk.1993q2/0416.html

Bates, M. (2014). Conquering the Command Line. http://conqueringthecommandline.com

Blansit, B. D. (2008). An Introduction to Cascading Style Sheets (CSS). Journal of Electronic Resources in Medical Libraries, 5(4), 395–409. https://doi.org/10.1080/15424060802453811

Brecht, B., & Silberman, M. (2020). Brecht on film and radio. Methuen. https://doi.org/10.5040/9781408185285

Capadisli, S., Guy, A., Verborgh, R., Lange, C., Auer, S., & Berners-Lee, T. (2017). Decentralised Authoring, Annotations and Notifications for a Read-Write Web with dokieli. In J. Cabot, R. De Virgilio, & R. Torlone (Eds.), Web Engineering (Vol. 10360, pp. 469–481). Springer International Publishing. https://doi.org/10.1007/978-3-319-60131-1_33

Chacon, S. (2014). Pro Git (Second edition). Apress. https://github.com/progit/progit2/releases/download/2.1.277/progit.pdf

Chang, V., Mills, H., & Newhouse, S. (2007). From Open Source to long-term sustainability: Review of Business Models and Case studies (V. Chang, Ed.). https://eprints.soton.ac.uk/263925/

Charoy, F. (2016, June 6). Keynote: From group collaboration to large scale social collaboration. 25th IEEE International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE-2016). https://hal.inria.fr/hal-01342751

Coko Foundation. (n.d.). XSweet. Retrieved 11 December 2020, from https://xsweet.org/

Confederation of Open Access Repositories (COAR), & Next Generation Libraries Publishing. (2021). SComCaT: Scholarly Communication Technology Catalogue. Retrieved 27 January 2021, from https://www.scomcat.net/

DeLisle, C. J. (2017, February 20). Time to Encrypt the Cloud. CryptPad Blog. https://blog.cryptpad.fr/2017/02/20/Time-to-Encrypt-the-Cloud/index.html

Fidus Writer. (n.d.). What is it? Retrieved 29 January 2021, from https://www.fiduswriter.org/how-it-works/

FSF. (2009). Introduction to the Command Line. Free Software Foundation (FSF). http://archive.flossmanuals.net/command-line/

Garlan, D., Allen, R., & Ockerbloom, J. (1995). Architectural mismatch: Why reuse is so hard. IEEE Software, 12(6), 17–26. https://doi.org/10.1109/52.469757

Ginsberg, D. (2010). Ways to Collaborate: Google and Beyond. Presentations. https://scholarship.kentlaw.iit.edu/lib_pres/44

Git—About Version Control. (n.d.). Retrieved 11 December 2020, from https://git-scm.com/book/en/v2/Getting-Started-About-Version-Control

Git—Contributing to a Project. (n.d.). Retrieved 11 December 2020, from https://git-scm.com/book/en/v2/GitHub-Contributing-to-a-Project

GitLab. (n.d.). Administer GitLab Pages for self-managed instances. Retrieved 11 December 2020, from https://docs.gitlab.com/ee/user/project/pages/#administer-gitlab-pages-for-self-managed-instances

Heller, L., The, R., & Bartling, S. (2014). Dynamic Publication Formats and Collaborative Authoring. In S. Bartling & S. Friesike (Eds.), Opening Science (pp. 191–211). Springer International Publishing. https://doi.org/10.1007/978-3-319-00026-8_13

Hoe, N. S. (2006). Free/Open Source Software—Open Standards. United Nations Development Programme – Asia-Pacific Development Information Programme (UNDP-APDIP). https://idl-bnc-idrc.dspacedirect.org/handle/10625/50703

Holvoet, K. (2006). What Is RSS and How Can Libraries Use It to Improve Patron Service? Library Hi Tech News, 23(8), 32–33. https://doi.org/10.1108/07419050610713718

Hoya, B. (2010). Google Docs, EtherPad, and then some: Word processing and collaboration in today’s portable work environment. Texas Library Journal, 86(2), 60–62.

Jullien, N., Stol, K.-J., & Herbsleb, J. D. (2019). A Preliminary Theory for Open Source Ecosystem Micro-economics. In B. Fitzgerald, A. Mockus, & M. Zhou (Eds.), Towards Engineering Free/Libre Open Source Software (FLOSS) Ecosystems for Impact and Sustainability. Springer. https://hal.archives-ouvertes.fr/hal-02127185

Kelty, C. (2014). Beyond Copyright and Technology: What Open Access Can Tell Us about Precarity, Authority, Innovation, and Automation in the University Today. Cultural Anthropology, 29(2), 203–215. https://doi.org/10.14506/ca29.2.02

Kim, E. (2020, October 21). How to Publish a Book with GitBook CLI and GitHub Pages in 7 Minutes. Hackernoon. https://hackernoon.com/how-to-publish-a-book-with-gitbook-cli-and-github-pages-in-7-minutes-i61w3wjn

Kramer, B., & Bosman, J. (n.d.). 400+ Tools and innovations in scholarly communication. Google Docs. Retrieved 11 December 2020, from https://bit.ly/innoscholcomm-list

Lehman, P. (2010). The Biblatex Package. Programmable Bibliographies and Citations. https://www.sys.kth.se/docs/texlive/texmf-dist/doc/latex/biblatex/biblatex.pdf

Lewis, D. W. (2020). A Bibliographic Scan of Digital Scholarly Communication Infrastructure | Educopia Institute. Educopia Institute. https://educopia.org/mapping-the-scholarly-communication-landscape-bibliographic-scan/

Schweik, C. M. (2013). Sustainability in Open Source Software Commons: Lessons Learned from an Empirical Study of SourceForge Projects. Technology Innovation Management Review, 3(1), 13–19. https://doi.org/10.22215/timreview/645

Maxwell, J. W., Hanson, E., Desai, L., Tiampo, C., O’Donnell, K., Ketheeswaran, A., Sun, M., Walter, E., & Michelle, E. (2019). Mind the Gap: A Landscape Analysis of Open Source Publishing Tools and Platforms. PubPub. https://doi.org/10.21428/6bc8b38c.2e2f6c3f

Mercier, C. (2017, February 23). Three recommendations to enable Annotations on the Web | W3C News. https://www.w3.org/blog/news/archives/6156

Microsoft. (2018, October 26). Microsoft completes GitHub acquisition. The Official Microsoft Blog. https://blogs.microsoft.com/blog/2018/10/26/microsoft-completes-github-acquisition/

Open Annotation Community Group. (n.d.). Open Annotation Community Group. Retrieved 15 December 2020, from https://www.w3.org/community/openannotation/

Radical Open Access Collective. (n.d.). Information Portal: OA Publishing Tools. Retrieved 27 January 2021, from https://radicaloa.disruptivemedia.org.uk/resources/publishing-tools/

Raymond, E. S. (1998). The cathedral and the bazaar. First Monday, 3(3). https://doi.org/10.5210/fm.v3i3.578

Salus, P. H. (1994). A quarter century of UNIX. Addison-Wesley Pub. Co. https://wiki.tuhs.org/lib/exe/fetch.php?media=publications:qcu.pdf

Shah, R. C., & Kesan, J. P. (2008). Lost in Translation: Interoperability Issues for Open Standards [SSRN Scholarly Paper]. Social Science Research Network. https://papers.ssrn.com/abstract=1201708

Shaw, Z. (2011). The CLI Crash Course: Controlling Your Computer With The Terminal. samizdat. https://library.memoryoftheworld.org/#/book/9223b1f6-cda7-469d-b7a2-fd32eb96cb7c

Signorini, G. F. (n.d.). Open source and sustainability: The role of universities. Retrieved 1 December 2020, from https://flore.unifi.it/handle/2158/1151000

Solid. (n.d.). Retrieved 29 January 2021, from https://solid.mit.edu/

Solid Project. (n.d.). Home. Retrieved 29 January 2021, from https://solidproject.org/

Stallman, R. (2007). Why Open Source misses the point of Free Software. https://www.gnu.org/philosophy/open-source-misses-the-point.html.en

Stallman, R. (2013, December 5). FLOSS and FOSS. https://www.gnu.org/philosophy/floss-and-foss.en.html

Tennant, J. P., Bielczyk, N., Tzovaras, B. G., Masuzzo, P., & Steiner, T. (2020). Introducing Massively Open Online Papers (MOOPs). KULA: Knowledge Creation, Dissemination, and Preservation Studies, 4(1), 1. https://doi.org/10.5334/kula.63

The Executable Book Project. (n.d.). Documentation. Retrieved 29 January 2021, from https://executablebooks.org/en/latest/

The Mother of All Demos, presented by Douglas Engelbart (1968)—YouTube. (1968, December 9). https://web.archive.org/web/20201210094618if_/https://www.youtube.com/watch?v=yJDv-zdhzMY

The Web Annotation Working Group. (2017, February 23). Web Annotation Protocol: W3C Recommendation 23 February 2017. https://www.w3.org/TR/annotation-protocol/

Wikipedia. (2020a). Point and click. In Wikipedia. https://en.wikipedia.org/w/index.php?title=Point_and_click&oldid=990779820

Wikipedia. (2020b). WikiWikiWeb. In Wikipedia. https://en.wikipedia.org/w/index.php?title=WikiWikiWeb&oldid=991654568

Wikipedia. (2020c). Editor war. In Wikipedia. https://en.wikipedia.org/w/index.php?title=Editor_war&oldid=993024851

Wikipedia. (2020d). Google Docs#Supported_file_formats. In Wikipedia. https://en.wikipedia.org/w/index.php?title=Google_Docs&oldid=993309823

Wikipedia. (2020e). Cryptographic hash function. In Wikipedia. https://en.wikipedia.org/w/index.php?title=Cryptographic_hash_function&oldid=993402727

Wusteman, J. (2004). RSS: The latest feed. Library Hi Tech, 22(4), 404–413. https://doi.org/10.1108/07378830410570511

Xie, Y. (n.d.). 6.3 Publishers | bookdown: Authoring Books and Technical Documents with R Markdown. Retrieved 14 December 2020, from https://bookdown.org/yihui/bookdown/

External resource

The bibliographies for all parts of this report are openly available on Zotero.


Header image: Page spread of Writing Machines by Katherine Hayles and Anne Burdick. Hayles, N. K., Burdick, A., Loyer, E., Lunenfeld, P. (2002). Writing machines. MIT Press.
