
A new revolutionary workflow for a unified collection of citations: say hello to the OpenCitations Index

Blog post by Ivan Heibi (University of Bologna), Arianna Moretti (University of Bologna) and Chiara Di Giambattista (University of Bologna).

In the past five years, OpenCitations data have been enriched with numerous new indexes of open citation data from different sources. However, the quantity and diversification of the ingested information have raised several issues, which recently made it essential to conduct a complete revision of the ingestion workflow. The result was a revolution in the way OpenCitations data are delivered. In this blog post, we will explain the context and challenges raised by the old procedure. Then, we will present the new ingestion workflow, designed to produce just two comprehensive collections: the OpenCitations Index, which gathers the open citation data, and OpenCitations Meta, which holds the open bibliographic metadata.

Once upon a time, there were five OpenCitations indexes…

In 2018, OpenCitations released the kickoff version of its first citation index, COCI (citations from Crossref), which contained around 300 million citation links derived from the subset of the reference lists in the Crossref database in which citing and cited entities are identified by Digital Object Identifiers (DOIs). COCI gathered citations with associated metadata in compliance with the recommendations of the Initiative for Open Citations (I4OC) that citation data should be structured, separable, and open. It thus marked a turning point by providing a free and open alternative to earlier sources such as Google Scholar, whose data were freely accessible but not downloadable, and Web of Science and Scopus, which demanded paid access.

In a short time, COCI became a competitive and trusted index of citation data, used by numerous institutional repositories and services, including B!son and Optimeta. In 2021, COCI was included in a comparative study with the most relevant sources in the landscape, including proprietary ones, which showed its coverage approaching parity with those of the other sources involved in the analysis (Microsoft Academic, Scopus, Dimensions, and Web of Science). At the time of its most recent update in January 2023, COCI counted more than 1.4 billion citations. Several factors lie behind this outstanding number, including Elsevier’s endorsement of the Declaration on Research Assessment (DORA) in December 2020, which led to the open release via Crossref of the reference lists of the articles published in all its journals and confirmed the value of initiatives such as the Initiative for Open Citations (I4OC).

However, before this change of heart, in 2019 OpenCitations had tried to narrow the open citations coverage gap by launching its second index, the Crowdsourced Open Citations Index (CROCI). This index allowed publishers and scholars to contribute directly by uploading crowdsourced open citations into the OpenCitations infrastructure.

In December 2022, a new concrete step towards a factual plurality of OpenCitations indexes was taken by the ingestion of new data sources into the infrastructure, with the publication of the inaugural dumps of DOCI (citations from DataCite) and POCI (citations from PubMed). In June 2023, the first version of the OROCI (citations from OpenAIRE) dump was released too, and JOCI (citations from JALC) is expected to be available by the end of November 2023, for a total of five collections from different sources. 

Why a new workflow? The issues with multiple sources management and new challenges

While having such a variety and richness of indexes helped present the extent of OpenCitations’ sources, the recent increase in the number of sources and the diversification of the data integrated led to two primary issues:

    1. the necessity to handle the ingestion of new identifier types in a DOI-based software infrastructure, and
    2. the consequent possibility of encountering the same citation expressed by several sources with different identifiers.

Moreover, it soon became clear that we needed to optimize the reuse of the already developed software components in order to ease the metadata crosswalk between each new source’s data model and the OpenCitations Data Model, with the aim of defining a functional and easily extendable workflow that can be reused whenever a new data source is incorporated. Such a workflow should be:

    1. sufficiently generic to establish a globally unique procedure; 
    2. customizable enough to capture the necessary information within each of the specific data models and formats. 

As a solution, we decided to use OpenCitations Meta, the new OpenCitations database and tool for managing bibliographic data related to the publications involved in the citations. OpenCitations Meta makes it possible to assign each entity involved in a citation an internal identifier, namely the OpenCitations Meta Identifier (OMID), to which all the persistent identifiers associated with the same publication are redirected.

As a result, the allocation of an OMID for each bibliographic resource also enabled the unambiguous identification of each citation, regardless of the persistent identifier schema originally used by the data source to identify the resources. This approach allowed us to perform data deduplication and finally make all the sources’ contributions converge into a unified index containing all the unique citations managed by OpenCitations, expressed as OMID to OMID citation links.

The revised workflow

The new workflow is based on three main components, with the benefit of optimizing the process both in terms of computational cost and in terms of flexibility. As shown in Fig. 1, in a preliminary step, source-specific software processes the input dataset – structured according to the source data model – and extracts two OpenCitations Data Model compliant data collections in tabular format, one for bibliographic metadata and one for citation data.

The following steps are common to the processing of every source dataset.

STEP 1: The bibliographic metadata collection is used as input for the META software. At this stage, it is checked whether or not the bibliographic entities have been previously integrated into our infrastructure (coming from other data sources). If so, the existing OMID is also linked to the new alternative identifiers of the bibliographic resource, and any new metadata values are integrated. A new OMID is minted for entities never previously encountered, uniquely representing the bibliographic resource in OpenCitations. The outputs of the process are: (I) an updated version of the OpenCitations Meta collection that also includes the metadata of the bibliographic entities provided by the new source, and (II) a collection of provenance data. An internal database is constantly refreshed to preserve the correspondence between external IDs and the associated internal OMIDs.
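
To make the step concrete, here is a minimal, illustrative Python sketch of the deduplication logic just described. The names (id_to_omid, new_omid, the OMID pattern, the example PMID) are ours, chosen for illustration, and do not reflect the actual META implementation.

# Illustrative sketch only (not the OpenCitations META software): assign or
# reuse an OMID for each incoming bibliographic record, merging any new
# alternative identifiers and metadata values.

import itertools

_counter = itertools.count(1)
id_to_omid = {}     # e.g. "doi:10.1186/1756-8722-6-59" -> "omid:br/061"
omid_metadata = {}  # OMID -> merged metadata dictionary

def new_omid():
    return f"omid:br/06{next(_counter)}"  # hypothetical OMID pattern

def ingest_record(identifiers, metadata):
    """identifiers: all persistent IDs of one bibliographic resource,
    e.g. ["doi:10.1186/1756-8722-6-59", "pmid:24004605"] (PMID invented)."""
    # Reuse the OMID if any of the identifiers has been seen before.
    omid = next((id_to_omid[i] for i in identifiers if i in id_to_omid), None)
    if omid is None:
        omid = new_omid()                 # entity never previously encountered
    for i in identifiers:                 # link every PID to the same OMID
        id_to_omid[i] = omid
    # Integrate any new (non-empty) metadata values for this resource.
    omid_metadata.setdefault(omid, {}).update({k: v for k, v in metadata.items() if v})
    return omid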

STEP 2: Starting from the collection of citations expressed as directional links between identifiers of potentially any type (e.g., DOI-DOI, PMID-PMID, PMC-PMID, etc.), the INDEX software queries the internal database mapping IDs to OMIDs to produce an updated version of the OpenCitations Index: unique citations expressed as OMID-OMID links in different formats, accompanied by their corresponding provenance data.
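
A matching sketch of the INDEX step, reusing the hypothetical id_to_omid mapping from the previous snippet, shows how the same citation arriving from two sources under different identifier schemes collapses into a single OMID-OMID link; again, this is illustrative code, not the OpenCitations INDEX software.

# Illustrative continuation of the sketch above: translate source citations,
# expressed as ID-to-ID links of any scheme, into deduplicated OMID-to-OMID links.

def build_index(source_citations, id_to_omid):
    """source_citations: iterable of (citing_id, cited_id) pairs, e.g. a
    DOI-DOI pair from Crossref and a PMID-PMID pair from PubMed that both
    express the same citation (identifiers invented for illustration)."""
    index = set()
    for citing_id, cited_id in source_citations:
        try:
            citing_omid = id_to_omid[citing_id]
            cited_omid = id_to_omid[cited_id]
        except KeyError:
            continue  # entity not yet processed by the META step
        # A citation reported by several sources under different identifier
        # schemes collapses here into a single OMID-OMID link.
        index.add((citing_omid, cited_omid))
    return index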

Fig. 1: An overview of the data ingestion workflow, starting from the source-specific conversion and production of the citation and bibliographic metadata tables, progressing through the META process and the assignment of an OMID to each bibliographic record involved in a citation, and culminating in the publication of the OpenCitations Index collection of OMID-OMID unique citations.

What we have now: The OpenCitations Index 

From now on, OpenCitations will no longer publish a separate index of citation data for each source. Instead, we will publish a single collection of citations into which the contributions from all the sources flow, which we will simply call ‘The OpenCitations Index’. The first version of this unified index of OMID-OMID citations has been posted on Figshare in RDF, CSV, and Scholix formats, together with a collection of its provenance information, provided in RDF and CSV formats. For each citation, it is possible to trace the source of the information by consulting the provenance data collection, thanks to the http://www.w3.org/ns/prov#atLocation property, which defines the location of each citation.

This new solution has the benefit of simplifying the consultation of the data maintained by our infrastructure without reducing the information content. In addition, by handling deduplication efficiently, the new Index not only provides accurate figures for the exact number of unique citations exposed by the framework, but also makes it possible to verify the individual contribution of each source, as well as their overlapping data (Fig. 2).

Fig. 2: An overview of the number of citations stored in the OpenCitations Index as of October 31, 2023. The diagonal cells in the table (highlighted in yellow) show the unique contribution of each collection to the OpenCitations Index, while the other cells represent the citations that are shared between the collections. More in detail, the green cells show the overall input of each source, while the pink cells represent the number of overlapping citations between two data sources.

Currently, the Index contains almost 2 billion unique citations. By the end of November, a new version of the collection will be published, including the contribution of the new Japan Link Centre (JaLC) source. 

How to access the OpenCitations Index data

To maximize the reuse of the exposed information and to ensure the greatest possible interoperability, the collection will always be published on Figshare in all formats listed above. In addition, the data will be accessible via an API, a SPARQL endpoint, and a web interface.

The redesign of the ingestion workflow marks a fundamental step for OpenCitations towards more intuitive and simple access to our services, while always preserving and improving the quality of our data. If you need further information on how the new workflow works, please visit our website, contact us at contact@opencitations.net, or leave feedback and/or suggestions in the dedicated card on our public roadmap to help us improve our services and communications. Thank you!

From little acorns . . . A retrospective on OpenCitations

The initial vision

Now that OpenCitations is hosting over one billion freely available scholarly bibliographic citations, this is perhaps an opportune moment to look back to the start of this initiative. A little over eleven years ago, on 24 April 2010, I spoke at the Open Knowledge Foundation Conference, OKCon2010, in London, on the topic

OpenCitations: Publishing Bibliographic Citations as Linked Open Data

I reported that, earlier that same week, I had applied to Jisc for a one-year grant to fund the OpenCitations Project (opencitations.net). Jisc (at that time ‘The JISC’, the Joint Information Systems Committee) was tasked by the UK government, among other things, to support research and development in information technology for the benefit of the academic community.

The purpose of that original OpenCitations R&D project was to develop a prototype in which we:

  • harvested citations from the open access biomedical literature in PubMed Central;
  • described and linked them using CiTO, the Citation Typing Ontology [1];
  • encoded and organized them in an RDF triplestore; and
  • published them as Linked Open Data in the OpenCitations Corpus (OCC).

I told those at the conference that in this demonstration project, with limited JISC funding, we could not hope to “boil the whole ocean”, but that nevertheless there would be substantial benefits from even partial coverage of citation data from the scholarly literature:

  • We could show the way and establish best practice.
  • Despite partial coverage, all key papers would most likely be cited several times.
  • The overall topological structure of the citation network would be revealed.
  • We would create a ‘benchmark’ corpus of high-quality RDF citation data that could be used to develop analytical and visualization tools.
  • We could show the value of open citation data in helping scholars to discover full text articles of all types, and thus encourage subscription-access publishers to release their reference metadata.

The important thing, I said, was to make a start!

The Jisc OpenCitations Project

That JISC grant application was funded, and the project, to last for a year with modest funding of £100K, started in my lab in the Department of Zoology at Oxford University on 1st June 2010, and was subsequently extended for a further six months.

Using data from the Open Access subset of PubMed Central, we created the first prototype release of the OpenCitations Corpus of linked bibliographic citation data, containing 6,529,815 independent bibliographic records of both citing and cited entities, comprising references to ~20% of all post-1980 articles recorded in PubMed, including those to all the most important highly cited papers in every field of biomedical endeavour.

This achievement was almost entirely the result of the excellent work by our chief data wrangler Alex Dutton, whose skill and natural feel for linked data did wonders for this project. Ben O’Steen, Graham Klyne and Alistair Miles made important contributions.

The project also resulted in many other developments, described here, most of which were developed or at least initiated during a short but wonderfully productive collaboration with Silvio Peroni, who spent six months with me in 2010 as a doctoral student intern from the University of Bologna, to which he subsequently returned to complete his thesis and develop his academic career.

These included:

  • the deconstruction and re-development of the original version of CiTO into a suite of orthogonal and complementary ontologies covering the whole domain of scholarly publishing – the SPAR (Semantic Publishing and Referencing) Ontologies [2, 3];
  • the mapping of various existing metadata schemas into RDF using SPAR, including the DataCite Metadata Schema and subsequently JATS (now the default NISO standard for XML markup of scholarly documents) [4]; and
  • the initiation of the Semantic Publishing Blog and this OpenCitations Blog.

Life after Jisc – the flowering of OpenCitations

After the Jisc funding ended and I, after a long career in biological teaching and research, formally retired from the Department of Zoology at Oxford University, members of the initial OpenCitations team moved on to other things. Like so many grant-funded academic projects whose initial financial support has dried up, OpenCitations could have foundered at that stage, an interesting prototype with too little content to be useful. However, the concept of providing an open alternative to proprietary citation indexes was too important to abandon. But how could it be transitioned into something enduring and useful, particularly when, as a matter of principle, one had decided that the citation data should be made freely available, thus precluding income generation by charging for ‘premium’ services or the formation of a commercial spin-off?

Finally, I realized that something radical needed to be done to move OpenCitations forward. I had maintained a lively collaboration with Silvio Peroni at the University of Bologna, resulting between 2011 and 2014 in the publication of 18 articles and conference papers concerning the SPAR ontologies, ontology development, documentation and visualization, and related topics, and in 2015 I invited him to start working with me directly on OpenCitations. It was the best decision I could have made. We decided to take the initial concept and re-implement it from the bottom up. OpenCitations gave Silvio a major computer science project to which he could apply his considerable talent, and this soon resulted in the development of a revised RDF data model for describing citation data, the OpenCitations Data Model (OCDM) [5], and a suite of new software tools to harvest, organise and publish citations as linked open data [6]. The credit for almost all the subsequent conceptual and technical developments within OpenCitations, which have incrementally led to our present situation, is due to Silvio Peroni, and the scholarly community is indebted to him for the intelligence, skill and diligent application he has given to OpenCitations over the past six years. I am truly honoured to have Silvio as co-Director of OpenCitations, and wish to take this opportunity to acknowledge his contributions and to thank him publicly.

Our work on OpenCitations at that stage, summarized in [7], would not have been possible without the enthusiastic support of Silvio’s senior colleague Fabio Vitali and of the Department of Computer Science and Engineering at the University of Bologna, which not only provided a stimulating environment for Silvio’s post-doctoral work, but also supplied computing services and infrastructure at no charge to OpenCitations. It was also greatly helped by Professor David De Roure of Oxford University, who gave me an academic home and a formal affiliation within the Oxford e-Research Centre after my retirement from the Department of Zoology, which enabled me to continue to hold research grants.

As has been documented in earlier posts in this blog, we benefitted greatly in 2017 from a grant from the Alfred P. Sloan Foundation, which enabled us to purchase a new and more powerful computing infrastructure for the sole use of OpenCitations and to extend and improve our software, and subsequently in 2019 from a project grant from the Wellcome Trust to develop the Open Biomedical Citations in Context Corpus, which permitted the extension of OCDM and SPAR for the characterization of in-text references and their textual contexts.

A significant breakthrough came in January 2018 with our decision to treat citations as first-class data entities, each with its own persistent identifier (PID), the Open Citation Identifier (OCI) [8]. This gave Silvio the freedom to envision a new kind of database, a citation index in which each citation had its own metadata, including citation timespan, citation categorization (e.g. self-citation), and of course the DOIs of the citing and cited publications. The creation of this new index was possible only thanks to the incredible effort of Ivan Heibi, who served as a Research Fellow in the project funded by the Alfred P. Sloan Foundation at that time, and who was entirely responsible for developing the first version of the code necessary for creating such a database. Having harvested all the open references from Crossref metadata dumps, Silvio and Ivan created COCI, the OpenCitations Index of Crossref DOI-to-DOI Citations, which immediately became our principal source of open citations, the original OpenCitations Corpus being retained as a ‘sandbox’ in which to experiment with new data representations, for example those required for the Open Biomedical Citations in Context Corpus. Access to COCI was facilitated by Silvio’s development of a REST API, using his software tool RAMOSE (Restful API Manager Over SPARQL Endpoints), which enables the easily configurable deployment of a REST API over any SPARQL endpoint to an RDF triplestore [9]. We were able to organize all our data, both ‘traditional’ and new, and to encode it in RDF, thanks to the comprehensive OpenCitations Data Model [5], itself based on our SPAR Ontologies [3], which we evolved as necessary to accommodate new data representation requirements.

During this period we published a number of definitions, conference papers and journal articles documenting these advances, details of which can be found here. Of these, the most recent canonical publication describing OpenCitations as an infrastructure for open scholarship, and its datasets, tools, services and activities, is Peroni and Shotton (2020) [10]. We also established the Research Centre for Open Scholarly Metadata at the University of Bologna, primarily to handle administrative, financial and academic aspects of OpenCitations activities.

OpenCitations’ future

The problem remained: how to sustain the OpenCitations infrastructure financially. We were greatly helped by Bilder, Lin and Neylon’s formulation of the Principles of Open Scholarly Infrastructure (POSI) [11], in which they clearly point out that reliance solely on grant funding for specific projects is not the answer. OpenCitations’ compliance with POSI is described here. We were thus immensely grateful that SPARC Europe and other institutions had the wisdom to establish SCOSS (The Global Sustainability Coalition for Open Science Services) to facilitate the crowd-sourced financial support of useful open infrastructures by the scholarly community, including academic libraries, government agencies and other stakeholders. OpenCitations applied for SCOSS support in 2019, which led to the selection of OpenCitations for support in the SCOSS second round.

The donations we are now starting to receive from such stakeholders, and the new staff that this funding has recently allowed us to hire, signal the start of our transition from a financially vulnerable academic project to a sustainable open scholarly infrastructure of real value to the community.

The work of opening more of the global citation graph now requires two things:

  • that each publisher takes responsibility for ensuring that the references from all of its journal articles and books are submitted, together with all other bibliographic metadata, to open scholarly bibliographic metadata aggregators such as Crossref and DataCite, from which they can be indexed into open citation indexes of sufficient quality, depth of detail and breadth of coverage that these offer genuine alternatives to the expensive proprietary citation indexing services upon which the academic community presently relies; and
  • that the entire scholarly stakeholder community re-directs a fraction of the enormous sums currently spent on subscriptions to proprietary bibliographic services in order to support Open Science infrastructures such as OpenCitations that make citations and other forms of scholarly metadata and objects freely available.

References

[1] David Shotton (2010). CiTO, the Citation Typing Ontology. J. Biomedical Semantics 1 (Suppl. 1): S6. http://dx.doi.org/10.1186/2041-1480-1-S1-S6

[2] Silvio Peroni, David Shotton (2012). FaBiO and CiTO: ontologies for describing bibliographic resources and citations. Web Semantics, 17: 33-34. https://doi.org/10.1016/j.websem.2012.08.001, OA at http://speroni.web.cs.unibo.it/publications/peroni-2012-fabio-cito-ontologies.pdf

[3] Silvio Peroni, David Shotton (2018). The SPAR Ontologies. In Proceedings of the 17th International Semantic Web Conference (ISWC 2018): 119-136. https://doi.org/10.1007/978-3-030-00668-6_8

[4] Peroni S, Lapeyre DA and Shotton D (2012) From Markup to Linked Data: Mapping NISO JATS v1.0 to RDF using the SPAR (Semantic Publishing and Referencing) Ontologies. Proc. 2012 JATS Conference, National Library of Medicine, Bethesda, Maryland, USA (October 2012): 16-17. http://www.ncbi.nlm.nih.gov/books/NBK100491/

[5] Marilena Daquino, Silvio Peroni, David Shotton (2020). The OpenCitations Data Model. Figshare. https://doi.org/10.6084/m9.figshare.3443876.v7

[6] Silvio Peroni, David Shotton, Fabio Vitali (2017). One Year of the OpenCitations Corpus: Releasing RDF-based scholarly citation data into the Public Domain. In The Semantic Web – ISWC 2017 (Lecture Notes in Computer Science Vol. 10588, pp. 184–192). Springer, Cham. https://doi.org/10.1007/978-3-319-68204-4_19

[7] Silvio Peroni, Alexander Dutton, Tanya Gray, David Shotton (2015). Setting our bibliographic references free: towards open citation data. Journal of Documentation, 71 (2): 253-277. http://dx.doi.org/10.1108/JD-12-2013-0166, OA at http://speroni.web.cs.unibo.it/publications/peroni-2015-setting-bibliographic-references.pdf

[8] Silvio Peroni, David Shotton (2019). Open Citation Identifier: Definition. Figshare. https://doi.org/10.6084/m9.figshare.7127816

[9] Marilena Daquino, Ivan Heibi, Silvio Peroni, David Shotton (2021). Creating Restful APIs over SPARQL endpoints with RAMOSE. Semantic Web. http://arxiv.org/abs/2007.16079

[10] Silvio Peroni, David Shotton (2020). OpenCitations, an infrastructure organization for open scholarship. Quantitative Science Studies, 1(1): 428-444. https://doi.org/10.1162/qss_a_00023

[11] Geoffrey Bilder, Jenny Lin, Cameron Neylon (2015). Principles for Open Scholarly Infrastructure. http://dx.doi.org/10.6084/m9.figshare.1314859

The Open Biomedical Citations in Context Corpus: Progress Report

The creation of the Open Biomedical Citations in Context Corpus (CCC) is the goal of a one-year project funded by the Wellcome Trust. The aim is to create a new open corpus of bibliographic and citation data that contains detailed information about individual in-text reference pointers in biomedical journal articles. The project is led by Professor Silvio Peroni of the Research Centre for Open Scholarly Metadata (University of Bologna), is being undertaken by Dr Marilena Daquino (University of Bologna), and actively involves the Oxford e-Research Centre (University of Oxford), the École de Bibliothéconomie et des Sciences de l’Information (Université de Montréal), and the Centre for Science and Technology Studies (CWTS) at Leiden University.

An in-text reference pointer is a textual device (e.g. “[1]”, or “(Peroni and Shotton 2012)”) that appears in the main text of a citing work and denotes a bibliographic reference listed in the Bibliography section of the citing work. While a single in-text reference pointer uniquely denotes a single bibliographic reference, it can occur together with one or more other pointers, forming an in-text reference pointer list that denotes several references (e.g. “[5-13]”, or “(Peroni and Shotton 2012; Peroni and Shotton 2019)”). In-text reference pointers may appear in several places within the same citing publication (e.g. Introduction, Methods, Discussion), may occur within different document components (e.g. body text, figure captions, tables), and may address the cited publication for different purposes (e.g. as the source of an experimental protocol, as a data source, or for general background information).

Unfortunately, current citation indexes contain no information about in-text reference pointers, such as the number of times a particular work is referenced within the citing work, the text of the sentences in which these pointers occur, or the rhetorical purpose of the individual citations.

Having data at the level of individual in-text reference pointers offers many new opportunities, enabling one: (1) to distinguish between works that are referenced just once in a citing publication and those that are referenced multiple times, and thereby (potentially) to distinguish when a citation is fundamental for the understanding or the development of the citing work, or merely incidental; (2) to see which in-text reference pointers occur together (e.g. in the same sentence or the same paragraph), thus, potentially, to infer similarities between the co-cited publications; and (3) to determine in which specific sections of the publication these in-text references occur (e.g. Introduction, Methods, Results), and thus, potentially, by means of textual analysis of the citation contexts, to retrieve the rhetorical functions of the citations – i.e. the reason why an author cites another work. 

The goal of the CCC Project is to provide stakeholders with an exemplar Linked Open Data corpus, created from the open access biomedical research literature, that is tailored for such deep citation analyses. The corpus will be a new member of the collection of OpenCitations datasets, and will be accompanied by services for accessing and querying data.

In the CCC Project, we have achieved, or are currently working on, the following developments:

  • Extending the OpenCitations Data Model (OCDM). The OpenCitations Data Model has been extended and enriched with new terms and relations to represent bibliographic entities related to in-text reference pointers, such as the in-text reference pointers themselves, in-text reference pointer lists, discourse elements (e.g. sections, paragraphs, sentences), and annotations on citations, bibliographic references and in-text reference pointers. In addition, the provenance layer of the data model has been revised to provide meaningful provenance information in a more compact way. A revised version of the OCDM including these terms was published on November 8, 2019, and it is available on Figshare [1].
  • Extending the OpenCitations harvesting and data re-engineering pipeline. The CCC Project leverages existing OpenCitations technologies for building this new corpus, using as input articles from the Open Access Subset of biomedical literature hosted by Europe PubMed Central (EPMC) and encoded in XML. The OpenCitations pipelines for knowledge extraction (i.e. the software called BEE) and for data re-engineering (i.e. the software called SPACIN) have been enhanced so as to harvest relevant information from the full text of the XML sources provided by EPMC, rather than just the reference lists, and to transform these data into RDF according to the revised OCDM. The source code of the new pipeline is available on GitHub.
  • Creating InTRePID, a new persistent identifier for in-text reference pointers. Different in-text reference pointers denoting the same bibliographic reference have distinct logical, rhetorical and textual contexts wherein they occur. To permit them to be identified individually and handled properly, we have recently developed a new persistent identifier, the In-Text Reference Pointer Identifier (InTRePID), for identifying individual in-text reference pointers relating to an open bibliographic citation. The InTRePID is based on the Open Citation Identifier (OCI), currently being used to identify the >624 million citations present in the new release of COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations, described in the previous blog post. The formal definition of an InTRePID is available on Figshare [2]. In addition, an InTRePID Resolution Service has been developed (currently in beta-testing) to facilitate the retrieval of the metadata relating to in-text reference pointers. At the moment, a subset of the CCC corpus is available online for testing the InTRePID Resolution Service.
  • Development of services for accessing and querying the Citations in Context Corpus. Along with the development of the CCC itself, we are also developing services for querying data within the CCC. In particular, we are currently working to extend the RAMOSE software to provide an API for accessing the CCC triplestore. This CCC API will permit users to access the CCC corpus and retrieve detailed information about in-text reference pointers and their related annotations in a variety of human- and machine-readable formats. The source code of the API Manager is available on GitHub. The configuration file for querying the CCC corpus is still in the process of development.

Moreover, we are currently working to evaluate the content data quality of the CCC corpus and to develop reconciliation activities with information stored in Crossref. Specifically, by means of new validation methods, we are testing whether the extracted in-text reference pointers are complete (i.e. determining that all the in-text reference pointers for a particular bibliographic reference have been correctly extracted from the text), and that in-text reference pointer lists (e.g. “[5-13]”) have been correctly parsed to extract all the implicit pointers (in this case “[6]”, “[7]”, “[8]”, “[9]”, “[10]”, “[11]” and “[12]”), and to associate them correctly with the appropriate bibliographic references that they denote. This activity is fundamental, in order to address the diverse citation styles adopted by different journals and to overcome possible incoherencies in the publishers’ XML markup of the articles. Secondly, whenever a DOI is not specified for the citing or cited publications in the full-text of the citing publication, a text search using the Crossref API is performed in order to match possible candidates and supply the missing DOI. This reconciliation process itself can be error-prone since recommended matches are obtained by means of a non-transparent scoring mechanism. Therefore we are currently testing the application of a scoring threshold that will eliminate false positives and provide us only with correct results.
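
As a simple illustration of the parsing problem just described (not the actual CCC code), the following Python sketch expands a numeric pointer list such as “[5-13]” into the individual pointers it denotes; real articles of course use far more varied citation styles than this pattern covers.

import re

def expand_pointer_list(pointer_list):
    """Expand a numeric in-text reference pointer list such as "[5-13]" or
    "[3, 5-7]" into the individual pointers it denotes; the pointers that do
    not appear explicitly in the text ("[6]" to "[12]" for "[5-13]") are the
    implicit ones discussed above."""
    pointers = []
    for token in re.findall(r"\d+(?:\s*[-\u2013]\s*\d+)?", pointer_list):
        bounds = re.split(r"\s*[-\u2013]\s*", token)
        if len(bounds) == 2:
            pointers.extend(range(int(bounds[0]), int(bounds[1]) + 1))
        else:
            pointers.append(int(bounds[0]))
    return pointers

print(expand_pointer_list("[5-13]"))  # [5, 6, 7, 8, 9, 10, 11, 12, 13]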

The deployment of the enhanced OpenCitations pipeline for populating the CCC corpus automatically is planned to start in the next weeks. For more details and to provide suggestions, please contact us!

References

[1] Marilena Daquino, Silvio Peroni and David Shotton (2019). The OpenCitations Data Model. Version 2.0. Figshare. https://doi.org/10.6084/m9.figshare.3443876

[2] David Shotton, Marilena Daquino and Silvio Peroni (2020). In-Text Reference Pointer Identifier: Definition. Figshare. https://doi.org/10.6084/m9.figshare.11674032

COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations

Abstract

In this paper, we present COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations (http://opencitations.net/index/coci). COCI is the first open citation index created by OpenCitations, in which we have applied the concept of citations as first-class data entities, and it contains more than 445 million DOI-to-DOI citation links derived from the data available in Crossref. These citations are described in RDF by means of the new extended version of the OpenCitations Data Model (OCDM). We introduce the workflow we have developed for creating these data, and also show the additional services that facilitate the access to and querying of these data by means of different access points: a SPARQL endpoint, a REST API, bulk downloads, Web interfaces, and direct access to the citations via HTTP content negotiation. Finally, we present statistics regarding the use of COCI citation data, and we introduce several projects that have already started to use COCI data for different purposes.

Introduction

The availability of open scholarly citations [21] is a public good, of significant value to the academic community and the general public. In fact, citations not only serve as an acknowledgment medium [16], but can also be characterised topologically (by defining the connected graph between citing and cited entities and its evolution over time [19]), sociologically (for example by identifying odd conduct within, or elitist access paths to, scientific research [18]), quantitatively (by creating citation-based metrics for evaluating the impact of an idea or a person [17]), and financially (by defining the scholarly value of a researcher within his/her own academic community [20]). The Initiative for Open Citations (I4OC, https://i4oc.org) has dedicated the past two years to persuading publishers to provide open citation data by means of the Crossref platform (https://crossref.org), obtaining the release of the reference lists of more than 43 million articles (as of February 2019), and it is this change of behaviour by the majority of academic publishers that has permitted COCI to be created.

OpenCitations (http://opencitations.net) is a scholarly infrastructure organization dedicated to open scholarship and the publication of open bibliographic and citation data by the use of Semantic Web (Linked Data) technologies, and is a founding member of I4OC. It has created and maintains the SPAR (Semantic Publishing and Referencing) Ontologies (http://www.sparontologies.net) [22] for encoding scholarly bibliographic and citation data in RDF, and has previously developed the OpenCitations Corpus (OCC) of open downloadable bibliographic and citation data recorded in RDF [4].

In this paper, we introduce a new dataset made available a few months ago by OpenCitations, namely COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations (https://w3id.org/oc/index/coci). This dataset, launched in July 2018, is the first of the indexes proposed by OpenCitations (https://w3id.org/oc/index), in which citations are exposed as first-class data entities with accompanying properties (i.e. individuals of the class cito:Citation as defined in CiTO [7]) instead of being defined simply as relations between two bibliographic resources (via the property cito:cites). Currently, COCI contains more than 445 million DOI-to-DOI citation links made available under a Creative Commons CC0 public domain waiver, which can be accessed and queried through a SPARQL endpoint, an HTTP REST API, by means of searching/browsing Web interfaces, by bulk download in different formats (CSV and N-Triples), or by direct access via HTTP content negotiation.

The rest of the paper is organized as follows. In Section 2, we introduce some of the main RDF datasets containing scholarly bibliographic metadata and citations. In Section 3, we provide some details on the rationale and the technologies used to describe citations as first-class data entities, which are the main foundation of the development of COCI. In Section 4, we present COCI, including the workflow developed for ingesting and exposing the open citation data available, and the other tools used for accessing these data. In Section 5, we show the scale of the community uptake of COCI since its launch, by means of quantitative statistics on the use of its related services and by listing existing projects that are using it for specific purposes. Finally, in Section 6, we conclude the paper by sketching out related and upcoming projects.

Related works

We have noticed a recent growing interest within the Semantic Web community in creating and making available RDF datasets concerning the metadata of scholarly resources, particularly bibliographic resources. In this section, we briefly introduce some of the most relevant ones.

ScholarlyData (http://www.scholarlydata.org) [1] is a project that refactors the Semantic Web Dog Food so as to keep the dataset growing in good health. It uses the Conference Ontology, an improved version of the Semantic Web Conference Ontology, to describe metadata about documents (5,415, as of March 31, 2019), people (more than 1,100), and academic events (592) at which such documents have been presented.

Another important source of bibliographic data in RDF is OpenAIRE (https://www.openaire.eu) [3]. Created by funding from the European Union, its RDF dataset makes available data for around 34 million research products created in the context of around 2.5 million research projects.

While important, these aforementioned datasets do not provide citation links between publications as part of their RDF data. In contrast, the following datasets do include citation data as part of the information they make available.

In 2017, Springer Nature announced SciGraph (https://scigraph.springernature.com) [2], a Linked Open Data platform aggregating data sources from Springer Nature and other key partners managing scholarly domain data. It contains data about journal articles (around 8 million, as of March 31, 2019) and book chapters (around 4.5 million), including their related citations, and information on around 7 million people involved in the publishing process.

The OpenCitations Corpus (OCC, https://w3id.org/oc/corpus) [4] is a collection of open bibliographic and citation data created by ourselves, harvested from the open access literature available in PubMed Central. As of March 31, 2019, it contains information about almost 14 million citation links to more than 7.5 million cited bibliographic resources.

WikiCite (https://meta.wikimedia.org/wiki/WikiCite) is a proposal, with a related series of workshops, which aims at building a bibliographic database in Wikidata [10] to serve all Wikimedia projects. Currently Wikidata hosts (as of March 29, 2019) more than 170 million citations.

Biotea (https://biotea.github.io) [5] is an RDF dataset containing information about some of the articles available in the Open Access subset of PubMed Central, which have been enhanced with specialized annotation pipelines. The last released dataset includes information extracted from 2,811 articles, including data on their citations.

Finally, Semantic Lancet [6] proposes to build a dataset of scholarly publication metadata and citations (including the specification of the citation functions) starting from articles published by Elsevier. To date it includes the bibliographic metadata, abstracts and citations of 291 articles published in the Journal of Web Semantics.

Indexing citations as first-class data entities

Citations are normally defined simply as links between published entities (from a citing entity to a cited entity). However, an alternative richer view is to regard each citation as a data entity in its own right, as illustrated in Figure 1. This alternative approach permits us to endow a citation with descriptive properties, such as those introduced in Table 1.

Figure 1. Two different ways of describing citations: as a relation between two bibliographic entities (top), or as an individual first-class data entity in its own right, where the citing entity and the cited entity are among its attributes.

The advantages of treating citations as first-class data entities are:

  • all the information regarding each citation is available in one place, since such information is defined as attributes of the citation itself;
  • citations become easier to describe, distinguish, count and process, and it becomes possible to distinguish separate citations within the citing entity to the cited entity, enabling one to count how many times, from which sections of the citing entity, and (in principle) for what purposes a particular cited entity is cited within the source paper;
  • if available in aggregate, citations described in this manner are easier to analyse using bibliometric methods, for example to determine how citation time spans vary by discipline.

We have appropriately extended the OpenCitations Data Model (OCDM, http://opencitations.net/model) [23] so as to define each citation as a first-class entity in a machine-readable manner. In particular, we have used the class cito:Citation defined in the revised and expanded Citation Typing Ontology (CiTO, http://purl.org/spar/cito) [7], which is part of the SPAR Ontologies [22]. This class allows us to define a permanent conceptual directional link from the citing bibliographic entity to a cited bibliographic entity, which can be accompanied by additional ontological terms defining specific attributes, as introduced in Table 1.

Characteristic Description CiTO entity
citing entity The bibliographic entity which acts as source for the citation. Object property cito:hasCitingEntity.
cited entity The bibliographic entity which acts as target for the citation. Object property cito:hasCitedEntity.
citation creation date The date on which the citation was created. This has the same numerical value as the publication date of the citing bibliographic resource, but is a property of the citation itself. When combined with the citation time span, it permits that citation to be located in history. Data property cito:hasCitationCreationDate, one of xsd:date, xsd:gYearMonth, or xsd:gYear as datatype value.
citation timespan The temporal characteristic of a citation, namely the interval between the publication date of the cited entity and the publication date of the citing entity. Data property cito:hasCitationTimeSpan, xsd:duration as datatype value.
type A classification of the citation according to particular dimensions, e.g. whether or not it is a self-citation. Property rdf:type associated with one or more subclasses of cito:Citation – in particular, for example cito:AuthorSelfCitation (i.e. citing and the cited entities have at least one author in common) and cito:JournalSelfCitation (i.e. citing and the cited entities are published in the same journal).

Table 1. List of characteristics that can be associated with a citation when it is described as first-class data entity, using the properties and classes available in CiTO for their definition in RDF.

So as to identify each citation precisely, when described as first-class data entity and included in an open dataset, we have also developed the Open Citation Identifier (OCI) [24], which is a new globally unique persistent identifier for citations. OCIs are registered in the Identifiers.org platform (https://identifiers.org/oci) and recognized as persistent identifiers for citations by the EU FREYA Project (https://www.project-freya.eu) [25]. Each OCI has a simple structure: the lower-case letters oci followed by a colon, followed by two sequences of numerals separated by a dash, where the first sequence is the identifier for the citing bibliographic resource and the second sequence is the identifier for the cited bibliographic resource. For example, oci:0301-03018 is a valid OCI for a citation defined within the OpenCitations Corpus, while oci:02001010806360107050663080702026306630509-02001010806360107050663080702026305630301 is a valid OCI for a citation included in Crossref. It is worth mentioning that OCIs are not opaque identifiers, since they explicitly encode directional relationships between identified citing and cited entities, the provenance of the citation, i.e. the database that contains it, and the type of identifiers used in that database to identify the citing and cited entities. In addition, we have created the Open Citation Identifier Resolution Service (http://opencitations.net/oci), which is a resolution service for OCIs based on the Python application oci.py available at https://github.com/opencitations/oci. Given a valid OCI as input, this resolution service is able to retrieve citation data in RDF (either as RDF/XML, Turtle or JSON-LD), or in Scholix, JSON or CSV formats. A more detailed explanation of OCIs and related material is available in [24].
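
As an informal illustration, the following Python sketch reproduces the Crossref OCI quoted above from its two DOIs. The handful of two-digit character codes used here are inferred from that worked example alone; the authoritative character-to-code mapping is the lookup table of the OCI software referenced elsewhere in this document.

# Sketch of the DOI-to-OCI encoding, derived from the worked example above.
# Only the two-digit codes needed for that example are included here.

DOI_CHAR_TO_CODE = {str(d): f"0{d}" for d in range(10)}  # '0' -> '00', ..., '9' -> '09'
DOI_CHAR_TO_CODE.update({"/": "36", "-": "63"})          # inferred from the example OCI

SUPPLIER_PREFIX = "020"  # identifies Crossref as the source of the citation

def encode_doi(doi):
    # In this example-derived scheme the "10." DOI prefix is not encoded.
    body = doi.lower().removeprefix("10.")
    return SUPPLIER_PREFIX + "".join(DOI_CHAR_TO_CODE[c] for c in body)

def make_oci(citing_doi, cited_doi):
    return f"oci:{encode_doi(citing_doi)}-{encode_doi(cited_doi)}"

print(make_oci("10.1186/1756-8722-6-59", "10.1186/1756-8722-5-31"))
# oci:02001010806360107050663080702026306630509-02001010806360107050663080702026305630301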

At OpenCitations, we define an open citation index as a dataset containing citations that complies with the following requirements:

  • the citations contained are all open, according to the definition provided in [21];
  • the citations are all treated as first-class data entities;
  • each citation is identified by an Open Citation Identifier (OCI) [24];
  • the citation data are recorded in RDF according to the OpenCitations Data Model (OCDM) [23], where the OCI of a citation is embedded in the IRI defining it in RDF;
  • each citation defines the attributes shown in Table 1.

COCI: ingestion workflow, data, and services

COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations, is the first citation index to be published by OpenCitations, in which we have applied the concept of citations as first-class data entities, introduced in the previous section, to index the contents of one of the major open databases of scholarly citation information, namely Crossref (https://crossref.org), and to render and make available this information in machine-readable RDF under a CC0 waiver. Crossref contains metadata about publications (mainly academic journal articles) that are identified using Digital Object Identifiers (DOIs). Of the more than 100 million publications recorded in Crossref, more than 43 million have reference lists deposited by their publishers. Many of these references are to other publications bearing DOIs that are also described in Crossref, while others are to publications that lack DOIs and do not have Crossref descriptions. Crossref organises the publications with associated reference lists into three categories: closed, limited and open, whose reference lists are, respectively, not visible to anyone outside the Crossref Cited-by membership, visible only to such members and to Crossref Metadata Plus members, or visible to all users.

Figure 2. The diagram of the data model adopted to define the new class for defining citations as first-class data entities, which forms part of the OpenCitations Data Model. This model uses terms from the Citation Typing Ontology (CiTO, http://purl.org/spar/cito) for describing the data, and from the Provenance Ontology (PROV-O, http://www.w3.org/ns/prov) to define the citation’s provenance.

Following the first release of COCI on June 4, 2018, the most recent version, released on November 12, 2018, contains more than 445 million DOI-to-DOI citations included in the open and limited datasets of Crossref reference data. All the citation data in COCI and their provenance information, described according to the Graffoo diagram [27] presented in Figure 2, are included in two distinct graphs – https://w3id.org/oc/index/coci/ and https://w3id.org/oc/index/coci/prov/ respectively – released under a CC0 waiver and compliant with the FAIR data principles [26].

An example of a citation included in COCI is shown in the following excerpt (in Turtle), where the OCI is embedded as part of the IRI of the citation (without the oci: prefix) after the ci/ (meaning citation according to the OpenCitations Data Model [23]):

@prefix cito: <http://purl.org/spar/cito/> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<https://w3id.org/oc/index/coci/ci/02001010806360107050663080702026306630509-02001010806360107050663080702026305630301>
  a 
    cito:Citation,
    cito:JournalSelfCitation ;
  cito:hasCitationCreationDate "2013"^^xsd:gYear ;
  cito:hasCitationTimeSpan "P1Y"^^xsd:duration ;
  cito:hasCitingEntity <http://dx.doi.org/10.1186/1756-8722-6-59> ;
  cito:hasCitedEntity <http://dx.doi.org/10.1186/1756-8722-5-31> ;
  prov:generatedAtTime "2018-11-01T05:47:54+00:00"^^xsd:dateTime ;
  prov:hadPrimarySource <https://api.crossref.org/works/10.1186/1756-8722-6-59> ;
  prov:wasAttributedTo <https://w3id.org/oc/index/coci/prov/pa/1> .

In the following subsections we introduce the ingestion workflow developed for creating COCI, we provide some figures on the citations it contains, and we list the resources and services we have made available to permit access to and querying of the dataset.

Ingestion workflow

We processed all the data included in the October 2018 JSON dump of Crossref data, available to all the Crossref Metadata Plus members. The ingestion workflow, summarised in Figure 3, was organised in four distinct phases, and all the related scripts developed and used are released as open source code according to the ISC License and downloadable from the official GitHub repository of COCI at https://github.com/opencitations/coci.

Figure 3. A flowchart scheme describing the workflow to build COCI. It is divided in four phases: (1) global data generation, (2) CSV generation, (3) conversion into RDF, and (4) updating the triplestore.

Phase 1: global data generation. We parse and process the entire Crossref bibliographic database to extract all the publications having a DOI and their available list of references. Through this process three datasets are generated, which are used in the next phase:

  • Dates: the publication dates of all the bibliographic entities in Crossref and of all their references, where these explicitly specify a DOI and a publication date as structured data – e.g. see the fields DOI and year in the reference array of https://api.crossref.org/works/10.1007/978-3-030-00668-6_8. Where the same DOI is encountered multiple times, e.g. as an item indexed in Crossref and also as a reference in the reference list of another article deposited in Crossref, we use the full publication date defined in the indexed item. A sketch of this extraction is given after this list.
  • ISSN: the ISSN (if any) and publication type (journal-article, book-chapter, etc.) of each bibliographic entity identified by a DOI indexed in Crossref.
  • ORCID: the ORCIDs (if any) associated with the authors of each bibliographic entity identified by a DOI indexed in Crossref.
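
For illustration, a hedged Python sketch of the Dates extraction for a single Crossref work record might look as follows. The field names follow the Crossref REST API response referenced in the first bullet above, but the function itself is ours, not the actual COCI code.

def extract_dates(work):
    """work: one Crossref work record (the "message" object of an API response
    or one item of the dump). Returns a DOI -> publication date mapping."""
    dates = {}
    doi = work["DOI"].lower()
    issued = work.get("issued", {}).get("date-parts", [[None]])[0]
    if issued and issued[0]:
        # Full date of the indexed item, e.g. [2018, 10, 8] -> "2018-10-08".
        dates[doi] = "-".join(f"{part:02d}" for part in issued if part is not None)
    for ref in work.get("reference", []):
        ref_doi = ref.get("DOI", "").lower()
        if ref_doi and ref.get("year") and ref_doi not in dates:
            dates[ref_doi] = str(ref["year"])  # year-only date from the reference
    return dates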

Phase 2: CSV generation. We generate a CSV file such that each row represents a particular citation between a citing entity and a cited entity according to the data available in the Crossref dump, by looking at the DOI identifying the citing entity and all the DOIs specified in the reference list of such a citing entity according to the Crossref data. In particular, we execute the following four steps for each citation identified:

  1. We generate the OCI for the citation by encoding the DOIs of the citing and cited entities into numerical sequences using the lookup table available at https://github.com/opencitations/oci/blob/master/lookup.csv, which are prefixed by the supplier prefix 020 to indicate Crossref as the source of the citation.
  2. We retrieve the publication date of the citing entity from the Dates dataset and assign it as citation creation date.
  3. We retrieve the publication date of the cited entity (from the Dates dataset) and we use it, together with the publication date of the citing entity retrieved in the previous step, to calculate the citation timespan.
  4. We use the data contained in the ISSN and ORCID datasets to establish whether the citing and cited entity have been published in the same journal and/or have at least one author in common, and in these cases we assign the appropriate self-citation type(s) to the citation.

Simultaneously with the creation of the CSV file of citation data, we generate a second CSV file containing the provenance information for each citation (identified by its OCI generated in the aforementioned Step 1). These provenance data include the agent responsible for the generation of the citation, the Crossref API call that refers to the data of the citing bibliographic entity containing the reference used to create the citation, and the creation date of the citation.
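
The following Python sketch illustrates Steps 2 and 3 above, i.e. deriving the citation creation date and the citation timespan (an xsd:duration such as the "P1Y" value in the Turtle excerpt shown earlier). It assumes full YYYY-MM-DD dates for brevity, whereas the real data also contain year-only and year-month values; the example dates are invented.

from datetime import date

def citation_timespan(citing_pub_date, cited_pub_date):
    """Return an xsd:duration-like string (e.g. "P1Y", "P2Y3M") for the interval
    between the publication of the cited entity and that of the citing entity."""
    citing = date.fromisoformat(citing_pub_date)
    cited = date.fromisoformat(cited_pub_date)
    months = (citing.year - cited.year) * 12 + (citing.month - cited.month)
    if citing.day < cited.day:  # not yet a complete month
        months -= 1
    years, months = divmod(months, 12)
    parts = (f"{years}Y" if years else "") + (f"{months}M" if months else "")
    return "P" + (parts or "0M")

citation_creation_date = "2013-09-03"  # invented citing publication date (Step 2)
print(citation_timespan(citation_creation_date, "2012-08-14"))  # P1Y (Step 3)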

Phase 3: conversion into RDF. The CSV files generated in the previous phase are then converted into RDF according to the N-Triples format, following the OWL model introduced in Figure 2, where the DOIs of the citing and cited entities become DOI URLs starting with http://dx.doi.org/, while the IRI of the citation includes its OCI (without the oci: prefix), as illustrated in the example given in the previous section.

Phase 4: updating the triplestore. The final RDF files generated in Phase 3 are used to update the triplestore used for the OpenCitations Indexes.

Data

COCI was first created and released on July 4, 2018, and most recently updated on November 12, 2018. Currently, it contains 445,826,118 citations between 46,534,705 bibliographic entities. These are stored by means of 2,259,134,894 RDF statements (around 5 RDF statements per citation) describing the citation data, and 1,337,478,354 RDF statements (3 statements per citation) describing the related provenance information. Of the citations stored, 29,755,045 (6.7%) are journal self-citations, while 250,991 (0.06%) are author self-citations. The number of identified author self-citations, based on author ORCIDs, is a significant underestimate of the true number, mainly due to the sparsity of ORCID author identifiers within the Crossref dump. Journal entities (i.e. journals, volumes, issues, and articles) are the most frequently cited type of bibliographic entity, receiving over 420 million citations.

We also classify the cited documents according to their publishers – Table 2 shows the top ten publishers of citing and cited documents, calculated by looking at the DOI prefixes of the entities involved in each citation. As can be seen, Elsevier is by far the publisher with the largest number of cited documents. It is also the largest publisher that is not participating in the Initiative for Open Citations by making its publications’ reference lists open at Crossref – which is highlighted by the very limited number of outgoing citations recorded in COCI. Its present refusal to open its article reference lists in Crossref, contrary to the practice of most of the major scholarly publishers, is contributing significantly to the invisibility of Elsevier’s own publications within corpora of open citation data such as COCI, which are increasingly being used by the scholarly community for discovery, citation network visualization and bibliometric analysis, as we describe below in Section 5.

Publisher Outgoing citations Incoming citations
Springer Nature 79,860,827 52,257,862
Wiley 76,819,685 48,174,542
Elsevier 2,853,739 96,310,027
Informa UK Limited 41,433,917 14,975,989
Institute of Electrical and Electronics Engineers (IEEE) 30,114,985 20,940,703
American Physical Society (APS) 15,729,297 16,065,862
SAGE Publications 15,933,805 7,915,082
Ovid Technologies (Wolters Kluwer Health) 9,971,274 12,840,293
Oxford University Press (OUP) 9,891,000 11,466,659
AIP Publishing 10,130,022 8,455,097

Table 2. A classification of the COCI citations according to the publishers of the cited (incoming citations) and citing (outgoing citations) documents. The table shows the top ten publishers by the overall number of incoming and outgoing citations to/from their published works. Publishers shown in italics are not participating in the Initiative for Open Citations by making their publications’ reference lists open at Crossref – see https://i4oc.org for additional information.
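As a minimal sketch of how such a classification can be computed, the fragment below tallies outgoing and incoming citations per publisher from the DOI prefix of each citation's citing and cited entity. The prefix-to-publisher mapping shown is a tiny illustrative subset; in practice it would be resolved from publisher metadata.

# Sketch: classifying citations by publisher via DOI prefixes.
from collections import Counter

prefix_to_publisher = {"10.1016": "Elsevier", "10.1007": "Springer Nature",
                       "10.1002": "Wiley"}                      # illustrative subset

def publisher_of(doi):
    return prefix_to_publisher.get(doi.split("/", 1)[0], "other")

citations = [("10.1007/abc", "10.1016/def"),                     # (citing DOI, cited DOI)
             ("10.1002/ghi", "10.1016/def")]

outgoing = Counter(publisher_of(citing) for citing, _ in citations)
incoming = Counter(publisher_of(cited) for _, cited in citations)
print(outgoing)   # outgoing citations per publisher of the citing document
print(incoming)   # incoming citations per publisher of the cited document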

Resources and services

The citation data in COCI can be accessed in a variety of convenient ways, listed as follows.

Open Citation Index SPARQL endpoint. We have made available a SPARQL endpoint for all the indexes released by OpenCitations, including COCI, which is available at https://w3id.org/oc/index/sparql. When accessed with a browser, it shows a SPARQL endpoint editor GUI generated with YASGUI [8]. This SPARQL endpoint can also be queried programmatically over HTTP, e.g. via curl. In order to access COCI data, the graph https://w3id.org/oc/index/coci/ must be specified in the SPARQL query.
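As an example, the following Python sketch counts the citations received by one DOI through the SPARQL endpoint, restricting the query to the COCI graph as required. The query shape and the cito: predicate are based on the terms used elsewhere in this document, and the DOI URL follows the convention noted in footnote 4; treat it as an illustration rather than a canonical query.

# Sketch: querying the OpenCitations Index SPARQL endpoint, restricted to the COCI graph.
import requests

ENDPOINT = "https://w3id.org/oc/index/sparql"
QUERY = """
PREFIX cito: <http://purl.org/spar/cito/>
SELECT (COUNT(?citation) AS ?n)
FROM <https://w3id.org/oc/index/coci/>
WHERE { ?citation cito:hasCitedEntity <http://dx.doi.org/10.1016/j.websem.2012.08.001> . }
"""

response = requests.get(ENDPOINT, params={"query": QUERY},
                        headers={"Accept": "application/sparql-results+json"})
response.raise_for_status()
print(response.json()["results"]["bindings"][0]["n"]["value"])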

COCI REST API. Citation data in COCI can be retrieved by using the COCI REST API, available at https://w3id.org/oc/index/coci/api/v1. The rationale for making a REST API available in addition to the SPARQL endpoint was to provide convenient access to the citation data included in COCI for Web developers and users who are not necessarily experts in Semantic Web technologies. This REST API, like all the other REST APIs made available by OpenCitations, has been implemented by means of RAMOSE, the Restful API Manager Over SPARQL Endpoints (https://github.com/opencitations/ramose), a Python application that allows one to create a REST API over any SPARQL endpoint by means of a simple configuration file that executes a SPARQL query according to the particular API call specified. The configuration file for the COCI API is available at https://github.com/opencitations/api/blob/master/coci_v1.hf. Currently, the COCI REST API makes available four operations, which retrieve either (a) the citation data for all the references of a given DOI (operation: references), (b) the citation data for all the citations received by a given DOI (operation: citations), (c) the citation data for the citation identified by an OCI (operation: citation), or (d) the metadata for the articles identified by the specified DOIs (operation: metadata). It is worth mentioning that the latter operation strictly depends on live API calls to external services, namely the Crossref API (https://api.crossref.org), the DataCite API (https://api.datacite.org), and the Unpaywall API (http://api.unpaywall.org), to gather the metadata of the requested articles, such as the title, the authors, and the journal name, that are not explicitly included within the OpenCitations Index triplestore.
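For instance, the citations received by a given DOI can be fetched with a few lines of Python; the URL pattern (operation segment followed by the DOI) and the field names printed below are assumptions made for illustration and should be checked against the API documentation.

# Sketch: calling the COCI REST API 'citations' operation for one DOI.
import requests

API = "https://w3id.org/oc/index/coci/api/v1"
doi = "10.1016/j.websem.2012.08.001"

records = requests.get(f"{API}/citations/{doi}").json()
print(f"{len(records)} citations received by {doi}")
for record in records[:5]:
    # Field names ('citing', 'cited', 'creation', 'timespan') are assumed here.
    print(record.get("citing"), "->", record.get("cited"),
          record.get("creation"), record.get("timespan"))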

Searching and browsing interfaces. We have additionally developed a user-friendly text search interface (https://w3id.org/oc/index/search) and a browsing interface (e.g. https://w3id.org/oc/index/browser/coci/ci/02001010806360107050663080702026306630509-02001010806360107050663080702026305630301), which can be used to search citation data in all the OpenCitations Indexes, including COCI, and to visualise and browse them, respectively. These two interfaces have been developed by means of OSCAR, the OpenCitations RDF Search Application (https://github.com/opencitations/oscar) [9], and LUCINDA, the OpenCitations RDF Resource Browser (https://github.com/opencitations/lucinda), which provide a configurable layer over SPARQL endpoints, permitting one easily to create Web interfaces for querying and visualising the results of SPARQL queries.

Data dumps. All the citation data and provenance information in COCI are available as dumps stored in Figshare (https://figshare.com) in both CSV and N-Triples formats, while a dump of the whole triplestore is available on The Internet Archive (https://archive.org). The links to these dumps are available on the download page of the OpenCitations website (http://opencitations.net/download#coci).
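For bulk analyses it is usually more convenient to work on the CSV dump than on the live services. The sketch below streams through such a dump to count journal self-citations; the column names and values are assumptions and should be checked against the header row of the downloaded file.

# Sketch: streaming the COCI CSV dump to tally journal self-citations.
import csv

journal_self_citations = total = 0
with open("coci_dump.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        total += 1
        if row.get("journal_sc") == "yes":      # assumed column name and value
            journal_self_citations += 1

print(f"{journal_self_citations} journal self-citations out of {total} citations")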

Direct HTTP access. All the citation data in COCI can be accessed directly by means of the HTTP IRIs of the stored resources (via content negotiation, e.g. https://w3id.org/oc/index/coci/ci/02001010806360107050663080702026306630509-02001010806360107050663080702026305630301).
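A minimal example of such direct access, asking for a Turtle representation of the citation shown above via content negotiation:

# Sketch: dereferencing a citation IRI with an RDF Accept header.
import requests

iri = ("https://w3id.org/oc/index/coci/ci/"
       "02001010806360107050663080702026306630509-"
       "02001010806360107050663080702026305630301")
response = requests.get(iri, headers={"Accept": "text/turtle"})
print(response.status_code, response.headers.get("Content-Type"))
print(response.text[:500])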

Quantifying the use of COCI citation data

We have monitored accesses to COCI data since its launch in July 2018. The statistics and graphics we show in this section highlight two different aspects: the quantification of the use of COCI data – and related services – and the community uptake, i.e. the use of COCI data for specific purposes within cross-community projects and studies. All the data behind the charts described in this section are freely available for download from Figshare [15].

Quantitative analysis

Figure 4 shows the number of accesses made between July 2018 and February 2019 (inclusive) to the various COCI services described above – the search/browse interfaces, the REST API, SPARQL queries, and others (e.g. direct HTTP access to particular citations and visits to COCI webpages on the OpenCitations website). We have excluded from these counts all accesses made by automated agents and bots. As shown, the REST API is by far the most used service, with extensive usage recorded in the last four months, following the announcement of the second release of COCI. This is reasonable, considering that the REST API was developed precisely to accommodate the needs of generic Web users and developers, including (and in particular) those who are not experts in Semantic Web technologies. There is just one exception, in November 2018, when the SPARQL endpoint was used to retrieve quite a large amount of citation data. After further investigation, we noticed that a large proportion of the SPARQL calls came from a single source (according to the IP data stored in our log), which probably collected citation data for a specific set of entities.

Figure 4. The number of accesses to COCI-related services from July 2018 to February 2019. The scale used on the y-axis is logarithmic.

Figure 5 shows a particular cut of the figures given in Figure 4, focusing on the REST API accesses only. In particular, we analysed which operations of the API were used the most. According to these figures, the most used operation is metadata (first introduced in the API in August 2018), which allows one to retrieve all the metadata describing given publications. In contrast to the other API operations, the metadata operation accepts one or more DOIs as input. The least used operation was citation, which retrieves the data of a single citation given its OCI; this is not surprising, considering the currently limited awareness of this new identifier system for citations.

Figure 5. The number of accesses made to each COCI REST API operation since the release of COCI in July 2018, classified into four categories (requested resource): references, citations, citation, and metadata, as defined in the text. Note again the logarithmic scale of the y-axis.

In addition, we have also retrieved data about the views and downloads (as of March 29, 2019) of all the dumps uploaded to Figshare and to the Internet Archive. The CSV data dump received 1,321 views and 454 downloads, followed by the N-Triples data dump with 316 views and 93 downloads. The CSV provenance information dump received 166 views and 127 downloads, while the N-Triples provenance information dump received 95 views and 34 downloads. Finally, the least accessed dump was that of the entire triplestore, available in the Internet Archive and uploaded for the first time in November 2018, which had only 3 views.

Community uptake

The data in COCI have already been used in various projects and initiatives. In this section, we list the tools and studies of which we are aware that make use of COCI data.

VOSviewer (http://www.vosviewer.com) [11] is a software tool, developed at the Leiden University’s Centre for Science and Technology Studies (CWTS), for constructing and visualizing bibliometric networks, which may include journals, researchers, or individual publications, and may be constructed based on citation, bibliographic coupling, co-citation, and co-authorship relations. Starting from version 1.6.10 (released on January 10, 2019), VOSviewer can now directly use citation data stored in COCI, retrieved by means of the COCI REST API.

Citation Gecko (http://citationgecko.com) is a novel literature mapping tool that allows one to map a research citation network using some initial seed articles. Citation Gecko is able to leverage citation links between seed papers and other papers to highlight papers of possible interest to the user, for which it uses COCI data (accessed via the REST API) to generate the citation network.

OCI Graphe (https://dossier-ng.univ-st-etienne.fr/scd/www/oci/OCI_graphe_accueil.html) is a Web tool that allows one to search for articles by means of the COCI REST API and then visualises the results in a graph showing citations to the retrieved articles. It enriches this visualisation with additional information about publication venues, publication dates, and other related metadata.

Zotero [12] is a free, easy-to-use tool to help users collect, organize, cite, and share research. Recently, the Open Citations Plugin for Zotero (https://github.com/zuphilip/zotero-open-citations) has been released, which allows users to retrieve open citation data extracted from COCI (via its REST API) for one or more articles included in a Zotero library.

COCI data, downloaded from the CSV dump available on Figshare, have also been used in at least two bibliometric studies. In particular, during the LIS Bibliometrics 2019 Event, Stephen Pearson presented a study (https://blog.research-plus.library.manchester.ac.uk/2019/03/04/using-open-citation-data-to-identify-new-research-opportunities/) run on publications by scholars at the University of Manchester, which used COCI to retrieve citations between these publications so as to investigate potential cross-discipline and cross-department collaborations. Similarly, COCI data were used to conduct an experiment on the latest Italian Scientific Habilitation [13] (the national exercise that evaluates whether a scholar is qualified to hold an Associate or Full Professorship position in an Italian university), which aimed to replicate part of the outcomes of this evaluation exercise for the Computer Science research field by using only open scholarly data, including the citations available in COCI, rather than citation data from subscription services.

Conclusions

In this paper, we have introduced COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations. After an initial introduction of the notion of citations as first-class data entities, we have presented the ingestion workflow that has been implemented to create COCI, have detailed the data COCI contains, and have described the various services and resources that we have made available to access COCI data. Finally, we have presented some statistics about the use of COCI data, and have mentioned the tools and studies that have adopted COCI in recent months.

COCI is just the first open citations index that OpenCitations will make available. Using the experience we have gathered by creating it, we now plan the release of additional indexes, so as to extend the coverage of open citations available through the OpenCitations infrastructure. The first of these, recently released, is CROCI (https://w3id.org/oc/index/croci) [14], the Crowdsourced Open Citations Index, which contains citations deposited by individuals. CROCI is designed to permit scholars to proactively fill the open citations gap in COCI resulting from four causes: (a) the failure of many publishers using Crossref DOIs to deposit the reference lists of their publications at Crossref; (b) the failure of some publishers that do deposit their reference lists to make these reference lists open, in accordance with the recommendations of the Initiative for Open Citations; (c) the absence from ~11% of Crossref reference metadata of the DOIs for cited articles that have in fact been assigned DOIs (https://www.crossref.org/blog/underreporting-of-matched-references-in-crossref-metadata/), a problem that Crossref is currently working hard to rectify; and (d) the existence of citations to published entities that lack Crossref DOIs. In the near future, we plan to extend the number of indexes by harvesting citations from other open datasets, including Wikidata (https://www.wikidata.org), DataCite (https://datacite.org), and Dryad (https://datadryad.org). In addition, we plan to extend and generalise the current software developed for COCI, so as to facilitate more frequent updates of the indexes.

Acknowledgements

We gratefully acknowledge the financial support provided to us by the Alfred P. Sloan Foundation for the OpenCitations Enhancement Project (grant number G‐2017‐9800).

References

  1. Nuzzolese, A. G., Gentile, A. L., Presutti, V., Gangemi, A. (2016). Conference Linked Data: The ScholarlyData project. In Proceedings of the 15th International Semantic Web Conference (ISWC 2016): 150-158. DOI: https://doi.org/10.1007/978-3-319-46547-0_16
  2. Hammond, T., Pasin, M., & Theodoridis, E. (2017). Data integration and disintegration: Managing Springer Nature SciGraph with SHACL and OWL. In International Semantic Web Conference (Posters, Demos & Industry Tracks). http://ceur-ws.org/Vol-1963/paper493.pdf
  3. Alexiou, G., Vahdati, S., Lange, C., Papastefanatos, G., Lohmann, S. (2016). OpenAIRE LOD services: scholarly communication data as linked data. In Semantics, Analytics, Visualization. Enhancing Scholarly Data: 45-50. DOI: https://doi.org/10.1007/978-3-319-53637-8_6
  4. Peroni, S., Shotton, D., Vitali, F. (2017). One year of the OpenCitations Corpus – releasing RDF-based scholarly citation data into the public domain. In Proceedings of the 16th International Semantic Web Conference (ISWC 2017): 184-192. DOI: https://doi.org/10.1007/978-3-319-68204-4_19
  5. Garcia, A., Lopez, F., Garcia, L., Giraldo, O., Bucheli, V., Dumontier, M. (2018). Biotea: semantics for Pubmed Central. PeerJ, 6: e4201. DOI: https://doi.org/10.7717/peerj.4201
  6. Bagnacani, A., Ciancarini, P., Di Iorio, A., Nuzzolese, A. G., Peroni, S., Vitali, F. (2014). The Semantic Lancet Project: A Linked Open Dataset for Scholarly Publishing. In EKAW 2014 Satellite Events: 101-105. DOI: https://doi.org/10.1007/978-3-319-17966-7_10
  7. Peroni, S., Shotton, D. (2012). FaBiO and CiTO: ontologies for describing bibliographic resources and citations. Web Semantics, 17: 33-34. DOI: https://doi.org/10.1016/j.websem.2012.08.001
  8. Rietveld, L., Hoekstra, R. (2017). The YASGUI family of SPARQL clients. Semantic Web, 8(3): 373-383. DOI: https://doi.org/10.3233/SW-150197
  9. Heibi, I., Peroni, S., Shotton, D. (2018). OSCAR: A Customisable Tool for Free-Text Search over SPARQL Endpoints. In Semantics, Analytics, Visualization: 121-137. DOI: https://doi.org/10.1007/978-3-030-01379-0_9
  10. Erxleben, F., Günther, M., Krötzsch, M., Mendez, J., Vrandečić, D. (2014). Introducing Wikidata to the linked data web. In Proceedings of the 13th International Semantic Web Conference (ISWC 2014): 50-65. DOI: https://doi.org/10.1007/978-3-319-11964-9_4
  11. van Eck, N., & Waltman, L. (2009). Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics, 84(2), 523-538. DOI: https://doi.org/10.1007/s11192-009-0146-3
  12. Ahmed, K. M., Al Dhubaib, B. (2011). Zotero: A bibliographic assistant to researcher. Journal of Pharmacology and Pharmacotherapeutics, 2(4), 303. DOI: https://doi.org/10.4103/0976-500X.85940
  13. Di Iorio, A., Peroni, S., Poggi, F. (2019). Open data to evaluate academic researchers: an experiment with the Italian Scientific Habilitation. (To appear) Proceedings of the 17th International Conference on Scientometrics and Informetrics (ISSI 2019). https://arxiv.org/abs/1902.03287
  14. Heibi, I., Peroni, S., Shotton, D. (2019). Crowdsourcing open citations with CROCI – An analysis of the current status of open citations, and a proposal. (To appear) Proceedings of the 17th International Conference on Scientometrics and Informetrics (ISSI 2019). https://arxiv.org/abs/1902.02534
  15. Heibi, I., Peroni, S., Shotton, D. (2019). Usage statistics of COCI data. Figshare. DOI: https://doi.org/10.6084/m9.figshare.7873559
  16. Newton, I. (1675). Isaac Newton letter to Robert Hooke – Cambridge, 5 February 1675. https://digitallibrary.hsp.org/index.php/Detail/objects/9792 (last visited 23 March 2019)
  17. Schiermeier, Q. (2017). Initiative aims to break science’s citation paywall. Nature. DOI: https://doi.org/10.1038/nature.2017.21800
  18. Sugimoto, C. R., Waltman, L., Larivière, V., van Eck, N. J, Boyack, K. W., Wouters, P., de Rijcke, S. (2017). Open citations: A letter from the scientometric community to scholarly publishers. ISSI Society. http://issi-society.org/open-citations-letter (last visited 23 March 2019)
  19. Chawla, D. S. (2017). Now free: citation data from 14 million papers, and more might come. Science. https://www.sciencemag.org/news/2017/04/now-free-citation-data-14-million-papers-and-more-might-come (last visited 23 March 2019)
  20. Molteni, M. (2017). Tearing Down Science’s Citation Paywall, One Link at a Time. Wired. https://www.wired.com/2017/04/tearing-sciences-citation-paywall-one-link-time/ (last visited 23 March 2019)
  21. Peroni, S., Shotton, D. (2018). Open Citation: Definition. Figshare. DOI: https://doi.org/10.6084/m9.figshare.6683855
  22. Peroni, S., Shotton, D. (2018). The SPAR Ontologies. In Proceedings of the 17th International Semantic Web Conference (ISWC 2018): 119-136. DOI: https://doi.org/10.1007/978-3-030-00668-6_8
  23. Peroni, S., Shotton, D. (2018). The OpenCitations Data Model. Figshare. DOI: https://doi.org/10.6084/m9.figshare.3443876
  24. Peroni, S., Shotton, D. (2019). Open Citation Identifier: Definition. Figshare. DOI: https://doi.org/10.6084/m9.figshare.7127816
  25. Ferguson, C., McEntyre, J., Bunakov, V., Lambert, S., van der Sandt, S., Kotarski, R., … McCafferty, S. (2018). Survey of Current PID Services Landscape (Deliverable No. D3.1). Retrieved from FREYA project (EC Grant Agreement No 777523) website: https://www.project-freya.eu/en/deliverables/freya_d3-1.pdf
  26. Wilkinson, M. D., Dumontier, M., Aalbersberg, Ij. J., Appleton, G., Axton, M., Baak, A., … Mons, B. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3, 160018. DOI: https://doi.org/10.1038/sdata.2016.18
  27. Falco, R., Gangemi, A., Peroni, S., Shotton, D., Vitali, F. (2014). Modelling OWL Ontologies with Graffoo. In The Semantic Web: ESWC 2014 Satellite Events: 320–325. DOI: https://doi.org/10.1007/978-3-319-11955-7_42

Footnotes

1. An in-depth description of the definition and use of citations as first-class data entities can be found at https://opencitations.hypotheses.org/816.
2. Additional information on this classification of Crossref reference lists is available at https://www.crossref.org/reference-distribution/.
3. We have access to the limited dataset since we are members of the Crossref Metadata Plus plan.
4. We are aware that the current practice for DOI URLs is to use the base https://doi.org/ instead of http://dx.doi.org/. However, when one tries to resolve a DOI URL owned by Crossref by specifying an RDF format (e.g. Turtle) in the accept header of the request, the bibliographic entity is actually defined using the old URL structure starting with http://dx.doi.org/. For this reason, since COCI is derived entirely from Crossref data, we decided to stay with the approach currently used by Crossref.

Citations as First-Class Data Entities: Citation Descriptions

Requirements for citations to be treated as First-Class Data Entities

In my introductory blog post, I listed five requirements for the treatment of citations as first-class data entities.  The first of these requirements is that they must be definable in a machine-readable manner as a member of the class “Citation”, and describable using appropriate ontology terms.

This blog post describes recent additions to the OpenCitations Data Model, and to CiTO, the Citation Typing Ontology, that permit the required richer description of citations.

Changes to the OpenCitations Data Model

In the OpenCitations Data Model (OCDM), itself described in the following blog post, we have created the following new classes and properties, which permit the description of citations in richer ways appropriate for bibliometric research.  These changes have been inspired by the publications of Vincent Larivière, Ludo Waltman and their colleagues [1-3].

These new classes and properties and their definitions are described below:

New classes

  • Citation: a permanent conceptual directional link from the citing bibliographic resource to a cited bibliographic resource, created by the performative act of an author citing a published work that is relevant to the current work, typically made by including a bibliographic reference in the reference list of the citing work, or by the inclusion within the citing work of a link, in the form of an HTTP Uniform Resource Locator (URL), to the cited bibliographic resource on the World Wide Web.

The class Citation has sub-classes defining particular types of citation.

  • Self-citation: a citation in which the citing and the cited entities have something significant in common with one another. Sub-classes include:
    • Affiliation self-citation: a citation in which at least one author from each of the citing and the cited entities is affiliated with the same academic institution.
    • Author network self-citation: a citation in which at least one author of the citing entity has direct or indirect co-authorship links with one of the authors of the cited entity.
    • Author self-citation: a citation in which the citing and the cited entities have at least one author in common.
    • Funder self-citation: a citation in which the works reported in the citing and the cited entities were funded by the same funding agency.
    • Journal self-citation: a citation in which the citing and the cited entities are published in the same journal.
  • Journal cartel citation: a citation from one journal to another journal which forms one of a very large number of citations from the citing journal to recent articles in the cited journal.
  • Distant citation: a citation in which the citing and the cited entities have nothing significant in common with one another over and beyond their subject matter.

New object properties

  • has citing document: The bibliographic resource which acts as source for the citation.
  • has cited document: The bibliographic resource which acts as target for the citation.

New data properties

  • has citation creation date: The date on which the citation was created. This has the same numerical value as the publication date of the citing bibliographic resource, but is a property of the citation itself. When combined with the citation time span, it permits that citation to be located in history.
  • has citation time span: The temporal characteristic of a citation, namely the interval between the publication date of the cited entity and the publication date of the citing entity.

Changes to CiTO, the Citation Typing Ontology

To complement these additions to the OpenCitations Data Model, and to permit these richer characteristics of citations to be encoded in RDF, we have additionally made the following changes to CiTO, the Citation Typing Ontology.

New classes

The class cito:SelfCitation has been renamed cito:AuthorSelfCitation, with an unchanged definition (“a citation in which the citing and the cited entities have at least one author in common”).

A new class cito:SelfCitation has been created, with the same more general definition as the corresponding class in the OCDM (“a citation in which the citing and the cited entities have something significant in common with one another”). In CiTO, this class now includes five sub-classes:

  • cito:AuthorSelfCitation
  • cito:JournalSelfCitation
  • cito:FunderSelfCitation
  • cito:AffiliationSelfCitation
  • cito:AuthorNetworkSelfCitation

with the definitions given above for these sub-classes in the OCDM.

New object properties

To complement the OCDM properties, we have within CiTO the following object properties:

  • cito:hasCitedEntity (“A property that relates a citation to the cited entity”) and
  • cito:hasCitingEntity (“A property that relates a citation to the citing entity”).

CiTO also has the following relevant object property:

  • cito:sharesPublicationVenueWith

with the sub-property cito:sharesJournalWith.

New data properties

To match the additions in the OCDM, we have added these new data properties to CiTO, which have the same definitions as those in the OCDM:

  • cito:hasCitationCreationDate
  • cito:hasCitationTimeSpan.

In addition, the class cito:AuthorNetworkSelfCitation is accompanied by the new data property:

  • cito:hasCoAuthorshipCitationLevel

which specifies the minimal distance between one of the authors of the citing entity and one of the authors of the cited entity within their co-author network. For instance, a citation has a co-authorship citation level equal to 1 if at least one author of the citing entity has previously published as a co-author with one of the authors of the cited entity. Similarly, a citation has a co-authorship citation level equal to 2 if at least one author of the citing entity has previously published as a co-author with someone who has, in turn, previously published as a co-author with one of the authors of the cited entity; and so on.
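As a minimal sketch of how this level can be computed, the following Python fragment performs a breadth-first search over a co-author network, returning the shortest co-authorship distance between any author of the citing entity and any author of the cited entity. The graph encoding (a dictionary from each author to the set of their past co-authors) is an assumption made for illustration.

# Sketch: co-authorship citation level via breadth-first search.
from collections import deque

def coauthorship_citation_level(citing_authors, cited_authors, coauthors):
    """coauthors maps each author to the set of people they have co-authored with."""
    targets = set(cited_authors)
    queue = deque((author, 0) for author in citing_authors)
    seen = set(citing_authors)
    while queue:
        author, distance = queue.popleft()
        for other in coauthors.get(author, set()):
            if other in targets:
                return distance + 1     # 1 = direct co-authorship, 2 = one intermediary, ...
            if other not in seen:
                seen.add(other)
                queue.append((other, distance + 1))
    return None                         # no co-authorship link found

# Toy network: A has co-authored with X, and X with B, so the level is 2.
coauthors = {"A": {"X"}, "X": {"A", "B"}, "B": {"X"}}
print(coauthorship_citation_level({"A"}, {"B"}, coauthors))   # -> 2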

Describing a citation in RDF

Describing a citation between two articles in RDF as a simple link is straightforward but relatively uninformative:

<https://w3id.org/oc/corpus/br/1>
      cito:cites
          <https://w3id.org/oc/corpus/br/18> . 

The alternative RDF description of a citation as a first-class data entity could include the following triples (omitting any provenance information in this example), where br/1 and br/18 are the internal identifiers for the citing bibliographic resource and the cited bibliographic resource within the OpenCitations Corpus:

@prefix cito: <http://purl.org/spar/cito/> .
@prefix datacite: <http://purl.org/spar/datacite/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<https://w3id.org/oc/virtual/ci/1-18> a cito:Citation ;
     cito:hasCitingEntity <https://w3id.org/oc/corpus/br/1> ;
     cito:hasCitedEntity <https://w3id.org/oc/corpus/br/18> ;
     cito:hasCitationCreationDate "2016"^^xsd:gYear ;
     cito:hasCitationTimeSpan "P10Y"^^xsd:duration ;
     datacite:hasIdentifier <https://w3id.org/oc/virtual/id/ci-1-18> .

The meaning of “virtual” in the URI of this citation is explained in the following blog post about the OpenCitations Data Model.

The following diagram prepared by Silvio Peroni shows the semantic relationships for a citation currently handled by the OpenCitations Corpus (omitting the sub-classes of the class cito:Citation).  Explanation of OCI, the Open Citation Identifier, is given in a subsequent post.

References

[1]     Matthew L. Wallace, Vincent Larivière and Yves Gingras (2012). A Small World of Citations? The Influence of Collaboration Networks on Citation Practices.  PLoS ONE 7(3): e33339. https://doi.org/10.1371/journal.pone.0033339

[2]     Philippe Mongeon, Ludo Waltman and Sarah de Rijcke (2016). What do we know about journal citation cartels? A call for information.  CWTS blog post. Available at https://www.cwts.nl/blog?article=n-q2w2b4

[3]       Ludo Waltman and Caspar Chorus (2016). Journal self-citations are increasingly biased toward impact factor years. CWTS blog post. Available at https://www.cwts.nl/blog?article=n-q2x264

Citations as First-Class Data Entities: Introduction

Citations are now centre stage

As a result of the Initiative for Open Citations (I4OC), launched on April 6 last year, almost all the major scholarly publishers now open the reference lists they submit to Crossref, resulting in more than half a billion references being openly available via the Crossref API.

It is therefore time to think carefully about how citations are treated, and how they might be better handled as part of the Linked Open Data Web.

Citations are normally treated simply as the links between published entities.

Conventional citation

However, an alternative richer view is to regard a citation as a data entity in its own right.

First class citation

This permits us to endow a citation with descriptive properties, such as

has citation creation date:   3rd March 2015
has citation time span:       6 years, 5 months and 23 days
has type:                     Self-citation
has identifier:               oci:7295288-3962641

[Note: a later blog post entitled “Open Citation Identifiers” will include an explanation of the identifier shown here.]

Advantages of treating citations as First-Class Data Entities

  • All the information regarding each citation is available in one place.
  • Citations become easier to describe, distinguish, count and process.
  • If available in aggregate, citations described in this manner are easier to analyze using bibliometric methods, for example to determine how citation time spans vary by discipline.

Requirements for citations to be treated as First-Class Data Entities

  • They must be definable in a machine-readable manner as a member of the class “Citation”, and describable using appropriate ontology terms.
  • They must have metadata structured using a generic yet appropriately detailed data model.
  • They must be storable, searchable and retrievable in an open database designed for bibliographic citations.
  • They must be identifiable using a global persistent identifier scheme.
  • There must be a Web-based identifier resolution service that takes the citation identifier as input and returns a description of the citation.

Blog post detailing how these requirements are met

Subsequent blog posts will describe how we at OpenCitations have satisfied these requirements, permitting citations to indeed be treated as First-Class Data Entities:

  1. Citations as First-Class Data Entities: Citation Descriptions
  2. Citations as First-Class Data Entities: The OpenCitations Data Model
  3. Citations as First-Class Data Entities: The OpenCitations Corpus
  4. Citations as First-Class Data Entities: Open Citation Identifiers
  5. Citations as First-Class Data Entities: The Open Citation Identifier Resolution Service

‘Likes’ joins the semantic web: cito:likes

A ‘like’ button is a well-known feature in communication software such as social networking services, Internet forums, news websites and blogs that permits a user to indicate that he/she likes, enjoys or supports certain content.  Internet services that feature ‘like’ buttons usually also display the number of users who have expressed that they ‘like’ a particular item of content, providing a quantitative estimate of the strength of support for it.

In particular, the ‘Like’ button is one of Facebook’s social plug-ins, which can be used on websites outside Facebook as part of Facebook’s Open Graph.  It is valued by advertisers who wish to attract ‘likes’ for their products (and who pay Facebook for the privilege), but its use has aroused privacy concerns because it permits Facebook to track visitors to participating sites, even if they are not Facebook users, giving Facebook a vast amount of information about who visits which sites.

Like it or not, however, this form of social communication has now become an integral feature of online social interactions.  For this reason, we thought it would be worthwhile to enable encoding of such ‘likes’ as open linked data, in the form of a new object property in CiTO, the Citation Typing Ontology.

This new property, cito:likes, has the following definition:

“A property that permits you to express appreciation of or interest in something, or to express that it is worth thinking about even if you do not agree with its content, enabling social media ‘likes’ statements to be encoded in RDF.  Use of this property does NOT imply the existence of a formal citation of the entity that is ‘liked’.”

An exemplar usage of cito:likes (in Turtle format) is:

@prefix cito: <http://purl.org/spar/cito/> .
@prefix sioc: <http://rdfs.org/sioc/ns#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix today: <http://opencitation.wordpress.com/2012/07/13/> .

today:cito-likes a sioc:Post ;
	sioc:has_creator [ 
		a sioc:UserAccount ; 
		sioc:account_of [ 
			a foaf:Person ;
			foaf:givenName "David" ;
			foaf:familyName "Shotton" ] ] .

<https://www.facebook.com/silvioperoni> a sioc:UserAccount ;
	sioc:account_of <http://www.essepuntato.it/me>;
	cito:likes today:cito-likes .

To our surprise, we found that existing ontologies did not include such a property – a search in the excellent new LOV (Linked Open Vocabularies) service revealed that no other open ontology contains the same concept as is now represented by cito:likes.

The Trait Ontology has trait:likes, but this object property has a gender-related domain, and its definition indicates that its usage is designed for expressing sexual fetish preferences.

Schema.org at first sight appears to have something resembling cito:likes, but inspection of schema:UserLike reveals that its use is specific to events.

Even the SIOC ontology, a product of the SIOC initiative (Semantically-Interlinked Online Communities) aimed at enabling the integration of online community information, which is described in an award-winning paper from DERI [1], lacks the concept ‘likes’.

So here we offer cito:likes, a property (like all other cito properties) without domain or range constraints, permitting it to be used in a wide variety of situations.

Like it? Click the Like button below!

David Shotton
Silvio Peroni

Reference

[1]     John G. Breslin, Andreas Harth, Uldis Bojars, and Stefan Decker (2005). Towards Semantically-Interlinked Online Communities.  In Proc. ESWC 2005 (A. Gómez-Pérez and J. Euzenat, Eds.); Lecture Notes in Computer Science 3532, pp. 500–514.  doi:10.1007/11431053_34.  Available from http://bit.ly/KQ2iK4.

Five Stars Ontology

To accompany today’s publication in D-Lib Magazine of the article The Five Stars of Online Journal Articles – a framework for article evaluation, highlighted in the previous post, I have also published The Five Stars Ontology, a simple ontology written in OWL 2 DL that forms part of SPAR, a suite of Semantic Publishing and Referencing Ontologies. It is intended for use by publishers and others wishing to encode Five Stars ratings, such as those exemplified in the D-Lib article, in machine-readable form, so they can accompany other machine-readable metadata for the article.  To exemplify this, the following RDF graph, shown in Turtle notation, gives the Five Stars ratings for the D-Lib article itself:

<http://dx.doi.org/10.1045/january2012-shotton>
     fivestars:hasPeerReviewRating "3"^^xsd:nonNegativeInteger ;
     fivestars:peerReviewRatingComment "Post-publication responsive
          peer review of the preprint." ;
     fivestars:hasOpenAccessRating "4"^^xsd:nonNegativeInteger ;
     fivestars:openAccessRatingComment "Gold/libre open access
          without author fee!" ;
     fivestars:hasEnhancedContentRating "1"^^xsd:nonNegativeInteger ;
     fivestars:enhancedContentRatingComment "Plentiful Web links in
          text and to all references. No additional semantic
          enhancement of text." ;
     fivestars:hasAvailableDatasetsRating "0"^^xsd:nonNegativeInteger ;
     fivestars:availableDatasetsRatingComment "Not applicable." ;
     fivestars:hasMachine-readableMetadataRating "1"^^xsd:nonNegativeInteger ;
     fivestars:machine-readableMetadataRatingComment "Structural
          markup in HTML only." ;
     fivestars:hasOverallFiveStarsRating "9"^^xsd:nonNegativeInteger ;
     fivestars:overallFiveStarsRatingComment "The nature of this
          article, being a position paper rather than a research
          paper with primary research data, has influenced the
          overall rating obtained." .

IBRG projects to facilitate data publication and data citation

In the previous post, I outlined reasons why researchers don’t publish data, presented as evidence to the Royal Society’s Policy Study “Science as a Public Enterprise” Call for Evidence.  Here, I summarize activities by members of my Image Bioinformatics Research Group (IBRG) at Oxford University to facilitate data publication and data citation, and thus to help catalyze a cultural shift to a situation in which data publication is as natural a part of research life as is undertaking experiments.

= = =

Data management services and data repositories

We are developing tools and services to assist researchers in their local data management, for their own personal benefit, while facilitating automated data submission to appropriate institutional or subject-specific data repositories, in ways that fit with their normal working practices and impose as little as possible in terms of cognitive overhead – what we term sheer curation.  These include the two-stage data management services we are currently funded to develop by the University Modernization Fund through the JISC DataFlow Project, namely (a) DataStage, a private local data management file system, with automated backup, Web access, and security access control, for use by individual research groups, and (b) DataBank, a cloud-deployable data repository for use by universities, research institutes or large research consortia.  These open source services will be made available for installation by third parties on the Eduserv academic cloud and elsewhere, as required by research groups, institutions and universities both in the UK and internationally.  We seek early adopters!

Curation by addition

For automated data submissions from DataStage to DataBank, that will use the SWORDv2 repository submission protocol to standardize data package ingest, we are intentionally lowering the barriers in terms of metadata requirements for initial data submission, with the possibility of enriching the metadata at a later date – what we call curation by addition – in order to kick-start the cultural sea change required for data deposition to become routine.  We are trying to avoid the best – the requirement for perfect and complete metadata – becoming the enemy of the good – data publication by any means.

Dryad

We are, through the JISC Dryad-UK Project, working to promote the Dryad Data Repository, a domain-specific repository for biological datasets linked to peer-reviewed journal articles, by bringing additional publishers and journals on board, and enabling Dryad metadata to be published as open linked data.

SWORD

We are also promoting the adoption of SWORDv2 repository communication protocol for data package wrapping, to permit automated deposit to DataBank, Dryad or other SWORD-compliant repositories, and the exchange of metadata between them.

SPAR (Semantic Publishing and Referencing) Ontologies

To enable Dryad, DataBank and similar repository metadata to be published as open linked data, we are creating appropriate data description and data citation ontologies, including FaBiO and CiTO4Data, as part of our suite of SPAR Ontologies, and are using them to provide mappings from the DataCite XML Metadata Kernel to RDF.

Data citation

We are working with DataCite to assign DOIs to Dryad and DataBank datasets, so that data publications become citable, gaining academic credit for the data depositor.

These data citations, when they exist, will fit naturally within the Open Citations Corpus, a collection of some 3.4 million bibliographic citations from within PubMed Central that we have recently established as open linked data, as part of the JISC Open Citations Project.

We have also worked to establish best practice for citing data publications from within the literature, and with one open access journal publisher to influence their Data Publishing Policies and Guidelines to Authors regarding data citation, as detailed in earlier posts on this blog.

Tools for metadata curation

The above tools and services are generic.  Specifically in the biomedical area, we are developing MIIDI, a Minimal Information standard for reporting an Infectious Disease Investigation, to specify the metadata that should, for completeness, accompany such an investigation. We have also recently developed MIIDI Forms, a web tool that facilitates the entry of such metadata: it interacts with appropriate web services to enable autocompletion of bibliographic information and the specification of geo-coordinates for place names, and permits automated look-up of ontology terms from the NCBI BioPortal.

Open Research Reports

We are working to create Open Research Reports, open access structured digital abstracts in both human- and machine-readable form that describe datasets or journal articles that relate to infectious disease, based on MIIDI and to be published in an instant data journal format with DOIs to permit referencing and citation.

Tools for creating data management plans

We have recently started working with the Digital Curation Centre to help improve their DMPonline data management planning tool for creating the data management plans increasingly required to accompany grant applications, and useful for managing the flow of data from funded projects.  If our current funding application is successful, this work will be carried forward in the OXFORD DMPonline Project, in which, in addition to adoption, adaptation, customization and integration of the tool for use by University of Oxford researchers, we will develop the following generic improvements to the tool that will be fed back to the DCC as open source enhancements for general use across UK academia and internationally:

a)     creation of DaMO, a simple data management ontology,

b)     use of DaMO to create RDF metadata for data management plans,

c)     SWORDv2-wrapping of data management plans for repository submission, and

d)     creation of DMPBank, a DataBank instance specifically tailored for archiving and publishing data management plans.