
A new revolutionary workflow for a unified collection of citations: say hello to the OpenCitations Index

Blog post by Ivan Heibi (University of Bologna), Arianna Moretti (University of Bologna) and Chiara Di Giambattista (University of Bologna).

In the past five years, OpenCitations data has been enriched with numerous new indexes of open citation data from different sources. However, the quantity and diversity of the ingested information raised several issues, which recently made it essential to completely revise the ingestion workflow. The result is a revolution in the way OpenCitations data is delivered. In this blog post, we explain the context and the challenges raised by the old procedure. We then present the new ingestion workflow, designed to produce just two comprehensive collections: the OpenCitations Index, which collects open citation data, and OpenCitations Meta, which collects open bibliographic metadata.

Once upon a time, there were five OpenCitations indexes…

In 2018, OpenCitations released the kickoff version of its first citation index, COCI (citations from Crossref), which contained around 300 million citation links derived from the subset of reference lists in the Crossref database in which citing and cited entities were identified by Digital Object Identifiers (DOIs). COCI gathered citations with associated metadata in compliance with the recommendations of the Initiative for Open Citations (I4OC) that citation data should be structured, separable, and open. It thus marked a turning point, providing a free and open alternative to earlier sources such as Google Scholar, whose data were freely accessible but not downloadable, and Web of Science or Scopus, which required paid access.

In a short time, COCI became a competitive and trusted index of citation data, used by numerous services and institutional repositories, including B!son and Optimeta. In 2021, COCI was included in a comparative study with the most relevant sources in the landscape, including proprietary ones, which showed its coverage approaching parity with that of the other sources involved in the analysis (Microsoft Academic, Scopus, Dimensions, and Web of Science). At the time of its most recent update in January 2023, COCI counted more than 1.4 billion citations. Several factors lie behind this outstanding number, including Elsevier’s endorsement of the Declaration on Research Assessment (DORA) in December 2020, which led to the open release via Crossref of the reference lists of the articles published in all its journals and confirmed the value of initiatives such as I4OC.

However, before this change of heart, in 2019 OpenCitations had tried to narrow the open citations coverage gap by launching its second index, the Crowdsourced Open Citations Index (CROCI). This index allowed publishers and scholars to contribute directly by uploading crowdsourced open citations into the OpenCitations infrastructure.

In December 2022, a further concrete step towards a genuine plurality of OpenCitations indexes was taken with the ingestion of new data sources into the infrastructure and the publication of the inaugural dumps of DOCI (citations from DataCite) and POCI (citations from PubMed). In June 2023, the first version of the OROCI (citations from OpenAIRE) dump was also released, and JOCI (citations from JALC) is expected to be available by the end of November 2023, for a total of five collections from different sources.

Why a new workflow? The issues of managing multiple sources, and new challenges

While having such a variety and richness of indexes helped present the extent of OpenCitations’ sources, the recent increase in the number of sources and the diversification of the integrated data led to two primary issues:

    1. the necessity to handle the ingestion of new identifier types in a DOI-based software infrastructure, and
    2. the consequent possibility of encountering the same citation expressed by several sources with different identifiers.

Moreover, the need soon became evident to optimize the reuse of the software components already developed, so as to facilitate the metadata crosswalk between the data models of the new sources and the OpenCitations Data Model. The aim was to define a functional and easily extendable workflow that can be reused whenever a new data source is incorporated, and which should be:

    1. sufficiently generic to establish a globally unique procedure; 
    2. customizable enough to capture the necessary information within each of the specific data models and formats. 

As a solution, we decided to use OpenCitations Meta, the new OpenCitations database and tool for managing bibliographic data related to the publications involved in the citations. OpenCitations Meta makes it possible to assign each entity involved in a citation an internal identifier, namely the OpenCitations Meta Identifier (OMID), to which all the persistent identifiers associated with the same publication are redirected.

As a result, the allocation of an OMID for each bibliographic resource also enabled the unambiguous identification of each citation, regardless of the persistent identifier schema originally used by the data source to identify the resources. This approach allowed us to perform data deduplication and finally make all the sources’ contributions converge into a unified index containing all the unique citations managed by OpenCitations, expressed as OMID to OMID citation links.

The revised workflow

The new workflow is based on three main components, and it optimizes the process in terms of both computational cost and flexibility. As shown in Fig. 1, in a preliminary step, source-specific software processes the input dataset – structured according to the source’s data model – and extracts two OpenCitations Data Model compliant data collections in tabular format, one for bibliographic metadata and one for citation data.
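
To give an idea of what this preliminary step produces, here is a minimal sketch in Python. The source record, its field names, the reference list and the simplified column layout are all invented for illustration and do not reflect any specific source’s actual data model or the exact columns of the OpenCitations tables.

import csv

# Hypothetical source record; the field names and the reference list are
# invented for illustration and do not follow any real source's data model.
source_record = {
    "id": "10.1108/jd-12-2013-0166",
    "title": "Setting our bibliographic references free",
    "year": "2015",
    "references": ["10.1038/nature12373", "10.1087/20120404"],
}

# One row per bibliographic entity: a simplified, OCDM-compliant metadata table.
with open("metadata.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "title", "pub_date"])
    writer.writeheader()
    writer.writerow({
        "id": f"doi:{source_record['id']}",
        "title": source_record["title"],
        "pub_date": source_record["year"],
    })

# One row per citation link, still expressed with the source's own identifiers.
with open("citations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["citing_id", "cited_id"])
    for cited in source_record["references"]:
        writer.writerow([f"doi:{source_record['id']}", f"doi:{cited}"])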

The following steps are common to the processing of every dataset.

STEP 1: The bibliographic metadata collection is used as input for the META software. At this stage, we check whether the bibliographic entities have previously been integrated into our infrastructure (coming from other data sources). If so, the existing OMID is also linked to the new alternative identifiers of those bibliographic resources, and any new metadata values are integrated. For entities never previously encountered, a new OMID is minted, uniquely representing that bibliographic resource in OpenCitations. The outputs of the process are: (I) an updated version of the OpenCitations Meta collection, which now also includes the metadata of the bibliographic entities provided by the new source, and (II) a collection of provenance data. An internal database is constantly refreshed to preserve the correspondence between the identifiers and the associated OMIDs.
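
The logic of this step can be pictured with a minimal sketch, in which a plain in-memory dictionary stands in for OpenCitations’ actual internal database; the function name, the OMID pattern and the example values are invented for illustration.

from itertools import count

_omid_counter = count(1)
id_to_omid = {}          # persistent identifier -> OMID (stands in for the internal database)
omid_to_metadata = {}    # OMID -> merged metadata record

def get_or_create_omid(identifiers, metadata):
    """Return the OMID for an entity, reusing an existing one when any of its
    identifiers is already known, and minting a new one otherwise."""
    omid = next((id_to_omid[i] for i in identifiers if i in id_to_omid), None)
    if omid is None:
        omid = f"omid:br/06{next(_omid_counter):04d}"   # invented OMID pattern
        omid_to_metadata[omid] = {}
    # Link the OMID to any newly seen alternative identifiers ...
    for i in identifiers:
        id_to_omid[i] = omid
    # ... and integrate any new metadata values.
    for key, value in metadata.items():
        omid_to_metadata[omid].setdefault(key, value)
    return omid

# Example: the same article arriving first from Crossref, then from PubMed.
get_or_create_omid(["doi:10.1108/jd-12-2013-0166"], {"title": "Setting our bibliographic references free"})
get_or_create_omid(["pmid:123456", "doi:10.1108/jd-12-2013-0166"], {"pub_date": "2015"})
# Both calls resolve to the same OMID; the DOI, the PMID and the merged metadata now point to it.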

STEP 2: Starting from the collection of citations expressed as directional links between identifiers of potentially any type (e.g., DOI-DOI, PMID-PMID, PMC-PMID, etc.), the INDEX software queries the internal database mapping IDs to OMIDs to produce an updated version of the OpenCitations Index: unique citations expressed as OMID-OMID links in different formats, accompanied by their corresponding provenance data.
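
Continuing the sketch, the INDEX step can be seen as a translation pass over the citation table using the ID-to-OMID mapping maintained in STEP 1. Again, the names and values are invented, and the real software also serialises the result in several formats together with its provenance.

from collections import defaultdict

# Stand-in for the ID -> OMID mapping maintained in STEP 1 (values invented).
id_to_omid = {
    "doi:10.1108/jd-12-2013-0166": "omid:br/0601",
    "pmid:123456": "omid:br/0601",
    "doi:10.1038/nature12373": "omid:br/0602",
    "pmid:789012": "omid:br/0602",
}

def build_index(citation_records):
    """Translate (citing_id, cited_id, source) records into unique OMID-OMID
    links, keeping track of every source that asserted each citation."""
    index = defaultdict(set)   # (citing_omid, cited_omid) -> set of source locations
    for citing_id, cited_id, source in citation_records:
        citing_omid, cited_omid = id_to_omid.get(citing_id), id_to_omid.get(cited_id)
        if citing_omid and cited_omid:      # both entities are known to Meta
            index[(citing_omid, cited_omid)].add(source)
    return index

# The same citation arriving from two sources collapses into one OMID-OMID link,
# whose provenance records both origins (cf. prov:atLocation in the dumps).
records = [
    ("doi:10.1108/jd-12-2013-0166", "doi:10.1038/nature12373", "https://api.crossref.org/"),
    ("pmid:123456", "pmid:789012", "https://pubmed.ncbi.nlm.nih.gov/"),
]
print(dict(build_index(records)))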

Fig. 1: An overview of the data ingestion workflow, starting from the source-specific conversion and production of the citation and bibliographic metadata tables, progressing through the META process and the assignment of an OMID identifier to each bibliographic record involved in a citation, and culminating with the publication of the OpenCitations Index collection of unique OMID-OMID citations.

What we have now: The OpenCitations Index 

From now on, OpenCitations will no longer publish a separate index of citation data for each source. Instead, we will publish a single collection of citations into which the contributions from all the sources flow, which we will simply call ‘The OpenCitations Index’. The first version of this unified index of OMID-OMID citations is posted on Figshare. It is provided in RDF, CSV, and Scholix formats, together with a collection of its provenance information in RDF and CSV formats. For each citation, it is possible to trace the source of the information by consulting the provenance data, thanks to the http://www.w3.org/ns/prov#atLocation property, which records the location of each citation.
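
As an illustration of how this provenance can be inspected, the sketch below sends a small SPARQL query asking for the prov:atLocation value of a few provenance snapshots. The endpoint URL and the availability of the provenance data at that endpoint are assumptions to be checked against the current documentation on the OpenCitations website; the same information is contained in the provenance dumps on Figshare.

import requests

# Assumed endpoint; verify the current address on opencitations.net.
SPARQL_ENDPOINT = "https://opencitations.net/index/sparql"

query = """
PREFIX prov: <http://www.w3.org/ns/prov#>
SELECT ?snapshot ?source
WHERE { ?snapshot prov:atLocation ?source }
LIMIT 5
"""

response = requests.get(
    SPARQL_ENDPOINT,
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
)
for binding in response.json()["results"]["bindings"]:
    print(binding["snapshot"]["value"], "->", binding["source"]["value"])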

This new solution simplifies the consultation of the data maintained by our infrastructure without reducing its information content. In addition, by handling deduplication efficiently, the new Index not only provides an accurate count of the unique citations exposed by the framework, but also makes it possible to verify the individual contribution of each source, as well as the data they share (Fig. 2).

Fig. 2: An overview of the number of citations stored in the OpenCitations Index as of October 31, 2023. The diagonal cells in the table (highlighted in yellow) show the unique contribution of each collection to the OpenCitations Index, while the other cells represent the citations that are shared between the collections. More in detail, the green cells show the overall input of each source, while the pink cells represent the number of overlapping citations between two data sources.
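
The kind of accounting shown in Fig. 2 reduces to simple set operations once every source’s citations are expressed as OMID-OMID pairs. The toy sketch below uses invented citation sets and source names purely to illustrate how totals, unique contributions and pairwise overlaps can be computed.

# Toy example: per-source sets of OMID-OMID citation links (values invented).
crossref = {("omid:br/1", "omid:br/2"), ("omid:br/1", "omid:br/3"), ("omid:br/4", "omid:br/2")}
pubmed   = {("omid:br/1", "omid:br/2"), ("omid:br/4", "omid:br/2")}
datacite = {("omid:br/5", "omid:br/6")}

sources = {"Crossref": crossref, "PubMed": pubmed, "DataCite": datacite}

# Overall input of each source (green cells), unique contribution (yellow
# diagonal) and pairwise overlaps (pink cells), as in Fig. 2.
for name, citations in sources.items():
    others = set().union(*(s for n, s in sources.items() if n != name))
    print(f"{name}: total={len(citations)}, unique={len(citations - others)}")
    for other_name, other_citations in sources.items():
        if other_name != name:
            print(f"  overlap with {other_name}: {len(citations & other_citations)}")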

Currently, the Index contains almost 2 billion unique citations. By the end of November, a new version of the collection will be published, including the contribution of the new Japan Link Centre (JaLC) source. 

How to access the OpenCitations Index data

To maximize the reuse of the exposed information and to ensure the greatest possible interoperability, the collection will always be published on Figshare in all formats listed above. In addition, the data will be accessible via an API, a SPARQL endpoint, and a web interface.
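
As a minimal example of programmatic access, the sketch below retrieves the citations pointing to a given DOI through the REST API. The base URL, the operation name and the shape of the response are assumptions based on the API documentation published on the OpenCitations website, which should be consulted for the authoritative details.

import requests

# Assumed v2 API base URL and operation; check the API documentation on
# opencitations.net for the exact address, parameters and response fields.
API_BASE = "https://opencitations.net/index/api/v2"
doi = "10.1186/1756-8722-6-59"   # an example DOI

response = requests.get(f"{API_BASE}/citations/doi:{doi}")
response.raise_for_status()

for citation in response.json():
    # Each record is expected to describe one incoming citation of the requested work.
    print(citation)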

The redesign of the ingestion workflow marks a fundamental step for OpenCitations towards simpler and more intuitive access to our services, while always preserving and improving the quality of our data. If you need further information on how the new workflow works, please visit our website, contact us at contact@opencitations.net, or leave feedback and/or suggestions in the dedicated card on our public roadmap to help us improve our services and communications. Thank you!

Using the ORCID Public API for author disambiguation in the OpenCitations Corpus

Among the external services used, the ORCID Public API is of crucial importance for the task of author disambiguation. During the OCC ingestion workflow, the main metadata of an article are usually retrieved from the Crossref API. While the JSON schema used by Crossref to return the information requested by its APIs includes a field for specifying the ORCID for each of the authors of an article, this field is usually blank, since such information is commonly not available in the data provided by publishers. We therefore routinely use the ORCID Public API to try to retrieve ORCIDs for all authors and editors named in the Crossref metadata for a given DOI.

The process is organised as follows. Once we get back from Crossref the metadata about an article, we call the ORCID Public API and search for ORCIDs associated with the family names, returned by Crossref, of all the authors and editors (‘agents’) of that particular DOI. For instance, using the Crossref metadata about the article with DOI “10.1108/jd-12-2013-0166” (API call: https://api.crossref.org/works/10.1108/jd-12-2013-0166), we extract all the agents’ family names and call the ORCID Public API as follows:

https://pub.orcid.org/v2.1/search?q=(doi-self:10.1108/JD-12-2013-0166%20OR%20doi-self:10.1108/jd-12-2013-0166)%20AND%20(family-name:Peroni%20OR%20family-name:Dutton%20OR%20family-name:Gray%20OR%20family-name:Shotton)

The result of this query returned by ORCID is as follows:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<search:search num-found="2" 
  xmlns:search="http://www.orcid.org/ns/search" 
  xmlns:common="http://www.orcid.org/ns/common">
  <search:result>
    <common:orcid-identifier>
      <common:uri>https://orcid.org/0000-0003-0530-4305</common:uri>
      <common:path>0000-0003-0530-4305</common:path>
      <common:host>orcid.org</common:host>
    </common:orcid-identifier>
  </search:result>
  <search:result>
    <common:orcid-identifier>
      <common:uri>https://orcid.org/0000-0003-1448-3114</common:uri>
      <common:path>0000-0003-1448-3114</common:path>
      <common:host>orcid.org</common:host>
    </common:orcid-identifier>
  </search:result>
</search:search>

Then, for each ORCID returned, we call the ORCID Public API again (shown here for ORCID “0000-0003-0530-4305”) so as to retrieve the full personal details of the agent with that ORCID:

https://pub.orcid.org/v2.1/0000-0003-0530-4305/personal-details

The result of this query is shown as follows:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<personal-details:personal-details
  path="/0000-0003-0530-4305/personal-details"
  xmlns:personal-details="http://www.orcid.org/ns/personal-details"
  ...>
  <personal-details:name 
    visibility="public" path="0000-0003-0530-4305">
    ...
    <personal-details:given-names>
      Silvio
    </personal-details:given-names>
    <personal-details:family-name>
      Peroni
    </personal-details:family-name>
  </personal-details:name>
  ...
</personal-details:personal-details>

Then, two possible alternative situations exist:

  • If the OpenCitations Corpus has already recorded the personal details and ORCID of that agent, we associate that agent with the new bibliographic resource identified by the input DOI; otherwise,
  • if the personal details and ORCID of that agent have not been previously recorded in the OpenCitations Corpus, we create a new agent record with that ORCID as an external identifier, specified by means of the DataCite Ontology, and we associate this new agent with the new bibliographic resource identified by the input DOI.

This process is repeated for all the ORCIDs associated with that DOI.
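
For readers who wish to experiment, here is a condensed sketch of the lookup process described above, using the public Crossref API and the ORCID Public API v2.1. The helper functions and their names are ours, and the parsing is reduced to the bare minimum compared with the actual OCC software.

import requests
import xml.etree.ElementTree as ET

COMMON_NS = "{http://www.orcid.org/ns/common}"
PERSONAL_NS = "{http://www.orcid.org/ns/personal-details}"

def orcids_for_doi(doi):
    """Return the ORCID iDs found for the agents (authors/editors) of a DOI."""
    # 1. Get the article metadata from Crossref and collect the family names.
    work = requests.get(f"https://api.crossref.org/works/{doi}").json()["message"]
    agents = work.get("author", []) + work.get("editor", [])
    families = [a["family"] for a in agents if "family" in a]

    # 2. Search ORCID for records that claim this DOI and one of those family names.
    doi_clause = f"doi-self:{doi.upper()} OR doi-self:{doi.lower()}"
    name_clause = " OR ".join(f"family-name:{name}" for name in families)
    search = requests.get(
        "https://pub.orcid.org/v2.1/search",
        params={"q": f"({doi_clause}) AND ({name_clause})"},
        headers={"Accept": "application/xml"},
    )
    tree = ET.fromstring(search.content)
    return [e.text for e in tree.iter(f"{COMMON_NS}path")]

def personal_details(orcid):
    """Fetch the public given and family name for an ORCID iD."""
    xml = requests.get(
        f"https://pub.orcid.org/v2.1/{orcid}/personal-details",
        headers={"Accept": "application/xml"},
    ).content
    tree = ET.fromstring(xml)
    given = tree.find(f".//{PERSONAL_NS}given-names")
    family = tree.find(f".//{PERSONAL_NS}family-name")
    return (given.text.strip() if given is not None else None,
            family.text.strip() if family is not None else None)

# Example mirroring the calls shown above.
for orcid in orcids_for_doi("10.1108/jd-12-2013-0166"):
    print(orcid, personal_details(orcid))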

Software reuse in different applications

While the OCC ingestion workflow explained above regulates the ingestion of new citation data directly into the OpenCitations Corpus, the software library that implements this ingestion is generic in form and is being reused in another application that we have recently released as a prototype, namely BCite (sources available on GitHub). BCite is a Web application that enables users such as journal editors, starting with the ‘raw’ reference text strings supplied by an author as items in an article’s reference list, to obtain ‘clean’, verified and enriched bibliographic reference text strings for inclusion in the reference list of the citing article they have in hand, so that accurate rather than erroneous references can be published in the version of record. At the same time, these references are transformed into RDF data compliant with the OpenCitations Data Model, including ORCIDs where available, thereby (in principle, although not yet in practice) permitting the inclusion of the metadata for these cited works, and the citations for which they are the targets, into the OpenCitations Corpus itself.