
A new revolutionary workflow for a unified collection of citations: say hello to the OpenCitations Index

Blog post by Ivan Heibi (University of Bologna), Arianna Moretti (University of Bologna) and Chiara Di Giambattista (University of Bologna).

In the past five years, OpenCitations data has been enriched with numerous new indexes of open citation data from different sources. However, the quantity and diversity of the ingested information have raised several issues, which recently made it essential to conduct a complete revision of the ingestion workflow. The result was a revolution in the way OpenCitations data is delivered. In this blog post, we will explain the context and the challenges raised by the old procedure. Then, we will present the new ingestion workflow, designed to produce just two comprehensive collections: the OpenCitations Index, which collects open citation data, and OpenCitations Meta, which collects open bibliographic metadata.

Once upon a time, there were five OpenCitations indexes…

In 2018, OpenCitations released the initial version of its first citation index, COCI (citations from Crossref), which contained around 300 million citation links derived from the subset of the reference lists in the Crossref database in which both citing and cited entities were identified by Digital Object Identifiers (DOIs). COCI gathered citations with their associated metadata in compliance with the recommendations of the Initiative for Open Citations (I4OC) that citation data should be structured, separable, and open. It thus marked a turning point by providing a free and open alternative to earlier sources such as Google Scholar, whose data were freely accessible but not downloadable, and Web of Science or Scopus, which required paid access.

In a short time, COCI became a competitive and trusted index of citation data, used by numerous institutional repositories, including B!son and Optimeta. In 2021, COCI was included in a comparative study with the most relevant sources in the landscape, including proprietary ones, which showed its coverage approaching parity with that of the other sources analysed (Microsoft Academic, Scopus, Dimensions, and Web of Science). At the time of its most recent update in January 2023, COCI counted more than 1.4 billion citations. Several factors lie behind this outstanding number, including Elsevier’s endorsement of the Declaration on Research Assessment (DORA) in December 2020, which led to the open release via Crossref of the reference lists of the articles published in all its journals and confirmed the value of initiatives such as the Initiative for Open Citations (I4OC).

However, before this change of heart, in 2019 OpenCitations had tried to narrow the open citations coverage gap by launching its second index, the Crowdsourced Open Citations Index (CROCI). This index allowed publishers and scholars to contribute directly by uploading crowdsourced open citations into the OpenCitations infrastructure.

In December 2022, a new concrete step towards a genuine plurality of OpenCitations indexes was taken with the ingestion of new data sources into the infrastructure and the publication of the inaugural dumps of DOCI (citations from DataCite) and POCI (citations from PubMed). In June 2023, the first version of the OROCI (citations from OpenAIRE) dump was released too, and JOCI (citations from JALC) is expected to be available by the end of November 2023, for a total of five collections from different sources.

Why a new workflow? The issues with multiple sources management and new challenges

While having such a variety and richness of indexes helped present the extent of OpenCitations sources, the recent increase in the number of sources and the diversification of the integrated data led to two primary issues:

    1. the necessity to handle the ingestion of new identifier types in a DOI-based software infrastructure, and
    2. the consequent possibility of encountering the same citation expressed by several sources with different identifiers.

Moreover, it soon became evident that we needed to optimize the reuse of already developed software components to facilitate the metadata crosswalk between each new source’s data model and the OpenCitations Data Model. The aim was to define a functional and easily extendable workflow that could be reused when incorporating new data sources, and that should be:

    1. sufficiently generic to establish a globally unique procedure; 
    2. customizable enough to capture the necessary information within each of the specific data models and formats. 

As a solution, we decided to use OpenCitations Meta, the new OpenCitations database and tool for managing bibliographic data related to the publications involved in the citations. OpenCitations Meta makes it possible to assign each entity involved in a citation an internal identifier, namely the OpenCitations Meta Identifier (OMID), to which all the associated persistent identifiers of the same publication are redirected.

As a result, the allocation of an OMID for each bibliographic resource also enabled the unambiguous identification of each citation, regardless of the persistent identifier schema originally used by the data source to identify the resources. This approach allowed us to perform data deduplication and finally make all the sources’ contributions converge into a unified index containing all the unique citations managed by OpenCitations, expressed as OMID to OMID citation links.

The revised workflow

The new workflow is based on three main components, which optimize the process in terms of both computational cost and flexibility. As shown in Fig. 1, in a preliminary step, source-specific software processes the input dataset – structured according to the source data model – and extracts two OpenCitations Data Model compliant collections in tabular format, one for bibliographic metadata and one for citation data.

The following steps are common to the process of each dataset.  

STEP 1: The bibliographic metadata collection is used as input for the META software. At this stage, the software checks whether the bibliographic entities have previously been integrated into our infrastructure (e.g., coming from other data sources). If so, the existing OMID is also linked to the new alternative identifiers of those bibliographic resources, and any new metadata values are integrated as well. For entities never previously encountered, a new OMID is produced, uniquely representing the bibliographic resource in OpenCitations. The outputs of the process are: (I) an updated version of the OpenCitations Meta collection, which also includes the metadata of the bibliographic entities provided by the new source, and (II) a collection of provenance data. An internal database is constantly refreshed to preserve the correspondence between external identifiers and their associated OMIDs.

STEP 2: Starting from the collection of citations expressed as directional links between identifiers of potentially any type (e.g., DOI-DOI, PMID-PMID, PMC-PMID, etc.), the INDEX software queries the internal database mapping IDs to OMIDs to produce an updated version of the OpenCitations Index: unique citations expressed as OMID-OMID links in different formats, accompanied by their corresponding provenance data.

Fig. 1: An overview of the data ingestion workflow, starting from the source-specific conversion and production of citation and bibliographic metadata tables, progressing through the META process and the assignment of an OMID to each bibliographic record involved in a citation, and culminating in the publication of the OpenCitations Index collection of OMID-OMID unique citations.
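To make the two steps above concrete, here is a minimal Python sketch of the deduplication logic, under the assumption that an in-memory dictionary can stand in for the internal identifier-to-OMID database. It is an illustrative approximation only, not the actual OpenCitations software, and all names (assign_omid, build_index, id_to_omid) are hypothetical.

```python
from itertools import count

_omid_counter = count(1)
id_to_omid = {}  # stands in for the internal database mapping any known identifier to its OMID

def assign_omid(identifiers):
    """Return the OMID for a bibliographic resource, reusing an existing one
    if any of its identifiers (DOI, PMID, ...) has been seen before."""
    for pid in identifiers:
        if pid in id_to_omid:
            omid = id_to_omid[pid]
            break
    else:
        # entity never previously encountered: mint a new OMID
        omid = f"omid:br/{next(_omid_counter)}"
    for pid in identifiers:
        id_to_omid[pid] = omid  # link every alternative identifier to the same OMID
    return omid

def build_index(citations):
    """Convert (citing_id, cited_id) pairs into deduplicated OMID-OMID links."""
    return {(id_to_omid[citing], id_to_omid[cited])
            for citing, cited in citations
            if citing in id_to_omid and cited in id_to_omid}

# The same citation arriving from two sources under different identifier schemes
assign_omid(["doi:10.1371/journal.pcbi.1000361"])
assign_omid(["doi:10.1108/jd-12-2013-0166", "pmid:0000000"])  # made-up PMID, for illustration only
print(build_index([
    ("doi:10.1108/jd-12-2013-0166", "doi:10.1371/journal.pcbi.1000361"),
    ("pmid:0000000", "doi:10.1371/journal.pcbi.1000361"),
]))  # a single OMID-OMID citation link, despite two differently expressed inputs
```

In the real workflow the mapping is persisted and updated incrementally, and both outputs are accompanied by provenance data, but the deduplication principle is the same.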

What we have now: The OpenCitations Index 

From now on, OpenCitations will no longer publish a separate index of citation data for each source. Instead, we will publish a single collection of citations into which the contributions from each of the sources will flow, which we will simply call ‘The OpenCitations Index’. The first version of this unified index of OMID-OMID citations is posted on Figshare. It was produced in RDF, CSV, and Scholix formats, together with a collection of its provenance information, provided in RDF and CSV formats. For each citation, it is possible to trace the source of the information by consulting the provenance data collection, thanks to the http://www.w3.org/ns/prov#atLocation property, which defines the location of each citation.

This new solution has the benefit of simplifying the consultation of the data maintained by our infrastructure without reducing the information content. In addition, by including efficient handling of the deduplication problem, the new Index not only provides accurate data on the exact number of unique citations exposed by the framework but also verifies the individual contribution of each source, as well as their overlapping data (Fig. 2).

Fig. 2: An overview of the number of citations stored in the OpenCitations Index as of October 31, 2023. The diagonal cells in the table (highlighted in yellow) show the unique contribution of each collection to the OpenCitations Index, while the other cells represent the citations that are shared between the collections. More in detail, the green cells show the overall input of each source, while the pink cells represent the number of overlapping citations between two data sources.

Currently, the Index contains almost 2 billion unique citations. By the end of November, a new version of the collection will be published, including the contribution of the new Japan Link Centre (JaLC) source. 

How to access the OpenCitations Index data

To maximize the reuse of the exposed information and to ensure the greatest possible interoperability, the collection will always be published on Figshare in all formats listed above. In addition, the data will be accessible via an API, a SPARQL endpoint, and a web interface.

The redesign of the ingestion workflow marks a fundamental step for OpenCitations towards a more intuitive and simple access to our services while always preserving and improving the quality of our data. If you need further information on how the new workflow works, please visit our website, contact us at contact@opencitations.net  or leave feedback and/or suggestions in the dedicated card on our public roadmap to help us improve our services and communications. Thank you!

The OpenCitations blog posts are now archived on Rogue Scholar with DOIs

Last April, Martin Fenner launched Rogue Scholar, an archive of science blogs that aims to index the full text of blog posts, provide full-text search, and register DOIs and metadata for all posts. Rogue Scholar works with all blogging platforms that publish scholarly content and have an RSS or Atom feed with full-text content distributed under a Creative Commons Attribution (CC-BY 4.0) license.

Rogue Scholar currently features 40 blogs, including the OpenCitations blog, with more than 1,000 blog posts available via full-text search and with DOIs linking to the original posts. By exploring the OpenCitations blog’s profile on Rogue Scholar, you will find the latest 40 posts with summaries (derived from the information in the RSS feed), and by clicking on a DOI you will be redirected to the full post on the blog. The DOI metadata include abstract, language, license, and (OECD Fields of Science) subject category for all posts.

Rogue Scholar is growing day by day, and the increasing involvement of science blogs – from different disciplines and in various languages – demonstrates that a central archive of science blogs with full-text content and DOIs for all blog posts, with relevant metadata, is feasible, making an important contribution to open scholarly infrastructure.

Thanks to a recently launched Mastodon instance at Rogue Scholar Social, the OpenCitations blog now has its own Mastodon feed, where you can keep up to date with the latest posts of the OpenCitations blog by finding summaries and the related DOIs linking to the full posts – and you can also boost and comment, of course! Please follow us at https://rogue-scholar.social/@opencitations so as not to miss anything.

If you are managing a science blog and are interested in adding it to the Rogue Scholar archive, or if you are simply interested in the topic of metadata for scholarly blogs, please contact info@front-matter.io.

 

Introducing InTRePIDs – In-Text Reference Pointer Identifiers

Rationale

Readers of this blog will be familiar with Open Citation Identifiers (OCIs), described in an earlier post and formally defined in [1]. OCIs enable bibliographic citations, treated as first class information entities, to be uniquely identified and referenced, and are used to identify the >624 million individual citations indexed in the latest release of COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations, as described in a recent post.

However, COCI and similar citation indexes do not provide any information about where within the citing paper a citation is generated, the textual contexts of the in-text reference pointers, or the reasons for including different in-text reference pointers denoting the same reference at different points within the text.

As explained in the preceding post describing the Open Biomedical Citations in Context Corpus funded by the Wellcome Trust and under development by OpenCitations, deep citation analysis requires a more nuanced approach to citations, which acknowledges that each in-text reference pointer that denotes a bibliographic reference in the reference list of a citing publication instantiates its own citation, as shown in Figure 1.

Figure 1. Citations between a citing paper and a cited paper instantiated both by the inclusion of a bibliographic reference within the reference list of the citing paper and by the inclusion within the text of the citing paper of one or more in-text reference pointers denoting that reference.

The pointer citations clearly involve the same cited publication as does the reference citation itself, but each has its own unique characteristics: the location and textual context of its in-text reference pointer within the text of the citing publication, and its particular rhetorical function which is determined by that context.

If the reference citation is open (as defined in [2]) and identified by an OCI, each in-text reference pointer related to that citation can be identified uniquely using an In-Text Reference Pointer Identifier (InTRePID).

InTRePIDs facilitate in-depth scholarship on in-text reference pointer locations and citation functions, and fine-grained analysis of the relationships between publications, by making it possible

  • to identify each in-text reference pointer with a unique PID,
  • to distinguish references that are cited only once from those that are cited multiple times,
  • to see which references are cited together (e.g. in the same sentence or within an in-text reference pointer list),
  • to determine from which section(s) of the article references are cited (e.g. Introduction, Methods, Discussion), and, potentially,
  • to determine the rhetorical function of the citations from analysis of their textual contexts, by the application of natural language processing, machine learning and artificial intelligence techniques to conduct sentiment analysis on the citation contexts.

Definition of an InTRePID

An InTRePID is composed of two parts separated by an oblique stroke:

intrepid:<oci-numerals>/<ordinal>-<total>

where

  • <oci-numerals> is the numerical part of the OCI uniquely identifying the particular open citation to which the in-text reference pointer and its denoted bibliographic reference relate. Thus an InTRePID can be assigned for any in-text reference pointer that relates to an open citation for which a valid OCI has been assigned;
  • <ordinal> identifies the nth occurrence of an in-text reference pointer within the text of the citing paper relating to that citation; and
  • <total> defines the total number of in-text reference pointers denoting that bibliographic reference within the citing paper.

For example, intrepid:070433-070475/4-6 is a valid InTRePID for an in-text reference pointer defined within the OpenCitations Citations in Context Corpus.

A formal definition document for the InTRePID is given in [3].
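The composition rule above can be illustrated with a few lines of Python. This is a purely illustrative sketch: the two functions are hypothetical and are not part of any OpenCitations tool.

```python
def make_intrepid(oci_numerals: str, ordinal: int, total: int) -> str:
    """Compose an InTRePID from the numerical part of an OCI, the ordinal
    position of the in-text reference pointer, and the total number of
    pointers denoting that reference in the citing paper."""
    return f"intrepid:{oci_numerals}/{ordinal}-{total}"

def parse_intrepid(intrepid: str):
    """Split an InTRePID back into its three components."""
    oci_numerals, pointer = intrepid.split(":", 1)[1].split("/")
    ordinal, total = (int(n) for n in pointer.split("-"))
    return oci_numerals, ordinal, total

print(make_intrepid("070433-070475", 4, 6))          # intrepid:070433-070475/4-6
print(parse_intrepid("intrepid:070433-070475/4-6"))  # ('070433-070475', 4, 6)
```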

Exemplar in-text reference pointers

Consider the following citing paper:

Zou, J. et al. (2020). Phenotypic and genotypic correlates of penicillin susceptibility in nontoxigenic Corynebacterium diphtheriae, British Columbia, Canada, 2015–2018. Emerging Infectious Diseases, 26: 97-103. https://doi.org/10.3201/eid2601.191241

This paper contains six in-text reference pointers denoting Reference 13 in the reference list:

13. Lowe, C. et al. (2011). Cutaneous diphtheria in the urban poor population of Vancouver, British Columbia, Canada: a 10-year review. J. Clinical Microbiology 49: 2664-2666. https://doi.org/10.1128/JCM.00362-11

The InTRePIDs for these pointers are recorded within the OpenCitations Biomedical Citations in Context Corpus, together with the corpus identifiers and DOIs of the citing and cited papers, as shown in the excerpt presented in Figure 2.

Figure 2. An excerpt from the OpenCitations Biomedical Citations in Context Corpus, showing highlighted the InTRePIDs for the six in-text reference pointers within Zou, J. et al. (2020) denoting Reference 13, the reference to Lowe, C. et al. (2011), together with the internal corpus identifiers for each in-text reference pointer, and the corpus identifiers and DOIs for the citing and cited papers.

Of these six in-text reference pointers, which bear InTRePIDs from intrepid:070433-070475/1-6 to intrepid:070433-070475/6-6, the first and the fourth have been chosen as examples. They are given below together with their document locations, their embedding sentences (containing their in-text reference pointer lists), and their InTRePIDs:

Introduction. “Nontoxigenic strains have been shown to have epidemic potential, causing infections in persons afflicted by homelessness, alcohol abuse, and injection drug use (9,13–15).” (intrepid:070433-070475/1-6)

Discussion. “We also noted ST5 and ST32 in our review from downtown Vancouver during 1998–2007 (13).” (intrepid:070433-070475/4-6)

The first of these discusses those people most susceptible to diphtheria infection, while the other discusses which multilocus sequence types (STs) of C. diphtheriae were found, thus relating to the organism causing the infection rather than to the infected individuals. The rhetorical function of these two in-text reference pointers is quite distinct.

To permit this information to be recorded within the OpenCitations Citations in Context Corpus, extensions were required to the OpenCitations Data Model, a new extended version of which was recently published [4], as described in a related blog post.

The OpenCitations InTRePID Resolution Service

To support the use of InTRePIDs to identify in-text reference pointers, OpenCitations has recently developed an InTRePID Resolution Service (currently in ‘beta’ in its development cycle), which is running at http://opencitations.net/intrepid. A screenshot of this service is shown in Figure 3.

Figure 3. A screenshot of the user interface of the InTRePID Resolution Service.

In addition to using the Web user interface shown in Figure 3, InTRePIDs can be entered into this resolution service in the form of resolvable URIs, e.g.

http://opencitations.net/intrepid/070433-070475/4-6

As shown in Figure 4, the OpenCitations InTRePID Resolution service returns metadata concerning the in-text reference pointer identified by the InTRePID, and the bibliographic reference that it denotes, from which further information about the citation and the citing and cited publications may be obtained by following the links provided.

Figure 4. A screenshot of the Web page displaying metadata returned by the InTRePID Resolution Service.

Note that as well as rendering this information in HTML on a web page, the resolution service can also provide it in a variety of machine-readable formats.

Conclusion

InTRePIDs, which enable the identification of individual in-text reference pointers, and the InTRePID Resolution Service are new offerings from OpenCitations that will facilitate scholarship on the textual contexts and rhetorical functions of such in-text reference pointers, and on the citations that they instantiate.

InTRePIDs were first announced on 30th January 2020 at PIDapalooza 2020 in Lisbon, the Open Festival of Persistent Identifiers.

References

[1] Silvio Peroni and David Shotton (2019): Open Citation Identifier: Definition. Figshare. https://doi.org/10.6084/m9.figshare.7127816.v2

[2] Silvio Peroni and David Shotton (2018). Open Citation: Definition. Figshare. https://doi.org/10.6084/m9.figshare.6683855

[3] David Shotton, Marilena Daquino and Silvio Peroni (2020). In-Text Reference Pointer Identifier: Definition. Figshare. https://doi.org/10.6084/m9.figshare.11674032

[4] Marilena Daquino, Silvio Peroni and David Shotton (2019). The OpenCitations Data Model. Version 2.0. Figshare. https://doi.org/10.6084/m9.figshare.3443876

Citations as First-Class Data Entities: Open Citation Identifiers

Requirements for citations to be treated as First-Class Data Entities

In my introductory blog post, I listed five requirements for the treatment of citations as first-class data entities.  The fourth of these requirements is that they must be identifiable using a global persistent identifier scheme.

At the recent PIDapalooza Conference on persistent identifiers, held in Girona, Spain, I launched the Open Citation Identifier (abbreviated OCI, in line with DOI), the new persistent identifier for citations [1].

In this post, I describe the Open Citation Identifier scheme, created and operated by OpenCitations, which supports the assignment of Open Citation Identifiers not only to the citations present in the OpenCitations Corpus (OCC) but also to open citations present in other bibliographic databases.

Structure and syntax of the Open Citation Identifier

Each OCI has a simple structure: oci:number-number, where “oci:” is the identifier prefix.

OCIs for citations stored within the OpenCitations Corpus are constructed by combining the OpenCitations Corpus local identifiers for the citing and cited bibliographic resources, separating them with a dash.  (For definition of OCC local identifiers, see the OpenCitations Data Model).

For example, oci:2544384-7295288 is a valid OCI for the citation between two papers stored within the OpenCitations Corpus, the first number being the OCC local identifier for the citing bibliographic resource [2], and the second being the OCC local identifier for the cited bibliographic resource [3], these bibliographic resource local identifiers being unique within the OCC. [Note: Supplier prefixes are omitted from OCC local identifiers of bibliographic resources ingested into the OpenCitations Corpus prior to February 2018, but will be included within all OCC local identifiers of bibliographic resources ingested into the Corpus after that date.]

OCIs for external resources identified by numerical identifiers

OCIs can also be created for bibliographic resources described in an external bibliographic database, if they are similarly identified there by identifiers having a unique numerical part. For example, the OCI for the citation that exists between Wikidata resources Q27931310 (the citing resource, [4]) and Q22252312 (the cited resource, [5]) is oci:01027931310-01022252312, where “010” is the assigned OCC supplier prefix for Wikidata.

The OCC supplier prefix consists of a positive number (following the pattern “nnn”, where “nnn” is a string of numerals of variable length that includes no zeros), enclosed between two zeros (e.g. “0420”). The list of all assigned OCC supplier prefixes is given at https://github.com/opencitations/oci/blob/master/suppliers.csv.
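For illustration, the prefix pattern just described can be checked with a simple regular expression. This is a sketch derived from the definition above, not an official validator.

```python
import re

# An OCC supplier prefix is a positive number containing no zeros,
# enclosed between two zeros, e.g. "010" (Wikidata) or "0420".
SUPPLIER_PREFIX = re.compile(r"^0[1-9]+0$")

for prefix in ("010", "0420", "0100"):
    print(prefix, bool(SUPPLIER_PREFIX.match(prefix)))  # "0100" fails: its inner part contains a zero
```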

OCIs for citations between resources identified by DOIs

OCIs can also be created for bibliographic resources described in external bibliographic databases such as Crossref or DataCite, where they are identified by alphanumeric Digital Object Identifiers (DOIs) rather than purely numerical strings.

To achieve this, each case-insensitive DOI is first normalized to lower-case letters. Then, after omitting the initial “doi:10.” prefix, the alphanumeric string of the DOI is converted reversibly to a purely numerical string using the simple two-numeral lookup table for numerals, lower-case letters and other characters presented at https://github.com/opencitations/oci/blob/master/lookup.csv. For example, using this lookup table, “1” becomes “01”, “2” becomes “02”, “a” becomes “10”, “b” becomes “11”, and “/” becomes “36”. To the resulting number, the appropriate OCC supplier prefix is then added, to clearly identify its provenance.

A citation documented in Crossref exists between the two publications [3] and [6], which are there identified by the DOIs doi:10.1108/jd-12-2013-0166 and doi:10.1371/journal.pcbi.1000361.  We can thus create an OCI for this Crossref citation by using numerical representations of the two DOIs. These numerical representations are:

0200101000836191363010263020001036300010606

and

02001030701361924302723102137251211183701000000030601

where the initial “020” in each case is the assigned OCC supplier prefix for Crossref.

From these two numerical representations of DOIs, the OCI for the Crossref citation between these two papers is easily constructed, and is:

oci:0200101000836191363010263020001036300010606-02001030701361924302723102137251211183701000000030601

While this is long for an identifier, it should be remembered that it will be processed computationally, and is not intended for human readability.
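As an illustration of this encoding, here is a short Python sketch. The lookup entries below are reconstructed from the character mappings quoted above and from the worked example, not taken from the official lookup.csv file, and the function names are purely illustrative.

```python
import string

# Partial lookup reconstructed from the examples in this post; the complete,
# authoritative table is lookup.csv in the opencitations/oci repository.
LOOKUP = {d: f"0{d}" for d in string.digits}                                   # "0"->"00", ..., "9"->"09"
LOOKUP.update({c: str(10 + i) for i, c in enumerate(string.ascii_lowercase)})  # "a"->"10", "b"->"11", ...
LOOKUP.update({"/": "36", ".": "37", "-": "63"})

def doi_to_numerals(doi: str, supplier_prefix: str = "020") -> str:
    """Encode a (case-insensitive) DOI as a numeric string, prepending the
    supplier prefix ("020" = Crossref)."""
    suffix = doi.lower()
    if suffix.startswith("doi:10."):
        suffix = suffix[len("doi:10."):]
    return supplier_prefix + "".join(LOOKUP[c] for c in suffix)

def make_oci(citing_doi: str, cited_doi: str) -> str:
    return "oci:" + doi_to_numerals(citing_doi) + "-" + doi_to_numerals(cited_doi)

print(make_oci("doi:10.1108/jd-12-2013-0166", "doi:10.1371/journal.pcbi.1000361"))
# oci:0200101000836191363010263020001036300010606-02001030701361924302723102137251211183701000000030601
```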

In this way, Crossref OCIs can be assigned to all ~350 million open references within Crossref in which the cited paper as well as the citing paper has a DOI [7].

OCIs for the same citation recorded within different databases

If a citation is recorded in more than one bibliographic database, a separate OCI can be created for each instance, each OCI having a distinct supplier prefix and being specific to that database.

Thus, in addition to the Crossref OCI created from DOIs and described above for the citation from [3] to [6], a Wikidata OCI exists for the same citation recorded within Wikidata, having the form oci:01024260641-01021092566.

Upon resolution of an OCI, the Open Citation Identifier Resolution Service will pull metadata only from the database specified by the supplier prefix of the OCI.  Details of the Open Citation Identifier Resolution Service are given in the next blog post.

It is important to note that an OCI can only be used to specify a citation between a citing and a cited publication which is actually recorded within a bibliographic database.  For this reason, the OCI “oci:7295288-3962641” shown below the second diagram in the introductory blog post to this series is presently invalid.  While the OpenCitations Corpus has metadata describing both bibliographic resources [3] and [6], it has not yet ingested the reference list for the first bibliographic resource [3] (which has the OCC local identifier 7295288), having information about it only from a reference within a third paper, with no information about the references [3] itself contains.  As a result, at present OCC has no record that a citation actually exists between [3] and the second bibliographic resource [6] (which has the OCC local identifier 3962641).

Representing OCIs in RDF

To permit the description of OCIs in RDF, “oci” has been added as a new member of the class datacite:ResourceIdentifierScheme within the DataCite Ontology.

The resolvable URL for any citation identified by an OCI has the form “https://w3id.org/oc/virtual/ci/nnn-mmm”, where nnn-mmm represents the OCI with its “oci:” prefix removed. Currently, we are able to return the RDF description of all the citations contained in the OpenCitations Corpus and Wikidata. We are working to extend the coverage so as to include other datasets, e.g. Crossref.

References

[1]     David Shotton (2018). Citations as first-class data entities. Open Citation Identifiers.  Conference presentation. PIDapalooza 2018, Girona, 23-23 January 2018. https://doi.org/10.6084/m9.figshare.5844972

[2]     Armen Yuri Gasparyan, Marlen Yessirkepov et al. (2015). Preserving the integrity of citations and references by all stakeholders of science communication.  J. Korean Med. Sci. 30:1545-1552. (English.)  https://doi.org/10.3346/jkms.2015.30.11.1545

[3]     Silvio Peroni, Alexander Dutton, Tanya Gray and David Shotton (2015). Setting our bibliographic references free: towards open citation data. Journal of Documentation, 71 (2): 253-277.  https://doi.org/10.1108/jd-12-2013-0166

[4]     Daniel K. Bricker, Eric B. Taylor et al. (2012). A Mitochondrial Pyruvate Carrier Required for Pyruvate Uptake in Yeast, Drosophila, and Humans. Science 337: 96-100.
https://doi.org/10.1126/science.1218099

[5]     Douglas Hanahan and Robert A. Weinberg (2011). Hallmarks of cancer: the next generation.  Cell 144: 646–674.  https://doi.org/10.1016/j.cell.2011.02.013

[6]     David Shotton, Katie Portwin, Graham Klyne and Alistair Miles (2009).  Adventures in semantic publishing: exemplar semantic enhancement of a research article. PLoS Computational Biology 5: e1000361. http://dx.doi.org/10.1371/journal.pcbi.1000361

[7]     Daniel Ecer (2017). Crossref Data Notebook (updated). Available at https://elifesci.org/crossref-data-notebook

 

Pensoft Journals policy and author guidelines on data publication and citation

In a recent blog post, Heather Piwowar, in discussing the advantages of citing datasets in the reference list of the article, said “No journals have standardized on this approach so far”. However, Pensoft Journals, a publisher that specializes in publishing biodiversity and biological systematics papers, and that has taken the lead in promoting the publication of datasets with DOIs, has exactly such a policy.

Recently, in response to my Data Citation Best Practice Discussion Document [1] discussed in the preceding blog post, I was invited to work with Pensoft Journals to contribute to and help revise their now-published Data Publishing Policies and Guidelines for Biodiversity Data [2].  This 34-page paper has a three-page section on how to cite data in Pensoft Journals.

While recognising that citations of GenBank and similar bioinformatics datasets are customarily made by placing the database accession number somewhere in the text, with no entry in the reference list of the article, we make the following generic recommendation:

“Data citations may relate either to the author’s own data, or to data created and published by others (“third-party data”). In the former case, the dataset may have been previously published, or may be published for the first time in association with the article that is now citing it. All these types of data should, for consistency, be cited in the same manner.

“As is the norm when citing another research article, any citation of a data publication, including a citation of one’s own data, should always have two components:

  • An in-text citation statement containing an in-text reference pointer that directs the reader to a formal data reference in the paper’s reference list.

and

  • A formal data reference within the article’s reference list.

“The data reference in the article’s reference list should contain the minimal components recommended in the DataCite Metadata Kernel v2.0 specification. In DataCite terms: Creator PublicationYear Title Publisher Identifier; alternatively (but meaning the same thing): Author PublicationYear Title DataRepositoryName DOI. These components should be presented in whatever format and punctuation style the journal specifies for its references. The following example demonstrates in general terms what is required.

“In-text citation:

This paper uses data from the [name] data repository at http://dx.doi.org/***** (Jones et al. 2008a), first described in Jones et al. 2008b.

“Data reference in reference list:

Jones A, Bloggs B, Smith C (2008a). Title of data package. Repository name. doi:*****.

“Article reference in reference list:

Jones A, Saul D, Smith C (2008b). Title of journal article. Journal Volume: Pages. doi:###. ”

Pensoft also recommends that the in-text data citation statement in Pensoft journals should be included in the body of the paper, in a separate section named Data Resources situated after the Material and Methods section.  More details are given in the paper [2].

Furthermore, Pensoft has reached agreements for cooperation in data hosting and the development of data publishing workflows with GBIF, the Global Biodiversity Information Facility, with the Dryad Data Repository, and with the Consortium for the Barcode of Life.

Clearly, these Pensoft data citation recommendations, which work fine for on-line journals without a numerical limit on the number of citations, would not be feasible in journal articles with a strict limit to the number of citations, which is why Heather’s emphasis of exploring alternative ways for data citation in such cases is important.

[1]     David Shotton (2011) Data Citation Best Practice Discussion Document. Google Docs. https://docs.google.com/document/d/1kF8-faB72l4dKTLEyx6Z5cIabk68GrJ9GraCtWnK0qQ/edit?hl=en_GB&authkey=CPPW46wL#.  

[2]     Penev L, Mietchen D, Chavan V, Hagedorn G, Remsen D, Smith V, Shotton D (2011). Pensoft Data Publishing Policies and Guidelines for Biodiversity Data. Pensoft Publishers, http://www.pensoft.net/J_FILES/Pensoft_Data_Publishing_Policies_and_Guidelines.pdf.

How to cite data

As an approach towards developing best practice for data citation, I recently wrote a Data Citation Best Practice Discussion Document that is available on Google Docs, and that I have now slightly revised to Version 2 [1].

In that document, I first compared what is recommended by DataCite [2] and by Altman and King [3] with what is currently practised by the Dryad Data Repository and what presently occurs ‘in the wild’ in a handful of journal articles that reference Dryad datasets. I then proposed some ‘internal’ recommendations for Dryad to adopt, and concluded with draft Data Citation Best Practice Recommendations. As I say in the preface to the document:

“Since Dryad is pioneering data management in terms of data resources that are linked to journal articles, it is to be hoped that by first developing citation best practice in the Dryad context we can thereby catalyse its wider spread.  If we can thus agree what such best practice should be among the Dryad community and implement such best practice proposals, we can then promote such practices within the wider scholarly community.”

I realized that much of the confusion and disagreement concerning the best method of citing data resources within earlier e-mail threads resulted from a conflation of ideas about two entities which in the conventional citation of journal articles are quite distinct:

  • the in-text citation containing an in-text reference pointer, e.g. “this paper builds upon the work of Jones et al. [15].”     and
  • the actual reference to Jones et al. within the article’s reference list, e.g. “[15] Jones A, Bloggs B and Smith C (2008). Title. JournalName 14:132-134. doi:*****.”

Thus, in an e-mail I wrote on 27 April, where I said

“Excellent, but what we really want is for the data citations to be included in the reference list along with the bibliographic citations, following the DataCite model: Creator (PublicationYear): Title. Version. Publisher. ResourceType. Identifier “

. . . I should also have stressed the need for explicit in-text citations that denote such references.

All that is explained within the Google Docs paper.  In that paper I also proposed having a separate Data Resources section within the body text of a journal article, in which data resource citations can be gathered.  That does not preclude these resources also being cited, where appropriate, within the Methods and Materials or Results sections of the paper, but is designed to put data resource citations “on the map”, so to speak, as important new publication performative acts.

It is not appropriate, in my mind, for data citations to be included in the Acknowledgements section of a paper, which is designed for acknowledging contributions to the work from people and funding agencies. This remains true even though Thomson Reuters has developed methods to parse such entries, since it also has well-established mechanisms for harvesting proper (data) references from the reference list.

All the ontological terms required to mark up in-text reference pointers and their textual contexts, references, reference lists, etc., to permit automated detection and harvesting of data citations and references, are available as RDF within the SPAR (Semantic Publishing and Referencing) Ontologies (http://purl.org/spar/), which were designed precisely to facilitate such work.

Since writing my Data Citation Best Practice Discussion Document, I was invited (on a purely voluntary non-commercial basis, I should add!) to work with Pensoft Journals, a publisher that specialises in publishing biodiversity and biological systematics papers, and that has taken the lead in promoting the publication of datasets with DOIs, to contribute to and help revise their now-published Data Publishing Policies and Guidelines for Biodiversity Data [4].  This 34-page paper has a three-page section on how to cite data in Pensoft Journals, which I discuss in the next blog post, and which I am pleased to say includes all the recommendations discussed above.

[1]     David Shotton (2011) Data Citation Best Practice Discussion Document. Google Docs. https://docs.google.com/document/d/1kF8-faB72l4dKTLEyx6Z5cIabk68GrJ9GraCtWnK0qQ/edit?hl=en_GB&authkey=CPPW46wL#.

[2]    The DataCite Metadata Kernel version 2.0 (2011). http://datacite.org/schema/DataCite-MetadataKernel_v2.0.pdf.

[3]    Micah Altman and Gary King (2007). A proposed standard for the scholarly citation of quantitative data. D-Lib Magazine. 13. http://www.dlib.org/dlib/march07/altman/03altman.html.

[4]     Penev L, Mietchen D, Chavan V, Hagedorn G, Remsen D, Smith V, Shotton D (2011). Pensoft Data Publishing Policies and Guidelines for Biodiversity Data. Pensoft Publishers, http://www.pensoft.net/J_FILES/Pensoft_Data_Publishing_Policies_and_Guidelines.pdf.