
JISC Open Citations Project web site

The JISC Open Citations Project website at http://opencitations.net exists for several purposes, most importantly to provide query access to the Open Citations Corpus of bibliographic citation data.

The JISC Open Citations Project Home Page

For the purposes of this demonstration, we are running an instance of Fuseki, which provides access to the Open Citations Corpus held in an underlying TDB quad store, with a local patch to enable query timeouts, as was done for the SPARQL endpoint of CLAROS, a related project in which we have been intimately involved.

The web site and the underlying RDF citation data follow key linked data principles:

  • We don’t use blank nodes.
  • All URIs are dereferenceable and content-negotiable.
  • Where possible we use standard identifiers based on DOIs, ISSNs and ISBNs.

For users wishing to jump into the data, we provide tabs to access data about journals and articles, each giving greater detail about the contents as one clicks in.

The Journals page

Each Journal page displays metadata about the journal, together with links to useful information and the ability to download the journal metadata in a variety of formats.


Details of one journal

The Articles tab displays a subset of about 10% of the entire corpus, to keep the demonstration quick to load.

The Articles Page

Each ‘article’ page displays details about the selected article and its citation network, drawn using Graphviz, along with a user input form to customise the display of the network, choosing between input and output citations and between different forms of display. The article metadata and the citation diagrams can be downloaded in various formats.

Input citations for Condon, C. (2007). Maturation and degradation of RNA in bacteria. Current Opinion in Microbiology 10(3): 271-278. doi:10.1016/j.mib.2007.05.008.

We also display the RDF properties of each entity, allowing a ‘follow your nose’ style of data discovery. Each page lists the SPARQL queries that were run to generate it, enabling the user to see how the site works and to tweak the queries for their own ends.
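Such queries can also be run programmatically against the SPARQL endpoint. The following is a minimal Python sketch, assuming an endpoint at http://opencitations.net/sparql (the exact path may differ) and using the cito:cites property from the SPAR ontologies with which the corpus is modelled:

    import requests

    # Hypothetical endpoint URL; the site exposes a Fuseki SPARQL endpoint,
    # but the exact path may differ from this sketch.
    ENDPOINT = "http://opencitations.net/sparql"

    query = """
    PREFIX cito: <http://purl.org/spar/cito/>
    SELECT ?citing ?cited WHERE {
      ?citing cito:cites ?cited .
    } LIMIT 10
    """

    response = requests.get(
        ENDPOINT,
        params={"query": query},
        headers={"Accept": "application/sparql-results+json"},
    )
    for row in response.json()["results"]["bindings"]:
        print(row["citing"]["value"], "cites", row["cited"]["value"])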

Further work is planned to display citations along a time axis, and to develop other tools to permit users to exploit the data.

Nomenclature for citations and references

Reis et al. (2008) [1] cites an earlier paper from Albert Ko’s research group, Ko et al. (1999) [2].

In conventional parlance, as the following diagram shows, the word “reference” can mean what is found in the text, what is found in the reference list, the act of citation, or the object of the citation itself, as in the sentence “All the references you will need to prepare for the journal club are on Kevin’s desk”.

This situation does not make for unambiguous machine-readable encoding, so to improve the situation we have, in the SPAR (Semantic Publishing and Referencing) Ontologies, introduced a more principled way of referring to these items and actions, as the second diagram illustrates:

This permits us to create RDF describing all aspects of the citation process. Within the body of the text we have an in-text citation containing an in-text reference pointer, while the actual bibliographic reference that the in-text reference pointer denotes is to be found within the article’s reference list. That bibliographic reference references the cited article, while the whole performative act of including the reference in the article constitutes the act of citation.
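To make this concrete, here is a minimal sketch of such RDF, with purely illustrative example.org URIs; the class and property names are drawn from BiRO and C4O, two of the SPAR ontologies, though the full data model is richer than shown here:

    from rdflib import Graph

    # Illustrative Turtle; the ex: URIs are hypothetical.
    turtle = """
    @prefix biro: <http://purl.org/spar/biro/> .
    @prefix c4o:  <http://purl.org/spar/c4o/> .
    @prefix cito: <http://purl.org/spar/cito/> .
    @prefix ex:   <http://example.org/> .

    ex:pointer-2 a c4o:InTextReferencePointer ;     # the "[2]" within the text
        c4o:denotes ex:reference-2 .                # denotes an entry in the reference list
    ex:reference-2 a biro:BibliographicReference ;
        biro:references ex:cited-article .          # the reference references the cited work
    ex:citing-article cito:cites ex:cited-article . # the act of citation
    """

    g = Graph()
    g.parse(data=turtle, format="turtle")
    print(len(g), "triples")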

Are we all clear now?

[1] Reis RB, Ribeiro GS, Felzemburgh RDM, Santana FS, Mohr S, Melendez AXTO, Queiroz A, Santos AC, Ravines RR, Tassinari WS, Carvalho MS, Reis MG, Ko AI (2008) Impact of Environment and Social Gradient on Leptospira Infection in Urban Slums. PLoS Negl Trop Dis 2(4): e228. doi:10.1371/journal.pntd.0000228

[2]    Ko AI, Reis MG, Ribeiro Dourado CM, Johnson WD Jr, Riley LW and the Salvador Leptospirosis Study Group (1999). Urban epidemic of severe leptospirosis in Brazil. Lancet 354: 820–825. doi:10.1016/S0140-6736(99)80012-9.

The citation processing pipeline and the Open Citations Corpus

The input PubMed Central Open Access subset XML reference data, our starting corpus, were transformed into Open Citations RDF in multiple stages:

  1. The original XML was first transformed into an intermediate form using XSLT. The multitudinous ways different publishers have developed of encoding the same information can be more easily handled in this way: the intermediate XML output dataset describes things in a more consistent manner, enabling the resulting information to be parsed more easily from within a non-XML-based programming environment. Our transform pulled out information about articles, people, organisations, in-text reference pointers, and the reference list, and the links between them.
  2. The intermediate XML dataset was then transformed into BibJSON using a Python script. BibJSON is a relatively standard method of encoding bibliographic information. Each of the ~200,000 generated BibJSON datasets contains the information extracted from one marked-up Open Access article. We extended the standard BibJSON records with additional attributes (named with an ‘x-’ prefix) for other properties we wish later to encode as RDF. At this and later stages, the BibJSON datasets are packed into a single gzipped tarball: unpacking such a tarball into ~200,000 independent files would give data management problems, so the contents are instead extracted from the tarball as required using the Python tarfile module.
  3. Another Python script was then used to extract all the PubMed IDs, and to use these as inputs to the Entrez API, in order to extract independent information about the cited entities from the PubMed database. The returned PubMed records were then added alongside the original BibJSON records. These additional data were extremely useful for comparison when attempting to spot erroneous citations, as previously described.
  4. Next we ran a ‘sanitization script’ over the data (a minimal sketch of the DOI clean-up appears below), which performed the following functions:
      1. URL normalization, e.g. adding missing URL schemes and undoing character substitutions (en-dashes for hyphens, quotation marks for apostrophes).
      2. Splitting issue information out of journal attributes.
      3. Fixing malformed DOIs (e.g. those missing the ’10.’ prefix); DOIs that could not be fixed were removed.
      4. Pulling “doi:****” DOIs out of “http://dx.doi.org/****” URLs.
      5. Removing spurious publication dates (those before 1900 or after 2011).

These corrections are easily extensible if we discover other classes of error in the data.
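By way of illustration, the DOI clean-up (functions 3 and 4 above) amounts to something like the following minimal sketch; this is illustrative only, not the project’s actual script:

    import re

    def sanitise_doi(raw):
        """Illustrative DOI clean-up: pull DOIs out of dx.doi.org URLs,
        strip any 'doi:' prefix, and reject identifiers that remain malformed."""
        doi = raw.strip()
        m = re.match(r'https?://dx\.doi\.org/(.+)', doi)
        if m:                           # function 4: extract the DOI from the URL
            doi = m.group(1)
        if doi.startswith('doi:'):
            doi = doi[4:]
        # function 3: a well-formed DOI starts with the '10.' directory indicator
        if not re.match(r'10\.\d{4,9}/\S+', doi):
            return None                 # cannot be fixed, so remove it
        return doi

    assert sanitise_doi('http://dx.doi.org/10.1371/journal.pntd.0000228') == '10.1371/journal.pntd.0000228'
    assert sanitise_doi('/S0102-311X2004000300003') is None   # missing '10.xxxx' prefix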

  5. The records were next unified by taking the transitive closure on a number of identifiers (a union-find sketch of this step appears after this list). These identifiers included DOIs, PubMed IDs, PubMed Central IDs and URLs for articles and other cited works, and ISSNs, eISSNs and ISO title abbreviations for journals.
  6. The BibJSON data were then rearranged so that each dataset contains all the records believed to refer to the same bibliographic entity, with multiple records where that entity was cited more than once.
  7. Owing to mis-citation (in this case, the use of incorrect or incomplete identifiers) there were a number of clearly different works that had been mistakenly declared to refer to the same entity. For this reason we used a distance metric to recluster record groups based on similarity.
  8. Finally, a Python script transforms the BibJSON tarball into RDF. The input tarball contains datasets, each of which comprises records believed to refer to the same entity. The script takes each of these datasets and merges them into a single ‘best’ record using the majority vote procedure previously described. The resultant record is then transformed into a number of quads for inclusion in the final RDF N-Quads Open Citations Corpus, principally modelled using the suite of SPAR ontologies created for this purpose.
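The unification in step 5 is, in effect, the computation of connected components over records linked by shared identifiers. A minimal union-find sketch of the idea follows; the record structure is hypothetical, and the real script handles far more detail:

    from collections import defaultdict

    def unify(records):
        """Group records that share any identifier (DOI, PubMed ID, ISSN, ...).
        An illustrative union-find sketch, not the Open Citations code itself;
        each record is assumed to carry a list of identifier strings."""
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        def union(a, b):
            parent[find(a)] = find(b)

        # Link every record node to every identifier node it carries
        for i, rec in enumerate(records):
            for ident in rec['ids']:
                union(('rec', i), ('id', ident))

        groups = defaultdict(list)
        for i, rec in enumerate(records):
            groups[find(('rec', i))].append(rec)
        return list(groups.values())

    records = [
        {'ids': ['doi:10.1/a', 'pmid:1']},
        {'ids': ['pmid:1']},            # shares pmid:1, so same entity as above
        {'ids': ['doi:10.2/b']},        # unrelated
    ]
    print([len(g) for g in unify(records)])   # -> [2, 1]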

This Open Citations Corpus of RDF citation data extracted from the Open Access subset of PubMed Central, detailing every reference list in the OASS articles, holds each reference list as an individual named graph (hence the storage in N-Quads rather than triples), and comprises 236,499,781 quads occupying 2.1 gigabytes of storage in its compressed state. It includes references to ~20% of all post-1980 papers recorded in PubMed, including all the highly cited papers in every field of biomedical research, and is freely available under a CC0 waiver from http://opencitations.net/data/.
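For illustration, a single quad in such a named graph might look like the following (the URIs here are purely hypothetical; the fourth element names the graph holding one article’s reference list):

    <http://example.org/article/citing> <http://purl.org/spar/cito/cites> <http://example.org/article/cited> <http://example.org/graph/reference-list-of-citing> .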

The Open Citations Corpus can be queried via a Web query form or via a SPARQL interface from the Open Citations Project web site at http://opencitations.net/, described in a subsequent blog post, where more information about the project is given.

All the scripts used to transform the OASS input data into the Open Citations Corpus, described above, are available under an MIT Open Source licence at https://github.com/opencitations/.

Citation correction methods

As previously described, the PubMed Central Open Access subset of journal articles yielded 6,529,815 independent bibliographic records of both citing and cited entities, while our use of the PubMed Entrez API provided a further 2,304,143 bibliographic records for the same cited entities. Before converting these references into RDF to create the Open Citations Corpus, we attempted to remove errors in the data.

Some of the references we collected were to highly cited papers, while 2,505,879 referenced papers were only cited once. Figure 1 shows the number of citations per paper for the 100 most highly cited papers in our records – the left hand end of what is a classical long-tail dataset.

 Figure 1

We have not yet analysed the topics of these papers, but can reveal that the paper most highly cited from within the OASS, with 2150 citations, is

Altschul et al. (1997). Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res. 25(17): 3389-3402. doi:10.1093/nar/25.17.3389.

In an ideal world, all OASS references to an individual paper would be identical, and would exactly match the data on that paper extracted from the Entrez API. However, as we have already seen for author names, this is not the case. As with most datasets in the world, a significant proportion (~1%) of our input reference data is either incomplete or erroneous.

We attempted to correct these errors by comparing references that appeared to cite the same bibliographic entity, and from this comparison extracting the correct data for authors, title, etc., using the following rules:

  1. Accepting the longest author list, and names bearing accents over those lacking them.
  2. Accepting DOIs and PubMed IDs from references having them, after eliminating malformed identifiers (e.g. DOIs lacking the “10.****” prefix), and using a majority vote if different identifiers were given for the same paper.
  3. Accepting those variants of titles, journal names, etc. held in common by the majority of references.

This voting method was weighted in favour of the data we judged most reliable, namely the PubMed records returned from the Entrez API and, for cited papers that were themselves within the OASS, the independent bibliographic records we already held for those papers.
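In outline, the merging behaves like the following minimal sketch; the field names and weighting here are illustrative, and the real script is considerably more thorough:

    from collections import Counter

    def merge_records(records, weights=None):
        """Merge multiple records for the same paper into one 'best' record.
        Illustrative sketch of the weighted majority vote described above;
        the record fields are hypothetical."""
        weights = weights or [1] * len(records)
        best = {}
        # Weighted majority vote for simple fields (title, journal, year, ...)
        for field in {f for r in records for f in r} - {'authors'}:
            votes = Counter()
            for rec, w in zip(records, weights):
                if field in rec:
                    votes[rec[field]] += w
            best[field] = votes.most_common(1)[0][0]
        # Prefer the longest author list; break ties towards accented names
        author_lists = [r['authors'] for r in records if 'authors' in r]
        if author_lists:
            best['authors'] = max(
                author_lists,
                key=lambda al: (len(al), sum(ord(c) > 127 for name in al for c in name)),
            )
        return best

    merged = merge_records(
        [{'title': 'Factors associated ...', 'authors': ['Dias JP', 'Teixeira MG']},
         {'title': 'Factors associated ...', 'authors': ['Dias JP', 'Teixeira MG', 'Guimarães P']}],
    )
    print(merged['authors'])   # -> the longer, accented author list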

As a result of these activities, not only did we coalesce the independent references from different OASS articles to the same multiply cited papers into a set of 3,578,598 unique bibliographic citation target records, describing 204,637 OASS articles and 3,373,961 articles outside the OASS, but we were also able to select from the multiple references those elements (author list, title, etc.) judged to be correct for each target.

However, of the 2,505,879 papers that are only cited once from within the OASS, 1,246,967 lacked a PubMed ID, so for these we were unable to gather confirmatory evidence for the accuracy of the citation from the Entrez API. These references, which are to the least significant papers in the corpus, are therefore provided “as is” from PubMed Central, without any external corroboration of their accuracy.

How these error-correction processes fitted into the data processing pipeline used to create the Open Citations Corpus is described in the next blog post.

Who wrote this paper? Author list problems in PubMed Central references

To illustrate three kinds of problems in obtaining correct author lists for Open Citations data from articles in the PubMed Central Open Access subset (OASS), I take examples from a single citing paper: the first results from a publication policy, the second from mis-handling of an authorship attribution at the time of publication, and the third exemplifies errors introduced when handling non-English personal names.

Example 1

In the paper by Reis et al. (2008) [1], which we took as the subject for our exercise in semantic publishing enhancements, we find the following entry for Reference 40 in the reference list:

40.    Maciel EAP, Carvalho ALF, Nascimento SF, Matos RB, Gouveia EL, et al. (2008) Household transmission of Leptospira infection in urban slum communities. PLoS Negl Trop Dis 2: e154. doi:10.1371/journal.pntd.0000154.

Note that it is the policy of the publisher, the Public Library of Science, to list only the first five authors in references to papers that have more than five, despite publishing online-only journals where article length is not an issue.

The XML for this reference in the document is as follows:

    <meta name="citation_reference" content="citation_title=Household transmission of Leptospira infection in urban slum communities.; citation_author=EAP Maciel; citation_author=ALF Carvalho; citation_author=SF Nascimento; citation_author=RB Matos; citation_author=EL Gouveia; citation_journal_title=PLoS Negl Trop Dis; citation_volume=2; citation_number=40; citation_pages=e154. doi:10.1371/journal.pntd.0000154; citation_date=2008; " /> 

Note that in the XML, all indication that there are more than five authors, i.e. the “et al.” present in the human-readable reference, is totally lost. There is thus no way of telling from the XML for this paper retrieved from PubMed Central that the full authorship for this paper is as follows:

Elves A. P. Maciel, Ana Luiza F. de Carvalho, Simone F. Nascimento, Rosan B. de Matos, Edilane L. Gouveia, Mitermayer G. Reis and Albert I. Ko.

The last two authors of the cited paper, Mitermayer Reis and Albert Ko, who are the lead author and the senior author, respectively, of the citing paper, are both omitted from the PLoS reference to the cited paper, and hence from the data automatically extracted by Open Citations from the OASS.

Example 2

A second example from the same citing paper is the cited reference to Ko et al. (1999) [2]. As the reference at the foot of this page shows, the author list comprises five names “and the Salvador Leptospirosis Study Group”. Group attributions of this kind are commonplace, particularly in papers resulting from large collaborative projects. However, conventional markup systems such as the NLM-DTD have no systematic way of handling such information. Surprisingly, the group attribution is even incorrectly placed in the human-readable version of the reference in the Reis et al. paper. Reference 6 in the article’s reference list reads:

6.    Ko AI, Reis MG, Ribeiro Dourado CM, Johnson WD Jr, Riley LW (1999) Urban epidemic of severe leptospirosis in Brazil. Salvador Leptospirosis Study Group. Lancet 354: 820–825.

thus including “Salvador Leptospirosis Study Group” as part of the title, an error also present in the XML version of the paper:

<meta name="citation_reference" content="citation_title=Urban epidemic of severe leptospirosis in Brazil. Salvador Leptospirosis Study Group.; citation_author=AI Ko; citation_author=MG Reis; citation_author=CM Ribeiro Dourado; citation_author=WD Johnson; citation_author=LW Riley; citation_journal_title=Lancet; citation_volume=354; citation_number=6; citation_pages=820-825; citation_date=1999; " /> 

Example 3

The third example, also taken from the reference list of Reis et al. (2008) [1], illustrates the problems of handling non-English names and titles. Reference 39 in the PLoS article’s reference list [3] reads:

Dias JP, Teixeira MG, Costa MC, Mendes CM, Guimaraes P, et al. (2007) Factors associated with Leptospira sp infection in a large urban center in Northeastern Brazil. Rev Soc Bras Med Trop 40: 499–504.

Note that the author list is truncated, that the generic name “Leptospira sp” in the title is italicized, and that neither the DOI nor the PubMed ID is provided, although the article has both.

The landing page for this article on the publisher’s web site (http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0037-86822007000500002&lng=en&nrm=iso&tlng=en) shows the following:

Revista da Sociedade Brasileira de Medicina Tropical

Print version ISSN 0037-8682

Rev. Soc. Bras. Med. Trop. vol.40 no.5 Uberaba Oct. 2007

doi: 10.1590/S0037-86822007000500002  ARTIGO ARTICLE

Factors associated with Leptospira sp infection in a large urban center in northeastern Brazil

Fatores associados à infecção por Leptospira sp em um grande centro urbano do Nordeste do Brasil

Juarez Pereira DiasI; Maria Glória TeixeiraI; Maria Conceição Nascimento CostaI; Carlos Maurício Cardeal MendesI; Patrícia GuimarãesII; Mitermayer Galvão ReisII; Albert KoII,III; Maurício Lima BarretoI

The paper is published in English. Note that there is an alternative Portuguese title, that the generic name “Leptospira sp” is italicized in both, and that a DOI is provided, although page numbers are not given for this on-line version of the article. Note also the accents and structures of the full Brazilian author names.

Clicking on the tab Article in PDF format on the landing page takes one to the PDF download page, which gives the following reference, with page numbers, but lacking the Portuguese title and the DOI, and lacking the full author list:

DIAS, Juarez Pereira et al. Factors associated with Leptospira sp infection in a large urban center in northeastern Brazil. Rev. Soc. Bras. Med. Trop. [online]. 2007, vol.40, n.5, pp. 499-504. ISSN 0037-8682.

A manual search for the same article in PubMed returns the following information:

Rev Soc Bras Med Trop. 2007 Sep-Oct;40(5):499-504.

Factors associated with Leptospira sp infection in a large urban center in northeastern Brazil.

Dias JP, Teixeira MG, Costa MC, Mendes CM, Guimarães P, Reis MG, Ko A, Barreto ML.

PMID: 17992402

Note the correct accentuation of the surname “Guimarães”, but the loss of the last given-name initial for Maria Conceição Nascimento Costa, and the loss of italicization of the generic name “Leptospira sp” in the title. Note also the absence of the Portuguese title and the DOI, and the addition of a PubMed ID.

The real problems with this reference arise when we look at the starting XML corpus upon which our linked open citation data output is based, which in turn is based on the original PLoS submission of the Reis et al. (2008) paper to PubMed Central.

The author list for this reference #39 in the PMC XML for Reis et al (2008) is as follows:

<ref id="pntd.0000228-Dias1">
<label>39</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dias</surname>
<given-names>JP</given-names>
</name>
<name>
<surname>Teixeira</surname>
<given-names>MG</given-names>
</name>
<name>
<surname>Costa</surname>
<given-names>MC</given-names>
</name>
<name>
<surname>Mendes</surname>
<given-names>CM</given-names>
</name>
<name>
<surname>Guimaraes</surname>
<given-names>P</given-names>
</name>
<etal/>
</person-group>

Here we see, as in the HTML reference list of the original Reis et al. (2008) paper, the truncation of the list of authors to the first five, and the loss of accent on the surname “Guimarães”. Surprisingly, the title is recorded as

<article-title>Factors associated with <italic>Leptospira</italic> sp infection in a large urban center in Northeastern Brazil.</article-title>

correctly showing the italic “Leptospira” but also correctly not italicizing the following “sp”!

All is not lost, however, in terms of the full author list. Since the OASS XML for Reis et al. (2008) contains a PubMed ID for this Dias et al. (2007) paper [3]:

<pub-id pub-id-type="pmid">17992402</pub-id>

we can retrieve the PubMed bibliographic record for this paper by querying the Entrez API, from which we recover:

<Item Name="AuthorList" Type="List">
<Item Name="Author" Type="String">Dias JP</Item>
<Item Name="Author" Type="String">Teixeira MG</Item>
<Item Name="Author" Type="String">Costa MC</Item>
<Item Name="Author" Type="String">Mendes CM</Item>
<Item Name="Author" Type="String">Guimarães P</Item>
<Item Name="Author" Type="String">Reis MG</Item>
<Item Name="Author" Type="String">Ko A</Item>
<Item Name="Author" Type="String">Barreto ML</Item>
</Item>
<Item Name="LastAuthor" Type="String">Barreto ML</Item>

Note that here we have the full author list containing the correctly accented surname “Guimarães”.

Thus, by matching this record with the original PLoS record for Dias et al., and selecting the longer list and the more accentuated names, we can correct the omissions in the original PubMed Central OASS data.
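For those wishing to reproduce such a look-up, here is a minimal sketch using the Entrez E-utilities esummary service; the real pipeline batched its requests and handled errors:

    import requests
    import xml.etree.ElementTree as ET

    # Fetch the PubMed summary record for PMID 17992402 (Dias et al. 2007)
    # from the NCBI Entrez E-utilities.
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"
    xml_text = requests.get(url, params={"db": "pubmed", "id": "17992402"}).text

    root = ET.fromstring(xml_text)
    authors = [item.text for item in root.iter("Item") if item.get("Name") == "Author"]
    print(authors)   # includes the correctly accented 'Guimarães P'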

Example 4

However, Entrez output is not infallible. In the Entrez XML output for reference #44 in Reis et al. (2008), namely the paper by Travassos and Williams (2004) [4], we find:

<Item Name="doi" Type="String">/S0102-311X2004000300003</Item>

The correct DOI for this article is doi:10.1590/S0102-311X2004000300003. However, the Entrez output from PubMed is missing the journal prefix “10.1590”, which when parsed to its URI form during our automated processing of the source data from XML to RDF would become

<http://dx.doi.org//S0102-311X2004000300003>

if we did not take steps to check for the correct DOI syntax.

These few examples, all taken from a single OASS article, illustrate some of the problems we have had to face in creating accurate and reliable RDF to enable us to publish these reference lists as open citation data.

What is shocking to me with regard to PLoS, perhaps the leading Open Access publisher, is that it does not systematically include both DOIs and PubMed IDs in the HTML and XML versions of article references on the PLoS web site, despite inserting PubMed IDs into the NLM-DTD XML records it marks up and sends to PubMed Central; that it persists with its policy of not listing all the authors; and that it does not preserve proper accents and diacritical marks, particularly in non-English names.

The methods used to correct citation errors are described in the next blog post, while the data processing pipeline through which we pass the input data to generate our RDF output Open Citations Corpus is described in the following blog post.

[1] Reis RB, Ribeiro GS, Felzemburgh RDM, Santana FS, Mohr S, Melendez AXTO, Queiroz A, Santos AC, Ravines RR, Tassinari WS, Carvalho MS, Reis MG, Ko AI (2008). Impact of environment and social gradient on Leptospira infection in urban slums. PLoS Negl Trop Dis 2(4): e228. doi:10.1371/journal.pntd.0000228.

[2]    Ko AI, Reis MG, Ribeiro Dourado CM, Johnson WD Jr, Riley LW and the Salvador Leptospirosis Study Group (1999). Urban epidemic of severe leptospirosis in Brazil. Lancet 354: 820–825. doi:10.1016/S0140-6736(99)80012-9.

[3]    Dias JP, Teixeira MG, Costa MC, Mendes CM, Guimarães P, Reis MG, Ko A, Barreto ML (2007). Factors associated with Leptospira sp infection in a large urban center in northeastern Brazil. Rev Soc Bras Med Trop 40(5): 499-504. doi:10.1590/S0037-86822007000500002.

[4]    Travassos C, Williams DR (2004). The concept and measurement of race and their relationship to public health: A review focused on Brazil and the United States. Cad Saude Publica 20: 660–678. doi:10.1590/S0102-311X2004000300003.

Garbage in, garbage out – problems with bibliographic references

The Open Citations Project has aimed to liberate bibliographic references from biomedical research literature as Open Linked Data, using as its starting corpus the Open Access Subset (OASS) of articles within PubMed Central. The greatest problem faced during this project, naively unanticipated before we started, was the extent of the incompleteness, noise and errors of various sorts within the reference information extracted from the OASS articles. So significant has this problem been that it has taken almost the entire time and effort of Alex Dutton, our skilled data munger working on the project, to sort out; without his skill, dedication and effort the project would not have succeeded.

In this context, any deviation of the bibliographic reference in an OASS article from the bibliographic citation text for that paper provided by the original publisher is taken to be an error. These errors may be as slight as the substitution of a “β” character by the word “beta” in the title of a cited work, made by the author of the OASS article while creating his reference list, or as severe as the inclusion, in the reference to one paper, of the DOI of another, unrelated paper.

So it is worth taking time, before explaining how we addressed this problem, to describe its nature and magnitude, and to illustrate it with typical examples. We have found that errors of one sort or another occur in about 1% of all references extracted from the OASS. Since we extracted 6,325,178 individual references from our starting corpus, this amounts to well over 50,000 references containing errors.

Author errors

Errors have different sources. Authors are largely to blame, for not exercising due care when creating the reference lists of their papers (or earlier, when creating bibliographic records in a reference management system such as EndNote). However, if one of the EndNote records used to populate a reference list had been pulled automatically from some third-party source, the error might be due to that source, something of which the author was totally unaware.

Some references have incorrect punctuation or capitalisation of titles, or omit some sub-part, as exemplified by the following screenshot – note the use of “alternate”, “alternative” or “Alternate”, and of “intensivist” and “intensive” in different OASS references to the same paper. The boxed title is the correct one:


Some references omit one or more author names, or omit diacritics, a tendency particularly correlated with the degree to which a name is ‘foreign’ or ‘unusual’, as the next example illustrates – note the lack of “Mariotte-Labarre S” in the first two references, and the variation in punctuation of the journal name abbreviation in the third:


There are also numerous instances where the text of a reference is correct, but the associated identifiers (e.g. DOIs and PubMed IDs) are incorrect. By way of example, references 15 and 16 of PMC1839102 are both given the same DOI; in PMC2896208 the DOIs for references 52 and 72 are swapped; and in PMC2778786 references 15, 40, and 49 are all given the DOI of another (uncited) paper.

We should point out that these examples are entirely anecdotal, and that we haven’t investigated the frequency with which these or any other classes of error occur.

Publisher errors

Publishers are the other main culprits, particularly through introducing errors into documents during the XML encoding stage, errors from which the automated parsing systems we have used to extract information into our RDF-encoded records find it almost impossible to recover.

Individual publishers, while all working to the same National Library of Medicine DTD for encoding the XML markup of their articles submitted to PubMed Central, might take different approaches to encoding the same information. For example, the text

“… was found to be significant[1,3–6]

might be marked up as either of the following:

“… was found to be significant<sup>[<xref rid="CR1">1</xref>,<xref rid="CR3">3</xref>–<xref rid="CR6">6</xref>]</sup>”

“… was found to be significant<sup><xref rid="CR1,CR3,CR4,CR5,CR6">[1,3–6]</xref></sup>”.

Note that in the former case there is no explicit mention of the references with identifiers CR4 and CR5, making things a little harder to parse.
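A parser handling the first form must expand the en-dash range itself to recover CR4 and CR5. Here is a minimal sketch of that expansion, assuming rid values follow a 'CR<number>' pattern, which is by no means guaranteed across publishers:

    import re

    def expand_pointer_range(first_rid, last_rid):
        """Expand an en-dash range between two xref rids, e.g. CR3 to CR6 ->
        ['CR3', 'CR4', 'CR5', 'CR6']. Assumes '<letters><number>' rids;
        illustrative only."""
        prefix, start = re.match(r'([A-Za-z]+)(\d+)', first_rid).groups()
        _, end = re.match(r'([A-Za-z]+)(\d+)', last_rid).groups()
        return [f'{prefix}{n}' for n in range(int(start), int(end) + 1)]

    print(expand_pointer_range('CR3', 'CR6'))   # -> ['CR3', 'CR4', 'CR5', 'CR6']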

We have found occasions where a four-digit number in the title has been marked up as the publication year. For example, reference 21 of PMC2743650 claims that the cited article was published in 7942. The cited article’s real title refers to the bacterium strain PCC7942, information that has been removed from the title in the OASS reference.

The information returned by the Entrez API, which we used as our ‘gold standard’ against which to check OASS references containing PubMed IDs, was itself not without error. We found a number of PubMed records where DOIs had been truncated to just the prefix, or were missing a prefix.

Editors and referees

Others are also culpable. At the end of paragraph 12 of the PLoS ONE paper by Pickart et al. (2006) [1] we find the text:

We consider the current 12% detection rate to be a lower estimate of observable specific phenotypes from the screen, as additional screening will examine the morpholino collection using a variety of novel assays (such as newly generated enhancer and gene trap lines; Balciunas et al., 2004; Kawakami et al., 2004; Parinov et al., 2004) and may reveal developmental and/or functional aspects not readily visible by morphological criteria.

However, these three references do not appear in the reference list and so are totally lost to the system – the authors knew whom they were citing, but no-one else. How this escaped the eagle eyes of the authors, the journal editor and the reviewers is beyond my understanding!

In a separate blog post we describe how we have corrected some of these errors, while examples of errors in author lists are documented in the next post.

[1] Pickart MA et al. (2006). Genome-Wide Reverse Genetics Framework to Identify Novel Functions of the Vertebrate Secretome. PLoS ONE 1(1): e104. doi:10.1371/journal.pone.0000104.

Input data for Open Citations – the PMC Open Access Subset

PubMed, created by the US National Library of Medicine in DATE, holds bibliographic records and abstracts for essentially all journal articles published in the biomedical sciences. It currently records almost a million new entries each year!

PubMed Central (PMC), created as an extension of PubMed, is designed to hold full-text articles from among the PubMed entries. At present, PMC holds entries for ~9.3% of the papers indexed in PubMed that were published between 1980 and 2010: 1,428,675 out of a total of 15,319,102. Many of these PMC articles (192,452 for the years 1980 to 2010, ~13.5% of the PMC holdings) are truly Open Access articles, which users can download and repurpose as they will. However, the majority are articles from subscription-access journals, deposited in PMC under licence agreements with funding agencies that, while providing read access to the full text, prevent readers from downloading the articles and from making derivative works.

The Open Citations Project has to date worked exclusively with the Open Access subset (OASS) of PMC. As of 24 January 2011, there were 204,637 OASS articles, including a few published before 1980. In almost all of these OASS articles, the reference lists were nicely marked up in NLM-DTD XML, making the task of identifying individual references straightforward. In a few cases, the articles were present as scanned page images, lacking any internal markup – those we were unable to process.

From the XML reference lists of these papers, we were able to identify and extract 6,325,178 individual references, which, together with the bibliographic information we had on the OASS articles themselves gave us 6,529,815 independent bibliographic records of both citing and cited entities. As explained in the next blog post, these records showed varying degrees of completeness and accuracy.

Using the Entrez API, we were able to use PubMed IDs, where these were available in the references, to extract a further 2,304,143 bibliographic records from PubMed, which, in the ideal world, would each exactly duplicate the information we had previously obtained from the OASS bibliographic reference containing that PubMed ID. As we shall describe, these additional PubMed records proved exceptionally useful in correcting imperfect OASS references.

Since the OASS articles cite papers outside the OASS, as well as a few within it, the majority of the bibliographic information we thus acquired related to papers represented within PubMed but not within PubMed Central. And because many OASS papers independently contained references to the most highly cited biomedical papers, many of our records were to the same bibliographic entities.

An important part of our data processing was thus to coalesce independent references from different OASS articles to the same multiply cited papers into a set of unique bibliographic records, each for one paper. Once this had been achieved, we were left with 3,578,598 unique bibliographic records, 204,637 describing the OASS articles themselves, and 3,373,961 describing articles outside the OASS, mostly from subscription-access journals.

The following table and figures tabulate and illustrate the number of papers in each category between 1980 and 2010 inclusive. The most striking thing about these data is that they show how, between these years, the relatively small number of articles in the Open Access subset of PMC (approx. 200,000 articles) referenced >20% of all papers recorded in PubMed for that period (approx. 15.3 million papers), and in doing so referenced all the most important highly cited papers in every field of biomedical endeavour. This inclusive coverage means that citation graphs created from the Open Citations dataset will capture all the important aspects of any field.

Table 1

Year          PubMed        PMC          OASS       Cited by OASS
1950-1979     5,128,602     427,877      8,352      146,027
1980          278,069       23,218       631        15,708
1981          278,069       23,685       543        16,627
1982          292,219       25,215       740        18,389
1983          305,725       25,688       738        21,263
1984          314,737       26,316       543        23,249
1985          331,706       25,916       637        25,780
1986          345,501       26,721       590        28,761
1987          363,754       27,834       555        32,222
1988          381,976       28,802       442        36,320
1989          398,620       29,855       616        42,005
1990          398,620       30,143       704        48,422
1991          407,465       31,337       733        53,655
1992          412,457       32,325       719        61,091
1993          420,935       33,203       1,055      70,272
1994          431,160       33,456       1,279      80,206
1995          441,967       34,276       1,148      91,814
1996          452,218       34,755       1,155      101,853
1997          451,533       34,800       1,314      114,967
1998          469,466       36,179       1,341      131,510
1999          469,466       37,534       1,420      146,623
2000          528,243       39,047       1,608      170,330
2001          542,854       40,235       2,546      179,203
2002          560,006       43,265       3,199      195,879
2003          590,317       46,442       4,015      211,423
2004          634,432       51,416       6,005      229,423
2005          694,687       60,411       10,333     236,678
2006          740,007       72,295       14,264     238,387
2007          777,311       87,744       20,070     222,085
2008          824,612       120,004      31,416     190,071
2009          862,372       146,413      41,848     124,894
2010          918,598       120,145      40,245     27,877
Total         15,319,102    1,428,675    192,452    3,186,987

% of PubMed: PMC = 9.33%; Cited by OASS = 20.80%
% of PMC: OASS = 13.47%

(The totals and percentages are for the years 1980-2010 and exclude the 1950-1979 row.)

Figure 1

Figure 2

The OASS source data give the types of cited entity, aggregated after coalescing, shown in Figure 3.

Figure 3

Pensoft Journals policy and author guidelines on data publication and citation

In a recent blog post, Heather Piwowar, in discussing the advantages of citing datasets in the reference list of the article, said “No journals have standardized on this approach so far”. However, Pensoft Journals, a publisher that specializes in publishing biodiversity and biological systematics papers, and that has taken the lead in promoting the publication of datasets with DOIs, has exactly such a policy.

Recently, in response to my Data Citation Best Practice Discussion Document [1] discussed in the preceding blog post, I was invited to work with Pensoft Journals to contribute to and help revise their now-published Data Publishing Policies and Guidelines for Biodiversity Data [2].  This 34-page paper has a three-page section on how to cite data in Pensoft Journals.

While recognising that citations of GenBank and similar bioinformatics datasets are by custom made by placing the database accession number somewhere in the text, with no entry in the reference list of the article, we make the following generic recommendation:

“Data citations may relate either to the author’s own data, or to data created and published by others (“third-party data”). In the former case, the dataset may have been previously published, or may be published for the first time in association with the article that is now citing it. All these types of data should, for consistency, be cited in the same manner.

“As is the norm when citing another research article, any citation of a data publication, including a citation of one’s own data, should always have two components:

  • An in-text citation statement containing an in-text reference pointer that directs the reader to a formal data reference in the paper’s reference list.

and

  • A formal data reference within the article’s reference list.

“The data reference in the article’s reference list should contain the minimal components recommended in the DataCite Metadata Kernel v2.0 specification. In DataCite terms: Creator PublicationYear Title Publisher Identifier; alternatively (but meaning the same thing): Author PublicationYear Title DataRepositoryName DOI. These components should be presented in whatever format and punctuation style the journal specifies for its references. The following example demonstrates in general terms what is required.

“In-text citation:

This paper uses data from the [name] data repository at http://dx.doi.org/***** (Jones et al. 2008a), first described in Jones et al. 2008b.

“Data reference in reference list:

Jones A, Bloggs B, Smith C (2008a). Title of data package. Repository name. doi:*****.

“Article reference in reference list:

Jones A, Saul D, Smith C (2008b). Title of journal article. Journal Volume: Pages. doi:###. ”

Pensoft also recommends that the in-text data citation statement in Pensoft journals should be included in the body of the paper, in a separate section named Data Resources situated after the Material and Methods section.  More details are given in the paper [2].

Furthermore, Pensoft has reached agreements for cooperation in data hosting and the development of data publishing workflows with GBIF, the Global Biodiversity Information Facility, with the Dryad Data Repository, and with the Consortium for the Barcode of Life.

Clearly, these Pensoft data citation recommendations, which work fine for on-line journals without a numerical limit on the number of citations, would not be feasible in journal articles with a strict limit on the number of citations, which is why Heather’s emphasis on exploring alternative ways of citing data in such cases is important.

[1]     David Shotton (2011) Data Citation Best Practice Discussion Document. Google Docs. https://docs.google.com/document/d/1kF8-faB72l4dKTLEyx6Z5cIabk68GrJ9GraCtWnK0qQ/edit?hl=en_GB&authkey=CPPW46wL#.  

[2]     Penev L, Mietchen D, Chavan V, Hagedorn G, Remsen D, Smith V, Shotton D (2011). Pensoft Data Publishing Policies and Guidelines for Biodiversity Data. Pensoft Publishers, http://www.pensoft.net/J_FILES/Pensoft_Data_Publishing_Policies_and_Guidelines.pdf.

How to cite data

As an approach towards developing best practice for data citation, I recently wrote a Data Citation Best Practice Discussion Document that is available on Google Docs, and that I have now slightly revised to Version 2 [1].

In that document, I first compared what is recommended by DataCite [2] and by Altman and King [3] with what is currently practised by the Dryad Data Repository and what presently occurs ‘in the wild’ in a handful of journal articles that reference Dryad datasets.  I then proposed some ‘internal’ recommendations for Dryad to adopt, and concluded with draft Data Citation Best Practice Recommendations.  As I say in the preface to the document:

“Since Dryad is pioneering data management in terms of data resources that are linked to journal articles, it is to be hoped that by first developing citation best practice in the Dryad context we can thereby catalyse its wider spread.  If we can thus agree what such best practice should be among the Dryad community and implement such best practice proposals, we can then promote such practices within the wider scholarly community.”

I realized that much of the confusion and disagreement concerning the best method of citing data resources within earlier e-mail threads resulted from a conflation of ideas about two entities which in the conventional citation of journal articles are quite distinct:

  • the in-text citation containing an in-text reference pointer, e.g. “this paper builds upon the work of Jones et al. [15].”     and
  • the actual reference to Jones et al. within the article’s reference list, e.g. “[15] Jones A, Bloggs B and Smith C (2008). Title. JournalName 14: 132-134. doi:*****.”

Thus, in an e-mail I wrote on 27 April, where I said

“Excellent, but what we really want is for the data citations to be included in the reference list along with the bibliographic citations, following the DataCite model: Creator (PublicationYear): Title. Version. Publisher. ResourceType. Identifier “

. . . I should also have stressed the need for explicit in-text citations that denote such references.

All that is explained within the Google Docs paper.  In that paper I also proposed having a separate Data Resources section within the body text of a journal article, in which data resource citations can be gathered.  That does not preclude these resources also being cited, where appropriate, within the Methods and Materials or Results sections of the paper, but is designed to put data resource citations “on the map”, so to speak, as important new publication performative acts.

It is not appropriate, in my mind, for data citations to be included in the Acknowledgements section of a paper, which is designed for acknowledging contributions to the work from people and funding agencies, even if Thomson Reuters has developed methods to parse such entries, since Thomson Reuters also has well-established mechanisms for harvesting proper (data) references from the reference list.

All the ontological terms required to mark up in-text reference pointers and their textual contexts, references, reference lists, etc., to permit automated detection and harvesting of data citations and references, are available as RDF within the SPAR (Semantic Publishing and Referencing) Ontologies (http://purl.org/spar/), which were designed precisely to facilitate such work.

Since writing my Data Citation Best Practice Discussion Document, I was invited (on a purely voluntary non-commercial basis, I should add!) to work with Pensoft Journals, a publisher that specialises in publishing biodiversity and biological systematics papers, and that has taken the lead in promoting the publication of datasets with DOIs, to contribute to and help revise their now-published Data Publishing Policies and Guidelines for Biodiversity Data [4].  This 34-page paper has a three-page section on how to cite data in Pensoft Journals, which I discuss in the next blog post, and which I am pleased to say includes all the recommendations discussed above.

[1]     David Shotton (2011) Data Citation Best Practice Discussion Document. Google Docs. https://docs.google.com/document/d/1kF8-faB72l4dKTLEyx6Z5cIabk68GrJ9GraCtWnK0qQ/edit?hl=en_GB&authkey=CPPW46wL#.

[2]    The DataCite Metadata Kernel version 2.0 (2011). http://datacite.org/schema/DataCite-MetadataKernel_v2.0.pdf.

[3]    Micah Altman and Gary King (2007). A proposed standard for the scholarly citation of quantitative data. D-Lib Magazine. 13. http://www.dlib.org/dlib/march07/altman/03altman.html.

[4]     Penev L, Mietchen D, Chavan V, Hagedorn G, Remsen D, Smith V, Shotton D (2011). Pensoft Data Publishing Policies and Guidelines for Biodiversity Data. Pensoft Publishers, http://www.pensoft.net/J_FILES/Pensoft_Data_Publishing_Policies_and_Guidelines.pdf.

Questions of granularity – Dryad’s use of DataCite DOIs for data citation, and the Annotation Ontology

DataCite is an international organisation, founded in 2009, which promotes the use of DOIs (Digital Object Identifiers) for published datasets, in order to establish easier access to research data, to increase acceptance of research data as legitimate contributions in the scholarly record, and to support data archiving to permit results to be verified and re-purposed for future study.

Its founding members were the British Library; the Technical Information Center of Denmark; TU Delft Library; the National Research Council’s Canada Institute for Scientific and Technical Information (NRC-CISTI); California Digital Library; Purdue University; and the German National Library of Science and Technology. Since its foundation, it has been joined by several other leading organisations from around the world, and it therefore provides a stable basis for the ongoing use of DOIs for data.

This recent availability of DOIs from DataCite for the identification of data entities has made all the difference to data repositories wishing to give unique global identifiers to their data holdings, since DOIs are widely recognised and respected throughout the academic world, because of their widespread prior use for identifying journal articles, made possible by CrossRef.

However, in their recent discussion paper Data Citation and Linking, published on 8th June 2011, Alex Ball and Monica Duke of UKOLN at the University of Bath ask:

“At what granularity should data be made citable? If single datasets are given identifiers, what about collections of datasets, or subsets of data?”

Individual data files and metadata documents will, of course, have their own unique internal identifiers within any data repository, but may not have externally resolvable identifiers such as DOIs.  Practice varies.

This post explains how DOIs are employed in the Dryad Data Repository, which specializes in publishing data linked to peer-reviewed biological journal articles, since Dryad’s scheme is both elegant and addresses at least some of the issues raised by Alex and Monica.

The Dryad DOI usage policy is described at https://www.nescent.org/wg_dryad/DOI_Usage, and involves assigning unique DOIs to each version of every data package, and to each version of every data file, in a principled and easy-to-understand manner. In summary:

  • Each data package is given a DataCite DOI, which can be versioned by adding “.2”, “.3”, etc. after the original DOI to create new DOIs for new versions of the same data package.
  • Within each data package, each data file has a unique DOI defined by suffixing the data package DOI with “/1”, “/2”, etc., with versions indicated as for data packages.

Thus the third version of the second data file in the second version of a Dryad data package would have a DOI of the form doi:10.5061/dryad.1234.2/2.3.
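The scheme is regular enough to be constructed mechanically. A minimal sketch follows, for illustration only, since Dryad itself mints these identifiers:

    def dryad_file_doi(package_doi, file_number, package_version=1, file_version=1):
        """Construct a Dryad-style data file DOI from its package DOI,
        following the versioning scheme described above. Illustrative only."""
        doi = package_doi
        if package_version > 1:
            doi += f'.{package_version}'    # version suffix on the package DOI
        doi += f'/{file_number}'            # file suffix within the package
        if file_version > 1:
            doi += f'.{file_version}'       # version suffix on the file DOI
        return doi

    # Third version of the second data file in the second version of a package:
    print(dryad_file_doi('doi:10.5061/dryad.1234', 2, package_version=2, file_version=3))
    # -> doi:10.5061/dryad.1234.2/2.3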

One might argue that it would result in an awfully large number of DOIs if a single data package was made up of thousands of data files. True, but numbers themselves are limitless and free, and the cost of a DataCite DOI is small relative to the cost of data creation and preservation. The real problem at present is lack of identifiable, citable data entities within repositories – to have so many that the cost of DOIs becomes an issue should be regarded as an achievement, not a problem!

Dryad does not have a mechanism for assigning identifiers to a portion of a data file (“a subset of data”), and DOIs are probably not the correct identifiers for that purpose, since they are primarily designed for citation and resource discovery.

A more appropriate method for identifying portions of a data file, or of any other digital object or document, is to use the Annotation Ontology (AO) developed by Paolo Ciccarese of Harvard University, described at http://code.google.com/p/annotation-ontology/wiki/Homepage. AO can be used to identify and annotate portions of a wide variety of resources such as HTML, PDF, Word, Excel, XML documents, images, videos, databases, web services, experimental data and metadata files. Paolo is currently working with a group in Harvard that focuses on biodiversity, who are using AO to address databases and data, and he anticipates publishing version 2.0 of AO in September.