
IBRG projects to facilitate data publication and data citation

In the previous post, I outlined reasons why researchers don’t publish data, presented as evidence to the Royal Society’s Policy Study “Science as a Public Enterprise” Call for Evidence.  Here, I summarize activities by members of my Image Bioinformatics Research Group (IBRG) at Oxford University to facilitate data publication and data citation, and thus to help catalyze a cultural shift to a situation in which data publication is as natural a part of research life as is undertaking experiments.


Data management services and data repositories

We are developing tools and services to assist researchers in their local data management, for their own personal benefit, while facilitating automated data submission to appropriate institutional or subject-specific data repositories, in ways that fit with their normal working practices and impose as little as possible in terms of cognitive overhead – what we term sheer curation.  These include the two-stage data management services we are currently funded to develop by the University Modernization Fund through the JISC DataFlow Project, namely (a) DataStage, a private local data management file system, with automated backup, Web access, and security access control, for use by individual research groups, and (b) DataBank, a cloud-deployable data repository for use by universities, research institutes or large research consortia.  These open source services will be made available for installation by third parties on the Eduserv academic cloud and elsewhere, as required by research groups, institutions and universities both in the UK and internationally.  We seek early adopters!

Curation by addition

For automated data submissions from DataStage to DataBank, which will use the SWORDv2 repository submission protocol to standardize data package ingest, we are intentionally lowering the barriers in terms of metadata requirements for initial data submission, with the possibility of enriching the metadata at a later date – what we call curation by addition – in order to kick-start the cultural sea change required for data deposition to become routine.  We are trying to avoid the best – the requirement for perfect and complete metadata – becoming the enemy of the good – data publication by any means.

Dryad

We are, through the JISC Dryad-UK Project, working to promote the Dryad Data Repository, a domain-specific repository for biological datasets linked to peer-reviewed journal articles, by bringing additional publishers and journals on board, and enabling Dryad metadata to be published as open linked data.

SWORD

We are also promoting the adoption of the SWORDv2 repository communication protocol for data package wrapping, to permit automated deposit to DataBank, Dryad or other SWORD-compliant repositories, and the exchange of metadata between them.
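To make the deposit mechanics concrete, here is a minimal sketch of a SWORDv2 binary package deposit at the HTTP level, written in Python with the requests library. The endpoint URL, file name and credentials are hypothetical placeholders, and the packaging formats a given repository accepts may differ.

```python
# Minimal sketch of a SWORDv2 package deposit; endpoint, file and credentials are placeholders.
import requests

COLLECTION_URL = "https://databank.example.org/swordv2/collection/my-group"  # hypothetical

with open("dataset-package.zip", "rb") as package:
    response = requests.post(
        COLLECTION_URL,
        data=package,
        headers={
            "Content-Type": "application/zip",
            "Content-Disposition": "attachment; filename=dataset-package.zip",
            "Packaging": "http://purl.org/net/sword/package/SimpleZip",
            "In-Progress": "true",  # metadata can be enriched later ("curation by addition")
        },
        auth=("depositor", "secret"),  # placeholder credentials
    )

response.raise_for_status()
print("Deposit receipt at:", response.headers.get("Location"))
```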

SPAR (Semantic Publishing and Referencing) Ontologies

To enable Dryad, DataBank and similar repository metadata to be published as open linked data, we are creating appropriate data description and data citation ontologies, including FaBiO and CiTO4Data, as part of our suite of SPAR Ontologies, and are using them to provide mappings from the DataCite XML Metadata Kernel to RDF.
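As a flavour of the RDF that such mappings produce, the sketch below uses rdflib with the FaBiO and CiTO namespaces to describe a dataset and a journal article that cites it as a data source. It is an illustration only, not the project's actual DataCite-to-RDF mapping, and the DOIs are placeholders.

```python
# Illustrative only: describing a dataset with FaBiO and linking an article to it with CiTO.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

FABIO = Namespace("http://purl.org/spar/fabio/")
CITO = Namespace("http://purl.org/spar/cito/")

g = Graph()
g.bind("fabio", FABIO)
g.bind("cito", CITO)
g.bind("dcterms", DCTERMS)

dataset = URIRef("http://dx.doi.org/10.5061/dryad.example")            # placeholder dataset DOI
article = URIRef("http://dx.doi.org/10.1371/journal.example.0000001")  # placeholder article DOI

g.add((dataset, RDF.type, FABIO.Dataset))
g.add((dataset, DCTERMS.title, Literal("Example dataset title")))
g.add((article, RDF.type, FABIO.JournalArticle))
g.add((article, CITO.citesAsDataSource, dataset))  # a data-citation property from CiTO

print(g.serialize(format="turtle"))
```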

Data citation

We are working with DataCite to assign DOIs to Dryad and DataBank datasets, so that data publications become citable, gaining academic credit for the data depositor.

These data citations, when they exist, will fit naturally within the Open Citations Corpus, a collection of some 3.4 million bibliographic citations from within PubMed Central that we have recently established as open linked data, as part of the JISC Open Citations Project.

We have also worked to establish best practice for citing data publications from within the literature, and with one open access journal publisher to influence their Data Publishing Policies and Guidelines to Authors regarding data citation, as detailed in earlier posts on this blog.

Tools for metadata curation

The above tools and services are generic.  Specifically in the biomedical area, we are developing MIIDI, a Minimal Information standard for reporting an Infectious Disease Investigation, which specifies the metadata that should, for completeness, accompany such an investigation.  We have recently developed MIIDI Forms, a web tool that facilitates the entry of such metadata: it interacts with appropriate web services to enable autocompletion of bibliographic information and specification of geo-coordinates for place names, and permits automated look-up of ontology terms from the NCBI BioPortal.

Open Research Reports

We are working to create Open Research Reports, open access structured digital abstracts in both human- and machine-readable form that describe datasets or journal articles relating to infectious disease.  These will be based on MIIDI and published in an instant data journal format, with DOIs to permit referencing and citation.

Tools for creating data management plans

We have recently started working with the Digital Curation Centre to help improve their DMPonline data management planning tool for creating the data management plans increasingly required to accompany grant applications, and useful for managing the flow of data from funded projects.  If our current funding application is successful, this work will be carried forward in the OXFORD DMPonline Project, in which, in addition to adoption, adaption, customization and integration of the tool for use by University of Oxford researchers, we will develop the following generic improvements to the tool that will be fed back to the DCC as open source enhancements for general use across UK academia and internationally:

a) creation of DaMO, a simple data management ontology,

b) use of DaMO to create RDF metadata for data management plans,

c) SWORDv2-wrapping of data management plans for repository submission, and

d) creation of DMPBank, a DataBank instance specifically tailored for archiving and publishing data management plans.

Like a kid with a new train set! Exploring citation networks

As part of the Open Citations Project, Alex Dutton recently completed a graphing plug-in for the Open Citations web site that lets users generate different kinds of citation network graphs by querying the Open Citations Corpus for a particular article, displaying the network of papers citing that article (input citations), the papers cited by that article (output citations), or both.  These can be displayed on screen in the web browser in a variety of layouts, or conveniently downloaded in a number of useful formats.
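For anyone who wants to play with the same idea offline, the conceptual sketch below (not the plug-in's actual code) shows how input and output citations fall out of a directed graph of citing/cited pairs, here using the networkx library and made-up identifiers.

```python
# Conceptual sketch: input citations are predecessors, output citations are successors.
import networkx as nx

citations = [  # (citing, cited) pairs; identifiers are purely illustrative
    ("A", "X"), ("B", "X"), ("X", "C"), ("X", "D"), ("B", "D"),
]

g = nx.DiGraph(citations)

selected = "X"
print("Input citations (papers citing X): ", list(g.predecessors(selected)))
print("Output citations (papers cited by X):", list(g.successors(selected)))
```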

THIS IS SOOOOO COOL!

Having survived the preparation and posting of the JISC Open Citations Project Final Blog Post last night, minutes before the midnight deadline, I’m now like a kid with a new train set, playing with this display tool and exploring the citation networks present in the Open Citation Corpus, something I have dreamed of doing for two years now.

Remember first that in the Open Citations Corpus we have some 200,000 citing articles – those within the Open Access Subset (OASS) of PubMed Central – citing ~3.4 million papers out there in the big wide world, which are only recipients of citations.  The consequence of this limited corpus is that the majority of citation chains are of length one – from a paper in the OASS to a paper outside the OASS.  Not very interesting.  Add to this the fact that PubMed Central is new – over 90% of the papers in the Open Access Subset were published in the 21st century, and 77% of them in the last 5 years.  Thus there are only a very few citations from articles within the OASS to other articles within the OASS.  That means that the maximum length of our citation chains is, at present, three or four links on the input side – a selected article may be cited by a chain of three or four other OASS articles – and three or four links on the output side – the selected article may cite other OASS articles in addition to non-OASS articles, and these in turn will cite others.  In most cases, however, the citation chains are much shorter.

Figure 1. A simple citation network with an input citation chain length of 2 links within the Open Citations Corpus and an output chain length of 1 link – the selected article (red) receives citations from other OASS articles (green), and itself cites only articles outside the OASS (white).

Let’s start with something familiar – the article in PLoS Neglected Tropical Diseases by Reis et al. (2008) [1] that I used for our semantic publishing exemplar [2].  Its inward citation graph, limited to a citation chain length of two links, created by and copied from the Open Citations Project web site, looks like this:

Figure 2. The input citation network of Reis et al. (2008), limited to a citation chain length of 2 links.

I, of course, cited the Reis et al. (2008) paper [1] in our 2009 Adventures paper [2] that we based upon it, and also in my first paper on CiTO in 2010 [3], which also cites the Adventures paper.   Reis et al. (2008) is also cited by Fink et al. (2010) [4], who also cited our Adventures paper, and by Bourhy et al. (2010) [5], another PLoS Neglected Tropical Diseases paper in the OASS, which in turn is cited by Galloway and Levett (2010) [6], while our Adventures paper is also cited by Gerner and Nenadic (2010) [7].

The following image shows this graph as it was originally created within the Open Citations web page:

Figure 3. The same input citation network of Reis et al. (2008), as shown on the Open Citations web site.

Since Reis et al. (2008) has a reference list containing 52 references, its output citation graph is much more complex, even when limited to a citation chain length of 2, since several of its cited papers are also members of the Open Access Subset.  The following figure shows the whole output citation network at a citation chain length of 2, which is shown at too small a magnification to be legible.

Figure 4. The outward citation network of Reis et al. (2008), limited to a citation chain length of 2 links.

The next figure shows a close-up of part of the previous diagram – the output citation network of Reis et al. (2008), again showing the Reis et al. (2008) paper in red, and one of the key papers it cites, Maciel et al. (2008) [8], a slightly earlier paper from the same research group, forming a second key node in the top right of the diagram.

Figure 5. A close-up of a central portion of the outward citation network of Reis et al. (2008), limited to a citation chain length of 2 links.

Clearly, there is a lot of information that can be extracted from these graphs, particularly when we display them in a tool like GraphViz that permits interaction with the data.  While the Open Citations web site simply displays such citation graphs created using one of several layout algorithms selected by the user, the raw data can also be downloaded in a variety of formats including GraphViz, GraphML and SVG, the resulting network images can be downloaded as PNG, JPEG and PDF images, and the underlying RDF metadata can be downloaded as RDF/XML, N-Triples, Notation3 or Turtle.
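As a small example of reusing the downloaded data, the RDF serialization for a network can be loaded with rdflib and the cito:cites links walked directly; the file name below is hypothetical.

```python
# Sketch: list the citing/cited pairs in a downloaded citation-network RDF file.
from rdflib import Graph, Namespace

CITO = Namespace("http://purl.org/spar/cito/")

g = Graph()
g.parse("reis-2008-citation-network.nt", format="nt")  # hypothetical downloaded file

for citing, cited in g.subject_objects(CITO.cites):
    print(citing, "->", cited)

print(len(g), "triples in the downloaded network")
```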

Having used our new Open Citations web site and its network display interface for a short while, I am already aware of many shortcomings and limitations that we will attempt to improve upon in the next few days.  However, we would very much like to hear from you – as a user of the Open Citations web site – both to learn what you like about what we have done and to hear what shortcomings you find in the functionality, as well as any new features you would like to see implemented, which we will record as user stories for our next round of development.  These can either be posted as comments on this blog post, or e-mailed with the subject line “Open Citations web site” either to me <david.shotton@zoo.ox.ac.uk> or to Alex Dutton <Alexander.dutton@zoo.ox.ac.uk>, who deserves all the credit for the present system.  We look forward to hearing from you.

[1]  Reis RB, Ribeiro GS, Felzemburgh RDM, Santana FS, Mohr S, Melendez SXTO, Queiroz A, Santos AC, Ravines RR, Tassinari WS, Carvalho MS, Reis MG, Ko AI (2008). Impact of environment and social gradient on Leptospira infection in urban slums. PLoS Negl Trop Dis 2(4): e228. doi:10.1371/journal.pntd.0000228.

[2] Shotton D, Portwin K, Klyne G, Miles A (2009). Adventures in semantic publishing: exemplar semantic enhancements of a research article. PLoS Comput Biol 5:e1000361. doi:10.1371/journal.pcbi.1000361.

[3] Shotton D (2010). CiTO, the Citation Typing Ontology. Journal of Biomedical Semantics  1 (Suppl. 1): S6. doi:10.1186/2041-1480-1-S1-S6.

[4]  Fink JL, Fernicola P, Chandran R, Parastatidis S, Wade A, Naim O, Quinn GB, Bourne PE (2010). Word add-in for ontology recognition: semantic enrichment of scientific literature.  BMC Bioinformatics 11:103. doi:10.1186/1471-2105-11-103.

[5]  Bourhy P, Collet L, Clément S, Huerre M, Ave P, Giry C, Pettinelli F, Picardeau M (2010). Isolation and Characterization of New Leptospira Genotypes from Patients in Mayotte (Indian Ocean). PLoS Negl Trop Dis 4(6): e724. doi:10.1371/journal.pntd.0000724.

[6]  Galloway RL, Levett PN (2010) Application and Validation of PFGE for Serovar Identification of Leptospira Clinical Isolates. PLoS Negl Trop Dis 4(9): e824. doi:10.1371/journal.pntd.0000824.

[7]  Gerner M, Nenadic G (2010). LINNAEUS: A species name identification system for biomedical literature. BMC Bioinformatics 11:85. doi:10.1186/1471-2105-11-85.

[8]  Maciel EAP, Carvalho ALF, Nascimento SF, Matos RB, Gouveia EL, Reis MG, Ko AI (2008). Household transmission of Leptospira infection in urban slum communities. PLoS Negl Trop Dis 2: e154. doi:10.1371/journal.pntd.0000154.


JISC Open Citations Project – Final Project Blog Post

Executive summary

Introduction

To general readers of this blog, this post will appear different from normal posts. Rather than being about a particular topic, it pulls together a summary of the work undertaken over the past year within the Open Citations Project supported by the JISC, and is primarily intended to assist JISC evaluation of the project and its outputs. Details of the work undertaken and the outputs from this project have mostly been described in previous blog posts, to which this post will frequently refer.

Project scope and purpose

The Open Citations Project is global in scope, designed to change the face of scientific publishing and scholarly communication. Specifically, it aims to make it possible to publish bibliographic information in RDF and to make citation links as easy to traverse as Web links.

Project aims

To achieve this goal, we have had four primary aims:

  • To create a semantic infrastructure that makes possible the description of citations, references and bibliographic entities in RDF, since we found existing ontologies inadequate for our purpose.
  • To extend that semantic infrastructure to handle data citations and data entities, as well as bibliographic citations and bibliographic entities, mindful of Philip Bourne’s prediction that soon there will be no meaningful difference between a journal article and a database entry.
  • To provide exemplars of how these ontologies can be applied to real-world data, by creating mappings from existing encodings to RDF, and by creating RDF metadata relating to bibliographic and data entities and their citations.
  • To convert the reference lists within all the PMC Open Access subset articles to RDF, and to publish them as open linked data that third parties can use in novel ways.

Principal deliverables and outputs

  • The SPAR (Semantic Publishing and Referencing) Ontologies.
  • Graffoo and LODE, two novel tools for ontology visualization and documentation.
  • Mappings of various existing metadata schemes to RDF using SPAR.
  • Development of data citation methods and protocols.
  • The Open Citations Blog in which activities and outputs are described.
  • The Open Citations Corpus of bibliographic citation data encoded in RDF and published as Open Linked Data.
  • The OpenCitations.net web site, to provide user access to the Open Citations Corpus.
  • The Open Citations Project software used for processing the PubMed Central Open Access corpus into Open Linked Data.

The net result is open citation data from life science journal articles available on the web, for utilization by academics, for citation network analysis applications, and for tracking the impact of research grant funding.

Primary beneficiaries

  • Scholars worldwide, particularly in the biomedical sciences, by providing better access to bibliographic and citation data.
  • Academic publishers and repository managers, by providing a semantic infrastructure and tools to enable their outputs and holdings to join the semantic web of open linked data.

Background

In 2008, Katie Portwin and I had an enjoyable summer ‘souping up’ a PLoS Neglected Tropical Diseases article by Reis et al. (2008) [1] that I had downloaded as an XML file from the journal web site in late April, one week after it had been published. The resulting enhanced publication, available here, became an exemplar of what is possible in the realm of semantic publishing, while undertaking that work was very influential in shaping the course of my more recent activities.

One of the things we undertook was to mark up the reference list with annotations that clarified the nature of each cited entity (e.g. book, journal article, medical report) and the reason the authors had cited those entities (used data from, obtained background from, extended, etc.) – annotations that we took care to verify with the authors themselves before publishing them!

While we undertook that work manually, it quickly became apparent that what we needed was an ontology from which a controlled vocabulary of such terms could be used to create both human- and machine-readable metadata describing the citations and the cited entities. We therefore developed a draft ontology that we subsequently split to form the basis of the first two ontologies of the suite of SPAR (Semantic Publishing and Referencing) Ontologies described elsewhere on this blog, namely CiTO, the Citation Typing Ontology to describe the relationships between the citing and cited entities, and FaBiO, the FRBR-aligned Bibliographic Ontology, to describe the cited entities themselves.

Using these tools, we were able to mark up the reference list from Reis et al., and publish it as RDF.
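By way of illustration, a typed citation of the kind described above might be encoded along the following lines with rdflib; the cited-entity URIs are placeholders, and this is a sketch rather than the markup we actually published.

```python
# Sketch of CiTO-typed citations, stating why each work is cited.
from rdflib import Graph, Namespace, URIRef

CITO = Namespace("http://purl.org/spar/cito/")

g = Graph()
g.bind("cito", CITO)

citing = URIRef("http://dx.doi.org/10.1371/journal.pntd.0000228")  # Reis et al. (2008)
cited_a = URIRef("http://example.org/cited-work-1")  # placeholder cited entities
cited_b = URIRef("http://example.org/cited-work-2")

g.add((citing, CITO.usesDataFrom, cited_a))           # "used data from"
g.add((citing, CITO.obtainsBackgroundFrom, cited_b))  # "obtained background from"

print(g.serialize(format="turtle"))
```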

From there it was but a small step to dream of the day when the references from all biomedical research articles would be published as open linked data, and to think what we could do to make that dream a reality.

And it was obvious where to start – with the Open Access subset of journal articles available in PubMed Central (PMC), all nicely marked up in XML using the National Library of Medicine DTD.

Table of contents

The various aspects of the Open Citations Project and its outputs are described in the following blog posts, complete with diagrams, data tables, and screen shots where appropriate. These are organized into the following set of distinct topics:

  • Standards
  • The SPAR ontologies for bibliographic and data entities and their citations
  • Graffoo and LODE – tools for ontology visualization and documentation
  • Third-party applications of our ontologies
  • Mappings to the SPAR ontologies, and exemplar RDF encodings
  • Development of data citation methods and protocols
  • The creation of the Open Citation Corpus of linked bibliographic citation data

1 Standards

Advantages of Ontological Standards in Scholarly Publishing

Nomenclature for citations and references

2 The SPAR ontologies for categorizing bibliographic and data entities and their citations

The SPAR ontologies described in the following blog posts were developed jointly with Silvio Peroni, a brilliant graduate student from the University of Bologna, who spent the last six months of 2010 working with me as an intern in Oxford, where he became an honorary member of the Open Citations Project, contributing very significantly to our achievements. His supervisor Fabio Vitali and the Department of Information Science at the University of Bologna are to be congratulated and thanked for their enlightened requirement that all their graduate students spend an internship overseas, since without his collaboration and great skill, much of this development would not have been possible within the available time, if at all.

Introducing the Semantic Publishing and Referencing (SPAR) Ontologies

New web site for the SPAR ontologies

Functional clustering of CiTO properties

Extending FRBR within FaBiO

Categorising bibliographic resources with FaBiO and SKOS

CiTO4Data – a new data-centric citation typing ontology

Using FaBiO to describe data entities

3 Graffoo and LODE – tools for ontology visualization and documentation

These tools have been developed by Silvio Peroni, an honorary member of the Open Citations Project, as explained above.

Graffoo, a Graphical Framework for OWL Ontologies

Using LODE for ontology visualization

4  Third-party applications of our ontologies

Our work to develop a standard semantic infrastructure for bibliographic and data entities and their citations is new. Nevertheless, we have received encouraging responses when we have presented it at international publishing venues such as the 2010 ALPSP Conference and the 2010 STM Innovation Conference.

Apart from local applications at the University of Oxford, the University of Bologna, and the University of Manchester (for the Utopia Project), and adoption of the SPAR ontologies by Harvard University both to complement SWAN (Semantic Web Applications in Neuromedicine) and to mark up astrophysics data (Accomazzi and Dave (2011) Semantic Interlinking of Resources in the Virtual Observatory Era. arXiv:1103.5958), we have expressions of interest from PLoS, Nature and Il Mulino, a major academic publisher in Italy, who are looking to improve their metadata encoding as RDF. We are also interacting with the British Library in mapping the DataCite Metadata Kernel to RDF (see below), and with the Dryad Data Repository in mapping Dryad metadata to RDF and, as part of the JISC Dryad-UK Project, in developing MIIDI and MIIDI-structured RDF metadata for infectious disease papers and datasets, using SPAR ontologies where appropriate, with the aim of permitting authors to submit rich metadata to Dryad.

The following blog posts describe uptake and use of CiTO in CiteULike and WordPress.

Use of CiTO in CiteULike

How to employ CiTO in CiteULike

Using CiTO in WordPress

5 Mappings to the SPAR ontologies, and exemplar RDF encodings

Comparison of BIBO and FaBiO

BIBO2SPAR, an RDF Mapping of BIBO to the SPAR Ontologies

DataCite2RDF – Mapping DataCite Metadata Scheme Terms to ontologies

6 Development of data citation methods and protocols

Nomenclature for data publications and citations

Questions of granularity – Dryad’s use of DataCite DOIs for data citation

How to cite data

Pensoft Journals policy and author guidelines on data publication and citation

7 The creation of the Open Citation Corpus of linked bibliographic citation data

This achievement is almost entirely the result of the excellent work of our chief data wrangler Alex Dutton, whose skill and natural feel for linked data have done wonders for this project.

The following blog posts describe the starting corpus from PubMed Central, our transformation of it to RDF, the problems we encountered along the way, the resulting Open Citations Corpus, and the potential uses to which the resulting open citation data can now be put.

Input data for Open Citations – the PMC Open Access Subset

Garbage in, garbage out – problems with bibliographic references

Who wrote this paper? Author list problems in PubMed Central references

Citation correction methods

The citation processing pipeline and the Open Citations Corpus

JISC Open Citations Project web site

Like a kid with a new train set! Exploring citation networks


JISC Administrative Data for Open Citations Project

This information, extracted from the JISC Expo DOAP (Description of a Project) spreadsheet,  is to be found in a separate blog post, here.

The Future

While this is the formal Final Blog Post for the JISC-funded Open Citations Project, which was funded for a year from 1st July 2010, our work is not yet finished. We cherish grand ideas for the liberation of the reference lists of all scholarly journal articles, using the Open Citations Corpus as an exemplar, in collaboration with publishers and organizations such as CrossRef who handle such citation data on behalf of publishers on a daily basis.

This work will only be finished when it is no longer up to an individual academic research group to take on the task of citation liberation, but when each publisher publishes the citation data from each of their journal articles as open linked data on their own web sites, marked up using the agreed ontological standards that we have proposed, freely available for scholars around the world, from Bangladesh to Zimbabwe, and from Holland to New Zealand, to use and explore, independent of their ability to afford subscription access to the journal articles from which the citations are made.

The citation processing pipeline and the Open Citations Corpus

The input PubMed Central Open Access subset XML reference data, our starting corpus, were transformed into Open Citations RDF in multiple stages:

  1. The original XML was first transformed into an intermediate form using XSLT. This made it easier to handle the multitudinous ways different publishers have developed of encoding the same information, by generating an intermediate XML output dataset in which things are described in a more consistent manner, and enabled the resulting information to be parsed more easily from within a non-XML-based programming environment. Our transform pulled out information about articles, people, organisations, in-text reference pointers, and the reference list, and the links between them.
  2. The intermediate XML dataset was then transformed into BibJSON using a Python script. BibJSON is a relatively standard method of encoding bibliographic information. Each of the ~200,000 generated BibJSON datasets contains the information extracted from one marked-up Open Access article. We extended the standard BibJSON records with additional attributes (named with an ‘x-‘ prefix) for other properties we wish later to encode as RDF. At this and later stages, the BibJSON datasets are packed into a single gzipped tarball. Since unpacking such a tarball into ~200,000 independent files would give data management problems, the contents are extracted from the tarball as required using the Python tarfile module.
  3. Another Python script was then used to extract all the PubMed IDs, and to use these as inputs to the Entrez API, in order to extract independent information about the cited entities from the PubMed database. The returned PubMed records were then added alongside the original BibJSON records. These additional data were extremely useful for comparison when attempting to spot erroneous citations, as previously described.
  4. Next we ran a ‘sanitization script’ over the data, which performed the following functions:
     1. URL normalization (e.g. adding URL schemes, undoing character substitutions such as en-dashes for hyphens and quotation marks for apostrophes).
     2. Splitting issue information from journal attributes.
     3. Fixing malformed DOIs (e.g. those missing the ’10.’ prefix). Where DOIs could not be fixed they were removed.
     4. Pulling “doi:****” DOIs out of “http://dx.doi.org/****” URLs.
     5. Removing spurious publication dates (those before 1900 and after 2011).

These corrections are easily extensible if we discover other classes of error in the data; a simplified sketch of two of them follows.
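The sketch is illustrative only and assumes plain string inputs; the project's actual sanitization scripts (available on GitHub) differ in detail.

```python
# Illustrative sanitization helpers: DOI repair/extraction and date filtering.
import re

def fix_doi(raw):
    """Return a cleaned DOI, or None if it cannot be repaired."""
    if not raw:
        return None
    # Pull a "10.xxxx/..." DOI out of a bare string or a dx.doi.org URL.
    match = re.search(r"(10\.\d{4,9}/\S+)", raw.strip())
    return match.group(1) if match else None  # unfixable DOIs are dropped

def plausible_year(year):
    """Discard spurious publication dates (before 1900 or after 2011)."""
    return year is not None and 1900 <= year <= 2011

assert fix_doi("http://dx.doi.org/10.1371/journal.pntd.0000228") == "10.1371/journal.pntd.0000228"
assert fix_doi("1371/journal.pntd.0000228") is None  # missing '10.' prefix: cannot repair safely
assert not plausible_year(1899)
```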

  5. The records were next unified by taking the transitive closure on a number of identifiers. These identifiers included DOIs, PubMed IDs, PubMed Central IDs and URLs for articles and other cited works, and ISSNs, eISSNs and ISO title abbreviations for journals. (A sketch of this unification step is given after this list.)
  6. The BibJSON data were then rearranged so that each dataset contains multiple records believed to reference the same bibliographic entity, if it had multiple citations.
  7. Owing to mis-citation (in this case, the use of incorrect or incomplete identifiers), there were a number of clearly different records that had been mistakenly declared to refer to the same entity. For this reason we used a distance metric to recluster record groups based on similarity.
  8. Finally, a Python script transforms the BibJSON tarball into RDF. The input tarball contains datasets, each of which comprises records believed to refer to the same entity. The script takes each of these datasets and merges them into a single ‘best’ record using the majority vote procedure previously described. The resultant record is then transformed into a number of quads for inclusion in the final RDF N-Quads Open Citations Corpus, principally modelled using the suite of SPAR ontologies created for this purpose.
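The unification step can be pictured as a union-find (transitive closure) over shared identifiers. The following is a simplified sketch with made-up records, not the project's production code.

```python
# Sketch: group records that share any identifier, by transitive closure.
def unify(records):
    """records: list of dicts, each with an 'ids' set (DOIs, PubMed IDs, URLs, ...).
    Returns groups of record indices believed to describe the same entity."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    first_seen = {}  # identifier -> first record index that carried it
    for idx, rec in enumerate(records):
        for identifier in rec["ids"]:
            if identifier in first_seen:
                union(idx, first_seen[identifier])  # shared identifier => same entity
            else:
                first_seen[identifier] = idx

    groups = {}
    for idx in range(len(records)):
        groups.setdefault(find(idx), []).append(idx)
    return list(groups.values())

records = [
    {"ids": {"doi:10.1371/x", "pmid:111"}},
    {"ids": {"pmid:111", "pmcid:PMC9"}},  # linked to the first record via the PubMed ID
    {"ids": {"doi:10.1186/y"}},
]
print(unify(records))  # -> [[0, 1], [2]]
```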

This Open Citations Corpus of RDF citation data extracted from the Open Access subset of PubMed Central, detailing every reference list in the OASS articles, holds each reference list as an individual named graph (hence the storage as N-Quads rather than triples), and comprises 236,499,781 quads occupying 2.1 gigabytes of storage in its compressed state. It includes references to ~20% of all post-1980 papers recorded in PubMed, including all the highly cited papers in every field of biomedical research, and is freely available under a CC0 waiver from http://opencitations.net/data/.

The Open Citations Corpus can be queried via a Web query form or via a SPARQL interface from the Open Citations Project web site at http://opencitations.net/, described in a subsequent blog post, where more information about the project is given.
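For example, a simple cito:cites query can be sent to the SPARQL interface over plain HTTP; the endpoint path below is an assumption, so check the web site for the actual address.

```python
# Sketch of a SPARQL query against the Open Citations triplestore (endpoint path assumed).
import requests

ENDPOINT = "http://opencitations.net/sparql"  # assumed location of the SPARQL interface

query = """
PREFIX cito: <http://purl.org/spar/cito/>
SELECT ?citing ?cited WHERE { ?citing cito:cites ?cited } LIMIT 10
"""

response = requests.get(ENDPOINT, params={"query": query},
                        headers={"Accept": "application/sparql-results+json"})
response.raise_for_status()
for row in response.json()["results"]["bindings"]:
    print(row["citing"]["value"], "->", row["cited"]["value"])
```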

All the scripts used to transform the OASS input data into the Open Citations Corpus, described above, are available under an MIT Open Source licence at https://github.com/opencitations/.

Citation correction methods

As previously described, the PubMed Central Open Access subset of journal articles yielded 6,529,815 independent bibliographic records of both citing and cited entities, while our use of the PubMed Entrez API provided a further 2,304,143 bibliographic records for the same cited entities. Before converting these references into RDF to create the Open Citations Corpus, we attempted to remove errors in the data.

Some of the references we collected were to highly cited papers, while 2,505,879 referenced papers were cited only once. Figure 1 shows the number of citations per paper for the 100 most highly cited papers in our records – the left-hand end of what is a classic long-tail dataset.

 Figure 1

We have not yet analysed the topics of these papers, but can reveal that the paper most highly cited from within the OASS, with 2150 citations, is

Altschul et al. (1997). Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res. 25(17): 3389-3402. doi:10.1093/nar/25.17.3389.

In an ideal world, all OASS references to an individual paper would be identical, and would exactly match the data on that paper extracted from the Entrez API. However, as we have already seen for author names, this is not the case. As with most datasets in the world, a significant proportion (~1%) of our input reference data is either incomplete or erroneous.

We attempted to correct these errors by comparing references that appeared to reference the same bibliographic entity, and from this comparison extracting the correct data for authors, title, etc., using the following rules:

  1. Accept the longest author list and names bearing accents over those lacking them.
  2. Accept DOIs and PubMed IDs from references having them, after eliminating mis-formed identifiers (e.g. DOIs lacking the journal prefix “10.****”), using a majority vote if different identifiers were given for the same paper.
  3. Accept those variants of titles, journal names, etc. held in common by the majority of references.

This voting method was weighted in favour of the data we judged to be most reliable, namely the PubMed records returned from the Entrez API and, for cited papers that were themselves within the OASS, the independent bibliographic records we held for those papers.
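In outline, the weighted vote over a single field might look like the sketch below; the weights and title variants are illustrative, not our actual code or data.

```python
# Sketch: weighted majority vote for choosing the 'best' value of one field.
from collections import Counter

def choose_value(candidates):
    """candidates: (value, weight) pairs from the various sources;
    higher weights are given to sources judged more reliable."""
    votes = Counter()
    for value, weight in candidates:
        if value:
            votes[value] += weight
    return votes.most_common(1)[0][0] if votes else None

title = choose_value([
    ("Impact of environment and social gradient on Leptospira infection in urban slums", 3),  # PubMed record
    ("Impact of environment and social gradient on Leptospira infection in urban slums", 1),  # OASS reference
    ("Impact of environment and social gradient on leptospira infection", 1),                 # truncated variant
])
print(title)
```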

As a result of these activities, not only did we coalesce the independent references from different OASS articles to the same multiply cited papers into a set of 3,578,598 unique bibliographic citation target records describing 204,637 OASS articles and 3,373,961 articles outside the OASS, but we were also able to select from the multiple references those elements (author list, title, etc.) that were judged to be correct for each target.

However, of the 2,505,879 papers that are only cited once from within the OASS, 1,246,967 lacked a PubMed ID, so for these we were unable to gather confirmatory evidence for the accuracy of the citation from the Entrez API. These references, which are to the least significant papers in the corpus, are therefore provided “as is” from PubMed Central, without any external corroboration of their accuracy.

How these error-correction processes fitted into the data processing pipeline used to create the Open Citations Corpus is described in the next blog post.

Input data for Open Citations – the PMC Open Access Subset

PubMed, created by the US National Library of Medicine in DATE, holds bibliographic records and abstracts for essentially all journal articles published in the biomedical sciences. It currently records almost a million new entries each year!

PubMed Central (PMC), created as an extension of PubMed, is designed to hold full text articles from among the PubMed entries. At present, PMC holds entries for ~9.3% of the papers indexed in PubMed published between 1980 and 2010, 1,428,675 out of a total of 15,319,102. Many of these PMC articles (192,452 for the years 1980 to 2010, ~13.5% of the PMC holdings) are truly Open Access articles that users can download and repurpose as they will. However, the majority are articles from subscription access journals deposited in PMC under licence agreements with funding agencies that, while providing read access to the full text, prevent readers from downloading the articles and from making derivative works.

The Open Citations Project has to date worked exclusively with the Open Access subset (OASS) of PMC. As of 24 January 2011, there were 204,637 OASS articles, including a few published before 1980. In almost all of these OASS articles, the reference lists were nicely marked up in NLM-DTD XML, making the task of identifying individual references straightforward. In a few cases, the articles were present as scanned page images, lacking any internal markup – those we were unable to process.
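As an indication of how this markup simplifies reference extraction, the simplified sketch below pulls PubMed IDs and article titles from the reference elements of an NLM-DTD XML file; element names vary between DTD versions, so treat it as illustrative.

```python
# Sketch: extract (PubMed ID, title) pairs from the <ref> elements of an NLM-DTD XML article.
import xml.etree.ElementTree as ET

def extract_references(path):
    tree = ET.parse(path)
    refs = []
    for ref in tree.iter("ref"):
        pmid = None
        for pub_id in ref.iter("pub-id"):
            if pub_id.get("pub-id-type") == "pmid":
                pmid = (pub_id.text or "").strip()
        title_el = ref.find(".//article-title")
        title = "".join(title_el.itertext()).strip() if title_el is not None else None
        refs.append({"pmid": pmid, "title": title})
    return refs

# refs = extract_references("PMC0000000.nxml")  # hypothetical OASS article file
```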

From the XML reference lists of these papers, we were able to identify and extract 6,325,178 individual references, which, together with the bibliographic information we had on the OASS articles themselves, gave us 6,529,815 independent bibliographic records of both citing and cited entities. As explained in the next blog post, these records showed varying degrees of completeness and accuracy.

Using the Entrez API, we were able to use PubMed IDs, where these were available in the references, to extract a further 2,304,143 bibliographic records from PubMed, which, in the ideal world, would each exactly duplicate the information we had previously obtained from the OASS bibliographic reference containing that PubMed ID. As we shall describe, these additional PubMed records proved exceptionally useful in correcting imperfect OASS references.
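The look-ups can be made with any Entrez client; the sketch below uses Biopython purely as an example (the PubMed ID shown is a placeholder), and is not necessarily how our own scripts were written.

```python
# Sketch: fetch PubMed records for a batch of PubMed IDs via the Entrez API.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI asks for a contact address

def fetch_pubmed_records(pmids):
    handle = Entrez.efetch(db="pubmed", id=",".join(pmids), retmode="xml")
    records = Entrez.read(handle)
    handle.close()
    return records["PubmedArticle"]

# for article in fetch_pubmed_records(["12345678"]):  # placeholder PubMed ID
#     print(article["MedlineCitation"]["Article"]["ArticleTitle"])
```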

Since the OASS articles cite papers outside the OASS, as well as a few within it, the majority of the bibliographic information we thus acquired related to papers represented within PubMed but not within PubMed Central. And because many OASS papers independently contained references to the most highly cited biomedical papers, many of our records were to the same bibliographic entities.

An important part of our data processing was thus to coalesce independent references from different OASS articles to the same multiply cited papers into a set of unique bibliographic records, each for one paper. Once this had been achieved, we were left with 3,578,598 unique bibliographic records, 204,637 describing the OASS articles themselves, and 3,373,961 describing articles outside the OASS, mostly from subscription-access journals.

The following table and figures tabulate and illustrate the number of papers in each category between 1980 and 2010 inclusive. The most striking thing about these data is that they show how, between these years, the relatively small number of articles in the Open Access subset of PMC (approx. 200,000 articles) referenced >20% of all PubMed papers published between 1950 and 2010 (approx. 15.3 million papers), and in doing so referenced all the most important highly cited papers in every field of biomedical endeavour. This inclusive coverage means that citation graphs created from the Open Citations dataset will capture all the important aspects of any field.

Table 1

Year           Number of papers
               PubMed         PMC           OASS        Cited by OASS
1950-1979      5,128,602      427,877       8,352       146,027
1980           278,069        23,218        631         15,708
1981           278,069        23,685        543         16,627
1982           292,219        25,215        740         18,389
1983           305,725        25,688        738         21,263
1984           314,737        26,316        543         23,249
1985           331,706        25,916        637         25,780
1986           345,501        26,721        590         28,761
1987           363,754        27,834        555         32,222
1988           381,976        28,802        442         36,320
1989           398,620        29,855        616         42,005
1990           398,620        30,143        704         48,422
1991           407,465        31,337        733         53,655
1992           412,457        32,325        719         61,091
1993           420,935        33,203        1,055       70,272
1994           431,160        33,456        1,279       80,206
1995           441,967        34,276        1,148       91,814
1996           452,218        34,755        1,155       101,853
1997           451,533        34,800        1,314       114,967
1998           469,466        36,179        1,341       131,510
1999           469,466        37,534        1,420       146,623
2000           528,243        39,047        1,608       170,330
2001           542,854        40,235        2,546       179,203
2002           560,006        43,265        3,199       195,879
2003           590,317        46,442        4,015       211,423
2004           634,432        51,416        6,005       229,423
2005           694,687        60,411        10,333      236,678
2006           740,007        72,295        14,264      238,387
2007           777,311        87,744        20,070      222,085
2008           824,612        120,004       31,416      190,071
2009           862,372        146,413       41,848      124,894
2010           918,598        120,145       40,245      27,877
Total          15,319,102     1,428,675     192,452     3,186,987
% of PubMed                   9.33%                     20.80%
% of PMC                                    13.47%

Figure 1

Figure 2

The OASS source data give the types of cited entity, aggregated after coalescing, shown in Figure 3.

Figure 3

JISC Open Citations: Aims, Objectives and Final Outputs

The Open Citations Project is global in scope, designed to change the face of scientific publishing.

It aims to make bibliographic citation links as easy to use as Web links. Its goals are three-fold:

  • To establish OpenCitations.net, a public RDF triplestore for biomedical literature citations.

(Note: In this context, a bibliographic citation is a reference within a particular citing work to another publication termed the cited work. This use of the word ‘citation’ should be clearly distinguished from the common related use of this word to indicate the cited work itself. Within this application, ‘cite’ and ‘citation’ denote the performative act of citation itself, not the target document of that citation.)

  • To harvest the reference lists from many current and recent open access journal articles, and to convert these datasets into RDF, starting with those in UK PubMed Central, those published by the Public Library of Science and BioMed Central, those from other publishers willing for CrossRef to release their data, and articles from other Open Access repositories like EPrints.
  • To publish these citation datasets as Open Linked Data on the Talis Connected Commons Platform under an open data license in both human- and computer-accessible formats.

As such, the Open Citations Project seeks to promote citation datasets as first-class information objects. The reference list from each article processed will be published as an individual named graph, with its own Digital Object Identifier (DOI) assigned by the relevant publisher via CrossRef or by the British Library on behalf of the DataCite Project.