
Input data for Open Citations – the PMC Open Access Subset

PubMed, created by the US National Library of Medicine in 1996, holds bibliographic records and abstracts for essentially all journal articles published in the biomedical sciences. It currently records almost a million new entries each year!

PubMed Central (PMC), created as an extension of PubMed, is designed to hold full-text articles from among the PubMed entries. At present, PMC holds entries for ~9.3% of the papers indexed in PubMed that were published between 1980 and 2010: 1,428,675 out of a total of 15,319,102. Many of these PMC articles (192,452 for the years 1980 to 2010, ~13.5% of the PMC holdings) are truly Open Access articles that users can download and repurpose as they wish. However, the majority are articles from subscription-access journals, deposited in PMC under licence agreements with funding agencies that, while providing read access to the full text, prevent readers from downloading the articles or making derivative works.

The Open Citations Project has to date worked exclusively with the Open Access subset (OASS) of PMC. As of 24 January 2011, there were 204,637 OASS articles, including a few published before 1980. In almost all of these OASS articles, the reference lists were nicely marked up in NLM-DTD XML, making the task of identifying individual references straightforward. In a few cases, the articles were present only as scanned page images lacking any internal markup; those we were unable to process.
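
To illustrate why such markup makes reference extraction straightforward, here is a minimal Python sketch using only the standard library. The element names (ref-list, ref, element-citation, mixed-citation, pub-id) follow the NLM DTD, but the markup in real OASS files varies considerably, so this is an illustrative simplification rather than our actual extraction pipeline.

    # Sketch: extract bibliographic references from an NLM-DTD XML article.
    # Element names follow the NLM Journal Archiving DTD; real OASS files
    # vary in detail and need many more cases handled.
    import xml.etree.ElementTree as ET

    def extract_references(xml_path):
        """Yield one dict of rough bibliographic fields per reference."""
        tree = ET.parse(xml_path)
        for ref in tree.findall(".//back/ref-list/ref"):
            # Articles variously use <element-citation>, <mixed-citation>,
            # or plain <citation> in older versions of the NLM DTD.
            cit = ref.find("element-citation")
            if cit is None:
                cit = ref.find("mixed-citation")
            if cit is None:
                cit = ref.find("citation")
            if cit is None:
                continue
            pmid = None
            for pid in cit.findall("pub-id"):
                if pid.get("pub-id-type") == "pmid":
                    pmid = pid.text
            yield {
                "id": ref.get("id"),
                "title": cit.findtext("article-title"),
                "journal": cit.findtext("source"),
                "year": cit.findtext("year"),
                "pmid": pmid,
            }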

From the XML reference lists of these papers, we were able to identify and extract 6,325,178 individual references, which, together with the bibliographic information we had on the OASS articles themselves, gave us 6,529,815 independent bibliographic records of both citing and cited entities. As explained in the next blog post, these records showed varying degrees of completeness and accuracy.

Where references included PubMed IDs, we used the Entrez API to extract a further 2,304,143 bibliographic records from PubMed; in an ideal world, each of these would exactly duplicate the information we had previously obtained from the OASS bibliographic reference containing that PubMed ID. As we shall describe, these additional PubMed records proved exceptionally useful in correcting imperfect OASS references.
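
For readers unfamiliar with the Entrez interface, the following minimal Python sketch shows the kind of call involved, using NCBI's E-utilities efetch endpoint. The batching, rate limiting and error handling that harvesting at this scale actually requires are omitted, and the example PMIDs are arbitrary.

    # Sketch: fetch PubMed records for a batch of PMIDs via the Entrez
    # E-utilities efetch endpoint (standard library only).
    import urllib.parse
    import urllib.request

    EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

    def fetch_pubmed_records(pmids):
        """Fetch PubMed records for the given PMIDs as a single XML string."""
        params = urllib.parse.urlencode({
            "db": "pubmed",
            "id": ",".join(pmids),
            "retmode": "xml",
        })
        with urllib.request.urlopen(f"{EFETCH}?{params}") as response:
            return response.read().decode("utf-8")

    # e.g. xml_text = fetch_pubmed_records(["10021369", "10022415"])
    # NCBI asks unauthenticated clients to stay below ~3 requests per second.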

Since the OASS articles cite papers outside the OASS, as well as a few within it, the majority of the bibliographic information we thus acquired related to papers represented within PubMed but not within PubMed Central. And because many OASS papers independently contained references to the most highly cited biomedical papers, many of our records were to the same bibliographic entities.

An important part of our data processing was thus to coalesce independent references from different OASS articles to the same multiply cited papers into a set of unique bibliographic records, each for one paper. Once this had been achieved, we were left with 3,578,598 unique bibliographic records, 204,637 describing the OASS articles themselves, and 3,373,961 describing articles outside the OASS, mostly from subscription-access journals.
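
By way of illustration only, the sketch below shows one simple way such coalescing can be done. The matching keys (PMID where available, otherwise a normalised title plus year) and the merge rules are assumptions made for the example, not our actual matching algorithm.

    # Illustrative sketch of coalescing duplicate bibliographic records.
    import re
    from collections import defaultdict

    def record_key(rec):
        """Choose the strongest identifier available for matching."""
        if rec.get("pmid"):
            return ("pmid", rec["pmid"])
        title = re.sub(r"\W+", " ", (rec.get("title") or "").lower()).strip()
        return ("title-year", title, rec.get("year"))

    def coalesce(records):
        """Merge records sharing a key, preferring non-empty field values."""
        merged = defaultdict(dict)
        for rec in records:
            target = merged[record_key(rec)]
            for field, value in rec.items():
                if value and not target.get(field):
                    target[field] = value
        return list(merged.values())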

The following table and figures tabulate and illustrate the number of papers in each category for each year from 1980 to 2010 inclusive, with the years 1950 to 1979 aggregated into a single row. The most striking feature of these data is that, between these years, the relatively small number of articles in the Open Access subset of PMC (approx. 200,000 articles) referenced >20% of all PubMed papers published between 1980 and 2010 (approx. 15.3 million papers), and in doing so referenced all the most important highly cited papers in every field of biomedical endeavour. This inclusive coverage means that citation graphs created from the Open Citations dataset will capture all the important aspects of any field.

Table 1. Number of papers

Year                PubMed        PMC          OASS       Cited by OASS
1950-1979           5,128,602     427,877      8,352      146,027
1980                278,069       23,218       631        15,708
1981                278,069       23,685       543        16,627
1982                292,219       25,215       740        18,389
1983                305,725       25,688       738        21,263
1984                314,737       26,316       543        23,249
1985                331,706       25,916       637        25,780
1986                345,501       26,721       590        28,761
1987                363,754       27,834       555        32,222
1988                381,976       28,802       442        36,320
1989                398,620       29,855       616        42,005
1990                398,620       30,143       704        48,422
1991                407,465       31,337       733        53,655
1992                412,457       32,325       719        61,091
1993                420,935       33,203       1,055      70,272
1994                431,160       33,456       1,279      80,206
1995                441,967       34,276       1,148      91,814
1996                452,218       34,755       1,155      101,853
1997                451,533       34,800       1,314      114,967
1998                469,466       36,179       1,341      131,510
1999                469,466       37,534       1,420      146,623
2000                528,243       39,047       1,608      170,330
2001                542,854       40,235       2,546      179,203
2002                560,006       43,265       3,199      195,879
2003                590,317       46,442       4,015      211,423
2004                634,432       51,416       6,005      229,423
2005                694,687       60,411       10,333     236,678
2006                740,007       72,295       14,264     238,387
2007                777,311       87,744       20,070     222,085
2008                824,612       120,004      31,416     190,071
2009                862,372       146,413      41,848     124,894
2010                918,598       120,145      40,245     27,877
Total (1980-2010)   15,319,102    1,428,675    192,452    3,186,987
% of PubMed                       9.33%                    20.80%
% of PMC                                        13.47%

Figure 1

Figure 2

The OASS source data give the types of cited entity, aggregated after coalescing, as shown in Figure 3.

Figure 3

Pensoft Journals policy and author guidelines on data publication and citation

In a recent blog post, Heather Piwowar, in discussing the advantages of citing datasets in the reference list of the article, said “No journals have standardized on this approach so far”. However, Pensoft Journals, a publisher that specializes in publishing biodiversity and biological systematics papers, and that has taken the lead in promoting the publication of datasets with DOIs, has exactly such a policy.

Recently, in response to my Data Citation Best Practice Discussion Document [1] discussed in the preceding blog post, I was invited to work with Pensoft Journals to contribute to and help revise their now-published Data Publishing Policies and Guidelines for Biodiversity Data [2].  This 34-page paper has a three-page section on how to cite data in Pensoft Journals.

While recognising that citations of GenBank and similar bioinformatics datasets are customarily made by placing the database accession number somewhere in the text, with no entry in the reference list of the article, we make the following generic recommendation:

“Data citations may relate either to the author’s own data, or to data created and published by others (“third-party data”). In the former case, the dataset may have been previously published, or may be published for the first time in association with the article that is now citing it. All these types of data should, for consistency, be cited in the same manner.

“As is the norm when citing another research article, any citation of a data publication, including a citation of one’s own data, should always have two components:

  • An in-text citation statement containing an in-text reference pointer that directs the reader to a formal data reference in the paper’s reference list.

and

  • A formal data reference within the article’s reference list.

“The data reference in the article’s reference list should contain the minimal components recommended in the DataCite Metadata Kernel v2.0 specification. In DataCite terms: Creator PublicationYear Title Publisher Identifier; alternatively (but meaning the same thing): Author PublicationYear Title DataRepositoryName DOI. These components should be presented in whatever format and punctuation style the journal specifies for its references. The following example demonstrates in general terms what is required.

“In-text citation:

This paper uses data from the [name] data repository at http://dx.doi.org/***** (Jones et al. 2008a), first described in Jones et al. 2008b.

“Data reference in reference list:

Jones A, Bloggs B, Smith C (2008a). Title of data package. Repository name. doi:*****.

“Article reference in reference list:

Jones A, Saul D, Smith C (2008b). Title of journal article. Journal Volume: Pages. doi:###. ”
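
For on-line journals that generate references programmatically, the mapping from DataCite components to a formatted reference is straightforward. The hypothetical Python helper below merely illustrates that the five kernel components, plus the journal's own punctuation style, are all that is required; the placeholder DOI is left exactly as in the quoted example.

    def format_data_reference(creator, pub_year, title, publisher, identifier):
        """Compose a reference-list entry from the five mandatory DataCite
        Metadata Kernel components (punctuation here is illustrative)."""
        return f"{creator} ({pub_year}). {title}. {publisher}. {identifier}."

    # Mirroring the example above:
    # format_data_reference("Jones A, Bloggs B, Smith C", "2008a",
    #                       "Title of data package", "Repository name",
    #                       "doi:*****")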

Pensoft also recommends that the in-text data citation statement in Pensoft journals be included in the body of the paper, in a separate section named Data Resources situated after the Materials and Methods section. More details are given in the paper [2].

Furthermore, Pensoft has reached agreements for cooperation in data hosting and the development of data publishing workflows with GBIF, the Global Biodiversity Information Facility, with the Dryad Data Repository, and with the Consortium for the Barcode of Life.

Clearly, these Pensoft data citation recommendations, which work well for on-line journals without a numerical limit on the number of citations, would not be feasible in journal articles with a strict limit on the number of references; that is why Heather's emphasis on exploring alternative ways of citing data in such cases is important.

[1]     David Shotton (2011) Data Citation Best Practice Discussion Document. Google Docs. https://docs.google.com/document/d/1kF8-faB72l4dKTLEyx6Z5cIabk68GrJ9GraCtWnK0qQ/edit?hl=en_GB&authkey=CPPW46wL#.  

[2]     Penev L, Mietchen D, Chavan V, Hagedorn G, Remsen D, Smith V, Shotton D (2011). Pensoft Data Publishing Policies and Guidelines for Biodiversity Data. Pensoft Publishers, http://www.pensoft.net/J_FILES/Pensoft_Data_Publishing_Policies_and_Guidelines.pdf.

How to cite data

As an approach towards developing best practice for data citation, I recently wrote a Data Citation Best Practice Discussion Document that is available on Google Docs, and that I have now slightly revised to Version 2 [1].

In that document, I first compared what is recommended by DataCite [2] and by Altman and King [3] with what is currently practised by the Dryad Data Repository, and with what presently occurs ‘in the wild’ in a handful of journal articles that reference Dryad datasets. I then proposed some ‘internal’ recommendations for Dryad to adopt, and concluded with draft Data Citation Best Practice Recommendations. As I say in the preface to the document:

“Since Dryad is pioneering data management in terms of data resources that are linked to journal articles, it is to be hoped that by first developing citation best practice in the Dryad context we can thereby catalyse its wider spread.  If we can thus agree what such best practice should be among the Dryad community and implement such best practice proposals, we can then promote such practices within the wider scholarly community.”

I realized that much of the confusion and disagreement concerning the best method of citing data resources within earlier e-mail threads resulted from a conflation of ideas about two entities which in the conventional citation of journal articles are quite distinct:

  • the in-text citation containing an in-text reference pointer, e.g. “this paper builds upon the work of Jones et al. [15].”; and
  • the actual reference to Jones et al. within the article’s reference list, e.g. “[15] Jones A, Bloggs B and Smith C (2008). Title. JournalName 14:132-134. doi:*****.”

Thus, in an e-mail I wrote on 27 April, I said:

“Excellent, but what we really want is for the data citations to be included in the reference list along with the bibliographic citations, following the DataCite model: Creator (PublicationYear): Title. Version. Publisher. ResourceType. Identifier”

… but I should also have stressed the need for explicit in-text citations that denote such references.

All that is explained within the Google Docs paper.  In that paper I also proposed having a separate Data Resources section within the body text of a journal article, in which data resource citations can be gathered.  That does not preclude these resources also being cited, where appropriate, within the Methods and Materials or Results sections of the paper, but is designed to put data resource citations “on the map”, so to speak, as important new publication performative acts.

It is not appropriate, to my mind, for data citations to be included in the Acknowledgements section of a paper, which is designed for acknowledging contributions to the work from people and funding agencies. Even though Thomson Reuters has developed methods to parse such entries, it also has well-established mechanisms for harvesting proper (data) references from the reference list.

All the ontological terms required to mark up in-text reference pointers and their textual contexts, references, reference lists, etc., to permit automated detection and harvesting of data citations and references, are available as RDF within the SPAR (Semantic Publishing and Referencing) Ontologies (http://purl.org/spar/), which were designed precisely to facilitate such work.
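
As a minimal sketch of what such markup can look like in practice (assuming the rdflib Python library, and using two hypothetical example URIs), a single typed citation is one CiTO triple:

    # Sketch: record a typed citation as RDF using the SPAR CiTO ontology.
    # The two URIs are hypothetical examples, not real identifiers.
    from rdflib import Graph, Namespace, URIRef

    CITO = Namespace("http://purl.org/spar/cito/")

    g = Graph()
    g.bind("cito", CITO)
    citing_article = URIRef("http://example.org/article/A")
    cited_dataset = URIRef("http://example.org/dataset/B")
    # citesAsDataSource is one of the CiTO citation-typing properties.
    g.add((citing_article, CITO.citesAsDataSource, cited_dataset))
    print(g.serialize(format="turtle"))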

Since writing my Data Citation Best Practice Discussion Document, I was invited (on a purely voluntary non-commercial basis, I should add!) to work with Pensoft Journals, a publisher that specialises in publishing biodiversity and biological systematics papers, and that has taken the lead in promoting the publication of datasets with DOIs, to contribute to and help revise their now-published Data Publishing Policies and Guidelines for Biodiversity Data [4].  This 34-page paper has a three-page section on how to cite data in Pensoft Journals, which I discuss in the next blog post, and which I am pleased to say includes all the recommendations discussed above.

[1]     David Shotton (2011) Data Citation Best Practice Discussion Document. Google Docs. https://docs.google.com/document/d/1kF8-faB72l4dKTLEyx6Z5cIabk68GrJ9GraCtWnK0qQ/edit?hl=en_GB&authkey=CPPW46wL#.

[2]    The DataCite Metadata Kernel version 2.0 (2011). http://datacite.org/schema/DataCite-MetadataKernel_v2.0.pdf.

[3]    Micah Altman and Gary King (2007). A proposed standard for the scholarly citation of quantitative data. D-Lib Magazine. 13. http://www.dlib.org/dlib/march07/altman/03altman.html.

[4]     Penev L, Mietchen D, Chavan V, Hagedorn G, Remsen D, Smith V, Shotton D (2011). Pensoft Data Publishing Policies and Guidelines for Biodiversity Data. Pensoft Publishers, http://www.pensoft.net/J_FILES/Pensoft_Data_Publishing_Policies_and_Guidelines.pdf.

Questions of granularity – Dryad’s use of DataCite DOIs for data citation, and the Annotation Ontology

DataCite is an international organisation, founded in 2009, which promotes the use of DOIs (Digital Object Identifiers) for published datasets, in order to establish easier access to research data, to increase acceptance of research data as legitimate contributions in the scholarly record, and to support data archiving to permit results to be verified and re-purposed for future study.

Its founding members were the British Library; the Technical Information Center of Denmark; TU Delft Library; the National Research Council’s Canada Institute for Scientific and Technical Information (NRC-CISTI); California Digital Library; Purdue University; and the German National Library of Science and Technology. Since its foundation, it has been joined by several other leading organisations from around the world, and it therefore provides a stable basis for the ongoing use of DOIs for data.

The recent availability of DataCite DOIs for the identification of data entities has made all the difference to data repositories wishing to give unique global identifiers to their data holdings, since DOIs are widely recognised and respected throughout the academic world thanks to their widespread prior use, enabled by CrossRef, for identifying journal articles.

However, in their recent discussion paper Data Citation and Linking, published on 8th June 2011, Alex Ball and Monica Duke of UKOLN at the University of Bath ask:

“At what granularity should data be made citable? If single datasets are given identifiers, what about collections of datasets, or subsets of data?”

Individual data files and metadata documents will, of course, have their own unique internal identifiers within any data repository, but may not have externally resolvable identifiers such as DOIs.  Practice varies.

This post explains how DOIs are employed in the Dryad Data Repository, which specializes in publishing data linked to peer-reviewed biological journal articles, since Dryad's approach is both elegant and addresses at least some of the issues raised by Alex and Monica.

The Dryad DOI usage policy is described at https://www.nescent.org/wg_dryad/DOI_Usage, and involves assigning unique DOIs to each version of every data package, and to each version of every data file, in a principled and easy-to-understand manner. In summary:

  • Each data package is given a DataCite DOI, which can be versioned by appending “.2”, “.3”, etc. to the original DOI to create new DOIs for new versions of the same data package.
  • Within each data package, each data file has a unique DOI, defined by suffixing the data package DOI with “/1”, “/2”, etc., with versions indicated as for data packages.

Thus the third version of the second data file in the second version of a Dryad data package would have a DOI of the form doi:10.5061/dryad.1234.2/2.3.
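
Because the scheme is so regular, the DOI for any version of any file can be composed mechanically. A small illustrative helper follows; the package DOI used in the test is the hypothetical example from the text above.

    def dryad_doi(package_doi, package_version=1, data_file=None, file_version=1):
        """Compose a Dryad-style DOI following the versioning scheme above."""
        doi = package_doi
        if package_version > 1:
            doi += f".{package_version}"   # new version of the data package
        if data_file is not None:
            doi += f"/{data_file}"         # a data file within the package
            if file_version > 1:
                doi += f".{file_version}"  # new version of that data file
        return doi

    # Third version of the second file in the second version of a package:
    assert dryad_doi("doi:10.5061/dryad.1234", 2, 2, 3) == "doi:10.5061/dryad.1234.2/2.3"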

One might argue that it would result in an awfully large number of DOIs if a single data package was made up of thousands of data files. True, but numbers themselves are limitless and free, and the cost of a DataCite DOI is small relative to the cost of data creation and preservation. The real problem at present is lack of identifiable, citable data entities within repositories – to have so many that the cost of DOIs becomes an issue should be regarded as an achievement, not a problem!

Dryad does not have a mechanism for assigning identifiers to a portion of a data file (“a subset of data”), and DOIs are probably not the correct identifiers for that purpose, since they are primarily designed for citation and resource discovery.

A more appropriate method for identifying portions of a data file, or of any other digital object or document, is to use the Annotation Ontology (AO) developed by Paolo Ciccarese of Harvard University, described at http://code.google.com/p/annotation-ontology/wiki/Homepage. AO can be used to identify and annotate portions of a wide variety of resources such as HTML, PDF, Word, Excel, XML documents, images, videos, databases, web services, experimental data and metadata files. Paolo is currently working with a group in Harvard that focuses on biodiversity, who are using AO to address databases and data, and he anticipates publishing version 2.0 of AO in September.

Functional clustering of CiTO properties

CiTO v2.0 contains just two main object properties, cito:cites and its inverse cito:isCitedBy, each of which has thirty-two sub-properties. Intentionally, these properties are not constrained as to domain or range, thereby maximising their applicability in a wide range of citation contexts. CiTO also contains one additional generic object property, cito:sharesAuthorsWith, that may be used even outside a citation context.

Some have criticised CiTO for having too many properties, making it confusing for potential users, and Martin Fenner chose to use only the ten ‘most popular’ properties in his CiTO plug-in for WordPress, previously mentioned in this blog post. In response, I would point out that each property has a distinct and clearly defined meaning, and that together they provide an appropriate level of expressivity for effective use. Nevertheless, it is clearly difficult to conceptualize all the CiTO properties at one time, if they are being viewed using an ontology editor such as Protégé. I have thus created the following figure that groups CiTO properties by similarity, in the hope that this will facilitate choice of the most appropriate one.

Clustering of CiTO relationships by similarity

As shown in the figure, the CiTO properties and sub-properties (and, consequently, their inverses) may be classified as rhetorical (upper oval) and/or factual (lower oval, dark blue text), with the rhetorical properties being grouped in three sets depending on their connotation: positive (green), neutral or informative (blue) and negative (red). Five properties (in purple, within the overlap of the ovals) have both factual and rhetorical characteristics. The inverse properties are not shown.
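
The grouping in the figure was done by hand, but the full list of sub-properties can also be enumerated programmatically from the published ontology, for instance with rdflib (network access is assumed, and purl.org's content negotiation may require requesting the OWL serialisation explicitly):

    # Sketch: list the sub-properties of cito:cites straight from the
    # published CiTO ontology (requires network access).
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDFS

    CITO = Namespace("http://purl.org/spar/cito/")

    g = Graph()
    g.parse("http://purl.org/spar/cito", format="xml")  # the CiTO OWL file
    for prop in sorted(g.subjects(RDFS.subPropertyOf, CITO.cites)):
        print(prop)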

JISC Open Citations: Wider Benefits to Sector & Achievements for Host Institution

One of the biggest challenges faced by modern scientists is information overload. The life sciences are probably the area most affected by it, with almost a million new entries being added to PubMed each year. While on-line publishing and bibliographic search engines have made the problem of finding individual research articles considerably easier, the present scholarly citation system inadequately exposes the knowledge networks that exist within the scientific literature, linking papers, authors and research projects.  Much of the problem stems from the lack of freely available citation data in machine-readable form.

In this Open Access age, it is a scandal that reference lists from journal articles, the core elements of the academic data cycle, are not freely available for use by scholars.   Current citation services are largely restricted to a small number of commercial companies whose valuable products are still insufficiently developed to satisfy all the needs of the academic community.

Google Scholar offers navigation through the citation network, but only in one direction – backwards. Thomson Reuters guards the citation data in the ISI Citation Indexes and Web of Knowledge as commercial assets, as does Elsevier with the citation data in Scopus; both offer only limited subscription-access search and display capabilities, and no methods for extracting citation data in bulk. Furthermore, they do not characterise the nature of the citations between publications.

The value of citation data to the research community has grown as research evaluation has increased in importance. Citation metrics are increasingly used by institutions to establish their research quality, and by funding agencies to determine the effectiveness of their grant spending.

Citation data now need to be recognized as a part of the Commons – those works that are freely and legally available for sharing and reuse – extending the Science Commons / Open Knowledge Foundation philosophy to the world of scientific citation.

If machine-readable citation data for all scholarly publications were to be published freely on the Web, the construction and interrogation of citation networks would become trivially simple, with enormous advantages to scholarship. Thanks to CiteSeerX, citation data in computer science have been freely available for several years.

Similar access is now coming for other fields of scholarship, particularly for the biological sciences through CiteXplore. However, in none of these cases are the citation data available as Linked Open Data, and there are no convenient free tools, accessible to working research biologists, that permit them to visualize and navigate the literature by means of its citation network, or that permit knowledge analysts to pose generic questions over the whole corpus, such as determining whether those who publish in Open Access journals are more prone to cite other Open Access articles, in comparison with those who do not.

This project will produce citation information as Open, Linked Data, addressing the shortcomings of the current situation and aiming to provide researchers with the tools to explore the citation network.

JISC Open Citations: Aims, Objectives and Final Outputs

The Open Citations Project is global in scope, designed to change the face of scientific publishing.

It aims to make bibliographic citation links as easy to use as Web links. Its goals are three-fold:

  • To establish OpenCitations.net, a public RDF triplestore for biomedical literature citations.

(Note: In this context, a bibliographic citation is a reference within a particular citing work to another publication termed the cited work. This use of the word ‘citation’ should be clearly distinguished from the common related use of this word to indicate the cited work itself. Within this application, ‘cite’ and ‘citation’ denote the performative act of citation itself, not the target document of that citation.)

  • To harvest the reference lists from many current and recent Open Access journal articles, and to convert these reference lists into RDF, starting with those in UK PubMed Central, those published by the Public Library of Science and BioMed Central, those from other publishers willing for CrossRef to release their data, and articles from other Open Access repositories such as EPrints.
  • To publish these citation datasets as Open Linked Data on the Talis Connected Commons Platform under an open data license in both human- and computer-accessible formats.

As such, the Open Citations Project seeks to promote citation datasets as first-class information objects. The reference list from each article processed will be published as an individual named graph, with its own Digital Object Identifier (DOI) assigned by the relevant publisher via CrossRef or by the British Library on behalf of the DataCite Project.
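
To make the named-graph idea concrete, here is a sketch of what “one named graph per reference list” could look like in RDF terms, using rdflib's Dataset and hypothetical example URIs; this is illustrative only, not the project's production pipeline.

    # Sketch: publish one article's reference list as its own named graph,
    # with cito:cites links from the citing to the cited papers.
    from rdflib import Dataset, Namespace, URIRef

    CITO = Namespace("http://purl.org/spar/cito/")

    ds = Dataset()
    # One named graph holds the whole reference list of one citing article.
    ref_list = ds.graph(URIRef("http://example.org/reference-list/pmid-12345"))
    citing = URIRef("http://example.org/article/pmid-12345")
    for cited_pmid in ["10021369", "10022415"]:  # arbitrary example PMIDs
        cited = URIRef(f"http://example.org/article/pmid-{cited_pmid}")
        ref_list.add((citing, CITO.cites, cited))
    print(ds.serialize(format="nquads"))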