
The plate tectonics of research data publication

In biology, the fields of macromolecular structural biology and sequence bioinformatics have had, since the 1970s, established international databases for the deposition of data, and journal policies mandating such deposition before manuscripts describing the data are accepted for publication.  Similar good practices have developed more recently in other disciplines, notably astronomy.  But these are the exceptions: across the majority of scientific fields, data publication remains a minority activity.  For the most part, this is because the technical barriers to publishing research datasets remain so high, and the academic rewards so low, that such publication is undertaken only by the few who regard it as a moral imperative.  However, new policies are combining with new technological capabilities to bring significant change to this publication landscape.

An analogy can perhaps be made with geological plate tectonics.  The new policies of funders and journal publishers towards the open publication of research datasets arising from publicly funded research can be likened to a tectonic plate moving slowly forward with inexorable force.  It is colliding with the massive stationary continental plate of established scientific practice, in which data are traditionally regarded as belonging to the research group that generated them, data sharing occurs only between trusted colleagues on the basis of personal request, and the only publications that are truly valued are journal articles.

This traditional position is reinforced by the metrics employed in the assessment of research quality, which, while paying lip service to the value of data publication, regard articles in high impact factor journals as of paramount importance.  And there is good reason why the scholarly research article is so highly regarded: it is a rhetorical construct in which the authors attempt, by the selective presentation of evidence, to convince readers that particular hypotheses are proven.  As such, it can both demonstrate the competence and achievements of the authors, and be evaluated against other such publications by peer review.  In contrast, the publication of a research dataset is primarily a presentation of facts, with no rhetorical content, that can be validated only on the basis of internal self-consistency, information about instrument calibrations, resolution and error estimates, and the possession of adequate descriptive metadata in appropriate formats – much more pedestrian stuff.

However, to return to our analogy, these tectonic plates are colliding, with the traditional plate of data retention facing ultimate subduction beneath the advancing plate of open data publication.  At present there is friction between them, and along much of the shear zone there seems to be little movement, leading to a build-up of pressure.  Small projects that produce some local movement towards better data management can be likened to minor earthquakes with only local impact.  But these are the precursors to an inevitable major repositioning of the plates to relieve the mounting pressure for change.  This will result in a general realignment of attitudes along the whole plate boundary, and in a tsunami of open data publication.  We are thus on the cusp between the traditional status quo and a dramatically reshaped scientific publication landscape, in which open data publication will take its proper place as underpinning the publication of ideas and the evidence supporting hypotheses.

Technological projects such as the JISC ADMIRAL Project and its successor, the JISC UMF DataFlow Project, serve to facilitate this transition by ‘lubricating’ the plate boundary and enabling movement.  In particular, they provide a two-tiered federated data management infrastructure, in which local services meeting the private data management needs of individual research groups (DataStage filestore instances) are linked to institutional repositories (DataBank repository instances) by automated procedures for the easy archiving and publication of selected datasets.  This makes the whole process easier, as illustrated by the following figures, taken from the original ADMIRAL Project grant application.

Figure 1: The conventional research data lifecycle  

Four phases mark the activities undertaken in traversing the conventional data lifecycle: formulation, experimentation, interpretation and publication.  The publication outputs from one cycle provide the input to the next.  However, only selected research data are conventionally published.  The original research datasets are frequently abandoned on local hard drives or CD-ROMs, and neither datasets nor papers are submitted to institutional repositories.

Figure 2: The ADMIRAL enhanced research data lifecycle

Raw research data are first organized and annotated in a local research data filestore.  From there they can be shared and used to support publications, and can be automatically archived to institutional repositories, from which they can optionally be published as Linked Open Data on the Semantic Web for public dissemination and reuse.  This figure differs from the DCC Curation Lifecycle Model by emphasizing the importance of the local research data filestore.

Figure 3: The effort involved in submitting data to an institutional repository  

As investment is made in the local organization and annotation of research datasets, the effort involved in submitting data to an institutional repository is reduced to the point where such submission becomes feasible on a routine basis.

IBRG projects to facilitate data publication and data citation

In the previous post, I outlined reasons why researchers don’t publish data, presented as evidence to the Royal Society’s Policy Study “Science as a Public Enterprise” Call for Evidence.  Here, I summarize activities by members of my Image Bioinformatics Research Group (IBRG) at Oxford University to facilitate data publication and data citation, and thus to help catalyze a cultural shift to a situation in which data publication is as natural a part of research life as is undertaking experiments.

= = =

Data management services and data repositories

We are developing tools and services to assist researchers in their local data management, for their own personal benefit, while facilitating automated data submission to appropriate institutional or subject-specific data repositories, in ways that fit with their normal working practices and impose as little as possible in terms of cognitive overhead – what we term sheer curation.  These include the two-stage data management services we are currently funded to develop by the University Modernization Fund through the JISC DataFlow Project, namely (a) DataStage, a private local data management file system, with automated backup, Web access, and security access control, for use by individual research groups, and (b) DataBank, a cloud-deployable data repository for use by universities, research institutes or large research consortia.  These open source services will be made available for installation by third parties on the Eduserv academic cloud and elsewhere, as required by research groups, institutions and universities both in the UK and internationally.  We seek early adopters!

Curation by addition

For automated data submissions from DataStage to DataBank, which will use the SWORDv2 repository submission protocol to standardize data package ingest, we are intentionally lowering the barriers in terms of metadata requirements for initial data submission, with the possibility of enriching the metadata at a later date – what we call curation by addition – in order to kick-start the cultural sea change required for data deposition to become routine.  We are trying to avoid the best – the requirement for perfect and complete metadata – becoming the enemy of the good – data publication by any means.

Dryad

We are, through the JISC Dryad-UK Project, working to promote the Dryad Data Repository, a domain-specific repository for biological datasets linked to peer-reviewed journal articles, by bringing additional publishers and journals on board, and enabling Dryad metadata to be published as open linked data.

SWORD

We are also promoting the adoption of the SWORDv2 repository communication protocol for data package wrapping, to permit automated deposit to DataBank, Dryad or other SWORD-compliant repositories, and the exchange of metadata between them.
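To make this concrete, here is a minimal sketch of a SWORDv2 package deposit over plain HTTP.  The collection URL, filename and credentials are hypothetical, used for illustration only; the headers follow the SWORDv2 profile.

```python
import requests

# Hypothetical SWORDv2 collection endpoint exposed by a DataBank instance
COL_IRI = "https://databank.example.org/swordv2/collection/mygroup"

with open("dataset_package.zip", "rb") as f:
    response = requests.post(
        COL_IRI,
        data=f,
        headers={
            "Content-Type": "application/zip",
            "Content-Disposition": "attachment; filename=dataset_package.zip",
            # SimpleZip packaging: a plain zip of data files plus minimal metadata
            "Packaging": "http://purl.org/net/sword/package/SimpleZip",
            # In-Progress signals that the deposit may be augmented later,
            # which suits the 'curation by addition' approach described above
            "In-Progress": "true",
        },
        auth=("username", "password"),  # illustrative credentials
    )

# On success the server returns a deposit receipt (an Atom entry); its
# edit link is the handle against which richer metadata can later be added.
print(response.status_code, response.headers.get("Location"))
```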

SPAR (Semantic Publishing and Referencing) Ontologies

To enable Dryad, DataBank and similar repository metadata to be published as open linked data, we are creating appropriate data description and data citation ontologies, including FaBiO and CiTO4Data, as part of our suite of SPAR Ontologies, and are using them to provide mappings from the DataCite XML Metadata Kernel to RDF.
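By way of example, here is a minimal sketch of such a mapping using rdflib, expressing the five mandatory DataCite kernel elements (Creator, Title, Publisher, PublicationYear, Identifier) as RDF.  The dataset DOI and metadata values are placeholders, and the choice of dcterms properties alongside fabio:hasPublicationYear illustrates one plausible mapping rather than a definitive one.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF, XSD

FABIO = Namespace("http://purl.org/spar/fabio/")

g = Graph()
g.bind("fabio", FABIO)
g.bind("dcterms", DCTERMS)

# Placeholder dataset DOI: the Identifier element becomes the subject URI
dataset = URIRef("http://dx.doi.org/10.5061/dryad.example")

g.add((dataset, RDF.type, FABIO.Dataset))                               # the cited entity
g.add((dataset, DCTERMS.creator, Literal("Jones A")))                   # Creator
g.add((dataset, DCTERMS.title, Literal("Title of data package")))      # Title
g.add((dataset, DCTERMS.publisher, Literal("Dryad Data Repository")))  # Publisher
g.add((dataset, FABIO.hasPublicationYear,
       Literal("2008", datatype=XSD.gYear)))                            # PublicationYear

print(g.serialize(format="turtle"))
```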

Data citation

We are working with DataCite to assign DOIs to Dryad and DataBank datasets, so that data publications become citable, gaining academic credit for the data depositor.

These data citations, when they exist, will fit naturally within the Open Citations Corpus, a collection of some 3.4 million bibliographic citations from within PubMed Central that we have recently established as open linked data, as part of the JISC Open Citations Project.

We have also worked to establish best practice for citing data publications from within the literature, and with one open access journal publisher to influence their Data Publishing Policies and Guidelines to Authors regarding data citation, as detailed in earlier posts on this blog.

Tools for metadata curation

The above tools and services are generic.  Specifically in the biomedical area, we are developing MIIDI, a Minimal Information standard for reporting an Infectious Disease Investigation, which specifies the metadata that should, for completeness, accompany such an investigation.  We have recently developed MIIDI Forms, a web tool that facilitates the entry of such metadata: it interacts with appropriate web services to enable autocompletion of bibliographic information and the specification of geo-coordinates for place names, and permits automated look-up of ontology terms from the NCBI BioPortal.

Open Research Reports

We are working to create Open Research Reports: open access structured digital abstracts, in both human- and machine-readable form, that describe datasets or journal articles relating to infectious disease.  These will be based on MIIDI and published in an instant data journal format, with DOIs to permit referencing and citation.

Tools for creating data management plans

We have recently started working with the Digital Curation Centre to help improve their DMPonline data management planning tool, used to create the data management plans increasingly required to accompany grant applications and useful for managing the flow of data from funded projects.  If our current funding application is successful, this work will be carried forward in the OXFORD DMPonline Project, in which, in addition to adoption, adaptation, customization and integration of the tool for use by University of Oxford researchers, we will develop the following generic improvements to the tool, to be fed back to the DCC as open source enhancements for general use across UK academia and internationally:

a) creation of DaMO, a simple data management ontology,

b) use of DaMO to create RDF metadata for data management plans,

c) SWORDv2-wrapping of data management plans for repository submission, and

d) creation of DMPBank, a DataBank instance specifically tailored for archiving and publishing data management plans.

Why researchers don’t publish data

Evidence submitted by David Shotton in response to the Royal Society’s Policy Study “Science as a Public Enterprise” Call for Evidence, addressing the following two topics raised by that call:

Getting Researcher buy-in. How do we get researchers to be more willing to share data? What is there to be learned from disciplines such as genomics which have norms which favour wide sharing of data?

Ensuring we generate useful metadata. For open data to be useful, it needs to be sufficiently well described. The researchers creating the dataset are in the best position to create the metadata; but as things stand, the incentives for them to do a thorough job of this are not always very strong. Do we need to change incentives?

= = =

“I guess I have been invited to contribute evidence to the evidence session on digital curation at the Royal Society on 5th August 2011 to present the view from the shop floor – or rather from the laboratory bench.  I would like to mention three pressures that presently combine to prevent researchers from publishing their data.

Pressure one: Information volume

When I started research, you could, if you were very fortunate (as I was), solve a protein structure to low resolution within six months and to medium resolution within three or four years, and you could hope to know something about all the protein structures that had so far been determined.  Today, you can collect the crystallographic structure factor data for a new protein in a few minutes at the Diamond Light Source, and can compute its 3D structure on your laptop during the train ride home.  The PDB currently contains the structures of about 74,000 macromolecules, and you are unlikely to know the structures of more than a handful of these.

Looking at the same problem from a different perspective, PubMed currently receives a million articles per year.  If you imagine there might be a thousand biomedical specialisms – if you slice the salami thinly enough – then as a specialist you can expect on average twenty new papers in your field each week: an impossible number to carve out time for, from your other activities, if you wish to read them properly.

Thus you will never catch up – there is just too much scientific information around now.  You would like to know about it all, to keep abreast of your field, but the task is impossible.  Researchers are thus under overwhelming pressure, and have to run just to stand still.  They have no spare time to undertake data curation activities for which they receive little or no academic reward in terms of peer esteem, tenure or promotion.

Pressure two: Institutional pressures

The principal pressures researchers face from their departments and institutions are (a) to win grants and (b) to publish in high impact journals, because these things influence departmental income both directly, through full economic costs from funding agencies, and through high RAE/REF scores, which in England determine funding from HEFCE.  From the viewpoint of a Head of Department trying to establish or maintain his department’s reputation and financial health, nothing else matters.  I have known these factors to be the deciding ones in academic appointments.  Nobler concepts of scientific excellence and of scientific altruism in the form of data publication become submerged beneath these pressures.

Pressure three: Cognitive overheads of data management

Appropriate ontologies and technical infrastructures for data preservation increasingly exist, but the concepts surrounding metadata creation, repository deposit and data accessibility are foreign to most biomedical researchers, leading to cognitive and skill barriers that prevent them from undertaking routine best-practice data management.

Put crudely, the large amount of effort involved in preparing data for publication, coupled with the negligible incentives and rewards, prevents researchers in most biomedical specialisms from doing so.

Having said that, research scientists are perfectly able to provide structured metadata when it is necessary to do so.  With the switch to on-line journal article submission, publishers have devised lengthy web forms that must be completed with details of co-authors and their affiliations, funding agencies, etc. before you are permitted to upload your manuscript – forms that, for certain publishers, can take the best part of an afternoon to complete for a new submission involving many authors, figures and supplementary files.  Researchers comply with these metadata requests because they have no choice: it is the only way to achieve their desired goal of publication in the chosen journal.

That the fields of genomics and macromolecular structures are exceptions to the rule that data are not widely published is due to two factors:

  • First, their datasets are relatively simple, homogeneous and well-defined  – linear nucleotide or amino acid sequences, lists of structure factors, and lists of atomic coordinates – in comparison with the heterogeneity of data in fields such as ecology or animal behaviour, simplifying the tasks of data management and metadata creation.
  • Second, and more important, is the fact that in the early 1970s journals such as Nature started to mandate database accession numbers as a precondition of publishing sequence or structure papers – this brought about an almost instantaneous change in attitudes among our research community!

For other disciplines, while I commend journals’ and research councils’ recent policies regarding data publication, I believe we will only achieve radical change when funders and publishers mandate data publication as a pre-condition of applying for a further grant or of article submission.  Toothless research council data policies, however laudable, are of little use unless backed up by some policing.  ‘Sticks’ are required to achieve desired policy aims, as well as the ‘carrots’ of better personal data management and data security obtained by employing easy-to-use tools and systems.”

= = =

The following post describes what we are doing, with funding help from the JISC, to help mitigate these pressures and provide tools and services to assist researchers in data publication.

Pensoft Journals policy and author guidelines on data publication and citation

In a recent blog post, Heather Piwowar, in discussing the advantages of citing datasets in the reference list of the article, said “No journals have standardized on this approach so far”. However, Pensoft Journals, a publisher that specializes in publishing biodiversity and biological systematics papers, and that has taken the lead in promoting the publication of datasets with DOIs, has exactly such a policy.

Recently, in response to my Data Citation Best Practice Discussion Document [1] discussed in the preceding blog post, I was invited to work with Pensoft Journals to contribute to and help revise their now-published Data Publishing Policies and Guidelines for Biodiversity Data [2].  This 34-page paper has a three-page section on how to cite data in Pensoft Journals.

While recognising that citations of GenBank and similar bioinformatics datasets are by custom made by placing the database accession number somewhere in the text, with no entry in the reference list of the article, we make the following generic recommendation:

“Data citations may relate either to the author’s own data, or to data created and published by others (“third-party data”). In the former case, the dataset may have been previously published, or may be published for the first time in association with the article that is now citing it. All these types of data should, for consistency, be cited in the same manner.

“As is the norm when citing another research article, any citation of a data publication, including a citation of one’s own data, should always have two components:

  • An in-text citation statement containing an in-text reference pointer that directs the reader to a formal data reference in the paper’s reference list.

and

  • A formal data reference within the article’s reference list.

“The data reference in the article’s reference list should contain the minimal components recommended in the DataCite Metadata Kernel v2.0 specification. In DataCite terms: Creator PublicationYear Title Publisher Identifier; alternatively (but meaning the same thing): Author PublicationYear Title DataRepositoryName DOI. These components should be presented in whatever format and punctuation style the journal specifies for its references. The following example demonstrates in general terms what is required.

“In-text citation:

This paper uses data from the [name] data repository at http://dx.doi.org/***** (Jones et al. 2008a), first described in Jones et al. 2008b.

“Data reference in reference list:

Jones A, Bloggs B, Smith C (2008a). Title of data package. Repository name. doi:*****.

“Article reference in reference list:

Jones A, Saul D, Smith C (2008b). Title of journal article. Journal Volume: Pages. doi:###. ”
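The assembly of these components into a formatted reference is purely mechanical, as the following sketch shows; the helper function and its argument names are hypothetical, and the placeholder DOI is kept as-is.

```python
def format_data_reference(creators, year, title, repository, doi, suffix=""):
    """Assemble a data reference in the pattern used above:
    Author(s) (PublicationYear). Title. DataRepositoryName. DOI."""
    return f"{', '.join(creators)} ({year}{suffix}). {title}. {repository}. doi:{doi}."

print(format_data_reference(
    ["Jones A", "Bloggs B", "Smith C"],
    2008, "Title of data package", "Repository name", "*****", suffix="a",
))
# -> Jones A, Bloggs B, Smith C (2008a). Title of data package. Repository name. doi:*****.
```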

Pensoft also recommends that the in-text data citation statement in Pensoft journals should be included in the body of the paper, in a separate section named Data Resources situated after the Material and Methods section.  More details are given in the paper [2].

Furthermore, Pensoft has reached agreements for cooperation in data hosting and the development of data publishing workflows with GBIF, the Global Biodiversity Information Facility, with the Dryad Data Repository, and with the Consortium for the Barcode of Life.

Clearly, these Pensoft data citation recommendations, which work fine for on-line journals without a numerical limit on the number of citations, would not be feasible in journal articles with a strict limit on the number of citations, which is why Heather’s emphasis on exploring alternative ways of citing data in such cases is important.

[1]     David Shotton (2011) Data Citation Best Practice Discussion Document. Google Docs. https://docs.google.com/document/d/1kF8-faB72l4dKTLEyx6Z5cIabk68GrJ9GraCtWnK0qQ/edit?hl=en_GB&authkey=CPPW46wL#.  

[2]     Penev L, Mietchen D, Chavan V, Hagedorn G, Remsen D, Smith V, Shotton D (2011). Pensoft Data Publishing Policies and Guidelines for Biodiversity Data. Pensoft Publishers, http://www.pensoft.net/J_FILES/Pensoft_Data_Publishing_Policies_and_Guidelines.pdf.

How to cite data

As an approach towards developing best practice for data citation, I recently wrote a Data Citation Best Practice Discussion Document that is available on Google Docs, and that I have now slightly revised to Version 2 [1].

In that document, I first compared what is recommended by DataCite [2] and by Altman and King [3] with what is currently practised by the Dryad Data Repository, and with what presently occurs ‘in the wild’ in a handful of journal articles that reference Dryad datasets.  I then proposed some ‘internal’ recommendations for Dryad to adopt, and concluded with draft Data Citation Best Practice Recommendations.  As I say in the preface to the document:

“Since Dryad is pioneering data management in terms of data resources that are linked to journal articles, it is to be hoped that by first developing citation best practice in the Dryad context we can thereby catalyse its wider spread.  If we can thus agree what such best practice should be among the Dryad community and implement such best practice proposals, we can then promote such practices within the wider scholarly community.”

I realized that much of the confusion and disagreement concerning the best method of citing data resources within earlier e-mail threads resulted from a conflation of ideas about two entities which in the conventional citation of journal articles are quite distinct:

  • the in-text citation containing an in-text reference pointer, e.g. “this paper builds upon the work of Jones et al. [15]”, and
  • the actual reference to Jones et al. within the article’s reference list, e.g. “[15] Jones A, Bloggs B and Smith C (2008). Title. JournalName 14:132–134. doi:*****.”

Thus, in an e-mail I wrote on 27 April, I said:

“Excellent, but what we really want is for the data citations to be included in the reference list along with the bibliographic citations, following the DataCite model: Creator (PublicationYear): Title. Version. Publisher. ResourceType. Identifier “

I should also have stressed, however, the need for explicit in-text citations that denote such references.

All that is explained within the Google Docs paper.  In that paper I also proposed having a separate Data Resources section within the body text of a journal article, in which data resource citations can be gathered.  That does not preclude these resources also being cited, where appropriate, within the Methods and Materials or Results sections of the paper, but is designed to put data resource citations “on the map”, so to speak, as important new publication performative acts.

It is not appropriate, to my mind, for data citations to be included in the Acknowledgements section of a paper, which is designed for acknowledging contributions to the work from people and funding agencies.  Even though Thomson Reuters has developed methods to parse such entries, they also have well-established mechanisms for harvesting proper (data) references from the reference list.

All the ontological terms required to mark up in-text reference pointers and their textual contexts, references, reference lists, etc., to permit automated detection and harvesting of data citations and references, are available as RDF within the SPAR (Semantic Publishing and Referencing) Ontologies (http://purl.org/spar/), which were designed precisely to facilitate such work.
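For instance, here is a minimal sketch, again using rdflib, of how an in-text reference pointer and the data reference it denotes might be marked up with SPAR terms.  The article, pointer and reference URIs are hypothetical, and the particular classes and properties chosen (from C4O and BiRO) show one straightforward modelling.

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

C4O = Namespace("http://purl.org/spar/c4o/")
BIRO = Namespace("http://purl.org/spar/biro/")

g = Graph()
g.bind("c4o", C4O)
g.bind("biro", BIRO)

# Hypothetical URIs for an in-text pointer ("[15]"), the reference-list
# entry it points to, and the cited dataset's DOI
pointer = URIRef("http://example.org/article-1#pointer-15")
reference = URIRef("http://example.org/article-1#reference-15")
dataset = URIRef("http://dx.doi.org/10.5061/dryad.example")

g.add((pointer, RDF.type, C4O.InTextReferencePointer))
g.add((reference, RDF.type, BIRO.BibliographicReference))
g.add((pointer, C4O.denotes, reference))      # pointer -> reference-list entry
g.add((reference, BIRO.references, dataset))  # entry -> the published dataset

print(g.serialize(format="turtle"))
```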

Since writing my Data Citation Best Practice Discussion Document, I have been invited (on a purely voluntary, non-commercial basis, I should add!) to work with Pensoft Journals – a publisher that specialises in biodiversity and biological systematics papers, and that has taken the lead in promoting the publication of datasets with DOIs – to contribute to and help revise their now-published Data Publishing Policies and Guidelines for Biodiversity Data [4].  This 34-page paper has a three-page section on how to cite data in Pensoft Journals, which I discuss in the next blog post, and which I am pleased to say includes all the recommendations discussed above.

[1]     David Shotton (2011) Data Citation Best Practice Discussion Document. Google Docs. https://docs.google.com/document/d/1kF8-faB72l4dKTLEyx6Z5cIabk68GrJ9GraCtWnK0qQ/edit?hl=en_GB&authkey=CPPW46wL#.

[2]    The DataCite Metadata Kernel version 2.0 (2011). http://datacite.org/schema/DataCite-MetadataKernel_v2.0.pdf.

[3]    Micah Altman and Gary King (2007). A proposed standard for the scholarly citation of quantitative data. D-Lib Magazine. 13. http://www.dlib.org/dlib/march07/altman/03altman.html.

[4]     Penev L, Mietchen D, Chavan V, Hagedorn G, Remsen D, Smith V, Shotton D (2011). Pensoft Data Publishing Policies and Guidelines for Biodiversity Data. Pensoft Publishers, http://www.pensoft.net/J_FILES/Pensoft_Data_Publishing_Policies_and_Guidelines.pdf.

Questions of granularity – Dryad’s use of DataCite DOIs for data citation, and the Annotation Ontology

DataCite is an international organisation, founded in 2009, which promotes the use of DOIs (Digital Object Identifiers) for published datasets, in order to establish easier access to research data, to increase acceptance of research data as legitimate contributions in the scholarly record, and to support data archiving to permit results to be verified and re-purposed for future study.

Its founding members were the British Library; the Technical Information Center of Denmark; TU Delft Library; the National Research Council’s Canada Institute for Scientific and Technical Information (NRC-CISTI); California Digital Library; Purdue University; and the German National Library of Science and Technology. Since its foundation, it has been joined by several other leading organisations from around the world, and it therefore provides a stable basis for the ongoing use of DOIs for data.

The recent availability of DataCite DOIs for the identification of data entities has made all the difference to data repositories wishing to give unique global identifiers to their data holdings, since DOIs – thanks to their widespread prior use for identifying journal articles, made possible by CrossRef – are widely recognised and respected throughout the academic world.

However, in their recent discussion paper Data Citation and Linking, published on 8th June 2011, Alex Ball and Monica Duke of UKOLN at the University of Bath ask:

“At what granularity should data be made citable? If single datasets are given identifiers, what about collections of datasets, or subsets of data?”

Individual data files and metadata documents will, of course, have their own unique internal identifiers within any data repository, but may not have externally resolvable identifiers such as DOIs.  Practice varies.

This post explains how DOIs are employed in the Dryad Data Repository, which specializes in publishing data linked to peer-reviewed biological journal articles, since its approach is elegant and addresses at least some of the issues raised by Alex and Monica.

The Dryad DOI usage policy is described at https://www.nescent.org/wg_dryad/DOI_Usage, and involves assigning unique DOIs to each version of every data package, and to each version of every data file, in a principled and easy-to-understand manner. In summary:

  • Each data package is given a DataCite DOI, which can be versioned by adding “.2”, “.3”, etc. after the original DOI to create new DOIs for new versions of the same data package.
  • Within each data package, each data file has a unique DOI, defined by suffixing the data package DOI with “/1”, “/2”, etc., with versions indicated as for data packages.

Thus the third version of the second data file in the second version of a Dryad data package would have a DOI of the form doi:10.5061/dryad.1234.2/2.3.
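These suffix rules are simple enough to express in a few lines of code.  The following sketch, which assumes the illustrative base DOI from the example above, reconstructs package and file DOIs under this scheme:

```python
def package_doi(base="10.5061/dryad.1234", version=1):
    """Dryad-style data package DOI: version suffixes '.2', '.3', ...
    are appended only from the second version onwards."""
    return base if version == 1 else f"{base}.{version}"

def file_doi(pkg_doi, file_number, file_version=1):
    """Data file DOI: '/1', '/2', ... within its package, versioned
    in the same way as packages."""
    doi = f"{pkg_doi}/{file_number}"
    return doi if file_version == 1 else f"{doi}.{file_version}"

# The worked example from the text: the third version of the second
# data file in the second version of a data package
print(file_doi(package_doi(version=2), file_number=2, file_version=3))
# -> 10.5061/dryad.1234.2/2.3
```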

One might argue that this would result in an awfully large number of DOIs if a single data package were made up of thousands of data files.  True, but numbers themselves are limitless and free, and the cost of a DataCite DOI is small relative to the cost of data creation and preservation.  The real problem at present is the lack of identifiable, citable data entities within repositories – to have so many that the cost of DOIs becomes an issue should be regarded as an achievement, not a problem!

Dryad does not have a mechanism for assigning identifiers to a portion of a data file (“a subset of data”), and DOIs are probably not the correct identifiers for that purpose, since they are primarily designed for citation and resource discovery.

A more appropriate method for identifying portions of a data file, or of any other digital object or document, is to use the Annotation Ontology (AO) developed by Paolo Ciccarese of Harvard University, described at http://code.google.com/p/annotation-ontology/wiki/Homepage. AO can be used to identify and annotate portions of a wide variety of resources such as HTML, PDF, Word, Excel, XML documents, images, videos, databases, web services, experimental data and metadata files. Paolo is currently working with a group in Harvard that focuses on biodiversity, who are using OA to address databases and data, and he anticipates publishing version 2.0 of AO in September.