Pensoft Journals policy and author guidelines on data publication and citation

In a recent blog post, Heather Piwowar, in discussing the advantages of citing datasets in the reference list of the article, said “No journals have standardized on this approach so far”. However, Pensoft Journals, a publisher that specializes in publishing biodiversity and biological systematics papers, and that has taken the lead in promoting the publication of datasets with DOIs, has exactly such a policy.

Recently, in response to my Data Citation Best Practice Discussion Document [1] discussed in the preceding blog post, I was invited to work with Pensoft Journals to contribute to and help revise their now-published Data Publishing Policies and Guidelines for Biodiversity Data [2].  This 34-page paper has a three-page section on how to cite data in Pensoft Journals.

While recognising that citations of GenBank and similar bioinformatics datasets are customarily made by placing the database accession number somewhere in the text, with no entry in the reference list of the article, we make the following generic recommendation:

“Data citations may relate either to the author’s own data, or to data created and published by others (“third-party data”). In the former case, the dataset may have been previously published, or may be published for the first time in association with the article that is now citing it. All these types of data should, for consistency, be cited in the same manner.

“As is the norm when citing another research article, any citation of a data publication, including a citation of one’s own data, should always have two components:

  • An in-text citation statement containing an in-text reference pointer that directs the reader to a formal data reference in the paper’s reference list.

and

  • A formal data reference within the article’s reference list.

“The data reference in the article’s reference list should contain the minimal components recommended in the DataCite Metadata Kernel v2.0 specification. In DataCite terms: Creator PublicationYear Title Publisher Identifier; alternatively (but meaning the same thing): Author PublicationYear Title DataRepositoryName DOI. These components should be presented in whatever format and punctuation style the journal specifies for its references. The following example demonstrates in general terms what is required.

“In-text citation:

This paper uses data from the [name] data repository at http://dx.doi.org/***** (Jones et al. 2008a), first described in Jones et al. 2008b.

“Data reference in reference list:

Jones A, Bloggs B, Smith C (2008a). Title of data package. Repository name. doi:*****.

“Article reference in reference list:

Jones A, Saul D, Smith C (2008b). Title of journal article. Journal Volume: Pages. doi:###. ”
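To make the structure of such a reference concrete, here is a minimal sketch in Python of how the five mandatory DataCite Metadata Kernel components (Creator, PublicationYear, Title, Publisher, Identifier) might be assembled into a reference-list entry in the style shown above. The class and formatting choices are purely illustrative, not part of the Pensoft guidelines.

```python
# Illustrative only: assembling a data reference from the five mandatory
# DataCite Metadata Kernel v2.0 components. The class name and punctuation
# style are assumptions, not a Pensoft-specified implementation.
from dataclasses import dataclass
from typing import List


@dataclass
class DataReference:
    creators: List[str]     # Creator(s), e.g. ["Jones A", "Bloggs B", "Smith C"]
    publication_year: int   # PublicationYear
    title: str              # Title of the data package
    publisher: str          # Publisher, i.e. the data repository name
    identifier: str         # Identifier, i.e. the DataCite DOI

    def as_reference(self) -> str:
        """Render the entry in the Author (Year). Title. Repository. doi: style."""
        return (f"{', '.join(self.creators)} ({self.publication_year}). "
                f"{self.title}. {self.publisher}. doi:{self.identifier}.")


print(DataReference(["Jones A", "Bloggs B", "Smith C"], 2008,
                    "Title of data package", "Repository name",
                    "10.xxxx/xxxxx").as_reference())
```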

Pensoft also recommends that the in-text data citation statement be included in the body of the paper, in a separate section named Data Resources, situated after the Material and Methods section. More details are given in the paper [2].

Furthermore, Pensoft has reached agreements on cooperation in data hosting and the development of data publishing workflows with GBIF (the Global Biodiversity Information Facility), the Dryad Data Repository and the Consortium for the Barcode of Life.

Clearly, these Pensoft data citation recommendations work well for on-line journals that place no limit on the number of citations, but they would not be feasible in journal articles with a strict citation limit, which is why Heather’s emphasis on exploring alternative ways of citing data in such cases is important.

[1]     David Shotton (2011) Data Citation Best Practice Discussion Document. Google Docs. https://docs.google.com/document/d/1kF8-faB72l4dKTLEyx6Z5cIabk68GrJ9GraCtWnK0qQ/edit?hl=en_GB&authkey=CPPW46wL#.  

[2]     Penev L, Mietchen D, Chavan V, Hagedorn G, Remsen D, Smith V, Shotton D (2011). Pensoft Data Publishing Policies and Guidelines for Biodiversity Data. Pensoft Publishers, http://www.pensoft.net/J_FILES/Pensoft_Data_Publishing_Policies_and_Guidelines.pdf.

Cite this article as: davidshotton, "Pensoft Journals policy and author guidelines on data publication and citation," in OpenCitations blog, 30/06/2011, https://opencitations.hypotheses.org/156.

How to cite data

As an approach towards developing best practice for data citation, I recently wrote a Data Citation Best Practice Discussion Document that is available on Google Docs, and that I have now slightly revised to Version 2 [1].

In that document, I first compared what is recommended by DataCite [2] and by Altman and King [3] with what is currently practised by the Dryad Data Repository and what presently occurs ‘in the wild’ in a handful of journal articles that reference Dryad datasets.  I then proposed some ‘internal’ recommendations for Dryad to adopt, and concluded with draft Data Citation Best Practice Recommendations.  As I say in the preface to the document:

“Since Dryad is pioneering data management in terms of data resources that are linked to journal articles, it is to be hoped that by first developing citation best practice in the Dryad context we can thereby catalyse its wider spread.  If we can thus agree what such best practice should be among the Dryad community and implement such best practice proposals, we can then promote such practices within the wider scholarly community.”

I realized that much of the confusion and disagreement concerning the best method of citing data resources within earlier e-mail threads resulted from a conflation of ideas about two entities which in the conventional citation of journal articles are quite distinct:

  • the in-text citation containing an in-text reference pointer, e.g. “this paper builds upon the work of Jones et al. [15].”, and
  • the actual reference to Jones et al. within the article’s reference list, e.g. “[15] Jones A, Bloggs B and Smith C (2008). Title. JournalName 14:132-134. doi:*****.”

Thus, in an e-mail I wrote on 27 April, where I said

“Excellent, but what we really want is for the data citations to be included in the reference list along with the bibliographic citations, following the DataCite model: Creator (PublicationYear): Title. Version. Publisher. ResourceType. Identifier”

. . . I should also have stressed the need for explicit in-text citations that denote such references.

All that is explained within the Google Docs paper.  In that paper I also proposed having a separate Data Resources section within the body text of a journal article, in which data resource citations can be gathered.  That does not preclude these resources also being cited, where appropriate, within the Methods and Materials or Results sections of the paper, but is designed to put data resource citations “on the map”, so to speak, as important new publication performative acts.

It is not appropriate, in my mind, for data citations to be included in the Acknowledgements section of a paper, which is designed for acknowledging contributions to the work from people and funding agencies. Even though Thomson Reuters has developed methods to parse such entries, it also has well-established mechanisms for harvesting proper (data) references from the reference list.

All the ontological terms required to mark up in-text reference pointers and their textual contexts, references, reference lists, etc., to permit automated detection and harvesting of data citations and references, are available as RDF within the SPAR (Semantic Publishing and Referencing) Ontologies (http://purl.org/spar/), which were designed precisely to facilitate such work.
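As a rough illustration (not taken from the SPAR documentation itself), the following Python/rdflib sketch shows how an in-text reference pointer, the reference it denotes, and a typed data citation might be expressed with SPAR terms. The entity IRIs are hypothetical, and the particular term choices (cito:citesAsDataSource, c4o:InTextReferencePointer, c4o:denotes, biro:BibliographicReference) are indicative rather than prescriptive.

```python
# A minimal sketch, with hypothetical IRIs, of expressing a data citation in
# RDF using SPAR ontology terms via rdflib. The specific terms chosen here are
# indicative rather than prescriptive.
from rdflib import Graph, Namespace, URIRef, RDF

CITO = Namespace("http://purl.org/spar/cito/")
C4O = Namespace("http://purl.org/spar/c4o/")
BIRO = Namespace("http://purl.org/spar/biro/")
EX = Namespace("http://example.org/")           # hypothetical namespace

g = Graph()
for prefix, ns in (("cito", CITO), ("c4o", C4O), ("biro", BIRO), ("ex", EX)):
    g.bind(prefix, ns)

article = EX["citing-article"]                            # the citing article
dataset = URIRef("http://dx.doi.org/10.5061/dryad.xxxx")  # the cited data package
pointer = EX["in-text-pointer-1"]                         # e.g. "(Jones et al. 2008a)"
reference = EX["reference-jones-2008a"]                   # entry in the reference list

g.add((article, CITO.citesAsDataSource, dataset))       # typed citation of the data
g.add((pointer, RDF.type, C4O.InTextReferencePointer))  # the in-text pointer ...
g.add((pointer, C4O.denotes, reference))                # ... denotes the reference
g.add((reference, RDF.type, BIRO.BibliographicReference))

print(g.serialize(format="turtle"))
```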

Since writing my Data Citation Best Practice Discussion Document, I was invited (on a purely voluntary non-commercial basis, I should add!) to work with Pensoft Journals, a publisher that specialises in publishing biodiversity and biological systematics papers, and that has taken the lead in promoting the publication of datasets with DOIs, to contribute to and help revise their now-published Data Publishing Policies and Guidelines for Biodiversity Data [4].  This 34-page paper has a three-page section on how to cite data in Pensoft Journals, which I discuss in the next blog post, and which I am pleased to say includes all the recommendations discussed above.

[1]     David Shotton (2011) Data Citation Best Practice Discussion Document. Google Docs. https://docs.google.com/document/d/1kF8-faB72l4dKTLEyx6Z5cIabk68GrJ9GraCtWnK0qQ/edit?hl=en_GB&authkey=CPPW46wL#.

[2]    The DataCite Metadata Kernel version 2.0 (2011). http://datacite.org/schema/DataCite-MetadataKernel_v2.0.pdf.

[3]    Micah Altman and Gary King (2007). A proposed standard for the scholarly citation of quantitative data. D-Lib Magazine. 13. http://www.dlib.org/dlib/march07/altman/03altman.html.

[4]     Penev L, Mietchen D, Chavan V, Hagedorn G, Remsen D, Smith V, Shotton D (2011). Pensoft Data Publishing Policies and Guidelines for Biodiversity Data. Pensoft Publishers, http://www.pensoft.net/J_FILES/Pensoft_Data_Publishing_Policies_and_Guidelines.pdf.

Cite this article as: davidshotton, "How to cite data," in OpenCitations blog, 30/06/2011, https://opencitations.hypotheses.org/153.

Questions of granularity – Dryad’s use of DataCite DOIs for data citation, and the Annotation Ontology

DataCite is an international organisation, founded in 2009, which promotes the use of DOIs (Digital Object Identifiers) for published datasets, in order to establish easier access to research data, to increase acceptance of research data as legitimate contributions in the scholarly record, and to support data archiving to permit results to be verified and re-purposed for future study.

Its founding members were the British Library; the Technical Information Center of Denmark; TU Delft Library; the National Research Council’s Canada Institute for Scientific and Technical Information (NRC-CISTI); California Digital Library; Purdue University; and the German National Library of Science and Technology. Since its foundation, it has been joined by several other leading organisations from around the world, and it therefore provides a stable basis for the ongoing use of DOIs for data.

This recent availability of DOIs from DataCite for the identification of data entities has made all the difference to data repositories wishing to give unique global identifiers to their data holdings, since DOIs are widely recognised and respected throughout the academic world, because of their widespread prior use for identifying journal articles, made possible by CrossRef.

However, in their recent discussion paper Data Citation and Linking, published on 8th June 2011, Alex Ball and Monica Duke of UKOLN at the University of Bath ask:

“At what granularity should data be made citable? If single datasets are given identifiers, what about collections of datasets, or subsets of data?”

Individual data files and metadata documents will, of course, have their own unique internal identifiers within any data repository, but may not have externally resolvable identifiers such as DOIs.  Practice varies.

This post explains how DOIs are employed in the Dryad Data Repository, which specializes in publishing data linked to peer-reviewed biological journal articles, since Dryad’s approach is both elegant and addresses at least some of the issues raised by Alex and Monica.

The Dryad DOI usage policy is described at https://www.nescent.org/wg_dryad/DOI_Usage, and involves assigning unique DOIs to each version of every data package, and to each version of every data file, in a principled and easy-to-understand manner. In summary:

  • Each data package is given a DataCite DOI, which can be versioned by adding “.2”, “.3”, etc. after the original DOI to create new DOIs for new versions of the same data package.
  • Within each data package, each data file has a unique DOI defined by suffixing the data package DOI with “/1”, “/2”, etc., with versions indicated as for data packages.

Thus the third version of the second data file in the second version of a Dryad data package would have a DOI of the form doi:10.5061/dryad.1234.2/2.3.
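To make the pattern explicit, here is a small Python sketch of how such DOIs compose. The base DOI and helper functions are hypothetical illustrations of the scheme described above, not Dryad’s own code.

```python
# A sketch of the Dryad DOI pattern described above; the base DOI and the
# helper functions are illustrative, not part of Dryad's implementation.
PACKAGE_DOI = "10.5061/dryad.1234"            # hypothetical data package DOI


def package_doi(base: str, version: int = 1) -> str:
    """Version 1 keeps the original DOI; later versions append '.2', '.3', ..."""
    return base if version == 1 else f"{base}.{version}"


def file_doi(base: str, package_version: int, file_number: int,
             file_version: int = 1) -> str:
    """Data files are suffixed '/1', '/2', ..., and versioned like packages."""
    doi = f"{package_doi(base, package_version)}/{file_number}"
    return doi if file_version == 1 else f"{doi}.{file_version}"


# The third version of the second data file in the second version of the package:
print(file_doi(PACKAGE_DOI, package_version=2, file_number=2, file_version=3))
# prints: 10.5061/dryad.1234.2/2.3
```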

One might argue that it would result in an awfully large number of DOIs if a single data package were made up of thousands of data files. True, but numbers themselves are limitless and free, and the cost of a DataCite DOI is small relative to the cost of data creation and preservation. The real problem at present is the lack of identifiable, citable data entities within repositories – to have so many that the cost of DOIs becomes an issue should be regarded as an achievement, not a problem!

Dryad does not have a mechanism for assigning identifiers to a portion of a data file (“a subset of data”), and DOIs are probably not the correct identifiers for that purpose, since they are primarily designed for citation and resource discovery.

A more appropriate method for identifying portions of a data file, or of any other digital object or document, is to use the Annotation Ontology (AO) developed by Paolo Ciccarese of Harvard University, described at http://code.google.com/p/annotation-ontology/wiki/Homepage. AO can be used to identify and annotate portions of a wide variety of resources such as HTML, PDF, Word, Excel, XML documents, images, videos, databases, web services, experimental data and metadata files. Paolo is currently working with a group in Harvard that focuses on biodiversity, who are using AO to address databases and data, and he anticipates publishing version 2.0 of AO in September.

Cite this article as: davidshotton, "Questions of granularity – Dryad’s use of DataCite DOIs for data citation, and the Annotation Ontology," in OpenCitations blog, 30/06/2011, https://opencitations.hypotheses.org/148.

Functional clustering of CiTO properties

CiTO v2.0 contains just two main object properties, cito:cites and its inverse cito:isCitedBy, each of which has thirty-two sub-properties. Intentionally, these properties are not constrained as to domain or range, thereby maximising their applicability in a wide range of citation contexts. CiTO also contains one further generic object property, cito:sharesAuthorsWith, that may be used even outside a citation context.
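By way of example, the sketch below (Python with rdflib, using hypothetical article DOIs) asserts one untyped citation with cito:cites and two typed citations using sub-properties; it is meant only to show how the properties are used, not to prescribe a particular workflow.

```python
# A minimal sketch, with hypothetical DOIs, of asserting CiTO citations with
# rdflib: the generic cito:cites plus sub-properties that characterise the
# citation more precisely.
from rdflib import Graph, Namespace, URIRef

CITO = Namespace("http://purl.org/spar/cito/")

g = Graph()
g.bind("cito", CITO)

citing = URIRef("http://dx.doi.org/10.xxxx/citing-article")
cited_a = URIRef("http://dx.doi.org/10.xxxx/cited-article-a")
cited_b = URIRef("http://dx.doi.org/10.xxxx/cited-article-b")

g.add((citing, CITO.cites, cited_a))         # plain, untyped citation
g.add((citing, CITO.extends, cited_a))       # the citing work extends cited_a
g.add((citing, CITO.usesDataFrom, cited_b))  # a factual characterisation

print(g.serialize(format="turtle"))
```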

Some have criticised CiTO for having too many properties, making it confusing for potential users, and Martin Fenner chose to use only the ten ‘most popular’ properties in his CiTO plug-in for WordPress, previously mentioned in this blog post. In response, I would point out that each property has a distinct and clearly defined meaning, and that together they provide an appropriate level of expressivity for effective use. Nevertheless, it is clearly difficult to conceptualize all the CiTO properties at one time, if they are being viewed using an ontology editor such as Protégé. I have thus created the following figure that groups CiTO properties by similarity, in the hope that this will facilitate choice of the most appropriate one.

Clustering of CiTO relationships by similarity

As shown in the figure, the CiTO properties and sub-properties (and, consequently, their inverses) may be classified as rhetorical (upper oval) and/or factual (lower oval, dark blue text), with the rhetorical properties being grouped in three sets depending on their connotation: positive (green), neutral or informative (blue) and negative (red). Five properties (in purple, within the overlap of the ovals) have both factual and rhetorical characteristics. The inverse properties are not shown.

Cite this article as: davidshotton, "Functional clustering of CiTO properties," in OpenCitations blog, 29/06/2011, https://opencitations.hypotheses.org/91.

JISC Open Citations: Wider Benefits to Sector & Achievements for Host Institution

One of the biggest challenges faced by modern scientists is information overload. The life sciences are probably the area most affected by it, with almost a million new entries being added to PubMed each year. While on-line publishing and bibliographic search engines have made the problem of finding individual research articles considerably easier, the present scholarly citation system inadequately exposes the knowledge networks that exist within the scientific literature, linking papers, authors and research projects.  Much of the problem stems from the lack of freely available citation data in machine-readable form.

In this Open Access age, it is a scandal that reference lists from journal articles, the core elements of the academic data cycle, are not freely available for use by scholars.   Current citation services are largely restricted to a small number of commercial companies whose valuable products are still insufficiently developed to satisfy all the needs of the academic community.

Google Scholar offers navigation through the citation network, but only in one direction – backwards. Thomson Reuters guards the citation data in its ISI Citation Index and Web of Knowledge as commercial assets, as does Elsevier for citation data in Scopus, with limited subscription-access search and display capabilities, and no methods for extracting citation data in bulk. Furthermore, they do not characterise the nature of citations between publications.

The value of citation data to the research community has grown as research evaluation has increased in importance. Citation metrics are increasingly used by institutions to establish their research quality, and by funding agencies to determine the effectiveness of their grant spending.

Citation data now need to be recognized as a part of the Commons – those works that are freely and legally available for sharing and reuse – extending the Science Commons / Open Knowledge Foundation philosophy to the world of scientific citation.

If machine-readable citation data for all scholarly publications were to be published freely on the Web, the construction and interrogation of citation networks would become trivially simple, with enormous advantages to scholarship. Thanks to CiteSeerX, citation data in computer science have been freely available for several years.

Similar access is now coming for other fields of scholarship, particularly for the biological sciences through CiteXplore. However, in none of these cases are the citation data available as Linked Open Data, and there are no convenient free tools, accessible to working research biologists, that permit them to visualize and navigate the literature by means of its citation network, or that permit knowledge analysts to pose generic questions over the whole corpus, such as determining whether those who publish in Open Access journals are more prone to cite other Open Access articles, in comparison with those who do not.
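As an indication of the kind of corpus-wide question this would enable, the following Python sketch runs a SPARQL query with rdflib over a hypothetical local dump of open citation data. The file name and the ex:inOpenAccessJournal flag are assumptions made purely for illustration; they are not part of any published schema.

```python
# A sketch only: counting citations from Open Access articles to other Open
# Access articles in a hypothetical local citation corpus. The data file and
# the ex:inOpenAccessJournal property are illustrative assumptions.
from rdflib import Graph

g = Graph()
g.parse("citations.ttl", format="turtle")   # hypothetical local corpus dump

QUERY = """
PREFIX cito: <http://purl.org/spar/cito/>
PREFIX ex:   <http://example.org/>

SELECT (COUNT(*) AS ?oaToOaCitations)
WHERE {
  ?citing cito:cites ?cited .
  ?citing ex:inOpenAccessJournal true .
  ?cited  ex:inOpenAccessJournal true .
}
"""

for row in g.query(QUERY):
    print(row.oaToOaCitations)
```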

This project will produce citation information as Open Linked Data, addressing the shortcomings of the current situation and aiming to provide researchers with the tools to explore the citation network.

Cite this article as: benosteen, "JISC Open Citations: Wider Benefits to Sector & Achievements for Host Institution," in OpenCitations blog, 15/07/2010, https://opencitations.hypotheses.org/1040.

JISC Open Citations: Aims, Objectives and Final Outputs

The Open Citations Project is global in scope, designed to change the face of scientific publishing.

It aims to make bibliographic citation links as easy to use as Web links. Its goals are three-fold:

  • To establish OpenCitations.net, a public RDF triplestore for biomedical literature citations.

(Note: In this context, a bibliographic citation is a reference within a particular citing work to another publication termed the cited work. This use of the word ‘citation’ should be clearly distinguished from the common related use of this word to indicate the cited work itself. Within this application, ‘cite’ and ‘citation’ denote the performative act of citation itself, not the target document of that citation.)

  • To harvest the reference lists from many current and recent open access journal articles, and to convert them into RDF, starting with those in UK PubMed Central, those published by the Public Library of Science and BioMed Central, those from other publishers willing for CrossRef to release their data, and articles from other Open Access repositories such as EPrints.
  • To publish these citation datasets as Open Linked Data on the Talis Connected Commons Platform under an open data license in both human- and computer-accessible formats.

As such, the Open Citations Project seeks to promote citation datasets as first-class information objects. The reference list from each article processed will be published as an individual named graph, with its own Digital Object Identifier (DOI) assigned by the relevant publisher via CrossRef or by the British Library on behalf of the DataCite Project.
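A minimal sketch of what “one named graph per reference list” could look like in practice, using Python and rdflib’s Dataset, is shown below; the graph and entity IRIs are hypothetical placeholders rather than the identifiers the project will actually mint.

```python
# A sketch, with placeholder IRIs, of publishing one article's reference list
# as its own named graph using rdflib's Dataset.
from rdflib import Dataset, Namespace, URIRef

CITO = Namespace("http://purl.org/spar/cito/")
EX = Namespace("http://example.org/")

ds = Dataset()
# One named graph per processed article's reference list; its IRI could later
# be associated with the DOI assigned to that reference list.
ref_list = ds.graph(URIRef("http://example.org/graphs/article-123/reference-list"))
ref_list.bind("cito", CITO)

citing = EX["article-123"]
for n in (1, 2, 3):                               # three cited works, for example
    ref_list.add((citing, CITO.cites, EX[f"cited-work-{n}"]))

print(ds.serialize(format="trig"))                # a named-graph-aware syntax
```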

Cite this article as: benosteen, "JISC Open Citations: Aims, Objectives and Final Outputs," in OpenCitations blog, 15/07/2010, https://opencitations.hypotheses.org/5.