Since its inauguration in 2010, OpenCitations has always granted free access to its services to users throughout the world, with no requirement for registration or sign-up. Programmatic access to OpenCitations data can be obtained either via our SPARQL endpoints or via our REST APIs. In addition, OpenCitations data – available in CSV, Scholix, and RDF formats – can be downloaded from data dumps made periodically and stored on Figshare, enabling large-scale analyses over the whole content of the datasets, and can also be explored via our user-friendly text-based search and browsing interfaces.
One of OpenCitations’ priorities is (and will always be) to keep its data globally open and available at zero cost and without restriction for third-party analysis and re-use. For sustainability, OpenCitations relies on financial support from the scholarly community, including the institutions that use OpenCitations data. However, OpenCitations has so far had no proper system for monitoring its users, and the main evidence of the impact of OpenCitations in different academic fields and countries has come, incompletely, from direct contacts with our members and donors across the world, our collaborations with international projects, and interactions on our social platforms (Twitter and LinkedIn).
We would now like to institute a system that enables us to follow the usage and assess the impact of OpenCitations more reliably. For this purpose, we are now happy to announce the launch of the OpenCitations Access Token System for access to the OpenCitations data and services.
An OpenCitations Access Token is an opaque character string that anonymously identifies a unique user of the OpenCitations APIs. OpenCitations assigns an access token only if authorized to do so by each user, who can request a token by entering their email address in the access form and clicking “Get token”. Upon submission of such a request, each user will automatically receive a personal access token by email. Users can save their personal access token and reuse it every time they call the OpenCitations APIs, by passing it as the value for the key access-token in the header of each API call.
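As an illustration, a minimal Python sketch of a token-bearing API call is shown below; the endpoint URL and the response fields are indicative only, since the essential point is simply the access-token header key.

```python
import requests

# Illustrative call to an OpenCitations REST API with a personal access token.
# The URL and the response fields are examples; the key detail is that the token
# is passed under the "access-token" header key.
ACCESS_TOKEN = "your-personal-token-here"
url = "https://opencitations.net/index/coci/api/v1/citations/10.1186/1756-8722-6-59"

response = requests.get(url, headers={"access-token": ACCESS_TOKEN})
response.raise_for_status()

for record in response.json():
    print(record.get("citing"), "->", record.get("cited"))
```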
Obtaining and using an OpenCitations Access Token is thus easy. It requires only a simple form request, and then the insertion of your personal token into the API call header when using the OpenCitations REST APIs. OpenCitations will not store users’ email addresses or any other personal information, so users’ privacy is fully safeguarded. The token system simply provides a mechanism for identifying unique users, for which the use of IP addresses is insufficient.
Obtaining an OpenCitations Access Token will take the user only a few seconds and needs to happen only once. You can request your OpenCitations Access Token here:
Use of an OpenCitations Access Token is not compulsory. However, token use will help OpenCitations enormously, by enabling us to monitor the number of unique users accessing our data and services. This provides objective, anonymized evidence of the number of institutions and researchers accessing our data, either occasionally or on a regular basis, which we can then employ to demonstrate the usefulness of OpenCitations in the research environment. While the token system will initially be employed just for API calls (the most used service we offer), it will subsequently be extended to our other forms of data access.
OpenCitations exists for the people that use its data for research purposes every day, and thanks to their support. This is why obtaining precise knowledge of how many researchers and institutions are accessing our services is essential to us, since it will enable us to present the uniqueness and value of OpenCitations to new communities of stakeholders, and thus to make it possible to enlarge the already enthusiastic and diverse group of people and institutions supporting and using our Open Science Infrastructure.
To summarize: Getting and using an OpenCitations Access Token is voluntary, easy, and does not cost you anything. However, it will help OpenCitations a great deal. Please get your own token now, and use it next time you access OpenCitations. Thank you very much!
Guest post by Arcangelo Massari, University of Bologna
In this post, Arcangelo Massari, who recently graduated in Digital Humanities and Digital Knowledge under Professor Silvio Peroni at the University of Bologna, shares the results of his master thesis.
A particular problem in information retrieval is that of obtaining data from an evolving dataset, independent of the time at which that item of data was added, changed or removed. To permit such time-independent queries to be performed over evolving RDF datasets, I have developed two new pieces of open source software, time-agnostic-library [1] and time-agnostic-browser [2], that are now available from the OpenCitations GitHub repository.
The time-agnostic-library is a Python library to perform live time-traversal queries on RDF datasets. Time-traversal means being agnostic about time: a SPARQL query that is not run on the current state of the collection but over its entire history or over a specified timespan of that history [3]. This tool allows materializations – obtaining all versions of an entity over time, or its status at a given time. Furthermore, SPARQL queries can be performed to get the delta between two or more versions of one or more resources. Thereby, the time-agnostic-library realizes all the retrieval functionalities described in the taxonomy by Fernández et al. [3].
To complement this query software, the time-agnostic-browser is a web application built on top of the time-agnostic-library to achieve the same results via a graphical user interface.
The primary purpose of these developments is to offer a system for browsing the provenance [4] of RDF statements across time: who produced them, when, where the information was taken from, and what changes were made compared to the previous state of the resource. Knowledge of such information is essential because data changes over time, either because of the natural evolution of concepts or due to the correction of mistakes. Indeed, the latest version of knowledge may not be the most accurate. Such phenomena are particularly tangible in the Web of Data, as highlighted in a study by the Dynamic Linked Data Observatory, which noted the modification of about 38% of the nearly 90,000 RDF documents monitored for 29 weeks, and the permanent disappearance of 5% of them [5] (Figure 1).
Additionally, the truthfulness of data cannot be assessed without provenance records and a system to query them. In fact, the truth value of an assertion on the Web is never absolute, as demonstrated by Wikipedia, which in its official policy on the subject states: “The threshold for inclusion in Wikipedia is verifiability, not truth.” [6]. The Semantic Web does not alter that condition, and trustworthiness has to be evaluated by each application by probing the context of the statements [7]. It is a challenging task and thus, in the Semantic Web Stack, trust is the highest and most complex level to satisfy, subsuming all the previous ones (Figure 2).
Notwithstanding these premises, at present the most extensive RDF datasets – DBpedia [8], Wikidata [9], Yago [10], and the Dynamic Linked Data Observatory [11] – do not use RDF to track changes and record the provenance of those changes. Instead, they all adopt backup-based archiving policies. Some of them, such as Yago 4, record provenance but not changes. As far as citation databases are concerned, OpenCitations is the only infrastructure to implement change-tracking mechanisms and to record full RDF provenance records for each data entity. Among the leading players in this field, neither Web of Science nor Scopus has adopted similar solutions.
In accordance with the OpenCitations Data Model (OCDM) [12], a provenance snapshot is generated by OpenCitations every time a bibliographic entity is created or modified. Each snapshot (prov:Entity) records the responsible agent (prov:wasAttributedTo), the generation time (prov:generatedAtTime), the invalidation time (prov:invalidatedAtTime), the primary source (prov:hadPrimarySource), and a link to the previous snapshot (prov:wasDerivedFrom), using terms from the Provenance Ontology. In addition, OCDM introduced a system to simplify restoring an entity’s status at a given time, by saving the delta between two versions as a SPARQL update query (prov:hasUpdateQuery) [13] (Figure 3). This approach makes it straightforward to restore an entity to a specific timepoint (snapshot) by applying the inverse operations, i.e. deletions instead of additions, and vice versa.
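As a concrete illustration of this pattern, the sketch below expresses a hypothetical provenance snapshot in Turtle and loads it with rdflib; all URIs and literal values are invented for the example, and the oco: namespace used here for the update-query property is an assumption on our part.

```python
from rdflib import Graph

# Hypothetical OCDM-style provenance snapshot. URIs and values are illustrative;
# the oco: prefix (OpenCitations Ontology) for the update-query property is assumed.
snapshot_ttl = """
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix oco:  <https://w3id.org/oc/ontology/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

<https://example.org/br/1/prov/se/2> a prov:Entity ;
    prov:specializationOf <https://example.org/br/1> ;
    prov:wasAttributedTo  <https://example.org/prov/pa/1> ;
    prov:generatedAtTime  "2021-06-01T10:00:00+00:00"^^xsd:dateTime ;
    prov:hadPrimarySource <https://api.crossref.org/> ;
    prov:wasDerivedFrom   <https://example.org/br/1/prov/se/1> ;
    oco:hasUpdateQuery    "INSERT DATA { GRAPH <https://example.org/br/> { <https://example.org/br/1> <http://purl.org/spar/cito/cites> <https://example.org/br/2> } }" .
"""

g = Graph()
g.parse(data=snapshot_ttl, format="turtle")
print(f"{len(g)} provenance triples loaded")
```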
This solution is concretely used in all the datasets related to the OpenCitations infrastructure, such as COCI, an open index containing almost 1.2 billion DOI-to-DOI citation links derived from the open reference data available in Crossref [14]. It is important to note that this OpenCitations provenance model is generic and reusable in any other context. Since the time-agnostic-library leverages OCDM, it too is generic and can be used for any RDF dataset that tracks changes and provenance as OpenCitations does.
The time-agnostic-library is released under the ISC license and is downloadable through pip [1]. Test-driven development was adopted as a software development process during its creation [15]. It makes three main classes available to the user: AgnosticEntity, VersionQuery, and DeltaQuery, for materializations, version queries, and delta queries, respectively (Listing 1).
All three operations can be performed over the entire available history of the dataset, or by specifying a time interval via a tuple in the form (START, END).
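Since Listing 1 is not reproduced in this text, the following Python sketch merely indicates how the three classes might be used; the module paths, constructor arguments and method names are assumptions to be checked against the library documentation [1].

```python
# Sketch of the three main entry points of time-agnostic-library.
# Module paths, argument names and return shapes are assumptions, not a copy of Listing 1.
from time_agnostic_library.agnostic_entity import AgnosticEntity
from time_agnostic_library.agnostic_query import VersionQuery, DeltaQuery

CONFIG = "./config.json"  # assumed to point at the dataset and provenance triplestores

# 1. Materialization: every version of a single entity across its whole history
entity = AgnosticEntity(res="https://example.org/br/1", config_path=CONFIG)
history = entity.get_history()

# 2. Cross-version structured query: a SPARQL query evaluated over all dataset states
query = """
PREFIX cito: <http://purl.org/spar/cito/>
SELECT ?citation WHERE { ?citation a cito:Citation }
"""
versions = VersionQuery(query, config_path=CONFIG).run_agnostic_query()

# 3. Delta query: the changes affecting the entities matched by the query
deltas = DeltaQuery(query, config_path=CONFIG).run_agnostic_query()

# Any of the above can be restricted to a timespan via a (START, END) tuple
versions_2021 = VersionQuery(
    query, on_time=("2021-01-01", "2021-12-31"), config_path=CONFIG
).run_agnostic_query()
```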
The time-agnostic-browser [2] is also released under the ISC license and can be run as a Flask application. It is organized into two macro-sections: “Explore” and “Query”. In the former, a text input accepts a URI; on submission, the entire history of the corresponding resource is displayed. In the latter, a text area receives a SPARQL query, which is resolved over all dataset states. Its main added value is hiding the triples and the complexity of the underlying RDF model: predicate URIs, as well as subjects and objects, appear in a human-readable format. Moreover, all the entities are displayed as links, providing shortcuts to reconstruct the history of the related resources (Figure 4).
The efficiency of time-agnostic-library was measured with two types of benchmarks [16], one on execution times and the other on the amount of computer memory (RAM) required, across ten different use cases, each repeated ten times to obtain significant results and avoid outliers. In light of these benchmarks, time-agnostic-library has proven effective for any materialization. Structured queries are swift if all subjects are known or deducible. On the other hand, the presence of unknown subjects in the user’s SPARQL query requires identifying all present and past entities that satisfy that pattern, and so demands considerably more time and resources. Specifically, all materializations and the cross-version structured query with known subjects required about half a second and about 50 MB of RAM; conversely, with unknown subjects, 581 seconds and 519 MB of RAM were required. It can be concluded that the proposed software can be used effectively in all cases where the subject is known, that is, for any materialization and for SPARQL queries without isolated triple patterns containing unknown subjects.
Other software solutions for such problems have been proposed. Table 1 shows the list of available software to perform materializations and time-traversal queries on RDF datasets. As can be observed, time-agnostic-library is the only one to support all retrieval functionalities without requiring pre-indexing processes. This feature makes it particularly suitable for use in scenarios with large amounts of data that often change over time. Moreover, compared to the approach of Im, Lee and Kim [17] and OSTRICH [18], the OpenCitations Data Model only requires storing the current state of the dataset, rather than the original one, allowing one to query the latest version, without additional computational effort to first re-create the original version.
Table 1. Comparison between time-agnostic-library and pre-existing software for achieving materializations and time-traversal queries on RDF datasets.
The OpenCitations Data Model and the time-agnostic-library software are the pre-requisites that will allow OpenCitations to involve third parties, for example members of staff in academic libraries, in the submission, curation and updating of OpenCitations bibliographic and citation data. At this stage, all entities in COCI have a single snapshot — the one made at the time of creation. However, since these entities may become modified, corrected or enriched over time, it is imperative to have appropriate software tools available for use by curators. With the time-agnostic-library software and its associated time-agnostic-browser, it will be possible for a curator to explore the entire history of the changes within an RDF dataset, to know when they were made, based on which source, and by which responsible agent, thus ensuring the reliability and verifiability of data, and facilitating any necessary further changes.
[3] J. D. Fernández, A. Polleres, and J. Umbrich, ‘Towards Efficient Archiving of Dynamic Linked Open Data’, in DIACRON@ESWC, Portorož, Slovenia, 2015, pp. 34–49.
[5] T. Käfer, A. Abdelrahman, J. Umbrich, P. O’Byrne, and A. Hogan, ‘Observing Linked Data Dynamics’, in The Semantic Web: Semantics and Big Data, vol. 7882, P. Cimiano, O. Corcho, V. Presutti, L. Hollink, and S. Rudolph, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013, pp. 213–227. doi: 10.1007/978-3-642-38288-8_15
[8] F. Orlandi and A. Passant, ‘Modelling provenance of DBpedia resources using Wikipedia contributions’, Journal of Web Semantics, vol. 9, no. 2, pp. 149–164, Jul. 2011, doi: 10.1016/j.websem.2011.03.002.
[9] P. Dooley and B. Božić, ‘Towards Linked Data for Wikidata Revisions and Twitter Trending Hashtags’, in Proceedings of the 21st International Conference on Information Integration and Web-based Applications & Services, Munich Germany, Dec. 2019, pp. 166–175. doi: 10.1145/3366030.3366048.
[11] J. Umbrich, M. Hausenblas, A. Hogan, A. Polleres, and S. Decker, ‘Towards Dataset Dynamics: Change Frequency of Linked Open Data Sources’, in Proceedings of the WWW2010 Workshop on Linked Data on the Web, Raleigh, USA, 2010. Available: http://ceur-ws.org/Vol-628/ldow2010_paper12.pdf
[12] M. Daquino, S. Peroni, and D. Shotton, ‘The OpenCitations Data Model’, figshare, 2020, doi: 10.6084/M9.FIGSHARE.3443876.V7.
[13] S. Peroni, D. Shotton, and F. Vitali, ‘A Document-inspired Way for Tracking Changes of RDF Data’, in Detection, Representation and Management of Concept Drift in Linked Open Data, Bologna, 2016, pp. 26–33. Available: http://ceur-ws.org/Vol-1799/Drift-a-LOD2016_paper_4.pdf
[14] I. Heibi, S. Peroni, and D. Shotton, ‘Software review: COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations’, Scientometrics, vol. 121, no. 2, pp. 1213–1228, Nov. 2019, doi: 10.1007/s11192-019-03217-6.
[15] K. Beck, Test-driven development: by example. Boston: Addison-Wesley, 2003.
[16] A. Massari, ‘time-agnostic-library: benchmark results on execution times and RAM’. Zenodo, Oct. 05, 2021. doi: 10.5281/ZENODO.5549648.
[17] D.-H. Im, S.-W. Lee, and H.-J. Kim, ‘A Version Management Framework for RDF Triple Stores’, Int. J. Softw. Eng. Knowl. Eng., vol. 22, pp. 85–106, 2012.
[18] R. Taelman, M. V. Sande, and R. Verborgh, ‘OSTRICH: Versioned Random-Access Triple Store’, in Companion Proceedings of the Web Conference 2018, 2018, pp. 127–130. Available: https://core.ac.uk/download/pdf/157574975.pdf
[19] N. F. Noy and M. A. Musen, ‘Promptdiff: A Fixed-Point Algorithm for Comparing Ontology Versions’, in Proc. of IAAI, 2002, pp. 744–750.
[20] M. Völkel, W. Winkler, Y. Sure, S. Kruk, and M. Synak, ‘SemVersion: A Versioning System for RDF and Ontologies’, 2005.
[21] M. V. Sande, P. Colpaert, R. Verborgh, S. Coppens, E. Mannens, and R. V. Walle, ‘R&Wbase: Git for triples’, 2013.
[22] T. Neumann and G. Weikum, ‘x-RDF-3X: Fast Querying, High Update Rates, and Consistency for RDF Databases’, Proceedings of the VLDB Endowment, vol. 3, pp. 256–263, 2010.
[23] A. Cerdeira-Pena, A. Farina, J. D. Fernandez, and M. A. Martinez-Prieto, ‘Self-Indexing RDF Archives’, in 2016 Data Compression Conference (DCC), Snowbird, UT, USA, Mar. 2016, pp. 526–535. doi: 10.1109/DCC.2016.40.
[24] T. Pellissier Tanon and F. Suchanek, ‘Querying the Edit History of Wikidata’, in The Semantic Web: ESWC 2019 Satellite Events, vol. 11762, P. Hitzler, S. Kirrane, O. Hartig, V. de Boer, M.-E. Vidal, M. Maleshkova, S. Schlobach, K. Hammar, N. Lasierra, S. Stadtmüller, K. Hose, and R. Verborgh, Eds. Cham: Springer International Publishing, 2019, pp. 161–166. doi: 10.1007/978-3-030-32327-1_32.
OpenCitations makes available a SPARQL endpoint for querying the data included in the OpenCitations Corpus. While many queries are possible according to the model described on the website (and, in more detail, in the official metadata document of the Corpus), we have received requests from users of the service for exemplar queries. We have chosen two of them, which are particularly relevant to the work done in recent months by the Initiative for Open Citations, which we have already introduced in another blog post.
Query: return all the papers (including their titles) citing the article with DOI “10.1038/227680a0”.
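Below is a sketch of how this query might be expressed and submitted programmatically; the endpoint URL and the property paths reflect our reading of the OCC metadata document and should be treated as indicative rather than definitive.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Indicative endpoint and property paths (check against the OCC metadata document).
endpoint = SPARQLWrapper("http://opencitations.net/sparql")
endpoint.setQuery("""
PREFIX cito:     <http://purl.org/spar/cito/>
PREFIX dcterms:  <http://purl.org/dc/terms/>
PREFIX datacite: <http://purl.org/spar/datacite/>
PREFIX literal:  <http://www.essepuntato.it/2010/06/literalreification/>

SELECT ?citing ?title WHERE {
    ?cited datacite:hasIdentifier [
        datacite:usesIdentifierScheme datacite:doi ;
        literal:hasLiteralValue "10.1038/227680a0"
    ] .
    ?citing cito:cites ?cited ;
            dcterms:title ?title .
}
""")
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["citing"]["value"], "-", row["title"]["value"])
```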
Query: return all the papers cited by the bibliographic resource “br/4186” included in the OCC, including the text of bibliographic references used in “br/4186” for making the citations and the titles of the cited papers.
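A corresponding sketch for this second query is given below; the corpus URI pattern, endpoint URL and reference-modelling properties are again indicative rather than definitive.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Indicative sketch: the br/4186 URI pattern and the reference-modelling properties
# (frbr:part, biro:references, c4o:hasContent) should be verified against the OCC model.
endpoint = SPARQLWrapper("http://opencitations.net/sparql")
endpoint.setQuery("""
PREFIX frbr:    <http://purl.org/vocab/frbr/core#>
PREFIX biro:    <http://purl.org/spar/biro/>
PREFIX c4o:     <http://purl.org/spar/c4o/>
PREFIX dcterms: <http://purl.org/dc/terms/>

SELECT ?cited ?reference_text ?cited_title WHERE {
    <https://w3id.org/oc/corpus/br/4186> frbr:part ?reference .
    ?reference a biro:BibliographicReference ;
               c4o:hasContent ?reference_text ;
               biro:references ?cited .
    OPTIONAL { ?cited dcterms:title ?cited_title }
}
""")
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["reference_text"]["value"], "|", row.get("cited_title", {}).get("value", ""))
```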
JATS, the Journal Article Tag Suite, defines a vocabulary of XML elements and attributes used to describe the content and metadata of journal articles. As described in the previous post, we have mapped the metadata elements of the JATS Journal Publishing Tag Set to RDF, so that publishers’ XML article metadata encoded using JATS might become part of the web of linked data. Our JATS2RDF mapping document is available here in PDF format. We also created an XSLT to automate the creation of RDF metadata from documents marked up in XML using the NISO-JATS Journal Publishing Tag Library v1.0, enabling this information to be published to the Semantic Web as linked open data in a manner that is unambiguous and universally understood.
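By way of illustration, such a transformation can be applied with any standard XSLT processor; the short Python sketch below uses lxml (an XSLT 1.0 processor), with file names that are purely illustrative.

```python
from lxml import etree

# Apply the JATS2RDF stylesheet to a JATS-encoded article and write the RDF output.
# "jats2rdf.xsl" and "article-jats.xml" are illustrative file names.
stylesheet = etree.XSLT(etree.parse("jats2rdf.xsl"))
article = etree.parse("article-jats.xml")
rdf_result = stylesheet(article)

with open("article-metadata.rdf", "wb") as out:
    out.write(etree.tostring(rdf_result, pretty_print=True,
                             xml_declaration=True, encoding="UTF-8"))
```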
Most XML markup applied to journal articles is created by specialist companies employed by publishers for this purpose. However, to facilitate the creation of JATS-compliant metadata by others, Tanya Gray and I have separately created a JATS Metadata Input Form, by adapting the metadata input system that she had previously created to permit entry of MIIDI metadata (Minimal Information to record an Infectious Disease Investigation), which we also repurposed to create the DataCite metadata input form described in a recent post.
The JATS Metadata Input Form is freely available on the Web at http://www.miidi.org/jats. Backed by an XML model that is interpreted by XForms and Orbeon Forms to create the displayed Web form dynamically, this form contains an input tab for each of the five principal metadata elements that we mapped – <article>, <article-meta>, <journal-meta>, <contrib> and <ref-list> – with an input field in each tab for the various relevant JATS metadata elements and attributes. Where appropriate, each element is accompanied by a drop-down menu that permits the user to choose one from the list of suggested input values given in the JATS specification, as shown for the element <article-type> in the following screen shot of the <article> tab:
The entered JATS metadata are saved as an XML file on the user’s local hard drive, with a name and directory location of the user’s choosing. Optionally, the metadata can also be saved in other formats including HTML, PDF and Kipling XML (a subset of the NLM Journal Publishing DTD version 3.0 that is the input format for Annotum, a publishing system based on the WordPress blogging platform). Additionally, the metadata can also be converted to RDF using the XSLT transformation described in the previous blog post.
We welcome feedback about the usefulness and functionality of this service.
DataCite is an international organization responsible for the DOIs (Digital Object Identifiers) issued for research datasets. For each DOI issued, DataCite requires the data publisher to create and submit to DataCite descriptive metadata that can aid resource discovery. These metadata elements, divided into mandatory and optional ones, are specified in a document that is periodically updated by DataCite, the most recent version of which is the DataCite Metadata Kernel, v2.2.
In July 2012, Silvio Peroni and I revised and expanded the DataCite Ontology, as described in an earlier blog post, and then used it to map all the mandatory and optional DataCite metadata terms from the DataCite Metadata Kernel, v2.2 to RDF, as described in the previous blog post.
To facilitate the creation of metadata compliant with the DataCite Metadata Kernel v2.2, Tanya Gray and I have separately created a DataCite Metadata Input Form, by adapting the metadata input system that she had previously created to permit entry of MIIDI metadata (Minimal Information to record an Infectious Disease Investigation).
The DataCite Metadata Input Form is freely available on the Web at http://www.miidi.org/datacite. Backed by an XML model that is interpreted by XForms and Orbeon Forms to create the displayed Web form dynamically, this form contains an input field for each of the DataCite metadata elements, both the main terms and their sub-terms, in numerical order. Where appropriate, each element contains a drop-down menu that permits the user to choose one from the list of allowed input values specified by the DataCite Metadata Kernel v2.2, as shown in the following screen shot:
The entered DataCite metadata are saved as an XML file on the user’s local hard drive, with a name and directory location of the user’s choosing. Optionally, the metadata can also be saved in other formats including HTML, PDF and Kipling XML (a subset of the NLM Journal Publishing DTD version 3.0 that is the input format for Annotum, a publishing system based on the WordPress blogging platform). Additionally, the metadata can also be converted to RDF using an XSLT transformation based on the mapping of the DataCite Metadata Kernel, v2.2 to RDF described in the previous blog post.
We commend use of this service to researchers wishing to create DataCite metadata to accompany datasets sent to data repositories, and to repository creators wishing to create or supplement DataCite metadata for a newly submitted dataset they will archive and publish, to send to DataCite when registering a DOI for that dataset.
Our system uses a client-server architecture that is more fully described here. Because of this, our server could suffer from usage overload, leading to unacceptably slow response times. We have not experienced such slow response times while testing, but would like to hear from users both about any performance issues experienced and about suggestions for improvement.
The purpose of mapping DataCite metadata elements to ontology terms is to enable DataCite metadata to be published in RDF as Open Linked Data, enabling these metadata to be understood programmatically and integrated automatically with similar data from elsewhere.
In the previous blog post, I described the updated and expanded version of the DataCite Ontology, version 0.6.1, that I created with Silvio Peroni to conform to the DataCite Metadata Kernel, v2.2 published in July 2012. The revised ontology now provides the eleven classes and five properties required to cover all the items in v2.2 of the DataCite Metadata Schema – not just the core DataCite metadata elements, but all of them – that were not conveniently covered by terms in other ontologies.
In this post, I describe how we have used this revised DataCite Ontology to create a new revised DataCite2RDF mapping document. This now replaces the previous mapping document, described in an earlier post, that was a partial mapping of the DataCite Metadata Kernel v2.0 using the original version of the DataCite Ontology.
Wherever possible in this new DataCite2RDF mapping, we have used commonly used vocabularies, including:
The mapping document is structured in tabular form, with three columns: the first containing the DataCite ID, the second containing the name of the DataCite property, and the third containing the ontology entities used in mapping each of the DataCite metadata elements. All the metadata elements of the DataCite Metadata Kernel version 2.2 are included, both mandatory and optional, and both major and supplementary. For each, we provide not only the ontology terms, but also a specific exemplar of the usage of that term in an RDF statement, giving alternatives where appropriate. To show the style employed, the mappings for the first three DataCite metadata elements are shown in the following table:
| ID | DataCite property | Equivalent ontology class or property |
|----|-------------------|---------------------------------------|
| 1 | Identifier | datacite:PrimaryIdentifier (a sub-class of datacite:ResourceIdentifier that uses a datacite:IdentifierScheme restricted to datacite:doi, an individual in the datacite:ResourceIdentifierScheme). Exemplar usage: |
To facilitate our mapping, the object properties compiles and isCompiledBy, that are required for the DataCite relationType controlled list, have now been included in version 2.2 of CiTO (created 3 July 2012) as cito:compiles and cito:isCompiledBy. The use of the mini-ontology CiTO4Data, that contained only those properties, has consequently been deprecated.
In several instances, we propose alternative mappings, depending upon whether one wishes to use a data property that has a literal (e.g. text, number, date) as its object, or an object property that has a URI as its object. As explained more fully in the mapping document itself, our recommended best practice is to use DCMI metadata terms (dcterms:) as object properties in preference to Dublin Core metadata elements (dc:) as data properties, unless one specifically needs to use a literal as the object of an RDF triple.
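For instance, the following illustrative fragment (not taken from the mapping document itself) expresses the same creator metadatum in both ways:

```python
from rdflib import Graph

# The same "creator" metadatum expressed with a dc: data property (literal object)
# and with a dcterms: object property (URI object). Names and URIs are illustrative.
fragment = """
@prefix dc:      <http://purl.org/dc/elements/1.1/> .
@prefix dcterms: <http://purl.org/dc/terms/> .

<https://example.org/dataset/1>
    dc:creator      "Jane Doe" ;                              # literal object
    dcterms:creator <https://example.org/person/jane-doe> .   # URI object
"""

g = Graph()
g.parse(data=fragment, format="turtle")
print(g.serialize(format="turtle"))
```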
A presentation related to this work, that was given at a DataCite meeting held at the British Library on 6 July 2012, is available here.
The next blog post describes a DataCite Metadata Input Form based on this new DataCite2RDF mapping that Tanya Gray and I have created. This is a Web input tool that permits easy entry of metadata compliant with the DataCite Metadata Kernel v2.2. The metadata can be saved in an XML file, and can be automatically mapped to RDF by employing an XSLT that uses this mapping.
We commend the use of this mapping to all who wish to encode DataCite metadata in RDF, and welcome feedback on this work.
In a previous blog post, I described the work that Silvio Peroni and I undertook in May 2011 to map the main terms from the DataCite Metadata Kernel v2.0 to RDF.
To enable that, we created a ‘proto-ontology’, the DataCite Ontology version 0.2, that contained just the following four object properties:
These properties permitted us to provide identifier descriptions required by DataCite that could not be achieved using other Ontologies. We did this using the following type of construction:
:this-dataset datacite:hasPrimaryIdentifier
[ a prism:doi ; literal:hasLiteralValue "***" ] .
in which the object property relates to a blank node defining something that is a DOI and that has the particular literal value specified.
In July 2012, to permit the updating and expansion of the DataCite2RDF mapping to conform to the newly published DataCite Metadata Kernel v2.2, we undertook a complete revision of the DataCite Ontology, enabling us to create mappings not just for the core DataCite metadata elements, but for them all.
Note that the four original specific object properties have been replaced by the single object property datacite:hasIdentifier, and that the method of defining the identifier has been changed. Now, rather than the object property relating to a blank node in which the identifier is defined as a literal, the object property datacite:hasIdentifier has as its object a member of the class datacite:Identifier, or of one of its three sub-classes, datacite:PersonalIdentifier, datacite:FunderIdentifier or datacite:ResourceIdentifier, as shown in the following diagram:
The exact nature of the identifier is then defined using the second DataCite object property datacite:usesIdentifierScheme that has as its object the class datacite:IdentifierScheme or one of its three sub-classes: datacite:PersonalIdentifierScheme, datacite:FunderIdentifierScheme or datacite:ResourceIdentifierScheme.
This provides a robust method for defining identifiers, since each specific identifier is defined as an individual member of its appropriate identifier scheme class. Using the new DataCite Ontology, these three types of identifier scheme can be used as follows:
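(The original code example is not reproduced in this text; the sketch below reconstructs the pattern with illustrative subjects and literal values, using namespace IRIs that we assume for the DataCite Ontology and the literal reification vocabulary.)

```python
from rdflib import Graph

# Reconstruction (with illustrative values) of the three identifier patterns:
# a resource DOI, a personal ORCID, and a FundRef funder identifier.
identifiers_ttl = """
@prefix datacite: <http://purl.org/spar/datacite/> .
@prefix literal:  <http://www.essepuntato.it/2010/06/literalreification/> .

<https://example.org/dataset/1> datacite:hasIdentifier [
    a datacite:ResourceIdentifier ;
    datacite:usesIdentifierScheme datacite:doi ;
    literal:hasLiteralValue "10.5072/example-dataset" ] .

<https://example.org/person/1> datacite:hasIdentifier [
    a datacite:PersonalIdentifier ;
    datacite:usesIdentifierScheme datacite:orcid ;
    literal:hasLiteralValue "0000-0000-0000-0000" ] .

<https://example.org/funder/1> datacite:hasIdentifier [
    a datacite:FunderIdentifier ;
    datacite:usesIdentifierScheme datacite:fundref ;
    literal:hasLiteralValue "10.13039/000000000" ] .
"""

g = Graph()
g.parse(data=identifiers_ttl, format="turtle")
print(f"{len(g)} triples parsed")
```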
where datacite:doi is an individual member of the class datacite:ResourceIdentifierScheme specifying a DataCite Digital Object Identifier, datacite:orcid is an individual member of the class datacite:PersonalIdentifierScheme specifying an Open Researcher and Contributor Identifier, and datacite:fundref is an individual member of the class datacite:FunderIdentifierScheme specifying a FundRef Funder Identifier.
As need arises, new identifiers can be added later as new members of each class, without having to modify the structure of the DataCite Ontology. We have already extended the DataCite specification by adding three members, datacite:local-resource-identifier-scheme, datacite:local-personal-identifier-scheme and datacite:local-funder-identifier-scheme, to permit the use of local identifiers, and have requested that DataCite include such local identifier schemes in the next release (version 2.3) of the DataCite Metadata Kernel.
Version 2.2 of the DataCite Metadata Kernel has a property “Description”, with four permitted values: ‘abstract’, ‘other’, ‘series information’ and ‘table of content’. The DataCite team recognises that this rather rag-bag collection of values makes use of the Description property highly problematic, and has referred this matter to its metadata committee for re-consideration.
Nevertheless, to complete our development of the DataCite Ontology, thus permitting mapping of all the DataCite Metadata Kernel v2.2 metadata properties, we have created a new class, datacite:DescriptionType, and two final DataCite object properties, datacite:hasDescription and datacite:hasDescriptionType. These allow us to link an entity to another item representing a description of that entity of a particular type. The type is defined using the property datacite:hasDescriptionType, which must have as its object one of the members of the class datacite:DescriptionType, i.e. datacite:abstract, datacite:other, datacite:series-information or datacite:table-of-content. In this way it is possible to associate written documents (e.g. journal articles or ‘data articles’) as descriptions of datasets, as shown in the following excerpt:
:this-dataset a fabio:Dataset ;
datacite:hasDescription [ a fabio:JournalArticle ;
datacite:hasDescriptionType datacite:other ] .
We expect the membership of this class datacite:DescriptionType will expand once the DataCite Metadata Kernel v2.3 is published.
Why should the publishers of subscription-access journals, who presently generate income from the sale of access to peer-reviewed full-text scholarly articles, be willing to open the reference lists of these articles and contribute them to the Open Citations Corpus for publication as open linked data? I would like to suggest the following reasons:
1. There is a general move towards open data, which is widely regarded as a common good. This includes citation data, i.e. bibliographic references from one article to another (in RDF Turtle format: A cito:cites B . ).
2. The reference lists at the end of journal articles are works of scholarship by the authors, who have chosen to include certain references and exclude other potentially citable papers from the reference list. However, the references themselves are simply items of bibliographic data, formatted according to the journal style, and do not benefit from the author’s creative input.
3. The reference list, together with the front matter (including the bibliographic information about the article itself) and the abstract, has traditionally been included within the copyright protection enjoyed by the article as a whole. However, the bibliographic information about the article and the article’s abstract are commonly made freely available, for example through PubMed. This same openness should now be afforded to the reference list within each article.
4. There is a home for such reference citation data: the Open Citations Corpus has been specifically created to house and publish scholarly bibliographic citation data, and is now preparing to welcome article reference lists from subscription-access journals, to supplement those already contributed from open-access journals.
5. For those publishers who already contribute their reference information to CrossRef as part of its Cited-By Linking service, this can be accomplished without any change to the publisher’s own publishing workflows, just by giving permission for CrossRef to flag the articles of certain journals as having open references. Open Citations intends to collaborate with CrossRef by harvesting the reference lists from such flagged articles, parsing them into RDF, and adding them to the Open Citations Corpus. Provided that the references are already being submitted to CrossRef, no work will have to be done by the publisher, and no changes in publishing procedure will be involved.
6. Open Citations will publish each reference list as an independent RDF Named Graph, with a unique URI, thereby protecting the integrity of the article reference list as a unit of scholarship, the source of which will be explicitly acknowledged.
7. The open citations data will then be offered back to publishers to use as they wish, e.g. for visualization of citation networks, calculation of metrics, etc., providing easier and more usable access to their own citation data than is currently afforded by commercial providers, who do not provide such data in linked data format.
8. Publishers will also be free to host their own open citations data, should they wish to do so.
9. For the majority of publishers, who would still receive subscription income for the full articles themselves, opening their article reference lists in this way will cost nothing in terms of lost revenue.
10. Indeed, participation in the Open Citations Corpus will bring the following benefits to subscription-access publishers:
– Access to services to be built over the aggregated open citations data, for example an automated reference correction service available to editors upon receipt of a manuscript, for the automated pre-publication correction of errors in reference lists prior to article publication.
– Increased exposure, to users, of references to the publisher’s own journal articles – a form of advertising. While coverage among subscription-access publishers will at first be incomplete, the expanding Open Citations Corpus will, in true Web 2.0 style, become more useful the more publishers participate.
– Even while coverage is incomplete, the Open Citations Corpus by its very nature contains reference citations to all the key papers published in every field covered – currently to all the key papers published in every biomedical field, enabling readers more easily to identify and find the most highly cited papers of each contributing publisher.
– Opening citations data will result in white-listing and general good-will from funding agencies, government and other advocates of open data, who might otherwise mandate publication by grantees in alternative open-access journals.
– Opening citations data will lead to support from scholars and researchers themselves, who will be more inclined to publish in that publisher’s journals, feeling that at last the publishers are giving back to them some of their own data, rather than selling it back to them as at present.
As my next blog post shows, one leading subscription-access publisher is now willing to open its journal article references in the way I have suggested. Others who would like to do the same should contact me at <david.shotton@zoo.ox.ac.uk>.
PubMed, created by the US National Library of Medicine, holds bibliographic records and abstracts for essentially all journal articles published in the biomedical sciences. It currently records almost a million new entries each year!
PubMed Central (PMC), created as an extension of PubMed, is designed to hold full text articles from among the PubMed entries. At present, PMC holds entries for ~9.3% of the papers indexed in PubMed published between 1980 and 2010, 1,428,675 out of a total of 15,319,102. Many of these PMC articles (192,452 for the years 1980 to 2010, ~13.5% of the PMC holdings) are truly Open Access articles, that users can download and repurpose as they will. However, the majority are articles from subscription access journals deposited in PMC under licence agreements with funding agencies that, while providing read access to the full text, prevent readers from downloading the articles and from making derivative works.
The Open Citations Project has to date worked exclusively with the Open Access subset (OASS) of PMC. As of 24 January 2011, there were 204,637 OASS articles, including a few published before 1980. In almost all of these OASS articles, the reference lists were nicely marked up in NLM-DTD XML, making the task of identifying individual references straightforward. In a few cases, the articles were present as scanned page images, lacking any internal markup – those we were unable to process.
From the XML reference lists of these papers, we were able to identify and extract 6,325,178 individual references, which, together with the bibliographic information we had on the OASS articles themselves gave us 6,529,815 independent bibliographic records of both citing and cited entities. As explained in the next blog post, these records showed varying degrees of completeness and accuracy.
Using the Entrez API, we were able to use PubMed IDs, where these were available in the references, to extract a further 2,304,143 bibliographic records from PubMed, which, in the ideal world, would each exactly duplicate the information we had previously obtained from the OASS bibliographic reference containing that PubMed ID. As we shall describe, these additional PubMed records proved exceptionally useful in correcting imperfect OASS references.
Since the OASS articles cite papers outside the OASS, as well as a few within it, the majority of the bibliographic information we thus acquired related to papers represented within PubMed but not within PubMed Central. And because many OASS papers independently contained references to the most highly cited biomedical papers, many of our records were to the same bibliographic entities.
An important part of our data processing was thus to coalesce independent references from different OASS articles to the same multiply cited papers into a set of unique bibliographic records, each for one paper. Once this had been achieved, we were left with 3,578,598 unique bibliographic records, 204,637 describing the OASS articles themselves, and 3,373,961 describing articles outside the OASS, mostly from subscription-access journals.
The following table and figures tabulate and illustrate the number of papers in each category between 1980 and 2010 inclusive. The most striking thing about these data is that they show how, over this period, the relatively small number of articles in the Open Access subset of PMC (approx. 200,000 articles) referenced more than 20% of all PubMed papers published between 1980 and 2010 (approx. 15.3 million papers), and in doing so referenced all the most important highly cited papers in every field of biomedical endeavour. This inclusive coverage means that citation graphs created from the Open Citations dataset will capture all the important aspects of any biomedical field.
Table 1. Number of papers in PubMed, in PubMed Central (PMC), in the PMC Open Access subset (OASS), and cited by OASS articles, per period/year.

| Year | PubMed | PMC | OASS | Cited by OASS |
|-----------|------------|-----------|---------|---------------|
| 1950-1979 | 5,128,602 | 427,877 | 8,352 | 146,027 |
| 1980 | 278,069 | 23,218 | 631 | 15,708 |
| 1981 | 278,069 | 23,685 | 543 | 16,627 |
| 1982 | 292,219 | 25,215 | 740 | 18,389 |
| 1983 | 305,725 | 25,688 | 738 | 21,263 |
| 1984 | 314,737 | 26,316 | 543 | 23,249 |
| 1985 | 331,706 | 25,916 | 637 | 25,780 |
| 1986 | 345,501 | 26,721 | 590 | 28,761 |
| 1987 | 363,754 | 27,834 | 555 | 32,222 |
| 1988 | 381,976 | 28,802 | 442 | 36,320 |
| 1989 | 398,620 | 29,855 | 616 | 42,005 |
| 1990 | 398,620 | 30,143 | 704 | 48,422 |
| 1991 | 407,465 | 31,337 | 733 | 53,655 |
| 1992 | 412,457 | 32,325 | 719 | 61,091 |
| 1993 | 420,935 | 33,203 | 1,055 | 70,272 |
| 1994 | 431,160 | 33,456 | 1,279 | 80,206 |
| 1995 | 441,967 | 34,276 | 1,148 | 91,814 |
| 1996 | 452,218 | 34,755 | 1,155 | 101,853 |
| 1997 | 451,533 | 34,800 | 1,314 | 114,967 |
| 1998 | 469,466 | 36,179 | 1,341 | 131,510 |
| 1999 | 469,466 | 37,534 | 1,420 | 146,623 |
| 2000 | 528,243 | 39,047 | 1,608 | 170,330 |
| 2001 | 542,854 | 40,235 | 2,546 | 179,203 |
| 2002 | 560,006 | 43,265 | 3,199 | 195,879 |
| 2003 | 590,317 | 46,442 | 4,015 | 211,423 |
| 2004 | 634,432 | 51,416 | 6,005 | 229,423 |
| 2005 | 694,687 | 60,411 | 10,333 | 236,678 |
| 2006 | 740,007 | 72,295 | 14,264 | 238,387 |
| 2007 | 777,311 | 87,744 | 20,070 | 222,085 |
| 2008 | 824,612 | 120,004 | 31,416 | 190,071 |
| 2009 | 862,372 | 146,413 | 41,848 | 124,894 |
| 2010 | 918,598 | 120,145 | 40,245 | 27,877 |
| Total | 15,319,102 | 1,428,675 | 192,452 | 3,186,987 |
| % of PubMed | | 9.33% | | 20.80% |
| % of PMC | | | 13.47% | |
Figure 1
Figure 2
The OASS source data give the types of cited entity, aggregated after coalescing, as shown in Figure 3.