
OpenCitations Access Tokens: how they work and why they are important

Since its inauguration in 2010, OpenCitations has always granted free access to its services to users throughout the world, with no requirement for registration or sign-up. Programmatic access to OpenCitations data can be obtained via our SPARQL endpoints or via our REST APIs. In addition, OpenCitations data – available in CSV, Scholix, and RDF formats – can be downloaded from data dumps made periodically and stored on Figshare, enabling large-scale analyses of the complete datasets, and can also be explored via our user-friendly text-based search and browsing interfaces.

One of OpenCitations’ priorities is (and will always be) to keep its data globally open and available at zero cost and without restriction for third-party analysis and re-use. As a matter of sustainability, OpenCitations relies on financial support from the scholarly community, including the institutions that use OpenCitations data. However, OpenCitations has so far lacked a proper system for monitoring its users, and the main evidence of its impact in different academic fields and countries has been gathered only incompletely, from direct contacts with our members and donors across the world, our collaborations with international projects, and interactions on our social platforms (Twitter and LinkedIn).

We would now like to institute a system that enables us to follow usage and assess the impact of OpenCitations more reliably. For this purpose, we are happy to announce the launch of the OpenCitations Access Token System for access to the OpenCitations data and services.

An OpenCitations Access Token is an opaque character string that anonymously identifies a unique user of the OpenCitations APIs. OpenCitations assigns an access token only if authorized to do so by each user, who can request a token by entering their email address into the access form and clicking “Get token”. Upon submission of such a request, each user will automatically receive a personal access token by email. Users can save their personal access token and reuse it every time they call the OpenCitations APIs, passing it as the value of the access-token key in the header of each API call.

Obtaining and using an OpenCitations Access Token is thus easy. It requires only a simple form request, and then the insertion of your personal token into the API call header when using the OpenCitations REST APIs. OpenCitations will not store users’ email addresses or any other personal information, so users’ privacy is fully safeguarded. The token system simply provides a mechanism for identifying unique users, for which the use of IP addresses is insufficient.
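If you work in Python, a minimal sketch of such a call with the requests library might look as follows, here against the citations operation of the COCI REST API (the token value is a placeholder, not a real token):

import requests

# Personal token received by email after filling in the access form
# (placeholder value, not a real token).
ACCESS_TOKEN = "00000000-0000-0000-0000-000000000000"

# Example operation: citations pointing to a given DOI in the COCI index.
url = "https://opencitations.net/index/coci/api/v1/citations/10.1186/1756-8722-6-59"

response = requests.get(url, headers={"access-token": ACCESS_TOKEN})
response.raise_for_status()
for citation in response.json():
    print(citation["citing"], "->", citation["cited"])

The call behaves identically without the header; the token simply allows OpenCitations to count you as a single unique user across your calls.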

Obtaining an OpenCitations Access Token will take the user only a few seconds, and needs to happen only once. You can request your OpenCitations Access Token here:

https://opencitations.net/accesstoken 

Use of an OpenCitations Access Token is not compulsory. However, token use will help OpenCitations greatly, by enabling us to monitor the number of unique users accessing our data and services. This will provide objective, anonymized evidence of how many institutions and researchers access our data, whether occasionally or on a regular basis, which we can then employ to demonstrate the usefulness of OpenCitations in the research environment. While the token system will initially be employed just for API calls (the most used service we offer), it will subsequently be extended to our other forms of data access.

OpenCitations exists for the people who use its data for research purposes every day, and thanks to their support. This is why precise knowledge of how many researchers and institutions are accessing our services is essential to us: it will enable us to present the uniqueness and value of OpenCitations to new communities of stakeholders, and thus to enlarge the already enthusiastic and diverse group of people and institutions supporting and using our Open Science Infrastructure.

To summarize: Getting and using an OpenCitations Access Token is voluntary, easy, and does not cost you anything. However, it will help OpenCitations a great deal. Please get your own token now, and use it next time you access OpenCitations. Thank you very much!


Performing live time-traversal queries on RDF datasets

Guest post by Arcangelo Massari, University of Bologna

In this post, Arcangelo Massari, who recently graduated in Digital Humanities and Digital Knowledge under Professor Silvio Peroni at the University of Bologna, shares the results of his master’s thesis.

A particular problem in information retrieval is that of obtaining data from an evolving dataset, independent of the time at which that item of data was added, changed or removed. To permit such time-independent queries to be performed over evolving RDF datasets, I have developed two new pieces of open source software, time-agnostic-library [1] and time-agnostic-browser [2], that are now available from the OpenCitations GitHub repository.

The time-agnostic-library is a Python library for performing live time-traversal queries on RDF datasets. Time-traversal means being agnostic about time: a time-traversal query is a SPARQL query run not on the current state of the collection but over its entire history, or over a specified timespan of that history [3]. The tool allows materializations – obtaining all versions of an entity over time, or its status at a given time. Furthermore, SPARQL queries can be performed to get the delta between two or more versions of one or more resources. The time-agnostic-library thereby realizes all the retrieval functionalities described in the taxonomy by Fernández et al. [3].

To complement this query software, the time-agnostic-browser is a web application built on top of the time-agnostic-library to achieve the same results via a graphical user interface.

The primary purpose of these developments is to offer a system for browsing the provenance [4] of RDF statements across time: who produced them, when, where the information was taken from, and what changes were made compared to the previous state of the resource. Knowledge of such information is essential because data changes over time, either because of the natural evolution of concepts or due to the correction of mistakes. Indeed, the latest version of knowledge may not be the most accurate. Such phenomena are particularly tangible in the Web of Data, as highlighted in a study by the Dynamic Linked Data Observatory, which noted the modification of about 38% of the nearly 90,000 RDF documents monitored for 29 weeks, and the permanent disappearance of 5% of them [5] (Figure 1).

Figure 1. Donut chart showing the results of the study conducted by the Dynamic Linked Data Observatory on the evolution of RDF documents [5].

Additionally, the truthfulness of data cannot be assessed without provenance records and a system to query them. In fact, the truth value of an assertion on the Web is never absolute, as demonstrated by Wikipedia, which in its official policy on the subject states: “The threshold for inclusion in Wikipedia is verifiability, not truth.” [6]. The Semantic Web does not alter that condition, and trustworthiness has to be evaluated by each application by probing the context of the statements [7]. It is a challenging task and thus, in the Semantic Web Stack, trust is the highest and most complex level to satisfy, subsuming all the previous ones (Figure 2).

Figure 2. The Semantic Web layers [7]. Trust is the uppermost level of the stack, subsuming all the others.

Notwithstanding these premises, at present the most extensive RDF datasets – DBpedia [8], Wikidata [9], Yago [10], and the Dynamic Linked Data Observatory [11] – do not use RDF to track changes and record the provenance of such changes. Instead, they all adopt backup-based archiving policies. Some of them, such as Yago 4, record provenance but not changes. As far as citation databases are concerned, OpenCitations is the only infrastructure to implement change-tracking mechanisms and to record full RDF provenance records for each data entity. Among the leading players in this field, neither Web of Science nor Scopus has adopted similar solutions.

In accordance with the OpenCitations Data Model (OCDM) [12], a provenance snapshot is generated by OpenCitations every time a bibliographic entity is created or modified. Each snapshot (prov:Entity) records the responsible agent (prov:wasAttributedTo), the generation time (prov:generatedAtTime), the invalidation time (prov:invalidatedAtTime), the primary source (prov:hadPrimarySource), and a link to the previous snapshot (prov:wasDerivedFrom), using terms from the Provenance Ontology. In addition, OCDM introduced a system to simplify restoring an entity’s status at a given time, by saving the delta between two versions as a SPARQL update query (oco:hasUpdateQuery, a term from the OpenCitations Ontology) [13] (Figure 3). This approach makes it straightforward to restore an entity to a specific timepoint (snapshot) by applying the inverse operations, i.e., deletions instead of additions, etc.

Figure 3. Provenance in the OpenCitations Data Model.
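As a concrete illustration of these terms, the following minimal sketch assembles the triples of one such snapshot using the rdflib library. The entity, agent and snapshot IRIs are invented for the example, and the namespace for hasUpdateQuery is assumed to be that of the OpenCitations Ontology:

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import PROV, XSD

# Assumption: hasUpdateQuery lives in the OpenCitations Ontology (OCO) namespace.
OCO = Namespace("https://w3id.org/oc/ontology/")

g = Graph()
# Invented IRIs, for illustration only.
entity = URIRef("https://w3id.org/oc/corpus/br/4186")
snapshot = URIRef("https://w3id.org/oc/corpus/br/4186/prov/se/2")
previous = URIRef("https://w3id.org/oc/corpus/br/4186/prov/se/1")

g.add((snapshot, PROV.specializationOf, entity))
g.add((snapshot, PROV.wasAttributedTo, URIRef("https://orcid.org/0000-0000-0000-0000")))
g.add((snapshot, PROV.generatedAtTime, Literal("2021-09-24T10:00:00", datatype=XSD.dateTime)))
g.add((snapshot, PROV.hadPrimarySource, URIRef("https://api.crossref.org/")))
g.add((snapshot, PROV.wasDerivedFrom, previous))
# The delta with respect to the previous snapshot, stored as a SPARQL update query.
g.add((snapshot, OCO.hasUpdateQuery, Literal(
    'DELETE DATA { <https://w3id.org/oc/corpus/br/4186> '
    '<http://purl.org/dc/terms/title> "Old title" }')))

print(g.serialize(format="turtle"))

Restoring an earlier state is then a matter of applying the inverse of the stored update query, as described above.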

This solution is concretely used in all the datasets related to the OpenCitations infrastructure, such as COCI, an open index containing almost 1.2 billion DOI-to-DOI citation links derived from the open reference data available in Crossref [14]. It is important to note that this OpenCitations provenance model is generic and reusable in any other context. Since the time-agnostic-library leverages OCDM, it too is generic and can be used for any RDF dataset that tracks changes and provenance as OpenCitations does.

The time-agnostic-library is released under the ISC license and is downloadable through pip [1]. Test-driven development was adopted as a software development process during its creation [15]. It makes three main classes available to the user: AgnosticEntity, VersionQuery, and DeltaQuery, for materializations, version queries, and delta queries, respectively (Listing 1).

Listing 1. Code template to achieve materializations, time-traversal queries, and delta queries.
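As an indicative sketch of that template, usage of the three classes might look as follows; the module paths, constructor parameters and method names shown here are assumptions, to be checked against the library’s documentation:

# Indicative sketch: the class names are those given above; module paths and
# signatures are assumptions to be verified against the library's documentation.
from time_agnostic_library.agnostic_entity import AgnosticEntity
from time_agnostic_library.agnostic_query import VersionQuery, DeltaQuery

CONFIG = "./config.json"  # locations of the data and provenance triplestores

# 1. Materialization: all the versions of an entity over time.
entity = AgnosticEntity(res="https://w3id.org/oc/corpus/br/4186", config_path=CONFIG)
history = entity.get_history()

# 2. Cross-version structured query, over the whole history or an interval.
sparql = """
SELECT ?title WHERE {
    <https://w3id.org/oc/corpus/br/4186> <http://purl.org/dc/terms/title> ?title
}
"""
versions = VersionQuery(sparql, config_path=CONFIG).run_agnostic_query()

# 3. Delta query: the changes between versions matching the query,
#    optionally restricted to a (START, END) interval.
deltas = DeltaQuery(sparql, config_path=CONFIG).run_agnostic_query()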

All three operations can be performed over the entire available history of the dataset, or by specifying a time interval via a tuple in the form (START, END).

The time-agnostic-browser [2] is also released under the ISC license and can be run as a Flask application. It is organized into two macro-sections: “Explore” and “Query”. In the former, a text input accepts a URI; submitting it displays the entire history of the corresponding resource. In the latter, a text area receives a SPARQL query, which is resolved against all states of the dataset. Its main added value is hiding the triples and the complexity of the underlying RDF model: predicate URIs, as well as subjects and objects, appear in a human-readable format. Moreover, all the entities are displayed as links, providing shortcuts to reconstruct the history of the related resources (Figure 4).

Figure 4. Graphical user interface of an entity history reconstruction through the time-agnostic-browser.

The efficiency of the time-agnostic-library was measured with two types of benchmarks [16], one on execution times and the other on the amount of computer memory (RAM) required, across ten different use cases, each repeated ten times to produce significant results and avoid outliers. In light of these benchmarks, the time-agnostic-library has proven effective for any materialization. Structured queries are swift if all subjects are known or deducible. On the other hand, the presence of unknown subjects in the user’s SPARQL query involves the identification of all present and past entities satisfying that pattern, and so requires significantly more time and resources. Specifically, all materializations and the cross-version structured query with known subjects required about half a second and about 50 MB of RAM; conversely, with unknown subjects, 581 seconds and 519 MB of RAM were required. It can be concluded that the proposed software can be used effectively whenever the subject is known, that is, for any materialization or for SPARQL queries without isolated triple patterns containing unknown subjects.

Other software solutions for these problems have been proposed. Table 1 lists the available software for performing materializations and time-traversal queries on RDF datasets. As can be observed, the time-agnostic-library is the only one to support all the retrieval functionalities without requiring pre-indexing processes. This feature makes it particularly suitable for use in scenarios with large amounts of data that change frequently. Moreover, compared to the approaches of Im, Lee and Kim [17] and OSTRICH [18], the OpenCitations Data Model requires storing only the current state of the dataset, rather than the original one, allowing the latest version to be queried without the additional computational effort of first reconstructing it from the original version.

| Software | Version materialization | Delta materialization | Single-version structured query | Cross-version structured query | Single-delta structured query | Cross-delta structured query | Live |
|---|---|---|---|---|---|---|---|
| PromptDiff [19] | + | + | | | | | + |
| SemVersion [20] | + | + | | | | | + |
| Im, Lee, & Kim, 2012 [17] | + | + | + | + | + | | |
| R&Wbase [21] | + | + | + | | | | + |
| x-RDF-3X [22] | + | | + | + | | | |
| v-RDFCSA [23] | + | + | + | + | + | + | |
| OSTRICH [18] | + | + | + | | | | |
| Tanon & Suchanek, 2019 [24] | + | + | + | + | + | + | |
| time-agnostic-library [1] | + | + | + | + | + | + | + |

Table 1. Comparison between the time-agnostic-library and pre-existing software for achieving materializations and time-traversal queries on RDF datasets.

The OpenCitations Data Model and the time-agnostic-library software are the prerequisites that will allow OpenCitations to involve third parties, for example members of staff in academic libraries, in the submission, curation and updating of OpenCitations bibliographic and citation data. At this stage, all entities in COCI have a single snapshot: the one made at the time of creation. However, since these entities may be modified, corrected or enriched over time, it is imperative to have appropriate software tools available for use by curators. With the time-agnostic-library and its associated time-agnostic-browser, a curator will be able to explore the entire history of the changes within an RDF dataset, and to know when they were made, based on which source, and by which responsible agent, thus ensuring the reliability and verifiability of the data, and facilitating any necessary further changes.

References

[1] A. Massari, time-agnostic-library. 2021. Available: https://archive.softwareheritage.org/swh:1:snp:d7fd1754377f45d16afb61efc770815b5a3c8f83

[2] A. Massari, time-agnostic-browser. 2021. Available: https://archive.softwareheritage.org/swh:1:dir:337f641375cca034eda39c2380b4a7878382fc4c

[3] J. D. Fernández, A. Polleres, and J. Umbrich, ‘Towards Efficient Archiving of Dynamic Linked Open Data’, in Proceedings of the DIACHRON Workshop (DIACHRON@ESWC), Portorož, Slovenia, 2015, pp. 34–49.

[4] W3C Provenance Incubator Group, ‘Provenance XG Final Report’, W3C, Dec. 2010. Available: http://www.w3.org/2005/Incubator/prov/XGR-prov-20101214/

[5] T. Käfer, A. Abdelrahman, J. Umbrich, P. O’Byrne, and A. Hogan, ‘Observing Linked Data Dynamics’, in The Semantic Web: Semantics and Big Data, vol. 7882, P. Cimiano, O. Corcho, V. Presutti, L. Hollink, and S. Rudolph, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013, pp. 213–227. doi: 10.1007/978-3-642-38288-8_15

[6] S. L. Garfinkel, ‘Wikipedia and the Meaning of Truth’, MIT Technology Review, 2008, [Online]. Available: https://stephencodrington.com/Blogs/Hong_Kong_Blog/Entries/2009/4/11_What_is_Truth_files/Wikipedia%20and%20the%20Meaning%20of%20Truth.pdf

[7] M.-R. Koivunen and E. Miller, ‘Semantic Web Activity’, W3C, Nov. 02, 2001. https://www.w3.org/2001/12/semweb-fin/w3csw

[8] F. Orlandi and A. Passant, ‘Modelling provenance of DBpedia resources using Wikipedia contributions’, Journal of Web Semantics, vol. 9, no. 2, pp. 149–164, Jul. 2011, doi: 10.1016/j.websem.2011.03.002.

[9] P. Dooley and B. Božić, ‘Towards Linked Data for Wikidata Revisions and Twitter Trending Hashtags’, in Proceedings of the 21st International Conference on Information Integration and Web-based Applications & Services, Munich Germany, Dec. 2019, pp. 166–175. doi: 10.1145/3366030.3366048.

[10] Yago Project, ‘Download data, code, and logo of Yago projects’, Yago, 2021. https://yago-knowledge.org/downloads (accessed Sep. 24, 2021).

[11] J. Umbrich, M. Hausenblas, A. Hogan, A. Polleres, and S. Decker, ‘Towards Dataset Dynamics: Change Frequency of Linked Open Data Sources’, in Proceedings of the WWW2010 Workshop on Linked Data on the Web, Raleigh, USA, 2010. Available: http://ceur-ws.org/Vol-628/ldow2010_paper12.pdf

[12] M. Daquino, S. Peroni, and D. Shotton, ‘The OpenCitations Data Model’, figshare, 2020, doi: 10.6084/M9.FIGSHARE.3443876.V7.

[13] S. Peroni, D. Shotton, and F. Vitali, ‘A Document-inspired Way for Tracking Changes of RDF Data’, in Detection, Representation and Management of Concept Drift in Linked Open Data, Bologna, 2016, pp. 26–33. Available: http://ceur-ws.org/Vol-1799/Drift-a-LOD2016_paper_4.pdf

[14] I. Heibi, S. Peroni, and D. Shotton, ‘Software review: COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations’, Scientometrics, vol. 121, no. 2, pp. 1213–1228, Nov. 2019, doi: 10.1007/s11192-019-03217-6.

[15] K. Beck, Test-driven development: by example. Boston: Addison-Wesley, 2003.

[16] A. Massari, ‘time-agnostic-library: benchmark results on execution times and RAM’. Zenodo, Oct. 05, 2021. doi: 10.5281/ZENODO.5549648.

[17] D.-H. Im, S.-W. Lee, and H.-J. Kim, ‘A Version Management Framework for RDF Triple Stores’, Int. J. Softw. Eng. Knowl. Eng., vol. 22, pp. 85–106, 2012.

[18] R. Taelman, M. V. Sande, and R. Verborgh, ‘OSTRICH: Versioned Random-Access Triple Store’, in Companion Proceedings of the Web Conference 2018, 2018, pp. 127–130. Available: https://core.ac.uk/download/pdf/157574975.pdf

[19] N. F. Noy and M. A. Musen, ‘Promptdiff: A Fixed-Point Algorithm for Comparing Ontology Versions’, in Proc. of IAAI, 2002, pp. 744–750.

[20] M. Völkel, W. Winkler, Y. Sure, S. Kruk, and M. Synak, ‘SemVersion: A Versioning System for RDF and Ontologies’, 2005.

[21] M. V. Sande, P. Colpaert, R. Verborgh, S. Coppens, E. Mannens, and R. V. Walle, ‘R&Wbase: Git for triples’, 2013.

[22] T. Neumann and G. Weikum, ‘x-RDF-3X: Fast Querying, High Update Rates, and Consistency for RDF Databases’, Proceedings of the VLDB Endowment, vol. 3, pp. 256–263, 2010.

[23] A. Cerdeira-Pena, A. Farina, J. D. Fernandez, and M. A. Martinez-Prieto, ‘Self-Indexing RDF Archives’, in 2016 Data Compression Conference (DCC), Snowbird, UT, USA, Mar. 2016, pp. 526–535. doi: 10.1109/DCC.2016.40.

[24] T. Pellissier Tanon and F. Suchanek, ‘Querying the Edit History of Wikidata’, in The Semantic Web: ESWC 2019 Satellite Events, vol. 11762, P. Hitzler, S. Kirrane, O. Hartig, V. de Boer, M.-E. Vidal, M. Maleshkova, S. Schlobach, K. Hammar, N. Lasierra, S. Stadtmüller, K. Hose, and R. Verborgh, Eds. Cham: Springer International Publishing, 2019, pp. 161–166. doi: 10.1007/978-3-030-32327-1_32.

Querying the OpenCitations Corpus

OpenCitations makes available a SPARQL endpoint for querying the data included in the OpenCitations Corpus. While many queries are possible according to the model described on the website (and, in more detail, in the official metadata document of the Corpus), we have received requests from users of the service for exemplar queries. We have chosen two of them, which are particularly relevant to the work done in recent months by the Initiative for Open Citations, which we have already introduced in another blog post.

Query: return all the papers (including their titles) citing the article with DOI “10.1038/227680a0”.

PREFIX cito: <http://purl.org/spar/cito/>
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX datacite: <http://purl.org/spar/datacite/>
PREFIX literal: <http://www.essepuntato.it/2010/06/literalreification/>
SELECT ?citing ?title WHERE {
  ?id a datacite:Identifier ;
    datacite:usesIdentifierScheme datacite:doi ;
    literal:hasLiteralValue "10.1038/227680a0" .
  ?br 
    datacite:hasIdentifier ?id ;
    ^cito:cites ?citing .
  ?citing dcterms:title ?title
}

Query: return all the papers cited by the bibliographic resource “br/4186” included in the OCC, including the text of bibliographic references used in “br/4186” for making the citations and the titles of the cited papers.

PREFIX cito: <http://purl.org/spar/cito/>
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX biro: <http://purl.org/spar/biro/>
PREFIX frbr: <http://purl.org/vocab/frbr/core#>
PREFIX c4o: <http://purl.org/spar/c4o/>
SELECT ?cited ?cited_ref ?title WHERE {
  <https://w3id.org/oc/corpus/br/4186> cito:cites ?cited .
  OPTIONAL { 
    <https://w3id.org/oc/corpus/br/4186> frbr:part ?ref .
    ?ref biro:references ?cited ;
      c4o:hasContent ?cited_ref 
  }
  OPTIONAL { ?cited dcterms:title ?title }
}
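Either of these queries can also be run programmatically, for instance with Python’s SPARQLWrapper library. The following is a minimal sketch for the first query, assuming the public SPARQL endpoint URL advertised on the OpenCitations website:

from SPARQLWrapper import SPARQLWrapper, JSON

# Assumption: the public OpenCitations SPARQL endpoint.
ENDPOINT = "https://opencitations.net/sparql"

query = """
PREFIX cito: <http://purl.org/spar/cito/>
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX datacite: <http://purl.org/spar/datacite/>
PREFIX literal: <http://www.essepuntato.it/2010/06/literalreification/>
SELECT ?citing ?title WHERE {
  ?id a datacite:Identifier ;
    datacite:usesIdentifierScheme datacite:doi ;
    literal:hasLiteralValue "10.1038/227680a0" .
  ?br datacite:hasIdentifier ?id ;
    ^cito:cites ?citing .
  ?citing dcterms:title ?title
}
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["citing"]["value"], "-", row["title"]["value"])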

Why publishers should open their references

Why should the publishers of subscription-access journals, who presently generate income from the sale of access to peer-reviewed full-text scholarly articles, be willing to open the reference lists of these articles, and contribute them to the Open Citations Corpus for publication as open linked data? I would like to suggest the following reasons:

1. There is a general move towards open data, which is widely regarded as a common good. This includes citation data, i.e. bibliographic references from one article to another (in RDF Turtle format: A cito:cites B . ).
2. The reference lists at the end of journal articles are works of scholarship by the authors, who have chosen to include certain references and exclude other potentially citable papers from the reference list. However, the references themselves are simply items of bibliographic data, formatted according to the journal style, and do not benefit from the author’s creative input.
3. The reference list, together with the front matter (including the bibliographic information about the article itself) and the abstract, has traditionally been included within the copyright protection enjoyed by the article as a whole. However, the bibliographic information about the article and the article’s abstract are commonly made freely available, for example through PubMed. This same openness should now be afforded to the reference list within each article.
4. There is a home for such reference citation data: the Open Citations Corpus has been specifically created to house and publish scholarly bibliographic citation data, and is now preparing to welcome article reference lists from subscription-access journals, to supplement those already contributed from open-access journals.
5. For those publishers who already contribute their reference information to CrossRef as part of its Cited-By Linking service, this can be accomplished without any change to the publisher’s own publishing workflows, just by giving permission for CrossRef to flag the articles of certain journals as having open references. Open Citations intends to collaborate with CrossRef by harvesting the reference lists from such flagged articles, parsing them into RDF, and adding them to the Open Citations Corpus. Provided that the references are already being submitted to CrossRef, no work will have to be done by the publisher, and no changes in publishing procedure will be involved.
6. Open Citations will publish each reference list as an independent RDF Named Graph, with a unique URI, thereby protecting the integrity of the article reference list as a unit of scholarship, the source of which will be explicitly acknowledged.
7. The open citations data will then be offered back to publishers to use as they wish, e.g. for visualization of citation networks, calculation of metrics, etc., providing easier and more usable access to their own citation data than is currently afforded by commercial providers, who do not provide such data in linked data format.
8. Publishers will also be free to host their own open citations data, should they wish to do so.
9. For the majority of publishers, who would still receive subscription income for the full articles themselves, opening their article reference lists in this way will cost nothing in terms of lost revenue.
10. Indeed, participation in the Open Citations Corpus will bring the following benefits to subscription-access publishers:
– Access to services to be built over the aggregated open citations data, for example an automated reference-correction service available to editors upon receipt of a manuscript, enabling the pre-publication correction of errors in reference lists prior to article publication.
– Increased exposure to users of the references to the publisher’s own journal articles – a form of advertising. While at first coverage among subscription-access publishers will be incomplete, this expanding Open Citations Corpus will, in true Web 2.0 style, become more useful the more publishers participate.
– Even while coverage is incomplete, the Open Citations Corpus by its very nature contains reference citations to all the key papers in every field covered – currently every biomedical field – enabling readers more easily to identify and find the most highly cited papers of each contributing publisher.
– Opening citations data will result in white-listing and general good-will from funding agencies, government and other advocates of open data, who might otherwise mandate publication by grantees in alternative open-access journals.
– Opening citations data will lead to support from scholars and researchers themselves, who will be more inclined to publish in that publisher’s journals, feeling that at last the publisher is giving back to them some of their own data, rather than selling it back to them as at present.

As my next blog post shows, one leading subscription-access publisher is now willing to open its journal article references in the way I have suggested. Others who would like to do the same should contact me at <david.shotton@zoo.ox.ac.uk>.

Input data for Open Citations – the PMC Open Access Subset

PubMed, created by the US National Library of Medicine in DATE, holds bibliographic records and abstracts for essentially all journal articles published in the biomedical sciences. It currently records almost a million new entries each year!

PubMed Central (PMC), created as an extension of PubMed, is designed to hold full-text articles from among the PubMed entries. At present, PMC holds entries for ~9.3% of the papers indexed in PubMed that were published between 1980 and 2010: 1,428,675 out of a total of 15,319,102. Many of these PMC articles (192,452 for the years 1980 to 2010, ~13.5% of the PMC holdings) are truly Open Access articles, which users can download and repurpose as they wish. However, the majority are articles from subscription-access journals deposited in PMC under licence agreements with funding agencies that, while providing read access to the full text, prevent readers from downloading the articles and from making derivative works.

The Open Citations Project has to date worked exclusively with the Open Access subset (OASS) of PMC. As of 24 January 2011, there were 204,637 OASS articles, including a few published before 1980. In almost all of these OASS articles, the reference lists were nicely marked up in NLM-DTD XML, making the task of identifying individual references straightforward. In a few cases, the articles were present only as scanned page images lacking any internal markup; those we were unable to process.
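With such markup, reference extraction reduces to walking the reference-list elements. Here is a minimal sketch using lxml, with element names following the NLM DTD; real articles require more fallbacks than this:

from lxml import etree

def extract_references(nxml_path):
    """Yield (reference id, PubMed ID or None) from an NLM-DTD article file."""
    tree = etree.parse(nxml_path)
    for ref in tree.findall(".//ref-list/ref"):
        # A cited item may carry a <pub-id pub-id-type="pmid"> element.
        pmid = ref.findtext(".//pub-id[@pub-id-type='pmid']")
        yield ref.get("id"), pmid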

From the XML reference lists of these papers, we were able to identify and extract 6,325,178 individual references which, together with the bibliographic information we had on the OASS articles themselves, gave us 6,529,815 independent bibliographic records of both citing and cited entities. As explained in the next blog post, these records showed varying degrees of completeness and accuracy.

Using the Entrez API, we were able to use PubMed IDs, where these were available in the references, to extract a further 2,304,143 bibliographic records from PubMed; in an ideal world, each of these would exactly duplicate the information we had previously obtained from the OASS bibliographic reference containing that PubMed ID. As we shall describe, these additional PubMed records proved exceptionally useful in correcting imperfect OASS references.
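As a hypothetical illustration of such a lookup (not the project’s original pipeline), Biopython’s Entrez module can fetch PubMed records for a batch of PubMed IDs:

from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address; placeholder here

# Example PubMed IDs (illustrative values, not taken from the project data).
pmids = ["10490036", "11483584"]

handle = Entrez.efetch(db="pubmed", id=",".join(pmids), rettype="medline", retmode="text")
records = handle.read()  # MEDLINE-formatted bibliographic records
handle.close()
print(records)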

Since the OASS articles cite papers outside the OASS, as well as a few within it, the majority of the bibliographic information we thus acquired related to papers represented within PubMed but not within PubMed Central. And because many OASS papers independently contained references to the most highly cited biomedical papers, many of our records were to the same bibliographic entities.

An important part of our data processing was thus to coalesce independent references from different OASS articles to the same multiply cited papers into a set of unique bibliographic records, each for one paper. Once this had been achieved, we were left with 3,578,598 unique bibliographic records, 204,637 describing the OASS articles themselves, and 3,373,961 describing articles outside the OASS, mostly from subscription-access journals.
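In outline, this coalescing step can be thought of as keying each record on its strongest available identifier and merging the records that share a key. A simplified sketch follows, with invented field names:

from collections import defaultdict

def record_key(rec):
    """Prefer a strong identifier; fall back to a normalized title and year."""
    if rec.get("pmid"):
        return ("pmid", rec["pmid"])
    if rec.get("doi"):
        return ("doi", rec["doi"].lower())
    return ("text", (rec.get("title", "").strip().lower(), rec.get("year")))

def coalesce(records):
    """Merge bibliographic records that describe the same paper."""
    merged = defaultdict(dict)
    for rec in records:
        target = merged[record_key(rec)]
        for field, value in rec.items():
            # Keep the most complete value seen so far for each field.
            if value and not target.get(field):
                target[field] = value
    return list(merged.values())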

The following table and figures tabulate and illustrate the number of papers in each category between 1980 and 2010 inclusive. The most striking thing about these data is that they show how, between these years, the relatively small number of articles in the Open Access subset of PMC (approx. 200,000 articles) referenced >20% of all PubMed papers published between 1950 and 2010 (approx. 15.3 million papers), and in doing so referenced all the most important highly cited papers in every field of biomedical endeavour. This inclusive coverage means that citation graphs created from the Open Citations dataset will capture all the important aspects of any field.

Table 1. Number of papers, by year of publication, in PubMed, in PubMed Central (PMC), in the Open Access subset (OASS), and cited by the OASS.

| Year | PubMed | PMC | OASS | Cited by OASS |
|---|---|---|---|---|
| 1950-1979 | 5,128,602 | 427,877 | 8,352 | 146,027 |
| 1980 | 278,069 | 23,218 | 631 | 15,708 |
| 1981 | 278,069 | 23,685 | 543 | 16,627 |
| 1982 | 292,219 | 25,215 | 740 | 18,389 |
| 1983 | 305,725 | 25,688 | 738 | 21,263 |
| 1984 | 314,737 | 26,316 | 543 | 23,249 |
| 1985 | 331,706 | 25,916 | 637 | 25,780 |
| 1986 | 345,501 | 26,721 | 590 | 28,761 |
| 1987 | 363,754 | 27,834 | 555 | 32,222 |
| 1988 | 381,976 | 28,802 | 442 | 36,320 |
| 1989 | 398,620 | 29,855 | 616 | 42,005 |
| 1990 | 398,620 | 30,143 | 704 | 48,422 |
| 1991 | 407,465 | 31,337 | 733 | 53,655 |
| 1992 | 412,457 | 32,325 | 719 | 61,091 |
| 1993 | 420,935 | 33,203 | 1,055 | 70,272 |
| 1994 | 431,160 | 33,456 | 1,279 | 80,206 |
| 1995 | 441,967 | 34,276 | 1,148 | 91,814 |
| 1996 | 452,218 | 34,755 | 1,155 | 101,853 |
| 1997 | 451,533 | 34,800 | 1,314 | 114,967 |
| 1998 | 469,466 | 36,179 | 1,341 | 131,510 |
| 1999 | 469,466 | 37,534 | 1,420 | 146,623 |
| 2000 | 528,243 | 39,047 | 1,608 | 170,330 |
| 2001 | 542,854 | 40,235 | 2,546 | 179,203 |
| 2002 | 560,006 | 43,265 | 3,199 | 195,879 |
| 2003 | 590,317 | 46,442 | 4,015 | 211,423 |
| 2004 | 634,432 | 51,416 | 6,005 | 229,423 |
| 2005 | 694,687 | 60,411 | 10,333 | 236,678 |
| 2006 | 740,007 | 72,295 | 14,264 | 238,387 |
| 2007 | 777,311 | 87,744 | 20,070 | 222,085 |
| 2008 | 824,612 | 120,004 | 31,416 | 190,071 |
| 2009 | 862,372 | 146,413 | 41,848 | 124,894 |
| 2010 | 918,598 | 120,145 | 40,245 | 27,877 |
| Total | 15,319,102 | 1,428,675 | 192,452 | 3,186,987 |
| % of PubMed | | 9.33% | | 20.80% |
| % of PMC | | | 13.47% | |

Figure 1

Figure 2

The OASS source data give the types of cited entity, aggregated after coalescing, shown in Figure 3.

Figure 3