
From little acorns . . . A retrospective on OpenCitations

The initial vision

Now that OpenCitations is hosting over one billion freely available scholarly bibliographic citations, this is perhaps an opportune moment to look back to the start of this initiative. A little over eleven years ago, on 24 April 2010, I spoke at the Open Knowledge Foundation Conference, OKCon2010, in London, on the topic

OpenCitations: Publishing Bibliographic Citations as Linked Open Data

I reported that, earlier that same week, I had applied to Jisc for a one-year grant to fund the OpenCitations Project (opencitations.net). Jisc (at that time ‘The JISC’, the Joint Information Systems Committee) was tasked by the UK government, among other things, to support research and development in information technology for the benefit of the academic community.

The purpose of that original OpenCitations R&D project was to develop a prototype in which we:

  • harvested citations from the open access biomedical literature in PubMed Central;
  • described and linked them using CiTO, the Citation Typing Ontology [1] (see the sketch after this list);
  • encoded and organized them in an RDF triplestore; and
  • published them as Linked Open Data in the OpenCitations Corpus (OCC).
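To make the CiTO and RDF steps concrete, here is a minimal sketch (not the project's actual code) of how a single typed citation might be expressed and added to an RDF graph, assuming the Python rdflib library and purely illustrative DOIs:

```python
from rdflib import Graph, Namespace, URIRef

# The SPAR CiTO namespace provides cito:cites plus more specific citation types
CITO = Namespace("http://purl.org/spar/cito/")

g = Graph()
g.bind("cito", CITO)

# Hypothetical citing and cited articles, identified by example DOIs
citing = URIRef("http://dx.doi.org/10.1000/example.1")
cited = URIRef("http://dx.doi.org/10.1000/example.2")

g.add((citing, CITO.cites, cited))    # the generic citation link
g.add((citing, CITO.extends, cited))  # a more specific characterization of the citation

# In the project, such triples were loaded into a triplestore and published as Linked Open Data
print(g.serialize(format="turtle"))
```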

I told those at the conference that in this demonstration project, with limited JISC funding, we could not hope to “boil the whole ocean”, but that nevertheless there would be substantial benefits from even partial coverage of citation data from the scholarly literature:

  • We could show the way and establish best practice.
  • Despite partial coverage, all key papers would most likely be cited several times.
  • The overall topological structure of the citation network would be revealed.
  • We would create a ‘benchmark’ corpus of high-quality RDF citation data that could be used to develop analytical and visualization tools.
  • We could show the value of open citation data in helping scholars to discover full text articles of all types, and thus encourage subscription-access publishers to release their reference metadata.

The important thing, I said, was to make a start!

The Jisc OpenCitations Project

That JISC grant application was funded, and the project, to last for a year with modest funding of £100K, started in my lab in the Department of Zoology at Oxford University on 1st June 2010, and was subsequently extended for a further six months.

Using data from the Open Access subset of PubMed Central, we created the first prototype release of the OpenCitations Corpus of linked bibliographic citation data, containing 6,529,815 independent bibliographic records of both citing and cited entities, comprising references to ~20% of all post-1980 articles recorded in PubMed, including those to all the most important highly cited papers in every field of biomedical endeavour.

This achievement was almost entirely the result of the excellent work by our chief data wrangler Alex Dutton, whose skill and natural feel for linked data did wonders for this project. Ben O’Steen, Graham Klyne and Alistair Miles made important contributions.

The project also led to many other developments, described here, most of which were undertaken or at least initiated during a short but wonderfully productive collaboration with Silvio Peroni, who spent six months with me in 2010 as a doctoral student intern from the University of Bologna, to which he subsequently returned to complete his thesis and develop his academic career.

These included:

  • the deconstruction and re-development of the original version of CiTO into a suite of orthogonal and complementary ontologies covering the whole domain of scholarly publishing – the SPAR (Semantic Publishing and Referencing) Ontologies [2, 3];
  • the mapping of various existing metadata schemas into RDF using SPAR, including the DataCite Metadata Schema and, subsequently, JATS (now the default NISO standard for XML markup of scholarly documents) [4]; and
  • the initiation of the Semantic Publishing Blog and this OpenCitations Blog.

Life after Jisc – the flowering of OpenCitations

After the Jisc funding ended and I, after a long career in biological teaching and research, formally retired from the Department of Zoology at Oxford University, members of the initial OpenCitations team moved on to other things. Like so many grant-funded academic projects whose initial financial support has dried up, OpenCitations could have foundered at that stage as an interesting prototype with too little content to be useful. However, the concept of providing an open alternative to proprietary citation indexes was too important to abandon. But how could it be transitioned into something enduring and useful, particularly when we had decided as a matter of principle that the citation data should be made freely available, thus precluding income generation by charging for 'premium' services or the formation of a commercial spin-off?

Finally, I realized that something radical needed to be done to move OpenCitations forward. I had maintained a lively collaboration with Silvio Peroni at the University of Bologna, resulting between 2011 and 2014 in the publication of 18 articles and conference papers concerning the SPAR ontologies, ontology development, documentation and visualization, and related topics, and in 2015 I invited him to start working with me directly on OpenCitations. It was the best decision I could have made. We decided to take the initial concept and re-implement it from the bottom up. OpenCitations gave Silvio a major computer science project to which he could apply his considerable talent, and this soon resulted in the development of a revised RDF data model for describing citation data, the OpenCitations Data Model (OCDM) [5], and a suite of new software tools to harvest, organise and publish citations as linked open data [6]. The credit for almost all the subsequent conceptual and technical developments within OpenCitations, which have incrementally led to our present situation, is due to Silvio Peroni, and the scholarly community is indebted to him for the intelligence, skill and diligent application he has given to OpenCitations over the past six years. I am truly honoured to have Silvio as co-Director of OpenCitations, and wish to take this opportunity to acknowledge his contributions and to thank him publicly.

Our work on OpenCitations at that stage, summarized in [7], would not have been possible without the enthusiastic support of Silvio’s senior colleague Fabio Vitali and of the Department of Computer Science and Engineering at the University of Bologna, which not only provided a stimulating environment for Silvio’s post-doctoral work, but also supplied computing services and infrastructure at no charge to OpenCitations. It was also greatly helped by Professor David De Roure of Oxford University, who gave me an academic home and a formal affiliation within the Oxford e-Research Centre after my retirement from the Department of Zoology, which enabled me to continue to hold research grants.

As has been documented in earlier posts in this blog, we benefitted greatly in 2017 from a grant from the Alfred P. Sloan Foundation, which enabled us to purchase a new and more powerful computing infrastructure for the sole use of OpenCitations and to extend and improve our software, and subsequently in 2019 from a project grant from the Wellcome Trust to develop the Open Biomedical Citations in Context Corpus, which permitted the extension of OCDM and SPAR for the characterization of in-text references and their textual contexts.

A significant breakthrough came in January 2018 with our decision to treat citations as first-class data entities, each with its own persistent identifier (PID), the Open Citation Identifier (OCI) [8]. This gave Silvio the freedom to envision a new kind of database, a citation index in which each citation had its own metadata, including citation timespan, citation categorization (e.g. self-citation), and of course the DOIs of the citing and cited publications. The creation of this new index was made possible only by the incredible effort of Ivan Heibi, who served as a Research Fellow in the project funded by the Alfred P. Sloan Foundation at that time, and who was entirely responsible for developing the first version of the code necessary for creating such a database. Having harvested all the open references from Crossref metadata dumps, Silvio and Ivan created COCI, the OpenCitations Index of Crossref DOI-to-DOI Citations, which immediately became our principal source of open citations, the original OpenCitations Corpus being retained as a 'sandbox' in which to experiment with new data representations, for example those required for the Open Biomedical Citations in Context Corpus. Access to COCI was facilitated by Silvio's development of a REST API, using his software tool RAMOSE (Restful API Manager Over SPARQL Endpoints), which enables the easily configurable deployment of a REST API over any SPARQL endpoint to an RDF triplestore [9]. We were able to organize all our data, both 'traditional' and new, and to encode it in RDF, thanks to the comprehensive OpenCitations Data Model [5], itself based on our SPAR Ontologies [3], which we evolved as necessary to accommodate new data representation requirements.
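By way of illustration, the short sketch below queries the COCI REST API for the incoming citations of a DOI and prints each citation's OCI and timespan; the endpoint path and field names reflect the API documentation at the time of writing and should be checked against the current documentation before use:

```python
import requests

API = "https://opencitations.net/index/coci/api/v1"  # COCI REST API base URL
doi = "10.1162/qss_a_00023"  # example DOI (the OpenCitations QSS article)

response = requests.get(f"{API}/citations/{doi}", timeout=30)
response.raise_for_status()

for citation in response.json():
    # Each citation is a first-class entity with its own OCI and metadata,
    # such as creation date and citation timespan
    print(citation.get("oci"), citation.get("citing"), "->",
          citation.get("cited"), citation.get("timespan"))
```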

During this period we published a number of definitions, conference papers and journal articles documenting these advances, details of which can be found here. Of these, the most recent canonical publication describing OpenCitations as an infrastructure for open scholarship, and its datasets, tools, services and activities, is Peroni and Shotton (2020) [10]. We also established the Research Centre for Open Scholarly Metadata at the University of Bologna, primarily to handle administrative, financial and academic aspects of OpenCitations activities.

OpenCitations’ future

The problem remained: how to sustain the OpenCitations infrastructure financially. We were greatly helped by Bilder, Lin and Neylon's formulation of the Principles of Open Scholarly Infrastructures (POSI) [11], in which they clearly point out that reliance solely on grant funding for specific projects is not the answer. OpenCitations' compliance with POSI is described here. We were thus immensely grateful that SPARC Europe and other institutions had the wisdom to establish SCOSS (The Global Sustainability Coalition for Open Science Services) to facilitate the crowd-sourced financial support of useful open infrastructures by the scholarly community, including academic libraries, government agencies and other stakeholders. OpenCitations applied for SCOSS support in 2019, and was selected for support in the second SCOSS funding round.

The donations we are now starting to receive from such stakeholders, and the new staff that this funding has recently allowed us to hire, signal the start of our transition from a financially vulnerable academic project to a sustainable open scholarly infrastructure of real value to the community.

The work of opening more of the global citation graph now requires two things:

  • that each publisher takes responsibility for ensuring that the references from all of its journal articles and books are submitted, together with all other bibliographic metadata, to open scholarly bibliographic metadata aggregators such as Crossref and DataCite, from which they can be indexed into open citation indexes of sufficient quality, depth of detail and breadth of coverage that these offer genuine alternatives to the expensive proprietary citation indexing services upon which the academic community presently relies; and
  • that the entire scholarly stakeholder community re-directs a fraction of the enormous sums currently spent on its subscriptions to proprietary bibliographic services in order to support Open Science infrastructures such as OpenCitations that make citations and other forms of scholarly metadata and objects freely available.

References

[1] David Shotton (2010). CiTO, the Citation Typing Ontology. J. Biomedical Semantics 1 (Suppl. 1): S6. http://dx.doi.org/10.1186/2041-1480-1-S1-S6

[2] Silvio Peroni, David Shotton (2012). FaBiO and CiTO: ontologies for describing bibliographic resources and citations. Web Semantics, 17: 33-43. https://doi.org/10.1016/j.websem.2012.08.001, OA at http://speroni.web.cs.unibo.it/publications/peroni-2012-fabio-cito-ontologies.pdf

[3] Silvio Peroni, David Shotton (2018). The SPAR Ontologies. In Proceedings of the 17th International Semantic Web Conference (ISWC 2018): 119-136. https://doi.org/10.1007/978-3-030-00668-6_8

[4] Silvio Peroni, Deborah A. Lapeyre, David Shotton (2012). From Markup to Linked Data: Mapping NISO JATS v1.0 to RDF using the SPAR (Semantic Publishing and Referencing) Ontologies. Proc. 2012 JATS Conference, National Library of Medicine, Bethesda, Maryland, USA (October 2012): 16-17. http://www.ncbi.nlm.nih.gov/books/NBK100491/

[5] Marilena Daquino, Silvio Peroni, David Shotton (2020). The OpenCitations Data Model. Figshare. https://doi.org/10.6084/m9.figshare.3443876.v7

[6] Silvio Peroni, David Shotton, Fabio Vitali (2017). One Year of the OpenCitations Corpus: Releasing RDF-based scholarly citation data into the Public Domain. In The Semantic Web – ISWC 2017 (Lecture Notes in Computer Science Vol. 10588, pp. 184–192). Springer, Cham. https://doi.org/10.1007/978-3-319-68204-4_19

[7] Silvio Peroni, Alexander Dutton, Tanya Gray, David Shotton (2015). Setting our bibliographic references free: towards open citation data. Journal of Documentation, 71 (2): 253-277. http://dx.doi.org/10.1108/JD-12-2013-0166, OA at http://speroni.web.cs.unibo.it/publications/peroni-2015-setting-bibliographic-references.pdf

[8] Silvio Peroni, David Shotton (2019). Open Citation Identifier: Definition. Figshare. https://doi.org/10.6084/m9.figshare.7127816

[9] Marilena Daquino, Ivan Heibi, Silvio Peroni, David Shotton (2021). Creating Restful APIs over SPARQL endpoints with RAMOSE. Semantic Web. OA at http://arxiv.org/abs/2007.16079

[10] Silvio Peroni, David Shotton (2020). OpenCitations, an infrastructure organization for open scholarship. Quantitative Science Studies, 1(1): 428-444. https://doi.org/10.1162/qss_a_00023

[11] Geoffrey Bilder, Jenny Lin, Cameron Neylon (2015). Principles for Open Scholarly Infrastructure. http://dx.doi.org/10.6084/m9.figshare.1314859

Open Citations Corpus Import Process

As part of the Open Citations project, we have been asked to review and improve the process of importing data into the Open Citations Corpus, taking the scripts from the initial project as our starting point.

The current import procedure evolved from several disconnected processes and requires running multiple command line scripts and transforming the data into different intermediate formats. As a consequence, it is not very efficient and we will be looking to improve on the speed and reliability of the import procedure. Moreover, there are two distinct procedures depending on the source of the data (arXiv or PubMed Central); we are hoping to unify the common parts of these procedures into a single process which can be simplified and normalised to improve code re-use and comprehensibility.

The Workflow

As PubMed Central provides an OAI-PMH feed, this could be used to retrieve article metadata, and for some articles, full text. Using this feed, rather than an FTP download (as used currently) would allow the metadata import for both arXiv and PubMed Central to follow a near-identical process, as we are already using the OAI-PMH feed for arXiv.
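As a rough sketch of how such a harvest might look (this is illustrative only, not the project's pipeline code; the Sickle OAI-PMH client is an assumed choice, and the endpoint URLs should be verified against the providers' documentation):

```python
from sickle import Sickle  # third-party OAI-PMH harvesting client, assumed here

OAI_ENDPOINTS = {
    "arxiv": "http://export.arxiv.org/oai2",
    "pmc": "https://www.ncbi.nlm.nih.gov/pmc/oai/oai.cgi",
}

def harvest(source, from_date):
    """Yield (identifier, Dublin Core metadata) for records added since from_date."""
    client = Sickle(OAI_ENDPOINTS[source])
    # 'from' is a Python keyword, so the OAI-PMH arguments are passed via a dict
    records = client.ListRecords(**{"metadataPrefix": "oai_dc",
                                    "from": from_date,
                                    "ignore_deleted": True})
    for record in records:
        yield record.header.identifier, record.metadata

# In the proposed pipeline, each harvested record would be inserted straight
# into the Open Citations Corpus datastore
for identifier, metadata in harvest("pmc", "2012-06-01"):
    print(identifier, metadata.get("title"))
    break
```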

Also, rather than have intermediate databases and information stores, it would be cleaner to import from the information source straight into a datastore. The datastore could then be queried, allowing matches and linking between articles to be performed in situ. The process would therefore become:

  1. Pull new metadata from arXiv (OAI-PMH) and PubMed Central (OAI-PMH) and insert new records into the Open Citations Corpus datastore
  2. Pull new full-text from arXiv and PubMed Central, extract citations, and match them with article data in the Open Citations server, creating links between these references and the metadata records for the cited articles. Store unmatched citations as nested records in the metadata for each article.
  3. On a scheduled basis (e.g. nightly), review each existing article’s unmatched citations and attempt to match these with existing bibliographic records of other articles.

In outline, this workflow is illustrated by the diagram accompanying the original post.

The Datastore

Neo4J is currently used by the Related Work system as the final Open Citations Corpus datastore for the arXiv data. We propose instead to use BibServer as the final datastore, for its flexibility, scalability and suitability for the Open Citations use cases.

The Data Structure

The data stored within BibServer as BibJSON will be a collection of linked bibliographic records describing articles. Associated with each record and stored as nested data will be a list of matched citations (i.e. those for which the Open Citations Corpus has a bibliographic record), a list of unmatched citations, and a list of authors.
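A hypothetical record of this shape, written here as a Python dictionary rather than authoritative BibJSON, might look like the following (all field names are illustrative assumptions):

```python
# Illustrative only: one linked bibliographic record with nested citation data
record = {
    "title": "An example citing article",
    "identifier": [{"type": "doi", "id": "10.1000/example.1"}],
    "author": [{"name": "A. Author"}, {"name": "B. Author"}],
    "citation": [
        # matched citation: the corpus holds a record for the cited article
        {"identifier": [{"type": "doi", "id": "10.1000/example.2"}],
         "record": "occ:record/12345"},
    ],
    "unmatched_citation": [
        # unmatched citation: kept nested until a matching record appears
        {"title": "A cited article we have no record for yet", "year": "2011"},
    ],
}
```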

Authors will not be stored as separate entities. De-coupling and de-duplicating authors and articles could form the basis of a future project, perhaps using proprietary identifiers (such as ORCID, PubMed Author ID or arXiv Author ID) or email addresses, but this will not be considered further in this work package.

Overall Aim

The overall aim of this work is to provide a consistent, simple and re-usable import pipeline for data entering the Open Citations Corpus. In the fullness of time we'd expect it to be possible to add new data sources with minimal additional complexity. By importing data into the datastore at as early a stage as possible in the pipeline, we can use common tools for extracting, matching and deduplicating citations; the work for each data source is then just to convert the source data format into BibJSON and store it in BibServer.
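One way to realise that division of labour, sketched below with hypothetical function and class names, is a thin per-source adapter feeding a shared storage and matching step:

```python
def arxiv_to_bibjson(raw_record):
    """Hypothetical adapter: convert one arXiv OAI-PMH record to a BibJSON dict."""
    raise NotImplementedError

def pmc_to_bibjson(raw_record):
    """Hypothetical adapter: convert one PubMed Central record to a BibJSON dict."""
    raise NotImplementedError

ADAPTERS = {"arxiv": arxiv_to_bibjson, "pmc": pmc_to_bibjson}

def ingest(source, raw_records, store):
    """Common pipeline step: adapt each record, then hand it to shared storage code,
    where extraction, matching and deduplication of citations take place."""
    convert = ADAPTERS[source]
    for raw in raw_records:
        store.save(convert(raw))  # 'store' stands in for the BibServer-backed datastore
```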

Postscript

David Shotton writes: This productive collaboration between Cottage Labs and the Open Citations Corpus came to an end when Jisc funding ran out.  The corpus has more recently been given a new lease of life, as described here, with a new instantiation named OpenCitations hosted at the Department of Computer Science and Engineering of the University of Bologna, with Silvio Peroni as Co-Director.

Open Citations – Indexing PubMed Central OA data

As part of our work on the Open Citations extensions project, I have recently been doing one of my favourite things – namely indexing large quantities of data then exploring it.

On this project we are interested in the PubMed Central Open Access subset, and more specifically, we are interested in what we can do with the citation data contained within the records that are in that subset – because, as they are open access, that citation data is public and freely available.

We are building a pipeline that will enable us to easily import data from the PMC OA and from other sources such as arXiv, so that we can do great things with it like explore it in a facetview, manage and edit it in a bibserver, visualise it, and stick it in the rather cool related-work prototype software. We are building on the earlier work of both the original Open Citations project, and of the Open Bibliography projects.

Work done so far

We have spent a few weeks getting to understand the original project software and clarifying some of the goals the project should achieve; we have put together a design for a processing pipeline to get the data from source right through to where we need it, in the shape that we need it. In the case of facetview / bibserver work, this means getting it into a wonderful elasticsearch index.

While Martyn continues work on the bits and pieces for managing the pipeline as a whole and pulling data from arXiv, I have built an automated and threadable toolchain for unpacking data out of the compressed file format it arrives in from the US National Institutes of Health, parsing the XML file format and converting it into BibJSON, and then bulk loading it into an elasticsearch index. This has gone quite well.
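In outline, and much simplified, the toolchain does something like the following; this is a sketch rather than the actual code (which lives in the repository linked below), and the index name and extracted fields are assumptions:

```python
import tarfile
import xml.etree.ElementTree as ET
from elasticsearch import Elasticsearch, helpers  # elasticsearch-py client

es = Elasticsearch("http://localhost:9200")

def articles_from_archive(path):
    """Yield minimal BibJSON-ish documents from a PMC OA .tar.gz archive (simplified)."""
    with tarfile.open(path, "r:gz") as archive:
        for member in archive:
            if not member.name.endswith(".nxml"):
                continue
            tree = ET.parse(archive.extractfile(member))
            title = tree.findtext(".//article-title") or ""
            yield {"_index": "occ", "_source": {"title": title, "source_file": member.name}}

# Bulk-load the parsed records into an Elasticsearch index ('occ' is an assumed name)
helpers.bulk(es, articles_from_archive("pmc_oa_subset.tar.gz"))
```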

To fully browse what we have so far, check out http://occ.cottagelabs.com.

For the code: https://github.com/opencitations/OpenCitationsCorpus/tree/master/pipeline.

The indexing process

Whilst the toolchain is capable of running threaded, the server we are using has only 2 cores and I was not sure to what extent they would be utilised, so I ran the process single-threaded. It took five hours and ten minutes to build an index of the PMC OA subset, and we now have over 500,000 records. We can full-text search them and facet browse them.
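For example, a full-text search combined with a simple facet (a terms aggregation) can be run directly against the index over HTTP; the index and field names below are assumptions rather than the actual mapping:

```python
import requests

query = {
    "query": {"match": {"title": "malaria"}},            # full-text search
    "aggs": {"by_year": {"terms": {"field": "year"}}},   # facet-style counts
    "size": 5,
}

# 'occ' as the index name and the field names are illustrative assumptions
r = requests.post("http://localhost:9200/occ/_search", json=query, timeout=30)
r.raise_for_status()
hits = r.json()["hits"]
print(hits["total"], [hit["_source"].get("title") for hit in hits["hits"]])
```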

Some things of particular interest that I learnt: I have an article in the PMC OA! Also, PMIDs are not always 8 digits long – they appear in fact to be assigned incrementally from 1.

What next

At the moment no attempt is made to create record objects for the citations we find within these records; however, plugging that into the toolchain is now relatively straightforward.

The full pipeline is of course still in progress, and so this work will need a wee bit of wiring into it.

Improve parsing. There are probably improvements to the parsing that we can make too, and so one of the next tasks will be to look at a few choice records and decide how better to parse them. The best way to get a look at the records for now is to use a browser like Firefox or Chrome and install the JSONview plugin, then go to occ.cottagelabs.com and have a bit of a search, then click the small blue arrows at the start of a record you are interested in to see it in full JSON straight from the index. Some further analysis on a few of these records would be a great next step, and should allow for improvements to both the data we can parse and to our representation of it.

Finish visualisations. Now that we have a good test dataset to work with, the various bits and pieces of visualisation work will be pulled together and put up on display somewhere soon. These, in addition to the search functionality already available, will enable us to answer the questions set as representative of project goals earlier in January (thanks David for those).

Postscript

David Shotton writes: This productive collaboration between Cottage Labs and the Open Citations Corpus came to an end when Jisc funding ran out.  The corpus has more recently been given a new lease of life, as described here, with a new instantiation named OpenCitations hosted at the Department of Computer Science and Engineering of the University of Bologna, with Silvio Peroni as Co-Director.

Why openness benefits research

David writes: Dr Heinrich Hartmann is a new colleague of mine who, having been working in the Mathematical Institute of Oxford University, has just returned to Germany to start a new job in a leading semantic web research group, that of Steffen Staab at the Institute for Web Science and Technologies, University of Koblenz-Landau. What follows are our thoughts about research openness, which relate to our decision, described in the next blog post, to merge our bibliographic citation projects.

The following text is jointly authored by David Shotton (david.shotton@zoo.ox.ac.uk) and Heinrich Hartmann (hartmann@uni-koblenz.de)

Transparency is essential for trust and credibility in the research community, and true openness brings great opportunities for academia. The internet facilitates the free flow of information and knowledge, and permits new forms of communication both for researchers and for the general public. Already, today’s children can listen freely on the internet to university courses taught by world-leading scientists, and everybody has the best encyclopaedia ever written (Wikipedia) at their fingertips.  These are real game changers. Opening up the research literature is the next logical step.

Open publishing

We believe that the current academic publishing model – whereby researchers give their content to commercial publishers and then buy it back from them at enormous cost by means of journal subscription fees – has become absurd, since it no longer helps researchers to distribute their findings, but rather prevents the work from being widely read by hiding it behind subscription paywalls. Would it not be much better to let this information flow freely, accessible to everybody who wants to read it?

Of course, such a vision of openness for academic publishing raises issues of finance and quality control – who will pay for open access publishing, and how can we ensure that scientific rigour accompanies open publication? While the internet enables dissemination of information at a fraction of the cost of traditional print publication, publishing clearly involves more than electronic dissemination. It is for this reason that we, with others, are presently planning a high-level conference on modern scientific communication, entitled Rigor and Openness in 21st Century Science, to be held in Oxford next spring.

However, new publication funding models are being developed, particularly in the United Kingdom, where Research Councils UK and the Wellcome Trust are insisting that papers reporting the results of research they have funded should be published under an open Creative Commons CC-By attribution licence whenever an article processing charge (APC) is levied, so that the works are freely available for text mining and re-use [1]. What is significant is that they are backing their words with funding to enable it. Cameron Neylon has recently written a commentary in Nature about the importance of this [2].

Furthermore, peer review is being carefully examined by several forward-looking publishers to determine how well open alternatives to the present system of confidential review actually work.

The role of social media in science

Much academic research is done in relative isolation, because topics have become so specialized that there may be only a few experts in the whole world who really understand each particular research problem.  These experts may be located on different continents, and may not know about one another – a situation that is particularly true for Ph.D. students and other young researchers, who may not yet be familiar with the literature in their field, and who may have formed few personal relationships with colleagues in other institutions through attendance at research conferences.  New forms of academic social media can play a role here, to catalyse interactions between geographically separated academics, and many experiments in this area are being conducted.

Academic social media can also play an important role in filtering the wealth of new articles published every day, and in alerting people to the small fraction of these that are most relevant to them.  Typically, junior researchers rely on recommendations from friends and colleagues about which articles are worth reading, but if academic social media can be used to broaden this recommendation network, they will provide a significant service.

Fears and benefits of openness

Of course researchers, particularly early in their careers, are cautious about sharing their discoveries too early or too widely, for fear they may get 'scooped', since they naturally and quite properly wish to obtain credit for their own work by being the first to publish it. However, what is often missed by people of this mind-set is that working openly with other people has benefits too. It can be a lot more fun, can lead to more sustainable motivation, can result in incredibly rapid collaborative progress, and hence can often lead to better results. An essential pre-requisite for this is the willingness to share one's ideas and to make contact with like-minded people. An example of a researcher who practises openness in his day-to-day research is Giorgio Gilestro, Lecturer in Systems Neurobiology in the Department of Life Sciences at Imperial College London, who publishes his research group's Open Lab Book online.

Our personal experience, not least in the joint Open Citations and Related Work developments described in the next blog post, is that you gain more than you lose by being open!

References

[1]       Wellcome Trust announcement: Open access: CC-BY licence required for all articles which incur an open access publication fee – FAQ. Available from http://www.wellcome.ac.uk/stellent/groups/corporatesite/@policy_communications/documents/web_document/WTVM055715.pdf.

[2]       Cameron Neylon (2012). Science publishing: Open access must enable open use.  Nature 492: 348–349.  doi:10.1038/492348a.

Open letter to publishers

[The text of this post was updated on 27-09-2013 and 04-04-2017 to reflect a new CrossRef metadata best practice document and a change in their URI.]

Today I wrote an open letter to all scholarly journal publishers, available online here, entitled:

Open your article reference lists for inclusion in the Open Citations Corpus.

In this letter, I request that publishers open the bibliographic citation data in their journal article reference lists.  There is a growing movement to make such bibliographic citation data open – for example, Nature Publishing Group’s open Linked Data Platform now includes citation metadata for all published article references.

Provided a publisher is already depositing article references with CrossRef as part of the CrossRef CitedBy Linking service, all the publisher needs to do is to inform CrossRef that it is willing for CrossRef to distribute these references freely, for example in response to queries against the CrossRef XML API. We will then harvest them from CrossRef and incorporate them as open linked data in the Open Citations Corpus.

Nature Publishing Group, Taylor & Francis, the American Association for the Advancement of Science (who publish Science) and Oxford University Press, as well as a number of open-access publishers, have already given their consent to CrossRef to do this for some or all of their journals.

If not already a subscriber to the CrossRef CitedBy Linking service, a publisher can register for this useful service free of charge.  Having done so, there is nothing further the publisher needs to do to ‘open’ its reference data, other than to give its consent to CrossRef.  This can be done automatically, in the submitted article metadata, or (for back numbers) by informing CrossRef directly.

Even Open Access publishers, publishing articles under a CC-By open license, need to give this specific permission to CrossRef for this to occur, because CrossRef policy is that all publishers, including open access publishers, have to opt in to any distribution of references that CrossRef makes.

For new submissions, publishers should follow instructions detailed in the CrossRef blog at https://www.crossref.org/blog/distributing-references-via-crossref/, which contains the following key instruction:

“In order for publishers to distribute references along with standard bibliographic metadata, publishers need to set the <reference_distribution_opt> metadata element to “any” for each DOI deposit where they want to make references openly available.”

In this way, publishers can choose to open the reference lists for all their journals, or to do so on a journal-by-journal or on an article-by-article basis (useful for ‘hybrid’ subscription-access journals in which only some articles are open access).
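Once a publisher has opted in, the opened references become visible through CrossRef's public interfaces. As a present-day illustration (not part of the original letter), this sketch uses the current Crossref REST API, where open references appear in the 'reference' field of a work's metadata:

```python
import requests

doi = "10.1162/qss_a_00023"  # any DOI whose publisher has chosen to distribute references

r = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
r.raise_for_status()
work = r.json()["message"]

# The 'reference' list is present only when the depositing publisher has opened its references
for ref in work.get("reference", []):
    print(ref.get("key"), ref.get("DOI"), ref.get("unstructured"))
```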

To open reference lists for back numbers, a publisher needs to e-mail CrossRef to express its intent, using the template shown at the foot of this post, as detailed in my Open Letter to Publishers.

I have copied this open letter to the CEOs of the Open Access Scholarly Publishers Association (OASPA), the Association of Learned and Professional Society Publishers (ALPSP), and the International Association of Scientific, Technical & Medical Publishers (STM), asking them to distribute it to their members, perhaps in association with their next members' newsletter, as CrossRef itself is planning to do later this month.

Please spread the word about this, particularly to publishers who may not be members of these professional associations.  Thanks.

= = =

Template for an e-mail to CrossRef expressing willingness to open reference lists in previously published and future journal articles.

To support@crossref.org

I am writing on behalf of *** [name of publisher] to confirm that *** [name of publisher] is willing for the bibliographic reference lists within the articles in [delete as necessary:] all our journals [or] the attached list of journals to be made freely available by CrossRef, for inclusion in the Open Citations Corpus. These journals are associated with the following DOI prefix(es): 10.**** [Please complete DOI prefix(es) – see footnote].
Yours sincerely [name, position, date]
= = =
Footnote: Publishers’ DOI prefixes are listed at http://www.crossref.org/06members/50go-live.html by name of publisher.

 

Taylor & Francis to open article reference lists

I am very pleased to announce that last year Ian Bannerman, Managing Director for Journals at Taylor & Francis, confirmed this publisher’s willingness to pilot the opening of the reference lists from articles in 29 of their subscription access journals, as well as from all of their current list of 15 Open Access journals, for inclusion in the Open Citations Corpus.  The reference lists for these journals are already being supplied to CrossRef as part of the CrossRef CitedBy Linking service, and will be made available publicly via the CrossRef XML query API.

Taylor & Francis is a major international publisher of over 1,000 academic journals and more than 1,800 books per year, incorporating well-known publishing names including Bios Scientific Publishing, CRC Press, Garland Science, Marcel Dekker and Routledge. It is the largest publisher of subscription access journals yet to agree to make reference lists available from its journal articles, and I welcome them warmly into the Open Citations fold.

Incorporation of new reference data from Taylor & Francis journals into the Open Citations Corpus will commence in the near future.