We’re happy to announce POCI, the OpenCitations Index of PubMed open PMID-to-PMID citations: an RDF dataset containing details of all the citations from publications bearing PubMed Identifiers (PMIDs) to other PMID-identified publications, harvested from the National Institutes of Health Open Citations Collection (NIH-OCC). The citations available in POCI are treated as first-class data entities, with accompanying properties including the citation timespan, modelled according to the OpenCitations Data Model.
Each citation (i.e. an individual of the class cito:Citation) is identified by a URL structured as follows:
https://w3id.org/oc/index/poci/ci/[OCI].
Open Citation Identifiers
Each Open Citation Identifier (OCI) has a simple structure: the lower-case letters “oci” followed by a colon, followed by two numbers separated by a dash (e.g. https://w3id.org/oc/index/poci/ci/01600102060800080706-016002060909030401), in which the first number identifies the citing work and the second number identifies the cited work.
For citations in which the citing and cited works are identified by PMIDs, which includes all the POCI citations, the OCI is created in the following manner, as explained more fully here. Each converted numeral part of the OCI is prefixed by 0160, which indicates that NIH is the supplier of the original metadata of the citation (as indicated at http://opencitations.net/oci).
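To make the construction concrete, here is a minimal Python sketch of how a POCI OCI can be assembled from two PMIDs. It assumes – consistently with the example URL above – that each PMID digit d is converted to the two-numeral code 0d via the OCI lookup table before the 0160 supplier prefix is prepended; the function name and the decoded PMIDs are our own illustration, not part of the official tooling.

```python
def oci_for_pmid_citation(citing_pmid: str, cited_pmid: str) -> str:
    """Sketch of POCI OCI construction (assumes each PMID digit d
    becomes the two-numeral code '0d', per the OCI lookup table)."""
    def numeral(pmid: str) -> str:
        digits = pmid.removeprefix("pmid:")
        if not digits.isdigit():
            raise ValueError(f"not a valid PMID: {pmid!r}")
        # 0160 = NIH supplier prefix, then the converted PMID digits
        return "0160" + "".join("0" + d for d in digits)
    return f"oci:{numeral(citing_pmid)}-{numeral(cited_pmid)}"

# Decoding the example above under this assumption:
# oci_for_pmid_citation("pmid:12680876", "pmid:2699341")
# -> 'oci:01600102060800080706-016002060909030401'
```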
The POCI citation data are available as dumps on Figshare in CSV, N-Triples and Scholix formats.
What is an Open Citation Index?
A citation index is a bibliographic index recording citations between publications, allowing the user to establish which later documents cite earlier documents. The current indexes available in OpenCitations include COCI (the OpenCitations Index of Crossref open DOI-to-DOI citations), DOCI (covering DataCite) and now POCI (covering PubMed).
Cite this article as: Chiara Di Giambattista, "Discover POCI, the index of open citations from PubMed," in OpenCitations blog, 27/12/2022, https://opencitations.hypotheses.org/3246.
We’re excited to introduce DOCI, the OpenCitations Index of DataCite open DOI-to-DOI citations, a new tool containing citations derived from publications bearing DataCite DOIs to other DOI-identified publications, harvested from DataCite. The citations available in DOCI are treated as first-class data entities, with accompanying properties including the citation timespan, modelled according to the OpenCitations Data Model.
For citations in which the citing and cited works are identified by DOIs, which includes all the DOCI citations, the OCI is created in the following manner, as explained more fully here. Each case-insensitive DOI is first normalized to lower-case letters. Then, after omitting the initial doi:10. prefix, the alphanumeric string of the DOI is converted reversibly to a pure numerical string using the simple two-numeral lookup table for numerals, lower-case letters and other characters presented at https://github.com/opencitations/oci/blob/master/lookup.csv. Finally, each converted numeral is prefixed by 080, which indicates that DataCite is the supplier of the original metadata of the citation (as indicated at http://opencitations.net/oci).
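As an illustration, the following Python sketch performs this conversion. It fetches the lookup table from the URL given above (assuming a two-column CSV with a header row); the function names are ours, not part of any official OpenCitations tool.

```python
import csv
import urllib.request

LOOKUP_CSV = "https://raw.githubusercontent.com/opencitations/oci/master/lookup.csv"

def load_lookup() -> dict:
    """Fetch the character-to-numeral lookup table (assumed here to be
    a two-column CSV, character -> two-numeral code, with a header row)."""
    text = urllib.request.urlopen(LOOKUP_CSV).read().decode("utf-8")
    rows = csv.reader(text.splitlines())
    next(rows)  # skip the header
    return {char: code for char, code in rows}

def doci_oci_numeral(doi: str, lookup: dict) -> str:
    """Lower-case the DOI, drop the initial 'doi:10.' prefix, convert
    each remaining character via the lookup table, and prepend
    DataCite's supplier prefix 080."""
    body = doi.lower().removeprefix("doi:").removeprefix("10.")
    return "080" + "".join(lookup[c] for c in body)
```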
A citation index is a bibliographic index recording citations between publications, allowing the user to establish which later documents cite earlier documents. The current indexes available in OpenCitations include COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations, and now DOCI.
Want to keep yourself updated about the ongoing activities of OpenCitations? We have now publicly released the OpenCitations Roadmap, available on Trello.com:
The OpenCitations Roadmap consists of a board filled with colour-labelled cards presenting the goals reached so far, the current projects and activities, and the future plans. By clicking on a card, it is possible to see a description of each activity, its progress status, and who in the OpenCitations team is working on it.
The OpenCitations Roadmap covers all kinds of activities, divided according to scope and identified by the coloured labels, in particular:
light blue for technical development, such as the software for creating the new OpenCitations Meta database and DOCI, the OpenCitations Index of DataCite open DOI-to-DOI citations, and the re-engineering of the infrastructure and the website;
green for the data model implementation;
yellow for data development, such as the bi-monthly COCI releases;
purple for events and outreach activities.
The cards also highlight the activities related to the two EC-funded projects OpenCitations is involved in, OpenAIRE Nexus (blue label) and RISIS2 (orange label). We thank the OpenAIRE team for the help and suggestions during the Roadmap review process.
The OpenCitations Roadmap is an open work in progress that will reflect the developments and growth of OpenCitations. At OpenCitations, we don’t want this Roadmap to be just an online ‘showcase’, but a room in which to share ideas and opinions. We invite you – the members of our community, our stakeholders, the other Open Science actors, researchers, and librarians, and anyone who is interested in OpenCitations activities – to add a comment or a question to the ‘Leave feedback’ card. This will help us better understand our strengths and weaknesses, and stay in touch with the needs and thoughts of the community.
In this way, supplementing the conventional communication channels of email and the social platforms (our blog, Twitter, LinkedIn), the OpenCitations Roadmap will become a new virtual place for dialogue, where you can directly contribute to improving OpenCitations.
Cite this article as: Chiara Di Giambattista, "The OpenCitations Roadmap is now publicly available on Trello," in OpenCitations blog, 29/03/2022, https://opencitations.hypotheses.org/1519.
Guest post by Arcangelo Massari, University of Bologna
In this post, Arcangelo Massari, who recently graduated in Digital Humanities and Digital Knowledge under Professor Silvio Peroni at the University of Bologna, shares the results of his master thesis.
A particular problem in information retrieval is that of obtaining data from an evolving dataset, independent of the time at which that item of data was added, changed or removed. To permit such time-independent queries to be performed over evolving RDF datasets, I have developed two new pieces of open source software, time-agnostic-library [1] and time-agnostic-browser [2], that are now available from the OpenCitations GitHub repository.
The time-agnostic-library is a Python library to perform live time-traversal queries on RDF datasets. Time-traversal means being agnostic about time: a SPARQL query is run not on the current state of the collection but over its entire history, or over a specified timespan of that history [3]. The tool allows materializations – obtaining all the versions of an entity over time, or its status at a given time. Furthermore, SPARQL queries can be performed to get the delta between two or more versions of one or more resources. The time-agnostic-library thereby realizes all the retrieval functionalities described in the taxonomy by Fernández et al. [3].
To complement this query software, the time-agnostic-browser is a web application built on top of the time-agnostic-library to achieve the same results via a graphical user interface.
The primary purpose of these developments is to offer a system for browsing the provenance [4] of RDF statements across time: who produced them, when, where the information was taken from, and what changes were made compared to the previous state of the resource. Knowledge of such information is essential because data changes over time, either because of the natural evolution of concepts or due to the correction of mistakes. Indeed, the latest version of knowledge may not be the most accurate. Such phenomena are particularly tangible in the Web of Data, as highlighted in a study by the Dynamic Linked Data Observatory, which noted the modification of about 38% of the nearly 90,000 RDF documents monitored for 29 weeks, and the permanent disappearance of 5% of them [5] (Figure 1).
Figure 1. Donut chart showing the results of the study conducted by the Dynamic Linked Data Observatory on the evolution of RDF documents [5].
Additionally, the truthfulness of data cannot be assessed without provenance records and a system to query them. In fact, the truth value of an assertion on the Web is never absolute, as demonstrated by Wikipedia, which in its official policy on the subject states: “The threshold for inclusion in Wikipedia is verifiability, not truth.” [6]. The Semantic Web does not alter that condition, and trustworthiness has to be evaluated by each application by probing the context of the statements [7]. It is a challenging task and thus, in the Semantic Web Stack, trust is the highest and most complex level to satisfy, subsuming all the previous ones (Figure 2).
Figure 2. The Semantic Web layers [7]. Trust is the uppermost level of the stack, subsuming all the others.
Notwithstanding these premises, at present the most extensive RDF datasets – DBpedia [8], Wikidata [9], Yago [10], and the Dynamic Linked Data Observatory [11] – do not use RDF to track changes and record the provenance of such changes. Instead, they all adopt backup-based archiving policies. Some of them, such as Yago 4, record provenance but not changes. As far as citation databases are concerned, OpenCitations is the only infrastructure to implement change-tracking mechanisms and to record full RDF provenance records for each data entity. Among the leading players in this field, neither Web of Science nor Scopus has adopted similar solutions.
In accordance with the OpenCitations Data Model (OCDM) [12], a provenance snapshot is generated by OpenCitations every time a bibliographic entity is created or modified. Each snapshot (prov:Entity) records the responsible agent (prov:wasAttributedTo), the generation time (prov:generatedAtTime), the invalidation time (prov:invalidatedAtTime), the primary source (prov:hadPrimarySource), and a link to the previous snapshot (prov:wasDerivedFrom), using terms from the Provenance Ontology. In addition, OCDM introduced a system to simplify restoring an entity’s status at a given time, by saving the delta between two versions as a SPARQL update query (prov:hasUpdateQuery) [13] (Figure 3). This approach makes it straightforward to restore an entity to a specific timepoint (snapshot) by applying the inverse operations, i.e., deletions instead of additions, and so on.
Figure 3. Provenance in the OpenCitations Data Model.
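As an illustration of this model, here is a minimal rdflib sketch of a second snapshot of a (hypothetical) bibliographic resource, recording the properties listed above. All URIs are invented for the example; note also that in the published OCDM ontologies the update-query property lives in OpenCitations’ own oco: namespace rather than in prov:.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import XSD

PROV = Namespace("http://www.w3.org/ns/prov#")
OCO = Namespace("https://w3id.org/oc/ontology/")  # defines hasUpdateQuery

g = Graph()
br = URIRef("https://w3id.org/oc/corpus/br/12345")  # hypothetical entity
se1 = URIRef(f"{br}/prov/se/1")  # snapshot at creation
se2 = URIRef(f"{br}/prov/se/2")  # snapshot after a correction

g.add((se2, PROV.specializationOf, br))
g.add((se2, PROV.wasAttributedTo, URIRef("https://w3id.org/oc/corpus/prov/pa/1")))
g.add((se2, PROV.generatedAtTime,
       Literal("2021-09-09T14:34:43", datatype=XSD.dateTime)))
g.add((se2, PROV.hadPrimarySource,
       URIRef("https://api.crossref.org/works/10.0000/example")))
g.add((se2, PROV.wasDerivedFrom, se1))
# The delta with respect to se1, stored as a SPARQL update query:
g.add((se2, OCO.hasUpdateQuery, Literal(
    'DELETE DATA { <https://w3id.org/oc/corpus/br/12345> '
    '<http://purl.org/dc/terms/title> "An earlier, incorrect title" }')))

print(g.serialize(format="turtle"))
```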
This solution is concretely used in all the datasets related to the OpenCitations infrastructure, such as COCI, an open index containing almost 1.2 billion DOI-to-DOI citation links derived from the open reference data available in Crossref [14]. It is important to note that this OpenCitations provenance model is generic and reusable in any other context. Since the time-agnostic-library leverages OCDM, it too is generic and can be used for any RDF dataset that tracks changes and provenance as OpenCitations does.
The time-agnostic-library is released under the ISC license and is downloadable through pip [1]. Test-driven development was adopted as a software development process during its creation [15]. It makes three main classes available to the user: AgnosticEntity, VersionQuery, and DeltaQuery, for materializations, version queries, and delta queries, respectively (Listing 1).
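Since the original listing is an image, the template below is a sketch reconstructed from the library’s documentation; the class names are those given above, but exact signatures and the configuration file layout may differ between versions.

```python
from time_agnostic_library.agnostic_entity import AgnosticEntity
from time_agnostic_library.agnostic_query import VersionQuery, DeltaQuery

CONFIG = "./config.json"  # triplestore endpoints, provenance graphs, cache

# Materialization: every version of one entity, with provenance metadata.
entity = AgnosticEntity(res="https://w3id.org/oc/corpus/br/12345",
                        config_path=CONFIG)
history = entity.get_history(include_prov_metadata=True)

# Cross-version structured query, optionally restricted to a timespan.
query = "SELECT ?p ?o WHERE { <https://w3id.org/oc/corpus/br/12345> ?p ?o }"
versions = VersionQuery(query, on_time=("2021-01-01", "2021-12-31"),
                        config_path=CONFIG).run_agnostic_query()

# Delta query: what changed, and when, for the entities matching the query.
deltas = DeltaQuery(query, on_time=("2021-01-01", "2021-12-31"),
                    config_path=CONFIG).run_agnostic_query()
```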
Listing 1. Code template to achieve materializations, time-traversal queries, and delta queries.
All three operations can be performed over the entire available history of the dataset, or by specifying a time interval via a tuple in the form (START, END).
The time-agnostic-browser [2] is also released under the ISC license and can be run as a Flask application. It is organized into two macro-sections: “Explore” and “Query”. In the former, a text input accepts a URI; on submission, the entire history of the corresponding resource is displayed. In the latter, a text area receives a SPARQL query, which is resolved over all the states of the dataset. Its main added value is hiding the triples and the complexity of the underlying RDF model: predicate URIs, as well as subjects and objects, appear in a human-readable format. Moreover, all the entities are displayed as links, providing shortcuts to reconstruct the history of the related resources (Figure 4).
Figure 4. The graphical user interface of the time-agnostic-browser, showing the reconstruction of an entity’s history.
The efficiency of the time-agnostic-library was measured with two types of benchmarks [16], one on execution times and the other on the amount of computer memory (RAM) required, across ten different use cases, each repeated ten times to obtain reliable results and mitigate outliers. In light of these benchmarks, the time-agnostic-library has proven effective for any materialization. Structured queries are swift if all subjects are known or deducible. On the other hand, the presence of unknown subjects in the user’s SPARQL query requires identifying all present and past entities that satisfy that pattern, and so demands considerably more time and resources. Specifically, all materializations and the cross-version structured query with known subjects required about half a second and about 50 MB of RAM; conversely, with unknown subjects, 581 seconds and 519 MB of RAM were required. It can be concluded that the proposed software can be used effectively whenever the subject is known: that is, for any materialization, or for SPARQL queries formulated without isolated triple patterns containing unknown subjects.
Other software solutions for such problems have been proposed. Table 1 lists the available software for performing materializations and time-traversal queries on RDF datasets. As can be observed, the time-agnostic-library is the only one to support all the retrieval functionalities without requiring pre-indexing processes. This feature makes it particularly suitable for use in scenarios with large amounts of data that change frequently. Moreover, compared to the approach of Im, Lee and Kim [17] and to OSTRICH [18], the OpenCitations Data Model only requires storing the current state of the dataset, rather than the original one, allowing the latest version to be queried without the additional computational effort of first re-creating the original version.
Table 1. Comparison between the time-agnostic-library and pre-existing software for achieving materializations and time-traversal queries on RDF datasets.
The OpenCitations Data Model and the time-agnostic-library software are the pre-requisites that will allow OpenCitations to involve third parties, for example members of staff in academic libraries, in the submission, curation and updating of OpenCitations bibliographic and citation data. At this stage, all entities in COCI have a single snapshot — the one made at the time of creation. However, since these entities may become modified, corrected or enriched over time, it is imperative to have appropriate software tools available for use by curators. With the time-agnostic-library software and its associated time-agnostic-browser, it will be possible for a curator to explore the entire history of the changes within an RDF dataset, to know when they were made, based on which source, and by which responsible agent, thus ensuring the reliability and verifiability of data, and facilitating any necessary further changes.
[3] J. D. Fernández, A. Polleres, and J. Umbrich, ‘Towards Efficient Archiving of Dynamic Linked Open Data’, in Proceedings of DIACHRON@ESWC, Portorož, Slovenia, 2015, pp. 34–49.
[5] T. Käfer, A. Abdelrahman, J. Umbrich, P. O’Byrne, and A. Hogan, ‘Observing Linked Data Dynamics’, in The Semantic Web: Semantics and Big Data, vol. 7882, P. Cimiano, O. Corcho, V. Presutti, L. Hollink, and S. Rudolph, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013, pp. 213–227. doi: 10.1007/978-3-642-38288-8_15.
[8] F. Orlandi and A. Passant, ‘Modelling provenance of DBpedia resources using Wikipedia contributions’, Journal of Web Semantics, vol. 9, no. 2, pp. 149–164, Jul. 2011, doi: 10.1016/j.websem.2011.03.002.
[9] P. Dooley and B. Božić, ‘Towards Linked Data for Wikidata Revisions and Twitter Trending Hashtags’, in Proceedings of the 21st International Conference on Information Integration and Web-based Applications & Services, Munich Germany, Dec. 2019, pp. 166–175. doi: 10.1145/3366030.3366048.
[11] J. Umbrich, M. Hausenblas, A. Hogan, A. Polleres, and S. Decker, ‘Towards Dataset Dynamics: Change Frequency of Linked Open Data Sources’, in Proceedings of the WWW2010 Workshop on Linked Data on the Web, Raleigh, USA, 2010. Available: http://ceur-ws.org/Vol-628/ldow2010_paper12.pdf
[12] M. Daquino, S. Peroni, and D. Shotton, ‘The OpenCitations Data Model’, figshare, 2020, doi: 10.6084/M9.FIGSHARE.3443876.V7.
[13] S. Peroni, D. Shotton, and F. Vitali, ‘A Document-inspired Way for Tracking Changes of RDF Data’, in Detection, Representation and Management of Concept Drift in Linked Open Data, Bologna, 2016, pp. 26–33. Available: http://ceur-ws.org/Vol-1799/Drift-a-LOD2016_paper_4.pdf
[14] I. Heibi, S. Peroni, and D. Shotton, ‘Software review: COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations’, Scientometrics, vol. 121, no. 2, pp. 1213–1228, Nov. 2019, doi: 10.1007/s11192-019-03217-6.
[15] K. Beck, Test-driven development: by example. Boston: Addison-Wesley, 2003.
[16] A. Massari, ‘time-agnostic-library: benchmark results on execution times and RAM’. Zenodo, Oct. 05, 2021. doi: 10.5281/ZENODO.5549648.
[17] D.-H. Im, S.-W. Lee, and H.-J. Kim, ‘A Version Management Framework for RDF Triple Stores’, Int. J. Softw. Eng. Knowl. Eng., vol. 22, pp. 85–106, 2012.
[18] R. Taelman, M. V. Sande, and R. Verborgh, ‘OSTRICH: Versioned Random-Access Triple Store’, in Companion Proceedings of the Web Conference 2018, 2018, pp. 127–130. Available: https://core.ac.uk/download/pdf/157574975.pdf
[19] N. F. Noy and M. A. Musen, ‘Promptdiff: A Fixed-Point Algorithm for Comparing Ontology Versions’, in Proc. of IAAI, 2002, pp. 744–750.
[20] M. Völkel, W. Winkler, Y. Sure, S. Kruk, and M. Synak, ‘SemVersion: A Versioning System for RDF and Ontologies’, 2005.
[21] M. V. Sande, P. Colpaert, R. Verborgh, S. Coppens, E. Mannens, and R. V. Walle, ‘R&Wbase: Git for triples’, 2013.
[22] T. Neumann and G. Weikum, ‘x-RDF-3X: Fast Querying, High Update Rates, and Consistency for RDF Databases’, Proceedings of the VLDB Endowment, vol. 3, pp. 256–263, 2010.
[23] A. Cerdeira-Pena, A. Farina, J. D. Fernandez, and M. A. Martinez-Prieto, ‘Self-Indexing RDF Archives’, in 2016 Data Compression Conference (DCC), Snowbird, UT, USA, Mar. 2016, pp. 526–535. doi: 10.1109/DCC.2016.40.
[24] T. Pellissier Tanon and F. Suchanek, ‘Querying the Edit History of Wikidata’, in The Semantic Web: ESWC 2019 Satellite Events, vol. 11762, P. Hitzler, S. Kirrane, O. Hartig, V. de Boer, M.-E. Vidal, M. Maleshkova, S. Schlobach, K. Hammar, N. Lasierra, S. Stadtmüller, K. Hose, and R. Verborgh, Eds. Cham: Springer International Publishing, 2019, pp. 161–166. doi: 10.1007/978-3-030-32327-1_32.
Cite this article as: Chiara Di Giambattista, "Performing live time-traversal queries on RDF datasets," in OpenCitations blog, 29/11/2021, https://opencitations.hypotheses.org/1427.
The interconnection between Wikipedia and Wikidata is now larger than ever.
The Wikipedia Citations dataset currently includes around 30M citations from Wikipedia pages to a variety of sources – of which 4M are to scientific publications. Strengthening the connection with external data services and providing structured data for one of the key elements of Wikipedia articles has two significant benefits: first, better discoverability of relevant encyclopedic articles related to scholarly studies; second, the establishment of Wikipedia as a social authority and policy hub, enabling policymakers to assess the importance of an article, person, research group or institution by looking at how many Wikipedia articles cite them.
These are the motivations behind the “Wikipedia Citations in Wikidata” project, supported by a grant from the WikiCite Initiative. From January 2021 until the end of April, the team of Silvio Peroni (director of OpenCitations), Giovanni Colavizza, Marilena Daquino, Gabriele Pisciotta and Simone Persiani from the University of Bologna (Department of Classical Philology and Italian Studies) worked on developing a codebase to enrich Wikidata with citations to scholarly publications that are currently referenced in English Wikipedia. This codebase consists of four Python software modules and integrates new components (a classifier to distinguish citations by cited source, and a look-up module to equip citations with identifiers from Crossref or other APIs). In so doing, Wikipedia Citations extends prior work, which focused only on citations already equipped with identifiers.
In the first two steps of the workflow (extractor and converter), the various ways in which citations are represented in Wikipedia articles are mapped onto the OpenCitations Data Model (OCDM). A third component (enricher) is then responsible for finding new identifiers for the entities in the resulting OCDM-compliant dataset, while the final step (pusher) maps the OCDM data onto Wikidata. The code has been released on GitHub.
WCW Workflow Diagram
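A purely illustrative sketch of how the four modules chain together is given below; the function names and record shapes are hypothetical, and the real APIs live in the project’s GitHub repositories.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Citation:
    citing_page: str          # Wikipedia article
    raw_reference: str        # citation template text
    doi: Optional[str] = None

def extractor(wikitext: str) -> List[Citation]:
    """Pull citation templates out of an article's wikitext (stub)."""
    return [Citation("Some_article", wikitext)]

def converter(citations: List[Citation]) -> List[Citation]:
    """Map the extracted citations onto the OCDM (stub)."""
    return citations

def enricher(citations: List[Citation]) -> List[Citation]:
    """Look up missing identifiers, e.g. DOIs via the Crossref API (stub)."""
    return citations

def pusher(citations: List[Citation]) -> None:
    """Map OCDM-compliant records onto Wikidata and upload them (stub)."""
    for c in citations:
        pass  # write to Wikidata

pusher(enricher(converter(extractor("{{cite journal | title = ... }}"))))
```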
The extensive documentation accompanying the release of the codebase is crucial to one of the principal aims of the project, i.e., the adoption and reuse of the codebase by the community in other relevant Wikimedia projects. The engagement of the various communities (Wikidata, libraries, scholars…) is fostered on one side by offering an increased amount of citation data in Wikidata, and on the other by blogging and sharing updates on Twitter and public mailing lists.
This project, whose ambitious purpose is to make Wikipedia content more discoverable and to enrich Wikidata with a ready-to-use corpus for further analysis and for developing new services, is open to future developments. The intention is to use the software to create a dataset of English Wikipedia citations, in order to understand, in particular, how many new entities (i.e., citing Wikipedia pages, cited articles and venues, authors) would need to be added to Wikidata to upload the full set of extracted citations, with the result of adding a massive number of new bibliography-related entities to the dataset.
The first steps have been taken; we now aim to broaden the engagement of the community, especially among those scholars who leverage Wikidata in existing services, and to interact with the scholars, libraries and institutions interested in a new approach to research, one focused on people (from individuals to research groups) and their intellectual relevance.
Requirements for citations to be treated as First-Class Data Entities
In my introductory blog post, I listed five requirements for the treatment of citations as first-class data entities. The third of these requirements is that they must be storable, searchable and retrievable in an open database designed for bibliographic citations.
In this post, I describe the current status of the OpenCitations Corpus, a well-structured open database specifically developed by OpenCitations and designed to store information about bibliographic citations as Linked Open Data, encoded in RDF (specifically JSON-LD).
What is OpenCitations?
OpenCitations (http://opencitations.net) is a scholarly infrastructure organization that has created, and is currently expanding the coverage of, the OpenCitations Corpus (OCC), an open repository of scholarly citation data made available under a Creative Commons CC0 public domain dedication, which provides, in RDF, accurate citation information (bibliographic references) harvested from the scholarly literature.
The Co-Directors of OpenCitations are David Shotton, Oxford e-Research Centre, University of Oxford (david.shotton@opencitations.net) and Silvio Peroni, Department of Computer Science and Engineering, University of Bologna (silvio.peroni@opencitations.net).
We are committed to open scholarship, open data, open access publication, and open source software. We espouse the FAIR data principles developed by Force11, of which David Shotton was a founding member, and the aim of the Initiative for OpenCitations (I4OC), of which David Shotton and Silvio Peroni were both founding members, to promote the availability of citation data that is structured, separable, and open.
The principal activity of OpenCitations to date has been the establishment and population of the OpenCitations Corpus.
Holdings of the OpenCitations Corpus
We have so far concentrated on ingesting into the OpenCitations Corpus bibliographic references from open access papers available at PubMed Central, the encoding of these data in RDF, and high-quality curation of the citation links they represent, involving metadata enrichment from the Crossref API and (for authors) the ORCID API.
To date (19th February 2018), the OCC has ingested the references from 302,758 citing bibliographic resources, and contains information about 12,830,347 citation links to 6,549,665 cited resources. Plans to expand the coverage of the OCC are outlined below.
User interfaces
The information within the OCC can be accessed via OSCAR, our new generic OpenCitations RDF Search Application (http://opencitations.net/search) [1], which can be used for textual searches over any triplestore presenting a SPARQL endpoint. Users can employ OSCAR to search the OCC for publication titles, author names, publication years, and identifiers (DOIs, PubMed IDs, PubMed Central IDs, ORCIDs, and OCC corpus identifiers). Such a search returns details of all bibliographic resources within the OCC matching the search term, from which their references can be obtained, if known. In the near future, we will complement OSCAR with a browse interface named LUCINDA.
We also provide a SPARQL endpoint for directly querying the Blazegraph triplestore in which we store the OCC RDF, and we plan in the near future to supplement such programmatic access with a REST API. In addition, the contents of the entire triplestore, and of the various sub-databases within the Corpus, together with their provenance information, are downloadable from Figshare as monthly dumps. Once the REST API has been developed, we will turn our attention to developing user interfaces for the interactive visualization of citation graphs.
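By way of example, the following sketch queries the endpoint with SPARQLWrapper for works that cite a given bibliographic resource; the endpoint URL and the corpus identifier br/1 are illustrative assumptions, not guaranteed addresses.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Assumed endpoint location; the cited resource's corpus URI is illustrative.
sparql = SPARQLWrapper("http://opencitations.net/sparql")
sparql.setQuery("""
    PREFIX cito: <http://purl.org/spar/cito/>
    SELECT ?citing WHERE {
        ?citing cito:cites <https://w3id.org/oc/corpus/br/1> .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for binding in sparql.query().convert()["results"]["bindings"]:
    print(binding["citing"]["value"])
```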
The OpenCitations Data Model
As described in the previous blog post, we have just completed a comprehensive revision of the OpenCitations Data Model (OCDM, available at https://doi.org/10.6084/m9.figshare.3443876), which we use to capture descriptions of all aspects of the OCC citations and their provenance. This model makes extensive use of our SPAR (Semantic Publishing and Referencing) Ontologies (http://www.sparontologies.net/), which we developed to describe all aspects of the scholarly publishing domain in RDF.
The OpenCitations Data Model is freely available for third parties to use when recording their own bibliographic and citation information in RDF, with the advantage that data so modelled will be immediately compatible with those within the OpenCitations Corpus, which can act as a publishing venue for such third-party data.
Future ingest rate and data sources
Since July 2016, the instantiation of the OpenCitations Corpus currently running at the University of Bologna has been ingesting reference lists from biomedical journal articles at the relatively slow rate of about 200,000 citing bibliographic resources per year. During February 2018, ingestion into the Corpus is suspended, while we move the system to a completely new and more powerful server, supplemented by thirty Raspberry Pi ingest engines that will work in parallel feeding ingested data to the server.
This will increase our ingestion rate ~30-fold to about six million citing bibliographic resources per year, equivalent to ~240 million citations per year at 40 references per paper (the current OCC value is 42.4 references per paper). We should then be able to complete ingestion of the ~1.4 million remaining OA resources at PubMed Central within about three months.
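A quick back-of-envelope check of these figures, using only the numbers quoted above:

```python
base_rate = 200_000        # citing resources ingested per year at present
speedup = 30               # new server plus thirty Raspberry Pi engines
refs_per_paper = 40        # assumed references per paper (current OCC: 42.4)

new_rate = base_rate * speedup                  # 6,000,000 citing resources/year
citations_per_year = new_rate * refs_per_paper  # 240,000,000 citations/year

remaining_pmc = 1_400_000                       # OA resources left at PubMed Central
months_to_finish = remaining_pmc / new_rate * 12  # about 2.8 months
```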
At that stage, we plan to start ingesting references from the ~17 million journal articles whose deposited references are now open at Crossref as a consequence of the Initiative for Open Citations. The scholarly world currently publishes about 2.5 million new journal articles each year, of which about half will probably be open at Crossref (assuming Elsevier has not by then opened its references). So, by the end of 2020, Crossref will have ~650 million open references. In addition to ingesting new open Crossref references as they are made available, we will be able to eat into the backlog of existing Crossref open references at a catch-up rate of ~190 million per year. By the end of 2020, we anticipate that the OCC should contain ~650 million citations harvested from PMC and Crossref, roughly half the coverage of Web of Science. We are currently also considering ingesting references from other major bibliographic databases.
Our vision for OpenCitations
Our vision is that OpenCitations should become a comprehensive source of open citation information from all disciplines of scholarly endeavour encoded as Linked Open Data, a key component of the academic open infrastructure used on a daily basis without charge by scholars worldwide.
To be of maximum utility, it requires effective graphical user interfaces and analytical tools to interrogate and quantify the data contained within the OCC. Since these data are all open, we anticipate that such interface and tool development will best be undertaken collaboratively within the open scholarly community, and we invite developers interested in such collaboration to contact us at contact@opencitations.net.
Reference
[1] Ivan Heibi, Silvio Peroni and David Shotton (2018). OSCAR: A customisable tool for free-text search over SPARQL endpoints. Accepted to the 2018 International Workshop on Semantics, Analytics, Visualisation: Enhancing Scholarly Dissemination Workshop (https://save-sd.github.io/2018/, co-located with The Web Conference), 24 April 2018 – Lyon, France. Preprint available at https://w3id.org/people/essepuntato/papers/oscar-savesd2018.html
Cite this article as: davidshotton, "Citations as First-Class Data Entities: The OpenCitations Corpus," in OpenCitations blog, 04/03/2018, https://opencitations.hypotheses.org/824.