WikiJournal Preprints/Generic Tuple Store
This article is an unpublished pre-print not yet undergoing peer review.
Introduction
While there is prior art regarding the versioning of triples and quads, the review of existing software in Collaborative Open Data versioning: a pragmatic approach using Linked Data[1] concludes that much work remains to make it practical and usable. The most telling indicator is Wikidata: if a versioned triple store that is free software and scales to 9 billion triples and 1.7 TB were readily available, Wikidata would be using it. In spite of considerable effort, Wikidata, and in particular the Wikidata Query Service (WDQS), does not scale. Guillaume Lederrey, Engineering Manager, Search Platform at the Wikimedia Foundation, stated a few months back:
In an ideal world, WDQS should:
Scaling graph databases is a "known hard problem", and we are reaching a scale where there are no obvious easy solutions to address all the above constraints. At this point, just "throwing hardware at the problem" is not an option anymore. We need to go deeper into the details and potentially make major changes to the current architecture.
Wikidata was founded 7 years ago. Since then, tools and practices have evolved. In particular, WiredTiger was acquired by MongoDB and, as such, has seen much more production workload and bug fixing. More recently, FoundationDB, created in 2009 to tackle the difficult problem of distributed, scalable databases with strong guarantees, was acquired by Apple in 2015 and later released as free software. These two tools made it possible to build the Generic Tuple Store (nstore), standardized as Scheme Request for Implementation SRFI-168[2].
The nstore presented in this document is the basic building block of the Versioned Generic Tuple Store (vnstore), which will be presented in another document. In the use-cases section, the vnstore implementation is introduced to motivate the nstore design.
The first part of this article presents the main topics discussed in the paper and how they relate to the current work. Then several use-cases for the nstore are presented. The article ends with a conclusion and future work.
Background
Resource Description Framework
RDF is a World Wide Web Consortium (W3C) set of standards that aims to facilitate cooperation around data. Among other things, it specifies how to exchange and query data, and to some extent how to store it.
The following sections dive into several parts of the RDF framework and explain how they relate to nstore.
SPARQL
SPARQL is the query language of the RDF stack: it specifies how RDF databases must be queried. It also provides for federated queries, that is, queries across several databases. As the literature explains, SPARQL can be difficult to implement. Instead of aiming for direct interoperability, nstore takes the stance of primarily delivering its main features:
- tuples of n items
- good performance
- horizontal scalability
- easy to set up
nstore's internal query language is very similar in principle to SPARQL, even though the syntax differs.
SPARQL specifies various data types based on the XML Schema specification. nstore does not conform to that specification; instead, it can store anything that has a Scheme representation, which is a superset of the RDF base data types. At query time, the user or client must convert the Scheme representation into the intended representation, possibly relying on tuples added to the nstore at import time.
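To illustrate the principle of such pattern queries (a minimal sketch, not the actual nstore or SPARQL machinery; all names and data below are hypothetical), a query binds variables to the unknown items of each matching tuple, much like ?x in a SPARQL triple pattern:

```python
class Var:
    """A named query variable, analogous to SPARQL's ?x."""
    def __init__(self, name):
        self.name = name

def where(tuples, pattern):
    """Yield one dict of variable bindings per tuple matching the pattern."""
    for tuple_ in tuples:
        bindings = {}
        for item, target in zip(tuple_, pattern):
            if isinstance(target, Var):
                bindings[target.name] = item
            elif target != item:
                break  # a constant in the pattern did not match
        else:
            yield bindings

tuples = [
    ("item1", "label", "nstore", "en"),
    ("item1", "label", "magasin-n", "fr"),
    ("item2", "label", "vnstore", "en"),
]
# All labels of item1, with their language:
out = list(where(tuples, ("item1", "label", Var("label"), Var("lang"))))
# → [{'label': 'nstore', 'lang': 'en'}, {'label': 'magasin-n', 'lang': 'fr'}]
```

A real engine would of course not scan every tuple; the ordered key-value store section below explains how sorted indices turn such patterns into range queries.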
Vocabularies, Ontologies and Linked Data
With RDF comes a specification to describe the content of a database. For instance, the INSPIRE initiative is an interesting project that aims at standardizing a vocabulary to exchange spatial data across organizations. There are many competing and cooperating vocabularies.
Ordered Key-Value Store
Ordered key-value stores offer a versatile primitive for building high-performance, multi-model, and domain-specific databases. Their common denominator is a mapping of bytes to bytes where keys are always sorted in lexicographic order. Even when they do not expose a cursor interface, they allow movement inside the mapping via range queries (also known as slices).
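As a sketch of this primitive (a toy in-memory model, not any particular engine's API; the key layout shown is illustrative), lexicographically sorted byte keys make prefix scans a single range query:

```python
import bisect

class ToyOKVS:
    """Toy ordered key-value store: byte keys kept in lexicographic order."""
    def __init__(self):
        self.keys = []    # sorted list of byte-string keys
        self.values = {}

    def set(self, key, value):
        if key not in self.values:
            bisect.insort(self.keys, key)  # keep keys sorted on insert
        self.values[key] = value

    def range(self, start, stop):
        """Yield (key, value) pairs with start <= key < stop."""
        i = bisect.bisect_left(self.keys, start)
        j = bisect.bisect_left(self.keys, stop)
        for key in self.keys[i:j]:
            yield key, self.values[key]

store = ToyOKVS()
# Hypothetical index keys: index name, then tuple items, NUL-separated.
store.set(b"spo\x00s1\x00p1\x00o1", b"")
store.set(b"spo\x00s1\x00p2\x00o2", b"")
store.set(b"pos\x00p1\x00s1\x00o1", b"")
# Prefix scan over the "spo" index only:
matches = list(store.range(b"spo\x00", b"spo\x01"))
```

Real engines such as WiredTiger or FoundationDB provide the same sorted-range contract, plus transactions and durability.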
Nowadays there are numerous libraries offering a compatible interface. FoundationDB is one; similar software includes Tokyo Cabinet, Kyoto Cabinet, LMDB, LevelDB and RocksDB, each offering different trade-offs and features. nstore uses WiredTiger because it performs well on some benchmarks[3], takes charge of the difficult matter of delivering an Atomic, Consistent, Isolated and Durable (ACID) storage engine, and also handles in-memory caching.
The prototype uses only WiredTiger, but thanks to SRFI-168 it is possible to use FoundationDB, provided the appropriate code is contributed.
Scheme Programming Language
According to Wikipedia:
Scheme is a programming language that supports multiple paradigms, including functional and imperative programming. It is one of the three main dialects of Lisp, alongside Common Lisp and Clojure. Unlike Common Lisp, Scheme follows a minimalist design philosophy, specifying a small standard core with powerful tools for language extension.
High-level languages like Scheme are not the preferred tools to build database abstraction, so far. That said, some had success with Java, Go and Clojure[4]. With the advent of OKVS and the massive improvements of Scheme compilers the situation is different. Key-value stores solve the performance problems while Scheme allows to express quickly high-level abstractions that fit exactly the domain problem.
Chez Scheme is probably the fastest Scheme implementation[5]; in particular, it is faster than Racket[6], which arguably makes Chez one of the fastest dynamically typed language implementations available. The Scheme community also has a strong culture of computer science research, which inspired the current form of nstore.
Use-cases
There are some situations where three or four items in a tuple are not enough. This includes, but is not limited to, tracking provenance, lineage, license or other metadata. There are several ways to reify a tuple in a triple or quad store, as presented in [7]. The approach taken by nstore, bundling metadata with the rest of the tuple, is most interesting from a performance point of view when the reification of tuples is systematic, that is, when the metadata requirements are known beforehand and apply to every tuple. For instance, in the versioned generic tuple store (vnstore), every tuple is associated with its history significance and a Boolean denoting whether it is alive or dead. Another example is when the user wants to know the provenance of every single tuple. While there is no rigorous benchmark comparing the different reification approaches to the nstore approach, the intuition is that the nstore is faster and consumes less disk space than any kind of reification.
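As a sketch of this use-case (illustrative names and layout, not the actual vnstore API), versioning metadata simply becomes two extra items on every tuple, so no separate reification node is needed:

```python
# Each 5-tuple carries its own metadata:
# (subject, predicate, object, change identifier, alive flag).
tuples = [
    ("item1", "title", "nstore", "change-1", False),               # superseded
    ("item1", "title", "Generic Tuple Store", "change-2", True),   # current
]

def live(tuples):
    """Return the tuples that are alive in the latest state."""
    return [t for t in tuples if t[4]]

snapshot = live(tuples)
# → only the tuple introduced by change-2 remains
```

Because the alive flag and change identifier are ordinary tuple items, they can participate in the same indexed range queries as subject, predicate and object.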
Conclusion and future work
Preliminary micro-benchmarks show that chez-nomunofu is faster than the competition, both in data-import time and in query time. Three aspects remain to be explored: a) support for FoundationDB; b) a SPARQL middleware; c) more rigorous benchmarks based on WDQS SPARQL logs[8].
Additional information
Acknowledgements
StackOverflow user zhoraster helped pin down the mathematics behind the implementation of the generic tuple store, and Mike Earnest provided an algorithm to compute a minimal set of tuple-item permutations that allows efficient querying[9].
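To illustrate what such a set of permutations buys (a sketch of the covering property, not the referenced minimality algorithm): a query pattern with some items bound can use a single range scan only if the bound positions form a prefix of some index's permutation. For triples, three permutations suffice to cover every combination of bound positions:

```python
from itertools import combinations

def covers(perm, bound):
    """True if the bound positions form a prefix of the permutation."""
    return set(perm[:len(bound)]) == set(bound)

def pick_index(indices, bound):
    """Pick an index usable as a prefix scan for the given bound positions."""
    for perm in indices:
        if covers(perm, bound):
            return perm
    return None

# Three permutations of (subject, predicate, object) that cover
# every subset of bound positions as a prefix:
indices = [("s", "p", "o"), ("p", "o", "s"), ("o", "s", "p")]

all_covered = all(
    pick_index(indices, bound) is not None
    for r in range(4)
    for bound in combinations(("s", "p", "o"), r)
)
# → True: any query pattern maps onto one of the three indices
```

For larger n the number of required permutations grows, which is exactly the combinatorial question answered in the referenced Mathematics Stack Exchange thread.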
Grant
This work is part of a Wikimedia grant request.
References
- ↑ Garrote, Antonio; García, María N. Moreno (2013). Cases on Open-Linked Data and Semantic Web Applications. IGI Global. pp. 192–198. ISBN 978-1-4666-2827-4. http://dx.doi.org/10.4018/978-1-4666-2827-4.ch010.
- ↑ "Generic Tuple Store Database". srfi.schemers.org. Retrieved 2019-12-23.
- ↑ "On-Disk Microbenchmark". www.lmdb.tech. Retrieved 2019-12-23.
- ↑ "Datomic Documentation". docs.datomic.com. Retrieved 2019-12-23.
- ↑ "Scheme Implementations Benchmarks". ecraven.github.io. Retrieved 2019-12-23.
- ↑ "Racket vs Python 3 - Which programs are fastest? | Computer Language Benchmarks Game". benchmarksgame-team.pages.debian.net. Retrieved 2019-12-23.
- ↑ "Evaluation of Metadata Representations in RDF stores | www.semantic-web-journal.net". www.semantic-web-journal.net. Retrieved 2019-12-24.
- ↑ "Getting the Most out of Wikidata: Semantic Technology Usage in Wikipedia's Knowledge Graph - International Center for Computational Logic". iccl.inf.tu-dresden.de. Retrieved 2019-12-23.
- ↑ "combinatorics - What are the smallest sets of $n$-permutations of $\{1, 2, \ldots, n\}$ where all n-combinations are "prefix-permutation" of one element of the set?". Mathematics Stack Exchange. Retrieved 2019-12-23.