Algoritmica e Laboratorio "Corso B" (Teacher), Bachelor Degree in Computer Science.
Giulio Ermanno Pibiri is a Post-Doctoral Research Fellow in Computer Science, currently affiliated with the HPC-Lab, ISTI-CNR (Pisa, Italy). He obtained a PhD in Computer Science from the University of Pisa in 2019.
His research activity focuses on devising compressed data structures to index and search large quantities of data. The proposed solutions are available as research papers and optimized software libraries.
The list of my publications follows below. You can also visit my DBLP and Google Scholar profiles.
For the code, have a look at my GitHub.
A dictionary of $k$-mers is a data structure that stores a set of $n$ distinct $k$-mers and supports membership queries. This data structure is at the heart of many important tasks in computational biology. High-throughput DNA sequencing can produce very large $k$-mer sets, on the order of billions of strings; in such cases, the memory consumption and query efficiency of the data structure become a concrete challenge. To tackle this problem, we describe a compressed and associative dictionary for $k$-mers, that is, a data structure where strings are represented in compact form and each of them is associated with a unique integer identifier in the range $[0,n)$. We show that some statistical properties of $k$-mer minimizers can be exploited by minimal perfect hashing to substantially improve the space/time trade-off of the dictionary compared to the best-known solutions.
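To make the interface concrete, here is a minimal sketch of such an associative dictionary; it is illustrative only (the names kmer_dictionary, pack and lookup are not the library's API), it assumes $k \leq 32$ so that a $k$-mer fits in a 64-bit word with 2 bits per nucleotide, and a plain hash map stands in for the compressed, minimal-perfect-hash-based structure.

#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Pack a k-mer (k <= 32, alphabet {A,C,G,T}) into a 64-bit word, 2 bits per nucleotide.
uint64_t pack(const std::string& kmer) {
    uint64_t x = 0;
    for (char c : kmer) {
        uint64_t code = (c == 'A') ? 0 : (c == 'C') ? 1 : (c == 'G') ? 2 : 3; // 'T'
        x = (x << 2) | code;
    }
    return x;
}

// Illustrative associative dictionary: each distinct k-mer gets a unique id in [0,n).
struct kmer_dictionary {
    std::unordered_map<uint64_t, uint64_t> id_of; // k-mer -> identifier
    std::vector<uint64_t> kmer_of;                // identifier -> packed k-mer

    void build(const std::vector<std::string>& kmers) {
        for (const auto& s : kmers) {
            uint64_t x = pack(s);
            if (id_of.emplace(x, kmer_of.size()).second) kmer_of.push_back(x);
        }
    }
    // Membership + association: returns the id in [0,n), or -1 if the k-mer is absent.
    int64_t lookup(const std::string& kmer) const {
        auto it = id_of.find(pack(kmer));
        return it == id_of.end() ? -1 : int64_t(it->second);
    }
};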
Time series are ubiquitous in computing as a key ingredient of many machine learning analytics, ranging from classification to forecasting. Typically, training such machine learning algorithms on time series requires accessing the data in temporal order several times. Therefore, a compression algorithm providing good compression ratios and fast decompression speed is desirable. In this paper, we present TSXor, a simple yet effective lossless compressor for time series. The main idea is to exploit the redundancy/similarity between close-in-time values through a window that acts as a cache, so as to improve the compression ratio and decompression speed. We show that TSXor achieves up to 3× better compression and up to 2× faster decompression than the state of the art on real-world datasets.
A minimal perfect hash function $f$ for a set $S$ of $n$ keys is a bijective function of the form $f : S \rightarrow \{0,\ldots,n-1\}$. These functions are important for many practical applications in computing, such as search engines, computer networks, and databases. Several algorithms have been proposed to build minimal perfect hash functions that scale well to large sets, retain fast evaluation time, and take very little space, e.g., 2-3 bits/key. PTHash is one such algorithm, achieving very fast evaluation in compressed space, typically several times faster than other techniques. In this work, we propose a new construction algorithm for PTHash enabling: (1) multi-threading, to either build functions more quickly or more space-efficiently, and (2) external-memory processing, to scale to inputs much larger than the available internal memory. Only a few other algorithms in the literature share these features, despite their big practical impact. We conduct an extensive experimental assessment on large real-world string collections and show that, with respect to other techniques, PTHash is competitive in construction time and space consumption, but retains 2-6$\times$ better lookup time.
Given a set $S$ of $n$ distinct keys, a function $f$ that bijectively maps the keys of $S$ into the range $\{0,\ldots,n-1\}$ is called a minimal perfect hash function for $S$. Algorithms that find such functions when $n$ is large and retain constant evaluation time are of practical interest; for instance, search engines and databases typically use minimal perfect hash functions to quickly assign identifiers to static sets of variable-length keys such as strings. The challenge is to design an algorithm which is efficient in three different aspects: time to find $f$ (construction time), time to evaluate $f$ on a key of $S$ (lookup time), and space of representation for $f$. Several algorithms have been proposed to trade off between these aspects. In 1992, Fox, Chen, and Heath (FCH) presented an algorithm at SIGIR providing very fast lookup evaluation. However, the approach received little attention because of its large construction time and higher space consumption compared to other, subsequent techniques. Almost thirty years later, we revisit their framework and present an improved algorithm that scales well to large sets and reduces space consumption altogether, without compromising the lookup time. We conduct an extensive experimental assessment and show that the algorithm finds functions that are competitive in space with state-of-the-art techniques and provide $2-4\times$ better lookup time.
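As a reminder of what the definition requires, the sketch below checks that a candidate function is a minimal perfect hash for a set $S$: every key must map into $[0,n)$ and no two keys may collide. The function verify_mphf is illustrative and not part of any of the cited libraries.

#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Check the defining property of a minimal perfect hash function:
// f maps the n keys of S bijectively onto {0, ..., n-1}.
bool verify_mphf(const std::vector<std::string>& S,
                 const std::function<uint64_t(const std::string&)>& f) {
    std::vector<bool> taken(S.size(), false);
    for (const auto& key : S) {
        uint64_t h = f(key);
        if (h >= S.size() || taken[h]) return false; // out of range or collision
        taken[h] = true;
    }
    return true; // every value in [0,n) is hit exactly once
}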
The problem of answering rank/select queries over a bitmap is of utmost importance for many succinct data structures. When the bitmap does not change, many solutions exist on both the theoretical and the practical side. In this work we consider the case where one is allowed to modify the bitmap via a flip(i) operation that toggles its i-th bit. By adapting and properly extending some results concerning prefix-sum data structures, we present a practical solution to the problem, tailored for modern CPU instruction sets. Compared to the state of the art, our solution improves runtime with no space degradation. Moreover, it does not incur a significant runtime penalty when compared to the fastest immutable indexes, while providing even lower space overhead.
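One way to picture the connection with prefix sums is the following simplified, scalar sketch (not the paper's SIMD-tuned layout): keep the bitmap as 64-bit words and maintain a Fenwick tree over the per-word popcounts, so that rank reduces to a prefix sum plus one popcount, and flip reduces to a ±1 point update.

#include <bit>       // std::popcount (C++20)
#include <cstddef>
#include <cstdint>
#include <vector>

// Mutable rank via a Fenwick (binary indexed) tree over per-word popcounts.
struct mutable_rank {
    std::vector<uint64_t> words;  // the bitmap, 64 bits per word
    std::vector<int64_t> fen;     // Fenwick tree over the popcount of each word

    explicit mutable_rank(size_t num_bits)
        : words((num_bits + 63) / 64, 0), fen(words.size() + 1, 0) {}

    void fen_add(size_t i, int64_t delta) {   // point update on word i
        for (++i; i < fen.size(); i += i & -i) fen[i] += delta;
    }
    int64_t fen_sum(size_t i) const {         // total popcount of words [0, i)
        int64_t s = 0;
        for (; i > 0; i -= i & -i) s += fen[i];
        return s;
    }
    void flip(size_t i) {                     // toggle the i-th bit
        size_t w = i / 64, b = i % 64;
        bool was_set = (words[w] >> b) & 1;
        words[w] ^= uint64_t(1) << b;
        fen_add(w, was_set ? -1 : +1);
    }
    int64_t rank1(size_t i) const {           // number of 1s in positions [0, i)
        size_t w = i / 64, b = i % 64;
        int64_t r = fen_sum(w);
        if (b) r += std::popcount(words[w] & ((uint64_t(1) << b) - 1));
        return r;
    }
};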
The sheer increase in volume of RDF data demands efficient solutions for the triple indexing problem, that is, devising a compressed data structure to compactly represent RDF triples while guaranteeing, at the same time, fast pattern matching operations. This problem lies at the heart of delivering good practical performance for the resolution of complex SPARQL queries on large RDF datasets.
In this work, we propose a trie-based index layout to solve the problem and introduce two novel techniques to reduce its space of representation for improved effectiveness. The extensive experimental analysis conducted over a wide range of publicly available real-world datasets reveals that our best space/time trade-off configuration substantially outperforms existing state-of-the-art solutions, by taking 30-60% less space and speeding up query execution by a factor of 2-81×.
We present a data structure that encodes a sorted integer sequence in small space while allowing, at the same time, fast intersection operations. The data layout is carefully designed to exploit word-level parallelism and SIMD instructions, hence providing good practical performance. The core algorithmic idea is that of recursively partitioning the universe of representation: a markedly different paradigm from the widespread strategy of partitioning the sequence based on its length. Extensive experimentation and comparison against several competitive techniques show that the proposed solution embodies an improved space/time trade-off for the set intersection problem.
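A toy illustration of the "partition the universe, not the sequence" idea follows; it is greatly simplified with respect to the actual SIMD-friendly layout and uses hypothetical names: a 32-bit universe is split into chunks of $2^{16}$ values, each set stores only its non-empty chunks, and intersection touches only the chunks the two sets share.

#include <algorithm>
#include <cstdint>
#include <iterator>
#include <map>
#include <vector>

// A set of 32-bit integers, partitioned by the high 16 bits of each value.
using chunked_set = std::map<uint16_t, std::vector<uint16_t>>; // chunk id -> sorted low parts

chunked_set build(const std::vector<uint32_t>& sorted_values) {  // input must be sorted
    chunked_set s;
    for (uint32_t v : sorted_values) s[uint16_t(v >> 16)].push_back(uint16_t(v & 0xFFFF));
    return s;
}

// Intersect chunk by chunk: chunks present in only one set are skipped entirely.
std::vector<uint32_t> intersect(const chunked_set& a, const chunked_set& b) {
    std::vector<uint32_t> out;
    for (const auto& [chunk, lows_a] : a) {
        auto it = b.find(chunk);
        if (it == b.end()) continue;
        std::vector<uint16_t> common;
        std::set_intersection(lows_a.begin(), lows_a.end(),
                              it->second.begin(), it->second.end(),
                              std::back_inserter(common));
        for (uint16_t low : common) out.push_back((uint32_t(chunk) << 16) | low);
    }
    return out;
}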
The data structure at the core of large-scale search engines is the inverted index, which is essentially a collection of sorted integer sequences called inverted lists. Because of the many documents indexed by such engines and stringent performance requirements imposed by the heavy load of queries, the inverted index stores billions of integers that must be searched efficiently. In this scenario, index compression is essential because it leads to a better exploitation of the computer memory hierarchy for faster query processing and, at the same time, allows reducing the number of storage machines. The aim of this article is twofold: first, surveying the encoding algorithms suitable for inverted index compression and, second, characterizing the performance of the inverted index through experimentation.
Given an integer array $A$, the prefix-sum problem is to answer sum($i$) queries that return the sum of the elements in $A[0..i]$, knowing that the integers in $A$ can be changed. It is a classic problem in data structure design with a wide range of applications in computing, from coding to databases. In this work, we propose and compare several practical solutions to this problem, showing that new trade-offs between the performance of queries and updates can be achieved on modern hardware.
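The textbook baseline for this problem is the Fenwick (binary indexed) tree, which supports both sum($i$) and point updates in $O(\log n)$ time; a minimal sketch is given below. It is only the classic starting point, not the trade-offs proposed in the paper.

#include <cstddef>
#include <cstdint>
#include <vector>

// Fenwick tree over an array A of n integers: sum(i) and update(i, delta) in O(log n).
struct fenwick {
    std::vector<int64_t> tree; // 1-based internal array

    explicit fenwick(size_t n) : tree(n + 1, 0) {}

    void update(size_t i, int64_t delta) {   // A[i] += delta
        for (++i; i < tree.size(); i += i & -i) tree[i] += delta;
    }
    int64_t sum(size_t i) const {            // returns A[0] + ... + A[i]
        int64_t s = 0;
        for (++i; i > 0; i -= i & -i) s += tree[i];
        return s;
    }
};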
Query Auto-Completion (QAC) is a ubiquitous feature of modern textual search systems, suggesting possible ways of completing the query being typed by the user. Efficiency is crucial to give the system real-time responsiveness when operating in the million-scale search space. Prior work has extensively advocated the use of a trie data structure for fast prefix-search operations in compact space. However, searching by prefix has little discovery power, in that only completions that are prefixed by the query are returned. This may negatively impact the effectiveness of the QAC system, with a consequent monetary loss for real applications like Web Search Engines and eCommerce. In this work we describe the implementation that empowers a new QAC system at eBay, and discuss its efficiency/effectiveness in relation to other state-of-the-art approaches. The solution is based on the combination of an inverted index with succinct data structures, a much less explored direction in the literature. This system is replacing the previous implementation, based on Apache Solr, which was not always able to meet the required service-level agreement.
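To make the limitation of prefix search concrete, here is a minimal, generic sketch (not the eBay system): completions are kept in a sorted list and matched with a binary search on the typed prefix, so anything that does not literally start with the query is unreachable by this method alone.

#include <algorithm>
#include <string>
#include <vector>

// Plain prefix search over a sorted list of completions: returns the entries
// that start with `prefix`. A completion matching the query only on a later
// word cannot be retrieved this way.
std::vector<std::string> complete(const std::vector<std::string>& sorted,
                                  const std::string& prefix) {
    auto first = std::lower_bound(sorted.begin(), sorted.end(), prefix);
    std::vector<std::string> out;
    for (auto it = first;
         it != sorted.end() && it->compare(0, prefix.size(), prefix) == 0; ++it)
        out.push_back(*it);
    return out;
}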
The representation of a dynamic ordered set of $n$ integer keys drawn from a universe of size $m$ is a fundamental data structuring problem. Many solutions to this problem achieve optimal time but take polynomial space, therefore preserving time optimality in the compressed space regime is the problem we address in this work. For a polynomial universe $m = n^{\Theta(1)}$, we give a solution that takes $\textsf{EF}(n,m) + o(n)$ bits, where $\textsf{EF}(n,m) \leq n\lceil \log_2(m/n)\rceil + 2n$ is the cost in bits of the Elias-Fano representation of the set, and supports random access to the $i$-th smallest element in $O(\log n/ \log\log n)$ time, updates and predecessor search in $O(\log\log n)$ time. These time bounds are optimal.
The ubiquitous Variable-Byte encoding is one of the fastest compressed representations for integer sequences. However, its compression ratio is usually not competitive with that of other, more sophisticated encoders, especially when the integers to be compressed are small, which is the typical case for inverted indexes.
This paper shows that the compression ratio of Variable-Byte can be improved by 2× by adopting a partitioned representation of the inverted lists. This makes Variable-Byte surprisingly competitive in space with the best bit-aligned encoders, hence disproving the folklore belief that Variable-Byte is space-inefficient for inverted index compression. Despite the significant space savings, we show that our optimization almost comes for free: we introduce an optimal partitioning algorithm that does not affect indexing time because of its linear-time complexity, and we show, with an extensive experimental analysis and comparison with several other state-of-the-art encoders, that the query processing speed of Variable-Byte is preserved.
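For reference, the baseline Variable-Byte codec itself is tiny; the standard sketch below is the plain codec, not the partitioned variant proposed here, and in practice it is applied to the gaps between consecutive integers of an inverted list. Each integer is split into 7-bit chunks, with the high bit of each byte signalling whether more bytes follow.

#include <cstddef>
#include <cstdint>
#include <vector>

// Standard Variable-Byte: 7 payload bits per byte, high bit = "continuation".
void vbyte_encode(uint32_t x, std::vector<uint8_t>& out) {
    while (x >= 128) {
        out.push_back(uint8_t(x & 127) | 128); // more bytes follow
        x >>= 7;
    }
    out.push_back(uint8_t(x)); // last byte: high bit clear
}

uint32_t vbyte_decode(const uint8_t* in, size_t& pos) {
    uint32_t x = 0;
    int shift = 0;
    while (in[pos] & 128) { x |= uint32_t(in[pos++] & 127) << shift; shift += 7; }
    x |= uint32_t(in[pos++]) << shift;
    return x;
}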
This thesis concerns the design of compressed data structures for the efficient storage of massive datasets of integer sequences and short strings.
Two fundamental problems concern the handling of large n-gram language models: indexing, that is, compressing the n-grams and associated satellite values without compromising their retrieval speed, and estimation, that is, computing the probability distribution of the n-grams extracted from a large textual source.
Performing these two tasks efficiently is vital for several applications in the fields of Information Retrieval, Natural Language Processing and Machine Learning, such as auto-completion in search engines and machine translation.
Regarding the problem of indexing, we describe compressed, exact and lossless data structures that achieve, at the same time, high space reductions and no time degradation with respect to the state-of-the-art solutions and related software packages. In particular, we present a compressed trie data structure in which each word of an n-gram following a context of fixed length k, i.e., its preceding k words, is encoded as an integer whose value is proportional to the number of words that follow such context. Since the number of words following a given context is typically very small in natural languages, we lower the space of representation to compression levels that were never achieved before, allowing the indexing of billions of strings. Despite the significant savings in space, our technique introduces a negligible penalty at query time.
Specifically, the most space-efficient competitors in the literature, which are both quantized and lossy, do not take less space than our trie data structure and are up to 5 times slower. Conversely, our trie is as fast as the fastest competitor, but also retains an advantage of up to 65% in absolute space.
Regarding the problem of estimation, we present a novel algorithm for estimating modified Kneser-Ney language models, which have emerged as the de facto choice for language modeling in both academia and industry thanks to their relatively low perplexity. Estimating such models from large textual sources poses the challenge of devising algorithms that make a parsimonious use of the disk.
The state-of-the-art algorithm uses three sorting steps in external memory: we show an improved construction that requires only one sorting step by exploiting the properties of the extracted n-gram strings. With an extensive experimental analysis performed on billions of n-grams, we show an average improvement of 4.5 times on the total running time of the previous approach.
Dictionary-based compression schemes provide fast decoding operation, typically at the expense of reduced compression effectiveness compared to statistical or probability-based approaches.
In this work, we apply dictionary-based techniques to the compression of inverted lists, showing that the high degree of regularity that these integer sequences exhibit is a good match for certain types of dictionary methods, and that an important new trade-off balance between compression effectiveness and compression efficiency can be achieved.
Our observations are supported by experiments using the document-level inverted index data for two large text collections, and a wide range of other index compression implementations as reference points. Those experiments demonstrate that the gap between efficiency and effectiveness can be substantially narrowed.
The data structure at the core of today's large-scale search engines, social networks and storage architectures is the inverted index, which can be regarded as a collection of sorted integer sequences called inverted lists. Because of the many documents indexed by search engines and the stringent performance requirements dictated by the heavy load of user queries, the inverted lists often store several millions (even billions) of integers and must be searched efficiently.
In this scenario, compressing the inverted lists of the index is a mandatory design phase, since it introduces a twofold advantage over a non-compressed representation: it feeds faster memory levels with more data in order to speed up the query processing algorithms, and it reduces the number of storage machines needed to host the whole index. The scope of the chapter is to survey the most important encoding algorithms developed for efficient inverted index compression.
The efficient indexing of large and sparse $N$-gram datasets is crucial in several applications in Information Retrieval, Natural Language Processing and Machine Learning.
Because of the stringent efficiency requirements, dealing with billions of $N$-grams poses the challenge of introducing a compressed representation that preserves the query processing speed.
In this paper we study the problem of reducing the space required by the representation of such datasets, maintaining the capability of looking up a given $N$-gram within microseconds. For this purpose we describe compressed, exact and lossless data structures that achieve, at the same time, high space reductions and no time degradation with respect to state-of-the-art software packages. In particular, we present a trie data structure in which each word following a context of fixed length $k$, i.e., its preceding $k$ words, is encoded as an integer whose value is proportional to the number of words that follow such context. Since the number of words following a given context is typically very small in natural languages, we are able to lower the space of representation to compression levels that were never achieved before. Despite the significant savings in space, we show that our technique introduces a negligible penalty at query time.
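A toy example of the remapping idea described above (illustrative only, with hypothetical names; the actual data structure is a compressed trie): instead of storing the global vocabulary identifier of a word, store its rank among the words observed after its context, which is usually a much smaller integer and hence cheaper to encode.

#include <algorithm>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// For each context (the preceding k words, here joined into one string),
// keep the sorted list of words seen after it. A word occurrence is then
// encoded as its rank within that list instead of its global id.
struct context_remapper {
    std::map<std::string, std::vector<std::string>> successors;

    void add(const std::string& context, const std::string& word) {
        auto& v = successors[context];
        auto it = std::lower_bound(v.begin(), v.end(), word);
        if (it == v.end() || *it != word) v.insert(it, word);
    }
    // Rank of `word` among the successors of `context`: a small integer,
    // since natural-language contexts are typically followed by few words.
    uint32_t encode(const std::string& context, const std::string& word) const {
        const auto& v = successors.at(context);
        return uint32_t(std::lower_bound(v.begin(), v.end(), word) - v.begin());
    }
};

For example, if the context "the red" has only ever been followed by the words "car" and "house", then "house" is encoded as 1 rather than as its (possibly very large) vocabulary identifier.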
We show that it is possible to store a dynamic ordered set $\mathcal{S}(n,u)$ of $n$ integers drawn from a bounded universe of size $u$ in space close to the information-theoretic lower bound and yet preserve the asymptotic time optimality of the operations.
Our results leverage the Elias-Fano representation of $\mathcal{S}(n, u)$, which takes $\textsf{EF}(\mathcal{S}(n, u)) = n\lceil \log\frac{u}{n}\rceil + 2n$ bits of space and can be shown to be less than half a bit per element away from the information-theoretic minimum.
Considering a RAM model with memory words of $\Theta(\log u)$ bits, we focus on the case in which the integers of $\mathcal{S}$ are drawn from a polynomial universe of size $u = n^\gamma$, for any $\gamma = \Theta(1)$.
We represent $\mathcal{S}(n,u)$ with $\textsf{EF}(\mathcal{S}(n, u)) + o(n)$ bits of space and:
1. support static predecessor/successor queries in $\mathcal{O}(\min\{1+\log\frac{u}{n}, \log\log n\})$ time;
2. make $\mathcal{S}$ grow in an append-only fashion by spending $\mathcal{O}(1)$ per inserted element;
3. support random access in $\mathcal{O}(\frac{\log n}{\log\log n})$ worst-case, insertions/deletions in $\mathcal{O}(\frac{\log n}{\log\log n})$ amortized and predecessor/successor queries in $\mathcal{O}(\min\{1+\log\frac{u}{n}, \log\log n\})$ worst-case time. These time bounds are optimal.
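To give a feel for the numbers, an illustrative calculation (not taken from the paper): for $n = 10^6$ integers drawn from a universe of size $u = 2^{32}$, Elias-Fano takes $\textsf{EF}(\mathcal{S}(n,u)) = n\lceil\log_2\frac{u}{n}\rceil + 2n = 10^6 \cdot 13 + 2\cdot 10^6 = 15\cdot 10^6$ bits, i.e., 15 bits per element instead of the 32 bits per element of a plain array, while the information-theoretic minimum $\lceil\log_2\binom{u}{n}\rceil$ is roughly 13.5 bits per element.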
State-of-the-art encoders for inverted indexes compress each posting list individually. Encoding clusters of posting lists offers the possibility of reducing the redundancy of the lists while maintaining a noticeable query processing speed.
In this paper we propose a new index representation based on clustering the collection of posting lists and, for each created cluster, building an ad-hoc reference list with respect to which all lists in the cluster are encoded with Elias-Fano. We describe a posting lists clustering algorithm tailored for our encoder and two methods for building the reference list of a cluster. Both approaches are heuristic and differ in the way postings are added to the reference list: either according to their frequency in the cluster or according to the number of bits necessary for their representation.
The extensive experimental analysis indicates that significant space reductions are indeed possible, beating the best state-of-the-art encoders.
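To illustrate the core encoding step in isolation, here is a simplified sketch with hypothetical names; it assumes the list is fully contained in the reference and ignores how exceptions are handled. Once a reference list has been built for a cluster, each posting list can be rewritten as the positions of its elements inside the reference, which live in the much smaller universe $[0, |R|)$ and are therefore cheaper to encode with Elias-Fano.

#include <algorithm>
#include <cstdint>
#include <vector>

// Map a posting list L (assumed here to be fully contained in the sorted
// reference R) to the positions of its elements inside R. The result is again
// a strictly increasing sequence, but over the universe [0, |R|) instead of [0, u).
std::vector<uint32_t> encode_against_reference(const std::vector<uint32_t>& L,
                                               const std::vector<uint32_t>& R) {
    std::vector<uint32_t> positions;
    positions.reserve(L.size());
    for (uint32_t x : L) {
        auto it = std::lower_bound(R.begin(), R.end(), x);
        positions.push_back(uint32_t(it - R.begin())); // rank of x in R
    }
    return positions;
}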