"SPLADE" redirects here. For the eating utensil, see splayd.
Learned sparse retrieval or sparse neural search is an approach to information retrieval which uses a sparse vector representation of queries and documents.[1] It borrows techniques from both lexical bag-of-words and vector embedding algorithms, and is claimed to perform better than either alone. The best-known sparse neural search systems are SPLADE[2] and its successor SPLADE v2.[3] Others include DeepCT,[4] uniCOIL,[5] EPIC,[6] DeepImpact,[7] TILDE and TILDEv2,[8] Sparta,[9] SPLADE-max, and DistilSPLADE-max.[3]
There are also extensions of sparse retrieval approaches to the vision-language domain, where these methods are applied to multimodal data, such as combining text with images.[10] This expansion enables the retrieval of relevant content across different modalities, such as finding images based on text queries or vice versa.
Some implementations of SPLADE have similar latency to Okapi BM25 lexical search while giving as good results as state-of-the-art neural rankers on in-domain data.[11]
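The sketch below is a minimal illustration (not taken from any of the cited systems) of the scoring model these methods share: a query and a document are each represented as a sparse mapping from terms to learned weights, and relevance is the dot product over the terms they have in common, the same operation an inverted index evaluates efficiently. All term weights in the example are invented for illustration.

def sparse_dot(query_vec: dict[str, float], doc_vec: dict[str, float]) -> float:
    # Only terms present in both sparse vectors contribute to the score.
    return sum(w * doc_vec[t] for t, w in query_vec.items() if t in doc_vec)

# A learned model assigns the weights and may activate terms not literally present
# in the text (term expansion), unlike a purely lexical bag-of-words model.
query = {"sparse": 1.4, "retrieval": 1.1, "search": 0.6}                 # hypothetical weights
doc = {"sparse": 0.9, "retrieval": 1.3, "index": 0.7, "neural": 0.5}     # hypothetical weights

print(sparse_dot(query, doc))  # 1.4*0.9 + 1.1*1.3 = 2.69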
SPRINT is a toolkit for evaluating neural sparse retrieval systems.[13]
SPLADE
SPLADE (Sparse Lexical and Expansion Model) is a neural retrieval model that learns sparse vector representations for queries and documents, combining elements of traditional lexical matching with semantic representations derived from transformer-based architectures.[14] Unlike dense retrieval models that rely on continuous vector spaces, SPLADE produces sparse outputs that are compatible with inverted index structures commonly used in information retrieval systems.[14]
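As an illustration of this design, the following sketch encodes a document into a sparse vocabulary-weight vector in the SPLADE v2 / SPLADE-max style: a log-saturated ReLU is applied to the logits of a BERT masked-language-model head and the result is max-pooled over the input tokens. It assumes the Hugging Face transformers and PyTorch libraries; the checkpoint name is an assumption for illustration, not something specified here.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "naver/splade-cocondenser-ensembledistil"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

def encode(text: str) -> dict[str, float]:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits              # shape (1, seq_len, vocab_size)
    # Log-saturated ReLU of the MLM logits, max-pooled over input tokens
    # (the pooling used by SPLADE-max / SPLADE v2); padding positions are masked out.
    mask = inputs["attention_mask"].unsqueeze(-1)    # shape (1, seq_len, 1)
    weights = torch.max(torch.log1p(torch.relu(logits)) * mask, dim=1).values.squeeze(0)
    nonzero = weights.nonzero().squeeze(-1)
    return {tokenizer.convert_ids_to_tokens([i.item()])[0]: weights[i].item() for i in nonzero}

# The resulting sparse vocabulary-weight vector can be stored in an inverted index
# and scored against a query vector with a dot product.
print(sorted(encode("learned sparse retrieval").items(), key=lambda kv: -kv[1])[:10])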
The original SPLADE model was introduced at the 44th International ACM SIGIR Conference in 2021.[14] An updated version, SPLADE v2, modified the pooling mechanism, document expansion strategy, and training objective, including the use of knowledge distillation. Empirical evaluations have shown improvements on benchmarks such as the TREC Deep Learning 2019 dataset and the BEIR benchmark suite.[15]
These models aim to maintain retrieval efficiency comparable to traditional sparse methods while enhancing semantic matching capabilities, offering a balance between effectiveness and computational cost[16].
^ a b Formal, Thibault; Piwowarski, Benjamin; Lassance, Carlos; Clinchant, Stéphane (21 September 2021). "SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval". arXiv:2109.10086v1 [cs.IR].
^ Lin, Jimmy; Ma, Xueguang (28 June 2021). "A few brief notes on DeepImpact, COIL, and a conceptual framework for information retrieval techniques". arXiv:2106.14807 [cs.IR].
^ Nguyen, Thong; Hendriksen, Mariya; Yates, Andrew; de Rijke, Maarten (2024). "Multimodal Learned Sparse Retrieval with Probabilistic Expansion Control". European Conference on Information Retrieval. Cham: Springer Nature Switzerland. pp. 448–464.