# SPLADE CoCondenser SelfDistil
| Property | Value |
|---|---|
| License | cc-by-nc-sa-4.0 |
| Paper | View Paper |
| Performance (MRR@10) | 37.6 on MS MARCO dev |
| Recall@1000 | 98.4 on MS MARCO dev |
## What is splade-cocondenser-selfdistil?
SPLADE CoCondenser SelfDistil is a passage retrieval model that leverages sparse neural representations for efficient information retrieval. Developed by NAVER, it implements the SPLADE (SParse Lexical AnD Expansion) architecture on top of a CoCondenser-pretrained encoder, using self-distillation during training to achieve strong performance on passage retrieval tasks.
## Implementation Details
The model combines a BERT-based transformer encoder with sparse output representations and knowledge distillation. It produces bag-of-words-style vectors over the model vocabulary while retaining the benefits of contextual embeddings.
- Implements CoCondenser architecture for enhanced contextual understanding
- Utilizes self-distillation for improved model efficiency
- Optimized for both query and document expansion
- Achieves sparse representations for efficient retrieval
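The sparse representations above can be illustrated numerically. SPLADE-style models pass each token through the MLM head, apply log(1 + ReLU(·)) to the logits, and max-pool over the sequence, yielding one vocabulary-sized vector with few nonzero entries. A minimal NumPy sketch of that pooling step, with random logits standing in for actual MLM head outputs (names and values here are illustrative, not the model's API):

```python
import numpy as np

def splade_pool(logits: np.ndarray) -> np.ndarray:
    """Aggregate per-token MLM logits of shape (seq_len, vocab_size)
    into one sparse vector: log(1 + ReLU(logit)), max-pooled over tokens."""
    weights = np.log1p(np.maximum(logits, 0.0))  # log(1 + ReLU), zeroes negatives
    return weights.max(axis=0)                   # max-pool over the sequence

# Random stand-in logits; a mostly-negative distribution mimics how
# the saturation function induces sparsity in the pooled vector.
rng = np.random.default_rng(0)
logits = rng.normal(loc=-2.0, scale=1.0, size=(8, 100))
vec = splade_pool(logits)
print(f"nonzero terms: {np.count_nonzero(vec)} / {vec.size}")
```

Because most pooled weights are exactly zero, the resulting vectors can be stored and searched with classic inverted-index machinery.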
## Core Capabilities
- High-performance passage retrieval with 37.6 MRR@10 on MS MARCO
- Exceptional recall with 98.4 R@1000 on MS MARCO dev set
- Efficient query expansion capabilities
- Effective document expansion functionality
- Optimized for production deployment through inference endpoints
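The efficiency claims above follow from how sparse vectors are scored: relevance is a dot product between query and document term weights, which maps directly onto an inverted index that only touches postings for the query's nonzero terms. A toy sketch with made-up term weights (this is an assumption-laden illustration, not the model's actual output or serving code):

```python
from collections import defaultdict

# Toy sparse vectors (term -> weight), as a SPLADE-style encoder might emit.
# Weights are invented for illustration.
docs = {
    "d1": {"neural": 1.2, "retrieval": 0.9, "sparse": 0.4},
    "d2": {"dense": 1.1, "embedding": 0.8},
    "d3": {"retrieval": 1.0, "passage": 0.7, "sparse": 0.6},
}

# Build an inverted index: term -> list of (doc_id, weight).
index = defaultdict(list)
for doc_id, vec in docs.items():
    for term, w in vec.items():
        index[term].append((doc_id, w))

def search(query_vec, top_k=2):
    """Score documents by sparse dot product, visiting only the
    posting lists of the query's nonzero terms."""
    scores = defaultdict(float)
    for term, qw in query_vec.items():
        for doc_id, dw in index.get(term, []):
            scores[doc_id] += qw * dw
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

results = search({"sparse": 1.0, "retrieval": 0.5})
print(results)  # d3 scores 0.6*1.0 + 1.0*0.5 = 1.1, ranking first
```

Query and document expansion show up here as extra nonzero terms the encoder adds beyond the surface tokens, which widen the set of posting lists a query can match.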
## Frequently Asked Questions
**Q: What makes this model unique?**
This model stands out through its combination of CoCondenser architecture and self-distillation techniques, achieving state-of-the-art performance while maintaining efficient sparse representations for practical deployment.
**Q: What are the recommended use cases?**
The model is particularly well-suited for passage retrieval tasks, document search systems, and information retrieval applications where both accuracy and efficiency are crucial.