# Light-R1-14B-DS-GGUF
| Property | Value |
|---|---|
| Author | qihoo360 |
| Model Size | 14B parameters |
| Model URL | Hugging Face |
| Quantization Variants | Int4, Int8, Q4-KM |
## What is Light-R1-14B-DS-GGUF?
Light-R1-14B-DS-GGUF is a 14-billion-parameter language model developed by qihoo360, distributed in GGUF format with multiple quantization options. The model performs strongly on AIME (American Invitational Mathematics Examination) benchmarks, scoring up to 74.0 on AIME24 and 60.2 on AIME25 in its unquantized form.
## Implementation Details
The model ships in several quantized variants, each suited to a different use case: Q4_0 (int4), Q8_0 (int8), and Q4-KM. Each trades model size against accuracy, with the int8 variant staying particularly close to the base model's scores:
| Variant | AIME24 | AIME25 |
|---|---|---|
| Base model | 74.0 | 60.2 |
| Q4_0 (int4) | 70.1 | 54.9 |
| Q8_0 (int8) | 71.9 | 59.4 |
| Q4-KM | 70.0 | 61.3 |
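To put the size/accuracy trade-off in concrete terms, the rough on-disk footprint of each variant can be estimated from bits-per-weight. The figures below are approximate averages for llama.cpp quantization formats (not measured file sizes for this model), and real GGUF files also carry metadata:

```python
# Rough GGUF file-size estimate for a 14B-parameter model at common
# quantization levels. Bits-per-weight values are approximate llama.cpp
# averages (assumption): Q4_0 ~4.5, Q8_0 ~8.5, Q4_K_M ~4.85.

PARAMS = 14e9  # 14 billion parameters

BITS_PER_WEIGHT = {
    "Q4_0": 4.5,
    "Q8_0": 8.5,
    "Q4_K_M": 4.85,
}

def approx_size_gb(params: float, bpw: float) -> float:
    """Approximate on-disk size in GB: parameters * bits / 8 bits-per-byte."""
    return params * bpw / 8 / 1e9

for name, bpw in BITS_PER_WEIGHT.items():
    print(f"{name}: ~{approx_size_gb(PARAMS, bpw):.1f} GB")
# Q4_0: ~7.9 GB, Q8_0: ~14.9 GB, Q4_K_M: ~8.5 GB
```

The estimate explains why the int4 variants roughly halve memory use relative to int8 while giving up only a few benchmark points.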
## Core Capabilities
- Strong performance on mathematical reasoning tasks
- Multiple quantization options for different deployment scenarios
- Efficient memory usage through various compression techniques
- Maintained performance even in compressed formats
## Frequently Asked Questions
**Q: What makes this model unique?**
The model stands out for its strong performance on mathematical reasoning tasks while offering quantization options that preserve most of its accuracy. The Q4-KM variant even edges out the base model on AIME25 (61.3 vs. 60.2).
**Q: What are the recommended use cases?**
Given its strong performance on AIME benchmarks, this model is particularly well-suited for mathematical reasoning tasks. The different quantization options make it versatile for various deployment scenarios, from resource-constrained environments (using int4) to higher-performance requirements (using int8).
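Selecting a variant for a deployment target can be reduced to a simple rule: pick the highest-fidelity quantization that fits the available memory. A minimal sketch, using the approximate sizes estimated above (the `pick_variant` helper and the GB figures are illustrative assumptions, not part of the model release):

```python
# Hypothetical helper: choose the largest (most faithful) quantization
# variant that fits a given memory budget. Sizes in GB are rough estimates
# for a 14B GGUF model, ordered from largest to smallest.

VARIANTS = [
    ("Q8_0", 14.9),    # best accuracy retention, largest footprint
    ("Q4_K_M", 8.5),
    ("Q4_0", 7.9),     # smallest, for resource-constrained environments
]

def pick_variant(budget_gb: float):
    """Return the first (largest) variant fitting within budget_gb, else None."""
    for name, size_gb in VARIANTS:
        if size_gb <= budget_gb:
            return name
    return None

print(pick_variant(16.0))  # int8 fits with ~16 GB available
print(pick_variant(10.0))  # falls back to a 4-bit variant
print(pick_variant(4.0))   # no variant fits
```

In practice you would also budget headroom for the KV cache and runtime overhead, so the cutoffs should sit somewhat below total system memory.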