DeepSeek-R1-Distill-Qwen-7B-rk3588-rkllm-1.1.4
| Property | Value |
|---|---|
| Base Model | Qwen2.5-Math-7B |
| License | MIT License |
| Context Length | 32,768 tokens |
| Paper | arXiv:2501.12948 |
What is DeepSeek-R1-Distill-Qwen-7B-rk3588-rkllm-1.1.4?
This model is a distilled 7B-parameter variant of the much larger DeepSeek-R1, built on the Qwen architecture and, as the name indicates, converted with Rockchip's RKLLM toolkit (v1.1.4) to run on the RK3588 NPU. Through distillation it retains much of the larger model's reasoning ability at a small fraction of its size.
Implementation Details
The model is built on the Qwen2.5-Math-7B architecture and fine-tuned on reasoning samples curated from DeepSeek-R1. It posts strong results, including 55.5% pass@1 on AIME 2024 and 92.8% pass@1 on MATH-500 (see the pass@k sketch below).
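The pass@1 figures for R1-style models are typically computed with the unbiased pass@k estimator of Chen et al. (2021), averaging over several sampled generations per problem. A minimal sketch; the sample counts in the example are illustrative, not taken from this card:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: generations sampled per problem
    c: generations that were correct
    k: the k in pass@k
    """
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative: 16 samples on one problem, 9 correct
print(pass_at_k(n=16, c=9, k=1))  # 0.5625 -- reduces to c/n when k == 1
```

The per-problem estimates are then averaged across the benchmark to give the reported pass@1.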
- Supports a 32,768-token context length
- Recommended sampling temperature of 0.5–0.7 (0.6 suggested) to avoid repetitive or incoherent output
- Permits commercial use and modification under the MIT License
- Original weights are deployable with vLLM and SGLang (a deployment sketch follows this list)
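A minimal deployment sketch, assuming a GPU host and the original safetensors release (deepseek-ai/DeepSeek-R1-Distill-Qwen-7B); the .rkllm artifact in this repository targets the RK3588 NPU and is not loadable by vLLM:

```python
from vllm import LLM, SamplingParams

# Original Hugging Face weights, not the RK3588-converted .rkllm file
llm = LLM(model="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
          max_model_len=32768)

params = SamplingParams(
    temperature=0.6,  # recommended range is 0.5-0.7
    top_p=0.95,
    max_tokens=4096,  # leave headroom for long chain-of-thought output
)

outputs = llm.generate(
    ["Prove that the sum of two even integers is even. "
     "Please reason step by step."],
    params,
)
print(outputs[0].outputs[0].text)
```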
Core Capabilities
- Strong mathematical reasoning with step-by-step solution generation
- Competitive coding ability, reflected in a 1,189 Codeforces rating
- Efficient handling of complex reasoning tasks
- Multi-turn conversation support (a chat-template sketch follows this list)
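A minimal multi-turn sketch using the Hugging Face tokenizer's chat template. Stripping the `<think>...</think>` trace before a reply re-enters the history follows DeepSeek's usage notes; the generation backend itself is omitted here:

```python
import re
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B")

def strip_thinking(reply: str) -> str:
    """Drop the <think>...</think> reasoning trace before storing the turn."""
    return re.sub(r"<think>.*?</think>", "", reply, flags=re.DOTALL).strip()

messages = [{"role": "user", "content": "What is 12 * 17?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True)

# ... run `prompt` through your inference backend to obtain `reply` ...
reply = "<think>12 * 17 = 204</think>12 * 17 = 204."

messages.append({"role": "assistant", "content": strip_thinking(reply)})
messages.append({"role": "user", "content": "And that divided by 4?"})
```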
Frequently Asked Questions
Q: What makes this model unique?
It maintains strong reasoning performance despite being distilled from a much larger model, delivering competitive math and coding results in a 7B package that is practical to deploy, here converted for on-device inference on the RK3588 NPU.
Q: What are the recommended use cases?
The model excels at mathematical problem-solving, coding tasks, and general reasoning. It is particularly well suited to applications that need detailed step-by-step solutions and technical reasoning; a prompt-construction sketch following DeepSeek's published usage recommendations appears below.
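For math-style prompts, DeepSeek's usage recommendations suggest avoiding a system prompt, placing all instructions in the user turn, and asking for the final answer in \boxed{}. A minimal sketch; the question is illustrative:

```python
question = "Find the remainder when 7^100 is divided by 13."
user_prompt = (
    f"{question}\n"
    "Please reason step by step, and put your final answer within \\boxed{}."
)

# No system message: DeepSeek recommends putting all instructions
# in the user turn for the R1 distill models.
messages = [{"role": "user", "content": user_prompt}]
```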