# StableLM Zephyr 3B GGUF

| Property | Value |
|---|---|
| Parameter Count | 2.8B |
| Model Type | Causal Language Model |
| License | StabilityAI Non-Commercial Research |
| Paper | DPO Paper |
| Base Model | stabilityai/stablelm-3b-4e1t |
## What is stablelm-zephyr-3b-GGUF?

StableLM Zephyr 3B GGUF is a quantized version of Stability AI's instruction-tuned language model, which was aligned using Direct Preference Optimization (DPO). The GGUF format enables efficient deployment across a range of platforms while preserving strong performance, including a 6.64 MT-Bench score and a 76% AlpacaEval win rate.
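For intuition about what DPO optimizes, the per-pair objective can be sketched as follows. This is a minimal, illustrative single-example version (the function name, inputs, and `beta` value are assumptions, not taken from the model card):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair: push the policy to prefer the
    chosen response over the rejected one, relative to a reference model."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log sigmoid(logits): small when the policy already prefers "chosen"
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# When the policy and reference agree exactly, the loss is log(2):
print(dpo_loss(-1.0, -1.0, -1.0, -1.0))
```

In practice this is averaged over a dataset of (prompt, chosen, rejected) triples; the released model was trained this way on public preference datasets.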
## Implementation Details

The model is available in multiple quantization formats, from 2-bit to 8-bit precision, each offering a different trade-off between file size and output quality. The recommended Q4_K_M variant strikes a good balance at a 1.71 GB file size.
- Multiple quantization options (Q2_K through Q8_0)
- Supports context length of 4096 tokens
- Optimized for both CPU and GPU inference
- Compatible with popular frameworks like llama.cpp
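The size/precision trade-off above can be estimated from the parameter count and the effective bits per weight of each quant type. The bits-per-weight figures below are approximate assumed values for llama.cpp's quant types, and the estimate ignores metadata and any non-quantized tensors, so real files run slightly larger:

```python
def gguf_size_gb(n_params, bits_per_weight):
    """Rough GGUF file-size estimate: parameters x effective bits per weight,
    converted to decimal gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# Approximate effective bits/weight for common llama.cpp quant types (assumed)
BPW = {"Q2_K": 2.6, "Q4_0": 4.5, "Q4_K_M": 4.85, "Q5_K_M": 5.7, "Q8_0": 8.5}

for name, bpw in BPW.items():
    print(f"{name}: ~{gguf_size_gb(2.8e9, bpw):.2f} GB")
```

For the 2.8B-parameter model, this puts Q4_K_M at roughly 1.7 GB, consistent with the 1.71 GB figure quoted above.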
## Core Capabilities
- Strong performance on MT-Bench (6.64) and AlpacaEval (76%)
- Effective on various benchmarks including ARC (47.0%), MMLU (46.3%), and GSM8K (42.3%)
- Specialized instruction following with custom chat template
- Reduced harmful outputs compared to larger models
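The custom chat template mentioned above can be applied with a small helper like the one below. The tag layout follows the commonly documented Zephyr-style format; the function name is illustrative, and the exact template should be verified against the model's tokenizer configuration before relying on it:

```python
def format_prompt(user_message, system=None):
    """Wrap a user message (and optional system message) in StableLM
    Zephyr-style chat tags. Assumed format; check the tokenizer's
    chat template for the authoritative version."""
    parts = []
    if system:
        parts.append(f"<|system|>\n{system}<|endoftext|>")
    parts.append(f"<|user|>\n{user_message}<|endoftext|>")
    parts.append("<|assistant|>\n")  # generation continues from here
    return "\n".join(parts)

print(format_prompt("List three uses of a 3B model."))
```

Passing the formatted string to any GGUF runtime (e.g. llama.cpp) should then elicit an assistant-style completion after the final tag.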
## Frequently Asked Questions

**Q: What makes this model unique?**
The model achieves impressive benchmark results despite its small size (3B parameters), making it well suited to resource-constrained environments while retaining strong instruction-following and general language capabilities.
**Q: What are the recommended use cases?**

The model is ideal for research and other non-commercial applications that need efficient deployment, particularly chatbots, instruction following, and general language understanding. It is especially valuable when computational resources are limited but solid performance is still required.