DeepSeek-R1-Distill-Qwen-1.5B-uncensored-GGUF

Maintained By
mradermacher


Property          Value
Author            mradermacher
Original Model    DeepSeek-R1-Distill-Qwen-1.5B-uncensored
Format            GGUF (Various Quantizations)
Model Size Range  0.9GB - 3.7GB

What is DeepSeek-R1-Distill-Qwen-1.5B-uncensored-GGUF?

This is a quantized version of the DeepSeek-R1-Distill-Qwen-1.5B-uncensored model, converted to the GGUF format for efficient deployment with a reduced memory footprint. Multiple quantization levels are provided so you can trade model size against output quality.
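
If you haven't worked with GGUF files before, any llama.cpp-compatible runtime can load them. Below is a minimal sketch using llama-cpp-python; the quant file name is an assumption based on the repo's usual naming scheme, so check it against the actual file list before running.

```python
# Minimal GGUF loading sketch with llama-cpp-python (one of several
# llama.cpp-compatible runtimes). The model_path file name is assumed
# from the repo's naming convention -- verify against the file list.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-Qwen-1.5B-uncensored.Q4_K_M.gguf",
    n_ctx=4096,  # context window; lower this on tight memory budgets
)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```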

Implementation Details

The model provides multiple quantization formats, each optimized for different use cases (see the download sketch after this list):

  • Q2_K: Smallest size at 0.9GB
  • Q4_K_S/M: Fast and recommended (1.2GB)
  • Q6_K: Very good quality at 1.6GB
  • Q8_0: Best quality while maintaining speed at 2.0GB
  • F16: Original unquantized 16-bit weights at 3.7GB
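
To fetch one specific quant rather than the whole repo, the huggingface_hub client works well. A minimal sketch, assuming the usual mradermacher repo id and file naming (verify both on the model page before running):

```python
# Hypothetical download sketch using huggingface_hub; the repo_id and
# filename below are assumptions -- confirm them on the model page.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-GGUF",
    filename="DeepSeek-R1-Distill-Qwen-1.5B-uncensored.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded quant file
```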

Core Capabilities

  • Multiple quantization options for different deployment scenarios
  • Size-optimized versions for resource-constrained environments
  • Quality-preserving quantization techniques
  • Compatible with standard GGUF loading implementations

Frequently Asked Questions

Q: What makes this model unique?

This model offers various quantization options of the DeepSeek-R1-Distill-Qwen-1.5B-uncensored model, allowing users to choose the optimal trade-off between model size and quality for their specific use case. The availability of multiple GGUF formats makes it highly versatile for different deployment scenarios.

Q: What are the recommended use cases?

For most applications, the Q4_K_S or Q4_K_M variants (1.2GB) are recommended, as they offer a good balance of speed and quality. Where quality matters most, use the Q8_0 variant (2.0GB); resource-constrained environments may prefer the Q2_K variant (0.9GB).
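
If you want to make that choice programmatically, here is an illustrative helper (not part of the release) that picks the largest quant fitting a given memory budget, using the sizes listed above:

```python
# Illustrative quant selector (hypothetical helper, not shipped with the
# model). Sizes in GB are taken from the table above.
QUANT_SIZES_GB = {"Q2_K": 0.9, "Q4_K_M": 1.2, "Q6_K": 1.6, "Q8_0": 2.0, "F16": 3.7}

def pick_quant(budget_gb: float) -> str:
    """Return the largest (and thus highest-quality) quant within budget."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    if not fitting:
        raise ValueError(f"No quant fits a {budget_gb} GB budget")
    return max(fitting, key=fitting.get)

print(pick_quant(1.5))  # -> Q4_K_M
```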
