Gemma-3-12b-it-MAX-HORROR-Imatrix-GGUF
| Property | Value |
|---|---|
| Base Model | Google Gemma-3 12B (instruction-tuned) |
| Context Length | 128k tokens |
| Quantization | GGUF format with MAX optimization |
| Author | DavidAU |
What is Gemma-3-12b-it-MAX-HORROR-Imatrix-GGUF?
This is a specialized version of Google's Gemma-3 model, enhanced with a "Neo Horror Imatrix" and optimized quantization for improved performance. The model features MAX optimization, in which the embed and output tensors are kept at BF16 (full precision) across all quantizations, trading slightly larger file sizes for enhanced quality and depth.
Implementation Details
The model incorporates several key technical refinements: the "Horror Imatrix", built using Grand Horror 16B, adds a horror-themed element to outputs, while the NEO IMATRIX dataset improves instruction following and general performance. The model ships in quantization levels ranging from IQ1_S to Q8_0, with recommendations for different use cases.
- Enhanced BF16 precision for embed and output tensors
- 128k context window for extended processing
- Horror-themed fine-tuning using Grand Horror 16B
- Optimized for creative writing and narrative generation
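A minimal sketch of loading one of these GGUF quantizations with the llama-cpp-python library. The file name, context size, and sampling settings below are illustrative assumptions, not values from this card; substitute the quant file you actually downloaded, and note that while the model supports a 128k window, you should set `n_ctx` to what fits in your RAM.

```python
from llama_cpp import Llama

# Illustrative file name -- replace with the quant file you downloaded.
llm = Llama(
    model_path="Gemma-3-12b-it-MAX-HORROR-IQ4_XS.gguf",
    n_ctx=8192,       # model supports up to 128k; use what fits in memory
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm(
    "Write the opening paragraph of a gothic horror story.",
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```

Lower quants (e.g. IQ3/IQ4) fit on smaller GPUs at some quality cost; the BF16 embed/output tensors are preserved inside the GGUF file regardless of the quant level chosen.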
Core Capabilities
- Superior instruction following with NEO IMATRIX integration
- Enhanced creative writing with horror elements
- Multiple quantization options for different hardware configurations
- Optimal performance at IQ4s/Q4s levels for horror-themed content
Frequently Asked Questions
Q: What makes this model unique?
The combination of MAX optimization with horror-themed fine-tuning and the NEO IMATRIX dataset creates a uniquely capable model for creative writing, particularly in the horror genre. The BF16 precision for crucial tensors ensures higher quality outputs.
Q: What are the recommended use cases?
The model excels at creative writing, particularly horror-themed content. IQ3s/IQ4XS/IQ4NL quantizations are recommended for creative tasks, while Q5s/Q6/Q8 are better for general usage. Q4_0/Q5_0 are optimized for mobile devices.
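The quantization guidance above can be captured as a small lookup helper. This is a hypothetical convenience function, not part of any official tooling, and the expansion of "Q5s/Q6/Q8" into concrete K-quant names is an assumption on our part; the mapping otherwise mirrors this card's recommendations.

```python
# Hypothetical helper encoding this card's quantization recommendations.
# The specific Q5/Q6 K-quant names are assumed expansions of "Q5s/Q6/Q8".
RECOMMENDED_QUANTS = {
    "creative": ["IQ3_S", "IQ4_XS", "IQ4_NL"],        # horror / creative writing
    "general": ["Q5_K_S", "Q5_K_M", "Q6_K", "Q8_0"],  # general usage
    "mobile": ["Q4_0", "Q5_0"],                       # mobile-optimized quants
}

def recommend_quants(use_case: str) -> list[str]:
    """Return the quant levels this card suggests for a given use case."""
    try:
        return RECOMMENDED_QUANTS[use_case]
    except KeyError:
        raise ValueError(f"unknown use case: {use_case!r}")

print(recommend_quants("mobile"))  # → ['Q4_0', 'Q5_0']
```

A dictionary keeps the mapping easy to extend if further quants are published for this model.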