# Moistral-11B-v3
| Property | Value |
|---|---|
| Base Model | Fimbulvetr-11B-v2 |
| License | CC-BY-NC-4.0 |
| Training Data | 8K samples |
| Primary Use | Text Generation |
## What is Moistral-11B-v3?
Moistral-11B-v3 is an advanced fine-tuned language model based on Fimbulvetr-11B-v2, specifically optimized for creative text generation and narrative content. This third iteration brings significant improvements in both intelligence and vocabulary diversity, trained on an expanded dataset of 8,000 samples.
## Implementation Details
The model uses a Transformer architecture and the Alpaca Instruct prompt format. It supports multiple interaction modes, including character-based, narrator-based, and director-style inputs; a prompt-building sketch follows below.
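Since the card only names the Alpaca Instruct format, here is a minimal prompt-building sketch. The instruction wording and story opener are illustrative placeholders for the character/narrator/director modes mentioned above, not prescribed text from the model's training data.

```python
# Minimal sketch: building an Alpaca Instruct prompt for Moistral-11B-v3.
# The instruction text and story opener are illustrative placeholders.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n{response_prefix}"
)

def build_prompt(instruction: str, response_prefix: str = "") -> str:
    """Format an instruction (character, narrator, or director style) as an Alpaca prompt."""
    return ALPACA_TEMPLATE.format(instruction=instruction, response_prefix=response_prefix)

# Character-based mode: the instruction frames the model as a character.
prompt = build_prompt(
    "You are Mira, a ship's navigator. Continue the story from her point of view.",
    response_prefix="The storm had been chasing us for three days when",
)
print(prompt)
```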
- Enhanced genre balance with new categories including fantasy, science fiction, and diverse narrative styles
- Improved formatting and sanitization of training data
- Optimized for long-form content generation up to 8K tokens
- Multiple quantized variants available, including imatrix GGUF builds and EXL2 (a loading sketch follows this list)
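A minimal loading sketch using llama-cpp-python, assuming a locally downloaded GGUF quant; the file name below is a hypothetical placeholder, and the sampling settings are illustrative rather than recommended values.

```python
# Minimal sketch: loading a GGUF quantization of Moistral-11B-v3 with llama-cpp-python.
# The model_path is a placeholder; point it at whichever quant you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./Moistral-11B-v3-Q5_K_M.gguf",  # hypothetical file name
    n_ctx=8192,        # 8K context, matching the long-form target above
    n_gpu_layers=-1,   # offload all layers to GPU when available
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nNarrate the opening scene of a space-opera heist.\n\n"
    "### Response:\n"
)

output = llm(prompt, max_tokens=512, temperature=0.9, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```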
## Core Capabilities
- Long-form narrative generation with consistent quality
- Multiple perspective handling (character, narrator, director)
- Enhanced vocabulary and genre diversity
- Improved chat and instruct modes
## Frequently Asked Questions
Q: What makes this model unique?
The model stands out for its ability to generate coherent long-form content while maintaining consistent quality and character perspectives. It features an extensively cleaned dataset and improved training methodology compared to previous versions.
Q: What are the recommended use cases?
The model excels in creative writing scenarios, particularly generating novel-style content with a consistent narrative flow. It is optimized for the Novel/Story format, and longer pieces work best with regenerative approaches, iteratively regenerating or extending responses rather than requesting everything in one pass (see the sketch below).
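The card does not spell out the regenerative workflow; below is one possible interpretation, a loop that generates a continuation chunk, re-rolls it when it comes back too short, and appends it to the running story. It assumes the llama-cpp-python setup and hypothetical file name from the earlier sketch, and the chunk sizes and thresholds are arbitrary illustration values.

```python
# One possible "regenerative" loop (an interpretation, not the card's prescribed workflow):
# generate a continuation chunk, re-roll weak chunks, and append until a target length.
from llama_cpp import Llama

llm = Llama(model_path="./Moistral-11B-v3-Q5_K_M.gguf", n_ctx=8192)  # hypothetical file name

def extend_story(opening: str, target_chars: int = 4000,
                 chunk_tokens: int = 256, max_rerolls: int = 3) -> str:
    story = opening
    while len(story) < target_chars:
        chunk = ""
        for _ in range(max_rerolls):  # re-roll weak continuations instead of accepting them
            result = llm(
                f"### Instruction:\nContinue the story.\n\n### Response:\n{story}",
                max_tokens=chunk_tokens,
                temperature=0.9,
            )
            chunk = result["choices"][0]["text"].strip()
            if len(chunk) > 40:  # arbitrary "good enough" threshold for this sketch
                break
        if not chunk:
            break
        story += "\n" + chunk
    return story

print(extend_story("The lighthouse keeper found the letter on the first morning of winter."))
```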