Cydonia-24B-v2
| Property | Value |
|---|---|
| Base Model | Mistral Small 2501 |
| Context Length | Up to 24K tokens |
| Supported Templates | Mistral v7 Tekken, Metharme |
| Model URL | https://huggingface.co/TheDrummer/Cydonia-24B-v2 |
What is Cydonia-24B-v2?
Cydonia-24B-v2 is a language model fine-tuned by TheDrummer from Mistral Small 2501. The fine-tune emphasizes context handling and detail retention, and is designed to remain stable at context lengths up to 24K tokens.
Implementation Details
The model is distributed in several formats: the original weights, GGUF quantizations (iMatrix quants recommended), and EXL2. It uses the Mistral v7 Tekken chat template as the primary conversation format, with additional support for the Metharme template; a loading sketch follows the feature list below.
- Extended context handling up to 24K tokens
- Stable performance across long contexts
- Enhanced detail and scene retention capabilities
- Multiple format availability for different deployment needs
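For a concrete starting point, here is a minimal loading sketch using llama-cpp-python with a GGUF quant. The GGUF repo id, quant filename, and generation settings below are assumptions; check the model page for the actual artifact names.

```python
# Minimal sketch: loading a GGUF quant of Cydonia-24B-v2 with llama-cpp-python.
# The repo id and quant filename are assumptions -- verify them on the model page.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="TheDrummer/Cydonia-24B-v2-GGUF",  # assumed GGUF repo id
    filename="*Q4_K_M.gguf",                   # assumed quant; pick one that fits your hardware
    n_ctx=24576,                               # the 24K-token context window
)

# create_chat_completion applies the chat template bundled with the GGUF file,
# so the Mistral v7 Tekken formatting is handled automatically.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the story so far."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```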
Core Capabilities
- Robust detail retention for complex scenarios
- Stable performance in extended conversations
- Advanced vocabulary handling
- Consistent context management across long sequences
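Because stable behavior over long sequences depends on correct prompt formatting, it can be useful to inspect the rendered prompt directly. Below is a minimal sketch using transformers, assuming the model repo ships a chat template in its tokenizer config; if it does not, fall back to the template documented on the model page.

```python
# Minimal sketch: rendering the chat template shipped with the model repo.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TheDrummer/Cydonia-24B-v2")

messages = [
    {"role": "system", "content": "Stay in character."},
    {"role": "user", "content": "Describe the scene."},
]

# Returns the fully formatted prompt string, including whatever
# control tokens the Mistral v7 Tekken template defines.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```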
Frequently Asked Questions
Q: What makes this model unique?
A: The model's ability to handle extended context lengths while maintaining stability and detail retention sets it apart from comparable fine-tunes. Its optimization for both standard and complex scenarios makes it versatile across applications.
Q: What are the recommended use cases?
A: The model excels in applications that require extended context handling and detailed scene retention. It is particularly suitable for complex conversations and scenarios where maintaining context consistency is crucial.
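One practical concern in such scenarios is keeping the conversation inside the stable 24K-token window. The sketch below is a hypothetical history-trimming helper; `count_tokens` is an assumed callable (for example, wrapping `llm.tokenize` from the loading sketch above), and the budget numbers are illustrative.

```python
# Sketch: trimming chat history to stay inside an assumed 24K-token window.
def trim_history(messages, count_tokens, max_ctx=24576, reserve=512):
    """Drop the oldest non-system turns until the history fits the budget.

    Counts raw message text only and ignores chat-template overhead,
    so keep some headroom via `reserve`.
    """
    budget = max_ctx - reserve
    msgs = list(messages)
    while len(msgs) > 1 and sum(count_tokens(m["content"]) for m in msgs) > budget:
        del msgs[1]  # keep the system prompt at index 0, drop the oldest turn
    return msgs

# Example wiring with llama-cpp-python (`llm` from the earlier sketch):
# trimmed = trim_history(history, lambda t: len(llm.tokenize(t.encode("utf-8"))))
```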