Mistral-Evolved-11b-v0.1
| Property | Value |
|---|---|
| Parameter Count | 11 Billion |
| Base Model | Mistral-7B-v0.1 |
| Format | ChatML |
| Author | Replete-AI |
| Model URL | https://huggingface.co/rombodawg/rombos_Mistral-Evolved-11b-v0.1 |
What is Mistral-Evolved-11b-v0.1?
Mistral-Evolved-11b-v0.1 is an 11-billion-parameter language model developed by Replete-AI as an evolution of Mistral-7B-v0.1. It expands the base model's architecture from 7B to 11B parameters and has undergone additional pretraining on a private dataset to enhance its capabilities.
Implementation Details
The model utilizes the ChatML format as its primary prompt template, though it maintains compatibility with other formats like Alpaca. It has been made available in multiple quantizations, including GGUF and exl2 formats, making it accessible for various deployment scenarios.
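For reference, a ChatML-formatted prompt for this model would look like the following; the system and user messages here are illustrative placeholders:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Explain what additional pretraining means for a language model.<|im_end|>
<|im_start|>assistant
```

The model's reply is generated after the final `<|im_start|>assistant` header, and ChatML models conventionally close their turn with `<|im_end|>`.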
- Benchmark performance: 65.8% average across the six benchmarks listed under Core Capabilities
- Parameter count expanded from 7B to 11B
- ChatML as the primary prompt format, with Alpaca supported as an alternative
- Multiple quantization options available (GGUF and exl2)
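As a rough sketch of the GGUF route, the snippet below loads a quantized build with llama-cpp-python; the file name, context size, and generation settings are assumptions for illustration, not values from the model card:

```python
# Sketch: running a GGUF quantization of the model with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-evolved-11b-v0.1.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=4096,            # context window to allocate
    n_gpu_layers=-1,       # offload all layers to GPU if available; use 0 for CPU-only
    chat_format="chatml",  # force ChatML in case the GGUF metadata lacks a template
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what quantization trades off."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```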
Core Capabilities
- ARC: 62.2% accuracy
- HellaSwag: 84.65% accuracy
- MMLU: 63.11% accuracy
- TruthfulQA: 59.23% accuracy
- Winogrande: 75.77% accuracy
- GSM8K: 49.81% accuracy
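Assuming the 65.8% figure quoted above is the simple mean of these six scores, a quick check confirms the arithmetic:

```python
# Mean of the six reported benchmark scores.
scores = [62.20, 84.65, 63.11, 59.23, 75.77, 49.81]
print(f"{sum(scores) / len(scores):.1f}")  # 65.8
```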
Frequently Asked Questions
Q: What makes this model unique?
The model combines the proven architecture of Mistral-7B with expanded parameters and additional pretraining, achieving strong performance across various benchmarks while maintaining compatibility with multiple prompt formats.
Q: What are the recommended use cases?
Given its comprehensive benchmark performance, the model is well-suited for general-purpose language tasks, including question-answering, reasoning, and text completion. Its various quantization options make it adaptable for different computational requirements.
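As one possible starting point for such general-purpose use, the sketch below runs a single question-answering turn with the full-precision weights via Hugging Face transformers; it assumes the repository's tokenizer ships a ChatML chat template (otherwise, build the prompt manually as shown earlier) and that enough GPU memory is available for an 11B model:

```python
# Sketch: single question-answering turn with the full-precision weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rombodawg/rombos_Mistral-Evolved-11b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the tallest mountain on Earth?"},
]
# Format the messages with the tokenizer's chat template (expected to be ChatML)
# and append the assistant header so generation starts at the model's turn.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For tighter memory budgets, the GGUF or exl2 quantizations shown earlier are the more practical route.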