rombos_Mistral-Evolved-11b-v0.1

Maintained By
rombodawg

Mistral-Evolved-11b-v0.1

  • Parameter Count: 11 Billion
  • Base Model: Mistral-7B-v0.1
  • Format: ChatML
  • Author: Replete-AI
  • Model URL: https://huggingface.co/rombodawg/rombos_Mistral-Evolved-11b-v0.1

What is Mistral-Evolved-11b-v0.1?

Mistral-Evolved-11b-v0.1 is an advanced language model that represents a significant evolution of the original Mistral-7B-v0.1. Developed by Replete-AI, this model features an expanded architecture with 11 billion parameters and has undergone additional pretraining on a private dataset to enhance its capabilities.

Implementation Details

The model utilizes the ChatML format as its primary prompt template, though it maintains compatibility with other formats like Alpaca. It has been made available in multiple quantizations, including GGUF and exl2 formats, making it accessible for various deployment scenarios.
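
The snippet below is a minimal usage sketch with Hugging Face transformers, assuming the repository's tokenizer ships a ChatML chat template (the raw format is shown in the comment). The message contents are illustrative, not taken from the model card.

```python
# Minimal sketch: ChatML-style prompting via transformers.
# Assumes the tokenizer provides a chat template; the raw ChatML layout is:
#   <|im_start|>system
#   You are a helpful assistant.<|im_end|>
#   <|im_start|>user
#   {prompt}<|im_end|>
#   <|im_start|>assistant
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rombodawg/rombos_Mistral-Evolved-11b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what distinguishes this model from Mistral-7B."},
]
# Build the ChatML prompt and generate a reply.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```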

  • Benchmark Performance: 65.8% average score across major evaluations
  • Advanced parameter scaling from 7B to 11B
  • Optimized prompt handling with ChatML format
  • Multiple quantization options available (see the loading sketch after this list)
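
For the GGUF quantizations, a minimal loading sketch with llama-cpp-python follows. The .gguf filename and quantization level are assumptions for illustration; check the published quantized repositories for the actual files.

```python
# Hedged sketch: running a GGUF quantization with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="rombos_Mistral-Evolved-11b-v0.1.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,           # context window; adjust to available memory
    chat_format="chatml", # matches the model's recommended prompt template
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the benefits of model quantization."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```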

Core Capabilities

  • ARC: 62.2% accuracy
  • HellaSwag: 84.65% accuracy
  • MMLU: 63.11% accuracy
  • TruthfulQA: 59.23% accuracy
  • Winogrande: 75.77% accuracy
  • GSM8K: 49.81% accuracy
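
These six benchmarks correspond to the Open LLM Leaderboard suite, which is typically run with EleutherAI's lm-evaluation-harness. The sketch below shows one way to produce comparable numbers; the task names and default settings are assumptions and may differ from the configuration behind the scores above.

```python
# Hedged sketch: evaluating the model with lm-evaluation-harness (v0.4+).
# Task selection mirrors the Open LLM Leaderboard suite; few-shot counts and
# other settings may differ from those used for the reported scores.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=rombodawg/rombos_Mistral-Evolved-11b-v0.1,dtype=bfloat16",
    tasks=["arc_challenge", "hellaswag", "mmlu", "truthfulqa", "winogrande", "gsm8k"],
    batch_size="auto",
)
print(results["results"])
```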

Frequently Asked Questions

Q: What makes this model unique?

The model combines the proven architecture of Mistral-7B with expanded parameters and additional pretraining, achieving strong performance across various benchmarks while maintaining compatibility with multiple prompt formats.

Q: What are the recommended use cases?

Given its solid performance across these benchmarks, the model is well-suited for general-purpose language tasks such as question answering, reasoning, and text completion. Its quantization options make it adaptable to different hardware and memory budgets.
