# Loyal-Macaroni-Maid-7B
| Property | Value |
|---|---|
| Base Model | Mistral-7B-v0.1 |
| License | CC-BY-NC-4.0 |
| MT-Bench Score | 7.95 |
| MMLU Score | ~64.9 |
## What is Loyal-Macaroni-Maid-7B?
Loyal-Macaroni-Maid-7B is a 7B language model that combines strong benchmark performance with specialized roleplay abilities. Built with the DARE TIES merge method, it achieves an MT-Bench score comparable to GPT-3.5-Turbo while maintaining strong roleplay capabilities.
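For intuition, the snippet below is a minimal sketch of the DARE TIES idea applied to a single weight tensor: each fine-tuned model's delta from the base is randomly dropped according to its density and rescaled (DARE), a majority sign is elected per parameter, and only agreeing contributions are summed with their merge weights (TIES). This is an illustrative NumPy approximation, not the actual mergekit implementation, and every tensor, density, and weight in it is a placeholder.

```python
import numpy as np

def dare_ties_merge(base, finetuned, densities, weights, rng=None):
    """Illustrative DARE TIES merge of a single weight tensor.

    base       -- base-model tensor (standing in for Mistral-7B-v0.1 weights)
    finetuned  -- list of tensors from the models being merged
    densities  -- fraction of each delta kept by DARE (e.g. 0.8 or 0.4)
    weights    -- per-model merge weights (summing to 1.3 for this merge)
    """
    if rng is None:
        rng = np.random.default_rng(0)
    contributions = []
    for ft, density, weight in zip(finetuned, densities, weights):
        delta = ft - base                             # task vector vs. the base
        keep = rng.random(delta.shape) < density      # DARE: random drop ...
        delta = np.where(keep, delta, 0.0) / density  # ... then rescale
        contributions.append(weight * delta)

    stacked = np.stack(contributions)
    elected_sign = np.sign(stacked.sum(axis=0))       # TIES sign election
    agree = np.sign(stacked) == elected_sign          # keep agreeing deltas only
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0)
    return base + merged_delta

# Toy usage with placeholder tensors and the 0.8 / 0.4 densities described below.
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
finetuned = [base + rng.normal(scale=0.1, size=(4, 4)) for _ in range(3)]
merged = dare_ties_merge(base, finetuned, densities=[0.8, 0.4, 0.4], weights=[0.5, 0.4, 0.4])
```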
## Implementation Details
The model is a DARE TIES merge of several models, including Marcoroni-neural-chat-7B-v2, loyal-piano-m7, Toppy-M-7B, Noromaid-7b-v0.2, and NSFW_DPO_vmgb-7b. Each contributing model is assigned its own density and weight to balance its influence: the primary models are merged at 0.8 density and the secondary models at 0.4 density. A configuration sketch follows the list below.
- DARE TIES merge with a total weight of 1.3 to better preserve each model's characteristics
- Supports both a custom prompt format and the Alpaca template
- Optimized for SillyTavern use with specific character cards
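A merge of this shape could be expressed as a mergekit-style configuration. The sketch below builds such a config as a Python dict and writes it to YAML; the repository paths and the individual per-model weights are illustrative assumptions, since the description above only states the 0.8/0.4 densities and the 1.3 total weight.

```python
import yaml  # pip install pyyaml

# Hypothetical mergekit-style DARE TIES config for a merge of this shape.
# Repo paths and per-model weights are assumptions; only the 0.8 / 0.4
# densities and the 1.3 total weight come from the description above.
merge_config = {
    "merge_method": "dare_ties",
    "base_model": "mistralai/Mistral-7B-v0.1",
    "models": [
        {"model": "AIDC-ai-business/Marcoroni-neural-chat-7B-v2",
         "parameters": {"density": 0.8, "weight": 0.5}},
        {"model": "SanjiWatsuki/loyal-piano-m7",
         "parameters": {"density": 0.8, "weight": 0.5}},
        {"model": "Undi95/Toppy-M-7B",
         "parameters": {"density": 0.4, "weight": 0.1}},
        {"model": "NeverSleep/Noromaid-7b-v0.2",
         "parameters": {"density": 0.4, "weight": 0.1}},
        {"model": "athirdpath/NSFW_DPO_vmgb-7b",
         "parameters": {"density": 0.4, "weight": 0.1}},
    ],
    "dtype": "bfloat16",
}

# Sanity check: the per-model weights should sum to the stated 1.3 total.
assert abs(sum(m["parameters"]["weight"] for m in merge_config["models"]) - 1.3) < 1e-9

with open("loyal-macaroni-maid-merge.yml", "w") as f:
    yaml.safe_dump(merge_config, f, sort_keys=False)
```

A file like this could then be passed to mergekit's YAML entry point, though the exact schema and option names should be checked against the mergekit version in use.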
## Core Capabilities
- Strong benchmark performance (MT-Bench: 7.95, MMLU: ~64.9)
- Enhanced roleplay capabilities with character card adherence
- Flexible prompt format support
- Balanced performance between general tasks and roleplay scenarios
## Frequently Asked Questions
Q: What makes this model unique?
The model combines high benchmark scores with strong roleplay capabilities, achieved through a carefully balanced merge of multiple specialized models using the DARE TIES method. It scores 7.95 on MT-Bench, close to GPT-3.5-Turbo, while retaining the roleplay strengths of its constituent models.
Q: What are the recommended use cases?
The model excels in roleplay scenarios, particularly when used with RPG Narrator in group chats. It is also effective for general chat interactions and can handle both SFW and NSFW content. It supports multiple prompt formats and works best with the provided SillyTavern configurations.
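As a concrete illustration of the prompt formats, the sketch below wraps a character card and a user message in a generic Alpaca-style template and runs generation with transformers. The repository id, the template string, and the sampler settings are assumptions for illustration; the card's own custom format and recommended SillyTavern settings are not reproduced here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "SanjiWatsuki/Loyal-Macaroni-Maid-7B"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Generic Alpaca-style wrapper; the model's own custom format may differ.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

character_card = "You are Aria, a cheerful tavern keeper who narrates scenes in vivid detail."
user_message = "I push open the tavern door and look around."

prompt = ALPACA_TEMPLATE.format(instruction=f"{character_card}\n\n{user_message}")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampler settings are placeholders, not the card's recommended values.
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```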