# Rombo-LLM-V2.5-Qwen-14b
| Property | Value |
|---|---|
| Base Model | Qwen-14B |
| Parameters | 14 billion |
| Model Type | Instruction-tuned LLM |
| Author | Rombo-Org |
| GGUF Version | Available |
## What is Rombo-LLM-V2.5-Qwen-14b?
Rombo-LLM-V2.5-Qwen-14b is an enhanced version of the Qwen-14B model, created through continuous fine-tuning using the TIES merge method. The approach merges the instruction-tuned model back into the base model, with the goal of outperforming both original checkpoints.
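To make the merge method concrete, here is a minimal NumPy sketch of the TIES procedure (trim task vectors, elect a per-parameter sign, then do a disjoint mean) on toy tensors. This is an illustration of the general algorithm, not the actual recipe used for this model; the `density` value and function name are assumptions, and a real merge would typically be run with a dedicated tool over full checkpoints.

```python
import numpy as np

def ties_merge(base, tuned_list, density=0.5):
    """Illustrative TIES merge: base weights + sign-consistent mean of trimmed task vectors."""
    # Task vectors: difference between each fine-tuned model and the base.
    deltas = [t - base for t in tuned_list]

    trimmed = []
    for d in deltas:
        # Trim: keep only the top-k entries by magnitude, zero the rest.
        k = max(1, int(density * d.size))
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    stacked = np.stack(trimmed)

    # Elect sign: per-parameter majority sign of the summed trimmed deltas.
    elected = np.sign(stacked.sum(axis=0))

    # Disjoint merge: average only the entries that agree with the elected sign.
    agree = np.sign(stacked) == elected
    counts = np.maximum(agree.sum(axis=0), 1)  # avoid division by zero
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0) / counts

    return base + merged_delta
```

With a single fine-tuned model and `density=1.0`, the merge simply reproduces that model; the trim and sign-election steps only matter when several task vectors conflict.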
## Implementation Details
The model is built through a continuous fine-tuning methodology that merges the instruction-tuned and base checkpoints. Reported benchmark results include a 58.40 score on IFEval and 49.39 on BBH.
- Continuous fine-tuning with the TIES merge method
- Available in GGUF format for broader compatibility
- Benchmark average score of 34.52 across multiple tests
## Core Capabilities
- Strong performance on zero-shot tasks (IFEval: 58.40)
- Effective 3-shot reasoning (BBH: 49.39)
- Professional knowledge testing (MMLU-PRO: 48.62)
- Mathematical problem-solving capabilities (MATH Lvl 5: 15.63)
- Graduate-level question answering (GPQA: 16.22)
## Frequently Asked Questions
Q: What makes this model unique?
Its distinguishing feature is the continuous fine-tuning approach built on the TIES merge method, which combines the strengths of the instruct and base models in a single checkpoint.
Q: What are the recommended use cases?
Based on its benchmark performance, the model is well-suited for tasks requiring zero-shot learning, professional domain knowledge, and complex reasoning. It shows particular strength in instruction following, as reflected in its IFEval score.