# oxy-1-small
| Property | Value |
|---|---|
| Base Model | Qwen/Qwen2.5-14B-Instruct |
| License | Apache-2.0 |
| Context Window | 32,768 tokens |
| Output Tokens | 8,192 tokens |
| Model URL | Hugging Face |
## What is oxy-1-small?
Oxy-1-small is a fine-tune of Qwen2.5-14B-Instruct developed by Oxygen (oxyapi), with contributions from TornadoSoftwares. It is optimized for role-play scenarios and interactive storytelling while staying relatively lightweight at 14B parameters.
## Implementation Details
Built on the Qwen2.5-14B-Instruct architecture, the model uses the ChatML prompt format and supports the usual sampling controls, including temperature, top_p, top_k, and frequency/presence penalties. It accepts up to 32,768 input tokens and can generate responses of up to 8,192 tokens; a usage sketch follows the feature list below.
- Fine-tuned specifically for role-play dialogue generation
- Supports advanced parameter tuning for output control
- Implements ChatML format for structured conversations
- Optimized for efficiency while maintaining performance
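Below is a minimal inference sketch using Hugging Face transformers. The repository id `oxyapi/oxy-1-small`, the example persona, and the sampling values are assumptions for illustration, not recommended settings from the model authors.

```python
# Minimal sketch, not an official example: assumes the checkpoint is published
# as "oxyapi/oxy-1-small" and that a GPU (or sufficient RAM) is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "oxyapi/oxy-1-small"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# ChatML-style conversation; apply_chat_template renders it with the model's chat template.
messages = [
    {"role": "system", "content": "You are Mira, a sardonic starship engineer."},
    {"role": "user", "content": "The reactor is overheating. What do we do?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Illustrative sampling settings; temperature, top_p, and top_k map directly
# onto generate() arguments.
output = model.generate(
    input_ids,
    max_new_tokens=512,   # can go up to 8,192
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
    top_k=40,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Note that frequency and presence penalties are not native `generate()` arguments; they are typically exposed when the model is served behind an OpenAI-compatible endpoint, as sketched in the FAQ section below.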
## Core Capabilities
- Dynamic and contextually rich role-play dialogue generation
- Extended context understanding with 32K token window
- Competitive results on the Open LLM Leaderboard (33.14 average)
- Strong instruction following (62.45 on IFEval)
- Specialized for creative and immersive storytelling
## Frequently Asked Questions
Q: What makes this model unique?
The model's specialization in role-play scenarios, combined with its large context window and efficient architecture, makes it particularly suitable for creative writing and interactive storytelling applications. Its benchmark results, especially on instruction following (IFEval), show that it can understand and generate contextually appropriate responses.
Q: What are the recommended use cases?
This model is ideal for applications involving interactive fiction, role-playing games, character-based dialogue systems, and creative writing assistance. Its large context window makes it particularly suitable for maintaining coherent, long-form conversations and story development.
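For long, multi-turn role-play sessions, one common setup is to serve the model behind an OpenAI-compatible endpoint (for example with vLLM) and manage the conversation history client-side. The sketch below assumes such a server is running locally; the endpoint, model id, and penalty values are illustrative assumptions, not an official deployment recipe.

```python
# Sketch of a multi-turn role-play loop against an assumed OpenAI-compatible server
# (e.g. started with `vllm serve oxyapi/oxy-1-small`); endpoint and model id are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

history = [
    {"role": "system", "content": "You are the narrator of a noir detective story."},
]

for user_turn in ["I step into the rain-soaked alley.", "I check the body for clues."]:
    history.append({"role": "user", "content": user_turn})
    reply = client.chat.completions.create(
        model="oxyapi/oxy-1-small",  # assumed model id on the server
        messages=history,
        temperature=0.8,
        top_p=0.9,
        frequency_penalty=0.3,  # discourage repetitive phrasing over long scenes
        presence_penalty=0.2,
        max_tokens=512,
    )
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})  # keep history within the 32K window
    print(text)
```

Keeping the full history in `messages` is what lets the model exploit its 32K-token context for coherent, long-form story development.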