OpenHands LM 32B
| Property | Value |
|---|---|
| Parameter Count | 32 Billion |
| Context Window | 128K tokens |
| Base Model | Qwen2.5-Coder-32B-Instruct |
| Model URL | https://huggingface.co/all-hands/openhands-lm-32b-v0.1 |
What is openhands-lm-32b-v0.1?
OpenHands LM is an open-source coding model designed to provide strong software development capabilities while remaining accessible for local deployment. Built on Qwen2.5-Coder-32B-Instruct, it achieves a 37.2% resolve rate on SWE-Bench Verified, competitive with models 20x its size.
Implementation Details
The model is produced by a specialized fine-tuning process using an RL-based framework from SWE-Gym, trained on data generated from diverse open-source repositories. Its 128K-token context window enables it to handle large codebases and complex software engineering tasks.
- Fine-tuned using OpenHands-generated training data
- Implements RL-based training framework
- Optimized for local deployment on consumer GPUs such as the NVIDIA RTX 3090
- Achieves performance comparable to 671B-parameter models
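Local deployment typically means serving the model behind an OpenAI-compatible API (for example with vLLM; the server choice, endpoint URL, and helper below are illustrative assumptions, not part of the model card). A minimal sketch of assembling a chat-completion request against such a server:

```python
import json

# Hypothetical local endpoint; any OpenAI-compatible server would work here.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"
MODEL_ID = "all-hands/openhands-lm-32b-v0.1"

def build_chat_request(prompt: str, max_tokens: int = 1024,
                       temperature: float = 0.0) -> dict:
    """Assemble a JSON body for an OpenAI-compatible chat-completions call."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,  # deterministic decoding for code tasks
    }

body = build_chat_request("Resolve the failing test in tests/test_parser.py")
print(json.dumps(body, indent=2))
```

The same payload can then be POSTed to `LOCAL_ENDPOINT` with any HTTP client; because the API surface mirrors the OpenAI schema, no proprietary SDK is required.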
Core Capabilities
- Strong performance in software engineering tasks
- GitHub issue resolution and code modification
- Local deployment without API dependencies
- Efficient resource utilization
Frequently Asked Questions
Q: What makes this model unique?
The model combines strong performance with practical deployability, offering a 37.2% resolve rate while being small enough to run on consumer hardware. It eliminates dependency on proprietary APIs while maintaining competitive capabilities.
Q: What are the recommended use cases?
The model excels at resolving GitHub issues and handling software engineering tasks. It is particularly well suited to code modification, issue resolution, and working with large codebases, though it may perform less well on software engineering tasks outside these core competencies.
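Since the 128K-token window bounds how much of a codebase can be supplied at once, a rough pre-check can help decide whether a set of files fits before building a prompt. A sketch using the common ~4-characters-per-token heuristic (the heuristic, reserve size, and helper names are illustrative, not from the model card):

```python
CONTEXT_WINDOW = 128_000  # tokens, per the model card

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for code/English."""
    return len(text) // 4

def fits_in_context(files: dict[str, str], reserve: int = 8_000) -> bool:
    """True if all file contents, plus a reserve for the instructions and
    the model's response, fit within the context window."""
    total = sum(estimate_tokens(src) for src in files.values())
    return total + reserve <= CONTEXT_WINDOW

repo = {"main.py": "x" * 40_000, "utils.py": "y" * 20_000}
print(fits_in_context(repo))  # 15,000 estimated tokens + reserve fits easily
```

For an exact count one would use the model's own tokenizer instead of the heuristic, but a cheap estimate like this is often enough to decide whether a codebase must be chunked.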