# WizardCoder-Python-13B-V1.0-GPTQ
| Property | Value |
|---|---|
| Base Model | WizardCoder Python 13B |
| License | Llama 2 |
| HumanEval Score | 64.0 pass@1 |
| Quantization | Multiple GPTQ variants (4-bit, 8-bit) |
## What is WizardCoder-Python-13B-V1.0-GPTQ?
WizardCoder-Python-13B-V1.0-GPTQ is a quantized version of WizardCoder-Python-13B-V1.0, a model fine-tuned specifically for Python code generation. GPTQ quantization substantially reduces the model's memory footprint, and multiple variants are provided so users can trade output quality against VRAM and hardware constraints.
## Implementation Details
The model is available in various GPTQ quantization formats, including 4-bit and 8-bit versions with different group sizes and optimization parameters. It uses the Alpaca prompt template format and is compatible with popular frameworks like AutoGPTQ and Transformers.
- Multiple quantization options (4-bit with group sizes 32g/64g/128g)
- Optimized using Evol Instruct Code dataset
- 8192 sequence length support
- Compatible with ExLlama for 4-bit variants
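Since the model expects Alpaca-formatted prompts, a small helper can wrap user instructions before they reach the tokenizer. This is a minimal sketch; the template string below is the common Alpaca wording, and the exact canonical string should be checked against the model card:

```python
# Common Alpaca-style template (assumption: verify the exact wording
# against the model card before relying on it).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca prompt format."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("Write a Python function that reverses a string.")
print(prompt)
```

The resulting string can then be passed to a Transformers or AutoGPTQ generation pipeline as the raw prompt.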
## Core Capabilities
- Strong Python code generation performance (64.0 pass@1 on HumanEval)
- Efficient resource usage through various quantization options
- Flexible deployment options across different hardware configurations
- Supports both inference and pipeline implementations
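For context on the 64.0 figure: pass@1 is the standard HumanEval metric, estimating the probability that a single generated sample solves a problem. A minimal sketch of the widely used unbiased pass@k estimator (Chen et al.'s formulation; an assumption about the methodology, not taken from this card):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: 1 - C(n-c, k) / C(n, k),
    where n samples were drawn per problem and c of them passed."""
    if n - c < k:
        return 1.0  # enough correct samples that any k-subset contains one
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 3 of 10 samples pass -> pass@1 = 0.3
print(pass_at_k(10, 3, 1))  # → 0.3
```

Averaging this estimate over all 164 HumanEval problems yields the benchmark score.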
## Frequently Asked Questions
### Q: What makes this model unique?
The model combines strong Python code generation performance (64.0 pass@1 on HumanEval) with multiple GPTQ quantization options, allowing it to run on a range of hardware, including consumer GPUs that cannot hold the unquantized 13B weights.
### Q: What are the recommended use cases?
The model is particularly suited for Python code generation tasks, code completion, and programming assistance. It's ideal for developers looking for an efficient, open-source solution for code generation with flexible deployment options.