# plateer_classifier_v0.1
| Property | Value |
|---|---|
| Base Model | Qwen/Qwen2.5-1.5B |
| Evaluation Accuracy | 89.97% |
| Framework | PyTorch 2.2.1, Transformers 4.46.3 |
| Training Type | Fine-tuned with PEFT |
## What is plateer_classifier_v0.1?

plateer_classifier_v0.1 is a text classification model fine-tuned from the Qwen2.5-1.5B architecture for e-commerce product categorization. It classifies Korean product descriptions into 17 distinct categories and reaches 89.97% accuracy on the evaluation set.
## Implementation Details

The model uses Parameter-Efficient Fine-Tuning (PEFT) and implements a custom TextClassificationPipeline for inference. It was trained with mixed precision (native AMP) on multiple GPUs, using the AdamW optimizer with a linear learning-rate schedule.
- Training batch size: 128 (distributed across 4 GPUs)
- Learning rate: 0.0002 with 10,000 warmup steps
- Gradient accumulation steps: 4
- Training duration: 1 epoch with 110,000 steps
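Assuming the adapter was trained with the Hugging Face `peft` and `transformers` libraries (the card does not publish the training script), the hyperparameters above might translate into a configuration like the sketch below. The LoRA rank, target modules, and per-device batch split are guesses, not published values:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA adapter config for sequence classification; rank and target
# modules are assumptions -- the card only states that PEFT was used.
lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
)

# Hyperparameters mirror the list above. One plausible reading of
# "batch size 128 across 4 GPUs" with accumulation 4 is a per-device
# batch of 8 (8 x 4 GPUs x 4 accumulation steps = 128 effective).
training_args = TrainingArguments(
    output_dir="plateer_classifier_v0.1",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,           # 0.0002
    warmup_steps=10_000,
    lr_scheduler_type="linear",
    optim="adamw_torch",          # AdamW
    num_train_epochs=1,
    fp16=True,                    # mixed precision (native AMP)
)
```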
## Core Capabilities
- Multi-class classification for 17 product categories
- Top-k prediction support (default k=3)
- Probability scores for each prediction
- Efficient inference with custom pipeline implementation
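A standard `transformers` pipeline call along the following lines should exercise these capabilities; the repository id and example text are placeholders, and `top_k=3` matches the default mentioned above:

```python
from transformers import pipeline

# Hypothetical repository id -- substitute the model's actual location.
clf = pipeline(
    "text-classification",
    model="your-org/plateer_classifier_v0.1",
    top_k=3,                 # return the 3 most likely categories
    trust_remote_code=True,  # if the custom pipeline ships with the repo
)

# Korean product description: "wireless Bluetooth earphones, white"
predictions = clf("무선 블루투스 이어폰 화이트")
# Each prediction is a dict with a category label and a probability score.
```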
## Frequently Asked Questions

**Q: What makes this model unique?**
A: The model combines the powerful Qwen2.5-1.5B architecture with efficient fine-tuning techniques to achieve high accuracy in Korean product classification. Its custom pipeline allows for flexible top-k predictions with probability scores.
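As an illustration of what such a pipeline's post-processing step does, the sketch below applies a softmax to the 17 raw logits and keeps the k highest-probability labels. The category names and logit values are placeholders, not the model's actual labels:

```python
import math

def top_k_predictions(logits, labels, k=3):
    """Turn raw classifier logits into the k best (label, probability) pairs."""
    m = max(logits)                                # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]              # softmax
    ranked = sorted(zip(labels, probs), key=lambda lp: lp[1], reverse=True)
    return ranked[:k]

labels = [f"category_{i:02d}" for i in range(17)]  # placeholder names
logits = [0.1] * 17
logits[3], logits[7], logits[11] = 4.2, 2.9, 1.5   # fake scores
for label, prob in top_k_predictions(logits, labels):
    print(f"{label}: {prob:.3f}")
```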
**Q: What are the recommended use cases?**

A: The model is ideal for e-commerce platforms requiring automated product categorization, especially for Korean products. It can be used for both single-label classification and generating multiple category suggestions with confidence scores.