# distilbert-base-uncased-go-emotions-onnx
| Property | Value |
|---|---|
| License | MIT |
| Framework | ONNX |
| Base Architecture | DistilBERT |
| Task | Text Classification (Emotions) |
## What is distilbert-base-uncased-go-emotions-onnx?
This model is an ONNX-optimized version of a DistilBERT-based emotion detection model, converted and quantized with Hugging Face Optimum. The original model was trained via zero-shot distillation on the GoEmotions dataset, yielding an efficient approach to emotion classification in text.
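To make the "quantized" part concrete, here is a minimal sketch of what int8 quantization does to a model's weights: each float is mapped to an 8-bit integer via a scale and zero-point, shrinking storage roughly 4x at a small cost in precision. The function names and sample values are illustrative, not the actual Optimum implementation.

```python
def quantize_int8(values):
    """Asymmetric int8 quantization: map floats into [-128, 127]
    using a per-tensor scale and zero-point (illustrative sketch)."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 if hi != lo else 1.0
    zero_point = round(-128 - lo / scale)
    return (
        [max(-128, min(127, round(v / scale) + zero_point)) for v in values],
        scale,
        zero_point,
    )

def dequantize(quantized, scale, zero_point):
    """Recover approximate floats from the int8 representation."""
    return [(q - zero_point) * scale for q in quantized]

# Hypothetical slice of a weight tensor
weights = [-0.52, -0.11, 0.0, 0.27, 0.49]
q, s, zp = quantize_int8(weights)
recovered = dequantize(q, s, zp)
```

The reconstruction error is bounded by about half the scale per value, which is why quantized models trade a small accuracy drop for faster, smaller inference.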
## Implementation Details
The model was trained with mixed precision for 10 epochs on unlabeled GoEmotions data. It uses zero-shot distillation to transfer knowledge from a larger NLI-based teacher model to a smaller, more efficient student.
- Optimized for ONNX runtime execution
- Quantized for improved performance
- Based on DistilBERT architecture
- Trained with mixed precision
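The zero-shot distillation idea above can be sketched in a few lines: the teacher produces a soft probability distribution over emotion labels for each unlabeled text, and the student is trained to match it via cross-entropy against those soft targets. This is a simplified, pure-Python illustration with made-up logits, not the training code used for this model.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_cross_entropy(teacher_probs, student_logits):
    """Distillation loss: cross-entropy of the student's predicted
    distribution against the teacher's soft label distribution."""
    student_probs = softmax(student_logits)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# Hypothetical scores over three emotion labels for one unlabeled text
teacher = softmax([2.0, 0.5, -1.0])  # e.g. zero-shot NLI teacher scores
student_logits = [1.8, 0.6, -0.9]    # student logits on the same text
loss = soft_cross_entropy(teacher, student_logits)
```

The loss is minimized exactly when the student's distribution matches the teacher's, so gradient descent on it pushes the small model toward the teacher's behavior without any human labels.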
## Core Capabilities
- Emotion classification in text
- Single-label classification (though original dataset supports multi-label)
- Efficient inference through ONNX optimization
- Lightweight alternative to full NLI-based models
## Frequently Asked Questions
Q: What makes this model unique?
It combines ONNX runtime optimization and quantization with a zero-shot distillation training approach, making it a lightweight option for emotion detection tasks.
Q: What are the recommended use cases?
The model is best suited for emotion classification tasks where efficiency is prioritized over maximum accuracy. It's particularly useful in production environments where ONNX runtime is preferred, though users should note it may not perform as well as fully supervised models.