bert-fa-base-uncased-sentiment-deepsentipers-binary
| Property | Value |
|---|---|
| Author | HooshvareLab |
| Task | Binary Sentiment Analysis |
| Language | Persian |
| Performance | 92.42% F1-score |
| Paper | ParsBERT Paper |
What is bert-fa-base-uncased-sentiment-deepsentipers-binary?
This model is a specialized version of ParsBERT v2.0, fine-tuned for binary sentiment analysis of Persian text. It is trained on the DeepSentiPers dataset, a balanced and augmented version of SentiPers containing user opinions about digital products. The model classifies text into two categories: Positive (Happy + Delighted) and Negative (Furious + Angry).
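The snippet below is a minimal sketch of loading the model through the Hugging Face transformers pipeline. The model id matches the card title, but the sample Persian sentence is illustrative only, and the exact label strings returned depend on the model's id2label configuration.

```python
from transformers import pipeline

# Load the fine-tuned binary sentiment model from the Hugging Face Hub.
classifier = pipeline(
    "sentiment-analysis",
    model="HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary",
)

# A sample Persian product comment: "The quality of this phone is excellent."
result = classifier("کیفیت این گوشی عالی است")
print(result)  # label names and scores come from the model's id2label config
```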
Implementation Details
The model is built on the ParsBERT architecture, a transformer-based model designed specifically for Persian language understanding. Compared with earlier releases, it achieves state-of-the-art performance on binary sentiment classification tasks.
- Built on ParsBERT v2.0 architecture
- Fine-tuned on DeepSentiPers dataset
- Optimized for binary classification tasks
- Outperforms both ParsBERT v1 and mBERT in sentiment analysis
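For cases where the pipeline abstraction is not enough, the following sketch loads the tokenizer and the two-label classification head directly and applies a softmax over the logits. The sample text is illustrative, and reading the predicted label from model.config.id2label assumes the published config carries a meaningful label mapping.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary"

# Load the ParsBERT-based tokenizer and the binary classification head.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Tokenize a Persian comment ("I regret buying this product") and run a forward pass.
text = "از خرید این محصول پشیمان شدم"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the two classes; the index-to-label mapping comes from the model config.
probs = torch.softmax(logits, dim=-1).squeeze()
predicted = model.config.id2label[int(probs.argmax())]
print(predicted, probs.tolist())
```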
Core Capabilities
- Binary sentiment classification of Persian text
- Handles complex Persian language nuances
- Processes user comments and opinions effectively
- Achieves 92.42% F1-score on binary classification tasks
Frequently Asked Questions
Q: What makes this model unique?
This model is specifically optimized for Persian sentiment analysis, achieving a 92.42% F1-score on binary classification and outperforming ParsBERT v1 and mBERT on the same task. It's particularly effective for analyzing user comments and product reviews in Persian.
Q: What are the recommended use cases?
The model is ideal for analyzing Persian user comments, product reviews, and social media content where binary sentiment classification (positive/negative) is needed. It's particularly well-suited for e-commerce platforms, social media monitoring, and customer feedback analysis in Persian-language contexts.
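As a sketch of that kind of workflow, the snippet below runs the classifier over a small batch of hypothetical review strings; the comments and their translations are made up for illustration.

```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary",
)

# Hypothetical customer comments from an e-commerce feedback export.
reviews = [
    "ارسال سریع بود و بسته‌بندی عالی",  # "Shipping was fast and the packaging was great"
    "بعد از یک هفته خراب شد",           # "It broke after one week"
]

# The pipeline accepts a list of inputs; labels follow the model's id2label config.
for review, prediction in zip(reviews, classifier(reviews)):
    print(review, "->", prediction["label"], round(prediction["score"], 3))
```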