# distilroberta-base-finetuned-suicide-depression
| Property | Value |
|---|---|
| Model Base | DistilRoBERTa |
| Task | Binary Classification |
| Best Accuracy | 71.58% |
| Author | mrm8488 |
| Model URL | Hugging Face |
## What is distilroberta-base-finetuned-suicide-depression?
This model is a fine-tuned version of DistilRoBERTa trained to classify tweets as either suicide-related (label 1) or depression-related (label 0). It is a proof-of-concept implementation built on the SDCNL dataset, reaching a validation accuracy of 71.58%.
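The binary output described above can be sketched as follows. This is a minimal, self-contained illustration of how two logits from a classification head map to the card's labels via softmax; the logit values and the `classify` helper are made up for the example, not part of the released model.

```python
import math

# Label mapping per the model card: 0 = depression, 1 = suicide
LABELS = {0: "depression", 1: "suicide"}

def classify(logits):
    """Softmax over the two logits; return the winning label and its probability."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = probs.index(max(probs))
    return LABELS[idx], probs[idx]

# Made-up logits for illustration; real values come from the fine-tuned model
label, prob = classify([-1.2, 0.8])
```

In practice these logits would come from running a tweet through the model's tokenizer and classification head; the softmax step is the same.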
## Implementation Details
The model was trained with the Adam optimizer, a learning rate of 2e-05, and a linear learning-rate scheduler, over 5 epochs with a batch size of 8 for both training and evaluation.
- Built on DistilRoBERTa base architecture
- Trained using PyTorch 1.9.0
- Implements Transformers 4.11.3
- Uses Datasets 1.13.0 and Tokenizers 0.10.3
## Core Capabilities
- Binary classification of tweets (suicide vs. depression)
- Achieves 71.58% accuracy on validation set
- Optimized for research and experimental purposes
- Suitable for text analysis in mental health contexts
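The reported 71.58% is plain classification accuracy on the validation split, i.e. the fraction of tweets whose predicted label matches the reference label. For reference, a quick sketch of the metric (the toy predictions below are invented for illustration):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the reference labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Toy example: 3 of 4 predictions correct
score = accuracy([1, 0, 1, 0], [1, 0, 0, 0])
```

Against a balanced two-class validation set, a random baseline would sit near 50%, which puts the model's 71.58% in context for a proof of concept.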
## Frequently Asked Questions
Q: What makes this model unique?
This model specializes in distinguishing between suicide-related and depression-related content in social media text, particularly Twitter, using a distilled version of RoBERTa as its foundation.
Q: What are the recommended use cases?
The model is explicitly marked as a proof of concept and should NOT be used in production environments. It is suitable for research purposes and for exploring the potential of transformer models in mental health content analysis.