distilroberta-base-finetuned-suicide-depression

mrm8488

A fine-tuned DistilRoBERTa model for detecting suicide and depression in tweets, achieving 71.58% accuracy. Not production-ready.

Property        Value
Model Base      DistilRoBERTa
Task            Binary Classification
Best Accuracy   71.58%
Author          mrm8488
Model URL       Hugging Face

What is distilroberta-base-finetuned-suicide-depression?

This model is a fine-tuned version of DistilRoBERTa trained to classify tweets as either suicide-related (label 1) or depression-related (label 0). It is a proof-of-concept trained on the SDCNL dataset, reaching a validation accuracy of 71.58%.
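A minimal inference sketch using the Transformers library is shown below. The model ID follows the card's title and author; the mapping of label 0 to depression and label 1 to suicide follows the description above but is an assumption about the exported config, so verify it against the model's `id2label` before relying on it.

```python
import math

# Assumed label mapping, per the card's description (label 1 = suicide,
# label 0 = depression). Check the model config's id2label to confirm.
ID2LABEL = {0: "depression", 1: "suicide"}

def softmax(logits):
    """Convert raw logits to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def label_from_logits(logits):
    """Return the higher-scoring class name and its probability."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return ID2LABEL[idx], probs[idx]

def classify(text):
    """Run the fine-tuned model on one tweet-length string."""
    # Heavy imports kept local so the pure helpers above stay dependency-free.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    import torch

    name = "mrm8488/distilroberta-base-finetuned-suicide-depression"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)
    with torch.no_grad():
        logits = model(**tokenizer(text, return_tensors="pt")).logits[0].tolist()
    return label_from_logits(logits)

if __name__ == "__main__":
    print(classify("I feel so alone lately"))
```

Given the model's modest 71.58% accuracy, treat the returned probability as a rough signal for research analysis, not a clinical judgment.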

Implementation Details

The model was trained with the Adam optimizer, a learning rate of 2e-05, and a linear learning-rate scheduler. Training ran for 5 epochs with a batch size of 8 for both training and evaluation.

  • Built on DistilRoBERTa base architecture
  • Trained using PyTorch 1.9.0
  • Implements Transformers 4.11.3
  • Uses Datasets 1.13.0 and Tokenizers 0.10.3
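To make the reported setup concrete, the hyperparameters above can be sketched as a Transformers `TrainingArguments` configuration. This is an illustrative fragment, not the author's original training script: the `output_dir` name is made up, and `TrainingArguments` defaults to AdamW rather than plain Adam.

```python
from transformers import TrainingArguments

# Reported hyperparameters: lr 2e-05, 5 epochs, batch size 8 for both
# training and evaluation, linear learning-rate schedule.
args = TrainingArguments(
    output_dir="distilroberta-suicide-depression",  # hypothetical path
    learning_rate=2e-05,
    num_train_epochs=5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    lr_scheduler_type="linear",
)
```

Passing these arguments to a `Trainer` along with the SDCNL splits would reproduce the setup described, modulo the exact optimizer variant and library versions listed above.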

Core Capabilities

  • Binary classification of tweets (suicide vs. depression)
  • 71.58% accuracy on the validation set
  • Intended for research and experimentation, not clinical or production use
  • Applicable to text analysis in mental-health contexts

Frequently Asked Questions

Q: What makes this model unique?

This model specializes in distinguishing between suicide-related and depression-related content in social media text, particularly Twitter, using a distilled version of RoBERTa as its foundation.

Q: What are the recommended use cases?

The model is explicitly marked as a proof of concept and should NOT be used in production environments. It is intended for research and for exploring the potential of transformer models in mental-health content analysis.
