sentence-compression

AlexMaclean

A fine-tuned DistilBERT model for sentence compression achieving 89.12% accuracy, with strong F1 (0.8367) and precision (0.8495) scores. Apache 2.0 licensed.

| Property | Value |
|---|---|
| Base Model | DistilBERT-base-cased |
| License | Apache 2.0 |
| Training Framework | PyTorch 1.10.0 |
| Accuracy | 89.12% |

What is sentence-compression?

The sentence-compression model is a specialized NLP model built on the DistilBERT architecture, designed to produce shorter versions of sentences while preserving their core meaning. It frames compression as token classification, deciding for each token whether it belongs in the compressed output, and reports 89.12% accuracy with an F1 score of 0.8367.

Implementation Details

Built with the Transformers library (v4.12.5) and PyTorch, the model was trained for 3 epochs with carefully tuned hyperparameters: a learning rate of 5e-05, the Adam optimizer, and 500 warmup steps.

  • Batch sizes: 16 for training, 64 for evaluation
  • Linear learning rate scheduler
  • Precision: 0.8495
  • Recall: 0.8243

Core Capabilities

  • Token classification for sentence compression
  • High-accuracy text processing (89.12%)
  • Balanced precision and recall metrics
  • Efficient inference with DistilBERT architecture
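Token classification for compression means the model emits a keep-or-drop decision per token; the compressed sentence is simply the kept tokens joined back together. A minimal sketch of that post-processing step (the label names and the example sentence are illustrative assumptions, not the model's actual label set):

```python
def compress(tokens, labels, keep_label="KEEP"):
    """Rebuild a compressed sentence from per-token keep/drop labels."""
    return " ".join(tok for tok, lab in zip(tokens, labels) if lab == keep_label)

# Hypothetical classifier output for one sentence.
tokens = ["The", "quick", "brown", "fox", "jumped", "over", "the", "lazy", "dog"]
labels = ["KEEP", "DELETE", "DELETE", "KEEP", "KEEP",
          "DELETE", "DELETE", "DELETE", "DELETE"]

print(compress(tokens, labels))  # The fox jumped
```

In practice the labels would come from running the model's token-classification head over subword tokens and mapping predictions back to words.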

Frequently Asked Questions

Q: What makes this model unique?

This model stands out for its high accuracy in sentence compression tasks while maintaining a good balance between precision (0.8495) and recall (0.8243). It's built on the efficient DistilBERT architecture, making it suitable for production environments.

Q: What are the recommended use cases?

The model is particularly well-suited for applications requiring text summarization, content compression, and efficient information extraction. Its high accuracy makes it reliable for automated text processing pipelines.
