DeBERTa-v3-base-mnli-fever-anli

Maintained By: MoritzLaurer

  • Parameter Count: 184M
  • License: MIT
  • Paper: DeBERTa Paper
  • Training Data: 763,913 NLI pairs

What is DeBERTa-v3-base-mnli-fever-anli?

This is a specialized version of Microsoft's DeBERTa-v3-base model, fine-tuned for natural language inference (NLI). It was trained on 763,913 hypothesis-premise pairs drawn from MultiNLI, FEVER-NLI, and Adversarial-NLI (ANLI), making it robust for zero-shot classification. Notably, it outperforms many larger models on the ANLI benchmark despite its base size.

Implementation Details

The model builds on the DeBERTa-v3 architecture, which improves on earlier versions by replacing masked language modeling with an ELECTRA-style replaced-token-detection pre-training objective. Fine-tuning used mixed precision training with a learning rate of 2e-05, a batch size of 32, and 3 training epochs.
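As a rough illustration, these hyperparameters map onto Hugging Face's TrainingArguments roughly as follows. This is a hypothetical reconstruction, not the author's published training script; the output directory is a placeholder and all other arguments are left at library defaults.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the fine-tuning setup described above;
# output_dir is a placeholder, other arguments keep library defaults.
training_args = TrainingArguments(
    output_dir="./deberta-v3-base-nli",  # placeholder path
    learning_rate=2e-05,                 # learning rate reported above
    per_device_train_batch_size=32,      # batch size 32
    num_train_epochs=3,                  # 3 training epochs
    fp16=True,                           # mixed precision training
)
```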

  • Zero-shot classification via the standard transformers pipeline (see the sketch below)
  • Supports both single-label and multi-label classification
  • Produces three NLI output labels: entailment, neutral, and contradiction
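A minimal zero-shot classification sketch using the transformers pipeline; the example sentence and candidate labels are illustrative, not prescribed by the model:

```python
from transformers import pipeline

# Load the model through the zero-shot classification pipeline
classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",
)

# Illustrative input; any text and candidate labels work
sequence = "Angela Merkel is a politician in Germany and leader of the CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]

# Set multi_label=True to score each label independently instead
output = classifier(sequence, candidate_labels, multi_label=False)
print(output)  # candidate labels ranked by score
```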

Core Capabilities

  • High accuracy on MNLI benchmark (90.3%)
  • Strong performance on FEVER-NLI (77.7%)
  • Competitive ANLI performance (49.5% on R3)
  • Efficient zero-shot classification for various tasks

Frequently Asked Questions

Q: What makes this model unique?

The model combines DeBERTa-v3's advanced architecture with comprehensive NLI training, making it particularly effective for zero-shot classification while being more efficient than larger models.

Q: What are the recommended use cases?

The model excels at zero-shot text classification, natural language inference, and hypothesis-premise pair analysis. It's particularly useful when you need to classify text without task-specific training data.
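For direct hypothesis-premise analysis, the model can also be called without the pipeline. The sketch below uses an illustrative sentence pair; the label order is assumed from the three NLI classes listed above (entailment, neutral, contradiction) and should be verified against model.config.id2label:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Illustrative premise-hypothesis pair
premise = "The new policy reduced emissions by 20% within a year."
hypothesis = "The policy had an effect on emissions."

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Label order assumed from the model's three NLI classes; check
# model.config.id2label before relying on it
label_names = ["entailment", "neutral", "contradiction"]
probs = torch.softmax(logits[0], dim=-1).tolist()
print({name: round(p, 3) for name, p in zip(label_names, probs)})
```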
