# XGLM-564M
| Property | Value |
|---|---|
| Parameters | 564 million |
| License | MIT |
| Paper | Few-shot Learning with Multilingual Language Models |
| Languages | 30 |
| Training Data | 500B sub-tokens |
## What is XGLM-564M?
XGLM-564M is a multilingual autoregressive language model developed by Facebook AI Research. It was trained on a balanced corpus spanning 30 diverse languages and is designed specifically for few-shot learning, demonstrating strong performance across different language families.
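The snippet below is a minimal usage sketch: it loads the public `facebook/xglm-564M` checkpoint with Hugging Face `transformers` and generates a short continuation. The prompt and generation settings (`max_new_tokens`, greedy decoding) are illustrative choices, not tuned recommendations.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the public XGLM-564M checkpoint from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = AutoModelForCausalLM.from_pretrained("facebook/xglm-564M")

# Autoregressive generation: the model extends the prompt token by token.
# The French prompt is an arbitrary example of multilingual input.
prompt = "Paris est la capitale de"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```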
## Implementation Details
The model uses a transformer-based architecture and is implemented in PyTorch (a quick parameter-count check follows the list below). It was trained on a carefully curated corpus with balanced representation across languages; after upsampling of low-resource languages, English accounts for 32.59% of the training data.
- Supports 30 languages from various language families, including Indo-European, Sino-Tibetan, and Austronesian
- Implements autoregressive language modeling for versatile text generation
- Supports few-shot and zero-shot in-context learning without task-specific fine-tuning
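As a sanity check on the architecture and parameter count, one can load the checkpoint and count parameters directly. This is a quick sketch; the exact total may differ slightly from the rounded 564M figure.

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/xglm-564M")

# Sum the element counts of all parameter tensors; expect roughly 564M.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")
print(type(model).__name__)  # the underlying transformer class
```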
## Core Capabilities
- Multilingual text generation across 30 languages
- Few-shot learning for various NLP tasks
- Choice of Plausible Alternatives (COPA) task support (see the scoring sketch below)
- Cross-lingual understanding and generation
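To make the COPA bullet concrete, here is a minimal zero-shot scoring sketch: the model ranks two candidate continuations by their summed token log-probabilities. The `sequence_logprob` and `copa_choice` helpers and the prompt layout are illustrative assumptions, not an official evaluation harness.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = AutoModelForCausalLM.from_pretrained("facebook/xglm-564M")
model.eval()

def sequence_logprob(text: str) -> float:
    """Sum of token log-probabilities of `text` under the model."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Shift: logits at position t predict the token at position t + 1.
    target_ids = inputs["input_ids"][:, 1:]
    logprobs = F.log_softmax(logits[:, :-1, :], dim=-1)
    token_logprobs = torch.gather(logprobs, 2, target_ids.unsqueeze(2)).squeeze(2)
    return token_logprobs.sum().item()

def copa_choice(premise: str, alternative1: str, alternative2: str) -> int:
    """Return 0 or 1 for whichever alternative scores higher as a continuation."""
    score1 = sequence_logprob(premise + " " + alternative1)
    score2 = sequence_logprob(premise + " " + alternative2)
    return 0 if score1 > score2 else 1

# COPA-style example; a well-calibrated model should tend to pick index 1.
premise = "The man broke his toe because"
print(copa_choice(premise, "he got a hole in his sock.", "he dropped a hammer on his foot."))
```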
## Frequently Asked Questions
**Q: What makes this model unique?**
XGLM-564M stands out for its balanced multilingual training corpus and compact parameter count, which make it accessible to run while maintaining strong performance across diverse languages. Its handling of low-resource languages and its few-shot learning ability make it particularly valuable for multilingual applications.
**Q: What are the recommended use cases?**
The model is well suited to multilingual text generation, few-shot learning tasks, and cross-lingual applications, particularly those that require understanding or generating text in multiple languages, including low-resource ones.