XGLM-7.5B
| Property | Value |
|---|---|
| Parameter Count | 7.5 billion |
| Model Type | Multilingual autoregressive language model |
| License | MIT |
| Paper | Few-shot Learning with Multilingual Language Models |
| Languages Supported | 31 |
What is XGLM-7.5B?
XGLM-7.5B is a multilingual autoregressive language model developed by Facebook AI, trained on a balanced corpus of 500 billion sub-tokens spanning 31 languages. It is designed for few-shot and zero-shot learning tasks across diverse languages, including many low-resource ones.
Implementation Details
The model uses a decoder-only transformer architecture and was trained on a curated dataset spanning multiple language families, from Indo-European to Sino-Tibetan. The training distribution is intentionally balanced: English makes up 32.59% of the upsampled training data, while low-resource languages retain meaningful representation.
- Architecture: Transformer-based with 7.5B parameters
- Training Data: 500B sub-tokens across 31 languages
- Implementation: PyTorch-based, with Hugging Face `transformers` integration (a loading sketch follows this list)
- Tokenization: Multilingual SentencePiece tokenizer shared across all covered languages
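As a minimal sketch of the Hugging Face integration mentioned above (assuming the public checkpoint id `facebook/xglm-7.5B`; a 7.5B-parameter model needs roughly 15 GB of memory in fp16):

```python
# Minimal loading sketch using the Hugging Face transformers integration.
# Assumes the public checkpoint id "facebook/xglm-7.5B".
import torch
from transformers import XGLMTokenizer, XGLMForCausalLM

tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-7.5B")
model = XGLMForCausalLM.from_pretrained("facebook/xglm-7.5B", torch_dtype=torch.float16)
model.eval()

# The shared multilingual tokenizer covers all supported languages with one vocabulary.
for text in ["Water is essential for life.", "El agua es esencial para la vida."]:
    ids = tokenizer(text, return_tensors="pt").input_ids
    print(text, "->", ids.shape[-1], "sub-tokens")
```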
Core Capabilities
- Multilingual text generation across 31 languages
- Few-shot learning tasks in multiple languages
- Zero-shot cross-lingual transfer
- Balanced performance across high- and low-resource languages
- Scoring-based handling of benchmark tasks such as COPA (Choice of Plausible Alternatives); a scoring sketch follows this list
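One common way to apply an autoregressive model to a COPA-style task is to score each candidate continuation by its negative log-likelihood and pick the lower-scoring one. The sketch below reuses the tokenizer and model from the loading example; the premise and alternatives are illustrative, and this recipe is a typical approach rather than the exact protocol from the paper.

```python
import torch

def sequence_nll(text: str) -> float:
    """Total negative log-likelihood of `text` under the model (lower = more plausible)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # out.loss is the mean per-token NLL; multiply by length to get the sequence total.
    return out.loss.item() * enc["input_ids"].shape[-1]

# Illustrative COPA-style item: pick the alternative that better follows the premise.
premise = "The man broke his toe. What was the cause?"
alternatives = ["He got a hole in his sock.", "He dropped a hammer on his foot."]
scores = [sequence_nll(f"{premise} {alt}") for alt in alternatives]
print(alternatives[scores.index(min(scores))])
```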
Frequently Asked Questions
Q: What makes this model unique?
XGLM-7.5B stands out for its balanced multilingual training approach and extensive language coverage, including low-resource languages like Quechua and Haitian Creole. It's specifically designed for few-shot learning scenarios across multiple languages.
Q: What are the recommended use cases?
The model is well suited to multilingual text generation, cross-lingual transfer learning, and few-shot learning tasks. It is particularly useful for applications that need to understand or generate text in several languages, including low-resource ones.
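For the multilingual generation use case, a short sketch (again reusing the model and tokenizer from the loading example; the prompts and sampling settings are illustrative) might look like this:

```python
# Illustrative prompts in a few of the covered languages.
prompts = {
    "English": "The most important invention of the last century was",
    "Spanish": "El invento más importante del último siglo fue",
    "French":  "L'invention la plus importante du siècle dernier était",
}

for lang, prompt in prompts.items():
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(
        **inputs,
        max_new_tokens=30,
        do_sample=True,
        top_p=0.9,
        temperature=0.8,
    )
    print(f"[{lang}] {tokenizer.decode(output_ids[0], skip_special_tokens=True)}")
```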