FluentlyLM-Prinum

Maintained by fluently-lm

Parameter Count: 32.5B (31.0B non-embedding)
Model Type: Causal Language Model (QwenForCausalLM)
Context Length: 131,072 tokens
Languages: English, French, Spanish, Russian, Chinese, Japanese, Persian
License: MIT
Hugging Face: Repository

What is FluentlyLM-Prinum?

FluentlyLM-Prinum is a large language model and the first standalone release from Project Fluently LM. This 32.5B-parameter model offers a 131,072-token context window, officially supports seven languages, and placed 12th on the Open LLM Leaderboard.

Implementation Details

The model has 64 transformer layers and uses Grouped Query Attention (GQA) with 40 query heads and 8 key-value heads. It is built on the QwenForCausalLM architecture and offers flexible deployment options, including GGUF quantization for local use.

  • 64-layer architecture with GQA attention mechanism
  • Full 131,072 token context window
  • Available in various GGUF quantized versions
  • 8 key-value heads shared across 40 query heads, shrinking the KV cache
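The head sharing described above can be sketched in a few lines of NumPy: with 40 query heads and 8 key-value heads, each KV head serves a group of 5 query heads, so the cached K/V tensors are 5× smaller than in standard multi-head attention. The head dimension of 128 below is an illustrative assumption, not a figure from the model card.

```python
import numpy as np

# Sketch of Grouped Query Attention (GQA): 40 query heads attend over only
# 8 key-value heads, so each KV head is shared by 40 / 8 = 5 query heads.
# head_dim = 128 is an assumption for illustration only.
n_q_heads, n_kv_heads, head_dim, seq_len = 40, 8, 128, 16
group_size = n_q_heads // n_kv_heads  # 5 query heads per KV head

rng = np.random.default_rng(0)
q = rng.standard_normal((n_q_heads, seq_len, head_dim))
k = rng.standard_normal((n_kv_heads, seq_len, head_dim))
v = rng.standard_normal((n_kv_heads, seq_len, head_dim))

# Expand each KV head to serve its group of query heads.
k_exp = np.repeat(k, group_size, axis=0)  # (40, seq_len, head_dim)
v_exp = np.repeat(v, group_size, axis=0)

# Scaled dot-product attention per head (causal mask omitted for brevity).
scores = q @ k_exp.transpose(0, 2, 1) / np.sqrt(head_dim)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ v_exp

print(out.shape)  # (40, 16, 128)
```

Only `k` and `v` (8 heads each) would be stored in the KV cache; the expansion to 40 heads happens on the fly at attention time.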

Core Capabilities

  • Strong performance on IFEval (80.90%) and BBH (59.48%)
  • Multilingual support across major world languages
  • Advanced reasoning capabilities demonstrated by MATH Level 5 performance (54.00%)
  • Professional knowledge evaluation through MMLU-PRO (53.42%)

Frequently Asked Questions

Q: What makes this model unique?

FluentlyLM-Prinum stands out for its combination of a large parameter count (32.5B), extensive context length (131K tokens), and strong multilingual capabilities. Its GQA-based architecture and strong benchmark results make it well suited to complex language tasks.

Q: What are the recommended use cases?

The model suits applications such as multilingual text processing, complex reasoning, and professional domain work. Its strong IFEval and BBH scores suggest particular strength in instruction following and problem solving.
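Before deploying at the full 131K context, it is worth estimating the KV-cache footprint, which is where the 8 key-value heads pay off. The arithmetic below assumes a head dimension of 128 and an fp16 cache (2 bytes per value); neither figure is stated in the model card.

```python
# Rough KV-cache estimate at this model's full context length.
# Assumptions (not from the model card): head_dim = 128, fp16 cache.
n_layers, n_kv_heads, head_dim = 64, 8, 128
bytes_per_value = 2  # fp16
context_len = 131_072

# Keys AND values are cached per layer, per KV head, per token.
kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_value
kv_cache_gib = kv_bytes_per_token * context_len / 2**30

print(f"{kv_bytes_per_token} bytes/token, ~{kv_cache_gib:.0f} GiB at full context")
```

Under these assumptions the cache runs to roughly 32 GiB at full context; with 40 KV heads (no GQA) it would be 5× larger, which is why the shared-head design matters for long-context deployment.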
