Mistral Nemo

Mistral AI

What is Mistral Nemo?

A 12B-parameter model with a 128k-token context length, built by Mistral AI in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, and other languages.

Specifications

  • Developer: Mistral AI
  • Context window: 131.1K tokens
  • Max output: — tokens
  • Input modalities: text
  • Output modalities: text
  • Input price: $0.0200 per 1M tokens
  • Output price: $0.0300 per 1M tokens
  • Knowledge cutoff: 2024-04-30
  • Supported parameters: frequency_penalty, logit_bias, logprobs, max_tokens, min_p, presence_penalty, repetition_penalty, response_format, seed, stop, structured_outputs, temperature, tool_choice, tools, top_k, top_logprobs, top_p
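To make the specifications concrete, here is a minimal Python sketch that (1) estimates per-request cost from the listed prices ($0.02 per 1M input tokens, $0.03 per 1M output tokens) and (2) builds a chat-completions-style request using a few of the supported parameters. The model identifier and payload shape are assumptions for illustration; check your provider's API reference for the exact values.

```python
# Prices from the specifications above (USD per 1M tokens).
INPUT_PRICE_PER_M = 0.02
OUTPUT_PRICE_PER_M = 0.03

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-million-token rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat payload (model id is a hypothetical example)."""
    return {
        "model": "mistralai/mistral-nemo",  # hypothetical id; varies by provider
        "messages": [{"role": "user", "content": prompt}],
        # Each of these appears in the supported-parameters list above.
        "temperature": 0.7,
        "top_p": 0.9,
        "max_tokens": max_tokens,
        "seed": 42,
    }

# A request with 10k input tokens and 1k output tokens costs a fraction of a cent:
cost = estimate_cost(10_000, 1_000)
print(f"${cost:.6f}")  # $0.000230
```

At these rates, even long-context requests near the 131.1K-token window stay in the tenths-of-a-cent range, which is why per-request cost tracking is usually done in aggregate.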

Use Mistral Nemo with PromptLayer

PromptLayer lets teams manage, evaluate, and observe prompts that run on Mistral Nemo alongside every other model in their stack. Version prompts, run evals across models, and ship safe rollouts from the same dashboard.

Ready to try it yourself? Sign up for PromptLayer and start managing your prompts in minutes.
