Mixtral 8x22B Instruct
Mistral AI
What is Mixtral 8x22B Instruct?
Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its strengths include:
- strong math, coding,...
Specifications
- Developer: Mistral AI
- Context window: 65.5K tokens
- Max output: — tokens
- Input modalities: text
- Output modalities: text
- Input price: $2.00 per 1M tokens
- Output price: $6.00 per 1M tokens
- Knowledge cutoff: 2024-01-31
- Supported parameters: frequency_penalty, max_tokens, presence_penalty, response_format, seed, stop, structured_outputs, temperature, tool_choice, tools, top_p
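As a rough illustration of how the specs above come together in practice, the sketch below assembles an OpenAI-style chat-completions payload using several of the supported parameters, and estimates a call's cost from the listed per-token prices. The model slug and payload shape are assumptions (many OpenAI-compatible gateways use this format), not an official example; check your provider's docs for the exact endpoint and model name.

```python
# Sketch: build a request body from the supported parameters listed above,
# and estimate cost from the listed prices. The model slug and OpenAI-style
# payload shape are assumptions -- adapt them to your provider's API.

INPUT_PRICE_PER_M = 2.00   # $ per 1M input tokens (from the spec list)
OUTPUT_PRICE_PER_M = 6.00  # $ per 1M output tokens (from the spec list)

def build_payload(prompt: str) -> dict:
    """Assemble a chat-completions body using parameters the model supports."""
    return {
        "model": "mistralai/mixtral-8x22b-instruct",  # assumed slug
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
        "top_p": 0.9,
        "max_tokens": 512,
        "frequency_penalty": 0.0,
        "presence_penalty": 0.0,
        "seed": 42,                      # fixed seed for reproducible sampling
        "stop": ["\n\n"],
        "response_format": {"type": "text"},
    }

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost at $2.00/1M input and $6.00/1M output tokens."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

payload = build_payload("Summarize the Mixtral 8x22B architecture in one line.")
cost = estimate_cost(10_000, 2_000)  # 10K in + 2K out -> $0.032
```

A call with 10K input and 2K output tokens would cost roughly $0.032 at these prices; the payload would then be POSTed to your provider's chat-completions endpoint.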
Use Mixtral 8x22B Instruct with PromptLayer
PromptLayer lets teams manage, evaluate, and observe prompts that run on Mixtral 8x22B Instruct alongside every other model in their stack. Version prompts, run evals across models, and ship safe rollouts from the same dashboard.
Ready to try it yourself? Sign up for PromptLayer and start managing your prompts in minutes.