Llama 4 Maverick
Meta
What is Llama 4 Maverick?
Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass. It accepts both text and image inputs and produces text outputs, with a context window of up to 1 million tokens.
Specifications
- Developer: Meta
- Context window: 1.0M tokens
- Max output: 16.4K tokens
- Input modalities: text, image
- Output modalities: text
- Input price: $0.1500 per 1M tokens
- Output price: $0.6000 per 1M tokens
- Knowledge cutoff: 2024-08-31
- Supported parameters: frequency_penalty, logit_bias, max_tokens, min_p, presence_penalty, repetition_penalty, response_format, seed, stop, structured_outputs, temperature, top_k, top_p
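The listed prices and supported parameters can be put to work directly. The sketch below estimates per-request cost from the per-token prices above and assembles a request body limited to parameters this model supports. The model identifier and parameter values are illustrative assumptions; check your provider's documentation for the exact model ID and accepted ranges.

```python
# Per-token prices from the spec list above (USD per 1M tokens).
INPUT_PRICE_PER_M = 0.15
OUTPUT_PRICE_PER_M = 0.60

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed prices."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 200K-token prompt with a 4K-token completion.
cost = estimate_cost(200_000, 4_000)
print(f"${cost:.4f}")

# A request body using only parameters from the supported list
# (model ID is an assumed placeholder; values are illustrative).
request_body = {
    "model": "llama-4-maverick",
    "messages": [{"role": "user", "content": "Describe this image."}],
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "max_tokens": 1024,
    "seed": 42,
    "stop": ["</answer>"],
}
```

Because output tokens cost 4x input tokens here, long completions dominate the bill even against very large prompts, which is worth factoring in when setting `max_tokens`.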
Use Llama 4 Maverick with PromptLayer
PromptLayer lets teams manage, evaluate, and observe prompts that run on Llama 4 Maverick alongside every other model in their stack. Version prompts, run evals across models, and ship safe rollouts from the same dashboard.
Ready to try it yourself? Sign up for PromptLayer and start managing your prompts in minutes.