Llama 3.2 11B Vision Instruct
Meta
What is Llama 3.2 11B Vision Instruct?
Llama 3.2 11B Vision Instruct is an 11-billion-parameter multimodal model from Meta, designed to handle tasks that combine visual and textual data. It excels at tasks such as image captioning and visual question answering.
Specifications
- Developer: Meta
- Context window: 131,072 tokens (128K)
- Max output: 16,384 tokens
- Input modalities: text, image
- Output modalities: text
- Input price: $0.2450 per 1M tokens
- Output price: $0.2450 per 1M tokens
- Knowledge cutoff: 2023-12-31
- Supported parameters: frequency_penalty, logit_bias, max_tokens, min_p, presence_penalty, repetition_penalty, response_format, seed, stop, temperature, top_k, top_p (a request sketch using several of these follows below)
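As a rough illustration of how the input modalities and sampling parameters above come together, here is a minimal sketch of a multimodal chat-completions request, assuming the model is served through an OpenAI-compatible endpoint (such as OpenRouter). The base URL, API key, image URL, and model slug are assumptions, not values from this page; substitute whatever your provider uses.

```python
# Minimal sketch: send text plus an image to Llama 3.2 11B Vision Instruct
# through an OpenAI-compatible API. Endpoint, key, and model slug are assumed.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed provider endpoint
    api_key="YOUR_API_KEY",                   # placeholder
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.2-11b-vision-instruct",  # assumed model slug
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    # A few of the supported sampling parameters listed above
    max_tokens=256,
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
)

print(response.choices[0].message.content)
```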
Use Llama 3.2 11B Vision Instruct with PromptLayer
PromptLayer lets teams manage, evaluate, and observe prompts that run on Llama 3.2 11B Vision Instruct alongside every other model in their stack. Version prompts, run evals across models, and ship safe rollouts from the same dashboard.
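The sketch below shows one way this can look in practice, using PromptLayer's Python SDK pattern of wrapping an OpenAI-compatible client so each request is logged for versioning and evaluation. The provider endpoint and model slug are assumptions; adjust them to match how you access the model.

```python
# Minimal sketch: log Llama 3.2 11B Vision Instruct requests in PromptLayer by
# wrapping an OpenAI-compatible client. Endpoint and model slug are assumed.
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_YOUR_PROMPTLAYER_KEY")  # placeholder

# Wrap the OpenAI client so calls are tracked in the PromptLayer dashboard.
OpenAI = promptlayer_client.openai.OpenAI
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed provider endpoint
    api_key="YOUR_PROVIDER_KEY",              # placeholder
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.2-11b-vision-instruct",  # assumed model slug
    messages=[{"role": "user", "content": "Summarize what Llama 3.2 Vision can do."}],
    pl_tags=["llama-3.2-vision"],  # tag for filtering requests in PromptLayer
)
print(response.choices[0].message.content)
```

From there, the same logged requests can be versioned, compared across models, and evaluated in the PromptLayer dashboard.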
Ready to try it yourself? Sign up for PromptLayer and start managing your prompts in minutes.