GLM 4.5V

What is GLM 4.5V?

GLM-4.5V is a vision-language foundation model for multimodal agent applications. Built on a Mixture-of-Experts (MoE) architecture with 106B parameters and 12B activated parameters, it achieves state-of-the-art results in video understanding,...

Specifications

  • Developer: Z.AI
  • Context window: 65.5K tokens
  • Max output: 16.4K tokens
  • Input modalities: text, image
  • Output modalities: text
  • Input price: $0.60 per 1M tokens
  • Output price: $1.80 per 1M tokens
  • Knowledge cutoff: 2024-12-31
  • Supported parameters: frequency_penalty, include_reasoning, max_tokens, presence_penalty, reasoning, repetition_penalty, seed, stop, temperature, tool_choice, tools, top_k, top_p
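As a rough illustration, the supported parameters above map onto an OpenAI-style chat completions request body. This is only a sketch: the model identifier (`z-ai/glm-4.5v`), endpoint, and image URL are placeholder assumptions, not values confirmed by this page — substitute whatever your provider documents.

```python
import json

# Sketch of an OpenAI-compatible chat request for GLM-4.5V.
# Model id and image URL are assumptions/placeholders.
payload = {
    "model": "z-ai/glm-4.5v",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
    # Sampling controls from the supported-parameters list.
    "temperature": 0.7,
    "top_p": 0.9,
    "max_tokens": 1024,  # must stay within the 16.4K output limit
}

body = json.dumps(payload)
```

The request would then be POSTed to the provider's chat completions endpoint with an `Authorization` header; `tools` and `tool_choice` can be added to the same payload for agentic use.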

Use GLM 4.5V with PromptLayer

PromptLayer lets teams manage, evaluate, and observe prompts that run on GLM 4.5V alongside every other model in their stack. Version prompts, run evals across models, and ship safe rollouts from the same dashboard.

Ready to try it yourself? Sign up for PromptLayer and start managing your prompts in minutes.
