GLM 4.7 Flash
Z.AI
What is GLM 4.7 Flash?
GLM-4.7-Flash is a 30B-class SOTA model that balances performance and efficiency. It is further optimized for agentic coding use cases, strengthening coding capabilities, long-horizon task planning, ...
Specifications
- Developer: Z.AI
- Context window: 202.8K tokens
- Max output: 16.4K tokens
- Input modalities: text
- Output modalities: text
- Input price: $0.06 per 1M tokens
- Output price: $0.40 per 1M tokens
- Knowledge cutoff: —
- Supported parameters: frequency_penalty, include_reasoning, logit_bias, max_tokens, min_p, presence_penalty, reasoning, repetition_penalty, response_format, seed, stop, structured_outputs, temperature, tool_choice, tools, top_k, top_p
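The listed limits, prices, and parameters can be exercised in code. The sketch below builds a chat-completions-style request body using a subset of the supported parameters and estimates per-request cost from the listed rates. It is a minimal illustration, assuming an OpenAI-compatible request shape; the model id `glm-4.7-flash` and the specific parameter values are assumptions, not documented values.

```python
import json

# Prices from the spec list above, converted to USD per token.
INPUT_PRICE = 0.06 / 1_000_000    # $0.06 per 1M input tokens
OUTPUT_PRICE = 0.40 / 1_000_000   # $0.40 per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

def build_payload(prompt: str) -> dict:
    """Request body using a subset of the supported parameters.

    The model id "glm-4.7-flash" is an assumption; confirm it
    against your provider's model list before use.
    """
    return {
        "model": "glm-4.7-flash",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,    # supported: temperature
        "top_p": 0.95,         # supported: top_p
        "max_tokens": 1024,    # must fit within the 16.4K output cap
        "stop": ["\n\n###"],   # supported: stop sequences
    }

print(json.dumps(build_payload("Summarize this diff."), indent=2))
# A request at the full limits (202.8K in, 16.4K out) costs roughly $0.019:
print(f"${estimate_cost(202_800, 16_400):.4f}")
```

At these rates, even a request that fills the entire context window and output budget stays under two cents, which is the practical meaning of the "performance and efficiency" trade-off claimed above.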
Use GLM 4.7 Flash with PromptLayer
PromptLayer lets teams manage, evaluate, and observe prompts that run on GLM 4.7 Flash alongside every other model in their stack. Version prompts, run evals across models, and ship safe rollouts from the same dashboard.
Ready to try it yourself? Sign up for PromptLayer and start managing your prompts in minutes.