Qwen3 235B A22B Thinking 2507

Alibaba (Qwen)

What is Qwen3 235B A22B Thinking 2507?

Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports a context length of 262,144 tokens.

Specifications

  • Developer: Alibaba (Qwen)
  • Context window: 131.1K tokens
  • Max output: —
  • Input modalities: text
  • Output modalities: text
  • Input price: $0.1495 per 1M tokens
  • Output price: $1.50 per 1M tokens
  • Knowledge cutoff: 2025-06-30
  • Supported parameters: frequency_penalty, include_reasoning, logit_bias, max_tokens, min_p, presence_penalty, reasoning, repetition_penalty, response_format, seed, stop, structured_outputs, temperature, tool_choice, tools, top_k, top_p
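The per-token prices above translate directly into per-request cost, and the supported parameters map onto a standard chat-completions payload. A minimal sketch (the model slug, the payload shape, and the `estimate_cost` helper are illustrative assumptions, not PromptLayer's or Qwen's API):

```python
# Listed rates from the spec table above (USD per 1M tokens).
INPUT_PRICE_PER_1M = 0.1495
OUTPUT_PRICE_PER_1M = 1.50

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough USD cost of one request at the listed per-1M-token rates."""
    return (input_tokens * INPUT_PRICE_PER_1M
            + output_tokens * OUTPUT_PRICE_PER_1M) / 1_000_000

# Hypothetical OpenAI-compatible payload using a few supported parameters.
payload = {
    "model": "qwen/qwen3-235b-a22b-thinking-2507",  # illustrative slug
    "messages": [
        {"role": "user", "content": "Prove that sqrt(2) is irrational."}
    ],
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,
    "max_tokens": 4096,
}

# 10K input + 2K output tokens:
print(round(estimate_cost(10_000, 2_000), 6))  # prints 0.004495
```

Reasoning-tuned models tend to emit long chains of thought, so output tokens (billed at 10x the input rate here) usually dominate the bill; budgeting `max_tokens` accordingly is the main cost lever.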

Use Qwen3 235B A22B Thinking 2507 with PromptLayer

PromptLayer lets teams manage, evaluate, and observe prompts that run on Qwen3 235B A22B Thinking 2507 alongside every other model in their stack. Version prompts, run evals across models, and ship safe rollouts from the same dashboard.

Ready to try it yourself? Sign up for PromptLayer and start managing your prompts in minutes.
