DeepSeek V3.2
What is DeepSeek V3.2?
DeepSeek-V3.2 is a large language model designed to balance high computational efficiency with strong reasoning and agentic tool-use performance. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism in which each query attends only to a selected subset of tokens rather than the full context, reducing long-context training and inference costs while preserving output quality.
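To make the idea concrete, here is a toy sketch of fine-grained sparse attention: for each query, score all keys, keep only the top-k, and run softmax attention over just those survivors. This is an illustration of the general top-k selection pattern, not DeepSeek's actual indexer or kernel; the function names and the choice of k are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def topk_sparse_attention(q, K, V, k=4):
    """Attend only to the top-k keys by dot-product score,
    instead of all seq_len keys as in dense attention."""
    scores = K @ q / np.sqrt(q.shape[-1])   # (seq_len,) similarity scores
    top = np.argsort(scores)[-k:]           # indices of the k best-scoring tokens
    weights = softmax(scores[top])          # softmax over the selected subset only
    return weights @ V[top]                 # weighted sum of the selected values

rng = np.random.default_rng(0)
seq_len, d = 16, 8
K = rng.normal(size=(seq_len, d))
V = rng.normal(size=(seq_len, d))
q = rng.normal(size=d)
out = topk_sparse_attention(q, K, V, k=4)
print(out.shape)  # (8,)
```

Dense attention here would cost O(seq_len) score-and-sum work per query; the sparse variant pays the scoring pass but sums over only k values, which is where the savings come from at long context lengths.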
Specifications
- Developer: DeepSeek
- Context window: 131,072 tokens (128K)
- Max output: 65,536 tokens (64K)
- Input modalities: text
- Output modalities: text
- Input price: $0.252 per 1M tokens
- Output price: $0.378 per 1M tokens
- Knowledge cutoff: —
- Supported parameters: frequency_penalty, include_reasoning, logit_bias, max_tokens, min_p, presence_penalty, reasoning, repetition_penalty, response_format, seed, stop, structured_outputs, temperature, tool_choice, tools, top_k, top_p
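Since the model exposes standard OpenAI-style sampling parameters, a request can be expressed as an ordinary chat-completions payload. The sketch below just builds the JSON body; the model identifier `deepseek/deepseek-v3.2`, the endpoint, and the specific parameter values are assumptions for illustration, so check your provider's documentation for the exact model ID.

```python
import json

# Hypothetical OpenAI-compatible chat-completions request body.
# The model ID below is an assumption; verify it with your provider.
payload = {
    "model": "deepseek/deepseek-v3.2",
    "messages": [
        {"role": "user", "content": "Summarize sparse attention in one sentence."}
    ],
    # A few of the supported parameters listed above:
    "temperature": 0.7,
    "top_p": 0.95,
    "max_tokens": 512,
    "frequency_penalty": 0.1,
    "seed": 42,
}
body = json.dumps(payload)
print(json.loads(body)["model"])  # deepseek/deepseek-v3.2
```

POSTing this body to an OpenAI-compatible `/chat/completions` endpoint with your API key in the `Authorization` header is the usual pattern; `tools` and `tool_choice` from the list above slot into the same payload for agentic use.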
Use DeepSeek V3.2 with PromptLayer
PromptLayer lets teams manage, evaluate, and observe prompts that run on DeepSeek V3.2 alongside every other model in their stack. Version prompts, run evals across models, and ship safe rollouts from the same dashboard.
Ready to try it yourself? Sign up for PromptLayer and start managing your prompts in minutes.