Mercury 2

What is Mercury 2?

Mercury 2 is an extremely fast reasoning LLM, and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel.

Specifications

  • Developer: Inception
  • Context window: 128K tokens
  • Max output: 50K tokens
  • Input modalities: text
  • Output modalities: text
  • Input price: $0.25 per 1M tokens
  • Output price: $0.75 per 1M tokens
  • Knowledge cutoff:
  • Supported parameters: include_reasoning, max_tokens, reasoning, response_format, stop, structured_outputs, temperature, tool_choice, tools
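The listed prices make per-request costs easy to estimate. A minimal sketch, using only the input and output rates above ($0.25 and $0.75 per 1M tokens):

```python
# Mercury 2's listed prices, in dollars per 1M tokens.
INPUT_PRICE_PER_M = 0.25
OUTPUT_PRICE_PER_M = 0.75

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost for a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 100K-token prompt with a 10K-token completion.
print(round(estimate_cost(100_000, 10_000), 4))  # → 0.0325
```

Note that even a prompt near the 128K context window stays well under a dime per request at these rates.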

Use Mercury 2 with PromptLayer

PromptLayer lets teams manage, evaluate, and observe prompts that run on Mercury 2 alongside every other model in their stack. Version prompts, run evals across models, and ship safe rollouts from the same dashboard.

Ready to try it yourself? Sign up for PromptLayer and start managing your prompts in minutes.
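The supported parameters listed above map onto a familiar chat-completions-style request body. A minimal sketch of one such body; the model slug "inception/mercury-2" is an assumption, so check your provider's catalog for the exact identifier:

```python
# Hypothetical request body for Mercury 2 using a few of the
# supported parameters listed in the specifications above.
# The "inception/mercury-2" slug is an assumption, not confirmed.
payload = {
    "model": "inception/mercury-2",
    "messages": [
        {"role": "user", "content": "Summarize diffusion LLMs in one paragraph."}
    ],
    "max_tokens": 1024,          # must stay within the 50K max output
    "temperature": 0.7,
    "include_reasoning": True,   # ask for the model's reasoning trace
    "stop": ["\n\n###"],
}

print(payload["model"])
```

The same body can be sent through whichever OpenAI-compatible client or HTTP library your stack already uses.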
