LFM2.5-1.2B-Instruct (free)
Liquid
What is LFM2.5-1.2B-Instruct (free)?
LFM2.5-1.2B-Instruct is a compact, high-performance instruction-tuned model built for fast on-device AI. It delivers strong chat quality in a 1.2B parameter footprint, with efficient edge inference and broad runtime support.
Specifications
- Developer: Liquid
- Context window: 32.8K tokens
- Max output: — tokens
- Input modalities: text
- Output modalities: text
- Input price: Free
- Output price: Free
- Knowledge cutoff: —
- Supported parameters: frequency_penalty, max_tokens, min_p, presence_penalty, repetition_penalty, seed, stop, temperature, top_k, top_p
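As a minimal sketch of how the supported parameters above might be used, the snippet below builds a chat-completion request body in the OpenAI-compatible style that most hosted runtimes accept. The model identifier and the specific parameter values are illustrative assumptions, not official defaults:

```python
import json

# Sketch: a chat-completion request using the sampling parameters
# listed in the Specifications section, assuming an OpenAI-compatible
# endpoint. The model ID below is a hypothetical placeholder.
payload = {
    "model": "liquid/lfm2.5-1.2b-instruct",  # hypothetical identifier
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize edge inference in one sentence."},
    ],
    # Each key below comes from the supported-parameters list.
    "max_tokens": 256,          # cap on generated tokens
    "temperature": 0.7,         # sampling randomness
    "top_p": 0.9,               # nucleus sampling cutoff
    "top_k": 40,                # top-k sampling cutoff
    "min_p": 0.05,              # minimum relative token probability
    "repetition_penalty": 1.1,  # discourage verbatim repetition
    "frequency_penalty": 0.0,   # penalize frequent tokens
    "presence_penalty": 0.0,    # penalize already-seen tokens
    "seed": 42,                 # best-effort reproducibility
    "stop": ["</s>"],           # stop sequence(s)
}

print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the provider's `/chat/completions` route with an API key; exact endpoint and authentication details depend on where the model is hosted.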
Use LFM2.5-1.2B-Instruct (free) with PromptLayer
PromptLayer lets teams manage, evaluate, and observe prompts that run on LFM2.5-1.2B-Instruct (free) alongside every other model in their stack. Version prompts, run evals across models, and ship safe rollouts from the same dashboard.
Ready to try it yourself? Sign up for PromptLayer and start managing your prompts in minutes.