Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled-IQ4_XS-GGUF
lordx64
Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled-IQ4_XS-GGUF is a text-generation model from lordx64 on Hugging Face, with 42K downloads.
What is Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled-IQ4_XS-GGUF?
Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled-IQ4_XS-GGUF is an open-weight text-generation model released by lordx64 on Hugging Face. As the name indicates, it is an IQ4_XS-quantized GGUF build of the base model lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled. It has been downloaded 42K times and received 25 likes since release.
Specifications
- Developer: lordx64
- Library: gguf
- Pipeline tag: text-generation
- License: apache-2.0
- Base model: lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled
- Downloads: 42K
- Likes: 25
- Created: 2026-04-20
- Last modified: —
- Hugging Face: lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled-IQ4_XS-GGUF
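Because the repository ships GGUF weights, one common way to try the model locally is with llama.cpp. The sketch below assumes a recent llama.cpp build whose `llama-cli` supports the `-hf` flag for downloading a GGUF straight from a Hugging Face repo; the prompt and token count are illustrative, and the exact quantized file chosen from the repo may vary.

```shell
# Minimal sketch: run the IQ4_XS GGUF locally with llama.cpp.
# Assumes llama.cpp is installed and its -hf flag is available,
# which fetches the GGUF file directly from Hugging Face.
llama-cli \
  -hf lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled-IQ4_XS-GGUF \
  -p "Explain the difference between distillation and quantization." \
  -n 256
```

The same `-hf` argument works with `llama-server` if you prefer to expose the model behind a local OpenAI-compatible endpoint instead of a one-off CLI run.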
Use Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled-IQ4_XS-GGUF with PromptLayer
PromptLayer lets teams manage, evaluate, and observe prompts that run on Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled-IQ4_XS-GGUF alongside every other model in their stack. Version prompts, run evals across models, and ship safe rollouts from the same dashboard.
Ready to try it yourself? Sign up for PromptLayer and start managing your prompts in minutes.