Brief Details: A specialized LoRA model by OedoSoldier designed to enhance image detail and quality in Stable Diffusion workflows, available on HuggingFace.
Brief-details: Lightweight 2B parameter multilingual LM supporting 6 European languages, optimized for edge devices. Achieves strong performance across NLP tasks, with 60.7% average accuracy on English benchmarks.
Brief Details: A specialized Sparse Autoencoder (SAE) trained on layer 9 of Meta's Llama-3.2-1B-Instruct, designed to decompose neural activations into interpretable features. Achieves L0=63 during training.
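The SAE entry above can be illustrated with a toy sketch: a ReLU encoder produces a sparse feature vector whose L0 (count of nonzero features) is the metric cited. All sizes and weights below are invented for illustration; only the structure (linear encode, ReLU, linear decode, L0 count) reflects how such SAEs work.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae = 16, 64          # toy sizes; the real SAE is far wider
W_enc = rng.normal(size=(d_model, d_sae)) * 0.1
b_enc = np.zeros(d_sae)
W_dec = rng.normal(size=(d_sae, d_model)) * 0.1

def encode(x):
    # ReLU encoder: most features stay at zero, giving a sparse code
    return np.maximum(x @ W_enc + b_enc, 0.0)

def decode(f):
    # Linear decoder reconstructs the original activation from the sparse code
    return f @ W_dec

x = rng.normal(size=d_model)     # stand-in for a layer-9 residual activation
f = encode(x)
x_hat = decode(f)
l0 = int((f > 0).sum())          # L0 = number of active features
```

In the real model the "L0=63" figure is this count averaged over training activations.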
BRIEF-DETAILS: Hunyuan3D-2 is a specialized 3D model generator by Kijai, available via HuggingFace, optimized for detailed 3D object creation and manipulation.
Brief-details: A 4-bit quantized version of DeepSeek's 70B parameter LLaMA model, optimized for the MLX framework; it preserves performance while reducing the memory footprint.
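4-bit quantization like the MLX conversion above trades precision for memory. A minimal numpy sketch of symmetric group-wise int4 quantization (illustrative only; this is not MLX's actual kernel, and the group size and scheme are assumptions):

```python
import numpy as np

def quantize_4bit(w, group_size=32):
    """Symmetric per-group 4-bit quantization (illustrative sketch)."""
    w = w.reshape(-1, group_size)
    # One scale per group; int4 symmetric range is -7..7
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Reconstruct approximate float weights from int4 codes + scales
    return (q * scale).reshape(-1)

w = np.random.default_rng(1).normal(size=256).astype(np.float32)
q, scale = quantize_4bit(w)
w_hat = dequantize(q, scale)
err = float(np.abs(w - w_hat).max())
```

Each weight now needs 4 bits plus a shared per-group scale instead of 16 or 32 bits, which is where the roughly 4-8x memory reduction comes from.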
Brief Details: DeepSeek-R1-Distill-Llama-8B-Abliterated-GGUF is a quantized version of the DeepSeek LLM, offering various compression options from 3.3GB to 16.2GB with different quality-performance tradeoffs.
Brief Details: DeepSeek-R1-3bit is a 3-bit quantized version of DeepSeek-R1, optimized for MLX framework with efficient inference capabilities.
Brief Details: A specialized LoRA model for generating wire art and black-and-white drawings, trained on 19 images; recommended settings are a 3:2 aspect ratio and 30-35 inference steps.
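LoRA adapters like the ones above modify a frozen base model through a low-rank weight update. A toy numpy sketch of the standard merge rule W' = W + (alpha/r) * B @ A; the shapes, rank, and strength below are invented, and B starts at zero as in standard LoRA initialization:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 4, 8   # toy shapes; real diffusion layers are larger

W = rng.normal(size=(d_out, d_in))   # frozen base weight
A = rng.normal(size=(r, d_in))       # LoRA down-projection (trained)
B = np.zeros((d_out, r))             # LoRA up-projection (zero at init)

def apply_lora(W, A, B, alpha, r, weight=1.0):
    # W' = W + weight * (alpha / r) * B @ A
    return W + weight * (alpha / r) * (B @ A)

W_merged = apply_lora(W, A, B, alpha, r, weight=0.8)
# With B still zero the adapter is a no-op, as at the start of training
```

The `weight` factor is the user-facing "LoRA strength" slider in most Stable Diffusion UIs.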
Brief Details: 4-bit quantized 14B parameter version of DeepSeek-R1 optimized by Unsloth, featuring dynamic quantization for better accuracy and memory efficiency.
Brief-details: Advanced reasoning-focused LLM with BF16 precision, part of DeepSeek-R1 family. Excels at mathematical, coding, and complex reasoning tasks with 37B active parameters.
BRIEF-DETAILS: Russian-adapted Qwen2.5 32B model with enhanced tokenization for 60% faster Russian text generation, featuring LEP technique and continued pretraining.
Brief Details: MahaKumbh-Llama3.3-70B is a 70B parameter language model developed by IVentureISB, based on Llama architecture, focused on large-scale language processing capabilities.
Brief Details: An 8B parameter LLaMA-based language model fine-tuned for chain-of-thought reasoning tasks, developed by Shaleen123 and hosted on Hugging Face.
Brief-details: A merged language model combining Phi-4 variants with SuperThoughts-CoT, achieving strong performance on IFEval (63.75%) and BBH (54.69%). Built using mergekit with bfloat16 precision.
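Merges like the one above are produced with mergekit, which combines checkpoints tensor by tensor. A minimal sketch of the simplest method, linear interpolation, at toy scale (the actual recipe and method used for this model are not shown here):

```python
import numpy as np

def linear_merge(weights_a, weights_b, t=0.5):
    """Interpolate two checkpoints' tensors key by key.
    Simplified stand-in for a mergekit linear merge; t is the mix ratio."""
    return {k: (1 - t) * weights_a[k] + t * weights_b[k] for k in weights_a}

# Toy "state dicts" with one matching tensor each
a = {"layer.0.weight": np.ones((2, 2))}
b = {"layer.0.weight": np.full((2, 2), 3.0)}

merged = linear_merge(a, b, t=0.5)
```

Real merges run this over every shared tensor (in bfloat16 here) and mergekit offers more elaborate methods (SLERP, TIES, DARE) beyond plain interpolation.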
Brief Details: 11B parameter vision-language model based on the Llama architecture, optimized for instruction following with GGUF format compatibility.
Brief-details: Quantized GGUF/FP8 model for video and text-to-world generation, featuring 7B parameters with specialized ComfyUI workflows.
Brief-details: PaSa-7B-Selector is a specialized LLM agent designed for comprehensive academic paper search and analysis, developed by ByteDance Research to enhance scholarly research workflows.
BRIEF-DETAILS: FAST is an efficient action tokenizer for robotics that converts robot action sequences into discrete tokens for vision-language-action models.
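Action tokenization maps continuous robot actions into discrete tokens a language model can predict. A naive per-dimension binning sketch to show the idea (FAST itself uses a learned, compression-based scheme, not this; the bounds and bin count below are assumptions):

```python
import numpy as np

def tokenize_actions(actions, low=-1.0, high=1.0, n_bins=256):
    """Map continuous actions in [low, high] to integer bin indices."""
    clipped = np.clip(actions, low, high)
    bins = np.round((clipped - low) / (high - low) * (n_bins - 1)).astype(int)
    return bins.flatten()   # one token stream per action chunk

def detokenize_actions(tokens, action_dim, low=-1.0, high=1.0, n_bins=256):
    """Invert the binning back to (approximate) continuous actions."""
    vals = tokens.astype(float) / (n_bins - 1) * (high - low) + low
    return vals.reshape(-1, action_dim)

chunk = np.array([[0.1, -0.5, 0.9],
                  [0.0, 0.3, -1.0]])        # 2 timesteps of a 3-DoF action
tokens = tokenize_actions(chunk)
recovered = detokenize_actions(tokens, action_dim=3)
```

FAST's contribution is producing far fewer tokens per action chunk than this kind of per-step binning, which is what makes high-frequency control tractable for VLA models.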
Brief Details: ReaderLM-v2-GGUF is a quantized version of ReaderLM-v2, offering multiple compression variants from 0.9GB to 3.7GB with different quality-size tradeoffs.
Brief Details: Specialized 2B parameter medical LLM built on Gemma2, focused on clinical medicine with emphasis on limited-resource settings and humanitarian care.
Brief Details: GLM-4-9B is a 9B parameter open-source LLM supporting 26 languages with an 8K context window; it achieves strong performance on reasoning and code tasks.