Brief-details: A CLI-based image captioning tool that processes single or batch images using CLIP and LLM models, with NSFW support and customizable output options.
Brief-details: 22B parameter Mistral-based roleplay (RP) model optimized for creative writing. Features 32K context length, QLoRA training, and multiple quantization options.
Brief-details: State-of-the-art e-commerce embedding model with 652M parameters, optimized for product search and retrieval with a 17.6% MRR improvement over existing solutions.
Brief-details: 1.5B parameter Llama-3 model quantized to 8-bit precision, supporting 8 languages with minimal performance loss compared to the base model.
Brief-details: 7B parameter model specialized in function calling, built on Qwen2.5-Coder-7B-Instruct. Features robust performance on the BFCL-v3 benchmark and practical API integration capabilities.
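Function-calling models of this kind are typically driven by an OpenAI-style JSON tool schema and respond with a structured call that the caller parses and dispatches. A minimal sketch of that round trip (the `get_weather` tool, its fields, and the raw model output here are all hypothetical stand-ins):

```python
import json

# Hypothetical tool schema in the common OpenAI-style format that
# function-calling models and benchmarks like BFCL consume.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Stand-in for the model's raw output: a JSON string naming the tool
# to call and its arguments. The caller parses it before dispatching.
raw_output = '{"name": "get_weather", "arguments": {"city": "Tokyo"}}'
call = json.loads(raw_output)
print(call["name"], call["arguments"])  # → get_weather {'city': 'Tokyo'}
```

In a real pipeline the parsed `call["name"]` is matched against the registered tools and the arguments are validated against the schema before the actual function runs.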
Brief-details: Llama-3.2-1B-Instruct is a compact multilingual model supporting 8 languages with 128K context, optimized for dialogue and instruction-following tasks.
Brief-details: A psychedelic maximalist text-to-image LoRA model built on FLUX.1-dev, specialized in generating acid surrealism artwork with vivid, chaotic compositions.
Brief-details: Specialized 7B parameter math-focused LLM supporting both Chain-of-Thought and Tool-integrated Reasoning for solving math problems in English and Chinese.
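In Tool-integrated Reasoning, the model writes a short code snippet for the numeric step and a harness executes it, feeding the result back into the reasoning chain. A minimal sketch of that execution loop, where `model_snippet` stands in for real model output:

```python
# Stand-in for a code snippet emitted by the model during Tool-integrated
# Reasoning; by convention here it binds the final value to `answer`.
model_snippet = "answer = (3**4 - 17) * 2"

# Execute the model-written code in an isolated namespace and read the result.
namespace = {}
exec(model_snippet, namespace)
result = namespace["answer"]
print(result)  # → 128
```

A production harness would of course sandbox the execution and enforce timeouts rather than calling `exec` directly on untrusted model output.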
Brief-details: RetouchFLux, a LoRA enhancement model for FLUX.1-dev focused on image quality improvement with HDR-like effects and luxury aesthetics.
Brief-details: Korean-optimized multilingual embedding model based on BGE-M3, with 568M parameters, achieving strong performance on Korean text similarity tasks and benchmarks.
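Embedding models like this BGE-M3 variant are used for similarity search: each text is mapped to a vector and candidates are ranked by cosine similarity to the query. A minimal scoring sketch with stand-in vectors (a real pipeline would obtain ~1024-dimensional embeddings from the model, e.g. via sentence-transformers):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in 4-d embeddings; real model outputs are much higher-dimensional.
query = np.array([0.9, 0.1, 0.0, 0.2])
docs = {
    "doc_a": np.array([0.8, 0.2, 0.1, 0.1]),
    "doc_b": np.array([0.0, 0.9, 0.4, 0.0]),
}

# Rank documents by similarity to the query, highest first.
ranked = sorted(docs, key=lambda d: cosine_sim(query, docs[d]), reverse=True)
print(ranked[0])  # → doc_a
```

The same scoring applies to the semantic-similarity benchmarks mentioned above: sentence pairs with higher cosine similarity are predicted to be more related.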
Brief-details: Advanced 12B parameter AI reasoning model using a two-model approach for enhanced problem-solving. Built on Mistral-Nemo, it specializes in guided responses.
Brief-details: Qwen2.5-3B-Instruct-GGUF is a 3.4B parameter instruction-tuned LLM optimized for chat and text generation, supporting 29+ languages with 32K context.
Brief-details: FLUX.1-controlnet-lineart AI model trained for line-art conditioning, optimized for artistic control and image generation, with 28K likes and 478 downloads.
Brief-details: A LoRA model for FLUX.1-dev focused on artistic photography, featuring adjustable strength (~1.2 recommended) and optimized guidance scaling for reducing plasticky effects.
Brief-details: Qwen2.5's 7B parameter instruction-tuned model quantized to 8-bit precision. Features enhanced knowledge, coding capabilities, and multilingual support for 29+ languages.
Brief-details: Qwen2.5-1.5B is a compact yet powerful instruction-tuned language model supporting 29+ languages, with a 32K context window, optimized for chat applications.
Brief-details: Japanese ASR model based on Whisper, 756M params, 6.3x faster than large-v3 while achieving competitive CER/WER scores on Japanese speech recognition.
Brief-details: A sophisticated 12.2B parameter language model created using the Model Stock merge method, combining 15+ models with psychology and reasoning capabilities.
Brief-details: Open-source Sora-like video generation model featuring WFVAE, a prompt refiner, and sparse attention. Supports 93-frame 480p generation within 24GB of VRAM with high-quality outputs.
Brief-details: A specialized Uzbek-English language model based on Mistral-Nemo-Instruct, optimized for translation, summarization, and QA tasks with strong BLEU/COMET scores.
Brief-details: ZEBRA, a zero-shot retrieval-augmentation framework for commonsense QA, built on E5-base-v2, achieving significant accuracy improvements across 8 QA datasets.