Brief Details: IllustMixLuminous is a specialized text-to-image diffusion model built on the Illustrious base model, featuring a baked-in VAE and optimized for high-quality anime-style illustrations.
Brief Details: Visual retrieval model combining a PaliGemma-3B backbone with a ColBERT-style late-interaction strategy for efficient document indexing, trained on 127k query-page pairs and released under the MIT license.
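
ColBERT-style late interaction means the query and each page are kept as multi-vector embeddings and scored with a MaxSim sum rather than a single dot product. A minimal PyTorch sketch of that scoring step; the shapes and the 128-dim embedding size are illustrative assumptions, not the model's actual dimensions:

```python
import torch

def maxsim_score(query_emb: torch.Tensor, page_emb: torch.Tensor) -> torch.Tensor:
    """Late-interaction (ColBERT-style) relevance score.

    query_emb: (num_query_tokens, dim) L2-normalized query token embeddings
    page_emb:  (num_page_patches, dim) L2-normalized page patch embeddings
    """
    # Cosine similarity between every query token and every page patch.
    sim = query_emb @ page_emb.T  # (q_tokens, p_patches)
    # For each query token, keep its best-matching patch, then sum over tokens.
    return sim.max(dim=1).values.sum()

# Toy example with random embeddings (sizes are assumptions for illustration).
q = torch.nn.functional.normalize(torch.randn(16, 128), dim=-1)
p = torch.nn.functional.normalize(torch.randn(1024, 128), dim=-1)
print(maxsim_score(q, p))
```
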
Brief Details: A 494M-parameter code-specialized LLM from Qwen's 2.5 series, featuring a 32K context length and optimized for code generation, code reasoning, and code fixing.
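
A minimal sketch of chat-style code generation with this model through transformers, assuming the Hub ID Qwen/Qwen2.5-Coder-0.5B-Instruct:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-0.5B-Instruct"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
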
Brief Details: A 12.2B-parameter Mistral-based merge combining multiple language models via the Model Stock method, optimized for text generation and conversational tasks.
Brief Details: OpenCoder-1.5B-Instruct is a bilingual (English/Chinese) code LLM with 1.91B parameters, trained on 2.5T tokens. It excels at code generation, scoring 72.5% on HumanEval.
Brief Details: A LoRA model for FLUX.1-dev focused on yellow-pop art style generation. Trained on 22 images with a network dimension of 64 and a constant learning-rate schedule; optimized for 1024x1024 output.
Brief Details: YosoNormal v1.5 - Specialized image-to-image model for generating stable normal maps with reduced diffusion variance, supporting multiple scene types.
Brief Details: 4-bit GPTQ-quantized version of the Llama 3.2 1B instruction model, supporting 8 languages. Optimized for memory efficiency; the packed 4-bit checkpoint reports 682M parameters.
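
GPTQ checkpoints load through the standard transformers API once a GPTQ kernel backend is installed; a minimal sketch, where the Hub ID below is a hypothetical placeholder:

```python
# Requires a GPTQ backend, e.g. `pip install gptqmodel` (or auto-gptq).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/Llama-3.2-1B-Instruct-GPTQ-4bit"  # hypothetical Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config is read from the checkpoint; weights stay 4-bit in memory.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain GPTQ quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
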
Brief Details: MobileLLM-350M is an efficiency-optimized transformer with 345.3M parameters, designed for on-device use via a deep-and-thin architecture, embedding sharing, and grouped-query attention.
Brief Details: EVA-Qwen2.5-14B-v0.2 is a 14.8B-parameter language model specialized for roleplay and storytelling, fine-tuned from Qwen2.5-14B on high-quality synthetic and natural datasets.
Brief Details: ONNX-optimized export of the Qwen2-VL-2B-Instruct model, offering Q4 and F16 quantization options for efficient vision-language inference.
Brief Details: A 465B-parameter Japanese-English language model built via sparse upcycling (initializing a mixture-of-experts from a dense checkpoint); inference requires roughly 16 H100 or A100 80GB GPUs.
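
The GPU count follows from back-of-the-envelope memory arithmetic, assuming bf16 weights on 80GB cards:

```python
params = 465e9        # model parameters
bytes_per_param = 2   # bf16
gpu_mem_gb = 80       # H100/A100 80GB

weights_gb = params * bytes_per_param / 1e9  # ~930 GB for weights alone
min_gpus = weights_gb / gpu_mem_gb           # ~11.6 GPUs before any overhead
print(f"{weights_gb:.0f} GB of weights -> >= {min_gpus:.1f} GPUs")
# Headroom for KV cache and activations pushes the practical count to 16.
```
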
Brief Details: A LoRA model trained on the FLUX.1-dev base model, specialized in cloud- and nature-themed image generation; trained on 16 high-resolution images with a constant learning-rate schedule.
Brief Details: A specialized LoRA model for generating 2D game assets and pixel art, built on the FLUX.1-dev base model and optimized for creating game assets on white backgrounds.
Brief Details: A personal merge based on ChromaXL Mix 0.75, tuned for a low CFG scale (around 3) and 12+ sampling steps, with ZoinksNoobTest and JinkiesNoob variants for different use cases.
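
Those recommendations map directly onto diffusers call arguments; a minimal sketch assuming an SDXL-family single-file checkpoint (the file name is illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# File name is a placeholder for whichever variant you downloaded.
pipe = StableDiffusionXLPipeline.from_single_file(
    "ZoinksNoobTest.safetensors", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "1girl, detailed background",
    guidance_scale=3.0,      # the recommended low CFG
    num_inference_steps=12,  # the recommended minimum step count
).images[0]
image.save("out.png")
```
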
Brief Details: A LoRA model for FLUX.1-dev that generates colorful fantasy ink illustrations in a vibrant, high-impact style, activated by the trigger word "fae_ink".
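
In diffusers, loading a FLUX.1-dev LoRA and activating it through its trigger word looks roughly like this (the LoRA repo ID is a hypothetical placeholder):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# Repo ID below is hypothetical; substitute the actual LoRA repository.
pipe.load_lora_weights("your-org/fae-ink-lora")

# Including the trigger word "fae_ink" in the prompt activates the learned style.
image = pipe(
    "fae_ink, a fox spirit in a moonlit forest, colorful fantasy ink illustration",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("fae_ink_fox.png")
```
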
Brief Details: Florence-2-large-PromptGen-v2.0 is an 823M-parameter image captioning model that runs in about 1GB of VRAM and offers multiple caption styles and image-analysis capabilities.
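
Florence-2 models select their output style through task prompts; a minimal sketch, assuming the Hub ID MiaoshouAI/Florence-2-large-PromptGen-v2.0 and the generic "<CAPTION>" task:

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "MiaoshouAI/Florence-2-large-PromptGen-v2.0"  # assumed Hub ID
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype=torch.float16
).to("cuda")
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("photo.jpg")
# The task prompt picks the caption style; "<CAPTION>" is one of several.
inputs = processor(text="<CAPTION>", images=image, return_tensors="pt").to(
    "cuda", torch.float16
)
generated = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=256,
)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```
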
Brief Details: TIPO-100M is a 100M parameter LLaMA-based model for text-to-image prompt optimization, trained on multiple datasets including Danbooru and LAION.
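
A prompt optimizer like this is driven as an ordinary causal LM: give it a terse prompt and sample an expanded one. A minimal sketch, assuming the Hub ID KBlueLeaf/TIPO-100M and ignoring the model's own prompt-formatting conventions:

```python
from transformers import pipeline

# Hub ID assumed; TIPO expands terse tags into detailed text-to-image prompts.
generator = pipeline("text-generation", model="KBlueLeaf/TIPO-100M")

short_prompt = "1girl, cherry blossoms"
expanded = generator(short_prompt, max_new_tokens=128, do_sample=True, temperature=0.8)
print(expanded[0]["generated_text"])
```
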
Brief Details: NVIDIA's Cosmos-Tokenizer-CI8x8 is a continuous image tokenizer offering 8x8 spatial compression with high reconstruction quality and fast processing speeds.
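
8x8 spatial compression means each 8x8 pixel patch becomes one continuous latent vector, so the latent grid size is just the image size divided by 8:

```python
def latent_grid(height: int, width: int, factor: int = 8) -> tuple[int, int]:
    """Spatial size of the continuous latent for an 8x8 tokenizer."""
    assert height % factor == 0 and width % factor == 0
    return height // factor, width // factor

# A 1024x1024 image compresses to a 128x128 grid of latent vectors.
print(latent_grid(1024, 1024))  # (128, 128)
```
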
Brief Details: A specialized LoRA model trained on FLUX.1-dev for generating sticker-style illustrations with white backgrounds. Trained on 15 high-resolution images and optimized for 1024x1024 resolution.
Brief Details: Qwen2.5-0.5B-200K-GGUF is a compact 494M-parameter language model with extended 200K context, packaged in GGUF format with multiple quantization options and Ollama compatibility.
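
GGUF checkpoints run locally through llama.cpp bindings; a minimal llama-cpp-python sketch (the file name is a placeholder for whichever quantization level you download):

```python
from llama_cpp import Llama

# File name is a placeholder; pick any of the provided quantization levels.
llm = Llama(model_path="qwen2.5-0.5b-200k-q4_k_m.gguf", n_ctx=8192)

out = llm("Summarize what a GGUF file is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```
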