Brief-details: OLMo-2 7B DPO model optimized for chat & tasks like MATH/GSM8K. Apache 2.0 licensed, trained on Tülu 3 dataset with length-normalized DPO approach.
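The length-normalized DPO objective mentioned above divides each response's log-probability ratio by its token count before comparing chosen and rejected completions. A minimal sketch (the function name, `beta` value, and toy log-probs are illustrative assumptions, not details from the model card):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def length_normalized_dpo_loss(logp_chosen: float, logp_rejected: float,
                               ref_chosen: float, ref_rejected: float,
                               len_chosen: int, len_rejected: int,
                               beta: float = 5.0) -> float:
    # Per-token log-prob ratios: dividing by response length removes
    # the bias toward longer answers that plain DPO can develop.
    r_chosen = (logp_chosen - ref_chosen) / len_chosen
    r_rejected = (logp_rejected - ref_rejected) / len_rejected
    # Standard DPO sigmoid loss on the length-normalized margin.
    return -math.log(sigmoid(beta * (r_chosen - r_rejected)))
```

At zero margin the loss is log 2 ≈ 0.693; it shrinks as the policy prefers the chosen response more strongly than the reference model does.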
Brief-details: A 12.2B parameter GGUF model optimized for creative writing and roleplaying, featuring high emotional-intelligence scores and merged using the Model Stock methodology.
Brief-details: Multilingual Indian-language model supporting 12 languages, quantized for efficient deployment with 3.21B parameters and multiple GGUF variants.
Brief-details: A specialized diffusion model implementation that combines FLUX with the Redux architecture, utilizing SigLIP for image feature processing and T5 embeddings integration.
Brief-details: An 8B parameter merged LLM combining adventure, writing, and multilingual capabilities using Model Stock method. Built on LLaMA3 with transformers architecture.
Brief-details: AI model for generating anime-style images using the FLUX architecture. Features custom LoRA weights, requires the "anm" trigger word, and operates under a non-commercial license.
Brief-details: A quantized 7.24B parameter conversational AI model with multiple GGUF variants optimized for different performance/size tradeoffs, featuring imatrix quantization.
Brief-details: Qwen2.5-Coder-32B-Instruct-3bit is an MLX-optimized, 3-bit quantization of the 32B-parameter Qwen2.5 coding model, designed for code generation and chat interactions.
Brief-details: NoobAI XL V-Pred based merged model focused on pencil-style outputs, with strict licensing terms prohibiting commercial use and requiring open-source sharing of derivatives.
Brief-details: GGUF-optimized version of Stable Diffusion 3.5 Medium with various quantization options, offering efficient text-to-image generation across different precision levels.
Brief-details: An 8B parameter GGUF-quantized LLM fine-tuned on insurance domain data, based on Llama 3. Optimized for insurance Q&A with multiple quantization options.
Brief-details: A 7.62B parameter merged LLM ranking #1 among sub-13B models, combining HomerAnvita-NerdMix and HomerCreative-Mix using the SLERP merge method.
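The SLERP merge method named above interpolates along the great-circle arc between two weight vectors instead of the straight line, preserving vector norm. A minimal pure-Python sketch (function and parameter names are illustrative; real merges apply this tensor-by-tensor across full checkpoints):

```python
import math

def slerp(t: float, v0: list, v1: list, eps: float = 1e-8) -> list:
    # Spherical linear interpolation between two weight vectors.
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    cos_theta = max(-1.0, min(1.0, dot / (n0 * n1 + eps)))
    theta = math.acos(cos_theta)
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

At t=0.5 between two orthogonal unit vectors, SLERP returns the unit-norm midpoint rather than the shorter chord midpoint a linear average would give.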
Brief-details: A PEFT-based mathematical language model built on vinallama-7b-chat, specialized for mathematical reasoning and distributed in safetensors format.
Brief-details: A quantized 7.62B parameter Qwen2.5 model optimized for creative and roleplay tasks, offering multiple GGUF variants for different performance needs.
Brief-details: Quantized 7B parameter Qwen2.5 model optimized for creative and instructional tasks, featuring multiple GGUF variants for different performance needs.
Brief-details: A 894M parameter vision-language model focused on content safety assessment, built on LLaVA-OneVision with 32K token context window and FP16 precision.
Brief-details: Qwen2.5-7B variant optimized for creative and roleplay tasks, offering multiple GGUF quantization options from 3.1GB to 15.3GB, with strong emphasis on performance and quality balance.
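The spread of GGUF file sizes in entries like the one above tracks bits-per-weight. A rough back-of-the-envelope estimator (the helper name and the bit-width figures are illustrative assumptions, not from the repo):

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    # Rough file size: parameters * bits, converted to gigabytes.
    # Ignores metadata and the mixed-precision layers real quants keep.
    return n_params * bits_per_weight / 8 / 1e9

# e.g. a 7.62B model at ~3.5 bits/weight lands near the ~3 GB low end,
# while unquantized 16-bit weights land near 15.2 GB, consistent with
# the 3.1GB-15.3GB range listed above.
```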
Brief-details: An 8B parameter LLM fine-tuned on insurance data, based on Llama 3 ChatQA. Specialized for insurance queries with 20.97M trainable parameters.
Brief-details: Qwen2.5-7B Creative Mix GGUF - A quantized 7.62B parameter model optimized for creative and roleplay tasks with multiple compression options
Brief-details: A 9B parameter multilingual LLM optimized for Indonesian, Javanese, Sundanese and English, based on Gemma2 with strong performance on regional benchmarks
Brief-details: 8B parameter merged LLM combining Tulu, MedIT, and SuperNova models. Strong performance on IFEval (81.94%). Optimized for text generation and conversation.