Brief-details: LlavaGuard v1.2 0.5B is a safety-focused vision-language model with 894M total parameters (the 0.5B in the name refers to its language backbone), built for content moderation and safety assessment of visual content.
Brief-details: A quantized 7.62B parameter Qwen2.5 model optimized for creative and conversational tasks, with multiple GGUF variants for different performance needs.
Brief-details: A versatile 7.62B parameter GGUF-quantized model merging Qwen2.5 with Homer and Anvita capabilities, optimized for roleplay and creative tasks.
Brief-details: 8B parameter merged LLM combining MedIT-SUN, Tulu-3, and SuperNova-Lite models. Strong IFEval performance (81.94%). BF16 format, optimized for text generation.
Brief-details: A 9B parameter multilingual LLM optimized for Indonesian, Javanese, Sundanese, and English, based on the Gemma architecture with strong instruction-following capabilities.
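Instruction-tuned entries like this one are typically queried through the transformers chat-template API; a minimal sketch, where the repo id is a placeholder (the real id isn't given in this digest):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/gemma-9b-indonesian-instruct"  # placeholder, not the real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The chat template formats turn markers for us; we only supply the messages.
messages = [{"role": "user", "content": "Jelaskan fotosintesis secara singkat."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated continuation, not the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```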
Brief-details: 8B parameter insurance-focused LLM based on Llama 3, fine-tuned for insurance-domain tasks on the InsuranceQA dataset using LoRA.
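A LoRA fine-tune like this is usually set up through the peft library; a minimal sketch where rank, alpha, and target modules are illustrative assumptions, not the model's published config:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Illustrative LoRA hyperparameters; the actual fine-tune's values aren't published here.
lora = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the small adapter matrices train
```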
Brief-details: A 30B parameter uncensored language model with multiple quantization options (2-8 bit), trained on 7 datasets including WizardLM and Guanaco, distributed in GGUF format.
Brief-details: A 7B parameter merged LLM combining Qwen2.5 variants, optimized for creative, technical, and instructional tasks. Features high IFEval accuracy (77.08%) and BF16 precision.
Brief-details: 8.35B parameter LLaMA3-based visual reasoning model combining Oryx-ViT for image processing, supporting English/Chinese with 32K context window.
Brief-details: A 12.2B parameter conversational AI model merging Chronos-Gold-12B and ChatWaifu using TIES methodology, optimized for text generation with FP16 precision.
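TIES merging, referenced here, combines fine-tunes by trimming low-magnitude deltas, electing a per-parameter majority sign, and averaging only the agreeing entries; a minimal per-tensor sketch of that algorithm (mergekit's actual implementation differs in details):

```python
import torch

def ties_merge(base: torch.Tensor, tuned: list[torch.Tensor], density: float = 0.2) -> torch.Tensor:
    """Merge fine-tuned weight tensors into `base` with TIES (sketch)."""
    deltas = torch.stack([t - base for t in tuned])  # task vectors
    # Trim: keep only the top-`density` fraction of entries by magnitude.
    k = max(1, int(density * base.numel()))
    for d in deltas:
        thresh = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        d[d.abs() < thresh] = 0.0
    # Elect sign: magnitude-weighted majority sign per parameter.
    sign = torch.sign(deltas.sum(dim=0))
    # Disjoint merge: average only entries that agree with the elected sign.
    agree = (torch.sign(deltas) == sign) & (deltas != 0)
    merged = (deltas * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged

# e.g. merged_w = ties_merge(base_w, [chronos_w, chatwaifu_w], density=0.2)
```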
Brief-details: An 8B parameter merged LLM combining Aspire, Heart Stolen, and CursedMatrix models, optimized for creative writing and general tasks in BF16 format.
Brief-details: AWQ-quantized version of Mistral's 123B parameter LLM, optimized for multilingual tasks, coding, and reasoning with 128k context window support.
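AWQ checkpoints like this one load through the standard transformers path once autoawq is installed; a minimal sketch, with a placeholder repo id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/Mistral-Large-Instruct-AWQ"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
# AWQ weights stay 4-bit on disk and in memory; fp16 activations suit the kernels.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
```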
Brief-details: An 8.35B parameter LLaMA3-based visual reasoning model supporting English/Chinese, featuring multi-agent architecture and 32K context window.
Brief-details: Skip-DiT enhances vision diffusion transformers with skip branches for faster inference, offering up to 2.2x speedup while maintaining quality in video/image generation tasks.
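The skip branches here are U-Net-style long connections between shallow and deep transformer blocks; the following is a minimal PyTorch sketch of that idea only (block count, fusion layer, and dimensions are illustrative, not Skip-DiT's actual code):

```python
import torch
import torch.nn as nn

class SkipDiT(nn.Module):
    """Sketch: long skips pair shallow block j with deep block depth-1-j,
    fused through a learned linear merge, so deep blocks can reuse early
    features instead of recomputing them."""

    def __init__(self, dim: int = 256, depth: int = 8, heads: int = 8):
        super().__init__()
        assert depth % 2 == 0
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            for _ in range(depth)
        )
        # One fusion layer per skip pair.
        self.fuse = nn.ModuleList(nn.Linear(2 * dim, dim) for _ in range(depth // 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        depth, stash = len(self.blocks), []
        for i, blk in enumerate(self.blocks):
            if i >= depth // 2:
                # Deep half: concatenate the mirrored shallow output, project back.
                x = self.fuse[depth - 1 - i](torch.cat([x, stash.pop()], dim=-1))
            x = blk(x)
            if i < depth // 2:
                stash.append(x)  # shallow half: cache output for its skip partner
        return x

# e.g. SkipDiT()(torch.randn(2, 16, 256)) -> tensor of shape (2, 16, 256)
```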
Brief-details: 8B parameter GGUF-quantized Llama-based model trained on Orca agent datasets, offering multiple quantization options from 3.3GB to 16.2GB with varying quality-size tradeoffs.
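GGUF quantizations like these (and the Qwen2.5 and 30B entries above) run through llama.cpp bindings; a minimal sketch with llama-cpp-python, where the filename is a placeholder and the quant choice is a size/quality tradeoff (Q3 near the 3.3GB end, Q8 near 16.2GB):

```python
from llama_cpp import Llama

# Pick a quant file to match your hardware; this filename is a placeholder.
llm = Llama(model_path="orcaagent-llama-8b.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Plan a three-step research task."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```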
Brief-details: Japanese text-to-speech model based on Parler-TTS with 2.33B parameters, offering high-quality synthesis of natural, expressive Japanese speech.
Brief-details: ExLlamaV2-quantized version of Mistral-Large-Instruct supporting 10 languages, with various quantization options from 2.2 to 6.5 bits per weight.
Brief-details: Japanese text-to-speech model based on Parler-TTS mini, optimized for Japanese language with high-quality voice generation capabilities and custom tokenizer integration.
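Both Japanese Parler-TTS entries above follow the upstream Parler-TTS generation pattern, where a text description conditions the voice and a prompt supplies the words; a minimal sketch with a placeholder repo id (the custom Japanese tokenizer mentioned here may change the exact tokenization step):

```python
import soundfile as sf
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer

model_id = "org/parler-tts-japanese"  # placeholder repo id
model = ParlerTTSForConditionalGeneration.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

description = "A calm female speaker with clear, expressive delivery."
prompt = "こんにちは、今日はいい天気ですね。"

# The description steers voice characteristics; the prompt is the spoken text.
input_ids = tokenizer(description, return_tensors="pt").input_ids
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids

audio = model.generate(input_ids=input_ids, prompt_input_ids=prompt_ids)
sf.write("out.wav", audio.cpu().numpy().squeeze(), model.config.sampling_rate)
```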
Brief-details: OrcaAgent-llama3.2-8b is an 8B parameter LLM based on Meta's Llama 3, fine-tuned on the Orca AgentInstruct dataset for enhanced instruction following and agent-like behavior.
Brief-details: 8B parameter LLM optimized for role-playing, combining NIHAPPY's narrative base with Mythorica's emotional depth and V-Blackroot's character consistency.
Brief-details: A specialized LoRA model for creating sketch card images, built on FLUX.1-dev. Uses a network dimension of 64 and was trained on 13 images with a constant LR schedule.
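LoRA adapters like this one load on top of the FLUX.1-dev base pipeline in diffusers; a minimal sketch where the adapter repo id and prompt phrasing are assumptions:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("org/sketch-card-lora")  # placeholder adapter repo id
pipe.enable_model_cpu_offload()  # FLUX.1-dev is large; offloading saves VRAM

image = pipe(
    "sketch card of a red fox, clean linework",  # assumed trigger phrasing
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("sketch_card.png")
```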