Brief Details: A LoRA model for FLUX.1-dev that specializes in generating images with centered subjects on white backgrounds, offering clean and consistent compositions.
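A minimal sketch of how a FLUX.1-dev LoRA like this is typically attached with diffusers; the adapter repo id, prompt, and sampler settings below are placeholders, not this model's documented values.

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model (bfloat16 keeps memory use manageable on a single GPU).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the LoRA adapter on top of the base weights (placeholder repo id).
pipe.load_lora_weights("your-account/flux-centered-white-bg-lora")

image = pipe(
    "product photo of a ceramic mug, centered subject, plain white background",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("centered_white_bg.png")
```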
Brief Details: Human_LLaVA is an 8.48B parameter vision-language model specialized in human-related tasks, built on Meta-Llama-3-8B-Instruct with FP16 precision.
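A hedged sketch of LLaVA-style inference with transformers, assuming the model exposes a standard LLaVA interface; the repo id, prompt template, and model class are assumptions rather than documented usage.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "OpenFace-CQUPT/Human_LLaVA"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")

image = Image.open("person.jpg")
# Assumed LLaVA-1.5-style template; the model card may specify a different one.
prompt = "USER: <image>\nDescribe the person's clothing and pose. ASSISTANT:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(out[0], skip_special_tokens=True))
```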
Brief Details: A 12B parameter Mistral-based model optimized for roleplay and story generation with a 16k context window, featuring enhanced prompt adherence and ChatML format support.
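A minimal sketch of building a ChatML prompt through a tokenizer's chat template; the repo id is a placeholder for this model.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-account/mistral-12b-roleplay")  # placeholder

messages = [
    {"role": "system", "content": "You are the narrator of an interactive fantasy story."},
    {"role": "user", "content": "The party reaches the gates of the ruined keep. Continue the scene."},
]

# If the tokenizer ships a ChatML chat template, this renders the
# <|im_start|>role ... <|im_end|> blocks automatically.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```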
Brief Details: A 9B parameter bilingual Arabic-English LLM built on Google Gemma, achieving state-of-the-art performance on Arabic language tasks despite its smaller size.
Brief Details: A 12.2B parameter multilingual chat model optimized with hybrid KTO+DPOP reinforcement learning, supporting 9 languages and using the ChatML format for enhanced prose quality.
Brief Details: A 1.24B parameter LLaMA-based model fine-tuned on NVIDIA's ChatQA dataset, optimized for conversational AI and QA tasks with a 1024-token context.
Brief Details: A quantized 9.24B parameter Gemma model optimized for Russian language tasks, scoring 91.9% on arena-hard questions, with GGUF format support.
Brief Details: A comprehensive set of quantized GGUF variants of the Llama 3.1 70B model, optimized for different hardware and memory constraints while preserving high-quality text generation.
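A minimal sketch of running one of the GGUF variants locally with llama-cpp-python; the file name and quantization level are placeholders, chosen to fit available memory.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.1-70B-Instruct-Q4_K_M.gguf",  # placeholder: pick the quant that fits your RAM/VRAM
    n_ctx=8192,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if they fit; lower this on smaller cards
)

out = llm("Summarize the trade-offs between Q4_K_M and Q8_0 quantization.", max_tokens=256)
print(out["choices"][0]["text"])
```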
Brief Details: A 22B parameter conversational AI model optimized for creative and adventure-focused interactions, supporting multiple chat templates and roleplay formats.
Brief Details: CryptoTrader-LM is an 8B parameter model fine-tuned using LoRA on Mistral-8B for crypto trading decisions, achieving a 0.94 Sharpe ratio.
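A minimal sketch of attaching a LoRA adapter to its base model with peft; both repo ids are placeholders and the exact base checkpoint is an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Ministral-8B-Instruct-2410"   # assumed base model
adapter_id = "your-account/CryptoTrader-LM-lora"   # placeholder adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

prompt = "BTC funding rates turned negative while open interest is rising. Outline a risk-managed position."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```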
Brief Details: Qwen2.5-Coder-32B-Instruct is a powerful 32.5B parameter code-focused LLM with a 128K context window, available in GGUF format and achieving GPT-4-level coding capabilities.
Brief Details: A specialized LoRA model trained on the FLUX.1-dev base, optimized for DALLE-style image generation with a focus on photorealistic outputs and enhanced face realism.
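A minimal sketch of chat-style code generation from a GGUF build via llama-cpp-python; the file name and quantization level are illustrative, and the allocated context is well below the model's maximum.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf",  # placeholder file name
    n_ctx=32768,       # allocate what your RAM allows; the model supports much longer contexts
    n_gpu_layers=-1,
)

resp = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that parses ISO 8601 durations."},
    ],
    max_tokens=512,
)
print(resp["choices"][0]["message"]["content"])
```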
Brief Details: A high-performance text-to-image model based on Flux.1, optimized for fast inference (4-8 steps), with improved image quality and prompt adherence while maintaining the original Flux style.
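A minimal sketch of few-step generation with diffusers, assuming the checkpoint loads through FluxPipeline; the repo id and guidance value are placeholders.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "your-account/flux-fast-merge", torch_dtype=torch.bfloat16  # placeholder repo id
).to("cuda")

image = pipe(
    "a lighthouse on a cliff at sunset, dramatic clouds",
    num_inference_steps=8,   # 4-8 steps instead of the usual ~28-50
    guidance_scale=3.5,
).images[0]
image.save("lighthouse_fast.png")
```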
Brief Details: A lightweight image captioning model (271M params) with multiple caption styles, efficient VRAM usage (~1GB), and specialized features for Flux model compatibility.
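A minimal sketch of captioning an image through the transformers pipeline API; the repo id is a placeholder for this captioner, and caption-style selection would follow the model card.

```python
from transformers import pipeline

# Lightweight captioner: placeholder repo id standing in for the 271M-parameter model.
captioner = pipeline("image-to-text", model="your-account/lightweight-flux-captioner")

result = captioner("photo.jpg", max_new_tokens=64)
print(result[0]["generated_text"])
```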
Brief Details: A fine-tuned LoRA model for FLUX.1-dev focused on photorealistic image generation with enhanced detail and texture quality. Trained on 27 curated images with a constant learning-rate schedule and AdamW optimization.
Brief Details: LLM2CLIP-EVA02-L-14-336: An advanced vision-language model leveraging LLMs to enhance CLIP's capabilities, offering improved cross-modal and cross-lingual performance.
Brief Details: AIMv2-huge: A 681M parameter vision model by Apple achieving 87.5% ImageNet accuracy. It excels in multimodal understanding and feature extraction.
Brief Details: Ovis1.6-Gemma2-9B is a 10.2B parameter multimodal LLM built on the Gemma architecture, offering state-of-the-art image-text processing with leading benchmark performance.
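A minimal sketch of extracting image features with transformers; the repo id follows Apple's AIMv2 naming and, along with trust_remote_code and the output field, should be read as an assumption.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

model_id = "apple/aimv2-huge-patch14-224"  # assumed repo id
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("photo.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # patch-level features (assumed output field)
print(features.shape)
```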
Brief Details: A specialized LoRA model for Stable Diffusion focused on clothing generation, using Florence-2-large for natural language processing, featuring precise garment visualization.
Brief Details: Walking Dead-themed LoRA model for Stable Diffusion XL, specializing in generating apocalyptic and zombie-related imagery with 64 network dimensions and AdamW optimization.
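A minimal sketch of applying an SDXL LoRA with diffusers; the adapter repo id and prompt are placeholders rather than this model's published trigger words.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the themed LoRA adapter (placeholder repo id).
pipe.load_lora_weights("your-account/walking-dead-sdxl-lora")

image = pipe(
    "abandoned city street overrun by walkers, overcast sky, cinematic lighting",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("apocalypse.png")
```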
Brief Details: A specialized LoRA model for FLUX.1-dev that generates 2.5D toon-style images, featuring 64 network dimensions and 15 training epochs with the AdamW optimizer.