Brief-details: A specialized LoRA model for generating frosted glass container images, built on FLUX.1-dev with a network dimension of 64 and constant LR scheduling.
Brief-details: A LoRA model fine-tuned on FLUX.1-dev for high-quality image generation, optimized for realism and close-up shots with MidJourney v6 aesthetics.
Brief-details: Multilingual text-to-image model (1.6B params) supporting English, Chinese & emoji. Generates high-quality 512px images with fast inference on consumer GPUs.
Brief-details: A specialized text-to-image LoRA model built on FLUX.1-dev, optimized for toon-style image generation with a network dimension of 64 and AdamW optimization.
Brief-details: Cydonia-22B v1.3 is a 22.2B parameter Mistral-based model optimized for creative and adventure-focused content generation, with support for multiple chat templates.
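The "network dimension" (rank) of 64 mentioned above is the key LoRA hyperparameter: instead of training a full weight update, the adapter learns two small low-rank factors. A minimal numpy sketch of the idea (toy sizes, not this model's actual layers; `alpha` is an illustrative scaling hyperparameter):

```python
import numpy as np

# Minimal LoRA sketch: a rank-64 adapter replaces a full weight update dW
# with the product of two low-rank factors B @ A. Toy dimensions below.
rng = np.random.default_rng(0)

d_out, d_in, r = 3072, 3072, 64            # r = 64 matches the card's network dimension
W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized
alpha = 64                                 # common scaling hyperparameter (assumption)

def lora_forward(x):
    # Base path plus scaled low-rank path; at init B == 0, so the output
    # equals the frozen model's output.
    return W @ x + (alpha / r) * (B @ (A @ x))

# Parameter savings: a full dW stores d_out*d_in values,
# while LoRA stores only r*(d_in + d_out).
full = d_out * d_in
lora = r * (d_in + d_out)
print(full, lora)  # 9437184 393216 — the adapter is 24x smaller here
```

This is why LoRA checkpoints for a 12B-parameter base like FLUX.1-dev are only a few hundred MB: only the factors `A` and `B` per adapted layer are saved.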
Brief-details: A comprehensive collection of 60+ LoRA models for Flux.1-Dev, offering diverse styles from anime to vintage futurism, each approximately 613MB in size. Perfect for varied artistic applications.
Brief-details: A 3.2B parameter summarization model fine-tuned from Llama-3.2-3B-Instruct using DPO, optimized for human-preferred summaries across 7 domains.
Brief-details: Specialized LoRA model for FLUX.1-dev focusing on isometric 3D cinematographic imagery, trained on 24 images with a network dimension of 64 and constant LR scheduling.
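DPO (Direct Preference Optimization) trains directly on preference pairs: it pushes the policy to assign a higher implicit reward to the human-preferred summary than to the rejected one, relative to a frozen reference model. A hedged sketch of the per-pair loss (the log-probabilities below are illustrative numbers, not outputs of any real model):

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # Implicit reward = beta * log-ratio between policy and reference model.
    chosen_reward = beta * (logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (logp_rejected - ref_logp_rejected)
    # Negative log-sigmoid of the reward margin: the loss falls as the
    # chosen summary is favored more strongly than the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-(chosen_reward - rejected_reward))))

# At initialization the policy equals the reference, so the loss is log(2);
# favoring the preferred summary drives it below that.
print(dpo_loss(-11.0, -11.0, -11.0, -11.0))  # == log(2) ~= 0.693
print(dpo_loss(-10.0, -12.0, -11.0, -11.0))  # < log(2)
```

The appeal for a small 3B model is that DPO needs no separate reward model or RL loop, just pairs of preferred and rejected summaries.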
Brief-details: Hunyuan-A52B-Instruct is a powerful 389B parameter MoE model with 52B active parameters, offering state-of-the-art performance across multiple benchmarks with efficient resource usage.
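The gap between 389B total and 52B active parameters comes from mixture-of-experts routing: per token, a learned router selects only a few expert networks to run. A toy numpy sketch of top-k routing (sizes and the softmax-over-selected-experts gating are illustrative assumptions, not the real Hunyuan architecture):

```python
import numpy as np

# Toy mixture-of-experts layer: only top_k of n_experts run per token,
# so most parameters stay idle on any given forward pass.
rng = np.random.default_rng(0)

n_experts, d_model, top_k = 8, 16, 2
experts = rng.standard_normal((n_experts, d_model, d_model))  # one FFN matrix per expert
router = rng.standard_normal((d_model, n_experts))            # learned routing weights

def moe_forward(x):
    logits = x @ router
    top = np.argsort(logits)[-top_k:]                        # indices of the k best experts
    gates = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over selected experts
    # Only the selected experts compute; their outputs are gate-weighted and summed.
    return sum(g * (experts[i] @ x) for g, i in zip(gates, top)), top

x = rng.standard_normal(d_model)
y, used = moe_forward(x)
print(len(used), "of", n_experts, "experts active")  # 2 of 8
```

Scaled up, the same principle gives MoE models the capacity of their total parameter count at roughly the per-token compute cost of their active parameter count.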
Brief-details: A specialized LoRA model combining Midjourney-like aesthetics with FLUX.1-dev base model, optimized for photorealistic and artistic image generation with 60+ hi-res training images.
Brief-details: An 8B parameter Llama-based model focused on enhanced reasoning capabilities, featuring slow thinking and step-by-step problem solving across mathematical and coding tasks.
Brief-details: Specialized LoRA model for generating isometric 3D scenes, trained on FLUX.1-dev base model with 20 epochs. Optimized for architectural and landscape visualization in isometric perspective.
Brief-details: Multilingual 7B parameter LLM trained on 4T tokens across 24 European languages, fine-tuned for instruction following with a commercial license.
Brief-details: Optimized 8B parameter LLM with a 2:4 sparsity pattern, achieving 98.37% accuracy recovery compared to the dense model. Efficient for deployment via vLLM.
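The 2:4 ("semi-structured") sparsity pattern means that within every group of four consecutive weights, at most two are nonzero, a layout that NVIDIA Ampere-and-later tensor cores can accelerate and that vLLM can exploit at inference time. A minimal numpy sketch of magnitude-based 2:4 pruning (an illustration of the pattern, not the model's actual pruning recipe):

```python
import numpy as np

def prune_2_4(w):
    # Enforce the 2:4 pattern: in each group of 4 consecutive weights,
    # zero out the two with the smallest magnitude.
    groups = w.reshape(-1, 4).copy()
    smallest = np.argsort(np.abs(groups), axis=1)[:, :2]
    np.put_along_axis(groups, smallest, 0.0, axis=1)
    return groups.reshape(w.shape)

rng = np.random.default_rng(0)
w = rng.standard_normal(16)
sparse = prune_2_4(w)
# Exactly half the weights survive, in the required 2-per-4 layout.
print((sparse != 0).sum(), "of", sparse.size)  # 8 of 16
```

Because the sparsity is structured rather than random, the hardware can skip the zeroed weights, which is what makes 98%+ accuracy recovery at 50% sparsity attractive for deployment.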
Brief-details: A comprehensive guide for optimizing AI model performance across different quantization types, focusing on sampling parameters and advanced control techniques.
Brief-details: 7B parameter multilingual LLM supporting 24 EU languages, instruction-tuned for research use with strong performance in European language tasks. Pre-trained on 4T tokens.
Brief-details: A specialized LoRA model for FLUX.1-dev focused on product advertisement image generation, featuring high-quality backdrop compositions and commercial-style outputs.
Brief-details: An 8B parameter LLaMA-based model fine-tuned for instruction following, excelling in math, reasoning, and chat tasks. Features RLVR training.
Brief-details: A powerful 70B parameter instruction-following model built on Llama 3.1, optimized for diverse tasks including MATH, GSM8K, and general chat. Strong performance in mathematical reasoning and safety.
Brief-details: Sana_1600M_512px: a high-performance text-to-image model with 1.6B parameters, optimized for 512px resolution, featuring a Linear Diffusion Transformer architecture and a Gemma2-2B-IT text encoder.
Brief-details: AIMv2 Large - a 309M parameter vision model from Apple achieving 86.6% ImageNet accuracy, specializing in multimodal understanding and feature extraction.