Brief-details: A specialized LoRA collection for the FLUX.1-dev model enabling diverse image-generation styles, including furry, anime, Disney, scenery, and art, with high-quality 1024x1024 outputs.
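For context, FLUX.1-dev style LoRAs like these are typically applied through the LoRA-loading API in diffusers. The sketch below is a minimal illustration only; the LoRA repository id and the prompt are placeholders, not values taken from this collection.

```python
# Minimal sketch: applying a FLUX.1-dev style LoRA with diffusers.
# "your-username/flux-style-lora" and the prompt are placeholders,
# not the actual repository id or trigger words of this collection.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("your-username/flux-style-lora")  # placeholder repo id
pipe.to("cuda")

image = pipe(
    "toon style, a fox reading a book in a cozy library",  # placeholder prompt
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]
image.save("flux_lora_sample.png")
```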
Brief-details: A 7.62B parameter language model built on Qwen2.5-7B and trained with the Tülu 3 methodology. Strong performance on benchmarks such as BBH and MMLU; weights are released in BF16 precision.
Brief-details: A 3.2B parameter LLaMA-based instruction-following model optimized for conversational AI, available in multiple GGUF quantization variants with Ollama compatibility.
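Since the entry highlights GGUF variants and Ollama compatibility, here is a hedged sketch of querying such a model through the official `ollama` Python client; the model tag `llama3.2:3b` is an assumption used purely for illustration, not the name of this repository.

```python
# Minimal sketch: chatting with a small instruction-tuned GGUF model via Ollama.
# Assumes a local Ollama server is running and the model tag below
# (a placeholder, not this repository's name) has already been pulled.
import ollama

response = ollama.chat(
    model="llama3.2:3b",  # placeholder tag for a 3B instruct model
    messages=[{"role": "user", "content": "Summarize what a GGUF file is in one sentence."}],
)
print(response["message"]["content"])
```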
Brief-details: A virtual try-on AI model that uses outfitting fusion and latent diffusion for controllable clothing visualization, supporting both half-body and full-body variants.
Brief-details: Boltz-1 is an open-source biomolecular structure prediction model for proteins, RNA, DNA, and small molecules, supporting modified residues and complex interactions.
Brief-details: Llama-3.1-Tulu-3-8B-DPO is an 8B parameter instruction-following model from Allen AI, tuned with DPO and showing strong performance on math, reasoning, and general tasks.
Brief-details: OLMo-2-1124-7B-Instruct is a 7.3B parameter language model from Allen AI, fine-tuned on the Tülu 3 dataset with DPO and RLVR for enhanced instruction following and mathematical reasoning.
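As a generic illustration of how instruction-tuned checkpoints like this one (and the Tulu/OLMo variants elsewhere in this list) are typically run, here is a minimal transformers sketch. The Hub id is inferred from the model name in this entry and should be verified; the prompt is arbitrary.

```python
# Minimal sketch: running an instruction-tuned OLMo-2 checkpoint with transformers.
# The repo id follows the naming in this entry but is an assumption to verify on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-1124-7B-Instruct"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Solve 17 * 23 and explain your steps briefly."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```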
Brief-details: A specialized LoRA model for FLUX.1-dev focused on long-form cartoon/toon-style images, optimized for 768x1024 resolution and trained with the AdamW optimizer and a constant LR schedule.
Brief-details: NONAMEmix_v1 is a merged SDXL model combining multiple base models for enhanced illustration capabilities, using Booru-tag style prompts and optimized for aesthetic image generation.
Brief-details: A powerful 13B parameter language model from Allen AI, trained on 5T tokens, featuring strong performance on academic benchmarks with full open-source availability.
Brief-details: A 22B parameter Mistral-based model optimized for unrestricted instruction following while retaining strong cognitive capabilities; uses the Alpaca prompt format.
Brief-details: An anime-style LoRA model for FLUX.1-dev, optimized for creating detailed animated characters, trained with a network dimension of 64 and a constant LR schedule.
Brief-details: ShowUI-2B is a 2.21B parameter vision-language-action model specialized for GUI agents, built on the Qwen2-VL architecture for computer interface interaction.
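Because ShowUI-2B reuses the Qwen2-VL architecture, it can in principle be loaded with the standard Qwen2-VL classes in transformers. The sketch below assumes the checkpoint id `showlab/ShowUI-2B` and a generic screenshot query; both should be checked against the model card, which may define a specific grounding prompt format.

```python
# Minimal sketch: loading a Qwen2-VL-based GUI model with transformers.
# The repo id and prompt are assumptions for illustration; consult the
# model card for the exact prompt format ShowUI expects.
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from PIL import Image

model_id = "showlab/ShowUI-2B"  # assumed checkpoint id
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("screenshot.png")  # placeholder screenshot
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Where is the 'Submit' button?"},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```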
Brief-details: A specialized LoRA model trained on FLUX.1-dev for generating cute 3D Kawaii-style images, with a network dimension of 64 and optimized for 768x1024 outputs.
Brief-details: An archive of 87 videos (~702MB) generated by OpenAI's Sora video generation model during temporary public access, with corresponding prompts and documentation.
Brief-details: OLMo-2 13B instruct model by AllenAI - an open language model with 13.7B parameters, trained on the Tülu 3 dataset for chat and math tasks. Apache 2.0 licensed.
Brief-details: CATVTON-Flux is a state-of-the-art virtual try-on solution combining CATVTON with Flux fill inpainting, achieving an FID of 5.59 on the VITON-HD dataset.
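As general background for the "Flux fill inpainting" part of this entry, diffusers exposes a FluxFillPipeline for the base FLUX.1-Fill-dev checkpoint. The sketch below shows that generic fill workflow, not CATVTON-Flux's own try-on pipeline, and the image/mask file names and prompt are placeholders.

```python
# Minimal sketch: generic Flux fill inpainting with diffusers.
# This is NOT the CATVTON-Flux pipeline itself; it only illustrates the
# underlying fill/inpainting mechanism the entry refers to.
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("person.png")       # placeholder: photo of the target person
mask = load_image("garment_mask.png")  # placeholder: white where new clothing goes

result = pipe(
    prompt="a red knitted sweater",    # placeholder garment description
    image=image,
    mask_image=mask,
    height=1024,
    width=768,
    guidance_scale=30.0,
    num_inference_steps=50,
).images[0]
result.save("tryon_fill_result.png")
```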
Brief-details: A powerful 123B parameter LLM based on Largestral 2411, featuring system prompt support and creative text generation capabilities. Known for unique prose and chaotic creativity.
Brief-details: A lightweight 1.54B parameter LLM optimized for efficient Russian/English text generation with minimal resources. Strong performance on the ru-llm-arena benchmark.
Brief-details: Allegro-TI2V is an advanced open-source text-image-to-video generation model capable of creating 6-second high-resolution videos from prompts and images.
Brief-details: PTA-1 is a 271M parameter vision-language model for GUI automation, built on Florence-2 and optimized for element localization with 79.98% accuracy.
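For orientation, Florence-2-derived models are usually loaded with `trust_remote_code=True` and queried with a task prompt. The repo id and task-prompt format below are assumptions for illustration only and should be verified against the PTA-1 model card.

```python
# Minimal sketch: querying a Florence-2-based GUI localization model.
# Repo id and task-prompt format are assumed for illustration; the actual
# PTA-1 card may define different prompts and post-processing.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor
from PIL import Image

model_id = "AskUI/PTA-1"  # assumed repo id
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, trust_remote_code=True
).to("cuda")
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("screenshot.png")                    # placeholder screenshot
task_prompt = "<OPEN_VOCABULARY_DETECTION>search bar"   # assumed prompt format

inputs = processor(text=task_prompt, images=image, return_tensors="pt").to("cuda", torch.float16)
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=256,
)
print(processor.batch_decode(generated_ids, skip_special_tokens=False)[0])
```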