Brief Details: LLaMA-Mesh-GGUF is an 8.03B parameter model offered in multiple GGUF quantizations, optimized for mesh generation and text tasks, with quantized variants ranging from 2.95GB to 16.07GB.
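A minimal sketch of running one of the GGUF quantizations with llama-cpp-python; the repo id and filename pattern below are assumptions, so substitute the actual quantization file you download.

```python
# Sketch: load a GGUF quantization via llama-cpp-python (repo id/filename assumed).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="bartowski/LLaMA-Mesh-GGUF",  # assumed repo id; check the model card
    filename="*Q4_K_M.gguf",              # pick the quantization that fits your RAM
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Generate a simple 3D mesh of a cube in OBJ format."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```

Smaller quantizations (toward the 2.95GB end) trade output quality for lower memory use; the larger files stay closer to the full-precision model.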
Brief Details: FLUX.1-Fill-dev-nf4 is a specialized NF4-quantized version of FLUX.1-Fill for efficient image inpainting, offering quality comparable to the original model while reducing the memory footprint.
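A minimal inpainting sketch using diffusers' FluxFillPipeline, shown against the base FLUX.1-Fill-dev repo; how the NF4 checkpoint plugs in (for example as a prequantized transformer) is an assumption, so follow the model card for the exact loading steps. The image paths are illustrative.

```python
# Sketch: FLUX.1-Fill inpainting with diffusers (base repo shown; NF4 loading per model card).
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("room.png")       # source image (illustrative path)
mask = load_image("room_mask.png")   # white regions are repainted

result = pipe(
    prompt="a leather armchair in the corner",
    image=image,
    mask_image=mask,
    guidance_scale=30.0,
    num_inference_steps=50,
).images[0]
result.save("inpainted.png")
```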
Brief Details: A fine-tuned 7.5B parameter LLM2CLIP model that extends CLIP's capabilities with large language models, optimized for zero-shot classification and cross-modal tasks.
Brief Details: A Japanese-focused text-to-image model combining multiple V-Prediction architectures, optimized for anime/illustration with SPO integration and ComfyUI compatibility.
Brief Details: A lightweight, 4-bit quantized bilingual (Russian/English) LLM with 403M parameters, optimized for efficient text generation and conversations.
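A minimal sketch of running a small quantized causal LM with transformers; the repo id is hypothetical, and a pre-quantized checkpoint normally carries its own quantization config, so from_pretrained needs no extra arguments.

```python
# Sketch: generate with a small pre-quantized bilingual LM (repo id hypothetical).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/tiny-ru-en-4bit"  # hypothetical repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Переведи на английский: Доброе утро!"  # "Translate into English: Good morning!"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```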
Brief Details: A 27B parameter Bulgarian-English LLM built on Google's Gemma 2, featuring strong bilingual capabilities and state-of-the-art performance in Bulgarian language tasks.
Brief Details: Behemoth-123B is a powerful 123B parameter LLM based on Largestral 2411, featuring system prompt support and optimized for creative applications.
Brief Details: A 7B parameter vision-language model built on Qwen2.5, specializing in visual reasoning with a 32K context window and multi-agent capabilities.
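A visual-reasoning sketch assuming the checkpoint exposes a Qwen2-VL-compatible processor and model interface; the repo id, image path, and message format are all assumptions, so check the model card for the exact classes it requires.

```python
# Sketch: image question answering, assuming a Qwen2-VL-style interface (repo id hypothetical).
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "your-org/qwen2.5-vl-reasoner-7b"  # hypothetical repo id
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

image = Image.open("chart.png")
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "What trend does this chart show? Reason step by step."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(out[0], skip_special_tokens=True))
```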
Brief Details: A specialized crypto trading model fine-tuned from Mistral-8B using LoRA, designed to predict BTC/ETH trading decisions with a 0.94 Sharpe ratio and 72% accuracy.
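Because this is a LoRA adapter rather than a full checkpoint, it is attached to its base model at load time. A minimal peft sketch follows; both repo ids are placeholders, and the base must match what the adapter was actually trained against.

```python
# Sketch: attach a LoRA adapter to its base model with peft (repo ids are placeholders).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Ministral-8B-Instruct-2410"  # assumed base; use the one named on the adapter card
adapter_id = "your-org/btc-eth-trading-lora"       # hypothetical adapter repo

tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Given the last 24h OHLCV summary below, output BUY, SELL, or HOLD for BTC:\n..."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=16)
print(tok.decode(out[0], skip_special_tokens=True))
```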
Brief Details: Flux-mini is an efficient 3.2B parameter text-to-image model distilled from the larger 12B Flux-dev, optimized for consumer devices while maintaining strong generation capabilities.
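A minimal text-to-image sketch with diffusers' FluxPipeline; the repo id is an assumption and the sampler settings follow typical FLUX defaults, so tune them for the distilled checkpoint.

```python
# Sketch: text-to-image with FluxPipeline (repo id assumed, settings illustrative).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "TencentARC/flux-mini",  # assumed repo id; check the model card
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    prompt="a watercolor painting of a lighthouse at dawn",
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]
image.save("lighthouse.png")
```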
Brief Details: NuExtract-1.5-smol: A 1.71B parameter multilingual model fine-tuned from SmolLM2, specialized in structured information extraction and released under the MIT license.
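A structured-extraction sketch: the prompt layout below follows the NuExtract template convention (a JSON schema plus the source text), but the exact special tokens and repo id should be verified against the model card before use.

```python
# Sketch: schema-guided extraction with a NuExtract-style prompt (format/repo id assumed).
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "numind/NuExtract-1.5-smol"  # assumed repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

template = {"name": "", "organization": "", "date": ""}
text = "Maria Lopez joined OpenWeather as CTO on 3 March 2024."

prompt = f"<|input|>\n### Template:\n{json.dumps(template, indent=4)}\n### Text:\n{text}\n<|output|>"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens (the extracted JSON).
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```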
Brief Details: Qwen2.5-Coder-32B-Instruct is a powerful 32.5B parameter code-focused LLM with a 128K context length, supporting advanced code generation and reasoning.
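A minimal code-generation sketch using the instruct chat template with transformers; at 32B you will likely want quantization or multi-GPU sharding, which device_map="auto" only partially addresses.

```python
# Sketch: code generation with Qwen2.5-Coder-32B-Instruct via its chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-32B-Instruct"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs.shape[1]:], skip_special_tokens=True))
```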
Brief Details: A specialized LoRA model built on FLUX.1-dev, optimized for DALLE-style image generation with photorealistic outputs and enhanced face realism.
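A minimal sketch of applying a FLUX.1-dev LoRA with diffusers; the LoRA repo id and trigger phrasing are placeholders.

```python
# Sketch: load a FLUX.1-dev LoRA with diffusers (LoRA repo id hypothetical).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("your-org/dalle-style-flux-lora")  # hypothetical LoRA repo

image = pipe(
    prompt="portrait photo of a violinist on a rainy street, dalle style",
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]
image.save("portrait.png")
```

The same load_lora_weights pattern applies to the other FLUX.1-dev LoRAs in this list (fashion photography, detail realism, and clothing generation), swapping in each adapter's repo id and trigger words.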
Brief Details: A specialized LoRA model for fashion and modeling photography, trained on FLUX.1-dev. Features high-quality fashion poses and realistic clothing renditions.
Brief Details: A high-performance text-to-image model based on Flux.1, optimized for fast generation (4-8 steps) while maintaining the original Flux style with enhanced detail and realism. 11.9B parameters, commercial-use friendly.
Brief Details: A fine-tuned LoRA model for FLUX.1-dev focused on generating highly detailed, photorealistic images, trained on 27 curated images and specifically optimized for photorealistic outputs.
Brief Details: LLM2CLIP-EVA02-L-14-336 is a zero-shot image classification model that leverages LLMs to enhance CLIP's capabilities, offering improved cross-modal and cross-lingual performance.
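For reference, a generic CLIP-style zero-shot scoring sketch: LLM2CLIP checkpoints ship custom code and an LLM-based text encoder, so treat this only as the scoring pattern (the stand-in backbone below is a plain CLIP model) and follow the model card for the actual classes.

```python
# Sketch: zero-shot classification via image-text similarity (stand-in CLIP backbone).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "openai/clip-vit-large-patch14-336"  # stand-in backbone, not the LLM2CLIP checkpoint
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

labels = ["a photo of a cat", "a photo of a dog", "a photo of a bird"]
image = Image.open("pet.jpg")

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # similarity of the image to each label
probs = logits.softmax(dim=-1)[0]
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```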
Brief Details: A lightweight image captioning model (271M params) with enhanced caption generation, offering multiple instruction modes and efficient VRAM usage at just 1GB.
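A minimal captioning sketch via the transformers image-to-text pipeline; the repo id and image path are placeholders, and a model this small typically runs on CPU or a ~1GB-VRAM GPU.

```python
# Sketch: image captioning with the transformers pipeline (repo id hypothetical).
from transformers import pipeline

captioner = pipeline("image-to-text", model="your-org/small-captioner-271m")  # hypothetical repo
result = captioner("photo.jpg", max_new_tokens=40)
print(result[0]["generated_text"])
```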
Brief Details: Ovis1.6-Gemma2-9B: A 10.2B parameter multimodal LLM combining a SigLIP-400M vision encoder with Gemma2-9B, leading the OpenCompass benchmark for MLLMs under 30B params.
Brief Details: Apple's AIMv2-huge vision model with 681M params, achieving 87.5% ImageNet accuracy. Excellent for image feature extraction and classification tasks.
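A minimal feature-extraction sketch for a ViT-style backbone like this one; the repo id and the trust_remote_code requirement are assumptions, and the mean-pooling step is just one common way to collapse patch tokens into a single image embedding.

```python
# Sketch: extract image features from a ViT-style encoder (repo id assumed).
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

model_id = "apple/aimv2-huge-patch14-224"  # assumed repo id; check the model card
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("sample.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # (1, num_patches, hidden_dim)
embedding = features.mean(dim=1)  # simple mean pool over patch tokens
print(embedding.shape)
```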
Brief Details: A specialized LoRA model for generating clothing images, trained on the FLUX.1-dev base model with Florence-2-large captioning and optimized for garments and fashion items.