Brief-details: Qwen2.5-0.5B-Instruct-GGUF is a lightweight 0.5B-parameter LLM optimized for chat and instruction following, supporting 29+ languages with a 32K context window.
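Qwen-family instruct models consume prompts in the ChatML turn format. As a hedged sketch of that layout (in real use, the tokenizer's `apply_chat_template` should build the prompt; the literal tags below are only illustrative):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt as used by Qwen instruct models.

    Illustrative only: prefer tokenizer.apply_chat_template(), which
    applies the template shipped with the model.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "Hello!")
```

The prompt ends with an open assistant turn so the model's generation continues from there.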
Brief-details: An optimized Stable Diffusion 1.5 LCM model for the RKNN2 NPU, enabling fast image generation on RK3588 hardware with efficient memory usage and competitive inference speeds.
Brief-details: A 141B parameter language model built on WizardLM-2-8x22B, optimized for roleplay using LoRA training. Features BF16 precision and Apache 2.0 license.
Brief-details: T5-based prompt enhancement model (223M params) that expands short prompts into detailed descriptions, optimized for text-to-text generation tasks.
Brief-details: ChartMoE is a multimodal LLM using a Mixture-of-Experts connector for advanced chart analysis, editing, and transformation, built on InternLM-XComposer2.
Brief-details: Llama-3.1-8B-Instuct-Uz is a specialized bilingual LLM optimized for Uzbek-English tasks, featuring 8B parameters and improved BLEU scores for translation.
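Translation quality here is reported in BLEU. A simplified single-reference sentence-level BLEU (uniform 4-gram weights, crude smoothing; real evaluations should use sacrebleu or nltk) can be sketched as:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference: str, hypothesis: str, max_n: int = 4) -> float:
    """Simplified sentence-level BLEU: geometric mean of n-gram
    precisions times a brevity penalty. Illustrative only."""
    ref, hyp = reference.split(), hypothesis.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        overlap = sum((hyp_counts & ref_counts).values())  # clipped matches
        total = max(sum(hyp_counts.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # avoid log(0)
    # Brevity penalty discourages overly short hypotheses
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 1.0; any missing n-grams pull the geometric mean down sharply, which is why BLEU is usually averaged over a corpus rather than read per sentence.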
Brief-details: FLUX MidJourney Anime is a specialized LoRA model for Stable Diffusion that creates anime-style artwork inspired by MidJourney's aesthetic, using FLUX.1-dev as base model.
Brief-details: A DeBERTa-based text quality classifier that categorizes content into High/Medium/Low quality, trained on 22.8K samples with 82.5% accuracy.
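A minimal sketch of how a three-way quality classifier's output logits map to labels via softmax (the `LABELS` order is an assumption for illustration; the real model defines its own `id2label` mapping):

```python
import math

LABELS = ["Low", "Medium", "High"]  # assumed order; check the model's id2label

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Return the highest-probability label and its probability."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[idx], probs[idx]
```

The reported 82.5% accuracy is then simply the fraction of held-out samples whose predicted label matches the gold label.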
Brief-details: Advanced face ID adapter model for Stable Diffusion that maintains identity consistency while generating high-quality images, built by the Kwai-Kolors team.
Brief-details: A powerful bilingual embedding model (404M params) for Russian-English text, fine-tuned on 4M pairs with multiple prefixes for different tasks.
Brief-details: Open-Sora-Plan v1.2.0 is an open-source implementation of Sora-like capabilities, featuring 3D full attention architecture for video generation with improved visual representations and multilingual support.
Brief-details: A powerful 12.2B parameter multilingual LLM supporting 9 languages, fine-tuned for instruction following with a 128K context window and Apache 2.0 license.
Brief-details: RWKV's v6-Finch-7B-HF is a 7.64B parameter language model with improved performance over Eagle-7B, featuring strong multilingual capabilities and HuggingFace compatibility.
Brief Details: A specialized LoRA model for SDXL focused on creating artistic wallpapers with liquid effects, featuring geometric patterns and abstract themes.
Brief-details: A GTA-style LoRA model for SDXL that creates images in the visual style of Grand Theft Auto games. Features 64 network dimensions and 15 training epochs.
Brief-details: Enhanced CLIP model with 428M params supporting 248 tokens (vs. the standard 77). Features geometric parametrization for improved accuracy and longer text processing.
Brief-details: Japanese speech recognition model specialized for anime/game voices, fine-tuned on 5,300 hours of data. 756M params, achieves 13% CER on anime domain.
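The CER figure quoted above is character-level edit distance normalized by reference length. A minimal sketch:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between a and b (insertions, deletions, substitutions),
    computed with a rolling one-row DP table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edit distance over reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)
```

Note that CER can exceed 1.0 when the hypothesis contains many spurious insertions, so a 13% CER means roughly one character error per eight reference characters.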
Brief-details: NuExtract-1.5-smol is a 1.71B parameter multilingual model fine-tuned from SmolLM2-1.7B, specialized in structured information extraction with MIT license.
Brief-details: Qwen2.5-Coder-32B-Instruct-128K-GGUF is a powerful 32.5B parameter code-focused LLM with 128K context, optimized for programming tasks and code generation.
Brief-details: A LoRA model trained on FLUX.1-dev base for DALL-E style image generation. Features realistic outputs, photo-realistic faces, and Pixar-like characters. Uses AdamW optimizer with 64 network dimensions.
Brief-details: A powerful text-to-image model based on FLUX.1, optimized for fast 4-8 step inference with enhanced quality and prompt-following capabilities, supporting GGUF quantization.