Brief Details: A LoRA model for generating desaturated illustrations with thick outlines, built on the black-forest-labs/FLUX.1-dev base model.
Brief Details: A 1.78B parameter language model based on Qwen2.5-1.5B-Instruct, fine-tuned on the Magpie-Pro dataset using the MGS and UNA techniques.
Brief Details: NVIDIA FastConformer-Hybrid Large model for Uzbek speech recognition, featuring 115M parameters and achieving 16.46% WER on the Common Voice test set.
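A minimal usage sketch with NVIDIA NeMo; the checkpoint id below is an assumption based on NVIDIA's naming scheme, not confirmed by the card:

```python
import nemo.collections.asr as nemo_asr

# FastConformer-Hybrid checkpoints bundle RNNT and CTC decoders in one model.
asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(
    model_name="nvidia/stt_uz_fastconformer_hybrid_large_pc"  # assumed repo id
)

# Transcribe a 16 kHz mono WAV file; returns one hypothesis per input file.
transcriptions = asr_model.transcribe(["sample_uz.wav"])
print(transcriptions[0])
```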
Brief Details: HTML-optimized 1.24B parameter LLaMA model for efficient HTML content pruning in RAG systems, featuring a two-step block-tree pruning approach.
Brief Details: Large-scale Japanese vision-language model (14B params) with strong benchmark performance in image understanding and Japanese text generation.
Brief Details: DeBERTa-v3-small model (141M parameters) for multi-dimensional toxicity classification across 9 languages, trained on 600k samples.
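As a rough sketch, a multi-label toxicity model like this can be queried through the transformers text-classification pipeline; the repo id below is a placeholder, not the actual checkpoint:

```python
from transformers import pipeline

# Hypothetical repo id; substitute the real DeBERTa-v3-small toxicity checkpoint.
classifier = pipeline("text-classification", model="org/deberta-v3-small-toxicity", top_k=None)

# top_k=None returns a score for every toxicity dimension, not just the argmax label.
scores = classifier("You are a wonderful person.")
print(scores)
```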
Brief-details: A LoRA adaptation of SD3.5-Turbo focused on hyper-realistic image generation, trained on 30 images over 15 epochs with the AdamW optimizer and a constant LR scheduler. Requires the "hyper realistic" trigger word.
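A minimal diffusers sketch showing the trigger word in use; the LoRA repo id is a placeholder:

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large-turbo", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("org/sd3.5-turbo-hyper-realistic-lora")  # hypothetical repo id

# The "hyper realistic" trigger word must appear in the prompt to activate the style.
image = pipe(
    "hyper realistic portrait of an elderly fisherman, golden hour light",
    num_inference_steps=4,   # Turbo variants are tuned for few-step sampling
    guidance_scale=0.0,      # Turbo is typically run without classifier-free guidance
).images[0]
image.save("portrait.png")
```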
Brief Details: 12B parameter Mistral-based model optimized for roleplay and creative writing, featuring diverse training data and non-repetitive character generation.
Brief-details: A specialized LoRA model for FLUX.1-dev that generates realistic anime-style images with fashion-photography aesthetics and detailed control over lighting and composition.
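FLUX LoRAs such as this one (and the 90s-anime LoRA further down) load the same way in diffusers; the adapter repo id here is a placeholder:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("org/flux-realistic-anime-lora")  # hypothetical repo id

image = pipe(
    "anime-style fashion editorial portrait, soft rim lighting, shallow depth of field",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("anime_fashion.png")
```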
Brief Details: A multilingual vision model with 1.13B parameters using the SigLIP architecture for zero-shot image classification, optimized with a sigmoid loss function.
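The sigmoid scoring is what distinguishes SigLIP from CLIP-style softmax models. A sketch using a smaller public SigLIP checkpoint to show the pattern; the 1.13B model described above would load the same way:

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

ckpt = "google/siglip-base-patch16-224"  # smaller public checkpoint, for illustration
model = AutoModel.from_pretrained(ckpt)
processor = AutoProcessor.from_pretrained(ckpt)

image = Image.open("cat.jpg")
labels = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=labels, images=image, padding="max_length", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Each label is scored independently with a sigmoid, so probabilities need not sum to 1.
probs = torch.sigmoid(outputs.logits_per_image)
print(dict(zip(labels, probs[0].tolist())))
```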
Brief Details: A 938M parameter model for converting table images to LaTeX/HTML/Markdown, supporting both English and Chinese with efficient processing.
Brief-details: Multilingual text-to-speech model supporting 8 European languages with 938M parameters, using advanced tokenization and natural language prompting.
Brief Details: A 27B parameter reward model built on Gemma-2-27b-it that achieves top performance on RewardBench through preference learning.
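As a sketch of how Gemma-2-based reward models are typically scored; the repo id and the single-logit classification head are assumptions, not confirmed by the card:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "org/gemma-2-27b-reward"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, num_labels=1
)

chat = [
    {"role": "user", "content": "Explain photosynthesis in one sentence."},
    {"role": "assistant", "content": "Plants turn sunlight, water, and CO2 into sugar and oxygen."},
]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt")

with torch.no_grad():
    score = model(input_ids).logits[0].item()  # higher score = more preferred response
print(score)
```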
Brief-details: SummLlama3.1-8B: An 8B parameter LLM fine-tuned for human-preferred summarization across 7 domains, optimized for faithfulness, completeness, and conciseness.
Brief Details: A 22B parameter Mistral-based model specialized in roleplay, featuring diverse AI personas and both markdown/narrative styles. Built for character interaction.
Brief-details: A fine-tuned GLiNER model specialized for PII/PHI detection, achieving a 0.94 F1 score with strong privacy-compliance and entity-recognition capabilities.
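GLiNER models accept arbitrary label strings at inference, which is what makes them practical for PII/PHI schemas. A minimal sketch with a placeholder repo id:

```python
from gliner import GLiNER

model = GLiNER.from_pretrained("org/gliner-pii-phi")  # hypothetical repo id

text = "Patient John Doe, DOB 04/12/1980, can be reached at john.doe@example.com."
labels = ["person", "date of birth", "email"]

# Labels are free-form strings matched zero-shot; no fixed tag set is required.
entities = model.predict_entities(text, labels, threshold=0.5)
for ent in entities:
    print(ent["text"], "->", ent["label"])
```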
Brief-details: Arabic/English text embedding model using the Matryoshka technique, offering flexible dimensionality (8-768D) with 135M parameters, optimized for semantic similarity tasks.
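Matryoshka training is what allows the embedding to be truncated after the fact; with sentence-transformers this is a one-argument change (the repo id is a placeholder):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# truncate_dim keeps only the first N Matryoshka dimensions of each embedding.
model = SentenceTransformer("org/arabic-matryoshka-embeddings", truncate_dim=64)

embeddings = model.encode(["مرحبا بالعالم", "Hello, world"])
print(embeddings.shape)  # (2, 64) rather than the full 768 dimensions
print(cos_sim(embeddings[0], embeddings[1]))
```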
Brief Details: Optimized 12.7B parameter multimodal model supporting text/image input, with FP8 quantization for a 50% memory reduction. Supports 8 languages.
Brief-details: A 2.6B parameter bilingual (Japanese-English) instruction-tuned LLM based on Gemma 2, enhanced with Chat Vector and ORPO optimization techniques.
Brief-details: A specialized LoRA model for Flux that enhances 90s anime art generation, featuring sharper backgrounds and authentic retro anime aesthetics. Created by MindlyWorks, CC0-1.0 licensed.
Brief-details: A compact but powerful sentence embedding model with 24.1M parameters, offering binary quantization and Matryoshka capabilities while maintaining 93.9% of baseline performance.
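Binary quantization packs each embedding dimension into a single bit, cutting index memory roughly 32x; a sketch with a placeholder repo id:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.quantization import quantize_embeddings

model = SentenceTransformer("org/compact-embedder")  # hypothetical repo id

embeddings = model.encode(["how do I reset my password?", "password reset steps"])

# Each float32 dimension becomes one bit, packed 8-per-byte into int8 values.
binary = quantize_embeddings(embeddings, precision="binary")
print(embeddings.dtype, embeddings.shape, "->", binary.dtype, binary.shape)
```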