Brief-details: A Stable Diffusion model fine-tuned on Clone Wars TV series screenshots, enabling generation of images in the Clone Wars animation style via the trigger phrase "clonewars style".
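A minimal sketch of how such a trigger-word fine-tune is typically invoked with diffusers; the repo id below is a placeholder, not the model's actual id:

```python
# Minimal sketch: trigger-word generation with a fine-tuned Stable Diffusion checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "your-namespace/clone-wars-diffusion",  # hypothetical repo id; use the real one
    torch_dtype=torch.float16,
).to("cuda")

# Including the trigger phrase "clonewars style" activates the fine-tuned style.
image = pipe("clonewars style, a jedi duel on a desert planet").images[0]
image.save("clone_wars_duel.png")
```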
Brief-details: A Stable Diffusion model fine-tuned on Disco Elysium game character portraits, enabling generation of artwork in the game's distinctive style.
Brief-details: Chinese RoBERTa model (110M params) fine-tuned for sentiment analysis, achieving 96-97% accuracy across major benchmarks. Built by the IDEA-CCNL team.
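A minimal usage sketch with the transformers pipeline; the repo id follows IDEA-CCNL's naming but is an assumption, so check the actual model card:

```python
# Minimal sketch: Chinese sentiment classification with a fine-tuned RoBERTa model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="IDEA-CCNL/Erlangshen-Roberta-110M-Sentiment",  # assumed repo id
)

# Example: "The food at this restaurant is delicious" -> e.g. [{'label': 'Positive', 'score': ...}]
print(classifier("这家餐厅的菜非常好吃"))
```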
Brief-details: Gemma-2-Ataraxy-9B is a merged model combining SimPO and Gutenberg finetunes; it ranks near the top of creative-writing benchmarks, with 10.2B parameters and BF16 precision.
Brief-details: Advanced IP-Adapter for Stable Diffusion 3.5 using SigLIP image encoding, enabling image-prompt conditioning with 64 image tokens.
Brief-details: OCR model for CAPTCHA recognition using a CNN+RNN architecture with CTC loss. Built with the Keras Functional API and model subclassing.
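A minimal sketch of the CTC-loss layer pattern this kind of model typically uses (tf.keras on TF 2.x assumed; layer name and shapes are illustrative, not taken from the model itself):

```python
# Minimal sketch: a CTC-loss layer combining Keras subclassing with the Functional API.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class CTCLayer(layers.Layer):
    """Adds CTC loss at train time and passes the softmax predictions through unchanged."""

    def call(self, y_true, y_pred):
        batch_len = tf.cast(tf.shape(y_true)[0], dtype="int64")
        input_length = tf.cast(tf.shape(y_pred)[1], dtype="int64") * tf.ones((batch_len, 1), dtype="int64")
        label_length = tf.cast(tf.shape(y_true)[1], dtype="int64") * tf.ones((batch_len, 1), dtype="int64")
        self.add_loss(keras.backend.ctc_batch_cost(y_true, y_pred, input_length, label_length))
        return y_pred

# In the Functional graph, the layer is wired after the RNN softmax head, e.g.:
#   output = CTCLayer(name="ctc_loss")(labels_input, softmax_output)
```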
Brief-details: Specialized 8B parameter LLM built on Llama 3 for document-based Q&A, achieving GPT-4-level performance on conversational QA tasks.
Brief-details: H2O.ai's 4B parameter chat model optimized for mobile deployment. Features a 24-layer architecture, 8K context window, and strong benchmark performance (61.42% avg accuracy).
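A minimal transformers sketch of how a small chat model like this is typically run; the repo id is a placeholder, and the tokenizer is assumed to ship a chat template:

```python
# Minimal sketch: chatting with a small causal LM via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "h2oai/small-4b-chat"  # hypothetical repo id; substitute the real checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

messages = [{"role": "user", "content": "Why are small LLMs a good fit for mobile devices?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```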
Brief-details: Next-DiT model (2B params) using a Gemma-2B text encoder for text-to-image generation. Features supervised fine-tuning and the SDXL VAE for enhanced image quality.
Brief-details: A 70B parameter Japanese-optimized Llama 3.1 model fine-tuned for instruction following, supporting both Japanese and English with BF16 precision.
Brief-details: A 70B parameter LLM fine-tuned for Traditional Mandarin and English, achieving SOTA performance on Taiwanese benchmarks with an 8K context window and comprehensive domain coverage.
Brief-details: Qwen1.5-7B-Chat-GGUF is a 7.7B parameter chat model with multilingual support and a 32K context length, offered in multiple quantization levels for efficient deployment.
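A minimal sketch of running one of the GGUF quantizations with llama-cpp-python; the file name is illustrative, so substitute whichever quant level (e.g. Q4_K_M) you downloaded:

```python
# Minimal sketch: local inference on a GGUF chat quantization with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen1_5-7b-chat-q4_k_m.gguf",  # assumed local file name
    n_ctx=32768,       # the model supports a 32K context window
    n_gpu_layers=-1,   # offload all layers to GPU when available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}]
)
print(out["choices"][0]["message"]["content"])
```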
Brief-details: Snowflake's Arctic base model, featuring 482B parameters in a dense-MoE hybrid architecture, optimized for enterprise AI applications with BF16 precision.
Brief-details: EEVE-Korean-Instruct is a 10.8B parameter Korean language model optimized for instruction-following, available in GGUF format for efficient deployment.
Brief-details: SeaLLM-7B-v2: State-of-the-art 7B parameter multilingual LLM supporting 10 Southeast Asian languages, with superior math reasoning and commonsense capabilities.
Brief-details: High-performance 34B parameter chat model quantized to GGUF format, optimized for CPU/GPU inference with multiple quantization options from 2-bit to 8-bit.
Brief-details: A high-performing 60.8B parameter MoE model combining Yi and Mixtral architectures, achieving a 76.72 average score on the Open LLM Leaderboard with strong multilingual capabilities.
Brief-details: LLaVA-Med is a biomedical vision-language model that extends LLaVA for medical imaging analysis and question answering, trained on the PMC-15M dataset.
Brief-details: Mythalion-13B is a merged model combining Pygmalion-2 13B and MythoMax 13B, optimized for RP/chat applications, with GGUF quantization formats for efficient deployment.
Brief-details: A powerful zero-shot object detection model with 233M parameters, based on the DINO architecture. Enables open-set detection through text-guided identification.
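A minimal sketch of text-guided, open-set detection with the transformers zero-shot object detection classes; the repo id is an assumption based on the description, so verify it against the model card:

```python
# Minimal sketch: zero-shot object detection driven by a free-text class list.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

model_id = "IDEA-Research/grounding-dino-base"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id)

image = Image.open("street.jpg")
text = "a cat. a traffic light."  # classes as a lowercase, period-separated string

inputs = processor(images=image, text=text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Default confidence thresholds are used here; tune them for your data.
results = processor.post_process_grounded_object_detection(
    outputs, inputs.input_ids, target_sizes=[image.size[::-1]]
)
print(results[0]["boxes"], results[0]["labels"])
```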
Brief-details: ControlNet model specialized in QR code pattern generation with selective pixel conditioning (25% black/white), built on SD-v1-5; ideal for artistic QR manipulation.
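A minimal diffusers sketch of pairing a QR-pattern ControlNet with SD 1.5; the ControlNet repo id is a placeholder, and the conditioning scale is an illustrative starting point:

```python
# Minimal sketch: artistic QR generation with a QR-pattern ControlNet on SD 1.5.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "your-namespace/controlnet-qrcode-sd15",  # hypothetical repo id; use the real checkpoint
    torch_dtype=torch.float16,
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

qr = load_image("my_qr_code.png")  # plain black-and-white QR image used as the condition
image = pipe(
    "a watercolor garden, intricate, highly detailed",
    image=qr,
    controlnet_conditioning_scale=1.3,  # raise this if the generated code stops scanning
).images[0]
image.save("artistic_qr.png")
```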