Brief-details: GPT-4chan is a controversial fine-tune of GPT-J 6B trained on data from 4chan's /pol/ board, showing improved performance on some benchmarks (notably TruthfulQA) while raising serious ethical concerns.
Brief-details: An 8B parameter LLaMA-based reasoning model offering transparent, JSON-formatted thought processes. Runs on 16GB+ VRAM with structured problem-solving capabilities.
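A minimal loading sketch for a model of this type, assuming a transformers-compatible checkpoint (the repo id below is a placeholder) and fp16 weights to fit the stated 16GB VRAM budget:

```python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/llama-8b-reasoner"  # placeholder repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # ~16 GB in fp16
)

inputs = tok.apply_chat_template(
    [{"role": "user", "content": "How many primes are below 20?"}],
    add_generation_prompt=True, return_tensors="pt",
).to(model.device)
out = model.generate(inputs, max_new_tokens=512)
reply = tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

# The card says the thought process is emitted as JSON; parse defensively
try:
    steps = json.loads(reply)
except json.JSONDecodeError:
    steps = {"raw": reply}
```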
Brief-details: TAIDE-LX-7B-Chat is a 7B parameter language model from Taiwan's TAIDE project, designed for chat applications and released under a community license with privacy considerations built in.
Brief-details: ChatDoctor is a medical AI assistant fine-tuned from LLaMA on 200k medical dialogues, designed to simulate doctor-patient interactions and offer medical advice.
Brief-details: Mamba-2.8B is a state-of-the-art selective state space model with 2.8B parameters, offering efficient sequence processing and strong performance on language tasks.
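A usage sketch, assuming the transformers-compatible checkpoint at state-spaces/mamba-2.8b-hf:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "state-spaces/mamba-2.8b-hf"  # assumed transformers-ready checkpoint
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

ids = tok("Selective state space models process sequences", return_tensors="pt").input_ids.to(model.device)
print(tok.decode(model.generate(ids, max_new_tokens=40)[0], skip_special_tokens=True))
```

Unlike attention, Mamba's recurrence keeps per-token generation cost constant in sequence length.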
Brief-details: A 13B parameter bilingual Arabic-English LLM fine-tuned on 10M instruction pairs, featuring SwiGLU activations and ALiBi positional biases for enhanced long-context handling.
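For reference, ALiBi replaces learned position embeddings with a per-head linear distance penalty added to attention scores; a minimal sketch of the bias computation (assumes the head count is a power of two, per the ALiBi paper):

```python
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    """Per-head linear attention bias, added to QK^T scores before softmax."""
    # Geometric slope schedule from the ALiBi paper; assumes n_heads is a power of two
    slopes = torch.tensor([2.0 ** (-8.0 * (i + 1) / n_heads) for i in range(n_heads)])
    pos = torch.arange(seq_len)
    distance = (pos[None, :] - pos[:, None]).clamp(max=0)  # 0 on/above diagonal, negative to the left
    return slopes[:, None, None] * distance[None, :, :].float()  # shape (heads, query, key)
```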
Brief-details: Reliberate is a Stable Diffusion-based image generation checkpoint by XpucT (creator of Deliberate), hosted on Hugging Face.
Brief-details: XuanYuan2.0 is a Chinese financial-domain LLM from the xyz-nlp team, released for non-commercial research use under licensing terms reflecting Chinese governance and compliance requirements.
Brief-details: FitDiT is an advanced virtual try-on AI model using Diffusion Transformers for high-fidelity garment visualization, featuring authentic detail preservation and two-step processing.
Brief-details: BERT-based spam detection model achieving 99.37% validation accuracy. Fine-tuned on 5.57k entries for binary classification of spam/ham messages.
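A minimal inference sketch via the transformers pipeline API (the repo id is a placeholder; label strings depend on the checkpoint's config):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="your-org/bert-spam-detector")  # placeholder id
print(clf("WINNER!! Claim your free prize now, just reply YES"))
# e.g. [{'label': 'spam', 'score': 0.99}] -- label names come from the model's config
```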
Brief-details: A 3B parameter LLaMA-based model specialized in generating music-related content, including lyrics and compositions. Fine-tuned on song patterns for creative music text generation.
Brief-details: A quantized version of patricide-12B offering multiple GGUF variants from 3.1GB to 10.2GB, optimized for different performance/quality tradeoffs with imatrix quantization.
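A loading sketch with llama-cpp-python, assuming a mid-range Q4_K_M variant (the file name is illustrative; pick the quant that fits your memory budget):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="patricide-12B.Q4_K_M.gguf",  # illustrative file name
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to GPU if available
)
out = llm("Q: Why use imatrix quantization?\nA:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```

Lower-bit quants (toward the 3.1GB end) cut memory at the cost of output quality; imatrix calibration mitigates some of that loss.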
Brief-details: A specialized image generation model featuring multiple variants (NV, NV-551, NV-L5, etc.) focused on distinctive anime-style artwork with configurable parameters.
Brief-details: YOLO-based manga element detection model with variants (nano to extra-large) for detecting body, face, frame and text in manga, achieving 88-92% F1 scores.
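A detection sketch with the ultralytics API (the weights file name is assumed; the class-id mapping to body/face/frame/text comes from the model card):

```python
from ultralytics import YOLO

model = YOLO("manga-elements-x.pt")  # assumed weights file; variants span nano to extra-large
results = model("page_017.png", conf=0.5)

for box in results[0].boxes:
    label = results[0].names[int(box.cls)]  # body / face / frame / text
    print(label, [round(v) for v in box.xyxy[0].tolist()])
```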
Brief-details: An 8B parameter LLaMA 3.1-based model fine-tuned for concise responses, featuring DPO and KTO preference-optimization training. Specialized for roleplay and creative writing.
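For context, DPO trains directly on preference pairs without a separate reward model; a minimal sketch of the loss given per-sequence log-probs (notation mine, not the model authors'):

```python
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss over summed log-probs of chosen/rejected responses (Rafailov et al., 2023)."""
    logits = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -F.logsigmoid(beta * logits).mean()
```

KTO relaxes the pairing requirement, learning from unpaired "desirable"/"undesirable" labels instead of explicit preference pairs.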
Brief-details: F5-TTS-Russian is a fine-tuned text-to-speech model optimized for Russian and English, trained for 813k steps on 100k hours of data.
Brief-details: High-performance text-to-image model derived from SD3.5-medium, optimized for speed with 4-8 step inference and LoRA support.
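An inference sketch with diffusers, using the SD3 pipeline on the Stability base; the LoRA repo id is a placeholder illustrating the advertised LoRA support:

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("your-org/sd35m-speed-lora")  # placeholder LoRA repo

# Few-step variants typically want low guidance
image = pipe("a lighthouse at dawn, film grain", num_inference_steps=8, guidance_scale=1.5).images[0]
image.save("lighthouse.png")
```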
Brief-details: Hammer2.1-7b is a specialized 7B parameter LLM focused on function calling, built on the Qwen 2.5 Coder series with multi-step and multi-turn capabilities.
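A function-calling sketch using the transformers chat-template tools API, which renders a JSON schema derived from the function signature and docstring into the prompt (the repo id is assumed):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("MadeAgents/Hammer2.1-7b")  # assumed repo id

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    ...

messages = [{"role": "user", "content": "What's the weather in Taipei?"}]
prompt = tok.apply_chat_template(
    messages, tools=[get_weather], add_generation_prompt=True, tokenize=False
)
# `prompt` now embeds the tool schema; generate with the model and parse the emitted tool call
```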
Brief-details: FLUX.1-Fill-dev model optimized with float8 (e4m3fn) quantization for efficient deployment, published by boricuapab under a non-commercial license.
Brief-details: Sortformer speaker diarization model supporting up to 4 speakers with high accuracy. Uses a Fast-Conformer architecture and achieves a DER of 14.76% on the DIHARD3-Eval dataset.
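A diarization sketch, assuming NVIDIA NeMo's Sortformer class and the nvidia/diar_sortformer_4spk-v1 checkpoint; the class and method names below follow NeMo's Sortformer release and should be verified against the model card:

```python
# pip install "nemo_toolkit[asr]" -- API below assumed from NeMo's Sortformer release
from nemo.collections.asr.models import SortformerEncLabelModel

diar_model = SortformerEncLabelModel.from_pretrained("nvidia/diar_sortformer_4spk-v1")
# Expected to return per-speaker segments (start, end, speaker id) for up to 4 speakers
segments = diar_model.diarize(audio="meeting.wav", batch_size=1)
print(segments)
```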
Brief-details: BERT-VI is a specialized BERT-based model for political text analysis in Portuguese, built on the BERTimbau architecture.