Brief-details: A GGUF-quantized version of the Wan2.1 T2V 1.3B model packaged for ComfyUI, offering text-to-video generation with a reduced memory footprint for efficient inference.
Brief-details: A 0.5B parameter draft model designed for speculative sampling with DeepSeek-R1, converted to GGUF format for efficient deployment and inference.
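A draft model like this is paired with the larger target at decode time; a minimal sketch using llama.cpp's speculative-decoding example binary (file names are placeholders, and flag spellings can vary across llama.cpp builds, so check `llama-speculative --help`):

```python
import subprocess

# Placeholder GGUF paths -- substitute the actual target and draft files.
TARGET = "DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf"  # large target model
DRAFT = "deepseek-r1-draft-0.5b-Q8_0.gguf"           # 0.5B draft model

subprocess.run([
    "llama-speculative",
    "-m", TARGET,   # target model verifies the drafted tokens
    "-md", DRAFT,   # draft model proposes tokens cheaply
    "-p", "Explain speculative sampling in one paragraph.",
    "-n", "256",    # number of tokens to generate
], check=True)
```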
Brief-details: 32B parameter LLM fine-tuned on Claude-style datasets, optimized for roleplay. Uses the ChatML format. Trained on 8x H100s for 2 epochs, based on Hamanasu-QwQ-V2-RP.
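ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers; a minimal sketch of building such a prompt by hand (the message text is illustrative):

```python
def chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML prompt; the role markers are literal special tokens."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"  # the model continues from here
    )

prompt = chatml_prompt("You are an in-character roleplay partner.",
                       "Set the scene at the harbor at dusk.")
```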
Brief-details: Light-R1-14B-DS-GGUF is a 14B parameter quantized language model with strong performance on AIME benchmarks, offering int4/int8 variants.
Brief-details: A LoRA model trained on Replicate with the flux-dev trainer for image generation; requires the TOK trigger word and is compatible with the 🧨 diffusers library.
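Loading such a LoRA in 🧨 diffusers follows the usual Flux pattern; a minimal sketch, where the LoRA repo ID is a placeholder and TOK must appear in the prompt:

```python
import torch
from diffusers import FluxPipeline

# Base Flux-dev model; requires accepting its license on the Hub.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

pipe.load_lora_weights("your-username/your-flux-lora")  # hypothetical repo ID

# The TOK trigger word activates the trained concept.
image = pipe(
    "a portrait photo of TOK at golden hour",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("out.png")
```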
Brief-details: Thai text-to-speech model based on the F5-TTS architecture, trained for 430k steps on 90,000 voice samples (~100 hours), capable of natural Thai speech synthesis.
Brief-details: High-performance English PII anonymization model achieving 99.17% accuracy across 20 PII categories. Specialized in text redaction with near-perfect precision.
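Assuming the model is published with a standard token-classification head, redaction can be driven by the transformers pipeline; the repo ID and label names below are placeholders:

```python
from transformers import pipeline

redactor = pipeline(
    "token-classification",
    model="your-org/english-pii-anonymizer",  # hypothetical repo ID
    aggregation_strategy="simple",            # merge sub-tokens into entity spans
)

text = "Contact Jane Doe at jane.doe@example.com or +1-555-0100."
# Replace spans right-to-left so earlier character offsets stay valid.
for span in sorted(redactor(text), key=lambda s: s["start"], reverse=True):
    text = text[: span["start"]] + f"[{span['entity_group']}]" + text[span["end"] :]
print(text)  # e.g. "Contact [NAME] at [EMAIL] or [PHONE]."
```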
Brief-details: Vietnamese text embedding model optimized for legal-domain RAG applications with 768-dimensional output, trained on 100k legal QA pairs using a Matryoshka loss.
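With sentence-transformers, Matryoshka-trained embeddings can be truncated to a smaller dimension at load time; a minimal sketch (repo ID is a placeholder; `truncate_dim` needs sentence-transformers >= 2.7):

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder repo ID; truncate_dim shrinks the 768-dim output, trading
# a little accuracy for cheaper storage -- the point of Matryoshka training.
model = SentenceTransformer("your-org/vi-legal-embedding", truncate_dim=256)

corpus = [
    "Điều 1. Phạm vi điều chỉnh của luật này ...",  # "Article 1. Scope ..."
    "Điều 2. Đối tượng áp dụng ...",                # "Article 2. Applicability ..."
]
query = "Luật này áp dụng cho những đối tượng nào?"  # "Who does this law apply to?"

doc_emb = model.encode(corpus)     # shape (2, 256)
query_emb = model.encode(query)    # shape (256,)
print(util.cos_sim(query_emb, doc_emb))
```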
Brief-details: Advanced 8B parameter video multimodal LLM with enhanced temporal understanding and fine-grained detail perception through TPO and HiCo techniques.
Brief-details: YOLOv8s model fine-tuned for handwritten signature detection, achieving 94.74% precision and 89.72% recall, with fast inference times on both CPU and GPU.
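Detection then follows the standard ultralytics API; a minimal sketch with a placeholder weights file:

```python
from ultralytics import YOLO

model = YOLO("yolov8s-signature.pt")  # hypothetical path to the released weights

# Run inference on a scanned page; each result holds boxes, scores, classes.
results = model("contract_page.png", conf=0.5)
for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"signature at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f}), conf={float(box.conf):.2f}")
```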
Brief-details: A specialized LoRA model for creating squish-style video effects, built on LTX Video v0.9.5. Uses the "SQUISH" trigger word for generating squeeze animations.
Brief-details: FanFic-Illustrator, a 3B parameter model that analyzes creative stories and generates optimal illustration-scene prompts for image generation, specialized in anime/manga content.
Brief-details: Qwen 2.5 7B model fine-tuned with RLHF for creative writing, using the Erebus dataset and a custom reward model for improved narrative generation.
Brief-details: A fine-tuned version of Phi-3-mini optimized for character-based chat with strict prompt formatting requirements and multiple precision options for various hardware specs.
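Given the strict formatting requirements, the safest route is the tokenizer's own chat template rather than hand-built prompt strings; a sketch with a placeholder repo ID:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("your-org/phi3-mini-character-chat")  # hypothetical

messages = [
    {"role": "system", "content": "You are Aria, a cheerful ship's navigator."},
    {"role": "user", "content": "Where are we headed next?"},
]
# apply_chat_template emits exactly the markup the model was trained on.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```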
Brief-details: RWKV7 2.9B parameter language model using flash-linear attention, trained on 3.119T tokens; features an efficient recurrent architecture and the World tokenizer with a 65k vocabulary.
Brief-details: A 32B parameter Japanese-focused instruction-tuned LLM built on Qwen2.5, enhanced with Chat Vector and ORPO optimization, showing strong reasoning capabilities.
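The Chat Vector technique is plain weight arithmetic: subtract a base model from its instruction-tuned counterpart and add the delta to a language-adapted base. A conceptual sketch under the assumption of matched architectures (model IDs are illustrative, not this model's actual recipe):

```python
import torch
from transformers import AutoModelForCausalLM

kw = dict(torch_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-32B", **kw)
inst = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-32B-Instruct", **kw)
ja = AutoModelForCausalLM.from_pretrained("your-org/qwen2.5-32b-ja-base", **kw)  # hypothetical

sd_base, sd_inst = base.state_dict(), inst.state_dict()
with torch.no_grad():
    for name, tensor in ja.state_dict().items():
        # chat_vector = instruct - base; shape check skips embedding layers
        # when the language-adapted model changed its vocabulary.
        if name in sd_base and tensor.shape == sd_base[name].shape:
            tensor += sd_inst[name] - sd_base[name]

ja.save_pretrained("qwen2.5-32b-ja-chatvector")
```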
Brief-details: EasyControl, a flexible conditional DiT framework enhancing transformer-based diffusion models with efficient control mechanisms and multi-condition support.
Brief-details: LHM_Runtime converts a single image into an animatable 3D human model in seconds, using a feed-forward architecture and video-based training.
Brief-details: A LoRA model trained on Replicate using flux-dev-lora-trainer, designed for image generation with TOK trigger word support and UltraRealism capabilities.
Brief-details: 32B parameter RWKV-based model converted from Qwen 2.5; its linear-attention architecture offers a reported ~1000x inference cost reduction while maintaining competitive performance across benchmarks.
Brief-details: Specialized Japanese speech recognition model optimized for anime content, featuring reduced hallucination and improved domain-specific accuracy with beam search support.
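Assuming a Whisper-style checkpoint, beam search is enabled through generation kwargs on the standard ASR pipeline; the repo ID below is a placeholder:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-org/anime-whisper-ja",  # hypothetical repo ID
)

# num_beams > 1 turns on beam search; pinning the language avoids misdetection.
result = asr(
    "episode_clip.wav",
    generate_kwargs={"num_beams": 5, "language": "ja"},
)
print(result["text"])
```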