Brief-details: Specialized 7B parameter Japanese-to-Chinese translation model optimized for visual novels (Galgame), with strong handling of script elements and formatting.
Brief-details: Quantized 3B parameter text-to-speech model supporting 8 distinct voices and emotions, optimized for efficient inference at 24kHz audio output.
Brief-details: A specialized 7B-parameter vision-language-action model for Minecraft gameplay, enabling natural language control of in-game actions via keyboard and mouse interactions.
Brief-details: A modified 12B parameter variant of Google's Gemma model, tuned for adversarial responses and unique perspectives. Features vision capabilities and alternative personality traits.
Brief-details: Specialized 4B parameter variant of Google's Gemma optimized for neutral information retrieval, featuring reduced moral constraints and bias dampening.
Brief-details: A 3B parameter Vietnamese reasoning model focused on analytical tasks, developed by 5CD-AI. Currently in beta, specializing in detailed multi-step reasoning.
Brief-details: A comprehensive 24B parameter LLaMA-based model with multiple GGUF quantizations, optimized for performance and efficiency with sizes ranging from 7GB to 47GB.
Brief-details: Mistral Small 3.1 24B Instruct model converted to Hugging Face format and optimized for text-only tasks.
Brief-details: Experimental uncensored version of Gemma 3 4B using a layerwise abliteration technique, optimized for reduced refusals while maintaining coherent outputs.
Brief-details: Advanced 70B parameter LLM fine-tuned from Llama-3.3, specializing in selecting high-quality responses. Achieves 93.4% accuracy on Arena Hard with Feedback-Edit ITS.
Brief-details: Sonata is a Facebook-developed AI model accessible via Hugging Face, focused on audio and music processing capabilities.
Brief-details: NVIDIA's 7B parameter transfer learning model optimized for autonomous vehicle applications, focusing on vision and control tasks.
Brief-details: Advanced SDXL-based image generation model with 1536×1536 native resolution, combining natural language and tag-based prompting for high-quality illustrations. Built by OnomaAI with future plans for 2K+ resolution support.
Brief-details: Voice safety classification model by Roblox, fine-tuned on WavLM for detecting toxicity in voice chat. 94.48% precision across multiple violation categories. Built on 2,374 hours of audio data.
Brief-details: A 4B parameter "evil-tuned" variant of Google's Gemma model, designed with reduced ethical constraints and enhanced capability for unconventional responses. Built by TheDrummer with vision capabilities.
Brief-details: A specialized LoRA model for creating cake-transformation video effects, built on LTX Video v0.9.5. Uses the "CAKEIFY" trigger word for video manipulation.
Brief-details: RWKV7 2.9B parameter language model with multiple quantization options, trained on the World v3 dataset (3.119T tokens), featuring a flash-linear-attention architecture.
Brief-details: Specialized 4B parameter variant of Google's Gemma, optimized for neutral information retrieval with reduced moral bias and refusal mechanisms.
Brief-details: A 24B parameter instruction-tuned language model based on the Mistral architecture, optimized for instruction following and deployed on Hugging Face.
Brief-details: MagicMotion is an AI video generation model that uses dense-to-sparse trajectory guidance for controllable video creation, developed by quanhaol.
Brief-details: Vamba-Qwen2-VL-7B is a hybrid Mamba-Transformer model designed for efficient hour-long video understanding, combining cross-attention layers and Mamba-2 blocks to reduce computational overhead.