Brief-details: EraX-WoW-Turbo-V1.0 is a high-speed multilingual speech recognition model optimized for Vietnamese and 10 other languages, featuring ultra-low latency and high accuracy.
Brief-details: Uncensored version of DeepSeek-671B produced with the abliteration technique. Work-in-progress model focused on unrestricted capabilities, with pruning improvements planned.
Brief-details: llama.cpp imatrix-quantized builds of Google's Gemma 3 27B model with vision capabilities, offering multiple compression options from 8GB to 54GB.
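The size spread in blurbs like this one follows roughly from parameters × bits-per-weight. A minimal sketch, assuming 27B parameters and approximate average bits-per-weight for common llama.cpp quant types (both are illustrative figures, not measured values):

```python
# Rough GGUF size estimate: parameters * bits-per-weight / 8 bytes.
# Bits-per-weight values are approximate averages for common llama.cpp
# quant types; 27e9 parameters is an assumption for the largest Gemma 3
# variant. Both are illustrative, not exact.

BITS_PER_WEIGHT = {
    "Q2_K": 2.6,    # heaviest compression
    "Q4_K_M": 4.8,  # common quality/size balance
    "Q8_0": 8.5,    # near-lossless
    "BF16": 16.0,   # unquantized reference
}

def gguf_size_gb(n_params: float, quant: str) -> float:
    """Approximate on-disk size in GB for a given quant type."""
    return n_params * BITS_PER_WEIGHT[quant] / 8 / 1e9

for quant in BITS_PER_WEIGHT:
    print(f"{quant:7s} ~{gguf_size_gb(27e9, quant):5.1f} GB")
```

With these assumed figures, the endpoints land near the 8GB (Q2_K) and 54GB (BF16) bounds the blurb quotes.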
Brief-details: LoRA model for Wan2.1 14B I2V that creates squishing-effect animations. Trained on 1.5 minutes of video data across 18 epochs. Supports 480p output.
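For adapter entries like this one, the standard LoRA update modifies a frozen base weight as W' = W + (alpha/r)·B·A. A toy sketch with made-up 2×2 numbers, not the actual Wan2.1 weights:

```python
# Minimal LoRA merge sketch: W' = W + (alpha / r) * B @ A,
# where A is (r x in) and B is (out x r). All values are toy
# numbers chosen for illustration, not real adapter weights.

def matmul(X, Y):
    """Plain nested-list matrix multiply."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def merge_lora(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A), element-wise."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight (2x2)
A = [[0.1, 0.2]]               # rank-1 down projection (1x2)
B = [[1.0], [2.0]]             # rank-1 up projection (2x1)
print(merge_lora(W, A, B, alpha=2, r=1))
```

Because r is small, the adapter stores only A and B, which is why LoRA files stay tiny relative to the 14B base model.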
Brief-details: Unsloth's GGUF-optimized build of Gemma 3 4B. Features a 128K context window, multimodal capabilities, and strong multilingual support across 140+ languages.
Brief-details: Quantized versions of RekaAI's reka-flash-3 model offering various compression levels from 7GB to 41GB, optimized for different hardware configurations and use cases.
Brief-details: Gemma 3 12B instruction-tuned model in GGUF format. Trained on 12T tokens, handles text+image input with a 128K context window, optimized for efficiency.
Brief-details: Gemma 3 pretrained model from Google, requiring explicit license agreement. Part of Google's new open model series, with 3.4B parameters.
Brief-details: Control LoRA models for Wan2.1, offering lightweight alternatives to ControlNet for video-to-video generation with efficient tile-based control signals.
Brief-details: A powerful multilingual reranking model supporting 100+ languages with SOTA performance, optimized for English/Chinese, featuring code and long-context support.
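The rerank step such models perform is simple in shape: score every (query, document) pair, then sort by score. A runnable sketch where a toy token-overlap scorer stands in for the actual model (the scorer is a placeholder for illustration, not the model's method):

```python
# Reranking flow sketch: score each (query, doc) pair, sort descending.
# toy_score is a stand-in for a real cross-encoder reranker so the
# example runs without downloading any model.

def toy_score(query: str, doc: str) -> float:
    """Fraction of query tokens that appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def rerank(query, docs, score=toy_score):
    """Return docs sorted by descending relevance score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)

docs = ["cats sleep a lot",
        "rerankers sort documents by relevance",
        "how rerankers score query document pairs"]
print(rerank("how do rerankers score documents", docs))
```

A real reranker replaces `toy_score` with a forward pass over the concatenated query and document, which is what makes rerankers slower but more accurate than embedding retrieval alone.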
Brief-details: ShieldGemma-2-4B-IT is Google's 4B instruction-tuned safety-classification model built on Gemma 3, requiring explicit license acceptance through Hugging Face for access and usage.
Brief-details: Open-Sora-v2 is an 11B parameter open-source video generation model supporting both text-to-video and image-to-video generation at 256px and 768px resolutions.
Brief-details: R1-Omni-0.5B is a groundbreaking omni-multimodal emotion recognition model using reinforcement learning, achieving 65.83% WAR on the DFEW dataset.
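WAR (weighted average recall), the metric quoted here, weights each emotion class's recall by its sample count; UAR is the plain mean. A small sketch with made-up class counts and recalls (not DFEW figures):

```python
# WAR vs UAR sketch. The recalls and class counts below are toy
# numbers to show the formulas, not results from DFEW.

def war(recalls, counts):
    """Weighted average recall: recall weighted by class size."""
    total = sum(counts)
    return sum(r * c for r, c in zip(recalls, counts)) / total

def uar(recalls):
    """Unweighted average recall: plain mean over classes."""
    return sum(recalls) / len(recalls)

recalls = [0.9, 0.6, 0.3]   # per-class recall (toy)
counts = [100, 50, 10]      # samples per class (toy)
print(f"WAR={war(recalls, counts):.3f}  UAR={uar(recalls):.3f}")
```

On imbalanced datasets the two diverge: WAR rewards doing well on frequent classes, which is why papers often report both.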
Brief-details: A 24B parameter multimodal LLM with vision capabilities, a 128K context window, and support for 24+ languages. Achieves SOTA results on text and vision benchmarks.
Brief-details: Gemma 3 27B model in GGUF format - Google's advanced multimodal LLM supporting text & image input with a 128K context window, optimized for efficient deployment.
Brief-details: Gemma 3 27B Pretrained is Google's powerful language model with 27B parameters, requiring license agreement for access via Hugging Face.
Brief-details: DeepHermes-3-Mistral-24B is a hybrid reasoning LLM that uniquely combines intuitive responses with systematic reasoning, built on Mistral 24B architecture with advanced function calling capabilities.
Brief-details: Gemma-3-1b-pt is Google's 1B parameter pretrained language model requiring explicit license agreement, offering a compact yet capable entry in the Gemma 3 family.
Brief-details: OLMo-2-0325-32B-Instruct is a 32B parameter open language model fine-tuned on the Tülu 3 dataset, optimized for diverse tasks including MATH and GSM8K.
Brief-details: OlympicCoder-32B is a 32B parameter code model fine-tuned from Qwen2.5-Coder, specialized in competitive programming tasks with strong IOI'24 performance.
Brief-details: Gemma 3 27B Instruction-Tuned (IT) - Google's powerful large language model optimized for instruction following, requiring license agreement for access.