Brief-details: DeepSeek-VL2-Small: an MoE vision-language model with 2.8B activated parameters, offering state-of-the-art performance in visual QA, OCR, and document understanding tasks.
Brief-details: DistilBERT-based multilingual sentiment analyzer supporting 21 languages with 5-class classification. Built for global sentiment analysis tasks across social media, reviews, and feedback.
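A minimal usage sketch via the transformers text-classification pipeline; the repo id below is a placeholder, and the exact 5-class label set should be confirmed on the model card.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual checkpoint from the model card.
classifier = pipeline(
    "text-classification",
    model="org-name/distilbert-multilingual-sentiment",
)

# A 5-class head typically maps to Very Negative .. Very Positive.
print(classifier("Das Produkt ist fantastisch!"))
```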
Brief-details: A specialized LoRA model for generating images of Korean palaces, particularly Gyeongbokgung, built on the FLUX.1-dev base model for authentic architectural representation.
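A sketch of loading the LoRA on top of FLUX.1-dev with diffusers; the LoRA repo id and prompt phrasing are placeholders.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("user/gyeongbokgung-flux-lora")  # placeholder LoRA id
pipe.enable_model_cpu_offload()  # FLUX.1-dev is large; offload helps consumer GPUs

image = pipe(
    "Gyeongbokgung palace at dawn, traditional dancheong detailing",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("gyeongbokgung.png")
```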
Brief-details: 107M-parameter multilingual embedding model supporting 12 languages, producing 384-dim vectors. Ideal for text similarity and retrieval tasks. Apache 2.0 licensed.
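A sentence-transformers sketch, assuming the checkpoint loads through that library (the repo id is a placeholder); the 384-dim vectors come back from encode().

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("org-name/multilingual-embed-107m")  # placeholder id

sentences = ["Wo ist der Bahnhof?", "Where is the train station?"]
embeddings = model.encode(sentences)  # shape: (2, 384)

# Cosine similarity for cross-lingual similarity/retrieval.
print(util.cos_sim(embeddings[0], embeddings[1]))
```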
Brief-details: EXAONE-3.5-2.4B-Instruct is a bilingual (English/Korean) LLM with 2.4B parameters, a 32K-token context window, and state-of-the-art performance on real-world tasks.
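A chat sketch with transformers; EXAONE ships custom modeling code, so trust_remote_code is required. Verify the exact repo id on the Hub.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct"  # verify on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto"
)

messages = [{"role": "user", "content": "Translate 'good morning' into Korean."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```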
Brief-details: 7B-parameter multilingual instruction-tuned LLM supporting a 32K context window, optimized for STEM/reasoning tasks, with state-of-the-art performance in 4 languages.
Brief-details: Advanced 78B-parameter LLM based on Qwen2.5-72B, fine-tuned for general-purpose domains with strong performance (52.02 avg on the OpenLLM leaderboard), with GGUF and EXL2 quantized variants available.
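A llama-cpp-python sketch for running one of the GGUF quants locally; the filename is illustrative.

```python
from llama_cpp import Llama

# Point at a downloaded GGUF quant of the model (filename is illustrative).
llm = Llama(model_path="./model-Q4_K_M.gguf", n_ctx=8192, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me three RAG evaluation metrics."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```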
Brief-details: Multilingual Indic TTS model supporting 21 languages with 69 unique voices, offering natural speech synthesis with controllable voice features and high ratings from native-speaker evaluations.
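A generation sketch assuming the model follows the Parler-TTS interface (description-conditioned voice control), which fits the controllable-features claim; the repo id and prompt format are placeholders to verify against the card.

```python
import soundfile as sf
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer

repo = "org-name/indic-tts"  # placeholder id
model = ParlerTTSForConditionalGeneration.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

prompt = "नमस्ते, आप कैसे हैं?"                                  # text to speak (Hindi)
description = "A calm female voice, clear and close-sounding."  # voice/style control

desc_ids = tokenizer(description, return_tensors="pt").input_ids
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids

audio = model.generate(input_ids=desc_ids, prompt_input_ids=prompt_ids)
sf.write("out.wav", audio.cpu().numpy().squeeze(), model.config.sampling_rate)
```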
Brief-details: Command-family model from CohereForAI's C4AI series, featuring Arabic language capabilities alongside related vision models in the same series.
Brief-details: Gemma-2-2b is Google's 2B-class language model, gated behind a license agreement, with instruction-tuned variants available and a strong balance of performance and size.
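Because the weights are gated, accept the license on the model page and authenticate before loading; a minimal sketch using the instruction-tuned variant.

```python
import torch
from huggingface_hub import login
from transformers import pipeline

login(token="hf_...")  # or run `huggingface-cli login` once in the shell

generator = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",  # instruction-tuned variant
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(generator("Explain RLHF in one sentence.", max_new_tokens=60)[0]["generated_text"])
```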
Brief-details: OpenVoiceV2 is an advanced multilingual voice-cloning model with improved audio quality, native support for 6 languages, and an MIT license permitting commercial use. Features accurate tone cloning and style control.
Brief-details: QwQ-32B packaged with Unsloth's dynamic 4-bit quantization: a 32.5B-parameter reasoning model with 131K context that quantizes layers selectively to preserve accuracy at a smaller footprint.
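A loading sketch via Unsloth's FastLanguageModel; the repo id follows Unsloth's usual -bnb-4bit naming scheme and should be verified.

```python
from unsloth import FastLanguageModel

# Dynamic 4-bit quants load through the same API as full checkpoints.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/QwQ-32B-unsloth-bnb-4bit",  # verify exact repo id
    max_seq_length=8192,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the optimized inference path
```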
Brief-details: AMD's 3B parameter LLM trained on 4T tokens, featuring 36 decoder layers and 32 attention heads, designed for advanced language understanding and problem-solving.
Brief-details: NuExtract-2-8B is an 8B parameter model specialized for structured information extraction from text and images, supporting multilingual content and JSON template-based extraction.
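A text-only extraction sketch; the loading class and prompt layout below are illustrative, since NuExtract's exact template format (and its image input path) is defined on the model card.

```python
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "numind/NuExtract-2-8B"  # verify the exact repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

# Illustrative JSON template describing the fields to extract.
template = {"invoice_number": "string", "total": "number", "currency": "string"}
document = "Invoice #A-1042, total due 1,250.00 EUR by March 31."

prompt = f"### Template:\n{json.dumps(template, indent=2)}\n### Text:\n{document}\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```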
Brief-details: Korean language model with 14B parameters optimized for reasoning tasks. Achieves a 37.68% average score on the HAE-RAE benchmark, with strong puzzle and math performance.
Brief-details: GGUF quantized variants of TinyR1-32B-Preview by Qihoo360, offering 24 quantization levels from 9.96GB to 34.82GB with varying quality/size tradeoffs.
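To grab a single quant level rather than cloning the whole multi-file repo, hf_hub_download works well; the repo id and filename below are placeholders.

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="qihoo360/TinyR1-32B-Preview-GGUF",   # placeholder repo id
    filename="TinyR1-32B-Preview-Q4_K_M.gguf",    # pick a quant from the listing
)
print(path)  # local cache path, ready for llama.cpp / llama-cpp-python
```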
Brief-details: Bilingual Russian/English 8B parameter instruction-tuned LLM based on YandexGPT-5-Lite, specialized in RAG and instruction following.
Brief-details: Ling-lite is a 16.8B parameter MoE LLM with 2.75B activated parameters, offering a 64K context length and efficient scaling for diverse NLP tasks.
Brief-details: Japanese-language instruction-tuned 1B parameter model with strong performance on Japanese/English tasks, outperforming similarly sized models on multilingual benchmarks.
Brief-details: A comprehensive collection of SillyTavern settings, presets, and guides focused on optimizing smaller language models, including banned tokens, GM prompts, and jailbreak techniques.
Brief-details: SkyReels V1 is a state-of-the-art open-source human-centric video foundation model, generating cinematic-quality videos with advanced facial animation across 33 distinct expressions.
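Since SkyReels V1 is HunyuanVideo-based, a plausible loading path is diffusers' HunyuanVideoPipeline; the repo id is a placeholder and compatibility should be checked against the model card.

```python
import torch
from diffusers import HunyuanVideoPipeline
from diffusers.utils import export_to_video

pipe = HunyuanVideoPipeline.from_pretrained(
    "org-name/SkyReels-V1", torch_dtype=torch.bfloat16  # placeholder repo id
)
pipe.enable_model_cpu_offload()  # the model is large; offload helps single-GPU use

frames = pipe(
    prompt="Close-up of a woman smiling, then laughing, cinematic lighting",
    num_frames=97,
    height=544,
    width=960,
).frames[0]
export_to_video(frames, "skyreels.mp4", fps=24)
```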