Brief-details: A specialized LoRA model for Stable Diffusion 3.5 that generates Chinese line art illustrations with detailed artistic styling and cultural elements.
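A minimal sketch of loading such a LoRA on top of SD 3.5 with diffusers; the LoRA repo id, prompt, and file names are placeholders:

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Load the Stable Diffusion 3.5 base pipeline (bfloat16 keeps VRAM use manageable).
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the line-art LoRA; "user/chinese-line-art-lora" is a hypothetical repo id.
pipe.load_lora_weights("user/chinese-line-art-lora")

image = pipe(
    "Chinese line art illustration of a mountain temple, ink brush style",
    num_inference_steps=28,
    guidance_scale=4.5,
).images[0]
image.save("line_art.png")
```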
Brief-details: A 3.3B parameter MoE transformer model optimized for text generation, featuring 40 experts and 800M active parameters, trained on diverse multilingual data across 12 languages.
Brief-details: A 2.5B parameter language model from IBM's Granite series, optimized for text generation with strong performance on reasoning and commonsense tasks; this is the GGUF-quantized release.
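A minimal sketch of serving a GGUF build like this locally with llama-cpp-python; the file name is a placeholder for whichever quantization level you download:

```python
from llama_cpp import Llama

# Point at a local GGUF file; "granite-2.5b.Q4_K_M.gguf" is a placeholder name.
llm = Llama(model_path="granite-2.5b.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Why does ice float on water?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```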
Brief-details: Quantized 8.17B parameter instruct model optimized for chat/instruction tasks, with strong multilingual support and solid benchmark results across reasoning and code tasks.
Brief-details: Specialized 8B parameter LLaMA-3 model fine-tuned for financial Q&A, optimized for RAG applications and distributed in GGUF quantized form.
Brief-details: Thai language BERT model fine-tuned for text classification, based on WangchanBERTa. 105M parameters, specialized for research paper classification and sentiment analysis.
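A minimal sketch of running a fine-tuned BERT classifier like this through the transformers pipeline; the model id is a placeholder:

```python
from transformers import pipeline

# "user/wangchanberta-paper-classifier" is a hypothetical model id.
classifier = pipeline("text-classification", model="user/wangchanberta-paper-classifier")

# Classify a Thai abstract snippet; returns the predicted label and score.
print(classifier("งานวิจัยนี้ศึกษาการเรียนรู้เชิงลึกสำหรับการประมวลผลภาษาไทย"))
```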
Brief-details: A 22B parameter language model fine-tuned on Mistral-Small-Instruct-2409, optimized for Claude 3-like prose quality with strong performance on IFEval (56.29%) and BBH tasks (35.55%).
Brief-details: A fine-tuned 3.2B parameter Llama model optimized with Unsloth for 2x faster training, focused on instruction-following and roleplay conversations using multiple datasets.
Brief-details: A 12.2B parameter LLM fine-tuned on Mistral-Nemo-Instruct, optimized for Claude 3-like prose quality with 33.93% IFEval accuracy and strong performance on complex tasks.
Brief-details: A 7.62B parameter GGUF model optimized for creative writing and roleplay, based on Qwen2.5-7B-Instruct with multi-language support (EN/ZH).
Brief-details: A lightweight 29.4M parameter LLaMA-based transformer model optimized for feature extraction, stored in BF16 precision.
Brief-details: A Tamil language speech recognition model based on the Whisper-small architecture with 242M parameters, achieving 43.32% WER on the Common Voice 11.0 dataset.
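A minimal sketch of transcribing Tamil audio with a Whisper fine-tune via the transformers ASR pipeline; the model id and audio path are placeholders:

```python
from transformers import pipeline

# "user/whisper-small-ta" is a hypothetical model id for the Tamil fine-tune.
asr = pipeline("automatic-speech-recognition", model="user/whisper-small-ta")

# Whisper works on ~30s windows; chunking handles longer recordings.
result = asr("tamil_speech.wav", chunk_length_s=30)
print(result["text"])
```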
Brief-details: A 14.8B parameter bilingual model fine-tuned on Chinese-Vietnamese parallel data for bidirectional translation, built on the Qwen2.5 architecture.
Brief-details: YOLO11 is Ultralytics' latest object detection model, featuring improved accuracy and versatile multi-task capabilities across detection, segmentation, and pose estimation.
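A minimal sketch of running YOLO11 detection with the ultralytics package; the image path is a placeholder, and the -seg/-pose weight variants cover the other tasks:

```python
from ultralytics import YOLO

# Pretrained YOLO11 nano detection weights; use yolo11n-seg.pt or
# yolo11n-pose.pt for segmentation or pose estimation.
model = YOLO("yolo11n.pt")

results = model("street_scene.jpg")  # placeholder image path
for box in results[0].boxes:
    print(model.names[int(box.cls)], float(box.conf))
```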
Brief-details: A 70B parameter LLM fine-tuned for high-quality summarization across 7 domains, outperforming comparable models in faithfulness, completeness, and conciseness.
Brief-details: Fine-tuned GLiNER model specialized in PII/PHI detection, achieving 0.91 accuracy and 0.95 F1 score. Ideal for privacy-compliant entity recognition.
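A minimal sketch of zero-shot PII tagging with the gliner package; the model id and label set are placeholders:

```python
from gliner import GLiNER

# "user/gliner-pii" is a hypothetical model id for the PII/PHI fine-tune.
model = GLiNER.from_pretrained("user/gliner-pii")

text = "Patient John Smith, DOB 04/12/1985, can be reached at 555-0199."
labels = ["person", "date of birth", "phone number"]

# GLiNER matches arbitrary label strings at inference time.
for ent in model.predict_entities(text, labels):
    print(ent["text"], "->", ent["label"])
```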
Brief-details: A 135M parameter bilingual (Arabic-English) sentence transformer model for semantic similarity, fine-tuned on cross-lingual STS tasks with strong Arabic performance (85.6% STS17 score).
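A minimal sketch of scoring cross-lingual similarity with sentence-transformers; the model id is a placeholder:

```python
from sentence_transformers import SentenceTransformer, util

# "user/arabic-english-sts" is a hypothetical model id.
model = SentenceTransformer("user/arabic-english-sts")

embeddings = model.encode(["الطقس جميل اليوم", "The weather is nice today"])

# Cosine similarity between the Arabic and English sentences.
print(util.cos_sim(embeddings[0], embeddings[1]))
```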
Brief-details: A massive 1.7T parameter LLaMA-based instruction model, notable for its ambitious scale and BF16 tensor format; deploying it pushes well past conventional hardware resource limits.
Brief-details: An 8B parameter LLaMA-3.1-based model optimized for creative writing and roleplay, featuring 128K context length and diverse training data.
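A minimal sketch of chatting with an instruction-tuned LLaMA fine-tune like this via transformers; the model id and prompts are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "user/llama-3.1-8b-roleplay" is a hypothetical model id.
model_id = "user/llama-3.1-8b-roleplay"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a seasoned ship captain narrating a voyage."},
    {"role": "user", "content": "Describe the first storm we hit."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=300, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```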
Brief-details: A 7B parameter LLM specialized in mathematical reasoning, achieving SOTA performance on MATH/GSM8K benchmarks through error-driven insights and hierarchical thought templates.
Brief-details: SummLlama3-70B: A 70B parameter summarization model fine-tuned from Llama3, optimized for human-preferred summaries across 7 domains.