Brief-details: An AI model hosted by omar07ibrahim on Hugging Face, with limited public information available. Purpose and capabilities require further documentation.
Brief-details: A fine-tuned TinyLlama variant trained 2x faster with Unsloth and Hugging Face's TRL library, developed by omar07ibrahim under the Apache-2.0 license.
Brief-details: A TinyLlama variant fine-tuned with the Unsloth and TRL libraries, offering 2x faster training while retaining the LLaMA architecture's capabilities.
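The two entries above describe the standard Unsloth + TRL workflow. Below is a minimal sketch of that pattern; the base checkpoint, dataset, and hyperparameters are illustrative assumptions (not taken from these repos), and the `SFTTrainer` keyword arguments follow the older TRL releases used in Unsloth's published notebooks.

```python
# Minimal Unsloth + TRL fine-tuning sketch. Model ID, dataset, and
# hyperparameters are placeholders, not confirmed details of these repos.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-bnb-4bit",  # assumed base checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; Unsloth patches the model for its ~2x training speedup.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)

def to_text(ex):
    # Flatten instruction data into the single "text" field SFTTrainer expects.
    return {"text": f"### Instruction:\n{ex['instruction']}\n\n### Response:\n{ex['output']}"}

dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # kwarg style of older TRL versions
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```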
Brief-details: A variant of the Orca language model hosted on Hugging Face by omar07ibrahim, designed for conversational and general natural language processing tasks.
Brief-details: An Azerbaijani machine-translation model based on the NLLB (No Language Left Behind) architecture, developed by omar07ibrahim.
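For context on how an NLLB-based translation model like the one above is typically driven, here is a sketch using the public base NLLB checkpoint with transformers; the fine-tuned Azerbaijani repo itself isn't named, so `facebook/nllb-200-distilled-600M` stands in for it.

```python
# NLLB-style EN→AZ translation sketch; the fine-tuned checkpoint is not
# confirmed here, so the public base NLLB model is used as a stand-in.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-600M", src_lang="eng_Latn"
)
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")

inputs = tokenizer("Machine translation helps people communicate.", return_tensors="pt")
generated = model.generate(
    **inputs,
    # NLLB targets are selected via FLORES-200 codes; North Azerbaijani is azj_Latn.
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("azj_Latn"),
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```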
Brief-details: Tesslate's 32B-parameter model with multiple GGUF quantizations, offering flexible deployment options from 9 GB to 65 GB with varying quality/size tradeoffs.
Brief-details: A quantized 8B-parameter instruction-tuned LLM from Yandex, optimized for the GGUF format and featuring a custom dialogue template plus server and interactive modes.
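GGUF builds like the two entries above are usually run through the llama.cpp ecosystem. A minimal sketch with the llama-cpp-python bindings follows; the file name is a placeholder, and in practice the repo's own chat template should be applied.

```python
# Loading a GGUF quantization with llama-cpp-python. The file name is a
# placeholder; pick the quant size that fits your memory budget.
from llama_cpp import Llama

llm = Llama(
    model_path="model-Q4_K_M.gguf",  # assumed local quantized file
    n_ctx=8192,
    n_gpu_layers=-1,  # offload all layers to GPU when available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF in one sentence."}],
)
print(out["choices"][0]["message"]["content"])
```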
Brief-details: Qwen2.5-VL-3B is a versatile vision-language model offering advanced visual understanding, video processing, and agent capabilities in a compact 3B-parameter format.
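A sketch of driving Qwen2.5-VL with transformers, following the pattern from the official Qwen2.5-VL model cards; the image URL is a placeholder, and a recent transformers plus the qwen-vl-utils helper package are assumed.

```python
# Qwen2.5-VL inference sketch (pattern from the official model cards).
# The image URL is a placeholder; requires qwen-vl-utils.
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "https://example.com/photo.jpg"},  # placeholder
        {"type": "text", "text": "Describe this image."},
    ],
}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

generated = model.generate(**inputs, max_new_tokens=128)
# Trim the prompt tokens before decoding.
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```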
Brief-details: Video-R1-7B is a 7B-parameter multimodal large language model (MLLM) focused on video reasoning for enhanced video understanding.
Brief-details: A 32B-parameter vision-language model released in 4-bit quantization, featuring enhanced mathematical reasoning, video understanding, and structured-output capabilities.
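For 4-bit checkpoints like the one above, the usual transformers mechanism is a bitsandbytes quantization config. The sketch below shows that mechanism with a hypothetical repo ID; a vision-language model would normally be loaded through its dedicated class rather than `AutoModelForCausalLM`, but the `quantization_config` argument works the same way.

```python
# On-the-fly 4-bit (NF4) loading with bitsandbytes. The repo ID is
# hypothetical; the config mechanism is the standard transformers API.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,  # second quantization pass saves more memory
)

model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-32b-model",  # hypothetical repo ID
    quantization_config=bnb_config,
    device_map="auto",
)
```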
Brief-details: A sophisticated 70B-parameter LLaMA merge combining 20 specialized models, focused on uncensored output, intelligence, creative writing, and roleplay. Notable for its DARE TIES merge methodology.
Brief-details: An INT4-quantized version of the Gemma 3 27B instruction-tuned model, offering efficient deployment while maintaining strong performance on reasoning, STEM, and multilingual tasks.
Brief-details: A specialized 12B-parameter variant of Google's Gemma designed for unbiased information retrieval, featuring reduced refusal mechanisms and neutral response protocols.
Brief-details: A large Chinese embedding model built on Stella, trained on 100M+ samples with hard-negative sampling and LLM-based data synthesis. Optimized for retrieval, classification, and clustering tasks.
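Embedding models like the Stella-based entry above are typically used through sentence-transformers. A short retrieval-style sketch follows; the model ID and example texts are placeholders.

```python
# Retrieval-style usage with sentence-transformers; the model ID is a
# placeholder for the Stella-based Chinese embedding checkpoint.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("some-org/chinese-stella-embedding")  # hypothetical ID

queries = ["如何训练嵌入模型?"]  # "How are embedding models trained?"
docs = [
    "嵌入模型通常通过对比学习训练。",  # "Embedding models are usually trained with contrastive learning."
    "今天的天气很好。",                # "The weather is nice today." (distractor)
]

q_emb = model.encode(queries, normalize_embeddings=True)
d_emb = model.encode(docs, normalize_embeddings=True)

# With normalized vectors, cosine similarity reduces to a dot product.
scores = q_emb @ d_emb.T
print(scores)  # the relevant document should score highest
```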
Brief-details: A LoRA model for text-to-image generation using the Flux architecture. Requires the TOK trigger word. Built on Replicate's flux-dev-lora-trainer.
Brief-details: An 8B-parameter Llama-based language model fine-tuned for reasoning tasks, developed by SciMaker with a focus on Taiwan-specific applications.
Brief-details: A LoRA model trained with Replicate's Flux trainer, designed for image generation via the diffusers library. Uses TOK as its trigger word and requires CUDA support.
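Both Flux LoRA entries above follow the same usage pattern: load the base Flux pipeline, attach the LoRA weights, and include the TOK trigger word in the prompt. A sketch with diffusers follows; the LoRA repo ID is a placeholder.

```python
# Applying a Flux LoRA with diffusers. The LoRA repo ID is hypothetical;
# TOK is the trigger word both entries mention.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")  # these trainers assume a CUDA device

pipe.load_lora_weights("some-user/some-flux-lora")  # hypothetical LoRA repo

image = pipe(
    "a photo of TOK in a sunlit forest",  # prompt must include the trigger word
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("output.png")
```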
Brief-details: Infinity is a groundbreaking bitwise autoregressive model for high-resolution image generation, offering superior quality to SD3/SDXL with a 0.8 s generation time at 1024x1024.
Brief-details: The first Azerbaijani-focused LLM (7B parameters), based on LLaMA, achieving a 36.7 BLEU score for EN→AZ translation with enhanced fluency and coherence.
Brief-details: A collection of LoRA models for Wan2.1-T2V that enhance video generation with aesthetic tuning, speed control, high-resolution fixes, and extended-duration support.
Brief-details: Qwen2.5-VL-7B-Instruct is a vision-language model featuring dynamic-resolution processing and enhanced visual understanding, supporting analysis of videos over an hour long, with GGUF quantization options available.