BRIEF-DETAILS: RAG-optimized Mistral-7B variant achieving 96.5% accuracy on QA tasks. Specialized for business/legal document analysis with minimal hallucination.
Brief-details: A specialized FLUX-based model for generating realistic handwriting images, featuring both neat and messy styles with customizable ink colors and paper backgrounds. Uses the HWRIT trigger word.
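As a rough usage sketch, a FLUX LoRA like this is typically loaded on top of a base FLUX checkpoint and activated with its trigger word in the prompt. The repo id for the LoRA below is a placeholder, and FLUX.1-dev is assumed as the base model:

```python
import torch
from diffusers import FluxPipeline

# Base FLUX pipeline; FLUX.1-dev is assumed here as the base checkpoint.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the handwriting LoRA (placeholder repo id, not the actual model path).
pipe.load_lora_weights("your-username/flux-handwriting-lora")

# HWRIT is the trigger word; the rest of the prompt controls style, ink, and paper.
prompt = "HWRIT handwritten grocery list, messy style, blue ink on yellowed notebook paper"
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("handwriting.png")
```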
Brief-details: Extended context (16K) version of LLaMA 3 8B base model, fine-tuned on LongAlpaca dataset. Optimized for longer context processing with modified rope_theta.
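Extended-context fine-tunes like this usually ship the modified rope_theta in their config, but the mechanism can be sketched explicitly. A minimal sketch with transformers, assuming a placeholder repo id and an illustrative rope_theta value (not necessarily the one this fine-tune uses):

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

repo_id = "some-org/llama-3-8b-longalpaca-16k"  # placeholder, not the actual repo

config = AutoConfig.from_pretrained(repo_id)
config.max_position_embeddings = 16384  # extended 16K context window
config.rope_theta = 1_000_000.0         # illustrative value; raising the RoPE base extends positional coverage

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, config=config, device_map="auto")
```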
Brief-details: Quantized 12B parameter LLM with multiple GGUF variants optimized for different size/quality tradeoffs. Features imatrix quantization for improved performance.
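For any of these GGUF variants, local use looks the same: pick the quant file that fits your hardware and load it with a GGUF runtime. A minimal sketch with llama-cpp-python, with a placeholder filename:

```python
from llama_cpp import Llama

# Pick whichever quant file fits your hardware; Q4_K_M is a common middle ground.
llm = Llama(
    model_path="./model-Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,                        # context window to allocate
)

out = llm("Summarize the tradeoff between Q4_K_M and Q8_0 quantization.", max_tokens=128)
print(out["choices"][0]["text"])
```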
Brief Details: GGUF quantized version of Aurora-SCE-12B-v2 with multiple compression options ranging from 3.1GB to 10.2GB, optimized for different performance/size tradeoffs.
Brief Details: EtherealAurora-12B-GGUF is a quantized version of the original EtherealAurora model, offering multiple compression variants from 4.9GB to 13.1GB, optimized for efficient deployment.
BRIEF-DETAILS: A 12B parameter GGUF-optimized language model offering multiple quantization options, from lightweight 4.9GB to high-quality 13.1GB versions, suitable for local deployment.
BRIEF-DETAILS: Quantized coding-specialized model available in multiple GGUF formats (Q2-Q8), optimized for performance, with sizes ranging from 5.9GB to 15.8GB.
Brief Details: A merged 24B parameter LLM combining Mistral variants using DARE TIES method. Notable for merging Dolphin3.0, Cydonia, and Arcee-Blitz models with equal weighting.
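DARE TIES is a weight-space merge: each model's delta from the base is randomly sparsified and rescaled (DARE), then per-parameter sign conflicts are resolved before combining (TIES). The sketch below is a conceptual illustration on a single tensor, not the mergekit code used to build this merge, and its density and weights are illustrative:

```python
import torch

def dare_ties_merge(base, finetuned, density=0.5, weights=None):
    """Merge fine-tuned tensors into a base tensor with DARE sparsification
    followed by TIES-style sign election. Simplified for illustration."""
    weights = weights or [1.0 / len(finetuned)] * len(finetuned)  # equal weighting
    deltas = []
    for ft, w in zip(finetuned, weights):
        delta = ft - base
        # DARE: randomly drop (1 - density) of the delta entries, rescale the rest.
        mask = (torch.rand_like(delta) < density).float()
        deltas.append(w * mask * delta / density)
    stacked = torch.stack(deltas)
    # TIES: elect a sign per parameter and keep only agreeing contributions.
    elected_sign = torch.sign(stacked.sum(dim=0))
    agree = (torch.sign(stacked) == elected_sign).float()
    return base + (stacked * agree).sum(dim=0)
```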
Brief-details: GeoAI is a specialized model adapted from ArcGIS Living Atlas pre-trained models, focused on geographical and spatial AI applications, maintained by giswqs on HuggingFace.
Brief-details: A 70B parameter GGUF quantized language model offering multiple compression variants from 15.4GB to 58GB, optimized for different performance/quality tradeoffs.
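At this size, the main practical knob beyond quant choice is how much of the model to offload to the GPU. A minimal sketch with llama-cpp-python, assuming a placeholder filename and an arbitrary layer count:

```python
from llama_cpp import Llama

# For a 70B model, offload only as many layers as fit in VRAM; the rest stays on CPU.
llm = Llama(
    model_path="./model-70b-IQ3_XS.gguf",  # placeholder; a smaller quant that fits your RAM
    n_gpu_layers=40,                       # partial offload; tune to available GPU memory
    n_ctx=8192,
)
```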
Brief-details: Fine-tuned Phi-4-multimodal-instruct model specialized for Turkish ASR, achieving a word error rate (WER) of 64.76 after fine-tuning on 1,300 hours of Turkish audio.
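For reference, WER is scored as edit errors over reference words. A minimal sketch using the jiwer library, with made-up Turkish sentences rather than data from this model's evaluation:

```python
import jiwer

reference = "bugün hava çok güzel"   # made-up reference transcript
hypothesis = "bugün hava güzel"      # ASR output missing one word

# WER = (substitutions + deletions + insertions) / number of reference words
print(jiwer.wer(reference, hypothesis))
```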
BRIEF DETAILS: 70B parameter LLaMA 3.3-based merged model optimized for roleplay and creative writing. Features balanced intelligence and creativity with reduced censorship.
Brief-details: A quantized version of Josiefied-Qwen2.5-3B offering multiple compression options (Q2_K to f16) with optimized performance and quality trade-offs. Ideal for efficient deployment.
Brief-details: MEXMA-SigLIP2 combines MEXMA multilingual encoder with SigLIP2 image encoder, enabling CLIP capabilities across 80 languages with SOTA performance on Crossmodal-3600.
Brief Details: TriplaneTurbo is an AI model by ZhiyuanthePony hosted on HuggingFace, likely focused on 3D representation learning using a triplane architecture.
Brief Details: Uncensored 16B parameter LLM based on Moonlight, modified with the abliteration technique to remove refusal behaviors. Implements a transformer-based architecture.
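Abliteration (directional ablation) typically estimates a "refusal direction" from activation differences between harmful and harmless prompts and projects it out of weights that write into the residual stream. A conceptual sketch of that projection step only, not the exact recipe used for this model:

```python
import torch

def ablate_direction(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Remove the component along refusal_dir from a matrix that writes into
    the residual stream (rows indexed by the model dimension)."""
    v = refusal_dir / refusal_dir.norm()
    return weight - torch.outer(v, v) @ weight
```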
BRIEF-DETAILS: A 14B parameter Qwen2.5-based model created using TIES merge method, designed as a stable baseline component for further development and specialized use cases.
BRIEF DETAILS: Preview version of the fourth-generation YOYO series based on Qwen2.5, 14B parameters, optimized for performance, with a promised 1M token context in the final release.
Brief-details: Quantized version of Babel-83B-Chat with multiple compression options (26GB-74GB). Offers imatrix and static quantization variants optimized for different performance/size tradeoffs.
Brief-details: 12B parameter LLM with multiple GGUF quantization options (3.1GB-10.2GB), featuring imatrix and standard quantization variants optimized for different performance/size tradeoffs.