Brief-details: Vietnamese bidding-law question-answering model achieving 88.3% exact-match accuracy. Built on vi-mrc-large and trained on 5.3K samples; intended for Vietnamese legal-domain queries.
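As a usage sketch: an extractive QA model like this is typically queried through the transformers question-answering pipeline. The repository id and the context string below are placeholders, not the actual model path or legal text.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual fine-tuned checkpoint.
qa = pipeline("question-answering", model="your-org/vi-bidding-law-qa")

result = qa(
    question="Thời gian đánh giá hồ sơ dự thầu tối đa là bao lâu?",  # "What is the maximum bid-evaluation period?"
    context="(dán đoạn trích Luật Đấu thầu có liên quan vào đây)",    # paste the relevant excerpt of the bidding law here
)
print(result["answer"], result["score"])
```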
Brief-details: A 13.3B parameter language model built with mergekit by applying the passthrough merge method to L3-Umbral-Mind-RP-v3.0-8B with overlapping layer ranges, stacking duplicated layers to grow the 8B base into a deeper model.
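For reference, a passthrough frankenmerge of this kind is usually described by a short mergekit recipe that stacks overlapping layer ranges from the same base model. This is a sketch only: the layer indices, file names, and output path are illustrative, not the model's published config.

```python
# Sketch of a mergekit passthrough recipe that repeats overlapping layer ranges
# from one 8B model to build a deeper frankenmerge. Layer indices are illustrative.
config = """\
slices:
  - sources:
      - model: L3-Umbral-Mind-RP-v3.0-8B   # local path or Hub id of the base model
        layer_range: [0, 24]
  - sources:
      - model: L3-Umbral-Mind-RP-v3.0-8B
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
"""

with open("passthrough.yaml", "w") as f:
    f.write(config)

# Then run from a shell:
#   mergekit-yaml passthrough.yaml ./merged-model
```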
Brief-details: A 70B parameter language model offered in multiple quantizations, optimized for conversational tasks and general text generation, with strong quality-to-size trade-offs across the quantization levels.
Brief-details: GGUF-quantized 8B parameter Llama 3.1 Tulu model optimized for conversational tasks, with multiple quantization levels to suit different hardware configurations.
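A minimal sketch of running a GGUF build like this locally with llama-cpp-python; the file name and quantization level are placeholders to be chosen for your hardware.

```python
from llama_cpp import Llama

# Placeholder file name -- pick the quantization level (e.g. Q4_K_M) that fits your machine.
llm = Llama(
    model_path="Llama-3.1-Tulu-3-8B-Q4_K_M.gguf",
    n_ctx=8192,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization trades off."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```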
Brief-details: An 8B parameter LLaMA-based merge combining crypto, finance, and coding models, built with the Model Stock merge method.
Brief-details: A LoRA model for FLUX.1-dev that generates consistent multi-view images of scenes from different viewpoints, specializing in two-view scene generation with high spatial coherence.
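A hedged example of how a FLUX.1-dev LoRA such as this is typically applied with diffusers; the LoRA repository id and the prompt are placeholders.

```python
import torch
from diffusers import FluxPipeline

# Base model is public; the LoRA repo id below is a placeholder.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("your-org/flux-two-view-lora")
pipe.enable_model_cpu_offload()  # trades speed for lower VRAM use

image = pipe(
    "two views of the same mountain cabin, left: front elevation, right: side elevation",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("two_view.png")
```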
Brief-details: An 8B parameter LLM created by SLERP-merging Llama3-Unholy-8B-OAS and L3-Dark-Planet-8B, intended for text generation and conversational use.
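For context, a SLERP merge of two same-architecture 8B models is normally expressed as a small mergekit recipe with an interpolation factor t; the recipe below is a sketch under that assumption, not the model's published configuration.

```python
# Sketch of a mergekit SLERP recipe blending two same-architecture 8B models;
# model names are local paths or Hub ids, and t is illustrative.
config = """\
slices:
  - sources:
      - model: Llama3-Unholy-8B-OAS
        layer_range: [0, 32]
      - model: L3-Dark-Planet-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: Llama3-Unholy-8B-OAS
parameters:
  t: 0.5          # 0 = all base model, 1 = all second model
dtype: bfloat16
"""

with open("slerp.yaml", "w") as f:
    f.write(config)

# Then run from a shell:
#   mergekit-yaml slerp.yaml ./merged-8b
```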
Brief-details: txtai embeddings index for searching the Hugging Face Posts dataset; enables semantic search and analysis of HF community discussions. Apache 2.0 license.
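A short sketch of loading and querying a txtai index published on the Hub; the container id here is illustrative and should point at the actual index repository.

```python
from txtai import Embeddings

embeddings = Embeddings()
# Container id is illustrative -- use the published index repo.
embeddings.load(provider="huggingface-hub", container="neuml/txtai-hfposts")

# Semantic search over the indexed Hugging Face community posts
for result in embeddings.search("multimodal model releases", 5):
    print(result)
```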
Brief-details: 14B parameter LLM optimized for creative text generation with 128k context window, built on SuperNova-Medius, featuring enhanced stability and reduced censorship.
Brief-details: A 20.4B parameter MLX-optimized model for text generation and function calling, quantized to 8-bit precision and trained with RLHF, based on Qwen2.
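A minimal generation sketch with mlx-lm (Apple silicon only); the repository id is a placeholder for the 8-bit MLX conversion described above.

```python
from mlx_lm import load, generate

# Placeholder repo id for the 8-bit MLX conversion; requires Apple silicon.
model, tokenizer = load("your-org/qwen2-20b-function-calling-8bit-mlx")

response = generate(
    model,
    tokenizer,
    prompt="List three practical uses of LLM function calling.",
    max_tokens=200,
)
print(response)
```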
Brief-details: LABahasa 11B, an 11.4B parameter multimodal LLM combining vision, audio, and text processing, optimized for Indonesian and English language tasks.
Brief-details: A 22B parameter LLM created by SLERP-merging Cydonia-22B-v1.3 and Magnum-v4-22b, aimed at improving conversational ability.
Brief-details: A 12.2B parameter merged model combining MN-Slush and NemoMix Unleashed, built on Mistral-Nemo architecture with BF16 precision for enhanced text generation and conversational tasks.
Brief-details: Turkish language model based on Gemma-2-9B, fine-tuned with the ORPO (Odds Ratio Preference Optimization) technique; 9.24B parameters, optimized for conversational AI and text generation.
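For reference, ORPO fine-tuning of this kind is commonly run with TRL's ORPOTrainer on a prompt/chosen/rejected preference dataset. A minimal sketch follows; the dataset id and hyperparameters are illustrative assumptions, not the model's actual training recipe.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_id = "google/gemma-2-9b"   # base model named in the card
# Hypothetical preference dataset with "prompt", "chosen", "rejected" columns.
dataset = load_dataset("your-org/turkish-preference-pairs", split="train")

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

args = ORPOConfig(
    output_dir="gemma2-9b-tr-orpo",
    beta=0.1,                        # weight of the odds-ratio preference term
    learning_rate=8e-6,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,      # older TRL releases take tokenizer= instead
)
trainer.train()
```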
Brief-details: MartyFLUX is a text-to-image LoRA built on FLUX.1-dev, generating customized images activated by the "Marty" trigger word.
Brief-details: 0tak1_v1 is a specialized text-to-image LoRA model built on FLUX.1-dev, offering custom image generation capabilities with specific trigger words.
Brief-details: Healthcare intent-classification model based on Llama-3.2-1B (roughly 1.2B parameters), supporting 11 predefined intents across multiple languages.
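Assuming the checkpoint ships a sequence-classification head (an assumption, not stated above), it could be queried as below; the repository id is a placeholder.

```python
from transformers import pipeline

# Placeholder repo id; assumes the checkpoint exposes a sequence-classification head.
classifier = pipeline(
    "text-classification",
    model="your-org/healthcare-intent-llama-3.2-1b",
)

preds = classifier("I need to reschedule my appointment with the cardiologist.")
print(preds)   # list of {"label": ..., "score": ...} dicts, one per input
```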
Brief-details: XenoGASM-MK2 is a refined text-to-image diffusion model with improved compatibility, featuring a baked-in 840K VAE for enhanced detail and creative artwork synthesis.
Brief-details: A specialized LoRA model trained on drone imagery for wildlife detection, supporting 20 species including birds and mammals. Built on FLUX.1-dev base model for UAV applications.
Brief-details: 123B parameter LLM with multilingual support and a 32K context window, provided as an EXL2 quantization sized for 48GB VRAM systems. Research-only license.
Brief-details: A quantized 8.03B parameter LLaMA model optimized for mesh generation, converted to GGUF format for efficient local deployment with llama.cpp.