Brief-details: A LoRA fine-tuned version of Mochi-1 preview model specialized for text-to-video generation, focusing on black and white animated scenes with character interactions.
Brief-details: OpenHermes 2.5 Mistral (7B params) fine-tuned for BOLA Karate, offering multiple GGUF quantization variants optimized for different performance/quality trade-offs.
Brief-details: A 12.5B parameter GGUF-quantized language model with 32k context window, offering multiple quantization options from 4.7GB to 13.4GB for efficient deployment.
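A 32k context window matters for deployment planning because the KV cache grows linearly with sequence length. A minimal sketch of the standard estimate (2 tensors, keys and values, per layer and position); the layer count, KV-head count, and head dimension below are hypothetical illustration values, not this model's actual architecture:

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_elem: int = 2) -> int:
    """Estimate KV-cache size: keys and values stored for every layer
    and every position, at bytes_per_elem precision (2 = fp16)."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical 40-layer model with 8 KV heads of dimension 128,
# fp16 cache, filled to the full 32k context:
print(kv_cache_bytes(40, 8, 128, 32768) / 2**30)  # → 5.0 (GiB)
```

This memory is needed on top of the model weights themselves, which is why long-context use can dominate the footprint of even a heavily quantized model.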
Brief-details: Quantized 8B parameter LLM specialized in scientific literature synthesis, based on Llama-3.1. Apache 2.0 licensed; supports English-language tasks.
Brief-details: Quantized 14.8B parameter Qwen2.5 model optimized for llama.cpp, supporting both Chinese and English, with uncensored instruction-following capabilities and Q4_K_M precision.
Brief-details: Qwen2.5-7B-Instruct-Uncensored GGUF model - a 7.62B parameter bilingual (Chinese/English) uncensored instruction model optimized for llama.cpp deployment.
Brief-details: A 72.7B parameter GGUF quantized language model offering multiple compression variants from 22.8GB to 64.4GB, optimized for efficient deployment and inference.
Brief-details: 72B parameter GGUF-formatted language model with multiple quantization options (Q2-Q8) optimized for efficient deployment. Features comprehensive documentation and compatibility with major LLM frameworks.
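The spread between a repo's smallest and largest GGUF files follows roughly from bits-per-weight: file size ≈ parameters × bpw / 8. A rough sketch; the bpw figures below are ballpark community estimates for common quant types, and real files differ somewhat due to mixed tensor precisions and metadata:

```python
# Approximate bits-per-weight for common GGUF quant types (illustrative
# ballpark values; exact sizes vary with tensor layout and metadata).
BPW = {"Q2_K": 2.5, "Q4_K_M": 4.85, "Q6_K": 6.56, "Q8_0": 8.5}

def gguf_size_gb(n_params: float, quant: str) -> float:
    """Rough GGUF file size in GB: parameters * bits-per-weight / 8."""
    return n_params * BPW[quant] / 8 / 1e9

# For a 72.7B-parameter model:
for q in BPW:
    print(q, round(gguf_size_gb(72.7e9, q), 1))  # Q2_K gives ≈ 22.7 GB
```

The Q2_K estimate (~22.7 GB) lands close to the 22.8 GB low end quoted above, which is a useful sanity check when deciding which variant fits a given disk or VRAM budget.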
Brief-details: OpenDiffusion is a 768x768 safetensors text-to-image model based on Stable Diffusion, optimized for high-quality digital art and concept illustrations, released under a Creative Commons license.
Brief-details: Qwen2.5-14B quantized model optimized for instruction-following tasks. A 14.8B-parameter model offered in multiple GGUF variants, ideal for efficient deployment.
Brief-details: 7.62B parameter GGUF quantized language model optimized for conversation, featuring multiple compression variants from 2GB to 6.4GB for different performance needs.
Brief-details: Qwen2.5-7B-Instruct variant optimized for GGUF format with multiple quantization options, offering flexible deployment at sizes from 3.1GB to 15.3GB.
Brief-details: 8B parameter GGUF quantized language model with multiple compression variants (2.1GB-6.7GB), optimized for efficient inference and deployment.
Brief-details: 8B parameter Llama-3.1 model optimized with iMatrix quantization, offering multiple compression variants from 2.1GB to 6.7GB for efficient deployment.
Brief-details: Quantized 8B parameter conversational AI model with multiple GGUF variants optimized for different performance/size trade-offs, based on Tulu-3.1.
Brief-details: A 7.62B parameter GGUF-quantized language model optimized for conversational tasks, offering multiple quantization options from 3.1GB to 15.3GB file sizes.
Brief-details: A 7.24B parameter GGUF-optimized language model offering multiple quantization options from 2.8GB to 14.6GB, best suited for efficient deployment and conversational AI tasks.
Brief-details: An 8B parameter GGUF-quantized LLaMA 3.1 model based on Tulu-3, offering multiple quantization options from 3.3GB to 16.2GB with uncensored capabilities.
Brief-details: EVA-Tissint v1.2 14B GGUF is a quantized language model with 14.8B parameters, offering multiple compression variants optimized for different performance/quality trade-offs.
Brief-details: WestKunai-Hermes is a 10.7B parameter GGUF-quantized language model offering multiple compression variants for efficient deployment.
Brief-details: EVA-Tissint v1.2 is a 14.8B parameter GGUF-quantized language model offering multiple compression variants, optimized for conversation and general text tasks.