Brief-details: A specialized LoRA trained on Ghibli-style art, optimized for generating Studio Ghibli-inspired scenes, with a network dimension (rank) of 64 and an alpha of 32.
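For context, a style LoRA like this is usually applied by loading it on top of a base diffusion pipeline. The sketch below is a minimal example with diffusers; the SDXL base model, repository id, and prompt are assumptions, since the entry does not specify them.

```python
# Minimal sketch: applying a style LoRA in diffusers.
# The base pipeline and the LoRA repo id below are placeholders.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumed base model
    torch_dtype=torch.float16,
).to("cuda")

# load_lora_weights accepts a Hub repo id or a local .safetensors file
pipe.load_lora_weights("your-username/ghibli-style-lora")  # hypothetical repo id
pipe.fuse_lora(lora_scale=0.8)  # optional: bake the LoRA in at a chosen strength

image = pipe(
    "a quiet hillside village at dusk, ghibli style",
    num_inference_steps=30,
).images[0]
image.save("ghibli_scene.png")
```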
Brief-details: YandexGPT-5-Lite-8B-instruct is a Russian-language model with 8B parameters and a 32k context length, post-trained with SFT and RLHF and optimized for instruction-following tasks.
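A minimal sketch of prompting an instruction-tuned model like this through the transformers chat-template API; the Hub repo id is an assumption (replace it with the actual model id), and it presumes the tokenizer ships a chat template.

```python
# Minimal sketch: chat-style generation with an instruction-tuned causal LM.
# The repo id below is an assumption, not confirmed by the entry above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yandex/YandexGPT-5-Lite-8B-instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map needs accelerate
)

messages = [{"role": "user", "content": "Кратко объясни, что такое RLHF."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```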
Brief Details: ZenCtrl Tools - An AI agent system for automated visual content creation, providing preprocessing, control, and post-processing capabilities for image generation and editing.
BRIEF-DETAILS: 14B parameter video control model supporting multiple conditions (Canny, Depth, Pose, MLSD) with multi-resolution capabilities and trajectory control at 16 fps.
Brief Details: TxGemma 27B - Google's 27B parameter chat model for healthcare applications; access requires accepting the Health AI Developer Foundations agreement.
Brief-details: Qwen2.5-VL-32B-Instruct-AWQ is an AWQ-quantized vision-language model with strengthened mathematical and reasoning capabilities, able to process both images and video for visual-understanding tasks.
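A hedged sketch of single-image question answering with a Qwen2.5-VL checkpoint through transformers, following the usual Qwen2.5-VL loading pattern; it assumes a recent transformers release with Qwen2.5-VL support, the qwen-vl-utils helper package, and (for AWQ checkpoints) the autoawq kernels. The image URL is a placeholder.

```python
# Minimal sketch: image Q&A with Qwen2.5-VL via transformers.
# Assumes transformers with Qwen2.5-VL support, qwen-vl-utils, and autoawq for AWQ weights.
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "Qwen/Qwen2.5-VL-32B-Instruct-AWQ"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "https://example.com/figure.png"},  # placeholder URL
        {"type": "text", "text": "What does this chart show?"},
    ],
}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

generated = model.generate(**inputs, max_new_tokens=256)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```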
Brief Details: BERT base uncased model with 110M params, pretrained on English text with masked language modeling (MLM) and next sentence prediction (NSP) objectives. Ideal for fine-tuning on downstream NLP tasks.
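Because BERT is pretrained with MLM, it can be probed directly with the transformers fill-mask pipeline before any fine-tuning; a minimal example:

```python
# Minimal sketch: masked-token prediction with bert-base-uncased.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
# Each candidate is a dict with the predicted token and its probability.
for candidate in unmasker("The capital of France is [MASK]."):
    print(f"{candidate['token_str']:>10}  {candidate['score']:.3f}")
```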
Brief-details: A 3D shape modeling framework for high-resolution mesh reconstruction at grid resolutions up to 1024³, using SparseFlex for efficient computation and preservation of sharp features.
Brief-details: A 14B parameter text-to-video and image-to-video generation model from Alibaba PAI, supporting multi-resolution training and first/last-frame prediction.
Brief Details: A 49B parameter Llama-based model converted to GGUF format with Q6_K quantization for efficient inference with llama.cpp.
Brief Details: Quantized 27B parameter Gemma model converted to GGUF format, optimized for llama.cpp deployment with Q4_K_M precision, focused on efficient local inference.
Brief-details: QwQ-R1984-32B quantized to 8-bit GGUF format and packaged for llama.cpp deployment, enabling efficient local inference of the 32B parameter model.
Brief Details: A 32B parameter reasoning model with uncensored responses and web-search integration, built on the Qwen series with an 8K context window.
Brief Details: A quantized 32B parameter LLM optimized for llama.cpp, converted from Qwen/QwQ-32B to GGUF format for efficient local deployment.
Brief-details: Qwen2.5-VL-32B-Instruct converted to GGUF format for efficient local deployment via llama.cpp, quantized to Q8 for vision-language tasks.
BRIEF-DETAILS: A GGUF-formatted 32B parameter language model converted from Qwen/QwQ-32B, optimized for local deployment using llama.cpp with Q8_0 quantization.
Brief-details: A 49B parameter Llama-3 variant converted to GGUF format, optimized for efficient local deployment using llama.cpp, featuring Q4_K_M quantization.
Brief Details: A quantized GGUF version of Mistral's 24B instruction model, optimized for local deployment via llama.cpp with Q4_K_M quantization.
Brief-details: Quantized 24B parameter Mistral instruction model optimized for llama.cpp, converted to GGUF format for efficient local deployment and inference. Suitable for both CLI and server applications.
BRIEF-DETAILS: Quantized 24B parameter Mistral instruction model converted to GGUF format, optimized for local deployment via llama.cpp with Q8 precision.
BRIEF-DETAILS: A 27B parameter Gemma model converted from VIDraft/Gemma-3-R1984-27B to GGUF format with 6-bit quantization, optimized for llama.cpp deployment.
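All of the GGUF conversions listed above are consumed the same way once the quantized file is on disk. A minimal sketch using the llama-cpp-python bindings to llama.cpp; the file path, context size, and GPU-offload setting are placeholders, not values taken from any specific entry.

```python
# Minimal sketch: running a GGUF quantization locally via llama-cpp-python.
# The model path and settings below are placeholders; any of the GGUF
# conversions listed above follows the same pattern.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/QwQ-32B-Q8_0.gguf",  # hypothetical local file
    n_ctx=8192,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers if a GPU-enabled build is installed
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization is."}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```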