Brief-details: 8B parameter GGUF-quantized language model with a 1024k context window, offering multiple quantization variants optimized for different performance/quality tradeoffs.
Brief-details: A 7.62B parameter GGUF quantized model offering multiple compression variants, optimized for efficient deployment with imatrix quantization techniques.
Brief-details: A 12B parameter Mistral-based model with multiple GGUF-quantized versions optimized for different performance/quality tradeoffs and hardware configurations.
Brief-details: 8B parameter LLaMA-based model with multiple GGUF quantized versions, optimized for efficient deployment and memory usage, featuring imatrix quantization.
Brief-details: 8B parameter GGUF model optimized for efficient inference with multiple quantization options, featuring imatrix variants for enhanced performance and compression ratios.
Brief-details: 8B parameter GGUF quantized LLaMA model with multiple compression variants, optimized for efficient deployment and conversation tasks.
Brief-details: An 8B parameter GGUF-quantized language model based on Llama 3.1 Tulu, optimized for conversational tasks with multiple quantization options for different performance needs.
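The GGUF variants above all rely on block quantization: weights are split into small blocks, each stored as low-bit integer codes plus a per-block scale. A simplified numpy sketch of symmetric 4-bit block quantization (illustrative only; the actual GGML/llama.cpp kernels use different layouts and, for imatrix variants, importance-weighted scales):

```python
import numpy as np

def quantize_q4_blocks(w, block=32):
    """Symmetric 4-bit block quantization: per-block scale + int4 codes."""
    w = w.reshape(-1, block)
    # One scale per block, mapping the largest magnitude to code 7.
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid divide-by-zero for all-zero blocks
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct float weights from codes and per-block scales."""
    return (q.astype(np.float32) * scale).ravel()

x = np.linspace(-1.0, 1.0, 64, dtype=np.float32)
q, s = quantize_q4_blocks(x)
x_hat = dequantize(q, s)
max_err = np.abs(x - x_hat).max()  # bounded by half a quantization step
```

Storing only an int4 code per weight plus one scale per 32-weight block is what drives the memory savings these quantized variants advertise.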
Brief-details: An 8B parameter Llama-3.1 variant optimized for conversational AI, offering multiple GGUF quantization options for efficient deployment.
Brief-details: Specialized 1.54B parameter LLM fine-tuned from Qwen2-1.5B for role-play scenarios. Features efficient dialogue generation with a 32K context window.
Brief-details: AI Rolx is a FLUX-based LoRA text-to-image model trained on Replicate, featuring the custom trigger word 'ROLX' and a non-commercial license.
Brief-details: EVA-Tissint-v1.2-14B is a 14.8B parameter merged language model combining EVA v0.2 and Tissint v1.2 via the della_linear merge method, for enhanced text generation capabilities.
Brief-details: Intelligence-7 is a 7.62B parameter merged model combining Marco-o1 and Qwen2.5-7b using the SLERP method, optimized for text generation and conversation tasks.
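SLERP merging interpolates along the arc between two weight tensors rather than along the straight line, which preserves the magnitude structure of the weights. A minimal numpy sketch on flattened vectors (simplified; mergekit applies this per-tensor with its own edge-case handling):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flat weight vectors."""
    # Normalize copies to measure the angle between the two directions.
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(u0, u1), -1.0, 1.0)
    omega = np.arccos(dot)           # angle between the vectors
    if omega < eps:                  # nearly parallel: fall back to lerp
        return (1.0 - t) * v0 + t * v1
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
mid = slerp(0.5, a, b)  # stays on the unit arc between a and b
```

At t=0 and t=1 the function returns the original endpoints exactly, so a merge at intermediate t blends the two parent models smoothly.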
Brief-details: A text-to-image LoRA model built on FLUX.1-dev, featuring specialized image generation capabilities triggered by the 'AIWALLBIT' prompt, with a non-commercial license.
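A LoRA adapter like the ones above leaves the base model's weights frozen and learns a low-rank update, W' = W + (alpha/r) * B @ A. A numpy sketch with hypothetical layer and rank sizes (the real adapters apply this per attention/MLP projection inside FLUX):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 8, 16  # hypothetical layer dims, rank, scaling

W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection (zero init)

# Effective weight after merging the adapter, as when a LoRA is "baked in".
W_merged = W + (alpha / r) * (B @ A)
```

Because the update B @ A has rank at most r, the adapter file stores only d_out*r + r*d_in parameters per layer instead of d_out*d_in, which is why LoRA checkpoints are so small.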
Brief-details: 8B parameter GGUF-quantized language model with multiple compression variants, optimized for efficient deployment and conversational tasks.
Brief-details: A specialized LoRA model for FLUX.1-dev that transforms realistic photos into chibi-style cartoon characters, featuring unique style transfer capabilities
Brief-details: A quantized 8B parameter Llama-3.1 model fine-tuned on the Tulu-3 instruction dataset, optimized for math, reasoning, and chat tasks.
Brief-details: A 3.61B parameter GGUF-quantized LLaMA model fine-tuned on NVIDIA ChatQA data, optimized for conversation and instruction-following with a 1024-token context.
Brief-details: A Diffusers-based model in Safetensors format. Documentation is limited, but it appears focused on image generation/manipulation tasks. Downloads: 294.
Brief-details: Vietnamese bidding law Q&A model with 226M params, built on vit5-base. Achieves a ROUGE-1 score of 75.09. Specializes in legal-domain QA.
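The ROUGE-1 score reported above measures unigram overlap between a generated answer and a reference. A stdlib sketch of the F1 variant (illustrative; the model card may use a specific library with its own tokenization):

```python
from collections import Counter

def rouge1_f1(reference, candidate):
    """ROUGE-1 F1: clipped unigram overlap between reference and candidate."""
    ref = Counter(reference.split())
    cand = Counter(candidate.split())
    overlap = sum((ref & cand).values())  # per-word counts clipped to the minimum
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the law requires a bid bond", "a bid bond is required by law")
```

Here 4 of the 7 candidate unigrams and 4 of the 6 reference unigrams match, giving precision 4/7, recall 4/6, and F1 = 16/26 ≈ 0.615.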
Brief-details: Visual auto-regressive model with a collaborative decoding strategy that achieves a 1.7x speedup and 50% memory reduction while maintaining image quality for efficient generation.
Brief-details: A 7.62B parameter Qwen-based merged model combining multiple LLMs using the TIES method, optimized for text generation and conversational tasks in FP16 format.
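TIES merging works on task vectors (each fine-tune minus the shared base): trim each vector to its highest-magnitude entries, elect a sign per parameter, then average only the values that agree with the elected sign. A simplified numpy sketch (not mergekit's implementation; the sign election here uses the summed signed mass as a proxy for the paper's per-sign magnitude totals):

```python
import numpy as np

def ties_merge(base, finetuned_models, density=0.5):
    """TIES-style merge: trim, elect sign, disjoint mean (simplified sketch)."""
    deltas = [m - base for m in finetuned_models]  # task vectors
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.size))          # keep top-k by magnitude
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    stacked = np.stack(trimmed)
    # Elect a sign per parameter; ties/zeros default to positive.
    sign = np.sign(np.sum(stacked, axis=0))
    sign = np.where(sign == 0, 1.0, sign)
    # Average only the trimmed entries whose sign agrees with the elected one.
    agree = (np.sign(stacked) == sign) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0) / counts
    return base + merged_delta

base = np.zeros(4)
m1 = np.array([1.0, -1.0, 0.1, 0.0])
m2 = np.array([1.0, 1.0, 0.0, 0.2])
merged = ties_merge(base, [m1, m2], density=0.5)
```

The trim step drops small, likely-noisy updates, and the sign election prevents conflicting fine-tunes from cancelling each other out, which is the failure mode of plain weight averaging.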