Brief-details: Reflection Llama-3.1 70B - a powerful 70.6B parameter LLM with reflection-tuning for self-correcting reasoning
Brief-details: A 123B parameter language model fine-tuned from Mistral-Large-Instruct, optimized for Claude-like prose quality with multi-language support and a custom training methodology.
Brief-details: Large-scale Vision Transformer model (315M params) for image tagging, specialized in anime/manga content with support for ratings, characters, and general tags. Apache 2.0 licensed.
Brief-details: FLUX-AestheticAnime is a 16-rank LoRA model trained on Ghibli retro anime aesthetics, built on FLUX.1-dev base model for text-to-image generation
Brief-details: ControlNet Depth SDXL model supporting Zoe and MiDaS depth detection, enabling precise depth-aware image generation with the SDXL base model
Brief-details: A bilingual Japanese-English LLaMA 3 variant with 8B parameters, trained on 22B tokens. Optimized for Japanese language tasks while maintaining English capabilities.
Brief-details: A 70B parameter SLERP-merged LLM combining Miqu and Midnight Rose models, optimized for roleplaying and creative writing with 32K+ context support.
Brief-details: A specialized diffusion model for generating backgrounds around salient objects while preventing object expansion, ideal for e-commerce and photo editing tasks.
Brief-details: A compact 1.1B parameter LLM based on the Dolphin 2.8 dataset, optimized for text generation with enhanced capabilities in creative and instructional tasks.
Brief-details: A powerful 70B parameter code generation model optimized for instruction-following and programming tasks, available in multiple GGUF quantizations for efficient deployment
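The multiple GGUF quantizations mentioned above trade weight precision for memory. A rough back-of-envelope sketch of the weight-only footprint for a 70B model (the bits-per-weight figures are approximate community estimates for common GGUF types, not exact format specifications, and KV cache/activation memory is ignored):

```python
# Approximate weight-only memory for a 70B-parameter model at
# common GGUF quantization levels. Bits-per-weight values are
# rough estimates; real GGUF files add per-block metadata overhead.
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Convert parameter count and bits-per-weight to gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

N = 70e9
for name, bits in [("FP16", 16.0), ("Q8_0", 8.5), ("Q4_K_M", 4.85), ("Q2_K", 2.6)]:
    print(f"{name:>7}: ~{weight_memory_gb(N, bits):.0f} GB")
```

At FP16 the weights alone need roughly 140 GB, while a 4-bit-class quantization brings that down to the range of a single high-memory GPU, which is why quantized builds matter for deployment.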
Brief-details: A specialized LoRA model for Stable Cascade that transforms images into pixel art style, optimized for 2048x2048 outputs with best results using img2img from 1024x1024 samples.
Brief-details: Powerful 46.7B parameter Mixtral-based model optimized with DPO, featuring ChatML format support and strong performance across benchmarks
Brief-details: Windows builds of OpenAI's Triton compiler for Python 3.10-3.12, supporting CUDA 12.x and bringing Triton's optimized GPU kernel compilation to Windows platforms
Brief-details: OPEN-SOLAR-KO-10.7B is a Korean-English bilingual LLM with 10.7B parameters, built on the SOLAR architecture with expanded Korean vocabulary and public datasets.
Brief-details: A unique 24.2B parameter MoE model combining multiple Mistral-7B variants into a "clown car" architecture with BF16 precision
Brief-details: A 122B parameter language model created by interleaving layers of lzlv_70b, optimized for instruction following with NSFW capabilities. Built on the Llama 2 architecture.
Brief-details: A 70B parameter GPTQ-quantized Llama 2 model fine-tuned on GPT-4 data, offering uncensored responses with multiple quantization options for efficient deployment
Brief-details: Intel's 7B parameter LLM fine-tuned on the MetaMathQA dataset, optimized for math and general tasks. Features a 68.29 average benchmark score and an 8192-token context window.
Brief-details: WizardLM-13B-V1.2 GGML quantized model - Llama 2-based instruction-tuned LLM with 13B parameters, strong performance on benchmarks like MT-Bench (7.06) and AlpacaEval (89.17%)
Brief-details: A 70B parameter Llama-2 model fine-tuned with the Guanaco dataset using QLoRA, optimized for fp16 precision. Created by Mikael110, converted by TheBloke.
Brief-details: GGML quantized variant of Orca Mini 13B - an efficient CPU/GPU model trained on WizardLM, Alpaca, and Dolly datasets using approaches from the Orca research