Brief-details: VAR (Visual AutoRegressive) - a visual generation framework that uses GPT-style, coarse-to-fine next-scale prediction and is reported to surpass diffusion models on image generation benchmarks.
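To make the coarse-to-fine idea concrete, here is a minimal conceptual sketch of next-scale prediction. The `predict_scale` function stands in for the actual VAR transformer, and the scale schedule and codebook size are illustrative assumptions, not the paper's exact values.

```python
import numpy as np

def predict_scale(context: np.ndarray, size: int) -> np.ndarray:
    """Stand-in for the VAR transformer; a real model would condition
    on `context` (the upsampled coarser scales). Illustrative only."""
    rng = np.random.default_rng(0)
    return rng.integers(0, 4096, size=(size, size))  # 4096 = assumed codebook size

def upsample(tokens: np.ndarray, size: int) -> np.ndarray:
    """Nearest-neighbor upsampling of a coarse token map to the next scale."""
    reps = size // tokens.shape[0]
    return np.repeat(np.repeat(tokens, reps, axis=0), reps, axis=1)

# Coarse-to-fine: each step predicts a full token map at a higher resolution,
# conditioned on the accumulated coarser scales, rather than token-by-token.
scales = [1, 2, 4, 8, 16]  # assumed schedule
context = np.zeros((1, 1), dtype=int)
for size in scales:
    context = upsample(context, size)      # carry coarse structure upward
    context = predict_scale(context, size) # next-scale prediction
print(context.shape)  # (16, 16) token map; VAR decodes this to pixels with a VQ decoder
```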
Brief-details: Mixtral-8x7B-Instruct AWQ is a 4-bit AWQ-quantized version of Mistral AI's Mixtral MoE model, optimized for efficient inference while retaining performance comparable to Llama 2 70B.
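A minimal loading sketch with Hugging Face Transformers, assuming the `autoawq` package is installed; the repo id follows TheBloke's AWQ naming and is an assumption, so check the exact id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Mixtral-8x7B-Instruct-v0.1-AWQ"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
# AWQ quantization is detected from the model config; requires autoawq.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("[INST] Explain MoE routing briefly. [/INST]", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```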
Brief-details: Intel's Neural Chat 7B v3.1 GGUF - a 7.24B-parameter Mistral-based model offered in multiple GGUF quantization levels, with strong performance across multiple benchmarks.
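A minimal inference sketch with `llama-cpp-python`; the GGUF filename and the `### System:/### User:` prompt format are assumptions based on typical neural-chat releases:

```python
from llama_cpp import Llama

# Filename is an assumption; pick the quantization level that fits your hardware.
llm = Llama(model_path="neural-chat-7b-v3-1.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "### System:\nYou are a helpful assistant.\n"
    "### User:\nSummarize GGUF in one line.\n"
    "### Assistant:\n"
)
out = llm(prompt, max_tokens=64)
print(out["choices"][0]["text"])
```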
Brief-details: Qwen1.5-72B is a 72.3B-parameter language model in the Qwen1.5 series, featuring a 32K context length and improved multilingual capabilities.
Brief-details: CodeLlama-13B-GGUF is a 13B-parameter code generation model distributed in GGUF format, offering multiple quantization options for efficient deployment.
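Since GGUF repos ship one file per quantization level, you usually download a single variant rather than the whole repo; a sketch using `huggingface_hub` (repo id and filename follow TheBloke's naming and are assumptions):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/CodeLlama-13B-GGUF",   # assumed repo id
    filename="codellama-13b.Q4_K_M.gguf",    # assumed quant file; Q4_K_M balances size and quality
)
print(path)  # local cache path, ready to pass to llama.cpp or llama-cpp-python
```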
Brief-details: A specialized LoRA for SDXL 1.0 that generates pixel-art images with high-quality colorization and custom pixel-art styling.
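A minimal sketch of attaching a pixel-art LoRA to SDXL with `diffusers`; the LoRA repo id and prompt styling are assumptions, so check the model card for the actual trigger words:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# LoRA repo id is an assumption; trigger words vary per LoRA.
pipe.load_lora_weights("nerijs/pixel-art-xl")
image = pipe("pixel art, a cozy tavern interior", num_inference_steps=30).images[0]
image.save("pixel_tavern.png")
```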
Brief-details: A distilled Stable Diffusion model optimized for speed (up to 80% faster inference) while maintaining quality, based on Realistic Vision V4.0.
Brief-details: A GPTQ-quantized version of Nous-Hermes-Llama2 optimized for efficient inference, with multiple quantization options; the listed 2.03B parameter count reflects packed 4-bit weights rather than the full-precision model size.
Brief-details: A 65.3B-parameter LLaMA-based language model fine-tuned on Orca-style datasets, optimized for instruction following and safe interactions.
Brief-details: 40B-parameter instruct-tuned Falcon model optimized for CPU/GPU inference via the GGML format. Features 2-8 bit quantization options and an Apache 2.0 license.
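GGML files (the predecessor of GGUF) can be run through the `ctransformers` bindings; a sketch with assumed repo and file names:

```python
from ctransformers import AutoModelForCausalLM

# model_file is an assumption; GGML repos ship one file per quantization level.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/falcon-40b-instruct-GGML",
    model_file="falcon-40b-instruct.ggccv1.q4_0.bin",
    model_type="falcon",  # tells ctransformers which architecture to load
)
print(llm("Write a haiku about quantization.", max_new_tokens=48))
```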
Brief-details: A 4-bit quantized version of Vicuna 7B 1.1, optimized for efficient GPU inference using GPTQ compression, based on the LLaMA architecture.
Brief-details: Optimized 13B parameter LLaMA model fine-tuned on ShareGPT and WizardLM datasets, available in 4-bit and 5-bit GGML quantized versions for CPU inference.
Brief-details: Experimental LyCORIS model investigating various training configurations, focusing on character/style training with different architectures (LoRA/LoHA/LoCon) and hyperparameters.
Brief-details: 4-bit quantized version of the alpaca-native model, optimized for efficient inference using GPTQ. Features a group size of 128 and improved memory efficiency.
Brief-details: An 88.2M-parameter GPT-2 model trained on 80K safe-rated anime prompts, designed to generate high-quality anime image prompts, with contrastive-search decoding recommended.
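Contrastive search is enabled in Transformers by combining `penalty_alpha` with a small `top_k`; a sketch, with the repo id assumed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FredZhang7/anime-anything-promptgen-v2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("1girl, silver hair", return_tensors="pt")
# penalty_alpha + top_k activates contrastive search, which reduces repetitive n-grams.
out = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=60)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```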
Brief-details: DGSpitzer-Art-Diffusion is a specialized text-to-image diffusion model trained on the artist's personal artwork, offering multiple styles including anime, landscape, and sketch.
Brief-details: A specialized Stable Diffusion model fine-tuned for dark Victorian-era aesthetics, featuring moody, gothic imagery (58 likes, 101 downloads).
Brief-details: LED-based model (162M params) for long-form text summarization. Handles 16K-token inputs, optimized for books/technical content; ROUGE-1: 33.45.
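A usage sketch for a long-document LED summarizer; the repo id is an assumption, but the global-attention handling is standard for LED models:

```python
import torch
from transformers import LEDForConditionalGeneration, LEDTokenizer

model_id = "pszemraj/led-base-book-summary"  # assumed repo id
tokenizer = LEDTokenizer.from_pretrained(model_id)
model = LEDForConditionalGeneration.from_pretrained(model_id)

long_text = open("chapter.txt").read()  # any long document up to ~16K tokens
inputs = tokenizer(long_text, return_tensors="pt", truncation=True, max_length=16384)
# LED needs global attention on at least the first token so the encoder
# can aggregate information across the full long input.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1
summary_ids = model.generate(**inputs, global_attention_mask=global_attention_mask, max_length=256)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```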
Brief-details: A Textual Inversion embedding for Stable Diffusion that generates low-poly-style logos and icons in HD quality. MIT-licensed, community-rated with 58 likes.
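Textual Inversion embeddings load into a Stable Diffusion pipeline in one call; the embedding repo id and the `<lowpoly-logo>` placeholder token below are assumptions for illustration:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Repo id and token are assumptions; check the embedding's model card.
pipe.load_textual_inversion("sd-concepts-library/low-poly-hd-logos-icons", token="<lowpoly-logo>")
image = pipe("a <lowpoly-logo> icon of a fox, white background").images[0]
image.save("lowpoly_fox.png")
```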
Brief-details: Artistic AI model creating marble statue effects with floral elements. Trained on 35 images, optimized for human subjects with abstract qualities.
Brief-details: T5-based recipe generation model with 223M parameters, trained on 2.2M+ recipes. Generates detailed cooking instructions from ingredient lists.
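A generation sketch assuming this is the community T5 recipe model with its `items:` prompt convention; the repo id and prompt format are assumptions, so check the model card:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "flax-community/t5-recipe-generation"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# The model maps an ingredient list to a recipe (title, ingredients, directions).
prompt = "items: chicken, garlic, lemon, olive oil, rosemary"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_length=256, do_sample=True, top_p=0.95)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```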