Brief-details: Massively multilingual text-to-speech model supporting 1107 languages, developed by Facebook. Uses VITS architecture with CC-BY-NC-4.0 license.
Brief-details: A GGML-quantized version of Wizard Vicuna 13B offering various quantization levels (2-8 bit) for CPU/GPU inference, optimized for memory efficiency and performance.
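The 2-8 bit levels mentioned above trade reconstruction accuracy for memory. A minimal sketch of symmetric round-to-nearest k-bit quantization, purely illustrative (the actual GGML formats pack per-block scales into a binary layout and use several block variants, none of which is shown here):

```python
import numpy as np

def quantize_symmetric(weights: np.ndarray, bits: int):
    """Map float weights to signed k-bit integers with a single scale."""
    qmax = 2 ** (bits - 1) - 1                     # e.g. 7 for 4-bit
    scale = max(np.abs(weights).max() / qmax, 1e-12)
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)
q4, s4 = quantize_symmetric(w, bits=4)
q8, s8 = quantize_symmetric(w, bits=8)
# More bits -> finer scale -> smaller worst-case reconstruction error
err4 = np.abs(dequantize(q4, s4) - w).max()
err8 = np.abs(dequantize(q8, s8) - w).max()
```

This illustrates why lower-bit variants shrink memory roughly proportionally while degrading fidelity gradually rather than catastrophically.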
Brief-details: A 13B parameter GPTQ-quantized language model combining Wizard-Vicuna with SuperHOT 8K technology, offering extended 8K context length and uncensored outputs. Popular for GPU deployment.
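SuperHOT's extended context works by linearly interpolating rotary (RoPE) positions so that an 8K window is squeezed into the position range the base model was trained on. A simplified NumPy sketch of just the angle computation (dimension and base are illustrative, not the model's actual config):

```python
import numpy as np

def rope_angles(positions: np.ndarray, dim: int = 64,
                base: float = 10000.0, scale: float = 1.0) -> np.ndarray:
    """Rotary-embedding angles; scale < 1 compresses positions (linear interpolation)."""
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)    # (dim/2,)
    return np.outer(positions * scale, inv_freq)        # (seq, dim/2)

# Base model trained at 2048 positions; SuperHOT-style scaling maps
# positions 0..8191 into the trained 0..2047 range.
orig = rope_angles(np.arange(2048))
interp = rope_angles(np.arange(8192), scale=2048 / 8192)
```

With scale 0.25, interpolated position 4 lands exactly on trained position 1, so the model only ever sees angles inside its training distribution.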
Brief-details: A specialized AI model for generating app icons with the IconsMI style, trained over 7200 steps with distinct checkpoints at different stages for varying creativity and quality levels.
Brief-details: A powerful 34B parameter code generation model merging Phind-CodeLlama-34B-v2 and WizardCoder-Python-34B-V1.0, optimized for Python and general coding tasks.
Brief-details: ShiratakiMix is a specialized 2D-style merge model optimized for anime-style image generation, featuring high-quality outputs with recommended DPM++ SDE Karras sampling.
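The recommended DPM++ SDE Karras sampler draws its noise levels from the Karras et al. sigma schedule, which spaces steps more densely at low noise. A minimal sketch of that ramp (parameter values here are illustrative defaults, not ShiratakiMix-specific settings):

```python
import numpy as np

def karras_sigmas(n: int, sigma_min: float = 0.1,
                  sigma_max: float = 10.0, rho: float = 7.0) -> np.ndarray:
    """Karras et al. noise schedule: interpolate in sigma^(1/rho) space."""
    ramp = np.linspace(0, 1, n)
    inv_rho = 1.0 / rho
    return (sigma_max ** inv_rho
            + ramp * (sigma_min ** inv_rho - sigma_max ** inv_rho)) ** rho

s = karras_sigmas(10)
```

The rho exponent concentrates most of the sampling budget near sigma_min, where fine anime-style detail is resolved.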
Brief-details: Real-ESRGAN is a PyTorch-based image enhancement model that excels at face detail improvement and artifact removal, building upon the original ESRGAN architecture.
Brief-details: DALL-E 3 XL is a powerful text-to-image diffusion model supporting multiple languages (EN/FR/RU), built on Juggernaut-XL-v5 with MIT license
Brief-details: A powerful 70B parameter LLaMA-3-based instructional model that achieves impressive performance on par with GPT-4-Turbo on MT-Bench and leads open-source models on Arena-Hard benchmarks.
Brief-details: A 4.46B parameter Mixture of Experts (MoE) model combining two phi-2 models, optimized for text generation and code tasks with improved performance metrics.
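In an MoE merge like this, a learned gate routes each token to a few experts and mixes their outputs. A toy dense-loop sketch of top-k routing (shapes and names are illustrative; real implementations batch tokens per expert and run far larger projections):

```python
import numpy as np

def moe_forward(x, gate_w, experts, top_k=2):
    """Route each token to its top-k experts, mixing outputs by softmax gate weight.

    x: (tokens, d); gate_w: (d, n_experts); experts: list of (d, d) matrices.
    """
    logits = x @ gate_w                              # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]    # top-k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        w = np.exp(sel - sel.max())
        w /= w.sum()                                 # softmax over selected experts only
        for weight, e in zip(w, top[t]):
            out[t] += weight * (x[t] @ experts[e])
    return out

rng = np.random.default_rng(0)
d, n_exp, tokens = 8, 4, 5
x = rng.normal(size=(tokens, d))
gate_w = rng.normal(size=(d, n_exp))
experts = [rng.normal(size=(d, d)) for _ in range(n_exp)]
y = moe_forward(x, gate_w, experts)
```

Because only top_k experts run per token, the active parameter count stays well below the 4.46B total at inference time.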
Brief-details: Facebook's 3B parameter conversational AI model trained for open-domain chatbot applications with multi-turn dialogue capabilities.
Brief-details: OpenOrca-Preview1-13B is a LLaMA-13B model fine-tuned on filtered GPT-4 data, reaching roughly 60% of the Orca paper's reported improvements while training on just 6% of the data.
Brief-details: Anime-focused text-to-image model based on SDXL 0.9, fine-tuned for high-quality anime art generation with aesthetic optimization.
Brief Details: AuraFlow-v0.2 is a state-of-the-art flow-based text-to-image model, featuring improved training and enhanced generation capabilities with Apache 2.0 license.
Brief-details: DALL·E Mega is an advanced text-to-image transformer model, trained on TPU v3-256, capable of generating images from English text prompts with Apache 2.0 license.
Brief-details: DPO-tuned 7B parameter LLM based on Mistral architecture, optimized for reasoning and conversation with strong benchmark performance and 8k context.
Brief-details: Apache 2.0 licensed de-distilled version of FLUX.1-schnell with full T5 context length, attention masking, and restored classifier-free guidance functionality.
Brief-details: Specialized text-to-image model focused on fashion-shoot aesthetics, optimized for full/medium body shots with detailed clothing and composition.
Brief-details: A 30B parameter uncensored LLM based on Wizard-Vicuna architecture, achieving 57.89% average performance on key benchmarks with strong reasoning capabilities.
Brief-details: Powerful 72B parameter chat model with 32k context length, supporting Chinese/English/code tasks. Strong performance across benchmarks with quantization options for efficient deployment.
Brief-details: Grapefruit is a specialized text-to-image diffusion model focused on generating anime-style artwork with a bright, soft aesthetic, built on Stable Diffusion architecture.