Brief Details: A creative Stable Diffusion merge model specializing in anime-style image generation with realistic elements, offering three distinct variations and optimized settings.
Brief Details: SuperNova-Medius is a 14B parameter LLM built on Qwen2.5, combining knowledge from Qwen2.5-72B and Llama-3.1-405B through cross-architecture distillation.
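The entry above mentions cross-architecture distillation without detail. As a rough orientation, the standard knowledge-distillation objective such pipelines build on can be sketched as below; the function names are illustrative, and real cross-architecture distillation (Qwen vs. Llama) additionally has to align the two tokenizers' vocabularies before logits can be compared.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2.

    This is the classic Hinton-style KD loss, not the exact recipe used
    for SuperNova-Medius, which is not specified in the entry above.
    """
    p = softmax(teacher_logits, temperature)   # teacher targets
    q = softmax(student_logits, temperature)   # student predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (temperature ** 2) * kl.mean()

# Identical logits incur (near-)zero loss; mismatched ones a positive loss.
t = np.array([[2.0, 1.0, 0.1]])
assert distillation_loss(t, t) < 1e-6
assert distillation_loss(np.array([[0.1, 1.0, 2.0]]), t) > 0.0
```

In practice this KL term is mixed with the ordinary next-token cross-entropy on ground-truth data, weighted by a tunable coefficient.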
Brief Details: A 4.51B parameter LLM derived from Llama-3.1-8B through width pruning, featuring 32 attention heads and 32 layers. Optimized for commercial use with NVIDIA hardware.
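Width pruning, as opposed to depth pruning, shrinks a model by removing attention heads and hidden channels while keeping all layers (here, all 32). A toy sketch of magnitude-based channel pruning is below; this is only an illustration of the general idea, not NVIDIA's actual importance-scoring recipe, which typically uses activation- or gradient-based scores followed by distillation.

```python
import numpy as np

def width_prune(weight, keep_ratio=0.5):
    """Drop the least important output channels of a weight matrix.

    Importance here is the L2 norm of each row (one row per output
    channel); real pruning pipelines use more sophisticated scores.
    Returns the pruned matrix and the indices that were kept.
    """
    importance = np.linalg.norm(weight, axis=1)
    k = max(1, int(weight.shape[0] * keep_ratio))
    keep = np.sort(np.argsort(importance)[::-1][:k])  # top-k, original order
    return weight[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4))          # 8 output channels, 4 inputs
pruned, kept = width_prune(w, keep_ratio=0.5)
assert pruned.shape == (4, 4)        # half the channels removed
```

Pruning one projection's output channels forces the matching input channels of the next layer to be removed too, which is how the parameter count drops roughly in proportion across the whole network.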
Brief Details: A 13B parameter GPTQ-quantized LLaMA2-based model optimized for roleplay and creative writing, featuring unique tensor merging techniques and multiple quantization options.
Brief Details: A specialized VAE model focusing on enhanced contrast, offering multiple variants including the original blessed.vae.pt and customized versions, ideal for improving low-contrast image generation.
Brief Details: Llama-3-Refueled is an 8.03B parameter model fine-tuned for data labeling tasks, achieving competitive performance against larger models like GPT-4-Turbo and Claude-3.
Brief Details: Baichuan-13B-Base is a large-scale bilingual LLM with 13B parameters, trained on 1.4T tokens, featuring ALiBi positional encoding and achieving state-of-the-art results on Chinese and English benchmarks.
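ALiBi (Attention with Linear Biases) replaces learned positional embeddings with a fixed, head-specific linear penalty added to attention scores: tokens farther in the past are penalized proportionally to their distance. A minimal sketch of the bias construction, assuming a power-of-two head count and ignoring the causal mask (entries above the diagonal are masked out anyway):

```python
import numpy as np

def alibi_slopes(num_heads):
    """Geometric per-head slopes, 2^(-8/n), 2^(-16/n), ...;
    matches the ALiBi paper when num_heads is a power of two."""
    return np.array([2 ** (-8 * (i + 1) / num_heads) for i in range(num_heads)])

def alibi_bias(num_heads, seq_len):
    """Per-head linear bias added to attention scores before softmax."""
    pos = np.arange(seq_len)
    distance = pos[:, None] - pos[None, :]      # (seq, seq): i - j, >= 0 below diagonal
    slopes = alibi_slopes(num_heads)            # (heads,)
    return -slopes[:, None, None] * distance    # (heads, seq, seq)

bias = alibi_bias(num_heads=4, seq_len=5)
assert bias.shape == (4, 5, 5)
assert bias[0, 3, 3] == 0.0          # no penalty at zero distance
assert bias[0, 3, 0] < bias[0, 3, 2] # farther tokens are penalized more
```

Because the penalty is a simple function of distance rather than a learned table, ALiBi extrapolates to sequence lengths longer than those seen in training.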
Brief Details: Chinese text generation model based on RWKV-5 architecture, specifically trained for adult fiction writing. Available in 1.5B, 3B, and 7B parameter versions.
Brief Details: Specialized Stable Diffusion model fine-tuned for generating realistic robot images; uses the "nousr robot" trigger phrase in prompts and the Euler discrete scheduler.
Brief Details: Large-scale Chinese RoBERTa model with Whole Word Masking, developed by the HFL team. Apache-2.0 licensed with 12K+ downloads, optimized for Chinese NLP tasks.
Brief Details: A 70B parameter chat model fine-tuned from Llama-3 using RLHF, achieving 77.8% on Arena-Hard benchmark, competitive with proprietary models.
Brief Details: Yi-Coder-9B-Chat is a powerful 8.83B parameter coding LLM supporting 52 programming languages, with a 128K context length and state-of-the-art performance.
Brief Details: Emu3-Gen is an 8.49B parameter multimodal model excelling in text-to-image generation and perception tasks using next-token prediction, competing with SDXL and OpenSora.
Brief Details: Qwen2-72B is a powerful 72.7B parameter language model excelling in multilingual tasks, coding, and reasoning with state-of-the-art performance across benchmarks.
Brief Details: MusicGen Melody (1.56B params) is Facebook's controllable AI music generator supporting text-to-audio and melody-guided generation at 32kHz with 4 codebooks.
Brief Details: A 7B parameter uncensored LLaMA-based model quantized to 4-bit, offering unrestricted responses with multiple GPTQ variants for efficient deployment.
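The practical appeal of 4-bit GPTQ is the weight-memory saving. A back-of-the-envelope estimate of the footprint can be sketched as follows; the `overhead` parameter is a hypothetical knob standing in for quantization scales, zero-points, and group metadata that real GPTQ files also carry, so treat the numbers as approximations only.

```python
def quantized_size_gib(num_params, bits, overhead=0.0):
    """Rough weight-memory footprint of a model in GiB.

    num_params: total parameter count
    bits:       bits per weight (16 for fp16, 4 for GPTQ 4-bit)
    overhead:   fractional extra storage for quantization metadata
    """
    bytes_total = num_params * bits / 8 * (1 + overhead)
    return bytes_total / 1024 ** 3

fp16 = quantized_size_gib(7e9, 16)   # ~13 GiB for a 7B model in fp16
q4 = quantized_size_gib(7e9, 4)      # ~3.3 GiB at 4 bits per weight
assert q4 < fp16 / 3                 # roughly a 4x reduction in weight memory
```

This estimate covers weights only; activations and the KV cache add further memory at inference time.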
Brief Details: Text-to-image Stable Diffusion model based on Anything 3.0, optimized for anime-style generation with high-quality output even from simple prompts.
Brief Details: OCR-Donut-CORD is a vision encoder-decoder model fine-tuned on the CORD dataset for document parsing, combining a Swin Transformer and BART for OCR-free document understanding.
Brief Details: AnimateLCM-SVD-xt is an efficient image-to-video model that generates 25-frame animations in 2-8 steps at 576x1024 resolution, offering 12.5x faster computation than standard SVD models.
Brief Details: Microsoft's Phi-2 (2.7B params) converted to GGUF format for efficient CPU/GPU inference, optimized for research and code generation with multiple quantization options.
Brief Details: A 2.78B parameter uncensored Phi-2 based model trained on diverse datasets, optimized for chat and instruction-following with strong performance across multiple benchmarks.